One of the realities of applied mathematics is the dearth of genuinely new and important algorithms. This is not surprising: devising them is a long and difficult process. It follows that the mathematical tools we will use over the next 20 years are, for the most part, probably already in our hands. How can this observation help in predicting the future, you ask? Well, let us combine it with another observation from health care, though it could come from any other field that can be transformed through synthetic biology thanks to genomic sequencing.
...First of all, acute lymphoblastic leukemia. When I was a house officer in the late '60s and early '70s, acute lymphoblastic leukemia was the most common form of childhood leukemia and had a 95 percent mortality rate - 95 percent mortality. Nowadays, acute lymphoblastic leukemia remains the most common childhood leukemia. It has a 95 percent survival rate - 95 percent survival. So it went from 95 percent mortality to 95 percent survival. So what accounts for that change? Actually, if you look at it, the medicines that are currently being used are very similar, if not identical, to the medicines that we used all those years ago. So it's not the kinds of medicines that are being used. What it is, I would argue, is that oncologists have learned that this diagnosis is actually a heterogeneous group of disorders. And they've learned how to use gene expression profiling, age of onset, DNA sequence variation and other tools to subdivide the patients. In other words, they have moved from one collective diagnosis to subcategories of diagnosis, moving towards individualizing the diagnosis for individual patients and then tailoring the treatment according to the subdivision each patient falls into. And that approach, a more informed approach in terms of differences between individual patients with the same diagnosis, has had a dramatic effect on the consequences of having ALL....
In other words, starting from the same medicines, it took us 40 years (most of that time without sequencing capabilities) to match a hierarchy of diseases to a hierarchy of drugs and treatment processes. Back in the '70s, this matching of hierarchies required:
- the ability to get rapid feedback from drug trials
- the ability to gather enough statistics from each sub-group for certain drug trials
Hal Dietz: Yeah, so that's a great question. The question is, are adults with Marfan all treatable, or is the window of opportunity to make a difference over in childhood? At least in our mice, we can allow them to become mature adults. They're sexually mature at about two months; by six months of age they are in sort of mid-adult life, and by a year of age they are old mice. And whether we start treatment right after birth, in the middle of that sequence, or at the end, we see the same kind of benefits. So we think that the window doesn't close, that there is an opportunity even later in life.
In short, with genomic sequencing, the matching process occurring in health care - a data-driven hypothesis process - now requires:
- the ability to get rapid feedback from drug trials
- the ability to get information-rich feedback from these drug trials
The Steamrollers - Moore's law and Rapid Genomic Sequencing - point to an ability to generate higher-quality data at a faster pace than ever before, while profoundly changing survival rates or even curing diseases.
In short, the useful data actually "lives" in a much, much smaller world than the one produced by the combination of the Steamrollers. In order to find these parsimonious needles within these very large haystacks, concentration-of-measure type results have recently yielded new mathematical tools. Some of these methods use randomness as an efficient way of compressing this useful but sparse information.
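To make the "needle in a haystack" idea concrete, here is a minimal sketch of sparse recovery from random measurements - the prototypical compressed sensing setup. All the choices below (dimensions, Gaussian measurement matrix, basis pursuit solved as a linear program with SciPy) are illustrative assumptions on my part, not anything prescribed in the post:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, m, k = 200, 60, 5  # signal length, number of measurements, number of nonzeros

# A k-sparse signal: almost every entry is zero (the "needle").
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix: only m << n measurements of the signal.
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# Basis pursuit: minimize ||x||_1 subject to Ax = b.
# Cast as a linear program by splitting x = u - v with u, v >= 0,
# so ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

# With these dimensions, exact recovery typically succeeds.
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

The point of the sketch: 60 random measurements suffice to recover a 200-dimensional signal exactly, because the signal is 5-sparse. Randomness in A is what makes the compression work with high probability; parsimony (sparsity) is what makes the inverse problem solvable at all.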
What is the time frame for these tools relying on parsimony and randomness to become part of the standard toolbox of personalized medicine?
Which gets us back to the original question: what can I say with certainty about August 25th, 2030? We will manage the large datasets coming out of the Steamrollers only through the use of near-philosophical concepts such as parsimony and randomness. By doing so, we are likely to reduce tremendously our current number one and number two causes of death.
Stay tuned for the third installment of this series on Predicting the Future.