Showing posts with label PredictingTheFuture.

Sunday, September 11, 2016

Predicting the Future: The Steamrollers and Machine Learning.

Four years ago, I wrote that when one wants to predict the future, it is always a good idea to rely on the steamrollers, i.e. the exponential trends one can surf because they are highly reliable. Let's see how these trends have continued.

Since 2012, genome sequencing has reached a second inflection point, and I still expect a third one due to nanopore sequencing technology, as we mentioned back then. So far, so good.


For Moore's law, it is a little more complicated. While the number of transistors does increase (though probably not as fast as Moore's law anymore: in the most recent graph, the last five years show a slight inflection point), performance is beginning to stall. Dennard scaling stopped around 2005


and while the number of transistors has kept increasing, we are beginning to see a decrease in the fraction of that silicon that can actually be used at any given time. This last trend has a name: Dark Silicon. Bill Dally, whom we had invited to the now-not-gonna-happen NIPS workshop, said the following three years ago:

“The number of transistors is still going up in a very healthy way,” Dally contends, “We’re still getting the Moore’s law increase in the number of devices, but without Dennard scaling to make them useful, all the things we care about – like clock frequency of our parts, single thread performance and even the number of cores we can put in our parts – is flattening out, leading to the end of scaling as we know it. We’re no longer getting this free lunch of technology giving us faster computers.”
This is a problem that needs to be solved because our 21st-century economy hinges on continued computational progress. With the end of Dennard scaling, all computing becomes power-limited, and performance going forward is determined by energy efficiency. As process technology can no longer be counted upon for exponential advances, the focus must shift to architecture and circuits, according to Dally.
Since Machine Learning has become the main driver of applications, it has become a target of interest for the silicon industry. In a way, one of the lessons of that story is that Machine Learning algorithms should be architected to be a better fit for silicon architectures. No wonder we recently saw the rise of Google's TPU, the numerous NVIDIA platforms like the DGX-1, the acquisition of machine-learning silicon companies like Nervana and Movidius, Baidu's efforts in the GPU cluster realm...

Let me finish this short blog post by pointing out that while silicon will become more efficient because Machine Learning algorithms are being used as a design constraint, making sense of the world in the future will still require new technologies. From the recently released report produced by the Semiconductor Industry Association and the Semiconductor Research Corporation entitled "Rebooting the IT Revolution: A Call to Action", one can read that, given the data explosion we are predicting, computing in 2040 may become problematic. From Appendix A, Data Explosion Facts, A4. Total Effective Capacity to Compute Information in 2040:
 For this benchmark energy per bit, computing will not be sustainable by 2040, when the energy required for computing will exceed the estimated world’s energy production. Thus, radical improvement in the energy efficiency of computing is needed.
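To get a feel for where that conclusion comes from, here is a minimal back-of-the-envelope sketch in Python. Every constant in it (energy per bit, current worldwide computing volume, doubling time, world energy production) is an illustrative assumption of mine rather than the report's actual figure; the only point is that an exponentially growing number of bit operations, multiplied by a roughly constant energy per bit, catches up with any fixed energy budget within a couple of decades.

# Back-of-the-envelope sketch of the "computing energy wall" argument.
# All constants below are illustrative assumptions, not the report's figures.
ENERGY_PER_BIT_J   = 1e-14   # assumed benchmark energy per bit transition (joules)
BITS_PER_YEAR_2016 = 1e32    # assumed bit operations computed worldwide in 2016
DOUBLING_YEARS     = 2.5     # assumed doubling time of the world's computing volume
WORLD_ENERGY_J     = 6e20    # rough annual world energy production (~160,000 TWh)

year, bits = 2016, BITS_PER_YEAR_2016
while bits * ENERGY_PER_BIT_J < WORLD_ENERGY_J:
    year += DOUBLING_YEARS
    bits *= 2

print(f"With these assumptions, computing energy passes world production around {year:.0f}")

Playing with the assumptions shifts the crossover by a few years, not by decades, which is what makes the argument robust.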

And according to the authors of "Moore's law: the future of Si microelectronics":

The time frame to implement a radically new device is estimated to be ∼30 years. 

We need to find less-than-radically-new devices and better algorithms to make sense of the upcoming Miller's wave in 2025. It's time to get cranking.

Sunday, December 27, 2015

Sunday Morning Insight: 10x Not 10% by Ken Norton

Here is Ken Norton's outstanding video featuring his 10x Not 10%, Product management by orders of magnitude presentation (the link goes to Ken's long-form essay of that presentation). You may notice certain themes mentioned here on Nuit Blanche before and highlighted below. He mentions betting on trends, while we call them the steamrollers. One should notice that while Ken marvels at the images of Pluto downloaded a few hours earlier, the talk was given fifteen days before The Second Inflection Point in Genome Sequencing occurred. Let us note that we wondered back in 2014 when that second inflection point would happen, and that it took place about a year later. In all, there are certainly elements of the strategy described by Ken that we are trying to accomplish at LightOn. Enjoy the video !


  
 


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, August 19, 2014

Videos and Slides: Next-Generation Sequencing Technologies - Elaine Mardis (2014)

Last time I mentioned one of Elaine Mardis' videos giving a summary of next-generation sequencing, the video shot up and garnered more than 127,893 views. Coincidence ? I think not :-)

Here is her new survey on Next Generation Sequencing, where she talks about the current PacBio and Nanopore technology add-ons to the lab. I note that the biology folks like the term "massively parallel" sequencing. Anyway, those third-generation technologies are very interesting because, instead of cutting DNA strands into small pieces and trying to put them back together, they output very long reads (up to 50 kbp, versus about 200 bp for earlier next-generation sequencing technology), thereby removing much of the guessing from the read alignments. The current downside of those technologies is their large error rates. PacBio, for instance, with its SMRT technology, has about a 15% error rate for a single read of a DNA strand, but that error drops to about 0.01% overall when several reads of the same strand are combined. Nanopore, according to Elaine, is in the 30% range, but one would have to check with people working on it to pin that figure down. Regardless, longer reads with oversampling mean that one can get a much nicer view of regions of the genome that were, for chemical reasons, not reachable otherwise. I also note that the PacBio approach uses fluorescence and hence camera technology, one of the steamrollers. Fluorescence is not unheard of in compressive sensing, and an improvement of the technique might provide enhanced accuracy. The question is how the algorithms featured here on Nuit Blanche can help in realizing A Second Inflection Point in Genome Sequencing. (For those confused: Next Generation Sequencing refers to second-generation sequencing, while third-generation sequencing refers to the newer sensors.) More on that later.
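To see why a 15% per-read error rate is not a showstopper, here is a crude sketch of the consensus idea, assuming (unrealistically) independent errors at a given position and a simple majority vote; real consensus callers are more sophisticated, but the exponential decay of the error with read depth is the same phenomenon.

from math import comb

def consensus_error(per_read_error, depth):
    """Probability that a majority vote over `depth` independent reads of a position is wrong."""
    # The call is wrong if at least half of the reads are erroneous (ties counted as errors).
    return sum(comb(depth, k) * per_read_error**k * (1 - per_read_error)**(depth - k)
               for k in range((depth + 1) // 2, depth + 1))

for depth in (1, 5, 11, 21, 31):
    print(f"{depth:2d} reads -> consensus error ~ {consensus_error(0.15, depth):.1e}")

With a 15% per-read error, a couple of dozen passes already push the consensus error below 10^-4, which is the right ballpark for the 0.01% figure quoted above.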

Without further ado, here is a summary of sequencing technology as of July 30th, 2014.




The slides are in Color or Grayscale

This lecture was part of a larger series of talks in Current Topics in Genome Analysis 2014



Also of interest: PBSuite, Software for Long-Read Sequencing Data from PacBio



Wednesday, July 16, 2014

Being a child of Moore's Law

Food for thought for the day.

About two years ago, I wrote about predicting the future from the perspective of thinking linearly on an exponential slope (Predicting the Future: The Steamrollers, Predicting the Future: Randomness and Parsimony ). Here are three images I found on my Twitter feed that illustrate this. Recall that Genomic sequencing is going faster than Moore's law ( A Second Inflection Point in Genome Sequencing ? and then ...)




The screen of what we initially called a telephone got bigger in order for people to interact with it (that and for contemplating one's selfies)



Maybe it's time for Zero Knowledge / Data Driven Sensor Design







Friday, July 04, 2014

A Second Inflection Point in Genome Sequencing ? and then ...

It's Friday afternoon, it's Hamming's time.

If you read Nuit Blanche (Predicting the Future: The Steamrollers), you are probably more aware than most of how the future unfolds. But probably not enough to figure out the score of this upcoming game.


The good folks at Google predict that France will win, based on touch-by-touch data. Anyway, we'll know in a few hours. What about looking farther ahead into the future ?




In genome sequencing, current techniques usually require chemically cutting the DNA into small pieces that are eventually reassembled algorithmically into a sequence (see the Partial Digest Problem in Reconstruction of Integers from Pairwise Distances). A first inflection point in the democratization of DNA sequencing could be noticed back in 2007-2008 thanks to process parallelization [2]. Another is taking place right before our eyes and is probably going to become more prominent as massive data coming from Nanopore technology [1] unfolds. From Nanopores are here!:
In what appears to be the first example of publicly available, user-generated Oxford Nanopore MinION data, Nick Loman (aka @pathogenomenick) has given us a glimpse into the future.

Let us remember that previously on Nuit Blanche, we could already get our hands on raw data from a similar technology (Quantum Biosystems Provides Raw Data Access to New Sequencing Technology).
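As an aside, the Partial Digest Problem mentioned above is simple enough to sketch. Here is a minimal backtracking reconstruction, a toy version of the classical algorithm rather than production code: given the multiset of pairwise distances between sites, it recovers a consistent set of positions, possibly up to a mirror image.

from collections import Counter

def partial_digest(distances):
    """Backtracking reconstruction of points on a line from their multiset of pairwise distances."""
    D = Counter(distances)
    width = max(D)                 # the largest distance spans the two extreme points
    D[width] -= 1
    if D[width] == 0:
        del D[width]
    points = {0, width}
    return sorted(points) if _place(D, points, width) else None

def _place(D, points, width):
    if not D:                      # every distance has been explained
        return True
    y = max(D)
    for cand in (y, width - y):    # the largest remaining distance must touch an extreme point
        dists = Counter(abs(cand - p) for p in points)
        if all(D[d] >= c for d, c in dists.items()):
            D.subtract(dists)
            for d in [d for d in D if D[d] == 0]:
                del D[d]
            points.add(cand)
            if _place(D, points, width):
                return True
            points.remove(cand)    # backtrack
            D.update(dists)
    return False

# Toy example: the points {0, 2, 4, 7, 10} produce these ten pairwise distances.
print(partial_digest([2, 2, 3, 3, 4, 5, 6, 7, 8, 10]))
# prints a consistent reconstruction, here [0, 3, 6, 8, 10], the mirror image of the original set

Real data is noisy and vastly larger, which is why this kind of reconstruction gets hard in practice; the sketch is only meant to make the combinatorial flavor of the problem concrete.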

But soon enough, getting all this information for every person will mean that we will be able to search for the outliers (rare diseases) first [7] and to search for actual drugs that can target molecular networks [6], i.e. have more Stephanie Events with the algorithms mentioned here on Nuit Blanche.

From [4]

For instance, the recent The SwAMP Thing! entry pointed to the possibility of using AMP algorithms to perform group testing, while in another entry (... There will be a "before" and "after" this paper ...), other authors evaluated the population sampling requirements for GWAS given the capabilities of our current solvers. Again, these are all techniques and algorithms often featured here...

[5] Applying compressed sensing to genome-wide association studies by Shashaank Vattikuti, James J Lee, Christopher C Chang, Stephen D H Hsu and Carson C Chow

The study of molecular networks has recently moved into the limelight of biomedical research. While it has certainly provided us with plenty of new insights into cellular mechanisms, the challenge now is how to modify or even restructure these networks. This is especially true for human diseases, which can be regarded as manifestations of distorted states of molecular networks. Of the possible interventions for altering networks, the use of drugs is presently the most feasible. In this mini-review, we present and discuss some exemplary approaches of how analysis of molecular interaction networks can contribute to pharmacology (e.g., by identifying new drug targets or prediction of drug side effects), as well as list pointers to relevant resources and software to guide future research. We also outline recent progress in the use of drugs for in vitro reprogramming of cells, which constitutes an example par excellence for altering molecular interaction networks with drugs.

Abstract 
Background
Genome-wide association studies have revealed that rare variants are responsible for a large portion of the heritability of some complex human diseases. This highlights the increasing importance of detecting and screening for rare variants. Although the massively parallel sequencing technologies have greatly reduced the cost of DNA sequencing, the identification of rare variant carriers by large-scale re-sequencing remains prohibitively expensive because of the huge challenge of constructing libraries for thousands of samples. Recently, several studies have reported that techniques from group testing theory and compressed sensing could help identify rare variant carriers in large-scale samples with few pooled sequencing experiments and a dramatically reduced cost.
Results
Based on quantitative group testing, we propose an efficient overlapping pool sequencing strategy that allows the efficient recovery of variant carriers in numerous individuals with much lower costs than conventional methods. We used random k-set pool designs to mix samples, and optimized the design parameters according to an indicative probability. Based on a mathematical model of sequencing depth distribution, an optimal threshold was selected to declare a pool positive or negative. Then, using the quantitative information contained in the sequencing results, we designed a heuristic Bayesian probability decoding algorithm to identify variant carriers. Finally, we conducted in silico experiments to find variant carriers among 200 simulated Escherichia coli strains. With the simulated pools and publicly available Illumina sequencing data, our method correctly identified the variant carriers for 91.5-97.9% variants with the variant frequency ranging from 0.5 to 1.5%.
Conclusions
Using the number of reads, variant carriers could be identified precisely even though samples were randomly selected and pooled. Our method performed better than the published DNA Sudoku design and compressed sequencing, especially in reducing the required data throughput and cost.
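To make the pooling idea in that abstract concrete, here is a deliberately simplified sketch in Python: a random pooling design and the basic COMP decoding rule (any sample appearing in a negative pool is cleared). This is not the authors' quantitative Bayesian decoder, and all sizes are made up; it only illustrates why a few dozen pooled tests can locate a handful of rare-variant carriers among hundreds of samples.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pools, n_carriers, k = 200, 40, 3, 5    # illustrative sizes, not the paper's design

# Random pooling design: A[i, j] = 1 if sample j is mixed into pool i (k pools per sample).
A = np.zeros((n_pools, n_samples), dtype=int)
for j in range(n_samples):
    A[rng.choice(n_pools, size=k, replace=False), j] = 1

# Ground truth: a few rare-variant carriers among the samples.
x = np.zeros(n_samples, dtype=int)
x[rng.choice(n_samples, size=n_carriers, replace=False)] = 1

y = (A @ x) > 0                                       # a pool is positive if it contains any carrier

# COMP decoding: any sample that appears in at least one negative pool cannot be a carrier.
candidates = np.ones(n_samples, dtype=bool)
for i in range(n_pools):
    if not y[i]:
        candidates[A[i] == 1] = False

print("true carriers      :", np.flatnonzero(x))
print("decoded candidates :", np.flatnonzero(candidates))   # a small superset of the true carriers

With only 40 pooled tests instead of 200 individual ones, the surviving candidate list is already tiny; the quantitative read-count information used in the paper is what then pins the carriers down exactly.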





Tuesday, February 25, 2014

Predicting the Future: The Upcoming Stephanie Events

You probably recall this entry on The Steamrollers, technologies that are improving at least as fast as Moore's law, if not faster. It turns out that at the Paris Machine Learning Meetup #5, Jean-Philippe Vert [1] provided some insight as to what happened in 2007-2008 on this curve:


Namely, the fact that the genomic pipelines went parallel. The curve has been going down ever since, but one wonders if we will see another phase transition.

The answer is yes. 

Why ? From [3]:
However, nearly all of these new techniques concomitantly decrease genome quality, primarily due to the inability of their relatively short read lengths to bridge certain genomic regions, e.g., those containing repeats. Fragmentation of predicted open reading frames (ORFs) is one possible consequence of this decreased quality.

It is one thing to go fast; it is another to produce good results. Both of these issues, speed and the accuracy limits of short reads, may well be answered by nanopore technology, and specifically by the fact that entities like Quantum Biosystems provide raw data access to their new sequencing technology (see Quantum Biosystems Provides Raw Data Access to New Sequencing Technology); I am hearing through the grapevine that their data release program is successful. What are the consequences of the current capabilities ? Eric Schadt calls it the Stephanie Event (see also [4]). How much more often are we going to have those events in the future if we have very accurate long reads and the attendant algorithms [2, 5] ? I am betting much more often.

Tuesday, January 28, 2014

Quantum Biosystems Provides Raw Data Access to New Sequencing Technology



Ever since writing this entry on Predicting the Future: The Steamrollers, I have been on the lookout for any type of improvement on next-generation sequencing techniques, such as the ones using nanopores. If you look at the nanopore tag, you'll notice a dearth of data coming from actual hardware sensing. Similarly, a recent question on one of the LinkedIn groups on NGS yielded very little in terms of actual data that people could try their machine learning algorithms on. In fact, it looked pretty hopeless. Things changed yesterday: at PMWC, Quantum Biosystems decided to start openly sharing data with the rest of the scientific community. Here is the press release: Quantum Biosystems demonstrates First Reads using Quantum Single Molecule Sequencing. From the press release (note its connection to the steamrollers):

....The platform allows the direct sequencing of single stranded DNA and RNA without labelling or modification, on silicon devices which can be produced on the same production lines as consumer grade integrated circuits. As the system uses no proteins or other reagents it is potentially ultra-low cost, enabling consumer level genome sequencing.


But more importantly, the data is at: http://www.quantumbiosystems.com/data/, with the note:
Raw quantum sequencing data are freely available for scientific and research use, allowing you, for example, to develop your own algorithms and software for quantum sequencing. We will add new data sets from time to time, updated as needed.
Hit the Data Download button, put your name/company/university/blog and you are good to go.

Here is the Data Usage Policy that you need to be OK with; as soon as you fill this in, you can get the data:

Data Usage Policy (January 26, 2014)
All data in this site belong to Quantum Biosystems. These pre-publication data are preliminary and may contain errors. The goal of our policy is that early release should enable the progress of science.
The data is published under a Creative Commons licence 
Attribution-NonCommercial 3.0 (CC-BY-NC 3.0)
http://creativecommons.org/licenses/by-nc/3.0/us/legalcode
Creative Commons license CC-BY-NC 3.0:
"A license whereby licensees may copy, distribute, display and perform the work and make derivative works based on it only for non-commercial purposes, only if they give the author or licensor the credits in the manner specified."
By accessing these data, you agree not to use for commercial use without any prior written consent of Quantum Biosystems.
2014 Quantum Biosystems

Disclaimer: at the time of this writing, I hold no stake in Quantum Biosystems


Monday, August 27, 2012

Predicting the Future: Randomness and Parsimony

[ This is part 2. Part 1 is here: Predicting the Future: The Steamrollers ]

For many people, predicting the future means predicting the breakthroughs, and while this might seem reasonable at first, one should probably focus on how the steamrollers can precipitate these findings as opposed to expecting chance to be on our side. 

One of the realities of applied mathematics is the dearth of really new and important algorithms. This is not surprising: devising them is a tough and difficult process. In turn, the mathematical tools we will use in the next 20 years are, for the most part, probably in our hands already. How can this fact help in predicting the future, you say? Well, let us combine this observation with another one from the health care area, though it could be any other field that can be transformed through synthetic biology thanks to genomic sequencing.

First, there is this stunning example recounted by David Valle in The Human Genome and Individualized Medicine (it is at 57min32s):
...First of all, acute lymphoblastic leukemia. When I was a house officer in the late '60s and early '70s, acute lymphoblastic leukemia was the most common form of childhood leukemia and had a 95 percent mortality rate - 95 percent mortality. Nowadays, acute lymphoblastic leukemia remains the most common childhood leukemia. It has a 95 percent survival rate - 95 percent survival. So it went from 95 percent mortality to 95 percent survival. So what accounts for that change ? So actually, if you look at it, the medicines that are currently being used are very similar, if not identical, to the medicines that we used all those years ago. So it's not the kinds of medicines that are being used. What it is, I would argue, is that oncologists have learned that this diagnosis is actually a heterogeneous group of disorders. And they've learned how to use gene expression profiling, age of onset, DNA sequence variation and other tools to subdivide the patients. In other words, move from one collective diagnosis to subcategories of diagnosis, moving towards individualizing the diagnosis to individual patients and then manipulating their treatment according to which subdivision the patient falls in. And that approach, a more informed approach in terms of differences between individual patients with the same diagnosis, has had a dramatic effect on the consequences of having ALL....

In other words, starting with the same medicines, it took us 40 years (most of that time without sequencing capabilities) to match a hierarchy of diseases to a hierarchy of drugs and processes. Back in the 70s, this  matching of hierarchies entailed:
  • the ability to get a rapid feedback from drug trials
  • the ability to have enough statistics from a sub-group for certain drug trials
Because of the statistics required, treating rare diseases has been at odds with this process. How is this different nowadays ? Hal Dietz discusses that in Rational therapeutics for genetic conditions (see "...The Window Doesn't Close..."), and he points out that if you have the right tool to examine deep inside the metabolic networks through genome sequencing, then the window doesn't close. From the Q&A:

Question: Are Adults with Marfan syndrome all treatable ?

Hal Dietz: Yeah, so that's a great question. The question is, are adults with Marfan all treatable, or is the window of opportunity to make a difference over in childhood ? At least in our mice, we can allow them to become mature adults. They're sexually mature at about two months; by six months of age they are sort of in mid-adult life, and by a year of age they are old mice. And whether we start treatment right after birth, in the middle of that sequence, or at the end, we see the same kind of benefits. So we think that the window doesn't close, that there is an opportunity even later in life.

In short, with genomic sequencing, the matching process occurring in health care - a data-driven hypothesis process - now becomes 
  • the ability to get a rapid feedback from drug trials
  • the ability to get an information rich feedback from these drug trials
The Steamrollers that are Moore's law and Rapid Genomic Sequencing point to an ability to generate higher quality data at a faster pace than ever before while profoundly changing survival rates or curing diseases.

All would be well if the quality of the information from genomic sequencing did not come at the expense of an attendant large quantity of data. Let's put this in perspective: the human genome comprises about three billion base pairs, the microbiome about ten times that, and there are about seven billion people on Earth. If one were to decode the genomes of the entire population, we would generate on the order of 10^20 data points. This is huge: it is more data points than there are stars in our galaxy. However huge, this data is not that information rich; simply speaking, there is more genetic diversity within a single tribe in Africa than among all the other humans living on the four other continents.
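For the record, that order of magnitude is just the following multiplication (the constants are the rough figures used above):

# Rough size of the haystack: genome x microbiome x population (orders of magnitude only).
base_pairs_per_genome = 3e9      # ~3 billion base pairs per human genome
microbiome_factor     = 10       # the microbiome carries roughly ten times more
population            = 7e9      # people on Earth

total = base_pairs_per_genome * (1 + microbiome_factor) * population
print(f"~{total:.0e} data points")     # on the order of 10^20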




In short, the useful data actually "lives" in a much, much smaller world than the one produced by the combination of the Steamrollers. In order to handle these parsimonious needles within these very large haystacks, mathematical results of the concentration-of-measure type have recently yielded different tools. Some of these methods use randomness as an efficient way of compressing this useful but sparse information.
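Here is a minimal sketch of that idea, assuming nothing more than numpy and scikit-learn: a 400-dimensional vector with only 5 non-zeros is observed through 80 random Gaussian measurements and recovered exactly by Orthogonal Matching Pursuit. The sizes are arbitrary; the point is that randomness plus parsimony lets us get away with far fewer measurements than unknowns.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 400, 80, 5                            # ambient dimension, measurements, sparsity

x = np.zeros(n)                                 # the sparse needle in the haystack
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
y = A @ x                                       # 80 random measurements of a 400-dim signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
print("true support      :", np.sort(support))
print("recovered support :", np.flatnonzero(omp.coef_))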

What is the time frame for these tools using parsimony and randomness to be part of the standard toolbox in personalized medicine ?

Certainly less than 18 years. It took about 27 years to build efficient tools in linear algebra (EISPACK (1972) - LAPACK (1999)), and those tools are just now considering randomization (see Slowly but surely they'll join our side of the Force...). Exploiting the parsimony of the data will probably be handled at a faster pace by crowdsourced efforts such as scikit-learn. In the next eighteen years, we should expect libraries featuring standardized Advanced Matrix Factorization Techniques, as well as factorizations in the Streaming Data model, to be readily available in ready-to-use toolboxes. Parsimony also effectively embeds graph-related concepts, and one already sees the development of distributed computing frameworks beyond the now seven-year-old Hadoop, such as GraphLab.
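As a taste of what randomization joining the standard linear algebra toolbox looks like, here is a minimal sketch of the randomized range-finder behind randomized SVD (in the spirit of Halko, Martinsson and Tropp); scikit-learn already ships a production version of this idea.

import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Approximate truncated SVD via a random projection (randomized range-finder sketch)."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                                # orthonormal basis for the range of A
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)       # SVD of the small projected problem
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Quick check on a matrix with exact low-rank structure.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 1000))
U, s, Vt = randomized_svd(A, rank=50)
print(np.allclose(A, (U * s) @ Vt))       # True: the rank-50 structure is captured

The whole point is that the expensive factorization happens on the small projected matrix, which is what makes these methods attractive for the data volumes discussed above.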




But the concepts of parsimony and randomness will also play a tremendous role in how we take data in the first place, by changing the way we design diagnostic instruments. Sensing with parsimony, aka Compressive Sensing, will help in making this a reality. Besides aiding in reverse engineering biochemical networks or providing an effective way to compare genomic data, it will also help engineers devise new sensors or perfect older ones such as MRI. Expect new diagnostic tools.

Which gets us back to the original question: what can I say with certainty about August 25th, 2030 ? We will manage the large datasets coming out of the steamrollers only through the use of near-philosophical concepts such as parsimony and randomness. By doing so, we are likely to tremendously reduce our current number one and number two causes of death.






( Source for the last graph: The Burden of Disease and the Changing Task of Medicine, New England Journal of Medicine).


Stay tuned for the third installment of this series on Predicting the Future.

Sunday, August 26, 2012

Predicting the Future: The Steamrollers

[ This is part 1. Part 2 is here: Predicting the Future: Randomness and Parsimony ]

In order to predict what will happen on August 25th, 2030, here is some history and perspective:



In 1975, while they had no idea how it would affect their business, Kodak engineers produced the first digital camera using a technology called CCD. It took the economies of scale of CMOS (brought forth by the development of computing) to drive down the cost of cameras so that anybody could own one.



CMOS imaging started in the lab in the mid-90s; fast forward 14 years, and a billion smartphones with integrated CMOS cameras are produced per year. In short, the economies of scale that followed Moore's law enabled the growth from a prototype into a mass-market item in little over a decade. It did not happen by magic: entire industries' capital and government funding were poured into making this a reality. In fact, algorithm development also helped. From Fueling Innovation and Discovery: The Mathematical Sciences in the 21st Century, there is a description of how the Fast Multipole Method (FMM), discovered in the late 80s, helped:

....The applications of the Fast Multipole Method have not been limited to the military. In fact, its most important application from a business perspective is for the fabrication of computer chips and electronic components. Integrated circuits now pack 10 billion transistors into a few square centimeters, and this makes their electromagnetic behavior hard to predict. The electrons don’t just go through the wires they are supposed to, as they would in a normal-sized circuit. A charge in one wire can induce a parasitic charge in other wires that are only a few microns away. Predicting the actual behavior of a chip means solving Maxwell’s equations, and the Fast Multipole Method has proved to be the perfect tool. For example, most cell phones now contain components that were tested with the Fast Multipole Method before they were ever manufactured....

Some could argue that there must be a caveat, that Moore's law cannot continue forever. Indeed, witness the recent bumps that have emerged in this rosy picture in the Scariest Graph I've Seen Recently. But that would be too pessimistic: this type of comparison, pitting upcoming showstoppers against current technologies, never allows for the inclusion of other competing technologies (see the 1-bit camera concepts (QIS, Gigavision Camera)). Hence, because there is no compelling reason for it to stop, we will consider Moore's law to hold for the next twenty years.




The other lesson from this recent history is that any technology competing with CMOS is bound to yield under the stress of this exponential growth (see Do Not Mess with CMOS). In all, Moore's law, or any law that is exponential in nature, acts as a steamroller and provides the best predictor of what is going to happen in the next fifteen to eighteen years.


I recently mentioned (see What is Faster than Moore's Law and Why You Should Care) that there was another technology that was growing faster than Moore's law: Genomic Sequencing.

In such cases of rapid growth, one always wonders whether we are witnessing some sort of transient, an accidental bump, or a real asymptotic trend, i.e. a real "law". Thanks to the nanopore technology featured in a real test this year (2012), it looks like the growth rate of the genomics law is bound to remain larger than Moore's law for a while longer. This translates, for instance, into the fact that data generated by genomics will overwhelm the data currently produced by imaging CMOS (to give an idea of the growth of images and videos, YouTube recently stated that 72 hours of video were uploaded to YouTube every minute). While it took 14 years in the CMOS example to go from one prototype to a billion units generating large amounts of data, it may take a shorter time span for this genomics technology to become an integral part of the medical profession and of our everyday life through synthetic biology.

It is stunning, and somehow seems ludicrous, to think of such a rapid integration of genomics into medicine. Indeed, one wonders: how can something like this happen that fast ? I have two examples:

  • The first is anecdotal: Elaine Mardis, a researcher performing sequencing, has to change her presentation slides every fifteen days (she says so in this video). I have never heard somebody in science and technology make such a statement, even off-hand.
  • The second has to do with the algorithms currently used to reassemble genomes from short reads. Consider this description:
"....Interpreting the Human Genome
In all, human DNA contains about 3 billion “base pairs,” or rungs of the staircase. The goal of the Human Genome Project was to list all of them in order. Unfortunately, chemists can sequence only a few hundred base pairs at a time. To sequence the whole genome, scientists had to chop it into millions of shorter pieces, sequence those pieces, and reassemble them. "
[Emphasis added ]
If the Nanopore technology results hold, most of these assembly algorithms will become effectively useless (for sequencing), as the DNA is chopped into much larger fragments than with current sequencing technology and can hence be put back together far more easily...
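To make that last point concrete, here is a toy illustration (with made-up sequences) of why read length is what makes reassembly easy or hard: two different genomes sharing a repeated segment produce exactly the same multiset of short reads, but a read longer than the repeat tells them apart.

def reads(genome, read_length):
    """The multiset of all reads of a given length, which is what a sequencer hands back."""
    return sorted(genome[i:i + read_length] for i in range(len(genome) - read_length + 1))

repeat = "ACGTACGT"                                  # an 8-base repeated segment (made up)
genome_1 = "AAA" + repeat + "CCC" + repeat + "GGG" + repeat + "TTT"
genome_2 = "AAA" + repeat + "GGG" + repeat + "CCC" + repeat + "TTT"

print(reads(genome_1, 4) == reads(genome_2, 4))      # True : 4-base reads cannot tell them apart
print(reads(genome_1, 12) == reads(genome_2, 12))    # False: reads longer than the repeat can

Real genomic repeats run to thousands of bases, which is why reads of only a few hundred base pairs force all that chopping and reassembling in the first place, and why much longer reads change the problem so dramatically.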

In summary, in this first part of this piece on predicting the future, I described two technological steamrollers that will shape our technology landscape for the next 18 years. In part 2, I will describe the types of algorithms that will help us make sense of the tsunami of data generated as a result of these two technologies. I will then try to translate this into what it will mean on August 25th, 2030. Stay tuned.
