Thursday, March 04, 2021

Video: LightOn unlocks Transformative AI

In the coming days, we'll be making another announcement, but I first wanted to share a video we did recently. At LightOn, we don't build photonic computing hardware because it's fancy or cool (even though it is cool) but because computing hardware is hitting its limits. I know what some say about Moore's law not being dead, but the recent focus on Transformers and their attendant scaling laws makes it obvious that, for more people to have access to these models, we need a new computing paradigm. Indeed, not everyone can afford to spend a billion dollars training these models. As Azeem was recently pointing out in one of his newsletters, this is how bad things will become:
The amazing thing is that we can start to compare the cost of training single AI models with the cost of building the physical fabs that make chips. TSMC’s state-of-the-art 3nm fab will run to around $20bn when it is completed in two years. A fab like this may be competitive for 5-7 years, which means it’ll need to churn out $7-8m worth of chips every day before it pays back.
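
For what it's worth, here is a quick back-of-the-envelope check of those figures, a minimal sketch in Python; the simple linear amortization (capex spread evenly over the fab's competitive lifetime, ignoring financing and operating costs) is my assumption, not Azeem's:

```python
# Back-of-the-envelope check of the fab payback figure quoted above.
# Assumption (mine, not from the quote): simple linear amortization,
# i.e. capex divided by the fab's competitive lifetime in days,
# with no financing or operating costs.
fab_capex = 20e9  # ~$20bn for TSMC's state-of-the-art 3nm fab

for lifetime_years in (5, 7):
    per_day = fab_capex / (lifetime_years * 365)
    print(f"{lifetime_years}-year lifetime -> "
          f"~${per_day / 1e6:.1f}m of chips per day to recoup capex")
```

That comes out to roughly $8-11m a day, in the same ballpark as the "$7-8m worth of chips every day" in the quote.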

And so at LightOn, we think that a combination of algorithms and (cool) hardware is the only pathway forward for computing large-scale AI. The video is right here, enjoy!







 
Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv
