From Vladimir Koifman's blog, here is a presentation by Robert LiKamWa on his work: "RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision".
The paper is: RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision by Robert LiKamWa, Yunhui Hou, Yuan Gao, Mia Polansky, and Lin Zhong.
The RedEye repository is at: https://github.com/JulianYG/redeye_sim and features the following:
Continuous mobile vision is limited by the inability to efficiently capture image frames and process vision features. This is largely due to the energy burden of analog readout circuitry, data traffic, and intensive computation. To promote efficiency, we shift early vision processing into the analog domain. This results in RedEye, an analog convolutional image sensor that performs layers of a convolutional neural network in the analog domain before quantization. We design RedEye to mitigate analog design complexity, using a modular column-parallel design to promote physical design reuse and algorithmic cyclic reuse. RedEye uses programmable mechanisms to admit noise for tunable energy reduction. Compared to conventional systems, RedEye reports an 85% reduction in sensor energy, 73% reduction in cloudlet-based system energy, and a 45% reduction in computation-based system energy.
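The core idea above, running convolutional layers in the analog domain (where noise is admitted as a tunable knob) and quantizing only afterwards, can be sketched in a few lines. This is a toy numerical model, not code from the redeye_sim repository; the noise level and bit depth below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def analog_conv_sim(image, kernel, noise_sigma=0.01, bits=8):
    """Toy model of the RedEye idea: convolve in the 'analog' domain,
    inject read noise, then quantize at the end.
    noise_sigma and bits are illustrative knobs, not paper values."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    # "Analog" convolution: no quantization between multiply-accumulates
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    # Noise admitted before readout: larger noise_sigma would model
    # the tunable energy/accuracy trade-off described in the abstract
    noisy = out + np.random.normal(0.0, noise_sigma, out.shape)
    # Quantization happens only after the conv layer (late ADC readout)
    lo, hi = noisy.min(), noisy.max()
    q = np.round((noisy - lo) / max(hi - lo, 1e-12) * (2 ** bits - 1))
    return q.astype(np.int32)

img = np.random.rand(16, 16)      # stand-in for a sensor patch
k = np.ones((3, 3)) / 9.0         # simple box filter as the conv kernel
result = analog_conv_sim(img, k)
print(result.shape)  # (14, 14)
```

In the actual architecture the column-parallel design cycles each physical column circuit over multiple layers; here the loop simply stands in for that analog multiply-accumulate, with quantization deferred to the very end.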