Laurent Jacques reminded me of this excellent comparison study between different compressive sensing architectures and traditional coded aperture, featured in Compressive Optical Imaging: Architectures and Algorithms by Roummel F. Marcia, Rebecca M. Willett, and Zachary T. Harmany.
I like the challenge section at the very end:
...First, directly implementing CS theory by collecting a series of independent pseudorandom projections of a scene requires either (a) a very large physical system or (b) observations collected sequentially over time. This latter approach is successfully used, for instance, in the Rice Single Pixel Camera. Alternative snapshot architectures (which capture all observations simultaneously) with a compact form factor include coded aperture techniques. These approaches impose structure upon the pseudorandom projections, most notably by limiting their independence. As a result, the number of measurements required to accurately recover an image is higher with snapshot coded aperture systems.

A second key challenge relates to the nonnegativity of image intensities and measurements which can be collected by linear optical systems. Much of the theoretical literature on CS allows for negative measurements and does not consider nonnegativity during the reconstruction process. In this chapter we have shown that (a) explicitly incorporating nonnegativity constraints can improve reconstruction accuracy and (b) pre-processing observations to account for nonnegative sensing matrices improves reconstruction performance because of central assumptions underlying some fast CS algorithms. However, one important open question is whether novel approaches based on nonlinear optics can successfully circumvent these positivity constraints to improve performance.
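The point about nonnegativity constraints is easy to illustrate numerically. Here is a minimal sketch (my own, not from the chapter, with made-up dimensions) comparing an unconstrained least-squares reconstruction against nonnegative least squares when both the signal and the sensing matrix are nonnegative, as they would be in a linear optical system:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5  # signal length, measurements, sparsity (arbitrary choices)

# A sparse, nonnegative signal (think: faint point sources)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

# A nonnegative sensing matrix, as produced by intensity measurements
A = rng.random((m, n))
y = A @ x_true

# Unconstrained minimum-norm least squares ignores the physics
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# Nonnegative least squares enforces the physical constraint
x_nn, _ = nnls(A, y)

print("LS   error:", np.linalg.norm(x_ls - x_true))
print("NNLS error:", np.linalg.norm(x_nn - x_true))
```

In this underdetermined setting, the nonnegativity constraint alone typically pulls the solution much closer to the true sparse signal than the minimum-norm solution, in line with the chapter's observation (a).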
I added the links in the text for a rapid connection to the examples given. But the item I found most interesting, because it does not seem to be a very active area of research, is the following statement on the fill factor of the coded aperture:
The coded aperture work discussed earlier is focused on developing a coded aperture system with a fill factor of 50%; i.e., 50% of the positions in the coded mask were opaque, and the remainder allowed light through to the detector. In low-light settings, this high fill factor is desirable because it allows a significant proportion of the light through to the detector and “wastes” very few photons. This approach is particularly effective when the scene is sparse in the canonical or pixel basis (e.g., faint stars against a dark sky). However, when the scene is sparse in some other basis, such as a wavelet basis, and not sparse in the pixel basis, a large fill factor can cause significant noise challenges. These challenges are described from a theoretical perspective in . Intuitively, consider that when the fill factor is close to 50%, most of the coded aperture measurements will have the same average intensity plus a small fluctuation, and the photon noise level will scale with this average intensity. As a result, in low light settings the noise will overwhelm the small fluctuations which are critical to accurate reconstruction unless the scene is sparse in the pixel basis and the average intensity per pixel is low. These challenges can be mitigated somewhat by using smaller fill factors, but in general limit the utility of any linear optical CS architecture for very low-intensity images which are not sparse in the canonical basis.
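The intuition in that paragraph can be checked with a back-of-the-envelope simulation. Below is a rough sketch (my own numbers, not the authors'): for a very low-light scene that is not sparse in the pixel basis, I compare the spread of the measurements across random masks (the informative fluctuation) against the Poisson shot noise, which scales like the square root of the mean count:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024                            # scene pixels (arbitrary)
scene = rng.uniform(0.05, 0.15, n)  # very low-light scene, NOT sparse in pixels

results = {}
for fill in (0.5, 0.1):
    # Each coded aperture measurement sums the scene through a random mask
    mean_count = fill * scene.sum()                   # average detector count
    samples = [(rng.random(n) < fill) @ scene for _ in range(500)]
    fluctuation = np.std(samples)                     # the informative signal
    shot_noise = np.sqrt(mean_count)                  # Poisson noise ~ sqrt(mean)
    results[fill] = (fluctuation, shot_noise)
    print(f"fill={fill:4.2f}  fluctuation={fluctuation:.2f}  "
          f"shot noise={shot_noise:.2f}")
```

At a 50% fill factor the shot noise dwarfs the measurement-to-measurement fluctuation, exactly the failure mode the chapter describes; shrinking the fill factor improves the ratio only modestly, which matches the "mitigated somewhat" hedge.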
Is this calling for a third way of mixing light rays? Does this call for sparse measurement matrices? There are many important questions to be answered thanks to Roummel Marcia, Rebecca Willett, and Zachary Harmany.
David San Segundo let me know of a similar but different paper entitled Compressive Sensing for Practical Optical Imaging Systems: a Tutorial.
It talks about different technology implementations but also makes a good reference to the Donoho-Tanner phase transition, which, in a tutorial dedicated to hardware, is good news! Thanks David!
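For readers who haven't played with the Donoho-Tanner phase transition, it is easy to probe empirically. Here is a small sketch (my own toy setup, tiny dimensions so a generic LP solver suffices) that runs basis pursuit at a fixed undersampling ratio and shows recovery succeeding for sparsity well below the transition and failing well above it:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n = 60  # ambient dimension (kept small so linprog stays fast)

def bp_recovers(m, k):
    """Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP in (u, v) >= 0."""
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
    y = A @ x
    res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    x_hat = res.x[:n] - res.x[n:]
    return np.linalg.norm(x_hat - x) < 1e-4

m = n // 2                  # undersampling ratio delta = 0.5
rates = {}
for k in (3, 25):           # well below vs well above the transition
    rates[k] = np.mean([bp_recovers(m, k) for _ in range(10)])
    print(f"k={k:2d}: success rate {rates[k]:.1f}")
```

Sweeping both the undersampling ratio m/n and the sparsity ratio k/m on a grid, and plotting the empirical success rate, reproduces the sharp boundary the tutorial refers to.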