I cannot seem to find the paper for this tiny lens-free imager anywhere, but I have gone down this route several times already. It looks like we are witnessing a typical Compressive Sensing problem, but the designers are probably not aware of it. This is understandable, as it takes a lot of effort to design hardware, and sometimes the deconvolution is, if not an afterthought, something that is considered known. If one goes to the research page of Alyosha Molnar's group, one can see some of the drawings/figures of what this sensor is doing.
In short, the response of the chip is angle dependent: for each ray of light (which one could consider a Dirac) striking it at a specific angle, the chip records three different voltages. These voltages differ enough over the range of the field of view to permit some discretization, thanks probably to the dynamic range of the chip (this is not unlike the Anger camera scheme).
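To make the angle-dependent response concrete, here is a toy sketch. The three angle-sensitive functions below are my own invention for illustration, not the chip's actual characteristics; the point is only that each discretized incident angle maps to a distinct triple of voltages, which stacked together form the columns of a sensing matrix.

```python
import numpy as np

# Hypothetical field of view and angular discretization (not from the paper).
angles_deg = np.linspace(-30, 30, 61)
theta = np.deg2rad(angles_deg)

def pixel_response(theta):
    """Three made-up angle-dependent voltage readings for one incident ray.

    Phase-shifted raised cosines stand in for the chip's true (unknown to me)
    angle-sensitive response curves.
    """
    return np.array([
        0.5 * (1 + np.cos(4 * theta)),               # channel 1
        0.5 * (1 + np.cos(4 * theta + np.pi / 2)),   # channel 2, phase-shifted
        0.5 * (1 + np.cos(4 * theta + np.pi)),       # channel 3, phase-shifted
    ])

# Sensing matrix: one column per discretized angle, one row per voltage channel.
A = pixel_response(theta)
print(A.shape)   # three measurements per incident angle
```

Each column of `A` is the "signature" of one ray direction; as long as the columns are distinguishable, the angle information is recoverable from the voltages.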
In this example, we have a ray of light that provides three measurements in parallel. Remember, this is a one-to-many setup similar to the analog-to-digital converter work being undertaken by several groups. For one pixel, several rays of light of different intensities (figure 3) will strike the chip. This is similar to a typical multiplexing operation. In order to obtain the image back from the chip measurements, one is facing a deconvolution problem with a transfer function that looks like the second figure above. What do we get if we naively take the results from the chip and deconvolve them using, I am guessing, a least-squares solver? Well, from the press release, we get this pretty bad Mona Lisa (on the right).
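The failure mode of the naive approach can be sketched in a few lines. The matrix below is a random stand-in for the chip's actual transfer function (sizes are hypothetical); the scene is a toy sparse signal. With far fewer measurements than unknowns, a least-squares solve fits the data perfectly yet lands far from the true scene.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multiplexing matrix: many fewer measurements than unknowns,
# so the system is underdetermined (as argued above for the chip).
m, n = 30, 100
A = rng.standard_normal((m, n))

y_true = np.zeros(n)                 # toy sparse scene
y_true[[5, 40, 77]] = [1.0, -0.5, 2.0]
x = A @ y_true                       # the chip's measurements

# Naive deconvolution: the minimum-norm least-squares solution
# (roughly what a backslash-style solve gives for this system).
y_ls, *_ = np.linalg.lstsq(A, x, rcond=None)
print(np.linalg.norm(A @ y_ls - x))    # residual: essentially zero
print(np.linalg.norm(y_ls - y_true))   # reconstruction error: large
```

The least-squares answer smears the energy of the three true spikes over all hundred unknowns, which is exactly the kind of blur visible in the press-release Mona Lisa.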
Why? Most probably because an L_1 reconstruction solver was not used to deconvolve this image. There are many solutions to this multiplexed deconvolution problem since there are not that many sensors (the system is very likely underdetermined). The L_1 solvers have the particularity that they "search" for the sparsest decomposition that fits the measurements provided by the chip, as opposed to searching for the solution with the smallest least-squares error. It turns out that, in most cases, the L_1 solution is also the one closest to the true image among all the solutions that fit the measurements (including the least-squares solution).
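On the same toy problem as above, a bare-bones L_1 solver makes the point. The iterative soft-thresholding loop below (ISTA) is my own minimal sketch standing in for a real package like SPGL1, GPSR, or SL0, and the problem sizes are again hypothetical; the sparse solution it finds is far closer to the truth than the least-squares one.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy underdetermined system
y_true = np.zeros(n)
y_true[[5, 40, 77]] = [1.0, -0.5, 2.0]         # sparse scene
x = A @ y_true                                  # measurements

def ista(A, x, lam=0.01, n_iter=5000):
    """Minimal L_1 solver (iterative soft thresholding) -- a sketch,
    not one of the packages named in the text."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    y = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ y - x)                # gradient of the data-fit term
        z = y - grad / L
        y = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return y

y_l1 = ista(A, x)
y_ls, *_ = np.linalg.lstsq(A, x, rcond=None)    # least-squares, for comparison
print(np.linalg.norm(y_l1 - y_true))            # small
print(np.linalg.norm(y_ls - y_true))            # large
```

The soft-thresholding step is what enforces sparsity: components that do not help explain the measurements get pushed to exactly zero instead of being kept small but nonzero.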
After having spent so many hours devising this ingenious piece of hardware, how can we make this Mona Lisa look better... fast? L_1 reconstruction solvers can be used like any other least-squares solver. If one were interested in getting a better-looking Mona Lisa, easy-to-use and efficient implementations are available, such as SPGL1, GPSR, or SL0. Instead of performing y = A\x in Matlab (with x the chip measurements and y the reconstructed image), one just needs to perform y = SPGL1(A,x,...) or y = GPSR_BB(A,x,...) or y = SL0(A,x,...) after having downloaded any of these packages. Please note that the "..." replaces the arguments generally used in the examples provided with these packages. There should be no need to tune these parameters to get a better Mona Lisa. Let us also note that, for an optimal result, one should certainly add a wavelet dictionary to the reconstruction process, i.e. use A*W instead of just A, with W representing a wavelet basis (see Sparco's problems for an easy-to-use implementation), and then map the recovered coefficients back through W.
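The wavelet-dictionary trick can be sketched as follows. Natural images are not sparse pixel-by-pixel, but they are (approximately) sparse in a wavelet basis, so one solves for sparse coefficients c with the composed matrix A*W and maps back via y = W*c. The hand-rolled Haar matrix and the toy piecewise-constant "scene" below are my own illustration, not Sparco's implementation.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar analysis matrix (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                # coarse: pairwise sums
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # detail: pairwise differences
    return np.vstack([top, bot]) / np.sqrt(2)

# Hypothetical sizes for illustration.
rng = np.random.default_rng(1)
n, m = 64, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)

H = haar_matrix(n)      # analysis: c = H @ y
W = H.T                 # synthesis: y = W @ c (orthonormal, so W = H^T)

# A toy piecewise-constant scene: dense in pixels, very sparse in Haar.
y = np.r_[np.full(32, 1.0), np.full(32, -0.5)]
c = H @ y
print(np.sum(np.abs(c) > 1e-10))   # only a couple of nonzero coefficients

# Hand A @ W (instead of A) to the L_1 solver, recover c, then y = W @ c.
AW = A @ W
```

The solver then searches for a sparse c such that (A @ W) c matches the measurements, which is a much better-posed sparsity model for images than sparsity in the pixel domain.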