One of the difficulties of working with video material is that everything suddenly gets bigger and computations slow down quickly. This is the reason we had to crop the scene in the Fukushima endoscopic videos. The scene analyzed was essentially a cropped version of the next 12 seconds of the following video
(at one minute into the video)
Obviously, random projections as implemented in SpaRCS would be helpful. A compressive sensing of the dynamical scenes of this video might also give us a way to think about the kind of parameters a compressive sensing system would need in this type of situation. One of the interesting features of these shots is that the "noise" or blips observed on the focal plane array are not unlike what one would see in a hyperspectral video a la CASSI, where a third dimension (spectral information) is embedded in each 2D shot through a hardware-based convolution. Here, the hardware-based convolution is the interaction between the radiation field and the hardware instance of the focal plane array, which results in these "noisy" or, more exactly, radiation-modulated images.
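To make the random-projection idea concrete, here is a minimal sketch of the acquisition side only, in the spirit of SpaRCS-style compressive video sensing: each vectorized frame is hit with the same Gaussian measurement matrix, shrinking the data volume before any recovery step. The frame sizes, the 25% measurement rate, and the synthetic low-rank-plus-sparse video (static background plus a few bright "blips" loosely mimicking radiation-modulated spikes) are all illustrative assumptions, not taken from the video above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames, h, w = 12, 32, 32          # toy "cropped scene": 12 small frames
n = h * w                            # ambient dimension per frame
m = n // 4                           # keep only 25% as many measurements

# Synthetic low-rank + sparse video: a static background plus a few
# sparse outliers per frame (stand-ins for the radiation "blips").
background = rng.random((h, w))
frames = np.stack([background.copy() for _ in range(n_frames)])
for t in range(n_frames):
    idx = (rng.integers(0, h, size=5), rng.integers(0, w, size=5))
    frames[t][idx] += 3.0            # sparse bright spikes

# One Gaussian measurement matrix applied to every vectorized frame:
# y_t = Phi @ x_t.  A recovery algorithm such as SpaRCS would then
# estimate the low-rank and sparse parts from the y_t alone; only the
# compressive acquisition is sketched here.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
X = frames.reshape(n_frames, n).T    # columns are vectorized frames
Y = Phi @ X

print(Y.shape)                       # 4x fewer rows than X
```

The point of the sketch is only the data reduction: the focal plane array data volume drops by the chosen factor before storage or transmission, and the structure (low-rank background, sparse blips) is what a SpaRCS-type solver would exploit downstream.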