
Thursday, August 16, 2012

Compressive Sensing with Local Geometric Features: Hardware and Implementation





We propose a framework for compressive sensing of images with local distinguishable objects, such as stars, and apply it to solve a problem in celestial navigation. Specifically, let x be an N-pixel real-valued image, consisting of a small number of local distinguishable objects plus noise. Our goal is to design an m-by-N measurement matrix A with m << N, such that we can recover an approximation to x from the measurements Ax. We construct a matrix A and recovery algorithm with the following properties: (i) if there are k objects, the number of measurements m is O((k log N)/(log k)), undercutting the best known bound of O(k log(N/k)); (ii) the matrix A is very sparse, which is important for hardware implementations of compressive sensing algorithms; and (iii) the recovery algorithm is empirically fast and runs in time polynomial in k and log(N). We also present a comprehensive study of the application of our algorithm to attitude determination, or finding one's orientation in space. Spacecraft typically use cameras to acquire an image of the sky, and then identify stars in the image to compute their orientation. Taking pictures is very expensive for small spacecraft, since camera sensors use a lot of power. Our algorithm optically compresses the image before it reaches the camera's array of pixels, reducing the number of sensors that are required.
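The sparse measurement matrix in the abstract is closely related to folding an image modulo coprime grid sizes. The following is a minimal illustrative sketch of that idea (my own toy version, not the authors' exact construction): each pixel contributes to exactly one cell of each folded grid, so the implied measurement matrix has a single nonzero entry per column.

```python
def fold(image, p):
    # Fold an n x n image into a p x p grid: pixel (i, j) is added
    # into cell (i mod p, j mod p).  Each output cell is a plain sum
    # of input pixels, so the implied measurement matrix has exactly
    # one nonzero per column -- i.e. it is very sparse.
    n = len(image)
    folded = [[0.0] * p for _ in range(p)]
    for i in range(n):
        for j in range(n):
            folded[i % p][j % p] += image[i][j]
    return folded

# A single bright "star" in a 100 x 100 image survives two foldings:
n = 100
img = [[0.0] * n for _ in range(n)]
img[42][17] = 1.0
f1 = fold(img, 29)   # 29 x 29 = 841 measurements
f2 = fold(img, 31)   # 31 x 31 = 961 measurements
# The star lands at (42 mod 29, 17 mod 29) = (13, 17) in f1 and at
# (42 mod 31, 17 mod 31) = (11, 17) in f2; since 29 and 31 are coprime,
# the Chinese remainder theorem pins down the original (42, 17).
```

With two folded grids we take 841 + 961 = 1802 measurements instead of 10,000 pixels, and a lone bright object can still be located exactly.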

Much like the compressive sensing radar featured earlier, let us note that these implementations do not rely on simple, vanilla compressive sensing. Specific acquisition strategies and attendant algorithms are being developed. Here, the recovery algorithm is a derivative of the count-min sketch. If you recall, this type of algorithm counts items whose frequencies follow a pretty drastic power law (see the excellent Probabilistic Data Structures for Web Analytics and Data Mining by Ilya Katsov, and the attendant figure below).
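For readers who have not run into it, a count-min sketch can be written in a few lines; this is a generic textbook sketch, not the paper's variant:

```python
import random

class CountMinSketch:
    # Minimal count-min sketch: d hashed rows of width w.  Point
    # queries return an overestimate of the true count; the error is
    # small when frequencies are heavy-tailed (power-law), which is
    # exactly the regime mentioned above.
    def __init__(self, w=100, d=4, seed=0):
        rng = random.Random(seed)
        self.w, self.d = w, d
        self.salts = [rng.random() for _ in range(d)]
        self.table = [[0] * w for _ in range(d)]

    def _hash(self, item, row):
        return hash((self.salts[row], item)) % self.w

    def add(self, item, count=1):
        for r in range(self.d):
            self.table[r][self._hash(item, r)] += count

    def query(self, item):
        # Take the minimum over rows: collisions only inflate counts,
        # so the smallest cell is the tightest available estimate.
        return min(self.table[r][self._hash(item, r)]
                   for r in range(self.d))

cms = CountMinSketch()
for _ in range(1000):
    cms.add("bright-star")
cms.add("faint-star")
# query() never underestimates: the heavy item dominates its cells.
```

The structure uses O(w·d) memory regardless of how many distinct items are inserted, which is the same flavor of sublinearity the compressive sensing scheme exploits.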




Let us look at the type of constraints this kind of hardware needs to adhere to:

3.1 Current star trackers
A star tracker is essentially a digital camera, called a star camera, connected to a microprocessor. We describe various characteristics of the camera hardware and star identification algorithms.
3.1.1 Numbers
We first provide some numbers from [Lie02] and [WL99] to give a sense of scale. As of 2001, a typical CCD star tracker consumes 5-15W of power. A small spacecraft uses 200W of power, and a minimal one uses less than 100W, so this can be a substantial amount. A high-end star tracker can resolve approximately the same set of stars that an unaided human can on a moonless night away from all light pollution. The number of stars in a star tracker's database varies from 58 to many thousands. The camera's field of view can vary from 2×2 degrees to 30×30 degrees, or anywhere from .01% to 4% of the sky. For comparison, the full moon is about .5 degrees across, and an adult fist held at arm's length is about 10 degrees across. A CCD camera can have up to a million pixels, and the accuracy of the final attitude is usually around .001 degrees (1 standard deviation), compared to .01 degrees for the next best sensors. The attitude is updated anywhere from 0.5 to 10 times a second.

The CCD has a single ADC located in the corner of the pixel array. A CCD array is read as follows: each pixel repeatedly transfers its charge into a neighboring pixel, so that the charge from any given pixel eventually travels a taxicab path to the ADC. Charges from different pixels are never combined, so there are a total of Θ(n³) charge transfers. Since the ADC only digitizes one pixel at a time, it also takes Θ(n²) time to read the whole array. In addition, each charge transfer leaves a fraction ε of the electrons behind, where 1 − ε equals the charge transfer efficiency. The electrons in the farthest pixels undergo Θ(n) charge transfers, and in practice it is costly to achieve ε < 10⁻⁵, which puts a bound on the maximum size of a CCD array [Hol98, Fos93]. Even if future technology were to allow a better charge transfer efficiency, it is worth noting that each charge transfer uses a large constant amount of power, and that the total number of charge transfers is super-linear in the number of pixels.
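As an aside, the Θ(n³) count above is just the sum of all taxicab path lengths to the corner, which a few lines of Python can verify (my own sanity check, not from the paper):

```python
def ccd_charge_transfers(n):
    # Each pixel (i, j) of an n x n CCD shifts its charge along a
    # taxicab path of length i + j to the ADC at corner (0, 0);
    # charges are never combined, so the total readout work is the
    # sum of all path lengths.
    return sum(i + j for i in range(n) for j in range(n))

# Closed form: n^2 * (n - 1), i.e. Theta(n^3) charge transfers,
# versus only n^2 ADC conversions to digitize every pixel once.
transfers = ccd_charge_transfers(512)   # = 512^2 * 511

# Fraction of charge surviving from the farthest pixel, which
# undergoes roughly 2n transfers, each losing a fraction eps = 1 - CTE:
eps = 1e-5
surviving = (1 - eps) ** (2 * 512)      # just under 99%
```

Even at ε = 10⁻⁵, a corner pixel on a 512×512 array loses about 1% of its charge in transit, which is why the charge transfer efficiency bounds the practical array size.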
On the other hand, CMOS devices have an ADC built into every pixel. This solves all of the problems noted above, and adds another important feature: random access reading. In other words, we can choose to read and digitize only a subset of the pixels, and in practice, that is done, saving power and subsequent digital processing costs [Lie02]. However, the ADCs take up valuable real estate, and reduce the percentage of the chip that is available to collect photons. CMOS devices also generate substantially more noise than CCDs, further reducing the signal to noise ratio [Lit01].
In practice, many consumer products such as cell phone cameras use CMOS, while scientific instruments use CCDs. Nevertheless, star trackers on small or low-power-budget satellites are starting to use CMOS, forgoing factors of 10 and higher in precision. We give the specifications of a CCD tracker and a CMOS tracker in current (2011) production to illustrate the difference, as well as some of the numbers highlighted in Section 3.1.1. The CT-602 Star Tracker has a CCD camera and is made by Ball Aerospace & Technologies. It uses 8-9W of power, weighs 5.5kg, has 6000 stars in its database, an 8×8 degree field of view, 512×512 pixels, an attitude accuracy of .0008 degrees, and updates 10 times a second [Bal]. Comtech AeroAstro's Miniature Star Tracker has a CMOS camera, uses < 2W of power, weighs .4-.8 kg, has 1200 stars in its database, a 24×30 degree field of view, and 1024×1280 pixels, but has an accuracy of only .03 degrees and updates only 2 times a second [Com].

and at the end:

The first observation we make is that ADUAF works very well down to an almost minimal number of measurements. The product p1p2 has to be greater than 800, and the minimal pair of primes is 29 and 31. As the number of measurements increases, SSMP catches up and surpasses ADUAF, but we note that running SSMP (implemented in C) takes 2.4 seconds per trial on a 2.3 GHz laptop, while ADUAF (implemented in Octave/Matlab) takes .03 seconds per trial. Computation power on a satellite is substantially lower than that of a low-end laptop, and given that the entire acquisition has to happen in .1 to 2 seconds, it seems unlikely that any algorithm linear or near-linear in N is going to be practical. Finally, we note that the plot lines for both SSMP and ADUAF could be improved by a more sophisticated implementation of the star identification algorithm.
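Assuming the two grid dimensions are consecutive primes whose product must exceed 800 (my reading of the construction, not necessarily the authors' exact criterion), the minimal pair can be found mechanically:

```python
def is_prime(n):
    # Trial division is plenty for numbers this small.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def minimal_prime_pair(threshold):
    # Scan consecutive primes p < q until their product exceeds the
    # threshold.  Assumption: "minimal set of primes" means the first
    # consecutive pair whose product clears the bound.
    p = 2
    while True:
        q = p + 1
        while not is_prime(q):
            q += 1
        if p * q > threshold:
            return p, q
        p = q

print(minimal_prime_pair(800))   # -> (29, 31), since 29 * 31 = 899 > 800
```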
I note that the potential hardware implementation is very similar to an idea we began to implement for a star tracker for the GIFTS program (which eventually got cancelled). There we used two fields of view that we had to deconvolve.




The attendant code is here. The Smithsonian Astrophysical Observatory star catalog is available at http://heasarc.gsfc.nasa.gov/W3Browse/star-catalog/sao.html.

Talking about star trackers, you probably recall this magnificent shot from Hayabusa using its star tracker to image the Earth and the Moon:



