A parallel structure for spectrum sensing in Cognitive Radio (CR) at a sub-Nyquist rate is proposed. The structure is based on Compressed Sensing (CS) and exploits the sparsity of frequency utilization. Specifically, the received analog signal is segmented, or time-windowed, and CS is applied to each segment independently using an analog implementation of the inner product; all the samples are then processed together to reconstruct the signal. Applying the CS framework directly to the analog signal relaxes the requirements on wideband RF receiver front-ends. Moreover, the parallel structure provides design flexibility and scalability in sensing rate and system complexity. This paper also provides a joint reconstruction algorithm that optimally detects the information symbols from the sub-Nyquist analog projection coefficients. Simulations showing the efficiency of the proposed approach are also presented.
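The paper's analog circuitry doesn't survive translation into a few lines of code, but the signal-processing skeleton does. Below is a minimal numpy sketch, assuming the analog inner products can be modeled as random projections of Nyquist-rate samples within each segment, and substituting orthogonal matching pursuit for the paper's optimal detector; the window length N, segment length L, branch count P, and sparsity K are all illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frequency-sparse test signal at the Nyquist rate.
N, K = 512, 4                       # window length, number of occupied bins
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT (sparsifying basis)
support = rng.choice(N, size=K, replace=False)
c = np.zeros(N, dtype=complex)
c[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = F.conj().T @ c                  # time-domain signal with a sparse spectrum

# Segmented, "parallel" analog front end: each length-L segment feeds P
# branches, and each branch models one analog inner product with a random
# mixing waveform, so the aggregate rate is P/L of Nyquist (1/4 here).
L, P = 64, 16
Phi = rng.standard_normal((P, L)) / np.sqrt(P)
y = x.reshape(-1, L) @ Phi.T        # P projection coefficients per segment

# Joint reconstruction: stack the per-segment projections into one linear
# system in the sparse spectrum, then solve with orthogonal matching pursuit
# (a stand-in for the paper's detector).
nseg = N // L
A = np.kron(np.eye(nseg), Phi) @ F.conj().T

def omp(A, y, k):
    """Orthogonal matching pursuit for y = A c with k-sparse c."""
    residual, idx = y.copy(), []
    for _ in range(k):
        scores = np.abs(A.conj().T @ residual)
        scores[idx] = 0.0           # don't re-pick an already-selected atom
        idx.append(int(np.argmax(scores)))
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ sol
    c_hat = np.zeros(A.shape[1], dtype=complex)
    c_hat[idx] = sol
    return c_hat

c_hat = omp(A, y.reshape(-1), K)
print("true occupied bins:     ", np.sort(support))
print("recovered occupied bins:", np.sort(np.flatnonzero(np.abs(c_hat) > 1e-8)))
```

With P/L = 1/4, the 128 projection coefficients from 8 parallel segments locate the 4 occupied bins among 512; the trade between P, L, and reconstruction quality is where the advertised flexibility in sensing rate and complexity shows up.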
Here the sparsity of spectrum utilization in cognitive radio allows compressed sensing to be used in wideband spectrum sensing. One should note the similarity with the Georgia Tech transform imager and the Random Demodulator.
Also, found on the internets:
Compressed Sensing of Analog Signals by Yonina C. Eldar
A traditional assumption underlying most data converters is that the signal should be sampled at a rate which exceeds twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals possess a sparse structure so that a large part of the bandwidth is not exploited. In this paper, we consider a framework for utilizing this sparsity in order to sample such analog signals at a low rate. More specifically, we consider continuous-time signals that lie in a shift-invariant (SI) space generated by m kernels, so that any signal in the space can be expressed as an infinite linear combination of the shifted kernels. If the period of the underlying SI space is equal to T, then such signals can be perfectly reconstructed from samples at a rate of m/T. Here we treat the case in which only k out of the m generators are active, meaning that the signal actually lies in a lower-dimensional space spanned by k generators. However, we do not know which k are chosen. By relying on results developed in the context of compressed sensing (CS) of finite-length vectors, we develop a general framework for sampling such signals at a rate much lower than m/T. The distinguishing feature of our results is that in contrast to the problems treated in the context of CS, here we consider sampling of analog signals for which no underlying finite-dimensional model exists. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite CS and then rely on efficient and stable algorithms developed in that context.
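The infinite-dimensional machinery is the point of the paper and cannot be reproduced here, but the reduction it leans on, in which every coefficient vector shares one unknown set of k active generators so the support can be found by finite CS, has a small finite-dimensional caricature. The sketch below is an assumption-laden toy, not the paper's sampling scheme: the random mixing matrix, all dimensions, and the simultaneous-OMP support recovery are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite caricature of the SI sampling problem: at each "time index" t the
# signal is described by m generator coefficients, of which only the same k
# rows are ever nonzero (k active generators, identity unknown).
m, k, T = 20, 3, 50              # generators, active generators, time indices
C = np.zeros((m, T))
active = rng.choice(m, size=k, replace=False)
C[active] = rng.standard_normal((k, T))

# Reduced-rate sampling: p < m measurements per time index, modeled as a
# fixed random mixing matrix applied to the coefficient vector at every t,
# i.e. roughly 2k/T instead of m/T in the paper's notation.
p = 2 * k
A = rng.standard_normal((p, m)) / np.sqrt(p)
Y = A @ C                        # one length-p measurement vector per t

# All columns of C share one support, so recover it jointly
# (simultaneous OMP over the multiple measurement vectors).
def somp(A, Y, k):
    residual, idx = Y.copy(), []
    for _ in range(k):
        scores = np.linalg.norm(A.T @ residual, axis=1)
        scores[idx] = 0.0
        idx.append(int(np.argmax(scores)))
        sol, *_ = np.linalg.lstsq(A[:, idx], Y, rcond=None)
        residual = Y - A[:, idx] @ sol
    return sorted(idx), sol

support, coeffs = somp(A, Y, k)
print("true active generators:     ", sorted(active))
print("recovered active generators:", support)
```

The joint structure is what buys the rate reduction here: p = 2k mixtures per time index identify the k active generators out of m = 20, where sampling each generator channel separately would cost m.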
Tomographic inversion using L1-norm regularization of wavelet coefficients by Ignace Loris, Guust Nolet, Ingrid Daubechies and Tony Dahlen.
Like most geophysical inverse problems, the linearized problem Am = d in seismic tomography is underdetermined, or at best offers a mix of overdetermined and underdetermined parameters. It has therefore long been recognized that it is important to suppress artifacts that could be falsely interpreted as ‘structure’ in the earth’s interior. Not surprisingly, strategies that yield the smoothest solution m — in the sense of minimizing a gradient (∇m) or second derivative (∇²m) norm — have been dominant in applications, either by using a low-degree spherical harmonic expansion (Dziewonski et al. 1975; Dziewonski & Woodhouse 1987; Masters et al. 1996) or by regularizing a dense local parametrization (Nolet 1987; Constable et al. 1987; Spakman & Nolet 1988; VanDecar & Snieder 1994; Trampert & Snieder 1996). Smooth solutions, however, while not introducing small-scale artifacts, produce a distorted image of the earth through the strong averaging over large areas, thereby making small-scale detail difficult to see, or even hiding it. Sharp discontinuities are blurred into gradual transitions. For example, the inability of global, spherical-harmonic, tomographic models to yield as clear an image of upper-mantle subduction zones as produced by more localized studies has long been held against them. Deal et al. (1999) and Deal & Nolet (1999) optimize images of upper-mantle slabs to fit physical models of heat diffusion, in an effort to suppress small-scale imaging artifacts while retaining sharp boundaries. Portniaguine & Zhdanov (1999) use a conjugate-gradient method to produce the smallest possible anomalous domain by minimizing a norm based on the gradient support ∇m/(∇m · ∇m + γ²)^(1/2), where γ is a small constant. Like all methods that deviate from a least-squares type of solution, both these methods are nonlinear and pose their own problems of practical implementation.

The notion that we seek the ‘simplest’ model m that fits a measured set of data d to within the assigned errors is intuitively equivalent to the notion that the model should be describable with a small number of parameters. But, clearly, restricting the model to a few low-degree spherical-harmonic or Fourier coefficients, or a few large-scale blocks or tetrahedra, does not necessarily lead to a geophysically plausible solution. In this paper we investigate whether a multiscale representation based upon wavelets has enough flexibility to represent the class of models we seek. We propose an ℓ1-norm regularization method which yields a model m that has a strong tendency to be sparse in a wavelet basis (Daubechies 1992), meaning that it can be faithfully represented by a relatively small number of nonzero wavelet coefficients. This allows for models that vary smoothly in regions of limited coverage without sacrificing any sharp or small-scale features in well-covered regions that are required to fit the data. Our approach is different from an approach briefly suggested by de Hoop & van der Hilst (2005), in which the mapping between data and model is decomposed in curvelets: here we are concerned with applying the principle of parsimony to the solution of the inverse problem, without any special preference for singling out linear features, for which curvelets are probably better adapted than wavelets.
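Setting the paper's solver and wavelet choices aside, the generic recipe, minimizing a least-squares misfit to Am = d plus an ℓ1 penalty on the wavelet coefficients of m, can be sketched with plain iterative soft thresholding (ISTA). The toy below recovers a 1-D piecewise-constant model, which is sparse in an orthonormal Haar basis, from underdetermined data; the Haar construction, noise level, and regularization weight are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, np.array([1.0, 1.0])) / np.sqrt(2)                # averages
    bot = np.kron(np.eye(n // 2), np.array([1.0, -1.0])) / np.sqrt(2)  # details
    return np.vstack([top, bot])

rng = np.random.default_rng(2)
n, nobs = 128, 60                 # model size, number of data (underdetermined)
W = haar_matrix(n)                # w = W @ m are the wavelet coefficients
A = rng.standard_normal((nobs, n)) / np.sqrt(nobs)

# Piecewise-constant "earth model": sharp jumps, sparse in the Haar basis.
m_true = np.zeros(n)
m_true[20:45] = 1.0
m_true[70:80] = -0.5
d = A @ m_true + 0.01 * rng.standard_normal(nobs)

# ISTA for min_w (1/2) ||A W.T w - d||^2 + lam ||w||_1, with m = W.T @ w.
B = A @ W.T
lam = 0.02
step = 1.0 / np.linalg.norm(B, 2) ** 2    # safe step from the Lipschitz bound
w = np.zeros(n)
for _ in range(500):
    w = w - step * (B.T @ (B @ w - d))                        # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold

m_hat = W.T @ w
print("relative model error:",
      np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))
```

The soft threshold drives most Haar coefficients exactly to zero, which is the parsimony argument of the excerpt in miniature: smooth behavior where the data are weak, sharp jumps where the data demand them.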
and two presentations: