I am putting on one page all the links to the entries from the generic thread entitled "These Technologies Do Not Exist".
There is some news about the PACS camera on Herschel, from the project website:
Status summary: Herschel was successfully launched together with Planck on 14 May 2009. After a successful Commissioning phase, including cryocover opening 1 month after launch, the Performance Verification phase commenced in mid-July. An approximate overview of the early mission phases includes the Commissioning Phase (COP), 2 months; the Performance Verification Phase (PVP), 3 months; followed by the Science Demonstration Phase (SDP) and a gradual transition into the Routine Science Phase (RSP). Currently, as of late November, the PVP activities have basically been completed except for HIFI, SDP has been underway for some time, and gradually more and more RSP observations are being executed. See also the HSC Operations (B)Log.
Let's hope that the Compressive Sensing encoding scheme for PACS has yielded good results.
I found the following on the interwebs:
Low-Dimensional Models for Dimensionality Reduction and Signal Recovery: A Geometric Perspective by Richard Baraniuk, Volkan Cevher, Michael Wakin. The abstract reads:
We compare and contrast from a geometric perspective a number of low-dimensional signal models that support stable information-preserving dimensionality reduction. We consider sparse and compressible signal models for deterministic and random signals, structured sparse and compressible signal models, point clouds, and manifold signal models. Each model has a particular geometrical structure that enables signal information to be stably preserved via a simple linear and nonadaptive projection to a much lower dimensional space; in each case the projection dimension is independent of the signal’s ambient dimension at best or grows logarithmically with it at worst. As a bonus, we point out a common misconception related to probabilistic compressible signal models; namely, we show that the oft-used generalized Gaussian and Laplacian models do not support stable linear dimensionality reduction.
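To make the stable-embedding claim a little more tangible, here is a small numpy sketch of mine (not the authors' code) that draws pairs of k-sparse signals and checks how well a random Gaussian projection to M << N dimensions preserves their pairwise distances:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k, trials = 1000, 80, 5, 200  # ambient dim, projection dim, sparsity

Phi = rng.normal(size=(M, N)) / np.sqrt(M)  # random Gaussian projection

ratios = []
for _ in range(trials):
    # draw two random k-sparse signals
    x, y = np.zeros(N), np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.normal(size=k)
    y[rng.choice(N, k, replace=False)] = rng.normal(size=k)
    # compare distances before and after projection
    ratios.append(np.linalg.norm(Phi @ (x - y)) / np.linalg.norm(x - y))

print(f"distance ratios: mean={np.mean(ratios):.3f}, std={np.std(ratios):.3f}")
# ratios concentrate near 1 even though M << N: a stable embedding
```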
Joint Manifolds for Data Fusion by Mark Davenport, Chinmay Hegde, Marco Duarte and Richard Baraniuk. The abstract reads:
The emergence of low-cost sensing architectures for diverse modalities has made it possible to deploy sensor networks that capture a single event from a large number of vantage points and using multiple modalities. In many scenarios, these networks acquire large amounts of very high-dimensional data. For example, even a relatively small network of cameras can generate massive amounts of high-dimensional image and video data. One way to cope with such a data deluge is to develop low-dimensional data models. Manifold models provide a particularly powerful theoretical and algorithmic framework for capturing the structure of data governed by a low-dimensional set of parameters, as is often the case in a sensor network. However, these models do not typically take into account dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that joint manifold structure can lead to improved performance for a variety of signal processing algorithms for applications including classification and manifold learning. Additionally, recent results concerning random projections of manifolds enable us to formulate a network-scalable dimensionality reduction scheme that efficiently fuses the data from all sensors.
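As a toy illustration of the joint manifold idea (my own sketch, not the paper's code): several "sensors" observe shifted views of one event governed by a single parameter, their outputs are concatenated into a joint vector, and a single random projection of the fused vector still lets us estimate the parameter by nearest-neighbor search:

```python
import numpy as np

rng = np.random.default_rng(1)
n, J, M = 128, 4, 30            # samples per sensor, sensors, projection dim
t = np.linspace(0, 1, n)

def sensor_snapshot(theta, j):
    # each sensor sees the same event at a different (fixed) offset
    return np.exp(-100 * (t - theta - 0.1 * j) ** 2)

def joint_vector(theta):
    return np.concatenate([sensor_snapshot(theta, j) for j in range(J)])

Phi = rng.normal(size=(M, J * n)) / np.sqrt(M)  # one projection of fused data

grid = np.linspace(0.1, 0.6, 200)               # candidate parameters
dictionary = Phi @ np.stack([joint_vector(g) for g in grid], axis=1)

theta_true = 0.37
y = Phi @ joint_vector(theta_true)              # compressed fused measurement
theta_hat = grid[np.argmin(np.linalg.norm(dictionary - y[:, None], axis=0))]
print(theta_true, theta_hat)                    # the estimates agree closely
```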
Group Testing Strategies for Recovery of Sparse Signals in Noise by Mark Iwen. The abstract reads:
We consider the recovery of sparse signals, f ∈ R^N, containing at most k ≪ N nonzero entries, using linear measurements contaminated with i.i.d. Gaussian background noise. Given this measurement model, we present and analyze a computationally efficient group testing strategy for recovering the support of f and approximating its nonzero entries. In particular, we demonstrate that group testing measurement matrix constructions may be combined with statistical binary detection and estimation methods to produce efficient adaptive sequential algorithms for sparse signal support recovery. Furthermore, when f exhibits sufficient sparsity, we show that these adaptive group testing methods allow the recovery of sparse signals using fewer noisy linear measurements than possible with non-adaptive methods based on Gaussian measurement ensembles. As a result we improve on previous sufficient conditions for sparsity pattern recovery in the noisy sublinear-sparsity regime.
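The adaptive flavor of such group testing can be conveyed with a toy sketch (mine, far simpler than Iwen's actual algorithm): to locate one significant positive entry buried in Gaussian noise, repeatedly take noisy group-sum measurements over halves of the current candidate set and descend into the half with the larger averaged sum. It uses about 2·log2(N) groups of repeated measurements, fewer than the N = 1024 ambient dimension:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, repeats = 1024, 0.1, 20
f = np.zeros(N)
support = rng.integers(N)
f[support] = 1.0                           # one significant entry, k = 1

def group_measure(idx):
    # noisy linear measurement summing f over a group of indices,
    # averaged over a few repeats to beat down the Gaussian noise
    return np.mean([f[idx].sum() + sigma * rng.normal() for _ in range(repeats)])

lo, hi = 0, N                              # adaptive bisection on the support
while hi - lo > 1:
    mid = (lo + hi) // 2
    left = group_measure(np.arange(lo, mid))
    right = group_measure(np.arange(mid, hi))
    lo, hi = (lo, mid) if left > right else (mid, hi)

print(support, lo)   # recovered support index after ~log2(N) adaptive stages
```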
Now on the reconstruction solvers side we also have:
Non-convexly constrained linear inverse problems by Thomas Blumensath. The abstract reads:
This paper considers the inversion of ill-posed linear operators. To regularise the problem the solution is enforced to lie in a non-convex subset. Theoretical properties for the stable inversion are derived and an iterative algorithm akin to the projected Landweber algorithm is studied. This work extends recent progress made on the efficient inversion of finite dimensional linear systems under a sparsity constraint to the Hilbert space setting and to more general non-convex constraints.
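In the finite-dimensional sparse case, this projected Landweber iteration is just iterative hard thresholding; here is a minimal sketch of mine (not the paper's code), assuming a Gaussian A rescaled to unit spectral norm and exact k-sparsity:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, k = 400, 120, 10
A = rng.normal(size=(M, N))
A /= np.linalg.norm(A, 2)        # scale so ||A||_2 <= 1, as the theory asks
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

def hard_threshold(x, k):
    # the non-convex "projection": keep only the k largest-magnitude entries
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

x = np.zeros(N)
for _ in range(500):
    # Landweber (gradient) step followed by projection onto the constraint set
    x = hard_threshold(x + A.T @ (b - A @ x), k)

print(np.linalg.norm(x - x_true))  # near zero: the sparse signal is recovered
```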
A Unified Primal-Dual Algorithm Framework Based on Bregman Iteration by Xiaoqun Zhang, Martin Burger and Stanley Osher. The abstract reads:
In this paper, we propose a unified primal-dual algorithm framework for two classes of problems that arise from various signal and image processing applications. We also show the connections to existing methods, in particular Bregman iteration [41] based methods, such as linearized Bregman [42, 9, 10, 49] and split Bregman [31]. The convergence of the general algorithm framework is proved under mild assumptions. The applications to ℓ1 basis pursuit, TV-L2 minimization and matrix completion are demonstrated. Finally, the numerical examples show the algorithms proposed are easy to implement, efficient, stable and flexible enough to cover a wide variety of applications.
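Of the methods unified there, linearized Bregman for ℓ1 basis pursuit is probably the most compact; here is a minimal numpy sketch, assuming a Gaussian A normalized to unit spectral norm and unit step size (my own toy, parameters deliberately left untuned):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, k = 400, 150, 10
A = rng.normal(size=(M, N))
A /= np.linalg.norm(A, 2)                # normalize so a unit step is safe
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

shrink = lambda v, mu: np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

mu = 5.0                                 # large mu approaches basis pursuit
v = np.zeros(N)
x = np.zeros(N)
for _ in range(5000):
    x = shrink(v, mu)                    # soft-thresholding step
    v = v + A.T @ (b - A @ x)            # Bregman (dual ascent) update
print(np.linalg.norm(A @ x - b), np.linalg.norm(x - x_true))
```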
Multiplicative noise removal using L1 fidelity on frame coefficients by S. Durand, J. Fadili and M. Nikolova. The abstract reads:
We address the denoising of images contaminated with multiplicative noise, e.g. speckle noise. Classical ways to solve such problems are filtering, statistical (Bayesian) methods, variational methods, and methods that convert the multiplicative noise into additive noise (using a logarithmic function), apply a variational method on the log data or shrink their coefficients in a frame (e.g. a wavelet basis), and transform back the result using an exponential function. We propose a method composed of several stages: we use the log-image data and apply a reasonable under-optimal hard-thresholding on its curvelet transform; then we apply a variational method where we minimize a specialized hybrid criterion composed of an ℓ1 data-fidelity to the thresholded coefficients and a Total Variation regularization (TV) term in the log-image domain; the restored image is an exponential of the obtained minimizer, weighted in such a way that the mean of the original image is preserved. Our restored images combine the advantages of shrinkage and variational methods and avoid their main drawbacks. Theoretical results on our hybrid criterion are presented. For the minimization stage, we propose a properly adapted fast scheme based on Douglas-Rachford splitting. The existence of a minimizer of our specialized criterion being proven, we demonstrate the convergence of the minimization scheme. The obtained numerical results clearly outperform the main alternative methods especially for images containing tricky geometrical structures.
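The multiplicative-to-additive conversion and the final mean-preserving exponential are easy to demonstrate; below is a heavily simplified 1-D sketch of mine, with FFT hard thresholding standing in for the curvelet shrinkage and the TV stage omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 512
t = np.linspace(0, 1, n)
clean = 1.5 + np.sin(2 * np.pi * 3 * t) ** 2          # positive "image"
speckle = rng.gamma(shape=10.0, scale=0.1, size=n)    # mean-1 multiplicative noise
noisy = clean * speckle

u = np.log(noisy)                       # log turns multiplicative noise additive
U = np.fft.rfft(u)
small = np.argsort(np.abs(U))[:-10]     # hard threshold: zero all but 10 largest
U[small] = 0.0
restored = np.exp(np.fft.irfft(U, n))   # back to the image domain
restored *= noisy.mean() / restored.mean()   # rescale so the mean is preserved

print(np.abs(noisy - clean).mean(), np.abs(restored - clean).mean())
```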
If some of the solvers need to be sped up, why not think of GPUs? Some news in that respect include:
How fast would a solution be if one were to implement the Approximate Message Passing algorithm of last week on a GPU?
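The AMP iteration itself is only a few lines, so a GPU port would mostly amount to swapping the array backend for a GPU one; here is a numpy sketch of soft-thresholding AMP with its Onsager correction term (my paraphrase of the iteration, with a simple ad hoc threshold rule):

```python
import numpy as np

rng = np.random.default_rng(6)
N, M, k = 1000, 300, 30
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

soft = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(N)
z = y.copy()
for _ in range(30):
    pseudo = x + A.T @ z                          # pseudo-data vector
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(M)    # threshold ~ residual level
    x = soft(pseudo, tau)
    # Onsager correction: (N/M) * mean of the threshold's derivative
    z = y - A @ x + (np.count_nonzero(x) / M) * z

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

On a GPU the matrix-vector products A @ x and A.T @ z dominate, which is exactly the kind of workload the dense linear algebra libraries mentioned above accelerate well.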
Finally, specific codes have gotten an update or have just appeared:
* NESTA: The current version of the NESTA code requires that the measurement matrix A be an orthogonal projection. It is sometimes possible to extend NESTA to allow general matrices A; a few special cases have been implemented in the code.
UPDATE: NESTA v.1.1 is released (188 kB zip file, Nov 24 2009). There are no algorithmic changes, but there are some bug fixes, more demos, and more support for complex numbers and the case when A is not a projector. You may view the important changes in the change log.
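The reason the projector requirement matters is that when A A* = I, the projection onto the feasible set {x : ||Ax - b||_2 <= epsilon} used at each Nesterov step has a closed form; a small numpy sketch of that projection (my illustration, not NESTA code):

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 256, 64
# build an A with orthonormal rows, so that A @ A.T = I
Q, _ = np.linalg.qr(rng.normal(size=(N, M)))
A = Q.T                                     # M x N, A @ A.T = identity

def project_feasible(x, b, eps):
    # closed-form projection onto {x : ||A x - b|| <= eps}, valid when A A^T = I
    r = A @ x - b
    nr = np.linalg.norm(r)
    if nr <= eps:
        return x
    return x - (1 - eps / nr) * (A.T @ r)

b = rng.normal(size=M)
x = rng.normal(size=N)
x_proj = project_feasible(x, b, eps=0.1)
print(np.linalg.norm(A @ x_proj - b))       # exactly eps (up to rounding)
```

For a general A this projection has no closed form, which is why extending NESTA beyond projectors takes case-by-case work.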
Nov 24 2009: the PROPACK precision error is caused by incompatibility with 64-bit architectures, in particular in bdsqr. This has been fixed, but the new mex files are not yet in the SVT package. You may download them separately here. Also included is bdsqr in C (before, this relied on some code in Fortran), so it is much easier to compile on Windows computers; additionally, there is a file XonOmegaTranspose that can be used in place of XonOmega in the unlikely event that XonOmega is a speed bottleneck. For details, email Stephen.

* PPApack version 0 -- a MATLAB software for nuclear norm minimization based on proximal point algorithms by Yongjin Liu, Defeng Sun and Kim-Chuan Toh.
The software was first released on 18 Nov 2009. It was last updated on 25 Nov 2009 with some minor bugs corrected. The software is designed to solve nuclear norm minimization problems of the form:

min_X { sum(svd(X)) : norm(A1(X) - b1) <= delta, A2(X) - b2 >= 0, A3(X) - b3 = 0 }
where delta is a non-negative scalar.
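To see what a proximal step for the nuclear norm looks like in practice, here is a toy matrix completion sketch built around singular value thresholding, i.e., proximal gradient with soft-thresholded singular values (my own illustration in the spirit of these packages, not their code):

```python
import numpy as np

rng = np.random.default_rng(8)
n, r = 50, 3
X_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r target
mask = rng.random((n, n)) < 0.5                             # observed entries

def svt(Y, tau):
    # proximal operator of tau * nuclear norm: soft-threshold singular values
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = np.zeros((n, n))
for _ in range(300):
    # gradient step on the data fit over observed entries, then the SVT prox
    G = np.where(mask, X - X_true, 0.0)
    X = svt(X - G, tau=0.5)

print(np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
# small relative error, slightly biased by the singular value shrinkage
```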