Monday, October 27, 2008

CS: Alternating l1 Method Code, Expander codes, explicit Euclidean sections, and CS, a Workshop, Finding Significant Fourier Coefs, Fast Opt. Algos

Stephane Chretien just made available his code for the Alternating l1 Method for Compressed Sensing that we mentioned earlier. The page for the code is here. Thank you, Stephane!

On September 22, Venkatesan Guruswami gave a talk entitled: Expander codes, explicit Euclidean sections, and compressed sensing. The summary of the talk reads:

Classical results in high-dimensional geometry from the 1970s state that a random subspace of R^N of dimension N/2 has "distortion" O(1) w.h.p., where the distortion is measured by sqrt{N} times the largest ratio of the L_2 to L_1 norms of a vector in the subspace. (So O(1) distortion means each vector in the subspace has its mass spread nearly evenly over a constant fraction of the coordinates.) This result is a particular example of the use of the probabilistic method, a technique which is ubiquitous in asymptotic convex geometry. Similar randomized geometric constructions arise in areas like dimensionality reduction and compressed sensing, but it seems that such objects are very hard to come by explicitly.
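To make the distortion concrete, here is a small numerical sketch (mine, not from the talk): draw a random subspace as the column span of a Gaussian matrix, sample vectors from it, and record the largest sqrt(N) * ||x||_2 / ||x||_1 ratio seen. This gives a lower bound on the subspace's distortion; by Cauchy-Schwarz the ratio is always at least 1, and for a random subspace it stays O(1).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
# A random subspace of R^N of dimension N/2: the column span of a Gaussian matrix.
A = rng.standard_normal((N, N // 2))

# Sample random vectors from the subspace and track the largest
# sqrt(N) * ||x||_2 / ||x||_1 ratio observed (a lower bound on the distortion).
worst = 0.0
for _ in range(2000):
    x = A @ rng.standard_normal(N // 2)
    ratio = np.sqrt(N) * np.linalg.norm(x, 2) / np.linalg.norm(x, 1)
    worst = max(worst, ratio)
print(f"empirical distortion lower bound: {worst:.2f}")
```

In practice the printed value hovers around a small constant, consistent with the O(1) distortion the probabilistic argument guarantees; the point of the talk is achieving comparable distortion explicitly, without the randomness.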

We will describe constructions inspired by expander codes that can be used to produce subspaces with distortion nearly as good as a random subspace either explicitly or with sublinear randomness. Specifically, we construct an explicit subspace of R^N with dimension N(1-o(1)) and distortion (slightly worse than) poly(log N). The construction combines various ingredients such as Ramanujan graphs, sum-product theorems in finite fields, and Kerdock codes/mutually unbiased bases. We are also able to construct subspaces of R^N of proportional dimension with O(1) distortion using N^delta random bits for any constant delta > 0 (and hence deterministically in sub-exponential time). The talk will also briefly discuss connections to sparse signal recovery (i.e., compressed sensing).

[Based on joint works with James Lee, Alexander Razborov, and Avi Wigderson]

The Center for Computational Intractability will organize a Workshop on Geometry and Algorithms on October 29-31 at the Friend Center 06 at Princeton. The program reads:

Wednesday, Oct 29

9:00-10:10 Piotr Indyk, Survey on Compressed Sensing
10:10-10:50 break
10:50-11:25 Stephen Smale, Understanding Patterns in Data
11:25-12:00 Santosh Vempala, Isotropic Principal Components

12:00-2:00 lunch

2:00-2:35 James Lee, On the geometry of graphs and L_1 embeddings
2:35-3:10 Joel Tropp, Column subset selection, matrix factorization, and eigenvalue optimization
3:10-3:45 Assaf Naor, TBA
3:45-4:25 break
4:25-5:00 Yury Makarychev, Lower bounds for Sherali Adams via local-global metrics
5:00-5:35 Robi Krauthgamer, Metric Embeddings As Computational Primitives

Thursday, Oct 30

9:00-9:35 Guy Kindler, Can cubic tiles be sphere-like?
9:35-10:10 Leonard Schulman, Contraction and Expansion of Convex Sets
10:10-10:50 break
10:50-11:25 Venkat Guruswami, Explicit Euclidean sections from expander codes
11:25-12:00 Ben Recht, Exact Low-rank Matrix Completion via Convex Optimization

12:00-2:00 lunch

2:00-2:35 Partha Niyogi, Manifold Learning: A geometric perspective on learning theory and algorithms
2:35-3:10 Sanjoy Dasgupta, Open problems in learning low dimensional representations
3:10-3:45 Anna Gilbert, Applications of Compressed Sensing to Biology
3:45-4:25 break
4:25-5:35 Rump session: open problems, brief announcements, etc.

6:15 Banquet at Palmer House

Friday, Oct 31

9:00-9:35 Yuval Rabani, Explicit construction of a small epsilon-net for linear threshold functions
9:35-10:10 Hamed Hatami, Graph norms and Sidorenko’s conjecture
10:10-10:40 break
10:40-11:00 Navin Goyal, Learning Convex Bodies is Hard (short talk)
11:00-11:35 Gideon Schechtman, TBA
11:35-12:10 Ilan Newman, Online embedding of metrics

12:10-1:10 lunch

1:10-1:45 Prasad Raghavendra, TBA
1:45-2:20 Luis Rademacher, Expanders via Random Spanning Trees
2:20-2:55 Moritz Hardt, Rounding Parallel Repetitions of Unique Games
2:55-3:30 Alexandr Andoni, Hardness of Nearest Neighbor under L_infinity

Adi Akavia recently presented a talk at DIMACS entitled: Locally & Universally Finding Significant Fourier Coefficients. The summary of the talk reads:

Computing the Fourier transform is a basic building block used in numerous applications. For data intensive applications, even the O(N\log N) running time of the Fast Fourier Transform (FFT) algorithm may be too slow, and sub-linear running time is necessary. Clearly, outputting the entire Fourier transform in sub-linear time is infeasible; nevertheless, in many applications it suffices to find only the significant Fourier coefficients, that is, the Fourier coefficients whose magnitude is at least a given fraction (say, 1%) of the sum of squared Fourier coefficients.
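For reference, here is a naive dense baseline (my sketch, not the paper's algorithm) that illustrates what "significant" means: it computes the full FFT in O(N log N) time and keeps the indices whose coefficient carries at least a tau fraction of the total energy, whereas the point of the work above is to find those same indices in time sublinear in N.

```python
import numpy as np

def significant_coeffs(f, tau):
    # Dense O(N log N) baseline: return indices whose Fourier coefficient
    # carries at least a tau fraction of the total energy sum |f_hat|^2.
    # This only illustrates the definition; the algorithm in the talk finds
    # these indices in time sublinear in len(f).
    fhat = np.fft.fft(f) / len(f)
    energy = np.abs(fhat) ** 2
    return np.flatnonzero(energy >= tau * energy.sum())

# A signal dominated by two frequencies (5 and 40) and their mirror images.
N = 1024
t = np.arange(N)
f = 3 * np.cos(2 * np.pi * 5 * t / N) + np.sin(2 * np.pi * 40 * t / N)
print(sorted(significant_coeffs(f, tau=0.04)))  # frequencies 5 and 40 plus mirrors
```

For a real-valued signal each frequency shows up twice (at k and N-k), so the sketch reports both copies of the two planted frequencies.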

In this paper we present an algorithm that finds the significant Fourier coefficients of functions f over any finite abelian group G, which is:

1. Local. The running time and the number of function entries read by the algorithm are polynomial in \log|G|, 1/\tau and L_1(f) (where L_1(f) denotes the sum of absolute values of the Fourier coefficients of f).
2. Universal. The same fixed set of entries is read for all functions over the same domain and with the same upper bound on their sum of Fourier coefficients.
3. Robust to noise. The algorithm finds the significant Fourier coefficients of f even if the function entries it receives are corrupted by random noise.

Furthermore, we present extensions of this algorithm that handle adversarial noise in running time sub-linear in the domain size.

Our algorithm improves on the prior universal algorithms in: (1) handling functions over any finite abelian group, (2) being robust to noise, and (3) achieving better running time in terms of L_1(f).

We present applications of our algorithm to compressed sensing, to proving cryptographic bit security results, and to efficiently decoding polynomial rate variants of Homomorphism codes, MPC codes and other exponential rate codes.

Finally, Pierre Weiss defended his dissertation on October 21. The title of his dissertation is: "Algorithmes rapides d'optimisation convexe. Applications à la restauration d'images et à la détection de changements" or "Fast Convex Optimization Algorithms: Application to Image Restoration and Change Detection". The abstract of the dissertation reads:

This PhD thesis contains contributions in numerical analysis and in computer vision. The talk will be divided into two parts.

In the first part, we will focus on the fast solution, using first-order methods, of convex optimization problems. Such problems appear naturally in many image processing tasks like image reconstruction, compressed sensing, or texture+cartoon decomposition. They are generally nondifferentiable or ill-conditioned. We show that they can be solved very efficiently by exploiting fine properties of the functions to be minimized. We analyze their convergence rates in a systematic way using recent results due to Y. Nesterov. To our knowledge, the proposed methods represent the state of the art among first-order methods.
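As a minimal illustration of the kind of first-order scheme analyzed with Nesterov's results (a generic accelerated gradient method on a smooth least-squares toy problem, not the thesis's own algorithms):

```python
import numpy as np

# Nesterov's accelerated gradient method on min_x 0.5*||Ax - b||^2,
# a smooth convex toy problem standing in for the image-restoration
# objectives discussed above.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 30))
b = rng.standard_normal(60)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = y = np.zeros(30)
t = 1.0
for _ in range(500):
    grad = A.T @ (A @ y - b)
    x_new = y - grad / L               # gradient step from the extrapolated point
    t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
    y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
    x, t = x_new, t_new

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_star))      # distance to the least-squares solution
```

The momentum extrapolation is what improves the O(1/k) rate of plain gradient descent to O(1/k^2) in function value, the hallmark of Nesterov-type first-order methods.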

In the second part, we will focus on the problem of detecting changes between two remotely sensed images taken of the same location at two different times. One of the main difficulties in solving this problem is the difference in illumination conditions between the two shots. This leads us to study the illumination invariance of level lines. We completely characterize the 3D scenes that produce invariant level lines, and we show that they correspond quite well to urban scenes. We then propose a variational framework and a simple change detection algorithm that gives satisfactory results both on synthetic OpenGL scenes and on real Quickbird images.
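The invariance property at play can be checked on a toy example (my sketch, not the thesis's algorithm): a monotone increasing contrast change g, a crude model of a global illumination change, relabels gray levels but leaves the family of upper level sets {I >= lambda}, and hence their boundaries, the level lines, unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
I = rng.integers(0, 256, size=(32, 32))   # toy 8-bit image

def upper_level_sets(img, thresholds):
    # Each upper level set is the set of pixel coordinates at or above a threshold.
    return {frozenset(zip(*np.nonzero(img >= lam))) for lam in thresholds}

g = lambda v: np.sqrt(v.astype(float))    # monotone increasing contrast change

levels = np.unique(I)
same = upper_level_sets(I, levels) == upper_level_sets(g(I), g(levels))
print(same)  # True: the level-set family is contrast-invariant
```

This is why comparing level lines, rather than raw gray values, gives a change detector that is insensitive to global illumination differences between the two shots.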

Image Credit: NASA/JPL/Space Science Institute. Saturn's rings, taken 4 days ago.
