You have probably heard about the recent news on 3D ICs at Intel. Here are some studies that seem to have a direct impact on that type of work. Before you read them, just know that what is being done works because f is assumed to be a smooth function of its variables; hence, feeding a random sampling of the parameters into the lumped parameter code really amounts to the same thing as what was shown in How to Wow your friends, i.e., projecting that smooth function onto randomly located Diracs. As practicing engineers know, most precomputed tables are generally smooth, so this type of method is bound to have a great future, provided its users understand its limitations.
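To make that remark concrete, here is a minimal sketch of the idea (the sizes, the signal, and the solver are illustrative choices of mine, not anything from the papers below): a smooth function sampled at a few randomly located points is recovered through its sparse representation in a smooth basis, here the DCT, with a tiny orthogonal matching pursuit.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m, k = 256, 40, 5                  # grid size, random samples, sparsity budget

# Columns of Psi are DCT atoms; a smooth "precomputed table" is a short
# combination of the low-frequency ones.
Psi = idct(np.eye(n), axis=0, norm='ortho')
x_true = np.zeros(n)
x_true[[2, 5, 9]] = [1.0, -0.6, 0.3]
f = Psi @ x_true

# Sampling f at random locations = projecting it onto randomly located Diracs.
idx = rng.choice(n, size=m, replace=False)
y, A = f[idx], Psi[idx, :]

# Tiny orthogonal matching pursuit: greedily pick the best-matching atoms.
residual, support = y.copy(), []
for _ in range(k):
    if np.linalg.norm(residual) < 1e-10:
        break
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x = np.zeros(n)
x[support] = coef
print("max reconstruction error:", np.max(np.abs(Psi @ x - f)))
```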
Estimation of Crosstalk among Multiple Stripline Traces Crossing a Split by Compressed Sensing by Tao Wang, Yiyu Shi, Songping Wu, and Jun Fan. The abstract reads:
In printed circuit board (PCB) designs, it is common to split power/ground planes into different partitions, which leads to more crosstalk among signal traces routed across a split. It is of general interest to develop a crosstalk model for various geometric parameters. However, the long time required to simulate the structure with any given set of geometric parameters renders general modelling approaches such as interpolation inefficient. In this paper, we develop an empirical model based upon the compressed sensing technique to characterize the crosstalk among traces as a function of geometric parameters. A good agreement between the empirical model and full-wave simulations is observed for various test examples, with an exceptionally small number of samples.
Compressed Sensing Based Analytical Modeling for Through-Silicon-Via Pairs by Tao Wang, Jingook Kim, Jun Fan, Yiyu Shi. The abstract reads:
Through-Silicon-Vias (TSVs) are the critical enabling technology for three-dimensional integrated circuits (3D ICs). While there are a few existing works in the literature that model the electrical performance of TSVs, they are either for fixed geometries or lacking in accuracy. In this paper, we use the compressed sensing technique to model the electrical performance of TSV pairs. Experimental results indicate that with an exceptionally small number of samples, our model has a maximum relative error of 3.94% compared with full-wave simulations over a wide range of geometric parameters and frequencies.
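Both abstracts follow the same recipe: the simulated response is a smooth, hence compressible, function of the geometric parameters, so one runs the expensive solver at a small number of random points in parameter space and fits a sparse expansion that can then be evaluated anywhere for free. Here is a hedged toy version of that recipe; the stand-in response, the Chebyshev dictionary, and the ISTA solver are my own choices, not what the authors used.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander2d

rng = np.random.default_rng(1)
m, deg, lam = 30, (6, 6), 1e-3        # affordable simulations, degrees, l1 weight

def full_wave_sim(w, s):
    """Stand-in for a slow field solver: smooth in both geometric parameters."""
    return np.exp(-2 * s) * np.cos(3 * w) + 0.1 * w * s

# Run the expensive code at a few random (normalized) geometries.
w, s = rng.uniform(-1, 1, m), rng.uniform(-1, 1, m)
y = full_wave_sim(w, s)
A = chebvander2d(w, s, deg)           # m x 49 polynomial dictionary

# Sparsity-promoting fit via iterative soft thresholding (ISTA).
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(A.shape[1])
for _ in range(2000):
    x = x + A.T @ (y - A @ x) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

# The "empirical model": evaluate the sparse expansion at unseen geometries.
wt, st = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
err = chebvander2d(wt, st, deg) @ x - full_wave_sim(wt, st)
print("max absolute error on unseen geometries:", np.max(np.abs(err)))
```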
There is a short course on Algorithmic Group Testing and Applications (09/05/2011 -- 27/05/2011) by Ngô Quang Hưng at SUNY Buffalo. From the course description:
This is a short course on algorithmic combinatorial group testing and applications. The basic setting of the group testing problem is to identify a subset of "positive" items from a huge item population using as few "tests" as possible. The meaning of "positive", "tests" and "items" depends on the application. For example, going back to World War II, when the area of group testing started, "items" are blood samples, "positive" means syphilis-positive, and a "test" contains a pool of blood samples which yields a positive outcome if at least one sample in the pool is positive for syphilis. This basic problem paradigm has found numerous applications in biology, cryptography, networking, signal processing, coding theory, statistical learning theory, data streaming, etc. This short course aims to introduce group testing from a computational viewpoint, where not only the constructions of group testing strategies are of interest, but also the computational efficiency of both the construction and the decoding procedures is studied. We will also briefly introduce the probabilistic method, algorithmic coding theory, and several direct applications of group testing.
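For readers new to the topic, here is a tiny non-adaptive example using the classical COMP decoding rule (a standard textbook rule; I am not claiming this is how the course presents it): pool items at random and clear every item that shows up in at least one negative test. COMP never misses a true positive, and false positives become rare once the number of tests grows on the order of d log n.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, T = 1000, 5, 120                # items, defectives, tests

defective = np.zeros(n, dtype=bool)
defective[rng.choice(n, d, replace=False)] = True

# Random pooling design: each item joins each test with probability ~ 1/d.
pools = rng.random((T, n)) < 1.0 / d
outcome = (pools & defective).any(axis=1)   # positive iff the pool holds a defective

# COMP decoding: an item appearing in any negative pool cannot be defective.
cleared = (pools & ~outcome[:, None]).any(axis=0)
declared = ~cleared
print("declared:", np.flatnonzero(declared))
print("true:    ", np.flatnonzero(defective))
```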
While looking for group testing on Google, I found the following abstract for Engineering Competitive and Query-Optimal Minimal-Adaptive Randomized Group Testing Strategies by Muhammad Azam Sheikh; it seems to provide some insight as to why an adaptive strategy might be good for not-so-sparse sets of defects. From the abstract:
"...Another main result is related to the design of query-optimal and minimal-adaptive strategies. We have shown that a 2-stage randomized strategy with prescribed success probability can asymptotically achieve the information-theoretic lower bound for d much less than n and growing much slower than n. Similarly, we can approach the entropy lower bound in 4 stages when d = o(n)..."Finally, here is a paper with only an abstract: A compressive sensing perspective on simultaneous marine acquisition by Hassan Mansour, Haneet Wason, Tim Lin and Felix Herrmann. The abstract reads:
The high cost of acquiring seismic data in marine environments compels the adoption of simultaneous-source acquisition - an emerging technology that is stimulating both geophysical research and commercial efforts. In this paper, we discuss the properties of randomized simultaneous acquisition matrices and demonstrate that sparsity-promoting recovery improves the quality of the reconstructed seismic data volumes. Simultaneous marine acquisition calls for the development of a new set of design principles and post-processing tools. Leveraging established findings from the field of compressed sensing, the recovery from simultaneous sources depends on a sparsifying transform that compresses seismic data, is fast, and is reasonably incoherent with the compressive sampling matrix. To achieve this incoherence, we use random time dithering where sequential acquisition with a single airgun is replaced by continuous acquisition with multiple airguns firing at random times and at random locations. We demonstrate our results with simulations of simultaneous marine acquisition using periodic and randomized time dithering.
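The distinctive ingredient here is the acquisition itself, so here is a toy forward model of the random time dithering (my own construction, not the authors' code): each airgun's sparse impulse response is delayed by a random firing time and summed into one continuous record, after which deblending becomes a sparse recovery problem (the paper promotes sparsity in a transform domain; the spike domain used below is just the simplest stand-in).

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_src, spikes = 600, 3, 4          # record length, airguns, events per gun

# Unknown sparse earth responses, one per airgun position.
R = np.zeros((n_src, n))
for r in R:
    r[rng.choice(n // 2, spikes, replace=False)] = rng.standard_normal(spikes)

# Random time dithering: each gun fires at a random instant, and the receiver
# sees one continuous blended record instead of separate shot gathers.
fire = rng.integers(0, n // 2, n_src)
record = np.zeros(n)
for t0, r in zip(fire, R):
    record[t0:] += r[: n - t0]

# Stacked linear operator: record = A @ vec(R), so deblending amounts to
# sparsity-promoting recovery, e.g. with the OMP sketch at the top of this post.
A = np.hstack([np.vstack([np.zeros((t0, n)), np.eye(n)[: n - t0]]) for t0 in fire])
assert np.allclose(A @ R.ravel(), record)
```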
I also found the following poster session at the ISMRM conference:
Compressed Sensing & Receive Arrays at ISMRM