Monday, August 17, 2015

Single-sensor multispeaker listening with acoustic metamaterials / Spectral-temporal compressive imaging

The idea of compressive sensing is to project a signal onto a small set of measurement vectors so that any signal that is sparse in some basis can be recovered from only a few measurements. From the very beginning, the field has focused on finding ways of changing the first layer of data acquisition devices. Here are two examples today from the DISP group of David Brady at Duke: one uses a single sensor and 3D printing to enhance acoustic recording, while the other continues earlier work on fitting a multispectral, high-speed video camera onto a single monochrome detector. Wow!
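
For readers who want to see the mechanics, here is a minimal sketch of that setup: a k-sparse signal of length n recovered from m < n random projections with a greedy solver (orthogonal matching pursuit). The dimensions, the Gaussian sensing matrix and the solver are illustrative choices of mine, not anything taken from the two papers below.

```python
# Minimal compressive-sensing sketch (illustrative, not from either paper):
# recover a k-sparse signal x from m < n random projections y = A x
# using orthogonal matching pursuit.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                       # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse ground truth

A = rng.standard_normal((m, n)) / np.sqrt(m)                  # random sensing matrix
y = A @ x                                                     # the few measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```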

Single-sensor multispeaker listening with acoustic metamaterials by Yangbo Xie, Tsung-Han Tsai, Adam Konneker, Bogdan-Ioan Popa, David J. Brady, and Steven A. Cummer
Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications.
I note from the methods:

The design process was aided with a commercial full-wave simulation package COMSOL Multiphysics. Three-dimensional simulations with Pressure Acoustics Module were conducted to extract the frequency responses of all of the waveguides.
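
Putting the abstract and that methods note together, here is my own toy picture of what the physical-layer encoding buys you: if each source reaches the single sensor through a waveguide with its own impulse response, and the sources are sparse in some basis, the summed recording can be unmixed by sparse recovery. The random "waveguide" filters, the DCT sparsity model and the ISTA solver below are illustrative stand-ins, not the authors' measured responses or reconstruction code.

```python
# Toy single-sensor separation sketch (my own model, not the paper's code):
# the microphone records y = sum_i h_i * x_i (circular convolution here for
# simplicity); with distinct filters h_i and DCT-sparse sources x_i, the
# mixture can be unmixed by sparse recovery (plain ISTA below).
import numpy as np
from scipy.fft import dct, idct, fft, ifft

rng = np.random.default_rng(1)
N, L, k = 3, 256, 8                        # sources, samples, nonzeros per source

# Hypothetical waveguide impulse responses (the paper extracts the real ones in COMSOL).
h = rng.standard_normal((N, L)) * np.exp(-np.arange(L) / 20.0)

# Sources that are sparse in the DCT domain.
coeffs = np.zeros((N, L))
for i in range(N):
    coeffs[i, rng.choice(L, k, replace=False)] = rng.standard_normal(k)
x = idct(coeffs, axis=1, norm="ortho")

# Single-sensor measurement: sum of the differently filtered sources.
y = sum(np.real(ifft(fft(h[i]) * fft(x[i]))) for i in range(N))

def forward(c):        # DCT coefficients -> single-channel recording
    return sum(np.real(ifft(fft(h[i]) * fft(idct(c[i], norm="ortho")))) for i in range(N))

def adjoint(r):        # transpose of the forward map
    return np.stack([dct(np.real(ifft(np.conj(fft(h[i])) * fft(r))), norm="ortho")
                     for i in range(N)])

# ISTA: gradient step on the data fit, soft threshold to enforce sparsity.
Lip = sum(np.max(np.abs(fft(h[i]))) ** 2 for i in range(N))   # crude Lipschitz bound
step, lam = 1.0 / Lip, 0.05
c_hat = np.zeros((N, L))
for _ in range(500):
    c_hat -= step * adjoint(forward(c_hat) - y)
    c_hat = np.sign(c_hat) * np.maximum(np.abs(c_hat) - step * lam, 0.0)

x_hat = idct(c_hat, axis=1, norm="ortho")   # estimated individual sources
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```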

Spectral-temporal compressive imaging by Tsung-Han Tsai, Patrick Llull, Xin Yuan, Lawrence Carin, and David Brady
We present a compressive camera that combines mechanical translation and spectral dispersion to compress a multi-spectral, high-speed scene onto a monochrome, video-rate detector. Single-frame reconstructions of 15 spectral channels and 10 temporal frames are reported.
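
Here is how I read that forward model, in a toy: a (time, wavelength, x, y) scene is modulated by a coded aperture that translates during the exposure, each spectral band is then shifted sideways by the disperser, and everything sums onto one monochrome frame. The circular shifts, the one-pixel-per-band dispersion and the random binary code are simplifying assumptions of mine; only the 15 x 10 channel/frame count is taken from the abstract.

```python
# Toy forward model for spectral-temporal compressive imaging (my reading of the
# abstract, not the authors' exact operator).
import numpy as np

rng = np.random.default_rng(2)
T, B, H, W = 10, 15, 64, 64                       # temporal frames, spectral bands, pixels
scene = rng.random((T, B, H, W))                  # toy space-time-spectrum datacube
code = (rng.random((H, W)) > 0.5).astype(float)   # binary coded aperture

def snapshot(scene, code):
    """Sum a (T, B, H, W) scene into one coded, dispersed monochrome frame."""
    T, B, H, W = scene.shape
    y = np.zeros((H, W))
    for t in range(T):
        moving_code = np.roll(code, t, axis=0)    # aperture translated during the exposure
        for b in range(B):
            coded = moving_code * scene[t, b]     # spatial modulation by the code
            y += np.roll(coded, b, axis=1)        # disperser shifts each band sideways
    return y

y = snapshot(scene, code)   # the single measured frame; recovering the 15 x 10
print(y.shape)              # datacube from it is the compressive reconstruction step
```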

Previously, we had:

Coded aperture compressive temporal imaging by Patrick Llull, Xuejun Liao, Xin Yuan, Jianbo Yang, David Kittle, Lawrence Carin, Guillermo Sapiro, and David J. Brady

We use mechanical translation of a coded aperture for code division multiple access compression of video. We present experimental results for reconstruction at 148 frames per coded snapshot.
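
For comparison, here is a toy version of that earlier, temporal-only model together with the standard normalized-adjoint initialization that iterative solvers refine from. Again, this is my own sketch, not the authors' code, and the toy T = 8 is nowhere near the 148 frames per snapshot reported in the paper.

```python
# Toy coded-snapshot video model (CACTI-style; my own illustration).
import numpy as np

rng = np.random.default_rng(3)
T, H, W = 8, 64, 64
frames = rng.random((T, H, W))                    # toy high-speed video
code = (rng.random((H, W)) > 0.5).astype(float)
codes = np.stack([np.roll(code, t, axis=0) for t in range(T)])   # code translated per frame

y = (codes * frames).sum(axis=0)                  # one coded snapshot

# Normalized adjoint: a crude per-frame estimate, typically the starting point
# for iterative reconstruction (GAP/TwIST-style), which is omitted here.
x0 = codes * y[None] / (np.sum(codes ** 2, axis=0, keepdims=True) + 1e-6)
print(x0.shape)                                   # (8, 64, 64)
```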
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
