Zhangyang/Atlas just sent me the following:
Dear Igor,

I hope this email finds you well. Having been a fan of your blog since 2012, I would like to first say thanks for introducing our deep l0 encoder work on your blog post. The paper has been accepted by AAAI'16, in Phoenix, AZ, Feb 2016. I look forward to presenting and discussing the paper in depth with peers soon!

I would further like to bring to your attention two more pieces of our recent work, which could be viewed as successors of the deep l0 encoder:

1. "Learning A Task-Specific Deep Architecture For Clustering", accepted by SDM'16, manuscript available at: http://arxiv.org/abs/1509.00151
2. "D3: Deep Dual-Domain Based Fast Restoration of JPEG-Compressed Images", under review, manuscript available at: http://arxiv.org/abs/1601.04149

The three papers tackle different applications. However, they share the same underlying theme: to reveal how the analytic tools of "shallow" models (mostly sparse coding) can be translated to guide the architecture design and improve the performance of deep models. More about the "big picture" can be found in a research statement on my website.

Thanks, and best regards,
Thanks, Atlas! Here are the preprints:
Learning A Task-Specific Deep Architecture For Clustering
Zhangyang Wang, Shiyu Chang, Jiayu Zhou, Meng Wang, Thomas S. Huang
While sparse coding-based clustering methods have been shown to be successful, their bottlenecks in both efficiency and scalability limit the practical usage. In recent years, deep learning has been proved to be a highly effective, efficient and scalable feature learning tool. In this paper, we propose to emulate the sparse coding-based clustering pipeline in the context of deep learning, leading to a carefully crafted deep model benefiting from both. A feed-forward network structure, named TAGnet, is constructed based on a graph-regularized sparse coding algorithm. It is then trained with task-specific loss functions from end to end. We discover that connecting deep learning to sparse coding benefits not only the model performance, but also its initialization and interpretation. Moreover, by introducing auxiliary clustering tasks to the intermediate feature hierarchy, we formulate DTAGnet and obtain a further performance boost. Extensive experiments demonstrate that the proposed model gains remarkable margins over several state-of-the-art methods.
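The core idea behind a network like TAGnet is unfolding: each iteration of a sparse coding solver becomes one feed-forward layer, so a truncated solve turns into a fixed-depth encoder whose weights can then be trained end to end. Here is a minimal NumPy sketch of that unfolding in the LISTA spirit; the matrices `W` and `S`, the threshold `theta`, and the layer count are illustrative placeholders (the paper's actual TAGnet additionally builds in a graph-regularization term, which this sketch omits):

```python
import numpy as np

def soft_threshold(x, theta):
    # Element-wise soft-thresholding: the proximal operator of the l1 norm,
    # used as the "activation function" of each unfolded layer.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_sparse_encoder(x, W, S, theta, n_layers=3):
    """Feed-forward approximation of iterative sparse coding.

    Each layer computes z <- h_theta(W x + S z), i.e. one truncated
    solver iteration. In a trained network, W, S and theta would be
    learned from data rather than derived from a fixed dictionary.
    """
    z = soft_threshold(W @ x, theta)          # first layer: no feedback yet
    for _ in range(n_layers - 1):
        z = soft_threshold(W @ x + S @ z, theta)
    return z
```

Because every layer reuses the same update form, a few layers of this network stand in for many solver iterations at inference time, which is the efficiency argument the abstract makes.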
D3: Deep Dual-Domain Based Fast Restoration of JPEG-Compressed Images
Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Thomas S. Huang
In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images. It leverages the large learning capacity of deep networks, as well as the problem-specific expertise that was hardly incorporated in the past design of deep architectures. For the latter, we take into consideration both the prior knowledge of the JPEG compression scheme, and the successful practice of the sparsity-based dual-domain approach. We further design the One-Step Sparse Inference (1-SI) module, as an efficient and lightweight feed-forward approximation of sparse coding. Extensive experiments verify the superiority of the proposed D3 model over several state-of-the-art methods. Specifically, our best model is capable of outperforming the latest deep model by around 1 dB in PSNR, and is 30 times faster.
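The 1-SI module takes the unfolding idea to its extreme: a single linear transform followed by soft-thresholding replaces the whole iterative sparse coding solve, which is why it is cheap enough for fast restoration. A minimal sketch of that one-step approximation (the dictionary-like matrix `W` and threshold `theta` are illustrative assumptions, not the paper's trained parameters):

```python
import numpy as np

def one_step_sparse_inference(x, W, theta):
    # One-step feed-forward stand-in for sparse coding: project the input
    # through W, then sparsify by soft-thresholding. No iterations at all,
    # so the cost is a single matrix-vector product per input.
    u = W @ x
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)
```

In a trained D3-style network, `W` and `theta` would be learned jointly with the rest of the model, trading the accuracy of a full iterative solve for a large speedup at inference time.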
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.