Learning Dictionaries with Bounded Coherence
Over-complete dictionaries can achieve a lower approximation error than orthogonal dictionaries in sparse coding applications. On the one hand, increasing the number of atoms leads to sparser solutions; on the other hand, more atoms typically mean higher self-coherence, while a dictionary with low self-coherence has several advantages for sparse recovery. We propose a dictionary learning algorithm that can achieve any trade-off between these two objectives.
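The self-coherence objective can be made concrete with the standard mutual-coherence measure: the largest absolute inner product between any two distinct, unit-norm atoms. A minimal NumPy sketch (the function name is illustrative, not taken from the paper):

```python
import numpy as np

def mutual_coherence(D):
    """Maximum absolute correlation between distinct, unit-norm atoms."""
    # Normalize each column (atom) to unit l2 norm.
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)      # Gram matrix of absolute correlations
    np.fill_diagonal(G, 0.0)   # ignore each atom's correlation with itself
    return G.max()

# An orthogonal dictionary has coherence 0; an over-complete one cannot.
print(mutual_coherence(np.eye(4)))  # 0.0
```

A bounded-coherence learner would keep this quantity below a chosen threshold while fitting the atoms to the data.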
Speech Enhancement with Sparse Coding in Learned Dictionaries
Speech enhancement is difficult when the target and interferer sources are partially coherent, as is the case for speech in babble noise. We propose a sparse coding algorithm (called LARC) for training dictionaries and enhancing speech in the presence of challenging non-stationary interferers.
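As a rough illustration of coherence-aware sparse coding (a simplified greedy pursuit, not the LARC algorithm itself), the residual's maximum coherence with the dictionary can serve as a data-adaptive stopping criterion: once the residual looks like noise with respect to all atoms, the pursuit stops. All names and thresholds below are illustrative:

```python
import numpy as np

def sparse_code(D, x, max_coherence=0.1, max_atoms=20):
    """Greedy pursuit that stops once the residual's maximum coherence
    with any atom falls below a threshold (illustrative sketch only)."""
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    r = np.asarray(x, dtype=float).copy()             # residual
    coef = np.zeros(D.shape[1])
    for _ in range(max_atoms):
        c = D.T @ r                                   # correlations with residual
        rn = np.linalg.norm(r)
        if rn == 0 or np.abs(c).max() / rn < max_coherence:
            break                                     # residual resembles noise
        k = np.argmax(np.abs(c))                      # best-matching atom
        coef[k] += c[k]
        r -= c[k] * D[:, k]
    return coef, r
```

For enhancement, the reconstruction `D @ coef` would then keep the structured (speech-like) part of a noisy frame while the residual absorbs the interferer.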
EM for Sparse and Non-Negative PCA
Classical principal component analysis produces dense projection vectors with entries of mixed signs. For some applications, sparse and/or non-negative solutions are more appropriate. We propose an algorithm that is efficient for large, high-dimensional datasets and can handle the case where the number of features exceeds the number of observations.
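The flavor of the approach can be sketched as an EM-style power iteration that projects onto the constraint set after each update (this is a toy sketch of the general idea, not the exact published update; all parameter names are invented). Because it only uses matrix-vector products with the data, it works whether observations or features dominate:

```python
import numpy as np

def em_sparse_nn_pca(X, n_nonzero=5, nonneg=True, n_iter=100, seed=0):
    """First principal axis via EM-style power iteration with a
    hard-sparsity and optional non-negativity projection (sketch)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                  # center the features
    w = rng.standard_normal(X.shape[1])
    for _ in range(n_iter):
        s = X @ w                           # E step: component scores
        w = X.T @ s                         # M step: loading update
        if nonneg:
            w = np.clip(w, 0.0, None)       # enforce non-negativity
        idx = np.argsort(np.abs(w))[:-n_nonzero]
        w[idx] = 0.0                        # keep the largest loadings only
        nrm = np.linalg.norm(w)
        if nrm == 0:                        # restart if the projection collapsed
            w = np.abs(rng.standard_normal(X.shape[1]))
            continue
        w /= nrm
    return w
```

Each iteration costs only two matrix-vector products, which is what makes the approach attractive when both dimensions are large.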
Since the publication of the ICML paper, the method has been substantially expanded, and a mature implementation is available as an R package. I have also written up an example from the domain of portfolio optimization and compared different methods on a gene expression data set.
Non-Negative CCA for Audio-Visual Source Separation
We apply canonical correlation analysis to the task of audio-visual source separation. By enforcing the proper constraints on the audio and video projection vectors, we are able to identify sources in video and acoustically separate them with the help of a microphone array.
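One way to impose such constraints is to compute the first canonical pair by alternating ridge regressions with a non-negativity projection after each half-step (a toy sketch with non-negativity only; function and parameter names are invented, and the packaged algorithm differs in its details):

```python
import numpy as np

def nn_cca(X, Y, n_iter=200, ridge=1e-6, seed=0):
    """First canonical pair with non-negative weights, via alternating
    ridge regressions plus a clipping projection (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    b = np.abs(rng.standard_normal(Y.shape[1]))
    Cxx = X.T @ X + ridge * np.eye(X.shape[1])
    Cyy = Y.T @ Y + ridge * np.eye(Y.shape[1])
    for _ in range(n_iter):
        a = np.linalg.solve(Cxx, X.T @ (Y @ b))  # regress Y-scores on X
        a = np.clip(a, 0.0, None)                # non-negativity projection
        a /= np.linalg.norm(a) + 1e-12
        b = np.linalg.solve(Cyy, Y.T @ (X @ a))  # regress X-scores on Y
        b = np.clip(b, 0.0, None)
        b /= np.linalg.norm(b) + 1e-12
    return a, b
```

In the audio-visual setting, non-negative video weights keep the selected pixels interpretable as a source region, which is what allows the matched audio projection to steer the microphone array.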
The algorithm presented in the MLSP paper has also been substantially expanded into a general-purpose R package for sparse and non-negative canonical correlation analysis. A demonstration is given in this blog post.
Any comments, reviews, critiques or objections are invited.