Scalable Kernel-Leibler Cosine Similarity Path – We present an optimization problem in machine learning whose goal is to understand the distribution of the observed data, so that the data can be searched efficiently and a better representation of it can be learned. Our main contribution is a two-stage approach to this problem. The first stage is a new algorithm designed to discover a good representation of the data; its output drives the inference step of the second stage. Beyond applying the new algorithm to this problem, we apply several of its variants to a wide range of problems. We test the algorithm on various models and demonstrate its effectiveness on several datasets.
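The abstract does not specify either stage concretely, so the following is only a minimal sketch under loose assumptions: stage one is stood in for by a truncated-SVD projection that learns a low-dimensional representation, and stage two by a cosine-similarity search over that representation (echoing the "cosine similarity" in the title). The function name `cosine_search` and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (hypothetical stand-in): learn a low-dimensional
# representation of the data via truncated SVD.
X = rng.normal(size=(100, 20))   # 100 samples, 20 features
X = X - X.mean(axis=0)           # center before SVD
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:5].T                 # 5-dimensional representation

# Stage 2 (hypothetical stand-in): search the learned space
# by cosine similarity, as the inference step.
def cosine_search(query, Z, k=3):
    q = query / np.linalg.norm(query)
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sims = Zn @ q
    return np.argsort(-sims)[:k]

neighbours = cosine_search(Z[0], Z)
print(neighbours[0])  # the query's nearest neighbour is itself -> 0
```

The point of the sketch is only the two-stage shape: the representation learned first is the space in which the second-stage search runs.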
We present a novel approach to joint feature extraction and segmentation that leverages learned models to produce high-quality, state-of-the-art multi-view representations for multiple tasks. Our approach, a multi-view network (MI-N2i), extracts multiple views (i.e., per-view feature maps) and segments them using a fusion step built on a shared framework. Specifically, we develop a new joint framework that exploits both a shared backbone and a shared classifier, so that MI-N2i learns a single shared model for joint feature extraction and segmentation. We evaluate MI-N2i on the UCB Text2Image dataset and show that our approach outperforms state-of-the-art approaches in recognition accuracy, image quality, and segmentation quality.
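The abstract gives no architecture details for MI-N2i, so the following is a minimal sketch under stated assumptions of the general pattern it describes: two views are projected by separate linear maps into a common space (the "shared framework"), fused by averaging, and passed through one shared classifier head. All weight names, dimensions, and the averaging fusion are hypothetical choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical "views" of the same 10 samples
# (e.g. different feature maps of the same input).
view_a = rng.normal(size=(10, 8))
view_b = rng.normal(size=(10, 6))

# Shared framework (hypothetical): project both views into a
# common 4-dimensional space, then fuse by averaging.
W_a = rng.normal(size=(8, 4))
W_b = rng.normal(size=(6, 4))
fused = 0.5 * (view_a @ W_a + view_b @ W_b)

# Shared classifier (hypothetical): a single linear head on the
# fused representation serves both downstream tasks.
W_cls = rng.normal(size=(4, 3))
logits = fused @ W_cls
print(logits.shape)  # (10, 3)
```

In a trained system the two projections and the shared head would be learned jointly, which is what "jointly exploit a shared framework and a shared classifier" suggests.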
Learning the Neural Architecture of Speech Recognition
Bayesian Nonparametric Models in Bayesian Networks
Learning and reasoning about spatiotemporal relations and hyperspectral data
Deep Multi-view Feature Learning for Text Recognition