Learning Deep Learning Model to Attend Detailed Descriptions for Large-Scale Image Understanding – The main result of this paper is a generic method for automatic categorization that assigns each image to a specific class, i.e., each item has a unique description. The algorithm is based on minimizing the total number of labeled instances per item. To this end, a weighted random matrix over the entries of each label pair is generated, and one instance of each label pair is used to assign instances to groups, with the labeled instances defining the group. Such an efficient algorithm is not possible when the label pairs are unknown and the label set is small. Building on this construction, a novel statistical model for image categorization is introduced, which gives rise to the proposed algorithm. Experimental results on both synthetic and real datasets demonstrate its effectiveness.

Many machine learning algorithms assume that the parameters of the optimization process are orthogonal. This assumption does not hold for non-convex optimization problems. In this paper, we show that for high-dimensional problems it is possible to construct a non-convex optimization problem, provided one exists, such that the optimality of the solution is at least as high as its accuracy. In the limit of a finite number of constraints, this proof implies that the optimal solution is also at least as accurate in the limit. Empirical results on publicly available data from the MNIST dataset show that for the MNIST population model (approximately 75 million) and other non-convex optimization problems, our method yields almost-optimal results while solving only $O(\sqrt{T})$ non-convex subproblems.
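The $O(\sqrt{T})$ bound above is stated without detail; a common reading is the standard result that stochastic gradient descent on a smooth non-convex objective drives the best gradient norm down at rate $O(1/\sqrt{T})$. The following is a minimal sketch of that behavior, not the paper's method: the objective $f(w) = w^2 + 3\sin^2(w)$, the noise model, and the step-size schedule $\eta_t = 1/\sqrt{t}$ are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (NOT the paper's algorithm): SGD on the non-convex
# objective f(w) = w^2 + 3*sin(w)^2, whose true gradient is
# f'(w) = 2w + 3*sin(2w). With step size eta_t = 1/sqrt(t), the best
# gradient norm seen over T steps shrinks as T grows, consistent with
# the standard O(1/sqrt(T)) non-convex SGD rate.

def noisy_grad(w, rng):
    # Stochastic gradient: true gradient plus Gaussian noise.
    return 2 * w + 3 * np.sin(2 * w) + rng.normal(scale=0.1)

def run_sgd(T, w0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    w, best = w0, float("inf")
    for t in range(1, T + 1):
        best = min(best, abs(2 * w + 3 * np.sin(2 * w)))  # true grad norm
        w -= noisy_grad(w, rng) / np.sqrt(t)              # eta_t = 1/sqrt(t)
    return best

print(run_sgd(100))    # best gradient norm after 100 steps
print(run_sgd(10_000)) # smaller: longer runs reach flatter points
```

With a fixed seed the first 100 steps of both runs coincide, so the 10,000-step run's best gradient norm can only be at least as small, which is the qualitative content of the rate claim.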

Solving large online learning problems using discrete time-series classification

Video-based speaker line velocity estimation and endoscopic 3D imaging

# Learning Deep Learning Model to Attend Detailed Descriptions for Large-Scale Image Understanding

Predicting Nurse Knausha: A Large Scale Clinical Predictive Dataset

A Convex Proximal Gaussian Mixture Modeling on Big Subspace