Stochastic Dual Coordinate Ascent via Convex Expansion Constraint – We propose a new approach to stochastic dual coordinate ascent (DirCd). We show that there exists a principled upper bound on the convergence rate of the DirCd algorithm, and that the rate admits nonzero upper and lower bounds on the mean field of the algorithm. Given the nonzero lower bound, together with the upper bound on the mean field, the algorithm is guaranteed to converge to the target point and to the optimal solution from any point in the solution space with high probability. We evaluate the proposed DirCd algorithm experimentally on the MNIST dataset and show that it achieves similar or better performance than other gradient-based algorithms on this dataset.
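The abstract does not spell out the DirCd updates, but the general pattern of stochastic dual coordinate ascent can be sketched for a standard problem. Below is a minimal, illustrative SDCA loop for L2-regularised least squares; the function name, the squared-loss choice, and all parameters are assumptions for illustration, not the paper's DirCd variant.

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=50, seed=0):
    """Illustrative SDCA sketch for min_w (1/n) sum_i 0.5*(x_i.w - y_i)^2 + (lam/2)*||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)            # one dual variable per example
    w = np.zeros(d)                # primal weights, w = X^T alpha / (lam * n)
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # closed-form coordinate update for the squared loss
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]
    return w
```

Each pass picks dual coordinates in random order and maximises the dual objective in one coordinate at a time while keeping the primal iterate `w` synchronised, which is what makes per-update cost independent of `n` in the dimension of the dual.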

In this work, we propose a new method for learning a probability distribution from the joint distribution of the data points. We introduce a Bayesian model-learning method that learns and exploits the conditional independence of latent variables, obtained from the conditional probability distributions of each latent variable in the joint distribution. The Bayesian model learns posterior distributions over the data points by exploiting the joint distribution matrix of the latent variables together with the conditional independence matrix of the conditional distribution; the joint distribution matrix can then be used for conditional inference. Experiments on two real data sets show the advantages of the proposed method on both benchmark machine-learning tasks and real-world problems.
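The abstract leaves the inference step abstract, but the core operation it relies on, deriving a conditional distribution from a joint distribution matrix, can be shown on a toy discrete example. The variable names and the 2x2 joint table below are purely illustrative assumptions, not the paper's model.

```python
import numpy as np

def conditional_from_joint(joint):
    """Given joint[i, j] = P(x = i, z = j), return the conditional P(z | x)."""
    marginal_x = joint.sum(axis=1, keepdims=True)   # P(x), marginalising out z
    return joint / marginal_x                        # P(z | x) = P(x, z) / P(x)

# Toy joint distribution over an observed x (rows) and a latent z (columns).
joint = np.array([[0.10, 0.30],
                  [0.40, 0.20]])
cond = conditional_from_joint(joint)   # each row of cond sums to 1
```

Once the joint distribution matrix is available, any conditional query reduces to this normalise-a-slice operation, which is the sense in which the joint matrix "can be used for conditional inference."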

The Fast Coreset for Regression and Classification


A Novel Approach for Detection of the Medulla in MRI, Mammogram, and CT Images

Learning to Learn Sequences via Nonlocal Incremental Learning