DeepDance: Video Pose Prediction with Visual Feedback – The paper presents a joint learning model for supervised and unsupervised pose estimation. This involves learning a representation of video sequences that is invariant to local motion but sensitive to human-like motion. The two tasks are related: the first extracts a video representation that is invariant to motion variation, while the second encourages video frames to be encoded consistently. In one part of the joint learning algorithm, a convolutional neural network (CNN) extracts features that are invariant to motion variation; its convolutional weights are learned to enforce this invariance. The CNN is trained on a set of image sequences. The results show that the joint learning model makes efficient use of the CNN, and can be applied in both supervised and unsupervised settings.
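The abstract's two-part objective can be illustrated with a minimal sketch: a single convolutional layer extracts per-frame features, and a consistency loss encourages adjacent frames of the same sequence to be encoded in the same way (the unsupervised part of the objective). The single-layer architecture and all names here (`conv_features`, `consistency_loss`) are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def conv_features(frame, kernels):
    """Extract feature maps from one frame with a valid 2-D convolution.

    frame:   (H, W) grayscale image
    kernels: (K, kh, kw) stack of learned filters (hypothetical)
    """
    K, kh, kw = kernels.shape
    H, W = frame.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(frame[i:i+kh, j:j+kw] * kernels[k])
    return out

def consistency_loss(video, kernels):
    """Mean squared distance between features of adjacent frames.

    Minimizing this encourages all frames of one sequence to be
    encoded consistently; identical frames give zero loss.
    """
    feats = [conv_features(f, kernels) for f in video]
    diffs = [np.mean((a - b) ** 2) for a, b in zip(feats, feats[1:])]
    return float(np.mean(diffs))

rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 3, 3))
video = [rng.standard_normal((16, 16)) for _ in range(5)]
print(consistency_loss(video, kernels) >= 0.0)  # non-negative scalar loss
```

In a real system the kernels would be optimized (e.g. by gradient descent) jointly with a supervised pose loss; here they are random, which is enough to show the shape of the objective.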
Deep Learning Basis Expansions for Unsupervised Domain Adaptation
Super-Dense: Robust Deep Convolutional Neural Network Embedding via Self-Adaptive Regularization
An analysis of human activity from short videos
Exploiting Entity Understanding in Deep Learning and Recurrent Networks – In this paper, we study the problem of understanding and representing entity relations, i.e., relation-modifying tasks, in the context of learning a probabilistic classifier. We construct a model-based and a data-driven representation of relations, learned from a set of annotated images and annotated data. We further extend the representations of relation-modifying tasks to entity relations in deep learning. We generalize the model to a related problem, namely learning information about relations in a real-world domain. Experiments on five datasets, including Cocopy, Reddit, and The Great Trainwreck, demonstrate that our system outperforms state-of-the-art methods.
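One way to read the "probabilistic classifier" over entity relations is as a multinomial logistic model on features of an entity pair. The sketch below is a minimal assumed formulation with synthetic embeddings; the pair-concatenation feature construction and all parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def relation_probs(head_vec, tail_vec, W, b):
    """P(relation | head, tail) under a multinomial logistic classifier.

    head_vec, tail_vec: entity embeddings of shape (d,)
    W: (R, 2d) weights over the concatenated pair; R relation types
    b: (R,) bias
    All parameters here are hypothetical placeholders.
    """
    pair = np.concatenate([head_vec, tail_vec])
    return softmax(W @ pair + b)

rng = np.random.default_rng(1)
d, R = 4, 3
W = rng.standard_normal((R, 2 * d))
b = np.zeros(R)
head, tail = rng.standard_normal(d), rng.standard_normal(d)
p = relation_probs(head, tail, W, b)
print(p.shape, round(float(p.sum()), 6))  # (3,) 1.0
```

Training such a classifier would fit `W` and `b` by maximizing the log-likelihood of annotated (head, relation, tail) triples; the abstract's "model-based" and "data-driven" representations would then replace the random embeddings used here.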