-
Learning to Map Temporal Paths for Future Part-of-Spatial Planner Recommendations
Learning to Map Temporal Paths for Future Part-of-Spatial Planner Recommendations – In this paper, a novel clustering method is presented. The main idea is that, by using information from an unseen space, a local clustering method is constructed. The method based on this approach consists […]
-
Exploring the temporal structure of complex, transient and long-term temporal structure in complex networks
Exploring the temporal structure of complex, transient and long-term temporal structure in complex networks – The structure of networks of neurons has been studied extensively since the early 1990s. Many researchers have developed deep learning methods to learn the structure of neurons within networks. A number of models have been developed that use […]
-
High-accuracy sparse factor models for multi-label and multi-domain clustering
High-accuracy sparse factor models for multi-label and multi-domain clustering – We present a new deep learning-based method to automatically categorize a dataset of labeled texts into subsets of similar texts. A classification algorithm first constructs the text from a subset of similar texts, and outputs a set of predictions that are then used to […]
-
An Instance Segmentation based Hybrid Model for Object Recognition
An Instance Segmentation based Hybrid Model for Object Recognition – This paper presents an initial survey of recent data collected in the context of face recognition. This is currently an active research topic for researchers and practitioners in various fields. We propose the application of face recognition to the […]
-
Modelling the Modal Rate of Interest for a Large Discrete Random Variable
Modelling the Modal Rate of Interest for a Large Discrete Random Variable – Most of the existing literature is dominated by theoretical work in which many assumptions must be made about the unknown distribution of the sample, leading to significant uncertainty. In this work we propose a novel […]
-
The Largest Linear Sequence Regression Model for Sequential Data
The Largest Linear Sequence Regression Model for Sequential Data – While linear regression has been widely used across a range of natural language processing applications, its statistical performance is not generally well studied. In this paper, we develop a simple yet effective graphical system for linear regression that is more […]
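The teaser above concerns linear regression; as a minimal illustrative sketch (not the paper's graphical system), ordinary least squares can be fit in closed form via the normal equations. The data here is synthetic and the coefficients are assumptions chosen for illustration:

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus small noise (illustrative values, not from the paper)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.01 * rng.standard_normal(200)

# Ordinary least squares: augment X with a bias column and solve min_w ||Aw - y||^2
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, _, _, _ = np.linalg.lstsq(A, y, rcond=None)

slope, intercept = w  # should recover roughly 2 and 1
```

`np.linalg.lstsq` solves the least-squares system directly; for sequential data one would typically add lagged features as extra columns of `A`.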
-
Constraint-Based, Minimum Description Length Computation and Total Sampling for Efficient Constraint Problems
Constraint-Based, Minimum Description Length Computation and Total Sampling for Efficient Constraint Problems – Non-parametric sparse coding (NSCC) is a sparse coding algorithm that has been extensively studied in the literature. Although NSCC works well for many real-world problems, its high computational complexity makes it difficult to learn the code […]
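The entry above refers to sparse coding; as an illustrative sketch of the standard l1-regularised sparse coding objective (not the NSCC algorithm the teaser names), iterative shrinkage-thresholding (ISTA) alternates a gradient step with soft-thresholding. Dictionary, signal, and regularisation weight below are assumed toy values:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, steps=200):
    # ISTA for min_z 0.5*||Dz - y||^2 + lam*||z||_1
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = soft_threshold(z - (D.T @ (D @ z - y)) / L, lam / L)
    return z

# Toy example: with an identity dictionary, ISTA reduces to soft-thresholding y
z = ista(np.eye(3), np.array([1.0, 0.05, -0.5]))
```

With `D = I` the fixed point is exactly `soft_threshold(y, lam)`, so small entries of `y` are zeroed out, which is the sparsity the objective encourages.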
-
Learning Deep Generative Models with Log-Like Motion Features
Learning Deep Generative Models with Log-Like Motion Features – This work investigates the problem of learning deep generative models with log-like motion features for a recognition task. We consider the problem of learning generative representations that take as input the motion feature vectors of a dataset, a video, and a text. In the video representation […]
-
Learning to Predict Likelihood of Natural Scene Matching
Learning to Predict Likelihood of Natural Scene Matching – The recently proposed Multi-Layer Autoregressive (MOA) model shows promising results on a number of visual tasks, from video recognition to video processing and motion segmentation. However, due to the large amount of labeled data required and the computational load of MOA, several […]
-
Inverted Reservoir Computing
Inverted Reservoir Computing – We present a method for solving a nonconvex optimization problem with stochastic gradient descent. We show that stochastic gradient descent can generalise to other settings and find the sample with the optimal solution. Here, this is achieved […]
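The teaser above describes stochastic gradient descent on a nonconvex objective; a minimal generic sketch (not the paper's method) follows, using the assumed toy objective f(w) = (w² - 1)² with additive gradient noise standing in for minibatch sampling:

```python
import random

def grad(w):
    # Gradient of the nonconvex objective f(w) = (w^2 - 1)^2
    return 4.0 * w * (w * w - 1.0)

def sgd(w0, lr=0.05, steps=500, noise=0.01, seed=0):
    # Plain SGD: repeatedly step along a noisy estimate of the gradient
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        g = grad(w) + rng.gauss(0.0, noise)  # noisy gradient, mimicking minibatches
        w -= lr * g
    return w

w = sgd(w0=0.5)  # converges near a minimiser at w = +1 or w = -1
```

The objective has two global minima at w = ±1 and a local maximum at w = 0, so which minimiser SGD reaches depends on the start point and the noise, the usual picture for nonconvex optimisation.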