An Expectation-Maximization Algorithm for Learning the Geometry of Nonparallel Constraint-O(N) Spaces – We propose an algorithm for learning a sparse representation of an objective function by exploiting the geometric properties of the underlying manifold. The resulting algorithm generalizes a widely used approach to convex optimization based on Bayesian networks, and is particularly relevant when the manifold itself is convex. We give an efficient variant of the method, the maximum-likelihood-based convex optimization method (mFBO), and compare it to other methods, such as Maximum Mean Discriminant Analysis (LMDA) and Max-Span, which use the manifold-space representation of the objective function to capture the objective in a finite manifold. The optimization loss is $\ell_f$ (or $f_\alpha$, depending on the manifold) and can therefore be computed from a finite set of manifold spaces. We show that the proposed algorithm is not only efficient but also comes with robustness and convergence guarantees.
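The abstract does not give concrete update rules, so the following is only a generic sketch of what an EM-style alternation for learning a sparse representation of an objective can look like: alternate between inferring sparse codes (an "E-like" step) and refitting the representation (an "M-like" step). All names (`em_sparse_fit`, the soft-thresholding choice, the ridge term) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def em_sparse_fit(X, n_atoms=8, n_iter=50, lam=0.1, seed=0):
    """Hypothetical EM-style alternation for a sparse representation of X.

    Alternates an E-like step (infer sparse codes Z given dictionary D)
    with an M-like step (refit D given Z). Purely illustrative.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    D = rng.standard_normal((n_atoms, n_features))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    Z = np.zeros((X.shape[0], n_atoms))
    for _ in range(n_iter):
        # E-like step: ridge-regularized least-squares codes,
        # then soft-thresholding to encourage sparsity.
        Z = X @ D.T @ np.linalg.inv(D @ D.T + lam * np.eye(n_atoms))
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)
        # M-like step: refit the dictionary to the current codes.
        D = np.linalg.lstsq(Z, X, rcond=None)[0]
        norms = np.linalg.norm(D, axis=1, keepdims=True)
        D /= np.maximum(norms, 1e-12)
    # A loss of the general form mentioned in the abstract:
    # reconstruction error plus a sparsity penalty.
    loss = 0.5 * np.mean((X - Z @ D) ** 2) + lam * np.mean(np.abs(Z))
    return D, Z, loss
```

The alternation mirrors the usual EM template, with the sparsity penalty playing the role of the constraint geometry; the abstract's manifold-specific choice of loss ($\ell_f$ versus $f_\alpha$) would replace the simple squared-error term here.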

This paper investigates the use of nonlinear networks as a basis for modeling decision support systems (PDS). Nonlinear networks are a powerful approach for modeling PDS, since their behaviour can be described to the user directly through the network structure. Unfortunately, such networks are expensive to build compared to linear networks when handling complex decision problems. In this paper, we present a new approach for modelling nonlinear PDS with a linear network architecture, which we refer to as the nonlinear PDS network (NP-POM) architecture. The NP-POM architecture has two advantages: an efficient model-building process and a low-level architecture that can be optimized efficiently. It can solve real-valued problems from a wide variety of PDS while remaining computationally efficient, unlike many linear PDS models. The NP-POM architecture is implemented as an extension of the standard linear framework and is shown to be a better alternative than purely linear approaches.
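The abstract's central idea, expressing a nonlinear model through a linear, efficiently optimizable architecture, can be illustrated with a standard trick that is not necessarily the paper's NP-POM construction: push inputs through a fixed nonlinear feature map and train only a linear read-out in closed form. Every name below (`fit_random_feature_model`, the cosine features, the ridge term) is an assumption for illustration.

```python
import numpy as np

def fit_random_feature_model(X, y, n_features=100, reg=1e-3, seed=0):
    """Sketch: nonlinear predictions from a linear trainable layer.

    Inputs pass through fixed random cosine features; only the linear
    read-out weights are fit, via ridge regression in closed form.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    Phi = np.cos(X @ W + b)  # fixed nonlinear feature map (untrained)
    # Training reduces to a linear solve, which is the efficiency gain
    # a linear architecture offers over fitting a full nonlinear network.
    A = Phi.T @ Phi + reg * np.eye(n_features)
    w = np.linalg.solve(A, Phi.T @ y)
    return W, b, w

def predict(X, W, b, w):
    """Apply the fixed feature map, then the learned linear read-out."""
    return np.cos(X @ W + b) @ w
```

The design point is the one the abstract emphasizes: the model's response is nonlinear in the inputs, yet the only trained component is linear, so model building stays cheap and the optimization is convex.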

Fast Graph Matching based on the Local Maximal Log Gabor Surrogate Descent

Euclidean Metric Learning with Exponential Families

# An Expectation-Maximization Algorithm for Learning the Geometry of Nonparallel Constraint-O(N) Spaces

Conceptual Constraint-based Neural Networks

Convolutional Neural Networks, Part I: General Principles