Exploring Temporal Context Knowledge for Real-time, Multi-lingual Conversational Search – We present a novel method for resolving temporal ambiguity in the wild. The proposed model is a neural network trained to predict the current tense state of a user's speech, given a single utterance or a sequence of sentences. As more of the user's speech becomes relevant to the current tense state, the system gains opportunities to refine its estimate of that state. An automatic learning component, which we call Temporal Context Knowledge (TCK), predicts the next tense state of the user's speech in order to obtain a more detailed picture of the current one. Our model combines the user's temporal context knowledge with the semantic content of his or her speech in a state-action tree. Using the knowledge extracted by TCK, we build a robust neural model that predicts the current tense state of the user's speech. Experiments are conducted on the MIMI dataset in two different languages. Results show that our model outperforms current state-action learning methods for predicting users' current tense state by a large margin.
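The abstract does not specify the architecture of the tense-state predictor, so the following is only a minimal sketch of the idea: a softmax classifier over cue-word features that maps a sentence to one of three tense states. The labels, vocabulary, training examples, and function names (`featurize`, `predict_tense`) are all hypothetical stand-ins, not the paper's method.

```python
import numpy as np

# Toy "tense state" labels and cue words; everything here is
# illustrative -- the paper does not specify its feature set.
TENSES = ["past", "present", "future"]
VOCAB = ["walked", "walks", "will", "yesterday", "today", "tomorrow"]

def featurize(sentence):
    """Bag-of-words vector over the tiny cue-word vocabulary."""
    words = sentence.lower().split()
    return np.array([float(w in words) for w in VOCAB])

# Hand-labelled toy training data (hypothetical examples).
data = [
    ("she walked home yesterday", "past"),
    ("he walks to work today", "present"),
    ("they will travel tomorrow", "future"),
    ("we walked there yesterday", "past"),
    ("she walks fast today", "present"),
    ("he will call tomorrow", "future"),
]
X = np.stack([featurize(s) for s, _ in data])
y = np.array([TENSES.index(t) for _, t in data])

# Softmax regression trained by batch gradient descent: a simple
# stand-in for the paper's (unspecified) neural tense-state model.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(len(VOCAB), len(TENSES)))
for _ in range(500):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(len(TENSES))[y]
    W -= 0.5 * X.T @ (probs - onehot) / len(data)

def predict_tense(sentence):
    """Return the most probable tense state for one sentence."""
    return TENSES[int(np.argmax(featurize(sentence) @ W))]
```

A real system would replace the cue-word features with a learned sentence encoder and condition on the preceding dialogue turns, as the abstract's notion of temporal context suggests.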


Examining Kernel Programs Using Naive Bayes

Towards automated translation of Isolated text in Bangla


Multi-modality Deep Learning with Variational Hidden-Markov Models for Classification – A major problem in statistical learning is learning a mixture of two groups of data. We propose a hybrid framework that models the mixture of the two groups while modelling each group independently through its variance. Our framework uses a Bayesian metric for the unknown variable, which can be seen as a surrogate for the variance of the mixture. Given the covariance matrix, we use an inference strategy based on a linear kernel to approximate the expected distribution of the observed covariance matrix, together with a logistic regression step used to build the model. The model is then transformed into a nonparametric mixture whose parameters are learned as the covariance matrix. We design the framework around a novel variational-inference algorithm for learning the parameters. Experimental evaluation shows that the framework is efficient, outperforming state-of-the-art approaches (such as Viterbi et al.), and that it scales with reasonable performance.
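The abstract leaves the variational algorithm unspecified, so the sketch below illustrates only the underlying idea of fitting a "mixture of two groups": plain expectation-maximization for a two-component one-dimensional Gaussian mixture. The synthetic data, initial values, and variable names are assumptions for illustration, not the paper's Bayesian surrogate model.

```python
import numpy as np

# Synthetic data drawn from two groups (means -2 and 3); the true
# parameters are known here only so the fit can be sanity-checked.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.5, 300),
                       rng.normal(3.0, 1.0, 300)])

# Initial guesses for mixing weights, means, and variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: responsibility of each component for each data point.
    dens = (pi / np.sqrt(2 * np.pi * var)
            * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = resp.sum(axis=0)
    pi = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk
```

After the loop, `mu` should sit near the true group means. A variational treatment, as the abstract describes, would instead place priors on these parameters and optimize a lower bound on the marginal likelihood rather than point estimates.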