Research

Bayesian Transfer Learning for Deep Networks

We propose a method for transfer learning in deep networks via Bayesian inference, in which an approximate posterior distribution q(w|θ) over the model parameters w is learned through variational approximation. Using Bayes by Backprop, we optimize the parameters θ of the approximate distribution. For transfer learning we consider two tasks, A and B. First, an approximate posterior q_A(w|θ) is learned on task A; this posterior is then transferred as the prior, p(w) → q_A(w|θ), when learning the approximate posterior q_B(w|θ) for task B. We first consider a multivariate normal approximate distribution q(w|θ) = N(µ, Σ) with diagonal covariance matrix Σ, and then investigate the prospects of more expressive approximate distributions, specifically normalizing flows. Evaluating these approaches on the MNIST data set, we conclude that normalizing flows do not improve Bayesian inference in the setting presented here. Furthermore, we show that transfer learning is not feasible with our proposed architecture and our definitions of task A and task B, although no general conclusion rejecting a Bayesian approach to transfer learning can be drawn.
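As a concrete illustration of the variational setup described above, the sketch below implements a Bayes-by-Backprop linear layer with a diagonal Gaussian posterior in PyTorch. The class name BayesianLinear, the N(0, 1) default prior, and the omission of bias terms are our assumptions for brevity, not the authors' implementation; the transfer step at the end shows how q_A's learned parameters could replace the prior p(w) before training on task B.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer with posterior q(w|θ) = N(mu, diag(sigma^2))."""

    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        # sigma = softplus(rho) keeps the standard deviation positive
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))
        # Prior p(w) = N(0, 1); for transfer learning these buffers are
        # overwritten with q_A's learned parameters before training on task B
        self.register_buffer("prior_mu", torch.zeros(n_out, n_in))
        self.register_buffer("prior_sigma", torch.ones(n_out, n_in))

    def forward(self, x):
        sigma = F.softplus(self.rho)
        # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I)
        w = self.mu + sigma * torch.randn_like(sigma)
        # Closed-form KL(q || p) between two diagonal Gaussians,
        # accumulated here so a training loop can add it to the NLL (ELBO)
        self.kl = (torch.log(self.prior_sigma / sigma)
                   + (sigma ** 2 + (self.mu - self.prior_mu) ** 2)
                   / (2 * self.prior_sigma ** 2) - 0.5).sum()
        return F.linear(x, w)  # bias omitted for brevity


def transfer_prior(layer: BayesianLinear) -> None:
    """After training on task A, copy q_A(w|θ) into the prior p(w)."""
    with torch.no_grad():
        layer.prior_mu.copy_(layer.mu)
        layer.prior_sigma.copy_(F.softplus(layer.rho))
```

A training loop would minimize the sum of each layer's kl term and the data negative log-likelihood, which is the variational free energy that Bayes by Backprop optimizes.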


Deep Neural Network based system for solving Arithmetic Word problems

This paper presents DILTON, a system that solves simple arithmetic word problems using a deep neural network-based model. DILTON divides each question into two parts, the world state and the query. The world state and the query are processed separately by two different networks, and the networks are then merged to predict the final operation. We report the first deep learning approach to predicting the operation between two numbers. DILTON learns to predict operations with 88.81% accuracy on a corpus of primary-school questions.
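Below is a minimal sketch of the two-branch idea described above, assuming PyTorch: the world state and the query are encoded by separate recurrent networks, their final hidden states are concatenated, and a classifier predicts one of the four arithmetic operations. The class name DiltonNet, the LSTM encoders, and all dimensions are illustrative assumptions; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn


class DiltonNet(nn.Module):
    """Encode world state and query separately, merge, predict operation."""

    def __init__(self, vocab_size, embed_dim=64, hidden=128, n_ops=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.world_enc = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.query_enc = nn.LSTM(embed_dim, hidden, batch_first=True)
        # Concatenated encodings feed a classifier over {+, -, *, /}
        self.classifier = nn.Linear(2 * hidden, n_ops)

    def forward(self, world_tokens, query_tokens):
        # The final LSTM hidden state summarizes each token sequence
        _, (h_world, _) = self.world_enc(self.embed(world_tokens))
        _, (h_query, _) = self.query_enc(self.embed(query_tokens))
        merged = torch.cat([h_world[-1], h_query[-1]], dim=-1)
        return self.classifier(merged)  # logits over the four operations
```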
