Master's defense: Rune Midjord Nielsen
Deep Neural Networks and Unsupervised Learning
The purpose of this thesis is to delve deeper into the inner workings of the Deep Belief Network (DBN). To a physicist, Machine Learning occupies a strange place halfway between the purest form of theoretical physics, namely the entropy equations at the heart of our neural network code, and something close to engineering, with the very concrete purpose of finding patterns in a given dataset. In that sense, the field of Neural Networks, and Deep Learning in particular, is the bastard child of data science and theoretical/statistical physics.
Curiously, physicists were rather late in coming to the field of Neural Networks, and to Deep Learning in particular. Not until the mid-90s did physicists properly join the fray, as it became clear that at the core of any neural network algorithm, shallow or deep, lay highly theoretical entropy measures quantifying the error between input and output. In fact, data scientists had reinvented the Boltzmann distribution several times over without realizing it, through nothing but trial and error.
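As a concrete illustration of this correspondence (the notation below is chosen here for exposition, not taken from the thesis): the building block of the DBN, the Restricted Boltzmann Machine, assigns every joint configuration of visible units v and hidden units h a probability that is exactly a Boltzmann distribution over an energy function,

\[
p(v, h) = \frac{1}{Z} \, e^{-E(v, h)},
\qquad
E(v, h) = -a^{\top} v - b^{\top} h - v^{\top} W h,
\qquad
Z = \sum_{v, h} e^{-E(v, h)},
\]

where W is the weight matrix, a and b are the visible and hidden biases, and Z is the partition function familiar from statistical mechanics.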