Refining the Structure of Neural Networks Using Matrix Conditioning

Title: Refining the Structure of Neural Networks Using Matrix Conditioning
Publication Type: Journal Article
Year of Publication: 2019
Authors: Yousefzadeh, R., O'Leary, D. P.
Date Published: 8/6/2019
Abstract

Deep learning models have proven to be exceptionally useful in performing many machine learning tasks. However, for each new dataset, choosing an effective size and structure of the model can be a time-consuming process of trial and error. While a small network with few neurons might not be able to capture the intricacies of a given task, having too many neurons can lead to overfitting and poor generalization. Here, we propose a practical method that employs matrix conditioning to automatically design the structure of the layers of a feed-forward network, by first adjusting the proportion of neurons among the layers of a network and then scaling the size of the network up or down. Results on sample image and non-image datasets demonstrate that our method results in small networks with high accuracies. Finally, guided by matrix conditioning, we provide a method to effectively squeeze models that are already trained. Our techniques reduce the human cost of designing deep learning models and can also reduce training time and the expense of using neural networks for applications.
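The abstract does not spell out how conditioning is used to resize layers; the full paper gives the actual procedure. The snippet below is only a minimal illustrative sketch of the general idea: inspecting the 2-norm condition number of each layer's weight matrix in a feed-forward network. The layer sizes and variable names are hypothetical and the interpretation at the end is a heuristic reading of the abstract, not the authors' algorithm.

```python
# Illustrative sketch only: compute the condition number of each layer's
# weight matrix in a small feed-forward network. The paper's actual
# resizing procedure is described in the full text.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer widths: 784 -> 128 -> 64 -> 10
layer_sizes = [784, 128, 64, 10]
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

for i, W in enumerate(weights, start=1):
    # 2-norm condition number: ratio of largest to smallest singular value.
    cond = np.linalg.cond(W, 2)
    print(f"layer {i}: shape {W.shape}, condition number {cond:.2f}")

# Heuristically, a very large condition number suggests near-redundant
# rows or columns (a layer that might be shrunk), while a well-conditioned
# layer is using its neurons more fully. (Assumption for illustration only.)
```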

URL: https://arxiv.org/abs/1908.02400