Popis: |
This thesis proposes a new data-driven method for neural network weight initialization, in which the input data matrix is first factorized into a sequence of smaller matrices, each holding a compressed representation of the original data. A series of shallow neural networks is then trained on these matrices, each learning a simple mapping from one compressed representation to the next, usually smaller, one. A final shallow network maps the last compressed representation either to the respective class labels in a classification task or to a single real value in a regression task. All the trained shallow networks are then stacked into one deep network, which is trained further as a whole. The proposed method usually works better than random initialization for deep neural networks, where random weights often lead to overfitting or very slow learning. To evaluate and compare the proposed method with other initialization methods, two datasets were used: the MNIST dataset to test classification accuracy, and the Jester jokes dataset to predict ratings for individual jokes.
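The pipeline described above — factorize the data into progressively smaller representations, pretrain one shallow network per step, then stack them — can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes truncated SVD as the factorization, tanh single-layer networks fit by plain gradient descent, and toy random data in place of MNIST; all function names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(X, k):
    # Stand-in factorization step: project X onto its top-k right
    # singular vectors to get a k-dimensional compressed representation.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

def train_shallow(X_in, Y_out, epochs=200, lr=0.1):
    # One-layer tanh network fit by gradient descent on mean squared error.
    W = rng.normal(scale=0.1, size=(X_in.shape[1], Y_out.shape[1]))
    b = np.zeros(Y_out.shape[1])
    for _ in range(epochs):
        H = np.tanh(X_in @ W + b)
        G = (H - Y_out) * (1 - H ** 2) / len(X_in)  # tanh' = 1 - tanh^2
        W -= lr * X_in.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

def forward(X, layers):
    # The stacked deep network: each pretrained shallow layer in sequence.
    H = X
    for W, b in layers:
        H = np.tanh(H @ W + b)
    return H

# Toy data standing in for an MNIST-like input matrix (100 samples, 32 features).
X = rng.normal(size=(100, 32))

# Chain of compressed representations: 32 -> 16 -> 8 dimensions.
reps = [X] + [compress(X, k) for k in (16, 8)]

# Pretrain one shallow network per step, mapping one representation to the
# next (targets squashed with tanh to match the layer's output range).
layers = [train_shallow(reps[i], np.tanh(reps[i + 1]))
          for i in range(len(reps) - 1)]

deep_output = forward(X, layers)  # initialized deep network, ready for fine-tuning
```

In the full method, one more shallow network would map `deep_output` to class labels (or a single real value for regression), and the stacked network would then be trained further end to end.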