Comparison of Gradient Descent and Least Squares Algorithms in Deep Model
| Author: | Lianchao Jin, Shengming Jiang |
| --- | --- |
| Publication year: | 2020 |
| Subject: | |
| Source: | Journal of Physics: Conference Series. 1621:012027 |
| ISSN: | 1742-6596, 1742-6588 |
| Description: | Deep neural networks have a wide range of applications in stock forecasting, big-data forecasting of influenza outbreaks, prediction of game outcomes, and so on. These applications all involve a common task: regression prediction. A deep model can be applied to regression prediction by optimizing its structure with an optimization algorithm, which generally uses gradient descent and large amounts of real input data. In earlier mathematical models for regression analysis, one of the most basic optimization methods is the least squares method. This paper studies the performance of the VGG16 convolutional neural network model, reconstructed with transfer learning, when the amount of data is large. By analyzing the structure of these algorithms to understand more clearly how they optimize deep models, better algorithms can be chosen for future model optimization. (See the sketch after this record.) |
| Database: | OpenAIRE |
| External link: | |
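
The record above contrasts gradient descent, the usual optimizer for deep models, with the least squares method used in classical regression. The following Python sketch is not from the paper; it is a minimal, hypothetical illustration that fits the same toy linear-regression problem both ways: once with the closed-form least-squares solution of the normal equations, and once with plain batch gradient descent on the mean squared error. The data, learning rate, and iteration count are all assumptions chosen for illustration.

```python
# Minimal illustration (not from the paper): fit y = X w + noise two ways.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))                  # synthetic design matrix (assumed)
w_true = np.array([1.5, -2.0, 0.5])          # ground-truth weights (assumed)
y = X @ w_true + 0.1 * rng.normal(size=n)    # noisy targets

# 1) Least squares: solve the normal equations X^T X w = X^T y in closed form.
w_ls = np.linalg.solve(X.T @ X, X.T @ y)

# 2) Gradient descent: repeatedly step against the gradient of the
#    mean squared error (1 / 2n) * ||X w - y||^2.
w_gd = np.zeros(d)
learning_rate = 0.1                          # assumed hyperparameter
for _ in range(1000):
    grad = X.T @ (X @ w_gd - y) / n
    w_gd -= learning_rate * grad

print("closed-form least squares:", w_ls)
print("batch gradient descent:   ", w_gd)
```

Both estimates should agree closely on this toy problem; the practical distinction is that the closed-form least-squares solution exists only for linear models, whereas gradient descent also scales to deep, nonlinear models such as VGG16.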