Author: |
Yunlong Lu, Wenyu Li, Hongwei Wang |
Language: |
English |
Year of publication: |
2020 |
Subject: |
|
Source: |
IEEE Access, Vol 8, Pp 100185-100193 (2020) |
Document type: |
article |
ISSN: |
2169-3536 |
DOI: |
10.1109/ACCESS.2020.2997867 |
Description: |
A batch variable learning rate gradient descent algorithm is proposed to efficiently train a neuro-fuzzy network of zero-order Takagi-Sugeno inference systems. To exploit the advantages of regularization, the smoothing L1/2 regularization is used to find a more appropriately sparse network. By combining second-order information of the smoothing error function, a variable learning rate is chosen along the steepest descent direction, which avoids a line search procedure and may reduce the computational cost. To appropriately adjust the Lipschitz constant of the smoothing error function in the learning rate, a new scheme is proposed that introduces a hyper-parameter. The article also applies the modified secant equation to estimate the Lipschitz constant, which greatly reduces the oscillation of the algorithm and improves its robustness. Under appropriate assumptions, a convergence result for the proposed algorithm is also given. Simulation results on two identification and classification problems show that the proposed algorithm achieves better numerical performance and promotes the sparsity of the network, compared with the common batch gradient descent algorithm and a variable learning rate gradient-based algorithm. |
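A minimal sketch of the scheme the abstract describes, not the paper's exact method: a toy least-squares loss stands in for the neuro-fuzzy network error function, the smooth surrogate (w^2 + eps)^(1/4) stands in for the paper's smoothing of the L1/2 penalty |w|^(1/2), and the secant-style Lipschitz estimate with the damping hyper-parameter mu is an illustrative reading of the "modified secant equation" and hyper-parameter mentioned above. All names and constants here are assumptions for illustration.

```python
import numpy as np

# Assumed smooth surrogate for the L1/2 penalty |w|^{1/2}
# (the paper's exact smoothing function is not given in the abstract).
def smooth_l12(w, eps=1e-4):
    return np.sum((w**2 + eps) ** 0.25)

def smooth_l12_grad(w, eps=1e-4):
    # d/dw (w^2 + eps)^{1/4} = 0.5 * w * (w^2 + eps)^{-3/4}
    return 0.5 * w * (w**2 + eps) ** (-0.75)

# Toy quadratic loss standing in for the network's smoothing error function.
def grad(w, X, y, lam):
    r = X @ w - y
    return X.T @ r / len(y) + lam * smooth_l12_grad(w)

def train(X, y, lam=1e-3, mu=2.0, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    g = grad(w, X, y, lam)
    eta = 1e-2  # initial step before the first Lipschitz estimate
    for _ in range(iters):
        w_new = w - eta * g          # batch step along steepest descent
        g_new = grad(w_new, X, y, lam)
        dw, dg = w_new - w, g_new - g
        # Secant-style Lipschitz estimate: L ~ ||grad change|| / ||step||.
        L = np.linalg.norm(dg) / (np.linalg.norm(dw) + 1e-12)
        # Hypothetical hyper-parameter mu > 1 damps the rate to curb
        # oscillation; no line search is needed.
        eta = 1.0 / (mu * L + 1e-12)
        w, g = w_new, g_new
    return w

# Toy usage: a sparse ground truth, so the penalty can drive small
# weights toward zero.
X = np.random.default_rng(1).normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true
w_hat = train(X, y)
```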
Database: |
Directory of Open Access Journals |
External link: |
|