Dynamical Neural Networks that Ensure Exponential Identification Error Convergence
Author: | Kosmatopoulos, E. B., Christodoulou, Manolis A., Ioannou, Petros A. |
Contributors: | Ioannou, Petros A. [0000-0001-6981-0704] |
Year of publication: | 1997 |
Subject: |
Errors, Cognitive Neuroscience, Exponential identification error convergence, Artificial Intelligence, Control theory, Convergence (routing), dynamical system identification, robust adaptive algorithms, Mathematics, Mathematical models, learning algorithm, Learning systems, Adaptive algorithm, Artificial neural network, Estimation theory, article, System identification, Identification (control systems), Adaptive algorithms, Exponential function, Nonlinear system, Identification (information), priority journal, recurrent high order neural networks, Neural networks |
Source: | Neural Networks (Neural Netw.) |
ISSN: | 1879-2782 |
Description: | Classical adaptive and robust adaptive schemes are unable to ensure convergence of the identification error to zero in the presence of modeling errors. Therefore, the use of such schemes for 'black-box' identification of nonlinear systems ensures, at best, a bounded identification error. In this paper, new learning (adaptive) laws are proposed which, when applied to recurrent high order neural networks (RHONN), ensure that the identification error converges to zero exponentially fast; moreover, if the identification error is initially zero, it remains zero throughout the identification process. The parameter convergence properties of the proposed scheme, that is, its capability of converging to the optimal neural network model, are also examined and shown to be similar to those of classical adaptive and parameter estimation schemes. Finally, it is noted that the proposed learning laws are not locally implementable, as they make use of global knowledge of signals and parameters. |
Volume: 10 | Issue: 2 | Pages: 299-314 | Cited by: 68 |
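To illustrate the identification setting the abstract describes, the following is a minimal sketch of RHONN-based identification of a scalar nonlinear system. It uses a standard gradient adaptive law, not the paper's proposed learning laws (which the authors note are not locally implementable); the plant, regressor, and gains are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def regressor(x):
    # High-order terms: powers of a sigmoid of the plant state.
    s = sigmoid(x)
    return np.array([s, s**2, s**3])

a = 1.0          # Hurwitz pole of the identifier dynamics
gamma = 50.0     # adaptation gain (illustrative choice)
dt = 1e-3        # Euler integration step

x = 0.5          # true plant state
xhat = 0.5       # identifier state (initial identification error is zero)
W = np.zeros(3)  # adjustable RHONN weights

for _ in range(20000):
    z = regressor(x)
    e = xhat - x                   # identification error
    x_dot = -x + np.sin(x)         # "unknown" nonlinear plant (illustrative)
    xhat_dot = -a * xhat + W @ z   # series-parallel RHONN identifier
    W_dot = -gamma * e * z         # gradient learning law (not the paper's)
    x, xhat, W = x + dt * x_dot, xhat + dt * xhat_dot, W + dt * W_dot

print("final identification error:", abs(xhat - x))
```

With a gradient law the error is only guaranteed bounded in the presence of modeling error; the point of the paper is that its proposed laws strengthen this to exponential convergence to zero.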
Database: | OpenAIRE |
External link: |