Recurrent Neural Networks Are Universal Approximators.

Authors: Schäfer, Anton Maximilian; Zimmermann, Hans Georg
Editors: Kollias, Stefanos; Stafylopatis, Andreas; Duch, Włodzisław; Oja, Erkki
Source: Artificial Neural Networks - ICANN 2006; 2006, pp. 632-640, 9 pp.
Abstract: Neural networks represent a class of functions for the efficient identification and forecasting of dynamical systems. It has been shown that feedforward networks are able to approximate any (Borel-)measurable function on a compact domain [1,2,3]. Recurrent neural networks (RNNs) have been developed for a better understanding and analysis of open dynamical systems. Compared to feedforward networks, they have several advantages, which have been discussed extensively in the literature, e.g. [4]. Still, the question often arises whether RNNs are able to model every open dynamical system, which would be desirable for a broad spectrum of applications. In this paper we prove the universal approximation ability of RNNs in state space model form. The proof builds on the work of Hornik, Stinchcombe, and White on feedforward neural networks [1].
Database: Complementary Index
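
The state space model form referenced in the abstract is not spelled out in this record; a common formulation is the state transition s_t = tanh(A s_{t-1} + B x_t - theta) with output y_t = C s_t. The sketch below, assuming that formulation, simply iterates these two equations in Python; the class name, dimensions, and random initialization are illustrative choices, not taken from the paper.

    import numpy as np

    class StateSpaceRNN:
        """Illustrative RNN in state space form:
        s_t = tanh(A s_{t-1} + B x_t - theta),  y_t = C s_t.
        (Assumed formulation; names and sizes are hypothetical.)"""

        def __init__(self, state_dim, input_dim, output_dim, seed=None):
            rng = np.random.default_rng(seed)
            # Weight matrices and bias; small random values for illustration only.
            self.A = rng.normal(scale=0.1, size=(state_dim, state_dim))   # state transition
            self.B = rng.normal(scale=0.1, size=(state_dim, input_dim))   # input mapping
            self.C = rng.normal(scale=0.1, size=(output_dim, state_dim))  # output mapping
            self.theta = np.zeros(state_dim)                              # bias vector

        def run(self, inputs):
            """Iterate the state equation over an input sequence; return y_1..y_T."""
            s = np.zeros(self.A.shape[0])  # initial state s_0 = 0
            outputs = []
            for x in inputs:
                s = np.tanh(self.A @ s + self.B @ x - self.theta)  # state equation
                outputs.append(self.C @ s)                          # output equation
            return np.array(outputs)

    # Usage: drive the network with a random 20-step, 2-dimensional input sequence.
    rnn = StateSpaceRNN(state_dim=8, input_dim=2, output_dim=1, seed=0)
    ys = rnn.run(np.random.default_rng(0).normal(size=(20, 2)))

The paper's universal approximation result concerns networks of exactly this shape: the claim is that, with enough hidden state units, such a system can approximate the behavior of any open dynamical system on a suitable domain, paralleling the feedforward result of Hornik, Stinchcombe, and White [1].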