Fluctuation-driven learning rule for continuous-time recurrent neural networks and its application to dynamical system control.

Authors: Watanabe, Kazuhisa; Haba, Takahiro; Kudo, Noboru; Oohori, Takahumi
Source: Systems & Computers in Japan; 3/1/2001, Vol. 32 Issue 3, p14-23, 10p
Abstract: A fluctuation-driven learning rule is proposed for continuous-time recurrent neural networks. Random fluctuations n_j(p, t) (j: neuron number; p: input pattern number; 0 ⩽ t ⩽ T_p, with T_p the pattern length) are superimposed on every neuron's threshold. The probability density N_j(n_j) of the fluctuation amplitude is treated as time-invariant, and an auxiliary function g_j(n_j) satisfying -dN_j/dn_j = g_j N_j is introduced. Under the fluctuations n_j(p, t), the neuron outputs r_j(p, t) and the instantaneous error e(p, t) are probabilistic quantities. The learning rule for the synaptic weight w_ji from the i-th neuron is R_ji(p, t) = (1/τ_j) ∫_0^t g_j r_i dτ, Δ_p w_ji = -(μ/T_p) ∫_0^{T_p} e R_ji dt (τ_j: time constant of the membrane potential; μ: learning coefficient). It is shown theoretically that the expected mean error (1/T_p) ∫_0^{T_p} e dt is minimized by steepest descent. Unlike previous algorithms, this learning rule requires no additional apparatus such as an adjoint or sensitivity system, and can be executed in the time-forward direction by simple integration. The features of the proposed method are confirmed through numerical experiments with a JK flip-flop, the inverse model of a dynamical system, and speed control of a moving object. © 2001 Scripta Technica, Syst Comp Jpn, 32(3): 14–23, 2001 [ABSTRACT FROM AUTHOR]
Database: Supplemental Index
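The learning rule summarized in the abstract lends itself to a compact sketch. The following is a minimal, illustrative implementation, not the authors' code: it assumes Gaussian fluctuations n_j ~ N(0, σ²), for which the auxiliary function becomes g_j(n_j) = -N_j′/N_j = n_j/σ², a logistic activation, forward-Euler integration of the membrane potentials, and a simple fixed-target error. All names, network size, and constants are hypothetical choices for the sketch.

```python
import math
import random

# Illustrative sketch of a fluctuation-driven learning rule for a small
# continuous-time recurrent network. Assumptions (not from the paper):
# Gaussian fluctuations, so g_j(n_j) = n_j / sigma^2; logistic activation;
# forward-Euler integration; error e(t) = 0.5*(r_0 - TARGET)^2.

random.seed(0)

N = 3            # number of neurons (hypothetical)
DT = 0.05        # Euler time step
TAU = 1.0        # membrane time constant tau_j
SIGMA = 0.2      # std dev of the threshold fluctuation n_j
MU = 2.0         # learning coefficient mu
T_P = 5.0        # pattern length T_p
TARGET = 0.7     # desired output of neuron 0 (hypothetical task)

def act(u):
    """Logistic activation."""
    return 1.0 / (1.0 + math.exp(-u))

# Small random initial weights w_ji.
w = [[0.1 * random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]

def run_trial():
    """One pattern presentation: integrate forward, accumulate R_ji and
    the weight update, then apply Delta_p w_ji at the end."""
    u = [0.0] * N
    R = [[0.0] * N for _ in range(N)]    # eligibility traces R_ji(p, t)
    dw = [[0.0] * N for _ in range(N)]   # accumulates (1/T_p) int e R_ji dt
    steps = int(T_P / DT)
    mean_err = 0.0
    for _ in range(steps):
        n = [random.gauss(0.0, SIGMA) for _ in range(N)]
        # Fluctuation superimposed on each neuron's threshold.
        r = [act(u[j] + n[j]) for j in range(N)]
        e = 0.5 * (r[0] - TARGET) ** 2   # instantaneous error e(p, t)
        mean_err += e / steps
        for j in range(N):
            g = n[j] / SIGMA ** 2        # g_j(n_j) for a Gaussian density
            for i in range(N):
                # R_ji = (1/tau_j) int_0^t g_j r_i dtau
                R[j][i] += DT * g * r[i] / TAU
                dw[j][i] += DT * e * R[j][i] / T_P
        for j in range(N):
            net = sum(w[j][i] * r[i] for i in range(N))
            u[j] += DT * (-u[j] + net) / TAU
    # Delta_p w_ji = -(mu/T_p) int_0^{T_p} e R_ji dt  (time-forward, no
    # adjoint or sensitivity system needed).
    for j in range(N):
        for i in range(N):
            w[j][i] -= MU * dw[j][i]
    return mean_err

errs = [run_trial() for _ in range(300)]
```

Because the rule only correlates the scalar error with each neuron's own fluctuation trace, every quantity it needs is available locally and causally, which is why the whole update can run in the time-forward direction as the abstract states.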