Description: |
To explain multistable perception in “reversible figures”, an artificial perceptron neural network model is proposed. The networks are composed of a shift-invariant “smart eye” preprocessing unit, a depth processing unit, and a main brain computing unit. The shift-invariant preprocessing unit can be realized by wavelet-transform optical filters (Sheng et al., 1993) or by fixed artificial neural networks (Widrow & Winter, 1988). The depth processing unit is constructed from two McCulloch-Pitts neurons with time-dependent thresholds, which are updated by a modified version of Haken’s time-dependent attention-parameter dynamics. The main brain computing network is a revised back-error-propagation network. Following Haken’s (1991) description, we adopt Stadler’s connectionist approach. We use a polynomial energy function, e.g., a φ⁴ field, as the performance measure, replacing the least-mean-square energy. Training of the computing network follows a standard supervised delta learning rule for the interconnection weights. Testing on “reversible figures” is subsequently controlled by a phase-transition tuning parameter driven bottom-up by the test image data. The effects of the tuning parameter are illustrated, and the modeling of multistable perception is discussed. We demonstrate that brain computing networks trained with the new energy function generally outperform standard back-error-propagation networks trained with the least-mean-square energy in both training speed and pattern classification. |
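
  As a purely illustrative sketch (not taken from the paper), the following Python snippet contrasts the standard least-mean-square error with a hypothetical quartic (φ⁴-type) performance measure in a supervised delta-rule update for a single sigmoid unit. The coefficients a and b, the function names, and the single-unit setting are assumptions for illustration; the paper's exact polynomial and network architecture are not specified in this abstract.

    import numpy as np

    # Least-mean-square energy: E = (1/2) * sum(e^2), with e = target - output.
    def lms_energy(t, o):
        e = t - o
        return 0.5 * np.sum(e ** 2)

    # Hypothetical quartic ("phi-4"-type) measure: E = sum(a/2 * e^2 + b/4 * e^4).
    # Coefficients a, b are illustrative assumptions only.
    def phi4_energy(t, o, a=1.0, b=1.0):
        e = t - o
        return np.sum(0.5 * a * e ** 2 + 0.25 * b * e ** 4)

    def delta_update(w, x, t, lr=0.1, quartic=True, a=1.0, b=1.0):
        """One supervised delta-rule step for a single sigmoid unit o = sigma(w . x)."""
        o = 1.0 / (1.0 + np.exp(-(x @ w)))
        e = t - o
        # dE/do = -(a*e + b*e^3) for the quartic measure, -e for least mean square.
        dE_do = -(a * e + b * e ** 3) if quartic else -e
        grad_w = dE_do * o * (1.0 - o) * x  # chain rule through the sigmoid
        return w - lr * grad_w

    # Example: one update on a toy input pattern.
    rng = np.random.default_rng(0)
    w = rng.normal(size=3)
    x = np.array([1.0, 0.5, -0.3])
    o = 1.0 / (1.0 + np.exp(-(x @ w)))
    print(lms_energy(1.0, o), phi4_energy(1.0, o))
    w = delta_update(w, x, t=1.0)

  For small errors the quartic term is negligible and the update reduces to the usual delta rule; for large errors it steepens the gradient, which is one plausible reading of why a polynomial energy could speed up training, though the abstract itself does not give the mechanism.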