Popis: |
Inspired by the importance of both communication and feedback on errors in human learning, our main goal was to implement a similar mechanism in the supervised learning of artificial neural networks. The starting point of our study was the observation that words should accompany the input vectors in the training set, thus extending the ANN input space. As a consequence, we had to consider a modified sigmoid activation function for the neurons in the first hidden layer (in agreement with a specific MLP structure), as well as a modified version of the Backpropagation algorithm that allows the use of unspecified (null) desired output components. Following the belief that basic concepts should be tested on simple examples, the mechanism described above was applied to both the XOR problem and a didactic color case study. In this context, we noticed the interesting fact that the ANN was able to categorize all the desired input vectors in the absence of their corresponding words, even though the training set included only word-accompanied inputs, for both positive and negative examples. Further analysis, together with the application of this approach to more complex scenarios, is currently in progress, as we consider that the proposed language-driven algorithm might contribute to a better understanding of learning in humans, while also opening the possibility of creating a specific category of artificial neural networks with abstraction capabilities.
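
The following is a minimal sketch of the two ideas summarized above, not the study's own implementation: input vectors are extended with a one-hot "word" code, and desired-output components marked as unspecified (NaN here) are simply masked out of the backpropagated error. The ordinary logistic sigmoid stands in for the modified activation function, and the network sizes, learning rate, output semantics, and XOR-style data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs extended with a two-component one-hot "word" code
# ([1, 0] ~ "zero", [0, 1] ~ "one"); the word encoding is an assumption.
X = np.array([
    [0, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
], dtype=float)

# Desired outputs with two components ([XOR, AND] here, purely illustrative);
# np.nan marks an unspecified (null) component that produces no error signal.
T = np.array([
    [0.0, 0.0],
    [1.0, np.nan],
    [1.0, 0.0],
    [np.nan, 1.0],
])

n_in, n_hid, n_out = X.shape[1], 4, T.shape[1]
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.5

for _ in range(5000):
    # Forward pass through one hidden layer of ordinary sigmoid units.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Unspecified target components contribute zero error, hence zero gradient.
    mask = ~np.isnan(T)
    err = np.where(mask, np.nan_to_num(T) - y, 0.0)

    # Standard Backpropagation deltas for sigmoid units and squared error.
    delta_out = err * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    W2 += lr * h.T @ delta_out
    b2 += lr * delta_out.sum(axis=0)
    W1 += lr * X.T @ delta_hid
    b1 += lr * delta_hid.sum(axis=0)

# After training, outputs for the specified target components approach their
# targets, while the masked components received no direct supervision.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```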