Encouraging an Appropriate Representation Simplifies Training of Neural Networks
Author: | Krisztian Buza |
---|---|
Language: | English |
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences
integration of domain knowledge Computer Science - Machine Learning representation Computer science Generalization Geography Planning and Development Machine Learning (stat.ML) 02 engineering and technology Training (civil) Machine Learning (cs.LG) 03 medical and health sciences 68t07 0302 clinical medicine Simple (abstract algebra) Statistics - Machine Learning 0202 electrical engineering electronic engineering information engineering Representation (mathematics) Artificial neural network business.industry QA75.5-76.95 neural networks Electronic computers. Computer science Domain knowledge 020201 artificial intelligence & image processing Artificial intelligence business 030217 neurology & neurosurgery |
Source: | Acta Universitatis Sapientiae: Informatica, Vol 12, Iss 1, Pp 102-111 (2020) |
Description: | A common assumption about neural networks is that they can learn an appropriate internal representation on their own, as in end-to-end learning, for example. In this work, we challenge this assumption. We consider two simple tasks and show that a state-of-the-art training algorithm fails, although the model itself is able to represent an appropriate solution. We demonstrate that encouraging an appropriate internal representation allows the same model to solve these tasks. While we do not claim that these tasks cannot be solved by other means (such as neural networks with more layers), our results illustrate that integrating domain knowledge in the form of a desired internal representation may improve the generalization ability of neural networks. |
Database: | OpenAIRE |
External link: |
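The idea described in the abstract can be illustrated with a minimal sketch: alongside the usual task loss, an auxiliary loss pulls the hidden-layer activations toward a desired representation supplied as domain knowledge. The task (XOR), the network size, the target representation, and the weighting `lam` below are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

# Hedged sketch: a tiny two-layer network trained with a combined objective
#   0.5*||y_hat - y||^2  +  0.5*lam*||H - H_target||^2,
# where H_target encodes domain knowledge about the hidden representation.
# All concrete choices (XOR task, shapes, lam, lr) are illustrative assumptions.

rng = np.random.default_rng(0)

# XOR: a classic task a single linear layer cannot represent.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Desired hidden representation: h0 ~ OR(x1, x2), h1 ~ AND(x1, x2),
# so the output layer only needs to learn roughly y = h0 - h1.
H_target = np.array([[0., 0.], [1., 0.], [1., 0.], [1., 1.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

lr, lam = 1.0, 1.0  # learning rate and auxiliary-loss weight (assumptions)
for _ in range(10000):
    H = sigmoid(X @ W1 + b1)       # hidden representation
    Y = sigmoid(H @ W2 + b2)       # prediction
    dY = (Y - y) * Y * (1 - Y)     # task-loss gradient at the output
    # Gradient at the hidden layer: task term plus representation term.
    dH = (dY @ W2.T + lam * (H - H_target)) * H * (1 - H)
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())
```

Setting `lam = 0` recovers plain end-to-end training of the same model; the auxiliary term is the only place where the domain knowledge enters.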