Progressive Operational Perceptrons

Authors: Serkan Kiranyaz, Alexandros Iosifidis, Turker Ince, Moncef Gabbouj
Year of publication: 2017
Subject:
Generalization
Computer science
Complex configuration
Backpropagation
Operator (computer programming)
Electrical engineering, electronic engineering, information engineering
Generalized models
Linear system
Mathematical operators
Benchmark problems
Diversity
Artificial neural network
Mathematical parameters
Computer Science Applications
Benchmarking
Benchmark (computing)
Nerve cell
Cybernetics
Neural networks
Optimal operators
Cognitive Neuroscience
Complex networks
Multi-layer perceptrons (MLPs)
Biological neuron model
Machine learning
Mathematical analysis
Perceptron
Artificial Intelligence
Robustness (computer science)
Mathematical computing
Generalization performance
Generalized operational perceptron
Statistical model
Scalability
Progressive operational perceptron
Source: Neurocomputing 224:142-154
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2016.10.044
Description: There are well-known limitations and drawbacks in the performance and robustness of feed-forward, fully-connected Artificial Neural Networks (ANNs), the so-called Multi-Layer Perceptrons (MLPs). In this study we address them with Generalized Operational Perceptrons (GOPs), which consist of neurons with distinct (non-)linear operators, yielding a generalized model of the biological neuron and ultimately a superior diversity. We modified the conventional back-propagation (BP) algorithm to train GOPs and, furthermore, proposed Progressive Operational Perceptrons (POPs) to achieve self-organized and depth-adaptive GOPs tailored to the learning problem at hand. The most crucial property of POPs is their ability to simultaneously search for the optimal operator set and train each layer individually. The final POP is therefore formed layer by layer, and in this paper we show that this ability enables POPs with minimal network depth to attack the most challenging learning problems that cannot be learned by conventional ANNs, even with a deeper and significantly more complex configuration. Experimental results show that POPs scale up very well with the problem size and have the potential to achieve a superior generalization performance on real benchmark problems with a significant gain.
Highlights: To address the limitations and drawbacks of the MLP neuron model, a generalized model of the biological neuron is proposed. Progressive Operational Perceptrons (POPs) are self-adaptive and built progressively (incrementally), just like biological neurons. A POP shares the same properties as a typical MLP and can be made identical to an MLP provided that the MLP operators are used. The best set of operators is searched according to the learning problem at hand, and the minimal network is built progressively. With the right blend of non-linear operators, POPs can learn very complex problems that cannot be learned by deeper MLPs.
Database: OpenAIRE
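
As a rough illustration of the neuron model summarized in the description, the sketch below implements the forward pass of a single GOP layer (each neuron applies a nodal operator element-wise between its weights and inputs, pools the results over the input dimension, and passes the sum through an activation operator), together with a brute-force operator-set search on a toy XOR problem. The operator libraries, the random-search "training", and all function names here are illustrative assumptions, not the paper's implementation: the paper trains GOPs with a modified back-propagation and forms POPs greedily, layer by layer, searching the operator set for each new layer.

    # Minimal NumPy sketch of a Generalized Operational Perceptron (GOP) layer and a
    # brute-force operator-set search on a toy XOR problem. Operator sets and the
    # random-search fitting are assumptions for illustration only; the paper trains
    # GOPs with a modified back-propagation and builds POPs progressively.
    import itertools
    import numpy as np

    # Hypothetical operator libraries (not the paper's exact sets).
    NODAL = {
        "mult": lambda w, x: w * x,
        "exp":  lambda w, x: np.exp(w * x) - 1.0,
        "sin":  lambda w, x: np.sin(w * x),
    }
    POOL = {
        "sum": lambda z: z.sum(axis=1),
        "max": lambda z: z.max(axis=1),
    }
    ACT = {
        "tanh": np.tanh,
        "lin":  lambda z: z,
    }

    def gop_layer(X, W, b, nodal, pool, act):
        """Forward pass of one GOP layer.

        Each output neuron j applies the nodal operator element-wise between its
        weights W[:, j] and the inputs, pools over the input dimension, adds a
        bias, and applies the activation operator.
        X: (n_samples, d_in), W: (d_in, d_out), b: (d_out,)
        """
        n, d_in = X.shape
        d_out = W.shape[1]
        out = np.empty((n, d_out))
        for j in range(d_out):
            z = nodal(W[:, j][None, :], X)      # (n, d_in) nodal responses
            out[:, j] = act(pool(z) + b[j])     # pooled, biased, activated
        return out

    def fit_layer_random_search(X, y, d_out, ops, trials=300, rng=None):
        """Crude stand-in for training: sample random weights, keep the best.

        Returns (mse, W, b) for the given (nodal, pool, activation) triple.
        """
        if rng is None:
            rng = np.random.default_rng(0)
        nodal, pool, act = ops
        best = (np.inf, None, None)
        for _ in range(trials):
            W = rng.normal(scale=1.0, size=(X.shape[1], d_out))
            b = rng.normal(scale=0.5, size=d_out)
            pred = gop_layer(X, W, b, nodal, pool, act)
            mse = np.mean((pred - y) ** 2)
            if mse < best[0]:
                best = (mse, W, b)
        return best

    if __name__ == "__main__":
        # Toy XOR problem: not linearly separable, so the operator choice matters.
        X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        y = np.array([[0.], [1.], [1.], [0.]])

        results = []
        for names in itertools.product(NODAL, POOL, ACT):
            ops = (NODAL[names[0]], POOL[names[1]], ACT[names[2]])
            mse, W, b = fit_layer_random_search(X, y, d_out=1, ops=ops)
            results.append((mse, names))
        results.sort()
        for mse, names in results[:3]:
            print(f"{names}: MSE = {mse:.4f}")

In a full POP, a search of this kind would be repeated per layer: the operator triple that best fits the current layer is fixed, the layer's outputs become the next layer's inputs, and the network grows in depth only while the target performance has not been reached.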