RMAF: ReLU-Memristor-Like Activation Function for Deep Learning

Authors: Yongbin Yu, Kwabena Adu, Nyima Tashi, Patrick Anokye, Xiangxiang Wang, Mighty Abra Ayidzoe
Language: English
Year of publication: 2020
Source: IEEE Access, Vol 8, Pp 72727-72741 (2020)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2987829
Description: Activation functions facilitate deep neural networks by introducing non-linearity into the learning process. This non-linearity gives the neural network the ability to learn complex patterns. Currently, the most widely used activation function is the Rectified Linear Unit (ReLU). Various other activation functions, including hand-designed alternatives to ReLU, have been proposed, yet none has succeeded in replacing ReLU because of their inconsistencies. In this work, an activation function called the ReLU-Memristor-like Activation Function (RMAF) is proposed to leverage the benefits of negative values in neural networks. RMAF introduces a constant parameter (α) and a threshold parameter (p), making the function smooth and non-monotonic while introducing non-linearity into the network. Our experiments show that RMAF outperforms ReLU and other activation functions on deeper models and across a number of challenging datasets. First, experiments are performed by training and classifying with a multi-layer perceptron (MLP) on benchmark data such as Wisconsin breast cancer, MNIST, Iris, and Car Evaluation, where RMAF achieves accuracies of 98.74%, 99.67%, 98.81%, and 99.42%, respectively, compared to Sigmoid, Tanh, and ReLU. Second, experiments were performed with a convolutional neural network (ResNet) on MNIST, CIFAR-10, and CIFAR-100, where the proposed activation function achieves higher accuracies of 99.73%, 98.77%, and 79.82%, respectively, than Tanh, ReLU, and Swish. Additionally, we evaluated deeper networks, i.e., SqueezeNet and the densely connected network DenseNet-121, as well as the ImageNet dataset, on which RMAF produced the best performance. We note that RMAF converges faster than the other functions and can replace ReLU in any neural network owing to its efficiency, scalability, and similarity to both ReLU and Swish.
Database: Directory of Open Access Journals
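
Note: this record does not reproduce the RMAF formula itself (see the DOI above for the full definition). As a minimal, hedged sketch, the Python snippet below implements the baselines named in the abstract (ReLU and Swish) plus a hypothetical rmaf_like function parameterized by α and p; the latter is only an illustrative stand-in for the described family of smooth, non-monotonic, ReLU-like activations that pass small negative values, not the authors' published function.

import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x)."""
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    """Swish: x * sigmoid(beta * x); smooth and non-monotonic, like RMAF."""
    return x / (1.0 + np.exp(-beta * x))

def rmaf_like(x, alpha=1.0, p=0.0):
    """Hypothetical RMAF-style activation (NOT the published formula).

    Illustrates the family described in the abstract: the input x is
    multiplied by a smooth gate shaped by a constant parameter alpha and
    a threshold parameter p, giving a smooth, non-monotonic curve that
    lets small negative values through instead of zeroing them out.
    """
    gate = 1.0 / (1.0 + np.exp(-alpha * (x - p)))  # smooth thresholded gate
    return x * gate

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 11)
    print("relu     :", np.round(relu(x), 3))
    print("swish    :", np.round(swish(x), 3))
    print("rmaf-like:", np.round(rmaf_like(x, alpha=2.0, p=0.0), 3))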