Yedroudj-Net: An Efficient CNN for Spatial Steganalysis

Authors: Mehdi Yedroudj, Marc Chaumont, Frédéric Comby
Contributors: Image & Interaction (ICAR), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS), Université de Montpellier (UM), Université de Nîmes (UNIMES)
Language: English
Year of publication: 2018
Source: ICASSP 2018, 43rd International Conference on Acoustics, Speech and Signal Processing, Apr 2018, Calgary, Alberta, Canada, pp. 2092-2096
DOI: 10.1109/ICASSP.2018.8461438
Description: International audience; For about ten years, detecting the presence of a secret message hidden in an image was performed with an Ensemble Classifier trained on Rich Model features. In recent years, studies such as that of Xu et al. have shown that well-designed Convolutional Neural Networks (CNN) can achieve performance comparable to these two-step machine learning approaches. In this paper, we propose a CNN that outperforms the state of the art in terms of error probability. The proposal is in continuity with what has recently been published and is a careful fusion of important building blocks used in various papers. Among the essential parts of the CNN, one can cite the use of a pre-processing filter bank and a Truncation activation function, five convolutional layers each with Batch Normalization and an associated Scale layer, and a sufficiently sized fully connected section. An augmented database was also used to improve the training of the CNN. Our CNN was experimentally evaluated against the S-UNIWARD and WOW embedding algorithms, and its performance was compared with that of three other methods: an Ensemble Classifier with a Rich Model, and two other CNN steganalyzers.
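Two of the building blocks named in the abstract, the pre-processing high-pass filter bank and the Truncation activation, can be sketched in a few lines. The sketch below is illustrative only: it uses a single "KV" residual kernel of the kind common in SRM-style steganalysis (the paper's network uses a larger bank of filters), and the threshold value is an assumption, not taken from the paper.

```python
import numpy as np

def truncation(x, threshold=3.0):
    # Truncation (TLU) activation: clamp values to [-threshold, threshold].
    # The threshold of 3.0 here is an illustrative choice.
    return np.clip(x, -threshold, threshold)

# Illustrative high-pass "KV" kernel (an SRM-style residual filter);
# the actual network uses a bank of such pre-processing filters.
KV = (1.0 / 12.0) * np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
])

def preprocess(image):
    # Valid-mode 2D correlation of the image with the KV kernel,
    # followed by truncation of the residual values. A plain loop
    # keeps the sketch dependency-free.
    h, w = image.shape
    k = KV.shape[0]
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * KV)
    return truncation(out)
```

Because the kernel coefficients sum to zero, a constant image yields an all-zero residual; only local deviations (such as those introduced by embedding) produce non-zero responses, which the truncation then keeps in a bounded range.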
Database: OpenAIRE