Author:
Dengyong ZHANG, Huang WEN, Feng LI, Peng CAO, Lingyun XIANG, Gaobo YANG, Xiangling DING
Language:
English, Chinese
Year of Publication:
2022
Subject:
Source:
网络与信息安全学报 (Chinese Journal of Network and Information Security), Vol. 8, pp. 110-122 (2022)
Document Type:
article
ISSN:
2096-109X
DOI:
10.11959/j.issn.2096-109x.2022084
Description:
Image inpainting is a technique that uses information from the known regions of an image to repair missing or damaged regions. Image editing software built on it makes it easy to edit and modify the content of digital images without specialized expertise. When image inpainting is used to maliciously remove content from an image, it undermines confidence in the authenticity of real images. Current research on image inpainting forensics can only effectively detect a specific type of inpainting. To address this problem, a passive forensic method for image inpainting based on a two-branch network was proposed. The high-pass filtered convolutional branch first applied a set of high-pass filters to attenuate the low-frequency components of the image. Features were then extracted with four residual blocks, and two transposed convolutions performed 4× up-sampling to enlarge the feature map. A 5×5 convolution was subsequently applied to attenuate the checkerboard artifacts introduced by the transposed convolutions and to produce a discriminative feature map over the high-frequency components of the image. The dual-attention feature fusion branch first attached a local binary pattern (LBP) feature map to the image via a preprocessing block. A dual-attention convolution block then adaptively integrated the image's local features and global dependencies to capture the differences in content and texture between the inpainted and pristine regions. The features extracted by the dual-attention convolution blocks were fused, and the feature maps were up-sampled in the same way to produce discriminative maps of image content and texture. Extensive experimental results show that, in detecting inpainted regions where objects were removed, the proposed method improved the F1 score by 2.05% and the Intersection over Union (IoU) by 3.53% for the exemplar-based method, and by 1.06% and 1.22% for the deep-learning-based method. Visualization of the results shows that the edges of removed objects can be accurately located in the detected inpainted regions.
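The description outlines the high-pass filtered branch as: a set of fixed high-pass filters, four residual blocks, two transposed convolutions giving 4× up-sampling, and a 5×5 convolution to suppress the resulting checkerboard artifacts. The sketch below is a hypothetical PyTorch rendering of that branch only, not the authors' code; the Laplacian-style kernel, channel widths, downsampling stem, and residual-block layout are illustrative assumptions.

# Hypothetical sketch of the high-pass filtered branch described above; not the authors' implementation.
# Assumptions: 3-channel input, a single Laplacian-style high-pass kernel, and illustrative channel widths.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Plain conv-BN-ReLU residual block used for feature extraction."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class HighPassBranch(nn.Module):
    """High-pass filtering -> four residual blocks -> two stride-2 transposed
    convolutions (4x up-sampling) -> 5x5 conv to suppress checkerboard artifacts."""
    def __init__(self, in_ch=3, feat_ch=32, out_ch=2):
        super().__init__()
        # Fixed depthwise high-pass kernel (stand-in for the paper's filter set)
        # that attenuates the low-frequency content of each input channel.
        hp = torch.tensor([[0., -1., 0.],
                           [-1., 4., -1.],
                           [0., -1., 0.]])
        weight = hp.repeat(in_ch, 1, 1, 1)  # depthwise weight: (in_ch, 1, 3, 3)
        self.highpass = nn.Conv2d(in_ch, in_ch, 3, padding=1,
                                  groups=in_ch, bias=False)
        self.highpass.weight.data.copy_(weight)
        self.highpass.weight.requires_grad = False

        # Downsampling stem (assumed), so the 4x up-sampling restores input size.
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.res_blocks = nn.Sequential(*[ResidualBlock(feat_ch) for _ in range(4)])
        # Two stride-2 transposed convolutions give 4x up-sampling overall.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # 5x5 convolution smooths the checkerboard artifacts left by the
        # transposed convolutions and emits the discriminative feature map.
        self.smooth = nn.Conv2d(feat_ch, out_ch, 5, padding=2)

    def forward(self, x):
        x = self.highpass(x)
        x = self.stem(x)
        x = self.res_blocks(x)
        x = self.up(x)
        return self.smooth(x)


if __name__ == "__main__":
    # Shape check on a dummy 256x256 RGB image.
    net = HighPassBranch()
    y = net(torch.randn(1, 3, 256, 256))
    print(y.shape)  # torch.Size([1, 2, 256, 256])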
Database:
Directory of Open Access Journals
External Link: