U-Net versus Pix2Pix: a comparative study on degraded document image binarization

Authors: Showmik Bhowmik, Riktim Mondal, Arpan Basu, Ram Sarkar
Year of publication: 2020
Source: Journal of Electronic Imaging, 29
ISSN: 1017-9909
DOI: 10.1117/1.jei.29.6.063019
Abstract: Document image binarization is the process in which the pixels of a document image are classified into two groups: foreground and background. This task becomes challenging in the presence of the various types of degradation and noise found in such images. In the recent past, researchers have increasingly relied on deep learning-based approaches to solve the document image binarization problem. Among these, one group of methods treats the segmentation as a pixel-level classification problem, whereas another treats it as an image-to-image translation problem. We explore two popular deep learning-based architectures, one from each group, namely U-Net and Pix2Pix, and present a comparative assessment of their performance when applied to degraded document image binarization. In this study, no preprocessing or postprocessing methods are applied, which helps us gauge the actual strength of these architectures for the said purpose. For the performance evaluation and comparative assessment, six publicly available standard datasets are considered, namely document image binarization competition 2013 (DIBCO 2013), H-DIBCO 2014, H-DIBCO 2016, DIBCO 2017, H-DIBCO 2018, and DIBCO 2019. The performance of these architectures is compared separately with the best-performing methods of the respective binarization competitions, some state-of-the-art non-deep-learning-based methods, and some recently published deep learning-based methods. The obtained results confirm that in most cases U-Net outperforms the Pix2Pix model.
Database: OpenAIRE
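
Illustrative note: the following is a minimal PyTorch sketch of the pixel-level classification approach described in the abstract, i.e., a small U-Net-style encoder-decoder that predicts a per-pixel foreground probability which is then thresholded to yield the binarized image. It is not the authors' implementation; the model name (TinyUNet), the network depth and channel widths, and the binary cross-entropy training suggestion are assumptions chosen for brevity.

# Minimal sketch (assumptions noted above), not the code evaluated in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)           # grayscale document patch as input
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)          # 32 skip channels + 32 upsampled channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)          # one logit per pixel (foreground vs. background)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection from enc2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection from enc1
        return self.out(d1)                     # raw logits; apply sigmoid outside

# Usage sketch: train with binary cross-entropy against ground-truth masks,
# then threshold the sigmoid output at 0.5 to obtain the binary image.
model = TinyUNet()
x = torch.rand(1, 1, 256, 256)                  # dummy degraded document patch
prob = torch.sigmoid(model(x))
binary = (prob > 0.5).float()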