Comparison of Methods to Segment Variable-Contrast XCT Images of Methane-Bearing Sand Using U-Nets Trained on Single Dataset Sub-Volumes

Author: Fernando J. Alvarez-Borges, Oliver N. F. King, Bangalore N. Madhusudhan, Thomas Connolley, Mark Basham, Sharif I. Ahmed
Language: English
Year of publication: 2022
Subject:
Source: Methane, Vol 2, Iss 1, Pp 1-23 (2022)
Document type: article
ISSN: 2674-0389
DOI: 10.3390/methane2010001
Description: Methane (CH4) hydrate dissociation and CH4 release are potential geohazards currently investigated using X-ray computed tomography (XCT). Image segmentation is an important data-processing step in this type of research. However, it is often time-consuming, computing-resource-intensive, operator-dependent, and tailored to each XCT dataset owing to differences in greyscale contrast. In this paper, U-Nets, a class of convolutional neural network, are used to segment synchrotron XCT images of CH4-bearing sand during hydrate formation and to extract porosity and CH4 gas saturation. Three U-Net deployments previously untried for this task are assessed: (1) a bespoke 3D hierarchical method, (2) a 2D multi-label, multi-axis method, and (3) RootPainter, a 2D U-Net application with interactive corrections. The U-Nets are trained on small, targeted, hand-annotated datasets to reduce operator time. The segmentation accuracy of all three methods was found to surpass that of mainstream watershed and thresholding techniques. Accuracy decreases slightly for low-contrast data, which affects volume-fraction measurements, but the errors are small compared with those of gravimetric methods. Moreover, U-Net models trained on low-contrast images can be used to segment higher-contrast datasets without further training. This demonstrates model portability, which can expedite the segmentation of large datasets over short timespans. A minimal illustrative sketch of the volume-fraction computation is given after this record.
Database: Directory of Open Access Journals
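
The abstract describes extracting porosity and CH4 gas saturation from segmented XCT volumes. The Python sketch below is an illustration only, not code from the paper: it assumes a 3D NumPy array of integer phase labels with hypothetical codes (0 = sand grain, 1 = brine/hydrate, 2 = CH4 gas) and computes porosity as pore voxels over total voxels and gas saturation as gas voxels over pore voxels.

import numpy as np

# Hypothetical label codes; the actual phase labels used in the paper may differ.
GRAIN, FLUID, GAS = 0, 1, 2

def porosity_and_gas_saturation(labels):
    """Return (porosity, CH4 gas saturation) from a 3D array of phase labels."""
    total_voxels = labels.size
    pore_voxels = np.count_nonzero(labels != GRAIN)  # pore space = everything that is not grain
    gas_voxels = np.count_nonzero(labels == GAS)
    porosity = pore_voxels / total_voxels            # pore volume / total volume
    gas_saturation = gas_voxels / pore_voxels if pore_voxels else 0.0  # gas volume / pore volume
    return porosity, gas_saturation

# Toy example on a random label volume (stand-in for a segmented XCT sub-volume).
rng = np.random.default_rng(seed=0)
toy_labels = rng.integers(low=0, high=3, size=(64, 64, 64))
print(porosity_and_gas_saturation(toy_labels))

The same two ratios can be applied to label volumes produced by any of the three U-Net deployments (or by watershed/thresholding baselines), which is why small differences in segmentation accuracy propagate directly into the volume-fraction measurements discussed in the abstract.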