Description: |
Determining the characteristics of newly drilled wells (e.g. reservoir formation properties) is a major challenge. One related task is well-interval similarity assessment: if we can learn to predict which oilfields are rich and which are not by comparing them with existing ones, this will lead to significant cost reductions. Applying machine learning to oil & gas data imposes three main requirements: high quality even on unreliable data, low manual effort, and interpretability of the model itself. Neural networks can address these challenges, and adopting a self-supervised paradigm makes model construction automatic. However, existing approaches lack interpretability, and their quality falls short of what applications require. In particular, approaches based on LSTMs suffer from short-term memory, paying more attention to the end of a sequence. Neural networks with the Transformer architecture instead cast their attention over the entire sequence when making a decision. To make them more computationally efficient and more robust to noisy or missing values, we introduce a limited attention mechanism, similar to that of the Informer architecture, that considers only the top correspondences. We run experiments on an open dataset with more than $20$ wells, making our experiments reliable and suitable for industrial use. The best results were obtained with our adaptation of the Informer variant of the Transformer, with ROC AUC $0.982$. It outperforms classical approaches with ROC AUC $0.824$, recurrent neural networks (RNNs) with ROC AUC $0.934$, and the direct use of the Transformer with ROC AUC $0.961$. We show that the well-interval representations obtained by Informer are of higher quality than those extracted by RNNs. Moreover, the learned attention is interpretable, as it corresponds to the importance of particular parts of an interval for the similarity estimate.
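A minimal sketch of the limited (top-k) attention idea mentioned above, in the spirit of Informer's ProbSparse attention: only the u queries with the highest sparsity score attend to all keys, while the remaining queries fall back to a cheap default. The function and parameter names (topk_sparse_attention, u) are illustrative assumptions, not the authors' implementation, and for clarity the full score matrix is computed here, whereas the actual ProbSparse trick estimates the score from a sample of keys.

    import torch
    import torch.nn.functional as F

    def topk_sparse_attention(q, k, v, u):
        # q, k, v: (batch, seq_len, d); u: number of "active" queries kept.
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5                # (B, L, L)
        # Sparsity measure per query: max score minus mean score.
        measure = scores.max(dim=-1).values - scores.mean(dim=-1)  # (B, L)
        top_idx = measure.topk(u, dim=-1).indices                  # (B, u)
        # Default output for the "lazy" queries: the mean of the values.
        out = v.mean(dim=1, keepdim=True).expand_as(v).clone()
        # Full softmax attention only for the selected top-u queries.
        b_idx = torch.arange(q.size(0)).unsqueeze(-1)              # (B, 1)
        out[b_idx, top_idx] = F.softmax(scores[b_idx, top_idx], dim=-1) @ v
        return out

    # Example: a batch of 2 length-64 sequences; only 8 queries attend fully.
    q = k = v = torch.randn(2, 64, 16)
    out = topk_sparse_attention(q, k, v, u=8)

Restricting full attention to a small set of dominant queries is what makes this family of mechanisms cheaper than dense self-attention and less sensitive to noisy or missing positions, since uninformative queries never drive the output.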