Using generative adversarial networks to translate microresistivity image logs of carbonates into synthetic core images with accurate Dunham textures

Core images provide a high-resolution record of the textures and fabrics of sedimentary rocks, but their availability is often limited by cost and/or poor core recovery. An alternative is to use high-resolution micro-resistivity images acquired through wireline logging, such as those from the Formation MicroScanner (FMS). However, interpreting FMS image logs requires specialized knowledge that not all geologists possess. In this study, we explore the potential of Generative Adversarial Networks (GANs) to generate realistic core images from FMS logs using supervised image-to-image translation models. We trained a total of 10 models, testing various combinations of FMS data input formats, image processing methods, GAN architectures, and training hyperparameters. The supervised pix2pixHD model trained on concatenated FMS pad images with a batch size of 4 produces the most realistic core images: compared with the ground-truth core images, these have low Root Mean Square Error (RMSE), high Peak Signal-to-Noise Ratio (PSNR), and high Structural Similarity Index Measure (SSIM) values. Our results also stress the importance of a diversified data set to reduce bias and enhance the applicability of the trained models to other wells and fields. Blind testing with geologists shows that classification accuracy for Dunham textures increases from 14% on FMS image logs to 73% on synthetic core images. The approach proposed here thus has the potential to transform subsurface characterization by bridging the gap between the limited availability of core samples and the need for comprehensive geological facies classification.
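The three image-similarity metrics named in the abstract could be computed as in the sketch below, assuming 8-bit greyscale images stored as NumPy arrays (the paper does not specify its implementation; in practice a library routine such as `skimage.metrics.structural_similarity` is typically used for SSIM). Note that `ssim_global` here is a simplified single-window form of SSIM; the standard metric averages the same formula over sliding local windows.

```python
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root Mean Square Error between two images (lower is better)."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB (higher is better)."""
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

def ssim_global(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Simplified global SSIM (no sliding window), in [-1, 1]; 1 = identical.

    Uses the standard stabilizing constants c1 = (0.01 L)^2, c2 = (0.03 L)^2.
    """
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(
        (2 * mu_a * mu_b + c1) * (2 * cov + c2)
        / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    )
```

A synthetic core image that matches the ground truth yields RMSE near 0, large PSNR, and SSIM near 1, which is the pattern the abstract reports for the best model.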