A common problem in the geosciences is the need to deduce unseen physical structure from limited observations. A ground-penetrating radar survey, for example, attempts to infer underground structure without any in situ measurements. This class of problems is called inversion: an assumed physical model is repeatedly adjusted until it is consistent with the observations.
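The adjust-until-consistent loop at the heart of inversion can be sketched in a few lines. This is a deliberately minimal illustration, not the method used in the study: the forward model here is a hypothetical linear stand-in for a real physics simulation (e.g., radar travel times), and the update rule is plain gradient descent on the data misfit.

```python
import numpy as np

def forward(params, x):
    """Hypothetical forward model: predicts data from model parameters.
    A real geophysical forward model would simulate the physics."""
    a, b = params
    return a * x + b

def invert(x, observed, steps=500, lr=1.0):
    """Toy inversion: repeatedly adjust the assumed model until its
    predictions are consistent with the observations."""
    params = np.zeros(2)                  # initial assumed model
    for _ in range(steps):
        residual = forward(params, x) - observed
        # gradient of the mean-squared misfit w.r.t. (a, b)
        grad = np.array([np.mean(residual * x), np.mean(residual)])
        params -= lr * grad
    return params

x = np.linspace(0.0, 1.0, 20)
observed = forward((2.0, -0.5), x)        # synthetic "measurements"
estimate = invert(x, observed)            # recovers roughly (2.0, -0.5)
```

Real inversions differ mainly in scale and in being ill-posed: many underground models fit the same data, which is exactly why the choice of prior discussed below matters.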
The results of inversion can be heavily affected by the choice of model, which acts as a Bayesian prior. Because models are generally less complex than the physical world, the process can yield an oversimplified solution. To combat these difficulties, it is common to augment a theoretical model with known real-world instances; this combination can produce many model permutations, providing more realistic diversity for the prior.
Recent advances in this approach have come from machine learning. Convolutional neural networks similar to those used in computer vision have proven successful at integrating many training samples to produce more nuanced priors with increased spatial resolution. The authors examine one such neural network approach, the variational autoencoder (VAE).
Variational autoencoders are capable of more than just “regurgitating” past training data: they can generate new samples that are consistent with the sorts of patterns observed in the input images. The authors test this capability by comparing VAEs trained on individual input images with ones trained on sets of images, across both synthetic and real observational data.
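The generative step that makes this possible is simple in outline: after training, a VAE produces new samples by drawing latent vectors from a standard normal distribution and passing them through its decoder. The sketch below shows only that sampling mechanism; the single-layer decoder with random placeholder weights stands in for a trained network and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, image_dim = 8, 64  # hypothetical sizes

# Placeholder "decoder": one affine layer + sigmoid. In a real VAE
# these weights would be learned from the training images.
W = rng.standard_normal((image_dim, latent_dim)) * 0.1
b = np.zeros(image_dim)

def decode(z):
    """Map a latent vector to a synthetic (flattened) image.
    The sigmoid keeps pixel values in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W @ z + b)))

# Each draw from the latent space yields a novel sample, shaped by
# the decoder's structure rather than copied from training data.
samples = [decode(rng.standard_normal(latent_dim)) for _ in range(3)]
```

In the inversion setting, such decoded samples serve as candidate subsurface models, so the prior's diversity is set by the latent space rather than by a fixed catalog of images.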
One key result of the study is that VAEs trained on collections of images appear to perform better than those trained on a single input. The combined VAE performs nearly as well as the single best training image for both synthetic and field data, and it is significantly more efficient to combine the training inputs into one VAE and perform a single inversion than to search for the “right match” model by running many inversions with different inputs.