
Adaptive Sparse Inversion

IP.com Disclosure Number: IPCOM000241805D
Publication Date: 2015-Jun-01
Document File: 4 page(s) / 583K

Publishing Venue

The IP.com Prior Art Database

Abstract

We present a solution for the parameterization of underdetermined geophysical inverse problems. In such problems, physical observations are made at the earth's surface and used to determine the distribution of material properties within the earth. The earth is much more complex than our inaccurate and incomplete surface measurements could ever detect; thus, the number of unknowns representing the earth is much larger than the number of constraints obtained at its surface. The question then arises: how many unknowns, or parameters, do we use to populate our earth model before reconciling those parameters with our constraints through inversion? Usually, this is answered by trial and error. A human iterates over multiple parameterizations, seeking a set of parameters that discretizes the earth finely enough to achieve a proper fit to the data, in line with some expectation of "resolution", while remaining coarse enough to fit into computer memory and be amenable to numerical solution in a useful amount of time. We propose an algorithm that removes the human effort from the selection of model parameterization: rather than being fixed a priori, the parameterization is adapted during the course of inversion based on the demands of the data. Parsimony in the model is maintained, such that the model starts with only a few independent seed parameters, and new parameters are added only as needed to fit the data. The benefits are twofold: the inversion is faster because the number of free parameters is kept to a minimum, and the temptation to over-ascribe resolution to the data is removed.
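As a rough illustration of the adaptive parameterization described above, the following Python sketch (not part of the original disclosure) outlines the seed-and-refine loop. The helper functions invert(), misfit(), and refine_where_needed() are hypothetical placeholders for the forward/inverse machinery and the refinement criterion, which this abbreviated text does not specify.

    # Illustrative sketch only -- not the disclosed implementation.
    # invert(), misfit(), and refine_where_needed() are hypothetical placeholders.
    def adaptive_sparse_inversion(data, seed_params, tol, max_rounds=20):
        params = seed_params                # start with a few independent seed parameters
        model = invert(params, data)        # reconcile the parameters with the constraints
        for _ in range(max_rounds):
            if misfit(model, data) <= tol:  # the data are fit: stop adding parameters
                break
            # add new parameters only where the data demand them (parsimony)
            params = refine_where_needed(params, model, data)
            model = invert(params, data)
        return model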



ADAPTIVE SPARSE INVERSION


Syn-inversion adaptive parameterization is not a novel idea, but the particular approach presented here has not been encountered by the authors before. First, we start with a model regularization that penalizes either the 1-norm of the gradients across the parameter interfaces or the 1-norm of the parameters themselves. The former formulation produces "blocky" models, while the latter produces "spiky" models. Both make use of quadratic programming (QP), or, more specifically, the non-negative least squares (NNLS) algorithm. The name "non-negative" refers to the constraint that the model vector contain only values that are zero or positive, never negative. Such a requirement imposed on an inverse problem results in sparse models. In fact, it has been proven that the number of non-zero elements in a solution vector can never exceed the number of constraints, i.e. one will never end up with more active model parameters than there are data points. In practice, especially for non-linear inverse problems, the number of non-zero elements in an NNLS solution is far smaller than the number of data. Note that the use of NNLS is quite different from inverting for the logarithm of a parameter with standard least squares, because the latter solution can never reach zero.
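To make the sparsity property concrete, here is a minimal Python sketch (not part of the original disclosure; the operator, dimensions, and synthetic data are assumed purely for illustration) that solves an underdetermined non-negative least-squares problem with scipy.optimize.nnls and counts the non-zero parameters in the solution, which can never exceed the number of constraints.

    # Minimal demonstration of NNLS sparsity -- illustrative only, not the disclosed code.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    m, n = 20, 500                            # 20 "surface" constraints, 500 model unknowns
    A = rng.normal(size=(m, n))               # hypothetical linearized forward operator
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 3.0, 5)   # a sparse "earth"
    b = A @ x_true                            # synthetic observations

    x_hat, res = nnls(A, b)                   # minimize ||Ax - b||_2 subject to x >= 0

    print("active (non-zero) parameters:", np.count_nonzero(x_hat))      # at most m = 20
    print("residual norm:", res)

By contrast, an unconstrained least-squares solution (or an inversion for the logarithm of the parameters) would populate essentially every element of the model vector with a non-zero value.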

Once equipped with an inverse method which produces...