Abstract
Inverse problems are usually ill-posed in the sense that their solution is unstable with respect to data perturbations, so regularization methods have to be used for their stable solution. Two drawbacks of standard regularization methods are:
- saturation, i.e., only suboptimal approximations can be found for smooth solutions; this is the case, e.g., for Tikhonov regularization,
- the large number of iterations needed, e.g., for Landweber iteration (both standard methods are recalled in the sketch below).
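For reference, in a generic notation not taken from the abstract, consider a linear ill-posed operator equation $Ax = y$ with noisy data $y^\delta$, $\|y^\delta - y\| \le \delta$. The two standard methods mentioned above then read:

```latex
% Tikhonov regularization (saturates for smooth solutions)
x_\alpha^\delta = \arg\min_x \; \|Ax - y^\delta\|^2 + \alpha \|x\|^2
                = (A^*A + \alpha I)^{-1} A^* y^\delta

% Landweber iteration (typically requires many iterations)
x_{k+1}^\delta = x_k^\delta + \omega\, A^*\bigl(y^\delta - A x_k^\delta\bigr),
\qquad 0 < \omega < 2/\|A\|^2
```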
A framework that allows one to overcome both effects for certain classes of inverse problems is regularization in Hilbert scales. There, the solution is sought in a different space out of a scale of spaces (a Hilbert scale) over the pre-image space, but convergence is achieved in the original space.
Regularization methods in Hilbert scales can be viewed as modified (preconditioned) versions of standard methods.
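A minimal sketch of this preconditioning viewpoint, in standard Hilbert-scale notation and not specific to this paper: with a self-adjoint, densely defined operator $L$ generating the scale and a smoothness index $s$, the penalty $\|x\|^2$ is replaced by the stronger norm $\|L^s x\|^2$, and the Landweber update is preconditioned by $L^{-2s}$:

```latex
% Tikhonov regularization in Hilbert scales
x_\alpha^\delta = \arg\min_x \; \|Ax - y^\delta\|^2 + \alpha \|L^s x\|^2
                = (A^*A + \alpha L^{2s})^{-1} A^* y^\delta

% Landweber iteration in Hilbert scales (preconditioned Landweber)
x_{k+1}^\delta = x_k^\delta + \omega\, L^{-2s} A^*\bigl(y^\delta - A x_k^\delta\bigr)
```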
In order to make the advantages of the Hilbert scale approach applicable to a new class of problems,
we propose to use a scale of spaces over the image space instead. This results in a new family of Y-scale regularization methods, whose (optimal) convergence properties are analyzed. One of the key steps in the analysis is the formulation of an adequate a-posteriori stopping rule, which provides optimal convergence rates. The theoretical results are illustrated in several numerical examples.
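The abstract does not state the concrete form of the a-posteriori stopping rule. As a generic illustration of how such a rule enters an iterative method, the following Python sketch stops a plain Landweber iteration by Morozov's discrepancy principle (stop once $\|Ax_k - y^\delta\| \le \tau\delta$); the function name, parameters, and test problem are hypothetical and not taken from the paper.

```python
import numpy as np

def landweber_discrepancy(A, y_delta, delta, tau=1.5, omega=None, max_iter=10_000):
    """Landweber iteration with a discrepancy-principle (a-posteriori) stopping rule.

    A        : (m, n) array, discretization of the forward operator
    y_delta  : noisy data with noise level ||y_delta - y_exact|| <= delta
    tau      : discrepancy parameter, tau > 1
    omega    : step size, must satisfy 0 < omega < 2 / ||A||^2
    """
    _, n = A.shape
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # safe default step size
    x = np.zeros(n)
    for k in range(max_iter):
        residual = y_delta - A @ x
        # a-posteriori stopping: terminate once the residual reaches the noise level
        if np.linalg.norm(residual) <= tau * delta:
            return x, k
        x = x + omega * (A.T @ residual)          # gradient step for ||Ax - y_delta||^2
    return x, max_iter

# Usage on a small synthetic problem
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 80)) / 80
x_true = rng.standard_normal(80)
delta = 1e-3
noise = rng.standard_normal(80)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)
x_rec, k_stop = landweber_discrepancy(A, y_delta, delta)
print(f"stopped after {k_stop} iterations")
```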
| Original language | English |
| --- | --- |
| Number of pages | 16 |
| Publication status | Published - 2006 |
Fields of science
- 101 Mathematics
- 101020 Technical mathematics