Abstract
In this paper we propose an extension of the iteratively regularized Gauss-Newton method to the Banach space setting by defining the iterates via convex optimization problems. We consider several a posteriori stopping rules to terminate the iteration and present a detailed convergence analysis. A remarkable feature is that each convex optimization problem may include non-smooth penalty terms, such as L1 and total-variation-like penalty functionals. This enables us to reconstruct special features of solutions, such as sparsity and discontinuities, in practical applications. Numerical experiments on parameter identification in partial differential equations are reported to test the performance of our method.
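The method the abstract describes can be sketched in a few lines: each Gauss-Newton step linearizes the forward operator and solves a convex subproblem with a non-smooth penalty. The sketch below is a minimal illustration only, not the authors' algorithm: it uses an L1 penalty, solves each subproblem by proximal gradient descent (ISTA), and decays the regularization parameter geometrically; the toy forward map, the function names, and all parameter values are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (component-wise soft shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irgnm_l1(F, dF, y, x0, alpha0=1.0, q=0.5, n_outer=10, n_inner=200):
    """Illustrative iteratively regularized Gauss-Newton sketch with L1 penalty.

    Each outer step approximately solves the convex subproblem
        min_x 0.5 * ||dF(x_n) (x - x_n) + F(x_n) - y||^2 + alpha_n * ||x||_1
    by proximal gradient (ISTA). This is a didactic stand-in, not the
    paper's method or stopping rules.
    """
    x = x0.copy()
    alpha = alpha0
    for _ in range(n_outer):
        A = dF(x)                       # Jacobian at the current iterate
        b = y - F(x) + A @ x            # linearized data
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
        step = 1.0 / max(L, 1e-12)
        z = x.copy()
        for _ in range(n_inner):
            grad = A.T @ (A @ z - b)
            z = soft_threshold(z - step * grad, step * alpha)
        x = z
        alpha *= q                      # geometric decay of regularization
    return x

# Toy example: mildly nonlinear forward map with a sparse exact solution.
F = lambda x: np.array([x[0] + 0.1 * x[1] ** 2, x[1]])
dF = lambda x: np.array([[1.0, 0.2 * x[1]], [0.0, 1.0]])
x_true = np.array([1.0, 0.0])
y = F(x_true)
x_rec = irgnm_l1(F, dF, y, x0=np.array([0.5, 0.5]))
```

In a realistic parameter identification problem the forward map `F` would be a PDE solution operator and the subproblem would be solved by a more sophisticated convex solver; the structure of the outer loop, however, stays the same.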
Original language | English
---|---
Pages (from-to) | 647-683
Number of pages | 37
Journal | Numerische Mathematik
Volume | 124
Issue number | 4
DOIs |
Publication status | Published - Aug 2013