Newton's Method in the Context of Gradients (page 3)
[4, 5]). For instance, if the spectral bounds of the operators F'(u) lie between uniform constants M > m > 0 (in the original resp. the energy inner product), then the constant stepsize \alpha_n \equiv 2/(M + m) yields convergence with ratio
\[
q = \frac{M - m}{M + m}
\]
for the sequences (2.3) and (2.5), resp. Clearly, the aim of the change of the inner product is to achieve better spectral bounds in the new inner product. For instance, for PDEs a sometimes dramatic improvement can be achieved by using the Sobolev inner product instead of the original L^2 one (see the monograph [8] on Sobolev gradients).
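The effect of this stepsize rule can be sketched numerically. The following Python fragment is an illustration only (the bounds m = 1, M = 10 and the diagonal test operator A = diag(m, M) are hypothetical choices, not taken from the text); it checks that the error contracts by the factor q = (M - m)/(M + m) per step.

```python
# Minimal sketch: steepest descent u_{n+1} = u_n - alpha * F(u_n) with the
# constant stepsize alpha = 2/(M + m) on phi(u) = (1/2) u^T A u for
# A = diag(m, M).  For this diagonal A, both components of the error
# shrink by exactly q = (M - m)/(M + m) in absolute value per step.

m, M = 1.0, 10.0                 # hypothetical spectral bounds
alpha = 2.0 / (M + m)            # the constant stepsize from the text
q = (M - m) / (M + m)            # the predicted convergence ratio

def F(u):
    """Gradient of phi: F(u) = A u for A = diag(m, M)."""
    return [m * u[0], M * u[1]]

u = [1.0, 1.0]
for _ in range(20):
    g = F(u)
    u = [u[0] - alpha * g[0], u[1] - alpha * g[1]]

err = max(abs(u[0]), abs(u[1]))
print(err, q ** 20)              # the error equals q**n after n = 20 steps
```

Here the minimizer is u = 0, so u itself is the error; the printed values coincide because A is diagonal with eigenvalues exactly m and M.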
2.1.2. Steepest descent under a variable inner product. Assume that the nth term u_n of an iterative sequence is constructed and let B_n : H \to H be a strongly positive bounded self-adjoint linear operator. It follows similarly to (2.4) that the gradient of \phi with respect to the inner product (\cdot,\cdot)_{B_n} is
\[
\nabla_{B_n} \phi(u) = B_n^{-1} F(u) \quad (u \in H). \tag{2.6}
\]
The relation (2.6) means that a one-step iterative sequence
\[
u_{n+1} = u_n - \alpha_n B_n^{-1} F(u_n) \tag{2.7}
\]
(with some constants \alpha_n > 0) is a variable steepest descent iteration corresponding to \phi such that in the nth step the gradient of \phi is taken with respect to the inner product (\cdot,\cdot)_{B_n}.
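The step (2.7) can be illustrated in Python as follows. The 2x2 operator A and the crude preconditioner choice B = diag(A) (held fixed over n for simplicity) are assumptions of this sketch, not choices made in the text.

```python
# Sketch of the variable steepest descent step
#   u_{n+1} = u_n - alpha_n * B_n^{-1} F(u_n)
# for the linear gradient F(u) = A u of a quadratic functional, using the
# cheap preconditioner B = diag(A) (hypothetical, fixed over n) and
# alpha_n = 1.

A = [[4.0, 1.0],
     [1.0, 3.0]]                 # hypothetical symmetric positive operator

def F(u):
    """Gradient of the quadratic functional: F(u) = A u."""
    return [A[0][0] * u[0] + A[0][1] * u[1],
            A[1][0] * u[0] + A[1][1] * u[1]]

def apply_B_inv(v):
    """Apply B^{-1} for B = diag(A), a crude approximation of F'."""
    return [v[0] / A[0][0], v[1] / A[1][1]]

u = [1.0, -2.0]
alpha = 1.0
for _ in range(15):
    s = apply_B_inv(F(u))
    u = [u[0] - alpha * s[0], u[1] - alpha * s[1]]

err = max(abs(u[0]), abs(u[1]))
print(err)                       # far below the initial error of 2
```

Since F(0) = 0, the iterate u_n is itself the error; the diagonal B already contracts it rapidly even though it only approximates F'.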
Several such types of iterative methods are known, including variable metric
methods (see e.g. the monograph [13]). In this context 'variable' is understood as
depending on the step n. We note that Sobolev gradients under variable inner
product can also be defined in the context of continuous steepest descent, and the
inner product may depend continuously on each element of the Sobolev space (see
[11, 12]).
Convergence results for sequences of the form (2.7) are given in [2, 7], formulated again for convex functionals in terms of spectral bounds. Namely, under the stepwise spectral equivalence relation
\[
m_n (B_n h, h) \le (F'(u_n) h, h) \le M_n (B_n h, h) \quad (n \in \mathbb{N},\ h \in H) \tag{2.8}
\]
(with some constants M_n \ge m_n > 0) and assuming the Lipschitz continuity of F', one can achieve convergence with ratio
\[
q = \limsup_{n \to \infty} \frac{M_n - m_n}{M_n + m_n}.
\]
(This convergence is global if \alpha_n includes damping.) In particular, superlinear convergence can also be obtained when q = 0, and its rate is characterized by the speed at which M_n/m_n \to 1.
Clearly, the variable steepest descent iteration (2.7) can also be regarded as a quasi-Newton method, since the relation (2.8) provides the operators B_n as approximations of F'(u_n). Moreover, the choice B_n = F'(u_n) yields optimal spectral bounds m_n = M_n = 1 in (2.8), and the corresponding variable steepest descent iteration (2.7) becomes Newton's method with quadratic convergence speed.

EJDE-2007/124
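This limiting case can be sketched in one dimension. The uniformly convex functional phi(u) = u^2/2 + u^4/4 below is a hypothetical example chosen for this illustration, not taken from the text; with B_n = F'(u_n) and \alpha_n = 1, the step (2.7) reduces to Newton's method and the error decays quadratically.

```python
# Sketch: with B_n = F'(u_n) and alpha_n = 1, the step (2.7) is Newton's
# method u_{n+1} = u_n - F'(u_n)^{-1} F(u_n).  Here F(u) = u + u**3 is
# the gradient of phi(u) = u**2/2 + u**4/4 (a hypothetical example); the
# unique minimizer is u = 0, and F'(u) = 1 + 3u**2 >= 1 > 0.

def F(u):
    return u + u ** 3            # gradient of phi

def dF(u):
    return 1.0 + 3.0 * u ** 2    # F'(u), uniformly positive

u = 0.5
errors = []
for _ in range(5):
    u = u - F(u) / dF(u)         # Newton step: B_n = F'(u_n)
    errors.append(abs(u))

print(errors)                    # each error is below the square of the last
```

The printed sequence makes the quadratic convergence speed visible: each error is smaller than the square of its predecessor.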
Karátson, János & Neuberger, J. W. Newton’s Method in the Context of Gradients, article, September 24, 2007; San Marcos, Texas. (https://digital.library.unt.edu/ark:/67531/metadc1164512/m1/3/: accessed July 18, 2024), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu; crediting UNT College of Arts and Sciences.