Electronic Journal of Differential Equations, Vol. 2007(2007), No. 124, pp. 1-13.
ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu
ftp ejde.math.txstate.edu (login: ftp)
NEWTON'S METHOD IN THE CONTEXT OF GRADIENTS
J. KARÁTSON, J. W. NEUBERGER
ABSTRACT. This paper gives a common theoretical treatment for gradient and
Newton type methods for general classes of problems. First, for Euler-Lagrange
equations Newton's method is characterized as an (asymptotically) optimal
variable steepest descent method. Second, Sobolev gradient type minimization
is developed for general problems using a continuous Newton method which
takes into account a 'boundary condition' operator.
1. INTRODUCTION
Gradient and Newton type methods are among the most important approaches
for the solution of nonlinear equations, both in R^n and in abstract spaces. The
latter are often connected to PDE applications, and here the involvement of Sobolev
spaces has proved an efficient strategy, see e.g. [8, 12] on the Sobolev gradient
approach and [1, 5] on Newton type methods. Further applications of Sobolev
space iterations are found in [4].
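As a rough illustration of the two families of methods discussed here (our own sketch, not taken from the paper), the following Python fragment contrasts Newton's method with steepest descent applied to the least-squares functional φ(x) = ½‖F(x)‖², whose gradient is F'(x)ᵀF(x). The test system F, the starting point, and the step size are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical test problem: solve F(x) = 0, where
# F(x) = (x0^3 - x1, x1^3 - x0); one real root is (1, 1).
def F(x):
    return np.array([x[0]**3 - x[1], x[1]**3 - x[0]])

def J(x):
    # Jacobian F'(x) of the system above.
    return np.array([[3 * x[0]**2, -1.0],
                     [-1.0, 3 * x[1]**2]])

def newton(x, steps=20):
    # Newton iteration: x <- x - F'(x)^{-1} F(x).
    for _ in range(steps):
        x = x - np.linalg.solve(J(x), F(x))
    return x

def steepest_descent(x, alpha=0.05, steps=2000):
    # Fixed-step gradient descent on phi(x) = 0.5 * ||F(x)||^2,
    # whose gradient is J(x)^T F(x).
    for _ in range(steps):
        x = x - alpha * J(x).T @ F(x)
    return x

x0 = np.array([1.5, 0.8])
x_newton = newton(x0.copy())
x_grad = steepest_descent(x0.copy())
```

Both iterations drive the residual ‖F(x)‖ toward zero from this starting point, but Newton's method does so in a handful of steps while the fixed-step descent needs many more, which is the efficiency gap that variable steepest descent methods, discussed below, aim to close.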
The two types of methods (gradient and Newton) are generally considered as two
different approaches, although their connection has been studied in some papers,
see e.g. [3] in the context of continuous steepest descent, [7] on variable preconditioning and quasi-Newton methods, and [8, Chapter 7] on Newton's method and
constrained optimization.
The goal of this paper is to establish a common theoretical framework in which
gradient and Newton type methods can be treated, and thereby to clarify the
relation of the two types of methods for general classes of problems.
Note that there are two distinct ways systems of differential equations may be
placed into an optimization setting. Sometimes it is possible to show that a given
system of PDEs forms the Euler-Lagrange equations for some functional φ. In the more
general case one looks for the critical points of a least-squares functional associated
with the given system. Furthermore, one can approach Newton type methods
also in two different ways: from a numerical point of view it is the study of the discrete
(i.e. iterative) solution method that is most relevant, whereas continuous Newton
methods can lead to attractive theoretical results.
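The two distinctions drawn in this paragraph can be summarized schematically (in our notation, assumed for illustration rather than quoted from the paper). Writing the abstract problem as F(u) = 0:

```latex
% (a) Euler-Lagrange case: F is the gradient (derivative) of a
%     functional \phi, so solutions are critical points of \phi:
\[
F(u) = \phi'(u) = 0 .
\]
% (b) General case: one seeks critical points of the associated
%     least-squares functional
\[
J(u) = \tfrac{1}{2}\,\| F(u) \|^{2} .
\]
% Two formulations of Newton's method:
% discrete (iterative):
\[
u_{k+1} = u_k - F'(u_k)^{-1} F(u_k) ,
\]
% continuous (Newton flow):
\[
u'(t) = -\,F'\bigl(u(t)\bigr)^{-1} F\bigl(u(t)\bigr), \qquad u(0) = u_0 .
\]
```

The discrete iteration is the classical numerical scheme, while the continuous flow is its formal limit as the step size tends to zero; the latter is the setting in which the paper's theoretical results on continuous Newton methods are formulated.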
The first part of this paper characterizes Newton's method in the Euler-Lagrange
case as an (asymptotically) optimal variable steepest descent method for the itera-
tive minimization of the corresponding functional. The second part treats the more
2000 Mathematics Subject Classification. 65J15.
Key words and phrases. Newton's method; Sobolev gradients.
©2007 Texas State University - San Marcos.
Submitted August 8, 2005. Published September 24, 2007.
Karátson, János & Neuberger, J. W. Newton’s Method in the Context of Gradients, article, September 24, 2007; San Marcos, Texas. (https://digital.library.unt.edu/ark:/67531/metadc1164512/m1/1/: accessed July 18, 2024), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu; crediting UNT College of Arts and Sciences.