
Extrapolation for Univariate Optimization

IP.com Disclosure Number: IPCOM000085946D
Original Publication Date: 1976-Jun-01
Included in the Prior Art Database: 2005-Mar-03
Document File: 2 page(s) / 15K

Publishing Venue

IBM

Related People

Wolfe, PS: AUTHOR

Abstract

Minimizing a function of one variable is a task common to most optimization procedures, accounting for almost all of the repeated evaluations of the function of several variables whose minimum is sought, and thus for most of the computing time required in performing the overall task. Accordingly, any improvement in the efficiency of univariate minimization is immediately reflected in improvement of general optimization schemes, as well as being of value for the smaller class of purely one-variable problems.


For the most important problems and methods of solution it is normally assumed that not only the value of the function being minimized, but also its first derivatives, can be obtained for any value x of its independent variables. Thus, for the problem of minimizing the function f of the single variable x, it is assumed that f(x) and f'(x) are available for given x. Further, the minimization will be started from a known value x = x[0]; the direction in which f decreases is known from the sign of f'(x[0]), so it may be supposed without loss of generality that f'(x[0]) < 0. (x[k] will be written where the subscript k would normally be used.)

Finally, there will be at hand an initial guess x[1] at the approximate location of the minimum of f for x > 0; a good way of making that guess has been established [1]. For simplicity of notation it will be supposed that the origin of the (x,f) plane is at (x[0], f(x[0])), so the starting conditions are f(0) = 0, f'(0) < 0.
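As a concrete illustration of this normalization, consider the following minimal sketch in Python; the function names and the quadratic test function are illustrative, not part of the original disclosure:

def shift_origin(f, fprime, x0):
    # Move the origin to (x0, f(x0)), so the shifted function g has
    # g(0) = 0 while the derivative is unchanged: g'(t) = f'(x0 + t).
    fx0 = f(x0)
    return (lambda t: f(x0 + t) - fx0), (lambda t: fprime(x0 + t))

# Example: f(x) = (x - 3)^2 started at x0 = 0 gives g(0) = 0, g'(0) = -6 < 0.
g, gp = shift_origin(lambda x: (x - 3.0)**2, lambda x: 2.0*(x - 3.0), 0.0)
assert g(0.0) == 0.0 and gp(0.0) < 0.0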

It is important to establish a criterion for an acceptable approximate univariate minimizer x of f which can be shown to ensure the convergence of the algorithm calling for univariate minimization, but which does not require high accuracy. Such a criterion, now widely used, has been given [2]. It consists in these two requirements on x: (i) f'(x) >= A f'(0) and (ii) f(x) <= B f'(0) x, where the constants A, B satisfy 0 < B < A < 1/2. It is easy to show that such an x exists if f'(0) < 0.
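In code, the acceptance test reads as follows (a sketch assuming the normalization f(0) = 0, f'(0) < 0 above; the constants A = 0.25 and B = 0.1 are illustrative choices satisfying 0 < B < A < 1/2):

def acceptable(f, fprime, x, A=0.25, B=0.1):
    # (i) f'(x) >= A f'(0)  and  (ii) f(x) <= B f'(0) x, with f(0) = 0.
    d0 = fprime(0.0)          # f'(0), assumed negative
    return fprime(x) >= A * d0 and f(x) <= B * d0 * x

# With f(x) = (x - 3)^2 - 9 (so f(0) = 0, f'(0) = -6), the exact
# minimizer x = 3 passes the test.
f = lambda x: (x - 3.0)**2 - 9.0
fp = lambda x: 2.0 * (x - 3.0)
assert acceptable(f, fp, 3.0)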

A univariate procedure is conveniently divided into two phases, extrapolation and interpolation. The initial phase is extrapolation; the interpolation phase is entered when two successive values x[k-1] < x[k] have been found such that the existence of a point x between them satisfying (i), (ii) can be shown; the interpolation phase then finds it. For the present purposes, it suffices to note that the existence of such a point can be shown when: (iii) x[k-1] satisfies (ii) and f'(x[k-1]) < 0, and (iv) x[k] either satisfies (i) or violates (ii).
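The bracketing test (iii), (iv) can be written directly in terms of the acceptance constants (again a sketch; the names are mine, not the original's):

def brackets(f, fprime, x_prev, x_k, A=0.25, B=0.1):
    # An acceptable point must lie between x_prev and x_k when
    # (iii) x_prev satisfies (ii) with f'(x_prev) < 0, and
    # (iv)  x_k either satisfies (i) or violates (ii).
    d0 = fprime(0.0)
    cond_iii = f(x_prev) <= B * d0 * x_prev and fprime(x_prev) < 0.0
    cond_iv = fprime(x_k) >= A * d0 or f(x_k) > B * d0 * x_k
    return cond_iii and cond_iv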

The method described herein concerns the extrapolation phase of univariate minimization: finding x[k-1] and x[k] satisfying (iii), (iv) above. A standard procedure, used in all but one of the modern routines examined so far and suggested in all the current texts, is to 'step out' by factors of two: set x[k+1] = x[k] + 2(x[k] - x[k-1]) until f'(x[k+1]) > 0. Such a pr...
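The standard 'stepping out' procedure just described can be sketched as follows (illustrative only, under the same assumptions as above; this is the conventional scheme the disclosure improves upon, not the new method, whose description is truncated in this extract):

def step_out(fprime, x1, max_steps=50):
    # Starting from x[0] = 0 and the initial guess x[1], double the
    # step until f'(x[k+1]) > 0; return the pair (x[k-1], x[k]) that
    # brackets a sign change of f'.
    x_prev, x_k = 0.0, x1
    for _ in range(max_steps):
        if fprime(x_k) > 0.0:
            return x_prev, x_k
        x_prev, x_k = x_k, x_k + 2.0 * (x_k - x_prev)
    raise RuntimeError("no sign change found; f may be unbounded below")

# With f'(x) = 2(x - 3) and initial guess x[1] = 0.5, the loop visits
# 0.5, 1.5, 3.5 and returns the bracket (1.5, 3.5) around the minimizer.
lo, hi = step_out(lambda x: 2.0 * (x - 3.0), 0.5)
assert lo < 3.0 < hi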