
# Elimination of Simple Constraints in Optimization

IP.com Disclosure Number: IPCOM000088349D
Original Publication Date: 1977-May-01
Included in the Prior Art Database: 2005-Mar-04
Document File: 4 page(s) / 29K

IBM

## Related People

Crowder, HP: AUTHOR [+2]

## Abstract

This article discloses a change of variable for certain classes of optimization problems. The algorithm herein disclosed rests on the discovery that such a change of variable allows these problems to be solved by procedures for unconstrained optimization.

This is the abbreviated version, containing approximately 42% of the total text.




The (finite-dimensional) inequality-constrained optimization problem is that of finding values of the n real variables x(1),...,x(n) which minimize a given function f of those variables among all values for them which satisfy constraints of the form g(i)(x(1),...,x(n)) </= 0, i=1,...,m, where the functions g(i) are also given. For many such problems that arise in practical scientific work, the constraints are all simple: each can be written as x(j) >/= a or x(j) </= b (or both simultaneously) for certain j = 1,...,n.

In such a case the problem can be handled by making a transformation of the variables x(j) constrained in this way: for each such j, a function T(j) of a single variable is chosen so that, for any value y(j) of its argument, its value T(j)(y(j)) satisfies the desired constraints on x(j). If the transformations T(j) are suitably chosen, then the problem of minimizing the transformed function F defined by F(y(1),...,y(n)) = f(T(1)(y(1)),...,T(n)(y(n))) can be solved by one of the existing powerful techniques for minimizing unconstrained functions, and the optimal values of the variables y(j) immediately yield the desired values of the variables x(j), satisfying the constraints and minimizing f.

From this point it will be assumed that the simple constraints to be handled have one of the two forms x(j) >/= 0 or 0 </= x(j) </= 1, since any other simple constraint arises from one of these by an easy linear change of variable.
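As a concrete illustration, the sketch below uses the classical transformation x = y^2 to enforce x >/= 0 on one variable of a small quadratic objective. The specific objective, starting point, step size, and the use of plain gradient descent are illustrative assumptions, not part of the disclosure:

```python
import math

# Objective: minimize f(x1, x2) = (x1 - 2)^2 + (x2 + 1)^2
# subject to the simple constraint x2 >= 0.
# The unconstrained minimum (2, -1) violates the constraint,
# so the constrained optimum lies on the boundary at (2, 0).
def f(x1, x2):
    return (x1 - 2.0) ** 2 + (x2 + 1.0) ** 2

# Change of variable: x2 = T(y2) = y2**2 guarantees x2 >= 0 for every
# real y2, so F(y1, y2) = f(y1, y2**2) may be minimized unconstrained.
def F(y1, y2):
    return f(y1, y2 ** 2)

def grad_F(y1, y2, h=1e-6):
    """Central-difference gradient of the transformed objective."""
    g1 = (F(y1 + h, y2) - F(y1 - h, y2)) / (2 * h)
    g2 = (F(y1, y2 + h) - F(y1, y2 - h)) / (2 * h)
    return g1, g2

# Plain gradient descent stands in here for the "powerful techniques"
# for unconstrained minimization that the article alludes to.
y1, y2 = 0.0, 1.0
for _ in range(2000):
    g1, g2 = grad_F(y1, y2)
    y1 -= 0.1 * g1
    y2 -= 0.1 * g2

# Map back to the original variables; the result is close to (2, 0),
# with the constraint x2 >= 0 active at the solution.
x1, x2 = y1, y2 ** 2
print(x1, x2)
```

Note that the constrained solution sits exactly where T(y2) = y2^2 attains its lower bound, which is why the conditions on T at its extreme values (discussed below in the original) matter.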

This idea is several decades old; it has been suggested occasionally in print (primarily in [1]), advocated by some practitioners [2], and denounced by others, but until now it has not been seriously analyzed. Recent work on its theory shows that it can be useful for many problems, and shows what conditions the problem, the transformation, and the starting point must satisfy in order that the method might work properly. Briefly stated, these are:

(1) The constraints must be simple, as defined above (certain proposals that have been made for handling other types of constraints by similar transformations are not practicable).

(2) The transformations T must be such that when T(y) attains an extreme value we have T'(y) = 0; if the extreme value is a lower bound, T''(y) > 0; if the extreme value is an upper bound, then T''(y) < 0. (These conditions ensure that a solution of the transformed problem satisfies necessary solution conditions for the original problem.)

(3) The second derivatives of the transformations T must satisfy a Lipschitz condition (in order that the powerful convergence properties of the better unconstrained optimization procedures,...
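Condition (2) can be checked for the two classical transformations, T(y) = y^2 for x >/= 0 and T(y) = sin^2(y) for 0 </= x </= 1. The finite-difference verification below is an illustration added here, not part of the disclosure:

```python
import math

def derivs(T, y, h=1e-5):
    """Central-difference first and second derivatives of T at y."""
    d1 = (T(y + h) - T(y - h)) / (2 * h)
    d2 = (T(y + h) - 2 * T(y) + T(y - h)) / (h * h)
    return d1, d2

# T(y) = y**2 maps onto x >= 0; its lower bound 0 is attained at y = 0,
# where T' = 0 and T'' > 0, as condition (2) requires.
d1, d2 = derivs(lambda y: y * y, 0.0)
print(abs(d1) < 1e-6, d2 > 0)

# T(y) = sin(y)**2 maps onto 0 <= x <= 1.
# At its lower bound (y = 0): T' = 0 and T'' > 0.
d1, d2 = derivs(lambda y: math.sin(y) ** 2, 0.0)
print(abs(d1) < 1e-6, d2 > 0)

# At its upper bound (y = pi/2): T' = 0 and T'' < 0.
d1, d2 = derivs(lambda y: math.sin(y) ** 2, math.pi / 2)
print(abs(d1) < 1e-6, d2 < 0)
```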