
A point is a maximum (local or global) if movement away from it causes the function f(x) to decrease. A point is a minimum if movement away from it causes an increase. Mathematically, a point x* is a local maximum of the function f(x) if

f(x^*) > f(x) \quad \text{for all } x \text{ such that } |x - x^*| < \delta \qquad (1)

where δ is a small positive number. The point x* is termed a global maximum when the inequality (equation 1) holds true for all values of x. Definitions of local and global minima are similar. Point F, however, whose first derivative is zero (a stationary point), is neither a maximum nor a minimum but a minimax, better known as a saddle point. In this case, movement away from point F will result in an increase or decrease in f(x), depending on the direction of the movement.

The classic theory of maxima and minima is simply a search to find all local maxima or minima and then a comparison of the individual values to determine the global (absolute) maximum or minimum. The critical places to look for the extreme values are

1. at the stationary points, where the first derivatives are zero,

2. on the boundaries of the domain of the decision variables, and

3. at points where the first derivatives are discontinuous.

When f(x) and f'(x) are continuous functions, the extreme points will most likely be at the stationary points (except for cases involving saddle points). Thus the problem becomes one of locating the points where the partial derivatives are zero. To accomplish this goal, the following n algebraic equations

\frac{\partial f}{\partial x_j}(x_1, x_2, \ldots, x_n) = 0 \qquad (j = 1, \ldots, n) \qquad (3)

must be simultaneously solved. Unfortunately, differential calculus does not always provide a method for solving such equations, particularly when real solutions do not exist. If some continuous and real functions Z_j(x) exist, an approximate solution to the equations

Z_j(x_1, x_2, \ldots, x_n) = 0 \qquad (j = 1, \ldots, n) \qquad (4)

can be obtained by minimizing the sum of squares of the residuals defined by

F(x) = \sum_{j=1}^{n} [Z_j(x_1, x_2, \ldots, x_n)]^2 \qquad (5)
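This residual-minimization idea is straightforward to try numerically. The following is a minimal sketch using scipy.optimize.least_squares; the two residual functions are assumed examples, not from the original text:

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed example system Z_j(x) = 0 (equation 4); any smooth residuals work.
def residuals(x):
    x1, x2 = x
    return [x1**2 + x2 - 3.0,   # Z_1(x1, x2)
            x1 - x2**2 + 1.0]   # Z_2(x1, x2)

# least_squares minimizes the sum of squared residuals (equation 5).
sol = least_squares(residuals, x0=[1.0, 1.0])
print(sol.x)     # approximate root of the system
print(sol.cost)  # one-half the final sum of squares; near zero if a root exists
```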

The calculus approach is useful when equations 3 can be solved directly (e.g., when they are all linear). In some cases it can also reduce the dimensionality of the problem, that is, the number of variables required to solve it.

These analytical methods may not always work. Optimization can still be performed, however, by iterative methods using a computer. Iterative methods are useful for solving simultaneous equations such as equation 4, but they cannot guarantee optimization when used to solve equation 3, primarily because stationary points are not always optima (they may be saddle points). For that, it is usually better to apply the iterations directly to the function f(x) and try to find the minimum (or maximum) by the following strategy (12), sketched in code after the list:

1. Take a trial solution, x_k.

2. Find a direction from this trial solution in which f(x) decreases (or increases).

3. Find a point x_{k+1} in this direction so that f(x_{k+1}) < f(x_k) (or f(x_{k+1}) > f(x_k)).

4. Repeat the process from this new trial solution.
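A minimal Python sketch of this strategy, assuming the steepest-descent direction −∇f for step 2 and simple step halving for step 3 (the function and names are illustrative, not from the original text):

```python
import numpy as np

def iterative_descent(f, grad, x0, step=1.0, tol=1e-8, max_iter=1000):
    """Minimize f by the four-step strategy above."""
    x = np.asarray(x0, dtype=float)           # step 1: trial solution x_k
    for _ in range(max_iter):
        d = -grad(x)                          # step 2: a direction in which f decreases
        t = step
        while f(x + t * d) >= f(x) and t > tol:
            t *= 0.5                          # step 3: shrink until f(x_{k+1}) < f(x_k)
        x_new = x + t * d
        if abs(f(x) - f(x_new)) < tol:        # stop when no further progress is made
            return x_new
        x = x_new                             # step 4: repeat from the new trial solution
    return x

# Assumed example: f(x) = (x1 - 1)^2 + (x2 + 2)^2, with minimum at (1, -2).
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
print(iterative_descent(f, grad, [0.0, 0.0]))  # approx. [ 1. -2.]
```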

Even iterative methods might fail to produce an optimum, however. This might be due to a number of reasons, including the nature or structure of the system, slow convergence, round-off errors, and finding a local rather than a global extremum. With the preceding background, restrictions, and limitations in mind, some basic optimization methods are presented next.

Unconstrained Optimization

Unconstrained optimization techniques are applicable when searching for a minimum (or maximum) of a function f(x) that is not subject to any constraints. Sufficient conditions must be developed to allow evaluation of the nature of the stationary points and to determine whether they are minima, maxima, or neither. Once all of the local maxima and minima of the function f(x) are located, a comparison among them produces the global optimum (maximum or minimum).

If the function f(x) has only one independent variable, a Taylor series expansion about the stationary point x* can be performed:

f(x) = f(x^*) + f'(x^*)(x - x^*) + \frac{1}{2} f''(x^*)(x - x^*)^2 + \text{higher-order terms} \qquad (6)

The first derivative vanishes at the stationary point (f'(x*) = 0). Also, if x is sufficiently close to x*, the higher-order terms are negligible compared with the second-order term, and equation 6 becomes

f(x) - f(x^*) \approx \frac{1}{2} f''(x^*)(x - x^*)^2 \qquad (7)

Because (x − x*)² is always positive, the nature of the stationary point depends on the value of f''(x*), the second derivative of the function at that point: if f''(x*) > 0, then f(x) > f(x*) in its neighborhood and x* is a local minimum; if f''(x*) < 0, x* is a local maximum.

In the case of f''(x*) = 0, higher-order derivatives must be examined. This method can be extended to systems with two or more variables by using multidimensional Taylor series expansions and Hessian matrices (5,13-15).
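The second-derivative test is easy to automate symbolically. Here is a minimal sketch using Python's sympy library; the test function f(x) = x³ − 3x is an assumed example:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                               # assumed example function

stationary = sp.solve(sp.diff(f, x), x)      # points where f'(x) = 0
for p in stationary:
    f2 = sp.diff(f, x, 2).subs(x, p)         # f''(x*) as in equation 7
    kind = ('minimum' if f2 > 0 else
            'maximum' if f2 < 0 else
            'examine higher-order derivatives')
    print(p, kind)                           # -1: maximum, 1: minimum
```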

Another popular approach for finding local maxima (or minima) is an optimization algorithm known as Newton's method. It computes f'(x) and f''(x) at an initial point x_i and then finds an improved estimate x_{i+1} by linear extrapolation (12). Algebraically, this is

x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)} \qquad (8)

If the function f(x) is either convex or concave, Newton's method should converge toward a local maximum (or minimum) very rapidly. If f(x) is an S-shaped function and the initial point x_i is poorly chosen, the method may oscillate with continuously increasing amplitude. Newton's method can be extended and applied to multidimensional optimization (12). Other methods such as conjugate gradients, quasi-Newton methods, and various Newton-like methods have also been successfully applied (12,16).
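A minimal sketch of the one-dimensional iteration in equation 8, assuming the analytic derivatives are available (the test function is an assumed example):

```python
import math

def newton_stationary(fprime, fsecond, x0, tol=1e-10, max_iter=100):
    """Locate a stationary point via x_{i+1} = x_i - f'(x_i)/f''(x_i)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence; try a different starting point")

# Assumed example: f(x) = x*ln(x) - x, so f'(x) = ln(x) and f''(x) = 1/x;
# the minimum is at x = 1.
print(newton_stationary(lambda x: math.log(x), lambda x: 1.0 / x, x0=2.0))
```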

Finally, several other optimization methods work well with one-dimensional unconstrained systems: the bisection method and its variations, the method of false position and its modifications, the golden section search, the Fibonacci search, and so on (12,13). All have certain advantages and disadvantages.
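As one representative of this family, here is a minimal sketch of the golden section search for a minimum of a unimodal function on an interval (the test function is an assumed example):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Shrink [a, b] by the golden ratio until it tightly brackets the minimum."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - invphi * (b - a)                 # lower interior point
    d = a + invphi * (b - a)                 # upper interior point
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section(lambda x: (x - 2)**2, 0.0, 5.0))  # approx. 2.0
```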

When searching for a method to optimize a multidimensional system, the following questions deserve consideration (12):

1. Is it easy to calculate the values of the function?

2. Can the first derivatives be found easily?

3. Can the second derivatives be found easily?

4. Can the (n × n) Hessian matrix and its inverse be stored affordably?

5. Are many of the Hessian's elements zero? Can that be capitalized on and the calculation time reduced?

6. Is there anything special about the particular problem that might make the solution easier?

Because there are no easy answers to most of the preceding questions, it is advisable to further study and understand the strengths and weaknesses of each optimization method before applying it to solve specific problems.

Constrained Optimization

In a number of practical situations, optimization of a function occurs over a restricted domain of the independent variables. For example, flow rates, concentrations, shelf life, and so on may never take negative values. Optimization techniques still locate the stationary points of the function, but this time the solutions for optima must be subject to equality or inequality constraints.

Optimization of Systems with Equality Constraints. For a function f(x1, x2, . . . , xn) of n independent variables subject to the m < n constraint equations

g_i(x_1, x_2, \ldots, x_n) = 0 \qquad (i = 1, \ldots, m) \qquad (9)

three basic analytical methods can be used to locate the extreme points: direct substitution, solution by constrained variation, and Lagrange multipliers. The first method involves a simple substitution of the m constraint equations 9 directly into the function to be optimized. This results in an equation with (n − m) unknowns subject to no constraints. The methods of unconstrained optimization may then be applied. Equations 9 are usually complicated, however, and it is not always possible to perform the substitutions.
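As a small illustration of direct substitution (the function and constraint are assumed examples, not from the original text): to maximize f(x1, x2) = x1x2 subject to g(x1, x2) = x1 + x2 − 10 = 0, substitute x2 = 10 − x1 into f to obtain the unconstrained one-variable problem f(x1) = x1(10 − x1). Setting f'(x1) = 10 − 2x1 = 0 gives x1 = x2 = 5, and f''(x1) = −2 < 0 confirms a maximum.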

Figure 5 illustrates the second method, solution by constrained variation. In this case, there are two independent variables x1 and x2, and the function f(x1, x2) is subject to the constraint g(x1, x2) = 0. It is obvious that the maximum of the constrained system is point B, whereas the maximum of the unconstrained system is point A. At point B the curve f(x1, x2) = constant and the curve g(x1, x2) = 0 have the same slope and are tangent. Therefore, infinitesimal changes dx1 and dx2 affect the dependent variables f(x1, x2) and g(x1, x2) in a similar way. In other words,

\left( \frac{dx_2}{dx_1} \right)_f = \left( \frac{dx_2}{dx_1} \right)_g \qquad (10)

Because f = constant and g = 0, the total derivatives of both functions vanish at point B, so that

\left( \frac{dx_2}{dx_1} \right)_f = -\frac{\partial f / \partial x_1}{\partial f / \partial x_2} \qquad (11)

\left( \frac{dx_2}{dx_1} \right)_g = -\frac{\partial g / \partial x_1}{\partial g / \partial x_2} \qquad (12)

It should be recalled that in this case (constrained optimization) ∂f/∂x1 and ∂f/∂x2 are not zero (as they would have been in an unconstrained case). Equations 10 to 12 can be combined to give the condition that must hold at the constrained extreme point:

\frac{\partial f}{\partial x_1} \frac{\partial g}{\partial x_2} - \frac{\partial f}{\partial x_2} \frac{\partial g}{\partial x_1} = 0
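A minimal sympy sketch of this combined condition, reusing the assumed example from the direct-substitution illustration above:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1 * x2          # assumed example objective
g = x1 + x2 - 10     # assumed example constraint, g = 0

# Tangency condition obtained by combining equations 10-12,
# solved together with the constraint itself.
tangency = sp.diff(f, x1) * sp.diff(g, x2) - sp.diff(f, x2) * sp.diff(g, x1)
print(sp.solve([tangency, g], [x1, x2]))  # {x1: 5, x2: 5}
```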
