Non-linear Programming: Grooming and saddling the right racehorse

Talk given as part of the lecture series (Ringvorlesung) of the Research Training Group (Graduiertenkolleg)

Prof. Dr. Andreas Griewank


Abstract

The defining attribute "non-linear" of our subject might suggest that we are dealing with mere deviations from the "natural" state of linearity, which pervades much of our mathematics education and basic scientific modeling. However, without non-linearity there can be no change of stability, no hysteresis, no chaotic iteration, and, most importantly in the present context, no distinct local optima.

On the other hand, linearity is clearly more than just a special case. Linear models closely reflect the properties of general optimization problems in the vicinity of their local optima, provided there is enough smoothness and regularity. This gives rise to a whole host of nonlinear programming methods based on successive linearizations, and thus on the workhorses developed for linear programming. For these Newton-like algorithms one may then establish rates of convergence and complexity estimates, at least on locally convex problems. The only practical issue in this kind of nearly-linear programming is the evaluation or approximation of the derivatives that define the local linearizations.
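As a minimal sketch of the successive-linearization idea (the notation is not part of the abstract and assumes a smooth objective f): a Newton-like step solves the linearized first-order optimality condition

\[
  \nabla^2 f(x_k)\,\Delta x_k \;=\; -\nabla f(x_k), \qquad x_{k+1} = x_k + \Delta x_k ,
\]

which converges quadratically to a local minimizer once the iterates are close enough and the Hessian stays positive definite there; supplying the gradient and Hessian (or good approximations) is precisely the practical issue mentioned above.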

More genuinely nonlinear issues arise when the starting guesses are further afield or the user demands the computation of global optima. Then there is usually no guarantee of convergence, and algorithms behave like racehorses: very fast under some conditions, hopelessly slow under others. Moreover, judging the conditions and picking the right horse is no easy task. It often has to be based on numerical experiments rather than mathematical results. Nevertheless, ever more complicated and highly nonlinear computer models are successfully being adapted for systematic optimization calculations rather than ad hoc search procedures. The talk will highlight some of these techniques and their applications in various areas.