Adjoint methods for recurrence relations, following the notes.
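As a concrete illustration (not from the notes), here is a minimal sketch for a made-up scalar recurrence x_{n+1} = f(x_n, p) with f(x, p) = p₁x + p₂x² and objective g = x_N: a single backward ("adjoint") sweep yields the gradient of g with respect to *all* parameters p at roughly the cost of one extra forward pass, independent of the number of parameters.

```python
import numpy as np

# Hypothetical recurrence x_{n+1} = f(x_n, p), f(x, p) = p[0]*x + p[1]*x**2,
# with objective g = x_N.  The adjoint recurrence lambda_n = (df/dx) * lambda_{n+1},
# starting from lambda_N = dg/dx_N = 1, gives dg/dp in one backward sweep.

def forward(p, x0, N):
    xs = [x0]
    for _ in range(N):
        x = xs[-1]
        xs.append(p[0]*x + p[1]*x**2)
    return xs  # all iterates are stored, since the backward sweep needs them

def adjoint_gradient(p, xs):
    lam = 1.0                  # lambda_N = dg/dx_N for g = x_N
    dgdp = np.zeros(2)
    for x in reversed(xs[:-1]):            # n = N-1, ..., 0
        dgdp += lam * np.array([x, x**2])  # accumulate lambda_{n+1} * df/dp(x_n)
        lam *= p[0] + 2*p[1]*x             # lambda_n = df/dx(x_n) * lambda_{n+1}
    return dgdp

p, x0, N = np.array([0.9, 0.05]), 1.0, 20
xs = forward(p, x0, N)
print(adjoint_gradient(p, xs))

# sanity check against forward finite differences
eps = 1e-6
print([(forward(p + eps*np.eye(2)[i], x0, N)[-1] - xs[-1]) / eps for i in range(2)])
```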
Started discussing a particular example of a nonlinear optimization scheme, solving the full inequality-constrained nonlinear-programming problem: the CCSA and MMA algorithms, as refined by Svanberg (2002). This is a surprisingly simple algorithm (the NLopt implementation is only 300 lines of C code), but it is robust and provably convergent, and it illustrates a number of important ideas in optimization: optimizing an approximation to update the parameters x, guarding the approximation with trust regions and penalty terms, and optimizing via the dual function (Lagrange multipliers). Like many optimization algorithms, the general ideas are straightforward, but getting the details right can be delicate!
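As a usage sketch (assuming NLopt's Python bindings, the `nlopt` module), here is the inequality-constrained example from the NLopt tutorial solved with the MMA implementation: minimize √x₂ subject to x₂ ≥ (a·x₁ + b)³ for two choices of (a, b).

```python
import numpy as np
import nlopt

def objective(x, grad):            # f(x) = sqrt(x_2)
    if grad.size > 0:
        grad[0] = 0.0
        grad[1] = 0.5 / np.sqrt(x[1])
    return np.sqrt(x[1])

def make_constraint(a, b):
    def constraint(x, grad):       # (a*x_1 + b)^3 - x_2 <= 0
        if grad.size > 0:
            grad[0] = 3 * a * (a*x[0] + b)**2
            grad[1] = -1.0
        return (a*x[0] + b)**3 - x[1]
    return constraint

opt = nlopt.opt(nlopt.LD_MMA, 2)   # nlopt.LD_CCSAQ is the quadratic CCSA variant
opt.set_lower_bounds([-float('inf'), 0.0])
opt.set_min_objective(objective)
opt.add_inequality_constraint(make_constraint(2.0, 0.0), 1e-8)
opt.add_inequality_constraint(make_constraint(-1.0, 1.0), 1e-8)
opt.set_xtol_rel(1e-6)
x = opt.optimize([0.5, 2.0])       # a feasible starting point
print(x, opt.last_optimum_value()) # optimum at x = (1/3, 8/27)
```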
(Some concepts that CCSA does not use, but which we will return to later: using second-derivative information [quasi-Newton methods], optimization with function values only [no gradients], and global optimization. Note that the "globally convergent" property in the title of the Svanberg paper means that it converges to a local optimum from any feasible starting point, not that it necessarily gives the global optimum.)
Outlined the inner/outer iteration structure of CCSA, and the interesting property that it produces a sequence of feasible iterates from a feasible starting point: because each inner approximation is forced to be conservative (it over-estimates the true objective and constraint functions), any point satisfying the approximate constraints also satisfies the true ones. This means that you can stop the algorithm early and still have a feasible solution, which is very useful for many applications where 99% of optimal is fine but feasibility is essential.
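To make the inner/outer structure concrete, here is a toy sketch (my own simplification, not Svanberg's full algorithm): an unconstrained quadratic-penalty version, where the inner loop increases the penalty ρ until the local model is conservative (an upper bound on f at the candidate point), and the outer loop accepts the step and relaxes ρ. The real algorithm also builds conservative approximations of the constraints, solves the convex subproblem via its dual, and adapts the trust-region widths σ.

```python
import numpy as np

def ccsa_sketch(f, grad, x0, sigma=1.0, tol=1e-10, maxouter=1000):
    x = np.asarray(x0, dtype=float)
    s = np.full_like(x, sigma)     # trust-region / scaling widths (kept fixed here)
    rho = 1.0                      # penalty weight on the quadratic term
    fx, gx = f(x), grad(x)
    for _ in range(maxouter):
        while True:  # inner loop: increase rho until the model is conservative
            # minimize the separable model
            #   g(y) = fx + gx.(y - x) + (rho/2) * sum_i ((y_i - x_i)/s_i)^2
            # subject to the trust region |y_i - x_i| <= s_i
            y = np.clip(x - (s**2 / rho) * gx, x - s, x + s)
            fy = f(y)
            model = fx + gx @ (y - x) + 0.5 * rho * np.sum(((y - x) / s)**2)
            if model >= fy:        # conservative: model bounds f at the candidate
                break
            rho *= 2.0             # not conservative: strengthen penalty, re-solve
        if abs(fx - fy) <= tol * (abs(fx) + tol):
            return y
        x, fx, gx = y, fy, grad(y)   # outer update: accept the candidate point
        rho = max(0.5 * rho, 1e-12)  # relax penalty so later steps can grow
    return x

# example: minimize the convex quadratic f(x) = |x - (1, 2)|^2
f = lambda x: np.sum((x - np.array([1.0, 2.0]))**2)
grad = lambda x: 2.0 * (x - np.array([1.0, 2.0]))
print(ccsa_sketch(f, grad, [0.0, 0.0]))
```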