# Class 2 — 08/29/2025

**Presenter:** Arnaud Deza

**Topic:** Numerical optimization for control (gradient/SQP/QP); ALM vs. interior-point vs. penalty methods

---

## Overview

This class covers the fundamental numerical optimization techniques essential for optimal control problems. We explore gradient-based methods, Sequential Quadratic Programming (SQP), and various approaches to handling constraints, including Augmented Lagrangian Methods (ALM), interior-point methods, and penalty methods.

## Learning Objectives

By the end of this class, students will be able to:

- Understand the mathematical foundations of gradient-based optimization
- Implement Newton's method for unconstrained minimization
- Apply root-finding techniques to implicit integration schemes
- Solve equality-constrained optimization problems using Lagrange multipliers
- Compare and contrast constraint-handling methods (ALM, interior-point, penalty)
- Implement Sequential Quadratic Programming (SQP) for nonlinear optimization

## Prerequisites

- Solid understanding of linear algebra and calculus
- Familiarity with Julia programming
- Basic knowledge of differential equations
- Understanding of optimization concepts from Class 1

## Materials

### Interactive Notebooks

The class is structured around four interactive Jupyter notebooks that build upon each other; a short, self-contained Julia sketch of the core algorithm from each part follows the list:

1. **[Part 1: Minimization via Newton's Method](part1_minimization.html)**
   - Unconstrained optimization fundamentals
   - Newton's method for minimization
   - Hessian matrix and positive definiteness
   - Regularization and line search techniques
   - Practical implementation with Julia

2. **[Part 1: Root Finding & Backward Euler](part1_root_finding.html)**
   - Root-finding algorithms for implicit integration
   - Fixed-point iteration vs. Newton's method
   - Backward Euler implementation for ODEs
   - Convergence analysis and comparison
   - Application to pendulum dynamics

3. **[Part 2: Equality Constraints](part2_eq_constraints.html)**
   - Lagrange multiplier theory
   - KKT conditions for equality constraints
   - Quadratic programming with equality constraints
   - Visualization of constrained optimization landscapes
   - Practical implementation examples

4. **[Part 3: Interior-Point Methods](part3_ipm.ipynb)**
   - Inequality constraint handling
   - Barrier methods and log-barrier functions
   - Interior-point algorithm implementation
   - Comparison with penalty methods
   - Convergence properties and practical considerations
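
As a taste of Part 1 (minimization), here is a minimal Julia sketch of a damped Newton iteration with Hessian regularization and a backtracking line search. The test function, constants, and tolerances are illustrative assumptions, not the notebook's exact code.

```julia
using LinearAlgebra

# Illustrative test function (an assumption, not the notebook's):
#   f(x) = x1^4 + x1*x2 + (1 + x2)^2
f(x)  = x[1]^4 + x[1]*x[2] + (1 + x[2])^2
∇f(x) = [4*x[1]^3 + x[2], x[1] + 2*(1 + x[2])]
Hf(x) = [12*x[1]^2  1.0; 1.0  2.0]

function newton_min(x; tol=1e-8, maxit=50)
    for _ in 1:maxit
        g = ∇f(x)
        norm(g) < tol && break
        H = Hf(x)
        β = 1e-6
        while !isposdef(H + β*I)      # regularize until positive definite
            β *= 10
        end
        Δx = -((H + β*I) \ g)         # Newton (descent) direction
        α = 1.0                       # backtracking (Armijo) line search
        while f(x + α*Δx) > f(x) + 1e-4*α*dot(g, Δx)
            α /= 2
        end
        x += α*Δx
    end
    return x
end

newton_min([-1.0, 1.0])
```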
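
For Part 1 (root finding), a sketch of one backward-Euler step on the pendulum: the implicit update is obtained by applying Newton's method to the step residual. The pendulum parameters, step size, and tolerances are assumptions for illustration.

```julia
using LinearAlgebra

# Simple pendulum, state x = [θ, ω]; parameter values are assumed for illustration
const grav, len = 9.81, 1.0
dynamics(x) = [x[2], -(grav/len)*sin(x[1])]
dyn_jac(x)  = [0.0  1.0; -(grav/len)*cos(x[1])  0.0]

# One backward-Euler step: solve r(x⁺) = x⁺ - x - h*f(x⁺) = 0 with Newton's method
function backward_euler_step(x, h; tol=1e-10, maxit=20)
    xn = x + h*dynamics(x)            # forward-Euler warm start
    for _ in 1:maxit
        r = xn - x - h*dynamics(xn)
        norm(r) < tol && break
        J = I - h*dyn_jac(xn)         # Jacobian of the residual
        xn -= J \ r
    end
    return xn
end

# Usage: one implicit step of 0.05 s from θ = π/2 at rest
backward_euler_step([π/2, 0.0], 0.05)
```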
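
For Part 2, a minimal sketch of an equality-constrained quadratic program solved directly from its KKT system; the problem data below are made-up illustrative values.

```julia
using LinearAlgebra

# Equality-constrained QP:  min ½xᵀQx + qᵀx   s.t.  Ax = b
# Stationarity plus primal feasibility give one linear KKT system:
#   [Q  Aᵀ] [x]   [-q]
#   [A  0 ] [λ] = [ b]
function eq_qp(Q, q, A, b)
    n, m = length(q), length(b)
    K = [Q A'; A zeros(m, m)]
    z = K \ [-q; b]
    return z[1:n], z[n+1:end]     # primal solution x, multipliers λ
end

# Illustrative data: minimize ½‖x‖² subject to x₁ + x₂ = 1
Q = Matrix{Float64}(I, 2, 2)
q = zeros(2)
A = [1.0 1.0]
b = [1.0]
xstar, λstar = eq_qp(Q, q, A, b)  # xstar ≈ [0.5, 0.5]
```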
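
For Part 3, a compact sketch of the log-barrier idea on a one-dimensional toy problem (chosen here for brevity, not taken from the notebook): the inequality x ≥ 0 is folded into the objective as a -μ·log(x) term, each barrier subproblem is minimized with Newton's method, and μ is shrunk toward zero so the iterates follow the central path.

```julia
# Log-barrier (interior-point) sketch for an illustrative 1-D problem:
#   minimize (x + 1)²  subject to  x ≥ 0      (constrained optimum at x = 0)
function log_barrier_demo(; μ=1.0, shrink=0.2, outer=10, inner=20)
    x = 1.0                              # strictly feasible starting point
    for _ in 1:outer
        for _ in 1:inner                 # Newton on the barrier subproblem
            grad = 2*(x + 1) - μ/x       # d/dx [ (x+1)² - μ*log(x) ]
            hess = 2 + μ/x^2
            step = -grad/hess
            while x + step <= 0          # stay strictly inside the feasible set
                step /= 2
            end
            x += step
            abs(grad) < 1e-10 && break
        end
        μ *= shrink                      # drive μ → 0 along the central path
    end
    return x
end

log_barrier_demo()   # ≈ 0: approaches the constraint boundary from the interior
```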

### Additional Resources

- **[Lecture Slides (PDF)](ISYE_8803___Lecture_2___Slides.pdf)** - Complete slide deck from the presentation
- **[LaTeX Source Files](main.tex)** - Source code for the lecture slides
- **[Demo Script](penalty_barrier_demo.py)** - Python demonstration of penalty vs. barrier methods

## Key Concepts Covered

### Mathematical Foundations
- **Gradient and Hessian**: Understanding first and second derivatives in optimization
- **Newton's Method**: Quadratic convergence and implementation details
- **KKT Conditions**: First-order necessary conditions for constrained optimality, sufficient under convexity (written out below)
- **Duality Theory**: Lagrange multipliers and dual problems
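
Written out once for reference, with inequality constraints expressed as d(x) ≥ 0 (one common sign convention), the KKT conditions for minimizing f(x) subject to c(x) = 0 and d(x) ≥ 0 combine stationarity, primal feasibility, dual feasibility, and complementary slackness; they are necessary at a local minimizer under a constraint qualification and sufficient for convex problems:

```math
\nabla f(x^\star) - J_c(x^\star)^{\top}\lambda^\star - J_d(x^\star)^{\top}\mu^\star = 0, \qquad
c(x^\star) = 0, \qquad d(x^\star) \ge 0, \qquad
\mu^\star \ge 0, \qquad \mu_i^\star \, d_i(x^\star) = 0 \quad \text{for all } i.
```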

### Numerical Methods
- **Root Finding**: Fixed-point iteration, Newton-Raphson method
- **Implicit Integration**: Backward Euler for stiff ODEs
- **Sequential Quadratic Programming**: Local quadratic approximations solved as QP subproblems (see the sketch below)
- **Interior-Point Methods**: Barrier functions and path-following
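
The Julia sketch below iterates the SQP idea on a small equality-constrained example: each step solves the KKT system of a local QP built from the Lagrangian Hessian and the linearized constraint. The test problem, starting point, and use of the exact Lagrangian Hessian (rather than a quasi-Newton update or a line search) are illustrative assumptions.

```julia
using LinearAlgebra

# SQP sketch for an equality-constrained problem (data are illustrative):
#   minimize  x₁² + x₂²    subject to   x₁·x₂ - 1 = 0
f_grad(x)      = [2*x[1], 2*x[2]]
c(x)           = [x[1]*x[2] - 1.0]
c_jac(x)       = [x[2] x[1]]
lag_hess(x, λ) = [2.0 λ[1]; λ[1] 2.0]      # ∇²f + λ·∇²c for this problem

function sqp(x; iters=20, tol=1e-10)
    λ = zeros(length(c(x)))
    for _ in 1:iters
        g, W, J, r = f_grad(x), lag_hess(x, λ), c_jac(x), c(x)
        m = length(r)
        # QP subproblem:  min ½pᵀWp + gᵀp   s.t.  Jp + r = 0   (one KKT solve)
        K = [W J'; J zeros(m, m)]
        z = K \ [-g; -r]
        p = z[1:length(x)]                 # primal step
        λ = z[length(x)+1:end]             # updated multiplier estimate
        x += p                             # full step; practical SQP adds a
                                           # line search on a merit function
        norm(p) < tol && break
    end
    return x, λ
end

sqp([1.2, 0.8])    # converges to x ≈ [1.0, 1.0]
```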

### Constraint Handling
- **Equality Constraints**: Lagrange multipliers and null-space methods
- **Inequality Constraints**: Active set methods and interior-point approaches
- **Penalty Methods**: Quadratic and exact penalty functions
- **Augmented Lagrangian**: Combining penalty and multiplier methods (see the sketch below)
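
Since ALM is contrasted with the penalty and barrier approaches in this class, here is a minimal Julia sketch of the augmented Lagrangian loop on a small equality-constrained problem; the problem data, penalty schedule, and inner-solver choice are all illustrative assumptions.

```julia
using LinearAlgebra

# Augmented Lagrangian sketch for an equality-constrained problem (illustrative):
#   minimize  x₁² + 2x₂²   subject to  x₁ + x₂ - 1 = 0     (solution ≈ [2/3, 1/3])
obj(x) = x[1]^2 + 2*x[2]^2
con(x) = x[1] + x[2] - 1.0

# Gradient and Hessian of L_A(x) = obj(x) + λ*con(x) + (ρ/2)*con(x)²
aug_grad(x, λ, ρ) = [2*x[1], 4*x[2]] .+ (λ + ρ*con(x))   # ∇c = [1, 1] here
aug_hess(x, λ, ρ) = [2.0+ρ ρ; ρ 4.0+ρ]

function alm(x; λ=0.0, ρ=10.0, outer=20, tol=1e-8)
    for _ in 1:outer
        # Inner problem: minimize the augmented Lagrangian in x
        # (one Newton step suffices here because L_A is quadratic in x)
        for _ in 1:5
            x -= aug_hess(x, λ, ρ) \ aug_grad(x, λ, ρ)
        end
        abs(con(x)) < tol && break
        λ += ρ*con(x)          # first-order multiplier update
        ρ *= 2.0               # optionally tighten the penalty
    end
    return x, λ
end

alm([0.0, 0.0])   # → x ≈ [2/3, 1/3]
```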

## Practical Applications

The methods covered in this class are fundamental to:
- **Optimal Control**: Trajectory optimization and feedback control design
- **Model Predictive Control**: Real-time optimization with constraints
- **Robotics**: Motion planning and control with obstacle avoidance
- **Engineering Design**: Constrained optimization in mechanical systems

## Further Reading

- Nocedal, J., & Wright, S. J. (2006). *Numerical Optimization* (2nd ed.). Springer.
- Boyd, S., & Vandenberghe, L. (2004). *Convex Optimization*. Cambridge University Press.
- Betts, J. T. (2010). *Practical Methods for Optimal Control and Estimation Using Nonlinear Programming* (2nd ed.). SIAM.


## Next Steps

This class provides the foundation for the advanced topics covered in subsequent classes, including:
- Pontryagin's Maximum Principle (Class 3)
- Nonlinear trajectory optimization (Class 5)
- Stochastic optimal control (Class 7)
- Physics-Informed Neural Networks (Class 10)

---
*For questions or clarifications, please refer to the interactive notebooks or contact the instructor.*