How to maximise gains (or minimise costs) and how to determine optimal strategies or policies are fundamental questions for engineers, economists, doctors designing a cancer therapy, fund managers, and government agencies planning social policies. Many problems in science (e.g. mechanics, physics, neuroscience, and biology) can also be formulated as optimisation problems in random environments. The theory of stochastic optimal control and games is an indispensable tool in many areas of applied mathematics. In the first part of this unit, you will become familiar with the dynamic programming principle and learn how it provides a unified approach to a large number of seemingly unrelated problems. The second part is devoted to backward stochastic differential equations and their applications to stochastic optimal control and game theory. You will learn how to solve continuous-time problems driven either by the Wiener process or by more general classes of stochastic processes. After completing this unit, you will be able to formulate a diverse range of problems arising in finance, the applied sciences, engineering, and medicine as stochastic optimal control problems, and to solve them using the Bellman principle, the Hamilton-Jacobi-Bellman equation, and backward stochastic differential equations.
3 x 1-hr lectures and 1 x 1-hr tutorial per week for 13 weeks
2 x take-home assignments (40% total), 1 x final exam (60%)
This unit is only available in even years.
At least 6 credit points of (2000-level Advanced Mathematics or 3000-level Advanced Mathematics or 4000-level Mathematics units) or equivalent.