Seminar 1: BrIAS Fellow Prof. Yilin Mo
Title: Data-Driven Learning of a Verifiable Controller Inspired by MPC
Abstract: Recent years have witnessed the development of learning-based control, much of which utilizes general neural networks, such as the Multi-Layer Perceptron (MLP), as the entirety or a part of the control policy. Despite their remarkable empirical performance, the presence of even a moderate-size neural network makes it almost impossible to certify stability or provide performance guarantees. In this talk, we introduce a new class of learnable controllers, drawing inspiration from Model Predictive Control (MPC). The controller resembles a Quadratic Programming (QP) solver for a linear MPC problem and is differentiable with respect to its parameters, which enables the calculation of policy gradients and the use of Deep Reinforcement Learning (DRL) to train the parameters, instead of deriving them from a predictive model as in MPC. Thanks to the structure imposed on the QP-based controller, its properties, such as persistent feasibility and asymptotic stability, can be verified using the same procedure as in the verification of MPC. Moreover, numerical examples illustrate that the proposed controller empirically matches MPC and MLP controllers in control performance while exhibiting superior robustness against modeling uncertainty and noise. Real-world experiments on a vehicle drift-maneuvering task demonstrate the potential of these controllers for robotics and other demanding control tasks.
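The key enabling property in the abstract is that a QP-based policy is differentiable in its parameters, so policy gradients can flow through the controller. A minimal sketch of this idea (not the speaker's implementation, and simplified to an unconstrained QP so the minimizer has a closed form): for a policy u*(x) = argmin_u 0.5 u'Hu + x'Fu with learnable H, F, the solution u*(x) = -H^{-1}F'x is differentiable in F, and the analytic gradient can be checked against finite differences. All names here (qp_policy, grad_F, the toy loss) are illustrative, not from the talk.

```python
import numpy as np

def qp_policy(H, F, x):
    # Minimizer of 0.5 u'Hu + x'Fu for symmetric positive-definite H:
    # u*(x) = -H^{-1} F' x  (linear in x, differentiable in H and F).
    return -np.linalg.solve(H, F.T @ x)

def loss(H, F, x):
    # Toy surrogate loss on the control action (stands in for a DRL objective).
    u = qp_policy(H, F, x)
    return 0.5 * float(u @ u)

def grad_F(H, F, x):
    # Analytic "policy gradient" of the toy loss w.r.t. F:
    # dL/dF = -x (H^{-1} u*)'  by the chain rule through u*(x).
    u = qp_policy(H, F, x)
    return -np.outer(x, np.linalg.solve(H, u))

rng = np.random.default_rng(0)
n, m = 3, 2                            # state dim, input dim
A = rng.standard_normal((m, m))
H = A @ A.T + m * np.eye(m)            # random symmetric PD Hessian
F = rng.standard_normal((n, m))
x = rng.standard_normal(n)

# Verify the analytic gradient against central finite differences.
g = grad_F(H, F, x)
eps = 1e-6
g_fd = np.zeros_like(F)
for i in range(n):
    for j in range(m):
        Fp = F.copy(); Fp[i, j] += eps
        Fm = F.copy(); Fm[i, j] -= eps
        g_fd[i, j] = (loss(H, Fp, x) - loss(H, Fm, x)) / (2 * eps)

print(np.allclose(g, g_fd, atol=1e-6))
```

With input constraints added, the QP no longer has a closed-form solution, but the solution map remains differentiable almost everywhere, which is what lets DRL train such a controller end to end while MPC-style verification tools still apply.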
Seminar 2: BrIAS Junior Fellow Dr. Yailen Martinez Jimenez
Title: Application of Reinforcement Learning in Different Scheduling Scenarios