Bayesian inverse problems have gained significant attention in recent years due to their strong mathematical foundations, which allow for thorough theoretical analysis. Moreover, incorporating measurement errors and noise is of utmost importance for obtaining reliable results, for instance in computational imaging problems such as computed tomography (CT) and magnetic resonance imaging (MRI).
Energy-based methods constitute a particular way to model probability distributions via Gibbs densities containing a suitable energy functional. These approaches can be used to model prior distributions in Bayesian inverse problems. The energy functional is parametrized, for instance, via fields-of-experts models and learned from image data using modern machine learning architectures. Thanks to its rich mathematical foundations, the Bayesian framework allows for rigorous theoretical guarantees regarding the obtained image reconstructions. A review of energy-based models for inverse imaging problems is given in [HHPZ2025].
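As a brief, generic illustration (the notation is ours, not taken from [HHPZ2025]): if the prior is modeled as a Gibbs density with parametrized energy $E_\theta$, and data $y$ are observed through a forward operator $A$ under Gaussian noise of level $\sigma$, the resulting posterior combines likelihood and prior as
\[
p_\theta(x) \propto \exp\bigl(-E_\theta(x)\bigr),
\qquad
p_\theta(x \mid y) \propto \exp\Bigl(-\tfrac{1}{2\sigma^2}\,\|Ax - y\|^2 - E_\theta(x)\Bigr).
\]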
A crucial challenge in learning the components of the prior from data lies in sampling from the prior distribution. Two methods, based on subgradient steps and the unadjusted Langevin algorithm, were developed in [HHP2024] and can also be applied to certain non-smooth potentials.
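To illustrate the basic mechanism, the following sketch implements a plain unadjusted Langevin iteration for a generic energy with a (sub)gradient oracle; all names, the step size and the Gaussian toy example are illustrative assumptions and do not reproduce the algorithms of [HHP2024].

```python
import numpy as np

def ula_sample(grad_E, x0, tau=1e-3, n_steps=10_000, rng=None):
    """Unadjusted Langevin algorithm targeting the Gibbs density p(x) ~ exp(-E(x)).

    grad_E  callable returning a (sub)gradient of the energy E at x
    x0      initial state (NumPy array)
    tau     step size of the discretized Langevin dynamics
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Euler-Maruyama step of the Langevin SDE  dX = -grad E(X) dt + sqrt(2) dW
        x = x - tau * grad_E(x) + np.sqrt(2.0 * tau) * noise
        samples.append(x.copy())
    return np.array(samples)

# Toy example: E(x) = ||x||^2 / 2, i.e. sampling from a standard Gaussian.
if __name__ == "__main__":
    chain = ula_sample(grad_E=lambda x: x, x0=np.zeros(2), tau=1e-2, n_steps=5000)
    print(chain.mean(axis=0), chain.var(axis=0))
```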
A central practical limitation of MRI is the comparatively long time needed to acquire the measurement data. A well-established strategy to address this is to acquire only a reduced amount of data during an MR measurement and to compensate for the missing information using either hand-crafted or learned imaging models that incorporate prior knowledge about the structure of typical images.
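A typical variational formulation of this strategy (the symbols are generic placeholders: $A$ an undersampled encoding operator, $y$ the measured k-space data, $\mathcal{R}$ a hand-crafted or learned regularizer with weight $\lambda > 0$) reads
\[
\hat{x} \in \operatorname*{arg\,min}_{x} \; \tfrac{1}{2}\,\|Ax - y\|_2^2 + \lambda\,\mathcal{R}(x).
\]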
Our research group is actively working in this field, particularly in the context of dynamic MRI. For example, we have developed novel image reconstruction methods that reduce MRI data acquisition times by a factor of eight or more without compromising the quality or diagnostic value of the reconstructed images.
Optimal control problems play an important role in several areas of applied mathematics. The governing dynamical systems frequently involve parameters, and the resulting optimal control problem needs to be solved quickly for many different parameter values, for instance in a real-time or many-query context. Solving the exact optimal control problem is usually costly already for a single parameter; doing so for many parameter values is therefore prohibitively expensive in most applications.
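A generic prototype of such a parametrized optimal control problem (the notation is illustrative and not taken from the cited works) is: for a given parameter $\mu$, minimize a cost functional over controls $u$ subject to a parametrized state equation,
\[
\min_{u}\; J\bigl(y_\mu, u; \mu\bigr)
\quad\text{subject to}\quad
\dot{y}_\mu(t) = f\bigl(y_\mu(t), u(t); \mu\bigr), \qquad y_\mu(0) = y_0(\mu),
\]
which has to be solved anew for every parameter value $\mu$ of interest.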
We are interested in combining model order reduction with machine learning approaches. Such a combination has proven to achieve significant speedups while maintaining theoretical guarantees such as a posteriori error estimates. In particular, the certifications available for model order reduction methods, such as the reduced basis method, can be transferred to the machine learning prediction as well. This results in certified machine learning methods that are orders of magnitude faster than classical methods [KLM2025], [KR2025]. A purely data-driven method with certification using the high-fidelity model was developed in [KKLOO2022] to improve the results of enhanced oil recovery.
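The following sketch indicates how such a certification might look in code: an ML surrogate predicts a reduced solution, and a reduced-basis-type a posteriori error estimator certifies (or rejects) the prediction. Both callables, `ml_surrogate` and `rb_error_estimator`, are hypothetical placeholders and not the implementations of the cited works.

```python
def certified_prediction(mu, ml_surrogate, rb_error_estimator, tol):
    """Predict a reduced solution with an ML surrogate and certify it with a
    rigorous a posteriori error bound of reduced basis type (schematic only).

    ml_surrogate        callable mu -> reduced coefficients (hypothetical)
    rb_error_estimator  callable (mu, coeffs) -> rigorous error bound (hypothetical)
    tol                 accuracy tolerance requested by the user
    """
    coeffs = ml_surrogate(mu)               # fast ML prediction of the reduced solution
    bound = rb_error_estimator(mu, coeffs)  # cheap, rigorous a posteriori error bound
    certified = bound <= tol                # accept the prediction only if certified
    return coeffs, bound, certified
```

Typically, such residual-based estimators can be evaluated cheaply, so the certification adds little cost on top of the ML prediction.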
The certified machine learning methods described above can be integrated into adaptive model hierarchies [HKOSW2023] consisting of a full order model, a reduced model and a machine learning surrogate. Different machine learning approaches can be applied [WHKOS2024] while maintaining the certification via the reduced basis approach. Such adaptive model hierarchies are automatically tailored to the parameters of interest and adjust dynamically depending on the performance of the individual components. In the context of parametrized optimal control problems, adaptive model hierarchies have been applied, for instance, in [K2024].
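A schematic query of such a hierarchy could look as follows; all callables are hypothetical placeholders, and the actual adaptation logic in [HKOSW2023] differs in detail.

```python
def query_hierarchy(mu, tol, ml_surrogate, reduced_model, full_order_model,
                    error_estimator, retrain_ml, enrich_rb):
    """Schematic query of an adaptive model hierarchy:
    ML surrogate -> reduced model -> full order model (all callables hypothetical).
    """
    # 1) Try the machine learning surrogate and certify it via the error estimator.
    u_ml = ml_surrogate(mu)
    if error_estimator(mu, u_ml) <= tol:
        return u_ml

    # 2) Fall back to the reduced model; its certified solution also serves as
    #    additional training data for the surrogate.
    u_rb = reduced_model(mu)
    if error_estimator(mu, u_rb) <= tol:
        retrain_ml(mu, u_rb)
        return u_rb

    # 3) Last resort: solve the full order model, then enrich the reduced basis
    #    and retrain the surrogate, so the hierarchy adapts to the queried parameters.
    u_fom = full_order_model(mu)
    enrich_rb(mu, u_fom)
    retrain_ml(mu, u_fom)
    return u_fom
```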
Most of the methods developed within this research topic are presented in detail in [K2025].
The methods developed in this research area contribute to Work Group 2 (ML for CT) of the COST Action InterCoML, in which we actively participate.