Explainable & Causal AI
We employ representation learning combined with interpretability methods as a framework for analysing and interpreting deep neural networks trained on lattice simulation data. Information about a theory's phase structure can be extracted from networks performing a pretext task on the field configurations, such as action parameter regression. We aim to identify new observables or structures characterising the dynamics of strongly correlated systems.
Towards Novel Insights in Lattice Field Theory with Explainable Machine Learning - S. Blücher, L. Kades, J.M. Pawlowski, N. Strodthoff, J.M. Urban - arXiv:2003.01504 [hep-lat] - doi:10.1103/PhysRevD.101.094507
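As a minimal illustration of such a pretext task (not the network or dataset from the paper), the sketch below regresses an action-like parameter from toy Gaussian "configurations" whose fluctuation scale encodes it. The free-field stand-in, the hand-picked feature, and the linear fit are all simplifying assumptions in place of an actual deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_configs(m2, n=200, L=16):
    # Toy stand-in for lattice configurations: Gaussian samples whose
    # variance depends on the action parameter m2 (here <phi^2> = 1/m2).
    return rng.normal(scale=1.0 / np.sqrt(m2), size=(n, L, L))

# Pretext task: regress the action parameter from the configurations.
params = np.linspace(1.0, 4.0, 13)
X, y = [], []
for m2 in params:
    cfgs = sample_configs(m2)
    # Hand-picked feature: the spatially averaged phi^2 per configuration.
    X.append((cfgs**2).mean(axis=(1, 2)))
    y.append(np.full(cfgs.shape[0], m2))
X = np.concatenate(X)[:, None]
y = np.concatenate(y)

# Linear least-squares fit of m2 against 1/<phi^2>; in this toy model the
# relation is exact up to sampling noise, so the slope should be near 1.
design = np.c_[1.0 / X, np.ones_like(X)]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
print(round(float(np.abs(pred - y).mean()), 3))
```

In a real application the feature extraction is learned by the network, and interpretability methods are then used to identify which physical structures it relies on.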
Causal machine learning aims to infer causal connections from data in order to build causal models. These would facilitate genuine reasoning and the consideration of alternative "what-if" scenarios, commonly called counterfactuals. By investigating causal models, we hope to start a journey towards general artificial intelligence. Since causal connections can only be guessed in real-world scenarios, where hidden variables may always be present, a truly intelligent agent will want to test them. We therefore envision the inclusion of explorative elements from reinforcement learning into causal algorithms in the long run. A general artificial intelligence could also imitate and reason about other intelligent agents in order to learn faster, as humans do.
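As a toy illustration of why causal models differ from purely statistical ones (all variables and coefficients below are invented for the example), one can compare the observational regression slope with the interventional slope in a small structural causal model containing a confounder:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy structural causal model with a confounder Z:
#   Z ~ N(0,1),  X = Z + noise,  Y = 2*X + 3*Z + noise.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 2 * x + 3 * z + rng.normal(size=n)

# Observational estimate: the slope of Y on X mixes the causal
# effect (2) with the confounding path through Z.
obs_slope = np.cov(x, y)[0, 1] / np.var(x)

# Interventional estimate: do(X = x) cuts the Z -> X arrow, so X is
# resampled independently of Z before re-evaluating the equation for Y.
x_do = rng.normal(size=n)
y_do = 2 * x_do + 3 * z + rng.normal(size=n)
do_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

print(round(float(obs_slope), 2), round(float(do_slope), 2))
```

The observational slope comes out near 3.5 while the interventional slope recovers the true causal coefficient 2, which is exactly the gap a causal model is meant to close.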
We investigate approaches to efficiently encode quantum systems as well as lattice field theories with generative deep learning models that can be used to approximate probability distributions. Examples include generative adversarial networks (GANs) as well as normalizing flows. These unsupervised machine learning methods aid our understanding of strongly correlated systems and allow access to otherwise prohibitively expensive thermodynamic observables. Furthermore, they can accelerate or replace traditional Monte Carlo simulations.
Flow-based sampling for fermionic lattice field theories - M.S. Albergo, G. Kanwar, S. Racanière, D.J. Rezende, J.M. Urban, D. Boyda, K. Cranmer, D.C. Hackett, P.E. Shanahan - arXiv:2106.05934 [hep-lat]
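The change-of-variables machinery behind flow-based sampling can be sketched with a single affine map and a one-dimensional toy action, rather than a trained flow on a real lattice theory (the action, its coupling, and the map parameters are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "action" for a single-site field: S(phi) = phi^2/2 + lam*phi^4,
# defining the target density p(phi) ∝ exp(-S(phi)).
lam = 0.1
def action(phi):
    return 0.5 * phi**2 + lam * phi**4

# Minimal normalizing "flow": one affine map phi = a*z + b applied to a
# standard-normal base. Its exact log-density follows from the
# change-of-variables formula: log q(phi) = log N(z) - log|a|.
a, b = 0.9, 0.0
z = rng.normal(size=50_000)
phi = a * z + b
log_q = -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(abs(a))

# Importance weights w ∝ exp(-S(phi)) / q(phi) correct expectation
# values to the target distribution, here the second moment <phi^2>.
log_w = -action(phi) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()
phi2 = float((w * phi**2).sum())
print(round(phi2, 3))
```

A trained flow composes many such invertible maps and accumulates their log-Jacobians the same way; the exactness of log q is what makes reweighting or Metropolis corrections to the true lattice distribution possible.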
We explore Gaussian processes and neural networks for the reconstruction of spectral functions from imaginary time Green’s functions, a classic ill-conditioned inverse problem. Our ansatz is based on an optimization framework where physics knowledge is encoded in the prior or training data. We investigate this reconstruction approach with the goal of calculating real-time quantities, such as transport coefficients, for QCD and beyond.
Spectral Reconstruction with Deep Neural Networks - L. Kades, J.M. Pawlowski, A. Rothkopf, M. Scherzer, J.M. Urban, S.J. Wetzel, N. Wink, F.P.G. Ziegler - arXiv:1905.04305 [physics.comp-ph] - doi:10.1103/PhysRevD.102.096001
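The ill-conditioned nature of the inversion can be seen in a simple discretization; here Tikhonov regularization stands in for the physics-informed priors of the Gaussian-process and neural-network approaches, and the kernel, mock spectral function, and noise level are all assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretized inverse problem: G(tau) = sum_w K(tau, w) * rho(w),
# with a simple zero-temperature kernel K(tau, w) = exp(-w * tau).
taus = np.linspace(0.05, 3.0, 30)
ws = np.linspace(0.0, 10.0, 200)
dw = ws[1] - ws[0]
K = np.exp(-np.outer(taus, ws)) * dw

# Mock spectral function: a single Gaussian peak at w = 3.
rho_true = np.exp(-0.5 * ((ws - 3.0) / 0.8) ** 2)
G = K @ rho_true + 1e-5 * rng.normal(size=taus.size)

# The kernel is severely ill-conditioned, so a naive inverse amplifies
# noise; a small Tikhonov term stabilizes the least-squares solution.
alpha = 1e-6
rho_rec = np.linalg.solve(K.T @ K + alpha * np.eye(ws.size), K.T @ G)

peak = float(ws[np.argmax(rho_rec)])
print(round(float(np.linalg.cond(K)), 2), round(peak, 2))
```

Because only a handful of singular modes of K survive the noise, any reconstruction necessarily injects prior information; the methods above differ mainly in how that prior is encoded.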
We are interested both in the theoretical description of neuromorphic systems and in the implementation of simulation algorithms on hardware specialized for AI applications. In particular, we examine the dynamics of spiking neural networks in terms of stochastic processes, in collaboration with the Electronic Vision(s) Group. Furthermore, we investigate AI accelerators capable of performing efficient parallelized tensor operations as computing platforms for lattice calculations.
Sampling scheme for neuromorphic simulation of entangled quantum systems - S. Czischek, J.M. Pawlowski, T. Gasenzer, M. Gärttner - arXiv:1907.12844 [quant-ph] - doi:10.1103/PhysRevB.100.195120
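Spike-based sampling is often abstracted as a network of stochastic binary neurons whose firing statistics realize a Boltzmann distribution. The sketch below (a generic two-neuron toy model with invented weights, not the scheme of the paper) checks the sampled state frequencies against the exact probabilities:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stochastic binary neurons sampling p(s) ∝ exp(b·s + s^T W s / 2),
# the standard abstraction behind spike-based sampling on
# neuromorphic hardware.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([-0.5, 0.2])

s = np.zeros(2)
counts = np.zeros(4)
for step in range(200_000):
    i = step % 2
    # Membrane potential = total input; a sigmoid of it gives the
    # firing probability (one Gibbs-sampling update per step).
    u = b[i] + W[i] @ s
    s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
    counts[int(2 * s[0] + s[1])] += 1

empirical = counts / counts.sum()

# Exact Boltzmann probabilities over the four states for comparison.
states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
logp = states @ b + 0.5 * np.einsum('si,ij,sj->s', states, W, states)
exact = np.exp(logp) / np.exp(logp).sum()
print(np.round(empirical, 3), np.round(exact, 3))
```

On actual neuromorphic hardware the sigmoid update is replaced by the analog spiking dynamics of the chip, which is what makes the sampling fast and energy-efficient.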