Authors: Braun, Johannes Frederic
Title: A framework for distributed training of physics-informed neural networks using JAX
Issue Date: 2021
Document Type: Bachelor thesis (Abschlussarbeit)
Extent: 52 pages
Abstract: This thesis evaluates the high-performance machine learning framework JAX. In the course of this work, a physics-informed neural network that solves Burgers' equation is implemented. This problem is chosen because it is a well-known and well-researched numerical problem and therefore allows for good comparability. First, a basic version of the physics-informed neural network is created with Flax, a neural-network library in the JAX ecosystem. This version is then improved with the tools JAX offers. Afterwards, an SPMD version of the physics-informed neural network is implemented, in which multiple graphics processing units are utilized during training. Additionally, the physics-informed neural network is extended to predict the parameters of the partial differential equation that constitutes Burgers' equation, while still learning to approximate its solution. For the optimized basic physics-informed neural network, and for the variant that also estimates the parameters of the partial differential equation, promising results were achieved with JAX. The outcome of the SPMD physics-informed neural network was unsatisfactory, as it did not yield any improvement over the basic version; this may stem from the small number of data points used per iteration and from further points discussed in this thesis. A caveat must also be voiced: it often becomes apparent that the documentation of JAX and Flax is a work in progress, so many crucial features have to be discovered by trial and error when working with these frameworks. Nevertheless, JAX, and hence also Flax, is considered a compelling framework for implementing high-performance neural networks, especially because of its potent Autograd and its straightforward XLA just-in-time compilation.
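The central idea the abstract describes, using JAX's Autograd to build the physical loss for Burgers' equation (u_t + u·u_x = ν·u_xx), can be sketched as follows. This is a minimal illustration, not the thesis code: the thesis uses Flax, whereas this sketch uses a hand-rolled MLP, and the network sizes, the viscosity ν = 0.01/π (the standard benchmark value), and all function names are assumptions.

```python
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 16, 16, 1)):
    """Initialize a small MLP u(t, x) -> scalar (stand-in for the Flax model)."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
        params.append((w, jnp.zeros(n_out)))
    return params

def u(params, t, x):
    """Network prediction of the solution u at a single point (t, x)."""
    h = jnp.array([t, x])
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]

NU = 0.01 / jnp.pi  # viscosity of the common Burgers' benchmark (assumed here)

def residual(params, t, x):
    # Autograd supplies every PDE derivative: u_t + u * u_x - nu * u_xx
    u_t = jax.grad(u, argnums=1)(params, t, x)
    u_x = jax.grad(u, argnums=2)(params, t, x)
    u_xx = jax.grad(jax.grad(u, argnums=2), argnums=2)(params, t, x)
    return u_t + u(params, t, x) * u_x - NU * u_xx

def physics_loss(params, ts, xs):
    """Mean squared PDE residual over a batch of collocation points."""
    r = jax.vmap(residual, in_axes=(None, 0, 0))(params, ts, xs)
    return jnp.mean(r ** 2)
```

Nesting `jax.grad` as shown is how second derivatives such as u_xx are obtained; `jax.vmap` then evaluates the residual over all collocation points without an explicit Python loop.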
Through these components, a performant physics-informed neural network can be set up quickly, as shown in this thesis. Autograd aids in creating the gradients required for the physical loss, while XLA just-in-time compilation yields drastic improvements to the runtime of the training of the physics-informed neural network. These features lead to the previously mentioned promising results for the basic physics-informed neural network and for the variant that also estimates the parameters of the partial differential equation.
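The jit pattern credited here for the runtime gains can be illustrated with a compiled training step: the loss, its gradients, and the parameter update are traced once and compiled by XLA, so subsequent calls run as a single fused computation. This is a hedged sketch, not the thesis implementation: `loss_fn` is a toy placeholder for the full PINN loss, and plain SGD stands in for whichever optimizer the thesis actually used.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    """Placeholder loss; the thesis would use the full PINN loss here."""
    t, x, target = batch
    pred = params["w"] * t + params["b"] * x  # toy linear model for illustration
    return jnp.mean((pred - target) ** 2)

@jax.jit  # the whole step is compiled by XLA after the first call
def train_step(params, batch, lr=1e-3):
    # value_and_grad returns the loss and its gradients in one pass (Autograd)
    loss, grads = jax.value_and_grad(loss_fn)(params, batch)
    # simple SGD update applied over the whole parameter pytree
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss
```

For the SPMD variant the abstract mentions, the same step would be replicated across devices (for example with `jax.pmap` in the JAX versions of that era), with each GPU processing a shard of the batch.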
Appears in Collections:05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Files in This Item:
File: BA_Johannes_Braun_Druckversion.pdf (3.66 MB, Adobe PDF)

Items in OPUS are protected by copyright, with all rights reserved, unless otherwise indicated.