Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-11869
Full metadata record
DC Field                   Value                                                      Language
dc.contributor.author      Braun, Johannes Frederic                                   -
dc.date.accessioned        2022-01-11T15:24:15Z                                       -
dc.date.available          2022-01-11T15:24:15Z                                       -
dc.date.issued             2021                                                       de
dc.identifier.other        1786150832                                                 -
dc.identifier.uri          http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-118868   de
dc.identifier.uri          http://elib.uni-stuttgart.de/handle/11682/11886            -
dc.identifier.uri          http://dx.doi.org/10.18419/opus-11869                      -
dc.description.abstract    The intention of this thesis is to evaluate the high-performance machine learning framework JAX. In the course of this work, a physics-informed neural network that solves Burgers' equation is implemented. This problem is chosen because it is a well-known and well-researched numerical problem and thus allows for good comparability. First, a basic version of the physics-informed neural network is created with Flax, a neural-network library for the JAX ecosystem. This version is then optimized with the tools JAX offers. Afterwards, an SPMD version of the physics-informed neural network is implemented, in which multiple graphics processing units are utilized during training. Additionally, the physics-informed neural network is extended to predict the parameters of the partial differential equation underlying Burgers' equation while still learning to approximate its solution. The optimized basic physics-informed neural network and the parameter-estimating variant achieve promising results with JAX. The outcome for the SPMD physics-informed neural network is unsatisfactory, as it yields no improvement over the basic version; this may stem from the small number of data points used per iteration, among other factors discussed in this thesis. A caveat must also be voiced: the documentation of JAX and Flax is visibly a work in progress, so many crucial features have to be discovered by trial and error while working with these frameworks. Nevertheless, JAX, and hence also Flax, is considered a compelling framework for implementing high-performance neural networks, especially because of its potent automatic differentiation (Autograd) and straightforward XLA just-in-time compilation. Through these components, a performant physics-informed neural network can be set up quickly, as shown in this thesis: Autograd produces the derivatives needed for the physical loss (see the sketch following this table), while XLA just-in-time compilation drastically reduces the run time of training the physics-informed neural network. These features lead to the previously mentioned promising results for the basic physics-informed neural network and the variant that also estimates the parameters of the partial differential equation.    en
dc.language.iso            en                                                         de
dc.rights                  info:eu-repo/semantics/openAccess                          de
dc.subject.ddc             004                                                        de
dc.title                   A framework for distributed training of physics-informed neural networks using JAX   en
dc.type                    bachelorThesis                                             de
ubs.fakultaet              Informatik, Elektrotechnik und Informationstechnik         de
ubs.institut               Institut für Parallele und Verteilte Systeme               de
ubs.publikation.seiten     52                                                         de
ubs.publikation.typ        Thesis (Bachelor)                                          de
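
The abstract describes producing the derivatives for the physical loss with Autograd and accelerating training with XLA just-in-time compilation. Below is a minimal, self-contained sketch of how such a physics-informed loss for Burgers' equation can be set up in JAX. It is an illustration under assumptions (plain JAX instead of Flax, an assumed network size and viscosity NU, illustrative names throughout), not the thesis implementation.

# Minimal illustrative sketch (not the thesis code) of the physical loss for
# Burgers' equation, u_t + u * u_x - nu * u_xx = 0, built with JAX's automatic
# differentiation and compiled with XLA via jax.jit.
import jax
import jax.numpy as jnp

NU = 0.01 / jnp.pi  # assumed viscosity; a common benchmark value

def init_params(key, sizes=(2, 20, 20, 1)):
    # Small MLP mapping (t, x) to the scalar solution u(t, x).
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def u_net(params, t, x):
    h = jnp.stack([t, x])
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]  # scalar output

def residual(params, t, x):
    # All derivatives of the network come from Autograd (jax.grad).
    u = u_net(params, t, x)
    u_t = jax.grad(u_net, argnums=1)(params, t, x)
    u_x = jax.grad(u_net, argnums=2)(params, t, x)
    u_xx = jax.grad(jax.grad(u_net, argnums=2), argnums=2)(params, t, x)
    # For the inverse variant described in the abstract, NU would instead be a
    # trainable parameter estimated alongside the network weights.
    return u_t + u * u_x - NU * u_xx

@jax.jit  # XLA just-in-time compilation of the whole loss evaluation
def physics_loss(params, ts, xs):
    # Mean squared PDE residual over a batch of collocation points.
    r = jax.vmap(residual, in_axes=(None, 0, 0))(params, ts, xs)
    return jnp.mean(r ** 2)

# Example usage: evaluate the loss and its parameter gradients on random
# collocation points, as an optimizer step would.
key = jax.random.PRNGKey(0)
params = init_params(key)
k1, k2 = jax.random.split(key)
ts = jax.random.uniform(k1, (256,))                           # t in [0, 1]
xs = jax.random.uniform(k2, (256,), minval=-1.0, maxval=1.0)  # x in [-1, 1]
loss = physics_loss(params, ts, xs)
grads = jax.grad(physics_loss)(params, ts, xs)

For the SPMD variant, the collocation points would additionally be sharded across devices, for example with jax.pmap around the update step; note, however, that the abstract reports no speedup from the multi-GPU version at the batch sizes used.
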
Appears in Collections: 05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Files in This Item:
File                                 Description   Size      Format
BA_Johannes_Braun_Druckversion.pdf                 3.66 MB   Adobe PDF


All resources in this repository are protected by copyright.