Universität Stuttgart

Permanent URI for this community: https://elib.uni-stuttgart.de/handle/11682/1

Search Results

Now showing 1 - 4 of 4
  • Item (Open Access)
    A deep learning approach for large-scale groundwater heat pump temperature prediction
    (2022) Scheurer, Stefania
    Heating and cooling buildings is one of the most energy-intensive aspects of modern life. To minimize the impact on global warming and decelerate climate change, more efficient and carbon-emission-mitigating technologies such as open-loop groundwater heat pumps (GWHPs) for heating and cooling buildings are being adopted quickly. To guarantee their optimal use and prevent negative interactions, city planners need to optimize their placement in the urban landscape. This optimization process requires fast models that simulate the effect of a GWHP on the groundwater temperature. This work introduces a framework for groundwater temperature prediction in a large domain with multiple GWHPs. A learned local surrogate model, a convolutional neural network, predicts the local temperature field around each GWHP; the local predictions are stitched together into a global initial solution, which a physics-informed neural network (PINN) is then employed to correct. Because violations of the physical laws described by the underlying partial differential equation(s) are spatially unevenly distributed, two different methods for drawing the sampling points on which the PINN is trained are investigated and compared. This work shows that a PINN can correct the global initial solution of stitched-together local predictions in a domain with multiple GWHPs, although there is still room to improve the quality and reduce the computational time of the presented framework. The best method for drawing sampling points depends on the scenario and the placement of the GWHPs, so no general statement can be made as to which of the two methods is more suitable. This work provides a good basis for further investigation of the presented framework.
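    The idea of concentrating PINN training points where the physics is violated most can be sketched as residual-based sampling. The sketch below is a minimal illustration under stated assumptions: the residual function is a toy stand-in (in the thesis it would be the heat-transport PDE evaluated on the network prediction), and all names are hypothetical, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(points):
    # Toy stand-in for the PDE residual magnitude at each 2D point.
    # In the actual framework this would evaluate the governing
    # heat-transport equation on the network's temperature prediction.
    x, y = points[:, 0], points[:, 1]
    return np.abs(np.sin(3.0 * x) * np.cos(3.0 * y))

def uniform_sampling(n, lo=0.0, hi=1.0):
    # Baseline: draw n training points uniformly over the domain.
    return rng.uniform(lo, hi, size=(n, 2))

def residual_based_sampling(n, n_candidates=10_000):
    # Draw many candidate points, keep the n with the largest PDE
    # residual, so training concentrates where physics is violated most.
    candidates = uniform_sampling(n_candidates)
    top = np.argsort(residual(candidates))[-n:]
    return candidates[top]
```

    Which of the two strategies wins depends on where the residual mass sits, which mirrors the abstract's finding that the better sampling method depends on the scenario and GWHP placement.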
  • Item (Open Access)
    Simulating stochastic processes with variational quantum circuits
    (2022) Fink, Daniel
    Simulating future outcomes based on past observations is a key task in predictive modeling and has found application in many areas, ranging from neuroscience to the modeling of financial markets. The classical, provably optimal models for stationary stochastic processes are so-called ϵ-machines, which have the structure of a unifilar hidden Markov model and offer a minimal set of internal states. However, these models are not optimal in the quantum setting, i.e., when the models have access to quantum devices. The methods proposed so far for quantum predictive models rely either on knowledge of an ϵ-machine or on learning a classical representation thereof, which is memory inefficient since it requires exponentially many resources in the Markov order. Meanwhile, variational quantum algorithms (VQAs) are a promising approach for using near-term quantum devices to tackle problems arising from many different areas in science and technology. Within this work, we propose a VQA for learning quantum predictive models directly from data on a quantum computer. The learning algorithm is inspired by recent developments in the area of implicit generative modeling, where a kernel-based two-sample test, called maximum mean discrepancy (MMD), is used as a cost function. A major challenge of learning predictive models is to ensure that arbitrarily many time steps can be simulated accurately. For this purpose, we propose a quantum post-processing step that yields a regularization term for the cost function and penalizes models with a large set of internal states. As a proof of concept, we apply the algorithm to a stationary stochastic process and show that the regularization leads to a small set of internal states and a consistently good simulation performance over multiple future time steps, measured in the Kullback-Leibler divergence and the total variation distance.
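    The MMD cost mentioned in the abstract is a kernel two-sample statistic comparing model samples to data samples. A minimal NumPy sketch of the (biased, V-statistic) squared-MMD estimator with a Gaussian RBF kernel, purely illustrative and with an assumed bandwidth, looks like this:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    # Biased V-statistic estimate of MMD^2 between sample sets X and Y:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
    m, n = len(X), len(Y)
    kxx = sum(gaussian_kernel(a, b, sigma) for a in X for b in X) / (m * m)
    kyy = sum(gaussian_kernel(a, b, sigma) for a in Y for b in Y) / (n * n)
    kxy = sum(gaussian_kernel(a, b, sigma) for a in X for b in Y) / (m * n)
    return kxx + kyy - 2.0 * kxy
```

    When both sample sets come from the same distribution the statistic vanishes, which is what makes it usable as a training cost: the variational circuit's output distribution is pushed toward the empirical data distribution.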
  • Item (Open Access)
    Prompt-based continual learning for visual question answering
    (2024) Ostertag, Magnus
    In an ever-evolving world, Continual Learning (CL) strives to enable a costly trained model to learn new tasks without forgetting previously acquired knowledge. This work critically examines current CL benchmarks for Visual Question Answering (VQA), identifying significant shortcomings in their construction that introduce bias. To address these issues, we propose a new CL-VQA benchmark based on GQA, designed to be incremental in both the language and the visual modality. Combined with training on only one modality, it can offer rich new diagnostics for a model. Additionally, we extend DualPrompt, a prompt-based CL method, to the multi-modal domain. Using Dark Experience Replay as a baseline, we evaluate its performance against the new benchmark.