05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6
Item Open Access
Additively manufactured transverse flux machine components with integrated slits for loss reduction (2022)
Kresse, Thomas; Schurr, Julian; Lanz, Maximilian; Kunert, Torsten; Schmid, Martin; Parspour, Nejila; Schneider, Gerhard; Goll, Dagmar

Laser powder bed fusion (L-PBF) was used to produce stator half-shells of a transverse flux machine from pure iron (99.9% Fe). In order to reduce iron losses in the bulk components, radially extending slits with a nominal width of 150 and 300 µm, respectively, were integrated during manufacturing. The components were subjected to a suitable heat treatment. In addition to a microscopic examination of the slit quality, the iron losses were also measured using both a commercial and a self-developed measurement setup. The investigations showed that the iron losses can be reduced by up to 49% due to the integrated slits and the heat treatment.

Item Open Access
Adopting microservices and DevOps in the cyber-physical systems domain : a rapid review and case study (2022)
Fritzsch, Jonas; Bogner, Justus; Haug, Markus; Franco da Silva, Ana Cristina; Rubner, Carolin; Saft, Matthias; Sauer, Horst; Wagner, Stefan

The domain of cyber-physical systems (CPS) has recently seen strong growth, for example, due to the rise of the Internet of Things (IoT) in industrial domains, commonly referred to as “Industry 4.0.” However, CPS challenges like the strong hardware focus can impact modern software development practices, especially in the context of modernizing legacy systems. While microservices and DevOps have been widely studied for enterprise applications, there is insufficient coverage for the CPS domain. Our goal is therefore to analyze the peculiarities of such systems regarding challenges and practices for using and migrating towards microservices and DevOps.
We conducted a rapid review based on 146 scientific papers and subsequently validated our findings in an interview-based case study with nine CPS professionals in different business units at Siemens AG. The combined results picture the specifics of microservices and DevOps in the CPS domain. While several differences were revealed that may require adapted methods, many challenges and practices are shared with typical enterprise applications. Our study supports CPS researchers and practitioners with a summary of challenges, practices to address them, and research opportunities.

Item Open Access
Advances in clinical voice quality analysis with VOXplot (2023)
Barsties von Latoszek, Ben; Mayer, Jörg; Watts, Christopher R.; Lehnert, Bernhard

Background: Voice quality can be evaluated perceptually in standard clinical practice, complemented by acoustic evaluation of digital voice recordings to validate and further interpret perceptual judgments. The goal of the present study was to determine the strongest acoustic voice quality parameters for perceived hoarseness and breathiness when analyzing the sustained vowel [a:] using a new clinical acoustic tool, the VOXplot software. Methods: A total of 218 voice samples of individuals with and without voice disorders were subjected to perceptual and acoustic analyses. Overall, 13 single acoustic parameters were included to determine validity aspects in relation to perceptions of hoarseness and breathiness. Results: Four single acoustic measures could be clearly associated with perceptions of hoarseness or breathiness. For hoarseness, the harmonics-to-noise ratio (HNR) and the pitch perturbation quotient with a smoothing factor of five periods (PPQ5), and, for breathiness, the smoothed cepstral peak prominence (CPPS) and the glottal-to-noise excitation ratio (GNE) were shown to be highly valid, with a significant difference being demonstrated for each of the other perceptual voice quality aspects.
Conclusions: Two acoustic measures, the HNR and the PPQ5, were both strongly associated with perceptions of hoarseness and were able to discriminate hoarseness from breathiness with good confidence. Two other acoustic measures, the CPPS and the GNE, were both strongly associated with perceptions of breathiness and were able to discriminate breathiness from hoarseness with good confidence.

Item Open Access
All-inorganic CsPbI2Br perovskite solar cells with thermal stability at 250 °C and moisture-resilience via polymeric protection layers (2025)
Roy, Rajarshi; Byranvand, Mahdi Malekshahi; Zohdi, Mohamed Reza; Magorian Friedlmeier, Theresa; Das, Chittaranjan; Hempel, Wolfram; Zuo, Weiwei; Kedia, Mayank; Rendon, Jose Jeronimo; Boehringer, Stephan; Hailegnanw, Bekele; Vorochta, Michael; Mehl, Sascha; Rai, Monika; Kulkarni, Ashish; Mathur, Sanjay; Saliba, Michael

All-inorganic perovskites, such as CsPbI2Br, have emerged as promising compositions due to their enhanced thermal stability. However, they face significant challenges due to their susceptibility to humidity. In this work, CsPbI2Br perovskite is treated with poly(3-hexylthiophene-2,5-diyl) (P3HT) during crystallization, resulting in significant stability improvements against thermal, moisture, and steady-state operation stressors. The perovskite solar cell retains ∼90% of its initial efficiency at ∼60% relative humidity (RH) for 30 min, which is among the most stable all-inorganic perovskite devices to date under such harsh conditions. Furthermore, the P3HT treatment ensures high thermal stress tolerance at 250 °C for over 5 h. In addition to the stability enhancements, the champion P3HT-treated device shows a higher power conversion efficiency (PCE) of 13.5% compared to 12.7% (reference) with stabilized power output (SPO) for 300 s.
In addition, after 7 days of shelf life in ambient conditions, the P3HT-protected devices retain ∼75% of their initial efficiency, compared to ∼28% for the unprotected devices.

Item Open Access
All-perovskite tandem solar cells : from fundamentals to technological progress (2024)
Lim, Jaekeun; Park, Nam-Gyu; Seok, Sang Il; Saliba, Michael

Organic-inorganic perovskite materials have gradually progressed from single-junction solar cells to tandem (double-junction) or even multi-junction (triple-junction) solar cells as all-perovskite tandem solar cells (APTSCs). Perovskites have numerous advantages: (1) tunable optical bandgaps; (2) low cost, e.g. via solution processing, inexpensive precursors, and compatibility with many thin-film processing technologies; (3) scalability and light weight; and (4) eco-friendliness related to low CO2 emissions. However, APTSCs face challenges regarding stability caused by Sn2+ oxidation in narrow-bandgap perovskites, low performance due to the Voc deficit in the wide-bandgap range, non-standardisation of charge recombination layers, and challenging thin-film deposition, as each layer must be nearly perfectly homogeneous. Here, we discuss the fundamentals of APTSCs and the technological progress in constructing each layer of the all-perovskite stacks. Furthermore, the theoretical power conversion efficiency (PCE) limit of APTSCs is discussed using simulations.

Item Open Access
The aluminum standard : using generative Artificial Intelligence tools to synthesize and annotate non-structured patient data (2024)
Diaz Ochoa, Juan G.; Mustafa, Faizan E.; Weil, Felix; Wang, Yi; Kama, Kudret; Knott, Markus

Background. Medical narratives are fundamental to the correct identification of a patient’s health condition, not only because they describe the patient’s situation but also because they contain relevant information about the patient’s context and the evolution of their health state. Narratives are usually vague and cannot be categorized easily.
On the other hand, once the patient’s situation is correctly identified based on a narrative, it is then possible to map the patient’s situation onto precise, machine-readable classification schemas and ontologies. To this end, language models can be trained to read and extract elements from these narratives. However, the main problem is the lack of data for model identification and model training in languages other than English. First, gold-standard annotations are usually not available due to the high level of data protection for patient data. Second, gold-standard annotations (if available) are difficult to access. Alternative available data, like MIMIC (Sci Data 3:1, 2016), are written in English and cover specific patient conditions like intensive care. Thus, when model training is required for other types of patients, like oncology (and not intensive care), this could lead to bias. To facilitate clinical narrative model training, a method for creating high-quality synthetic narratives is needed. Method. We devised workflows based on generative AI methods to synthesize narratives in the German language to avoid the disclosure of patients’ health data. Since we required highly realistic narratives, we generated prompts, written with high-quality medical terminology, asking for clinical narratives containing both a main disease and a co-disease. The frequency distribution of both the main and co-diseases was extracted from the hospital’s structured data, such that the synthetic narratives reflect the disease distribution in the patient cohort. In order to validate the quality of the synthetic narratives, we annotated them to train a Named Entity Recognition (NER) algorithm. According to our assumptions, the validation of this system implies that the synthesized data used for its training are of acceptable quality. Result.
We report precision, recall and F1 score for the NER model, while also considering metrics that take into account both exact and partial entity matches. Trained models are cautious, with a precision of up to 0.8 for the Entity Type match metric and an F1 score of 0.3. Conclusion. Despite its inherent limitations, this technology has the potential to enable data interoperability by using encoded diseases across languages and regions without compromising data safety. Additionally, it facilitates the synthesis of unstructured patient data. In this way, the identification and training of models can be accelerated. We believe that this method may be able to generate discharge letters for any combination of main and co-diseases, which would significantly reduce the amount of time healthcare professionals spend writing these letters.

Item Open Access
AmericasNLI : machine translation and natural language inference systems for Indigenous languages of the Americas (2022)
Kann, Katharina; Ebrahimi, Abteen; Mager, Manuel; Oncevay, Arturo; Ortega, John E.; Rios, Annette; Fan, Angela; Gutierrez-Vasques, Ximena; Chiruzzo, Luis; Giménez-Lugo, Gustavo A.; Ramos, Ricardo; Meza Ruiz, Ivan Vladimir; Mager, Elisabeth; Chaudhary, Vishrav; Neubig, Graham; Palmer, Alexis; Coto-Solano, Rolando; Vu, Ngoc Thang

Little attention has been paid to the development of human language technology for truly low-resource languages, i.e., languages with limited amounts of digitally available text data, such as Indigenous languages. However, it has been shown that pretrained multilingual models are able to perform crosslingual transfer in a zero-shot setting even for low-resource languages which are unseen during pretraining. Yet, prior work evaluating performance on unseen languages has largely been limited to shallow token-level tasks. It remains unclear if zero-shot learning of deeper semantic tasks is possible for unseen languages.
To explore this question, we present AmericasNLI, a natural language inference dataset covering 10 Indigenous languages of the Americas. We conduct experiments with pretrained models, exploring zero-shot learning in combination with model adaptation. Furthermore, as AmericasNLI is a multiway parallel dataset, we use it to benchmark the performance of different machine translation models for those languages. Finally, using a standard transformer model, we explore translation-based approaches for natural language inference. We find that the zero-shot performance of pretrained models without adaptation is poor for all languages in AmericasNLI, but model adaptation via continued pretraining results in improvements. All machine translation models are rather weak, but, surprisingly, translation-based approaches to natural language inference outperform all other models on that task.

Item Open Access
Analysis of political debates through newspaper reports : methods and outcomes (2020)
Lapesa, Gabriella; Blessing, Andre; Blokker, Nico; Dayanik, Erenay; Haunss, Sebastian; Kuhn, Jonas; Padó, Sebastian

Discourse network analysis is an emerging development in political science which analyzes political debates in terms of bipartite actor/claim networks. It aims at understanding the structure and temporal dynamics of major political debates as instances of politicized democratic decision making. We discuss how such networks can be constructed on the basis of large collections of unstructured text, namely newspaper reports. We sketch a hybrid methodology of manual analysis by domain experts complemented by machine learning and exemplify it on a case study of the German public debate on immigration in the year 2015. The first half of our article sketches the conceptual building blocks of discourse network analysis and demonstrates its application.
The second half discusses the potential of NLP methods to support the creation of discourse network datasets.

Item Open Access
Analyzing the influence of hyper-parameters and regularizers of topic modeling in terms of Renyi entropy (2020)
Koltcov, Sergei; Ignatenko, Vera; Boukhers, Zeyd; Staab, Steffen

Topic modeling is a popular technique for clustering large collections of text documents. A variety of different types of regularization is implemented in topic modeling. In this paper, we propose a novel approach for analyzing the influence of different regularization types on the results of topic modeling. Based on Renyi entropy, this approach is inspired by concepts from statistical physics, where an inferred topical structure of a collection can be considered an information statistical system residing in a non-equilibrium state. By testing our approach on four models (Probabilistic Latent Semantic Analysis (pLSA), Additive Regularization of Topic Models (BigARTM), Latent Dirichlet Allocation (LDA) with Gibbs sampling, and LDA with variational inference (VLDA)), we first show that the minimum of Renyi entropy coincides with the “true” number of topics, as determined in two labelled collections. Simultaneously, we find that the Hierarchical Dirichlet Process (HDP) model, a well-known approach for topic number optimization, fails to detect this optimum. Next, we demonstrate that large values of the regularization coefficient in BigARTM significantly shift the entropy minimum away from the optimal topic number, an effect that is not observed for the hyper-parameters in LDA with Gibbs sampling.
We conclude that regularization may introduce unpredictable distortions into topic models, which requires further research.

Item Open Access
Application performance management : measuring and optimizing the digital customer experience (Troisdorf : SIGS DATACOM GmbH, 2018)
Hoorn, André van; Siegl, Stefan

Nowadays, the success of most companies is determined by the quality of their IT services and application systems. To make sure that application systems provide the expected quality of service, it is crucial to have up-to-date information about the system and the user experience to detect problems and to be able to solve them effectively. Application performance management (APM) is a core IT operations discipline that aims to achieve an adequate level of performance during operations. APM comprises methods, techniques, and tools for i) continuously monitoring the state of an application system and its usage, as well as for ii) detecting, diagnosing, and resolving performance-related problems using the monitored data. This book provides an introduction by covering a common conceptual foundation for APM. On top of this common foundation, we introduce today’s tooling landscape and highlight current challenges and directions of this discipline.

Item Open Access
Assessment of overload capabilities of power transformers by thermal modelling (2011)
Schmidt, Nicolas; Tenbohlen, Stefan; Skrzypek, Raimund; Dolata, Bartek

This contribution presents an approach to determine the overload capabilities of oil-cooled power transformers depending on the ambient temperature. For this purpose, the investigated method introduces a simplified, empirically based thermal model that predicts changes in oil temperature with high accuracy. This model considers the entire transformer as a single, homogeneously tempered body with a certain thermal capacity.
All electrical losses are treated as an input of equally distributed heat and assumed to be the sum of the load and no-load losses given by the transformer design. In contrast to earlier approaches, the heat exchange with the surroundings is modelled as a complex function depending first of all on the temperature difference between the transformer and its surroundings. Furthermore, the loading rate, material properties, temperature levels, and emerging temperature gradients are taken into account as influencing factors determining the heat exchange. To display the behaviour of a specific transformer, the model employs several empirical factors. For the determination of these empirical factors, an evaluation period of two to four representative weeks of transformer operation is found to be sufficient. To validate the created model and test its operational reliability, measurement data from several ONAN and ONAF transformers are consulted. These data sets comprise the top-oil and ambient temperatures as well as the loading rate and the status of the cooling system. Furthermore, the corresponding nameplate data are integrated. Following the calculation of the top-oil temperature, the maximum constant loading rate resulting in a hot-spot temperature below the critical level is determined based upon IEC 60076-7 [1]. Finally, a characteristic linear function for each investigated transformer, displaying the maximum loading rate depending solely on the ambient temperature, is derived.
For the investigated ONAN and ONAF transformers within a power range of 31.5-63 MVA, significant overload potential could be disclosed.

Item Open Access
Assessment of UHF frequency range for failure classification in power transformers (2024)
Schiewaldt, Karl; de Castro, Bruno Albuquerque; Ardila-Rey, Jorge Alfredo; Franchin, Marcelo Nicoletti; Andreoli, André Luiz; Tenbohlen, Stefan

Ultrahigh-frequency (UHF) sensing is one of the most promising techniques for assessing the quality of power transformer insulation systems due to its capability to identify failures like partial discharges (PDs) by detecting the emitted UHF signals. However, there are still uncertainties regarding the frequency range that should be evaluated in measurements. For example, most publications have stated that UHF emissions range up to 3 GHz. However, a Cigré brochure revealed that the optimal spectrum is between 100 MHz and 1 GHz, and more recently, a study indicated that the optimal frequency range is between 400 MHz and 900 MHz. Since different faults require different maintenance actions, both science and industry have been developing systems that allow for failure-type identification. Hence, it is important to note that bandwidth reduction may impair classification systems, especially those that are frequency-based. This article combines three operational conditions of a power transformer (healthy state, electric arc failure, and partial discharges on a bushing) with three different self-organized maps to carry out failure classification: the chromatic technique (CT), principal component analysis (PCA), and the shape analysis clustering technique (SACT). For each case, the frequency content of the UHF signals was selected at three frequency bands: the full spectrum, the Cigré brochure range, and between 400 MHz and 900 MHz.
Therefore, the contributions of this work are to assess how spectrum band limitation may alter failure classification and to evaluate the effectiveness of signal processing methodologies based on the frequency content of UHF signals. An additional advantage of this work is that it does not rely on training, as is the case for some machine learning-based methods. The results indicate that the reduced frequency range was not a limiting factor for classifying the operating condition of the power transformer. Therefore, there is the possibility of using lower frequency ranges, such as from 400 MHz to 900 MHz, contributing to the development of less costly data acquisition systems. Additionally, PCA was found to be the most promising technique despite the reduction in frequency band information.

Item Open Access
Audio guide for visually impaired people based on combination of stereo vision and musical tones (2019)
Simões, Walter C. S. S.; Silva, Yuri M. L. R.; Pio, José Luiz de S.; Jazdi, Nasser; F. de Lucena, Vicente

Indoor navigation systems offer many application possibilities for people who need information about the scene and the possible fixed and mobile obstacles placed along the paths. In these systems, the main factors considered for their construction and evaluation are the level of accuracy and the delivery time of the information. However, it is also necessary to notice obstacles placed above the user’s waistline to avoid accidents and collisions. In this paper, different methodologies are combined to define a hybrid navigation model called iterative pedestrian dead reckoning (i-PDR). i-PDR combines the PDR algorithm with a linear Kalman filter to correct the location, reducing the system’s margin of error iteratively. Obstacle perception was addressed through the use of stereo vision combined with a musical sounding scheme and spoken instructions that covered an angle of 120 degrees in front of the user.
The results obtained for the margin of error and the maximum processing time are 0.70 m and 0.09 s, respectively, with obstacles at ground level and suspended obstacles detected with an accuracy equivalent to 90%.

Item Open Access
Automated imaging-based abdominal organ segmentation and quality control in 20,000 participants of the UK Biobank and German National Cohort Studies (2022)
Kart, Turkay; Fischer, Marc; Winzeck, Stefan; Glocker, Ben; Bai, Wenjia; Bülow, Robin; Emmel, Carina; Friedrich, Lena; Kauczor, Hans-Ulrich; Keil, Thomas; Kröncke, Thomas; Mayer, Philipp; Niendorf, Thoralf; Peters, Annette; Pischon, Tobias; Schaarschmidt, Benedikt M.; Schmidt, Börge; Schulze, Matthias B.; Umutle, Lale; Völzke, Henry; Küstner, Thomas; Bamberg, Fabian; Schölkopf, Bernhard; Rückert, Daniel; Gatidis, Sergios

Large epidemiological studies such as the UK Biobank (UKBB) or the German National Cohort (NAKO) provide unprecedented health-related data of the general population, aiming to better understand the determinants of health and disease. As part of these studies, Magnetic Resonance Imaging (MRI) is performed in a subset of participants, allowing for phenotypical and functional characterization of different organ systems. Due to the large amount of imaging data, automated image analysis is required, which can be performed using deep learning methods, e.g. for automated organ segmentation. In this paper we describe a computational pipeline for automated segmentation of abdominal organs on MRI data from 20,000 participants of UKBB and NAKO and provide results of the quality control process. We found that approx. 90% of data sets showed no relevant segmentation errors, while relevant errors occurred in a varying proportion of data sets depending on the organ of interest. Image-derived features based on automated organ segmentations showed relevant deviations of varying degree in the presence of segmentation errors.
These results show that large-scale, deep learning-based abdominal organ segmentation on MRI data is feasible with overall high accuracy, but visual quality control remains an important step to ensure the validity of downstream analyses in large epidemiological imaging studies.

Item Open Access
Availability analysis of redundant and replicated cloud services with Bayesian networks (2023)
Bibartiu, Otto; Dürr, Frank; Rothermel, Kurt; Ottenwälder, Beate; Grau, Andreas

Due to the growing complexity of modern data centers, failures are not uncommon anymore. Therefore, fault tolerance mechanisms play a vital role in fulfilling availability requirements. Multiple availability models have been proposed to assess compute systems, among which Bayesian network models have gained popularity in industry and research due to their powerful modeling formalism. In particular, this work focuses on assessing the availability of redundant and replicated cloud computing services with Bayesian networks. So far, research on availability has only focused on modeling either infrastructure or communication failures in Bayesian networks, but has not considered both simultaneously. This work addresses practical modeling challenges of assessing the availability of large-scale redundant and replicated services with Bayesian networks, including cascading and common-cause failures from the surrounding infrastructure and communication network. In order to ease the modeling task, this paper introduces a high-level modeling formalism to build such a Bayesian network automatically. Performance evaluations demonstrate the feasibility of the presented Bayesian network approach to assess the availability of large-scale redundant and replicated services.
This model is not only applicable in the domain of cloud computing; it can also be applied to general cases of local and geo-distributed systems.

Item Open Access
Avoiding shortcut-learning by mutual information minimization in deep learning-based image processing (2023)
Fay, Louisa; Cobos, Erick; Yang, Bin; Gatidis, Sergios; Küstner, Thomas

Item Open Access
Band-gap and strain engineering in GeSn alloys using post-growth pulsed laser melting (2022)
Steuer, Oliver; Schwarz, Daniel; Oehme, Michael; Schulze, Jörg; Mączko, Herbert; Kudrawiec, Robert; Fischer, Inga A.; Heller, René; Hübner, René; Khan, Muhammad Moazzam; Georgiev, Yordan M.; Zhou, Shengqiang; Helm, Manfred; Prucnal, Slawomir

The pseudomorphic growth of Ge1-xSnx on Ge causes in-plane compressive strain, which degrades the superior properties of Ge1-xSnx alloys. Therefore, efficient strain engineering is required. In this article, we present strain and band-gap engineering in Ge1-xSnx alloys grown on a Ge virtual substrate using post-growth nanosecond pulsed laser melting (PLM). Micro-Raman spectroscopy and X-ray diffraction (XRD) show that the initial in-plane compressive strain is removed. Moreover, for PLM energy densities higher than 0.5 J cm-2, the Ge0.89Sn0.11 layer becomes tensile strained. Simultaneously, as revealed by Rutherford backscattering spectrometry, cross-sectional transmission electron microscopy investigations, and XRD, the crystalline quality and Sn distribution in the PLM-treated Ge0.89Sn0.11 layers are only slightly affected. Additionally, the change of the band structure after PLM is confirmed by low-temperature photoreflectance measurements.
The presented results prove that post-growth ns-range PLM is an effective way of band-gap and strain engineering in highly mismatched alloys.

Item Open Access
Behavior-aware pedestrian trajectory prediction in ego-centric camera views with spatio-temporal ego-motion estimation (2023)
Czech, Phillip; Braun, Markus; Kreßel, Ulrich; Yang, Bin

With the ongoing development of automated driving systems, the crucial task of predicting pedestrian behavior is attracting growing attention. The prediction of future pedestrian trajectories from the ego-vehicle camera perspective is particularly challenging due to the dynamically changing scene. Therefore, we present Behavior-Aware Pedestrian Trajectory Prediction (BA-PTP), a novel approach to pedestrian trajectory prediction for ego-centric camera views. It incorporates behavioral features extracted from real-world traffic scene observations, such as the body and head orientation of pedestrians as well as their pose, in addition to positional information from body and head bounding boxes. For each input modality, we employed independent encoding streams that are combined through a modality attention mechanism. To account for the ego-motion of the camera in an ego-centric view, we introduced the Spatio-Temporal Ego-Motion Module (STEMM), a novel approach to ego-motion prediction. Compared to related work, it utilizes spatial goal points of the ego-vehicle that are sampled from its intended route. We experimentally validated the effectiveness of our approach using two datasets for pedestrian behavior prediction in urban traffic scenes. Based on ablation studies, we show the advantages of incorporating different behavioral features for pedestrian trajectory prediction in the image plane. Moreover, we demonstrate the benefit of integrating STEMM into our pedestrian trajectory prediction method, BA-PTP.
BA-PTP achieves state-of-the-art performance on the PIE dataset, outperforming prior work by 7% in MSE-1.5s and CMSE as well as 9% in CFMSE.

Item Open Access
Benchmarking the performance of portfolio optimization with QAOA (2022)
Brandhofer, Sebastian; Braun, Daniel; Dehn, Vanessa; Hellstern, Gerhard; Hüls, Matthias; Ji, Yanjun; Polian, Ilia; Bhatia, Amandeep Singh; Wellens, Thomas

We present a detailed study of portfolio optimization using different versions of the quantum approximate optimization algorithm (QAOA). For a given list of assets, the portfolio optimization problem is formulated as quadratic binary optimization constrained on the number of assets contained in the portfolio. QAOA has been suggested as a possible candidate for solving this problem (and similar combinatorial optimization problems) more efficiently than classical computers in the case of a sufficiently large number of assets. However, the practical implementation of this algorithm requires a careful consideration of several technical issues, not all of which are discussed in the present literature. The present article intends to fill this gap and thereby provide the reader with a useful guide for applying QAOA to the portfolio optimization problem (and similar problems). In particular, we discuss several possible choices of the variational form and of different classical algorithms for finding the corresponding optimized parameters. With a view to the application of QAOA on error-prone NISQ hardware, we also analyse the influence of statistical sampling errors (due to a finite number of shots) and of gate and readout errors (due to imperfect quantum hardware).
Finally, we define a criterion for distinguishing between ‘easy’ and ‘hard’ instances of the portfolio optimization problem.

Item Open Access
‘Better see a doctor?’ Status quo of symptom checker apps in Germany : a cross-sectional survey with a mixed-methods design (CHECK.APP) (2024)
Wetzel, Anna-Jasmin; Koch, Roland; Koch, Nadine; Klemmt, Malte; Müller, Regina; Preiser, Christine; Rieger, Monika; Rösel, Inka; Ranisch, Robert; Ehni, Hans-Jörg; Joos, Stefanie

Background: Symptom checker apps (SCAs) offer symptom classification and low-threshold self-triage for laypeople. They are already in use despite their poor accuracy and concerns that they may negatively affect primary care. This study assesses the extent to which SCAs are used by medical laypeople in Germany and which software is most popular. We examined associations between satisfaction with the general practitioner (GP) and SCA use, as well as between the number of GP visits and SCA use. Furthermore, we assessed the reasons for intentional non-use. Methods: We conducted a survey comprising standardised and open-ended questions. Quantitative data were weighted, and open-ended responses were examined using thematic analysis. Results: This study included 850 participants. The SCA usage rate was 8%, and approximately 50% of SCA non-users were uninterested in trying SCAs. The most commonly used SCAs were NetDoktor and Ada. Surprisingly, SCAs were most frequently used in the age group of 51-55 years. No significant associations were found between SCA usage and satisfaction with the GP, or between the number of GP visits and SCA usage. Thematic analysis revealed skepticism regarding the results and recommendations of SCAs, as well as discrepancies between users’ requirements and the features of the apps. Conclusion: SCAs are still widely unknown in the German population and have been sparsely used so far. Many participants were not interested in trying SCAs, and we found no positive or negative associations between SCA use and primary care.