05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 160
  • Item (Open Access)
    Eine Methode zum Verteilen, Adaptieren und Deployment partnerübergreifender Anwendungen
    (2022) Wild, Karoline; Leymann, Frank (Prof. Dr. Dr. h. c.)
    A key aspect of effective collaboration within organizations, and especially across organizations, is the integration and automation of processes. This includes the provisioning of application systems whose components are provided and managed by different partners, i.e., departments or companies. The resulting distributed, decentrally managed environment requires new provisioning concepts. The autonomy of the partners and the distribution of the components give rise to new challenges: cross-partner communication relations must be realized, and automated decentralized deployment must be enabled. In recent years, a multitude of technologies has been developed that cover all steps from modeling to provisioning and runtime management of an application. However, these technologies rely on centralized coordination of the deployment, which restricts the autonomy of the partners. Concepts are also lacking for identifying problems that result from the distribution of application components and impair the functionality of the application; this particularly concerns cross-partner communication relations. To address these challenges, this thesis introduces the DivA method for distributing, adapting, and deploying cross-partner applications. The method unifies the global and local partner activities required to provision cross-partner applications. It builds on the declarative Essential Deployment Meta Model (EDMM) and thereby enables deployment-technology-independent modeling concepts for distributing application components as well as for model analysis and adaptation.
The Split-and-Match method is presented for distributing application components across specified target environments and for selecting compatible cloud services. For deployment execution, EDMM models can be transformed into different technologies. To perform the provisioning in a fully decentralized manner, declarative and imperative technologies are combined: from the declarative EDMM models, workflows are generated that orchestrate the activities for provisioning and for exchanging data with other partners to realize cross-partner communication relations. These workflows implicitly form a deployment choreography. As the core of this thesis, a two-step pattern-based method for problem detection and model adaptation is introduced for model analysis and adaptation. To this end, the problem and context definitions are extracted from the textual pattern descriptions and formalized, enabling the automated identification of problems in EDMM models. Particular focus is placed on problems that arise from the distribution of components and prevent the realization of communication relations. The same method is also applied to select suitable concrete solution implementations for resolving the identified problems. In addition, an approach for selecting communication drivers depending on the integration middleware in use is presented, which improves the portability of application components. The concepts presented in this thesis are automated by the DivA tool. For validation, the tool is implemented prototypically and integrated into existing systems for modeling and executing the deployment of application systems.
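As a minimal sketch of the splitting idea behind the method above: components of a declarative, EDMM-like model are annotated with the partner that hosts them, the model is divided into one local model per partner, and relations that cross partner boundaries are tagged so they can later be realized as cross-partner communication relations. All component names and the model layout here are invented for illustration; this is not the DivA implementation.

```python
# Hypothetical EDMM-like model: components annotated with a hosting partner
# (names are invented for this sketch).
model = {
    "components": {
        "order-app":   {"type": "web_app",  "partner": "partnerA"},
        "order-db":    {"type": "database", "partner": "partnerA"},
        "payment-api": {"type": "service",  "partner": "partnerB"},
    },
    "relations": [
        ("order-app", "hosted_on", "order-db"),
        ("order-app", "connects_to", "payment-api"),
    ],
}

def split(model):
    """Divide the model into one local model per partner; tag relations
    whose source and target live at different partners as cross-partner."""
    local = {}
    for name, comp in model["components"].items():
        local.setdefault(comp["partner"], {"components": {}, "relations": []})
        local[comp["partner"]]["components"][name] = comp
    for src, rel, tgt in model["relations"]:
        p_src = model["components"][src]["partner"]
        p_tgt = model["components"][tgt]["partner"]
        kind = "cross-partner" if p_src != p_tgt else "local"
        # the source partner keeps the relation and must later realize it
        local[p_src]["relations"].append((src, rel, tgt, kind))
    return local

parts = split(model)
```

Each per-partner model can then be handed to that partner's own deployment technology, with the cross-partner relations marking where data exchange with other partners is required.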
  • Item (Open Access)
    Elastic parallel systems for high performance cloud computing
    (2020) Kehrer, Stefan; Blochinger, Wolfgang (Prof. Dr.)
    High Performance Computing (HPC) enables significant progress in both science and industry. Whereas traditionally parallel applications have been developed to address the grand challenges in science, as of today, they are also heavily used to speed up the time-to-result in the context of product design, production planning, financial risk management, medical diagnosis, as well as research and development efforts. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge and thus is reserved to large organizations that benefit from economies of scale. More recently, the cloud evolved into an alternative execution environment for parallel applications, which comes with novel characteristics such as on-demand access to compute resources, pay-per-use, and elasticity. Whereas the cloud has been mainly used to operate interactive multi-tier applications, HPC users are also interested in the benefits it offers. These include full control of the resource configuration based on virtualization, fast setup times by using on-demand accessible compute resources, and eliminated upfront capital expenditures due to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, which enables fine-grained control of an application's performance in terms of its execution time and efficiency as well as the related monetary costs of the computation. Whereas HPC-optimized cloud environments have been introduced by cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emergent field of High Performance Cloud Computing. In particular, the presented contributions focus on the novel opportunities and challenges related to elasticity.
First, the principles of elastic parallel systems as well as related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction to ease the implementation of elastic parallel applications. To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code in an automated manner while considering application-specific non-functional requirements. Throughout this thesis, a broad spectrum of design decisions related to the construction of elastic parallel system architectures is discussed, including proactive and reactive elasticity control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). To evaluate these contributions, extensive experimental evaluations are presented.
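To make the reactive flavor of elasticity control discussed above concrete, the toy controller below adjusts the number of processing units once per control interval based on a measured parallel efficiency and the length of the task queue. The thresholds and the doubling/halving policy are invented for this sketch and are not taken from the thesis.

```python
# Toy reactive elasticity controller: scale out while efficiency is above a
# user-defined target and work is queued; scale in when efficiency drops.
# eff_target and max_units stand in for user-defined goals.
def control_step(units, efficiency, queued_tasks,
                 eff_target=0.75, max_units=64):
    """Return the number of processing units for the next control interval."""
    if efficiency >= eff_target and queued_tasks > units:
        return min(units * 2, max_units)   # scale out: efficient and backlogged
    if efficiency < eff_target and units > 1:
        return max(units // 2, 1)          # scale in: parallel overhead dominates
    return units                           # steady state
```

A proactive controller would instead predict future load from a model before adjusting `units`; the thesis discusses both mechanisms.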
  • Item (Open Access)
    Rigorous compilation for near-term quantum computers
    (2024) Brandhofer, Sebastian; Polian, Ilia (Prof.)
    Quantum computing promises an exponential speedup for computational problems in material sciences, cryptography and drug design that are infeasible to resolve by traditional classical systems. As quantum computing technology matures, larger and more complex quantum states can be prepared on a quantum computer, enabling the resolution of larger problem instances, e.g. breaking larger cryptographic keys or modelling larger molecules accurately for the exploration of novel drugs. Near-term quantum computers, however, are characterized by large error rates, a relatively low number of qubits and a low connectivity between qubits. These characteristics impose strict requirements on the structure of quantum computations that must be incorporated by compilation methods targeting near-term quantum computers in order to ensure compatibility and yield highly accurate results. Rigorous compilation methods have been explored for addressing these requirements as they exactly explore the solution space and thus yield a quantum computation that is optimal with respect to the incorporated requirements. However, previous rigorous compilation methods demonstrate limited applicability and typically focus on one aspect of the imposed requirements, i.e. reducing the duration or the number of swap gates in a quantum computation. In this work, opportunities for improving near-term quantum computations through compilation are explored first. These compilation opportunities are included in rigorous compilation methods to investigate each aspect of the imposed requirements, i.e. the number of qubits, connectivity of qubits, duration and incurred errors. The developed rigorous compilation methods are then evaluated with respect to their ability to enable quantum computations that are otherwise not accessible with near-term quantum technology. 
Experimental results demonstrate the ability of the developed rigorous compilation methods to extend the computational reach of near-term quantum computers by generating quantum computations with a reduced requirement on the number and connectivity of qubits as well as reducing the duration and incurred errors of performed quantum computations. Furthermore, the developed rigorous compilation methods extend their applicability to quantum circuit partitioning, qubit reuse and the translation between quantum computations generated for distinct quantum technologies. Specifically, a developed rigorous compilation method exploiting the structure of a quantum computation to reuse qubits at runtime yielded a reduction in the required number of qubits by up to 5x and in the result error by up to 33%. The developed quantum circuit partitioning method optimally distributes a quantum computation to distinct separate partitions, reducing the required number of qubits by 40% and the cost of partitioning by 41% on average. Furthermore, a rigorous compilation method was developed for quantum computers based on neutral atoms that combines swap gate insertions and topology changes to reduce the impact of limited qubit connectivity on the quantum computation duration by up to 58% and on the result fidelity by up to 29%. Finally, the developed quantum circuit adaptation method enables translation between distinct quantum technologies while considering heterogeneous computational primitives with distinct characteristics, reducing the idle time of qubits by up to 87% and improving the result fidelity by up to 40%.
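The intuition behind qubit reuse can be sketched with a simple lifetime argument: if one logical qubit's last operation ends before another's first operation begins, a single physical qubit can serve both (after measurement and reset). The greedy counter below models qubit lifetimes as gate-index intervals; it illustrates only the idea, whereas the thesis solves the problem rigorously and jointly with routing constraints.

```python
# Greedy estimate of how many physical qubits suffice when qubits may be
# reused after their last gate. Lifetimes are (first, last) gate indices,
# one interval per logical qubit (an invented abstraction for this sketch).
def physical_qubits_needed(lifetimes):
    ends = []  # last gate index of the logical qubit currently on each physical qubit
    for first, last in sorted(lifetimes):
        for i, end in enumerate(ends):
            if end < first:        # that physical qubit is free again: reuse it
                ends[i] = last
                break
        else:
            ends.append(last)      # no free qubit: allocate a fresh one
    return len(ends)
```

For three logical qubits with lifetimes (0, 2), (1, 4) and (3, 5), the first physical qubit can be reused for the third interval, so two physical qubits suffice instead of three.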
  • Item (Open Access)
    Improving usability of gaze and voice based text entry systems
    (2023) Sengupta, Korok; Staab, Steffen (Prof. Dr.)
  • Item (Open Access)
    Verifikation softwareintensiver Fahrwerksysteme
    (2023) Hellhake, Dominik; Wagner, Stefan (Prof. Dr.)
    Context: The growing significance of software-based functions in modern vehicles is driving many changes in the automotive development process. In the past, a vehicle consisted of multiple Electronic Control Units (ECUs), each executing individual, mutually independent software functions. Today, by contrast, multiple ECUs form functionally coherent subsystems that implement overarching, networked software functions such as driver assistance and automated driving functions. This trend towards a highly interconnected software system creates a strong need for suitable architecture models and design methods in the development of modern vehicles. Because ECUs are developed by different development suppliers, systematic integration testing methods are additionally required to verify the correct interaction behavior of each individual ECU over the course of vehicle development. For this purpose, coupling is a widely used metric in component-based software systems for reflecting quality attributes such as understandability, reusability, modifiability, and testability. Problem statement: While coupling is a suitable metric for the quality of a software design, there are few scientific contributions on the value of coupling for the integration testing process of the system resulting from that design. Existing work on integration testing describes the stepwise integration of white-box software components using properties and metrics derived from the implementation. This dependence on source code and software structure, however, means that these methods cannot be transferred to vehicle development, since vehicle systems consist to a large extent of black-box software.
Consequently, there are also no methods for measuring test coverage or for prioritizing the tests to be performed. In practice, this means that only experience-based approaches are applied, leaving significant portions of the interaction behavior untested over the course of vehicle development. Goals: To find solutions to this problem, this thesis develops systematic, empirically evaluated testing methods that can be applied to integration testing during vehicle development. In particular, we aim to provide insight into the potential that coupling metrics offer for test-case prioritization. The goal of this thesis is to provide a recommendation for the systematic integration testing of vehicle systems based on the interaction behavior of individual ECUs. Methods: To achieve these goals, we first analyze the state of practice as currently applied at BMW for integration testing of chassis systems. We contrast this with the state of research regarding existing testing methods that can be transferred to the problem of integrating vehicle systems. Based on this set of scientifically evaluated methods, we then derive concrete procedures for measuring test coverage and for test-case prioritization. Within this thesis, both procedures are evaluated empirically based on test and defect data from a vehicle development project. Contributions: In summary, this thesis makes two contributions, which we consolidate into one central contribution. The first is a method for measuring test coverage based on the inter-component data flow of black-box components.
The definition of a data-flow classification scheme makes it possible to collect data on the use of data flows in existing test cases as well as in defects found in the various test phases. The second contribution is a correlation study between different coupling metrics and the defect distribution in a chassis system. We evaluate the coupling values of individual software interfaces as well as those of the components implementing them. Taken together, this study reflects the potential of such coupling metrics for test prioritization. The findings from these contributions are consolidated in our main contribution, a coupling-based test strategy for system integration testing. Conclusion: The contribution of this thesis connects, for the first time, the state of practice in system integration of distributed black-box software systems with the state of research on systematic approaches to software system integration. Measuring test coverage based on data flow is an effective method for this purpose, since the data flow in a system reflects the interaction behavior of its individual components. In addition, the possible interaction behavior of all components of the system can be derived from its architecture specifications. The studies on the correlation between coupling and defect distribution further reveal a moderate correlation. Accordingly, selecting test cases based on the component interactions they exercise and the coupling of those interactions is a sensible approach in practice. However, the moderate correlation also indicates that additional aspects must be considered when selecting test cases for integration testing.
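A coupling-based test prioritization of the kind evaluated above can be sketched as follows: each test case exercises a set of component interactions, each interaction carries a coupling score, and tests are ordered by the summed coupling of the interactions they cover. All ECU names, test names, and scores here are invented for illustration.

```python
# Hypothetical coupling scores per component interaction (invented values).
coupling = {
    ("ecu_brake", "ecu_esp"): 0.9,
    ("ecu_esp", "ecu_steering"): 0.6,
    ("ecu_brake", "ecu_display"): 0.2,
}

# Hypothetical mapping: test case -> interactions it exercises.
tests = {
    "T1": [("ecu_brake", "ecu_esp")],
    "T2": [("ecu_brake", "ecu_display")],
    "T3": [("ecu_brake", "ecu_esp"), ("ecu_esp", "ecu_steering")],
}

def prioritize(tests, coupling):
    """Order test cases by the total coupling of the interactions they cover."""
    def score(name):
        return sum(coupling.get(pair, 0.0) for pair in tests[name])
    return sorted(tests, key=score, reverse=True)
```

Here `T3` runs first because it covers the two most strongly coupled interactions. The moderate correlation reported in the study suggests such a ranking should be one signal among several, not the sole selection criterion.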
  • Item (Open Access)
    Models for data-efficient reinforcement learning on real-world applications
    (2021) Dörr, Andreas; Toussaint, Marc (Prof. Dr.)
    Large-scale deep Reinforcement Learning is strongly contributing to many recently published success stories of Artificial Intelligence. These techniques enabled computer systems to autonomously learn and master challenging problems, such as playing the game of Go or complex strategy games such as StarCraft at or above human level. Naturally, the question arises as to which problems could be addressed with these Reinforcement Learning technologies in industrial applications. So far, machine learning technologies based on (semi-)supervised learning create the most visible impact in industrial applications. For example, image, video or text understanding are primarily dominated by models trained and derived autonomously from large-scale data sets with modern (deep) machine learning methods. Reinforcement Learning, by contrast, deals with temporal decision-making problems and is much less commonly found in the industrial context. In these problems, current decisions and actions inevitably influence the outcome and success of a process much further down the road. This work strives to address some of the core problems that prevent the effective use of Reinforcement Learning in industrial settings. Autonomous learning of new skills is always guided by existing priors that allow for generalization from previous experience. In some scenarios, non-existing or uninformative prior knowledge can be mitigated by vast amounts of experience for a particular task at hand. Typical industrial processes are, however, operated in very restricted, tightly calibrated operating points. Naively exploring the space of possible actions or changes to the process in search of improved performance tends to be costly or even prohibitively dangerous. Therefore, one recurring subject throughout this work is the emergence of priors and model structures that allow for efficient use of all available experience data.
A promising direction is Model-Based Reinforcement Learning, which is explored in the first part of this work. This part derives an automatic tuning method for one of the most common industrial control architectures, the PID controller. By leveraging all available data about the system's behavior in learning a system dynamics model, the derived method can efficiently tune these controllers from scratch. Although we can easily incorporate all data into dynamics models, real systems expose additional problems to the dynamics modeling and learning task. Characteristics such as non-Gaussian noise, latent states, feedback control or non-i.i.d. data regularly prevent using off-the-shelf modeling tools. Therefore, the second part of this work is concerned with the derivation of modeling solutions that are particularly suited to the reinforcement learning problem. Despite the predominant focus on model-based reinforcement learning as a promising, data-efficient learning tool, this work's final part revisits model assumptions in a separate branch of reinforcement learning algorithms. Again, generalization and, therefore, efficient learning in model-based methods is primarily driven by the incorporated model assumptions (e.g., smooth dynamics), which real, discontinuous processes might heavily violate. To this end, a model-free reinforcement learning method is presented that carefully reintroduces prior model structure to facilitate efficient learning without the need for strong dynamics model priors. The methods and solutions proposed in this work are grounded in the challenges experienced when operating with real-world hardware systems. With applications on a humanoid upper-body robot and an autonomous model race car, the proposed methods are demonstrated to successfully model and master their complex behavior.
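For readers unfamiliar with the tuning target mentioned above, a discrete-time PID controller is only a few lines of code; the thesis learns the three gains (kp, ki, kd) from a learned dynamics model rather than hand-tuning them. This sketch is the textbook controller, not the thesis' tuning method.

```python
# Minimal discrete PID controller: the control signal is a weighted sum of
# the current error, its accumulated integral, and its rate of change.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        """Compute one control output for the current measurement."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Automatic tuning then amounts to searching the (kp, ki, kd) space for gains that perform well on rollouts of the learned dynamics model instead of on the costly real system.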
  • Item (Open Access)
    Evaluation and control of the value provision of complex IoT service systems
    (2022) Niedermaier, Sina; Wagner, Stefan (Prof. Dr.)
    The Internet of Things (IoT) represents an opportunity for companies to create additional consumer value by merging connected products with software-based services. The quality of the IoT service can determine whether an IoT service is consumed in the long term and whether it delivers the expected value for a consumer. Since IoT services are usually provided by distributed systems and their operations are becoming increasingly complex and dynamic, continuous monitoring and control of the value provision is necessary. The individual components of IoT service systems are usually developed and operated by specialized teams in a division of labor. With the increasing specialization of the teams, practitioners struggle to derive quality requirements based on consumer needs. Consequently, the teams often observe the behavior of “their” components in isolation, without relation to the value provision to a consumer. Inadequate monitoring and control of the value provision across the different components of an IoT system can result in quality deficiencies and a loss of value for the consumer. The goal of this dissertation is to support organizations with concepts and methods in the development and operations of IoT service systems to ensure the quality of the value provision to a consumer. By applying empirical methods, we first analyzed the challenges and applied practices in the industry as well as the state of the art. Based on the results, we refined existing concepts and approaches. To evaluate their quality in use, we conducted action research projects in collaboration with industry partners. Based on an interview study with industry experts, we have analyzed the current challenges, requirements, and applied solutions for the operations and monitoring of distributed systems in more detail. The findings of this study form the basis for further contributions of this thesis.
To support and improve communication between the specialized teams in handling quality deficiencies, we have developed a classification for system anomalies. We have applied and evaluated this classification in an action research project in industry. It allows organizations to differentiate and adapt their actions according to different classes of anomalies. Thus, quick and effective actions to ensure the value provision or minimize the loss of value can be optimized separately from actions in the context of long-term and sustainable correction of the IoT system. Moreover, the classification for system anomalies enables the organization to create feedback loops for quality improvement of the system, the IoT service, and the organization. To evaluate the delivered value of an IoT service, we decompose it into discrete workflows, so-called IoT transactions. Applying distributed tracing, the dynamic behavior of an IoT transaction can be reconstructed in a further activity and can be made “observable”. Consequently, the successful completion of a transaction and its quality can be determined by applying indicators. We have developed an approach for the systematic derivation of quality indicators. By comparing actual values determined in operations with previously defined target values, the organization is able to detect anomalies in the temporal behavior of the value provision. As a result, the value provision can be controlled with appropriate actions. The quality in use of the approach is confirmed in another action research project with an industry partner. In summary, this thesis supports organizations in quantifying the delivered value of an IoT service and controlling the value provision with effective actions. Furthermore, the trust of a consumer in the IoT service provided by an IoT system and in the organization can be maintained and further increased by applying appropriate feedback loops.
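The indicator comparison described above can be sketched in a few lines: actual values observed for an IoT transaction are compared against previously defined target ranges, and any indicator outside its range marks the transaction as anomalous. The indicator names and thresholds below are invented for illustration, not taken from the thesis.

```python
# Hypothetical quality indicators with target ranges (lo, hi) defined
# ahead of time by the organization.
targets = {
    "latency_ms": (0, 800),          # end-to-end transaction latency
    "completion_rate": (0.99, 1.0),  # share of successfully completed steps
}

def check_transaction(actuals, targets):
    """Return the indicators whose actual value violates its target range;
    a missing measurement also counts as a violation."""
    violations = []
    for name, (lo, hi) in targets.items():
        value = actuals.get(name)
        if value is None or not (lo <= value <= hi):
            violations.append(name)
    return violations
```

In the approach of the thesis, the actual values would come from distributed traces that reconstruct the dynamic behavior of an IoT transaction; a non-empty violation list would then trigger the appropriate class of corrective action.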
  • Item (Open Access)
    Data-integrated methods for performance improvement of massively parallel coupled simulations
    (2022) Totounferoush, Amin; Schulte, Miriam (Prof. Dr.)
    This thesis presents data-integrated methods to improve the computational performance of partitioned multi-physics simulations, particularly on highly parallel systems. Partitioned methods allow using available single-physics solvers and well-validated numerical methods for multi-physics simulations by decomposing the domain into smaller sub-domains. Each sub-domain is solved by a separate solver, and an external library is incorporated to couple the solvers. This significantly reduces the software development cost and enhances flexibility, while it introduces new challenges that must be addressed carefully. These challenges include, but are not limited to, efficient data communication between sub-domains, data mapping between non-matching meshes, inter-solver load balancing, and equation coupling. In the current work, inter-solver communication is improved by introducing a two-level communication initialization scheme to the coupling library preCICE. The new method significantly speeds up the initialization and removes memory bottlenecks of the previous implementation. In addition, a data-driven inter-solver load balancing method is developed to efficiently distribute available computational resources between coupled single-physics solvers. This method employs both regression models and deep neural networks (DNNs) for modeling the performance of the solvers and derives and solves an optimization problem to distribute the available CPU and GPU cores among solvers. To accelerate the equation coupling between strongly coupled solvers, a hybrid framework is developed that integrates DNNs and classical solvers. The DNN computes a solution estimate for each time step, which is used by the classical solvers as a first guess to compute the final solution. To preserve the DNN's efficiency during the simulation, a dynamic re-training strategy is introduced that updates the DNN's weights on-the-fly.
The cheap but accurate solution estimation by the DNN surrogate solver significantly reduces the number of subsequent classical iterations necessary for solution convergence. Finally, a highly scalable simulation environment is introduced for fluid-structure interaction problems. The environment consists of highly parallel numerical solvers and an efficient and scalable coupling library. This framework is able to efficiently exploit both CPU-only and hybrid CPU-GPU machines. Numerical performance investigations using a complex test case demonstrate a very high parallel efficiency on a large number of CPUs and a significant speed-up due to the GPU acceleration.
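The warm-start effect described above can be demonstrated with a toy fixed-point iteration: a cheap "surrogate" supplies the initial guess, so fewer classical iterations are needed to converge. The surrogate here is simply a hard-coded approximate value standing in for a trained DNN, and the iteration is a scalar stand-in for a coupled-solver sweep.

```python
# Classical fixed-point iteration: repeat x <- g(x) until successive
# iterates agree to within tol, counting the iterations taken.
def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    x, iters = x0, 0
    while iters < max_iter:
        x_new = g(x)
        iters += 1
        if abs(x_new - x) < tol:
            return x_new, iters
        x = x_new
    return x, iters

# Toy "coupled problem": this map converges to sqrt(2).
g = lambda x: 0.5 * (x + 2.0 / x)

_, iters_cold = fixed_point(g, x0=10.0)    # naive initial guess
_, iters_warm = fixed_point(g, x0=1.414)   # surrogate-provided guess
```

The warm-started run converges in fewer iterations; in the thesis, the same saving applies per time step, with the DNN surrogate retrained on-the-fly so its estimates stay accurate as the simulation evolves.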
  • Item (Open Access)
    Verifiable tally-hiding remote electronic voting
    (2023) Liedtke, Julian; Küsters, Ralf (Prof. Dr.)
    Electronic voting (e-voting) refers to casting and counting votes electronically, typically through computers or other digital interfaces. E-voting systems aim to make voting secure, efficient, convenient, and accessible. Modern e-voting systems are designed to keep the votes confidential and provide verifiability, i.e., everyone can check that the published election result corresponds to how voters intended to vote. Several verifiable e-voting systems have been proposed in the literature, with Helios being one of the most prominent ones. However, almost all verifiable e-voting systems reveal not just the voting result but also the tally, consisting of the exact number of votes per candidate or even all single votes. Publishing the tally causes several issues. For example, in elections with only a few voters (e.g., boardroom or jury votes), exposing the tally prevents ballots from being anonymous, thus deterring voters from voting for their actual preference. Furthermore, attackers can exploit the tally for so-called Italian attacks that allow for easily coercing voters. Often, the voting result merely consists of a single winner or a ranking of candidates, so disclosing only this information, not the tally, is sufficient. Revealing the tally unnecessarily embarrasses defeated candidates and causes them a severe loss of reputation. For these reasons, there are several real-world elections where authorities do not publish the tally but only the result - while the current systems for this do not ensure verifiability. We call the property of not disclosing the tally tally-hiding. Tally-hiding offers entirely new opportunities for voting. However, a secure e-voting system that combines tally-hiding and verifiability does not exist in the literature. Therefore, this thesis presents the first provably secure e-voting systems that achieve both tally-hiding and verifiability.
Our Ordinos framework achieves the strongest notion of tally-hiding: it only reveals the election result. Many real-world elections follow an alternative variant of tally-hiding: they reveal the tally to the voting authorities and only publish the election result to the public - so far without achieving verifiability. We, for the first time, formalize this concept and coin it public tally-hiding. We propose Kryvos, which is the first provably secure e-voting system that combines public tally-hiding and verifiability. Kryvos offers a new trade-off between privacy and efficiency that differs from all previous tally-hiding systems and allows for a radically new protocol design, resulting in a practical e-voting system. We implemented and benchmarked Ordinos and Kryvos, showing the practicability of our systems for real-world elections with significant numbers of candidates, complex voting methods, and result functions. Moreover, we extensively analyze the impact of tally-hiding on privacy compared to existing practices for various elections and show that applying tally-hiding improves privacy drastically.
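The core idea of tally-hiding - trustees jointly compute the tally but publish only the result function's output, such as the winner - can be illustrated with a toy additive secret-sharing scheme. This is a didactic stand-in, not the Ordinos or Kryvos protocol: the real systems use cryptographic primitives (e.g., secure multi-party computation with zero-knowledge proofs) to also achieve verifiability, which this sketch does not provide.

```python
import random

P = 2_147_483_647  # prime modulus for additive sharing (illustrative choice)

def share(value, n_trustees):
    """Split a value into n additive shares modulo P; no single share
    reveals anything about the value."""
    shares = [random.randrange(P) for _ in range(n_trustees - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def winner(ballots, candidates, n_trustees=3):
    """Each ballot is a 0/1 vector over candidates. Trustees accumulate
    shares of each candidate's count; only the winner is published,
    never the per-candidate tallies."""
    acc = [[0] * len(candidates) for _ in range(n_trustees)]
    for ballot in ballots:
        for c, vote in enumerate(ballot):
            for t, s in enumerate(share(vote, n_trustees)):
                acc[t][c] = (acc[t][c] + s) % P
    # trustees combine their aggregated shares internally ...
    tally = [sum(acc[t][c] for t in range(n_trustees)) % P
             for c in range(len(candidates))]
    # ... but only the result function's output leaves the computation
    return candidates[tally.index(max(tally))]
```

Because the shares cancel when summed, the reconstructed tally is exact, yet each trustee alone sees only uniformly random values; what this toy cannot do is let outsiders verify that the published winner is correct, which is exactly the gap Ordinos and Kryvos close.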