05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 13
  • Subjective annotation for a frame interpolation benchmark using artefact amplification (Open Access)
    (2020) Men, Hui; Hosu, Vlad; Lin, Hanhe; Bruhn, Andrés; Saupe, Dietmar
    Current benchmarks for optical flow algorithms evaluate the estimation either directly by comparing the predicted flow fields with the ground truth or indirectly by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark. It contains interpolated frames from 155 methods applied to each of 8 contents. For this purpose, we collected forced-choice paired comparisons between interpolated images and corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons, we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data (3720 comparisons of 20 votes each) we reconstructed absolute quality scale values according to Thurstone’s model. As a result, we obtained a re-ranking of the 155 participating algorithms w.r.t. the visual quality of the interpolated frames. This re-ranking not only shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks; the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images. As a first step, we proposed such a new full-reference method, called WAE-IQA, which weights the local differences between an interpolated image and its ground truth.
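The scale reconstruction step named above (Thurstone’s model applied to paired-comparison counts) can be sketched in a few lines. This is a minimal Case V illustration with invented vote counts, not the authors’ implementation:

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Reconstruct absolute quality scale values from paired-comparison
    counts under Thurstone's Case V assumptions.
    wins[i][j] = number of votes preferring item i over item j."""
    n = len(wins)
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p = wins[i][j] / (wins[i][j] + wins[j][i])
            p = min(max(p, 0.01), 0.99)        # clamp unanimous outcomes
            z[i][j] = NormalDist().inv_cdf(p)  # probit of the win rate
    # Case V least-squares solution: each item's scale value is the
    # mean of its probit-transformed win rates (the scale is zero-mean).
    return [sum(row) / n for row in z]

# Three hypothetical methods, 20 votes per pair as in the study design:
scores = thurstone_case_v([[0, 15, 18], [5, 0, 14], [2, 6, 0]])
```

Sorting methods by such scores gives the kind of perceptual re-ranking the abstract describes.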
  • On the impact of service-oriented patterns on software evolvability: a controlled experiment and metric-based analysis (Open Access)
    (2019) Bogner, Justus; Wagner, Stefan; Zimmermann, Alfred
    Background: Design patterns are supposed to improve various quality attributes of software systems. However, there is controversial quantitative evidence of this impact. Especially for younger paradigms such as service- and microservice-based systems, there is a lack of empirical studies. Objective: In this study, we focused on the effect of four service-based patterns - namely Process Abstraction, Service Façade, Decomposed Capability, and Event-Driven Messaging - on the evolvability of a system from the viewpoint of inexperienced developers. Method: We conducted a controlled experiment with Bachelor students (N = 69). Two functionally equivalent versions of a service-based web shop - one with patterns (treatment group), one without (control group) - had to be changed and extended in three tasks. We measured evolvability by the effectiveness and efficiency of the participants in these tasks. Additionally, we compared both system versions with nine structural maintainability metrics for size, granularity, complexity, cohesion, and coupling. Results: Both experiment groups were able to complete a similar number of tasks within the allowed 90 min. Median effectiveness was 1/3. Mean efficiency was 12% higher in the treatment group, but this difference was not statistically significant. Only for the third task did we find statistical support for accepting the alternative hypothesis that the pattern version led to higher efficiency. In the metric analysis, the pattern version had worse measurements for size and granularity while simultaneously having slightly better values for coupling metrics. Complexity and cohesion were not impacted. Interpretation: For the experiment, our analysis suggests that the difference in efficiency is stronger with more experienced participants and increased from task to task. With respect to the metrics, the patterns introduce additional volume into the system but also seem to decrease coupling in some areas. Conclusions: Overall, there was no clear evidence for a decisive positive effect of using service-based patterns, neither in the student experiment nor in the metric analysis. This effect might only be visible in an experiment setting with higher initial effort to understand the system or with more experienced developers.
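A comparison of mean efficiency between treatment and control groups, as in the experiment above, can be illustrated with a simple permutation test; the per-participant scores below are invented, and the study itself may have used different statistical machinery:

```python
import random
import statistics

def permutation_test(treatment, control, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means.
    Returns the observed difference and an approximate p-value."""
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(treatment) + list(control)
    k = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly relabel participants
        diff = statistics.mean(pooled[:k]) - statistics.mean(pooled[k:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical per-participant efficiency scores (tasks per hour):
obs, p = permutation_test([2.1, 1.8, 2.4, 2.0], [1.9, 1.7, 2.2, 1.8])
```

A p-value above the significance threshold, as reported for most tasks in the study, means the observed difference is compatible with chance.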
  • Joint state composition theorems for public-key encryption and digital signature functionalities with local computation (Open Access)
    (2020) Küsters, Ralf; Tuengerthal, Max; Rausch, Daniel
    In frameworks for universal composability, complex protocols can be built from sub-protocols in a modular way using composition theorems. However, as first pointed out and studied by Canetti and Rabin, this modular approach often leads to impractical implementations. For example, when using a functionality for digital signatures within a more complex protocol, parties have to generate new verification and signing keys for every session of the protocol. This motivates generalizing composition theorems to so-called joint state (composition) theorems, where different copies of a functionality may share some state, e.g., the same verification and signing keys. In this paper, we present a joint state theorem which is more general than the original theorem of Canetti and Rabin, for which several problems and limitations are pointed out. We apply our theorem to obtain joint state realizations for three functionalities: public-key encryption, replayable public-key encryption, and digital signatures. Unlike most other formulations, our functionalities model that ciphertexts and signatures are computed locally, rather than being provided by the adversary. To obtain the joint state realizations, the functionalities have to be designed carefully. Other formulations proposed in the literature are shown to be unsuitable. Our work is based on the IITM model. Our definitions and results demonstrate the expressivity and simplicity of this model. For example, unlike Canetti’s UC model, in the IITM model no explicit joint state operator needs to be defined and the joint state theorem follows immediately from the composition theorem in the IITM model.
  • Real-time locating system in production management (Open Access)
    (2020) Rácz-Szabó, András; Ruppert, Tamás; Bántay, László; Löcklin, Andreas; Jakab, László; Abonyi, János
    Real-time monitoring and optimization of production and logistics processes significantly improve the efficiency of production systems. Advanced production management solutions require real-time information about the status of products, production, and resources. As real-time locating systems (also referred to as indoor positioning systems) can enrich the available information, these systems have gained attention in industrial environments in recent years. This paper provides a review of the possible technologies and applications related to production control and logistics, quality management, safety, and efficiency monitoring. This work also presents a workflow that clarifies the steps of a typical real-time locating system project, including the cleaning, pre-processing, and analysis of the data, as a guideline and reference for research and development of indoor positioning-based manufacturing solutions.
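The data-cleaning step of such an RTLS workflow can be sketched as a plausibility filter on position fixes; the record format and the speed threshold below are assumptions made for illustration, not part of the paper’s workflow:

```python
import math

def clean_track(fixes, max_speed=5.0):
    """Drop physically implausible fixes from an indoor positioning track.
    fixes: list of (t, x, y) tuples in seconds and metres; a fix is
    discarded if reaching it from the last kept fix would require
    moving faster than max_speed (m/s)."""
    if not fixes:
        return []
    cleaned = [fixes[0]]
    for t, x, y in fixes[1:]:
        t0, x0, y0 = cleaned[-1]
        dt = t - t0
        if dt <= 0:
            continue  # duplicate or out-of-order timestamp
        if math.hypot(x - x0, y - y0) / dt <= max_speed:
            cleaned.append((t, x, y))
    return cleaned

# A 49 m jump within one second is rejected as a measurement artefact:
track = clean_track([(0, 0.0, 0.0), (1, 1.0, 0.0), (2, 50.0, 0.0), (3, 2.0, 0.0)])
```

Real deployments would add smoothing and map-matching on top, but the filter shows the kind of pre-processing the workflow covers.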
  • Analysis of political debates through newspaper reports: methods and outcomes (Open Access)
    (2020) Lapesa, Gabriella; Blessing, Andre; Blokker, Nico; Dayanik, Erenay; Haunss, Sebastian; Kuhn, Jonas; Padó, Sebastian
    Discourse network analysis is an emerging development in political science which analyzes political debates in terms of bipartite actor/claim networks. It aims at understanding the structure and temporal dynamics of major political debates as instances of politicized democratic decision making. We discuss how such networks can be constructed on the basis of large collections of unstructured text, namely newspaper reports. We sketch a hybrid methodology of manual analysis by domain experts complemented by machine learning and exemplify it on the case study of the German public debate on immigration in the year 2015. The first half of our article sketches the conceptual building blocks of discourse network analysis and demonstrates its application. The second half discusses the potential of applying NLP methods to support the creation of discourse network datasets.
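The bipartite actor/claim structure described above can be sketched in a few lines; the actors, claims, and stances below are invented for illustration and do not come from the study’s dataset:

```python
from collections import defaultdict

def build_discourse_network(statements):
    """Aggregate (actor, claim, stance) triples into a bipartite network.
    stance is +1 (support) or -1 (opposition); repeated statements
    accumulate, so an edge weight reflects how often and how
    consistently an actor takes a side on a claim."""
    edges = defaultdict(int)
    for actor, claim, stance in statements:
        edges[(actor, claim)] += stance
    return dict(edges)

def actor_agreement(edges, a, b):
    """Number of claims on which actors a and b take the same side."""
    side_a = {claim: w for (actor, claim), w in edges.items() if actor == a}
    return sum(1 for (actor, claim), w in edges.items()
               if actor == b and claim in side_a and w * side_a[claim] > 0)

statements = [("PartyA", "cap_migration", 1), ("PartyB", "cap_migration", -1),
              ("PartyA", "expand_asylum", -1), ("PartyC", "cap_migration", 1)]
network = build_discourse_network(statements)
```

Projecting such agreement counts onto the actor side yields the affiliation networks that discourse network analysis studies over time.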
  • Case study on privacy-aware social media data processing in disaster management (Open Access)
    (2020) Löchner, Marc; Fathi, Ramian; Schmid, David ‘-1’; Dunkel, Alexander; Burghardt, Dirk; Fiedrich, Frank; Koch, Steffen
    Social media data is heavily used to analyze and evaluate situations in times of disasters and to derive decisions for action from it. In these critical situations, it is not surprising that privacy is often considered a secondary problem. In order to prevent subsequent abuse, theft, or public exposure of collected datasets, however, protecting the privacy of social media users is crucial. Avoiding unnecessary data retention is an important question that is currently largely unsolved. There are a number of technical approaches available, but their deployment in disaster management is either impractical or requires special adaptation, limiting their utility. In this case study, we explore the deployment of a cardinality estimation algorithm called HyperLogLog in disaster management processes. It is particularly suited for this field because it allows streaming data in a format that cannot be used for purposes other than the originally intended one. We develop and conduct a focus group discussion with teams of social media analysts. We identify challenges and opportunities of working with such a privacy-enhanced social media data format and compare the process with conventional techniques. Our findings show that, with the exception of training scenarios, deploying HyperLogLog in the data acquisition process will not disrupt the data analysis process. Instead, several benefits, such as easier handling of huge datasets, may contribute to a more widespread use and adoption of the presented technique, which provides a basis for a better integration of privacy considerations into disaster management.
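A minimal sketch of the HyperLogLog idea follows: the structure keeps only per-bucket maxima of leading-zero ranks, so individual items (e.g., user identifiers) cannot be read back out of it, which is the privacy property the case study builds on. This is an illustrative toy, not the implementation deployed in the study:

```python
import hashlib
import math

class HyperLogLog:
    """Cardinality estimator: counts distinct items without storing them."""

    def __init__(self, p=10):
        self.p = p                    # 2**p buckets
        self.m = 1 << p
        self.registers = [0] * self.m

    def add(self, item):
        # 64-bit hash; only a per-bucket maximum rank is retained.
        h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")
        bucket = h >> (64 - self.p)                   # first p bits pick a bucket
        rest = h & ((1 << (64 - self.p)) - 1)
        rank = (64 - self.p) - rest.bit_length() + 1  # leading zeros + 1
        self.registers[bucket] = max(self.registers[bucket], rank)

    def count(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)         # bias correction
        raw = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        if raw <= 2.5 * self.m:                       # small-range correction
            zeros = self.registers.count(0)
            if zeros:
                return int(self.m * math.log(self.m / zeros))
        return int(raw)
```

Adding the same user twice leaves the registers, and hence the estimate, unchanged, which is also why retaining the sketch reveals no individual activity.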
  • The IITM model: a simple and expressive model for universal composability (Open Access)
    (2020) Küsters, Ralf; Tuengerthal, Max; Rausch, Daniel
    The universal composability paradigm allows for the modular design and analysis of cryptographic protocols. It has been widely and successfully used in cryptography. However, devising a coherent yet simple and expressive model for universal composability is, as the history of such models shows, highly non-trivial. For example, several partly severe problems have been pointed out in the literature for the UC model. In this work, we propose a coherent model for universal composability, called the IITM model (“Inexhaustible Interactive Turing Machine”). A main feature of the model is that it is stated without a priori fixing irrelevant details, such as a specific way of addressing machines by session and party identifiers, a specific modeling of corruption, or a specific protocol hierarchy. In addition, we employ a very general notion of runtime. All reasonable protocols and ideal functionalities should be expressible based on this notion in a direct and natural way, and without tweaks, such as (artificial) padding of messages or (artificially) adding extra messages. Not least because of these features, the model is simple and expressive. The general results that we prove, such as composition theorems, also hold independently of how such details are fixed for concrete applications. Because the IITM model is inspired by other models for universal composability, in particular the UC model, and because of its flexibility and expressivity, results formulated in these models conceptually carry over directly to the IITM model.
  • Error control scheme for malicious and natural faults in cryptographic modules (Open Access)
    (2020) Gay, Mael; Karp, Batya; Keren, Osnat; Polian, Ilia
    Today’s electronic systems must simultaneously fulfill strict requirements on security and reliability. In particular, their cryptographic modules are exposed to faults, which can be due to natural failures (e.g., radiation or electromagnetic noise) or malicious fault-injection attacks. We present an architecture based on a new class of error-detecting codes that combine robustness properties with a minimum distance. The new architecture guarantees (with some probability) the detection of faults injected by an intelligent and strategic adversary who can precisely control the disturbance. At the same time, it supports automatic correction of low-multiplicity faults. To this end, we discuss an efficient technique to correct single nibble/byte errors while avoiding full syndrome analysis. We also examine a Compact Protection Code (CPC)-based system-level fault manager that treats this code as an inner code (and the CPC as its outer code). We report experimental results obtained by physical fault injection on the SAKURA-G FPGA board. The experimental results reconfirm the assumption that faults may cause an arbitrary number of bit flips. They indicate that a combined inner-outer coding scheme can significantly reduce the number of fault events that go undetected due to erroneous corrections of the inner code.
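As a toy stand-in for the kind of syndrome-based single-byte correction discussed above (not the paper’s code construction, which is a robust code with different security guarantees), two checks over the prime modulus 257 suffice to locate and repair one corrupted byte via a modular division:

```python
def encode(data):
    """Compute two check values for a list of byte values (0..255)."""
    c0 = sum(data) % 257                                     # magnitude check
    c1 = sum((i + 1) * b for i, b in enumerate(data)) % 257  # position-weighted check
    return list(data), (c0, c1)

def correct(data, checks):
    """Return a copy of data with a single corrupted byte repaired."""
    c0, c1 = checks
    s0 = (sum(data) - c0) % 257                              # syndrome: error magnitude
    s1 = (sum((i + 1) * b for i, b in enumerate(data)) - c1) % 257
    if s0 == 0:
        return list(data)                                    # no detectable error
    pos = (s1 * pow(s0, -1, 257)) % 257 - 1                  # locate via modular division
    fixed = list(data)
    fixed[pos] = (fixed[pos] - s0) % 257
    return fixed
```

Schemes like the paper’s additionally wrap such an inner code in an outer code (the CPC) precisely to catch the cases where the inner correction itself goes wrong.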
  • Audio guide for visually impaired people based on combination of stereo vision and musical tones (Open Access)
    (2019) Simões, Walter C. S. S.; Silva, Yuri M. L. R.; Pio, José Luiz de S.; Jazdi, Nasser; F. de Lucena, Vicente
    Indoor navigation systems offer many application possibilities for people who need information about the scenery and the possible fixed and mobile obstacles placed along their paths. In these systems, the main factors considered for their construction and evaluation are the level of accuracy and the delivery time of the information. However, it is also necessary to detect obstacles placed above the user’s waistline to avoid accidents and collisions. In this paper, different methodologies are combined to define a hybrid navigation model called iterative pedestrian dead reckoning (i-PDR). i-PDR combines the PDR algorithm with a linear Kalman filter to correct the location, reducing the system’s margin of error iteratively. Obstacle perception was addressed through the use of stereo vision combined with a scheme of musical tones and spoken instructions, covering an angle of 120 degrees in front of the user. The margin of error and maximum processing time obtained are 0.70 m and 0.09 s, respectively, with obstacles at ground level and suspended obstacles detected with an accuracy equivalent to 90%.
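The Kalman-based correction step of such a hybrid model can be illustrated in one dimension; the noise variances and the data below are assumptions for illustration, not values from the paper:

```python
def kalman_1d(steps, measurements, q=0.01, r=0.25):
    """Fuse dead-reckoning displacements with noisy absolute position
    fixes using a linear Kalman filter (scalar state).
    steps: per-interval displacement estimates from PDR;
    measurements: absolute position observations;
    q, r: assumed process and measurement noise variances."""
    x, p = 0.0, 1.0                # state estimate and its variance
    track = []
    for step, z in zip(steps, measurements):
        x += step                  # predict using the PDR displacement
        p += q
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct towards the measurement
        p *= 1 - k
        track.append(x)
    return track
```

With a drifting step estimate, the filter pulls the track back towards the measurements at every iteration, which is the iterative error reduction the abstract refers to.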
  • Unsupervised and generic short-term anticipation of human body motions (Open Access)
    (2020) Enes, Kristina; Errami, Hassan; Wolter, Moritz; Krake, Tim; Eberhardt, Bernhard; Weber, Andreas; Zimmermann, Jörg
    Various neural network based methods are capable of anticipating human body motions from data for a short period of time. What these methods lack is the interpretability and explainability of the network and its results. We propose to use Dynamic Mode Decomposition with delays to represent and anticipate human body motions. Exploring the influence of the number of delays on the reconstruction and prediction of various motion classes, we show that the anticipation errors of our results are comparable to, or for very short anticipation times (<0.4 s) even better than, those of a recurrent neural network based method. We perceive our method as a first step towards interpretability of the results, since it represents human body motions as linear combinations of previous states and delays. In addition, in contrast to neural network based methods, no large training times are needed. In fact, our method does not even regress to any motions other than the one to be anticipated, and hence it is of a generic nature.
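The core idea, anticipating the next state as a linear combination of delayed previous states, can be shown in the scalar case with two delays. This is a simplified analogue of Dynamic Mode Decomposition with delays, not the authors’ implementation:

```python
import math

def fit_two_delay_model(x):
    """Least-squares fit of x[t] ~ a*x[t-1] + b*x[t-2] via the 2x2
    normal equations: the next sample is modeled as a linear
    combination of two delayed samples."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(x)):
        u, v, y = x[t - 1], x[t - 2], x[t]
        s11 += u * u; s12 += u * v; s22 += v * v
        r1 += u * y; r2 += v * y
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

def predict(x, a, b, steps):
    """Anticipate future samples by iterating the fitted recurrence."""
    seq = list(x)
    for _ in range(steps):
        seq.append(a * seq[-1] + b * seq[-2])
    return seq[len(x):]

# A sampled sinusoid obeys such a recurrence exactly, so the fitted
# model anticipates it perfectly; motion data would only do so
# approximately:
signal = [math.sin(0.2 * t) for t in range(100)]
a, b = fit_two_delay_model(signal)
future = predict(signal, a, b, 10)
```

For multi-dimensional motion data, the same regression is performed on stacked delay-embedded state vectors, and the eigendecomposition of the fitted linear operator yields the interpretable modes.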