OPUS - Online Publications of University Stuttgart

Browsing by Author "Wyrich, Marvin"

Now showing 1 - 4 of 4
  • Open Access
    Evidence for the design of code comprehension experiments
    (2023) Wyrich, Marvin; Wagner, Stefan (Prof. Dr.)
    Context: Valid studies establish confidence in scientific findings. However, carefully assessing a study design requires specific domain knowledge in addition to general expertise in research methodologies. For example, in an experiment, the influence of a manipulated condition on an observation can be influenced by many other conditions, which we refer to as confounding variables. Knowing the possible confounding variables in the thematic context is essential for assessing a study design. If certain confounding variables are not identified, and consequently not controlled, this can threaten the validity of the study results.

    Problem: So far, the validity of a study can only be assessed intuitively. The potential bias of study findings due to confounding variables is thus speculative rather than evidence-based. This leads to uncertainty in the design of studies, as well as to disagreement in peer review. Two barriers currently impede an evidence-based evaluation of study designs. First, many suspected confounding variables have not yet been researched thoroughly enough to demonstrate their true effects. Second, there is no pragmatic method for synthesizing the existing evidence from primary studies in a way that is easily accessible to researchers.

    Scope: We investigate the problem in the context of experimental research methods with human study participants and in the thematic context of code comprehension research.

    Contributions: We first systematically analyze the design choices in code comprehension experiments over the past 40 years and the threats to the validity of these studies. This forms the basis for a subsequent discussion of the wide variety of design options in the absence of evidence on their consequences and comparability. We then conduct experiments that provide evidence on the influence of intelligence, personality, and cognitive biases on code comprehension. While researchers could previously only speculate about the influence of these variables, we now have initial data points on their actual effect. Finally, we show how combining different primary studies into evidence profiles facilitates an evidence-based discussion of experimental designs. For the three most commonly discussed threats to validity in code comprehension experiments, we create evidence profiles and discuss their implications.

    Conclusion: For frequently discussed threats to validity, evidence both for and against them can be found. Such conflicting evidence is explained by the need to consider individual confounding variables in the context of a specific study design, rather than as a universal rule, as is often done. Evidence profiles highlight such a spectrum of evidence and serve as an entry point for researchers to engage in an evidence-based discussion of their study design. However, as with all types of systematic secondary studies, the success of evidence profiles relies on a sufficient number of published studies on the same research question. This is a particular challenge in a research field where the novelty of a manuscript's findings is one of the evaluation criteria at every major conference. Nevertheless, we are optimistic about the future: even evidence profiles that merely indicate that evidence on a controversial issue is scarce make a contribution, as they expose opinionated assessments of study designs as such and motivate additional studies to provide more evidence.
  • Open Access
    A fine-grained data set and analysis of tangling in bug fixing commits
    (2022) Herbold, Steffen; Trautsch, Alexander; Ledel, Benjamin; Aghamohammadi, Alireza; Ghaleb, Taher A.; Chahal, Kuljit Kaur; Bossenmaier, Tim; Nagaria, Bhaveet; Makedonski, Philip; Ahmadabadi, Matin Nili; Szabados, Kristof; Spieker, Helge; Madeja, Matej; Hoy, Nathaniel; Lenarduzzi, Valentina; Wang, Shangwen; Rodríguez-Pérez, Gema; Colomo-Palacios, Ricardo; Verdecchia, Roberto; Singh, Paramvir; Qin, Yihao; Chakroborti, Debasish; Davis, Willard; Walunj, Vijay; Wu, Hongjun; Marcilio, Diego; Alam, Omar; Aldaeej, Abdullah; Amit, Idan; Turhan, Burak; Eismann, Simon; Wickert, Anna-Katharina; Malavolta, Ivano; Sulír, Matúš; Fard, Fatemeh; Henley, Austin Z.; Kourtzanidis, Stratos; Tuzun, Eray; Treude, Christoph; Shamasbi, Simin Maleki; Pashchenko, Ivan; Wyrich, Marvin; Davis, James; Serebrenik, Alexander; Albrecht, Ella; Aktas, Ethem Utku; Strüber, Daniel; Erbel, Johannes
    Context: Tangled commits are changes to software that address multiple concerns at once. For researchers interested in bugs, tangled commits mean that they study not only bugs but also other concerns that are irrelevant to the study of bugs.

    Objective: We want to improve our understanding of the prevalence of tangling and of the types of changes that are tangled within bug fixing commits.

    Methods: We use a crowdsourcing approach to manually validate, for each line in bug fixing commits, which changes contribute to the bug fix. Each line is labeled by four participants; if at least three participants agree on the same label, we have consensus.

    Results: We estimate that between 17% and 32% of all changes in bug fixing commits modify the source code to fix the underlying problem. However, when we consider only changes to production code files, this ratio increases to 66% to 87%. We find that about 11% of lines are hard to label, leading to active disagreements between participants. Due to confirmed tangling and the uncertainty in our data, we estimate that, depending on the use case, 3% to 47% of the data is noisy without manual untangling.
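    The consensus rule described in the Methods above (four labels per line, consensus when at least three agree) can be sketched as follows. The function name and label values are illustrative, not taken from the study's artifacts:

    ```python
    from collections import Counter

    def consensus_label(labels):
        """Return the consensus label for one changed line, or None.

        Follows the rule described in the abstract: each line receives
        four labels, and consensus requires at least three identical ones.
        A line without consensus counts as an active disagreement.
        """
        assert len(labels) == 4, "each line is labeled by four participants"
        label, count = Counter(labels).most_common(1)[0]
        return label if count >= 3 else None

    # Three of four participants agree -> consensus
    print(consensus_label(["bugfix", "bugfix", "bugfix", "refactoring"]))  # bugfix
    # A 2-2 split -> no consensus
    print(consensus_label(["bugfix", "bugfix", "test", "test"]))  # None
    ```

    Lines returning None would be the roughly 11% of hard-to-label lines the study reports.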
  • Open Access
    Individual characteristics of successful coding challengers
    (2017) Wyrich, Marvin
    Assessing a software engineer's ability to solve algorithmic programming tasks has been an essential part of technical interviews at some of the most successful technology companies for several years now. Despite the adoption of coding challenges among these companies, we do not know what influences the performance of different software engineers in solving such challenges. We conducted an exploratory study with software engineering students to generate hypotheses on what individual characteristics make a good coding challenge solver. Our findings show that the better coding challengers also have better exam grades and more programming experience. Furthermore, conscientious as well as sad software engineers performed worse in our study.
  • Open Access
    A theory on individual characteristics of successful coding challenge solvers
    (2019) Wyrich, Marvin; Graziotin, Daniel; Wagner, Stefan
    Background: Assessing a software engineer's ability to solve algorithmic programming tasks has been an essential part of technical interviews at some of the most successful technology companies for several years now. We do not know to what extent individual characteristics, such as personality or programming experience, predict performance in such tasks. Decision makers' unawareness of possible predictor variables can bias hiring decisions, which can result in expensive false negatives as well as in the unintended exclusion of software engineers with actually desirable characteristics.

    Methods: We conducted an exploratory quantitative study with 32 software engineering students to develop an empirical theory of which individual characteristics predict performance in solving coding challenges. We developed our theory based on an established taxonomy framework by Gregor (2006).

    Results: Our findings show that the better coding challenge solvers also have better exam grades and more programming experience. Furthermore, conscientious as well as sad software engineers performed worse in our study. We make the theory available in this paper for empirical testing.

    Discussion: The theory raises awareness of the influence of individual characteristics on the outcome of technical interviews. Should the theory find empirical support in future studies, hiring costs could be reduced by selecting appropriate criteria for preselecting candidates for on-site interviews, and potential bias in hiring decisions could be reduced by taking suitable measures.