Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-9006
Authors: Pretschner, Alexander
Prenninger, Wolfgang
Wagner, Stefan
Kühnel, Christian
Baumgartner, Martin
Sostawa, Bernd
Zölch, Rüdiger
Stauner, Thomas
Title: One evaluation of model-based testing and its automation
Issue Date: 2005
Publication type: Conference paper
Conference: International Conference on Software Engineering (27th, 2005, Saint Louis, Mo.)
Source: Proceedings of the 27th International Conference on Software Engineering (ICSE '05) : St. Louis, Mo., USA - May 15-21, 2005. New York, NY : ACM, 2005. - ISBN 1-58113-963-2, pp. 392-401
URI: http://elib.uni-stuttgart.de/handle/11682/9023
http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-90239
http://dx.doi.org/10.18419/opus-9006
ISBN: 1-58113-963-2
Note: Copyright ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 27th International Conference on Software Engineering, http://dx.doi.org/10.1145/1062455.1062529.
Abstract: Model-based testing relies on behavior models for the generation of model traces: input and expected output - test cases - for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.
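Illustration: The sketch below is a rough, hypothetical illustration of the idea described in the first sentence of the abstract, not code from the paper. It assumes a toy finite-state machine stands in for the behavior model and a hand-written function stands in for the implementation under test; model traces (input sequences paired with the outputs the model expects) are generated by walking the model and are then replayed against the implementation as test cases.

import random

# Hypothetical behavior model: (state, input) -> (next_state, expected_output).
MODEL = {
    ("off", "power"): ("on", "booted"),
    ("on",  "power"): ("off", "shutdown"),
    ("on",  "send"):  ("on",  "ack"),
    ("off", "send"):  ("off", "ignored"),
}
INPUTS = ["power", "send"]

def generate_trace(length, seed=None):
    """Walk the model with random inputs and record (input, expected_output) pairs."""
    rng = random.Random(seed)
    state, trace = "off", []
    for _ in range(length):
        inp = rng.choice(INPUTS)
        state, expected = MODEL[(state, inp)]
        trace.append((inp, expected))
    return trace

def implementation(inputs):
    """Stand-in for the system under test; returns one output per input."""
    state, outputs = "off", []
    for inp in inputs:
        if inp == "power":
            state = "on" if state == "off" else "off"
            outputs.append("booted" if state == "on" else "shutdown")
        else:  # "send"
            outputs.append("ack" if state == "on" else "ignored")
    return outputs

def run_test(trace):
    """One test case: replay the inputs and compare against the model's expectations."""
    inputs = [inp for inp, _ in trace]
    expected = [out for _, out in trace]
    return implementation(inputs) == expected

if __name__ == "__main__":
    suite = [generate_trace(length=10, seed=i) for i in range(20)]
    print(sum(run_test(t) for t in suite), "of", len(suite), "tests passed")

In the paper's terms, such automatically generated traces form one kind of test suite; the study compares suites like these with manually derived ones, with and without a model at hand.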
Appears in Collections: 15 Fakultätsübergreifend / Sonstige Einrichtung

Files in This Item:
File: main.pdf - Size: 205.73 kB - Format: Adobe PDF


Items in OPUS are protected by copyright, with all rights reserved, unless otherwise indicated.