Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-6387
Full metadata record
DC Element | Value | Language
dc.contributor.author | Sethi, Muhammad Wahaj | de
dc.date.accessioned | 2012-10-24 | de
dc.date.accessioned | 2016-03-31T10:25:57Z | -
dc.date.available | 2012-10-24 | de
dc.date.available | 2016-03-31T10:25:57Z | -
dc.date.issued | 2011 | de
dc.identifier.other | 373326289 | de
dc.identifier.uri | http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-78009 | de
dc.identifier.uri | http://elib.uni-stuttgart.de/handle/11682/6404 | -
dc.identifier.uri | http://dx.doi.org/10.18419/opus-6387 | -
dc.description.abstract | High-performance architectures are becoming more and more complex over time. These large-scale, heterogeneous architectures and multi-core systems are difficult to program. New programming models are required that make the expression of parallelism easier while keeping developer productivity high. Partitioned Global Address Space (PGAS) languages such as UPC emerged to improve developer productivity on distributed-memory systems. UPC provides a simpler, shared-memory-like model with user control over data layout, but it remains the developer's responsibility to take care of data locality by choosing appropriate data layouts. The SMPSs/StarSs programming model tries to simplify parallel programming on multi-core architectures. It offers task-level parallelism, where dependencies among tasks are determined at run time; in addition, the runtime takes care of data locality while scheduling tasks. This yields a two-fold productivity gain: it saves developer time through automatic dependency detection instead of hard-coded synchronization, and it saves cache-optimization effort because the runtime handles data locality. The purpose of this thesis is to combine the PGAS programming model, here UPC, across nodes with the shared-memory, task-based parallelization model StarSs within each node, in order to exploit multi-core systems, and to contrast this approach with the legacy MPI and OpenMP combination. Both performance and programmability are considered in the evaluation. The UPC + SMPSs combination results in approximately the same execution time as MPI and OpenMP. However, the current lack of features such as multi-dimensional data distribution and virtual topologies in UPC makes the hybrid UPC + SMPSs/StarSs programming model less programmable than MPI + OpenMP for the application studied in this thesis. (A brief illustrative sketch of the two programming models follows the metadata listing below.) | en
dc.language.iso | en | de
dc.rights | info:eu-repo/semantics/openAccess | de
dc.subject.ddc | 004 | de
dc.title | Hybrid parallel computing beyond MPI & OpenMP - introducing PGAS & StarSs | en
dc.type | masterThesis | de
ubs.fakultaet | Zentrale Universitätseinrichtungen | de
ubs.fakultaet | Fakultät Informatik, Elektrotechnik und Informationstechnik | de
ubs.institut | IZUS HLRS-Höchstleistungsrechenzentrum Stuttgart (HLRS) | de
ubs.institut | Institut für Parallele und Verteilte Systeme | de
ubs.opusid | 7800 | de
ubs.publikation.typ | Abschlussarbeit (Master) | de
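
The abstract above contrasts UPC's explicit, user-controlled data layout with the runtime-detected task dependencies of SMPSs/StarSs. The following minimal sketch, which is not taken from the thesis, illustrates how the two models could in principle meet in one source file: a blocked UPC shared array provides the PGAS data distribution across UPC threads, while an SMPSs-style `#pragma css task` annotation lets the runtime schedule node-local work on the block each thread owns. The array name, block size, task body, exact SMPSs clause syntax, and the assumed build setup (a UPC compiler plus the SMPSs source-to-source translator) are illustrative assumptions, not the benchmark or toolchain evaluated in the thesis.

```c
/*
 * Minimal sketch (not from the thesis): a blocked UPC shared array combined
 * with an SMPSs/StarSs-style task. Names, sizes, and the build setup are
 * illustrative assumptions only.
 */
#include <upc.h>
#include <stdio.h>

#define BLOCK 256   /* elements owned by each UPC thread (assumed size) */

/* PGAS data layout: contiguous blocks of BLOCK elements, one per UPC thread. */
shared [BLOCK] double a[BLOCK * THREADS];

/* SMPSs/StarSs task: the runtime derives dependencies from the directionality
 * clauses and schedules ready tasks on the cores of the local node. */
#pragma css task input(n) inout(block[n])
void scale_block(int n, double *block)
{
    for (int i = 0; i < n; i++)
        block[i] *= 2.0;
}

int main(void)
{
    #pragma css start                  /* start the SMPSs runtime on this node */

    /* Each UPC thread initializes only the elements it has affinity to. */
    upc_forall (int i = 0; i < BLOCK * THREADS; i++; &a[i])
        a[i] = (double) i;
    upc_barrier;

    /* Cast the locally owned block to a private pointer (legal in UPC for
     * data with affinity to MYTHREAD) and hand it to a node-local task. */
    double *local = (double *) &a[MYTHREAD * BLOCK];
    scale_block(BLOCK, local);

    #pragma css barrier                /* wait for the local tasks to finish */
    upc_barrier;                       /* then synchronize the UPC threads   */

    if (MYTHREAD == 0)
        printf("a[0] after scaling: %f\n", a[0]);

    #pragma css finish                 /* shut down the SMPSs runtime */
    return 0;
}
```

In the hybrid scheme the abstract describes, the UPC layer plays roughly the role MPI plays in the legacy combination (inter-node data distribution), while SMPSs/StarSs plays the role of OpenMP (intra-node parallelism); the sketch only mirrors that split at the source level.
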
Appears in collections: 13 Zentrale Universitätseinrichtungen

Files in this item:
File | Description | Size | Format
MSTR_3215.pdf |  | 617.36 kB | Adobe PDF


All items in this repository are protected by copyright.