05 Fakultät Informatik, Elektrotechnik und Informationstechnik

Permanent URI for this collection: https://elib.uni-stuttgart.de/handle/11682/6

Search Results

Now showing 1 - 10 of 13
  • Open Access
    Effiziente Leistungsverstärkerarchitekturen für Mobilfunkbasisstationen
    (2009) Dettmann, Ingo; Berroth, Manfred (Prof. Dr.-Ing.)
    Today's communication standards require modulation schemes that encode information in both the phase and the amplitude of the carrier. The resulting signals exhibit large amplitude variations, and the linear power amplifiers required for them show low efficiency. This thesis first discusses the requirements for power amplifiers and examines the influence of the modulation and multiple-access schemes. Subsequently, the requirements for the transistor and for the technology are formulated. An investigation of amplifier operating classes follows, forming the basis for efficiency-enhancing amplifier architectures. Linear amplifiers such as class-A, -AB, and -B amplifiers offer high linearity, but their efficiency drops rapidly below the maximum output power. Switched-mode amplifiers such as class-D and class-E amplifiers are very efficient but cannot amplify amplitude-modulated signals. Four methods for raising the efficiency below the maximum output power are discussed: the Doherty amplifier, the Chireix amplifier, supply voltage modulation, and the bandpass class-S amplifier. The Doherty amplifier offers a simple way to increase efficiency below the maximum output power. Its principle rests on varying the load impedances: two amplifiers, a main amplifier and a peaking amplifier, drive the same load. The peaking amplifier is switched on only at high output powers and changes the compression behavior of the main amplifier. For the Doherty amplifier designed here, the efficiency at 7 dB below the maximum output power rises from 15 % to slightly above 27 %, while the maximum output power drops from 85 W to 56 W. By adaptively controlling the bias point of the peaking amplifier, the output power can be restored to 85 W; the efficiency then rises by a further 5 percentage points to 32 %. The Chireix amplifier is based on the principle of linear amplification using nonlinear components. The amplitude- and phase-modulated signal is split by a phase modulator into two constant-envelope signals of opposite phase, which are amplified by highly efficient amplifiers; summing the two amplified signals recovers the original signal. An efficiency gain is obtained with non-isolating combiners and results from the variation of the load lines. The Chireix amplifier built here is based on the GaAs transistor MRFG35010 from Freescale. The individual amplifiers operate in class B with a maximum output power of 5 W each at 2 GHz, giving a total output power of 10 W and a peak efficiency of 52 %. The efficiency of the Chireix amplifier rises from 25 % to 32 % at 7 dB below the maximum output power and from 33 % to 44 % at 5 dB below it. Supply voltage modulation varies the drain or collector voltage of an amplifier according to the drive level of the transistor. It is the only concept investigated that works with all amplifier classes, and the only one that does not limit the bandwidth of the RF amplifier as long as the voltage modulator can follow the envelope of the RF signal. The overall efficiency is the product of the efficiencies of the RF amplifier and the voltage modulator. An amplifier based on the GaAs transistor MRFG35010 was built whose supply voltage is regulated by a class-AD amplifier. Its maximum output power is 6.3 W at an efficiency of 67 %; the supply voltage is regulated between 6 V and 12 V. The efficiency at 7 dB below the maximum output power rises from 30 % to 44 %, with a modulator bandwidth greater than 3 MHz. Bandpass class-S amplifiers use switched-mode amplifiers to amplify an analog signal with high efficiency: the analog input signal is converted by a modulator into a binary pulse train, amplified efficiently by a switched-mode amplifier, and then demodulated again. Bandpass delta-sigma modulators (BPDSM) are a promising modulation scheme, and class-D amplifiers, in both non-inverting and inverting configurations, can serve as the switched-mode amplifiers. For the first time, this work analytically investigates the efficiency of both non-inverted and inverted class-D amplifiers driven by BPDSM signals, allowing the efficiency of bandpass class-S amplifiers based on class-D amplifiers to be estimated.
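The back-off efficiency advantage quoted for the Doherty architecture can be illustrated with the textbook formulas for an ideal class-B stage and an ideal symmetric Doherty pair. The numbers below are idealized and do not model the measured 85 W amplifier from the thesis:

```python
import math

def class_b_efficiency(v):
    """Ideal class-B drain efficiency at normalized voltage amplitude v (0..1)."""
    return (math.pi / 4) * v

def doherty_efficiency(v):
    """Ideal symmetric Doherty drain efficiency (textbook two-region formula)."""
    if v <= 0.5:                                 # only the main amplifier active
        return (math.pi / 2) * v
    return (math.pi / 2) * v**2 / (3 * v - 1)    # load-modulation region

def backoff_to_v(backoff_db):
    """Output power back-off in dB -> normalized voltage amplitude."""
    return 10 ** (-backoff_db / 20)

for bo in (0, 6):
    v = backoff_to_v(bo)
    print(f"{bo} dB back-off: class B {class_b_efficiency(v):.1%}, "
          f"Doherty {doherty_efficiency(v):.1%}")
```

At 6 dB back-off the ideal Doherty returns to its peak efficiency of π/4 ≈ 78.5 %, while a plain class-B stage has fallen to roughly half of that — the same qualitative effect as the measured improvement from 15 % to 27 % reported above.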
  • Open Access
    System support for adaptive pervasive applications
    (2009) Handte, Marcus; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
    Driven by the ongoing miniaturization of computer technology as well as the proliferation of wireless communication technology, Pervasive Computing envisions seamless and distraction-free task support by distributed applications that are executed on computers embedded in everyday objects. As such, this vision is equally appealing to the computer industry and the user. Induced by various factors such as invisible integration, user mobility, and computer failures, the resulting computer systems are heterogeneous, highly dynamic, and evolving. As a consequence, applications executed in these systems need to adapt continuously to their ever-changing execution environment. Without further precautions, the need for adaptation can complicate application development and utilization, which hinders the realization of the basic vision. As a solution to this dilemma, this dissertation describes the design of system software for Pervasive Computing that simplifies the development of adaptive applications. Instead of shifting the responsibility for adapting an application to the user or the application developer, the system software introduces a component-based application model that can be configured and adapted automatically. To enable automation at the system level, the application developer specifies the dependencies on components and resources in an abstract manner using contracts. Upon application startup, the system uses the contractual descriptions to compute and execute valid configurations. At runtime, it detects changes to the configuration that require adaptation and reconfigures the application. To compute valid configurations upon application startup, the dissertation identifies the requirements for configuration algorithms. Based on an analysis of the problem complexity, the dissertation classifies possible algorithmic solutions and presents an integrated approach to configuration based on a parallel backtracking algorithm.
Apart from scenario-specific modifications, retrofitting the backtracking algorithm requires a mapping from the configuration problem to constraint satisfaction that can be computed on the fly at runtime. The resulting approach to configuration is then extended to support the optimization of a cost function that captures the most relevant cost factors during adaptation. This enables the use of the approach for configuration upon startup and for reconfiguration during runtime adaptation. As a basis for the evaluation of the system software and the algorithm, the dissertation outlines a prototypical implementation. This implementation is used for a thorough evaluation of the presented concepts and algorithms by means of real-world measurements and a number of simulations. The evaluation results suggest that the presented system software can indeed simplify the development of distributed applications that compensate for the heterogeneity, dynamics, and evolution of the underlying system. Furthermore, they indicate that the algorithm for configuration and the extensions for adaptation provide sufficiently high performance in typical application scenarios. Moreover, the results also suggest that they are preferable to alternative solutions. To position the presented solution within the space of possible and existing solutions, the dissertation discusses major representatives of existing systems and proposes a classification of the relevant aspects: the underlying conceptual model of the system and the distribution of the responsibility for configuration and adaptation. The classification underlines that, in contrast to other solutions, the presented solution provides a higher degree of automation without relying on the availability of a powerful computer. Thus, it simplifies the task of the application developer without distracting the user, while being applicable to a broader range of scenarios.
After discussing the related approaches and clarifying similarities and differences, the dissertation concludes with a short summary and an outlook on future work.
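The contract-driven configuration step described above can be pictured as a small constraint-satisfaction search. The sketch below is a minimal, hypothetical model — the component names, the per-device cost field, and the capacity map are invented for illustration, and this is not the parallel backtracking algorithm of the dissertation:

```python
def configure(dependencies, providers, capacity):
    """Backtracking search: pick one provider per dependency such that no
    device exceeds its resource capacity (a hypothetical cost model)."""
    assignment = {}

    def consistent(prov):
        # Tentatively add prov and check every device's resource budget.
        load = {}
        for p in list(assignment.values()) + [prov]:
            load[p["device"]] = load.get(p["device"], 0) + p["cost"]
        return all(load[d] <= capacity[d] for d in load)

    def backtrack(i):
        if i == len(dependencies):
            return True                  # every dependency resolved
        dep = dependencies[i]
        for prov in providers[dep]:
            if consistent(prov):
                assignment[dep] = prov
                if backtrack(i + 1):
                    return True
                del assignment[dep]      # undo and try the next candidate
        return False

    return assignment if backtrack(0) else None
```

If the first candidate for one dependency exhausts a device, the search undoes that choice and continues with an alternative provider — the same detect-and-reconfigure idea, here applied at startup.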
  • Open Access
    Design of radio frequency power amplifiers for cellular phones and base stations in modern mobile communication systems
    (2009) Wu, Lei; Berroth, Manfred (Prof. Dr.-Ing.)
    Mobile radio communication began with Guglielmo Marconi's and Alexander Popov's experiments with ship-to-shore communication in the 1890s. Land mobile radio telephone systems have been in use since the Detroit City Police Department installed the first wireless communication system in 1921. Since that time, radio systems have become more and more important for both voice and data communication. Modern mobile communication systems are mainly designed for high frequency ranges because of the larger bandwidth available at these frequencies. Today, the most widely used mobile communication systems in the United States are cellular telephone systems operating at 800 - 900 MHz and personal communication systems (PCS) at 1800 - 2000 MHz. In Europe, these include the Global System for Mobile Communication (GSM) and the Universal Mobile Telecommunications System (UMTS). China now has GSM/GPRS and Code Division Multiple Access (CDMA) networks. For third-generation services, China has been planning a 3G standard called Time Division Synchronous CDMA (TD-SCDMA) since 1999, which is planned to operate at 2010 MHz - 2025 MHz. In this work, attention is paid to uplink and downlink applications in the GSM and UMTS systems adopted in Europe. No matter which system is discussed, a wireless communication link usually includes a transmitter, a receiver, and a channel. Quantization, coding, and decoding are performed only in digital systems. Most links are full duplex and include a transmitter and a receiver or a transceiver at each end of the link. Obviously, to send or receive sufficiently large signals, power amplifiers and their driving amplifiers are necessary on both sides of the link. A radio frequency power amplifier is a circuit that converts direct current (DC) input power into a significant amount of RF output power.
One of the principal differences between a small-signal amplifier design and a power amplifier design is that the main goal of the latter is maximum output power, not maximum gain. However, a power amplifier cannot simply be regarded as a small-signal amplifier driven into saturation. There is a great variety of power amplifiers, and most of them employ techniques beyond simple linear amplification. In other words, RF power can be generated by a wide variety of techniques using a wide variety of devices. In this work, the fundamental theories used for the design of RF power amplifiers are systematically introduced. Using these theories, power amplifier circuits are designed both for base stations and for cellular phones in the modern mobile communication systems adopted in Europe.
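The "output power, not gain" distinction drawn above is usually quantified with two standard figures of merit, drain efficiency and power-added efficiency (PAE); the helpers below simply encode their textbook definitions:

```python
def drain_efficiency(p_out, p_dc):
    """Drain efficiency: RF output power divided by DC input power."""
    return p_out / p_dc

def power_added_efficiency(p_out, p_in, p_dc):
    """PAE additionally accounts for the RF drive power at the input."""
    return (p_out - p_in) / p_dc

# Example: 10 W out, 0.5 W drive, 20 W DC supply.
print(drain_efficiency(10, 20))             # 0.5
print(power_added_efficiency(10, 0.5, 20))  # 0.475
```

PAE approaches the drain efficiency only when the gain is large, which is why a power stage driven hard into compression (low gain) can look efficient by the first measure and poor by the second.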
  • Open Access
    Bridging the gap between volume visualization and medical applications
    (2009) Rößler, Friedemann Andreas; Ertl, Thomas (Prof. Dr.)
    Direct volume visualization has been established as a common visualization technique for tomographic volume datasets in many medical application fields. In particular, the introduction of volume visualization techniques that exploit the computing power of modern graphics hardware has expanded the application capabilities enormously. However, the employment of programmable graphics processing units (GPUs) usually requires an individual adaptation of the algorithms for each medical visualization task. Thus, only few sophisticated volume visualization algorithms have yet found their way into daily medical practice. In this thesis, several new techniques for medical volume visualization are presented that help to bridge this gap between volume visualization and medical applications. The problem of medical volume visualization is addressed on three levels of abstraction that build upon each other. On the lowest level, a flexible framework for the simultaneous rendering of multiple volume datasets is introduced. This is needed when multiple volumes, which may be acquired with different imaging modalities or at different points in time, are to be combined into a single image. To this end, a render graph was developed that allows the definition of complex visualization rules for arbitrary multi-volume scenes. From this graph, GPU programs for optimized rendering are generated automatically. The second level comprises interactive volume visualization applications for different medical tasks. Several tools and techniques are presented that demonstrate the flexibility of the multi-volume rendering framework. Specifically, a visualization tool was developed that permits the direct configuration of the render graph via a graphical user interface. Another application focuses on the simultaneous visualization of functional and anatomical brain images, as they are acquired in studies for cognitive neuroscience.
Moreover, an algorithm for direct volume deformation is presented, which can be applied to surgical simulation. On the third level, the automation of visualization processes is considered. This can be applied to standard visualization tasks to support medical doctors in their daily work. First, 3D object movies are proposed for the representation of automatically generated visualizations. These allow intuitive navigation along precomputed views of an object. Then, a visualization service is presented that delegates the costly computation of video sequences and object movies of a volume dataset to a GPU cluster. In conclusion, a processing model for the development of medical volume visualization solutions is proposed. Starting from the initial request to apply volume visualization techniques to a certain medical task, it covers the whole life cycle of such a solution, from a prototype to an automated service. It is shown how the techniques developed for this thesis support the creation of visualization solutions at the different stages.
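The per-ray work of a GPU volume renderer of the kind described above comes down to front-to-back compositing of samples taken along each viewing ray; in a multi-volume scene, the samples of all volumes are merged in depth order before compositing. A minimal scalar-color sketch (not the thesis's render-graph code, which generates shader programs):

```python
def composite_front_to_back(samples):
    """Composite (color, opacity) samples along one viewing ray, front first.
    Scalar color keeps the sketch short; real renderers use RGB."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # add what still shines through
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:                # early ray termination
            break
    return color, alpha

# A bright semi-transparent sample in front of a dark one:
print(composite_front_to_back([(1.0, 0.5), (0.0, 0.5)]))  # (0.5, 0.75)
```

Early ray termination is one of the standard GPU optimizations: once accumulated opacity is near one, samples further back cannot contribute visibly.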
  • Open Access
    Mikrowellenmodellierung von photonischen Kristallen und Metamaterialien für die optische Nachrichtentechnik
    (2009) Rumberg, Axel; Berroth, Manfred (Prof. Dr.-Ing.)
    Negative-index materials are a new field of research. The first metamaterial with a negative refractive index was presented in 2001, although the theoretical concept of wave propagation in negative-index materials had already been developed by V. Veselago in 1968. The task underlying this thesis is the modeling of negatively refracting photonic crystals and metamaterials in the microwave range. They are examined with regard to their usability at optical telecommunication wavelengths. Owing to the larger dimensions that result from the scaling, the structures are easier to fabricate and to measure, while their operating principles are frequency-independent. Metamaterials offer the possibility of tailoring permittivity and permeability. In the unit cells of these mostly periodic metallic structures, artificial magnetic atoms are created by resonant structures; the period of the metamaterial must be small compared to the wavelength. Besides metamaterials based on the resonator concept, structures based on other principles are also investigated, in particular transmission-line metamaterials. Microstrip lines, for example, can serve as the embedding lines. Compared to a conventional transmission line, capacitance and inductance are interchanged. Photonic crystals can also refract negatively. In these periodic structures, the wavelength is comparable to the lattice constant, and the individual elements, e.g. metal cylinders, can be resolved. At certain frequencies, an effective negative index can be assigned via the dispersion diagram. The negative refraction of photonic crystals can be used to focus a wave emanating from a source. This focusing is demonstrated in the frequency range around 20 GHz with two-dimensional photonic crystals consisting of holes in a slab waveguide structure. The guiding material system used, TMM10 - Teflon, models the silicon-silicon dioxide system used in photonic integrated circuits. After the successful demonstration of focusing, it is shown that photonic crystals can improve the coupling efficiency into a waveguide. In a test setup consisting of two opposing waveguides with a slab waveguide in between, the coupling from one waveguide to the other is increased by inserting a photonic crystal into the slab waveguide; the coupling is improved compared to coupling through the bare slab waveguide. The resonant structures investigated in this work offer the potential to create a negative index based on negative permittivity and permeability. With the wire pair, a variant of the split-ring resonator, a magnetic atom is investigated that can also be measured well at optical frequencies. A negative index is found in the frequency range around 10 GHz. Stacked structures are investigated to study bulk materials. Transmission-line structures likewise offer the potential for a negative index. One such structure is built from material suitable for high frequencies, with a parallel-plate waveguide as the embedding medium. The inductances and capacitances required to obtain the negative index are realized by short-circuited parallel-plate waveguides and metallic vias. In the measurements, a negative index is found around 10 GHz. The last section of the thesis deals with the scalability of the structures. In simulation, the dimensions of a wire pair are scaled until an operating frequency of 100 THz results. It becomes apparent that, owing to the properties of metals, the dimensions cannot be scaled directly. While the operating frequencies of metamaterials initially lay in the microwave range, they have meanwhile reached the optical frequency range through scaling. Work is in progress on building low-loss bulk materials. Concrete applications already exist in the microwave range, e.g. compact couplers and leaky-wave antennas; an invisibility cloak has also been realized. The potential fields of application at optical frequencies are far-reaching: light can be guided in unconventional ways. One application example is the photonic-crystal couplings presented in this work, which can be used as key components in photonic integrated circuits.
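Negative refraction itself follows directly from Snell's law once the refractive index is allowed to be negative: the refracted ray emerges on the same side of the surface normal as the incident ray, which is what makes the flat-lens focusing described above possible. A small numeric check for idealized, lossless media:

```python
import math

def refraction_angle(theta_i, n1, n2):
    """Snell's law n1*sin(theta_i) = n2*sin(theta_t), angles in radians.
    A negative n2 puts the refracted ray on the same side of the normal
    as the incident ray, i.e. negative refraction."""
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:
        return None                      # total internal reflection
    return math.asin(s)

# 30 degrees incidence from vacuum onto an n = -1 medium: refracted angle
# is approximately -30 degrees, mirrored across the surface normal.
print(math.degrees(refraction_angle(math.radians(30), 1.0, -1.0)))
```

For n2 = -1, every incidence angle is exactly mirrored, so a flat slab of such a material refocuses the rays of a point source — Veselago's flat lens.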
  • Open Access
    Semantics of projective locative expressions : an empirical evaluation of geometrical conditions
    (2009) Hying, Christian; Kamp, Hans (Prof. Dr. h.c., Ph.D.)
    This thesis presents a method for evaluating semantic theories of projective locative expressions such as "X is above Y" and "X is to the right of Y". The method is implemented for semantic theories that represent the meaning of projective locative expressions in terms of geometrical constraints in two-dimensional space. A set of semantic theories is defined according to proposals from the literature. These theories predict precise geometrical constraints for projective locative expressions. Furthermore, a formalism is proposed that combines these theories in order to generate new semantic theories capable of handling the vagueness of projective locative expressions. The empirical basis of the evaluation is a set of expressions that subjects of a "map task" experiment (Anderson et al., 1991) used to describe spatial relations in two-dimensional space. Each expression refers to a specific map, from which two-dimensional geometrical representations are derived. The semantic theories are tested against these data by checking whether the geometrical constraints predicted for an expression are satisfied by the corresponding geometrical representation. The evaluations show good results for most theories proposed in the literature, and the results are systematically improved by the corresponding theories that handle vagueness.
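A typical geometrical condition of the kind evaluated here restricts the direction from the reference object to the located object to a cone around a canonical axis. The function below is a hypothetical example of such a condition — "above" as a 45-degree acceptance cone around the upward axis, with objects reduced to center points; the thesis compares several competing definitions from the literature, not this one specifically:

```python
import math

def is_above(x, y, max_dev_deg=45.0):
    """Accept 'X is above Y' if the direction from Y's center to X's center
    deviates from the upward vertical by at most max_dev_deg degrees.
    Coordinates are (x, y) with +y pointing up."""
    dx, dy = x[0] - y[0], x[1] - y[1]
    if dx == 0 and dy == 0:
        return False                       # coincident centers: undefined
    # atan2(dx, dy) measures the angular deviation from the +y axis.
    return abs(math.degrees(math.atan2(dx, dy))) <= max_dev_deg
```

A vague variant of such a theory would replace the hard threshold with a graded acceptability score that decreases with the deviation angle.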
  • Open Access
    An architectural decision modeling framework for service oriented architecture design
    (2009) Zimmermann, Olaf; Leymann, Frank (Prof. Dr.)
    In this thesis, we investigate whether reusable architectural decision models can support Service-Oriented Architecture (SOA) design. In the current state of the art, architectural decisions are captured ad hoc and retrospectively on projects; this is a labor-intensive undertaking without immediate benefits. In contrast, we investigate the role reusable architectural decision models can play during SOA design: we treat recurring architectural decisions as first-class method elements and propose an architectural decision modeling framework and a reusable architectural decision model for SOA that guide the architect through SOA design. Our approach is tool supported. Our framework is called SOA Decision Modeling (SOAD). SOAD provides a technique to systematically identify recurring decisions. Our reusable architectural decision model for SOA conforms to a metamodel supporting reuse and collaboration. The model organization follows Model-Driven Architecture (MDA) principles and separates long-lasting platform-independent decisions from rapidly changing platform-specific ones. The alternatives at the conceptual model level reference SOA patterns. This simplifies the initial population and ongoing maintenance of the decision model. Decision dependency management allows knowledge engineers and software architects to check model consistency and prune irrelevant decisions. Moreover, a managed issue list guides the architect through the decision-making process. To update design artifacts according to decisions made, decision outcome information is injected into design model transformations. Finally, a Web-based collaboration system provides tool support for the framework steps and concepts. The SOAD framework is applicable not only to enterprise application and SOA design but also to other application genres and architectural styles. SOAD supports use cases such as education, knowledge exchange, design method, review technique, and governance instrument.
  • Open Access
    Optimization of query sequences
    (2009) Kraft, Tobias; Mitschang, Bernhard (Prof. Dr.-Ing. habil.)
    Query optimization has been a well-known topic in database research since the 1970s. This thesis highlights a special area of query optimization that arises from new trends in the usage of databases. Whereas in the beginning databases were primarily used for transaction-oriented processing of operative data, today they are also used to facilitate reporting and analysis on consolidated, historic data. For the latter, the data is loaded into a large data warehouse and afterwards analyzed using dedicated tools. The tools used to model the flows that extract the operative data from the source systems, transform these data, and load them into the data warehouse, as well as the tools that process the data stored in the data warehouse, often generate sequences of SQL statements that break a complex flow or request down into a sequence of computational steps. The optimization of such sequences with respect to runtime is the focus of this thesis. We propose a heuristic as well as a cost-based approach for this optimization problem. The cost-based approach is an enhancement of the heuristic approach: it results from adding a cost estimation component to the optimizer architecture of the heuristic approach and from replacing the heuristic control strategy with one that considers cost estimates. Both are rule-based approaches that rewrite a given sequence of SQL statements into a syntactically different but semantically equivalent sequence of SQL statements. To this end, we specify a set of rewrite rules. For cost estimation, we employ the capabilities of the query optimizer of the underlying database management system (DBMS), which is responsible for the execution of the query sequences. To improve the quality of these cost estimates, we support the query optimizer of the underlying DBMS with statistics that we derive by histogram propagation.
For this purpose, we need an interface that allows accessing and manipulating statistics in the underlying DBMS. Since no standardized interface exists for this purpose, we define our own DBMS-independent interface. For both the heuristic and the cost-based approach, we provide prototypical implementations in Java. Furthermore, we have implemented the DBMS-independent interface for the three commercial DBMSs IBM DB2, Oracle, and Microsoft SQL Server. We report on the results of experiments that we conducted with our prototypes and some sample sequences that we derived using a commercial tool for online analytical processing (OLAP). They show the effectiveness of our optimization approach and highlight the optimization potential that lies in rewriting sequences of SQL statements. Finally, we draw a conclusion and suggest some interesting points for future research.
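A representative rewrite rule of the general kind described above merges a producing statement into its single consumer, turning two statements into one equivalent statement. The sketch below models a sequence as a list of records with a target table and a query text; both the rule and its string-based inlining are simplified illustrations, not the thesis's actual rule set:

```python
def inline_single_use(sequence):
    """Hypothetical rewrite rule: if an intermediate table is referenced by
    exactly one later statement, inline its defining query as a derived
    table there and drop the defining statement."""
    rewritten = list(sequence)
    for i, stmt in enumerate(sequence):
        uses = [j for j, s in enumerate(sequence)
                if j != i and stmt["target"] in s["query"]]
        if len(uses) == 1:
            j = uses[0]
            sub = f'({sequence[i]["query"]}) AS {stmt["target"]}'
            rewritten[j] = {
                "target": sequence[j]["target"],
                "query": sequence[j]["query"].replace(stmt["target"], sub),
            }
            rewritten[i] = None          # producer no longer needed
    return [s for s in rewritten if s is not None]
```

Applied to a two-step sequence that first aggregates into a temporary table and then filters it, the rule yields a single statement with a derived table, which gives the DBMS optimizer the whole computation to optimize at once.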
  • Open Access
    Fundamental storage mechanisms for location based services in mobile ad hoc networks
    (2009) Dudkowski, Dominique; Rothermel, Kurt (Prof. Dr. rer. nat. Dr. h. c.)
    The proliferation of mobile wireless communication technology has reached a considerable magnitude. As of 2009, a large fraction of the people in most industrial and emerging nations are equipped with mobile phones and other types of portable devices. Supported by trends in miniaturization and the declining price of electronic components, devices are becoming enhanced with localization technology, which delivers the geographic position to the user, for example via the Global Positioning System. The combination of both trends enables location-based services, bringing information and services to users based on their whereabouts in the physical world, for instance in the form of navigation systems, city information systems, and friend locators. A growing number of wireless communication technologies, such as Wireless Local Area Networks, Bluetooth, and ZigBee, enable mobile devices to communicate in a purely peer-to-peer fashion, thereby forming mobile ad-hoc networks. Together with localization technology, these communication technologies make it feasible, in principle, to implement distributed location-based services without relying on any support from infrastructure components. However, the specific characteristics of mobile ad-hoc networks, especially the significant mobility of user devices and the highly dynamic topology of the network, make the implementation of location-based services extremely challenging. Current research does not provide an adequate answer to how such services can be supported. Efficient, robust, and scalable fundamental mechanisms that allow for generic and accurate services are lacking. This dissertation presents a solution for the fundamental support of location-based services in mobile ad-hoc networks. A conceptual framework is outlined that implements mechanisms at the levels of routing, data storage, location updating, and query processing to support and demonstrate the feasibility of location-based services in mobile ad-hoc networks.
The first contribution is the concept of location-centric storage and the implementation of robust routing and data storage mechanisms in accordance with this concept. This part of the framework provides a solution to the data storage problems that stem from device mobility and dynamic network topology. The second contribution is a comprehensive set of algorithms for location updating and the processing of spatial queries, such as nearest neighbor queries. To address more realistic location-based application scenarios, these algorithms take into account the inaccuracy of the position information of objects in the physical world. Extensive analytical and numerical analyses show that the proposed framework of algorithms possesses the performance characteristics necessary for the deployment of location-based services in purely infrastructureless networks. A corollary of these results is that location-based services currently feasible in infrastructure-based networks may be extended to the infrastructureless case, opening up new business opportunities for service providers.
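Location-centric storage can be sketched in the spirit of geographic hash tables: a data key is hashed to a position inside the service area, and the node currently closest to that position is responsible for storing the item, so responsibility migrates with node movement. The key-to-position mapping and the node-selection rule below are illustrative assumptions, not the dissertation's actual mechanisms:

```python
import hashlib

def key_position(key, width, height):
    """Deterministically map a data key to a position in the service area
    (an illustrative stand-in for a location-centric placement scheme)."""
    h = hashlib.sha256(key.encode()).digest()
    x = int.from_bytes(h[:4], "big") / 2**32 * width
    y = int.from_bytes(h[4:8], "big") / 2**32 * height
    return x, y

def responsible_node(key, nodes, width, height):
    """The node geographically closest to the key's position stores the item;
    nodes are (x, y) coordinate pairs of current device positions."""
    px, py = key_position(key, width, height)
    return min(nodes, key=lambda n: (n[0] - px) ** 2 + (n[1] - py) ** 2)
```

Because the mapping depends only on the key and the service area, any node can compute where to store or look up an item without contacting an infrastructure directory.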
  • Open Access
    Software based self test under memory, time and power constraints
    (2009) Zhou, Jun; Wunderlich, Hans-Joachim (Prof. Dr. habil.)
    Embedded systems are ubiquitous in modern life, from the automotive industry and robotics to household appliances. As the core component of embedded systems, microprocessors play the central role in information processing and fulfill several specialized tasks. Their involvement in safety-critical applications requires high reliability. Testing is important for product quality and is used to detect defects both during manufacturing and in the field. In particular, the following aspects are crucial for microprocessor test. First, a low-cost test is preferable in terms of, for example, hardware overhead and test time. To detect timing faults, testing at the system frequency, namely at-speed testing, is required. Design modifications that incorporate dedicated test hardware are undesirable due to potential performance degradation. Overtesting accounts for possible yield loss in that chips may fail the structural test even though the corresponding faults can never be sensitized in normal functional operation. Software-based self-test (SBST) avoids these problems and is therefore the subject of this work. However, constrained test generation is still needed. The dissertation examines this aspect and proposes a novel SBST scheme that optimizes memory consumption, test application time, and power consumption at the same time without sacrificing fault coverage. In addition, this work presents a method to improve the test quality of SBST by adding new instructions within an ASIP (application-specific instruction-set processor) framework.