Please use this identifier to cite or link to this item: http://dx.doi.org/10.18419/opus-9387
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: Steinwart, Ingo (Prof. Dr.)
dc.contributor.author: Farooq, Muhammad
dc.date.accessioned: 2017-12-05T08:24:17Z
dc.date.available: 2017-12-05T08:24:17Z
dc.date.issued: 2017 (de)
dc.identifier.other: 1009133969
dc.identifier.uri: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-94043 (de)
dc.identifier.uri: http://elib.uni-stuttgart.de/handle/11682/9404
dc.identifier.uri: http://dx.doi.org/10.18419/opus-9387
dc.description.abstract: Conditional expectiles are becoming an increasingly important tool in finance as well as in other areas of application, such as demography, when the goal is to explore the conditional distribution beyond the conditional mean. In this thesis, we consider a support vector machine (SVM) type approach with the asymmetric least squares loss for estimating conditional expectiles. First, we establish learning rates for this approach that are minimax optimal modulo a logarithmic factor if Gaussian RBF kernels are used and the desired expectile is smooth in a Besov sense. It turns out that our learning rates, as a special case, improve the best known rates for kernel-based least squares regression in the aforementioned scenario. As key ingredients of our statistical analysis, we establish a general calibration inequality for the asymmetric least squares loss, a corresponding variance bound, and an improved entropy number bound for Gaussian RBF kernels. Furthermore, we establish optimal learning rates in the case of a generic kernel under the assumption that the target function lies in a real interpolation space. Second, we complement the theoretical results of our SVM approach with empirical findings. For this purpose we use a sequential minimal optimization method and design an SVM-like solver for expectile regression with Gaussian RBF kernels. We conduct various experiments to investigate the behavior of the designed solver with respect to different combinations of initialization strategies, working set selection strategies, stopping criteria, and numbers of nearest neighbors, and then look for the best combination of them. We further compare the results of our solver to the recent R package ER-Boost and find that our solver exhibits better test performance.
In terms of training time, our solver is more sensitive to the training set size and less sensitive to the dimension of the data set, whereas ER-Boost behaves the other way around. In addition, our solver is faster than a similarly implemented solver for quantile regression. Finally, we show the convergence of the designed solver. (en)
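The asymmetric least squares loss underlying the thesis penalizes positive and negative residuals with different weights, which is what makes its minimizer the tau-expectile rather than the mean. A minimal sketch of that loss and of computing a sample expectile by fixed-point iteration is given below; the function names are illustrative only, and this does not implement the thesis's kernel-based SVM solver.

```python
import numpy as np

def als_loss(residual, tau):
    """Asymmetric least squares loss: tau * r^2 for r >= 0,
    (1 - tau) * r^2 for r < 0. At tau = 0.5 it reduces to
    (half the) ordinary squared loss."""
    weight = np.where(residual < 0, 1.0 - tau, tau)
    return weight * residual ** 2

def sample_expectile(y, tau, max_iter=200, tol=1e-12):
    """tau-expectile of a sample, i.e. the minimizer of the
    average als_loss, found by iteratively reweighted means."""
    y = np.asarray(y, dtype=float)
    e = float(np.mean(y))
    for _ in range(max_iter):
        w = np.where(y < e, 1.0 - tau, tau)
        e_new = float(np.sum(w * y) / np.sum(w))
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e
```

For tau = 0.5 every weight equals 0.5, so the iteration returns the sample mean in one step; for tau > 0.5 the expectile lies above the mean, mirroring how quantiles generalize the median.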
dc.language.iso: en (de)
dc.rights: info:eu-repo/semantics/openAccess (de)
dc.subject.ddc: 510 (de)
dc.title: Kernel-based expectile regression (en)
dc.type: doctoralThesis (de)
ubs.dateAccepted: 2017-10-13
ubs.fakultaet: Mathematik und Physik (de)
ubs.institut: Institut für Stochastik und Anwendungen (de)
ubs.publikation.seiten: viii, 138 (de)
ubs.publikation.typ: Dissertation (de)
ubs.thesis.grantor: Mathematik und Physik (de)
Appears in Collections: 08 Fakultät Mathematik und Physik

Files in this item:
File: farooq2017KernelBasedExpectileRegression.pdf (2.12 MB, Adobe PDF) Open/View


All items in this repository are protected by copyright.