OPUS - Online Publications of University Stuttgart
Browsing by Author "Bruhn, Andrés"

Now showing 1 - 3 of 3
  • Item (Open Access)
    Efficient and robust background modeling with dynamic mode decomposition
    (2022) Krake, Tim; Bruhn, Andrés; Eberhardt, Bernhard; Weiskopf, Daniel
    A large number of modern video background modeling algorithms deal with computationally costly minimization problems that often need parameter adjustments. While in most cases spatial and temporal constraints are added artificially to the minimization process, our approach is to exploit Dynamic Mode Decomposition (DMD), a spectral decomposition technique that naturally extracts spatio-temporal patterns from data. Applied to video data, DMD can compute background models. However, the original DMD algorithm for background modeling is neither efficient nor robust. In this paper, we present an equivalent reformulation with constraints that leads to a more suitable decomposition into foreground and background. Because the reformulation uses sparse and low-dimensional structures, an efficient and robust algorithm can be derived that computes accurate background models. Moreover, we show how our approach can be extended to RGB data, data with periodic parts, and streaming data, enabling versatile use. (A minimal sketch of plain DMD background modeling follows this listing.)
  • Item (Open Access)
    MS-RAFT+: high resolution multi-scale RAFT
    (2023) Jahedi, Azin; Luz, Maximilian; Rivinius, Marc; Mehl, Lukas; Bruhn, Andrés
    Hierarchical concepts have proven useful in many classical and learning-based optical flow methods with regard to both accuracy and robustness. In this paper we show that such concepts remain useful in the context of recent neural networks that follow RAFT’s paradigm, which refrains from hierarchical strategies and instead relies on recurrent updates based on a single-scale all-pairs transform. To this end, we introduce MS-RAFT+: a novel recurrent multi-scale architecture based on RAFT that unifies several successful hierarchical concepts. It employs coarse-to-fine estimation, enabling the use of finer resolutions through initializations from coarser scales (illustrated in the sketch after this listing). Moreover, it relies on RAFT’s correlation pyramid, which makes it possible to consider non-local cost information during the matching process. Furthermore, it makes use of advanced multi-scale features that incorporate high-level information from coarser scales. Finally, our method is trained with a sample-wise robust multi-scale multi-iteration loss that closely supervises each iteration on each scale while allowing particularly difficult samples to be discarded. In combination with an appropriate mixed-dataset training strategy, our method performs favorably. It not only yields highly accurate results on the four major benchmarks (KITTI 2015, MPI Sintel, Middlebury and VIPER), it also achieves these results with a single model and a single parameter setting. Our trained model and code are available at https://github.com/cv-stuttgart/MS_RAFT_plus.
  • Item (Open Access)
    Subjective annotation for a frame interpolation benchmark using artefact amplification
    (2020) Men, Hui; Hosu, Vlad; Lin, Hanhe; Bruhn, Andrés; Saupe, Dietmar
    Current benchmarks for optical flow algorithms evaluate the estimation either directly by comparing the predicted flow fields with the ground truth or indirectly by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark. It contains interpolated frames from 155 methods applied to each of 8 contents. For this purpose, we collected forced-choice paired comparisons between interpolated images and corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons, we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data (3720 comparisons of 20 votes each) we reconstructed absolute quality scale values according to Thurstone’s model (a minimal sketch of such a reconstruction follows this listing). As a result, we obtained a re-ranking of the 155 participating algorithms w.r.t. the visual quality of the interpolated frames. This re-ranking not only shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks; the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images. As a first step, we proposed such a new full-reference method, called WAE-IQA, which weights the local differences between an interpolated image and its ground truth.
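
Sketch for "Efficient and robust background modeling with dynamic mode decomposition": the Python/NumPy snippet below is a minimal sketch of plain (exact) DMD background subtraction for grayscale video, intended only to illustrate the general idea mentioned in the abstract. It is not the authors' reformulated algorithm with sparse, low-dimensional structures, nor its RGB, periodic, or streaming extensions; the rank parameter r and the eigenvalue tolerance are illustrative assumptions.

    import numpy as np

    def dmd_background(frames, r=10):
        """frames: (height*width, T) array whose columns are vectorized video frames."""
        X1, X2 = frames[:, :-1], frames[:, 1:]

        # Rank-r truncated SVD of the first snapshot matrix.
        U, s, Vh = np.linalg.svd(X1, full_matrices=False)
        Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T

        # Low-dimensional linear operator and its eigendecomposition.
        Atilde = Ur.conj().T @ X2 @ Vr / sr
        eigvals, W = np.linalg.eig(Atilde)

        # DMD modes and amplitudes fitted to the first frame.
        Phi = X2 @ Vr / sr @ W
        b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]

        # Background modes have eigenvalues close to 1 (near-zero temporal frequency).
        bg = np.abs(eigvals - 1.0) < 1e-2            # illustrative tolerance
        return np.abs(Phi[:, bg] @ b[bg])            # one vectorized background image

The foreground can then be obtained, for instance, by thresholding the difference between each frame and this background image.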
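Sketch for "MS-RAFT+: high resolution multi-scale RAFT": the PyTorch-style snippet below sketches only the coarse-to-fine principle mentioned in the abstract, in which the flow estimated at a coarser scale initializes the recurrent updates at the next finer scale. The functions encode_features and recurrent_update are hypothetical placeholders, and the scale and iteration settings are assumptions; the actual MS-RAFT+ architecture is available at the linked repository.

    import torch
    import torch.nn.functional as F

    def coarse_to_fine_flow(img1, img2, encode_features, recurrent_update,
                            scales=(8, 4, 2), iters_per_scale=4):
        """Estimate optical flow from coarse (1/8 resolution) to fine (1/2 resolution)."""
        flow = None
        for s in scales:
            f1 = encode_features(img1, stride=s)     # (B, C, H//s, W//s), placeholder
            f2 = encode_features(img2, stride=s)
            h, w = f1.shape[-2:]
            if flow is None:
                flow = torch.zeros(f1.shape[0], 2, h, w, device=img1.device)
            else:
                # Initialize from the coarser scale: upsample and rescale the flow vectors.
                prev_w = flow.shape[-1]
                flow = F.interpolate(flow, size=(h, w), mode='bilinear',
                                     align_corners=True) * (w / prev_w)
            for _ in range(iters_per_scale):
                # RAFT-style recurrent refinement at the current scale (placeholder).
                flow = flow + recurrent_update(f1, f2, flow)
        return flow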
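Sketch for "Subjective annotation for a frame interpolation benchmark using artefact amplification": the snippet below is a minimal Thurstone Case V reconstruction of scale values from a matrix of forced-choice paired-comparison counts, illustrating the kind of model named in the abstract. The study's actual analysis, including artefact amplification and its specific comparison design, is more involved; the probability clipping is an illustrative assumption to avoid infinite z-scores.

    import numpy as np
    from scipy.stats import norm

    def thurstone_case_v(wins):
        """wins[i, j] = number of times item i was preferred over item j."""
        total = wins + wins.T                        # votes collected per pair
        with np.errstate(invalid='ignore', divide='ignore'):
            p = wins / total                         # empirical preference probabilities
        p = np.clip(np.nan_to_num(p, nan=0.5), 0.01, 0.99)
        np.fill_diagonal(p, 0.5)

        z = norm.ppf(p)                              # pairwise scale differences
        scores = z.mean(axis=1)                      # Case V: average over opponents
        return scores - scores.mean()                # zero-mean quality scale

Sorting the returned scores gives a ranking of the compared items.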