Subjective annotation for a frame interpolation benchmark using artefact amplification

dc.contributor.author: Men, Hui
dc.contributor.author: Hosu, Vlad
dc.contributor.author: Lin, Hanhe
dc.contributor.author: Bruhn, Andrés
dc.contributor.author: Saupe, Dietmar
dc.date.accessioned: 2023-07-25T12:43:10Z
dc.date.available: 2023-07-25T12:43:10Z
dc.date.issued: 2020
dc.date.updated: 2023-05-16T00:05:02Z
dc.description.abstract: Current benchmarks for optical flow algorithms evaluate the estimation either directly, by comparing the predicted flow fields with the ground truth, or indirectly, by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark. It contains interpolated frames from 155 methods applied to each of 8 contents. For this purpose, we collected forced-choice paired comparisons between interpolated images and the corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons, we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data (3720 comparisons of 20 votes each) we reconstructed absolute quality scale values according to Thurstone's model. As a result, we obtained a re-ranking of the 155 participating algorithms w.r.t. the visual quality of the interpolated frames. This re-ranking not only shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks; the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images. As a first step, we propose such a new full-reference method, called WAE-IQA, which weights the local differences between an interpolated image and its ground truth.
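The abstract mentions reconstructing absolute quality scale values from forced-choice paired comparisons via Thurstone's model. The paper's exact pipeline is not given here, but the classic Thurstone Case V reconstruction it refers to can be sketched as follows; the `wins` matrix and the clipping bounds are illustrative assumptions, not data from the study.

```python
from statistics import NormalDist


def thurstone_case_v(wins):
    """Reconstruct Thurstone Case V scale values from paired comparisons.

    wins[i][j] = number of votes preferring item i over item j.
    Returns one zero-mean scale value per item: each empirical
    preference probability is mapped through the inverse standard
    normal CDF, and scale values are the row means of that z-matrix.
    """
    n = len(wins)
    inv = NormalDist().inv_cdf
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # Clip empirical probabilities away from 0/1 so inv_cdf
            # stays finite for unanimous votes (bounds are arbitrary).
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            z[i][j] = inv(p)
    return [sum(row) / n for row in z]


# Toy example (hypothetical vote counts): item 0 is clearly preferred.
wins = [[0, 18, 19],
        [2, 0, 12],
        [1, 8, 0]]
scores = thurstone_case_v(wins)
```

Because the clipped probability matrix satisfies p_ji = 1 - p_ij, the z-matrix is antisymmetric and the resulting scale values sum to zero; only differences between scale values are meaningful.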
dc.description.sponsorship: Deutsche Forschungsgemeinschaft
dc.description.sponsorship: Projekt DEAL
dc.identifier.issn: 2366-0139
dc.identifier.issn: 2366-0147
dc.identifier.other: 1853783404
dc.identifier.uri: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-133551
dc.identifier.uri: http://elib.uni-stuttgart.de/handle/11682/13355
dc.identifier.uri: http://dx.doi.org/10.18419/opus-13336
dc.language.iso: en
dc.relation.uri: doi:10.1007/s41233-020-00037-y
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject.ddc: 004
dc.title: Subjective annotation for a frame interpolation benchmark using artefact amplification
dc.type: article
ubs.fakultaet: Informatik, Elektrotechnik und Informationstechnik
ubs.fakultaet: Fakultätsübergreifend / Sonstige Einrichtung
ubs.institut: Institut für Visualisierung und Interaktive Systeme
ubs.institut: Fakultätsübergreifend / Sonstige Einrichtung
ubs.publikation.seiten: 18
ubs.publikation.source: Quality and User Experience 5 (2020), No. 8
ubs.publikation.typ: Zeitschriftenartikel

Files

Original bundle
Name: s41233-020-00037-y.pdf
Size: 3.6 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 3.3 KB
Format: Item-specific license agreed to upon submission