Please use this identifier to cite or link to this item:
http://dx.doi.org/10.18419/opus-11704
Authors: Bhattacharjee, Soumyadeep
Title: Deep learning for voice cloning
Issue Date: 2021
Document Type: Master's thesis
Pages: 70
URI: http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-117216
     http://elib.uni-stuttgart.de/handle/11682/11721
     http://dx.doi.org/10.18419/opus-11704
Abstract: Voice cloning is the artificial simulation of the voice of a specific person. We investigate various deep learning techniques for voice cloning and propose a cloning algorithm that generates natural-sounding audio samples using only a few seconds of reference speech from the target speaker. We follow a transfer learning approach from a speaker verification task to text-to-speech synthesis with multi-speaker support. The system generates speech audio in the voices of different speakers, including those that were not observed during training, i.e. in a zero-shot setting. We encode speaker-dependent information in latent embeddings, allowing all other model parameters to be shared across speakers. By training a separate speaker-discriminative encoder network, we decouple the speaker modeling step from speech synthesis. Since the networks have different data requirements, decoupling allows them to be trained on independent datasets. The embedding-based approach improves speaker similarity in zero-shot adaptation to unseen speakers. Furthermore, it reduces computational resource requirements and could be beneficial for use cases that require low-resource deployment.
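The abstract describes a system in which a separately trained speaker encoder produces a fixed-size embedding from a few seconds of reference speech, and a shared multi-speaker synthesizer is conditioned on that embedding. The sketch below illustrates this conditioning idea only; all module names, layer choices, and dimensions are assumptions for illustration and do not reproduce the thesis architecture.

```python
# Minimal sketch of embedding-conditioned multi-speaker synthesis (illustrative only).
# The encoder and synthesizer below are stand-ins, not the thesis implementation.
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Maps a reference utterance (mel frames) to a fixed-size speaker embedding."""

    def __init__(self, n_mels=40, hidden=128, emb_dim=64):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, emb_dim)

    def forward(self, mels):                         # mels: (batch, frames, n_mels)
        _, h = self.rnn(mels)                        # final hidden state: (1, batch, hidden)
        emb = self.proj(h.squeeze(0))                # (batch, emb_dim)
        return nn.functional.normalize(emb, dim=-1)  # unit-norm speaker embedding


class ConditionedSynthesizer(nn.Module):
    """Toy text-to-mel model whose states are concatenated with the speaker embedding.

    All speakers share these parameters; only the embedding carries speaker identity,
    which is what allows zero-shot cloning of speakers unseen during training.
    """

    def __init__(self, vocab=64, text_dim=128, emb_dim=64, n_mels=40):
        super().__init__()
        self.embed = nn.Embedding(vocab, text_dim)
        self.encoder = nn.GRU(text_dim, text_dim, batch_first=True)
        self.decoder = nn.Linear(text_dim + emb_dim, n_mels)  # toy "decoder"

    def forward(self, text_ids, speaker_emb):        # text_ids: (batch, chars)
        enc, _ = self.encoder(self.embed(text_ids))
        spk = speaker_emb.unsqueeze(1).expand(-1, enc.size(1), -1)
        return self.decoder(torch.cat([enc, spk], dim=-1))  # predicted mel frames


# Usage: condition synthesis on an unseen speaker's reference audio (random tensors here).
if __name__ == "__main__":
    ref_mels = torch.randn(1, 200, 40)               # a few seconds of reference mel frames
    text_ids = torch.randint(0, 64, (1, 30))         # encoded input text
    spk_emb = SpeakerEncoder()(ref_mels)             # speaker modeling, trained separately
    mel_out = ConditionedSynthesizer()(text_ids, spk_emb)
    print(mel_out.shape)                             # torch.Size([1, 30, 40])
```

Because the speaker encoder is trained on its own (speaker verification) data, the synthesizer never needs to be retrained for a new speaker: cloning reduces to computing one embedding and reusing the shared parameters.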
Appears in Collections: 05 Fakultät Informatik, Elektrotechnik und Informationstechnik
Files in This Item:
File | Description | Size | Format
---|---|---|---
Bhattacharjee_Master_Thesis.pdf | | 4.94 MB | Adobe PDF
Items in OPUS are protected by copyright, with all rights reserved, unless otherwise indicated.