Please use this identifier to cite or link to this item:
http://dx.doi.org/10.18419/opus-11704
Author(s): | Bhattacharjee, Soumyadeep |
Title: | Deep learning for voice cloning |
Issue date: | 2021 |
Document type: | Master's thesis |
Pages: | 70 |
URI: | http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-ds-117216 http://elib.uni-stuttgart.de/handle/11682/11721 http://dx.doi.org/10.18419/opus-11704 |
Abstract: | Voice cloning is the artificial simulation of the voice of a specific person. We investigate various deep learning techniques for voice cloning and propose a cloning algorithm that generates natural-sounding audio samples using only a few seconds of reference speech from the target speaker. We follow a transfer learning approach from a speaker verification task to text-to-speech synthesis with multi-speaker support. The system generates speech audio in the voices of different speakers, including those that were not observed during the training process, i.e. in a zero-shot setting. We encode speaker-dependent information using latent embeddings, allowing all other model parameters to be shared across speakers. By training a separate speaker-discriminative encoder network, we decouple speaker modeling from speech synthesis. Since the two networks have different data requirements, decoupling allows them to be trained on independent datasets. Using an embedding-based approach for voice cloning improves speaker similarity in zero-shot adaptation to unseen speakers. Furthermore, it minimizes computational resource requirements and could be beneficial for use cases that require low-resource deployment. |
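The abstract describes a two-stage design: a speaker-discriminative encoder maps a short reference utterance to a fixed-size latent embedding, which then conditions a multi-speaker synthesizer whose remaining parameters are shared across all speakers. A minimal sketch of that conditioning idea, assuming toy NumPy stand-ins for both networks (`speaker_encoder` and `condition_synthesizer` are hypothetical illustrations, not the thesis's actual models):

```python
import numpy as np

def speaker_encoder(reference_mel):
    """Hypothetical speaker encoder: pools a variable-length reference
    mel-spectrogram (frames x mel_bins) into a fixed-size, L2-normalized
    speaker embedding, in the style of speaker-verification encoders."""
    emb = reference_mel.mean(axis=0)        # pool over time -> (mel_bins,)
    return emb / np.linalg.norm(emb)        # unit-norm embedding

def condition_synthesizer(text_features, speaker_embedding):
    """Hypothetical conditioning step: broadcast the speaker embedding and
    concatenate it to every text-encoder timestep, so all other synthesizer
    parameters can stay speaker-independent."""
    t = text_features.shape[0]
    tiled = np.tile(speaker_embedding, (t, 1))     # (t, emb_dim)
    return np.concatenate([text_features, tiled], axis=1)

# Toy shapes: 50 reference frames x 80 mel bins, 12 text timesteps x 256 dims
rng = np.random.default_rng(0)
ref_mel = rng.standard_normal((50, 80))
text = rng.standard_normal((12, 256))

emb = speaker_encoder(ref_mel)                     # shape (80,), unit norm
conditioned = condition_synthesizer(text, emb)
print(conditioned.shape)                           # (12, 336)
```

Because the embedding is the only speaker-specific input, cloning an unseen speaker at inference time reduces to running the encoder on a few seconds of their speech, which is what makes the zero-shot setting possible.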
Appears in collections: | 05 Faculty of Computer Science, Electrical Engineering and Information Technology |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
Bhattacharjee_Master_Thesis.pdf | | 4.94 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright.