Seulgi Kim

A Novel Automatic Framework For Speaker Drift Detection in Synthesized Speech

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2026

Jia-Hong Huang
Seulgi Kim
Yi Chieh Liu
Yixian Shen
Hongyi Zhu
Prayag Tiwari
Stevan Rudinac
Evangelos Kanoulas

Abstract

Recent diffusion-based text-to-speech (TTS) models achieve high naturalness and expressiveness, yet often suffer from speaker drift, a subtle, gradual shift in perceived speaker identity within a single utterance. This underexplored phenomenon undermines the coherence of synthetic speech, especially in long-form or interactive settings. We introduce the first automatic framework for detecting speaker drift by formulating it as a binary classification task over utterance-level speaker consistency. Our method computes cosine similarity across overlapping segments of synthesized speech and prompts large language models (LLMs) with structured representations to assess drift. We provide theoretical guarantees for cosine-based drift detection and demonstrate that speaker embeddings exhibit meaningful geometric clustering on the unit sphere. To support evaluation, we construct a high-quality synthetic benchmark with human-validated speaker drift annotations. Experiments with multiple state-of-the-art LLMs confirm the viability of this embedding-to-reasoning pipeline. Our work establishes speaker drift as a standalone research problem and bridges geometric signal analysis with LLM-based perceptual reasoning in modern TTS.
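The core drift signal described above can be illustrated with a minimal sketch. This is not the paper's implementation: it simplifies overlapping segments to consecutive segment embeddings, and the similarity threshold (0.9) and toy 2-D "embeddings" are hypothetical placeholders for real unit-sphere speaker embeddings.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_drift(embeddings, threshold=0.9):
    """Flag speaker drift when any pair of consecutive segment
    embeddings falls below the similarity threshold.

    Returns (drift_detected, list_of_consecutive_similarities).
    """
    sims = [cosine(embeddings[i], embeddings[i + 1])
            for i in range(len(embeddings) - 1)]
    return min(sims) < threshold, sims

# Toy usage: a stable speaker keeps similarity near 1.0; a shift
# toward an orthogonal direction drops it and triggers detection.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
stable_drift, _ = detect_drift([e1, e1, e1])   # no drift
shifted_drift, _ = detect_drift([e1, e1, e2])  # drift detected
```

In the full framework, the resulting similarity sequence would be serialized into a structured representation and passed to an LLM for perceptual reasoning, rather than thresholded directly.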

BibTeX

@inproceedings{huang2026novel,
  title={A Novel Automatic Framework for Speaker Drift Detection in Synthesized Speech},
  author={Huang, Jia-Hong and Kim, Seulgi and Liu, Yi Chieh and Shen, Yixian and Zhu, Hongyi and Tiwari, Prayag and Rudinac, Stevan and Kanoulas, Evangelos},
  booktitle={2026 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={20022--20026},
  year={2026},
  organization={IEEE}
}