Advances in Audiovisual Speech Processing for Robust Voice Activity Detection and Automatic Speech Recognition

Date

2018-12

Abstract

Speech processing systems are widely used in commercial applications, including virtual assistants in smartphones and home assistant devices. Speech-based commands provide convenient hands-free functionality for users. Two key speech processing systems in practical applications are voice activity detection (VAD), which aims to detect when a user is speaking to a system, and automatic speech recognition (ASR), which aims to recognize what the user is saying. A limitation of these speech tasks is the drop in performance observed in noisy environments or when the speech mode differs from neutral speech (e.g., whisper speech). Emerging audiovisual solutions provide principled frameworks to increase the robustness of these systems by incorporating features describing lip motion. This study proposes novel audiovisual solutions for the VAD and ASR tasks.

The dissertation introduces unsupervised and supervised frameworks for audiovisual voice activity detection (AV-VAD). The unsupervised approach combines visual features that capture the semi-periodic nature of articulatory production around the orofacial area. The visual features are combined using principal component analysis (PCA) to obtain a single feature. The threshold between speech and non-speech activity is automatically estimated with the expectation-maximization (EM) algorithm, and the decision boundary is refined with the Bayesian information criterion (BIC), resolving temporal ambiguities caused by different sampling rates and anticipatory movements. The supervised framework is the bimodal recurrent neural network (BRNN), which captures the task-related characteristics of the audio and visual inputs and models the temporal information within and across modalities. The approach relies on three subnetworks implemented with long short-term memory (LSTM) networks. This framework is implemented with either hand-crafted features or feature representations derived directly from the data (i.e., an end-to-end system). The study also extends this framework by improving the temporal modeling with advanced LSTMs (A-LSTMs).

For audiovisual automatic speech recognition (AV-ASR), the study explores the use of visual features to compensate for the mismatch observed when the system is evaluated with whisper speech. We propose supervised adaptation schemes that significantly reduce the mismatch between normal and whisper speech across speakers. The study also introduces the gating neural network (GNN), which aims to attenuate the effect of unreliable features, creating AV-ASR systems that improve, or at least maintain, the performance of an ASR system implemented with speech alone. Finally, the dissertation introduces the front-end alignment neural network (AliNN) to address the temporal alignment problem between audio and visual features. This front-end system is important because lip motion often precedes speech (e.g., anticipatory movements). The framework relies on recurrent neural networks (RNNs) with an attention model. The resulting aligned features are concatenated and fed to conventional back-end ASR systems, yielding performance improvements. The proposed AV-VAD and AV-ASR systems are evaluated on large audiovisual corpora, achieving competitive performance under real-world scenarios and outperforming conventional audio-based VAD and ASR systems as well as alternative audiovisual systems proposed in previous studies.
Taken collectively, this dissertation has made algorithmic advancements for audiovisual systems, representing novel contributions to the field of multimodal processing.
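As a rough illustration of the unsupervised AV-VAD idea summarized above, the sketch below projects frame-level visual features onto a single dimension with PCA and fits a two-component Gaussian mixture (estimated with EM) to separate speech from non-speech frames. The function name, feature dimensions, and synthetic data are illustrative assumptions rather than the dissertation's implementation, and the BIC-based refinement of the decision boundary is omitted.

```python
# Minimal sketch (not the dissertation's code): collapse visual features to one
# dimension with PCA, then use EM (via a two-component Gaussian mixture) to
# derive a speech / non-speech decision per frame.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def unsupervised_vad_scores(visual_features):
    """visual_features: (num_frames, num_visual_dims) array of lip-motion features."""
    # Collapse the visual features into a single activity score per frame.
    score = PCA(n_components=1).fit_transform(visual_features).ravel()

    # EM estimates two clusters, intended to correspond to speech and non-speech.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(score.reshape(-1, 1))

    # Assumption: the component with the larger mean corresponds to speech activity
    # (the sign of the PCA projection is arbitrary, so this may need flipping).
    speech_component = int(np.argmax(gmm.means_.ravel()))
    return score, labels == speech_component

# Example with synthetic features: 200 "silent" frames followed by 200 "speaking" frames.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 0.1, (200, 5)),
                      rng.normal(1.0, 0.3, (200, 5))])
_, is_speech = unsupervised_vad_scores(features)
print(f"Frames labeled as speech: {is_speech.sum()} / {len(is_speech)}")
```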
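Similarly, the gating idea behind the GNN (attenuating unreliable features before fusion) can be sketched with a learned sigmoid gate over the visual stream. The layer sizes, the concatenation-based fusion, and all names below are assumptions made for illustration only, not the dissertation's architecture.

```python
# Hedged sketch of a gating mechanism: a sigmoid gate, conditioned on both
# modalities, scales the visual features so unreliable frames can be pushed
# toward zero before audiovisual fusion.
import torch
import torch.nn as nn

class GatedAudiovisualFusion(nn.Module):
    def __init__(self, audio_dim: int, visual_dim: int, hidden_dim: int = 64):
        super().__init__()
        # The gate looks at both modalities to judge the reliability of the visual stream.
        self.gate = nn.Sequential(nn.Linear(audio_dim + visual_dim, visual_dim),
                                  nn.Sigmoid())
        self.fusion = nn.Linear(audio_dim + visual_dim, hidden_dim)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # Gate values near 0 suppress visual features; values near 1 pass them through.
        g = self.gate(torch.cat([audio, visual], dim=-1))
        fused = torch.cat([audio, g * visual], dim=-1)
        return torch.relu(self.fusion(fused))

# Example: a batch of 8 frames with 40-dim audio and 20-dim visual features.
audio = torch.randn(8, 40)
visual = torch.randn(8, 20)
print(GatedAudiovisualFusion(40, 20)(audio, visual).shape)  # torch.Size([8, 64])
```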

Keywords

Motion detectors, Principal components analysis, Automatic speech recognition, Neural networks (Computer science)
