Robust audio sensing with multi-sound classification
Abstract
Audio data is a highly rich form of information, often containing patterns with unique acoustic signatures. In pervasive sensing environments, thanks to increasingly capable smart devices, we have witnessed growing research interest in sound sensing to detect the ambient environment, recognise users' daily activities, and infer their health conditions. However, the main challenge is that real-world environments often contain multiple sound sources, which can significantly compromise the robustness of such environment, event, and activity detection applications. In this paper, we explore different approaches to multi-sound classification and propose a stacked classifier based on recent advances in deep learning. We evaluate our proposed approach in a comprehensive set of experiments on both sound-effect and real-world datasets. The results demonstrate that our approach can robustly identify each sound category among mixed acoustic signals, without the need for any a priori knowledge about the number and signatures of the sounds in the mixture.
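The abstract's key claim — identifying every sound category present in a mixture without knowing the number of sounds in advance — is commonly realised by framing the task as multi-label classification, where each category gets an independent score and all categories above a threshold are reported. The sketch below illustrates only that framing; the label set, logits, and threshold are hypothetical, and this is not the paper's stacked-classifier implementation.

```python
import math

# Hypothetical label set for illustration (not from the paper).
LABELS = ["speech", "dog_bark", "siren", "rain"]

def sigmoid(x: float) -> float:
    """Map a raw classifier score (logit) to an independent probability."""
    return 1.0 / (1.0 + math.exp(-x))

def detect_sounds(logits, threshold=0.5):
    """Return every label whose sigmoid score passes the threshold.

    Because each label is scored independently, zero, one, or several
    categories can fire for the same clip -- the number of concurrent
    sounds never needs to be known a priori.
    """
    return [label for label, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

# A mixed clip might produce high logits for two categories at once:
print(detect_sounds([2.1, -1.3, 0.8, -2.0]))  # -> ['speech', 'siren']
```

The contrast with single-label (softmax) classification is the point: a softmax would force exactly one winner, whereas independent per-category thresholds let overlapping sounds in a real-world recording be reported together.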
Citation
Ye, J & Haubrick, P 2019, 'Robust audio sensing with multi-sound classification', in 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom), 8767402, IEEE Computer Society, pp. 1-7, IEEE International Conference on Pervasive Computing and Communications (PerCom 2019), Kyoto, Japan, 12/03/19. https://doi.org/10.1109/PERCOM.2019.8767402
Publication
2019 IEEE International Conference on Pervasive Computing and Communications (PerCom)
ISSN
2474-2503
Type
Conference item
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.