Date of Publication: 7th June 2016
Abstract: An evidential recording contains many artifacts and distortions. Reflections of sound depend on the geometry of the room and smear the recording, an effect known as reverberation, while background noise depends on the unwanted audio sources active during the recording. Because acoustic reverberation is determined by the shape and composition of the room, and because digital media must be authenticated before it can be admitted as evidence in court, these distortions carry forensically useful information about the recording environment. We describe a statistical technique based on spectral subtraction to estimate the amount of reverberation, and nonlinear filtering based on particle filtering to estimate the background noise. Features are extracted from both estimates using the MFCC approach, and the feature vector is formed by combining the reverberation and background-noise features. An SVM classifier is then used to classify the acoustic environments. The effectiveness of the proposed method is tested on a data set of speech recordings of two human speakers (one male and one female) made in eight acoustic environments with four commercial-grade microphones; the method is robust to MP3 compression attacks.

Keywords: acoustic environment identification (AEI), audio forensics, ballistic settings.
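As a rough illustration of the spectral-subtraction idea mentioned in the abstract, the sketch below subtracts an estimated noise magnitude spectrum from a noisy frame while retaining the noisy phase. This is a minimal numpy sketch, not the authors' implementation: the frame length, spectral floor, and toy sine-plus-noise signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.01):
    """Subtract an estimated noise magnitude spectrum from one frame.

    frame     -- time-domain samples of one analysis frame
    noise_mag -- estimated magnitude spectrum of the noise component
    floor     -- spectral floor keeping magnitudes non-negative
    Returns the enhanced time-domain frame (noisy phase is kept).
    """
    spec = np.fft.rfft(frame)
    mag = np.abs(spec)
    phase = np.angle(spec)
    # Half-wave rectified subtraction with a small spectral floor.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))

# Toy example: a sine corrupted by additive white noise.
rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
clean = np.sin(2 * np.pi * 8 * t / n)
noisy = clean + 0.3 * rng.standard_normal(n)
# Noise spectrum estimated from a separate stretch of the same noise process,
# standing in for a noise-only segment of the recording.
noise_mag = np.abs(np.fft.rfft(0.3 * rng.standard_normal(n)))

enhanced = spectral_subtract(noisy, noise_mag)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((enhanced - clean) ** 2)
```

In a forensic setting the interesting quantity is not the enhanced signal itself but the subtracted residual, from which reverberation and noise features can then be derived.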