The references of “An Overview of Speech Dereverberation”

References

[1] Y. Huang, J. Benesty, and J. Chen, “A blind channel identification-based two-stage approach to separation and dereverberation of speech signals in a reverberant environment,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 882-895, 2005.

[2] K. Kinoshita et al., “A summary of the REVERB challenge: state-of-the-art and remaining challenges in reverberant speech processing research,” EURASIP Journal on Advances in Signal Processing, vol. 2016, no. 1, p. 7, 2016.

[3] M. Miyoshi and Y. Kaneda, “Inverse filtering of room acoustics,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 2, pp. 145-152, 1988.

[4] K. Furuya, “Noise reduction and dereverberation using correlation matrix based on the multiple-input/output inverse-filtering theorem (MINT),” in International Workshop on Hands-Free Speech Communication, 2001.

[5] S. C. Douglas, H. Sawada, and S. Makino, “Natural gradient multichannel blind deconvolution and speech separation using causal FIR filters,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 1, pp. 92-104, 2004.

[6] I. Kodrasi and S. Doclo, “Joint dereverberation and noise reduction based on acoustic multi-channel equalization,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 4, pp. 680-693, 2016.

[7] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.

[8] D. Bees, M. Blostein, and P. Kabal, “Reverberant speech enhancement using cepstral processing,” in IEEE International Conference on Acoustics, Speech, and Signal Processing, 1991, pp. 977-980: IEEE Computer Society.

[9] S. T. Neely and J. B. Allen, “Invertibility of a room impulse response,” The Journal of the Acoustical Society of America, vol. 66, no. 1, pp. 165-169, 1979.

[10] D. Zhang and G. Chen, “Speech signal dereverberation with cepstral processing,” Technical Acoustics, no. 1, pp. 39-44, 2009.

[11] M. Wu and D. Wang, “A two-stage algorithm for one-microphone reverberant speech enhancement,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 3, pp. 774-784, 2006.

[12] Q. Liao, R. Kong, Y. Shen, J. Gu, H. Zhao, and Z. Tao, “Dereverberation based on minimum phase decomposition,” Communications Technology, vol. 44, no. 6, pp. 78-82, 2011.

[13] Q.-G. Liu, B. Champagne, and P. Kabal, “A microphone array processing technique for speech enhancement in a reverberant space,” Speech Communication, vol. 18, no. 4, pp. 317-334, 1996.

[14] P. Mowlaee, R. Saeidi, and Y. Stylianou, “Advances in phase-aware signal processing in speech communication,” Speech communication, vol. 81, pp. 1-29, 2016.

[15] K. Paliwal, K. Wójcicki, and B. Shannon, “The importance of phase in speech enhancement,” Speech Communication, vol. 53, no. 4, pp. 465-494, 2011.

[16] R. Peng, Z.-H. Tan, X. Li, and C. Zheng, “A perceptually motivated LP residual estimator in noisy and reverberant environments,” Speech Communication, vol. 96, pp. 129-141, 2018.

[17] T. Yoshioka, T. Nakatani, M. Miyoshi, and H. G. Okuno, “Blind separation and dereverberation of speech mixtures by joint optimization,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 1, pp. 69-84, 2010.

[18] K. Kinoshita, M. Delcroix, H. Kwon, T. Mori, and T. Nakatani, “Neural network-based spectrum estimation for online WPE dereverberation,” in Interspeech, 2017, pp. 384-388.

[19] M. Parchami, W.-P. Zhu, and B. Champagne, “Speech dereverberation using weighted prediction error with correlated inter-frame speech components,” Speech Communication, vol. 87, pp. 49-57, 2017.

[20] T. Nakatani and K. Kinoshita, “A unified convolutional beamformer for simultaneous denoising and dereverberation,” IEEE Signal Processing Letters, vol. 26, no. 6, pp. 903-907, 2019.

[21] X. Zhang, Y. Li, C. Zheng, T. Cao, M. Sun, and G. Min, “Research progress and prospect of speech dereverberation technology,” Journal of Data Acquisition and Processing, vol. 32, no. 6, pp. 1069-1081, 2017.

[22] D. Giacobello and T. L. Jensen, “Speech dereverberation based on convex optimization algorithms for group sparse linear prediction,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 446-450: IEEE.

[23] S. Braun and E. A. Habets, “Linear prediction-based online dereverberation and noise reduction using alternating Kalman filters,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 6, pp. 1119-1129, 2018.

[24] L. Mousavi, F. Razzazi, and A. Haghbin, “Blind speech dereverberation using sparse decomposition and multi-channel linear prediction,” International Journal of Speech Technology, vol. 22, no. 3, pp. 729-738, 2019.

[25] K. Lebart, J.-M. Boucher, and P. N. Denbigh, “A new method based on spectral subtraction for speech dereverberation,” Acta Acustica united with Acustica, vol. 87, no. 3, pp. 359-366, 2001.

[26] Z. Chen, F. Yin, and W. Peng, “An audio reverberation suppression device and suppression method,” China Patent, 2013.

[27] R. Martin, “Speech enhancement based on minimum mean-square error estimation and supergaussian priors,” IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 845-856, 2005.

[28] Z. Li, W. Wu, Q. Zhang, and H. Ren, “Multi-band spectral subtraction of speech enhancement based on maximum posteriori phase estimation,” Journal of Electronics and Information Technology, vol. 39, no. 9, pp. 2282-2286, 2017.

[29] Y. Guo, R. Peng, C. Zheng, and X. Li, “Maximum skewness-based multichannel inverse filtering for speech dereverberation,” Applied Acoustics, vol. 38, no. 1, pp. 58-67, 2019.

[30] M. G. Christensen and A. Jakobsson, “Multi-pitch estimation,” Synthesis Lectures on Speech and Audio Processing, vol. 5, no. 1, pp. 1-160, 2009.

[31] B. Harvey and S. O’Young, “A harmonic spectral beamformer for the enhanced localization of propeller-driven aircraft,” Journal of Unmanned Vehicle Systems, vol. 7, no. 2, pp. 156-174, 2019.

[32] A. Schmidt, H. W. Löllmann, and W. Kellermann, “A novel ego-noise suppression algorithm for acoustic signal enhancement in autonomous systems,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 6583-6587: IEEE.

[33] T. Nakatani and M. Miyoshi, “Blind dereverberation of single channel speech signal based on harmonic structure,” in 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’03), 2003, vol. 1, pp. I-92: IEEE.

[34] K. Kinoshita, T. Nakatani, and M. Miyoshi, “Fast estimation of a precise dereverberation filter based on speech harmonicity,” in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’05), 2005, vol. 1, pp. I-1073-I-1076.

[35] T. Nakatani, K. Kinoshita, and M. Miyoshi, “Harmonicity-based blind dereverberation for single-channel speech signals,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 1, pp. 80-95, 2006.

[36] N. Roman and D. Wang, “Pitch-based monaural segregation of reverberant speech,” The Journal of the Acoustical Society of America, vol. 120, no. 1, pp. 458-469, 2006.

[37] S. Mosayyebpour, H. Sheikhzadeh, T. A. Gulliver, and M. Esmaeili, “Single-microphone LP residual skewness-based inverse filtering of the room impulse response,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 5, pp. 1617-1632, 2012.

[38] T. Hussain, S. M. Siniscalchi, H.-L. S. Wang, Y. Tsao, V. M. Salerno, and W.-H. Liao, “Ensemble hierarchical extreme learning machine for speech dereverberation,” IEEE Transactions on Cognitive and Developmental Systems, 2019.

[39] N. Kilis and N. Mitianoudis, “A novel scheme for single-channel speech dereverberation,” Acoustics, vol. 1, no. 3, pp. 711-725, 2019.

[40] Y. Zhao, Z.-Q. Wang, and D. Wang, “Two-stage deep learning for noisy-reverberant speech enhancement,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 1, pp. 53-62, 2018.

[41] M. Jeub, C. Nelke, C. Beaugeant, and P. Vary, “Blind estimation of the coherent-to-diffuse energy ratio from noisy speech signals,” in 2011 19th European Signal Processing Conference, 2011, pp. 1347-1351: IEEE.

[42] A. Schwarz and W. Kellermann, “Coherent-to-diffuse power ratio estimation for dereverberation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 6, pp. 1006-1018, 2015.
