| Summary: | In deep-learning-based speech enhancement, the Long Short-Term Memory network (LSTM) handles sequential speech enhancement well, but model training becomes slow when large-scale noisy speech data are involved. To address this problem, this paper proposes a speech enhancement method based on the Quasi-Recurrent Neural Network (QRNN). Gate functions and memory cells are used to preserve the contextual correlation of noisy speech sequences, while the computation of the gate functions no longer depends on the output of the previous time step. Moreover, the model introduces matrix convolution into the computation of the noisy speech input and the gate functions, so that speech sequence information at multiple time steps can be processed simultaneously, strengthening the model's parallel computing ability. Experimental results show that, compared with the LSTM, the proposed method greatly improves the training speed of the network model while maintaining speech enhancement performance.
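As a rough illustration of the quasi-recurrent structure described above, the sketch below (assuming a PyTorch implementation; the layer sizes, kernel width, and spectral-feature input are illustrative assumptions, not the paper's exact configuration) shows how convolutions compute the candidate and gate activations for all time steps in parallel, leaving only a lightweight element-wise pooling step that runs sequentially.

```python
import torch
import torch.nn as nn


class QRNNLayer(nn.Module):
    """Minimal quasi-recurrent layer sketch: causal 1-D convolutions produce the
    candidate, forget, and output gates for every time step at once; only the
    element-wise fo-pooling recurrence is sequential, so no matrix products
    depend on the previous time step's output."""

    def __init__(self, input_size, hidden_size, kernel_size=2):
        super().__init__()
        self.hidden_size = hidden_size
        self.kernel_size = kernel_size
        # A single convolution yields candidate z, forget gate f, output gate o.
        self.conv = nn.Conv1d(input_size, 3 * hidden_size, kernel_size)

    def forward(self, x):
        # x: (batch, time, features), e.g. spectral features of noisy speech (assumed input).
        x = x.transpose(1, 2)                                 # -> (batch, features, time)
        x = nn.functional.pad(x, (self.kernel_size - 1, 0))   # causal padding over time
        z, f, o = self.conv(x).chunk(3, dim=1)                # all time steps in parallel
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)

        # fo-pooling: the only sequential part, purely element-wise.
        c = torch.zeros(z.size(0), self.hidden_size, device=z.device)
        outputs = []
        for t in range(z.size(2)):
            c = f[:, :, t] * c + (1 - f[:, :, t]) * z[:, :, t]
            outputs.append(o[:, :, t] * c)
        return torch.stack(outputs, dim=1)                    # (batch, time, hidden)
```

Because the convolutions and gate nonlinearities involve no dependence on the previous output, they can be batched across the whole sequence, which is the source of the training-speed advantage over the LSTM claimed above.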
|