Normally we turn a convolution in the time domain into a pointwise multiplication in the frequency domain, which saves compute and simplifies the structure.
Today it occurred to me: if the windowing step done in the time domain were instead done in the frequency domain, what would that take? I asked GPT, and it can be done with a convolution.
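As a quick sanity check of that duality (a NumPy sketch of my own, not part of the original script): circular convolution of two sequences in the time domain corresponds to pointwise multiplication of their DFTs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Circular convolution computed directly from its definition
n = np.arange(N)
y_time = np.array([np.sum(x * h[(k - n) % N]) for k in range(N)])

# Same result via the frequency domain: pointwise product of the DFTs
y_freq = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(np.max(np.abs(y_time - y_freq)))  # difference at floating-point noise level
```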
The definition is as follows (from Wikipedia):
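In terms of the N-point DFT, the multiplication (windowing) theorem — the identity the code below verifies — reads:

```latex
\mathrm{DFT}\{x[n]\,w[n]\}[k]
\;=\; \frac{1}{N}\,\bigl(X \circledast_N W\bigr)[k]
\;=\; \frac{1}{N}\sum_{m=0}^{N-1} X[m]\, W[(k-m) \bmod N]
```

That is, a pointwise product in the time domain becomes an N-point circular convolution of the spectra, scaled by 1/N.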
I also had GPT write a MATLAB program to verify this. GPT initially used conv; only after searching on my own did I find that it has to be cconv (circular convolution) instead.
% Parameters
N = 512;     % Input sequence length
fs = 16000;  % Sampling rate (Hz)
f = 1000;    % Signal frequency (Hz)
% Time vector
t = (0:N-1) / fs;
% Test signal: a 1 kHz tone plus a 500 Hz component
x = sin(2 * pi * f * t) + 0.5 * sin(2 * pi * 500 * t);
window = hann(N)'; % Generate Hann window function and transpose to a row vector
% Time-domain windowing
x_time_windowed = x .* window;
% Spectrum of the time-domain windowed signal
X_time_windowed = fft(x_time_windowed); % N-point FFT
% Frequency-domain windowing
X = fft(x);      % N-point FFT of the input sequence
W = fft(window); % N-point FFT of the window function
% N-point circular convolution of the spectra, scaled by 1/N per the DFT multiplication theorem
Y_freq_windowed = (1 / N) * cconv(X, W, N);
% Error between the two frequency-domain results
err = X_time_windowed - Y_freq_windowed; % renamed to avoid shadowing MATLAB's built-in error()
max_error = max(abs(err)); % Maximum absolute error
% Plot results
figure;
subplot(3, 1, 1);
plot(abs(X_time_windowed));
title('Magnitude of Time-Domain Windowed Signal in Frequency Domain');
subplot(3, 1, 2);
plot(abs(Y_freq_windowed));
title('Magnitude of Frequency-Domain Windowed Signal');
subplot(3, 1, 3);
plot(abs(err));
title(['Error Magnitude, Maximum Error = ', num2str(max_error)]);
disp(['Maximum Error: ', num2str(max_error)]);
The results are shown in the figure below:
This post is mainly meant to record the code and this small theorem.
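For readers without MATLAB, here is a NumPy sketch of the same check (my own port, not the original script; the circular convolution of the two spectra is itself computed via a second pair of FFTs, which is one standard way to get cconv-like behavior):

```python
import numpy as np

N = 512
fs = 16000          # sampling rate (Hz)
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 500 * t)
w = np.hanning(N)   # symmetric Hann window, matching MATLAB's default hann(N)

# Time-domain windowing, then N-point FFT
X_tw = np.fft.fft(x * w)

# Frequency-domain windowing: N-point circular convolution of the
# spectra, scaled by 1/N (DFT multiplication theorem)
X = np.fft.fft(x)
W = np.fft.fft(w)
Y = np.fft.ifft(np.fft.fft(X) * np.fft.fft(W)) / N

print(np.max(np.abs(X_tw - Y)))  # tiny, at floating-point noise level
```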