torch (7): Math operations (2)

Reference: torch (7): Math operations (2) - Cloud+ Community - Tencent Cloud

Contents

Spectral Ops

torch.fft(input, signal_ndim, normalized=False) → Tensor

torch.ifft(input, signal_ndim, normalized=False) → Tensor

torch.rfft(input, signal_ndim, normalized=False, onesided=True) → Tensor

torch.irfft(input, signal_ndim, normalized=False, onesided=True, signal_sizes=None) → Tensor

torch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=True)[source]

torch.bartlett_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.blackman_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.hann_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.bincount(input, weights=None, minlength=0) → Tensor

torch.broadcast_tensors(*tensors) → List of Tensors[source]

torch.cartesian_prod(*tensors)[source]

torch.cdist(x1, x2, p=2) → Tensor

torch.combinations(input, r=2, with_replacement=False) → seq

torch.cross(input, other, dim=-1, out=None) → Tensor

torch.diag(input, diagonal=0, out=None) → Tensor

torch.diag_embed(input, offset=0, dim1=-2, dim2=-1) → Tensor

torch.diagflat(input, offset=0) → Tensor

torch.diagonal(input, offset=0, dim1=0, dim2=1) → Tensor

torch.einsum(equation, *operands) → Tensor[source]

torch.flatten(input, start_dim=0, end_dim=-1) → Tensor

torch.flip(input, dims) → Tensor

torch.rot90(input, k, dims) → Tensor

torch.histc(input, bins=100, min=0, max=0, out=None) → Tensor

torch.meshgrid(*tensors, **kwargs)[source]

torch.renorm(input, p, dim, maxnorm, out=None) → Tensor

torch.repeat_interleave()

torch.repeat_interleave(repeats) → Tensor

torch.tensordot(a, b, dims=2)[source]

torch.trace(input) → Tensor

torch.tril(input, diagonal=0, out=None) → Tensor

torch.tril_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) → Tensor

torch.triu(input, diagonal=0, out=None) → Tensor

torch.triu_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) → Tensor

BLAS and LAPACK Operations

torch.addbmm(beta=1, input, alpha=1, batch1, batch2, out=None) → Tensor

torch.addmm(beta=1, input, alpha=1, mat1, mat2, out=None) → Tensor

torch.addmv(beta=1, input, alpha=1, mat, vec, out=None) → Tensor

torch.addr(beta=1, input, alpha=1, vec1, vec2, out=None) → Tensor

torch.baddbmm(beta=1, input, alpha=1, batch1, batch2, out=None) → Tensor

torch.bmm(input, mat2, out=None) → Tensor

torch.chain_matmul(*matrices)[source]

torch.cholesky(input, upper=False, out=None) → Tensor

torch.cholesky_inverse(input, upper=False, out=None) → Tensor

torch.cholesky_solve(input, input2, upper=False, out=None) → Tensor

torch.dot(input, tensor) → Tensor

torch.eig(input, eigenvectors=False, out=None) -> (Tensor, Tensor)

torch.ger(input, vec2, out=None) → Tensor

torch.inverse(input, out=None) → Tensor

torch.det(input) → Tensor

torch.logdet(input) → Tensor

torch.slogdet(input) -> (Tensor, Tensor)

torch.lstsq(input, A, out=None) → Tensor

torch.lu(A, pivot=True, get_infos=False, out=None)[source]

torch.lu_solve(input, LU_data, LU_pivots, out=None) → Tensor

torch.matmul(input, other, out=None) → Tensor

torch.matrix_power(input, n) → Tensor

torch.matrix_rank(input, tol=None, bool symmetric=False) → Tensor

torch.mm(input, mat2, out=None) → Tensor

torch.mv(input, vec, out=None) → Tensor

torch.orgqr(input, input2) → Tensor

torch.ormqr(input, input2, input3, left=True, transpose=False) → Tensor

torch.pinverse(input, rcond=1e-15) → Tensor

torch.qr(input, some=True, out=None) -> (Tensor, Tensor)

torch.solve(input, A, out=None) -> (Tensor, Tensor)

torch.svd(input, some=True, compute_uv=True, out=None) -> (Tensor, Tensor, Tensor)

torch.symeig(input, eigenvectors=False, upper=True, out=None) -> (Tensor, Tensor)

torch.trapz()

torch.trapz(y, *, dx=1, dim=-1) → Tensor


Spectral Ops

torch.fft(input, signal_ndim, normalized=False) → Tensor

Complex-to-complex Discrete Fourier Transform

This method computes the complex-to-complex discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:

$$X[\omega_1, \dots, \omega_d] = \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d]\, e^{-j\ 2\pi \sum_{i=0}^{d} \frac{\omega_i n_i}{N_i}},$$

where $d$ = signal_ndim is the number of dimensions of the signal, and $N_i$ is the size of signal dimension $i$.

This method supports 1D, 2D and 3D complex-to-complex transforms, indicated by signal_ndim. input must be a tensor whose last dimension is of size 2, representing the real and imaginary components of complex numbers, and it should have at least signal_ndim + 1 dimensions, with an optionally arbitrary number of leading batch dimensions. If normalized is set to True, this normalizes the result by dividing it by $\sqrt{\prod_{i=1}^{K} N_i}$ so that the operator is unitary.

Returns the real and the imaginary parts together as one tensor of the same shape as input.

The inverse of this function is ifft().

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of the same geometry with the same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters

  • input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

Returns

A tensor containing the complex-to-complex Fourier transform result

Return type

Tensor

Example:

>>> # unbatched 2D FFT
>>> x = torch.randn(4, 3, 2)
>>> torch.fft(x, 2)
tensor([[[-0.0876,  1.7835],
         [-2.0399, -2.9754],
         [ 4.4773, -5.0119]],

        [[-1.5716,  2.7631],
         [-3.8846,  5.2652],
         [ 0.2046, -0.7088]],

        [[ 1.9938, -0.5901],
         [ 6.5637,  6.4556],
         [ 2.9865,  4.9318]],

        [[ 7.0193,  1.1742],
         [-1.3717, -2.1084],
         [ 2.0289,  2.9357]]])
>>> # batched 1D FFT
>>> torch.fft(x, 1)
tensor([[[ 1.8385,  1.2827],
         [-0.1831,  1.6593],
         [ 2.4243,  0.5367]],

        [[-0.9176, -1.5543],
         [-3.9943, -2.9860],
         [ 1.2838, -2.9420]],

        [[-0.8854, -0.6860],
         [ 2.4450,  0.0808],
         [ 1.3076, -0.5768]],

        [[-0.1231,  2.7411],
         [-0.3075, -1.7295],
         [-0.5384, -2.0299]]])
>>> # arbitrary number of batch dimensions, 2D FFT
>>> x = torch.randn(3, 3, 5, 5, 2)
>>> y = torch.fft(x, 2)
>>> y.shape
torch.Size([3, 3, 5, 5, 2])

torch.ifft(input, signal_ndim, normalized=False) → Tensor

Complex-to-complex Inverse Discrete Fourier Transform

This method computes the complex-to-complex inverse discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:

$$X[\omega_1, \dots, \omega_d] = \frac{1}{\prod_{i=1}^{d} N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d]\, e^{\,j\ 2\pi \sum_{i=0}^{d} \frac{\omega_i n_i}{N_i}},$$

where $d$ = signal_ndim is the number of dimensions of the signal, and $N_i$ is the size of signal dimension $i$.

The argument specifications are almost identical to fft(). However, if normalized is set to True, this instead returns the result multiplied by $\sqrt{\prod_{i=1}^{d} N_i}$, so that the operator becomes unitary. Therefore, to invert a fft(), the normalized argument should be set identically for fft().

Returns the real and the imaginary parts together as one tensor of the same shape as input.

The inverse of this function is fft().

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of the same geometry with the same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters

  • input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

Returns

A tensor containing the complex-to-complex inverse Fourier transform result

Return type

Tensor

Example:

>>> x = torch.randn(3, 3, 2)
>>> x
tensor([[[ 1.2766,  1.3680],
         [-0.8337,  2.0251],
         [ 0.9465, -1.4390]],

        [[-0.1890,  1.6010],
         [ 1.1034, -1.9230],
         [-0.9482,  1.0775]],

        [[-0.7708, -0.8176],
         [-0.1843, -0.2287],
         [-1.9034, -0.2196]]])
>>> y = torch.fft(x, 2)
>>> torch.ifft(y, 2)  # recover x
tensor([[[ 1.2766,  1.3680],
         [-0.8337,  2.0251],
         [ 0.9465, -1.4390]],

        [[-0.1890,  1.6010],
         [ 1.1034, -1.9230],
         [-0.9482,  1.0775]],

        [[-0.7708, -0.8176],
         [-0.1843, -0.2287],
         [-1.9034, -0.2196]]])
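
Since the normalized argument must match between fft() and ifft() (see above), a normalized round trip also recovers the input. A quick check, reusing x from the example above:

>>> y = torch.fft(x, 2, normalized=True)
>>> torch.allclose(torch.ifft(y, 2, normalized=True), x)
True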

torch.rfft(input, signal_ndim, normalized=False, onesided=True) → Tensor

Real-to-complex Discrete Fourier Transform

This method computes the real-to-complex discrete Fourier transform. It is mathematically equivalent to fft(), differing only in the formats of the input and output.

This method supports 1D, 2D and 3D real-to-complex transforms, indicated by signal_ndim. input must be a tensor with at least signal_ndim dimensions, with an optionally arbitrary number of leading batch dimensions. If normalized is set to True, this normalizes the result by dividing it by $\sqrt{\prod_{i=1}^{K} N_i}$ so that the operator is unitary, where $N_i$ is the size of signal dimension $i$.

The real-to-complex Fourier transform results follow conjugate symmetry:

$$X[\omega_1, \dots, \omega_d] = X^*[N_1 - \omega_1, \dots, N_d - \omega_d],$$

where the index arithmetic is computed modulo the size of the corresponding dimension, $^*$ is the conjugate operator, and $d$ = signal_ndim. The onesided flag controls whether to avoid redundancy in the output. If set to True (default), the output will not be the full complex result of shape $(*, 2)$, where $*$ is the shape of input; instead, the last signal dimension is halved to size $\lfloor \frac{N_d}{2} \rfloor + 1$.

The inverse of this function is irfft().

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of the same geometry with the same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters

  • input (Tensor) – the input tensor of at least signal_ndim dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

  • onesided (bool, optional) – controls whether to return half of results to avoid redundancy. Default: True

Returns

A tensor containing the real-to-complex Fourier transform result

Return type

Tensor

Example:

>>> x = torch.randn(5, 5)
>>> torch.rfft(x, 2).shape
torch.Size([5, 3, 2])
>>> torch.rfft(x, 2, onesided=False).shape
torch.Size([5, 5, 2])
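
Per the halving rule above, the onesided result is the leading $\lfloor \frac{N_d}{2} \rfloor + 1$ slice of the full result along the last signal dimension. A small sanity check (a sketch; here $\lfloor 5/2 \rfloor + 1 = 3$, and allclose absorbs floating-point noise):

>>> full = torch.rfft(x, 2, onesided=False)
>>> half = torch.rfft(x, 2, onesided=True)
>>> torch.allclose(full[:, :3], half)
True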

torch.irfft(input, signal_ndim, normalized=False, onesided=True, signal_sizes=None) → Tensor

Complex-to-real Inverse Discrete Fourier Transform

This method computes the complex-to-real inverse discrete Fourier transform. It is mathematically equivalent to ifft(), differing only in the formats of the input and output.

The argument specifications are almost identical to ifft(). Similar to ifft(), if normalized is set to True, this normalizes the result by multiplying it by $\sqrt{\prod_{i=1}^{K} N_i}$ so that the operator is unitary, where $N_i$ is the size of signal dimension $i$.

Note

Due to the conjugate symmetry, input does not need to contain the full complex frequency values. Roughly half of the values are sufficient, as is the case when input is produced by rfft() with rfft(signal, onesided=True). In such a case, set the onesided argument of this method to True. Moreover, the original signal shape information can sometimes be lost; optionally set signal_sizes to the size of the original signal (without the batch dimensions, if in batched mode) to recover it with the correct shape.

Therefore, to invert an rfft(), the normalized and onesided arguments should be set identically for irfft(), and preferably signal_sizes should be given to avoid a size mismatch. See the example below for a case of size mismatch.

See rfft() for details on conjugate symmetry.

The inverse of this function is rfft().

Warning

Generally speaking, input to this function should contain values following conjugate symmetry. Note that even if onesided is True, symmetry is often still required on some part of the input. When this requirement is not satisfied, the behavior of irfft() is undefined. Since torch.autograd.gradcheck() estimates the numerical Jacobian with point perturbations, irfft() will almost certainly fail the check.

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of the same geometry with the same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters

  • input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions

  • signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

  • normalized (bool, optional) – controls whether to return normalized results. Default: False

  • onesided (bool, optional) – controls whether input was halved to avoid redundancy, e.g., by rfft(). Default: True

  • signal_sizes (list or torch.Size, optional) – the size of the original signal (without batch dimension). Default: None

Returns

A tensor containing the complex-to-real inverse Fourier transform result

Return type

Tensor

Example:

>>> x = torch.randn(4, 4)
>>> torch.rfft(x, 2, onesided=True).shape
torch.Size([4, 3, 2])
>>>
>>> # notice that with onesided=True, output size does not determine the original signal size
>>> x = torch.randn(4, 5)
>>> torch.rfft(x, 2, onesided=True).shape
torch.Size([4, 3, 2])
>>>
>>> # now we use the original shape to recover x
>>> x
tensor([[-0.8992,  0.6117, -1.6091, -0.4155, -0.8346],
        [-2.1596, -0.0853,  0.7232,  0.1941, -0.0789],
        [-2.0329,  1.1031,  0.6869, -0.5042,  0.9895],
        [-0.1884,  0.2858, -1.5831,  0.9917, -0.8356]])
>>> y = torch.rfft(x, 2, onesided=True)
>>> torch.irfft(y, 2, onesided=True, signal_sizes=x.shape)  # recover x
tensor([[-0.8992,  0.6117, -1.6091, -0.4155, -0.8346],
        [-2.1596, -0.0853,  0.7232,  0.1941, -0.0789],
        [-2.0329,  1.1031,  0.6869, -0.5042,  0.9895],
        [-0.1884,  0.2858, -1.5831,  0.9917, -0.8356]])

torch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=True)[source]

Short-time Fourier transform (STFT).

Ignoring the optional batch dimension, this method computes the following expression:

$$X[m, \omega] = \sum_{k=0}^{\text{win\_length}-1} \text{window}[k]\ \text{input}[m \times \text{hop\_length} + k]\ \exp\left(-j\, \frac{2\pi \cdot \omega k}{\text{win\_length}}\right),$$

where $m$ is the index of the sliding window, and $\omega$ is the frequency, with $0 \leq \omega < \text{n\_fft}$ (when onesided has its default value True, only the non-redundant frequencies are returned; see below). Additionally:

  • input must be either a 1-D time sequence or a 2-D batch of time sequences.

  • If hop_length is None (default), it is treated as equal to floor(n_fft / 4).

  • If win_length is None (default), it is treated as equal to n_fft.

  • window can be a 1-D tensor of size win_length, e.g., from torch.hann_window(). If window is None (default), it is treated as if having $1$ everywhere in the window. If $\text{win\_length} < \text{n\_fft}$, window will be padded on both sides to length n_fft before being applied.

  • If center is True (default), input will be padded on both sides so that the $t$-th frame is centered at time $t \times \text{hop\_length}$. Otherwise, the $t$-th frame begins at time $t \times \text{hop\_length}$.

  • pad_mode determines the padding method used on input when center is True. See torch.nn.functional.pad() for all available options. Default is "reflect".

  • If onesided is True (default), only values for $\omega$ in $\left[0, 1, 2, \dots, \left\lfloor \frac{\text{n\_fft}}{2} \right\rfloor + 1\right]$ are returned, because the real-to-complex Fourier transform satisfies the conjugate symmetry, i.e., $X[m, \omega] = X[m, \text{n\_fft} - \omega]^*$.

  • If normalized is True (default is False), the function returns the normalized STFT results, i.e., multiplied by $(\text{frame\_length})^{-0.5}$.

Returns the real and the imaginary parts together as one tensor of size $(* \times N \times T \times 2)$, where $*$ is the optional batch size of input, $N$ is the number of frequencies where the STFT is applied, $T$ is the total number of frames used, and each pair in the last dimension represents a complex number as the real part and the imaginary part.

Warning

This function changed its signature at version 0.4.1. Calling it with the previous signature may cause an error or return an incorrect result.

Parameters

  • input (Tensor) – the input tensor

  • n_fft (int) – size of Fourier transform

  • hop_length (int, optional) – the distance between neighboring sliding window frames. Default: None (treated as equal to floor(n_fft / 4))

  • win_length (int, optional) – the size of window frame and STFT filter. Default: None (treated as equal to n_fft)

  • window (Tensor, optional) – the optional window function. Default: None (treated as a window of all ones)

  • center (bool, optional) – whether to pad input on both sides so that the $t$-th frame is centered at time $t \times \text{hop\_length}$. Default: True

  • pad_mode (string, optional) – controls the padding method used when center is True. Default: "reflect"

  • normalized (bool, optional) – controls whether to return the normalized STFT results. Default: False

  • onesided (bool, optional) – controls whether to return half of the results to avoid redundancy. Default: True

Returns

A tensor containing the STFT result with shape described above

Return type

Tensor
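
Example (a minimal sketch; the hop and window sizes are illustrative, and the shape follows from the defaults above: $N = \lfloor 256/2 \rfloor + 1 = 129$ onesided frequencies and $T = 16$ centered frames for a length-1000 input):

>>> import torch
>>> signal = torch.randn(1000)                    # 1-D time sequence
>>> window = torch.hann_window(256)               # optional window of size win_length
>>> spec = torch.stft(signal, n_fft=256, hop_length=64, window=window)
>>> spec.shape                                    # (N, T, 2): real and imaginary parts
torch.Size([129, 16, 2])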

torch.bartlett_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Bartlett window function.

$$w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N-1} & \text{if } 0 \leq n \leq \frac{N-1}{2} \\ 2 - \frac{2n}{N-1} & \text{if } \frac{N-1}{2} < n < N \end{cases},$$

where $N$ is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window, making it ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the $N$ in the above formula is in fact $\text{window\_length} + 1$. Also, we always have torch.bartlett_window(L, periodic=True) equal to torch.bartlett_window(L + 1, periodic=False)[:-1].

Note

If window_length $= 1$, the returned window contains a single value 1.

Parameters

  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as a periodic function. If False, returns a symmetric window.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size $(\text{window\_length},)$ containing the window

Return type

Tensor
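
A quick check of the periodic/symmetric identity stated above (a sketch; the same identity holds for the other window functions on this page):

>>> import torch
>>> torch.allclose(torch.bartlett_window(8, periodic=True),
...                torch.bartlett_window(9, periodic=False)[:-1])
True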

torch.blackman_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Blackman window function.

$$w[n] = 0.42 - 0.5 \cos\left(\frac{2\pi n}{N-1}\right) + 0.08 \cos\left(\frac{4\pi n}{N-1}\right)$$

where $N$ is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window, making it ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the $N$ in the above formula is in fact $\text{window\_length} + 1$. Also, we always have torch.blackman_window(L, periodic=True) equal to torch.blackman_window(L + 1, periodic=False)[:-1].

Note

If window_length $= 1$, the returned window contains a single value 1.

Parameters

  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as a periodic function. If False, returns a symmetric window.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size $(\text{window\_length},)$ containing the window

Return type

Tensor

torch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Hamming window function.

$$w[n] = \alpha - \beta\ \cos\left(\frac{2\pi n}{N-1}\right),$$

where $N$ is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window, making it ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the $N$ in the above formula is in fact $\text{window\_length} + 1$. Also, we always have torch.hamming_window(L, periodic=True) equal to torch.hamming_window(L + 1, periodic=False)[:-1].

Note

If window_length $= 1$, the returned window contains a single value 1.

Note

This is a generalized version of torch.hann_window().

Parameters

  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as a periodic function. If False, returns a symmetric window.

  • alpha (float, optional) – The coefficient $\alpha$ in the equation above

  • beta (float, optional) – The coefficient $\beta$ in the equation above

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size $(\text{window\_length},)$ containing the window

Return type

Tensor
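
Since torch.hann_window() is the special case $\alpha = \beta = 0.5$ (compare the two formulas), a quick check of the generalization noted above (a sketch):

>>> import torch
>>> torch.allclose(torch.hamming_window(16, alpha=0.5, beta=0.5),
...                torch.hann_window(16))
True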

torch.hann_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Hann window function.

$$w[n] = \frac{1}{2}\left[1 - \cos\left(\frac{2\pi n}{N-1}\right)\right] = \sin^2\left(\frac{\pi n}{N-1}\right),$$

where $N$ is the full window size.

The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window, making it ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is True, the $N$ in the above formula is in fact $\text{window\_length} + 1$. Also, we always have torch.hann_window(L, periodic=True) equal to torch.hann_window(L + 1, periodic=False)[:-1].

Note

If window_length $= 1$, the returned window contains a single value 1.

Parameters

  • window_length (int) – the size of returned window

  • periodic (bool, optional) – If True, returns a window to be used as a periodic function. If False, returns a symmetric window.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.

  • layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Returns

A 1-D tensor of size $(\text{window\_length},)$ containing the window

Return type

Tensor
