ConvLSTM Learning Notes (01)

论文:Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting

Paper link

Code link

The contents of convlstm.py from the repository are as follows:

import torch.nn as nn
import torch


class ConvLSTMCell(nn.Module):

    def __init__(self, input_dim, hidden_dim, kernel_size, bias):
        """
        Initialize ConvLSTM cell.

        Parameters
        ----------
        input_dim: int
            Number of channels of input tensor.
        hidden_dim: int
            Number of channels of hidden state.
        kernel_size: (int, int)
            Size of the convolutional kernel.
        bias: bool
            Whether or not to add the bias.
        """

        super(ConvLSTMCell, self).__init__()

        self.input_dim = input_dim
        self.hidden_dim = hidden_dim

        self.kernel_size = kernel_size
        self.padding = kernel_size[0] // 2, kernel_size[1] // 2
        self.bias = bias

        self.conv = nn.Conv2d(in_channels=self.input_dim + self.hidden_dim,
                              out_channels=4 * self.hidden_dim,
                              kernel_size=self.kernel_size,
                              padding=self.padding,
                              bias=self.bias)

    def forward(self, input_tensor, cur_state):
        h_cur, c_cur = cur_state

        combined = torch.cat([input_tensor, h_cur], dim=1)  # concatenate along channel axis

        combined_conv = self.conv(combined)
        # Split the joint convolution output into the four gate pre-activations.
        cc_i, cc_f, cc_o, cc_g = torch.split(combined_conv, self.hidden_dim, dim=1)
        i = torch.sigmoid(cc_i)  # input gate
        f = torch.sigmoid(cc_f)  # forget gate
        o = torch.sigmoid(cc_o)  # output gate
        g = torch.tanh(cc_g)     # candidate cell state

        c_next = f * c_cur + i * g       # new cell state
        h_next = o * torch.tanh(c_next)  # new hidden state

        return h_next, c_next

    def init_hidden(self, batch_size, image_size):
        height, width = image_size
        return (torch.zeros(batch_size, self.hidden_dim, height, width, device=self.conv.weight.device),
                torch.zeros(batch_size, self.hidden_dim, height, width, device=self.conv.weight.device))


class ConvLSTM(nn.Module):
    """

    Parameters:
        input_dim: Number of channels in input
        hidden_dim: Number of hidden channels
        kernel_size: Size of kernel in convolutions
        num_layers: Number of LSTM layers stacked on each other
        batch_first: Whether or not dimension 0 is the batch or not
        bias: Bias or no bias in Convolution
        return_all_layers: Return the list of computations for all layers
        Note: Will do same padding.

    Input:
        A tensor of size B, T, C, H, W or T, B, C, H, W
    Output:
        A tuple of two lists of length num_layers (or length 1 if return_all_layers is False).
            0 - layer_output_list is the list of lists of length T of each output
            1 - last_state_list is the list of last states
                    each element of the list is a tuple (h, c) for hidden state and memory
    Example:
        >> x = torch.rand((32, 10, 64, 128, 128))
        >> convlstm = ConvLSTM(64, 16, 3, 1, True, True, False)
        >> _, last_states = convlstm(x)
        >> h = last_states[0][0]  # 0 for layer index, 0 for h index
    """

    def __init__(self, input_dim, hidden_dim, kernel_size, num_layers,
                 batch_first=False, bias=True, return_all_layers=False):
        super(ConvLSTM, self).__init__()

        self._check_kernel_size_consistency(kernel_size)

        # Make sure that both `kernel_size` and `hidden_dim` are lists having len == num_layers
        kernel_size = self._extend_for_multilayer(kernel_size, num_layers)
        hidden_dim = self._extend_for_multilayer(hidden_dim, num_layers)
        if not len(kernel_size) == len(hidden_dim) == num_layers:
            raise ValueError('Inconsistent list length.')

        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.kernel_size = kernel_size
        self.num_layers = num_layers
        self.batch_first = batch_first
        self.bias = bias
        self.return_all_layers = return_all_layers

        cell_list = []
        for i in range(0, self.num_layers):
            cur_input_dim = self.input_dim if i == 0 else self.hidden_dim[i - 1]

            cell_list.append(ConvLSTMCell(input_dim=cur_input_dim,
                                          hidden_dim=self.hidden_dim[i],
                                          kernel_size=self.kernel_size[i],
                                          bias=self.bias))

        self.cell_list = nn.ModuleList(cell_list)

    def forward(self, input_tensor, hidden_state=None):
        """

        Parameters
        ----------
        input_tensor: todo
            5-D Tensor either of shape (t, b, c, h, w) or (b, t, c, h, w)
        hidden_state: todo
            None. todo implement stateful

        Returns
        -------
        last_state_list, layer_output
        """
        if not self.batch_first:
            # (t, b, c, h, w) -> (b, t, c, h, w)
            input_tensor = input_tensor.permute(1, 0, 2, 3, 4)

        b, _, _, h, w = input_tensor.size()

        # Implement stateful ConvLSTM
        if hidden_state is not None:
            raise NotImplementedError()
        else:
            # Since the init is done in forward. Can send image size here
            hidden_state = self._init_hidden(batch_size=b,
                                             image_size=(h, w))

        layer_output_list = []
        last_state_list = []

        seq_len = input_tensor.size(1)
        cur_layer_input = input_tensor

        for layer_idx in range(self.num_layers):

            h, c = hidden_state[layer_idx]
            output_inner = []
            for t in range(seq_len):
                h, c = self.cell_list[layer_idx](input_tensor=cur_layer_input[:, t, :, :, :],
                                                 cur_state=[h, c])
                output_inner.append(h)

            layer_output = torch.stack(output_inner, dim=1)
            cur_layer_input = layer_output

            layer_output_list.append(layer_output)
            last_state_list.append([h, c])

        if not self.return_all_layers:
            layer_output_list = layer_output_list[-1:]
            last_state_list = last_state_list[-1:]

        return layer_output_list, last_state_list

    def _init_hidden(self, batch_size, image_size):
        init_states = []
        for i in range(self.num_layers):
            init_states.append(self.cell_list[i].init_hidden(batch_size, image_size))
        return init_states

    @staticmethod
    def _check_kernel_size_consistency(kernel_size):
        if not (isinstance(kernel_size, tuple) or
                (isinstance(kernel_size, list) and all([isinstance(elem, tuple) for elem in kernel_size]))):
            raise ValueError('`kernel_size` must be tuple or list of tuples')

    @staticmethod
    def _extend_for_multilayer(param, num_layers):
        if not isinstance(param, list):
            param = [param] * num_layers
        return param
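For reference, the gate computations in ConvLSTMCell.forward implement the ConvLSTM equations below, where $*$ denotes convolution and $\circ$ the Hadamard product. Note that this implementation omits the peephole terms ($W_{ci} \circ c_{t-1}$, etc.) that appear in the original paper:

$$
\begin{aligned}
i_t &= \sigma(W_{xi} * x_t + W_{hi} * h_{t-1} + b_i) \\
f_t &= \sigma(W_{xf} * x_t + W_{hf} * h_{t-1} + b_f) \\
o_t &= \sigma(W_{xo} * x_t + W_{ho} * h_{t-1} + b_o) \\
g_t &= \tanh(W_{xg} * x_t + W_{hg} * h_{t-1} + b_g) \\
c_t &= f_t \circ c_{t-1} + i_t \circ g_t \\
h_t &= o_t \circ \tanh(c_t)
\end{aligned}
$$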

The official repository gives an example model definition:

model = ConvLSTM(input_dim=channels,
                 hidden_dim=[64, 64, 128],
                 kernel_size=(3, 3),
                 num_layers=3,
                 batch_first=True,
                 bias=True,
                 return_all_layers=False)

batch_first:

  • If batch_first=True, the input tensor has shape (b, t, c, h, w), where b is the batch size, t is the number of time steps, c is the number of input channels, and h and w are the height and width of the input feature maps.

  • If batch_first=False, the input tensor has shape (t, b, c, h, w); see the sketch after this list.
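A minimal sketch of converting between the two layouts (the tensor sizes here are illustrative assumptions):

import torch

x_bt = torch.rand(4, 5, 3, 32, 32)    # (b, t, c, h, w), for batch_first=True
x_tb = x_bt.permute(1, 0, 2, 3, 4)    # (t, b, c, h, w), for batch_first=False
print(x_tb.shape)                     # torch.Size([5, 4, 3, 32, 32])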

return_all_layers:

If return_all_layers=False, layer_output_list contains only the output of the last layer, with shape (b, t, hidden_dim_last_layer, h, w). To predict one future frame, take, for each batch element, the slice of this output tensor at the final time step t - 1; that slice is the prediction for the frame following the current input.

If return_all_layers=True, the prediction is still taken from the last layer's output, in the same way as in the return_all_layers=False case; a sketch follows.
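A hedged sketch of extracting a next-frame prediction with the example model above, assuming channels = 3; the 1x1 convolution head is a hypothetical addition, since the 128 hidden channels of the last layer still have to be mapped back to image channels:

channels = 3
head = nn.Conv2d(in_channels=128, out_channels=channels, kernel_size=1)  # hypothetical read-out head

x = torch.rand(4, 5, channels, 32, 32)   # (b, t, c, h, w)
layer_outputs, _ = model(x)
last_layer = layer_outputs[0]            # (b, t, 128, h, w): last layer only
h_last = last_layer[:, -1]               # hidden state at the final time step
next_frame = head(h_last)                # (b, channels, h, w): predicted future frame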

Below, the model is tested on a randomly generated tensor with the settings batch_first=True and return_all_layers=False.

if __name__ == '__main__':
    channels = 3

    model = ConvLSTM(input_dim=channels,
                     hidden_dim=[64, 64, 128],
                     kernel_size=(3, 3),
                     num_layers=3,
                     batch_first=True,
                     bias=True,
                     return_all_layers=False)

    input_tensor = torch.rand(4, 5, 3, 32, 32)  # (b, t, c, h, w)

    out = model(input_tensor)

For the output out:

out is a tuple of length 2.
out[0] is layer_output_list: a list of length 1 holding a tensor of shape [batch, time_steps, 128, 32, 32], where 128 is the number of hidden channels of the last layer.
out[1] is last_state_list: a list of length 1 holding a pair [h, c] of tensors, each of shape [batch, 128, 32, 32]; the first is the last layer's hidden state (h) at the final time step, and the second is its cell state (c).
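A quick shape check of the two outputs, continuing the test above:

    layer_output_list, last_state_list = out
    print(layer_output_list[0].shape)    # torch.Size([4, 5, 128, 32, 32])
    h, c = last_state_list[0]
    print(h.shape, c.shape)              # both torch.Size([4, 128, 32, 32])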

Concept summary:

return_all_layers=False: layer_output_list contains only the output of the last ConvLSTM layer; with return_all_layers=True, the list contains the outputs of all layers.
layer_output_list: stores the model's output for the input sequence, i.e. the feature representation of each batch element at every time step (of the last layer, or of all layers).
last_state_list: contains each layer's final state at the last time step of the sequence, i.e. the final hidden state (h) and cell state (c). These states could serve as the initial state for the next sequence, preserving long-range dependencies (note that this implementation currently raises NotImplementedError when a hidden_state is passed in). For frame-prediction tasks, last_state_list does not directly provide the predicted frame.
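In this implementation, the final hidden state in last_state_list coincides with the last time step of the corresponding layer output, which can be verified directly (continuing the test above):

    # h from last_state_list equals the final time-step slice of the layer output
    assert torch.equal(last_state_list[0][0], layer_output_list[0][:, -1])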