Understanding x = x.view(x.size(0), -1) in PyTorch

The following line appears frequently in PyTorch CNN code:

x.view(x.size(0), -1)

First of all, the view() function in PyTorch is used to change the shape of a tensor, for example turning a tensor with 2 rows and 3 columns into one with 1 row and 6 columns. Passing -1 for a dimension tells view() to infer its size automatically from the number of remaining elements:

import torch

a = torch.zeros(2, 3)          # a 2-row, 3-column tensor
print(a)
# tensor([[0., 0., 0.],
#         [0., 0., 0.]])

print(a.view(1, -1))           # -1 is inferred as 6 (all 6 elements in a single row)
# tensor([[0., 0., 0., 0., 0., 0.]])
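
As a further illustration (a small sketch added here, not from the original post), -1 can also collapse several trailing dimensions at once:

import torch

b = torch.arange(24).view(2, 3, 4)   # shape (2, 3, 4)
print(b.view(2, -1).shape)           # torch.Size([2, 12]); -1 is inferred as 3*4
print(b.view(-1).shape)              # torch.Size([24]); flatten everything into one dimension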

In a CNN, the output of the convolution and pooling layers has to be fed into a fully connected layer, so the multi-dimensional feature tensor must first be flattened into a single vector per sample. x.view(x.size(0), -1) does exactly this:

def forward(self, x):
    # feature-extraction stages defined in __init__
    x = self.pre(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)

    x = F.avg_pool2d(x, 7)       # 7x7 average pooling; a 7x7 feature map becomes (batch, channels, 1, 1)
    x = x.view(x.size(0), -1)    # flatten to (batch, channels * 1 * 1)
    return self.fc(x)            # fully connected layer

The tensor produced by the convolution/pooling layers has shape (batchsize, channels, x, y), and x.size(0) is the batch size. x.view(x.size(0), -1) therefore reshapes the tensor to (batchsize, channels*x*y): the (channels, x, y) part is flattened into a single vector per sample, which can then be connected to the fc layer.
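
As a concrete sketch (the shapes below are made up for illustration: a batch of 8 feature maps with 64 channels and spatial size 7x7):

import torch
import torch.nn.functional as F

x = torch.randn(8, 64, 7, 7)     # (batchsize, channels, x, y)
x = F.avg_pool2d(x, 7)           # -> (8, 64, 1, 1)
x = x.view(x.size(0), -1)        # -> (8, 64), one flat vector per sample
print(x.shape)                   # torch.Size([8, 64])
# this flattened tensor can now be passed to e.g. nn.Linear(64, num_classes)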