Computer Vision
ShuaS2020
Don't despair; you will slowly grow stronger.
Convolutional Neural Networks: Defining VGG-16
The basic structure of the VGG-16 network:

    class VGG16(nn.Module):
        def __init__(self, num_classes):
            super(VGG16, self).__init__()
            self.feature = nn.Sequential(
                nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),
                …

Original · 2021-04-10 18:29:46 · 310 views · 0 comments
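The preview cuts off after the first convolution. A minimal, runnable sketch of the same pattern (only the first conv block plus a linear classifier is shown; the 3x32x32 input size and num_classes=10 are assumptions, not from the post):

```python
import torch
import torch.nn as nn

class VGG16(nn.Module):
    # sketch: the real VGG-16 stacks five conv blocks; only the first is shown
    def __init__(self, num_classes=10):
        super(VGG16, self).__init__()
        self.feature = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.feature(x)
        x = x.view(x.size(0), -1)   # flatten before the linear layer
        return self.classifier(x)

model = VGG16(num_classes=10)
out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```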
CIFAR-10: A Simple Convolutional Network (outputs: training time, average loss, train/test accuracy)

    import torch
    import torchvision
    import torch.nn.functional as F
    import torchvision.transforms as transforms
    import torch.nn as nn
    import torch.optim as optim
    from torch.utils.data.sampler import SubsetRandomSampler
    import time
    import math
    import numpy as n…

Original · 2021-04-08 19:06:16 · 856 views · 0 comments
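The post reports average loss and accuracy; that evaluation pattern can be sketched on stand-in random tensors (the tiny model and the fake dataset are assumptions; the real code loads torchvision.datasets.CIFAR10 and splits it with SubsetRandomSampler):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# stand-in data in CIFAR-10 shape (3x32x32 images, 10 classes)
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)

# a tiny assumed model standing in for the post's network
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()

# accumulate loss and correct predictions over the whole loader
total_loss, correct, n = 0.0, 0, 0
with torch.no_grad():
    for x, y in loader:
        logits = model(x)
        total_loss += criterion(logits, y).item() * y.size(0)
        correct += (logits.argmax(dim=1) == y).sum().item()
        n += y.size(0)
print(f"avg loss {total_loss / n:.4f}, accuracy {100.0 * correct / n:.1f}%")
```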
Recurrent Neural Networks: Processing with an Embedding Layer (hello -> ohlol)

    import torch
    from torchvision import transforms
    from torchvision import datasets
    from torch.utils.data import DataLoader
    import torch.nn.functional as F
    import torch.optim as optim
    from matplotlib import pyplot as plt
    import os
    import sys
    num_class=4
    inp…

Original · 2021-04-03 23:29:04 · 606 views · 3 comments
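The preview only shows num_class=4. A runnable sketch of the task it describes, mapping "hello" to "ohlol" with an Embedding layer feeding an RNN (the hidden size, layer count, learning rate, and epoch count here are assumptions, not taken from the post):

```python
import torch
import torch.nn as nn

num_class, embedding_size, hidden_size, num_layers = 4, 10, 8, 2
idx2char = ['e', 'h', 'l', 'o']
inputs = torch.LongTensor([[1, 0, 2, 2, 3]])   # "hello", shape (batch=1, seq_len=5)
labels = torch.LongTensor([3, 1, 2, 3, 2])     # "ohlol", one label per time step

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.emb = nn.Embedding(num_class, embedding_size)
        self.rnn = nn.RNN(embedding_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_class)

    def forward(self, x):
        hidden = torch.zeros(num_layers, x.size(0), hidden_size)
        x = self.emb(x)                          # (batch, seq_len, embedding_size)
        x, _ = self.rnn(x, hidden)               # (batch, seq_len, hidden_size)
        return self.fc(x).view(-1, num_class)    # (seq_len, num_class) for the loss

model = Model()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
for epoch in range(15):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
pred = model(inputs).argmax(dim=1)
print('predicted:', ''.join(idx2char[i] for i in pred))
```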
Convolutional Neural Networks: Defining a Residual Net

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    class ResidualBlock(nn.Module):  # residual block: combines layers so that z = x + y (x and y have the same shape)
        def __init__(self, channels):  # the input channel count is a parameter, supplied when the model is instantiated
            super(ResidualBlock, self).__init__()
            self.channels = chan…

Original · 2021-03-04 23:20:03 · 962 views · 1 comment
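A completed, runnable version of the block the preview starts. The two 3x3 convolutions keep both the channel count and the spatial size, so the input x can be added to the branch output (the exact layer count inside the block is an assumption where the preview is truncated):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    # z = x + F(x): same in/out channels and padding=1 keep shapes identical
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)   # the skip connection

block = ResidualBlock(16)
x = torch.randn(1, 16, 28, 28)
print(block(x).shape)  # torch.Size([1, 16, 28, 28])
```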
Convolutional Neural Networks: An MNIST Example with the Inception Model

    import torch
    import torch.nn as nn
    from torchvision import transforms
    from torchvision import datasets
    from torch.utils.data import DataLoader
    import torch.nn.functional as F
    import torch.optim as optim
    # 1. build the dataset
    batch_size=64
    transform=transforms.Compose(…

Repost · 2021-03-04 20:03:10 · 320 views · 1 comment
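The preview ends at transforms.Compose(. What that pipeline typically computes for MNIST can be written out in plain torch: ToTensor scales pixels to [0, 1], then Normalize standardizes them (0.1307 and 0.3081 are the mean/std values commonly used for MNIST; they are an assumption here, not read from the post):

```python
import torch

# hand-written equivalent of
# transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
def mnist_transform(pixels):           # pixels: uint8 tensor in [0, 255]
    x = pixels.float() / 255.0         # ToTensor: scale to [0, 1]
    return (x - 0.1307) / 0.3081       # Normalize: (x - mean) / std

img = torch.randint(0, 256, (1, 28, 28), dtype=torch.uint8)
out = mnist_transform(img)
print(out.shape)  # torch.Size([1, 28, 28])
```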
Convolutional Neural Networks: Defining the Inception Model

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    class Inception(nn.Module):
        def __init__(self, in_channels):  # the input channel count is a parameter, supplied when the model is instantiated
            super(Inception, self).__init__()
            self.branch11 = nn.Conv2d(in_channels, 16, kernel_size=1…

Original · 2021-03-04 19:09:10 · 825 views · 1 comment
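A runnable sketch of the four-branch block the preview starts. The branch channel counts (16/24/24/24, so 88 channels out) follow the common teaching version of GoogLeNet's Inception; the branch names beyond branch11 are illustrative, since the preview is truncated:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Inception(nn.Module):
    # four parallel branches, concatenated along the channel dim: 16+24+24+24 = 88
    def __init__(self, in_channels):
        super(Inception, self).__init__()
        self.branch11 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch55_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch55_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)
        self.branch33_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch33_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch33_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)
        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        b1 = self.branch11(x)
        b2 = self.branch55_2(self.branch55_1(x))
        b3 = self.branch33_3(self.branch33_2(self.branch33_1(x)))
        b4 = self.branch_pool(F.avg_pool2d(x, kernel_size=3, stride=1, padding=1))
        return torch.cat([b1, b2, b3, b4], dim=1)   # concat along channels

inc = Inception(10)
print(inc(torch.randn(1, 10, 28, 28)).shape)  # torch.Size([1, 88, 28, 28])
```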
Convolutional Neural Networks: MNIST in Practice + GPU

    import torch
    import torch.nn as nn
    from torchvision import transforms
    from torchvision import datasets
    from torch.utils.data import DataLoader
    import torch.nn.functional as F
    import torch.optim as optim
    # 1. build the dataset
    batch_size=64
    transform=transforms.Compose(…

Original · 2021-03-03 18:13:37 · 211 views · 1 comment
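The GPU part of such a post boils down to one pattern: pick a device, then move both the model and every batch to it before computing the loss. A minimal sketch on a random MNIST-shaped batch (the tiny model is an assumption; the real code loads MNIST via torchvision):

```python
import torch
import torch.nn as nn

# fall back to CPU when no GPU is present
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Conv2d(1, 10, kernel_size=5), nn.ReLU(),   # 28 -> 24
                      nn.MaxPool2d(2), nn.Flatten(),                # 24 -> 12
                      nn.Linear(10 * 12 * 12, 10)).to(device)

x = torch.randn(4, 1, 28, 28).to(device)      # each batch must be moved too
y = torch.randint(0, 10, (4,)).to(device)
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print(device, loss.item())
```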
Convolutional Neural Networks: The Max Pooling Layer

    import torch
    input=[1,2,3,4,
           6,7,8,9,
           1,2,3,4,
           6,7,8,9,]
    input=torch.Tensor(input).view(1,1,4,4)  # (b*c*h*w)
    maxpooling_layer=torch.nn.MaxPool2d(kernel_size=2,
                                        stride=2,)  # stride defaults to the kernel size
    output=maxpooli…

Original · 2021-03-02 21:46:33 · 719 views · 0 comments
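Completing the truncated snippet and running it: the 2x2 window takes the maximum over each non-overlapping block, so the 4x4 input becomes 2x2.

```python
import torch

input = torch.Tensor([1, 2, 3, 4,
                      6, 7, 8, 9,
                      1, 2, 3, 4,
                      6, 7, 8, 9]).view(1, 1, 4, 4)   # (b, c, h, w)
maxpooling_layer = torch.nn.MaxPool2d(kernel_size=2)   # stride defaults to kernel_size
output = maxpooling_layer(input)
print(output)  # tensor([[[[7., 9.], [7., 9.]]]])
```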
Convolutional Neural Networks: The padding Parameter of a Convolutional Layer

    import torch
    input=[1,2,3,4,5,
           6,7,8,9,9,
           1,2,3,4,5,
           6,7,8,9,9,
           1,2,3,4,5]
    # ToTensor
    input=torch.Tensor(input).view(1,1,5,5)  # (b*c*h*w)
    # define the convolutional layer
    conv_layer=torch.nn.Conv2d(in_channels=1,
                               out_channels=1,…

Original · 2021-03-02 19:56:48 · 1292 views · 0 comments
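Completing the truncated snippet: with a 3x3 kernel and padding=1, the 5x5 input keeps its 5x5 spatial size (output side = input - kernel + 2*padding + 1). Fixing the kernel weights by hand is an assumption, mirroring the common course version of this example, so the result is reproducible:

```python
import torch

input = torch.Tensor([1, 2, 3, 4, 5,
                      6, 7, 8, 9, 9,
                      1, 2, 3, 4, 5,
                      6, 7, 8, 9, 9,
                      1, 2, 3, 4, 5]).view(1, 1, 5, 5)   # (b, c, h, w)
conv_layer = torch.nn.Conv2d(in_channels=1, out_channels=1,
                             kernel_size=3, padding=1, bias=False)
# replace the random weights with a fixed 3x3 kernel
kernel = torch.Tensor([1, 2, 3, 4, 5, 6, 7, 8, 9]).view(1, 1, 3, 3)
conv_layer.weight.data = kernel
output = conv_layer(input)
print(output.shape)  # torch.Size([1, 1, 5, 5])
```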
Convolutional Neural Networks: Defining a Convolutional Layer

    import torch
    in_channels,out_channels=5,10  # in determines each kernel's channel count; out determines the number of kernels
    width,hight=100,100
    kernel_size=3  # kernel side length; kernels are usually odd-sided squares
    batch_size=1
    input=torch.randn(batch_size,  # random normal
                      in_channels,
                      hight,…

Original · 2021-03-02 17:43:24 · 585 views · 0 comments
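Completing the truncated snippet and checking the shapes: a 3x3 kernel without padding shrinks each spatial dimension by 2, and the layer's weight tensor is (out_channels, in_channels, kernel, kernel).

```python
import torch

in_channels, out_channels = 5, 10   # in: each kernel's channels; out: number of kernels
width, hight = 100, 100             # identifier spelling kept from the original post
kernel_size = 3
batch_size = 1

input = torch.randn(batch_size, in_channels, hight, width)   # random normal input
conv_layer = torch.nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size)
output = conv_layer(input)

print(input.shape)               # torch.Size([1, 5, 100, 100])
print(output.shape)              # torch.Size([1, 10, 98, 98])
print(conv_layer.weight.shape)   # torch.Size([10, 5, 3, 3])
```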
Multi-class Classification: MNIST in Practice

    import torch
    import torch.nn as nn
    from torchvision import transforms
    from torchvision import datasets
    from torch.utils.data import DataLoader
    import torch.nn.functional as F
    import torch.optim as optim
    # 1. build the dataset
    batch_size=64
    transform=transforms.Compose(…

Original · 2021-03-01 21:05:23 · 212 views · 1 comment
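The multi-class core of such a post, sketched on stand-in data (the linear model and random batch are assumptions; the real code trains on datasets.MNIST). The key point: CrossEntropyLoss applies log-softmax internally, so the model outputs raw logits, one per class.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # 28*28 pixels -> 10 classes
criterion = nn.CrossEntropyLoss()                          # expects raw logits
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

x = torch.randn(8, 1, 28, 28)          # stand-in MNIST-shaped batch
y = torch.randint(0, 10, (8,))         # integer class labels, not one-hot
for _ in range(3):                     # the usual train step, repeated
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item(), model(x).shape)
```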