Reading "Linking Convolutional Neural Networks with Graph Convolutional Networks: Application in Pulmonary"

This post walks through automatic artery-vein separation achieved by combining a convolutional neural network (CNN) with a graph convolutional network (GCN). During training, a dedicated data loader batches the data and exploits the graph structure. The model uses CNN layers for feature extraction, followed by a GCN layer that fuses neighborhood information. Training parameters such as learning rate, batch size, and optimizer are configured, and the model is updated iteratively on the training data.

2019

Summary

Separating arteries and veins with CNN + GCN
——————————————————————————

imgDirPath = './data/Orient/DemoRightUpper05CTPA'
case_list_train = ['/right/']

# Weight path or "none"
weightFile = "none"
# Loss visualization
# ON or OFF
tensorboard = False
tensorboard_logsDir = "backup"

# model save path
backupDir = "backup"
logFile = "backup/log.txt"

max_epochs = 6000*12
save_interval = 100*12
gpus = [0]
# multithreading
num_workers = 2
batch_size = 1  # batch_size was always set to 1 in original keras implementation

# Solver params
# adam or sgd
solver = "adam"
steps = [10000]
scales = [0.1]
learning_rate = 3e-4
momentum = 0.9
decay = 5e-4
betas = (0.9, 0.98)

# Net params
in_channels = 1
patch_sz = (32, 32, 5)
model_name = 'AV_CNN_GCN'
# GCN setting
num_nodes = 128  # called batch_size in the original Keras code
Num_classes = 2
Num_neighbors = 2
dp = 0.5
train_loader = DataLoader(
        listDataset(config.imgDirPath, config.case_list_train, Num_nodes=config.num_nodes, Num_neighbour=config.Num_neighbors),
        batch_size=config.batch_size, shuffle=True, drop_last=True)

Constructing the training dataset
The drop_last option discards whatever is left over once full batch_size batches have been taken (which seems wasteful).
Reading patch.npy gives patches of shape (1649, 32, 32, 5).
Reading Label.npy gives labels of shape (1649, 2).
Reading ind.npy gives 1649 inds.
Reading graph.npy gives an adjacency-list dictionary mapping each of the 1649 nodes to its neighbors (so the graph structure, i.e. the adjacency relations, is prebuilt; the CNN is presumably only there to extract node features).
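
A minimal sketch of how such a dataset class might load these files. The file names, shapes, and the listDataset / graphDataset / self.traingraphes names come from the post; the directory layout and everything else here is assumed.

import os
import numpy as np
from torch.utils.data import Dataset

class listDataset(Dataset):
    def __init__(self, imgDirPath, case_list, Num_nodes=128, Num_neighbour=2):
        self.traingraphes = []
        for case in case_list:
            base = imgDirPath + case  # assumed layout, e.g. imgDirPath + '/right/'
            patches = np.load(os.path.join(base, 'patch.npy'))   # (1649, 32, 32, 5)
            labels = np.load(os.path.join(base, 'Label.npy'))    # (1649, 2)
            inds = np.load(os.path.join(base, 'ind.npy'))        # (1649,)
            graph = np.load(os.path.join(base, 'graph.npy'),
                            allow_pickle=True).item()            # {node id: [neighbor ids]}
            self.traingraphes.append(
                graphDataset(patches, labels, inds, graph, Num_nodes, Num_neighbour))

    def __len__(self):
        return len(self.traingraphes)  # one graph per case, hence a one-element list here

    def __getitem__(self, index):
        graph = self.traingraphes[index]
        return graph.next_node()  # X_batch, Y_batch, NX_batch (see the walkthrough below)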

Extracting data each iteration

graph = self.traingraphes[index]

(But self.traingraphes is a list with only one element?)
Then next_node is called on graph, an instance of the custom graphDataset class.

start = self._index_in_epoch  # initialized to 0; presumably the starting position within the epoch

If neighbors are considered → and if the hyperparameter node count 128 does not exceed the true node count 1649: (1) the start position is advanced by 128; (2) an end position 128 further on is taken, and the patches between start and end become images (128, 32, 32, 5), the labels (128, 2), and the corresponding inds become ind_node (128);
(3) then loop over all 128 nodes:
for every neighbor x of each node k, record its index xtup within the node list self._inds. If that index list is no shorter than the required number of neighbors, shuffle it, take the first two as L_indx, and put the nodes at those two indices into N_image. The result is NX_batch of shape (128, 2, 32, 32, 5), returned together with X_batch = images (128, 32, 32, 5) and Y_batch = labels (128, 2) (see the next_node sketch below).

Y_batch is then replaced by the index of its column-wise maximum, giving a length-128 vector of class indices.
X_batch is reshaped to [128, 1, 5, 32, 32]: nodes, channels, depth, width, height.
NX_batch is reshaped to [128, 2, 1, 5, 32, 32]: nodes, neighbors, channels, depth, width, height.
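
A minimal sketch of what graphDataset.next_node might look like under the description above. The attribute names self._index_in_epoch / self._inds and the variable names xtup / L_indx come from the post; the epoch bookkeeping and the exact index lookup are assumptions.

import numpy as np

class graphDataset:
    def __init__(self, patches, labels, inds, graph, num_nodes=128, num_neighbors=2):
        self._patches, self._labels, self._inds, self._graph = patches, labels, inds, graph
        self._num_nodes, self._num_neighbors = num_nodes, num_neighbors
        self._index_in_epoch = 0

    def next_node(self):
        start = self._index_in_epoch              # starting offset within the epoch
        self._index_in_epoch += self._num_nodes   # (1) advance the start by 128
        end = self._index_in_epoch                # (2) window end
        X_batch = self._patches[start:end]        # (128, 32, 32, 5)
        Y_batch = self._labels[start:end]         # (128, 2)
        ind_node = self._inds[start:end]          # (128,)

        NX_batch = np.zeros((self._num_nodes, self._num_neighbors) + X_batch.shape[1:])
        for k in range(self._num_nodes):          # (3) loop over the 128 nodes
            # positions (within self._inds) of node k's neighbors
            xtup = [int(np.where(self._inds == x)[0][0])
                    for x in self._graph[ind_node[k]] if x in self._inds]
            if len(xtup) >= self._num_neighbors:
                np.random.shuffle(xtup)
                L_indx = xtup[:self._num_neighbors]   # two randomly chosen neighbors
                NX_batch[k] = self._patches[L_indx]   # (2, 32, 32, 5)

        # the argmax over Y_batch and the reshapes described above happen outside this function
        return X_batch, Y_batch, NX_batch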

Inputs:
X_batch[128, 1, 5, 32, 32]
Y_batch[128]
NX_batch[128, 2, 1, 5, 32, 32]
Av_CNN_GCN_model(
  (Phi_fun): phi_fun(
    # first same_padding_3d(5,7,7), giving out [128, 1, 9, 38, 38]
    (Conv_1): ConvLayer_BN(
      (conv3d): Conv3d(1, 32, kernel_size=(5, 7, 7), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (leakyrelu): LeakyReLU(negative_slope=0.1, inplace=True)
    )
    #out[128, 32, 5, 32, 32]
    # same_padding_3d(5,7,7), giving out [128, 32, 9, 38, 38]
    (Conv_2): ConvLayer_BN(
      (conv3d): Conv3d(32, 64, kernel_size=(5, 7, 7), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (leakyrelu): LeakyReLU(negative_slope=0.1, inplace=True)
    )
    #out[128, 64, 5, 32, 32]
    (Mp_3): MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=0, dilation=1, ceil_mode=False)
    #out[128, 64, 2, 16, 16]
    (Dp): Dropout3d(p=0.5, inplace=False)
    # same_padding_3d(2,5,5), giving out [128, 64, 3, 20, 20]
    (Conv_5): ConvLayer_BN(
      (conv3d): Conv3d(64, 128, kernel_size=(2, 5, 5), stride=(1, 1, 1), bias=False)
      (bn): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (leakyrelu): LeakyReLU(negative_slope=0.1, inplace=True)
    )
    #out[128, 128, 2, 16, 16]
    (Dp): Dropout3d(p=0.5, inplace=False)
    #view out[128, 65536]
    (Dp): Dropout3d(p=0.5, inplace=False)
    (Fc_7): Linear(in_features=65536, out_features=50, bias=True)
    (Fc_8): Linear(in_features=50, out_features=100, bias=True)
    (Fc_9): Linear(in_features=100, out_features=10, bias=True)
    #out[128, 10]
  )
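  # A guess at what same_padding_3d does, inferred from the shape comments above:
  # pad (D, H, W) by kernel_size - 1 in total so that the following stride-1 "valid"
  # Conv3d keeps the spatial size, e.g. (5,7,7): [128, 1, 5, 32, 32] -> [128, 1, 9, 38, 38].
  # Only a sketch; the real helper may differ.
  #   def same_padding_3d(x, kernel=(5, 7, 7)):
  #       kd, kh, kw = kernel
  #       pd, ph, pw = kd - 1, kh - 1, kw - 1
  #       # F.pad pads from the last dim inwards: (W_left, W_right, H_top, H_bottom, D_front, D_back)
  #       return F.pad(x, (pw // 2, pw - pw // 2, ph // 2, ph - ph // 2, pd // 2, pd - pd // 2))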
  # NX_batch: each of the two neighbors is run through Phi_fun as well
  # and concatenated into the neighborhood feature NX [128, 2, 10]
  # then NX is detached from gradient computation?!
  (gcn_layer): GCNlayer()
      dif = X - Nx
      # dif and the parameter w [10, 3] go through bdot to give mu_x [128, 2, 3]
      def bdot(a, b):
          # batched matmul: broadcast the shared matrix b over the batch dimension of a
          B = a.shape[0]
          b = b[None, :, :]
          b = b.repeat(B, 1, 1)
          return torch.bmm(a, b)
      # mu_x is sum-aggregated to [128, 3], then tiled to [128, 6, 3]
      # learnable parameters: mu [6, 3], sigma [6, 3]
      dif_mu = torch.sum(-0.5 * torch.mul(mu_x - self.mu, mu_x - self.mu) / (1e-14 + torch.mul(self.sigma, self.sigma)), dim=-1)
      weight = torch.exp(dif_mu)
      weight = weight / (1e-14 + torch.sum(weight, axis=-1, keepdims=True))
      # weight has shape [128, 1, 6]
      # Nx is summed again and tiled to [128, 6, 10]
      X_merge = torch.bmm(weight, Nx).squeeze()  # with size [node_num*batchsz, feature_len]
      H = (X_merge + X) / (1 + torch.sum(weight, axis=-1))
      x_out = torch.mm(H, self.theta)
      # [128, 2]
)
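
To make the shape bookkeeping in the GCNlayer above easier to follow, here is a minimal, self-contained sketch of a forward pass along those lines: edge features from node-neighbor differences, Gaussian-mixture weights over 6 components, weighted neighbor aggregation, and a final linear classifier. The parameter names w, mu, sigma, theta and the shapes come from the snippet above; the module signature and initialization are assumptions, not the original implementation.

import torch
import torch.nn as nn

class GCNlayer(nn.Module):
    def __init__(self, feature_len=10, edge_dim=3, num_components=6, num_classes=2):
        super().__init__()
        self.w = nn.Parameter(torch.randn(feature_len, edge_dim))         # [10, 3]
        self.mu = nn.Parameter(torch.randn(num_components, edge_dim))     # [6, 3]
        self.sigma = nn.Parameter(torch.ones(num_components, edge_dim))   # [6, 3]
        self.theta = nn.Parameter(torch.randn(feature_len, num_classes))  # [10, 2]
        self.num_components = num_components

    @staticmethod
    def bdot(a, b):
        # batched matmul of a [B, m, n] with one shared matrix b [n, p] -> [B, m, p]
        return torch.bmm(a, b[None].repeat(a.shape[0], 1, 1))

    def forward(self, X, Nx):
        # X: [128, 10] node features from Phi_fun, Nx: [128, 2, 10] neighbor features
        dif = X.unsqueeze(1) - Nx                                    # [128, 2, 10]
        mu_x = self.bdot(dif, self.w)                                # [128, 2, 3] edge features
        mu_x = mu_x.sum(dim=1, keepdim=True)                         # [128, 1, 3]
        mu_x = mu_x.repeat(1, self.num_components, 1)                # [128, 6, 3]
        # Gaussian-mixture responsibilities over the 6 components
        dif_mu = torch.sum(-0.5 * (mu_x - self.mu) ** 2
                           / (1e-14 + self.sigma ** 2), dim=-1)      # [128, 6]
        weight = torch.exp(dif_mu)
        weight = weight / (1e-14 + weight.sum(dim=-1, keepdim=True))
        weight = weight.unsqueeze(1)                                 # [128, 1, 6]
        # aggregate the (summed, tiled) neighbor features with the mixture weights
        Nx_sum = Nx.sum(dim=1, keepdim=True).repeat(1, self.num_components, 1)  # [128, 6, 10]
        X_merge = torch.bmm(weight, Nx_sum).squeeze(1)               # [128, 10]
        H = (X_merge + X) / (1 + weight.sum(dim=-1))                 # [128, 10]
        return torch.mm(H, self.theta)                               # [128, 2] class scores

In use this would be called as x_out = gcn_layer(X, NX), with X = Phi_fun(X_batch) and NX stacked from Phi_fun applied to the two neighbor patch stacks; note that, as remarked above, the original code appears to detach NX before this step.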