Reproducing "Reasoning with Heterogeneous Graph Alignment for Video Question Answering"

TGIF is essentially the GIF dataset; for how to obtain the feats, vocabulary, and datasets, see https://github.com/fanchenyou/HME-VideoQA/tree/master/gif-qa

No module named 'colorlog'
pip install colorlog

No module named 'block'
pip install block.bootstrap.pytorch

ordinal not in range(128)
Fiddled with UTF encodings for ages without success; in the end I reverted everything and it ran anyway.

AttributeError: Can't get attribute '_init_fn' on <module '__main__' (built-in)>
Seems to be some kind of multiprocessing issue.
Fed up; giving up for now.
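This error usually means the DataLoader's worker_init_fn (here _init_fn) was defined inside another function, so the spawn-based worker processes (e.g. on Windows) cannot pickle it. A minimal sketch of the common workaround, assuming _init_fn is only used for seeding (the exact body in the repo may differ):

import numpy as np
import torch
from torch.utils.data import DataLoader

def _init_fn(worker_id):
    # defined at module level so DataLoader worker processes can pickle it
    np.random.seed((torch.initial_seed() + worker_id) % 2**32)

# train_dataloader = DataLoader(dataset, batch_size=128, num_workers=1, worker_init_fn=_init_fn)
# or simply set num_workers=0 to avoid spawning worker processes at all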
——————————————————————————
Even though it won't run, I might as well study the code anyway.
——————————————————————————
It runs now, just a bit slowly; time to dig into how it works.

+--------------+------------------+
|  Parameter   |      Value       |
+==============+==================+
| Ablation     | none             | options: ['none', 'gcn', 'global', 'local', 'only_local']
+--------------+------------------+
| Batch size   | 128              |
+--------------+------------------+
| Birnn        | 0                | whether to use a bidirectional RNN
+--------------+------------------+
| Change lr    | none             | whether to change the learning rate; the default none means no change
+--------------+------------------+
| Checkpoint   | Count_4.092.pth  | pretrained checkpoint stored under saved_models\MMModel
+--------------+------------------+
| Cycle beta   | 0.010            |
+--------------+------------------+
| Dropout      | 0.300            |
+--------------+------------------+
| Fusion type  | coattn           | options: ['none', 'coattn', 'single_visual', 'single_semantic', 'coconcat', 'cosiamese']
+--------------+------------------+
| Gcn layers   | 2                | +1 is added, i.e. 3 GCN layers by default
+--------------+------------------+
| Hidden size  | 512              |
+--------------+------------------+
| Lr           | 0.000            |
+--------------+------------------+
| Lr list      | [10, 20, 30, 40] |
+--------------+------------------+
| Max epoch    | 100              |
+--------------+------------------+
| Max n videos | 100000           |
+--------------+------------------+
| Model        | 7                |
+--------------+------------------+
| Momentum     | 0.900            |
+--------------+------------------+
| Num workers  | 1                |
+--------------+------------------+
| Prefetch     | none             | options: [none, nvidia, background]; the latter two use the nvidia_prefetcher and BackgroundGenerator classes
+--------------+------------------+
| Q max length | 35               |
+--------------+------------------+
| Rnn layers   | 1                |
+--------------+------------------+
| Save         | False            | whether to save the model; passing --save sets it to True
+--------------+------------------+
| Save adj     | False            | whether to save the adjacency matrix; passing --save_adj sets it to True
+--------------+------------------+
| Save path    | ./saved_models/  |
+--------------+------------------+
| Server       | 1080ti           | options: ['780', '1080ti', '1080']
+--------------+------------------+
| Task         | Count            | options: [Count, Action, FrameQA, Trans]
+--------------+------------------+
| Test         | False            | False means training; passing --test switches to testing
+--------------+------------------+
| Tf layers    | 1                |
+--------------+------------------+
| Two loss     | 0                |
+--------------+------------------+
| V max length | 80               |
+--------------+------------------+
| Val ratio    | 0.100            |
+--------------+------------------+
| Weight decay | 0                |
+--------------+------------------+

Supplementary settings

data_path    	|'/home/jp/data/tgif-qa/data'	|only set when server is '780'; so what about the default 1080ti? The entries below still depend on it.
feat_dir     	| data_path+'feats'
vc_dir       	| data_path+'Vocabulary'
df_dir       	| data_path+'dataset'
model_name   	| 'Count'						|i.e. the task
pin_memory   	| False
dataset      	| 'tgif_qa'
log          	| './logs'
val_epoch_step	| 1
two_loss	    | False							|True if the two_loss value above is greater than 0, otherwise False
birnn		    | False							|same rule as above
save_model_path | save_path + 'MMModel/'

Via data_utils' dataset module, two TGIFQA instances are created: full_dataset (length 26839) and test_dataset (length 3554); the only difference is that the former uses dataset_name='train' and the latter 'test'.
torch.utils.data.random_split then splits full_dataset into a training set of 24156 and a validation set of 2683, and torch.utils.data builds three DataLoader instances: train_dataloader, val_dataloader, and test_dataloader.
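A minimal sketch of this split/loader setup (the TGIFQA construction itself is omitted; batch size, num_workers and val_ratio follow the parameter table above, while the shuffle flags are assumptions):

from torch.utils.data import DataLoader, random_split

val_size = int(len(full_dataset) * 0.1)        # val_ratio = 0.100 -> 2683
train_size = len(full_dataset) - val_size      # -> 24156
train_dataset, val_dataset = random_split(full_dataset, [train_size, val_size])

train_dataloader = DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=1)
val_dataloader = DataLoader(val_dataset, batch_size=128, shuffle=False, num_workers=1)
test_dataloader = DataLoader(test_dataset, batch_size=128, shuffle=False, num_workers=1)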
Supplementary settings

resnet_input_size	| 2048
c3d_input_size		| 4096
text_embed_size		| 300					|train_dataset.dataset.GLOVE_EMBEDDING_SIZE
answer_vocab_size	| None
word_matrix		   	| ndarray of shape (2423, 300)	|train_dataset.dataset.word_matrix
voc_len			  	| 2423

VOCABULARY_SIZE = train_dataset.dataset.n_words=2423

Since the current task is 'Count', an nn.MSELoss() (mean squared error) criterion is created, and best_val_acc is initialized to -100.
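A minimal sketch of the criterion setup implied here (only the Count/MSELoss case is stated in these notes; the classification branch for the other tasks is an assumption):

import torch.nn as nn

task = 'Count'
if task == 'Count':
    criterion = nn.MSELoss()             # regression over repetition counts
else:                                    # Action / Trans / FrameQA
    criterion = nn.CrossEntropyLoss()    # assumption, not stated in these notes
best_val_acc = -100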

The model to be trained is an instance of LSTMCrossCycleGCNDropout.

for ii, data in enumerate(train_dataloader):

(At first it wasn't obvious which part of train_dataloader actually carries the data and how ii and data come out of it; enumerate simply yields the batch index ii and the collated batch data.)
data is a list of 6 tensors: [128, 80, 2048] float32, [128, 80, 4096] float32, [128] int64, [128, 35] int64, [128] int64, [128] float32
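A minimal sketch of how one batch could be unpacked (the ordering is inferred from the tensor shapes above and the forward_count inputs listed further down, so treat it as an assumption):

for ii, data in enumerate(train_dataloader):
    resnet_inputs, c3d_inputs, video_length, sentence_inputs, question_length, answers = data
    # resnet_inputs:   [128, 80, 2048] float32, per-frame ResNet features
    # c3d_inputs:      [128, 80, 4096] float32, per-clip C3D features
    # video_length:    [128] int64, valid frame count per video
    # sentence_inputs: [128, 35] int64, padded question token ids
    # question_length: [128] int64, valid token count per question
    # answers:         [128] float32, repetition counts for the Count task
    break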

Since change_lr is none, the following optimizer is created:

Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.0001
    weight_decay: 0
)
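The corresponding constructor call is presumably something like the sketch below (an assumption; note the table prints Lr as 0.000 only because it rounds to three decimals, while the actual value is 1e-4):

import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=1e-4, weight_decay=0)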
LSTMCrossCycleGCNDropout(
Reads batches from train_dataloader
#sentence_inputs(batch_size, sentence_len, 1)
#video_inputs(batch_size, frame_num, video_feature)
The task is 'Count', so forward_count is executed first
Inputs:
resnet_inputs	[128, 80, 2048]
c3d_inputs		[128, 80, 4096]
video_length	128
sentence_inputs	[128, 35]
question_length	128
answers			128
	#create all_adj[128,115,115]
	#model_block produces out, adj
		 ###Question encoding	inputs
		 #			sentence_inputs	[128, 35]
		 #			question_length 128
		 (sentence_encoder): SentenceEncoderRNN(
			(embedding): Embedding(2423, 300, padding_idx=0)
			#yields embedded[128,35,300]
		    (dropout): Dropout(p=0.3, inplace=False)
		    (upcompress_embedding): Linear(in_features=300, out_features=512, bias=False)
		    (relu)
		    #[128,35,300]x[300,512]->[128,35,512]
		    if variable_lengths:
		    	nn.utils.rnn.pack_padded_sequence
		    	#Inputs:
		    	#	embedded		[128,35,512]
		    	#	input_lengths	128
		    	#Outputs:
		    	#	embedded: a PackedSequence of 4 tensors: data [1269,512], batch_sizes of length 15, and two index tensors of length 128,
		    	#		i.e. 15 time steps with batch sizes 128,128,128,128,128,128,128,128,107,60,35,20,16,6,1
		    (rnn): GRU(512, 512, batch_first=True, dropout=0.3)
		    #Input: embedded
		    #Outputs:
		    #	output: a PackedSequence with the same data size and batch_sizes as embedded
		    #	hidden: a [1, 128, 512] tensor
		    if variable_lengths:
		    	nn.utils.rnn.pad_packed_sequence
		    	#Input: output
		    	#Output: output[128, 15, 512]
		    #——————————————————————————————————————
		    if self.n_layers > 1 and self.bidirectional:
		    	(compress_output): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)→q_output
		    	(compress_hn_layers_bi): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)→s_hidden
		    elif self.n_layers > 1:
		    	(compress_hn_layers): Linear(in_features=512, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)→s_hidden
			elif self.bidirectional:
		    	(compress_output): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)→q_output
		    	(compress_hn_bi): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)→s_hidden
		    #————————————————————————————————
		)
		#Outputs:
		#	q_output[128, 15, 512], i.e. the output above
		#	s_hidden[1, 128, 512], i.e. the hidden above, then squeezed to s_last_hidden[128, 512]
		###Video encoding
  		(compress_c3d): WeightDropLinear(in_features=4096, out_features=2048, bias=False)
  		#c3d_inputs[128, 80, 4096]x[4096,2048]->[128,80,2048]
  		(relu)
  		(video_fusion): WeightDropLinear(in_features=4096, out_features=2048, bias=False)
  		#c3d_inputs concatenated with resnet_inputs
  		#[128, 80, 4096]x[4096,2048]->video_inputs[128,80,2048]
  		(relu)
  		(video_encoder): VideoEncoderRNN(
  		#Inputs:
  		#	video_inputs	[128,80,2048]
  		#	video_length	128
  			(project): Linear(in_features=2048, out_features=512, bias=False)
  			#[128,80,2048]x[2048,512]->embedded[128,80,512]
  			(relu)
		    (dropout): Dropout(p=0.3, inplace=False)
		    if variable_lengths:
		    	nn.utils.rnn.pack_padded_sequence
		    	#Inputs:
		    	#	embedded		[128,80,512]
		    	#	input_lengths	128
		    	#Outputs:
		    	#	embedded: a PackedSequence of 4 tensors: data [5311,512], batch_sizes of length 80, and two index tensors of length 128, i.e. 80 time steps
		    (rnn): GRU(512, 512, batch_first=True, dropout=0.3)
		    #Input: embedded
		    #Outputs:
		    #	output: a PackedSequence with the same data size and batch_sizes as embedded
		    #	hidden: a [1, 128, 512] tensor
		    if variable_lengths:
		    	nn.utils.rnn.pad_packed_sequence
		    	#Input: output
		    	#Output: output[128, 80, 512]
		    #——————————————————————————————————————
		    if self.n_layers > 1 and self.bidirectional:
		    	(compress_output): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)
		    	(compress_hn_layers_bi): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)
		    elif self.n_layers > 1:
		    	(compress_hn_layers): Linear(in_features=512, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)
			elif self.bidirectional:
		    	(compress_output): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)
		    	(compress_hn_bi): Linear(in_features=1024, out_features=512, bias=False)
		    	(relu)
		    	(dropout): Dropout(p=0.3, inplace=False)
		    #—————————————————————————————————————
		)
		#Outputs:
		#	v_output[128, 80, 512], i.e. the output above
		#	v_hidden[1, 128, 512], i.e. the hidden above, then squeezed to v_last_hidden[128, 512]
		if self.ablation != 'local':
			###Video-question fusion
			if self.tf_layers != 0:
				(q_input_ln): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
			  	(v_input_ln): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
			  	#————————————————————————————
			  	###Self-attention
			  	if 'self' in self.fusion_type:
			  		(q_selfattn): SelfAttention(
			  			(padding_mask_k)
			  			#(bs, q_len, v_len)
			  			(padding_mask_q)
			  			#(bs, v_len, q_len)
					    (encoder_layers): ModuleList(
					      SelfAttentionLayer(
					      	if attn_mask is None or softmax_mask is None:
						      	(padding_mask_k)
				  				(padding_mask_q)
				  			#the three k/q/v linear projections
				  			(linear_k): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (linear_q): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (linear_v): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (softmax): Softmax(dim=-1)
					        (linear_final): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
					      )
					    )
					)
					(v_selfattn): SelfAttention(
					  	(padding_mask_k)
			  			(padding_mask_q)
					    (encoder_layers): ModuleList(
					      SelfAttentionLayer(
					      	if attn_mask is None or softmax_mask is None:
						      	(padding_mask_k)
				  				(padding_mask_q)
				  			#the three k/q/v linear projections
				  			(linear_k): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (linear_q): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (linear_v): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (softmax): Softmax(dim=-1)
					        (linear_final): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
					      )
					    )
					)
				#——————————————————————————————————
				if 'coattn' in self.fusion_type:
					(co_attn): CoAttention(
					#Inputs: the layer-normed q_output and v_output
						(padding_mask_k)
							#fake_q[128,15,512]×v_output.T[128,512,80]->attn_mask[128,15,80]bool
			  			(padding_mask_q)
			  				#q_output[128,15,512]×fake_k.T[128,512,80]->softmax_mask[128,15,80]bool
			  			(padding_mask_k)
			  				#fake_q[128,80,512]×q_output.T[128,512,15]->attn_mask_[128,80,15]bool
			  			(padding_mask_q)
			  				#v_output[128,80,512]×fake_k.T[128,512,15]->softmax_mask_[128,80,15]bool
					    (encoder_layers): ModuleList(
					      CoAttentionLayer(
					      #Inputs:
					      #q_output,v_output,attn_mask,softmax_mask,attn_mask_,softmax_mask_
					      	#the four linear projections below
					      	(linear_question): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (linear_video): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (linear_v_question): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (linear_v_video): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        #yields
					        #question_q	[128, 15, 512]
					        #video_k	[128, 80, 512]
					        #question	[128, 15, 512]
					        #video		[128, 80, 512]
					        #scale=512^(-1/2)
					        #question_q×video_k.T->attention_qv[128,15,80]
					        #attention_qv × scale, then masked_fill(attn_mask, -np.inf)
					        (softmax): Softmax(dim=-1)
					        	#attention_qv then masked_fill(softmax_mask, 0)
					        #video_k×question_q.T->attention_vq[128,80,15]
					        #attention_vq × scale, then masked_fill(attn_mask_, -np.inf)
					        (softmax): Softmax(dim=-1)
					        	#attention_vq then masked_fill(softmax_mask_, 0)
					        #attention_qv×v_output->output_qv[128,15,512]
					        (linear_final_qv): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (layer_norm_qv): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
					        	#LayerNorm over output_qv + q_output
					        #attention_vq×q_output->output_vq[128,80,512]
					        (linear_final_vq): WeightDropLinear(in_features=512, out_features=512, bias=False)
					        (layer_norm_vq): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
					        	#LayerNorm over output_vq + v_output
					      )
					    )
					  )
						  #Outputs:
						  #	q_output[128,15,512]
						  # v_output[128,80,512]
			###GCN
			(adj_learner): AdjLearner(
				#concatenate q_output and v_output to get graph_nodes[128,95,512] (a standalone sketch of this graph step follows the module walkthrough below)
			    (edge_layer_1): Linear(in_features=512, out_features=512, bias=False)
			    (relu)
			    (edge_layer_2): Linear(in_features=512, out_features=512, bias=False)
			    (relu)
			    #[128,95,512]×[128,512,95]->adj[128,95,95]
			)
			#separately concatenate q_output and v_output to get q_v_inputs[128,95,512]
			(gcn): GCN(
			#Inputs: q_v_inputs, adj
			#Output: q_v_output[128,95,512]
			    (layers): ModuleList(
			      (0): GraphConvolution(
			        (weight): Linear(in_features=512, out_features=512, bias=False)
			        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
			        (relu)
			        (dropout): Dropout(p=0.3, inplace=False)
			      )
			      (1): GraphConvolution(
			        (weight): Linear(in_features=512, out_features=512, bias=False)
			        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
			        (relu)
			        (dropout): Dropout(p=0.3, inplace=False)
			      )
			      (2): GraphConvolution(
			        (weight): Linear(in_features=512, out_features=512, bias=False)
			        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=False)
			        (relu)
			        (dropout): Dropout(p=0.3, inplace=False)
			      )
			    )
			)
			###Attention pooling
			(gcn_atten_pool): Sequential(
			    (0): Linear(in_features=512, out_features=256, bias=True)
			    #[128,95,512]x[512,256]->[128,95,256]
			    (1): Tanh()
			    (2): Linear(in_features=256, out_features=1, bias=True)
			    #[128,95,256]x[256,1]->[128,95,1]
			    (3): Softmax(dim=-1)
			)
			#q_v_output[128,95,512]×local_attn[128,95,1]->[128,95,512]
			#then summed over the nodes to local_out[128,512]
		if self.ablation != 'global':
			###Global fusion
			(global_fusion): Block(	#provided by the block package
			#Inputs:
			#	s_last_hidden[128,512], v_last_hidden[128,512]
			    (linear0): Linear(in_features=512, out_features=1600, bias=True)
			    #[128,512]x[512,1600]->[128,1600]
			    (linear1): Linear(in_features=512, out_features=1600, bias=True)
			    #[128,512]x[512,1600]->[128,1600]
			    if self.dropout_input > 0:
			    	(dropout)
			    	(dropout)
		    	(get_chunks)
		    		#Inputs:	x0
		    		#		self.sizes_list: [80]*20
		    		#Outputs:
		    		#	x0_chunks: a list of 20 [128,80] tensors obtained by narrowing x0
		    	(get_chunks)
		    		#	x1_chunks: a list of 20 [128,80] tensors obtained by narrowing x1
		    	#iterate over the 20 tensors of x0_chunks and x1_chunks in lockstep
				    (merge_linears0): ModuleList( #20 layers
				      (0): Linear(in_features=80, out_features=1200, bias=True)
				      #[128,80]x[80,1200]->[128,1200]
				    )
				    (merge_linears1): ModuleList( #20 layers
				      (0): Linear(in_features=80, out_features=1200, bias=True)
				    #element-wise product: [128,1200]*[128,1200]->[128,1200]
				    #then viewed as [128,15,80]
				    #then summed to z[128,80]
				    #z=relu(z)^(1/2)-relu(-z)^(1/2) (signed square root)
				    (normalize)
				#the 20 chunks are concatenated into [128,1600]
			    )
			    (linear_out): Linear(in_features=1600, out_features=512, bias=True)
			    #[128,1600]×[1600,512]->global_out[128,512]
			)
			if self.ablation != 'local':
				(fusion): Block(
				#Inputs:
				#	global_out[128,512], local_out[128,512]
				    (linear0): Linear(in_features=512, out_features=1600, bias=True)
				    (linear1): Linear(in_features=512, out_features=1600, bias=True)
					    (merge_linears0): ModuleList(#20 layers
					      (0): Linear(in_features=80, out_features=1200, bias=True)
					    )
					    (merge_linears1): ModuleList(#20 layers
					      (0): Linear(in_features=80, out_features=1200, bias=True)
				    )
				    (linear_out): Linear(in_features=1600, out_features=1, bias=True)
				    #[128,1600]×[1600,1]->out[128,1]
	#Outputs:
	#	out[128,1]
	#	adj[128, 95, 95]
	#adj is copied into all_adj (filling the first 95 positions from the top-left corner)
	#out is clamped to the range 1 to 10 (seems like it comes out as all 1s?)
)
Outputs:
out, predictions, answers, all_adj
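A minimal, self-contained sketch of the graph step described in the walkthrough (AdjLearner -> GCN -> attention pooling), with random tensors standing in for the encoder outputs. Module and tensor names follow the notes, but the repo's exact implementation may differ; in particular, the printed pool shows Softmax(dim=-1), whereas the softmax below runs over the node dimension so the weights are non-trivial.

import torch
import torch.nn as nn

hidden = 512
edge_layer_1 = nn.Linear(hidden, hidden, bias=False)
edge_layer_2 = nn.Linear(hidden, hidden, bias=False)
gcn_weight = nn.Linear(hidden, hidden, bias=False)            # one GraphConvolution layer
gcn_norm = nn.LayerNorm(hidden, elementwise_affine=False)
pool_fc1, pool_fc2 = nn.Linear(hidden, 256), nn.Linear(256, 1)

q_output = torch.randn(128, 15, hidden)    # question nodes
v_output = torch.randn(128, 80, hidden)    # video nodes

# AdjLearner: two Linear+ReLU edge layers, then an inner product -> adjacency
graph_nodes = torch.cat([q_output, v_output], dim=1)                   # [128, 95, 512]
h = torch.relu(edge_layer_2(torch.relu(edge_layer_1(graph_nodes))))
adj = torch.bmm(h, h.transpose(1, 2))                                  # [128, 95, 95]

# one GraphConvolution layer: propagate along adj, project, normalize
q_v_inputs = torch.cat([q_output, v_output], dim=1)                    # [128, 95, 512]
q_v_output = torch.relu(gcn_norm(gcn_weight(torch.bmm(adj, q_v_inputs))))

# attention pooling: per-node scores, weighted sum over the 95 nodes
scores = pool_fc2(torch.tanh(pool_fc1(q_v_output)))                    # [128, 95, 1]
local_attn = torch.softmax(scores, dim=1)
local_out = (q_v_output * local_attn).sum(dim=1)                       # [128, 512]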
The MSE loss is computed between the prediction out and the label answers.
Backpropagation is performed once per batch, 188 batches in total.

Each epoch reports an accuracy, e.g. 18.935%, and the mean training loss over the epoch, e.g. 5.758.
Then, somewhat mysteriously, a so-called "true loss" is also computed: a single MSE over all predictions and all labels at once, e.g. 5.802.

The validation and test sets are handled the same way: both the per-batch mean loss and the final accuracy are computed, along with this "true loss" (why compute a loss when there is no backprop? Can it really serve as a performance metric?).
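A minimal sketch of this bookkeeping for the Count task. The model(*data) call signature and the rounded-match accuracy are assumptions; the point is the difference between the per-batch mean loss and the epoch-level "true loss", which is one MSE over all predictions and labels collected during the epoch.

import torch
import torch.nn.functional as F

def run_epoch(dataloader, model, criterion, optimizer=None):
    batch_losses, all_preds, all_labels = [], [], []
    for ii, data in enumerate(dataloader):
        out, predictions, answers, all_adj = model(*data)     # forward_count on one batch (assumed call)
        loss = criterion(out.squeeze(-1), answers)
        if optimizer is not None:                             # training: one BP step per batch
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        batch_losses.append(loss.item())
        all_preds.append(predictions.detach().float())
        all_labels.append(answers.detach().float())
    all_preds, all_labels = torch.cat(all_preds), torch.cat(all_labels)
    acc = (all_preds.round() == all_labels.round()).float().mean().item()   # e.g. 18.935%
    mean_loss = sum(batch_losses) / len(batch_losses)                       # e.g. 5.758
    true_loss = F.mse_loss(all_preds, all_labels).item()                    # e.g. 5.802
    return acc, mean_loss, true_loss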
