./hadoop/bin/hadoop jar ./hadoop/share/hadoop/tools/lib/hadoop-streaming-2.10.2.jar -input /home/hadoop/imdb_kaggle.csv -output output -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py
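The streaming job above expects executable mapper.py and reducer.py scripts that read lines on stdin and write tab-separated key/value pairs on stdout. A minimal sketch of such a pair is below; the choice of CSV column index 5 as the key is an assumption about imdb_kaggle.csv's layout, not taken from the source.

```python
#!/usr/bin/env python
# Hypothetical sketch of mapper.py / reducer.py for the streaming job above.
# ASSUMPTION: column index 5 of imdb_kaggle.csv holds the field to count;
# adjust key_col to the real layout.
import sys

def map_lines(lines, key_col=5):
    """mapper.py logic: emit 'key\t1' for the chosen CSV column of each row."""
    for line in lines:
        fields = line.rstrip("\n").split(",")
        if len(fields) > key_col:
            yield "%s\t1" % fields[key_col]

def reduce_lines(lines):
    """reducer.py logic: sum counts per key; streaming delivers input sorted by key."""
    current, count = None, 0
    for line in lines:
        key, _, value = line.rstrip("\n").partition("\t")
        if key != current:
            if current is not None:
                yield "%s\t%d" % (current, count)
            current, count = key, 0
        count += int(value)
    if current is not None:
        yield "%s\t%d" % (current, count)

if __name__ == "__main__":
    # In mapper.py the body would be:  for out in map_lines(sys.stdin): print(out)
    # In reducer.py the body would be: for out in reduce_lines(sys.stdin): print(out)
    pass
```

Note that with `-mapper mapper.py -reducer reducer.py` the scripts must be executable and carry a shebang line (or be invoked as `-mapper "python mapper.py"`), since the streaming jar runs them as standalone commands.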
Traceback (most recent call last):
  File "/home/gcl/project/CSCA-vmamba/utils/regression_trainer.py", line 164, in train_eopch
    outputs = self.model(inputs)
  File "/home/gcl/anaconda3/envs/vmamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gcl/project/CSCA-vmamba/vmamba/vmamba.py", line 1776, in forward
    new_RGB, new_T, new_shared = self.block1(RGB_0, T_0)
  File "/home/gcl/anaconda3/envs/vmamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gcl/project/CSCA-vmamba/vmamba/vmamba.py", line 1441, in forward
    new_RGB, new_T, new_shared = self.fuse(RGB, T)
  File "/home/gcl/project/CSCA-vmamba/vmamba/vmamba.py", line 1454, in fuse
    rgb_query = self.RGB_query(RGB_m).view(batch_size, adapt_channels, -1).permute(0, 2, 1)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

This is the corresponding code:

    out = self.out_channels
    adapt_channels = 2 ** self.L * self.out_channels
    batch_size = RGB_m.size(0)
    temp = self.RGB_query(RGB_m)
    rgb_query = self.RGB_query(RGB_m).view(batch_size, adapt_channels, -1).permute(0, 2, 1)

Here L=4, out_channels=96, batch_size=8; RGB_m has shape [8, 192, 32, 32] and temp has shape [8, 96, 32, 32].
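The error itself is about memory layout, not element count: `.view()` only works when the requested shape is compatible with the tensor's strides, and the output of RGB_query is evidently non-contiguous at that point. Replacing `.view(...)` with `.reshape(...)` (or `.contiguous().view(...)`) resolves it, as the error message suggests. A minimal standalone reproduction, using the shapes reported in the post:

```python
import torch

batch_size = 8
adapt_channels = 2 ** 4 * 96      # L=4, out_channels=96 -> 1536, as in the post

# temp has shape [8, 96, 32, 32]; a permute makes it non-contiguous,
# which is one common way a module's output ends up in this state.
temp = torch.randn(8, 96, 32, 32)
y = temp.permute(0, 2, 3, 1)      # shape [8, 32, 32, 96], non-contiguous strides

# .view() fails: the target dims span non-contiguous memory.
try:
    y.view(batch_size, adapt_channels, -1)
except RuntimeError as e:
    print("view failed:", e)

# .reshape() copies when necessary and succeeds.
rgb_query = y.reshape(batch_size, adapt_channels, -1).permute(0, 2, 1)
print(rgb_query.shape)            # torch.Size([8, 64, 1536])
```

One further thing worth checking: temp has only 96 channels while adapt_channels is 1536, so the reshape to (8, 1536, -1) only succeeds by regrouping channel and spatial elements together (96*32*32 = 1536*64). If rgb_query is meant to hold per-channel spatial tokens, that grouping may itself be a bug, independent of the view/reshape issue.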