INFO:logocap.dataset.COCODataset:=> classes: ['__background__', 'mouse']
/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
0%| | 0/10 [00:00<?, ?it/s]
Process SpawnProcess-1:
Traceback (most recent call last):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 336, in main_worker
perf_indicator = run_test(cfg, InferenceShell, model, test_loader, final_output_dir, logger, use_wandb = args.wandb)
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 93, in run_test
for i, (images, rgb, meta) in enumerate(test_loader):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/wagnchogn/data/qu/logocap-main_1/logocap/dataset/COCOTest.py", line 155, in __getitem__
meta = coco.loadImgs(img_id)[0]
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/pycocotools/coco.py", line 230, in loadImgs
return [self.imgs[id] for id in ids]
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/pycocotools/coco.py", line 230, in <listcomp>
return [self.imgs[id] for id in ids]
KeyError: '2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 66, in _wrap
sys.exit(1)
SystemExit: 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/multiprocessing/process.py", line 318, in _bootstrap
util._exit_function()
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/multiprocessing/util.py", line 357, in _exit_function
p.join()
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 42522) is killed by signal: Terminated.
0%| | 0/10 [00:02<?, ?it/s]
Process SpawnProcess-2:
[... same KeyError: '2' traceback as SpawnProcess-1 above ...]
RuntimeError: DataLoader worker (pid 42776) is killed by signal: Terminated.
Traceback (most recent call last):
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 359, in <module>
main()
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 192, in main
mp.spawn(
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 336, in main_worker
perf_indicator = run_test(cfg, InferenceShell, model, test_loader, final_output_dir, logger, use_wandb = args.wandb)
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 93, in run_test
for i, (images, rgb, meta) in enumerate(test_loader):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/wagnchogn/data/qu/logocap-main_1/logocap/dataset/COCOTest.py", line 155, in __getitem__
meta = coco.loadImgs(img_id)[0]
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/pycocotools/coco.py", line 230, in loadImgs
return [self.imgs[id] for id in ids]
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/pycocotools/coco.py", line 230, in <listcomp>
return [self.imgs[id] for id in ids]
KeyError: '2'
Process finished with exit code 1
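The root failure above is `KeyError: '2'` raised inside pycocotools' `loadImgs`. `COCO.createIndex` keys `self.imgs` by the raw `id` values from the annotation JSON, so the lookup fails whenever the id passed to `loadImgs` has a different type than the ids stored in the JSON (here the quoted `'2'` indicates a string id being looked up against integer keys, or vice versa). A minimal sketch of the mismatch and a type-normalizing workaround — the `imgs` dict below is a stand-in for `COCO.imgs`, and `load_img` is a hypothetical helper, not part of pycocotools:

```python
# Stand-in for COCO.imgs: createIndex keys it by the raw "id" from the JSON.
imgs = {1: {"id": 1, "file_name": "000001.jpg"},
        2: {"id": 2, "file_name": "000002.jpg"}}

img_id = "2"  # id arrives as a string, e.g. from a JSON-keyed mapping

try:
    meta = imgs[img_id]  # reproduces: KeyError: '2'
except KeyError as e:
    print("lookup failed:", e)

def load_img(imgs, img_id):
    """Look up an image record, tolerating str/int id mismatches."""
    if img_id in imgs:
        return imgs[img_id]
    try:
        return imgs[int(img_id)]  # '2' -> 2
    except (ValueError, KeyError):
        raise KeyError(img_id)

meta = load_img(imgs, "2")
print(meta["file_name"])  # -> 000002.jpg
```

A cleaner long-term fix is to make the ids consistent at the source: ensure the `"id"` fields in the annotation JSON and the ids the dataset class iterates over are both integers.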
Traceback (most recent call last):
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 359, in <module>
main()
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 192, in main
mp.spawn(
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 336, in main_worker
perf_indicator = run_test(cfg, InferenceShell, model, test_loader, final_output_dir, logger, use_wandb = args.wandb)
File "/media/wagnchogn/data/qu/logocap-main_1/tools/train_net.py", line 137, in run_test
coco_eval = coco.loadRes(results_path_gathered)
File "/home/wagnchogn/anaconda3/envs/logocap-temp/lib/python3.8/site-packages/pycocotools/coco.py", line 328, in loadRes
if 'caption' in anns[0]:
IndexError: list index out of range
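The `IndexError` in `loadRes` comes from the `if 'caption' in anns[0]:` line shown in the traceback: the gathered results file deserialized to an empty list, so `anns[0]` does not exist. That typically means the run produced zero detections (plausibly a downstream effect of the earlier `KeyError`). A hedged sketch of a guard before evaluation — `results_empty.json` is a stand-in for `results_path_gathered`:

```python
import json
import os
import tempfile

# loadRes does `if 'caption' in anns[0]:`, so an empty results list
# raises IndexError before evaluation even starts.
results = []  # what an empty detections dump looks like

path = os.path.join(tempfile.gettempdir(), "results_empty.json")
with open(path, "w") as f:
    json.dump(results, f)

# Guard: check the file is non-empty before handing it to loadRes.
with open(path) as f:
    anns = json.load(f)

if not anns:
    print("no detections were written; skipping COCO evaluation")
else:
    pass  # coco_eval = coco.loadRes(path) would be safe here
```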
Collecting package metadata (current_repodata.json): done
WARNING conda.models.version:get_matcher(537): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.7.1.*, but conda is ignoring the .* and treating it as 1.7.1
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
WARNING conda.models.version:get_matcher(537): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.8.0.*, but conda is ignoring the .* and treating it as 1.8.0
WARNING conda.models.version:get_matcher(537): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.6.0.*, but conda is ignoring the .* and treating it as 1.6.0
WARNING conda.models.version:get_matcher(537): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.9.0.*, but conda is ignoring the .* and treating it as 1.9.0
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- torch==1.12.1
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://conda.anaconda.org/conda-forge/linux-64
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
clip-by-openai 1.1 requires torch<1.7.2,>=1.7.1, but you have torch 1.12.1 which is incompatible.
clip-by-openai 1.1 requires torchvision==0.8.2, but you have torchvision 0.13.1 which is incompatible.
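The `PackagesNotFoundError` for `torch==1.12.1` is expected with those channels: `torch` is the pip package name, while the conda package is called `pytorch` and is distributed from the `pytorch` channel. The pip conflict above is separate — `clip-by-openai` pins `torch<1.7.2`, so any 1.12.x install will conflict with it. A sketch of both install routes (the `cudatoolkit` selector is an assumption that depends on your GPU/driver setup):

```shell
# conda knows the package as "pytorch", not "torch",
# and it lives on the pytorch channel:
conda install pytorch=1.12.1 torchvision=0.13.1 cudatoolkit=11.3 -c pytorch

# Or install the pip wheels, which use the name "torch":
pip install torch==1.12.1 torchvision==0.13.1
```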
https://drive.google.com/drive/folders/0B0vscETPGI1-Q1h1WFdEM2FHSUE?resourcekey=0-XIVV_7YUjB9TPTQ3NfM17A