Environment variable + Method 0
The relevant environment variable is shown below.
If you use PyCharm, you can set it manually as an Environment Variable in the run configuration.
Alternatively, it can be supplied on the command line when executing the .py file.
CUDA_VISIBLE_DEVICES
Method 1
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "<YOUR_GPU_NUMBER_HERE>"  # must be a string, e.g. "0" or "0,1"
Method 2
Hi, you can specify the GPUs to use in a Python script as follows:
import os
from argparse import ArgumentParser
parser = ArgumentParser(description='Example')
parser.add_argument('--gpu', type=int, default=[0, 1], nargs='+', help='GPUs to use')
args = parser.parse_args()
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in args.gpu)
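A runnable version of this approach, parsing a simulated command line so it can be tried directly without launching a script (the GPU indices 2 and 3 are just examples):

```python
import os
from argparse import ArgumentParser

parser = ArgumentParser(description="Example")
parser.add_argument("--gpu", type=int, default=[0, 1], nargs="+",
                    help="GPU indices to use")

# Simulate running `python script.py --gpu 2 3`.
args = parser.parse_args(["--gpu", "2", "3"])

# Join the indices into the comma-separated form CUDA expects, e.g. "2,3".
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(x) for x in args.gpu)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 2,3
```

Passing a list to parse_args here only serves the demonstration; in a real script you would call parser.parse_args() with no arguments so the actual command line is used.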
Method 3
I tried
CUDA_VISIBLE_DEVICES=3 python test.py
and it doesn’t work for me.
But
export CUDA_VISIBLE_DEVICES=3
python test.py
does work for me.
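If the inline `VAR=value command` form misbehaves in your shell, a third option is to inject the variable from a small Python launcher via subprocess (a sketch; `test.py` is the script from the quote above, and the actual run call is left commented out):

```python
import os
import subprocess

# Build a child environment with CUDA_VISIBLE_DEVICES overridden,
# leaving the parent process's environment untouched.
child_env = dict(os.environ, CUDA_VISIBLE_DEVICES="3")

# Equivalent to `CUDA_VISIBLE_DEVICES=3 python test.py`:
# subprocess.run(["python", "test.py"], env=child_env, check=True)

print(child_env["CUDA_VISIBLE_DEVICES"])  # → 3
```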
Method 4
torch.cuda.set_device(device)
Sets the current device.
Usage of this function is discouraged in favor of device. In most cases it’s better to use CUDA_VISIBLE_DEVICES environmental variable.
Parameters: device (torch.device or int) – selected device. This function is a no-op if this argument is negative.
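A minimal sketch of the two styles, assuming PyTorch is installed; the device index 3 is just an example, and the code falls back to CPU when that GPU is absent so it runs anywhere:

```python
import torch

# Discouraged style: set the current device globally, so allocations
# that omit a device land on cuda:3. Skipped if the machine has no cuda:3.
if torch.cuda.device_count() > 3:
    torch.cuda.set_device(3)

# Preferred style per the docs above: pass an explicit torch.device.
device = torch.device("cuda:3" if torch.cuda.device_count() > 3 else "cpu")
x = torch.zeros(2, 2, device=device)
print(x.shape)  # torch.Size([2, 2])
```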
Using it together with nohup
CUDA_VISIBLE_DEVICES=3 nohup ....
Example:
CUDA_VISIBLE_DEVICES=3 nohup python -u train_unsup.py --name raft-sintel --stage sintel --validation sintel --gpus 0 --num_steps 1000000 --lr 1e-4 --image_size 368 768 --wdecay 1e-4 --gamma=0.85 --mixed_precision >> 0224.log 2>&1 &
Note
Just to add to this answer: ideally, this environment variable should be set at the top of the program.
Changing the CUDA_VISIBLE_DEVICES variable will not work if it is set after torch.backends.cudnn.benchmark has been touched.
This might also be true for other torch/CUDA-related calls, so it is better to set the environment variable at program start, or to
use export CUDA_VISIBLE_DEVICES="NUM" before starting the program.
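The advice above as a minimal sketch; the CUDA-aware imports are commented out so only the ordering is demonstrated, and GPU index 0 is just an example:

```python
import os

# 1. Set the variable FIRST, before importing torch or touching anything
#    CUDA-related such as torch.backends.cudnn.benchmark.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only physical GPU 0

# 2. Only now import CUDA-aware libraries:
# import torch
# torch.backends.cudnn.benchmark = True

print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 0
```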