```python
import torch

# Get the id of the default device
torch.cuda.current_device()      # 0

# Via pycuda (requires `pip install pycuda`):
import pycuda.driver as cuda
cuda.init()
cuda.Device(0).name()            # '0' is the id of your GPU
# 'Tesla K80'
```

Or:

```python
torch.cuda.get_device_name(0)    # Get the name of the device with id 0
# 'Tesla K80'
```
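The two calls above can be combined into a small helper that lists every visible GPU. This is a minimal sketch using only `torch.cuda` APIs; the `list_cuda_devices` name is my own, and on a CPU-only machine it simply returns an empty list:

```python
import torch

def list_cuda_devices():
    """Return a list of (index, name) pairs for every visible CUDA device."""
    if not torch.cuda.is_available():
        return []  # no CUDA-capable GPU (or CPU-only build of PyTorch)
    return [(i, torch.cuda.get_device_name(i))
            for i in range(torch.cuda.device_count())]

for idx, name in list_cuda_devices():
    print(f"cuda:{idx} -> {name}")
```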
I wrote a simple class to get information about your CUDA-compatible GPU:
To get the current memory usage, you can use PyTorch's functions, for example:
```python
import torch

# Returns the current GPU memory usage by tensors, in bytes, for a given device
torch.cuda.memory_allocated()

# Returns the current GPU memory managed by the caching allocator, in bytes,
# for a given device
torch.cuda.memory_cached()
```
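To see these counters in action, here is a minimal sketch that watches `memory_allocated()` change as a tensor is created. It assumes a CUDA-capable machine; on a CPU-only one the body is skipped:

```python
import torch

if torch.cuda.is_available():
    before = torch.cuda.memory_allocated()
    x = torch.empty(1024, 1024, device="cuda")   # ~4 MB of float32
    after = torch.cuda.memory_allocated()
    print(f"tensor holds {after - before} bytes on the GPU")
    del x
```

Note that the allocator rounds allocations up to block sizes, so the reported delta can be slightly larger than the raw tensor size.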
After running the application, you can clear the cache with a simple command:
```python
# Releases all unoccupied cached memory currently held by the caching
# allocator, so that it can be used by other GPU applications and becomes
# visible in nvidia-smi
torch.cuda.empty_cache()
```
However, this command will not free the GPU memory that is still occupied by tensors.
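This distinction can be demonstrated with a short sketch: `empty_cache()` only returns *cached* memory to the driver, while memory held by live tensors stays allocated until the tensors themselves are deleted. Assumes a GPU is present and no other tensors are alive:

```python
import torch

if torch.cuda.is_available():
    x = torch.empty(1024, 1024, device="cuda")
    torch.cuda.empty_cache()            # x is still alive: its memory is NOT freed
    assert torch.cuda.memory_allocated() > 0
    del x                               # drop the last reference to the tensor...
    torch.cuda.empty_cache()            # ...then the cached block can be released
    assert torch.cuda.memory_allocated() == 0
```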