onnxruntime: test whether the GPU is available
import onnxruntime
print("ONNX Runtime version:", onnxruntime.__version__)
print("Available providers:", onnxruntime.get_available_providers())
# Example output on a machine with CUDA and TensorRT installed:
# Available providers: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
PyTorch: test whether the GPU is available
import torch
print(torch.cuda.is_available())
# torch.Tensor(5, 3) allocates an uninitialized tensor; torch.rand is clearer
print(torch.rand(5, 3).cuda())
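Calling `.cuda()` raises an error on CPU-only machines, so a guarded version is safer; it also reports the device name and count when a GPU is present:

```python
import torch

if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device name:", torch.cuda.get_device_name(0))
    x = torch.rand(5, 3, device="cuda")
else:
    print("CUDA not available; using CPU")
    x = torch.rand(5, 3)

print(x.device)  # cuda:0 on a GPU machine, cpu otherwise
```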
TensorFlow: test whether the GPU is available
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
a = tf.constant(2.0)
b = tf.constant(4.0)
print(a+b)
# TF 1.x equivalent:
# print(tf.test.is_gpu_available())
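The addition above runs wherever TensorFlow places it by default; to make the device choice explicit, you can pin the computation with `tf.device`, falling back to CPU when no GPU is listed. A small sketch along those lines:

```python
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # suppress info/warning logs
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"

# Explicitly place the computation on the chosen device.
with tf.device(device):
    c = tf.constant(2.0) + tf.constant(4.0)

print(device, float(c))  # 6.0 on either device
```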
PaddlePaddle: test whether the GPU is available
import paddle
# paddle.fluid is deprecated since Paddle 2.0; use paddle.device instead
print(paddle.device.is_compiled_with_cuda())
paddle.utils.run_check()