I am trying to use TensorFlow with a GPU on Google Colab.
I followed the steps listed at https://www.tensorflow.org/install/gpu
I confirmed that the GPU is visible and that CUDA is installed using the following commands -
!nvcc --version
!nvidia-smi
This works as expected -
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Wed Nov 20 10:58:14 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 53C P8 10W / 70W | 0MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
So far so good. Next, I tried to check whether it is visible to TensorFlow -
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 16436294862263048894, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 18399082617569983288
physical_device_desc: "device: XLA_CPU device", name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 1461835910630192838
physical_device_desc: "device: XLA_GPU device"]
However, when I try to run a simple operation on the GPU with TensorFlow, it raises an error. And when I check whether the GPU is visible to TensorFlow, it returns False -
tf.test.is_gpu_available()
False
What am I doing wrong, and how can I fix it?
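For what it's worth, here is a minimal diagnostic sketch I can run in a Colab cell to narrow things down. It assumes only that `tensorflow` is importable; it checks whether the installed wheel was actually built with CUDA support (a CPU-only build reports `False` from `tf.test.is_gpu_available()` even when `nvidia-smi` sees the GPU), and lists the physical GPU devices TensorFlow itself can see:

```python
import importlib.util

# Guard the import so the cell still runs in an environment without TensorFlow.
if importlib.util.find_spec("tensorflow") is not None:
    import tensorflow as tf

    print("TF version:", tf.__version__)
    # A CPU-only wheel prints False here regardless of what nvidia-smi shows.
    print("Built with CUDA:", tf.test.is_built_with_cuda())
    # Device query available in recent TF 1.x under tf.config.experimental
    # (and as tf.config.list_physical_devices in TF 2.x).
    print("Visible GPUs:", tf.config.experimental.list_physical_devices("GPU"))
else:
    print("tensorflow is not installed in this environment")

checks_ran = True
```

If `Built with CUDA:` comes back `False`, the problem would be the installed package rather than the driver or CUDA toolkit.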