May 18, 2020 · docker: Error response from daemon: Container command 'nvidia-smi' not found or does not exist. Error: Docker does not find the NVIDIA drivers.
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:150] kernel reported version is: 352.93
I tensorflow/core/common_runtime/gpu/gpu_init.cc:81] No GPU devices available on machine.
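A reasonable first diagnostic (a sketch, assuming the NVIDIA driver is installed on the host) is to confirm the host itself sees the driver before blaming Docker:
$ nvidia-smi                          # should list the GPUs and the driver version
$ cat /proc/driver/nvidia/version     # kernel module version, e.g. the 352.93 logged above
If nvidia-smi works on the host but not inside the container, the container is simply not being started with GPU support.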
nvidia-smi is not yet packaged for CUDA on WSL 2. IPC-related APIs in CUDA are not yet available in WSL 2. Unified Memory is limited to the same feature set as on native Windows systems. With the NVIDIA Container Toolkit for Docker 19.03, only --gpus all is supported. This means that on multi-GPU systems it is not possible to filter for specific GPU devices.
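For contrast, on a regular (non-WSL) Linux host with Docker 19.03+ and the NVIDIA Container Toolkit, the --gpus flag does allow filtering; a sketch (the image tag is only an example):
$ docker run --gpus all --rm nvidia/cuda:9.0-base nvidia-smi              # expose all GPUs
$ docker run --gpus '"device=0,1"' --rm nvidia/cuda:9.0-base nvidia-smi   # expose only GPUs 0 and 1
It is this second, device-filtered form that is not available under WSL 2 as described above.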
In the case of a Docker deployment, this translates into passing -e CUDA_VISIBLE_DEVICES="0,1" to the nvidia-docker run command, or passing the NV_GPU option at the beginning of the nvidia-docker run command. (See the example below.)
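A sketch of both approaches with the nvidia-docker 1.0 wrapper (the image tag is only an example):
$ nvidia-docker run -e CUDA_VISIBLE_DEVICES="0,1" --rm nvidia/cuda nvidia-smi
$ NV_GPU=0,1 nvidia-docker run --rm nvidia/cuda nvidia-smi
NV_GPU controls which devices the wrapper exposes to the container, while CUDA_VISIBLE_DEVICES filters visibility inside the CUDA runtime.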
No matter how many times I install nvidia-docker, all I get is bash: nvidia: command not found... However, $ docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi works fine and prints the expected nvidia-smi output.
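That behaviour suggests the nvidia runtime is registered but the nvidia-docker wrapper is not on the PATH. A quick check (a sketch, using standard Docker commands):
$ docker info | grep -i runtime   # should list 'nvidia' among the runtimes
$ which nvidia-docker             # empty if only the runtime, not the wrapper, is installed
If the runtime works, docker run --runtime=nvidia (or --gpus with Docker 19.03+) is sufficient; the separate nvidia-docker command is not required.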
In the previous post on this subject we used code from Technische Universität Kaiserslautern to monitor our GPUs using OMD checkmk (now checkmk raw). With some new RTX 2080s installed this broke: on these cards the nvidia-smi check reports nothing at all for ECC errors (rather than 0, as the previous cards did).
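The behaviour can be reproduced directly with nvidia-smi; a sketch (query field names as in the nvidia-smi documentation):
$ nvidia-smi -q -d ECC
$ nvidia-smi --query-gpu=name,ecc.mode.current --format=csv
On consumer GeForce cards such as the RTX 2080, which have no ECC memory, these fields come back as N/A, so a monitoring check that expects a numeric value has to treat N/A as a valid "no ECC" state rather than a failure.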
The OTBTF GPU docker. Run the OTBTF docker image with the NVIDIA runtime enabled. You can add the right permissions to your current user for docker (there is a "docker" group), or you can run the docker command as root. The following command will display the help of the TensorflowModelServe OTB application.
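A sketch of that command (the image name mdl4eo/otbtf and its tag are assumptions here; check the OTBTF documentation for the exact GPU tag):
$ docker run --runtime=nvidia --rm -ti mdl4eo/otbtf:latest-gpu otbcli_TensorflowModelServe -help
otbcli_TensorflowModelServe follows the usual otbcli_<ApplicationName> naming convention of OTB command-line applications.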
sudo apt install nvidia-modprobe
# Now reinstall nvidia-docker to make sure the install exits zero:
sudo dpkg -i nvidia-docker_1.0.1-1_amd64.deb
# (no errors should be reported)
Now let's try running that test again:
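The test referred to is presumably the standard nvidia-docker 1.0 smoke test (a sketch; the image tag is only an example):
sudo nvidia-docker run --rm nvidia/cuda nvidia-smi
If the plugin and the driver volumes are set up correctly, this prints the same nvidia-smi table inside the container as on the host.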