Hello,
I'm currently working a lot with Axon, building an LSTM for my master's thesis.
I was wondering if there is a nice way to set up Livebook with CUDA in a Docker image?
The advantage of a Docker/Podman image in my case is that it is very easy to set up and reproducible, so I could later share my Livebook with people who are interested in my research.
CUDA installation is a pain in the ***, as I have already discovered.
I am also thinking of setting everything up on an Ubuntu machine and building a Docker image from that.
What is the way to go if I want to use my GPU with Livebook? I have CUDA 12 installed and it's not being detected, so I was thinking I might give an older version like CUDA 11.0 a try in a VM.
Livebook has Docker images built by this script and the base directory: livebook/docker/build_and_push.sh at main · livebook-dev/livebook · GitHub. The CUDA version there is 11.8. This is the image used on Hugging Face: Livebook - a Hugging Face Space by livebook-dev. It looks like it uses an Ubuntu base.
OK, thanks, I will give it a try.
Hi everyone, I managed to get this working, and I hope this will help some of you.
Here is how to do it on Ubuntu.
Please check this documentation for updates: nvidia
Also please see the Livebook documentation: livebook
-
Install the NVIDIA drivers
sudo apt install nvidia-driver-530 nvidia-dkms-530
Be aware that you might have to install newer drivers.
-
Reboot!
-
Install the NVIDIA Container Toolkit
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit-base
-
Follow these steps from the documentation, entering each command one after the other.
This should include the NVIDIA Container Toolkit CLI (nvidia-ctk); the version can be confirmed by running:
nvidia-ctk --version
To generate a CDI specification that refers to all devices, run:
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
To check the names of the generated devices, run:
grep " name:" /etc/cdi/nvidia.yaml
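Before moving on to Docker, it can be handy to verify the toolkit steps above in one go. This is a small sketch of a sanity-check script; it only uses the commands and paths from this guide and prints a hint when a step is still missing:

```shell
#!/bin/sh
# Sanity checks for the NVIDIA Container Toolkit steps above.

check_toolkit() {
  # nvidia-ctk is installed by the nvidia-container-toolkit-base package.
  if command -v nvidia-ctk >/dev/null 2>&1; then
    echo "nvidia-ctk found: $(nvidia-ctk --version | head -n 1)"
  else
    echo "nvidia-ctk not found - install nvidia-container-toolkit-base first"
  fi
}

check_cdi_spec() {
  # The CDI spec is generated by: sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
  if [ -f /etc/cdi/nvidia.yaml ]; then
    echo "CDI spec present at /etc/cdi/nvidia.yaml"
  else
    echo "CDI spec missing - generate it with nvidia-ctk cdi generate"
  fi
}

check_toolkit
check_cdi_spec
```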
-
Set up Docker (from the NVIDIA documentation)
Run this set of commands:
curl https://get.docker.com | sh && sudo systemctl --now enable docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
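For reference, `nvidia-ctk runtime configure --runtime=docker` registers the NVIDIA runtime in `/etc/docker/daemon.json`. The resulting entry looks roughly like this (a sketch; your file may contain additional settings):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

If the `--runtime=nvidia` flag later fails, checking this file is a good first debugging step.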
-
Test the system
sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
-
Pull and start a Livebook container
sudo docker pull ghcr.io/livebook-dev/livebook:0.9.3-cuda11.8
sudo docker run --rm --runtime=nvidia --gpus all -p 8080:8080 -p 8081:8081 -e LIVEBOOK_PASSWORD="securesecret" ghcr.io/livebook-dev/livebook:0.9.3-cuda11.8
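If you prefer Docker Compose, an equivalent setup could look like the sketch below. This is untested on my side; the image tag, ports, and password are the ones from the `docker run` command above:

```yaml
services:
  livebook:
    image: ghcr.io/livebook-dev/livebook:0.9.3-cuda11.8
    runtime: nvidia
    ports:
      - "8080:8080"
      - "8081:8081"
    environment:
      LIVEBOOK_PASSWORD: "securesecret"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Then start it with `docker compose up`.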
-
I have set the environment variables in the Livebook settings; you can do the same there.
-
You can test the speed by adding a simple Smart Neural Network task in your Livebook.
You should see something like this:
|=============================================================| 100% (548.11 MB)
17:41:17.544 [info] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
17:41:17.545 [info] XLA service 0x7f267417a630 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
17:41:17.545 [info] StreamExecutor device (0): NVIDIA GeForce RTX 3090, Compute Capability 8.6
17:41:17.545 [info] Using BFC allocator.
17:41:17.545 [info] XLA backend allocating 22049272627 bytes on device 0 for BFCAllocator.
|===============================================================| 100% (1.35 MB)
This image was not working for me, but there are already built versions of Livebook here: