Is there a way to set up Livebook with CUDA in a Docker image?

I am currently working a lot with Axon, building an LSTM for my master's thesis.
I was wondering whether there is a nice way to set up Livebook with CUDA in a Docker image?
The advantage of a Docker/Podman image in my case is that it is very easy to set up and reproducible, so I could later share my Livebook with people who are interested in my research.
CUDA installation is a pain in the ***, as I have already discovered.

I am also thinking of setting everything up on an Ubuntu machine and building my own Docker image.

What is the way to go if I want to use my GPU with Livebook? I have CUDA 12 installed and it is not being detected, so I was thinking of giving an older version like CUDA 11.0 a try in a VM.


Livebook has Docker images built by the script in this directory: livebook/docker/ at main · livebook-dev/livebook · GitHub. The CUDA version there is 11.8. This is the image used on Hugging Face: Livebook - a Hugging Face Space by livebook-dev. It looks like it uses an Ubuntu base.
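For reference, here is a minimal way to try that prebuilt CUDA image. The `ghcr.io` tag name below is an assumption on my part; published tags change between releases, so check the repository for the current ones. This also assumes the NVIDIA Container Toolkit is already installed on the host:

```shell
# Pull a prebuilt Livebook image with bundled CUDA libraries
# (tag is an assumption -- check ghcr.io/livebook-dev/livebook for current tags):
docker pull ghcr.io/livebook-dev/livebook:latest-cuda11.8

# Run it with GPU access and the usual Livebook ports exposed:
docker run --rm --gpus all -p 8080:8080 -p 8081:8081 \
  -e LIVEBOOK_PASSWORD="securesecret" \
  ghcr.io/livebook-dev/livebook:latest-cuda11.8
```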


OK, thanks, I will give it a try :smile:

Hi everyone, I managed to get this working; I hope it will help some of you.
Here I describe how to do it on Ubuntu.
Please check this documentation for updates: nvidia
Also please look at the Livebook documentation: livebook

  1. Install the NVIDIA drivers
    sudo apt install nvidia-driver-530 nvidia-dkms-530
    Be aware that you might have to install newer drivers.

  2. Reboot!

  3. Install the nvidia container toolkit
    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit-base

  4. Follow these steps from the documentation, entering each command one after another.
    This should have installed the NVIDIA Container Toolkit CLI (nvidia-ctk); the version can be confirmed by running:
    nvidia-ctk --version
    In order to generate a CDI specification that refers to all devices, the following command is used:
    sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
    To check the names of the generated devices the following command can be run:
    grep " name:" /etc/cdi/nvidia.yaml

  5. Set up Docker (from the NVIDIA documentation)
    Run this set of commands (the get.docker.com convenience script is what the NVIDIA docs use):
    curl https://get.docker.com | sh \
      && sudo systemctl --now enable docker
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker

  6. Test the system
    sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

  7. Pull and start a Livebook container
    sudo docker pull <the Livebook CUDA image>
    sudo docker run --rm --runtime=nvidia --gpus all -p 8080:8080 -p 8081:8081 -e LIVEBOOK_PASSWORD="securesecret" <the Livebook CUDA image>

  8. I have set the environment variable (XLA_TARGET) in the Livebook settings; you can also pass it with -e when starting the container.

  9. You can test the speed by adding a simple Neural Network Task smart cell in your Livebook.
    You should see something like this:

|=============================================================| 100% (548.11 MB)

17:41:17.544 [info] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
17:41:17.545 [info] XLA service 0x7f267417a630 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
17:41:17.545 [info]   StreamExecutor device (0): NVIDIA GeForce RTX 3090, Compute Capability 8.6
17:41:17.545 [info] Using BFC allocator.
17:41:17.545 [info] XLA backend allocating 22049272627 bytes on device 0 for BFCAllocator.

|===============================================================| 100% (1.35 MB)
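If you would rather build your own image, as mentioned at the top of the thread, a minimal sketch could look like the following. The base image tag and the XLA_TARGET value are assumptions (use cuda118 for CUDA 11.8, cuda120 for CUDA 12.x), and the resulting image name `my-livebook-cuda` is just an example:

```shell
# Write a minimal Dockerfile that layers an XLA target on top of the
# official Livebook CUDA image (tag is an assumption -- check the
# published tags on ghcr.io/livebook-dev/livebook):
cat > Dockerfile <<'EOF'
FROM ghcr.io/livebook-dev/livebook:latest-cuda11.8

# Tell XLA/EXLA which CUDA version to target:
ENV XLA_TARGET=cuda118
EOF

# Build and run it with GPU access:
docker build -t my-livebook-cuda .
docker run --rm --gpus all -p 8080:8080 -p 8081:8081 \
  -e LIVEBOOK_PASSWORD="securesecret" my-livebook-cuda
```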


This image was not working for me, but there are already built versions of Livebook here:

In step 5, the curl commands (with the URLs from the NVIDIA docs) are:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
  && \
    sudo apt-get update
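In case the sed in that pipeline looks like magic: it rewrites each "deb https://..." line of the repository list so that apt verifies the repo against the dearmored NVIDIA keyring. A standalone sketch with an illustrative input line:

```shell
# The sed expression from step 5 injects a [signed-by=...] option after "deb":
echo 'deb https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /' |
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
# -> deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /
```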

Hey everyone,
This is an update for Ubuntu 24 and CUDA 12.1.

This time it worked with the standard NVIDIA drivers.
Then proceed to install the container toolkit as above.
The XLA target now works with cuda120.
Everything just works:

sudo docker run --rm --runtime=nvidia --gpus all -p 8080:8080 -p 8081:8081 -e LIVEBOOK_PASSWORD="securesecret" <the Livebook CUDA 12 image>
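The run command needs the image name at the end. Assuming Livebook's published CUDA 12 tag (the tag name is my assumption; check the repository for the exact current tag), the full invocation could look like:

```shell
# Full run command; the image tag is an assumption -- check
# ghcr.io/livebook-dev/livebook for the tags published for your release:
sudo docker run --rm --runtime=nvidia --gpus all \
  -p 8080:8080 -p 8081:8081 \
  -e LIVEBOOK_PASSWORD="securesecret" \
  ghcr.io/livebook-dev/livebook:latest-cuda12
```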