PyVista within a Docker Container#

You can use PyVista from within a Docker container with JupyterLab. To launch a local container, install Docker, then pull and run the image with:

docker run -p 8888:8888 ghcr.io/pyvista/pyvista:latest

Finally, open the link shown in the terminal output and start playing around with PyVista in JupyterLab. For example:

To access the notebook, open this file in a browser:
Or copy and paste one of these URLs:


You can see the latest tags of our Docker containers here. ghcr.io/pyvista/pyvista:latest has JupyterLab set up, while ghcr.io/pyvista/pyvista:latest-slim is a lightweight Python environment without Jupyter.
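For a quick check without JupyterLab, the slim image can run Python directly. This is a sketch that assumes python is on the image's PATH:

```shell
# Pull the lightweight image (no JupyterLab) and verify PyVista imports.
docker pull ghcr.io/pyvista/pyvista:latest-slim
docker run --rm ghcr.io/pyvista/pyvista:latest-slim \
    python -c "import pyvista; print(pyvista.__version__)"
```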


You may need to log into the GitHub Container Registry by following the directions at Working with the Docker registry.
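For example, following GitHub's registry documentation, you can log in with a personal access token. Here YOUR_GITHUB_USERNAME and the CR_PAT variable are placeholders for your own username and token:

```shell
# Authenticate to the GitHub Container Registry using a personal access
# token (with the read:packages scope) stored in $CR_PAT.
echo $CR_PAT | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin
```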

Create your own Docker Container with PyVista#

Clone PyVista and cd into the docker directory to build your own customized Docker image:

git clone https://github.com/pyvista/pyvista
cd pyvista/docker
docker build -t $IMAGE .  # $IMAGE should be your registry/name:tag
docker push $IMAGE

If you wish to have off-screen GPU support when rendering in JupyterLab, see the notes about building with EGL at Building VTK, or use the custom, pre-built wheels at Release 0.27.0. Install that customized vtk wheel into your Docker image by modifying pyvista/docker/jupyter.Dockerfile with:

COPY vtk-9.0.20201105-cp38-cp38-linux_x86_64.whl /tmp/
RUN pip install /tmp/vtk-9.0.20201105-cp38-cp38-linux_x86_64.whl

Additionally, you must install GPU drivers on the Docker image matching the driver version running on the host machine. For example, if you are running on Azure Kubernetes Service and the GPU nodes on the Kubernetes cluster are running driver 450.51.06, you must install the same version on your image. Since you will be using the host's kernel module, there is no reason to build it in the container (and trying will only result in an error).

COPY NVIDIA-Linux-x86_64-450.51.06.run nvidia_drivers.run
RUN sudo apt-get install kmod libglvnd-dev pkg-config -yq
RUN sudo sh nvidia_drivers.run -s --no-kernel-module
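As a sanity check, you can compare the host's kernel driver version with the user-space driver baked into the image. A minimal sketch in Python, assuming both strings come from running `nvidia-smi --query-gpu=driver_version --format=csv,noheader` on the host and in the container respectively:

```python
def drivers_match(host_version: str, image_version: str) -> bool:
    """Return True when the two NVIDIA driver version strings are identical.

    The kernel module (host) and user-space libraries (image) must be the
    exact same release, so a plain string comparison is sufficient.
    """
    return host_version.strip() == image_version.strip()


print(drivers_match("450.51.06", "450.51.06"))  # True
print(drivers_match("450.51.06", "460.32.03"))  # False
```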

To verify that you’re rendering on a GPU, first check the output of nvidia-smi. You should get something like:

$ nvidia-smi
Sun Nov  8 05:48:46 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000001:00:00.0 Off |                    0 |
| N/A   34C    P8    32W / 149W |   1297MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Note the driver version (which is actually the kernel driver version), and verify that it matches the version you installed on your Docker image.

Finally, check that your render window is using NVIDIA by calling ReportCapabilities() on it:

>>> import pyvista
>>> pl = pyvista.Plotter()
>>> print(pl.render_window.ReportCapabilities())

OpenGL vendor string:  NVIDIA Corporation
OpenGL renderer string:  Tesla K80/PCIe/SSE2
OpenGL version string:  4.6.0 NVIDIA 450.51.06
OpenGL extensions:
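If you want to automate this check, you can inspect the capabilities text for the NVIDIA vendor string. The helper below is hypothetical, not part of PyVista; it simply parses the text returned by ReportCapabilities():

```python
def is_nvidia_renderer(capabilities: str) -> bool:
    """Return True when the OpenGL vendor string reports NVIDIA."""
    for line in capabilities.splitlines():
        if line.startswith("OpenGL vendor string:"):
            return "NVIDIA" in line
    return False


# Sample output taken from the documentation above.
sample = """OpenGL vendor string:  NVIDIA Corporation
OpenGL renderer string:  Tesla K80/PCIe/SSE2
OpenGL version string:  4.6.0 NVIDIA 450.51.06"""
print(is_nvidia_renderer(sample))  # True
```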

If you get the error "display id not set", then your environment is likely not set up correctly.
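That error usually means no X display is reachable from the process. A quick sanity check (a hypothetical helper, not part of PyVista) is to look for the DISPLAY environment variable before creating a plotter:

```python
import os


def has_display() -> bool:
    """Return True when an X display appears to be available."""
    return bool(os.environ.get("DISPLAY"))


if not has_display():
    print("No DISPLAY set; start an X server or use off-screen rendering.")
```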