
Managing Multiple CUDA Versions on a Single Machine: A Comprehensive Guide

How to Handle Different CUDA Versions in Your Development Environment

Photo by Nikola Majksner on Unsplash

In one of my previous roles as an AI consultant, I used virtual environments to manage and isolate Python environments. Because the project relied on GPU acceleration, I ran into a situation where the installed CUDA version differed from the version the project required. To address this, I had to install the necessary CUDA version and configure my environment to use it without impacting the system’s CUDA setup. To the best of my knowledge, comprehensive, end-to-end tutorials addressing this specific need are scarce, so this tutorial serves as a resource for anyone seeking to safely manage multiple CUDA Toolkit versions within their projects.

Table of contents:

· 1. Introduction
· 2. Available CUDA versions
· 3. Download and Extract the binaries
· 4. Install CUDA toolkit
· 5. Project setup
· 6. Conclusion

1. Introduction

Installing multiple versions of the CUDA Toolkit on a single machine can have several consequences:

- It may lead to conflicts in the system PATH and environment variables. If not managed correctly, these conflicts can affect which CUDA version is used by default (a quick way to inspect this is shown right after this list).
- It may require specific GPU driver versions for optimal performance and compatibility. Installing a new version might necessitate updating your GPU driver.
- Some libraries and software may depend on a specific CUDA version. Installing a new version could disrupt compatibility with these dependencies.
- Applications that rely on CUDA may need adjustments to work with the new version. Incompatibilities can cause errors or unexpected behavior.
- Incorrectly managing multiple CUDA versions can lead to system instability or errors in GPU-accelerated applications.
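
To see which toolkit your shell currently resolves, here is a minimal check. It assumes the common Linux layout where the toolkits live under /usr/local and that nvcc is already on your PATH; adapt the paths if your setup differs:

which nvcc                               # the nvcc binary the shell finds first
echo $PATH | tr ':' '\n' | grep -i cuda  # CUDA directories currently on the PATH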

Therefore, to safely manage multiple CUDA Toolkit versions for your project, follow these steps:

1. Check the system’s current CUDA version.
2. Download and extract the binaries of the desired version.
3. Execute the installer to install only the toolkit.

In this tutorial, I will provide a detailed, step-by-step example of how to accomplish this. Additionally, I will guide you through setting up your virtual environment after the binaries are successfully installed.

2. Available CUDA versions

Let’s see which CUDA version is currently used by the system by running the nvidia-smi command:
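
$ nvidia-smi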

In my case, the output header reports CUDA Version: 12.1.

Now let’s display the CUDA versions available on my machine:

$ ls /usr/local/ | grep cuda
cuda
cuda-11.7
cuda-12
cuda-12.1

I have three different versions available on my machine.
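
The unversioned cuda entry is typically a symlink to whichever toolkit the system treats as the default. Assuming the standard /usr/local layout, you can check where it points:

ls -l /usr/local/cuda   # prints something like: /usr/local/cuda -> /usr/local/cuda-12.1 (illustrative)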

3. Download and Extract the binaries

Suppose the project I’ll be working on requires CUDA Toolkit version 11.8. To obtain it, we begin by visiting the NVIDIA CUDA Toolkit Archive website and locating the specific version the project demands. It’s important to select the version compatible with our operating system. In my case, I chose the target platform:

My target platform: Linux — x86_64 — Ubuntu — 22.04

Choose the ‘runfile (local)’ installer type that corresponds to your operating system; this file typically carries a .run extension. When you select runfile (local), the website provides the installation instructions. In my case, the provided instructions are as follows:

wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run

However, keep in mind that we should not run the installer with the second command as-is: a newer CUDA version and its driver are already in place, and a default installation could interfere with them. For now, we only need the first command, which downloads the file:

wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run

The download can be verified by comparing the MD5 checksum posted at this link with that of the downloaded file.
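
For example, you can compute the checksum of the downloaded file with md5sum and compare it against the value listed on that page:

md5sum cuda_11.8.0_520.61.05_linux.run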

“A local installer is self-contained. It is a large file that only needs to be downloaded from the internet once and can be installed on multiple systems. Local installers are the recommended type of installer with low-bandwidth internet connections, or where using a network installer is not possible (such as due to firewall restrictions).” [1]

At this stage, open a terminal, navigate to the directory containing the CUDA runfile, and make it executable:

chmod +x cuda_11.8.0_520.61.05_linux.run

4. Install CUDA toolkit

Now, we run the CUDA runfile with the --silent and --toolkit flags to perform a silent installation of the CUDA Toolkit:

sudo ./cuda_11.8.0_520.61.05_linux.run --silent --toolkit

Where:

- --silent: performs the installation with no further user input and minimal command-line output.
- --toolkit: installs only the CUDA Toolkit and keeps your current drivers.

If you’re asked to accept the agreement, accept it to proceed with installation.
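
The runfile also accepts a number of other flags (for example, for installing to a non-default location). If you want to check exactly which options your version of the installer supports, it can list them itself:

sh cuda_11.8.0_520.61.05_linux.run --help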

At this point, the CUDA Toolkit binaries are installed. We can verify this by running the earlier command again:

$ ls /usr/local/ | grep cuda
cuda
cuda-11.7
cuda-11.8
cuda-12
cuda-12.1

As you can see, cuda-11.8 is now available on my machine, and the system’s current version remains the same (you can confirm this by running nvidia-smi again).

These steps install the binaries of the required CUDA version. In the next section, I’ll show you how to set up your project to use it.

5. Project setup

When working with several projects, it’s recommended to use virtual environments. We start by creating one; in my case, Python 3.8 was required. The following commands create an environment named my_env inside venv, the folder where I keep my virtual environments, and then activate it:

python3.8 -m venv venv/my_env
source venv/my_env/bin/activate
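
As a quick sanity check that the environment is active and using the expected interpreter (this assumes python3.8 is installed on your system):

which python      # should point inside venv/my_env/bin
python --version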

Let’s see which CUDA version the environment is currently using:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

As you can see, the created environment isn’t using the required CUDA version, so we need to set it manually by updating the environment’s activate script with the following lines. Prepending the cuda-11.8 paths ensures its compiler and libraries are found before the system defaults:

export PATH=/usr/local/cuda-11.8/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH

You can update the activate script using your favorite editor, or simply run the following commands to append the lines to the end of the file. Note the single quotes: they ensure that $PATH and $LD_LIBRARY_PATH are expanded when the environment is activated, not when the lines are written:

echo 'export PATH=/usr/local/cuda-11.8/bin:$PATH' >> venv/my_env/bin/activate
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH' >> venv/my_env/bin/activate
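
Some build tools that compile CUDA extensions also read a CUDA_HOME environment variable. Whether your project needs it is an assumption about your tooling rather than part of the steps above, but if it does, it can be appended the same way:

echo 'export CUDA_HOME=/usr/local/cuda-11.8' >> venv/my_env/bin/activate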

Finally, we need to reactivate the environment and run the nvcc command again:

$ source venv/my_env/bin/activate
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

That’s it! The project is now configured to use the required CUDA version without any conflicts.

6. Conclusion

By following the steps outlined in this tutorial, you can maintain multiple CUDA versions on your system without conflicts between installations. This flexibility allows each project to use the exact CUDA version it requires, simply by configuring the environment variables of its virtual environment.

Thank you for reading. I hope you enjoyed this tutorial. If you appreciate my tutorials, please support me by following and subscribing. This way, you’ll receive notifications about my new articles. If you have any questions or suggestions, please feel free to leave a comment below.

References

[1] https://developer.nvidia.com/cuda-12-2-2-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local

Image credits

All images and figures in this article whose source is not mentioned in the caption are by the author.

