CUDA

CUDA Toolkit 13.0 Update 2 has been installed on Mannitol, which has an RTX 3050. Most of these steps follow the official Linux installation guide at https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#
== Installation ==

# Follow the pre-installation actions from https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#
# Select the target platform at https://developer.nvidia.com/cuda-downloads?target_os=Linux and run the commands it generates (the Debian 12 local-installer commands are reproduced below). nvcc will be installed at /usr/local/cuda-[VERSION]/bin/nvcc
<syntaxhighlight lang="bash" line="1" start="1">
wget https://developer.download.nvidia.com/compute/cuda/13.0.2/local_installers/cuda-repo-debian12-13-0-local_13.0.2-580.95.05-1_amd64.deb
sudo dpkg -i cuda-repo-debian12-13-0-local_13.0.2-580.95.05-1_amd64.deb
sudo cp /var/cuda-repo-debian12-13-0-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-13-0
sudo apt-get install -y nvidia-open
reboot
</syntaxhighlight>
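
After the reboot, it is worth confirming that the open kernel driver actually loaded before moving on. This is not one of the steps above, just a common sanity check using nvidia-smi, which ships with the driver:
<syntaxhighlight lang="bash">
# Should list the RTX 3050 together with the driver and CUDA versions.
# An error such as "NVIDIA-SMI has failed" usually means the driver did not load.
nvidia-smi
</syntaxhighlight>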

== Post Installation (follows from https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions) ==

# If nvcc --version already prints a version, skip ahead to step 3.
# Otherwise, run the following commands to add nvcc to PATH; nvcc --version should work afterwards. Note that these exports only apply to the current shell session, so add them to your shell profile if you want them to persist.
<syntaxhighlight lang="bash">
export PATH=${PATH}:/usr/local/cuda-13.0/bin
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-13.0/lib64
</syntaxhighlight>
3. Install the writable samples from https://github.com/nvidia/cuda-samples and follow the build instructions in the repository's README. A possible build sequence is sketched below.
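
The cuda-samples repository builds with CMake; the exact commands may differ between releases, so treat this as a rough sketch and defer to the README if it disagrees:
<syntaxhighlight lang="bash">
git clone https://github.com/nvidia/cuda-samples.git
cd cuda-samples
mkdir build && cd build
cmake ..            # configure; picks up nvcc from PATH
make -j$(nproc)     # build the samples, including deviceQuery
</syntaxhighlight>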

4. Run deviceQuery from cuda-samples/build/Samples/1_Utilities/deviceQuery. You should get output similar to the below; if not, the installation has not worked. The bandwidth test has been deprecated, so there is no need to run it.
<syntaxhighlight>
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce RTX 3050"
  CUDA Driver Version / Runtime Version          13.0 / 13.0
  CUDA Capability Major/Minor version number:    8.6
  Total amount of global memory:                 5804 MBytes (6086262784 bytes)
  (018) Multiprocessors, (128) CUDA Cores/MP:    2304 CUDA Cores
  GPU Max Clock rate:                            1477 MHz (1.48 GHz)
  Memory Clock rate:                             7001 Mhz
  Memory Bus Width:                              96-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        102400 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 129 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 13.0, CUDA Runtime Version = 13.0, NumDevs = 1
Result = PASS
</syntaxhighlight>
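
As a final end-to-end check of the toolkit (this goes beyond the official post-installation steps), you can compile and run a minimal CUDA program with nvcc. The file name and kernel below are purely illustrative:
<syntaxhighlight lang="cuda">
// hello.cu - hypothetical minimal test; build with: nvcc hello.cu -o hello
#include <cstdio>

__global__ void hello_kernel() {
    // Device-side printf is supported on compute capability 2.0+, so it works on the RTX 3050 (8.6).
    printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
}

int main() {
    hello_kernel<<<2, 4>>>();                   // launch 2 blocks of 4 threads
    cudaError_t err = cudaDeviceSynchronize();  // wait for the kernel and surface any launch errors
    if (err != cudaSuccess) {
        fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    return 0;
}
</syntaxhighlight>
If the installation is healthy, running ./hello should print eight hello lines and exit with status 0.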
