
Error registering a new package using CUDA #3001

@RXGottlieb

Description

I recently attempted to register a new package here with CUDA.jl as its only dependency, but registration fails during AutoMerge with the error: "ERROR: LoadError: CUDA driver not functional".
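For context, here is a minimal sketch (not my actual package code; the module and function names are made up) of how I understood GPU work should be deferred so that importing the package does not require a functional driver:

module GPUSketch

using CUDA

function double(x)
    if CUDA.functional()           # true only when a working driver and GPU are available
        d_x = CuArray(x)           # move the data to the GPU
        return Array(2 .* d_x)     # compute on the GPU and copy the result back
    else
        @warn "CUDA not functional; falling back to the CPU" maxlog=1
        return 2 .* x
    end
end

end # module

With that structure, the import alone should succeed even on a machine without a usable CUDA driver, and only calling double would touch the GPU.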

I am able to import my package without error on one machine:

julia> versioninfo()
Julia Version 1.12.3
Commit 966d0af0fd (2025-12-15 11:20 UTC)
Build Info:
  Official https://julialang.org release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 12 × Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
  WORD_SIZE: 64
  LLVM: libLLVM-18.1.7 (ORCJIT, skylake)
  GC: Built with stock GC
Threads: 4 default, 1 interactive, 4 GC (on 12 virtual cores)
Environment:
  JULIA_EDITOR = code
  JULIA_VSCODE_REPL = 1
  JULIA_NUM_THREADS = 4

julia> CUDA.versioninfo()
CUDA toolchain:
- runtime 13.0, artifact installation
- driver 580.92.0 for 13.0
- compiler 13.1

CUDA libraries:
- CUBLAS: 13.1.0
- CURAND: 10.4.0
- CUFFT: 12.0.0
- CUSOLVER: 12.0.4
- CUSPARSE: 12.6.3
- CUPTI: 2025.3.1 (API 13.0.1)
- NVML: 13.0.0+580.92

Julia packages:
- CUDA: 5.9.5
- CUDA_Driver_jll: 13.1.0+0
- CUDA_Compiler_jll: 0.3.0+1
- CUDA_Runtime_jll: 0.19.2+0

Toolchain:
- Julia: 1.12.3
- LLVM: 18.1.7

1 device:
  0: Quadro T2000 (sm_75, 2.837 GiB / 4.000 GiB available)

But the package fails to import on a second machine:

julia> versioninfo()
Julia Version 1.10.8
Commit 4c16ff44be (2025-01-22 10:06 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 12 × AMD Ryzen 5 6600H with Radeon Graphics
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 12 virtual cores)
Environment:
  JULIA_EDITOR = code
  JULIA_VSCODE_REPL = 1
  JULIA_NUM_THREADS = 0
 
julia> CUDA.versioninfo()
CUDA toolchain:
- runtime 13.1, artifact installation
- driver 591.59.0 for 13.1
- compiler 13.1
 
CUDA libraries:
- CUBLAS: 13.1.0
- CURAND: 10.4.0
- CUFFT: 12.0.0
- CUSOLVER: 12.0.4
- CUSPARSE: 12.6.3
- CUPTI: 2025.3.1 (API 13.0.1)
- NVML: 13.0.0+591.59
 
Julia packages:
- CUDA: 5.9.5
- CUDA_Driver_jll: 13.1.0+0
- CUDA_Compiler_jll: 0.3.0+1
- CUDA_Runtime_jll: 0.19.2+0
 
Toolchain:
- Julia: 1.10.8
- LLVM: 15.0.7
 
1 device:
  0: NVIDIA GeForce RTX 3050 Laptop GPU (sm_86, 2.688 GiB / 4.000 GiB available)

This second machine instead gives the error:

julia> import BatchPDLP
┌ Error: You are using CUDA 13.1.0, but CUDA.jl was precompiled for CUDA 13.0.0.
│ This is unexpected; please file an issue.
└ @ CUDA C:\Users\Robert\.julia\packages\CUDA\x8d2s\src\initialization.jl:148

I think this is because the GitHub runner and my second machine are both using CUDA 13.1, whereas it worked on my first machine with CUDA 13.0. Is there a way to force the GitHub CI to use a specific CUDA Toolkit version, or is there another solution to this?
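For what it's worth, on my own machines I can pin the runtime artifact per environment with set_runtime_version! (a documented CUDA.jl function), though I am not sure whether anything similar can influence the AutoMerge environment:

julia> using CUDA

julia> CUDA.set_runtime_version!(v"13.0")  # writes a "version" preference for CUDA_Runtime_jll; takes effect after restarting Julia

If I understand correctly, the resulting LocalPreferences.toml entry looks roughly like:

[CUDA_Runtime_jll]
version = "13.0"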
