
CUDA not detected. Transcription will run on CPU #1994

Open · cxp-13 opened this issue Jan 8, 2025 · 5 comments


cxp-13 commented Jan 8, 2025

Describe the bug

PS C:\Users\lanti> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Oct_30_01:18:48_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0

As you can see, I have installed CUDA, but when I run pnpm start it still shows the message "CUDA not detected. Transcription will run on CPU".
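
Note that nvcc --version above was run in Windows PowerShell, while the "CUDA not detected" check happens inside the Node process, so the toolkit PowerShell sees is not necessarily the one that process sees. One way to check what node-llama-cpp itself detects is its inspect command (a sketch; the inspect gpu subcommand comes from the node-llama-cpp v3 CLI and may differ in other versions):

# ask node-llama-cpp which GPU backends it can actually detect
npx --no node-llama-cpp inspect gpu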
cxp-13 added the bug label on Jan 8, 2025

cxp-13 (Author) commented Jan 8, 2025

This output shows that CUDA is available in my WSL2 environment:

cxp@R9000P:~/solana_learn/AI/eliza-starter$ nvidia-smi
Wed Jan  8 10:10:57 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.77.01              Driver Version: 566.36         CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3060 ...    On  |   00000000:01:00.0  On |                  N/A |
| N/A   59C    P8             13W /  130W |    2457MiB /   6144MiB |      6%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A        35      G   /Xwayland                                   N/A      |
+-----------------------------------------------------------------------------------------+
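
nvidia-smi reports the driver and the highest CUDA version the driver supports (12.7 here); it does not show whether the CUDA toolkit itself is installed inside the WSL distribution, which is what compiling llama.cpp needs. A quick check from inside WSL (a sketch; /usr/local/cuda is the default toolkit location and may differ on your system):

# is the CUDA toolkit visible inside WSL?
which nvcc || ls /usr/local/cuda/bin/nvcc
nvcc --version   # prints the toolkit version if it is installed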

daniellinuk commented

I got this working by creating a WSL config file on Windows so that Ubuntu can access my NVIDIA GPU (WSL reads these settings from %UserProfile%\.wslconfig):

[wsl2]
gpuSupport=true
memory=8GB
processors=4
kernel=C:\\temp\\myCustomKernel
swap=2GB
swapFile=C:\\temp\\wsl-swap.vhdx
localhostForwarding=true
nestedVirtualization=true
debugConsole=false

See if it works for you.
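
WSL only re-reads this configuration on startup, so all distributions have to be shut down for a change to take effect. A minimal sketch, run from Windows (standard wsl.exe commands; "Ubuntu" is an assumed distribution name, check yours with wsl -l):

# shut down WSL so the new config is picked up on the next start
wsl --shutdown
# reopen the distribution and confirm the GPU is visible
wsl -d Ubuntu nvidia-smi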


cxp-13 (Author) commented Jan 8, 2025

When I execute npx --no node-llama-cpp source download --gpu cuda, I get:

node:internal/modules/cjs/loader:1413
  throw err;
  ^

Error: Cannot find module '../'
Require stack:
- /home/cxp/solana_learn/AI/eliza/node_modules/node-llama-cpp/node_modules/.bin/cmake-js
    at Function._resolveFilename (node:internal/modules/cjs/loader:1410:15)
    at defaultResolveImpl (node:internal/modules/cjs/loader:1061:19)
    at resolveForCJSWithHooks (node:internal/modules/cjs/loader:1066:22)
    at Function._load (node:internal/modules/cjs/loader:1215:37)
    at TracingChannel.traceSync (node:diagnostics_channel:322:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:234:24)
    at Module.require (node:internal/modules/cjs/loader:1496:12)
    at require (node:internal/modules/helpers:135:16)
    at Object.<anonymous> (/home/cxp/solana_learn/AI/eliza/node_modules/node-llama-cpp/node_modules/.bin/cmake-js:5:21)
    at Module._compile (node:internal/modules/cjs/loader:1740:14) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [
    '/home/cxp/solana_learn/AI/eliza/node_modules/node-llama-cpp/node_modules/.bin/cmake-js'
  ]
}

Node.js v23.5.0

[node-llama-cpp] To resolve errors related to CUDA compilation, see the CUDA guide: https://node-llama-cpp.withcat.ai/guide/CUDA
✖️ Failed to compile llama.cpp
Failed to build llama.cpp with CUDA support. Error: SpawnError: Command npm run -s cmake-js-llama -- clean --log-level warn --out localBuilds/linux-x64-cuda exited with code 1
    at createError (file:///home/cxp/solana_learn/AI/eliza/node_modules/node-llama-cpp/dist/utils/spawnCommand.js:34:20)
    at ChildProcess.<anonymous> (file:///home/cxp/solana_learn/AI/eliza/node_modules/node-llama-cpp/dist/utils/spawnCommand.js:47:24)
    at ChildProcess.emit (node:events:513:28)
    at ChildProcess._handle.onexit (node:internal/child_process:294:12)
node-llama-cpp source download

Download a release of `llama.cpp` and compile it
https://node-llama-cpp.withcat.ai/cli/source/download
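
The stack trace shows the cmake-js shim in node_modules/.bin failing to require '../' relative to itself, which usually points to a broken or partially installed node_modules tree rather than to CUDA itself. A clean reinstall before retrying is a common first step (a sketch, not a confirmed fix for this thread; the project path is taken from the trace above):

cd ~/solana_learn/AI/eliza
rm -rf node_modules                                  # drop the possibly corrupted install
pnpm install                                         # reinstall dependencies
npx --no node-llama-cpp source download --gpu cuda   # retry the CUDA build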

daniellinuk commented

Maybe try running it from the root directory /

AIFlowML added the Need Feedback label and removed the bug label on Jan 9, 2025

AIFlowML (Collaborator) commented Jan 9, 2025

This is not a bug in the Eliza framework.
Waiting for feedback from the user who was unable to get CUDA recognized.
