Hybrid gradio error #15
Hi! It appears that the error is related to the CUDA version. Can you please tell me which version of the CUDA Toolkit you've installed? We have tested the demo using version 11.7.
Thank you for your reply! I installed CUDA 11.7 on a GPU server and still encountered the issue above. Are there other versions of CuPy that might work?
This is a bit weird, as version 11.7 functions properly on my device. I discovered similar problems here, with one assertion being that CuPy might alter torch.cuda.current_device() to 0. Does your system have multiple GPUs, and are you specifying one other than the first one (cuda:0)? To better resolve the issue, could you give specifics about:
Thank you!
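If multiple GPUs are involved, a quick way to rule out the device-mismatch theory above is to pin both torch and CuPy to the same device before the splatting call. This is only a sketch: `device_index` is a hypothetical helper written for this example, and the commented-out calls assume `torch` and `cupy` are installed on a CUDA machine.

```python
def device_index(device_str: str) -> int:
    """Parse a torch-style device string such as 'cuda:1' into an integer index."""
    return int(device_str.split(":")[1]) if ":" in device_str else 0

# Hypothetical usage on a CUDA machine (requires torch and cupy):
#   import torch, cupy
#   idx = device_index("cuda:1")
#   torch.cuda.set_device(idx)       # pin torch to the chosen GPU
#   with cupy.cuda.Device(idx):      # keep CuPy's kernel launch on the same GPU
#       out = softsplat(tenIn=..., tenFlow=..., tenMetric=None, strMode='avg')
```

If CuPy silently falls back to device 0 while torch tensors live on another GPU, the compiled kernel can target the wrong device, which would be consistent with the symptoms described.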
Closing this since there's been no activity.
Same issue on my 4090 machine. absl-py 2.1.0
I think this may be caused by insufficient GPU memory on the 4090 (24 GB). Can you switch to a smaller resolution (e.g., 384x384) or a shorter frame length (e.g., 14 or fewer) to see whether the error still reproduces? Thank you.
Regardless of the initial resolution of the input image, the
Set `target_size = 216`; full log attached:
nvcc --version
cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
nvidia-smi
Hmm... This is a bit tough, I haven't come across such problems in the past. Regrettably, I'm unable to identify the cause at the moment. Despite extensive online research, I've found very few instances that resemble this one 😔. |
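One pattern worth ruling out for `CUDA_ERROR_INVALID_SOURCE` is a mismatch between the installed CuPy wheel (e.g. `cupy-cuda117`) and the toolkit that `nvcc` actually reports. The following is a minimal sketch of such a check, assuming CuPy's `cupy-cudaXY` wheel naming convention; `parse_nvcc_release` and `check_toolkit` are helpers invented for this example, not part of the project.

```python
import re
import subprocess


def parse_nvcc_release(nvcc_output: str) -> str:
    """Extract the 'release X.Y' toolkit version from `nvcc --version` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    if m is None:
        raise ValueError("no 'release X.Y' found in nvcc output")
    return m.group(1)


def check_toolkit(expected: str = "11.7") -> None:
    """Warn if the local toolkit does not match the CUDA version the CuPy wheel targets."""
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
    found = parse_nvcc_release(out)
    if found != expected:
        print(f"toolkit {found} does not match the cupy-cuda{expected.replace('.', '')} wheel")
```

For example, `cupy-cuda117` paired with an `nvcc` that reports `release 11.8` (or a toolkit too old to target the 4090's sm_89 architecture) could plausibly produce an invalid device kernel image.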
Maybe the issue is: but other images (with cuda118) work fine.
The same issue.
Same issue.
Hi,
Thank you for your great work. I ran into the following error in my environment. Could you please help me look into it? Thank you.
Traceback (most recent call last):
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 456, in call_prediction
output = await route_utils.call_process_api(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/blocks.py", line 1522, in process_api
result = await self.call_function(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/blocks.py", line 1144, in call_function
prediction = await anyio.to_thread.run_sync(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/utils.py", line 674, in wrapper
response = f(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/run_gradio_audio_driven.py", line 860, in run
outputs = self.forward_sample(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/run_gradio_audio_driven.py", line 452, in forward_sample
val_output = self.pipeline(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/pipeline/pipeline.py", line 454, in call
down_res_face_tmp, mid_res_face_tmp, controlnet_flow, _ = self.face_controlnet(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/ldmk_ctrlnet.py", line 446, in forward
warped_cond_feature, occlusion_mask = self.get_warped_frames(cond_feature, scale_flows[fh // ch], fh // ch)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/ldmk_ctrlnet.py", line 300, in get_warped_frames
warped_frame = softsplat(tenIn=first_frame.float(), tenFlow=flows[:, i].float(), tenMetric=None, strMode='avg').to(dtype) # [b, c, w, h]
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/softsplat.py", line 251, in softsplat
tenOut = softsplat_func.apply(tenIn, tenFlow)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/torch/cuda/amp/autocast_mode.py", line 106, in decorate_fwd
return fwd(*args, **kwargs)
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/softsplat.py", line 284, in forward
cuda_launch(cuda_kernel('softsplat_out', '''
File "cupy/_util.pyx", line 67, in cupy._util.memoize.decorator.ret
File "/autodl-fs/data/yt/MOFA-Video-Hybrid/models/softsplat.py", line 225, in cuda_launch
return cupy.cuda.compile_with_cache(objCudacache[strKey]['strKernel'], tuple(['-I ' + os.environ['CUDA_HOME'], '-I ' + os.environ['CUDA_HOME'] + '/include'])).get_function(objCudacache[strKey]['strFunction'])
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 464, in compile_with_cache
return _compile_module_with_cache(*args, **kwargs)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 492, in _compile_module_with_cache
return _compile_with_cache_cuda(
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/cupy/cuda/compiler.py", line 561, in _compile_with_cache_cuda
mod.load(cubin)
File "cupy/cuda/function.pyx", line 264, in cupy.cuda.function.Module.load
File "cupy/cuda/function.pyx", line 266, in cupy.cuda.function.Module.load
File "cupy_backends/cuda/api/driver.pyx", line 210, in cupy_backends.cuda.api.driver.moduleLoadData
File "cupy_backends/cuda/api/driver.pyx", line 60, in cupy_backends.cuda.api.driver.check_status
cupy_backends.cuda.api.driver.CUDADriverError: CUDA_ERROR_INVALID_SOURCE: device kernel image is invalid
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 501, in process_events
response = await self.call_prediction(awake_events, batch)
File "/root/miniconda3/envs/mofa/lib/python3.10/site-packages/gradio/queueing.py", line 465, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None