QWEN2 VL does not work? #278
Version: 0.3.6 Build 3
Model: bartowski/Qwen2-VL-7B-Instruct-GGUF

I get the error "Model does not support images", which is odd given that this is a vision model and LM Studio claims to support it. The same model works fine in llama.cpp, so it is not a corrupted-file issue.

Comments

@YorkieDev It doesn't matter, it happens with all types/sizes. The problem is that LM Studio does not send the prompt to the model because it thinks the model is not a vision model, even though it shows the eye icon on the model list page, which is very weird.

Can you try loading it with a larger context, at least 1600? Did you get this error at any point?

Ok, now it works when I set the context size to 2048. Thanks.
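For anyone who wants to double-check that image input is actually reaching the model after raising the context to 2048, here is a minimal sketch that queries the loaded model through LM Studio's OpenAI-compatible local server. It assumes the server is running on the default port 1234; the model identifier, API key placeholder, and image path are assumptions and should be adjusted to your setup.

```python
import base64

from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; the key just needs to be non-empty.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Encode a local test image as a data URI (path is a placeholder).
with open("test.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="qwen2-vl-7b-instruct",  # assumed identifier; check the model list in LM Studio
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```

If the model is being treated as text-only, this request is where the "Model does not support images" error would surface; with the context set to 2048 it should return a normal description instead.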