Some questions about running this demo #9
If you want to use GPT-4o, enter the GPT-4o API key and click Submit; a success indicator will appear in the input box, and from then on you can keep using GPT-4o.
You can also check the paths of the different VLMs in this file: https://github.com/TencentARC/BrushEdit/blob/main/app/src/vlm_template.py
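(If you point the demo at a local checkpoint, a minimal sketch for sanity-checking that the path listed in vlm_template.py actually resolves before selecting it in the UI; the path below is a placeholder, not the repo's actual default:)

```python
import os

# Hypothetical local checkpoint path; substitute the path listed
# for your model in app/src/vlm_template.py.
model_path = "/path/to/Qwen2-VL-7B-Instruct"

# Confirm the checkpoint directory exists and contains a config file,
# so the Gradio app can load it locally instead of calling a remote API.
assert os.path.isdir(model_path), f"missing checkpoint dir: {model_path}"
assert os.path.isfile(os.path.join(model_path, "config.json")), "config.json not found"
```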
OK, I can use Qwen2-VL-7B-Instruct (Default), but now when I run this code locally and type the prompt "remove the frog", I get an error from ./src/brushedit_all_in_one_pipeline.py.
I can use Qwen-VL and enter "remove the frog" on both the local and online demos without issue. Please provide more information so that I can reproduce this error. It seems the image was not loaded correctly, or the correct size was not specified.
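(As a quick way to rule out the loading/size issue mentioned above, a minimal sketch using Pillow; the file name is a placeholder:)

```python
from PIL import Image

# Hypothetical input file; replace with the image you upload to the demo.
img = Image.open("frog.png").convert("RGB")
print(img.size, img.mode)  # e.g. (512, 512) RGB

# Many diffusion pipelines expect dimensions divisible by 8;
# resize explicitly if yours are not.
w, h = img.size
img = img.resize((w // 8 * 8, h // 8 * 8))
```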
OK, I will check and fix it. Please wait a moment. Thank you.
You should click 'Dilation Generated Mask' to dilate the mask correctly, preventing frog information from leaking into the masked area.
Is there no way to generate the correct image by directly clicking run? |
In fact, I can automatically dilate the mask by an appropriate amount during the remove operation, and I will update the code accordingly. This will improve the automation. Stay tuned, thank you.
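(For anyone who wants this behavior before the update lands, a minimal sketch of dilating a binary mask with OpenCV; the kernel size is a guess, and BrushEdit's own implementation may differ:)

```python
import cv2
import numpy as np

# Load the generated mask as a single-channel binary image.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Dilate so the mask fully covers the object (here, the frog),
# preventing its pixels from leaking into the inpainted region.
kernel = np.ones((15, 15), np.uint8)  # kernel size is a guess; tune as needed
dilated = cv2.dilate(mask, kernel, iterations=1)
cv2.imwrite("mask_dilated.png", dilated)
```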
I followed your instructions and completed the environment configuration and model download step by step. I can now successfully launch the Gradio page, but when I upload an image, enter the prompt, and click Run, I keep getting the error "gradio.exceptions.Error: 'Please select the correct VLM model and input the correct API Key first!'". I selected my local llama3-llava-next-8b-hf model, but it still throws this error. By the way, the same error now appears on the Hugging Face demo.
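(To rule out a model-loading problem rather than a genuinely missing API key, a minimal sketch that loads the same checkpoint directly with transformers, outside the Gradio app; the local path is a placeholder:)

```python
import torch
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

# Hypothetical local path to the checkpoint selected in the demo.
model_path = "/path/to/llama3-llava-next-8b-hf"

processor = LlavaNextProcessor.from_pretrained(model_path)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)
print("loaded:", model.config.model_type)  # should print without raising
```

If this script loads the model cleanly, the checkpoint itself is fine and the error likely comes from the demo's model-selection logic rather than your setup.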