Release v0.3.0 #711
Conversation
Installing the addon on Windows 11 gives the following error, which is likely due to the filepaths becoming too long (278 characters, which is over the 255-character maximum):
This happens even though I have long filepaths enabled in Windows 11. A workaround is to manually unpack the .zip and copy the resulting directory over, which might hint that Python is the culprit here.
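For reference, a common Python-side workaround is to hand Windows an extended-length path (the "\\?\" prefix) before extracting, which bypasses the classic MAX_PATH limit. The sketch below is only a hypothetical illustration; the helper name and the way dream-textures actually extracts the add-on are assumptions, not the project's code:

```python
# Hypothetical sketch: Windows' classic MAX_PATH limit rejects paths longer
# than ~260 characters unless they carry the extended-length prefix "\\?\"
# and are absolute. This is not the addon's actual install code.
import os
import sys
import zipfile

def extract_with_long_paths(archive_path: str, destination: str) -> None:
    """Extract a zip archive into a destination that may produce very long paths."""
    destination = os.path.abspath(destination)
    if sys.platform == "win32" and not destination.startswith("\\\\?\\"):
        # Prefix the absolute destination so the Win32 API accepts long paths.
        destination = "\\\\?\\" + destination
    with zipfile.ZipFile(archive_path) as archive:
        archive.extractall(destination)
```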
When I install and use the addon only with stabilityai/stable-diffusion-xl-base-1.0, I get the following error:
The above exception was the direct cause of the following exception: Traceback (most recent call last): ...
The error can be resolved by installing stabilityai/stable-diffusion-xl-refiner-1.0 as well and checking the box in the UI to use the refiner.
Rendering from the command line also fails. I'm not sure whether this only happens with this release, so I created a separate issue: #713
@GottfriedHofmann I wasn't able to replicate the refiner issue. Could you share more information on your configuration?
I've found the issue. This branch is only taken when offloading is enabled or the device has enough memory to comfortably hold the SDXL base and refiner together. The error will occur whenever a refiner isn't selected and either of those conditions is met. dream-textures/generator_process/actions/prompt_to_image.py Lines 54 to 55 in fb1d1d0
It expects a tuple, but load_model() only returns a single pipeline because sdxl_refiner_model is None. Either this branch also needs to check whether sdxl_refiner_model is None, or load_model() needs a separate default value to determine whether it returns a tuple.
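A minimal sketch of the shape of the fix being described, assuming hypothetical wrapper names; the real load_model() signature and call site in prompt_to_image.py may differ:

```python
# Hypothetical sketch of the branch fix: only unpack a tuple when a refiner
# model was actually requested, otherwise treat the single pipeline as the base.
def get_pipelines(load_model, sdxl_model, sdxl_refiner_model):
    if sdxl_refiner_model is not None:
        # With a refiner selected, load_model returns (base, refiner).
        base, refiner = load_model(sdxl_model, sdxl_refiner_model)
    else:
        # Without a refiner, load_model returns a single pipeline; unpacking
        # it as a tuple here is what raises the error reported above.
        base, refiner = load_model(sdxl_model), None
    return base, refiner
```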
It is fixed, thanks to @NullSenseStudio. I think the issue came up because my RTX 3090 has 24 GB of VRAM and can therefore hold the entire model.