[Issue]: Unable to disable flux tools scripts without restarting the server #3715

Open · 2 tasks done
SAC020 opened this issue Jan 21, 2025 · 7 comments
Labels
backlog Valid issue but requires non-trivial work and is placed in backlog

Comments


SAC020 commented Jan 21, 2025

Issue Description

  1. Load SD.Next, load Flux dev
  2. Activate Flux tools => Canny (or Depth; same behavior)
  3. Generate image => runs correctly
  4. Set Flux tools script => none, Script => none
  5. Generate image => fails

Basically, the Flux Canny or Depth model remains loaded; disabling the scripts does not revert to Flux dev, even though the model dropdown still lists dev as active. Reverting to dev requires loading another model and then reloading Flux dev, which eventually cascades into a CUDA VRAM error that forces a server restart.


Version Platform Description

07:09:37-332767 INFO Python: version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\python.exe"
venv="C:\ai\automatic\venv"
07:09:37-569321 INFO Version: app=sd.next updated=2025-01-16 hash=e22d0789 branch=master
url=https://github.com/vladmandic/automatic/tree/master ui=main
07:09:38-234406 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
release=Windows-10-10.0.26100-SP0 python=3.11.9 docker=False
07:09:38-237174 DEBUG Packages: venv=venv site=['venv', 'venv\\Lib\\site-packages']

Relevant log output

PS C:\ai\automatic> .\webui.bat --debug
Using VENV: C:\ai\automatic\venv
07:09:37-328484 INFO     Starting SD.Next
07:09:37-331413 INFO     Logger: file="C:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
07:09:37-332767 INFO     Python: version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\python.exe"
                         venv="C:\ai\automatic\venv"
07:09:37-569321 INFO     Version: app=sd.next updated=2025-01-16 hash=e22d0789 branch=master
                         url=https://github.com/vladmandic/automatic/tree/master ui=main
07:09:38-234406 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
                         release=Windows-10-10.0.26100-SP0 python=3.11.9 docker=False
07:09:38-237174 DEBUG    Packages: venv=venv site=['venv', 'venv\\Lib\\site-packages']
07:09:38-238870 INFO     Args: ['--debug']
07:09:38-239758 DEBUG    Setting environment tuning
07:09:38-240963 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
07:09:38-251801 DEBUG    Torch overrides: cuda=False rocm=False ipex=False directml=False openvino=False zluda=False
07:09:38-264947 INFO     CUDA: nVidia toolkit detected
07:09:38-445916 INFO     Install: verifying requirements
07:09:38-496923 DEBUG    Timestamp repository update time: Thu Jan 16 18:54:17 2025
07:09:38-498579 INFO     Startup: standard
07:09:38-499081 INFO     Verifying submodules
07:09:41-637125 DEBUG    Git submodule: extensions-builtin/sd-extension-chainner / main
07:09:41-738644 DEBUG    Git submodule: extensions-builtin/sd-extension-system-info / main
07:09:41-842401 DEBUG    Git submodule: extensions-builtin/sd-webui-agent-scheduler / main
07:09:41-996560 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=main
07:09:41-997344 DEBUG    Git submodule: extensions-builtin/sdnext-modernui / main
07:09:42-102024 DEBUG    Git submodule: extensions-builtin/stable-diffusion-webui-rembg / master
07:09:42-229194 DEBUG    Git submodule: modules/k-diffusion / master
07:09:42-358589 DEBUG    Git submodule: wiki / master
07:09:42-423733 DEBUG    Register paths
07:09:42-510683 DEBUG    Installed packages: 188
07:09:42-512190 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
07:09:42-756220 DEBUG    Extension installer: C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
07:09:45-065697 DEBUG    Extension installer: C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
07:09:51-382146 DEBUG    Extensions all: []
07:09:51-383111 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
07:09:51-384213 INFO     Install: verifying requirements
07:09:51-385331 DEBUG    Setup complete without errors: 1737436191
07:09:51-392409 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
07:09:51-393410 INFO     Command line args: ['--debug'] debug=True args=[]
07:09:51-394890 DEBUG    Env flags: []
07:09:51-395430 DEBUG    Starting module: <module 'webui' from 'C:\\ai\\automatic\\webui.py'>
07:09:59-417499 INFO     Device detect: memory=24.0 default=balanced
07:09:59-422078 DEBUG    Read: file="config.json" json=39 bytes=1793 time=0.000 fn=<module>:load
07:09:59-662434 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
07:09:59-664489 DEBUG    Read: file="html\reference.json" json=63 bytes=33413 time=0.000
                         fn=_call_with_frames_removed:<module>
07:09:59-711892 INFO     Torch parameters: backend=cuda device=cuda config=Auto dtype=torch.bfloat16 context=no_grad
                         nohalf=False nohalfvae=False upcast=False deterministic=False fp16=pass bf16=pass
                         optimization="Scaled-Dot-Product"
07:09:59-958697 DEBUG    ONNX: version=1.20.1 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
07:10:00-104429 INFO     Device: device=NVIDIA GeForce RTX 4090 n=1 arch=sm_90 capability=(8, 9) cuda=12.4 cudnn=90100
                         driver=566.36
07:10:00-824821 INFO     Torch: torch==2.5.1+cu124 torchvision==0.20.1+cu124
07:10:00-825939 INFO     Packages: diffusers==0.33.0.dev0 transformers==4.47.1 accelerate==1.2.1 gradio==3.43.2
07:10:00-948938 DEBUG    Entering start sequence
07:10:00-951964 DEBUG    Initializing
07:10:00-955470 DEBUG    Read: file="metadata.json" json=172 bytes=411331 time=0.002 fn=initialize:init_metadata
07:10:00-956865 DEBUG    Huggingface cache: path="C:\Users\sebas\.cache\huggingface\hub"
07:10:01-003168 INFO     Available VAEs: path="models\VAE" items=0
07:10:01-004678 INFO     Available UNets: path="models\UNET" items=0
07:10:01-006588 INFO     Available TEs: path="models\Text-encoder" items=4
07:10:01-013062 INFO     Available Models: items=16 safetensors="models\Stable-diffusion":9
                         diffusers="models\Diffusers":7 time=0.01
07:10:01-031529 INFO     Available Styles: folder="models\styles" items=288 time=0.02
07:10:01-118071 INFO     Available Yolo: path="models\yolo" items=7 downloaded=3
07:10:01-119574 DEBUG    Extensions: disabled=['sdnext-modernui']
07:10:01-120577 INFO     Load extensions
07:10:01-220593 INFO     Available LoRAs: path="models\Lora" items=163 folders=4 time=0.01
07:10:01-416499 DEBUG    Register network: type=LoRA method=legacy
07:10:02-189706 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
07:10:02-194965 DEBUG    Extensions init time: total=1.07 sd-webui-agent-scheduler=0.72 Lora=0.22
07:10:02-204909 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.001 fn=__init__:__init__
07:10:02-207277 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
                         fn=__init__:find_scalers
07:10:02-209649 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8
07:10:02-211450 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1"
                         path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth"
07:10:02-212175 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale"
                         path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth"
07:10:02-213202 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="models\ESRGAN\4x_NMKD-Siax_200k.pth"
07:10:02-216867 INFO     Available Upscalers: items=55 downloaded=11 user=3 time=0.02 types=['None', 'Lanczos',
                         'Nearest', 'ChaiNNer', 'AuraSR', 'ESRGAN', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
07:10:02-222347 DEBUG    UI start sequence
07:10:02-223559 WARNING  Networks: type=lora method=legacy
07:10:02-224632 INFO     UI theme: type=Standard name="black-teal" available=13
07:10:02-231900 DEBUG    UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None"
07:10:02-234597 DEBUG    UI initialize: txt2img
07:10:02-479599 DEBUG    Networks: page='model' items=78 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.07 thumb=0.02 desc=0.00 info=0.00 workers=8
07:10:02-485330 DEBUG    Networks: page='lora' items=163 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.07 thumb=0.02 desc=0.07 info=0.06 workers=8
07:10:02-494821 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.06 thumb=0.00 desc=0.00 info=0.00 workers=8
07:10:02-499282 DEBUG    Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.05 thumb=0.02 desc=0.01 info=0.00 workers=8
07:10:02-500841 DEBUG    Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=8
07:10:02-502837 DEBUG    Networks: page='history' items=0 subfolders=0 tab=txt2img folders=[] list=0.00 thumb=0.00
                         desc=0.00 info=0.00 workers=8
07:10:02-792864 DEBUG    UI initialize: img2img
07:10:02-985303 DEBUG    UI initialize: control models=models\control
07:10:03-490086 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.000 fn=__init__:read_from_file
07:10:03-862514 DEBUG    Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory:
                         'C:\\ai\\automatic\\html\\extensions.json'
07:10:03-863517 INFO     Extension list is empty: refresh required
07:10:04-390537 DEBUG    Extension list: processed=6 installed=6 enabled=5 disabled=1 visible=6 hidden=0
07:10:04-751302 DEBUG    Root paths: ['C:\\ai\\automatic']
07:10:04-848781 INFO     Local URL: http://127.0.0.1:7860/
07:10:04-851348 DEBUG    API middleware: [<class 'starlette.middleware.base.BaseHTTPMiddleware'>, <class
                         'starlette.middleware.gzip.GZipMiddleware'>]
07:10:04-854339 DEBUG    API initialize
07:10:05-056690 INFO     [AgentScheduler] Task queue is empty
07:10:05-058226 INFO     [AgentScheduler] Registering APIs
07:10:05-203341 DEBUG    Scripts setup: time=0.393 ['K-Diffusion Samplers:0.11', 'XYZ Grid:0.041', 'IP Adapters:0.035',
                         'Face: Multiple ID Transfers:0.017', 'Video: AnimateDiff:0.012', 'Video: CogVideoX:0.011',
                         'FreeScale: Tuning-Free Scale Fusion:0.011']
07:10:05-204829 DEBUG    Model metadata: file="metadata.json" no changes
07:10:05-207309 DEBUG    Model requested: fn=run:<lambda>
07:10:05-208797 INFO     Load model: select="Diffusers\black-forest-labs/FLUX.1-dev [0ef5fff789]"
07:10:05-212469 DEBUG    Load model: type=FLUX model="Diffusers\black-forest-labs/FLUX.1-dev"
                         repo="black-forest-labs/FLUX.1-dev" unet="None" te="None" vae="Automatic" quant=none
                         offload=balanced dtype=torch.bfloat16
07:10:05-750389 INFO     HF login: token="C:\Users\sebas\.cache\huggingface\token"
07:10:05-952254 DEBUG    GC: current={'gpu': 1.59, 'ram': 1.02, 'oom': 0} prev={'gpu': 1.6, 'ram': 1.02} load={'gpu': 7,
                         'ram': 2} gc={'gpu': 0.01, 'py': 11108} fn=load_diffuser_force:load_flux why=force time=0.20
07:10:05-954426 DEBUG    Load model: type=FLUX cls=FluxPipeline preloaded=[] revision=None
07:10:05-955986 DEBUG    Quantization: type=bitsandbytes version=0.45.0 fn=load_quants:create_bnb_config
07:10:05-957827 DEBUG    Quantization: module=all type=bnb dtype=nf4 storage=uint8
07:10:05-972594 DEBUG    UI: connected
Diffusers ?it/s ██████████████████████████████ 100% 3/3 00:00 ? Fetching 3 files
07:10:18-657263 DEBUG    Quantization: module=transformer type=bnb dtype=nf4 storage=uint8
Downloading shards: 100%|██████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 3119.60it/s]
Diffusers  2.67s/it █████████████ 100% 2/2 00:05 00:00 Loading checkpoint shards
07:10:24-455890 DEBUG    Quantization: module=t5 type=bnb dtype=nf4 storage=uint8
07:10:24-457857 DEBUG    Quantization: module=all type=bnb dtype=nf4 storage=uint8
Diffusers 13.09it/s ████████ 100% 7/7 00:00 00:00 Loading pipeline components...
07:10:25-313281 DEBUG    Setting model: component=VAE slicing=True
07:10:25-314172 DEBUG    Setting model: attention="Scaled-Dot-Product"
07:10:25-331072 INFO     Offload: type=balanced op=init watermark=0.25-0.7 gpu=5.997-16.793:23.99 cpu=63.920 limit=0.00
07:10:27-669545 DEBUG    Model module=text_encoder_2 type=T5EncoderModel dtype=torch.bfloat16
                         quant=QuantizationMethod.BITS_AND_BYTES params=2.748 size=5.683
07:10:30-750437 DEBUG    Model module=transformer type=FluxTransformer2DModel dtype=torch.bfloat16
                         quant=QuantizationMethod.BITS_AND_BYTES params=5.543 size=5.546
07:10:30-753004 DEBUG    Model module=text_encoder type=CLIPTextModel dtype=torch.bfloat16 quant=None params=0.115
                         size=0.229
07:10:30-755572 DEBUG    Model module=vae type=AutoencoderKL dtype=torch.bfloat16 quant=None params=0.078 size=0.156
07:10:30-842391 INFO     Model class=FluxPipeline modules=4 size=11.615
07:10:30-850953 INFO     Load model: time=total=25.64 load=20.10 move=5.51 native=1024 memory={'ram': {'used': 19.28,
                         'total': 63.92}, 'gpu': {'used': 1.64, 'total': 23.99}, 'retries': 0, 'oom': 0}
07:10:30-854473 DEBUG    Script init: ['system-info.py:app_started=0.08', 'task_scheduler.py:app_started=0.17']
07:10:30-855339 INFO     Startup time: total=53.98 checkpoint=25.65 torch=23.95 launch=14.68 installer=14.52
                         extensions=1.07 ui-extensions=0.63 ui-networks=0.43 ui-settings=0.34 ui-defaults=0.27
                         ui-txt2img=0.26 app-started=0.24 ui-control=0.23 ui-img2img=0.14 libraries=0.12 api=0.11
                         detailer=0.09 ui-models=0.06 ui-extras=0.06 ui-gallery=0.06 samplers=0.05
07:10:30-857122 DEBUG    Save: file="config.json" json=39 bytes=1729 time=0.003
07:10:56-422266 INFO     Flux Tools: tool=Canny init
07:10:56-423617 INFO     HF search: model="black-forest-labs/FLUX.1-Canny-dev"
                         results=['black-forest-labs/FLUX.1-Canny-dev']
07:10:56-425340 INFO     Load model: select="Diffusers\black-forest-labs/FLUX.1-Canny-dev [38e3eccda8]"
07:10:56-699839 DEBUG    GC: current={'gpu': 1.64, 'ram': 19.29, 'oom': 0} prev={'gpu': 1.64, 'ram': 19.29} load={'gpu':
                         7, 'ram': 30} gc={'gpu': 0.0, 'py': 758} fn=reload_model_weights:unload_model_weights why=force
                         time=0.26
07:10:56-701805 DEBUG    Unload weights model: {'ram': {'used': 19.29, 'total': 63.92}, 'gpu': {'used': 1.64, 'total':
                         23.99}, 'retries': 0, 'oom': 0}
07:10:56-721386 DEBUG    Load model: type=FLUX model="Diffusers\black-forest-labs/FLUX.1-Canny-dev"
                         repo="black-forest-labs/FLUX.1-Canny-dev" unet="None" te="None" vae="Automatic" quant=none
                         offload=balanced dtype=torch.bfloat16
07:10:56-957537 DEBUG    GC: current={'gpu': 1.59, 'ram': 19.27, 'oom': 0} prev={'gpu': 1.64, 'ram': 19.27} load={'gpu':
                         7, 'ram': 30} gc={'gpu': 0.05, 'py': 36732} fn=load_diffuser_force:load_flux why=force
                         time=0.23
07:10:56-960238 DEBUG    Load model: type=FLUX cls=FluxControlPipeline preloaded=[] revision=refs/pr/1
07:10:56-962424 DEBUG    Quantization: module=all type=bnb dtype=nf4 storage=uint8
Diffusers 2614.36it/s ████████████████████ 100% 3/3 00:00 00:00 Fetching 3 files
07:11:21-859392 DEBUG    Quantization: module=transformer type=bnb dtype=nf4 storage=uint8
Downloading shards: 100%|██████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 1643.21it/s]
Diffusers  4.57s/it █████████████ 100% 4/4 00:18 00:00 Loading checkpoint shards
07:11:40-612257 DEBUG    Quantization: module=t5 type=bnb dtype=nf4 storage=uint8
07:11:40-614472 DEBUG    Quantization: module=all type=bnb dtype=nf4 storage=uint8
Diffusers  4.70it/s ████████ 100% 7/7 00:01 00:00 Loading pipeline components...
07:11:42-800905 DEBUG    Setting model: component=VAE slicing=True
07:11:42-802393 DEBUG    Setting model: attention="Scaled-Dot-Product"
07:11:42-825801 INFO     Offload: type=balanced op=init watermark=0.25-0.7 gpu=5.997-16.793:23.99 cpu=63.920 limit=0.00
07:11:43-520868 DEBUG    Model module=text_encoder_2 type=T5EncoderModel dtype=torch.bfloat16
                         quant=QuantizationMethod.BITS_AND_BYTES params=2.748 size=5.683
07:11:44-231932 DEBUG    Model module=transformer type=FluxTransformer2DModel dtype=torch.bfloat16
                         quant=QuantizationMethod.BITS_AND_BYTES params=5.544 size=5.546
07:11:44-234102 DEBUG    Model module=text_encoder type=CLIPTextModel dtype=torch.bfloat16 quant=None params=0.115
                         size=0.229
07:11:44-236155 DEBUG    Model module=vae type=AutoencoderKL dtype=torch.bfloat16 quant=None params=0.078 size=0.156
07:11:44-314740 INFO     Model class=FluxControlPipeline modules=4 size=11.615
07:11:44-323941 INFO     Load model: time=total=47.60 load=46.08 move=1.49 native=1024 memory={'ram': {'used': 19.79,
                         'total': 63.92}, 'gpu': {'used': 1.64, 'total': 23.99}, 'retries': 0, 'oom': 0}
07:11:44-601230 DEBUG    Flux Tools: tool=Canny ready time=48.18
07:11:44-613623 DEBUG    Image resize: source=1024:1024 target=1024:1024 mode="Fixed" upscaler="None" type=image
                         time=0.00 fn=process_images_inner:init
07:11:44-628709 DEBUG    Sampler: "default" class=FlowMatchEulerDiscreteScheduler: {'num_train_timesteps': 1000,
                         'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15,
                         'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal':
                         None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False}
07:11:46-682388 INFO     Base: pipeline=FluxControlPipeline task=TEXT_2_IMAGE batch=1/1x1 set={'guidance_scale': 6,
                         'generator': 'cuda:[934022236]', 'num_inference_steps': 20, 'output_type': 'latent', 'width':
                         1024, 'height': 1024, 'control_image': <PIL.Image.Image image mode=RGB size=1024x1024 at
                         0x1C9A91F1A10>, 'parser': 'native', 'prompt': 'embeds'}
Progress  1.37it/s ████████████████████████▊          75% 15/20 00:11 00:03 Base
07:12:00-104935 DEBUG    Server: alive=True requests=85 memory=20.08/63.92 status='running' task='Load'
                         timestamp='20250121071005' id='task(513jiy20303peod)' job=0 jobs=0 total=1 step=0 steps=0
                         queued=0 uptime=124 elapsed=114.9 eta=None progress=0
Progress  1.29it/s █████████████████████████████████ 100% 20/20 00:15 00:00 Base
07:12:03-910385 DEBUG    VAE load: type=approximate model="models\VAE-approx\model.pt"
07:12:06-001635 DEBUG    Decode: vae="default" upcast=False slicing=True tiling=False latents=torch.Size([1, 16, 128,
                         128]):cuda:0:torch.bfloat16 time=1.327
07:12:06-039863 INFO     Processed: images=1 its=0.93 time=21.43 timers={'pipeline': 17.13, 'decode': 2.18, 'move':
                         2.07, 'encode': 1.47, 'offload': 1.33, 'gc': 0.11} memory={'ram': {'used': 20.21, 'total':
                         63.92}, 'gpu': {'used': 2.61, 'total': 23.99}, 'retries': 0, 'oom': 0}
07:12:06-279329 DEBUG    GC: current={'gpu': 2.09, 'ram': 20.21, 'oom': 0} prev={'gpu': 2.61, 'ram': 20.21} load={'gpu':
                         9, 'ram': 32} gc={'gpu': 0.52, 'py': 167} fn=process_images:process_images_inner why=final
                         time=0.24
07:12:06-560645 DEBUG    Save temp: image="C:\Users\sebas\AppData\Local\Temp\gradio\tmpf4rs_rep.png" width=1024
                         height=1024 size=1875859
07:12:23-541137 DEBUG    Image resize: source=1024:1024 target=1024:1024 mode="Fixed" upscaler="None" type=image
                         time=0.00 fn=process_images_inner:init
07:12:23-544376 DEBUG    Sampler: "default" class=FlowMatchEulerDiscreteScheduler: {'num_train_timesteps': 1000,
                         'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15,
                         'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal':
                         None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False}
07:12:23-565202 INFO     Base: pipeline=FluxControlPipeline task=TEXT_2_IMAGE batch=1/1x1 set={'guidance_scale': 6,
                         'generator': 'cuda:[1049420443]', 'num_inference_steps': 20, 'output_type': 'latent', 'width':
                         1024, 'height': 1024, 'parser': 'native', 'prompt': 'embeds'}
07:12:23-587732 ERROR    Processing: args={'prompt_embeds': 'cuda:0:torch.bfloat16:torch.Size([1, 512, 4096])',
                         'pooled_prompt_embeds': 'cuda:0:torch.bfloat16:torch.Size([1, 768])', 'guidance_scale': 6,
                         'generator': [<torch._C.Generator object at 0x000001C9CCF1D890>], 'callback_on_step_end':
                         <function diffusers_callback at 0x000001C9854BBC40>, 'callback_on_step_end_tensor_inputs':
                         ['latents'], 'num_inference_steps': 20, 'output_type': 'latent', 'width': 1024, 'height': 1024}
                         Input is in incorrect format. Currently, we only support <class 'PIL.Image.Image'>, <class
                         'numpy.ndarray'>, <class 'torch.Tensor'>
07:12:23-590266 ERROR    Processing: ValueError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\processing_diffusers.py:101 in process_base                                                  │
│                                                                                                                      │
│   100 │   │   else:                                                                                                  │
│ ❱ 101 │   │   │   output = shared.sd_model(**base_args)                                                              │
│   102 │   │   if isinstance(output, dict):                                                                           │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\torch\utils\_contextlib.py:116 in decorate_context                            │
│                                                                                                                      │
│   115 │   │   with ctx_factory():                                                                                    │
│ ❱ 116 │   │   │   return func(*args, **kwargs)                                                                       │
│   117                                                                                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\diffusers\pipelines\flux\pipeline_flux_control.py:763 in __call__             │
│                                                                                                                      │
│   762 │   │                                                                                                          │
│ ❱ 763 │   │   control_image = self.prepare_image(                                                                    │
│   764 │   │   │   image=control_image,                                                                               │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\diffusers\pipelines\flux\pipeline_flux_control.py:575 in prepare_image        │
│                                                                                                                      │
│   574 │   │   else:                                                                                                  │
│ ❱ 575 │   │   │   image = self.image_processor.preprocess(image, height=height, width=width)                         │
│   576                                                                                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\diffusers\image_processor.py:678 in preprocess                                │
│                                                                                                                      │
│    677 │   │   if not is_valid_image_imagelist(image):                                                               │
│ ❱  678 │   │   │   raise ValueError(                                                                                 │
│    679 │   │   │   │   f"Input is in incorrect format. Currently, we only support {', '.join(str(x) for x in support │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Input is in incorrect format. Currently, we only support <class 'PIL.Image.Image'>, <class 'numpy.ndarray'>, <class 'torch.Tensor'>
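
The traceback points at the root cause: after the script is set back to none, `shared.sd_model` still holds a `FluxControlPipeline`, whose `prepare_image()` requires a `control_image` that the plain generate call no longer supplies. A minimal sketch of the two diffusers pipelines involved (standalone illustration using the repos named in the log, not SD.Next internals):

```python
# Illustrative sketch: the same prompt works in FluxPipeline but fails in
# FluxControlPipeline once nothing supplies a control image.
import torch
from diffusers import FluxPipeline, FluxControlPipeline

# Plain FLUX.1-dev: a prompt alone is enough.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
image = pipe(prompt="a cat", num_inference_steps=20).images[0]

# FLUX.1-Canny-dev loads as FluxControlPipeline, where control_image is
# mandatory. With the script disabled, nothing passes control_image, so
# prepare_image() receives None and raises the ValueError seen above.
ctrl = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
)
ctrl(prompt="a cat", num_inference_steps=20)  # ValueError: Input is in incorrect format
```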

Backend

Diffusers

UI

Standard

Branch

Master

Model

FLUX.1

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue
@vladmandic
Owner

valid issue, but...
as noted in the changelog, flux tools other than redux are not really tools - they are standalone models, plus they need special handling (as done inside the script) to prepare inputs.
this is how any script that replaces the model behaves - you can't "disable" it without loading a different model.
same for ltxvideo, hunyuanvideo, svd, etc.
so i don't really know what to do here.
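
In code terms, the pattern being described boils down to something like the following (hypothetical names throughout; a sketch of the pattern, not SD.Next's actual implementation):

```python
# Sketch of a model-replacing script (hypothetical names, not SD.Next code).

class Shared:
    sd_model = None  # stands in for the global pipeline (shared.sd_model in the traceback)

shared = Shared()

def load_flux_control(repo: str):
    # hypothetical loader: unloads the current model and returns a
    # FluxControlPipeline for the given repo
    ...

class FluxToolsScript:
    def process(self, p):
        # On enable: replace the loaded FluxPipeline with a FluxControlPipeline.
        # The previous model is unloaded to free VRAM, so there is nothing
        # left in memory to restore.
        shared.sd_model = load_flux_control("black-forest-labs/FLUX.1-Canny-dev")

# Setting the script dropdown back to "none" just means process() stops being
# called; no disable hook fires, so shared.sd_model still holds the control
# pipeline until another checkpoint is explicitly loaded.
```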

@vladmandic added the backlog label on Jan 21, 2025

SAC020 commented Jan 21, 2025

Maybe the script should also switch the active model in the dropdown list, so reverting to .dev can be done without loading another intermediate model.

And if you try to switch to dev, you will notice it will load (unless it crashes out of VRAM), but it definitely crashes out of VRAM when you try to generate an image. So it's not just a straightforward switch of models (which I would understand and comply with), but a rather elaborate switch through a third proxy model, followed by a server restart to make it work.

I can provide logs documenting the above, if it helps.

@vladmandic
Owner

Maybe the script should also switch the active model in the dropdown list, so reverting to .dev can be done without loading another intermediate model.

yes, ideally it should - but with gradio that is pretty much impossible without massive hacks.

I can provide logs documenting the above, if it helps.

not really.


SAC020 commented Jan 21, 2025

yes, ideally it should - but with gradio that is pretty much impossible without massive hacks.

How about adding a "force switch model" option on the system page, where the user can specify which model they want to load, regardless of which model is currently loaded? I would assume the scripting behind a button would be more flexible than the limitations of the gradio drop-down list.

Or a "load specific model" button, so the user can first "unload" (which exists), then "load" (not re-load, but load specifically a model).

Just a thought; I am unaware of both gradio's limitations and Python's capabilities.
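
Something like the suggested button could, in principle, call the loader directly and sidestep the dropdown entirely. A rough gradio sketch (gradio 3.43 per the log; `load_checkpoint` is a hypothetical stand-in for whatever loader SD.Next exposes internally):

```python
# Rough sketch of a "load specific model" control; not an actual SD.Next feature.
import gradio as gr

def load_checkpoint(name: str) -> None:
    """Hypothetical stand-in for SD.Next's internal model loader."""
    ...

def force_load(model_name: str) -> str:
    load_checkpoint(model_name)  # hypothetical: unload current model, load the named one
    return f"Loaded: {model_name}"

with gr.Blocks() as demo:
    name = gr.Textbox(label="Model to load", value="black-forest-labs/FLUX.1-dev")
    status = gr.Textbox(label="Status", interactive=False)
    gr.Button("Force load model").click(fn=force_load, inputs=name, outputs=status)
```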


@vladmandic
Owner

what happens if you press unload -> reload?


SAC020 commented Jan 21, 2025

It reloads depth / canny, because that is what is written in config.json as the current model
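
That suggests a manual workaround: with the server stopped, point the checkpoint entry in config.json back at FLUX.1-dev before restarting. A sketch (the key name `sd_model_checkpoint` and the value format are assumptions based on A1111-style configs; verify against your own config.json first):

```python
# Sketch of a manual workaround: rewrite the checkpoint entry in config.json
# before restarting the server. Key name and value format are assumptions --
# check your own config.json before using this.
import json

path = r"C:\ai\automatic\config.json"
with open(path, encoding="utf-8") as f:
    cfg = json.load(f)

cfg["sd_model_checkpoint"] = "Diffusers\\black-forest-labs/FLUX.1-dev"  # assumed key/value format
with open(path, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=4)
```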

@vladmandic
Owner

all in all, valid issue. given how little i value depth/canny models from sai as standalone models, this will be on the backlog.
