
Releases: invoke-ai/InvokeAI

v5.6.0rc2

09 Jan 03:53
Pre-release

This release brings major improvements to Invoke's memory management, plus a few minor fixes.

Memory Management Improvements (aka Low-VRAM mode)

The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.

Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.

Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.

Low-VRAM mode involves 3 features, each of which can be configured or fine-tuned:

  • Partial model loading
  • Dynamic RAM and VRAM cache sizes
  • Working memory

Most users should only need to enable partial loading by adding this line to their invokeai.yaml file:

enable_partial_loading: true

🚨 Windows users should also disable the Nvidia sysmem fallback.

For more details and instructions for fine-tuning, see the Low-VRAM mode docs.

Thanks to @RyanJDick for designing and implementing these improvements!

Changes since previous release candidate (v5.6.0rc1)

  • Fix some model loading errors that occurred in edge cases.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Deprecate the ram and vram settings in favor of new max_cache_ram_gb and max_cache_vram_gb settings. This eases the upgrade path for users who had manually configured ram and vram in the past.
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
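For example, users migrating from the deprecated keys might set the new ones in invokeai.yaml like this (the values here are illustrative, not recommendations):

```yaml
# invokeai.yaml — replaces the deprecated `ram` and `vram` keys
max_cache_ram_gb: 6
max_cache_vram_gb: 20
```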

The launcher itself has also been updated to fix a handful of issues, including a bug that triggered a fresh install every time you started the launcher, and a bug that caused systems with AMD GPUs to run on the CPU.

Other Changes

  • Fixed issue where excessively long board names could cause performance issues.
  • Reworked error handling when installing models from a URL.
  • Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
  • Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
  • Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
  • Fixed link to Scale setting's support docs.
  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!
  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).
  • Add Low-VRAM mode docs.

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.

Full Changelog: v5.5.0...v5.6.0rc2

v5.6.0rc1

07 Jan 09:30
Pre-release

This release brings two major improvements to Invoke's memory management: partial model loading (aka Low-VRAM mode) and dynamic memory limits.

Memory Management Improvements

Thanks to @RyanJDick for designing and implementing these improved memory management features!

Partial Model Loading (Low-VRAM mode)

Invoke's previous "all or nothing" model loading strategy required your GPU to have enough VRAM to hold whole models during generation.

As a result, as image generation models increased in size and auxiliary models (e.g. ControlNet) became critical to workflows, Invoke's VRAM requirements increased at the same rate. These increased requirements have prevented many of our users from running Invoke with the latest and greatest models.

Partial model loading allows Invoke to load only the parts of the model that are actively being used onto the GPU, substantially reducing Invoke's VRAM requirements.

  • Applies to systems with a CUDA device.
  • Enables large models to run with limited GPU VRAM (e.g. Full 24GB FLUX dev on an 8GB GPU)
  • When models are too large to fit on the GPU, they will be partially offloaded to RAM. The model weights are still streamed to the GPU for fast inference. Inference speed won't be as fast as when a model is fully loaded, but will be much faster than running on the CPU.
  • The recommended minimum CUDA GPU size is 8GB. An 8GB GPU should now be capable of running all models supported by Invoke (even the full 24GB FLUX models with ControlNet).
  • If there is sufficient demand, we could probably support 4GB cards in the future by moving the VAE decoding operation fully to the CPU.
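Conceptually, partial loading keeps the full set of weights in RAM and shuttles only the layers needed for the current step onto the GPU, evicting earlier layers as the VRAM budget fills. This toy sketch illustrates the idea; all class and function names are hypothetical, and this is not Invoke's actual implementation:

```python
# Toy illustration of partial model loading: keep all layers in RAM,
# and only hold as many on the "GPU" as the VRAM budget allows.
# All names here are made up for illustration; not Invoke's real code.

class Layer:
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb
        self.on_gpu = False

def run_inference(layers, vram_budget_gb):
    """Stream layers through a limited VRAM budget, evicting as needed."""
    resident = []          # layers currently "on the GPU"
    used_gb = 0.0
    order = []             # record of execution order

    for layer in layers:
        # Evict the oldest resident layers until the next one fits.
        while used_gb + layer.size_gb > vram_budget_gb and resident:
            evicted = resident.pop(0)
            evicted.on_gpu = False
            used_gb -= evicted.size_gb
        layer.on_gpu = True
        resident.append(layer)
        used_gb += layer.size_gb
        order.append(layer.name)   # "run" the layer on the GPU
    return order, used_gb

layers = [Layer(f"block{i}", 3.0) for i in range(8)]  # a 24GB "model"
order, used = run_inference(layers, vram_budget_gb=8.0)
print(order)  # all 8 blocks execute despite only 8GB of "VRAM"
```

The real implementation streams actual weight tensors and overlaps transfers with compute, which is why inference stays much faster than running on the CPU.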

Dynamic Memory Limits

Previously, the amount of RAM and VRAM used for model caching were set to hard limits. Now, the amount of RAM and VRAM used is adjusted dynamically based on what's available.

For most users, this will result in more effective use of their RAM/VRAM without having to tune configuration values.

Users can expect:

  • Faster average model load times on systems with extra memory
  • Fewer out-of-memory errors when combined with Partial Model Loading
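As a sketch of the idea (illustrative only, not Invoke's actual heuristic), a dynamic cache limit can be derived from free memory at load time rather than from a fixed configured value:

```python
# Illustrative sketch of a dynamic cache limit: size the cache from
# currently free memory, leaving headroom for inference working memory.
# The heuristic and numbers are made up for illustration.

def dynamic_cache_limit_gb(free_gb: float, working_headroom_gb: float = 2.0) -> float:
    """Use whatever is free, minus headroom for inference working memory."""
    return max(0.0, free_gb - working_headroom_gb)

# With 10GB free, the cache can grow to 8GB; a fixed limit chosen for a
# worst case would often leave that memory unused.
print(dynamic_cache_limit_gb(10.0))   # 8.0
print(dynamic_cache_limit_gb(1.0))    # 0.0 (never negative)
```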

Enabling Partial Model Loading and Dynamic Memory Limits

Partial Model Loading is disabled by default. To enable it, set enable_partial_loading: true in your invokeai.yaml:

enable_partial_loading: true

This is highly recommended for users with limited VRAM. Users with 24GB+ of VRAM may prefer to leave this option disabled to guarantee that models get fully-loaded and run at full speed.

Dynamic memory limits are enabled by default, but can be overridden by setting ram or vram in your invokeai.yaml.

# Override the dynamic cache limits to ram=6GB and vram=20GB.
ram: 6
vram: 20

🚨 Note: Users who previously set ram or vram in their invokeai.yaml will need to delete these overrides in order to benefit from the new dynamic memory limits.

All Changes

  • Added support for partial model loading.
  • Added support for dynamic memory limits.
  • Fixed issue where excessively long board names could cause performance issues.
  • Reworked error handling when installing models from a URL.
  • Fixed link to Scale setting's support docs.
  • Tidied some unused variables. Thanks @rikublock!
  • Added typegen check to CI pipeline. Thanks @rikublock!
  • Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
  • Updated installation-related docs (quick start, manual install, dev install).

Installing and Updating

The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you already have the launcher, you can use it to update your existing install.

We've just updated the launcher to v1.2.0 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.

Full Changelog: v5.5.0...v5.6.0rc1

v5.5.0

20 Dec 05:47

This release brings support for FLUX Control LoRAs to Invoke, plus a few other fixes and enhancements.

It's also the first stable release alongside the new Invoke Launcher!

Invoke Launcher ✨

image

The Invoke Launcher is a desktop application that can install, update and run Invoke on Windows, macOS and Linux.

It can manage your existing Invoke installation - even if you previously installed with our legacy scripts.

Download the launcher to get started

Refer to the new Quick Start guide for more details. macOS may not let you run the launcher at first; the guide includes a workaround.

FLUX Control LoRAs

Despite having "LoRA" in the name, these models are used in Invoke via Control Layers - like ControlNets. The only difference is that they do not support begin and end step percentages.

So far, BFL has released Canny and Depth models. You can install them from the Model Manager.

Other Changes

Enhancements

  • Support for FLUX Control LoRAs.

  • Improved error handling and recovery for Canvas, preventing Canvas from getting stuck if there is a network issue during some operations.

  • Reduced logging verbosity when default logging settings are used.

    Previously, all Uvicorn logging occurred at the same level as the app's logging. This logging was very verbose and frequent, and made the app's terminal output difficult to parse, with lots of extra noise.

    The Uvicorn log level is now set independently from the other log namespaces. To control it, set the log_level_network property in invokeai.yaml. The default is warning. To restore the previous log levels, set it to info (e.g. log_level_network: info).
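    For example, in invokeai.yaml:

    ```yaml
    # invokeai.yaml — Uvicorn/network logging verbosity
    log_level_network: info   # default is "warning"; "info" restores the old behavior
    ```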

Fixes

  • The image context menu actions to create Regional and Global Reference Image layers were reversed.
  • Missing translation strings.
  • Canvas filters could execute twice. Besides being inefficient, on slow network connections, this could cause an error toast to appear even when the filter was successful. They now only execute once.
  • Model install error when the path contains quotes. Thanks @Quadiumm!

Internal

  • Upgrade docker image to Ubuntu 24.04 and use uv for package management.
  • Fix dynamic invocation values causing non-deterministic OpenAPI schema. This allows us to add a CI check to ensure the OpenAPI schema and TypeScript types are always in sync. Thanks @rikublock!


Installing and Updating

As mentioned above, the new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

Legacy Scripts (not recommended!)

We recommend using the launcher, as described in the previous section!

To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.

Full Changelog: v5.4.3...v5.5.0

v5.5.0rc1

19 Dec 23:57
Pre-release

This release brings support for FLUX Control LoRAs to Invoke, plus a few other fixes and enhancements.

FLUX Control LoRAs

Despite having "LoRA" in the name, these models are used in Invoke via Control Layers - like ControlNets. The only difference is that they do not support begin and end step percentages.

So far, BFL has released Canny and Depth models. You can install them from the Model Manager.

All Changes

Enhancements

  • Support for FLUX Control LoRAs.

  • Improved error handling and recovery for Canvas, preventing Canvas from getting stuck if there is a network issue during some operations.

  • Reduced logging verbosity when default logging settings are used.

    Previously, all Uvicorn logging occurred at the same level as the app's logging. This logging was very verbose and frequent, and made the app's terminal output difficult to parse, with lots of extra noise.

    The Uvicorn log level is now set independently from the other log namespaces. To control it, set the log_level_network property in invokeai.yaml. The default is warning. To restore the previous log levels, set it to info (e.g. log_level_network: info).

Fixes

  • The image context menu actions to create Regional and Global Reference Image layers were reversed.
  • Missing translation strings.
  • Canvas filters could execute twice. Besides being inefficient, on slow network connections, this could cause an error toast to appear even when the filter was successful. They now only execute once.
  • Model install error when the path contains quotes. Thanks @Quadiumm!

Internal

  • Upgrade docker image to Ubuntu 24.04 and use uv for package management.
  • Fix dynamic invocation values causing non-deterministic OpenAPI schema. This allows us to add a CI check to ensure the OpenAPI schema and TypeScript types are always in sync. Thanks @rikublock!


Installation and Updating

This is the first Invoke release since we debuted our launcher, a desktop app that can install, upgrade and run Invoke on Windows, macOS and Linux.

While technically still in a prerelease state, it is working well. Download it from the repo's releases page. It works with your existing Invoke installation, or you can use it to do a fresh install.

macOS users may need to do this workaround for macOS until the first stable release of the launcher.

Legacy installer

You can still use our legacy installer scripts to install and run Invoke, though we do plan to deprecate this at some point.

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Full Changelog: v5.4.3...v5.5.0rc1

v5.4.4rc1

18 Dec 08:08
Pre-release

This release brings support for FLUX Control LoRAs to Invoke, plus a few other fixes and enhancements.

FLUX Control LoRAs

Despite having "LoRA" in the name, these models are used in Invoke via Control Layers - like ControlNets. The only difference is that they do not support begin and end step percentages.

So far, BFL has released Canny and Depth models. You can install them from the Model Manager.

All Changes

Enhancements

  • Support for FLUX Control LoRAs.
  • Improved error handling and recovery for Canvas, preventing Canvas from getting stuck if there is a network issue during some operations.

Fixes

  • The image context menu actions to create Regional and Global Reference Image layers were reversed.
  • Missing translation strings.

Internal

  • Upgrade docker image to Ubuntu 24.04 and use uv for package management.


Installation and Updating

This is the first Invoke release since we debuted our launcher, a desktop app that can install, upgrade and run Invoke on Windows, macOS and Linux.

While technically still in a prerelease state, it is working well. Download it from the repo's releases page. It works with your existing Invoke installation, or you can use it to do a fresh install.

macOS users may need to do this workaround for macOS until the first stable release of the launcher.

Legacy installer

You can still use our legacy installer scripts to install and run Invoke, though we do plan to deprecate this at some point.

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Full Changelog: v5.4.3...v5.4.4rc1

v5.4.3

03 Dec 23:22

This minor release adds initial support for FLUX Regional Guidance, arrow key nudge on Canvas, plus an assortment of fixes and enhancements.

Changes

Enhancements

  • Add 1-pixel nudge to the move tool on Canvas. Use the arrow keys to make fine adjustments to a layer's position. Thanks @hippalectryon-0!
  • Change the default infill method from patchmatch to lama. You can still select patchmatch if you prefer.
  • Add empty state for Global Reference Images and Regional Guidance Reference Images, similar to the empty state for Control Layers. A blurb directs users to upload an image or drag an image from gallery to set the image.
  • FLUX performance improvements (~10% speed-up).
  • Added ImagePanelLayoutInvocation to facilitate FLUX IC-LoRA workflows.
  • FLUX Regional Guidance support (beta). Only positive prompts are supported; negative prompts, reference images and auto-negative are not supported for FLUX Regional Guidance.
  • Canvas layers now show a warning indicator that flags issues with the layer that could prevent invoking or cause problems.
  • New Layer from Image functions added to Canvas Staging Area Toolbar. These create a new layer without dismissing the rest of the staged images.
  • Improved empty state for Regional Guidance Reference Images.
  • Added missing New from... image context menu actions: Reference Image (Regional) and Reference Image (Global)
  • Added Vietnamese to language picker in Settings.

Fixes

  • Soft Edge (Lineart, Lineart Anime) Control Layers default to the Soft Edge filter correctly.
  • Remove the nonfunctional width and height outputs from the Image Batch node. If you want to use width and height in a batch, route the image from Image Batch to an Image Primitive node, which outputs width and height.
  • Ensure invocation templates have fully parsed before running studio init actions.
  • Bumped transformers to get a fix for Depth Anything artifacts.
  • False negative edge case with picklescan.
  • Invoke queue actions menu's Cancel Current action erroneously cleared the entire queue. Thanks @rikublock!
  • New Reference Images could inadvertently have the last-used Reference Image populated on creation.
  • Error when importing GGUF models. Thanks @JPPhoto!
  • Canceling any queue item from the Queue tab also erroneously canceled the currently-executing queue item.

Internal

  • Add redux actions for support video modal.
  • Tidied various things related to the queue. Thanks @rikublock!


Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Full Changelog: v5.4.2...v5.4.3

v5.4.3rc2

03 Dec 00:14
Pre-release

This minor release adds initial support for FLUX Regional Guidance, arrow key nudge on Canvas, plus an assortment of fixes and enhancements.

Changes

Enhancements

  • Add 1-pixel nudge to the move tool on Canvas. Use the arrow keys to make fine adjustments to a layer's position. Thanks @hippalectryon-0!
  • Change the default infill method from patchmatch to lama. You can still select patchmatch if you prefer.
  • Add empty state for Global Reference Images and Regional Guidance Reference Images, similar to the empty state for Control Layers. A blurb directs users to upload an image or drag an image from gallery to set the image.
  • FLUX performance improvements (~10% speed-up).
  • Added ImagePanelLayoutInvocation to facilitate FLUX IC-LoRA workflows.
  • FLUX Regional Guidance support (beta). Only positive prompts are supported; negative prompts, reference images and auto-negative are not supported for FLUX Regional Guidance.
  • Canvas layers now show a warning indicator that flags issues with the layer that could prevent invoking or cause problems.
  • New Layer from Image functions added to Canvas Staging Area Toolbar. These create a new layer without dismissing the rest of the staged images.
  • Improved empty state for Regional Guidance Reference Images.
  • Added missing New from... image context menu actions: Reference Image (Regional) and Reference Image (Global)
  • Added Vietnamese to language picker in Settings.

Fixes

  • Soft Edge (Lineart, Lineart Anime) Control Layers default to the Soft Edge filter correctly.
  • Remove the nonfunctional width and height outputs from the Image Batch node. If you want to use width and height in a batch, route the image from Image Batch to an Image Primitive node, which outputs width and height.
  • Ensure invocation templates have fully parsed before running studio init actions.
  • Bumped transformers to get a fix for Depth Anything artifacts.
  • False negative edge case with picklescan.
  • Invoke queue actions menu's Cancel Current action erroneously cleared the entire queue. Thanks @rikublock!
  • New Reference Images could inadvertently have the last-used Reference Image populated on creation.
  • Error when importing GGUF models. Thanks @JPPhoto!
  • Canceling any queue item from the Queue tab also erroneously canceled the currently-executing queue item.

Internal

  • Add redux actions for support video modal.
  • Tidied various things related to the queue. Thanks @rikublock!


Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Full Changelog: v5.4.2...v5.4.3rc2

v5.4.3rc1

21 Nov 18:57
Pre-release

This minor release adds arrow key nudge on Canvas, plus a handful of fixes and enhancements.

Changes

Enhancements

  • Add 1-pixel nudge to the move tool on Canvas. Use the arrow keys to make fine adjustments to a layer's position. Thanks @hippalectryon-0!
  • Change the default infill method from patchmatch to lama. You can still select patchmatch if you prefer.
  • Add empty state for Global Reference Images and Regional Guidance Reference Images, similar to the empty state for Control Layers. A blurb directs users to upload an image or drag an image from gallery to set the image.

Fixes

  • Soft Edge (Lineart, Lineart Anime) Control Layers default to the Soft Edge filter correctly.
  • Remove the nonfunctional width and height outputs from the Image Batch node. If you want to use width and height in a batch, route the image from Image Batch to an Image Primitive node, which outputs width and height.
  • Ensure invocation templates have fully parsed before running studio init actions.

Internal

  • Add redux actions for support video modal.

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Full Changelog: v5.4.2...v5.4.3rc1

v5.4.2

19 Nov 22:53

This release includes support for FLUX IP Adapter v2 and image "batching" for Workflows.

Image Batching for Workflows

The Workflow Editor now supports running a given workflow for each image in a collection of images.

Add an Image Batch node, drag some images into its image collection, and connect its output to any other node(s). Invoke will run the workflow once for each image in the collection.

Here are a few examples to help build intuition for the feature.

Example 1 - Single Batch -> Single Node

The simplest case is using a batch image output with a single node. Here's a workflow that resizes 5 images to 200x200 thumbnails.

Workflow

image

Results

image

This batch queues 5 graphs, each containing a single resize node with one of the 5 images in the batch list. Note the images are 200x200 pixels.

Example 2 - Single Batch -> Multiple Nodes

You can also use a batch image output with multiple nodes. This contrived workflow resizes the image to a 200x200 thumbnail, like the previous example, then pastes the thumbnail onto the full-size image.

Workflow

image

Results

image

This batch also queues 5 graphs, each of which contains one resize and one paste node. In each graph, the nodes get one of the 5 images in the batch collection. The batch node can connect to any number of other nodes. For each queued graph, all connected nodes will get the same image.

Example 3 - Multiple Batches (Product Batching)

When multiple batches are used, they are combined such that all permutations are queued (i.e. the product of the batches is taken).

Workflow

image

Results

image

In this case, the product of the two batches is 15 graphs. Each image of the 3-image batch is used as the base image, and a thumbnail of each tiger is pasted on top of it. We'll call this "product" batching.

Zipped Batching

The batching API supports "zipped" batches, where the batch collections are merged into a single batch.

For example, imagine two batches of 5 images. As described in the "product" example above, you'd get 5 images * 5 images = 25 graphs. Zipped batching would instead take the first image from each of the two batches and use them together in the first graph, then the second image from each for the second graph, and so on.

Zipped batching is not supported in the UI at this time.
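The two combination strategies map directly onto Python's itertools.product and zip. This sketch (illustrative only, not Invoke's code) shows why product batching of a 3-image batch and a 5-image batch yields 15 graphs, while zipping two 5-image batches yields 5:

```python
from itertools import product

# "Product" batching: every permutation of the two collections is queued.
base_images = ["base1", "base2", "base3"]
tigers = [f"tiger{i}" for i in range(1, 6)]
product_graphs = list(product(base_images, tigers))
print(len(product_graphs))  # 15 graphs: 3 x 5

# "Zipped" batching: collections are merged pairwise into a single batch.
batch_a = [f"a{i}" for i in range(5)]
batch_b = [f"b{i}" for i in range(5)]
zipped_graphs = list(zip(batch_a, batch_b))
print(len(zipped_graphs))   # 5 graphs: first-with-first, second-with-second, ...
```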

Versus Iterate Nodes

We support similar functionality to batching with Iterate nodes, so why add batching? In short, Iterate nodes have some technical issues which are addressed by batching.

Why `Iterate` Nodes are Scary

They result in unbounded graph complexity and size. If you don't know what these words mean, but they sound kinda scary, congrats! You are on the right track. They are indeed scary words.

  • When using Iterate nodes, the graph is expanded with new nodes and edges during execution. Pretty scary.
  • We cannot know ahead of time how much the graph will expand, because iterate nodes' collections are dynamic. Terrifying.
  • Multiple iterate nodes combine via Cartesian product, resulting in combinatorial explosion. Your graph could be running at the heat death of the universe. Existential dread.

Batch collections are defined up front and don't expand the graph. We know exactly the complexity we are getting into before the graph executes. Sanity restored!

Batching is also more intuitive: we run exactly this graph, once for each image.

Unlike Iterate nodes, Image Batch nodes' collections cannot be provided by other nodes in the graph. The collection must be defined up-front, so you cannot replace Iterate with Image Batch for all use-cases.

Nevertheless, we suggest using batching where possible.

Other Notes

  • We've added Image Batch nodes first because images are the highest-impact field type, but the batching API supports arbitrary field types. In fact, the Canvas uses both int and str fields internally. We'll save nodes for other field types for a future enhancement.
  • If you want to batch over a board, you'll need to drag all images from the board into the batch collection. We'll explore a simpler way to use a board's images for a batch in a future enhancement.
  • It is not possible to combine all outputs from a batch within the same workflow.

Other Changes

Enhancements

  • Support for FLUX IP Adapter v2. We've optimized internal handling for v2, and you may find FLUX IP Adapter v1 results are degraded. Update to v2 to fix this.
  • Updated image collection inputs for nodes. You may now drag images into collections directly.
  • Brought some of @dwringer's often-requested composition nodes into Invoke's core nodes. They have been renamed to not conflict with your existing install of the node pack. Thanks for your work on these very useful nodes @dwringer!
  • Show tab-specific info in the Invoke button's tooltip.
  • Update the New from Image context menu actions. The actions that resize the image after creating a new Canvas are clearly named.
  • Change the Reset Canvas button, which was too similar to the Delete Image button, into a menu with more options:
    • New Canvas Session: Resets all generation settings, resets all layers, and enables Send to Canvas.
    • New Gallery Session: Resets all generation settings, resets all layers, and enables Send to Gallery.
    • Reset Generation Settings: Resets all generation settings, leaving layers alone.
    • Reset Canvas Layers: Resets all layers, leaving generation settings alone.
  • New Support Videos button in the bottom-left corner of the app, which lists and links to videos on our YouTube channel.

Fixes

  • Added padding to the metadata recall buttons in the metadata viewer, so they aren't cut off by other UI elements.
  • The progress bar stopped throbbing in the last release. We apologize for this oversight. Throbbing has been restored.
  • Addressed some edge cases that could cause the UI to crash with an error about an entity not found.
  • Updated grid size for SD3.5 models to 16px. Thanks for the heads up @dunkeroni.

Internal

  • Removed the step_param_easing node, which depended on the GPL-3-licensed easing-functions package. The node has been deprecated; if you were using it, please let us know about your use-cases so that we can better design inputs where easing functions are helpful.

Translations

We have had some issues communicating with "walk-in" translators on Weblate, resulting in translations being changed when they were already correct. To mitigate this, we are trying a more restricted Weblate translation setup. Access to contribute translations must be granted by @Harvester62. Please @ them in the #translators channel on Discord to get access.

Our Weblate also has an account issue and is currently locked. This is unrelated to the access restriction changes.

We apologize for any inconvenience this change may cause. Thanks to all our translators for their continued efforts!

  • Updated Chinese (Simplified). Thanks @youo0o0!
  • Updated Italian. Thanks @Harvester62!
  • Updated Spanish. Thanks gallegonovato (weblate user)!

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.


v5.4.2rc1

19 Nov 04:53
Pre-release

This release candidate includes support for FLUX IP Adapter v2 and image "batching" for Workflows.

Image Batching for Workflows

Invoke's Workflow Editor now supports running a given workflow for each image in a collection of images.

Add an Image Batch node, drag some images into its image collection, and connect its output to any other node(s). Invoke will run the workflow once for each image in the collection.

Here are a few examples to help build intuition for the feature.

Example 1 - Single Batch -> Single Node

The simplest case is using a batch image output with a single node. Here's a workflow that resizes 5 images to 200x200 thumbnails.

Workflow

image

Results

image

This batch queues 5 graphs, each containing a single resize node with one of the 5 images in the batch list. Note the images are 200x200 pixels.

Example 2 - Single Batch -> Multiple Nodes

You can also use a batch image output with multiple nodes. This contrived workflow resizes the image to a 200x200 thumbnail, like the previous example, then pastes the thumbnail onto the full-size image.

Workflow

image

Results

image

This batch also queues 5 graphs, each of which contains one resize and one paste node. In each graph, the nodes get one of the 5 images in the batch collection. The batch node can connect to any number of other nodes. For each queued graph, all connected nodes will get the same image.

Example 3 - Multiple Batches (Product Batching)

When multiple batches are used, they are combined such that all permutations are queued (i.e. the product of the batches is taken).

Workflow

image

Results

image

In this case, the product of the two batches is 15 graphs. Each image of the 3-image batch is used as the base image, and a thumbnail of each tiger is pasted on top of it. We'll call this "product" batching.

Zipped Batching

The batching API supports "zipped" batches, where the batch collections are merged into a single batch.

For example, imagine two batches of 5 images each. As described in the "product" example above, you'd get 5 images * 5 images = 25 graphs. Zipped batching would instead take the first image from each of the two batches and use them together in the first graph, then the second image from each batch for the second graph, and so on, yielding 5 graphs.

Zipped batching is not supported in the UI at this time.
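The difference between product and zipped batching maps neatly onto Python's `itertools.product` and built-in `zip`. This is a conceptual sketch, not Invoke's API; the filenames are made up:

```python
from itertools import product

# Two hypothetical 5-image batch collections.
batch_a = [f"a{i}.png" for i in range(5)]
batch_b = [f"b{i}.png" for i in range(5)]

# Product batching: every pairing of items is queued.
product_graphs = list(product(batch_a, batch_b))

# Zipped batching: the collections are merged pairwise.
zipped_graphs = list(zip(batch_a, batch_b))

print(len(product_graphs), len(zipped_graphs))  # 25 5
```

Note that `zip` requires the collections to line up: it silently stops at the shortest collection, so zipped batches of unequal length would drop the extra items.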

Versus Iterate Nodes

We already support similar functionality with Iterate nodes, so why add batching? In short, Iterate nodes have some technical issues that batching avoids.

Why `Iterate` Nodes are Scary

They result in unbounded graph complexity and size. If you don't know what these words mean, but they sound kinda scary, congrats! You are on the right track. They are indeed scary words.

  • When using Iterate nodes, the graph is expanded with new nodes and edges during execution. Pretty scary.
  • We cannot know ahead of time how much the graph will expand, because iterate nodes' collections are dynamic. Terrifying.
  • Multiple iterate nodes combine via Cartesian product, resulting in combinatorial explosion. Your graph could still be running at the heat death of the universe. Existential dread.

Batch collections are defined up front and don't expand the graph. We know exactly the complexity we are getting into before the graph executes. Sanity restored!
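The "known complexity up front" point can be made concrete: with batching, the queue size is simply the product of the collection lengths, computable before execution begins. A minimal sketch (the variable names are illustrative, not Invoke internals):

```python
from math import prod

# Hypothetical batch collections attached to a workflow,
# e.g. a 3-image batch and a 5-image batch.
batch_sizes = [3, 5]

# Before anything executes, the queue size is fully determined.
queue_size = prod(batch_sizes)
print(queue_size)  # 15

# With Iterate nodes, the equivalent number depends on collections
# produced *during* execution, so it cannot be computed up front.
```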

Batching is also more intuitive - we run exactly this graph, once for each image.

Unlike Iterate nodes, Image Batch nodes' collections cannot be provided by other nodes in the graph. The collection must be defined up-front, so you cannot replace Iterate with Image Batch for all use-cases.

Nevertheless, we suggest using batching where possible.

Other Notes

  • We've added Image Batch nodes first because images are the highest-impact field type, but the batching API supports arbitrary field types. In fact, the Canvas uses both int and str batches internally. Batch nodes for other field types are planned for a future enhancement.
  • If you want to batch over a board, you'll need to drag all images from the board into the batch collection. We'll explore a simpler way to use a board's images for a batch in a future enhancement.
  • It is not possible to combine all outputs from a batch within the same workflow.

Other Changes

Enhancements

  • Support for FLUX IP Adapter v2. We've optimized internal handling for v2, and you may find FLUX IP Adapter v1 results are degraded. Update to v2 to fix this.
  • Updated image collection inputs for nodes. You may now drag images into collections directly.

Fixes

  • Added padding to the metadata recall buttons in the metadata viewer, so they aren't cut off by other UI elements.
  • The progress bar stopped throbbing in the last release. We apologize for this oversight. Throbbing has been restored.
  • Addressed some edge cases that could cause the UI to crash with an error about an entity not found.
  • Updated grid size for SD3.5 models to 16px. Thanks for the heads up @dunkeroni.

Internal

  • Removed the contributed step_param_easing node, which depended on the GPL-3 package easing-functions. This node is now deprecated; if you were using it, please let us know about your use-cases so we can better design inputs where easing is helpful.

Translations

We have had some issues communicating with "walk-in" translators on Weblate, which resulted in already-correct translations being changed. To mitigate this, we are trying a more restricted Weblate translation setup. Access to contribute translations must be granted by @Harvester62. Please @ them in the #translators channel on discord to request access.

We apologize for any inconvenience this change may cause. Thanks to all our translators for their continued efforts!

  • Updated Chinese (Simplified). Thanks @youo0o0!
  • Updated Italian. Thanks @Harvester62!
  • Updated Spanish. Thanks gallegonovato (weblate user)!

Installation and Updating

To install or update, download the latest installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

What's Changed

Full Changelog: v5.4.1...v5.4.2rc1