Releases: invoke-ai/InvokeAI
InvokeAI 3.0.1 (hotfix 3)
InvokeAI Version 3.0.1
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
- What's New
- Installation and Upgrading
- Getting Started with SDXL
- Known Bugs
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What's New in v3.0.1
- Stable Diffusion XL support in the Text2Image and Image2Image (but not the Unified Canvas).
- Can install and run both diffusers-style and .safetensors-style SDXL models.
- Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
- Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
- The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
- During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
- User interface cleanup to reduce visual clutter and increase usability.
v3.0.1post3 Hotfixes
This release contains a proposed hotfix for the Windows install OSError crashes that began appearing in 3.0.1. In addition, the following bugs have been addressed:
- Corrected an issue in which some SD-1 safetensors models could not be loaded or converted
- The models_dir configuration variable used to customize the location of the models directory is now working properly
- Fixed crashes of the text-based installer when the number of installed LoRAs and other models exceeded 72
- SDXL metadata is now set and retrieved properly
- Corrected post1's crash when performing configure with the --yes flag
- Corrected crashes in the CLI model installer
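As a minimal illustration of the models_dir override (the path is hypothetical, and the key is shown at the top level in the same style as the other invokeai.yaml snippets in these notes; the actual file may group settings under sections), an invokeai.yaml excerpt might look like:
models_dir: /data/invokeai/models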
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
In the event of an aborted install that has left the invokeai
directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
InvokeAI-installer-v3.0.1post3.zip
Upgrading in place
All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the upgrade menu option [9]
- Select "Manually enter the tag name for the version you wish to update to" option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the "Developer's console" option [8]
- Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1post3.zip" --use-pep517 --upgrade
invokeai-configure --root .
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the '*.orig' files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
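For example (the paths shown are hypothetical), you might migrate an old root directory into a new one with:
invokeai-migrate3 --from ~/invokeai --to ~/invokeai-3
or upgrade the old root in place (originals are backed up) with:
invokeai-migrate3 --from ~/invokeai --to ~/invokeai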
Upgrading using pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.1post3
invokeai-configure --yes --skip-sd-weights
Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
- Use the Web UI's Model Manager to select "Import Models" and, when prompted, provide the HuggingFace repo_ids for the two models:
stabilityai/stable-diffusion-xl-base-1.0
stabilityai/stable-diffusion-xl-refiner-1.0
- Download the models manually and cut and paste their paths into the Location field in "Import Models"
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
Known Bugs in 3.0
This is a list of known bugs in 3.0.1post3 as well as features that are planned for inclusion in later releases:
- The merge script isn't working, and crashes during startup (will be fixed soon)
- Inpainting models generated using the A1111 merge module are not loading properly (will be fixed soon)
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/10201235590639...
InvokeAI Version 3.0.1
InvokeAI Version 3.0.1
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
- What's New
- Installation and Upgrading
- Getting Started with SDXL
- Known Bugs
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What's New in v3.0.1
- Stable Diffusion XL support in the Text2Image and Image2Image (but not the Unified Canvas).
- Can install and run both diffusers-style and .safetensors-style SDXL models.
- Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
- Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
- The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
- During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
- User interface cleanup to reduce visual clutter and increase usability.
Recent Changes
Since RC3, the following has changed:
- Fixed crash on Macintosh M1 machines when rendering SDXL images
- Fixed black images when generating on Macintoshes using the Unipc scheduler (falls back to CPU; slow)
Since RC2, the following has changed:
- Added compatibility with Python 3.11
- Updated diffusers to 0.19.0
- Cleaned up console logging - can now change logging level as described in the docs
- Added download of an updated SDXL VAE "sdxl-vae-fix" that may correct certain image artifacts in SDXL-1.0 models
- Prevent web crashes during certain resize operations
Developer changes:
- Reformatted the whole code base with the "black" tool for a consistent coding style
- Add pre-commit hooks to reformat committed code on the fly
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
Upgrading in place
All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the upgrade menu option [9]
- Select "Manually enter the tag name for the version you wish to update to" option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the "Developer's console" option [8]
- Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1.zip" --use-pep517 --upgrade
invokeai-configure --root .
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the '*.orig' files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.1
invokeai-configure --skip-sd-weights
Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
- Use the Web UI's Model Manager to select "Import Models" and, when prompted, provide the HuggingFace repo_ids for the two models:
- stabilityai/stable-diffusion-xl-base-1.0
- stabilityai/stable-diffusion-xl-refiner-1.0
(note that these are preliminary IDs - these notes are being written before the SDXL release)
- Download the models manually and cut and paste their paths into the Location field in "Import Models"
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting max_vram_cache_size
to 6 GB or higher.
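As a sketch (tune the cache sizes to your own hardware), an invokeai.yaml for a card with 12 GB or more of VRAM might use:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 6.0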
Known Bugs in 3.0
This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
- There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.
Getting Help
For support, please use this repository's [GitHub Issues](ht...
InvokeAI 3.0.1 Release Candidate 3
InvokeAI Version 3.0.1
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.1 adds support for rendering with Stable Diffusion XL Version 1.0 directly in the Text2Image and Image2Image panels, as well as many internal changes.
- What's New
- Installation and Upgrading
- Getting Started with SDXL
- Known Bugs
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What's New in v3.0.1
- Stable Diffusion XL support in the Text2Image and Image2Image (but not the Unified Canvas).
- Can install and run both diffusers-style and .safetensors-style SDXL models.
- Download Stable Diffusion XL 1.0 (base and refiner) using the model installer or the Web UI-based Model Manager
- Invisible watermarking, which is recommended for use with Stable Diffusion XL, is now available as an option in the Web UI settings dialogue.
- The NSFW detector, which was missing in 3.0.0, is again available. It can be activated as an option in the settings dialogue.
- During initial installation, a set of recommended ControlNet, LoRA and Textual Inversion embedding files will now be downloaded and installed by default, along with several "starter" main models.
- User interface cleanup to reduce visual clutter and increase usability.
Recent Changes
Since RC3, the following has changed:
- Fixed crash on Macintosh M1 machines when rendering SDXL images
- Fixed black images when generating on Macintoshes using the Unipc scheduler (falls back to CPU; slow)
Since RC2, the following has changed:
- Added compatibility with Python 3.11
- Updated diffusers to 0.19.0
- Cleaned up console logging - can now change logging level as described in the docs
- Added download of an updated SDXL VAE "sdxl-vae-fix" that may correct certain image artifacts in SDXL-1.0 models
- Prevent web crashes during certain resize operations
Developer changes:
- Reformatted the whole code base with the "black" tool for a consistent coding style
- Add pre-commit hooks to reformat committed code on the fly
Known bugs:
- Rendering SDXL-1.0 models causes a crash on certain (all?) Macintosh models with Apple Silicon (MPS)
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
InvokeAI-installer-v3.0.1rc3.zip
Upgrading in place
All users can upgrade from 3.0.0 using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the upgrade menu option [9]
- Select "Manually enter the tag name for the version you wish to update to" option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the "Developer's console" option [8]
- Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1rc3.zip" --use-pep517 --upgrade
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the '*.orig' files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Once 3.0.1 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.1rc3
invokeai-configure
Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
- Use the Web UI's Model Manager to select "Import Models" and, when prompted, provide the HuggingFace repo_ids for the two models:
- stabilityai/stable-diffusion-xl-base-1.0
- stabilityai/stable-diffusion-xl-refiner-1.0
(note that these are preliminary IDs - these notes are being written before the SDXL release)
- Download the models manually and cut and paste their paths into the Location field in "Import Models"
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
Users with 12 GB or more VRAM can reduce the time waiting for the image to start generating by setting max_vram_cache_size
to 6 GB or higher.
Known Bugs in 3.0
This is a list of known bugs in 3.0.1 as well as features that are planned for inclusion in later releases:
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- Face restoration is no longer needed due to the improvement in recent SD 1.x, 2.x and XL models and has been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
- There is no easy way to import a directory of version 2.3 generated images into the 3.0 gallery while preserving metadata. We hope to provide an import script in the not so distant future.
InvokeAI 3.0.0
InvokeAI Version 3.0.0
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
InvokeAI version 3.0.0 represents a major advance in functionality and ease compared with the last official release, 2.3.5.
- What's New
- Installation and Upgrading
- Getting Started with SDXL
- Known Bugs
- Getting Help
- Contributing
- Detailed Change Log
Please use the 3.0.0 release discussion thread for comments on this version, including feature requests, enhancement suggestions and other non-critical issues. Report bugs to InvokeAI Issues. For interactive support with the development team, contributors and user community, you are invited to join the InvokeAI Discord Server.
To learn more about InvokeAI, please see our Documentation Pages.
What's New in v3.0.0
Quite a lot has changed, both internally and externally.
Web User Interface:
- A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
- A Dynamic Prompts interface that lets you generate combinations of prompt elements.
- Preliminary support for Stable Diffusion XL the latest iteration of Stability AI's image generation models.
- A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
- The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
- An experimental Nodes Editor that lets you design and execute complex image generation operations using a point-and-click interface. To activate this, please use the settings icon at the upper right of the Web UI.
- Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM used by each model by half.
- Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
- Long prompt support (>77 tokens).
- Memory and speed improvements.
The WebUI can now be launched from the command line using either invokeai-web
(preferred new way) or invokeai --web
(deprecated old way).
Command Line Tool
The previous command line tool has been removed and replaced with a new developer-oriented tool invokeai-node-cli
that allows you to experiment with InvokeAI nodes.
Installer
The console-based model installer, invokeai-model-install
has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.
Internal
Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.0 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
If you have an earlier version of InvokeAI installed, we strongly recommend that you install into a new directory, such as invokeai-3
instead of the previously-used invokeai
directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
Upgrading in place
All users can upgrade from the 3.0 beta releases using the launcher's "upgrade" facility. If you are on a Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the upgrade menu option [9]
- Select "Manually enter the tag name for the version you wish to update to" option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the "Developer's console" option [8]
- Type the following command:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0.zip" --use-pep517 --upgrade
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the '*.orig' files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
Upgrading using pip
Once 3.0.0 is released, developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.0
To upgrade to an xformers version if you are not currently using xformers, use:
pip install --use-pep517 --upgrade InvokeAI[xformers]
You can see which versions are available by going to The PyPI InvokeAI Project Page
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of the current time (July 2023) SDXL had not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature as soon as SDXL 1.0 is officially released.
SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.
To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen). Alternatively, select launcher option [6] "Change InvokeAI startup options" and paste the HF token into the indicated field.
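If you prefer to place the token in your environment directly, a sketch would be the following (the variable name is the one the huggingface_hub library generally reads; the token value is a placeholder):
export HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxxxxxxxxxx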
Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch th...
InvokeAI Version 3.0.0 Beta-10
We are pleased to announce a new beta release of InvokeAI 3.0 for user testing.
- What's New
- Getting Started with SDXL
- What's Missing
- Installation and Upgrading
- Getting Help
- Development Roadmap
- Detailed Change Log
Please use the 3.0.0 release discussion thread, InvokeAI Issues, or the InvokeAI Discord Server to report bugs and other issues.
Recent fixes
- Stable Diffusion XL (SDXL) 0.9 support in the node editor. See Getting Started with SDXL
- Stable Diffusion XL models added to the optional starter models presented by the model installer
- Memory and performance improvements for XL models (thanks to @StAlKeR7779)
- Image upscaling using the latest version of RealESRGAN (fixed thanks to @psychedelicious )
- VRAM optimizations to allow SDXL to run on 8 GB VRAM environments.
- Feature-complete Model Manager in the Web GUI to provide online model installation, configuration and deletion.
- Recommended LoRA and ControlNet models added to model installer.
- UI tweaks, including updated hotkeys.
- Translation and tooltip fixes
- Documentation fixes, including description of all options in
invokeai.yaml
- Improved support for half-precision generation on Macintoshes.
- Improved long prompt support.
- Fix "Package 'invokeai' requires a different Python:" error
Known bug in this beta If you are installing InvokeAI completely from scratch, on the very first image generation you may get a black screen. Just reload the web page and the problem will be resolved for this and subsequent generations.
What's New in v3.0.0
Quite a lot has changed, both internally and externally
Web User Interface:
- A ControlNet interface that gives you fine control over such things as the posture of figures in generated images by providing an image that illustrates the end result you wish to achieve.
- A Dynamic Prompts interface that lets you generate combinations of prompt elements.
- SDXL support
- A redesigned user interface which makes it easier to access frequently-used elements, such as the random seed generator.
- The ability to create multiple image galleries, allowing you to organize your generated images topically or chronologically.
- A graphical node editor that lets you design and execute complex image generation operations using a point-and-click interface (see below for more about nodes)
- Macintosh users can now load models at half-precision (float16) in order to reduce the amount of RAM used by each model by half.
- Advanced users can choose earlier CLIP layers during generation to produce a larger variety of images.
- Long prompt support (>77 tokens)
- Schedulers that did not work properly for Canvas inpainting have been fixed.
The WebUI can now be launched from the command line using either invokeai-web
(preferred new way) or invokeai --web
(deprecated old way).
Command Line Tool
- The previous command line tool has been removed and replaced with a new developer-oriented tool
invokeai-node-cli
that allows you to experiment with InvokeAI nodes.
Installer
The console-based model installer, invokeai-model-install
has been redesigned and now provides tabs for installing checkpoint models, diffusers models, ControlNet models, LoRAs, and Textual Inversion embeddings. You can install models stored locally on disk, or install them using their web URLs or Repo_IDs.
Internal
Internally the code base has been completely rewritten to be much easier to maintain and extend. Importantly, all image generation options are now represented as "nodes", which are small pieces of code that transform inputs into outputs and can be connected together into a graph of operations. Generation and image manipulation operations can now be easily extended by writing new InvokeAI nodes.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. As of the current time (July 18, 2023) SDXL had not been officially released, but a pre-release 0.9 version is widely available. InvokeAI provides support for SDXL image generation via its Nodes Editor, a user interface that allows you to create and customize complex image generation pipelines using a drag-and-drop interface. Currently SDXL generation is not directly supported in the text2image, image2image, and canvas panels, but we expect to add this feature in the next few days.
SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. For best results, the initial image is handed off from the base to the refiner before all the denoising steps are complete. It is not clear whether SDXL 1.0, when it is released, will require the refiner.
To experiment with SDXL, you'll need the "base" and "refiner" models. Currently a beta version of SDXL, version 0.9, is available from HuggingFace for research purposes. To obtain access, you will need to register with HF at https://huggingface.co/join, obtain an access token at https://huggingface.co/settings/tokens, and add the access token to your environment. To do this, run the InvokeAI launcher script, activate the InvokeAI virtual environment with option [8], and type the command huggingface-cli login. Paste in your access token from HuggingFace and hit return (the token will not be echoed to the screen).
Now navigate to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9 and fill out the access request form for research use. You will be granted instant access to download. Next launch the InvokeAI console-based model installer by selecting launcher option [5] or by activating the virtual environment and giving the command invokeai-model-install. In the STARTER MODELS section, select the checkboxes for stable-diffusion-xl-base-0-9 and stable-diffusion-xl-refiner-0-9. Press Apply Changes to install the models and keep the installer running, or Apply Changes and Exit to install the models and exit back to the launcher menu.
Alternatively you can install these models from the Web UI Model Manager (cube at the bottom of the left-hand panel) and navigate to Import Models. In the field labeled Location type in the repo id of the base model, which is stabilityai/stable-diffusion-xl-base-0.9. Press Add Model and wait for the model to download and install (the page will freeze while this is happening). After receiving confirmation that the model installed, repeat with stabilityai/stable-diffusion-xl-refiner-0.9.
Note that these are large models (12 GB each) so be prepared to wait a while.
To use the installed models enter the Node Editor (inverted "Y" in the left-hand panel) and upload either the SDXL base-only or SDXL base+refiner invocation graphs. This will load and display a flow diagram showing the steps in generating an SDXL image.
Ensure that the SDXL Model Loader (leftmost column, bottom) is set to load the SDXL base model on your system, and that the SDXL Refiner Model Loader (third column, top) is set to load the SDXL refiner model on your system. Find the nodes that contain the example prompt and style ("bluebird in a sakura tree" and "chinese classical painting") and replace them with the prompt and style of your choice. Then press the Invoke button. If all goes well, an image will be generated and added to the image gallery.
Be aware that SDXL support is an experimental feature and is not 100% stable. When designing your own SDXL pipelines, be aware that there are certain settings that will have a disproportionate effect on image quality. In particular, the latents decode VAE step must be run at fp32
precision (using a slider at the bottom of the VAE node), and that images will change dramatically as the denoising threshold used by the refiner is adjusted.
Also be aware that SDXL requires at least 8 GB of VRAM in order to render 1024x1024 images. For best performance, we recommend the following settings in invokeai.yaml:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.0
What's Missing in v3.0.0
Some features are missing or not quite working yet. These include:
- SDXL models can only be used in the node editor, and not in the text2img, img2img or unified canvas panels.
- A migration path to import 2.3-generated images into the 3.0 image gallery
- Diffusers-style LoRA files (with a HuggingFace repository ID) can be imported but do not run. There are very few of these models and they will not be supported at release time.
- Various minor glitches in image gallery behavior.
The following 2.3 features are not available:
- Variation generation (may be added in time for the final release)
- Perlin Noise (will likely not be added)
- Noise Threshold (available through Node Editor)
- Symmetry (will likely not be added)
- Seamless tiling (will likely not be added)
- Face restoration (no longer needed, will not be added)
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install...
InvokeAI 2.3.5.post2
We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post2.
- What's New
- Installation and Upgrading
- Known Bugs
- Getting Help
- Development Roadmap
- Detailed Change Log
- Acknowledgements
What's New in 2.3.5.post2
This is a bugfix release. In previous versions, the built-in updating script did not update the Xformers library when the torch library was upgraded, leaving people with a version that ran on CPU only. Install this version to fix the issue so that it doesn't recur when updating to InvokeAI 3.0.0 and later versions.
As a bonus, this version allows you to apply a checkpoint VAE, such as vae-ft-mse-840000-ema-pruned.ckpt
to a diffusers model, without worrying about finding the diffusers version of the VAE. From within the web Model Manager, choose the diffusers model you wish to change, press the edit button, and enter the Location of the VAE file of your choice. The field will now accept either a .ckpt file, or a diffusers directory.
Installation / Upgrading
To install 2.3.5.post2 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.post2.zip
If you are using the Xformers library, and running v2.3.5.post1 or earlier, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai
directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:
- Start the launcher script and select option # 8 - Developer's console.
- Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade
If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the "[xformers]" part. From v2.3.5.post2
onward, the updater script will work properly with Xformers installed.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==2.3.5.post2
To upgrade to an xformers
version if you are not currently using xformers
, use:
pip install --use-pep517 --upgrade InvokeAI[xformers]
You can see which versions are available by going to The PyPI InvokeAI Project Page
Known Bugs in 2.3.5.post2
These are known bugs in the release.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Development Roadmap
This is very likely to be the last release on the v2.3
source code branch. All new features are being added to the main
branch. At the current time (mid-May, 2023), the main
branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3
source code branch, or use the pre-nodes
tag from the main
branch. Developers seeking to contribute to InvokeAI should use the head of the main
branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
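For reference, checking out these code lines from a local clone might look like the following (the clone location is illustrative; the branch and tag names are those mentioned above):
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
git checkout v2.3        # stable 2.3 branch
git checkout pre-nodes   # last pre-nodes snapshot of main
git checkout main        # development head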
Full Changelog: v2.3.5...v2.3.5.post2
What's Changed
- autoconvert legacy VAEs by @lstein in #3235
- 2.3.5 fixes to automatic updating and vae conversions by @lstein in #3444
Full Changelog: v2.3.5.post1...v2.3.5.post2
InvokeAI Version 2.3.5.post1
We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post1.
- What's New
- Installation and Upgrading
- Known Bugs
- Getting Help
- Development Roadmap
- Detailed Change Log
- Acknowledgements
What's New in 2.3.5.post1
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
Here are the new library versions:
| Library   | Version |
|-----------|---------|
| Torch     | 2.0.0   |
| Diffusers | 0.16.1  |
| Xformers  | 0.0.19  |
| Compel    | 1.1.5   |
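To confirm the versions actually installed in your environment, you can open the developer's console (launcher option 8) and run:
pip show torch diffusers xformers compel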
Other Improvements
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running (thanks to @pedantic79 for this).
Installation / Upgrading
To install or upgrade to InvokeAI 2.3.5.post1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.post1.zip
If you are using the Xformers library, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai
directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:
- Start the launcher script and select option # 8 - Developer's console.
- Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade
If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the "[xformers]" part.
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.5.post1. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.
Known Bugs in 2.3.5.post1
These are known bugs in the release.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Development Roadmap
This is very likely to be the last release on the v2.3
source code branch. All new features are being added to the main
branch. At the current time (mid-May, 2023), the main
branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".
If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3
source code branch, or use the pre-nodes
tag from the main
branch. Developers seeking to contribute to InvokeAI should use the head of the main
branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Full Changelog: v2.3.4.post1...v2.3.5-rc1
What's Changed
- Update dependencies to get deterministic image generation behavior (2.3 branch) by @lstein in #3353
- [Bugfix] Update check failing because process disappears by @pedantic79 in #3334
- Turn the HuggingFaceConceptsLib into a singleton to prevent redundant connections by @lstein in #3337
New Contributors
- @pedantic79 made their first contribution in #3334
Full Changelog: v2.3.5...v2.3.5.post1
InvokeAI 2.3.5
We are pleased to announce a features update to InvokeAI with the release of version 2.3.5. This is currently a pre-release for community testing and bug reporting.
- What's New
- Installation and Upgrading
- Known Bugs
- Getting Help
- Development Roadmap
- Detailed Change Log
- Acknowledgements
What's New in 2.3.5
This release expands support for additional LoRA and LyCORIS models, upgrades diffusers
to 0.15.1, and fixes a few bugs.
LoRA and LyCORIS Support Improvement
- A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
- Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
- Support for the newer LoKR LyCORIS files has been added.
Diffusers 0.15.1
- This version updates the diffusers module to version 0.15.1 and is no longer compatible with 0.14. This provides a number of performance improvements and bug fixes.
Performance Improvements
- When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
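The change log below credits an increased sha256 chunk size for this speedup. As a minimal sketch of chunked hashing in Python (not InvokeAI's actual implementation; the chunk size is illustrative):
import hashlib

def file_sha256(path: str, chunk_size: int = 16 * 1024 * 1024) -> str:
    # Reading in large chunks means far fewer round trips on slow
    # network-mounted or WSL filesystems than many small reads.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()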
Bug Fixes
- The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.
Installation / Upgrading
To install or upgrade to InvokeAI 2.3.5 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh
(Macintosh, Linux) or install.bat
(Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.zip
To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.5. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.5. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to The PyPI InvokeAI Project Page.
Known Bugs in 2.3.5
These are known bugs in the release.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
- If the `xformers` memory-efficient attention module is used, each image generated with the same prompt and settings will be slightly different. `xformers 0.0.19` reduces or eliminates this problem, but hasn't been extensively tested with InvokeAI. If you wish to upgrade, you may do so by entering the InvokeAI "developer's console" and giving the command `pip install xformers==0.0.19`. You may see a message about InvokeAI being incompatible with this version, which you can safely ignore. Be sure to report any unexpected behavior to the Issues pages.
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Development Roadmap
This is very likely to be the last release on the `v2.3` source code branch. All new features are being added to the `main` branch. At the current time (late April, 2023), the `main` branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".
If you are looking for a stable version of InvokeAI, either use this release, install from the `v2.3` source code branch, or use the `pre-nodes` tag from the `main` branch. Developers seeking to contribute to InvokeAI should use the head of the `main` branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.
Change Log
- fix the "import from directory" function in console model installer by @lstein in #3211
- [Feature] Add support for LoKR LyCORIS format by @StAlKeR7779 in #3216
- CODEOWNERS update - 2.3 branch by @lstein in #3230
- Enable LoRAs to patch the text_encoder as well as the unet by @damian0815 in #3214
- improvements to the installation and upgrade processes by @lstein in #3186
- Revert "improvements to the installation and upgrade processes" by @lstein in #3266
- [Enhancement] distinguish v1 from v2 LoRA models by @lstein in #3175
- increase sha256 chunksize when calculating model hash by @lstein in #3162
- bump version number to 2.3.5-rc1 by @lstein in #3267
- [Bugfix] Renames in 0.15.0 diffusers by @StAlKeR7779 in #3184
New Contributors and Acknowledgements
- @AbdBarho contributed the checksum performance improvements
- @StAlKeR7779 (Sergey Borisov) contributed the LoKR support, did the diffusers 0.15 port, and cleaned up the code in multiple places.
Many thanks to these individuals, as well as @damian0815 for his contribution to this release.
Full Changelog: v2.3.4.post1...v2.3.5-rc1
InvokeAI Version 2.3.4.post1 - A Stable Diffusion Toolkit
We are pleased to announce a features update to InvokeAI with the release of version 2.3.4.
Update: 13 April 2023 - `2.3.4.post1` is a hotfix that corrects an installer crash resulting from an update to the upstream `diffusers` library. If you have recently tried to install 2.3.4 and experienced a crash relating to "crossattention," this release will fix the issue.
What's New in 2.3.4
This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
LoRA and LyCORIS Support
LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix `.safetensors`. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:
- Download the `.safetensors` files of your choice and place them in `/path/to/invokeai/loras`. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
- Add `withLora(lora-file,weight)` to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named `loras/sushi.safetensors` is present:
  - `family sitting at dinner table eating sushi withLora(sushi,0.9)`
  - `family sitting at dinner table eating sushi withLora(sushi, 0.75)`
  - `family sitting at dinner table eating sushi withLora(sushi)`

  Multiple `withLora()` prompt fragments are allowed (see the parsing sketch after this list for how such fragments can be recognized). The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
- Generate as you usually would! If you find that the image is too "crisp" try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, and vice versa. This will trigger a non-fatal error message and generation will not proceed.
- You can change the location of the `loras` directory by passing the `--lora_directory` option to `invokeai`.
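For readers curious how `withLora(...)` fragments might be pulled out of a prompt, the sketch below uses a simple regular expression. It is purely illustrative: InvokeAI's real prompt parser is more sophisticated, and the function name here is hypothetical.

```python
# Illustrative sketch only (not InvokeAI's actual prompt parser): pull
# withLora(name) / withLora(name, weight) fragments out of a prompt and
# return the cleaned prompt plus (lora_name, weight) pairs.
import re

LORA_PATTERN = re.compile(r"withLora\(\s*([^,()\s]+)\s*(?:,\s*(-?\d*\.?\d+)\s*)?\)")

def split_lora_fragments(prompt: str):  # hypothetical helper name
    loras = [(name, float(weight) if weight else 1.0)
             for name, weight in LORA_PATTERN.findall(prompt)]
    cleaned = re.sub(r"\s{2,}", " ", LORA_PATTERN.sub("", prompt)).strip()
    return cleaned, loras

# Example:
# split_lora_fragments("family sitting at dinner table eating sushi withLora(sushi,0.9)")
# -> ("family sitting at dinner table eating sushi", [("sushi", 0.9)])
```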
New WebUI LoRA and Textual Inversion Buttons
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.
Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted `withLora()` or `<textual-inversion>` prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in `/path/to/invokeai/embeddings`. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (those with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
Minor features and fixes
This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the `pip` utility is kept up to date.
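As a rough illustration of the "remember the last model" behavior, the sketch below persists the most recently used model name to a small file and reads it back at launch. The file location, default model name, and helper names are made-up examples, not where InvokeAI actually stores this setting.

```python
# Illustrative sketch only: persist the name of the last-used model so it can
# be restored on the next launch. Paths and names here are hypothetical.
import json
from pathlib import Path

STATE_FILE = Path("~/invokeai/.last_model.json").expanduser()  # hypothetical path

def save_last_model(model_name: str) -> None:
    STATE_FILE.write_text(json.dumps({"last_model": model_name}))

def load_last_model(default: str = "stable-diffusion-1.5") -> str:
    try:
        return json.loads(STATE_FILE.read_text())["last_model"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return default
```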
Installation / Upgrading
To install or upgrade to InvokeAI 2.3.4 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script `install.sh` (Macintosh, Linux) or `install.bat` (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.4.post1.zip
To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the `invoke.sh`/`invoke.bat` launcher script and choose the option to update to 2.3.4. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the `invokeai` directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.4`. To upgrade to an `xformers` version if you are not currently using `xformers`, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to The PyPI InvokeAI Project Page. (Pre-release note: this will only work after the official release.)
Known Bugs in 2.3.4
These are known bugs in the release.
- The Ancestral DPMSolverMultistepScheduler (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
Getting Help
Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.
Change Log
- [FEATURE] Lora support in 2.3 by @lstein in #3072
- [FEATURE] LyCORIS support in 2.3 by @StAlKeR7779 in #3118
- [Bugfix] Pip - Access is denied durring installation by @StAlKeR7779 in #3123
- ui: translations update from weblate by @weblate in #2804
- [Enhancement] save name of last model to disk whenever model changes by @lstein in #3102
New Contributors and Acknowledgements
- @felorhik contributed the vast bulk of the LoRA implementation in #2712
- @felorhik, @neecapp, and @StAlKeR7779 (Sergey Borisov) all contributed to the v2.3 backport in #3072
- @StAlKeR7779 (Sergey Borisov) contributed LyCORIS support in #3118, plus multiple bugfixes to the LoRA manager.
Many thanks to these individuals, as well as @blessedcoolant and @damian0815 for their contributions to this release.
Full Changelog: v2.3.3...v2.3.4rc1
InvokeAI Version 2.3.3 - A Stable Diffusion Toolkit
We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.3.
What's New in 2.3.3
This is a bugfix and minor feature release.
Bugfixes
Since version 2.3.2 the following bugs have been fixed:
Bugs
- When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
- Textual inversion will select an appropriate batch size based on whether `xformers` is active, and will default to `xformers` enabled if the library is detected.
- The batch script log file names have been fixed to be compatible with Windows.
- Occasional corruption of the `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected (a sketch of one possible recovery strategy appears after this list).
- Support loading of legacy config files that have no personalization (textual inversion) section.
- An infinite loop when opening the developer's console from within the `invoke.sh` script has been corrected.
- Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.
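As mentioned in the `.next_prefix` bugfix above, one way such a corrupted counter can be detected and rebuilt is sketched below. This is an illustrative reconstruction, not InvokeAI's exact logic; the output-file naming pattern and the function name are assumptions.

```python
# Illustrative sketch (not InvokeAI's exact logic): validate the .next_prefix
# counter and, if it is missing or corrupted, rebuild it from the numbered
# output files already on disk (assumed here to be named like "000123.<seed>.png").
import re
from pathlib import Path

def next_output_prefix(outputs_dir: Path) -> int:  # hypothetical helper name
    marker = outputs_dir / ".next_prefix"
    try:
        value = int(marker.read_text().strip())
        if value >= 0:
            return value
    except (FileNotFoundError, ValueError):
        pass  # fall through to recovery below
    # Recovery: derive the counter from the highest numbered file present.
    numbers = [int(m.group(1))
               for p in outputs_dir.glob("*.png")
               if (m := re.match(r"(\d+)\.", p.name))]
    recovered = max(numbers, default=-1) + 1
    marker.write_text(str(recovered))
    return recovered
```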
Enhancements
- It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
- The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI
embeddings
directory. - If no
--model
is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched. - On Linux systems, the
invoke.sh
launcher now uses a prettier console-based interface. To take advantage of it, install thedialog
package using your package manager (e.g.sudo apt install dialog
). - When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
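The sketch below shows one way such like-named sidecar files can be discovered next to a legacy checkpoint. It is a simplified illustration of the naming convention above, not InvokeAI's actual loader, and the function name is hypothetical.

```python
# Illustrative sketch (not InvokeAI's actual loader): look for a like-named
# custom config (.yaml) and VAE (.vae.pt or .vae.safetensors) sitting next to
# a legacy checkpoint, following the naming convention shown above.
from pathlib import Path
from typing import Optional

def find_sidecar_files(model_path: Path) -> dict:  # hypothetical helper name
    base = model_path.parent / model_path.stem          # e.g. models/my-favorite-model
    config = Path(f"{base}.yaml")
    vae_candidates = [Path(f"{base}.vae.pt"), Path(f"{base}.vae.safetensors")]
    vae: Optional[Path] = next((p for p in vae_candidates if p.exists()), None)
    return {"config": config if config.exists() else None, "vae": vae}

# Example: find_sidecar_files(Path("models/my-favorite-model.ckpt")) might return
# {"config": Path("models/my-favorite-model.yaml"), "vae": None}
```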
Installation / Upgrading
To install or upgrade to InvokeAI 2.3.3 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script `install.sh` (Macintosh, Linux) or `install.bat` (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
To update from 2.3.1 or 2.3.2 you may use the "update" option (choice 6) in the `invoke.sh`/`invoke.bat` launcher script and choose the option to update to 2.3.3.
Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the `invokeai` directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running `pip install --use-pep517 --upgrade InvokeAI`. You may specify a particular version by adding the version number to the command, as in `InvokeAI==2.3.3`. To upgrade to an `xformers` version if you are not currently using `xformers`, use `pip install --use-pep517 --upgrade InvokeAI[xformers]`. You can see which versions are available by going to The PyPI InvokeAI Project Page.
Known Bugs in 2.3.3
These are known bugs in the release.
- The Ancestral DPMSolverMultistepScheduler (`k_dpmpp_2a`) sampler is not yet implemented for `diffusers` models and will disappear from the WebUI Sampler menu when a `diffusers` model is selected.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the `codeformer.pth` face restoration model, as well as the `CIDAS/clipseg` and `runwayml/stable-diffusion-v1.5` models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
What's Changed
- Enhance model autodetection during import by @lstein in #3043
- Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in #3058
- Add support for the TI embedding file format used by `negativeprompts.safetensors` by @lstein in #3045
- Keep torch version at 1.13.1 by @JPPhoto in #2985
- Fix textual inversion documentation and code by @lstein in #3015
- fix corrupted outputs/.next_prefix file by @lstein in #3020
- fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
- Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
- prevent infinite loop when launching developer's console by @lstein in #3016
- Prettier console-based frontend for `invoke.sh` on Linux systems with "dialog" installed, by Joshua Kimsey.
- ROCM debugging recipe from @EgoringKosmos
Full Changelog: v2.3.2.post1...v2.3.3-rc1
Acknowledgements
Many thanks to @psychedelicious, @blessedcoolant (Vic), @JPPhoto (Jonathan Pollack), @ebr (Eugene Brodsky), @JoshuaKimsey, @EgoringKosmos, and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.
Full Changelog: v2.3.2.post1...v2.3.3