AI Art Generation Handbook/List of local AI Art WebUI

The WebUIs listed here are current as of June 2024.

A local AI Art WebUI must meet the following criteria to be included here:

  1. It must be free (as in beer), not paid (for example, the commercial Adobe Firefly is excluded).
  2. It must not require online registration (e.g. linking a Google account) or an always-online connection to function (like Makeayo).
  3. It should be notable enough that other users take notice and promote it through word of mouth (or, in this age, social media).

See also: List of AI Art Websites if you want to generate images online or do not have the required hardware.

Name (Author): Remark

  A1111 Web UI (Automatic 1111): One of the earliest and most versatile WebUIs, with built-in support for external extensions. It also has one of the largest support communities (based on GitHub stars).

  ComfyUI (comfyanonymous): Instead of plain text input, it uses node-based programming to build the text-to-image generation workflow as a graph.

  Easy Diffusion (cmdr2): An easy-to-install WebUI with a one-click installer.

  Forge (lllyasviel): A fork of A1111 with optimisations that speed up image generation and improve memory management, reducing out-of-memory (OOM) errors.

  Fooocus (lllyasviel): A WebUI that emulates the ease of use of Midjourney.

  Note: This is the recommended local WebUI for beginners who want to try AI Art generation.

  InvokeAI Stable Diffusion Toolkit (invoke-ai): It has a unified UI for inpainting and outpainting workflows.

  NMKD Stable Diffusion GUI (n00mkrad): This GUI supports AMD GPUs.

  Omost (lllyasviel): Uses a Large Language Model to expand a prompt into detailed regional descriptions, so that different parts of the image are rendered separately and combined into a single image.

  SD.Next (Vladmandic): A fork of A1111 that supports multiple model types, an improved prompt parser, optimisation for multiple platforms, and LoRA training enhancements.

  VoltaML (VoltaML): Recompiles models into high-performance inference runtimes such as TensorRT, TorchScript, ONNX and TVM, generating images up to twice as fast as A1111.
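Several of the WebUIs above (A1111 and its forks such as Forge and SD.Next) also expose a local HTTP API when launched with the --api flag, which lets you script generation instead of clicking through the browser UI. The sketch below is a minimal example against A1111's built-in /sdapi/v1/txt2img endpoint; the default address 127.0.0.1:7860 and the payload values shown are assumptions about a typical local setup, not the only valid configuration.

```python
import json
import urllib.request

# Default local address for an A1111-style WebUI (assumption: launched
# with the --api flag and the standard port 7860).
A1111_URL = "http://127.0.0.1:7860"

def build_txt2img_payload(prompt: str, steps: int = 20,
                          width: int = 512, height: int = 512) -> dict:
    """Assemble the JSON body for the /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
    }

def generate(prompt: str) -> list:
    """Send the request and return the base64-encoded images."""
    payload = build_txt2img_payload(prompt)
    req = urllib.request.Request(
        f"{A1111_URL}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The API returns JSON with an "images" list of base64 strings.
        return json.load(resp)["images"]
```

For example, generate("a watercolor fox") would return a list of base64-encoded PNGs that you can decode and save, provided the WebUI is running locally with its API enabled.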

