ComfyUI Example Workflows


ComfyUI is a node-based graphical user interface for Stable Diffusion, created by comfyanonymous in 2023. Unlike tools that only give you text fields to fill in with prompts and settings, ComfyUI breaks a workflow down into rearrangeable elements called nodes: you construct an image generation pipeline by chaining blocks together, such as loading a checkpoint model, entering a prompt and specifying a sampler. The best way to get to grips with how it works is to explore practical examples, and that is what this post collects: workflows you can download, import and try out straight away, covering txt2img, img2img, upscaling, checkpoint merging, ControlNet, inpainting and more. The workflows are meant as a learning exercise - they are by no means "the best" or the most optimized - but they should give you a good understanding of how ComfyUI works. A list of example workflows is also maintained in the official ComfyUI repo, and the workflows presented in this article can be downloaded from the Prompting Pixels site or from the sidebar.

A reminder before we start: ComfyUI embeds the full workflow as metadata in the images it generates, so you can save any of the example images and drag them onto the ComfyUI window (or use the Load button) to reconstruct the exact graph that produced them; a short script after the list below shows how to read that metadata yourself. Saved checkpoints work the same way - they contain the full workflow used to generate them and can be loaded in the UI just like images.

We'll start with the most straightforward text-to-image process - by the end of this article you will have a fully functioning text to image workflow built entirely from scratch - and then move on to examples such as:

- Area composition with the ConditioningSetArea node, including an image built from four areas: night, evening, day and morning.
- Merging three different checkpoints using simple block merging, where the input, middle and output blocks of the UNet can each be given a different ratio.
- Inpainting, using an image that has had part of it erased to alpha in GIMP so that the alpha channel serves as the mask.
- Stable Zero123, a diffusion model that, given an image of an object on a simple background, generates images of that object from different angles.
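Because the workflow travels inside the image file itself, you can also inspect it programmatically. Below is a minimal sketch of pulling the embedded graph out of a generated PNG with Pillow; the "workflow" and "prompt" text-chunk keys are what current ComfyUI versions appear to use, so treat them as an assumption and inspect img.info if they differ on your files.

```python
import json
from PIL import Image

img = Image.open("example_workflow.png")

# ComfyUI stores its graph in PNG text chunks; Pillow exposes them via img.info.
workflow_json = img.info.get("workflow")   # editable graph (assumed key)
prompt_json = img.info.get("prompt")       # executed graph (assumed key)

if workflow_json:
    workflow = json.loads(workflow_json)
    print(f"Embedded workflow with {len(workflow.get('nodes', []))} nodes")
else:
    print("No embedded workflow found - the image may have been re-saved or stripped.")
```

Note that re-saving an image through another editor, or uploading it to a site that strips metadata, will usually discard these chunks, which is why the examples ask you to save the original files.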
Getting started

The default startup workflow is an easy starting point: a simple text-to-image graph using Stable Diffusion 1.5, and the first thing you should run. Before running it, one small modification is useful if you want to preview generated images without saving every one of them: right-click the Save Image node and select Remove (a Preview Image node can be used in its place). I then recommend enabling Extra Options -> Auto Queue in the interface, pressing Queue Prompt once, and starting to write your prompt.

One of the best parts about ComfyUI is how easy it is to download and swap between workflows. Starting from a blank canvas can be a little intimidating, but bringing in an existing workflow gives you a starting point with a set of nodes all ready to go. To load a workflow, click the Load button on the right sidebar and select a workflow .json file (the examples in this guide use a C:\Downloads\ComfyUI\workflows folder), or simply drag a previously generated image onto the window. If an example ships with input files and folders, put them under ComfyUI\input in the ComfyUI root directory before running it. The example workflows are designed for readability: execution flows from left to right and from top to bottom, so you should be able to follow the "spaghetti" without moving nodes around. There is also a set of versatile workflow templates designed to suit a diverse range of projects and compatible with any SD1.5 checkpoint model; the initial set includes three templates, starting with Simple and Intermediate.

Models and folders

Model files live in dedicated folders under ComfyUI/models. Upscale models such as ESRGAN go in models/upscale_models; load them with the UpscaleModelLoader node and apply them with the ImageUpscaleWithModel node. The Flux Schnell diffusion model weights go in the ComfyUI/models/unet/ folder. If you have a previous installation of ComfyUI with models, or would like to use models stored in an external location, you can reference them instead of re-downloading: go to ComfyUI_windows_portable\ComfyUI\, rename extra_model_paths.yaml.example to extra_model_paths.yaml, and open the YAML file in a code or text editor to point it at your folders.
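Here is a sketch of what that file can look like when pointing ComfyUI at the models of an existing Automatic1111 installation. The section names and keys below follow the bundled extra_model_paths.yaml.example, but treat the exact keys and paths as assumptions and check that file before relying on them.

```yaml
# extra_model_paths.yaml - assumed layout, adapted from the bundled .example file
a111:
    base_path: D:/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    embeddings: embeddings
    hypernetworks: models/hypernetworks
    controlnet: models/ControlNet
```

After saving the file, restart ComfyUI so the extra paths are picked up.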
Img2Img, inpainting and area composition

Img2Img works by loading an image such as the example input image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0, so part of the original structure survives. For restoration-style work there is also a workflow created by C. Pinto built around SUPIR (Scaling-UP Image Restoration), a method that harnesses a generative prior and the power of model scaling; leveraging multi-modal techniques, it marks a significant advance in intelligent and realistic image restoration, with model scaling acting as a pivotal catalyst for quality.

The inpainting examples use an image that has had part of it erased to alpha, with the alpha channel serving as the mask: inpainting a cat and inpainting a woman with the v2 inpainting model are both shown, and the same approach also works with non-inpainting models. The area composition examples demonstrate the ConditioningSetArea node, including the four-area night/evening/day/morning image mentioned above and an area composition done with Anything-V3 followed by a second pass with AbyssOrangeMix2_hard. For GLIGEN, pruned versions of the supported model files are available for download; put them in the ComfyUI/models/gligen directory.

ControlNet and T2I adapters

Note that in these examples the raw image is passed directly to the ControlNet or T2I adapter, so each ControlNet/T2I adapter needs that image to already be in the specific format it expects - depth maps, canny edge maps and so on, depending on the model - if you want good results; a short sketch of producing such a map yourself follows below. The examples include the Canny ControlNet, the Inpaint ControlNet (the example input image is provided), mixing multiple ControlNets, OpenPose ControlNet for SDXL, and a two-pass workflow that uses AnythingV3 with the ControlNet for the first pass and AOM3A3 (Abyss Orange Mix 3) with its VAE, without the ControlNet, for the second pass. SD3 ControlNets by InstantX are also supported, and SD3 performs very well with the negative conditioning zeroed out. For the Stable Cascade examples the ControlNet files were renamed with a stable_cascade_ prefix, for example stable_cascade_canny.safetensors and stable_cascade_inpainting.safetensors.
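Inside ComfyUI the conversion is normally handled by dedicated preprocessor nodes, but if you want to prepare a map yourself, the sketch below shows what a Canny edge map actually is, using OpenCV outside of ComfyUI. The thresholds are arbitrary example values, not ones recommended by any particular ControlNet.

```python
import cv2

# Turn an ordinary photo into the kind of edge map a Canny ControlNet expects.
image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 100, 200)  # example thresholds: lower/upper hysteresis
cv2.imwrite("canny_map.png", edges)
```

Feeding a raw photo to a ControlNet that expects a map like this is one of the most common reasons for poor results.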
Outpainting, Flux and SD3

Expanding the borders of an image (outpainting) is straightforward in ComfyUI, and you have a couple of options: basic outpainting through native nodes, or the experimental ComfyUI-LaMA-Preprocessor custom node. You can also adapt the inpainting workflows for outpainting; there is an example using the anythingV3 model.

Flux Schnell is a distilled 4-step model, and its example image can be loaded or dragged into ComfyUI to get the workflow. Beyond the basic Flux Schnell graph there is an All-in-One FluxDev workflow that combines several techniques for generating images with the FluxDev model - img2img and text-to-img, an LLM-generated prompt, LoRAs, Face Detailer and Ultimate SD Upscaler - as well as a Flux all-in-one ControlNet workflow using a GGUF model. If you are new to Flux, there is a guide to setting up ComfyUI on a Windows computer to run it, covering an introduction to Flux.1, an overview of the different Flux.1 versions, hardware requirements, and how to install and use Flux.1 with ComfyUI. Stable Diffusion 3 Medium, the open-source release of Stability AI's latest model, can likewise be run in a local Windows ComfyUI setup.

Hypernetworks and LoRAs

Hypernetworks are patches applied on the main MODEL: to use them, put the files in the models/hypernetworks directory and load them with the Hypernetwork Loader node. The LoRA examples demonstrate how to use LoRAs, and all LoRA flavours - LyCORIS, LoHa, LoKr, LoCon and so on - are used the same way.
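For intuition about what "patching" a model means here, the sketch below applies a LoRA-style low-rank update to a single weight matrix. This is a conceptual illustration only - ComfyUI's loaders patch every targeted layer, and the LyCORIS variants use other decompositions - and the shapes, rank and strength value are made-up examples.

```python
import torch

out_features, in_features, rank = 320, 768, 16
strength = 0.8  # roughly what the strength setting on a LoRA loader controls

W = torch.randn(out_features, in_features)           # original layer weight
lora_down = torch.randn(rank, in_features) * 0.01    # low-rank factors learned during training
lora_up = torch.randn(out_features, rank) * 0.01

W_patched = W + strength * (lora_up @ lora_down)     # same shape as W, cheap to store and swap
print(W_patched.shape)  # torch.Size([320, 768])
```

Because the update is simply added on top of the existing weights, several LoRAs can be stacked on the same model by summing their contributions.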
Custom nodes, the Manager and community resources

Some workflows, such as the Clarity Upscale workflow, include custom nodes that aren't part of base ComfyUI. Once such a workflow is loaded, go into the ComfyUI Manager and click Install Missing Custom Nodes; this should update the installation and may ask you to click restart. Always refresh your browser and click refresh in the ComfyUI window after adding models or custom nodes, and using ComfyUI Manager in general is recommended so that your workflow isn't lost. Well-documented node packs worth knowing about include the ComfyUI Inspire Pack (whose KSampler Inspire node adds the Align Your Steps scheduler for improved image quality), ComfyUI IPAdapter Plus, ComfyUI InstantID (Native), ComfyUI Essentials and ComfyUI FaceAnalysis, not to mention their documentation and video tutorials. One troubleshooting note, translated from the IPAdapter discussion: make sure ComfyUI itself and ComfyUI_IPAdapter_plus are updated to the latest version, and if you hit a "name 'round_up' is not defined" error, see THUDM/ChatGLM2-6B#272 and update cpm_kernels with pip install cpm_kernels or pip install -U cpm_kernels.

The IPAdapter example started from the two-image workflow in the ComfyUI IPAdapter node repository; two more sets of nodes were then created, from Load Images through to the IPAdapters, with the masks adjusted so that each set only affects a specific section of the whole image. A face-swap result example shows a new face created from the faces of four different actresses. There is also an unCLIP workflow for mixing multiple images together; the input image for it can be found on the unCLIP example page. For further study there is the diffustar/comfyui-workflow-collection repository of workflow experiments and examples, the ComfyUI Advanced Understanding videos on YouTube (part 1 and part 2), a workflow-building series that adds customizations in digestible chunks, one update at a time, comprehensive knowledge collections covering installation and usage, examples, custom nodes, workflows and Q&A, and ComfyUI Launcher, which runs any ComfyUI workflow with zero setup (free and open source). One community author shares the workflow used to create the example images for the RedOlives model at https://civitai.com/models/283810, and the Tenofas v3 workflow is another complete community example; that author notes their own day-to-day workflow file is a bit particular to their needs - the whole power of ComfyUI is that you create something that fits your needs - but they are happy to help replicate the concepts. The only way to keep the code open and free is by sponsoring its development.

Video workflows

There is a simple workflow for using the new Stable Video Diffusion (SVD) model for image-to-video generation; as of writing there are two image-to-video checkpoints. ComfyUI is also a strong choice for video generation compared with other AI drawing software, offering higher efficiency. The Basic Vid2Vid 1 ControlNet workflow is the basic vid2vid workflow updated with the new nodes, and high FPS is achieved using frame interpolation with RIFE. Please note that in the example workflow, using the example video, every other frame of a 24-frame video is loaded and then turned into an 8 fps animation, meaning things will be slowed down compared to the original video.
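As a quick sanity check on that timing, here is the arithmetic as a few lines of Python. The 24 fps source rate is an assumption for illustration; the example only states that the clip has 24 frames.

```python
source_frames = 24
assumed_source_fps = 24          # assumption: a one-second source clip
load_every_nth = 2               # load every other frame
output_fps = 8

kept_frames = source_frames // load_every_nth            # 12 frames kept
source_duration = source_frames / assumed_source_fps     # 1.0 s
output_duration = kept_frames / output_fps               # 1.5 s
print(f"{source_duration:.1f}s of source video becomes a {output_duration:.1f}s animation")
```

Any mismatch between the frames you keep and the output frame rate will speed the motion up or slow it down; frame interpolation with RIFE can then fill frames back in to smooth the result.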
3D and SDXL

For 3D there is a Stable Zero123 example workflow, using the model described above to generate views of an object from different angles. On the SDXL side, the SDXL Default workflow is a great starting point for using txt2img with SDXL, and the SDXL base checkpoint can be used like any regular checkpoint in ComfyUI; a handful of ready-made SDXL workflows are collected in a dedicated repository, with its useful links listing the models and plugins some of them require. The only important thing for optimal performance is that the resolution should be set to 1024x1024, or to another resolution with the same number of pixels but a different aspect ratio.
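If you want a different aspect ratio while keeping that same pixel budget, the arithmetic is easy to script. The sketch below is a convenience helper, not part of ComfyUI; rounding to multiples of 64 is a common convention for latent sizes and is an assumption here.

```python
import math

def same_pixel_budget(aspect_ratio: float, total_pixels: int = 1024 * 1024, step: int = 64):
    """Return a (width, height) close to total_pixels at the given aspect ratio."""
    width = math.sqrt(total_pixels * aspect_ratio)
    height = width / aspect_ratio
    snap = lambda v: max(step, round(v / step) * step)
    return snap(width), snap(height)

for ratio in (1.0, 4 / 3, 16 / 9):
    w, h = same_pixel_budget(ratio)
    print(f"{ratio:.2f} -> {w}x{h} ({w * h:,} pixels)")
```

Familiar SDXL sizes such as 1152x896 and 1344x768 come straight out of this constraint.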