ComfyUI Workflow Directory: Examples from Reddit and GitHub

In this guide I will try to help you get started and give you some starting workflows to work with. This is a WIP guide and is about 95% complete: the tutorial pages are ready for use, though a couple of pages have not been completed yet. If you find any errors, please let me know.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. This workflow can use LoRAs and ControlNets, and enables negative prompting with the KSampler, dynamic thresholding, inpainting, and more. If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on: this link. You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM.

The LCM SDXL LoRA can be downloaded from here. LCM LoRAs can be used to convert a regular model to an LCM model. Download it, rename it to lcm_lora_sdxl.safetensors, and put it in your ComfyUI/models/loras directory.

Improved AnimateDiff integration for ComfyUI, as well as advanced sampling options dubbed Evolved Sampling, usable outside of AnimateDiff. AnimateDiff workflows will often make use of these helpful custom nodes. Please read the AnimateDiff repo README and Wiki for more information about how it works at its core. AnimateDiff in ComfyUI is an amazing way to generate AI videos.

Release: AP Workflow 9.0 for ComfyUI, now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand-new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, two types of automatic image selectors, and the capability to automatically generate captions for an image directory. Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function).

Download this workflow and drop it into ComfyUI, or use one of the workflows others in the community made below. Load the .json workflow file from the C:\Downloads\ComfyUI\workflows folder. Always refresh your browser and click refresh in the ComfyUI window after adding models or custom_nodes.

Starting workflows:
- Merge two images together with this ComfyUI workflow
- ControlNet Depth workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff
- ControlNet workflow: a great starting point for using ControlNet
- Inpainting workflow: a great starting point for inpainting

I downloaded the example IPAdapter workflow from GitHub and rearranged it a little bit to make it easier to look at, so I can see what the heck is going on. It looks freaking amazing! Anyhow, here is a screenshot and the .json of the file I just used. Breakdown of workflow content follows.

SDXL examples: the SDXL base checkpoint can be used like any regular checkpoint in ComfyUI, and the same concepts we explored so far are valid for it. In a base+refiner workflow, though, upscaling might not look straightforward. If you are not interested in having an upscaled image completely faithful to the original, you can create a draft with the base model in just a handful of steps, then upscale the latent and apply a second pass with the base and a third pass with the refiner. The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio.

For some workflow examples, and to see what ComfyUI can do, check out the ComfyUI Examples page. To point ComfyUI at model folders you already have elsewhere, look under "./ComfyUI" for the file extra_model_paths.yaml.example and edit it with your favorite editor. In the standalone Windows build you can find this file in the ComfyUI directory; rename it to extra_model_paths.yaml. It should look like this:

    a111:
        base_path: /mnt/sd/
        checkpoints: CHECKPOINT
        configs: CONFIGS
        vae: VAE
        loras: |
            LORA
        upscale_models: |
            ESRGAN
        embeddings: TextualInversion
        controlnet: ControlNet
        llm: llm
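If you want to verify the file before starting ComfyUI, here is a minimal sketch (an illustration, not part of any official tooling) that loads the config with PyYAML and checks that each mapped folder actually exists; the pipe ("|") values can list several folders, one per line:

    # Hypothetical helper: sanity-check extra_model_paths.yaml.
    # Assumes PyYAML is installed (pip install pyyaml) and the file
    # sits in the current directory; adjust the path for your install.
    import os
    import yaml

    with open("extra_model_paths.yaml") as f:
        config = yaml.safe_load(f)

    for section, entries in config.items():
        base = entries.get("base_path", "")
        for key, value in entries.items():
            if key == "base_path" or not isinstance(value, str):
                continue
            # Multiline ("|") values may list several folders, one per line.
            for folder in filter(None, (line.strip() for line in value.splitlines())):
                full = os.path.join(base, folder)
                print(f"[{section}] {key}: {full} -> {'ok' if os.path.isdir(full) else 'missing'}")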
What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together; some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own.

If you haven't already, install ComfyUI and Comfy Manager; you can find instructions on their pages. Once a workflow is loaded, go into the ComfyUI Manager and click Install Missing Custom Nodes. This should update, and it may ask you to click restart.

There is a small node pack attached to this guide. It includes the init file and 3 nodes associated with the tutorials. Install these with Install Missing Custom Nodes in ComfyUI Manager.

The any-comfyui-workflow model on Replicate is a shared public model. This means many users will be sending workflows to it that might be quite different to yours. The effect of this is that the internal ComfyUI server may need to swap models in and out of memory, which can slow down your prediction time.

Custom guiders:
- GeometricCFGGuider: samples the two conditionings, then blends between them using a user-chosen alpha.
- ScaledCFGGuider: samples the two conditionings, then adds them using a method similar to "Add Trained Difference" from merging models.
- ImageAssistedCFGGuider: samples the conditioning, then adds in …

Option overrides:
- categories/category-name.json: options to be merged depending on the channel's category name.
- roles/role-name-or-id.json: options to be merged depending on the requestor's role.

[Last update: 01/August/2024] Note: you need to put the Example Inputs files & folders under the ComfyUI Root Directory\ComfyUI\input folder before you can run the example workflow. You can use the Test Inputs to generate exactly the same results that I showed here. For use cases, please check out the Example Workflows.

It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process for creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.
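As a sketch of what "programmatic" can look like (illustrative only; it drives the ComfyUI server's HTTP API rather than using the exported script, and assumes a default local install listening on 127.0.0.1:8188 plus a workflow saved in API format):

    # Queue an API-format workflow against a running ComfyUI server.
    # "workflow_api.json" is a hypothetical filename: export it from the UI
    # via "Save (API Format)". The /prompt endpoint returns a prompt_id.
    import json
    import urllib.request

    with open("workflow_api.json") as f:
        workflow = json.load(f)

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # contains the prompt_id for the queued job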
A few weeks ago, we open-sourced our ComfyUI outputs/workflow browser plugin (https://github.com/talesofai/comfyui-browser), which has garnered over 200 stars on GitHub thanks to the incredible support and interest from the community! Therefore, this repo's name has been changed.

Some workflows (such as the Clarity Upscale workflow) include custom nodes that aren't included in base ComfyUI. When the workflow opens, download the dependent nodes by pressing "Install Missing Custom Nodes" in Comfy Manager.

This repo is divided into macro categories; in the root of each directory you'll find the basic json files and an experiments directory. The experiments are more advanced examples and tips and tricks that might be useful in day-to-day tasks.

SD3 performs very well with the negative conditioning zeroed out, like in the following example. On the ControlNet side, XLab and InstantX + Shakker Labs have released ControlNets for Flux: you can find the InstantX Canny model file here (rename it to instantx_flux_canny.safetensors for the example below), the Depth ControlNet here, and the Union ControlNet here.

Before using BiRefNet, download the model checkpoints with Git LFS: ensure git lfs is installed; if not, install it. Then download the checkpoints to the ComfyUI models directory by pulling the large model files using git lfs.

These custom nodes provide support for model files stored in the GGUF format popularized by llama.cpp. While quantization wasn't feasible for regular UNET models (conv2d), transformer/DiT models such as Flux seem less affected by quantization; this allows running them at lower precision on more modest hardware.

TripoSR is a state-of-the-art open-source model for fast feedforward 3D reconstruction from a single image, collaboratively developed by Tripo AI and Stability AI. This is a custom node that lets you use TripoSR right from ComfyUI (TL;DR: it creates a 3D model from an image).

If anyone else is reading this and wanting the workflows, here are a few simple SDXL workflows using the new OneButtonPrompt nodes, saving the prompt to file (I don't guarantee tidiness). Thank you u/AIrjen! Love the variant generator, super cool.

Launch ComfyUI and start using the SuperPrompter node in your workflows! (Alternately, you can just paste the GitHub address into the Comfy Manager Git installation option.) 📋 Usage: add the SuperPrompter node to your ComfyUI workflow and configure the input parameters according to your requirements.

Then you finally have an idea of what's going on, and you can move on to ControlNets, IPAdapters, detailers, CLIP Vision, and 20-LoRA stacks with 0.2 weight on each, with upscalers. I couldn't find the workflows to directly import into Comfy; go to the GitHub repos for the example workflows.

comfyui-workspace-manager (11cafe): a ComfyUI workflow and model management extension to organize and manage all your workflows and models in one place. Seamlessly switch between workflows, import and export workflows, reuse subworkflows, install models, and browse your models in a single workspace.

This fork includes support for Document Visual Question Answering (DocVQA) using the Florence2 model. DocVQA allows you to ask questions about the content of document images, and the model will provide answers based on the visual and textual information in the document.

As a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow.
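That works because ComfyUI embeds the graph in the PNG's text chunks (under the "workflow" and "prompt" keys), so it can also be recovered programmatically. A minimal sketch, assuming Pillow is installed and a hypothetical output filename:

    # Recover the workflow embedded in a ComfyUI output PNG.
    # "ComfyUI_00001_.png" is a placeholder; use any ComfyUI-generated PNG.
    import json
    from PIL import Image

    img = Image.open("ComfyUI_00001_.png")
    workflow = img.info.get("workflow")  # UI-format graph (JSON string)
    prompt = img.info.get("prompt")      # API-format graph (JSON string)

    if workflow:
        with open("recovered_workflow.json", "w") as f:
            f.write(workflow)
        print("nodes:", len(json.loads(workflow).get("nodes", [])))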
To set up ComfyUI Windows Portable with this workflow: extract the workflow zip file; copy the install-comfyui.bat file to the directory where you want to set up ComfyUI; double-click the install-comfyui.bat file to run the script; then wait while the script downloads the latest version of ComfyUI Windows Portable, along with all the latest required custom nodes and extensions.

Once the container is running, all you need to do is expose port 80 to the outside world. This will allow you to access the Launcher and its workflow projects from a single port.

ComfyUI Inspire Pack: includes the KSampler Inspire node, with the Align Your Steps scheduler for improved image quality.

BizyAir updates:
- [2024/07/25] 🌩️ Users can load BizyAir workflow examples directly by clicking the "☁️BizyAir Workflow Examples" button.
- [2024/07/23] 🌩️ The BizyAir ChatGLM3 Text Encode node is released.
- [2024/07/16] 🌩️ The BizyAir ControlNet Union SDXL 1.0 node is released.

AuraSR v1 (the model) is ultra-sensitive to ANY kind of image compression, and when given such an image the output will probably be terrible. It is highly recommended that you feed it images straight out of SD (prior to any saving), unlike the example above, which shows some of the common artifacts introduced on compressed images. I also had issues with this workflow with unusually-sized images.

I stopped the process at 50GB, then deleted the custom node and the models directory. I have no idea why the OP didn't bother to mention that this would require the same amount of storage space as 17 SDXL checkpoints, mainly for a garbage-tier SD1.5 model I don't even want.

Hope you like some of them :) Check out my two-pass SDXL pipeline here: https://github.com/roblaughter/comfyui-workflows. Also check out the upscale workflow for cranking the resolution and detail on select images. The tutorial: https://youtu.be/ppE1W0-LJas.

👏 Welcome to my ComfyUI workflow collection! To give something back to everyone, I have roughly put together a platform; if you have feedback, suggestions, or features you'd like me to implement, submit an issue or email me at theboylzh@163.com. Note: this workflow uses LCM.

Official support for PhotoMaker landed in ComfyUI. The PhotoMakerEncode node is also now PhotoMakerEncodePlus.

ComfyUI IPAdapter Plus; ComfyUI InstantID (Native); ComfyUI Essentials; ComfyUI FaceAnalysis; not to mention the documentation and video tutorials. Check my ComfyUI Advanced Understanding videos on YouTube, for example part 1 and part 2. The only way to keep the code open and free is by sponsoring its development.

Either use the Manager and install from git, or clone this repo to custom_nodes and run: pip install -r requirements.txt (or, if you use the portable build, run this in the ComfyUI_windows_portable folder).

Ensure ComfyUI is installed and operational in your environment. Prepare the models directory: create an LLM_checkpoints directory within the models directory of your ComfyUI environment, and place your transformer model directories in LLM_checkpoints. Each directory should contain the necessary model and tokenizer files.

ComfyUI-IF_AI_tools (if-ai/ComfyUI-IF_AI_tools) is a set of custom nodes for ComfyUI that allows you to generate prompts using a local Large Language Model (LLM) via Ollama. This tool enables you to enhance your image generation workflow by leveraging the power of language models.
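To illustrate the idea behind such LLM-assisted prompting (a sketch only, not the node's actual implementation; it assumes a local Ollama server on its default port and an illustrative model name):

    # Ask a locally running Ollama server to expand a short idea into a
    # detailed image prompt. "llama3" is an example model name; use whichever
    # model you have pulled (ollama pull <name>).
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",
        "prompt": "Expand into a detailed Stable Diffusion prompt: a rainy cyberpunk alley",
        "stream": False,
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])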
The ReActorBuildFaceModel node got a "face_model" output to provide a blended face model directly to the main node: basic workflow 💾. The Face Masking feature is available now; just add the "ReActorMaskHelper" node to the workflow and connect it as shown below.

A custom node for ComfyUI that allows you to perform lip-syncing on videos using the Wav2Lip model. It takes an input video and an audio file and generates a lip-synced output video.

I'm using ComfyUI portable and had to install it into the embedded Python install. Going to python_embedded and using python -m pip install compel got the nodes working.

Feature/version overview for Flux.1 Pro, Flux.1 Dev, and Flux.1 Schnell: cutting-edge performance in image generation, with top-notch prompt following, visual quality, image detail, and output diversity.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked. (I got the Chun-Li image from Civitai.) Different samplers & schedulers are supported.

Comfy Workflows: share, discover, and run thousands of ComfyUI workflows. Run any ComfyUI workflow with zero setup (free & open source).

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW. The goal is to keep image generation as free and open source as possible, while providing education on and access to Stable Diffusion.

ControlNet and T2I-Adapter examples: note that in these examples the raw image is passed directly to the ControlNet/T2I adapter. Each ControlNet/T2I adapter needs the image passed to it to be in a specific format, like depth maps, canny maps, and so on, depending on the specific model, if you want good results.
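For instance, a canny ControlNet expects an edge map rather than the raw photo. A minimal preprocessing sketch with OpenCV (the 100/200 thresholds are just a common starting point, not values mandated by any particular model):

    # Turn a source photo into a canny edge map for a canny ControlNet.
    # "input.png" and "canny_map.png" are placeholder filenames.
    import cv2

    image = cv2.imread("input.png")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                    # single-channel edge map
    edges_rgb = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)  # 3 channels, loadable as an image
    cv2.imwrite("canny_map.png", edges_rgb)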