ComfyUI SDXL upscale not working

SDXL Default ComfyUI workflow. Those extra details it adds, I don't see them in any of the 'amazing' examples. Upscaling is not a problem with low denoise values. Note that --force-fp16 will only work if you installed the latest pytorch nightly. The refiner only works for the first-pass txt2img generation, where there's leftover noise from the base model. Nothing special, but easy to build off of. The old node will remain for now so as not to break old workflows; it is dubbed Legacy along with the single node, as I do not want to maintain those. The workflow does the following: load any image of any size -> you might have to resize your input picture first (upscale?). * You should use CLIPTextEncodeSDXL for your prompts. Jan 20, 2024 · Drop them onto ComfyUI to use them. json got prompt model_type EPS adm 2816 Using pytorch attention in VAE Working with z of shape (1, 4, 32, 32) = 4096 dimensions. It has many upscaling options, such as img2img upscaling and Ultimate SD Upscale. Open a command line window in the custom_nodes directory. Launch ComfyUI by running python main.py. * Use Refiner. But for upscaling, Fooocus is much better than other solutions. If you continue to use the existing workflow, errors may occur during execution. Every time you try to run a new workflow, you may need to do some or all of the following steps: install ComfyUI Manager, install missing nodes, update everything. The default workflow ran fine for me. So I'm happy to announce today: my tutorial and workflow are available. Here is an example: you can load this image in ComfyUI to get the workflow. Upscaling ComfyUI workflow. I upscaled it to a resolution of 10240x6144 px for us to examine the results (ImageUpscaleWithModel -> ImageScale -> UltimateSDUpscaleNoUpscale). And it's all automatic; you don't have to manually switch anything around to also use the refiner.
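The steps above (launch ComfyUI, load a workflow JSON, queue a prompt) can also be driven from a script through ComfyUI's built-in HTTP API. This is a minimal sketch, assuming a default local server on 127.0.0.1:8188; the demo graph, checkpoint filename, and client id are placeholders I made up, not values from the text.

```python
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "example-client") -> bytes:
    # The /prompt endpoint expects the node graph under the "prompt" key.
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    # POST the graph to a locally running ComfyUI instance.
    req = urllib.request.Request(f"http://{host}/prompt", data=build_payload(workflow))
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# A tiny stand-in graph; a real one comes from "Save (API Format)" in ComfyUI.
demo_workflow = {"1": {"class_type": "CheckpointLoaderSimple",
                       "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}}}
payload = build_payload(demo_workflow)
```

With a server running, `queue_prompt(demo_workflow)` would submit the graph; here only the payload is built so nothing touches the network.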
ControlNet Depth ComfyUI workflow. I'm creating some cool images with some SD1. Nobody needs all that, LOL. The drawback is that even if it can infer the composition, when working with the tile the model has no idea what is around it. * Still not sure about all the values, but from here it should be tweakable. Some commonly used blocks are Loading a Checkpoint Model, entering a prompt, specifying a sampler, etc. 5, euler, sgm_uniform or CNet strength 0. Restart ComfyUI. Been working the past couple weeks to transition from Automatic1111 to ComfyUI. Nodes that have failed to load will show as red on the graph. So if you wanted to generate iPhone wallpapers for example, that’s the one you should use. Dec 2, 2023 · Start ComfyUI. Img2Img ComfyUI workflow. Perhaps it is a base model meant for further fine-tuning. I can regenerate the image and use latent upscaling if that’s the best way. What is ComfyUI? ComfyUI serves as a node-based graphical user interface for Stable Diffusion. Giving 'NoneType' object has no attribute 'copy' errors. Through ComfyUI-Impact-Subpack, you can utilize UltralyticsDetectorProvider to access various detection models. I currently using comfyui for this workflow only because of the convenience. Feb 24, 2024 · Another SDXL comfyUI workflow that is easy and fast for generating images. Create animations with AnimateDiff. Check our discord for assistance. Try immediately VAEDecode after latent upscale to see what I mean. I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load. 4/5 of the total steps are done in the base. I do notice my ComfyUI setup seems a bit slower than a1111, but I mostly work with SDXL with ComfyUI, and stick with a1111 with SD1. Feb 29, 2024 · SD1. The lost of details from upscaling is made up later with the finetuner and refiner sampling. Recommandation: I hope this can be integrated in fooocus. 5 for demo purposes, but it would be amazing to update that to SDXL. 
ComfyUI Txt2Video with Stable Video Diffusion. Adding extra search path checkpoints ~/Projects/Models/ This workflow, combined with Photoshop, is very useful for: - Drawing specific details (tattoos, special haircut, clothes patterns, ) - Gaining time (all major AI features available without even adding nodes) - Reiterating over an image in a controlled manner (get rid of the classic Ai Random God Generator!). 4. Dude, you underestimate sdxl lmao. I didn't need to make any changes but the main prompt. I tried to find a good Inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. Gradually incorporating more advanced techniques, including features that are not automatically included SDXL + HiResFix (Juggernaut v2. You switched accounts on another tab or window. It provides an easy I installed ComfyUI last night and played around with it a bit. I don't suppose you know a good way to get a Latent upscale (HighRes Fix) working in ComfyUI with SDXL?I have been trying for ages with no luck. select a image you want to use for controlnet tile. Dec 8, 2023 · without upscale only 20 seconds. For a dozen days, I've been working on a simple but efficient workflow for upscale. 9, end_percent 0. That's what it's trained to work with. bat file to the directory where you want to set up ComfyUI and double click to run the script. Click on Install Models on the ComfyUI Manager Menu. It didn't work out. Another thing you can try is PatchModelAddDownscale node. Now with controlnet, hires fix and a switchable face detailer. I think his idea was to implement hires fix using the SDXL Base model. 5. This workflow perfectly works with 1660 Super 6Gb VRAM. Working on finding my footing with SDXL + ComfyUI. The only important thing is that for optimal performance the resolution should be set to 1024x1024 or other resolutions with the same amount of pixels but a different aspect ratio. It's why you need at least 0. 
The Load VAE node can be used to load a specific VAE model, VAE models are used to encoding and decoding images to and from latent space. cache\1742899825_extension-node-map. ; 2. Sep 5, 2023 · I'm using StableSwarmUI and I'm able to upscale the generated SDXL images in the "Refiner" function, with a denoise between 0. I liked the ability in MJ, to choose an image from the batch and upscale just that image. manupin • 5 mo. remember the setting is like this, make 100% preprocessor is none. If you want to use Stable Video Diffusion in ComfyUI, you should check out this txt2video workflow that lets you create a video from text. Couldn't make it work for the SDXL Base+Refiner flow. ComfyUIでSDXLを動かす方法まとめ. Citation @article { jimenez2023mixtureofdiffusers , title = { Mixture of Diffusers for scene composition and high resolution image generation } , author = { Álvaro Barbero Jiménez } , journal = { arXiv preprint arXiv:2302. To create summary for YouTube videos, visit Notable AI. quality if life suite. Settled on 2/5, or 12 steps of upscaling. I then down scale it as 4x is a little big. 5 approach is only slightly slower than just SDXL (Refiner -> CCXL) but faster than SDXL (Refiner -> Base -> Refiner OR Base -> Refiner) and gives me massive improvement in scene setup, character to scene placement and scale, etc, while not losing out on final detail. I don’t know why there these example workflows are being done so compressed together. Jan 1, 2024 · Download the included zip file. You must also disable the Base+Refiner SDXL option and Base/Fine-Tuned SDXL option in the “Functions” section. Extract the zip file. Embeddings/Textual Inversion. SDXL Turbo vs LCM-LoRA Not familiar with that upscaler though. ControlNet Workflow. safetensors 2. Ok guys, here's a quick workflow from comfy noobie. Search for sdxl and click on Install for the SDXL-Turbo 1. If you installed from a zip file. 
If you want more details latent upscale is better, and of course noise injection will let more details in (you need noises in order to diffuse into details). Everything was working fine but now when i try to load a model it gets stuck in this phase FETCH DATA from: H:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\. With seam fixes, I imagine there will be more tiles to render. 5 \ sdxl must be renamed and placed in the ComfyUI \ models \ loras \ directory, otherwise krita will not be able to find the path and recognize it. There seems to to be way more SDXL variants and although many if not all seem to work with A1111 most do not work with comfyui. SDXL doesn't really need negative prompts. I share many results and many ask to share. I added a switch toggle for the group on the right. Debug Text _O. safetensors E: \ ComfyUI \ models \ loras \ lcm lora sdv1-5. Midjourney is a castrated model that stops unironically mid journey. For me, it has been tough, but I see the absolute power of the node-based generation (and efficiency). Jul 29, 2023 · I can easily get 1024 x 1024 SDXL images out of my 8GB 3060TI and 32GB system ram using InvokeAI and ComfyUI, including the refiner steps. 動作が速い. Text box. Best ComfyUI Extensions & Nodes. 5x upscale but I tried 2x and voila, with higher resolution, the smaller hands are fixed a lot better. So from VAE Decode you need a "Uplscale Image (using model)" under loaders. If you have previously generated images you want to upscale, you'd modify the HiRes to include the IMG2IMG nodes. Update your Nvidia driver. I've also tried NNLatentUpscale, not super different than these. Such a massive learning curve for me to get my bearings with ComfyUI. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. It seems to be more prone to generating duplicate images and incorrect anatomy. Video tutorial on how to use ComfyUI, a powerful and modular Stable Diffusion GUI and backend, is here. 
25 support db channel . Unpack the SeargeSDXL folder from the latest release into ComfyUI/custom_nodes, overwrite existing files. I'm having a hard time getting anything realistic out of this workflow. problem solved by devs in this commit make LoadImagesMask work with non RGBA images by flyingshutter · Pull Request #428 · comfyanonymous/ComfyUI (github. Inpainting Workflow for ComfyUI. 5 -> SDXL Upscale Workflow for ComfyUI. SD and SDXL and Loras models are supported. Jul 27, 2023 · toyssamuraion Jul 27, 2023. I have been using 4x-ultrasharp for as long as I can remember, but just wondering what everyone else is using and which use case? I tried searching the subreddit but the other posts are like earlier this year or 2022 so I am looking for updated information. ( I am unable to upload the full-sized image. I have good results with SDXL models, SDXL refiner and most 4x upscalers. Explore thousands of workflows created by the community. That’s because many workflows rely on nodes that aren’t installed in ComfyUI by default. 5 as the base image, particularly in certain poses, as well as saving a bunch Not really sure how to get workflow out of ComfyUI yet, so I dropped the png on the PNG Info tab in A1111 and got this. This method streamlines the process of creating AI models within Comfy UI. I have heard the large ones (typically 5 to 6gb each) should work but is there a source with a more reasonable file size. Testing was done with that 1/5 of total steps being used in the upscaling. you can click a button and it will find and install the missing nodes that are red. Just load your image, and prompt and go. Study this workflow and notes to understand the basics of It dosen't work and it's not meant to work. 0. Between versions 2. 29 Add Update all feature; 0. Hypernetworks. Perhaps you can look at the console and check the speed ( it / sec) and compare that with a1111. 
25x) before it, latent upscales do tend to copy things, like the cat here, but regardless more detail. Sort by: Jul 30, 2023 · When using a either dpmpp_2m or dpmpp_2m_sde on an SDXL base model, it seems like the last step adds noise that doesn't get removed. as for the upscale you need to download the workflow for upscale I believe it's actually 3 nodes, I know it's STUPID and repetitive. 5, so I don't really have any direct comparison. /models1-5. Merging 2 Images together. I combine these two in comfyUI and it gives good result in 20 steps. Try EasyNegative . com) r/StableDiffusion. Aug 17, 2023 · Guys, I hope you have something. 0 Workflow. Inpainting. All images using 10 steps dpmpp_2m @ 1024 to best show the effect. Multiply. Nov 17, 2023 · Dear friend, lcm lora 1. Thank you community! A little about my step math: Total steps need to be divisible by 5. Maybe all of this doesn't matter, but I like equations. and control mode is My prompt is more important. I am overwhelmed and the SD magic is dying due to it. Table of contents. Put them in the models/upscale_models folder then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. It also has full inpainting support to make custom changes to your generations. 5 denoise to fix the distortion (although obviously its going to change your image. After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. Do you have ComfyUI manager. Run git pull. Things really start getting interesting when you use SDXL itself for the HiRez and the refining. yaml, ComfyUI does recognize it and declare it is searching these folders for extra models on startup. If you installed via git clone before. It's simple and straight to the point. 手順2:Stable Diffusion XLのモデルをダウンロードする. It won't be useful for img2img generation or upscaling. 22 and 2. LCM 12-15 steps and SDXL turbo 8 steps. Instead, I use Tiled KSampler with 0. 0 (fp16). 手順1:ComfyUIをインストールする. 
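The step math mentioned above (totals divisible by 5, a fixed fraction sampled in the base pass and the remainder handed to the refiner or upscale pass) can be written as a tiny helper. This is my reading of that arithmetic, not code from any of the workflows; the start/end-step handoff matches how KSamplerAdvanced is normally wired, but the function itself is hypothetical.

```python
def split_steps(total_steps: int, base_fraction: float = 4 / 5) -> tuple[int, int]:
    # Split a run so the base model samples the first chunk and the refiner
    # (or upscale pass) finishes the rest via a start_at_step/end_at_step handoff.
    if total_steps % 5 != 0:
        raise ValueError("pick a total divisible by 5 so the split lands on whole steps")
    handoff = round(total_steps * base_fraction)
    return handoff, total_steps - handoff

# 30 total steps at the 4/5 split -> 24 in the base, 6 in the refiner;
# a 3/5 split leaves 2/5 (12 steps) for the upscale pass.
base_steps, refiner_steps = split_steps(30)
```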
Using unipc as a comparable sampler does not have this issue Whereas traditional frameworks like React and Vue do the bulk of their work in the browser, Svelte shifts that work into a compile step that happens when you build your app. If it's the best way to install control net because when I tried manually doing it . 768 x 1344: 16:28 or 4:7. The pixel upscale are ok but doesn't hold a candle to the latent upscale for adding detail. Feb 22, 2024 · The SDXL workflow includes wildcards, base+refiner stages, Ultimate SD Upscaler (using a 1. 0 Base and Refiners models downloaded and saved in the right place, it should work out of the box. 5 models. OP • 1 yr. Copy the install_v3. Maybe someone have the same issue? Sort by: ElevatorSerious6936. 手順5:画像を生成 4x upscale. SDXL 1. The latest version allocates remaining memory to ram. safetensors? Were you using the plugin before the last version? Did SD XL already work for you before last version? Do you have a SD XL checkpoint which has "XL" in its name? I am looking for good upscaler models to be used for SDXL in ComfyUI. doomndoom. Part 1:update for style change application instruction( cloth change and keep consistent pose ): Open a A1111 webui. 0 base only. If you are looking for upscale models to use you can find some on Parameters not found in the original repository: upscale_by The number to multiply the width and height of the image by. Click on Manager on the ComfyUI windows. A lot of it is for fun, but I personally like the look and the creative freedom of SD1. This is not a tech support subreddit, use r/WindowsHelp or r/TechSupport to get help with your PC Members Online JigsawWM2. Always good to see more ComfyUI users in the wild, it's too underappreciated IMO. floatToInt _O. You signed out in another tab or window. Best (simple) SDXL Inpaint Workflow. 
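The upscale_by parameter described above just multiplies the width and height of the image. A hypothetical helper can make the rounding explicit; snapping the result to a multiple of 8 is my own assumption, chosen so the output stays aligned with the 8-pixel latent grid of SD-family VAEs.

```python
def scaled_dims(width: int, height: int, upscale_by: float,
                multiple: int = 8) -> tuple[int, int]:
    # Multiply both sides by the factor, then snap to the nearest multiple of 8
    # so the result still maps cleanly onto the VAE's 8x latent grid.
    def snap(v: float) -> int:
        return max(multiple, round(v / multiple) * multiple)
    return snap(width * upscale_by), snap(height * upscale_by)

# An 832x1216 SDXL image with upscale_by=2.0 becomes 1664x2432.
```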
The workflow also has TXT2IMG, IMG2IMG, up to 3x IP Adapter, 2x Revision, predefined (and editable) styles, optional up-scaling, Control Net Canny, Control Net Depth, Lora, selection of recommended SDXL resolutions, adjusting input images to the closest SDXL resolution, etc. However, the SDXL refiner obviously doesn't work with SD1. I think I have a reasonable workflow, that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale A good place to start if you have no idea how any of this works is the: ComfyUI Basic Tutorial VN: All the art is made with ComfyUI. I have a prompt that give decently real images from another flow, but this one won't, even though I've tried forcing it more in the prompt. safetensors lcm lora sdxl. When you’re using different ComfyUI workflows, you’ll come across errors about certain nodes missing. Notice that ReVision can work in conjunction with the Detailers (Hands and Faces) and the Upscalers. GenArt42. Yeah, it is far too complex as I just got into this myself. 3 Support Components System; 0. In the comfy UI manager select install model and the scroll down to see the control net models download the 2nd control net tile model(it specifically says in the description that you need this for tile upscale). OP • 3 mo. ago. Sweet. UPDATE: As I have learned a lot with this project, I have now separated the single node to multiple nodes that make more sense to use in ComfyUI, and makes it clearer how SUPIR works. The latent upscaling consists of two simple steps: upscaling the samples in latent space and performing the second sampler pass. Im having good results by starting initial generation using SDXL and refining at high resolutions using juggernaut Oct 21, 2023 · Latent upscale method. Run any ComfyUI workflow w/ ZERO setup (free & open source) Try now. Nov 30, 2023 · It is not clear if SDXL Turbo matches the quality of a v1. Thanks for sharing those. 
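The two-step latent upscale described above (resize the samples in latent space, then run a second sampler pass) is easiest to reason about in terms of tensor shapes. A sketch, assuming the standard SD/SDXL VAE behaviour of factor-8 spatial compression into 4 latent channels — true of the stock models, but stated here as an assumption:

```python
def latent_shape(width: int, height: int, batch: int = 1) -> tuple[int, int, int, int]:
    # SD/SDXL VAEs compress images 8x spatially into 4-channel latents.
    return (batch, 4, height // 8, width // 8)

def upscaled_latent_shape(width: int, height: int, factor: float) -> tuple[int, int, int, int]:
    # Step 1 of latent upscaling: enlarge the samples in latent space.
    # Step 2 (not shown) is the second sampler pass over the enlarged latent.
    b, c, h, w = latent_shape(width, height)
    return (b, c, round(h * factor), round(w * factor))

# 1024x1024 -> latent (1, 4, 128, 128); a 1.5x latent upscale -> (1, 4, 192, 192).
```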
5 refined model) and a switchable face detailer. So, I just made this workflow ComfyUI . Sort by: Add a Comment. ini file. The workflow first generates an image from your given prompts and then uses that image to create a video. 手順4:必要な設定を行う. Comfyui has a save workflow buttom bottom right of UI. To use ReVision, you must enable it in the “Functions” section. Try out the latest SDNext, you'll be doing SDXL, in batches, with no problems if you have 8GB. Navigate to your ComfyUI/custom_nodes/ directory. Integer. The iPhone for example is 19. 5, and carries it through some upscaling and detail steps using SDXL. (I have tested the workflow by emulating the node manually, and it works much better with SDXL). Img2Img. scale image down to 1024px (after user has masked parts of image which should be affected) pick up prompt, go thru CN to sampler and produce new image (or same as og if no parts were masked) upscale result 4x. This looks sexy, thanks. 02412 } , year = { 2023 } } Jan 6, 2024 · Welcome to a guide, on using SDXL within ComfyUI brought to you by Scott Weather. I'll need to make a few tests and compare it with Siax and Ultrasharp to see how it performs with the type of work I make. 5 or SDXL models. 5 models in ComfyUI but they're 512x768 and as such too small resolution for my uses. 2 comments. 9 facedetailer workflow by FitCorder, but rearranged and spaced out more, with some additions such as Lora Loaders, VAE loader, 1:1 previews, Super upscale with Remacri to over 10,000x6000 in just 20 seconds with Torch2 & SDP. (early and not finished) Here are some more advanced examples: "Hires Fix" aka 2 Pass Txt2Img. g. The latent upscaler is okayish for XL, but in conjunction with perlin noise injection, the artifacts coming from upscaling gets reinforced so much that the 2nd sampler needs a lot of denoising for a clean image, about 50% - 60%. 4 Copy the connections of the nearest node by double-clicking. You can directly modify the db channel settings in the config. 
6 denoise and either: Cnet strength 0. SD1. I wonder if I have been doing it wrong -- right now, when I do latent upscaling with SDXL, I add an Upscale Latent node after the refiner's KSampler node, and pass the result of the latent upscaler to another KSampler. Wait while the script downloads the latest version of ComfyUI Windows Portable, along with all the latest required custom nodes and extensions. This post is a summary of YouTube video ' Run SDXL Locally With ComfyUI (2024 Guide) ' by Matt Wolfe. theres a custom node plugin callled comfyui manager. I cannot even load the base SDXL model in Automatic1111 without it crashing out syaing it couldn't allocate the requested memory. When I run the default SDXL workflow from the comfyUI page, the SDXL refiner gets loaded first that takes around 20 seconds, then the SDXL base gets loaded that takes another 20 seconds, the first KSampler runs then the SDXL refiner model is loaded again which takes another 20 seconds and then the second Feb 1, 2024 · 12. youtube Aug 25, 2023 · I can use it with sd1. Reload to refresh your session. . 5 denoise. Install ComfyUI manager if you haven’t done so already. 2) 🤯. The output was good looking and very fast. 1) If you want your workflow to generate a low resolution image and then upscale it immediately, the HiRes examples are exactly what I think you are asking for. Download the first image then drag-and-drop it on your ConfyUI web interface. 0 - jmk: a software implementation of QMK as an alternative to AutoHotkey Follow the ComfyUI manual installation instructions for Windows and Linux. 手順3:ComfyUIのワークフローを読み込む. For example: 896x1152 or 1536x640 are good resolutions. This is the image I created using ComfyUI, utilizing Dream ShaperXL 1. After borrowing many ideas, and learning ComfyUI. WorkFlow - Choose images from batch to upscale. Thanks to u/Barbagiallo. 
If you want to specify an exact width and height, use the "No Upscale" version of the node and perform the upscaling separately (e. I’ll create images at 1024 size and then will want to upscale them. 0 Alpha + SD XL Refiner 1. Jan 17, 2024 · Copying and pasting pre-built workflows into Comfy UI allows for quick and efficient AI model creation. This ComfyUI work flow starts off with a base image made with SD1. Lora. Here is an example of how to use upscale models like ESRGAN. 9 , euler Feb 28, 2024 · Each serves a purpose: Simple Tiles better interprets the image and reduces bleeding; Detailer adds a lot of detail, and finally, Ultimate SD Upscale handles tiles better at high resolution. Aug 8, 2023 · refinerモデルを正式にサポートしている. Final 1/5 are done in refiner. The fact that negative prompts don’t work does not help. ↑ Node setup 1: Generates image and then upscales it with USDU (Save portrait to your PC and then drag and drop it into you ComfyUI interface and replace prompt with your's, press "Queue Prompt") ↑ Node setup 2: Upscales any custom image Download this first, put it into the folder inside conmfy ui called custom nodes, after that restar comfy ui, then u should see a new button on the left tab the last one, click that, then click missing custom nodes, and just install the one, after you have installed it once more restart comfy ui and it ahould work. Although the Load Checkpoint node provides a VAE model alongside the diffusion model, sometimes it can be useful to use a specific VAE model. It supports txt2img with a 2048 upscale. There is a latent workflow and a pixel space ESRGAN workflow in the examples. Basically if i find the sdxl turbo preview close enough to what i have in mind, i 1click the group toggle node and i use the normal sdxl model to iterate on sdxl turbos result, effectively iterating with a 2nd ksampler at a denoise strength of 0. the quick fix is put your following ksampler on above 0. Here’s the aspect ratios that go with those resolutions. 
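The aspect ratios quoted in the text can be turned into a small lookup. The candidate buckets are the roughly 1-megapixel SDXL resolutions mentioned above; comparing ratios in log space is my own choice, so portrait and landscape targets are treated symmetrically.

```python
import math

# Common ~1-megapixel SDXL resolutions (width, height).
SDXL_RESOLUTIONS = [
    (1024, 1024), (896, 1152), (1152, 896), (832, 1216), (1216, 832),
    (768, 1344), (1344, 768), (640, 1536), (1536, 640),
]

def closest_sdxl_resolution(ar_w: float, ar_h: float) -> tuple[int, int]:
    # Pick the bucket whose aspect ratio is nearest the target, measured as
    # distance between log-ratios.
    target = math.log(ar_w / ar_h)
    return min(SDXL_RESOLUTIONS,
               key=lambda wh: abs(math.log(wh[0] / wh[1]) - target))

# An iPhone-style 9:19.5 portrait target lands on the 640x1536 bucket.
```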
After that, it goes to a VAE Decode and then to a Save Image node. This SDXL (Refiner -> CCXL) -> SD 1. You can construct an image generation workflow by chaining different blocks (called nodes) together. 3 passes. The Ultimate AI Upscaler (ComfyUI Workflow) Workflow Included. This is ur workflow copied, but with a second sampler and a NNLatentUpscale (1. I give up. But then today, I loaded Searge SDXL Workflow, as so many people have suggested, and I am just absolutely lost. 2. Heads up: Batch Prompt Schedule does not work with the python API templates provided by ComfyUI github. Wow! great work, very detailed, would be good to see your workflow, the link you posted above isn't it and can't find it amongst the video comments. This tutorial aims to introduce you to a workflow for ensuring quality and stability in your projects. 40. Drag and drop this image to the ComfyUI canvas. The usage description is inside the workflow. It is based on the SDXL 0. If you don't see this option, please click on Update All on the ComfyUI Manager Menu. Omg I love this Follow the ComfyUI manual installation instructions for Windows and Linux. These nodes include common operations such as loading a model, inputting prompts, defining samplers and more. 5 days ago · ComfyUI is a node-based GUI for Stable Diffusion. Aug 2, 2023 · This is my current SDXL 1. It stresses the significance of starting with a setup. 21, there is partial compatibility loss regarding the Detailer workflow. I vae decode to an image, use Ultrasharp-4x to pixel upscale. If you don’t want the distortion, decode the latent, upscale image by, then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-“free” way to do it. I’m struggling to find what most people are doing for this with SDXL. Users have the ability to assemble a workflow for image generation by linking various blocks, referred to as nodes. Install the ComfyUI dependencies. 
I'm loving using this UI because in addition to being super fast, it's very accurate when upscaling. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. SDXL Ultimate Workflow is the best and most complete single workflow that exists for SDXL 1. In this latest version, SDXL Lightning has been implemented, along with the WD14 node, which automatically labels your image, eliminating the need to write Nov 15, 2023 · In your server installation folder, do you have the file ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\models\ip-adapter_sdxl_vit-h. Do the following steps if it doesn’t work. py --force-fp16. 5 without lora, takes ~450-500 seconds with 200 steps with no upscale resolution (see workflow screenshot from ver 1. type in the prompts in positive and negative text box Int to float. Here is my current hacky way of getting a latent type upscale but it is slow Load VAE. Then another node under loaders> "load upscale model" node. 5 models to be honest. Trying to use b/w image to make impaintings - it is not working at all. Instead of using techniques like virtual DOM diffing, Svelte writes code that surgically updates the DOM when the state of your app changes. 5 models and I don't get good results with the upscalers either when using SD1. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. Working with tiles each tile is processed at the optimal resolution for the model and the memory consumption is just the same as for one tile size. For example: E: \ ComfyUI \ models \ loras \ lcm lora sdv1-5. There is an Article here explaining how to install UPDATE: The alternative node I found which works (with some limitations) is this one: UPDATE 2: FaceDetailer now working again with an update of ComfyUI and all custom nodes. 
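The "load upscale model" chain sketched above can be written out in ComfyUI's API-format JSON (shown here as a Python dict). The node class names are the stock ComfyUI ones the text itself refers to; the node ids and both filenames are placeholders, and this is a hand-written sketch rather than an exported workflow.

```python
# Each node: {"class_type": ..., "inputs": ...}; an input fed from another
# node is written as [source_node_id, output_index].
upscale_graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},              # placeholder filename
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}}, # placeholder model
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}
```

A dict like this is what you would submit under the "prompt" key of ComfyUI's /prompt endpoint, or get back from "Save (API Format)".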
Hi i am also tring to solve roop quality issues,i have few fixes though right now I see 3 issues with roop 1 the faceupscaler takes 4x the time of faceswap on video frames 2 if there is lot of motion if the video the face gets warped with upscale 3 To process large number of videos pr photos standalone roop is better and scales to higher quality images but misses out on the img2img control Net Control when models are loaded SDXL. Combined Searge and some of the other custom nodes. 0 and SD 1. Introducing ComfyUI Launcher! new. Low denoising strength can result in artifacts, and high strength results in unnecessary details or a drastic change in the image. 640 x 1536: 10:24 or 5:12. The main issue with this method is denoising strength. If you have another Stable Diffusion UI you might be able to reuse the dependencies. 2 and 0. The way I've done it is sort of like that, as latent upscale doesn't work brilliantly. I've submitted a bug to both ComfyUI and Fizzledorf as I'm not sure which side will need to correct it. The gist of it: * The result should best be in the resolution-space of SDXL (1024x1024). Got sick of all the crazy workflows. It'll load a basic SDXL workflow that includes a bunch of notes explaining things. 5:9 so the closest one would be the 640x1536. ) These images are zoomed-in views that I created to examine the details of the upscaling process, showing how much detail Dec 26, 2023 · When starting comfyui with the argument --extra-model-paths-config . Oct 9, 2023 · Crisp and beautiful images with relatively short creation time, easy to use. Is it feasible to fix that bug within the next two weeks? I really appreciate the work you are doing, thanks. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. You signed in with another tab or window. Simple ComfyUI Img2Img Upscale Workflow. 
Then I VAE encode back to a latent and pass that through the base/refiner again in the same way as the first pass. It should be fixed as of a couple of days ago. Follow the ComfyUI manual installation instructions for Windows and Linux. I think you can try 4x if you have the hardware for it. How are people upscaling SDXL? I'm looking to upscale to 4k and probably 8k even.