ComfyUI: best upscale models and methods (community notes)

What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. Below: how to upscale images using ComfyUI and models like 4x-UltraSharp for crystal-clear enhancements, collected from community discussion.

It's not necessarily an inferior model: SD 1.5 is in a mature state where almost all the models and LoRAs are based on it, so you get better quality and speed with it.

The idea is simple: use the refiner as a model for upscaling instead of using a 1.5 model, and it can be applied to Automatic easily.

All images are generated using both the SDXL Base model and the Refiner model, each automatically configured to perform a certain amount of diffusion steps according to the "Base/Refiner Step Ratio" formula defined in the dedicated widget. Warning: the workflow does not save images generated by the SDXL Base model. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. This is done after the refined image is upscaled and encoded into a latent.

The hires script is overriding the KSampler's denoise, so you're actually using 0.56 denoise, which is quite high and gives it just enough freedom to totally screw up your image. In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. Instead, I use Tiled KSampler with 0.6 denoise and either CNet strength 0.9, end_percent 0.5, euler, sgm_uniform, or CNet strength 0.9, euler. The downside is that it takes a very long time.

And when purely upscaling, the best upscaler is called LDSR. That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI.

In A1111 you can do hires fix with any model upscaler that you want, like 4x-UltraSharp, and you can also choose the dimensions and denoising strength. Is there a way to do this in ComfyUI? I know of the Hires Script node, but when you choose the Upscaler Model on that one, you can't choose the denoising strength or number of steps.

Create a new ComfyUI (I have created a separate ComfyUI install just for SUPIR), and in the new one link the model folders with the full paths for at least the base models folder and the checkpoint folder in ComfyUI's extra_model_paths.yaml. It's high quality, and it's easy to control the amount of detail added using control scale and restore cfg, but it slows down at higher scales faster than Ultimate SD Upscale does.

But basically: txt2img, img2img, 4x upscale with a few different upscalers. This is what I have so far (using the custom nodes to reduce the visual clutter).

An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the "sampler state" setting, for the first-pass (low-resolution) sample.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

For the latent second pass, keep the first-pass settings:
- Upscale Latent By: 1.5-ish new size
- Seed: 12345 (same seed)
- CFG: 3 (same CFG)
- Steps: 5 (same)
- Denoise: this is where you have to test. 0.45 is the minimum and fairly jagged, 0.65 seems to be the best, 0.80 is usually mutated but sometimes looks great.

Same seed is probably not necessary, though, and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers.

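As a standalone illustration of what the "Upscale Latent By" step actually does (a rough torch sketch, not ComfyUI's own node code; shapes assume SD's 8x VAE downscale):

```python
# Minimal sketch: "Upscale Latent By" is essentially interpolation of the
# 4-channel latent tensor; no new detail is created at this step.
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)  # a 512x512 image is a 64x64 latent (8x downscale)

# the equivalent of "Upscale Latent By: 1.5" with bicubic
stretched = F.interpolate(latent, scale_factor=1.5, mode="bicubic")
print(stretched.shape)  # torch.Size([1, 4, 96, 96]) -> decodes to ~768x768

# the second KSampler pass (denoise ~0.45-0.65 above) is what re-invents
# detail on top of this blurry stretch
```
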
I want to upscale my image with a model and then select the final size of it. For example, I can load an image, select a model (4x-UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control for how much that model will multiply (often a slider from 1 to 4 or more). So in those other UIs I can use my favorite upscalers (like NMKD's 4x Superscalers) without being forced to have them only multiply by 4x.

Hi, does anyone know if there's an Upscale Model Blend Node, like with A1111? Being able to get a mix of models in A1111 is great, where two models…

The right upscaler will always depend on the model and style of image you are generating; Ultrasharp works well for a lot of things, but sometimes has artifacts for me with very photographic or very stylized anime models.

Usually I use two of my workflows:
- image upscale is less detailed, but more faithful to the image you upscale;
- latent upscale looks much more detailed, but gets rid of the detail of the original image.

The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. This is not the case. The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. That's because latent upscale turns the base image into noise (blur). I gave up on latent upscale. A pixel upscale using a model like UltraSharp is a bit better, and slower, but it'll still be fake detail when examined closely. If you want actual detail in a reasonable amount of time, you'll need a second pass with a second sampler.

I like doing a basic first-pass latent upscale before that.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

comfyanonymous/ComfyUI: the most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface. If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials.

Generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode and colour matching. Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended.

Now go back to img2img, mask the important parts of your images, and upscale that. And I'm sometimes too busy scrutinizing the city, landscape, object, vehicle or creature in which I'm trying to encourage insane detail to see what hallucinations it has manifested in the sky.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. From what I've generated so far, the model upscale edges slightly better than the Ultimate Upscale.

If a caption file exists (e.g. from SOTA batch captioners like LLaVA), it will be used as the prompt. Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt is simply "Caption the image" or "Describe the image", Florence2 wins.

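For the captioning side, here is a sketch of running Florence-2 in MORE_DETAILED_CAPTION mode outside ComfyUI, closely following the sample usage published on the model card (model id, task token, and generation settings come from that card; treat the details as subject to change):

```python
# Sketch: out-of-the-box captioning with Florence-2, per the model card sample.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"  # the "large, not FT" variant discussed above
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

task = "<MORE_DETAILED_CAPTION>"         # Florence-2 task token for detailed captions
image = Image.open("image.png").convert("RGB")

inputs = processor(text=task, images=image, return_tensors="pt").to("cuda", torch.float16)
generated = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
    num_beams=3,
)
text = processor.batch_decode(generated, skip_special_tokens=False)[0]
caption = processor.post_process_generation(text, task=task, image_size=image.size)
print(caption[task])  # usable as a prompt for the upscale pass
```
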
I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. would be appreciated.

Nodes! Because you can move these around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great (the best part about it), because I often forget that stuff.

After generating my images I usually do Hires.fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x.

That workflow consists of video frames at 15fps into VAE Encode and ControlNets, a few LoRAs, AnimateDiff v3, lineart and scribble-sparsectrl ControlNets, a basic KSampler with low cfg, a small upscale, AD detailer to fix the face (with lineart and depth ControlNets in SEGS, the same LoRAs, and AnimateDiff), upscale with model, interpolate, combine to 30fps. I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

I'm trying to find a way of upscaling the SD video up from its 1024x576. The resolution is okay, but if possible I would like to get something better. I've so far achieved this with the Ultimate SD image upscale and the 4x-Ultramix_restore upscale model.

I liked the ability in MJ to choose an image from the batch and upscale just that image. This is the "latent chooser" node; it works, but is slightly unreliable.

It has more settings to deal with than Ultimate Upscale, and it's very important to follow all of the recommended settings in the wiki. Which options on the encoder and decoder nodes would work best for this kind of system? I mean tile sizes for the encoder and decoder (512 or 1024?), and the diffusion dtype of the SUPIR model loader: should I leave it on auto, or any ideas? Thank you again and keep up the good work.

Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much.

The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely. If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it. Does anyone have any suggestions, would it be better to do an ite…

Though, from what someone else stated, it comes down to use case.

The following allows you to use the A1111 models etc. within ComfyUI, to prevent having to manage two installations or two sets of model files / LoRAs: within ComfyUI, use the extra_model_paths.yaml file. There is an example as part of the install (extra_model_paths.yaml.example). If you are looking to share between SD installs, it might look something like this:

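As a sketch, this is roughly what the shipped example file contains, trimmed down; only base_path normally needs editing, and the subfolder names below mirror a stock A1111 install:

```yaml
# extra_model_paths.yaml -- adapted from the example file shipped with ComfyUI;
# point base_path at your existing A1111 install so both UIs share one model store
a111:
    base_path: /path/to/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```
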
To get the absolute best upscales requires a variety of techniques, and often requires regional upscaling at some points. There is no tiling in the default A1111 hires fix.

I'm using a workflow that is, in short: SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer.

For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. You could also try a standard checkpoint with, say, 13 and 30 steps.

For a dozen days I've been working on a simple but efficient workflow for upscaling, after borrowing many ideas and learning ComfyUI. I share many results and many ask me to share, so I'm happy to announce today: my tutorial and workflow are available.

I'm using a simplified version of Murphylanga's ultimate tile upscale.

The realistic model that worked best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely. This model yields way better results.

The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI upscale > downscale image nodes.

Since you have only 6GB VRAM, I would choose tile ControlNet + SD Ultimate Upscale.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

Best Detailer + Upscaler nodes and models? Curious if anyone knows the most modern, best ComfyUI solutions for these problems. Detailing/Refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. Upscaling: increasing the resolution and sharpness at the same time.

Specifically, the padded image is sent to the ControlNet as pixels (the "image" input), and the padded image is also sent, VAE-encoded, to the sampler as the latent image.

FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

I added a switch toggle for the group on the right. Basically, if I find the SDXL Turbo preview close enough to what I have in mind, I one-click the group toggle node and use the normal SDXL model to iterate on the Turbo result, effectively iterating with a second KSampler at a denoise strength of 0.40.

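As a standalone analogue of that second low-denoise pass (using diffusers rather than ComfyUI; the model id, file names, and the 0.25 strength are illustrative, not the poster's exact settings):

```python
# Sketch: re-run a draft through img2img at low strength so the model adds
# detail without repainting the composition.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("turbo_preview.png")  # e.g. an SDXL Turbo draft you liked

# strength plays the role of the KSampler "denoise" discussed above:
# ~0.2-0.3 refines detail, ~0.5+ starts changing content
refined = pipe(
    prompt="same prompt as the draft",
    image=image,
    strength=0.25,
    num_inference_steps=30,
).images[0]
refined.save("refined.png")
```
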
Download this first and put it into the folder inside ComfyUI called custom_nodes, then restart ComfyUI. You should see a new button on the left tab (the last one); click that, then click Missing Custom Nodes, and install the one that comes up. After you have installed it, restart ComfyUI once more and it should work.

Click on Install Models in the ComfyUI Manager menu, search for "upscale", and click Install for the models you want. Downloading the model: it's best if you download it using ComfyUI Manager itself, since it creates the correct path and doesn't create any mess. Which models to download: stage_a.safetensors and the clip model (its name is simply model.safetensor).

Look at this workflow; the latest version can be downloaded here. The file is shared with a .txt extension, so it loads after you remove «txt». Moreover, batch folder processing was added, and the base model used was converted to Juggernaut-XL-v9. Instructions to use any base model have been added to the shared scripts post.

And for some reason sometimes it just needs to download the model AutoencoderKL. Sometimes it just does the upscale; other times it finds this model and I see the steps to load it. But it does nothing: I look, and my GPU and CPU are doing no extra work, and I'm downloading nothing. It only generates its preview. Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again.

Like I can understand that using the Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node. (You may also want to try upscale model > latent upscale, but that's just my personal preference really.) If you let it get creative (i.e. higher denoise), it adds appropriate details.

Messing around with upscale-by-model is pointless for hires fix.

For upscaling there are many options. From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Also, both have a denoise value that drastically changes the result. Newcomers should familiarize themselves with easier-to-understand workflows, as it can be somewhat complex to understand a workflow with so many nodes in detail, despite the attempt at a clear structure.

A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). See ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

I find Upscale useful, but as I often upscale to 6144x6144, GigaPixel has the batch speed and capacity to make 100+ upscales worthwhile. ComfyUI upscaling is best for a dozen or so upscales; alas, it would take all week to do 100+. I don't bother going over 4K usually though; you get diminishing returns on render times with only 8GB VRAM ;P

If you'd like to load a LoRA, you need to connect MODEL and CLIP to the node, and after that all the nodes that require these two wires should be connected with the ones from Load LoRA; then the workflow should work without any problems.

Near the top of the console there is system information for VRAM, RAM, which device was used (graphics card), and version information for ComfyUI. This information tells us what hardware ComfyUI sees and is using. "Import times for custom nodes" is a list that shows which custom nodes loaded (or failed to load), and a successful run ends with "got prompt … Prompt executed in X.XX seconds".

ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size. The best method, as said below, is to upscale the image with a model (then downscale, if necessary, to the desired size, because most upscalers do x4 and it's often too big to process), then send it back to VAE Encode and sample it again. For example, if you start with a 512x512 empty latent image, then apply a 4x model, apply "upscale by" 0.5 to get a 1024x1024 final image (512 x 4 x 0.5 = 1024).

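The same arithmetic as a tiny Pillow sketch; the file names are placeholders, and the 4x input would come from an upscale model such as 4x-UltraSharp:

```python
# net 2x: a 4x model output downscaled by 0.5 with bicubic, as described above
from PIL import Image

upscaled = Image.open("from_4x_model.png")  # e.g. 2048x2048 from a 512x512 source
factor = 0.5
result = upscaled.resize(
    (int(upscaled.width * factor), int(upscaled.height * factor)),
    Image.BICUBIC,
)
result.save("net_2x.png")
print(upscaled.size, "->", result.size)     # (2048, 2048) -> (1024, 1024)
```
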
Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution: basic latent upscale, basic upscaling via model in pixel space, with tile ControlNet, with SD Ultimate Upscale, with LDSR, with SUPIR, and whatnot. It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I ran some tests this morning. TLDR: both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales.

"Upscaling with model" is an operation on normal images, and we can operate with a corresponding model such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth. "Latent upscale" is an operation in latent space, and I don't know any way to use the models mentioned above in latent space. If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale.

I rarely use upscale-by-model on its own because of the odd artifacts you can get.

Tried the llite custom nodes with lllite models and was impressed: good for depth and openpose, so far so good. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.0-RC; it's taking only 7.5GB VRAM and swapping the refiner too. Use the --medvram-sdxl flag when starting.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description for the area defined by the coordinates starting from x:0px, y:320px to x:768px, y:…

New to ComfyUI, so not an expert: I'm trying to combine the Ultimate SD Upscale with a Blur ControlNet like I do in Automatic1111, but I keep getting errors in ComfyUI. I'm sure I'm just doing something wrong when implementing the CN.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

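Rerunning a workflow programmatically is also possible through ComfyUI's HTTP API, the same one the web UI uses. A sketch, assuming a default local install on port 8188 and a workflow exported via "Save (API Format)"; the node id "12" is a hypothetical id from that export:

```python
# Sketch: queue a saved workflow against a local ComfyUI instance.
import json
import urllib.request

with open("upscale_workflow_api.json") as f:  # exported via Save (API Format)
    workflow = json.load(f)

# example tweak before rerunning: bump the second-pass denoise on a KSampler
workflow["12"]["inputs"]["denoise"] = 0.45

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # returns a prompt_id on success
```
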
LOL, yeah, I push the denoising on Ultimate Upscale too, quite often, just saying "I'll fix it in Photoshop".

ComfyUI is amazing. I get good results using stepped upscalers, the UltimateSD upscaler, and stuff like that.

If I wanted to do an upscaled image like ESRGAN that requires working in image space, it suggests that it matters whether I do (AOM3 model) -> Image -> Upscale -> refining with another model + nice VAE on the final step only, vs (AOM3 model) -> (SWAP VAE NOW!) -> Image -> Upscale -> refining with another model + nice VAE.

You have two different ways you can perform a "hires fix" natively in ComfyUI: Latent Upscale, or an Upscaling Model. You can download the workflows over on the Prompting Pixels website.

You can use it on any picture; you will need ComfyUI_UltimateSDUpscale. It's especially amazing with SD1.5 combined with ControlNet tile and the foolhardy upscale model.

For SD 1.5 I'd go for Photon, RealisticVision or epiCRealism.

Model: base SD v1.5 (see workflow for more info). Upscale x1.5 ~ x2: no need for a model, this can be a cheap latent upscale. Sample again at denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

SDXL is fine as the upscale model at these low denoise values (0.15-0.20).

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

* If you are going for fine details, don't upscale in 1024x1024 tiles on an SD1.5 model unless the model is specifically trained on such large sizes.

Do you all prefer separate workflows or one massive all-encompassing workflow?

Best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res. ReActor has built-in CodeFormer and GFPGAN, but all the advice I've read says to avoid them. There are also "face detailer" workflows for faces specifically. The FACE_MODEL output from the ReActor node can be used with the Save Face Model node to create an insightface model, and that can then be used as a ReActor input instead of an image; but you can't use that model for generations/KSampler, it's still only useful for swapping. It turns out lovely results, but I'm finding that when I get to the upscale stage, the face changes to something very similar every time. For comparison, in A1111 I drop the ReActor output image in the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script and scale it; with a denoise setting of 0.25 I get a good blending of the face without changing the image too much. I haven't been able to replicate this in Comfy.

Because the upscale model of choice can only output a 4x image and they want 2x, your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).

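A torch-level sketch of that decode, upscale, re-encode round trip using the diffusers VAE; the 4x model itself is stubbed with bicubic interpolation here, so this only illustrates the plumbing between the two KSamplers (the VAE id and CUDA device are assumptions):

```python
# Sketch of: VAE Decode -> pixel upscale -> Upscale Image By 0.5 -> VAE Encode
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda").eval()

latent = torch.randn(1, 4, 64, 64, device="cuda")  # stand-in output of KSampler (1)

with torch.no_grad():
    # VAE Decode (sampler latents are scaled; undo that before decoding)
    pixels = vae.decode(latent / vae.config.scaling_factor).sample
    # stand-in for "Upscale Image (using Model)" with a 4x model
    pixels = F.interpolate(pixels, scale_factor=4, mode="bicubic")
    # "Upscale Image By" 0.5 -> net 2x
    pixels = F.interpolate(pixels, scale_factor=0.5, mode="bicubic")
    # VAE Encode, re-applying the scaling factor
    latent2 = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor

print(latent2.shape)  # torch.Size([1, 4, 128, 128]) -> ready for KSampler (2) at low denoise
```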