Why is ComfyUI faster? A digest of Reddit discussion

What follows is a digest of Reddit comments comparing ComfyUI's speed with Automatic1111 (A1111) and other Stable Diffusion front ends, along with troubleshooting tips from the same threads.
One recurring explanation is prompt handling: A1111 does a lot behind the scenes with prompts, while ComfyUI doesn't, which makes ComfyUI more sensitive to prompt length. The sampler shouldn't affect speed much; one commenter always uses Euler with the normal scheduler, and for DPM++ SDE Karras selects the Karras scheduler.

Another recurring theme is architecture. By being a modular program, ComfyUI lets everyone build workflows to meet their own needs or to experiment on whatever they want, and several commenters find it much better suited for studio use than other GUIs available now: you can build an extremely specific workflow with a level of control that no other system matches. Others disagree: to them Comfy feels better suited for post-processing than for image generation. There is no point using a node-based UI just to generate an image, but layering different models for upscaling or feature refinement is where it shines, and at the moment using LoRAs and textual inversions is a pain. Healthy competition, even between direct rivals, is good for both parties. The usual advice: if you want a straightforward workflow that leads you quickly to a result, use Automatic1111; if you want to go into more detail and have complete control over your composition, use ComfyUI. (Mac users also mention Draw Things, which has a lot of configuration settings.)

Setup questions come up often. One user asks: in my site-packages directory I see "transformers" but not "xformers"; can xformers be used with ComfyUI? Another complains that a posted workflow relies heavily on third-party nodes from unknown extensions. On learning what nodes do, which is the hard part of ComfyUI, there is a wiki created by the dev (comfyanonymous) that helps explain many things. For keeping installs current, one user runs a script that updates ComfyUI and checks all the custom nodes. In general, adjusting settings, using efficient workflows, and keeping system resources optimized results in faster rendering and a smoother experience.

One user asked Reddit why generations slow down over a session (more on this below) and found everyone blindly copy-pasting the same advice over and over.

On video: both workflows in the ComfyUI article use a single image as the input/prompt for the video creation and nothing else (an image as input, not a video).

On upscaling: a user with a 4090 rig reports 4x-upscaling the exact same images at least 30x faster with a standalone upscaler than with ComfyUI workflows; no matter what, Upscayl is a speed demon in comparison, though it also produces somewhat different colors and its output is blurrier. (PSA from the same threads: RealPLKSR is a new, fantastic, and fast 4x upscaling architecture.) That raises a common question: many workflows upscale 4x with a model and then downscale the result. Shouldn't you reach the same-ish result faster by just upscaling with a 2x upscaler? Is there some benefit to the upscale-then-downscale approach, or is it just related to the availability of 2x models?
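The thread's own suggestion, availability, is most of it: well-trained 2x models are scarcer than 4x ones, so the common pattern is to upscale 4x with a model and resample down to the 2x size you actually wanted. Assuming the 4x model output is already saved to disk, the downscale step is just a high-quality resample. A minimal Pillow sketch (the file names are placeholders):

```python
from PIL import Image

# Output of a 4x model upscale (e.g. an ESRGAN-family or RealPLKSR model).
img = Image.open("upscaled_4x.png")

# Resample down to an effective 2x of the original resolution.
# Lanczos shrinks cleanly while keeping the detail the model added.
half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
half.save("upscaled_2x.png")
```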
If you still have performance issues, report them in the thread, and make sure to post your full ComfyUI log and your workflow; the more information the better. Common sources of user error are custom nodes, or a wrongly installed startup package like torch or xformers. Update first: run update/update_comfyui.bat, then start ComfyUI and try your workflow again.

For sharing work, ComfyFlowApp offers two modes. Creator mode: users (also creators) can convert a ComfyUI workflow into a web application, run the application locally, or publish it to comfyflow.app to share with other users. Studio mode: users download and install the ComfyUI web application from comfyflow.app and run ComfyFlowApp locally.

For cloud use, one user rents a GPU with a vast.ai account and runs a Jupyter Notebook for trying new things, working fast, and img2img batch iterative upscaling; when something hangs, they merely stop and restart the Jupyter script. Colab breaks in their normal operation; Vast is not as fast but is more reliable, regularly giving several hours before anything breaks.

On raw speed, one user reports: even if there's an issue with my installation or with the implementation of the refiner in SD.next (still experimental), ComfyUI's performance is significantly faster than what you are reporting. Plus, Comfy is faster, and with the ready-made workflows a lot of things can be simplified while you learn what works and how. That said, another commenter finds that everything goes smooth and fast only on a 4090.

On distillation, DMD2 aims to create fast, one-step image generators that produce high-quality images at much less computational cost than traditional diffusion models, which typically require many steps per image. Its key improvements over DMD: it eliminates the need for a regression loss and for expensive dataset construction.

Comfy's node graph also makes comparisons easy: you can run workflows side by side, one with only the base model and one with base + LoRA, and see the difference; to compare step counts in the diffusion process, one user first made three outputs at 10, 20, and 30 steps. On what a LoRA actually is: assume you have a base checkpoint (SD 1.5) and someone trains and fine-tunes it to generate anime images. The resulting model can itself be used as a checkpoint, but instead of distributing that whole model, they can publish a LoRA, which is the difference between the fine-tuned model and the original, and generally small in size.
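That "difference" is stored as a low-rank factorization, which is why LoRA files are so small. A minimal numpy sketch of the idea (layer sizes and rank are arbitrary, chosen only for illustration):

```python
import numpy as np

d, k, r = 768, 768, 8            # layer dimensions and LoRA rank (illustrative)
W_base = np.random.randn(d, k)   # one weight matrix from the base checkpoint

# A LoRA ships two small matrices instead of a full d x k weight delta.
B = np.random.randn(d, r) * 0.01
A = np.random.randn(r, k) * 0.01
strength = 1.0                   # what UIs expose as the "LoRA weight"

# Applying the LoRA reconstructs the fine-tuned weights on the fly.
W_tuned = W_base + strength * (B @ A)

print(W_base.size, "params in the full matrix vs",
      B.size + A.size, "stored by the LoRA")  # 589824 vs 12288
```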
On a different practical note: why does packing models into one archive (7zip) speed up copying? Copying thousands of small files is slow because each file needs per-file metadata work before and after the data transfer; the actual copy is quite fast, but writing the metadata is slow. Compare this to a single 10 MB file: now the metadata steps are a very small fraction of the total time, and most of the copy is sequential I/O rather than random, so it seems much faster.

Speed complaints about A1111 are common: "I've tried everything, reinstalled drivers, reinstalled the app, and still can't get WebUI to run quicker." Others see only small gaps: "On my machine, Comfy is only marginally faster than 1111: ComfyUI takes 1:30, Auto1111 just over 2:05." One A1111 user of half a year calls ComfyUI a real breath of fresh air but is somewhat upset by the slower generation; their other gripe is that it's difficult to undo things (Ctrl/Cmd+Z doesn't work?). And not every comparison favors Comfy: on an RTX 2060 (6 GB VRAM, 32 GB RAM, Windows 11), one user gets vastly better performance on SD Forge with Flux Dev than on Comfy. Having used ComfyUI quite a bit, they found Forge great: things just work, everything feels fast, no weird bugs found, and Forge's memory management is sublime.

To learn ComfyUI faster, install the ComfyUI Manager extension; with it you can grab other custom nodes easily. Despite the complex look, you also just see everything clearly when using Comfy. Even just six months ago, having TensorRT in Comfy would have been decently big news; all it takes is a little time to compile the specific model with the resolution settings you plan to use (this takes 20 to 30 minutes), and one commenter is curious why Nvidia waited so long to assist with making it available (which doesn't negate discussing the reasons it's being implemented now). Everything to do with diffusers, on the other hand, is pretty much deprecated in Comfy right now.

Inpainting draws criticism: maybe I am doing it wrong, but ComfyUI inpainting is a bit awkward to use; you have to draw a mask, save the image with the mask, then upload it to the UI again to inpaint. ComfyUI inpainting works with SD 1.5 and 2.x; whether it works with SDXL, the commenter didn't know. Metadata is another gap: from what I gather, only A1111 and its derivatives correctly append metadata like prompts, CFG scale, and the checkpoints/LoRAs used, while ComfyUI cannot, at least not in the same way.

A puzzle from one workflow: [Please Help] why is a bigger image faster to generate? The second KSampler runs about 7x faster, even though it processes a larger image.

The framing several commenters converge on: A1111 is like ComfyUI with prebuilt workflows and a GUI for easier usage, but those prebuilt structures aren't optimized for low-end hardware. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. To verify, generate something and check the console for your iterations per second; then drop the picture you generated back into ComfyUI, press generate again while watching the it/s, and make sure no nodes before the KSampler quickly flick green (which would mean part of the graph is re-executing).

On precision: floating-point precision in fp16 is very poor for very small decimals, while bf16 is capable of much better representation of very small values. Try using an fp16 model config in the CheckpointLoader node; that should speed things up a bit on newer cards.
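A quick way to see the fp16/bf16 trade-off the commenters are pointing at (a minimal sketch; requires PyTorch):

```python
import torch

tiny = 1e-8  # below fp16's smallest subnormal (~6e-8)

print(torch.tensor(tiny, dtype=torch.float16))   # tensor(0.) - underflows to zero
print(torch.tensor(tiny, dtype=torch.bfloat16))  # ~1e-8 - bf16 shares fp32's exponent range

# The flip side: bf16 has fewer mantissa bits, so it is coarser near 1.0.
print(torch.tensor(1.001, dtype=torch.float16))  # ~1.0010 (representable)
print(torch.tensor(1.001, dtype=torch.bfloat16)) # 1.0 - the .001 is rounded away
```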
VFX artists are also typically very familiar with node-based interfaces, which is part of why ComfyUI clicks for them. Don't get scared by the noodle forests you see in some screenshots: you define the complexity of what you build, and despite the complex look it's the perfect tool to explore generative AI. Once you get comfy with Comfy, you don't want to go back. Architecturally, Comfy is basically a backend with a very light frontend, while A1111 is a very heavy frontend, and nothing gets close to ComfyUI here. Skeptics counter that if it allowed more control then more people would be interested, but it just replaces dropdown menus and windows with nodes.

Not every platform story is smooth: one user on Linux with an AMD card was constantly getting OOM driver freezes and graphical glitches with ComfyUI. Another, on Runpod, advises: don't load Runpod's ComfyUI template; load the Fast Stable Diffusion template instead. Hardware questions recur too, e.g. a user generating (mostly in ComfyUI) on a 3070 Ti laptop (8 GB VRAM), for whom it has been noticeably faster except with SDXL, asking what desktop GPU to upgrade to. And the speed spread itself is a mystery: why are there such big differences when generating between ComfyUI, Automatic1111, and other solutions, and why does it differ per GPU? A friend of one commenter runs a GTX 960 ("what a madman") and sees up to 3x the speed in ComfyUI over Automatic's.

LoRA behavior is its own puzzle: one user found that LoRAs whose Kohya samples were very good, and which looked great in A1111, were awful in ComfyUI tests; the ComfyUI output looks as if the LoRA weight were reduced. They didn't care about getting the same image from both UIs, but while the A1111 render was almost perfect, the ComfyUI one was clearly weaker, and they were more interested in why LoRA behaves so differently.

On CPU inference: with the CPP version, Task Manager shows 100% CPU usage with low RAM use, while A1111 and ComfyUI show very low CPU and very high RAM use (checked on different runs, so as not to influence results); the CPP version also overheats the computer much faster than A1111 or ComfyUI. And a note of caution: ComfyUI has absolutely no security baked in, neither on the local/execution side nor remote/network authentication, and custom nodes run arbitrary code.

Opinions differ on the payoff: "Thanks for implementing this so quickly! Messing around with it, I feel the hype was a bit too much" versus "Now I've been on ComfyUI for a few months and I won't turn on A1111 anymore." After all, the more tools there are in the SD ecosystem, the better for SAI, even if ComfyUI and its core library are the official code base for SAI these days. Seems relevant here: one commenter wrote a module to streamline the creation of custom nodes in ComfyUI. Just write a regular Python function, annotate the signature fully, then slap a @ComfyFunc decorator on it; it eliminates all the boilerplate and redundant information.
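That decorator module is third-party, and its exact API isn't shown in the thread. For context, though, this is roughly the stock boilerplate a ComfyUI custom node otherwise needs (the node itself is a made-up example; ComfyUI passes images as [batch, height, width, channel] float tensors in 0..1):

```python
class BrightenImage:
    """Toy node: multiply image brightness by a factor."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image": ("IMAGE",),
            "factor": ("FLOAT", {"default": 1.2, "min": 0.0, "max": 10.0, "step": 0.1}),
        }}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "brighten"
    CATEGORY = "image/adjust"

    def brighten(self, image, factor):
        # Nodes return a tuple matching RETURN_TYPES.
        return ((image * factor).clamp(0.0, 1.0),)

# ComfyUI discovers nodes through this mapping in the extension's __init__.py.
NODE_CLASS_MAPPINGS = {"BrightenImage": BrightenImage}
```

A decorator like the one described can infer INPUT_TYPES and RETURN_TYPES from the function's type annotations, which is where the boilerplate savings come from.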
There was a loader for diffusers models, but it's no longer in development; that's why people are having trouble using LCM in Comfy now, and likewise the new "60% faster SDXL" (both only support diffusers).

ComfyUI also weights prompts differently than A1111 (more on this below). And terminology matters: a "fork" of A1111 would mean taking a copy of it and modifying the copy with the intent of providing an alternative that can replace the original. When you build on top of software made by someone else, there are many ways to do it; Forge, for instance, is built on top of the A1111 web UI.

Speed milestones keep arriving. This is the first time one commenter has seen diffusion models on desktop CPU fast enough to actually use in practice, and latent consistency models will be added officially to optimum-intel soon. On the Turbo side: "Turbo SDXL LoRA: Stable Diffusion XL faster than light; a few seconds = 1 image," tested on ComfyUI with sampling method LCM, CFG scale from 1 to 2, and 4 sampling steps; in this case the author also uses the ModelSamplingDiscrete node. There are also fast ~18-step images (about 2 s inference time on a 3080).

Here's the thing: ComfyUI is very intimidating at first, so it's completely understandable why people are put off by it. But once you get the hang of it, you understand its power and how much more you can do in it. As you get comfortable, you can experiment and try editing a workflow: for example "Fast Creator v1.4," a free workflow for ComfyUI whose latest update adds new features and improvements to make image creation faster and more efficient (the workflow is huge, but with the toggles it can run pretty fast), or "Comfy1111," a quick and simple SDXL workflow whipped up to mimic Automatic1111's behavior.

Benchmarks from users: I had previously used ComfyUI with SDXL 0.9 and it was quite fast on my 8 GB VRAM GPU (RTX 3070 Laptop). I can tell you that ComfyUI renders 1024x1024 in SDXL at faster speeds than A1111 does 2x hires-fix (for SD 1.5 models) in txt2img, just using a simple workflow. UPDATE: in Automatic1111, my 3060 (12 GB) can generate a 20 base-step, 10 refiner-step 1024x1024 Euler a image in just a few seconds over a minute. Comfy is faster than A1111, and you have a lot of creative freedom to play around with latents, mix and match models, and do other crazy stuff. ComfyUI also has a standalone beta build that runs on Python 3.11.

One strange interference report: "I don't understand why, but even when A1111 is not being used, the simple fact that it's open slows my ComfyUI SDXL generations by 500 to 600%: SDXL runs on ComfyUI at 1.5-2 it/s, but with A1111 open alongside it's 10-12. How can I fix that?" (UPDATE 2: if you meant s/it there, please edit; the unit matters.)

Another oddity: the Flux Q4_K_S just seems to be faster than the smaller Flux Q3_K_S, despite the latter being loaded completely. Maybe it's got something to do with the quantization method? The T5 FP8 + Flux Q3_K_S obviously don't fit together in 8 GB VRAM, and still the Q3_K_S was loaded completely, so maybe I'm just not reading the console right. (One plausible explanation, offered here as an assumption: more aggressive K-quants cost more dequantization work per weight, so a slightly larger quant that still fits can run faster.)

Workflow questions: I'm upscaling a long sequence of images (batch count), one by one; but shouldn't it be possible (and faster) to link the output of one step into the next instead? And the counterpoint opinion: ComfyUI makes things complicated and people become bored; it adds additional steps, you can achieve this faster in A1111, and the only cool thing is that you can repeat the same task from the same workflow.
Hardware experiences vary widely. With an 8 GB RX 6600, one user could only run SDXL in SD.next (out of memory after 1-2 runs at the default 1024x1024); ComfyUI worked, but only at 512x512 or 768x512/512x768, with memory errors even at those sizes from time to time. Curiously, it was about 25% faster than running an SD 1.5 checkpoint on the same PC, though the quality, at least comparing a few prompts, suffered. Others had a similar experience when starting with ComfyUI; someone pointed out that Comfy, among other things, isn't as much of a VRAM hog, and it should be at least as fast as the A1111 UI if you take advantage of that.

From an 8 GB RTX 2070 Super user: I no longer use Automatic unless I want to play around with Temporal Kit. Asked what settings they use: 24 samples, CFG 4, and usually no lightning/turbo. (Open question from the same thread: is infinite zoom possible in ComfyUI? Any experience or knowledge is greatly appreciated.) Another user credits fast RAM for their numbers; another reports a new setup "working well, way faster than the previous method I was using," while testing a bunch of checkpoints and settings to find a happy balance.

Not everyone is sold. "I don't like ComfyUI, because IMO user-friendly software is more important for regular use." "I spent many hours learning ComfyUI and I still don't really see the benefits." Against that: "Am I doing something wrong with A1111, or is ComfyUI just that much faster and better?" The frequent resolution is "looks like it's just my Automatic1111 that has a problem; ComfyUI is working fast. I still need to fix A1111, might have to re-install." For now, the common conclusion on modest laptops: using ComfyUI is the way to go.
A sharper criticism: ComfyUI always says that its workflow describes how SD works, but it simply is not so; it is how ComfyUI works, not how SD works, and some go as far as saying the authors are trying to confuse and mislead people into trusting this. Related skepticism: "Yeah, you can drag a workflow into the window and sure, it's fast, but even though it's 'flexible,' it feels like pulling teeth to work with." The friendlier version of the same point comes from the GitHub Q&A, where the ComfyUI author answers "Why did you make this?" with "I wanted to learn how Stable Diffusion worked in detail."

At the moment there are three ways ComfyUI is distributed; one is the standalone build, where everything is contained in the zip, so you could use it on a brand-new system.

Practical tips from the threads: you can lose the top 4 nodes of that workflow, as they are just duplicates; link them back to the original ones. If compositions break, lower the resolution (too much width and you get side-by-side people; too much height and you get multiple heads), and if you gotta go widescreen, use outpainting or the amazing Photoshop beta. If ComfyUI output looks off next to A1111, try adjusting the CFG scale to 5, and if your prompts are big like typical A1111 prompts, add a token merging node. If results look muddy, your negatives may just be too basic. Before A1111 1.6 some users couldn't run SDXL in A1111 at all, so they used ComfyUI; even now, at the same 1366x768 resolution and 105 steps, one reports A1111 still 30 seconds slower than ComfyUI.

Storage matters too: unless cost is a constraint, and provided you have enough space to back up your files, move everything to an SSD; it's much faster. The main caveat is that moving large files from and to an SSD repeatedly wears it out pretty fast, which is why budget users keep all their models on an HDD.

On CPUs: sorry to say, generation won't be much faster even if you overclock the CPU; if it's 2x faster with hyperthreading enabled, I'll eat my keyboard (go ahead, disable hyperthreading in the UEFI and compare). It's just the nature of how the GPU works that makes it so much faster, as one user who accidentally launched the CPU .bat file discovered. On my 12 GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using system RAM for VRAM near the end of generation, even with --medvram set, while CUI (ComfyUI) can do a batch of 4 and stay within the 12 GB; CUI is also faster. In ComfyUI with Juggernaut XL, a batch of 4 images usually takes 30 seconds to a minute. Still, A1111 remains useful for many things, like grids, which Comfy can do but not as well; this is why many keep and use both. One stark report: "No idea why, but I get about 7.13 s/it on ComfyUI and about 173 s/it on WebUI; my system is more powerful than yours, but not enough to justify this enormous gap." That approach is better for generating a large quantity of images; for editing, it is not really efficient. (And one video user: "I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow.")

Metadata: when images are uploaded, the prompts get automatically detected and displayed, but not the resources used; and when you drag an image into the ComfyUI window, you get the settings used to create THAT image, not the batch.

Multi-GPU: one user running ComfyUI on a machine with 2x RTX 4090 uses the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation; they tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU, for something like 20-50% more images generated per minute.
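NetDist handles the cross-server coordination, but the basic prerequisite, one ComfyUI server per GPU, can be had by launching two pinned instances. A minimal launcher sketch (the path and ports are placeholders, and it assumes a standard ComfyUI checkout):

```python
import os
import subprocess

COMFY_DIR = "/path/to/ComfyUI"  # placeholder: your ComfyUI checkout

procs = []
for gpu, port in [(0, 8188), (1, 8189)]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))  # pin this copy to one GPU
    procs.append(subprocess.Popen(
        ["python", "main.py", "--port", str(port)],
        cwd=COMFY_DIR,
        env=env,
    ))

for p in procs:
    p.wait()  # each server now accepts jobs independently on its own GPU
```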
Why use it professionally? ComfyUI is really good for more "professional" use and allows you to do much more if you know what you are doing, but it's harder to navigate through each setting when you want to tweak: you have to move around the screen a lot, zooming in and out. Workflows are much more easily reproducible and versionable, which is exactly what studio pipelines want. Asked directly whether anyone uses ComfyUI professionally for work, and why they prefer it over alternatives like Midjourney or A1111, the short version given is that ComfyUI enables things that other UIs can't; whether that applies to your case really depends on what you're trying to do. (And lol, full agreement on not using it if you don't want to.)

ComfyUI is a bitch to learn at first, but once you get a grasp of it, and build the workflows you want to use for what you're doing, you are on a plateau and it's really easy. One convert: "I like the web UI more, but ComfyUI just gets things done quicker, and I can't figure out why; it's breaking my brain." A holdout: "At the end of the day I'm faster with A1111: better UI shortcuts, a better inpaint tool, better clipboard copy/paste when you want to use Photoshop." Comfy does launch faster than Auto1111, though.

On the session-slowdown issue: when ComfyUI just starts, the first image generation is always fast (1 minute at best), but the second generation and onward, with no changes to settings or parameters, is always slower, almost a minute slower. Restarting the app restores the speed.

Odds and ends: a few new rgthree-comfy nodes add fast reroutes, and mind your prompt tokens; "octane" might invoke "fast render" instead of "octane style." One newcomer is having trouble loading a custom SDXL Turbo model into ComfyUI (they link the model from civitai along with the result image and a workflow screenshot): do they have to use another workflow, and why are the images not rendered instantly, or why do they have these image issues? Another just wants help installing normal SD, not SDXL.
There are many anecdotes on this subreddit that ComfyUI is much faster than A1111 without much info to back them up, so here is why some of us ended up with ComfyUI. I think ComfyUI remains far more efficient at loading the model and refiner, so it can pump things out faster. On my rig it's about 50% faster, so I tend to mass-generate images in ComfyUI, then bring anything I need to fine-tune over to A1111 for inpainting and the like; A1111 is also useful when you want to quickly try something out, since you don't need to set up a workflow. But I'm getting better results in Comfy, based on my abilities or lack thereof; hope I didn't crush your dreams. Sometimes the gap turns out to be local: "Yeah, looks like it's just my Automatic1111 that has a problem; ComfyUI is working fast." (I also recently tried Fooocus and found it lacked customisation personally, though the in-painting is awesome and the midjourney-inspired defaults are appreciated.) Feature wishes come up here too: a slider for how many images to queue, and a checkbox that simply toggles "upscale" on and off.

File systems can matter as well: on the one hand, EXT is much faster for some operations; on the other, file corruption on NTFS has been basically nonexistent for decades. This is why WSL performance on the virtualized ext file system is dramatically better than on the NTFS file system for some apps.

Slowdowns get reported here too: on an RTX 2070 with 16 GB RAM, ComfyUI worked fine at first, but after a few generations it slowed from about 15 seconds per image to a minute and a half.

Finally, the perennial "same settings, different pictures" question: "(Composition) will be different between ComfyUI and A1111 due to various reasons." I used the same checkpoint, sampling method, prompt, and steps, but got completely different images from WebUI and ComfyUI, with different style and color, and I have tried many times. Two known reasons: the noise is generated differently (A1111 uses the GPU by default while ComfyUI uses the CPU, so the same seed produces different noise), and the weights are interpreted differently: (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111 (tested with CFG 8, 6, and 4). It also seems like ComfyUI is way too intense with heavier weights like (words:1.2) and just gives weird results.
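The weighting difference comes down to normalization: A1111 rescales the conditioning after applying emphasis so its overall magnitude stays close to the unweighted prompt, while ComfyUI applies the weight more directly. A toy numpy sketch of the two strategies; this illustrates the idea and is not either project's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.uniform(0.5, 1.5, size=(4, 8))        # toy token embeddings
weights = np.array([1.0, 1.4, 1.0, 1.0])[:, None]  # "(word:1.4)" on token 1

# ComfyUI-style (roughly): scale the emphasized tokens directly.
comfy = tokens * weights

# A1111-style (roughly): scale, then restore the original overall mean,
# which dilutes the emphasis relative to ComfyUI.
a1111 = tokens * weights
a1111 = a1111 * (tokens.mean() / a1111.mean())

print("comfy: emphasized token scaled by", (comfy[1] / tokens[1]).mean())  # 1.40
print("a1111: emphasized token scaled by", (a1111[1] / tokens[1]).mean())  # ~1.27, diluted
```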
Comfy doesn't really do "batch" modes, really, it just adds individual entries to the queue very quickly, so adding a batch of 10 images is exactly the same as clicking the "Queue Prompt" button 10 times. So far the images look pretty good except I'm sure they could be a lot thank you for your response. PSA: RealPLKSR is a new, FANTASTIC (and fast!) 4x upscaling architecture /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. 2) and just gives weird results. 5 models) to do the same for txt2img, just using a simple workflow. New comments cannot be posted. ComfyUI is still way faster on my system than Auto1111. please help me. That could easily be why things are going so fast, I'll have to test it out and see if that's an issue with generation quality. A few weeks ago I did a "spring-cleaning" on my PC and completely wiped my Anaconda environments, packages, etc. next is faster, but the results with the refiners are worse looking. 1) in ComfyUI is much stronger than (word:1. Only the LCM Sampler extension is needed, as shown in this video. I guess gpu would be faster, have no evidence, just a guess. When you build on top of software made by someone else, there are many ways to do it. When I first saw the Comfyui I was scared by so many options of what can be set. Results using it are practically always worse than nearly every other sampler available. Apparently, that is because of the errors logged at startup. But it is fast, for whatever that counts for. I accidentally tested ComphyUI for the first time about 20 min ago and noticed I clicked on the CPU bat file (my bad🤦♂️). CUI is also faster. There's a The Flux Q4_K_S just seems to be faster than the smaller Flux Q3_K_S, despite the latter being loaded completely. Doesn't negate discussing the reasons on why it is being implemented now, which was my point. The weights are also interpreted differently. But I'm getting better results - based on my abilities / lack thereof - Welcome to the unofficial ComfyUI subreddit. Hope I didn't crush your dreams. I think ComfyUI remains far more efficient in loading when it comes to model / refiner, so it can pump things out faster. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the Yeah, look like it's just my Automatic1111 that has a problem, CompfyUI is working fast. 1. 1) in A1111. Whether that applies to your case or not really depends on what you’re trying to do. I want a checkbox that says "upscale" or whatever that I can turn on and off. I find that much faster. Also it is useful when you want to quickly try something out since you don't need to set up a workflow. I tested with CFG 8, 6 and 4. 24K subscribers in the comfyui community. On the one hand, EXT is much faster for some operations, on the other, file corruption on NTFS is basically non existent and has been for decades. I think the noise is also generated differently where A1111 uses GPU by default and ComfyUI uses CPU by default, which makes - I have an RTX 2070 + 16GB Ram, and it seems like ComfyUI has been working fineBut today when generating images, after a few generations ComfyUI seems to slow down from about 15 seconds to generate an image to 1 minute and a half. ? Welcome to the unofficial ComfyUI subreddit. 
So is the summary accurate that A1111 is (a) faster and/or more resource-efficient for straightforward use, while ComfyUI is (b) more flexible and powerful for the deep-diving workflow crafters, the code nerds who make their own nodes, and the wonks? More or less, though "faster" depends on who you ask: ComfyUI also has faster startup and is better at handling VRAM, so you can generate larger images, or so the reports go. They are different tools, and A1111 takes a minute to load. One thing to know when comparing outputs: ComfyUI also uses xformers by default, which is non-deterministic.

For buyers: save up for an Nvidia card, and it doesn't have to be the 4090; some of the ones with 16 GB VRAM are pretty cheap now. For those who started on A1111 and mainly use ComfyUI on a home computer for generating images, the learning curve is real ("ComfyUI is the least user-friendly thing I've ever seen in my life"), but everything in AI is changing left and right, so a flexible approach is the best one, imho.