Stable Diffusion face refiner: notes collected from Reddit

The refiner only helps under certain conditions.

📷 7. I mainly use img2img to generate full-body portraits (think Magic: The Gathering cards), and targeting specific areas (inpainting) works great for clothing, details, and even hands if I specify the number of fingers. Make sure to set the inpaint area to "Only Masked".

Well, the faces here are mostly the same, but you're right, that is the way to go if you don't want to mess with ethnicity LoRAs.

Since the research release, the community has started to boost XL's capabilities.

Restore Faces tends to make the face look caked-on and washed out; in most cases it's more of a band-aid fix.

I'll do my second post on the face refinement and then apply that face to a matching body style. Ultimately you want to get to about 20-30 images of the face and a mix of body shots. It's an iterative process, unfortunately more iterative than a few images and done.

(Added Oct. 1, 2022) Web app: StableDiffusion-Img2Img (Hugging Face).

One of the weaknesses of Stable Diffusion is that it does not do faces well from a distance.

A list of helpful things to know: it just doesn't automatically refine the picture; the degree of refinement depends on the denoise strength.

Wait till 1.0, where hopefully it will be more optimized.

It may well have been causing the problem.

Is there a way to train Stable Diffusion on a particular person's face and then produce images with the trained face?

Stable Diffusion right now doesn't use a transformer backbone.

I assume you would have generated the preview for maybe every 100 steps.

I'm using roop, but the face turns out very bad (the photo is actually after my face-swap attempt).

This brings back memories of the first time I used Stable Diffusion myself.

After the refiner is done I feed it to a 1.5 model for the upscaling; the idea was to get some initial depth/latent image but end with another model.

As a tip: I use this process (excluding the refiner comparison) to get an overview of which sampler is best suited for my prompt, and also to refine the prompt. For example, if you look at the three consecutive starred samplers, the position of the hand and the cigarette is more like holding a pipe, which most certainly comes from the Sherlock Holmes part of the prompt.

Having this problem as well. Inpaint prompt: chubby male (action hero:1.…)

Small faces look bad, so upscaling does help.

I have been able to generate back views for the same character. It's likely that for a 360° view, once it's trying to show the other side of the character, you'll need to change the prompt to force the back, with keywords like "lateral view" and "((((back view))))". In my experience this is not super consistent; you need to find what works.

Same with SDXL: you can use any two SDXL models as the base model and refiner pair.

Where do you use Stable Diffusion online for free? Not having a powerful PC, I just rely on online services; here are mine.

In your case you could just as easily refine with SDXL instead of 1.5.
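As a rough illustration of the "generate with one model, then finish with another" idea mentioned above (SDXL base, then a 1.5 checkpoint at low denoise), here is a minimal sketch using the diffusers library. The model IDs, prompt, resolution, and the 0.3 strength are placeholder assumptions, not anyone's exact settings:

```python
# Sketch: SDXL txt2img, then a low-strength SD1.5 img2img pass so the second
# model only refines detail instead of repainting the composition.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait photo of a woman, detailed face, soft light"
image = base(prompt=prompt, num_inference_steps=30).images[0]

refiner_15 = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # any 1.5 checkpoint works
).to("cuda")

# Low strength keeps composition and identity; raise it and the 1.5 model takes over.
refined = refiner_15(prompt=prompt, image=image.resize((768, 768)),
                     strength=0.3, num_inference_steps=30).images[0]
refined.save("refined.png")
```

The same pattern works with any base/refiner pairing from the same model family, which is the point several commenters make above.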
Getting a single sample and using a lackluster prompt will almost always result in a terrible result, even with a lot of steps.

The hand color does not look very healthy; I think the seeding took pixels from the outfit.

Wait, does that mean that Stable Diffusion makes good hands but I don't know what good hands look like? Am I asking too much of Stable Diffusion?

You don't actually need to use the refiner.

What does the "refiner" do? I noticed a new function, "refiner", next to the "highres fix". What does it do, and how does it work? Thanks.

I'm new to SD.

What would be great is if I could generate 10 images, and each one inpaints a completely different face but keeps the pose, perspective, hair, etc. the same.

Master Consistent Character Faces with Stable Diffusion!

Each time I add "full body" as a positive prompt, the face of the character is usually deformed and ugly.

It can go even further with [start:end:switch] prompt editing.

I was expecting more.

Consistent character faces, designs, outfits, and the like are very difficult for Stable Diffusion, and those are open problems.

After a long night of trying hard with prompts and negative prompts and a swap through several models, Stable Diffusion generated a face that matches perfectly.

Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow.

The refiner also seems to follow positioning and placement prompts without Region controls.

It matters what model you are using for the refiner (hint: you don't HAVE to use Stability's refiner model, you can use any model from the same family as the base generation model - so, for example, an SD1.5 model as the "refiner" for an SD1.5 base).

I haven't had any of the issues you guys are talking about, but I always use Restore Faces on renders of people and they come out great, even without the refiner step.

This may help somewhat: go to Settings > Stable Diffusion, set "Maximum number of checkpoints loaded at the same time" to 2, and make sure "Only keep one model on device" is UNCHECKED.

Restarted, did another pull and update.

From my own experience in A1111 with several face-swap extensions, the speed depends on whether a GPU is used for the process and on the quality you need.

Using a workflow of a txt2img prompt/negative without the TI, and then adding the TI into ADetailer (with the same negative prompt), I get…
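The "keep two checkpoints loaded" tip above has a rough equivalent outside the webui. This is only a sketch of the trade-off, assuming the stock SDXL base/refiner pair as an example; it is not the A1111 setting itself:

```python
# If VRAM allows, keep base and refiner both resident on the GPU so the handoff
# doesn't pay a reload cost; otherwise let diffusers offload modules to CPU.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,  # share weights to save memory
    torch_dtype=torch.float16)

ENOUGH_VRAM = True
if ENOUGH_VRAM:
    base.to("cuda")
    refiner.to("cuda")                  # both stay resident: fastest handoff
else:
    base.enable_model_cpu_offload()     # slower, but fits smaller GPUs
    refiner.enable_model_cpu_offload()
```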
"We note that this step is optional, but improves sample quality."

I'm having to disable the refiner for anything with a human face as a result, but then I lose out on the other improvements it makes.

To recap what I deleted above: with one face in the source and two in the target, ReActor was changing both faces.

SDXL models on Civitai typically don't mention refiners, and a search for refiner models doesn't turn up much.

Been learning the ropes with Stable Diffusion, and I'm realizing faces are really hard.

The original prompt was supplied by sersun. Prompt: Ultra realistic photo, (queen elizabeth), young, stunning model, beautiful face, …

Good info, man.

This option zooms into the area and creates a really good face as a result, due to the high correlation between the canvas and the dataset.

The difference in titles - "SwarmUI is a new UI for Stable Diffusion" versus "Stable Diffusion releases new official UI with amazing features" - is HUGE, like the difference between a local notice board and a major newspaper publication.

The problem is I'm using a face from Artbreeder, and img2img ends up changing the face too much when applying a different style (e.g. impasto, oil painting, swirling brush strokes, etc.).

At that moment, I was able to just download a zip, type something in the webui, and then click Generate.

You can add things to the start of your prompt (like short hair, helmet, etc.) to refine the generation.

What happens is that SD has problems with faces.

I think I must be using Stable Diffusion too much.

I think if there's something with sliders like FaceGen but with a decent result…

I'm trying to figure out a workflow to use Stable Diffusion for style transfer, using a single reference image.

I initially tried using a large square image with a 3x3 arrangement of faces, but it would often read the lower rows of faces as the body for the upper row; spread out horizontally, all of the faces remain well separated without sacrificing too much resolution to empty padding.

These settings will keep both the refiner and the base model you are using in VRAM, increasing image generation speed drastically.

I started using one like you suggest, with a workflow based on Streamlit from Joe Penna that was 40 steps total: the first 35 on the base, the remaining noise to the refiner.

For example, I wonder if there is an opportunity to refine the faces and lip syncing in this video.

This isn't just a picky point - it's to underline that larding prompts with "photorealistic, ultrarealistic", etc. tends to make a generative AI image look _less_ like a photograph.

Possibly through splitting the Warp Diffusion clip back into frames, running the frames through your method, then recompiling into video.

2) Set the Refiner Upscale value and Denoise value.

The refiner very neatly follows the prompt and fixes that up.

I've tried changing the samplers, CFG, and the number of steps, but the results aren't coming out correctly.

Even the slightest bit of fantasy in there and even photo prompts start pushing a CGI-like finish.
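The "40 steps total, first 35 on the base, remaining noise to the refiner" handoff described above maps directly onto the denoising_end / denoising_start arguments in diffusers. A minimal sketch, assuming the stock SDXL base and refiner checkpoints:

```python
# Base handles the first 35/40 of the noise schedule; the refiner finishes the rest
# from the base's latents ("ensemble of expert denoisers" style handoff).
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16).to("cuda")

prompt = "studio portrait, detailed skin, natural light"
steps, handoff = 40, 35 / 40  # base covers the first 87.5% of the schedule

latents = base(prompt=prompt, num_inference_steps=steps,
               denoising_end=handoff, output_type="latent").images
image = refiner(prompt=prompt, image=latents, num_inference_steps=steps,
                denoising_start=handoff).images[0]
image.save("base_plus_refiner.png")
```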
Prompt: An old lady posing in a bra for a picture, making a fist, bodybuilder, (angry:1.4), (panties:1.…)

I do it to create the sources for my MXAI embeddings, and I probably only have to delete about 10% of my source images for not having the same face.

I've a few questions about fine-tuning Stable Diffusion XL.

Inpainting can fix this.

Within this workflow, you will define a combination of three components: the "Face Detector" for identifying faces within an image, the "Face Processor" for adjusting the detected faces, and …

I can make a decent basic workflow with the refiner alone, and one with face detail, but when I try to combine them I can't figure it out.

I've been having some good success with anime characters, so I wanted to share how I was doing things.

Automatic1111 Web UI - PC - Free: How To Do Stable Diffusion LoRA Training By Using Web UI On Different Models - Tested SD 1.5, SD 2.1.

That said, Stable Diffusion usually struggles with full-body images of people, but if you do above-the-hips portraits, it performs just fine.

Are there online Stable Diffusion sites that do img2img? I found a Hugging Face repo that works very well for text2img (https://huggingface.co/spaces/…).

The base model is perfectly capable of generating an image on its own.

Depends on the program you use, but with Automatic1111, on the inpainting tab, use inpaint with "only masked" selected.

The ControlNet SoftEdge is used to preserve the elements and shape; you can also use Lineart. 3) Set up the AnimateDiff refiner.

Use at least 512x512, make several generations, choose the best, and do face restoration if needed (GFP-GAN, but it overdoes the correction most of the time, so it is best to use layers in GIMP/Photoshop and blend the result with the original). I think some samplers from k-diffusion are also better than others at faces, but that might be a placebo/nocebo effect.

Among the models for faces, I found face_yolov8n, face_yolov8s, face_yolov8n_v2, and similar ones for hands.

I will first try out the newest SD.Next version, as it should have the newest diffusers and should be LoRA compatible for the first time.

Hands work too with it, but I prefer the MeshGraphormer Hand Refiner ControlNet.

I want to refine an image that has already been generated.

Craft your prompt.

If you're using the Automatic webui, try ComfyUI instead.

I had some mixed results putting the embedding name in parentheses with the 1girl token, and then another with the other celeb name.
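The face_yolov8 detectors and "only masked" inpainting mentioned in these notes can be combined outside the webui too. This is only a sketch under several assumptions: the face_yolov8n.pt weights are already downloaded locally, the test image contains at least one face, and the model IDs and strength value are placeholders:

```python
# ADetailer-style pass: detect the face, inpaint only that crop at full model
# resolution, then paste the result back into the original image.
import torch
from PIL import Image
from ultralytics import YOLO
from diffusers import StableDiffusionInpaintPipeline

det = YOLO("face_yolov8n.pt")                       # assumed local detector weights
result = det("full_body.png")[0]
x1, y1, x2, y2 = map(int, result.boxes.xyxy[0].tolist())  # first detected face box

image = Image.open("full_body.png").convert("RGB")
crop = image.crop((x1, y1, x2, y2)).resize((512, 512))    # work on the face at 512x512
mask = Image.new("L", (512, 512), 255)                    # white = repaint the whole crop

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")
fixed = pipe(prompt="detailed face, sharp eyes, natural skin texture",
             image=crop, mask_image=mask, strength=0.4,
             num_inference_steps=30).images[0]

image.paste(fixed.resize((x2 - x1, y2 - y1)), (x1, y1))
image.save("face_fixed.png")
```

Padding the detected box by 20-30% before cropping usually blends better, which is effectively what the "only masked padding" setting controls.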
The more upscaling you use, the lower you…

I like any Stable Diffusion-related project that's open source, but InvokeAI seems to be disconnected from the community and from how people are actually using SD. It's too bad, because there's an audience for an interface like theirs.

An example: you inpaint the face of the surprised person, and after 20 generations it is just right - now that's it.

At 0.6 denoise or above, or with too many steps, it becomes a more fully SD1.5 version, losing most of what the base generated.

I experimented a lot with the "normal quality", "worst quality" stuff people often use.

If you have a very small face, or multiple small faces in the image, you can get better results fixing faces after the upscaler. It takes a few seconds more, but gives much better results (v2.0 faces fix QUALITY; recommended if you have a good GPU).

Can anybody give me tips on the best way to do it, or what tools can help me refine the end result?

Honestly, I'm currently trying to fix bad hands using the face refiner, but it seems to be doing something bad.

AP Workflow v5.0 includes the following experimental functions: Free Lunch (v1 and v2). AI researchers have discovered an optimization for Stable Diffusion models that improves the quality of the generated images.

A 1.5 model in highres fix, with the denoise set at 0.2 or less, on "high-quality, high-resolution" images.

I was planning to do the same as you have already done 👍.

It seems that the refiner doesn't work outside the mask; it's clearly visible when the "return with leftover noise" flag is enabled - everything outside the mask is filled with noise and artifacts.

I think the ideal workflow is a bit debatable.

This is the best technique for getting consistent faces so far! Input image: John Wick 4. Input image: The Equalizer 3.

I can say that using ComfyUI with 6GB VRAM is not a problem for my friend's RTX 3060 laptop; the problem is the RAM usage - 24GB (16+8) of RAM is not enough. Base + refiner can only get to 1024x1024 before upscaling (edit: upscaling with KSampler).

I am on Automatic1111 1.5, all extensions updated. I have my VAE selection in the settings set to "Automatic".

Then I fed them to Stable Diffusion and kind of figured out what it sees when it studies a photo to learn a face, then went to Photoshop to take out anything it learned that I didn't like.

It works perfectly with only face images or half-body images.

Put the VAE in stable-diffusion-webui\models\VAE.

Try reducing the number of steps for the refiner.

People using utilities like Textual Inversion and DreamBooth have been able to solve the problem in narrow use cases, but to the best of my knowledge there isn't yet a reliable solution for making on-model characters without just straight-up hand-holding the AI.

Hey all, I've been really getting into Stable Diffusion lately, but since I don't have the hardware I'm using free online sites. I think I'm ready to upgrade to a better service, mostly for better resolutions, shorter wait times, and more options.

(Basically the same as Fooocus minus all the magic.) I'm wondering if I should use a refiner for it, and if so, which one; evidently I'm going for…

When I try to inpaint a face using the Pony Diffusion model, the image generates with glitches, as if it wasn't completely denoised.

In the 0.30-ish range it fits her face LoRA to the image without…

So I installed Stable Diffusion yesterday and added SD 1.5 and Protogen 2 as models. Everything works fine, I can access SD just fine, I can generate, but whatever I generate looks extremely bad - usually a blurry mess of colors that has nothing to do with the prompts, and I even added prompts to "fix" it, but nothing helps.

The example workflow has a base checkpoint and a refiner checkpoint; I think I understand how that's supposed to work.

When I inpaint a face, it gives me slight variations on the same face.

Auto Hand Refiner Workflow.

Simply ran the prompt in txt2img with SDXL 1.0 Base, moved it to img2img, removed the LoRA, and changed the checkpoint to SDXL 1.0 Refiner.

The issue with the refiner lies in its tendency to occasionally imbue the image with an overly "AI look" by adding an excessive amount of detail.
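Several comments above recommend fixing faces after the upscaler and blending the restored face with the original to avoid the over-corrected look. A hedged sketch of that order of operations using GFPGAN; the weights path is an assumption, and the 60/40 blend is an arbitrary example value:

```python
# Upscale first, then run a face restorer over the enlarged image, then blend
# the restored result back with the original to keep some natural skin texture.
import cv2
from gfpgan import GFPGANer

img = cv2.imread("upscaled.png")                      # BGR image, already upscaled

restorer = GFPGANer(model_path="GFPGANv1.4.pth",      # assumed local weights file
                    upscale=1, arch="clean", channel_multiplier=2, bg_upsampler=None)
_, _, restored = restorer.enhance(img, has_aligned=False,
                                  only_center_face=False, paste_back=True)

# 60% restored / 40% original, the "blend layers in GIMP/Photoshop" trick in code.
blended = cv2.addWeighted(restored, 0.6, img, 0.4, 0)
cv2.imwrite("upscaled_face_fixed.png", blended)
```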
Taking a good image with a poor face, then cropping into the face at an enlarged resolution of its own, generating a new face with more detail, then using an image editor to layer the new face onto the old photo, and using img2img again to combine them, is a very common and powerful practice.

How do I use the refiner model?

…face by (Yoji Shinkawa:1.2), well lit, illustration, beard, colored glasses…

The diffusion is a randomly seeded process and wants to do its own thing.

So far, whenever I use my character LoRA and wish to apply the refiner, I first mask the face and then have the model inpaint the rest.

I found a solution for me: use the command-line settings --no-half-vae --xformers (I removed the param --no-half). Also install the latest WebUI.

Stable Diffusion XL - Tips & Tricks - 1st Week.

However, that's pretty much the only place I'm actually seeing a refiner mentioned.

…cinematic photo, majestic and regal full-body profile portrait, sexy photo of a beautiful (curvy) woman with short light brown hair in (lolita outfit:1.…)

I came across the "Refiner extension" in the comments here, described as "the correct way to use the refiner with SDXL", but I am getting the exact same image whether I check it on or off, generating the same image seed a few times as a test.

So far, LoRAs only work for me if you run them on the base and not the refiner; the networks seem to have unique architectures that would require a LoRA trained just for the refiner. I may be mistaken though, so take this with a grain of salt.

If I prompt "person sitting on a chair" or "riding a horse" or whatever non-portrait, I receive nightmare fuel instead of a face; other details seem to be okay.

As he said, he did change other things.
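The crop-enlarge-regenerate-layer practice described at the top of this note can be sketched end to end with a 1.5 img2img pass and a feathered paste. The face box, model ID, prompt, and strength are all placeholder assumptions:

```python
# Crop the face, enlarge it to working resolution, regenerate it with img2img,
# then paste it back with a feathered mask so the seam blends into the photo.
import torch
from PIL import Image, ImageFilter
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

photo = Image.open("good_image_bad_face.png").convert("RGB")
box = (280, 60, 440, 220)                      # hypothetical face region (l, t, r, b)

face = photo.crop(box).resize((512, 512))      # enlarge the face to working resolution
new_face = pipe(prompt="detailed portrait, sharp focus, natural skin",
                image=face, strength=0.45, num_inference_steps=30).images[0]

# Feathered mask: fully opaque in the middle, soft at the edges of the crop.
mask = Image.new("L", (512, 512), 0)
mask.paste(255, (32, 32, 480, 480))
mask = mask.filter(ImageFilter.GaussianBlur(16))

new_face = new_face.resize((box[2] - box[0], box[3] - box[1]))
mask = mask.resize(new_face.size)
photo.paste(new_face, box[:2], mask)
photo.save("composited.png")
```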
For example, in FaceSwapLab you can use pre-inpainting, postprocessing with LDSR upscale, segment mask, color correction, face restore, and post-inpainting.

Generate the image using the main LoRA (the face will be somewhat similar but weird), then inpaint the face using the face LoRA.

Actually, I have trained Stable Diffusion on my own images and now want to create pics of me in different places, but SD is messing up the face, especially when I try to get a full-body image.

A1111 and ComfyUI are the two most popular web interfaces for Stable Diffusion.

This was already answered on Discord earlier, but I'll answer here as well so others passing through can know: 1: Select "None" in the install process when it asks what backend to install, then, once the main interface is open, go to …

For example, I generate an image with a cat standing on a couch.

Restore Faces only really works when the face is reasonably close to the "camera".

I'm already using all the prompt words I can find to avoid this, but…

For photorealistic NSFW, the gold standard is BigAsp, with Juggernaut v8 as the refiner, ADetailer on the face, lips, eyes, hands, and other exposed parts, and upscaling. Preferable to use a person and photography LoRA, as BigAsp…

That is colossal BS, don't get fooled.

Hello all, I'm just now getting into Stable Diffusion and generative AI for images.

Is anyone else experiencing this? What am I missing to make the refiner extension work?

Model: Anything v4.

Hello everyone. I use an anime model to generate my images, with the refiner function using a realistic model (at 0.5), which gives me super interesting results.

I had the same idea of retraining it with the refiner model and then loading the LoRA for the refiner model with the refiner-trained LoRA. If the problem still persists, I will do the refiner retraining.
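The "generate with the main LoRA, then inpaint the face with the face LoRA" recipe above can be approximated in diffusers. The LoRA filenames and mask are hypothetical placeholders, and this assumes the mask image matches the generated image's size:

```python
# Two-stage pass: style/character LoRA for the full image, face LoRA only for the
# masked face region during inpainting.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
txt2img.load_lora_weights("./loras/character_style.safetensors")   # main LoRA (assumed path)
image = txt2img("full body shot of the character, city street").images[0]

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")
inpaint.load_lora_weights("./loras/character_face.safetensors")    # face LoRA (assumed path)
face_mask = Image.open("face_mask.png").convert("L")               # white over the face

image = inpaint(prompt="portrait of the character, detailed face",
                image=image, mask_image=face_mask,
                strength=0.5, num_inference_steps=30).images[0]
image.save("lora_face_fixed.png")
```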
True for Midjourney, also true for Stable Diffusion (although there it can be affected by the way different LoRAs and checkpoints were trained).

Very nice.

Dear Stability AI, thank you so much for making the weights auto-approved.

Downloaded the SDXL 1.0 base, VAE, and refiner models.

…A regal queen of the stars, wearing a gown engulfed in vibrant flames, emanating both heat and light. Her golden locks cascade in large waves, adding an element of mesmerizing allure to her appearance; the atmosphere is enveloped in…

My overkill approach is to inpaint the full face/head/hair using FaceIDv2 (ideally with 3-4 source images) at around 0.8 denoise (details are pretty washed at this point, but likeness is great), then do another inpainting with FaceIDv2 at around 0.5-0.6 denoise, then a ReActor swap with GFPGAN at around 0.3-0.4 denoise to add back in subtle face/skin details.

It's not hidden in the Hires.fix tab or anything.

A face that looks photorealistic at, say, 512x512 gets these lines around all contrasting areas.

Hey, bit of a dumb issue, but I was hoping one of you might be able to help me.

The image is smoother than a nearest-neighbour type upscale (such as…).

From "SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis".

(Added Oct. 1, 2022) Web app: Stable Diffusion Multi Inpainting (Hugging Face) by multimodalart. "Inpaint Stable Diffusion by either drawing a mask or typing what to replace."

*PICK* (Added Oct. 1, 2022) Web app: stable-diffusion (Replicate) by cjwbw.

Lately I've been encountering the same problem frequently.

Here's a few I use. 1.5 embedding: Bad Prompt (make sure to rename it to "bad_prompt.pt" and place it in the "embeddings" folder).

The two keys to getting what you want out of Stable Diffusion are to find the right seed and to find the right prompt.

What most people do is generate an image until it looks great and then proclaim this was what they intended to do.

In my understanding, their implementation of the SDXL refiner isn't exactly as recommended by Stability AI, but if you are happy using just the base model (or you are happy with their approach to the refiner), you can use it today to generate SDXL images.

In my experiments, I've discovered that imperfections can be added manually in Photoshop, using tools like Liquify and painting in texture, and then run through img2img…

📷 8. Automatic1111 Web UI - PC - Free: How To Inject Your Trained Subject (e.g. Your Face) Into Any Custom Stable Diffusion Model By Web UI.

HandRefiner: Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting.

And don't forget the power of img2img.

It works OK with ADetailer, as it has the option to run Restore Faces after ADetailer has done its detailing, but many times it kind of does more damage to the face.

Hi everybody, I have generated this image with the following parameters: horror-themed, eerie, unsettling, dark, spooky, suspenseful, grim, highly…

I have very little experience here, but I trained a face with 12 photos using textual inversion and I'm floored with the results.

How to download the SDXL base and refiner models from Hugging Face to Google Colab using an access token.

My process is to get the face first, then the body.
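Negative embeddings like the "bad_prompt" one mentioned in these notes can also be loaded outside the webui's embeddings folder. A minimal sketch, assuming the .pt file has already been downloaded to a local path:

```python
# Load an A1111-style textual inversion embedding and trigger it from the
# negative prompt by its token name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./embeddings/bad_prompt.pt", token="bad_prompt")

image = pipe(prompt="photo of a woman laughing, detailed face",
             negative_prompt="bad_prompt, lowres",   # the token activates the embedding
             num_inference_steps=30).images[0]
image.save("with_negative_embedding.png")
```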
Just like Juggernaut started with Stable Diffusion 1.5, we're starting small, and I'll take you along for the entire journey. However, this also means that the beginning might be a bit rough ;) NSFW (nude, for example) is possible, but it's not yet recommended and can be prone to errors.

Use a value of around 1.7 in the Refiner Upscale to give a little room in the image to add details, and around 0.7 Denoise for best results.

A 1.5-model img2img pass, like Realistic Vision, can increase details, but it can also destroy faces, remove details, and produce a doll-face/plastic-face look.

Stable Diffusion is a model architecture (or a class of model architectures - there is SD1, SDXL, and others), and there are many applications that support it, as well as many different fine-tuned model checkpoints.

And after running the face refiner, I think ComfyUI should use the SDXL refiner on the face and hands, but how do I encode an image to feed it in as a latent?

To encode the image you need to use the "VAE Encode (for inpainting)" node, which is under latent -> inpaint.

You don't really need that much technical knowledge to use these.

So I can't figure out how to properly use the refiner in an inpainting workflow.

Put the base and refiner models in stable-diffusion-webui\models\Stable-diffusion.

It's "Upscaling > Hand Fix > Face Fix". If you upscale last, you partially destroy your fixes again.

I don't know about online face-swap services.

For anyone interested, I just added the preset styles from Fooocus into my Stable Diffusion Deluxe app at https://DiffusionDeluxe.com, with all the advanced extras made easy.

Transformers are the major building block that lets LLMs work. Visual transformers (for images, etc.) have proven their worth over the last year or so. Stable Diffusion 3 will use this new architecture.

I made custom faces in a game, then fed them to Artbreeder to make them look realistic, then bred them and bred them until they looked unique.

So the trick here is adding expressions to the prompt (with weighting between them), and I also found that it's better to use…

I'm not really a fan of that checkpoint, but a tip for creating a consistent face is to describe it and name the "character" in the prompt.

Far from perfect, but I got a couple of generations that looked right.

I didn't really try it (long story, was sick, etc.).

I'm using Automatic1111 and I run the initial prompt with SDXL, but the LoRA I made with SD1.5 of my wife's face works much better than the ones I've made with SDXL, so I enabled independent prompting (for highres fix and refiner) and use the 1.5 model for the upscaling, and it seems to make a decent difference.

Personally, it appears to me that Stable Diffusion 1.5 excels in texture and lighting realism compared to later Stable Diffusion models, although it struggles with hands. This speed factor is one reason I've mostly stuck with 1.5.

I am trying to find a solution. I don't think prompts alone are a good way, and I tried the ControlNet "face only" option too.

Recognition and adoption would be beyond one Reddit post - that would be a major AI trend for quite some time.

I have my Stable Diffusion UI set to look for updates whenever I boot it up. It hasn't caused me any problems so far, but after not using it for a while I booted it up and my "Restore Faces" add-on isn't there anymore.

For faces you can use FaceDetailer.

I have a built-in tiling upscaler and face restore in my workflow: https://civitai.com/models/119257/gtm-comfyui-workflows-including-sdxl-and-sd15

Try the SD.Next fork of the A1111 WebUI, by Vladmandic.

You can just use someone else's SDXL 0.9 workflow (just search YouTube for an SDXL 0.9 workflow; the one in Olivio Sarikas' video works just fine) and replace the models with the 1.0 ones.

I have updated the files I used in my tutorial videos below.
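The "how do I encode an image to feed it in as a latent?" question above corresponds, outside ComfyUI's VAE Encode node, to running the image through the pipeline's VAE. A sketch assuming a standard 1.5 checkpoint and a 512x512 input:

```python
# Encode an image to latents with the pipeline's VAE, then decode it back as a
# round-trip check. The latents could instead be fed to a sampler/refiner step.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from diffusers.image_processor import VaeImageProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

proc = VaeImageProcessor(vae_scale_factor=8)
pixels = proc.preprocess(Image.open("input.png").convert("RGB").resize((512, 512)))
pixels = pixels.to("cuda", dtype=torch.float16)

with torch.no_grad():
    latents = pipe.vae.encode(pixels).latent_dist.sample()
    latents = latents * pipe.vae.config.scaling_factor       # 0.18215 for SD1.5

    decoded = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample

proc.postprocess(decoded.cpu().float())[0].save("roundtrip.png")
```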
Should I train the refiner exactly as I trained the base model?

"Stable Diffusion looks too complicated."

The resulting image is good but not what I wanted, so next I want to tell the AI something like "make the cat more hairy".

Take your two models and do a weighted-sum merge in the Merge Checkpoints tab, creating checkpoints at 0.25, 0.5, and 0.75. Then test by running the prompt you are looking for (e.g. "dog with lake in the background") through an X/Y script with "Checkpoint name" listing your checkpoints; it should print out a nice picture showing the differences.

I need to regenerate or make a refinement.

On a 1.5 model, use a resolution of 512x512 or 768x768.

This simple thing made me a fan of Stable Diffusion. This simple thing also made that friend of mine a fan of Stable Diffusion.

"normal quality" in the negative certainly won't have that effect.

You can do a model merge for sure.

It seems pretty clear: prototype and experiment with Turbo to quickly explore a large number of compositions, then refine with 1.5 to achieve the final look.

What model are you using, and what resolution are you generating at? If you have a decent amount of VRAM, before you go to an img2img-based upscale like Ultimate SD Upscale, you can do a txt2img-based upscale by using ControlNet Tile or ControlNet Inpaint and regenerating your image at a higher resolution.

bad anatomy, disfigured, poorly drawn face, mutation, …

Just made this using EpicPhotoGasm, the negative embedding EpicPhotoGasm-colorfulPhoto-neg, and the more_details LoRA, with these settings: Prompt: a man looks close into the camera, detailed, detailed skin, mall in background, photo, epic, artistic, complex background, detailed, realistic, <lora:more_details:…>. Negative: EpicPhotoGasm-colorfulPhoto-neg.

Access that feature from the Prompt Helpers tab, then Styler, and Add to Prompts List.

No need to install anything. All online.

So: base -> refiner -> 1.5.
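The weighted-sum merge at 0.25 / 0.5 / 0.75 described above boils down to interpolating the two checkpoints' weights. A minimal sketch operating directly on safetensors state dicts (roughly what a merge tab does under the hood); the filenames are placeholders:

```python
# Weighted-sum merge of two checkpoints at several ratios.
import torch
from safetensors.torch import load_file, save_file

a = load_file("modelA.safetensors")
b = load_file("modelB.safetensors")

for alpha in (0.25, 0.5, 0.75):
    merged = {}
    for key, ta in a.items():
        tb = b.get(key)
        # Only interpolate tensors present in both models with matching shapes.
        if tb is not None and tb.shape == ta.shape:
            merged[key] = (1.0 - alpha) * ta.float() + alpha * tb.float()
        else:
            merged[key] = ta
    save_file(merged, f"merged_{int(alpha * 100)}.safetensors")
```

Generating the same prompt and seed against each merged checkpoint is then the script equivalent of the X/Y checkpoint comparison grid mentioned above.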