I made 3 RunPod Serverless images that run ComfyUI workflows directly. Now I need your help.

Hey everyone,

Like many of you, I'm a huge fan of ComfyUI's power, but getting my workflows running on a scalable, serverless backend like RunPod has always been a bit of a project. I wanted a simpler way to go from a finished workflow to a working API endpoint.

So, I built it. I've created three Docker images designed to run ComfyUI workflows on RunPod Serverless with minimal fuss.

The core idea is simple: You provide your ComfyUI workflow (as a JSON file), and the image automatically configures the API inputs for you. No more writing custom handler.py files every time you want to deploy a new workflow.
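
For anyone who hasn't called a RunPod Serverless endpoint before, the request side is just plain HTTPS. Here's a minimal Python sketch; the endpoint ID, API key, and the "prompt"/"seed" keys are placeholders, since the real input schema comes from whatever workflow JSON you bake into the image.

    import requests

    ENDPOINT_ID = "your-endpoint-id"    # placeholder
    API_KEY = "your-runpod-api-key"     # placeholder

    payload = {
        "input": {
            # Placeholder keys: the image derives the actual ones
            # from the inputs exposed by your workflow JSON.
            "prompt": "a cozy cabin in the woods, golden hour",
            "seed": 42,
        }
    }

    # /runsync blocks until the job finishes; use /run + /status for async jobs.
    resp = requests.post(
        f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=600,
    )
    resp.raise_for_status()
    print(resp.json())  # job output; the exact format depends on the image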

The Docker Images:

You can find the images and a full guide here:  link

This is where you come in.

These images are just the starting point. My real goal is to create a community space where we can build practical tools and tutorials for everyone. Right now, there are no formal tutorials—because I want to create what the community actually needs.

I've started a Discord server for this exact purpose. I'd love for you to join and help shape the future of this project. There's already a LoRA training guide there.

Join our Discord to:

Suggest which custom nodes I should bake into the next version of the images.
Tell me what tutorials you want to see. (e.g., "How to use this with AnimateDiff," "Optimizing costs on RunPod," "Best practices for XYZ workflow").
Get help setting up the images with your own workflows.
Share the cool things you're building!

This is a ground-floor opportunity to build a resource hub that we all wish we had when we started.

Discord Invite: https://discord.gg/dBU6U7Ve

https://redd.it/1o9ex20
@rStableDiffusion
Guys, do you know if there's a big difference between the RTX 5060 Ti 16GB and the RTX 5070 Ti 16GB for generating images?
https://redd.it/1o93b4k
@rStableDiffusion
Character Consistency is Still a Nightmare. What are your best LoRAs/methods for a persistent AI character?

Let’s talk about the biggest pain point in local SD: Character Consistency. I can get amazing single images, but generating a reliable, persistent character across different scenes and prompts is a constant struggle.

I've tried multiple character LoRAs, different embeddings, and even the --sref method, but the results are always slightly off. The face/vibe just isn't the same.

Is there any new workflow or dedicated tool you guys use to generate a consistent AI personality/companion that stays true to the source?

https://redd.it/1o9oo17
@rStableDiffusion
New Wan 2.2 distill model

I'm a little bit confused that no one has discussed or uploaded a test run of the new distill models.

My understanding is that this model is fine-tuned with lightx2v baked in, which means you don't need the lightx2v LoRA on the low-noise model when you use it.

But I don't know how the speed/results compare to the native fp8 or GGUF versions.

If you have any information or comparisons about this model, please share.

https://huggingface.co/lightx2v/Wan2.2-Distill-Models/tree/main

https://redd.it/1o9v767
@rStableDiffusion
About that WAN 2.2 T2V and "speed up" LoRAs.

I don't have big problems with I2V, but T2V? I'm lost. I have around 20 random speed-up LoRAs; some of them work, and some of them (rCM, for example) don't work at all. So here is my question: what exact setup of speed-up LoRAs do you use with T2V?

https://redd.it/1o9wyqj
@rStableDiffusion
Brie's Qwen Edit Lazy Repose workflow

Hey everyone~

I've released a new version of my Qwen Edit Lazy Repose. It does what it says on the tin.

The main new feature is the replacement of Qwen Edit 2509 with the All-in-One finetune. This simplifies the workflow a bit and also improves quality.

Take note that the first gen, which involves the model load, will take some time, because the LoRAs, VAE, and CLIP are all shoved in there. Once you get past the initial image, gen times are typical for Qwen Edit.

Get the workflow here:
https://civitai.com/models/1982115

The new AIO model is by the venerable Phr00t, found here:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/tree/main/v5

Note that there's both an SFW version and the other version.
The other version is very horny; even if your character is fully clothed, something may just slip out. Be warned.

Stay cheesy and have a good one!~


Here are some examples:


Frolicking about. Both pose and expression are transferred.

Works if the pose image is blank. Sometimes the props carry over too.

Works when the character image is on a blank background too.


All character images generated by me (of me)
All pose images yoinked from the venerable Digital Pastel, maker of the SmoothMix series of models, which I cherish.

https://redd.it/1o9zqer
@rStableDiffusion
Best way to iterate through many prompts in comfyui?
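
One common approach, assuming the workflow was exported with ComfyUI's "Save (API Format)" option: loop over a prompt list in a small script and POST each variant to the local /prompt endpoint. The node id "6" below is just an example; check your exported JSON for the id of your positive CLIPTextEncode node.

    import copy, json, requests

    COMFY_URL = "http://127.0.0.1:8188/prompt"

    # Workflow exported via "Save (API Format)" in ComfyUI.
    with open("workflow_api.json") as f:
        base = json.load(f)

    prompts = [
        "a lighthouse at dusk",
        "a cyberpunk alley in the rain",
        "a watercolor fox in a meadow",
    ]

    POSITIVE_NODE = "6"  # example id; look up your positive CLIPTextEncode node

    for i, text in enumerate(prompts):
        wf = copy.deepcopy(base)
        wf[POSITIVE_NODE]["inputs"]["text"] = text
        requests.post(COMFY_URL, json={"prompt": wf}).raise_for_status()
        print(f"queued {i + 1}/{len(prompts)}: {text}")
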
https://redd.it/1o9zlqq
@rStableDiffusion
Training a Qwen Image LoRA on a 3080 Ti in two and a half hours with OneTrainer.

With the latest update of OneTrainer I notice close to a 20% performance improvement when training Qwen Image LoRAs (from 6.90 s/it to 5 s/it).
Using a 3080 Ti (12 GB, 11.4 GB peak utilization), 30 images, 512 resolution, and batch size 2 (around 1400 steps at 5 s/it), it takes about two and a half hours to complete a training run.
I use the included 16 GB VRAM preset and change the layer offloading fraction to 0.64. I have 48 GB of 2.9 GHz DDR4 RAM; during training, total system RAM utilization stays just below 32 GB in Windows 11, and preparing for training goes up to 97 GB (including virtual memory). I'm still playing with the values, but in general I am happy with the results; I notice that with 40 images the LoRA maybe responds better to prompts.
I shared the specific numbers to show why I'm so surprised at the performance.
Thanks to the OneTrainer team; the level of optimisation is incredible.
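
As a back-of-the-envelope check on those numbers (simple arithmetic, not OneTrainer output):

    steps = 1400
    sec_per_it = 5.0               # after the update (was ~6.90 s/it)

    train_hours = steps * sec_per_it / 3600
    print(f"pure stepping time: {train_hours:.2f} h")   # ~1.94 h

    # The remaining ~0.5 h of the reported 2.5 h is setup and overhead:
    # caching, layer offloading (fraction 0.64 on the 16 GB VRAM preset), etc.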


https://redd.it/1oa1wp3
@rStableDiffusion
Wan 2.2 I2V Quality Tip (For Noobs)

Lots of new users out there, so I'm not sure if everyone already knows this (I just started with Wan myself), but I thought I'd share a tip.

If you're using a high-resolution image for your input, don't downscale it to match the resolution you're going for before running Wan. Just leave it as-is and let Wan do the downscale on its own. I've discovered that you'll get much better quality. There is a slight trade-off in speed - I don't know if it's doing some extra processing or whatever - but it only puts a "few" extra seconds on the clock for me. I'm running an RTX 3090 Ti, though, so I'm not sure how that would affect smaller cards. But it's worth it.

Otherwise, if you want some speed gains, downscale the image to the target resolution and it should run faster, at least in my tests.
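
If you do take the speed path, the pre-downscale is just an ordinary resize before the image enters the workflow (or an equivalent resize node in ComfyUI). A minimal Pillow sketch; the 832x480 target is only an example and should match whatever resolution your Wan 2.2 I2V workflow is set to:

    from PIL import Image

    img = Image.open("input.png")
    target = (832, 480)  # example; match your workflow's output resolution

    if img.size != target:
        img = img.resize(target, Image.LANCZOS)  # high-quality downscale
    img.save("input_downscaled.png")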

Also, increasing steps on the speed LoRAs can boost quality too. When I started, I thought 4-step meant only 4 steps, but I regularly use 8 steps and get noticeable quality gains, with only a little sacrifice in speed. 8-10 seems to be the sweet spot. Again, it's worth it.

https://redd.it/1o9zcyj
@rStableDiffusion