Still paying full price for Google AI?
Get Google Gemini Pro AI + Veo 3 + 2TB Cloud Storage at 90% DISCOUNT🔖 (Limited offer)
Get it from HERE
https://redd.it/1o9a9il
@rStableDiffusion
Introducing ScreenDiffusion v01 — Real-Time img2img Tool Is Now Free And Open Source
https://redd.it/1o99lt6
@rStableDiffusion
Train a Qwen Image Edit 2509 LoRA with AI Toolkit - Under 10GB VRAM
Ostris recently posted a video tutorial on his channel showing that it's possible to train a LoRA that can accurately put any design on anyone's shirt. Peak VRAM usage never exceeds 10 GB.
https://youtu.be/d49mCFZTHsg?si=UDDOyaWdtLKc_-jS
https://redd.it/1o9a4uw
@rStableDiffusion
I made 3 RunPod Serverless images that run ComfyUI workflows directly. Now I need your help.
Hey everyone,
Like many of you, I'm a huge fan of ComfyUI's power, but getting my workflows running on a scalable, serverless backend like RunPod has always been a bit of a project. I wanted a simpler way to go from a finished workflow to a working API endpoint.
So, I built it. I've created three Docker images designed to run ComfyUI workflows on RunPod Serverless with minimal fuss.
The core idea is simple: You provide your ComfyUI workflow (as a JSON file), and the image automatically configures the API inputs for you. No more writing custom handler.py files every time you want to deploy a new workflow.
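For a concrete picture, here's a minimal sketch of what calling such an endpoint from Python could look like once it's deployed, using RunPod's standard serverless /runsync API. The "workflow" input field name and the output shape are assumptions on my part; check the image's guide for the actual input schema.

# Hypothetical client-side call to a ComfyUI serverless endpoint on RunPod.
import json
import requests

RUNPOD_API_KEY = "YOUR_API_KEY"    # from the RunPod console
ENDPOINT_ID = "YOUR_ENDPOINT_ID"   # the serverless endpoint running one of these images

# Load a workflow exported from ComfyUI in API format.
with open("my_workflow_api.json") as f:
    workflow = json.load(f)

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",   # synchronous variant; /run is async
    headers={"Authorization": f"Bearer {RUNPOD_API_KEY}"},
    json={"input": {"workflow": workflow}},               # field name assumed, not confirmed
    timeout=600,
)
resp.raise_for_status()
print(resp.json())   # job status and outputs as returned by the handler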
The Docker Images:
You can find the images and a full guide here: link
This is where you come in.
These images are just the starting point. My real goal is to create a community space where we can build practical tools and tutorials for everyone. Right now, there are no formal tutorials—because I want to create what the community actually needs.
I've started a Discord server for this exact purpose. I'd love for you to join and help shape the future of this project. There's already a LoRA training guide on it.
Join our Discord to:
Suggest which custom nodes I should bake into the next version of the images.
Tell me what tutorials you want to see. (e.g., "How to use this with AnimateDiff," "Optimizing costs on RunPod," "Best practices for XYZ workflow").
Get help setting up the images with your own workflows.
Share the cool things you're building!
This is a ground-floor opportunity to build a resource hub that we all wish we had when we started.
Discord Invite: https://discord.gg/dBU6U7Ve
https://redd.it/1o9ex20
@rStableDiffusion
Guys, do you know if there's a big difference between the RTX 5060 Ti 16GB and the RTX 5070 Ti 16GB for generating images?
https://redd.it/1o93b4k
@rStableDiffusion
Character Consistency is Still a Nightmare. What are your best LoRAs/methods for a persistent AI character?
Let’s talk about the biggest pain point in local SD: Character Consistency. I can get amazing single images, but generating a reliable, persistent character across different scenes and prompts is a constant struggle.
I've tried multiple character LoRAs, different Embeddings, and even used the --sref method, but the results are always slightly off. The face/vibe just isn't the same.
Is there any new workflow or dedicated tool you guys use to generate a consistent AI personality/companion that stays true to the source?
https://redd.it/1o9oo17
@rStableDiffusion
Change Image Style With Qwen Edit 2509 + Qwen Image + FSampler + LoRA
https://youtu.be/_XOV4KMxdug
https://redd.it/1o9sil7
@rStableDiffusion
Perplexity AI PRO - 1 YEAR at 90% Discount – Don’t Miss Out!
https://redd.it/1o9vghr
@rStableDiffusion
New Wan 2.2 distill model
I'm a little bit confused why no one has discussed or uploaded a test run of the new distill models.
My understanding is that this model is fine-tuned and has lightx2v baked in, which means that when you use it you do not need a lightx2v LoRA on the low-noise model.
But I don't know how the speed/results compare to the native fp8 or GGUF versions.
If you have any information or comparisons about this model, please share.
https://huggingface.co/lightx2v/Wan2.2-Distill-Models/tree/main
https://redd.it/1o9v767
@rStableDiffusion
About the Wan 2.2 T2V and "speed up" LoRAs.
I don't have big problems with I2V, but T2V? I'm lost. I have around 20 random speed-up LoRAs; some of them work, and some of them (rCM, for example) don't work at all. So here is my question: what exact setup of speed-up LoRAs do you use with T2V?
https://redd.it/1o9wyqj
@rStableDiffusion
Brie's Qwen Edit Lazy Repose workflow
Hey everyone~
I've released a new version of my Qwen Edit Lazy Repose. It does what it says on the tin.
The main new feature is the replacement of Qwen Edit 2509 with the All-in-One finetune. This simplifies the workflow a bit and also improves quality.
Take note that the first gen, which involves the model load, will take some time, because the LoRAs, VAE and CLIP are all shoved in there. Once you get past the initial image, the gen times are typical for Qwen Edit.
Get the workflow here:
https://civitai.com/models/1982115
The new AIO model is by the venerable Phr00t, found here:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/tree/main/v5
Note that there are both an SFW version and the other version.
The other version is very horny; even if your character is fully clothed, something may just slip out. Be warned.
Stay cheesy and have a good one!~
Here are some examples:
Frolicking about. Both pose and expression are transferred.
Works if the pose image is blank. Sometimes the props carry over too.
Works when the character image is on a blank background too.
All character images generated by me (of me)
All pose images yoinked from the venerable Digital Pastel, maker of the SmoothMix series of models, which I cherish.
https://redd.it/1o9zqer
@rStableDiffusion
Training a Qwen Image LoRA on a 3080 Ti in two and a half hours with OneTrainer
With the latest update of OneTrainer I notice close to a 20% performance improvement when training Qwen Image LoRAs (from 6.90 s/it to 5 s/it).
Using a 3080 Ti (12 GB, 11.4 GB peak utilization), 30 images, 512 resolution and batch size 2 (around 1400 steps at 5 s/it), it takes about two and a half hours to complete a training run.
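As a quick sanity check on those numbers (plain Python, nothing OneTrainer-specific assumed):

# Back-of-the-envelope estimate from the figures above.
steps = 1400        # ~1400 training steps
sec_per_it = 5.0    # ~5 s/it after the update

train_seconds = steps * sec_per_it
print(train_seconds / 3600)   # ~1.94 hours of pure training

# The ~2.5 hour total leaves roughly half an hour for dataset preparation,
# caching and model loading.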
I use the included 16 GB VRAM preset and change the layer offloading fraction to 0.64. I have 48 GB of 2.9 GHz DDR4 RAM; during training, total system RAM utilization is just below 32 GB in Windows 11, and preparing for training goes up to 97 GB (including virtual memory). I'm still playing with the values, but in general I am happy with the results. I notice that maybe with 40 images the LoRA responds better to prompts.
I shared specific numbers to show why I'm so surprised at the performance.
Thanks to the OneTrainer team; the level of optimisation is incredible.
https://redd.it/1oa1wp3
@rStableDiffusion
Wan 2.2 i2V Quality Tip (For Noobs)
Lots of new users out there, so I'm not sure if everyone already knows this (I just started with Wan myself), but I thought I'd share a tip.
If you're using a high-resolution image for your input, don't downscale it to match the resolution you're going for before running Wan. Just leave it as-is and let Wan do the downscale on its own. I've found that you get much better quality this way. There is a slight trade-off in speed (I don't know if it's doing some extra processing or whatever), but it only puts a few extra seconds on the clock for me. I'm running an RTX 3090 Ti, though, so I'm not sure how that would affect smaller cards. It's worth it.
Otherwise, if you want some speed gains, downscale the image to the target resolution first and it should run faster, at least in my tests (see the sketch below).
Also, increasing steps on the speed LoRAs can boost quality too, with just a little sacrifice in speed. When I started, I thought 4-step meant only 4 steps, but I regularly use 8 steps and get noticeable quality gains with only a little sacrifice in speed. 8-10 seems to be the sweet spot. Again, it's worth it.
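For the speed path, a minimal sketch of pre-downscaling the input with Pillow before feeding it to the workflow (the file names and the 832x480 target are just placeholders):

# Optional pre-downscale for speed; skip this and feed the original image
# if you want the better-quality path described above.
from PIL import Image

TARGET_W, TARGET_H = 832, 480   # example Wan target resolution (placeholder)

img = Image.open("input.png")
if img.width > TARGET_W or img.height > TARGET_H:
    # thumbnail() keeps the aspect ratio and only ever shrinks the image.
    img.thumbnail((TARGET_W, TARGET_H), Image.LANCZOS)
img.save("input_downscaled.png")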
https://redd.it/1o9zcyj
@rStableDiffusion
Open-source release! Face-to-Photo Transform ordinary face photos into stunning portraits.
Built on Qwen-Image-Edit, the Face-to-Photo model excels at precise facial detail restoration. Unlike previous models (e.g., InfiniteYou), it captures fine-grained facial features across angles, sizes, and positions, producing natural, aesthetically pleasing portraits.
Model download: https://modelscope.cn/models/DiffSynth-Studio/Qwen-Image-Edit-F2P
Try it online: https://modelscope.cn/aigc/imageGeneration?tab=advanced&imageId=17008179
Inference code: https://github.com/modelscope/DiffSynth-Studio/blob/main/examples/qwen_image/model_inference/Qwen-Image-Edit.py
Can be used in ComfyUI easily with the qwen-image-edit v1 model
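If you'd rather pull the weights locally than use the online demo, a minimal sketch assuming the modelscope Python package's snapshot_download helper (the cache directory is just an example):

# Download the Face-to-Photo weights from ModelScope for local use.
from modelscope import snapshot_download

local_dir = snapshot_download(
    "DiffSynth-Studio/Qwen-Image-Edit-F2P",
    cache_dir="./models/qwen-image-edit-f2p",   # example location
)
print("Model files downloaded to:", local_dir)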
https://preview.redd.it/4l8vnu4gawvf1.jpg?width=1398&format=pjpg&auto=webp&s=27d80eff424cf8ced9153f641896da1fbb573d2b
https://preview.redd.it/76ai6q4gawvf1.jpg?width=1398&format=pjpg&auto=webp&s=b895f8dfc16aa0dbf437d5de0b58193e64c1c570
https://preview.redd.it/dyg1gf2gawvf1.jpg?width=1398&format=pjpg&auto=webp&s=b01a227a115881a5ef7d1886dccd290d6c52287b
https://preview.redd.it/kcf67h2gawvf1.jpg?width=2592&format=pjpg&auto=webp&s=1de1b763c6ac0486e8e9a43214193bdd89d22914
https://preview.redd.it/5cpzbi2gawvf1.png?width=2216&format=png&auto=webp&s=1dae933989e8bd1086a895e0b187866dc5231547
https://redd.it/1o9zxe2
@rStableDiffusion
