Hunyuan 3.0 second attempt: 6-minute render on an RTX 6000 Pro (update)
https://redd.it/1o5o3ka
@rStableDiffusion
Discord Server With Active LoRA Training Community?
I'm looking for a place where you can discuss techniques and best practices/models, etc. All of the servers I'm on currently are pretty dormant. Thanks!
https://redd.it/1o5nmt9
@rStableDiffusion
Coloured line art using Qwen-Edit and animated it using Wan 2.5
https://redd.it/1o5ptwe
@rStableDiffusion
Why are we still training LoRA and not moved to DoRA as a standard?
Just wondering; this has been a head-scratcher for me for a while.
Everywhere I look claims DoRA is superior to LoRA in what seems like all aspects, and it doesn't require more power or resources to train.
I googled DoRA training for newer models - Wan, Qwen, etc. - and didn't find anything, except a Reddit post from a year ago asking pretty much exactly what I'm asking here today, lol. Every comment there seems to agree DoRA is superior. And Comfy has supported DoRA for a long time now.
Yet here we are, still training LoRAs when there's been a better option for years? This community is usually quick to adopt the latest and greatest, so it's odd this slipped through. I use diffusion-pipe to train pretty much everything now, and I'm curious whether there's a way to train DoRAs with it, or whether there's a different method out there right now that can train a Wan DoRA.
Thanks for any insight, and curious to hear others' opinions on this.
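For anyone unsure what the actual difference is: DoRA keeps the same low-rank matrices as LoRA but splits the weight into a magnitude vector and a normalized direction, putting the low-rank update into the direction only. A minimal NumPy sketch of that reparameterization (schematic only, not any trainer's actual code; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2

W0 = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # LoRA "down" matrix (trainable)
B = np.zeros((d_out, r))               # LoRA "up" matrix, zero-initialized

# LoRA: add the low-rank update directly
W_lora = W0 + B @ A

# DoRA: split the weight into a magnitude vector m and a direction V;
# the low-rank update goes into the direction, and m is trained separately
m = np.linalg.norm(W0, axis=0, keepdims=True)  # per-column magnitude, init from W0
V = W0 + B @ A
W_dora = m * V / np.linalg.norm(V, axis=0, keepdims=True)

# With B zero-initialized, both variants start exactly at the pretrained weight
assert np.allclose(W_lora, W0)
assert np.allclose(W_dora, W0)
```

For what it's worth, Hugging Face PEFT exposes this as `use_dora=True` on `LoraConfig`; whether a given video-model trainer such as diffusion-pipe wires that through is a separate question.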
https://redd.it/1o5t7z0
@rStableDiffusion
Chroma on the rise?
I've lowkey seen quite a few LoRAs dropped for Chroma lately, which makes it look really good - like on par with Wan t2i or Flux. Was wondering if anyone else has noticed the same trend, or if some of you have switched to Chroma entirely?
https://redd.it/1o5r8dy
@rStableDiffusion
Qwen Image Edit 2509 degrading image quality?
Does anyone else find that it slightly degrades the character's photo quality in the output? I tried upscaling 2x, and it's slightly better when viewed up close.
Background: I'm a cosplay photographer trying to edit the character into some special scenes, but the character's face in the output is usually a bit too pixelated.
https://redd.it/1o60quo
@rStableDiffusion
Issue Training a LoRA Locally
For starters, I'm really just trying to test this. I have a dataset of 10 pictures and text files, all in the correct format - same aspect ratio, size, etc.
I am using this workflow and following this tutorial.
Currently, using all of the EXACT models linked in this video gives me the following error: "InitFluxLoRATraining... Cannot copy out of meta tensor, no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device"
I've messed around with the settings and cannot get past this. When I talked with ChatGPT/Gemini, they first suggested this could be related to an OOM error. I have a 16 GB VRAM card and don't see my GPU peak over 1.4 GB before the workflow errors out, so I am pretty confident this is not an OOM error.
Is anyone familiar with this error who can give me a hand?
I'm really just looking for a simple, easy, no-B.S. way to train a Flux LoRA locally. I would happily abandon this workflow if there were another, more streamlined one that gave good results.
Any and all help is greatly appreciated!
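For context, that error is PyTorch itself: a module still lives on the "meta" device (shapes only, no data) when something tries to move it with `.to()`. A minimal standalone reproduction and the fix the message suggests (none of this is FluxTrainer's actual code, just the underlying PyTorch mechanism):

```python
import torch
import torch.nn as nn

# Modules built under the "meta" device have shapes and dtypes but no storage
with torch.device("meta"):
    layer = nn.Linear(4, 4)

# layer.to("cpu") would raise: "Cannot copy out of meta tensor, no data!"
# to_empty() instead allocates real but UNINITIALIZED storage on the target
layer = layer.to_empty(device="cpu")

# values are garbage until re-initialized or loaded from a checkpoint
layer.reset_parameters()

assert layer.weight.device.type == "cpu"
```

In a training workflow this usually means a weight file failed to load (wrong path or incompatible checkpoint), leaving the model stuck on meta, so double-checking the exact model files is worth doing before touching settings.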
https://redd.it/1o62wec
@rStableDiffusion
GitHub
ComfyUI-FluxTrainer/example_workflows at main · kijai/ComfyUI-FluxTrainer
How To Fix AI Skin?
What are some sites or tools for fixing AI-looking skin?
I know of Enhancor and Pykaso but haven't tried them yet because neither offers a free trial.
https://redd.it/1o669ej
@rStableDiffusion
New Wan 2.2 I2V Lightx2v LoRAs just dropped!
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/tree/main/loras
https://redd.it/1o67ntj
@rStableDiffusion
VACE 2.2 dual model workflow - Character swapping
https://www.youtube.com/watch?v=MHQDiSgu1VM
https://redd.it/1o697kx
@rStableDiffusion
YouTube
VACE 2.2 - Part 3 - Character Swapping & Replacing Objects
This uses VACE 2.2 in a Wan 2.2 dual-model workflow in ComfyUI to swap out characters or replace objects in a video using targeted masking.
In the previous video I showed VACE inpainting methods, but they were not so good for swapping out characters or…
70 minutes of DnB mixed over an AI art video I put together
https://youtu.be/x6GNt-g1HJo?si=cqeo29YRbsjwepgO
https://redd.it/1o6b9s5
@rStableDiffusion
YouTube
2026 New Year's Mix - Dark, energetic DnB audio/visual set
Music mixed by me and visuals created with Neural Frames
I do not own the copyright to the original music.
I just started producing music this year, and look forward to sharing more of my mixes, and one day original songs as I go on this adventure.
Re-uploaded…