tgoop.com/rStableDiffusion/55802
Why are we still training LoRA and not moved to DoRA as a standard?
Just wondering, this has been a head-scratcher for me for a while.
Everywhere I look, DoRA is claimed to be superior to LoRA in seemingly every aspect, and it doesn't require more power or resources to train.
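For anyone unfamiliar with what the difference actually is: DoRA (Liu et al., 2024) keeps LoRA's low-rank update but splits the merged weight into a learned per-column magnitude and a normalized direction. Here's a minimal numpy sketch of both parameterizations — this is just an illustration of the math, not any trainer's actual implementation, and the dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2          # toy layer sizes and rank (illustrative only)

W0 = rng.standard_normal((d_out, d_in))   # frozen pretrained weight

# LoRA: additive low-rank update, W = W0 + B @ A
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                  # B starts at zero, so W == W0 at init
W_lora = W0 + B @ A

# DoRA: magnitude-direction decomposition,
#   W = m * (W0 + B @ A) / ||W0 + B @ A||_c   (column-wise L2 norm)
# m is a trainable per-column magnitude, initialized from ||W0||_c.
V = W0 + B @ A
m = np.linalg.norm(W0, axis=0, keepdims=True)
W_dora = m * V / np.linalg.norm(V, axis=0, keepdims=True)

# Both reduce exactly to W0 at initialization; the extra trainable state in
# DoRA is just the magnitude vector m (one scalar per column).
print(np.allclose(W_lora, W0), np.allclose(W_dora, W0))  # True True
```

The extra cost is one vector per adapted layer plus a column normalization, which is why the "no extra resources" claim is roughly right for inference, though the normalization does add some overhead during training.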
I googled DoRA training for newer models - Wan, Qwen, etc. Didn't find anything, except a reddit post from a year ago asking pretty much exactly what I'm asking here today lol. And every comment seems to agree DoRA is superior. And Comfy has supported DoRA now for a long time.
Yet, here we are - still training LoRAs when there's been a better option for years? This community is usually quick to adopt the latest and greatest, so it's odd this slipped through. I use diffusion-pipe to train pretty much everything now, and I'm curious whether there's a way I could train DoRAs with that - or if there's some other method out there right now that's capable of training a Wan DoRA.
Thanks for any insight, and curious to hear others' opinions on this.
https://redd.it/1o5t7z0
@rStableDiffusion
BY r/StableDiffusion
