Wan2.2 I2V - 2 vs 3 Ksamplers - questions on steps & samplers

I'm currently testing different workflows with 2 vs. 3 KSamplers for Wan2.2 I2V and wanted to ask about your experiences and share my own settings!

3 KSamplers (HN without Lightning, then HN/LN with Lightning at strength 1) seems to give me the best output quality, BUT it also seems to change the likeness of the subject from the input image a lot over the course of the video (often even immediately after the first frame).

On 3 KS I am using 12 total steps: 4 on HN1, 4 on HN2 and 4 on LN. Euler + simple worked best for me there. Maybe more LN steps would be better? Not tested yet!
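In case it helps to compare settings, here is roughly how that 4/4/4 split maps onto the three KSampler (Advanced) nodes in my workflow. This is just a sketch using the stock node's field names (CFG and model/LoRA wiring left out); your workflow may be wired differently:

```python
# Rough sketch of my 3-KSampler step split (12 total steps), written as
# KSamplerAdvanced-style settings. Only the step ranges and sampler/scheduler
# come from my post above; the add_noise / leftover-noise flags follow the
# usual chained-KSamplerAdvanced convention and may differ in your setup.
three_ks = {
    "HN1_no_lightning": {
        "steps": 12, "start_at_step": 0, "end_at_step": 4,
        "add_noise": "enable", "return_with_leftover_noise": "enable",
        "sampler_name": "euler", "scheduler": "simple",
    },
    "HN2_lightning": {
        "steps": 12, "start_at_step": 4, "end_at_step": 8,
        "add_noise": "disable", "return_with_leftover_noise": "enable",
        "sampler_name": "euler", "scheduler": "simple",
    },
    "LN_lightning": {
        "steps": 12, "start_at_step": 8, "end_at_step": 12,
        "add_noise": "disable", "return_with_leftover_noise": "disable",
        "sampler_name": "euler", "scheduler": "simple",
    },
}
```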

2 KSamplers (HN/LN, both with Lightning at strength 1) gives faster generation at generally slightly worse quality than 3 KSamplers, but the likeness of the input image stays MUCH more consistent for me. On the other hand, outputs can be hit or miss depending on the input (e.g. weird colors, unnatural stains on human skin, slight deformations, etc.).

On 2 KS I am using 10 total steps: 4 on HN and 6 on LN. LCM + sgm_uniform worked best for me here; more steps with other samplers (like Euler simple/beta) often produced a generally better video, but then screwed up some anatomical detail, which made it weird :D
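And the equivalent sketch for the 2-KSampler split, with the same caveats (step ranges and sampler/scheduler are mine, the rest is the usual convention):

```python
# Rough sketch of my 2-KSampler step split (10 total steps, LCM + sgm_uniform),
# again only KSamplerAdvanced-style field names, not a full workflow.
two_ks = {
    "HN_lightning": {
        "steps": 10, "start_at_step": 0, "end_at_step": 4,
        "add_noise": "enable", "return_with_leftover_noise": "enable",
        "sampler_name": "lcm", "scheduler": "sgm_uniform",
    },
    "LN_lightning": {
        "steps": 10, "start_at_step": 4, "end_at_step": 10,
        "add_noise": "disable", "return_with_leftover_noise": "disable",
        "sampler_name": "lcm", "scheduler": "sgm_uniform",
    },
}
```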

Happy about any step & sampler combination you can recommend for me to try. I mostly work with human subjects, both SFW and not, so skin detail is important to me. The subjects are my own creations (SDXL, Flux Kontext, etc.), so using a character LoRA to get around the likeness issue with the 3 KS option is not ideal (unless I wanted to create a LoRA for each of my characters, which... I'm not there yet :D).

I wanted to try working without Lightning because I heard it impacts quality a lot, but I could not find a proper setting on either 2 or 3 KS, and the long generation times make proper testing rough for me. 20 to 30 steps still gives blurry/hazy videos; maybe I need way more? I wouldn't mind the long generation time for videos that are important to me.

I also want to try the WanMoE KSampler since I've heard a lot of great things about it, but I haven't gotten around to building a workflow for it yet. Maybe that's my solution?

I generally generate at 720x1280, and I also scale most input images to 720x1280 beforehand. When using bigger images as input, I sometimes got WAY better outputs in terms of detail (skin detail especially), but sometimes worse. So I'm not sure if it really factors in? Maybe some of you have experience with this.
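For what it's worth, this is roughly how I pre-scale inputs before feeding them in. Just a minimal Pillow sketch with a made-up filename; it does an aspect-preserving center crop to 720x1280 rather than a plain stretch:

```python
# Minimal sketch: fit an input image to 720x1280 before I2V.
# "input.png" / "input_720x1280.png" are placeholder paths.
# ImageOps.fit keeps the aspect ratio by center-cropping instead of stretching.
from PIL import Image, ImageOps

img = Image.open("input.png")
img = ImageOps.fit(img, (720, 1280), method=Image.Resampling.LANCZOS)
img.save("input_720x1280.png")
```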

Generating at 480p and then upscaling did not work great for me. Especially in terms of skin detail, I feel like 480p leaves out a lot and upscaling does not really bring it back (I have not tested SeedVR yet, but I want to).

https://redd.it/1o3w5r0
@rStableDiffusion
Turned my dog in a pumpkin costume
https://redd.it/1o3zvux
@rStableDiffusion
What should I do with 20 unused GPUs (RTX 3060 Ti + one 3090 Ti)?
https://redd.it/1o45723
@rStableDiffusion
VNCCS - Visual Novel Character Creation Suite V1.1.0 just released!
https://redd.it/1o46p5l
@rStableDiffusion
Cancel a ComfyUI run instantly with this custom node.

One big issue with ComfyUI is that when you try to cancel a run, it doesn't stop right away; you have to wait for the current step to finish first. That means that when working with WAN videos, it might take several minutes before the run actually cancels.

Fortunately, I found a custom node that fixes this and stops the process instantly:

https://gist.github.com/blepping/99aeb38d7b26a4dbbbbd5034dca8aca8

- Download the ZIP file
- Place the comfyui_fast_terminate.py script in ComfyUI\custom_nodes
- You'll then have a custom node named ModelPatchFastTerminate, which you can add like this:

https://preview.redd.it/t8y9bkkk2kuf1.png?width=2928&format=png&auto=webp&s=7131d82361a34c95fe24a773ce3e31a12b1ecd50
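I haven't dug into the gist's actual code, but conceptually a model patch like this can work by checking ComfyUI's interrupt flag before every UNet call instead of only between full sampler steps. A rough, hypothetical sketch of that idea (not the gist itself; class and node names here are made up):

```python
# Conceptual sketch only: a model-patch node that raises ComfyUI's interrupt
# exception as soon as cancel is pressed, checked on every UNet invocation.
import comfy.model_management

class FastTerminateSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"model": ("MODEL",)}}

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"
    CATEGORY = "model_patches"

    def patch(self, model):
        m = model.clone()

        def wrapper(model_function, params):
            # Raises InterruptProcessingException if the user hit cancel,
            # aborting the sampler mid-step instead of after it finishes.
            comfy.model_management.throw_exception_if_processing_interrupted()
            return model_function(params["input"], params["timestep"], **params["c"])

        m.set_model_unet_function_wrapper(wrapper)
        return (m,)

NODE_CLASS_MAPPINGS = {"FastTerminateSketch": FastTerminateSketch}
```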



https://redd.it/1o488hl
@rStableDiffusion
Qwen Edit - Sharing prompts: perspective
https://redd.it/1o499dg
@rStableDiffusion