With any of the models you are testing, such as Live Portraits, MimicMotion, and AnimateDiff, video-to-video restyling or generation can sometimes produce output that looks choppy, jerky, and jumpy.

You can often get a better result by first choosing a better source video: make sure it fits the requirements and intended scenarios of the model you are testing, since each model excels at different image/video scenarios. Currently, image/video generation tech is NOT controllable enough compared to the text modality (ChatGPT/Claude level). Anyone claiming fully controllable image/video generation at the moment (2024) is engaging in false advertising.

Your best bet is to choose a source video with less motion and a simpler lighting environment. Also read the instructions in the RunComfy workflow descriptions; they contain plenty of tips on which scenarios a specific model handles well and which parameters you can tweak to improve your chances of getting a less choppy result.
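
If you want a rough, model-agnostic way to compare how much motion your candidate source clips contain before running a workflow, the sketch below uses OpenCV optical flow to score average per-frame motion. This is only an illustration, not part of any RunComfy workflow: the file name and the 2.0-pixel "low motion" threshold are arbitrary assumptions you should adjust for your own footage.

```python
import cv2
import numpy as np

def average_motion(video_path, max_frames=120):
    """Estimate average per-pixel motion (in pixels/frame) with Farneback optical flow."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise ValueError(f"Cannot read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        magnitudes.append(mag.mean())
        prev_gray = gray
    cap.release()
    return float(np.mean(magnitudes)) if magnitudes else 0.0

# Example: screen a candidate clip; threshold is an illustrative guess, not a model requirement.
score = average_motion("candidate_source.mp4")
label = "low motion (good candidate)" if score < 2.0 else "high motion (may come out choppy)"
print(f"Average motion: {score:.2f} px/frame -> {label}")
```

Clips that score lower on a check like this tend to be easier for these models to follow, though the final result still depends on the specific model and its parameters.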