tgoop.com/MachineLearning_Programming/253
🔴🔴 Direct-a-Video: Driving Video Generation 🔴🔴

👉 Direct-a-Video is a text-to-video generation framework that lets users control camera movement and object motion, either independently or jointly. Authors: City University of Hong Kong, Kuaishou Technology & Tianjin University.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅ Decouples camera and object motion in generative video
✅ Lets users control each independently or jointly
✅ Novel temporal cross-attention layers for camera motion
✅ Training-free spatial cross-attention for object motion
✅ Drives object generation via user-drawn bounding boxes
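As a rough illustration of the training-free, box-driven idea in the highlights above, here is a minimal NumPy sketch of one common way such guidance is done: biasing a text token's spatial cross-attention logits toward a user-drawn bounding box before the softmax. This is not the authors' implementation; the function names (`box_mask`, `box_guided_attention`) and the additive-bias formulation are assumptions for illustration only.

```python
import numpy as np

def box_mask(h, w, box):
    """Rasterize a normalized (x0, y0, x1, y1) box into a binary h-by-w mask."""
    x0, y0, x1, y1 = box
    m = np.zeros((h, w), dtype=np.float32)
    m[int(y0 * h):int(np.ceil(y1 * h)), int(x0 * w):int(np.ceil(x1 * w))] = 1.0
    return m

def box_guided_attention(logits, token_idx, mask, strength=5.0):
    """Bias one text token's cross-attention toward a box region (hypothetical sketch).

    logits:    (h*w, n_tokens) raw pixel-to-token attention scores.
    token_idx: index of the object's text token.
    mask:      binary box mask, reshaped to (h*w,).
    strength:  how strongly in-box pixels prefer the object token.
    """
    biased = logits.copy()
    biased[:, token_idx] += strength * mask.reshape(-1)
    # Numerically stable softmax over the token axis.
    e = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Because the bias is applied only at inference, no fine-tuning is needed: pixels inside the box end up attending more strongly to the object's token, steering where the object is generated.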
#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse
👉 Channel: @MachineLearning_Programming
👉 Paper: https://arxiv.org/pdf/2402.03162.pdf
👉 Project: https://direct-a-video.github.io/
BY Computer Science and Programming