MagicStick🪄: Controllable Video Editing via Control Handle Transformations



¹HKUST     ²Tencent AI Lab     ³Tsinghua University, SIGS

TL;DR: Edit videos by transforming the control signals (e.g., edge maps or poses) extracted from keyframes.

Move the astronaut

Zoom in the bear

Zoom in the truck & Truck ➜ Train

Move the cup

Copy the duck & Zoom in the duck

Zoom in the bear & Bear ➜ Lion

Human motion editing

Move the parrot

Zoom out the swan

Zoom in the bear & Bear ➜ Cheetah

Zoom out the panda

Move the shark

Zoom in the rabbit & Rabbit ➜ Tiger

Human motion editing

Zoom in the bird

Zoom in the bus & Bus ➜ bus

Zoom in the swan & Swan ➜ Duck

Zoom out the bird

Zoom in the truck & Truck ➜ Train

Zoom in the mountain


Abstract

Text-based video editing has recently attracted considerable interest for changing the style of a video or replacing objects with others of similar structure. Beyond this, we demonstrate that properties such as shape, size, location, and motion can also be edited in videos. Our key insight is that transformations applied to a keyframe's internal control signal (e.g., edge maps of objects or human pose) can easily be propagated to other frames to guide generation. We thus propose MagicStick, a controllable video editing method that edits video properties by transforming the extracted internal control signals. In detail, to preserve appearance, we inflate both the pretrained image diffusion model and ControlNet to the temporal dimension and train low-rank adaptation (LoRA) layers to fit the specific scene. For editing, we adopt an inversion-and-generation framework in which the fine-tuned ControlNet is introduced in both inversion and generation for attention guidance, together with a proposed attention remix between the spatial attention maps of the inversion and editing branches. Though succinct, our method is the first to demonstrate video property editing with a pretrained text-to-image model. We present experiments on numerous examples within our unified framework, and compare against shape-aware text-based editing and handcrafted motion video generation, demonstrating superior temporal consistency and editing capability over previous works. The code and models will be made publicly available.
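
To make the core idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of the control handle transformation step: a zoom/move edit defined on a keyframe's control map (e.g., an edge map) is expressed as an affine transform and propagated to every frame before it conditions ControlNet. The tensor layout, the linear ramp of the transform across frames, and the function name propagate_keyframe_transform are illustrative assumptions.

# A minimal sketch (not the authors' code) of propagating a keyframe edit
# to all frames of a control signal before it conditions ControlNet.
import torch
import torch.nn.functional as F

def propagate_keyframe_transform(control_maps: torch.Tensor,
                                 scale: float = 1.3,
                                 translate: tuple = (0.1, 0.0)) -> torch.Tensor:
    """Apply a "zoom in + move" edit to per-frame control maps.

    control_maps: (T, C, H, W) edge/pose maps extracted from the video.
    scale, translate: the edit defined on the keyframe; as an illustrative
    propagation scheme, the transform is ramped linearly from identity
    (frame 0) to the full edit (last frame).
    """
    T = control_maps.shape[0]
    out = []
    for t in range(T):
        alpha = t / max(T - 1, 1)                # 0 -> 1 over the clip
        s = 1.0 + alpha * (scale - 1.0)          # interpolated zoom factor
        tx = alpha * translate[0]                # interpolated shift (x)
        ty = alpha * translate[1]                # interpolated shift (y)
        # Inverse affine matrix for grid_sample (output -> input coordinates).
        theta = torch.tensor([[1.0 / s, 0.0, -tx],
                              [0.0, 1.0 / s, -ty]]).unsqueeze(0)
        grid = F.affine_grid(theta, control_maps[t:t + 1].shape, align_corners=False)
        out.append(F.grid_sample(control_maps[t:t + 1], grid, align_corners=False))
    return torch.cat(out, dim=0)  # edited control maps, fed to ControlNet

# Usage: edited = propagate_keyframe_transform(torch.rand(8, 1, 64, 64))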

Pipeline

Left: we store all attention maps during the DDIM inversion pipeline. At the editing stage of DDIM denoising, we fuse the current attention maps with the stored ones using the proposed Attention Blending Block.

Right: first, we replace the cross-attention maps of un-edited words (e.g., "road" and "countryside") with the maps obtained from the source prompt during inversion. For the edited words (e.g., "Porsche car"), we blend the self-attention maps from inversion and editing using an adaptive spatial mask obtained from the cross-attention, which marks the region the user wants to edit.
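
The following is a minimal PyTorch sketch (not the released implementation) of this attention blending: cross-attention maps of un-edited words are restored from the inversion pass, and self-attention maps are blended under a spatial mask thresholded from the edited word's cross-attention. The tensor layouts, the threshold value, and the function name attention_remix are assumptions for illustration.

# A minimal sketch (not the released implementation) of the attention blending
# described above.
import torch

def attention_remix(cross_inv: torch.Tensor,   # (heads, H*W, n_tokens), inversion
                    cross_edit: torch.Tensor,  # (heads, H*W, n_tokens), editing
                    self_inv: torch.Tensor,    # (heads, H*W, H*W), inversion
                    self_edit: torch.Tensor,   # (heads, H*W, H*W), editing
                    edited_token_ids: list,
                    threshold: float = 0.3):
    # 1) Cross-attention: keep the editing-pass maps only for edited tokens,
    #    restore the inversion maps elsewhere to preserve the background.
    cross_out = cross_inv.clone()
    cross_out[..., edited_token_ids] = cross_edit[..., edited_token_ids]

    # 2) Adaptive spatial mask from the edited tokens' cross-attention
    #    (averaged over heads and edited tokens, normalized to [0, 1]).
    attn = cross_edit[..., edited_token_ids].mean(dim=(0, -1))       # (H*W,)
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    mask = (attn > threshold).float().view(1, -1, 1)                 # (1, H*W, 1)

    # 3) Self-attention: editing-pass attention inside the masked (edited)
    #    region, inversion attention outside it.
    self_out = mask * self_edit + (1.0 - mask) * self_inv
    return cross_out, self_out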

Engorgio, Reducio🪄 (Object Size Editing)

Apparate🪄 (Object Position Editing)

Specialis Revelio🪄 (Object Appearance Editing)

Bear ➜ Lion

Bear ➜ Tiger

Swan ➜ Duck

Swan ➜ Pink flamingo

Truck ➜ Bus

Truck ➜ Train

Tarantallegra🪄 (Human Motion Editing)

Comparisons

Input Video  |  Ours  |  Tune-A-Video  |  Shape-Edit  |  Pasting

Cat ➜ Train

Input Video  |  Ours  |  Tune-A-Video  |  Shape-Edit  |  Pasting

Swan ➜ Duck

BibTeX

@article{ma2023magicstick,
  title={MagicStick: Controllable Video Editing via Control Handle Transformations},
  author={Ma, Yue and Cun, Xiaodong and He, Yingqing and Qi, Chenyang and Wang, Xintao and Shan, Ying and Li, Xiu and Chen, Qifeng},
  journal={arXiv preprint arXiv:2312.03047},
  year={2023}
}