With the progress of text-to-image models (e.g., Stable Diffusion) and the corresponding personalization techniques (such as DreamBooth and LoRA), anyone can turn their imagination into high-quality images at an affordable cost. This has created strong demand for image animation techniques that further combine generated static images with motion dynamics. This series of articles introduces various model applications of AnimateDiff.
AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of motion modules responsible for adding coherent motion between image frames. These modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet.
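As a rough intuition for the division of labor described above, here is a toy, pure-Python sketch (not the actual diffusers implementation; all names and numbers are illustrative): spatial layers process each frame independently, while the motion module mixes information across the time axis so frames stay coherent.

```python
# Toy sketch of where AnimateDiff-style motion modules sit in a UNet block.
# Frames are represented as plain lists of "pixel" values.

def spatial_block(frame):
    # Operates on each frame independently (stands in for ResNet/Attention).
    return [x + 1 for x in frame]

def motion_module(frames):
    # Mixes information across frames at each pixel position
    # (stands in for temporal self-attention).
    n = len(frames)
    return [
        [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]
        for _ in frames
    ]

def unet_block(frames):
    # Per AnimateDiff, the motion module runs after the spatial layers.
    frames = [spatial_block(f) for f in frames]
    return motion_module(frames)

video = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]  # 3 frames, 2 "pixels" each
print(unet_block(video))  # every frame pulled toward the cross-frame average
```

The point of the sketch is only the ordering: per-frame (spatial) computation first, then a cross-frame (temporal) mixing step, which is what keeps the generated frames consistent with each other.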
The following example demonstrates how to run inference with a MotionAdapter checkpoint and the diffusers library on a Stable Diffusion 1.4/1.5-based model.
```python
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the MotionAdapter
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)

# Load a fine-tuned model based on SD 1.5
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(
    model_id, motion_adapter=adapter, torch_dtype=torch.float16
)
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipe.scheduler = scheduler

# Enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation01.gif")
```