Datawhale X ModelScope AI Summer Camp - AIGC Day 0: Text-to-Image
Background
"From Zero to AI Image Generation: Principles & Practice" is the learning activity of the fourth session of the Datawhale 2024 AI Summer Camp (AIGC track), built around the ModelScope community's "Kolors-LoRA Style Story Challenge" competition.
Learning outline: starting from generating images from text in code and progressing step by step, the tutorial focuses on image workflows, fine-tuning, and image optimization, and closes with a brief introduction to AIGC application directions and digital-human technology (optional).
Core~
Train a LoRA on top of the Kolors base model, then use it to generate images from text.
Hands-on lab (thanks to Datawhale for providing the sample code)
Environment
- Dependencies used by the training code (installed below): simple-aesthetics-predictor, data-juicer, peft, lightning, pandas, torchvision, and DiffSynth-Studio
```
!pip install simple-aesthetics-predictor
!pip install -v -e data-juicer
!pip uninstall pytorch-lightning -y
!pip install peft lightning pandas torchvision
!pip install -e DiffSynth-Studio
```
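The two editable installs (`-e data-juicer`, `-e DiffSynth-Studio`) assume both repositories have already been cloned into the working directory. If they haven't been, something along these lines fetches them first (GitHub paths assumed from the modelscope organization):

```
!git clone https://github.com/modelscope/data-juicer.git
!git clone https://github.com/modelscope/DiffSynth-Studio.git
```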
Download the training dataset (a dataset hosted on ModelScope)
```python
from modelscope.msdatasets import MsDataset

ds = MsDataset.load(
    'AI-ModelScope/lowres_anime',
    subset_name='default',
    split='train',
    cache_dir="/mnt/workspace/kolors/data"
)
```
```python
import json, os
from tqdm import tqdm

# The notebook is assumed to run from /mnt/workspace/kolors, so the relative
# ./data paths and the absolute paths written into the metadata coincide.
os.makedirs("./data/lora_dataset/train", exist_ok=True)
os.makedirs("./data/data-juicer/input", exist_ok=True)
with open("./data/data-juicer/input/metadata.jsonl", "w") as f:
    for data_id, data in enumerate(tqdm(ds)):
        image = data["image"].convert("RGB")
        image.save(f"/mnt/workspace/kolors/data/lora_dataset/train/{data_id}.jpg")
        # Every image gets the same fixed caption, "二次元" (anime style).
        metadata = {"text": "二次元", "image": [f"/mnt/workspace/kolors/data/lora_dataset/train/{data_id}.jpg"]}
        f.write(json.dumps(metadata))
        f.write("\n")
```
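Each line of metadata.jsonl is one JSON record pairing a caption with an image path; a quick peek confirms the layout:

```python
# Print the first record written above.
with open("./data/data-juicer/input/metadata.jsonl") as f:
    print(f.readline().strip())
```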
Dataset processing and fine-tuning
```python
data_juicer_config = """
# global parameters
project_name: 'data-process'
dataset_path: './data/data-juicer/input/metadata.jsonl'  # path to your dataset directory or file
np: 4  # number of subprocesses used to process the dataset
text_keys: 'text'
image_key: 'image'
image_special_token: '<__dj__image>'

export_path: './data/data-juicer/output/result.jsonl'

# process schedule
# a list of several process operators with their arguments
process:
    - image_shape_filter:
        min_width: 1024
        min_height: 1024
        any_or_all: any
    - image_aspect_ratio_filter:
        min_ratio: 0.5
        max_ratio: 2.0
        any_or_all: any
"""
with open("data/data-juicer/data_juicer_config.yaml", "w") as file:
    file.write(data_juicer_config.strip())

!dj-process --config data/data-juicer/data_juicer_config.yaml
```
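The two filters keep only images at least 1024 px on each side with an aspect ratio between 0.5 and 2.0. A quick check of my own (not part of the baseline) shows how many samples survive:

```python
# Count records that passed the data-juicer filters.
with open("./data/data-juicer/output/result.jsonl") as f:
    print(sum(1 for _ in f), "samples kept")
```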
```python
import pandas as pd
import os, json
from PIL import Image
from tqdm import tqdm

texts, file_names = [], []
os.makedirs("./data/lora_dataset_processed/train", exist_ok=True)
with open("./data/data-juicer/output/result.jsonl", "r") as file:
    for data_id, data in enumerate(tqdm(file.readlines())):
        data = json.loads(data)
        text = data["text"]
        texts.append(text)
        # Re-save each surviving image under a sequential name for training.
        image = Image.open(data["image"][0])
        image_path = f"./data/lora_dataset_processed/train/{data_id}.jpg"
        image.save(image_path)
        file_names.append(f"{data_id}.jpg")

# Write the (file_name, text) pairs expected by the training script.
data_frame = pd.DataFrame()
data_frame["file_name"] = file_names
data_frame["text"] = texts
data_frame.to_csv("./data/lora_dataset_processed/train/metadata.csv", index=False, encoding="utf-8-sig")
data_frame
```
LoRA fine-tuning
```python
from diffsynth import download_models

# Download the Kolors base model and the fp16-fixed SDXL VAE.
download_models(["Kolors", "SDXL-vae-fp16-fix"])
```

```python
# Model training
import os

cmd = """
python DiffSynth-Studio/examples/train/kolors/train_kolors_lora.py \
  --pretrained_unet_path models/kolors/Kolors/unet/diffusion_pytorch_model.safetensors \
  --pretrained_text_encoder_path models/kolors/Kolors/text_encoder \
  --pretrained_fp16_vae_path models/sdxl-vae-fp16-fix/diffusion_pytorch_model.safetensors \
  --lora_rank 16 \
  --lora_alpha 4.0 \
  --dataset_path data/lora_dataset_processed \
  --output_path ./models \
  --max_epochs 1 \
  --center_crop \
  --use_gradient_checkpointing \
  --precision "16-mixed"
""".strip()

os.system(cmd)
```
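Lightning writes its checkpoints under the training output path. The folder layout below is inferred from the lora_path used in the loading step that follows, so adjust the pattern if your run produced a different version number:

```python
# List the LoRA checkpoints produced by the training run.
import glob
print(glob.glob("models/lightning_logs/version_*/checkpoints/*.ckpt"))
```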
Load the model
```python
from diffsynth import ModelManager, SDXLImagePipeline
from peft import LoraConfig, inject_adapter_in_model
import torch

def load_lora(model, lora_rank, lora_alpha, lora_path):
    lora_config = LoraConfig(
        r=lora_rank,
        lora_alpha=lora_alpha,
        init_lora_weights="gaussian",
        target_modules=["to_q", "to_k", "to_v", "to_out"],
    )
    model = inject_adapter_in_model(lora_config, model)
    state_dict = torch.load(lora_path, map_location="cpu")
    model.load_state_dict(state_dict, strict=False)
    return model

# Load models
model_manager = ModelManager(
    torch_dtype=torch.float16, device="cuda",
    file_path_list=[
        "models/kolors/Kolors/text_encoder",
        "models/kolors/Kolors/unet/diffusion_pytorch_model.safetensors",
        "models/kolors/Kolors/vae/diffusion_pytorch_model.safetensors",
    ])
pipe = SDXLImagePipeline.from_model_manager(model_manager)

# Load LoRA
pipe.unet = load_lora(
    pipe.unet,
    lora_rank=16,  # This parameter should be consistent with that in your training script.
    lora_alpha=2.0,  # lora_alpha can control the weight of LoRA.
    lora_path="models/lightning_logs/version_0/checkpoints/epoch=0-step=500.ckpt"
)
```
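In PEFT's LoRA implementation the adapter contribution is scaled by lora_alpha / r, so with the values above the learned delta is applied at 2.0 / 16 = 0.125; raising lora_alpha strengthens the LoRA style relative to the base model:

```python
# PEFT scales the LoRA update by alpha / r; a smaller alpha gives a subtler effect.
lora_rank, lora_alpha = 16, 2.0
print(f"LoRA scaling factor: {lora_alpha / lora_rank}")  # 0.125
```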
Text-to-image generation
```python
torch.manual_seed(0)
image = pipe(
    prompt="positive prompt here",           # what the image should contain
    negative_prompt="negative prompt here",  # what the image should avoid
    cfg_scale=4,
    num_inference_steps=50, height=1024, width=1024,
)
image.save("1.jpg")
```
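Because the output depends on the random seed, a small sweep (my own sketch, reusing the pipe built above) is a cheap way to pick the best candidate:

```python
# Generate a few candidates with different seeds and save each one.
for seed in range(4):
    torch.manual_seed(seed)
    img = pipe(
        prompt="positive prompt here",
        negative_prompt="negative prompt here",
        cfg_scale=4,
        num_inference_steps=50, height=1024, width=1024,
    )
    img.save(f"seed_{seed}.jpg")
```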
...
A brief history of AIGC
AIGC, i.e. Artificial Intelligence Generated Content, is a new mode of content production that uses AI to generate content, spanning AI writing, AI painting, AI music composition, AI video editing, AI animation, AI interaction, and more. Its development can be divided into three stages: early germination (1950s-1990s), accumulation (1990s-2010s), and rapid development (2010s to the present), with the last stage characterized by the continuous iteration and breakthrough progress of deep learning models.
AIGC has found applications across many fields; for example, AI painting has played a notable role in cultural promotion in Hangzhou and in the digital restoration of the ancient port site in Wenzhou. In 2022, AI painting began to attract broad attention, and open-source models such as Disco Diffusion and Stable Diffusion drove an exponential leap in the quality of AI-generated art.
According to the China AIGC Text-to-Image Industry White Paper 2023, AIGC is driving a new wave of the content-creator economy: it not only raises the efficiency and quality of content production but also creates entirely new products, services, and business models. Its application prospects are broad, with the number of AIGC-painting users in China projected to reach 500 million by 2026-2027.
Text-to-image is a key technology in the AIGC landscape: it converts textual descriptions into images and offers a high degree of automation, precision, scalability, and customizability. The most popular text-to-image model today is Stable Diffusion, which is based on latent diffusion models (LDMs). It is fully open source and consists of three main modules: an autoencoder, a CLIP text encoder, and a UNet.