Kaggle Deepfake Audio-Video Track | (Datawhale AI Summer Camp)

Preface

Since I already had some AI background before joining this summer camp, these notes do not go over the machine-learning fundamentals covered in the three Tasks. This post consists of three parts: an introduction to the competition task, a walkthrough of the baseline, and ideas for improving the baseline.

Competition Overview

First, what is a Deepfake? As the name suggests, a Deepfake is a deep-learning-based forgery.
With the rapid development of artificial intelligence, Deepfake technology has become a double-edged sword in the digital world. It opens up new possibilities for creative content generation, but it also poses unprecedented challenges to digital security. Deepfake techniques use AI algorithms to generate highly realistic images, video, and audio that can look indistinguishable from real content, which makes problems such as disinformation, fraud, and privacy violations more severe and more complex. Especially today, with AIGC everywhere, judging whether generated content is a deepfake is an important issue.

In this track, the task is to determine whether a face image is a Deepfake and to output a probability score that it is fake. Participants need to develop and optimize detection models that can handle diverse Deepfake generation techniques and complex application scenarios, improving the accuracy and robustness of Deepfake detection.

Understanding the Baseline

1. Basic Setup and Library Installation
!wc -l /kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset_label.txt
!wc -l /kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/valset_label.txt

Count the lines of the training-set and validation-set label files to confirm the data has loaded completely.

from IPython.display import Video
Video("/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset/00154f42886002f8a2a6e40343617510.mp4", embed=True)

Display a sample video to check its content and quality.

!pip install moviepy librosa matplotlib numpy timm

Install the required Python libraries.

2. Imports and Random Seed
import torch
torch.manual_seed(0)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True

import torchvision.models as models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data.dataset import Dataset
import timm
import time

import pandas as pd
import numpy as np
import cv2, glob, os
from PIL import Image

Import the required libraries and set the random seed. Note that with cudnn.benchmark = True and cudnn.deterministic = False, cuDNN may select non-deterministic kernels, so runs are only approximately repeatable even with a fixed seed; this trades strict reproducibility for speed.
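If bit-for-bit reproducibility matters more than speed, a stricter seeding helper can be used instead. A minimal sketch (the seed_everything name is our own, not part of the baseline):

import random
import numpy as np
import torch

def seed_everything(seed=0, strict=False):
    # Seed the Python, NumPy and PyTorch RNGs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if strict:
        # Trade speed for bit-for-bit reproducibility.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

seed_everything(0)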

3. Generating Mel Spectrograms

import moviepy.editor as mp
import librosa
import numpy as np
import cv2

def generate_mel_spectrogram(video_path, n_mels=128, fmax=8000, target_size=(256, 256)):
    # Extract the audio track to a temporary wav file.
    audio_path = 'extracted_audio.wav'
    video = mp.VideoFileClip(video_path)
    video.audio.write_audiofile(audio_path, verbose=False, logger=None)
    # Compute the mel spectrogram and convert power to decibels.
    y, sr = librosa.load(audio_path)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, fmax=fmax)
    S_dB = librosa.power_to_db(S, ref=np.max)
    # Normalize to 0-255 and resize so it can be saved as an image.
    S_dB_normalized = cv2.normalize(S_dB, None, 0, 255, cv2.NORM_MINMAX)
    S_dB_normalized = S_dB_normalized.astype(np.uint8)
    img_resized = cv2.resize(S_dB_normalized, target_size, interpolation=cv2.INTER_LINEAR)
    return img_resized

video_path = '/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset/00154f42886002f8a2a6e40343617510.mp4'
mel_spectrogram_image = generate_mel_spectrogram(video_path)

Extract the audio from a video and turn it into a mel-spectrogram image.
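A quick way to eyeball the result, using the matplotlib we installed above:

import matplotlib.pyplot as plt

# Sanity check: visualize the spectrogram we just generated.
plt.figure(figsize=(6, 6))
plt.imshow(mel_spectrogram_image, cmap='magma', origin='lower')
plt.title('Mel spectrogram (normalized, 256x256)')
plt.axis('off')
plt.show()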

4. Creating Output Directories

!mkdir -p ffdv_phase1_sample/trainset
!mkdir -p ffdv_phase1_sample/valset

Create the directories where the mel-spectrogram images will be saved.

5. Converting the Training and Validation Videos
for video_path in glob.glob('/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset/*.mp4')[:400]:
    mel_spectrogram_image = generate_mel_spectrogram(video_path)
    cv2.imwrite('./ffdv_phase1_sample/trainset/' + video_path.split('/')[-1][:-4] + '.jpg', mel_spectrogram_image)

for video_path in glob.glob('/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/valset/*.mp4')[:2000]:
    mel_spectrogram_image = generate_mel_spectrogram(video_path)
    cv2.imwrite('./ffdv_phase1_sample/valset/' + video_path.split('/')[-1][:-4] + '.jpg', mel_spectrogram_image)

Convert the first 400 training videos and the first 2000 validation videos into mel-spectrogram images and save them as JPEG files; a more defensive variant is sketched below.
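Clips without an audio track (video.audio is None) or with decoding problems would crash the plain loops above. A more defensive sketch that skips such clips and reports the failures (convert_videos is a hypothetical helper, not part of the baseline):

def convert_videos(pattern, out_dir, limit=None):
    ok, failed = 0, []
    for video_path in glob.glob(pattern)[:limit]:
        try:
            img = generate_mel_spectrogram(video_path)
            out_name = os.path.basename(video_path)[:-4] + '.jpg'
            cv2.imwrite(os.path.join(out_dir, out_name), img)
            ok += 1
        except Exception as e:
            # Keep the error so problem clips can be inspected later.
            failed.append((video_path, str(e)))
    print(f'converted {ok} videos, {len(failed)} failed')
    return failed

failed = convert_videos('/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset/*.mp4',
                        './ffdv_phase1_sample/trainset', limit=400)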

6. Metric Helper Classes
class AverageMeter(object):
    def __init__(self, name, fmt=':f'):
        self.name = name
        self.fmt = fmt
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

    def __str__(self):
        fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
        return fmtstr.format(**self.__dict__)

class ProgressMeter(object):
    def __init__(self, num_batches, *meters):
        self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
        self.meters = meters
        self.prefix = ""

    def display(self, batch):
        entries = [self.prefix + self.batch_fmtstr.format(batch)]
        entries += [str(meter) for meter in self.meters]
        print('\t'.join(entries))

    def _get_batch_fmtstr(self, num_batches):
        num_digits = len(str(num_batches))
        fmt = '{:' + str(num_digits) + 'd}'
        return '[' + fmt + '/' + fmt.format(num_batches) + ']'

These meter classes record and print the batch time, loss, and accuracy during training; a small usage example follows.
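A quick demonstration of AverageMeter: update takes the batch value and the batch size, and the running average is weighted accordingly.

# Minimal usage example: the meter keeps the latest value and a running average.
loss_meter = AverageMeter('Loss', ':.4f')
for batch_loss, batch_size in [(0.9, 40), (0.7, 40), (0.5, 20)]:
    loss_meter.update(batch_loss, n=batch_size)
print(loss_meter)   # prints "Loss 0.5000 (0.7400)"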

7. Validation and Prediction
def validate(val_loader, model, criterion):
    batch_time = AverageMeter('Time', ':6.3f')
    losses = AverageMeter('Loss', ':.4e')
    top1 = AverageMeter('Acc@1', ':6.2f')
    progress = ProgressMeter(len(val_loader), batch_time, losses, top1)

    model.eval()

    with torch.no_grad():
        end = time.time()
        for i, (input, target) in enumerate(val_loader):
            input = input.cuda()
            target = target.cuda()

            output = model(input)
            loss = criterion(output, target)

            acc = (output.argmax(1).view(-1) == target.float().view(-1)).float().mean() * 100
            losses.update(loss.item(), input.size(0))
            top1.update(acc, input.size(0))
            batch_time.update(time.time() - end)
            end = time.time()

        print(' * Acc@1 {top1.avg:.3f}'
              .format(top1=top1))
        return top1

def predict(test_loader, model, tta=10):
    model.eval()
    
    test_pred_tta = None
    for _ in range(tta):
        test_pred = []
        with torch.no_grad():
            for i, (input, target) in enumerate(test_loader):
                input = input.cuda()
                target = target.cuda()

                output = model(input)
                output = F.softmax(output, dim=1)
                output = output.data.cpu().numpy()

                test_pred.append(output)
        test_pred = np.vstack(test_pred)
    
        if test_pred_tta is None:
            test_pred_tta = test_pred
        else:
            test_pred_tta += test_pred
    
    return test_pred_tta

Define the validation and prediction functions. validate computes the loss and accuracy on the validation set; predict applies test-time augmentation (TTA) and returns the predictions summed over the tta passes. Since the validation transforms contain no randomness, the repeated passes here are identical; TTA only pays off once random transforms are used at inference time.
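Because predict returns the sum over the tta passes rather than the mean, divide by tta if calibrated probabilities are needed. A usage sketch, assuming the model and loaders built in the later sections:

# predict() accumulates the softmax outputs over the TTA passes, so divide
# by the pass count to recover averaged probabilities (with tta=1 this is a no-op).
tta = 10
pred = predict(val_loader, model, tta) / tta
fake_prob = pred[:, 1]  # column 1 = probability of the fake class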

8. Training Function
def train(train_loader, model, criterion, optimizer, epoch):
    batch_time = AverageMeter('Time', ':6.3f')
    losses = AverageMeter('Loss', ':.4e')
    top1 = AverageMeter('Acc@1', ':6.2f')
    progress = ProgressMeter(len(train_loader), batch_time, losses, top1)

    model.train()

    end = time.time()
    for i, (input, target) in enumerate(train_loader):
        input = input.cuda(non_blocking=True)
        target = target.cuda(non_blocking=True)

        output = model(input)
        loss = criterion(output, target)

        losses.update(loss.item(), input.size(0))

        acc = (output.argmax(1).view(-1) == target.float().view(-1)).float().mean() * 100
        top1.update(acc, input.size(0))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        batch_time.update(time.time() - end)
        end = time.time()

        if i % 100 == 0:
            progress.display(i)

The train function iterates over the data each epoch, computes the loss and accuracy, updates the model parameters, and prints the running metrics every 100 batches.

9. Loading the Labels
train_label = pd.read_csv("/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset_label.txt")
val_label = pd.read_csv("/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/valset_label.txt")

train_label['path'] = './ffdv_phase1_sample/trainset/' + train_label['video_name'].apply(lambda x: x[:-4] + '.jpg')
val_label['path'] = './ffdv_phase1_sample/valset/' + val_label['video_name'].apply(lambda x: x[:-4] + '.jpg')

train_label = train_label[train_label['path'].apply(os.path.exists)]
val_label = val_label[val_label['path'].apply(os.path.exists)]

Load the training and validation labels, attach the path of each corresponding spectrogram image, and keep only the rows whose image file actually exists.
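It is also worth checking the class balance at this point; a strong skew would argue for a weighted loss or resampling.

# Quick look at the label distribution.
print(train_label['target'].value_counts())
print(val_label['target'].value_counts())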

10. Custom Dataset Class and Data Loaders
class FFDIDataset(Dataset):
    def __init__(self, img_path, img_label, transform=None):
        self.img_path = img_path
        self.img_label = img_label
        
        self.transform = transform
    
    def __getitem__(self, index):
        img = Image.open(self.img_path[index]).convert('RGB')
        
        if self.transform is not None:
            img = self.transform(img)
        
        return img, torch.from_numpy(np.array(self.img_label[index]))
    
    def __len__(self):
        return len(self.img_path)

Define a custom dataset class that loads the spectrogram images and their labels.

train_loader = torch.utils.data.DataLoader(
    FFDIDataset(train_label['path'].values, train_label['target'].values, 
            transforms.Compose([
                        transforms.Resize((256, 256)),
                        transforms.RandomHorizontalFlip(),
                        transforms.RandomVerticalFlip(),
                        transforms.ToTensor(),
                        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    ), batch_size=40, shuffle=True, num_workers=12, pin_memory=True
)

val_loader = torch.utils.data.DataLoader(
    FFDIDataset(val_label['path'].values, val_label['target'].values, 
            transforms.Compose([
                        transforms.Resize((256, 256)),
                        transforms.ToTensor(),
                        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    ), batch_size=40, shuffle=False, num_workers=10, pin_memory=True
)

Create the data loaders, applying augmentation (random flips) and normalization to the training set, and only resizing and normalization to the validation set.

11. Model Definition and Training
model = timm.create_model('resnet18', pretrained=True, num_classes=2)
model = model.cuda()

criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.Adam(model.parameters(), 0.003)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.85)
best_acc = 0.0
for epoch in range(10):
    print('Epoch: ', epoch)

    train(train_loader, model, criterion, optimizer, epoch)
    val_acc = validate(val_loader, model, criterion)
    # Step the LR scheduler after the epoch's optimizer updates (calling it
    # before training is deprecated and skips the initial learning rate).
    scheduler.step()
    
    if val_acc.avg.item() > best_acc:
        best_acc = round(val_acc.avg.item(), 2)
        torch.save(model.state_dict(), f'./model_{best_acc}.pt')

Define the model (a pretrained ResNet-18 from timm with a two-class head), the loss, the optimizer, and a StepLR scheduler; train for 10 epochs and save a checkpoint whenever the validation accuracy improves.

12. Prediction and Submission
val_pred = predict(val_loader, model, 1)[:, 1]
val_label["y_pred"] = val_pred

submit = pd.read_csv("/kaggle/input/multi-ffdv/prediction.txt.csv")
merged_df = submit.merge(val_label[['video_name', 'y_pred']], on='video_name', suffixes=('', '_df2'), how='left')
merged_df['y_pred'] = merged_df['y_pred_df2'].combine_first(merged_df['y_pred'])

merged_df[['video_name', 'y_pred']].to_csv('submit.csv', index=None)

Generate predictions for the validation set (column 1 of the softmax output, i.e. the probability of being fake), merge them into the provided submission template, and write the final submit.csv.
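Since the validation labels are known, the predictions can also be scored offline before submitting. A sketch using ROC-AUC as an example metric (substitute the competition's official metric if it differs):

from sklearn.metrics import roc_auc_score

# Offline check of the validation predictions against the ground truth.
print('val ROC-AUC:', roc_auc_score(val_label['target'], val_label['y_pred']))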

Ideas for Improving the Baseline

The baseline can be improved mainly along the following axes: data augmentation, a stronger model architecture, learning-rate scheduling, regularization, and hyperparameter optimization.

1. Data Augmentation

We can add more augmentation in the data loaders, such as random resized crops and color jitter. Note that the transforms below produce 224×224 crops, so the model is trained at a different resolution from the 256×256 baseline.

from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

val_transforms = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

train_loader = torch.utils.data.DataLoader(
    FFDIDataset(train_label['path'].values, train_label['target'].values, train_transforms),
    batch_size=40, shuffle=True, num_workers=12, pin_memory=True
)

val_loader = torch.utils.data.DataLoader(
    FFDIDataset(val_label['path'].values, val_label['target'].values, val_transforms),
    batch_size=40, shuffle=False, num_workers=10, pin_memory=True
)
2. Upgrading the Model Architecture

We can try a stronger backbone, such as EfficientNet.

model = timm.create_model('efficientnet_b0', pretrained=True, num_classes=2)
model = model.cuda()
3. A More Advanced Learning-Rate Scheduler

We can switch to a cosine annealing scheduler (CosineAnnealingLR).

optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
4. Regularization

Apply Dropout and label smoothing in the model definition.

import torch.nn.functional as F

class CustomEfficientNet(nn.Module):
    def __init__(self, base_model):
        super(CustomEfficientNet, self).__init__()
        self.base_model = base_model
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(base_model.num_features, 2)
    
    def forward(self, x):
        # forward_features returns an unpooled (B, C, H, W) feature map,
        # so pool it to a vector before the dropout/linear head.
        x = self.base_model.forward_features(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        x = self.dropout(x)
        x = self.fc(x)
        return x

base_model = timm.create_model('efficientnet_b0', pretrained=True, num_classes=0)
model = CustomEfficientNet(base_model).cuda()

criterion = nn.CrossEntropyLoss(label_smoothing=0.1).cuda()
5. Hyperparameter Optimization

Here we show how to tune hyperparameters with Optuna, an automated hyperparameter-optimization framework.

!pip install optuna

import optuna

def objective(trial):
    model = timm.create_model('efficientnet_b0', pretrained=True, num_classes=2).cuda()

    # Sample the learning rate on a log scale.
    lr = trial.suggest_float('lr', 1e-5, 1e-2, log=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)

    criterion = nn.CrossEntropyLoss(label_smoothing=0.1).cuda()

    best_acc = 0.0
    for epoch in range(10):
        train(train_loader, model, criterion, optimizer, epoch)
        val_acc = validate(val_loader, model, criterion)
        scheduler.step()

        if val_acc.avg.item() > best_acc:
            best_acc = val_acc.avg.item()

    return best_acc

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=20)

best_params = study.best_params
print("Best parameters: ", best_params)

# Use the best learning rate to train a fresh final model.
best_lr = best_params['lr']
model = timm.create_model('efficientnet_b0', pretrained=True, num_classes=2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=best_lr)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
6. Multimodal Fusion

Combine video-frame and audio features in a single multimodal model.

class MultiModalModel(nn.Module):
    def __init__(self, image_model, audio_model):
        super(MultiModalModel, self).__init__()
        self.image_model = image_model
        self.audio_model = audio_model
        self.fc = nn.Linear(image_model.num_features + audio_model.num_features, 2)
    
    def forward(self, image, audio):
        # Pool each backbone's feature map to a vector before concatenating.
        img_features = self.image_model.forward_features(image)
        img_features = F.adaptive_avg_pool2d(img_features, 1).flatten(1)
        audio_features = self.audio_model.forward_features(audio)
        audio_features = F.adaptive_avg_pool2d(audio_features, 1).flatten(1)
        combined_features = torch.cat((img_features, audio_features), dim=1)
        output = self.fc(combined_features)
        return output

image_model = timm.create_model('efficientnet_b0', pretrained=True, num_classes=0)
audio_model = timm.create_model('resnet18', pretrained=True, num_classes=0)
model = MultiModalModel(image_model, audio_model).cuda()
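Note that MultiModalModel needs paired inputs, which the single-image FFDIDataset cannot supply. A minimal dataset sketch, assuming one face frame and one mel-spectrogram image have been pre-extracted per video (frame_paths and mel_paths are hypothetical arrays you would need to build, and the train/validate loops must be adapted to unpack three items per batch):

class MultiModalDataset(Dataset):
    def __init__(self, frame_paths, mel_paths, labels, transform=None):
        self.frame_paths = frame_paths
        self.mel_paths = mel_paths
        self.labels = labels
        self.transform = transform

    def __getitem__(self, index):
        # Load the paired face frame and spectrogram image for one video.
        frame = Image.open(self.frame_paths[index]).convert('RGB')
        mel = Image.open(self.mel_paths[index]).convert('RGB')
        if self.transform is not None:
            frame = self.transform(frame)
            mel = self.transform(mel)
        label = torch.from_numpy(np.array(self.labels[index]))
        return frame, mel, label

    def __len__(self):
        return len(self.frame_paths)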
7. Post-Processing

Apply temperature scaling to calibrate the model's predicted probabilities.

class TemperatureScaling(nn.Module):
    def __init__(self, model):
        super(TemperatureScaling, self).__init__()
        self.model = model
        self.temperature = nn.Parameter(torch.ones(1) * 1.5)
    
    def forward(self, input):
        logits = self.model(input)
        return self.temperature_scale(logits)
    
    def temperature_scale(self, logits):
        return logits / self.temperature

model = TemperatureScaling(model).cuda()
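The initial value of 1.5 is just a starting point; the temperature is normally fit on held-out logits by minimizing the negative log-likelihood. A minimal sketch using LBFGS (fit_temperature is our own helper name):

def fit_temperature(ts_model, val_loader):
    ts_model.eval()
    # Collect logits and labels once, so LBFGS can re-evaluate cheaply.
    logits_list, labels_list = [], []
    with torch.no_grad():
        for input, target in val_loader:
            logits_list.append(ts_model.model(input.cuda()))
            labels_list.append(target.cuda())
    logits = torch.cat(logits_list)
    labels = torch.cat(labels_list)

    optimizer = torch.optim.LBFGS([ts_model.temperature], lr=0.01, max_iter=50)

    def closure():
        optimizer.zero_grad()
        # NLL of the temperature-scaled logits; only temperature has grad.
        loss = F.cross_entropy(ts_model.temperature_scale(logits), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return ts_model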

Closing Remarks

Beyond the optimizations above, we can also design fake-detection heuristics around the characteristics of the dataset itself. For example, during the first live session, inspecting the deepfake samples revealed unnatural artifacts such as face swapping and repeated speech. We can therefore check for abrupt changes along the time axis to decide whether a sample is fake, or examine the mel spectrogram for repeated audio segments, and so on. A sketch of the temporal idea follows.
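As a concrete starting point, a hedged sketch that scores a video by how abruptly consecutive frames change; a large max-to-median ratio of frame differences can indicate face-swap seams or cuts (frame_jump_score is illustrative, not a tuned detector):

import cv2
import numpy as np

def frame_jump_score(video_path, max_frames=100):
    cap = cv2.VideoCapture(video_path)
    prev, diffs = None, []
    while len(diffs) < max_frames:
        ret, frame = cap.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute difference between consecutive grayscale frames.
            diffs.append(np.mean(cv2.absdiff(gray, prev)))
        prev = gray
    cap.release()
    if not diffs:
        return 0.0
    # A sudden visual jump shows up as an outlier against the median motion.
    return float(np.max(diffs) / (np.median(diffs) + 1e-6))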
