Datawhale AI Summer Camp: Global Deepfake Attack & Defense Challenge, Task 3 - Data Augmentation & a First Look at Multimodality

        Updating at a snail's pace. Having gone through the baseline carefully over the past few days, I'm striking while the iron is hot: today I add some simple data augmentation on top of the baseline.

Kaggle page of the deepfake attack & defense competition: https://www.kaggle.com/competitions/multi-ffdv/overview

Baseline notebook: https://www.kaggle.com/code/littlejian/deepfake-ffdv-baseline


        The baseline's ceiling is probably around 0.58 (pure speculation). My evidence: on the live leaderboard I reached a "high" score of 0.578 after some simple tuning of the baseline (just kidding), while the competitors ahead of me are far, far in the lead.

        My score and those of the competitors right behind me are packed unusually tightly; they are presumably fellow Datawhale learners, so the baseline can be said to have done its job.


The workflow should be familiar by now; it was introduced in the previous two notes:

  1. Extract the audio track from each video
  2. Convert the audio into a mel spectrogram
  3. Normalize the spectrogram into a 256*256 matrix
  4. Fine-tune ResNet-18 with the spectrogram matrix as input and the label as the target
  5. Predict the probability that each test video is fake
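
A minimal sketch of steps 2-3 (extracted audio file to spectrogram to 256*256 image), just to make the pipeline concrete; the helper name audio_to_mel_image, n_mels=128, and the min-max normalization are my own assumptions, and the baseline's generate_mel_spectrogram may differ in its details:

import librosa
import numpy as np
from PIL import Image

def audio_to_mel_image(audio_path, n_mels=128, size=(256, 256)):
    # Load the extracted audio track (librosa resamples to 22.05 kHz by default)
    y, sr = librosa.load(audio_path)
    # Mel spectrogram, converted to a dB scale
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    S_db = librosa.power_to_db(S, ref=np.max)
    # Min-max normalize to [0, 255] and resize to a 256*256 grayscale image for ResNet-18
    S_norm = (S_db - S_db.min()) / (S_db.max() - S_db.min() + 1e-8)
    return Image.fromarray((S_norm * 255).astype(np.uint8)).resize(size)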

There is still room to optimize this pipeline, for example by adding data augmentation. For a task like this, where audio/video is converted into images for classification, I see two kinds of augmentation: augmenting the extracted audio directly, or augmenting the resulting spectrograms.

On reflection, though, manipulating the spectrograms is most likely unreasonable: a spectrogram should be treated as an embedded feature rather than as an ordinary image.


For audio-signal augmentation, make the following changes inside the baseline's generate_mel_spectrogram function.

Here the audio is time-stretched/compressed and its volume is adjusted.

# ... (inside generate_mel_spectrogram; this also needs `import random` at the top of the script)

# Load the audio file
y, sr = librosa.load(audio_path)

# Augmentation: time stretch (rate < 1 slows the clip down, rate > 1 speeds it up)
stretch_factor = random.uniform(0.8, 1.2)
y_stretched = librosa.effects.time_stretch(y, rate=stretch_factor)

# Augmentation: volume change in dB
volume_change = random.uniform(-6, 6)
y_changed_volume = y_stretched * (10 ** (volume_change / 20))

# Augmentation: additive noise (computed here but not used for the final
# spectrogram; see the note below on why noise was dropped)
noise_factor = random.uniform(0, 0.01)  # controls the noise strength
noise = np.random.randn(len(y_changed_volume))
noisy_signal = y_changed_volume + noise_factor * noise

# Continue generating the mel spectrogram from the stretched, volume-adjusted signal
S = librosa.feature.melspectrogram(y=y_changed_volume, sr=sr, n_mels=n_mels)
# ...

Time-stretched audio (spectrogram shown in the original figure).

Volume-adjusted audio (spectrogram shown in the original figure).

Added noise: with a small noise factor there is no visible effect, and with a large one the spectrogram is badly corrupted (see the figure in the original post), so I chose not to add noise.

Using this augmentation, the 2,000 training audio clips were expanded to 24,000. Accuracy on the 7,500-clip validation set reached 97.89% (essentially the same as without augmentation), and the improvement on the test set was marginal.
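
For context, the twelvefold expansion (2,000 to 24,000) presumably comes from generating several augmented variants of each clip. A minimal sketch of that idea, assuming a hypothetical helper augment_clip (n_copies and n_mels are illustrative) whose output spectrograms would then be saved and used exactly like the originals:

import random
import numpy as np
import librosa

def augment_clip(audio_path, n_copies=12, n_mels=128):
    # Produce n_copies randomly augmented mel spectrograms (in dB) from one clip
    y, sr = librosa.load(audio_path)
    specs = []
    for _ in range(n_copies):
        stretch_factor = random.uniform(0.8, 1.2)
        y_aug = librosa.effects.time_stretch(y, rate=stretch_factor)
        volume_change = random.uniform(-6, 6)  # in dB
        y_aug = y_aug * (10 ** (volume_change / 20))
        S = librosa.feature.melspectrogram(y=y_aug, sr=sr, n_mels=n_mels)
        specs.append(librosa.power_to_db(S, ref=np.max))
    return specs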

But the gap to 8th place is still huge... clearly not something data augmentation alone can close.


        I figured I could switch to a different model and a different kind of data: the results above come from audio features alone, yet the image content of a video is also an important cue for judging authenticity.

So I tried the ResNet-3D model (r3d_18), which can handle video data, to classify the temporal image frames of each video.

PyTorch documentation for r3d_18: https://pytorch.org/vision/main/models/generated/torchvision.models.video.r3d_18.html

        Since I run the project on Kaggle, storage is quite limited, so I extracted only 16 frames per video; the extracted features take up roughly 10 GB of disk space, and the prediction accuracy did not surpass the audio-only model. If you have the resources, try increasing the frame count to 256 and see how it goes.
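
As a rough sanity check on that figure (assuming the default R3D-18 preprocessing crops frames to 112*112 and the tensors are saved as float16, as in the code below): 16 * 3 * 112 * 112 * 2 bytes is about 1.2 MB per video, and with roughly 10,000 clips in the phase-1 sample that adds up to around 11-12 GB, consistent with the ~10 GB observed.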

import torch
torch.manual_seed(0)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True

import torchvision.models as models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data.dataset import Dataset
from torch.utils.data.dataloader import DataLoader
import timm
import time

import pandas as pd
import numpy as np
import cv2, glob, os

from torchvision.models.video import r3d_18, R3D_18_Weights



def read_video_frames(video_path, target_frames=16):
    frames = []
    cap = cv2.VideoCapture(video_path)
    
    # Total number of frames in the video
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    
    # If the video has fewer frames than the target, work out how many times to repeat them
    if total_frames < target_frames:
        repeat_factor = target_frames // total_frames + 1
    
    while(cap.isOpened()):
        ret, frame = cap.read()
        if not ret:
            break
        frames.append(frame)
    
    cap.release()
    
    # Convert the list to a NumPy array
    # (note: cv2 returns frames in BGR order; converting to RGB with cv2.cvtColor may
    # better match the pretrained R3D-18 weights)
    frames = np.array(frames)
    
    # Fewer frames than the target: repeat frames until the target count is reached
    if len(frames) < target_frames:
        frames = np.tile(frames, (repeat_factor, 1, 1, 1))[:target_frames]
    
    # More frames than the target: sample the target number of frames uniformly
    elif len(frames) > target_frames:
        sample_indices = np.linspace(0, len(frames)-1, target_frames, dtype=int)
        frames = frames[sample_indices]
    
    # Convert to a PyTorch tensor
    frames = torch.from_numpy(frames)
    frames = frames.permute(0, 3, 1, 2)  # reorder dimensions to [frames, channels, height, width]

    # Apply the ResNet-3D preprocessing pipeline to the extracted frames
    frames = preprocess(frames)
    return frames

def validate(val_loader, model, criterion):
    batch_time = AverageMeter('Time', ':6.3f')
    losses = AverageMeter('Loss', ':.4e')
    top1 = AverageMeter('Acc@1', ':6.2f')
    progress = ProgressMeter(len(val_loader), batch_time, losses, top1)

    # switch to evaluate mode
    model.eval()

    with torch.no_grad():
        end = time.time()
        for i, (input, target) in enumerate(val_loader):
            input = input.cuda()
            target = target.cuda()

            # compute output
            output = model(input)
            loss = criterion(output, target)

            # measure accuracy and record loss
            acc = (output.argmax(1).view(-1) == target.float().view(-1)).float().mean() * 100
            losses.update(loss.item(), input.size(0))
            top1.update(acc, input.size(0))
            # measure elapsed time
            batch_time.update(time.time() - end)
            end = time.time()

        # TODO: this should also be done with the ProgressMeter
        print(' * Acc@1 {top1.avg:.3f}'
              .format(top1=top1))
        return top1

def predict(test_loader, model, tta=10):
    # switch to evaluate mode
    model.eval()
    
    test_pred_tta = None
    for _ in range(tta):
        test_pred = []
        with torch.no_grad():
            end = time.time()
            for i, (input, target) in enumerate(test_loader):
                input = input.cuda()
                target = target.cuda()

                # compute output
                output = model(input)
                output = F.softmax(output, dim=1)
                output = output.data.cpu().numpy()

                test_pred.append(output)
        test_pred = np.vstack(test_pred)
    
        if test_pred_tta is None:
            test_pred_tta = test_pred
        else:
            test_pred_tta += test_pred
    
    return test_pred_tta

def train(train_loader, model, criterion, optimizer, epoch):
    batch_time = AverageMeter('Time', ':6.3f')
    losses = AverageMeter('Loss', ':.4e')
    top1 = AverageMeter('Acc@1', ':6.2f')
    progress = ProgressMeter(len(train_loader), batch_time, losses, top1)

    # switch to train mode
    model.train()

    end = time.time()
    for i, (input, target) in enumerate(train_loader):
        input = input.cuda(non_blocking=True)
        target = target.cuda(non_blocking=True)

        # compute output
        output = model(input)
        loss = criterion(output, target)

        # measure accuracy and record loss
        losses.update(loss.item(), input.size(0))

        acc = (output.argmax(1).view(-1) == target.float().view(-1)).float().mean() * 100
        top1.update(acc, input.size(0))

        # compute gradient and do SGD step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        

        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()

        if i % 100 == 0:
            progress.pr2int(i)
class AverageMeter(object):
    """Computes and stores the average and current value.

    Attributes:
        name: name of the meter.
        fmt: format string controlling how values are displayed.
        val: current value.
        avg: running average.
        sum: running sum.
        count: number of samples seen.

    Methods:
        reset(): reset all statistics.
        update(val, n=1): update with a new value `val`, counted `n` times.
        __str__(): return a formatted string with the name, current value, and average.
    """
    def __init__(self, name, fmt=':f'):
        self.name = name
        self.fmt = fmt
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

    def __str__(self):
        fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
        return fmtstr.format(**self.__dict__)

class ProgressMeter(object):
    '''Manages several AverageMeter instances and prints progress during training.

    Attributes:
        batch_fmtstr: format string showing the current batch and the total batch count.
        meters: a list of AverageMeter instances.
        prefix: output prefix used to label the progress line.

    Methods:
        _get_batch_fmtstr(num_batches): build the batch format string from the total batch count.
        pr2int(batch): print the current batch progress and the statistics of all meters.
    '''
    def __init__(self, num_batches, *meters):
        self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
        self.meters = meters
        self.prefix = ""


    def pr2int(self, batch):
        entries = [self.prefix + self.batch_fmtstr.format(batch)]
        entries += [str(meter) for meter in self.meters]
        print('\t'.join(entries))

    def _get_batch_fmtstr(self, num_batches):
        num_digits = len(str(num_batches // 1))
        fmt = '{:' + str(num_digits) + 'd}'
        return '[' + fmt + '/' + fmt.format(num_batches) + ']'


# Build the dataset
class VideoDataset(Dataset):
    def __init__(self, data_df):
        self.data_df = data_df
        self.data_df = self.data_df.reset_index(drop=True)
        self.transform = None  # data augmentation / preprocessing transforms could be defined here
        
    def __getitem__(self, index):
        # Get the video path and label from the DataFrame
        video_path = self.data_df.loc[index, 'path']
        label = self.data_df.loc[index, 'target']
        
        # Load the preprocessed video tensor
        video_data = torch.load(video_path).to(torch.float32)
        
        # Apply transforms, if any
        if self.transform:
            video_data = self.transform(video_data)
        
        return video_data, torch.tensor(label)
    
    def __len__(self):
        return len(self.data_df)
    

# Create directories for the preprocessed video frames
os.makedirs('processed_videos_train', exist_ok=True)
os.makedirs('processed_videos_val', exist_ok=True)

# Preprocessing: read frames from each video file; the result has shape [channels, frames, height, width]
weights = R3D_18_Weights.DEFAULT
preprocess = weights.transforms()

# Process the training-set videos
for video_path in glob.glob('/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset/*.mp4'):
    video = read_video_frames(video_path).to(torch.float16)
    video_name = video_path.split('/')[-1][:-4]
    torch.save(video, f'processed_videos_train/{video_name}.pt')
    #video_dir_train[video_name] = video

# Process the validation-set videos
for video_path in glob.glob('/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/valset/*.mp4'):
    video = read_video_frames(video_path).to(torch.float16)
    video_name = video_path.split('/')[-1][:-4]
    torch.save(video, f'processed_videos_val/{video_name}.pt')
    #video_dir_val[video_name] = video

# Attach the preprocessed frame tensors and labels into datasets
train_label = pd.read_csv("/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/trainset_label.txt")
val_label = pd.read_csv("/kaggle/input/ffdv-phase1-sample-10k/ffdv_phase1_sample-0708/valset_label.txt")

train_label['path'] = 'processed_videos_train/' + train_label['video_name'].apply(lambda x: x[:-4] + '.pt')
val_label['path'] = 'processed_videos_val/' + val_label['video_name'].apply(lambda x: x[:-4] + '.pt')

train_label = train_label[train_label['path'].apply(os.path.exists)]
val_label = val_label[val_label['path'].apply(os.path.exists)]

# Create the dataset instances
train_dataset = VideoDataset(train_label)
val_dataset = VideoDataset(val_label)

# DataLoaders
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=4, pin_memory=True)
val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False, num_workers=4, pin_memory=True)

# Initialize the model
weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights)
num_classes = 2  # change the number of classes to match your task
model.fc = nn.Linear(model.fc.in_features, num_classes)
model = model.to('cuda')

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.85)
best_acc = 0.0
for epoch in range(20):
    
    print('Epoch: ', epoch)

    train(train_loader, model, criterion, optimizer, epoch)
    scheduler.step() 
    val_acc = validate(val_loader, model, criterion)
    
    if val_acc.avg.item() > best_acc:
        best_acc = round(val_acc.avg.item(), 2)
        torch.save(model.state_dict(), f'./model_video.pt')

# Predict with the best checkpoint and fill the predictions into the submission file
modelpath = './model_video.pt'
model.load_state_dict(torch.load(modelpath))
val_pred = predict(val_loader, model, 1)[:, 1]
val_label["y_pred"] = val_pred
submit = pd.read_csv("/kaggle/input/multi-ffdv/prediction.txt.csv")
merged_df = submit.merge(val_label[['video_name', 'y_pred']], on='video_name', suffixes=('', '_df2'), how='left', )
merged_df['y_pred'] = merged_df['y_pred_df2'].combine_first(merged_df['y_pred'])

merged_df[['video_name', 'y_pred']].to_csv('submit.csv', index=None)


        At this point we have two models in hand: one based on audio features and one based on the temporal image frames of the video.
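
Before building anything genuinely multimodal, one quick way to combine the two is simple late fusion: average the fake probabilities that each model predicts for the same video. A sketch, assuming the two pipelines have written hypothetical submission files submit_audio.csv and submit_video.csv, each with the same video_name / y_pred columns:

import pandas as pd

# Average the per-video fake probabilities from the audio model and the video model
audio_pred = pd.read_csv('submit_audio.csv')   # columns: video_name, y_pred
video_pred = pd.read_csv('submit_video.csv')   # columns: video_name, y_pred

fused = audio_pred.merge(video_pred, on='video_name', suffixes=('_audio', '_video'))
fused['y_pred'] = 0.5 * fused['y_pred_audio'] + 0.5 * fused['y_pred_video']
fused[['video_name', 'y_pred']].to_csv('submit_fused.csv', index=None)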

        A next step would be to use the CLIP idea to align the two modalities (audio and image) and build a multimodal model that makes full use of the information in the video data. Since I know very little about multimodality, CV, or audio, I was not able to build a multimodal training framework within this one-week study period; I only sketch the rough idea here and welcome corrections from fellow learners so we can improve together.
openai/CLIP on GitHub: https://github.com/openai/CLIP

        CLIP (Contrastive Language-Image Pre-Training) pretraining, illustrated in the figure in the original post, works roughly as follows:

  1. Prepare a large number of paired images and texts (i.e. labeled images)
  2. Pass them through the image and text encoders to obtain the feature vectors Ti and Ij
  3. Compute the contrastive loss: push cos(Ti, Ij) toward 1 when i = j and toward 0 when i != j, i.e. encourage matched pairs to be more similar than mismatched pairs
  4. Backpropagate and optimize
  5. Iterate the training

# Dimensions of the audio and video features
audio_feature_dim = 128  # assumed audio feature dimension
video_feature_dim = 256  # assumed video feature dimension
target_dim = 128  # common feature dimension after projection

# Linear projection heads
audio_linear = nn.Linear(audio_feature_dim, target_dim)
video_linear = nn.Linear(video_feature_dim, target_dim)

# Loss function and optimizer
criterion = nn.CosineEmbeddingLoss(margin=0.0)
optimizer = optim.Adam(list(audio_linear.parameters()) + list(video_linear.parameters()), lr=0.001)

# Assume the paired feature data is already available
data_loader = DataLoader(...)  

def train_epoch(data_loader, audio_linear, video_linear, criterion, optimizer):
    audio_linear.train()
    video_linear.train()

    for audio_features, video_features in data_loader:
        # Project both modalities into the shared space
        audio_features = audio_linear(audio_features)
        video_features = video_linear(video_features)

        # Cosine similarity target: matched audio/video pairs should have similarity 1
        target = torch.ones((audio_features.size(0),), dtype=torch.float32).to(audio_features.device)
        loss = criterion(audio_features, video_features, target)

        # Backpropagation and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        print(f"Loss: {loss.item()}")

# Start training
num_epochs = 10
for epoch in range(num_epochs):
    print(f"Epoch {epoch+1}/{num_epochs}")
    train_epoch(data_loader, audio_linear, video_linear, criterion, optimizer)
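
Note that CosineEmbeddingLoss as used above only pulls matched audio/video pairs together. CLIP itself uses a symmetric contrastive loss over the whole batch, which also pushes mismatched pairs apart. A rough sketch of that batch-wise loss, reusing the projected features from audio_linear / video_linear; the temperature value is an assumption (CLIP actually learns it as a parameter):

import torch
import torch.nn.functional as F

def clip_style_loss(audio_features, video_features, temperature=0.07):
    # L2-normalize so that dot products become cosine similarities
    a = F.normalize(audio_features, dim=-1)
    v = F.normalize(video_features, dim=-1)
    # [batch, batch] similarity matrix; the diagonal holds the matched pairs
    logits = a @ v.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: match audio-to-video and video-to-audio
    loss_a2v = F.cross_entropy(logits, targets)
    loss_v2a = F.cross_entropy(logits.t(), targets)
    return (loss_a2v + loss_v2a) / 2

This could replace criterion inside train_epoch above to get closer to the actual CLIP training objective.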

       This is probably the end of my notes on the deepfake project. As my first publicly posted study material it still has many shortcomings, but this mode of recording, sharing, and exchanging ideas has given me unprecedented motivation (and pressure, haha). I hope to keep this open style of learning and remain an open learner.

        Thanks to the organizer Datawhale, to all the summer camp staff, and above all to all the selfless learners.

       


        A final note: I am actually a student working in the AI4S (AI for Science) direction; I don't fully understand the science and barely understand the AI. The road ahead is long, and I hope to meet others interested in AI4S so we can join some competitions together, sharpen our skills, and broaden our understanding. That's all for now, best wishes to everyone~
