InternLM (书生·浦语) Large Model Practical Training Camp: InternVL Fine-tuning Task

Preparing the InternVL model
We use the InternVL2-2B model. It is already mounted under the share folder, so we only need to create a symlink to it:

cd /root
mkdir -p model
ln -s /root/share/new_models/OpenGVLab/InternVL2-2B /root/model

Preparing the environment
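A minimal sketch of a typical environment setup (the env name, the unpinned package versions, and the DeepSpeed extra are assumptions; follow the official tutorial for the exact pins):

conda create -n xtuner python=3.10 -y
conda activate xtuner
# XTuner for fine-tuning, lmdeploy for inference
pip install -U 'xtuner[deepspeed]' lmdeploy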
We downloaded the dataset from the official source, deduplicated it, kept only the Chinese data, and converted it into the format XTuner expects. It is also available under share, so we again create a symlink:

mkdir -p /root/InternLM/datasets
ln -s /root/share/new_models/datasets/CLoT_cn_2000 /root/InternLM/datasets/
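For reference, XTuner's InternVL_V1_5_Dataset reads LLaVA-style annotations. A sketch of what one entry in ex_cn.json roughly looks like (the image path and texts below are made-up placeholders):

[
  {
    "image": "ex_images/0001.jpg",
    "conversations": [
      {"from": "human", "value": "<image>\nCome up with a creative punchline for this picture."},
      {"from": "gpt", "value": "(witty caption here)"}
    ]
  }
]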
Running inference with the base 2B model shows that it cannot come up with good punchlines on its own, so we now fine-tune it.
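A minimal sketch of this pre-fine-tuning test using lmdeploy's vision-language pipeline (the sample image path and the prompt are placeholders):

from lmdeploy import pipeline
from lmdeploy.vl import load_image

# load the base, not-yet-fine-tuned model
pipe = pipeline('/root/model/InternVL2-2B')

# any sample image from the dataset works here (placeholder path)
image = load_image('/root/InternLM/datasets/CLoT_cn_2000/ex_images/sample.jpg')
response = pipe(('Come up with a creative punchline for this picture.', image))
print(response.text)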
The complete config file

# Copyright © OpenMMLab. All rights reserved.
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import AutoTokenizer

from xtuner.dataset import InternVL_V1_5_Dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.samplers import LengthGroupedSampler
from xtuner.engine.hooks import DatasetInfoHook
from xtuner.engine.runner import TrainLoop
from xtuner.model import InternVL_V1_5
from xtuner.utils import PROMPT_TEMPLATE

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
path = '/root/model/InternVL2-2B'

# Data
data_root = '/root/InternLM/datasets/CLoT_cn_2000/'
data_path = data_root + 'ex_cn.json'
image_folder = data_root
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 6656

# Scheduler & Optimizer
batch_size = 4  # per_device
accumulative_counts = 4
dataloader_num_workers = 4
max_epochs = 6
optim_type = AdamW
# official 1024 -> 4e-5
lr = 2e-5
betas = (0.9, 0.999)
weight_decay = 0.05
max_norm = 1  # grad clip
warmup_ratio = 0.03

# Save
save_steps = 1000
save_total_limit = 1  # Maximum checkpoints to keep (-1 means unlimited)

#######################################################################
#            PART 2  Model & Tokenizer & Image Processor              #
#######################################################################
model = dict(
    type=InternVL_V1_5,
    model_path=path,
    freeze_llm=True,
    freeze_visual_encoder=True,
    quantization_llm=True,  # or False
    quantization_vit=False,  # or True and uncomment visual_encoder_lora
    # comment the following lines if you don't want to use Lora in llm
    llm_lora=dict(
        type=LoraConfig,
        r=128,
        lora_alpha=256,
        lora_dropout=0.05,
        target_modules=None,
        task_type='CAUSAL_LM'),
    # uncomment the following lines if you don't want to use Lora in visual encoder  # noqa
    # visual_encoder_lora=dict(
    #     type=LoraConfig, r=64, lora_alpha=16, lora_dropout=0.05,
    #     target_modules=['attn.qkv', 'attn.proj', 'mlp.fc1', 'mlp.fc2'])
)

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
llava_dataset = dict(
    type=InternVL_V1_5_Dataset,
    model_path=path,
    data_paths=data_path,
    image_folders=image_folder,
    template=prompt_template,
    max_length=max_length)

train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=llava_dataset,
    sampler=dict(
        type=LengthGroupedSampler,
        length_property='modality_length',
        per_device_batch_size=batch_size * accumulative_counts),
    collate_fn=dict(type=default_collate_fn))

#######################################################################
#                    PART 4  Scheduler & Optimizer                    #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')

# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md  # noqa: E501
param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True)
]

# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)

#######################################################################
#                           PART 5  Runtime                           #
#######################################################################
# Log the dialogue periodically during the training process, optional
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=path,
    trust_remote_code=True)

custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
]

# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per save_steps.
    checkpoint=dict(
        type=CheckpointHook,
        save_optimizer=False,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)

# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)

# set visualizer
visualizer = None

# set log level
log_level = 'INFO'

# load from which checkpoint
load_from = None

# whether to resume training from the loaded checkpoint
resume = False

# Defaults to use random seed and disable deterministic
randomness = dict(seed=None, deterministic=False)

# set log processor
log_processor = dict(by_epoch=False)
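Save the config as a .py file and launch training with XTuner. A sketch for a single-GPU run (the config file name and work dir are assumptions; the DeepSpeed flag is optional):

cd /root/InternLM
NPROC_PER_NODE=1 xtuner train internvl_v2_internlm2_2b_qlora_finetune.py \
    --work-dir ./work_dirs/internvl_ft_run \
    --deepspeed deepspeed_zero1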

Fine-tuning took roughly six hours on an A100 machine.
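To run inference with the fine-tuned weights, the LoRA checkpoint saved by XTuner first has to be merged back into the official InternVL layout. A sketch using the convert_to_official.py script shipped in the XTuner repository (the repo path, checkpoint iteration, and output directory are assumptions; adjust them to your run):

cd /root/InternLM/XTuner
python xtuner/configs/internvl/v1_5/convert_to_official.py \
    internvl_v2_internlm2_2b_qlora_finetune.py \
    ./work_dirs/internvl_ft_run/iter_3000.pth \
    /root/model/InternVL2-2B-ft/

After the merge, the earlier lmdeploy snippet can simply be pointed at /root/model/InternVL2-2B-ft/ to compare outputs against the base model.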
The fine-tuned model's outputs are fairly decent.
