launch.json Files for Debugging Distributed Training Code in VSCode

        This post records the original distributed training launch scripts and the corresponding launch.json files that VSCode needs to debug them. The approach works for torchrun, torch.distributed.launch, deepspeed, and similar launchers.

1 Original Launch Script

        Launching a distributed training script directly with torch.distributed.launch sometimes fails because the local-rank argument is not recognized. This happens with newer versions of torch, where the launcher passes --local-rank (hyphenated) while older scripts expect --local_rank; adding the --use_env flag avoids the problem, since the launcher then delivers the local rank through the LOCAL_RANK environment variable instead of a command-line argument. To run several distributed training jobs on the same machine at once, each job also needs its own master port, which can be set with --master_port (the default is 29500); otherwise the jobs collide and fail to start.

# Set the path to save checkpoints
export OUTPUT_DIR='./output/pretrain_mae_base_patch16_224_train'
# path to imagenet-1k train set
export DATA_PATH='./data_hub/ImageNet_ILSVRC2012/train'

export MASTER_PORT=29500

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7

# batch_size can be adjusted according to the graphics card
OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=8 --use_env --master_port=29500 run_mae_pretraining.py \
        --mask_ratio 0.75 \
        --model pretrain_mae_base_patch16_224 \
        --batch_size 128 \
        --opt adamw \
        --opt_betas 0.9 0.95 \
        --warmup_epochs 40 \
        --epochs 1600 \
        --save_ckpt_freq 20

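torch.distributed.launch is deprecated in recent PyTorch releases in favor of torchrun; under that assumption, the same job could be started roughly as follows (torchrun always delivers the local rank via the LOCAL_RANK environment variable, so no --use_env is needed):

```shell
# Sketch of the equivalent torchrun command; arguments carried over
# from the script above, flag names per the torchrun CLI
OMP_NUM_THREADS=1 torchrun --nproc_per_node=8 --master_port=29500 run_mae_pretraining.py \
        --mask_ratio 0.75 \
        --model pretrain_mae_base_patch16_224 \
        --batch_size 128 \
        --opt adamw \
        --opt_betas 0.9 0.95 \
        --warmup_epochs 40 \
        --epochs 1600 \
        --save_ckpt_freq 20
```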
2 Corresponding launch.json File

        The key points: the value of program must be the launch.py file, and the actual training entry script moves into the args list. VSCode's ${workspaceFolder} variable expands to the absolute path of the current workspace folder.
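Note that the tilde in the program path may not be expanded by the debugger, so an absolute path is safer. Assuming torch is installed in the active conda environment, the exact path can be printed with:

```shell
# Print the absolute location of torch's launch.py to paste into "program"
python -c "import torch.distributed.launch as m; print(m.__file__)"
```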

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "~/miniconda3/envs/mae/lib/python3.11/site-packages/torch/distributed/launch.py",
            "console": "integratedTerminal",
            "justMyCode": false,
            "env": {
                "OUTPUT_DIR": "${workspaceFolder}/output/pretrain_mae_base_patch16_224_train",
                "DATA_PATH": "${workspaceFolder}/data_hub/ImageNet_ILSVRC2012/train",
                "OMP_NUM_THREADS": "1",
                "CUDA_VISIBLE_DEVICES": "0,1,2,3,4,5,6,7",
                "MASTER_PORT": "29500",
            },
            "args": [
                "--nproc_per_node", "8",
                "--use_env",
                "--master_port", "29500",
                "${workspaceFolder}/run_mae_pretraining.py",
                "--mask_ratio", "0.75",
                "--model", "pretrain_mae_base_patch16_224",
                "--batch_size", "128",
                "--opt", "adamw",
                "--opt_betas", "0.9", "0.95",
                "--warmup_epochs", "40",
                "--epochs", "1600"
            ]
        }
    ]
}

3 Multiple Debug Configurations

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "finetune",
            "type": "debugpy",
            "request": "launch",
            "program": "~/miniconda3/envs/mae/lib/python3.11/site-packages/torch/distributed/launch.py",
            "console": "integratedTerminal",
            "justMyCode": false,
            "purpose":["debug-in-terminal"],
            "env": {
                "OUTPUT_DIR": "${workspaceFolder}/output/finetune_mae_base_patch16_224_debug",
                "DATA_PATH": "${workspaceFolder}/data_hub/ImageNet_ILSVRC2012",
                "MODEL_PATH": "${workspaceFolder}/output/pretrain_mae_base_patch16_224_train/checkpoint-779.pth",
                "OMP_NUM_THREADS": "1",
                "CUDA_VISIBLE_DEVICES": "0,1,2,3,4,5,6,7",
                "MASTER_PORT": "39500",
            },
            "args": [
                "--nproc_per_node", "8",
                "--use_env",
                "--master_port", "39500",
                "${workspaceFolder}/run_class_finetuning.py",
                "--model", "vit_base_patch16_224",
                "--batch_size", "128",
                "--opt", "adamw",
                "--opt_betas", "0.9", "0.999",
                "--weight_decay", "0.05",
                "--epochs", "100",
                "--dist_eval"
            ]
        },
        {
            "name": "pretrain",
            "type": "debugpy",
            "request": "launch",
            "program": "~/miniconda3/envs/mae/lib/python3.11/site-packages/torch/distributed/launch.py",
            "console": "integratedTerminal",
            "justMyCode": false,
            "purpose":["debug-in-terminal"],
            "env": {
                "OUTPUT_DIR": "${workspaceFolder}/output/pretrain_mae_base_patch16_224_train",
                "DATA_PATH": "${workspaceFolder}/data_hub/ImageNet_ILSVRC2012/train",
                "OMP_NUM_THREADS": "1",
                "CUDA_VISIBLE_DEVICES": "0,1,2,3,4,5,6,7",
                "MASTER_PORT": "19500",
            },
            "args": [
                "--nproc_per_node", "8",
                "--use_env",
                "--master_port", "19500",
                "${workspaceFolder}/run_mae_pretraining.py",
                "--mask_ratio", "0.75",
                "--model", "pretrain_mae_base_patch16_224",
                "--batch_size", "128",
                "--opt", "adamw",
                "--opt_betas", "0.9", "0.95",
                "--warmup_epochs", "40",
                "--epochs", "800"
            ]
        }
    ]
}

        When debugging in VSCode, switching between the two setups only requires selecting the corresponding configuration name, "pretrain" or "finetune", in the debug panel.
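Each configuration above uses a distinct MASTER_PORT (39500 and 19500). Before adding another configuration, a quick probe can confirm that a candidate port is still free; a minimal sketch (the port 49500 below is just an example value):

```shell
# Probe whether a candidate master port is free before adding a new
# configuration; binding succeeds only if nothing is listening on it
PORT=49500
RESULT=$(python - "$PORT" <<'EOF'
import socket, sys

port = int(sys.argv[1])
s = socket.socket()
try:
    # Bind to the port on localhost; an OSError means it is taken
    s.bind(("127.0.0.1", port))
    print(f"port {port} is free")
except OSError:
    print(f"port {port} is in use")
finally:
    s.close()
EOF
)
echo "$RESULT"
```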

4 DeepSpeed and Other Debugging Scenarios

4.1 Original Launch Script

deepspeed llava/train/train_mem.py \
    --deepspeed ./scripts/zero2.json \
    --model_name_or_path ./hf_hub/vicuna-13b-v1.5 \
    --version plain \
    --data_path ./playground/data/LLaVA-Pretrain/blip_laion_cc_sbu_558k.json \
    --image_folder ./playground/data/LLaVA-Pretrain/images \
    --vision_tower ./hf_hub/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-pretrain-336 \
    --num_train_epochs 1 \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 24000 \
    --save_total_limit 1 \
    --learning_rate 1e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb

4.2 Corresponding launch.json File
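Here program points at the deepspeed console script rather than a .py file, which works because that entry point is itself a Python script. Its path is environment-specific; with the target conda environment activated, one way to locate it:

```shell
# Print the path of the deepspeed launcher to paste into "program";
# falls back to a message if it is not on PATH
command -v deepspeed || echo "deepspeed not found on PATH"
```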

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Deepspeed Pretrain",
            "type": "debugpy",
            "request": "launch",
            "program": "~/anaconda3/envs/llava/bin/deepspeed",
            "console": "integratedTerminal",
            "justMyCode": false,
            "purpose": ["debug-in-terminal"],
            "args": [
                "llava/train/train_mem.py",
                "--deepspeed", "./scripts/zero2.json",
                "--model_name_or_path", "./hf_hub/vicuna-13b-v1.5",
                "--version", "plain",
                "--data_path", "${workspaceFolder}/playground/data/LLaVA-Pretrain/blip_laion_cc_sbu_558k.json",
                "--image_folder", "${workspaceFolder}/playground/data/LLaVA-Pretrain/images",
                "--vision_tower", "${workspaceFolder}/hf_hub/openaiclip-vit-large-patch14",
                "--mm_projector_type", "mlp2x_gelu",
                "--tune_mm_mlp_adapter", "True",
                "--mm_vision_select_layer", "-2",
                "--mm_use_im_start_end", "False",
                "--mm_use_im_patch_token", "False",
                "--bf16", "True",
                "--output_dir", "${workspaceFolder}/checkpoints/llava-v1.5-13b-pretrain",
                "--num_train_epochs", "1",
                "--per_device_train_batch_size", "32",
                "--per_device_eval_batch_size", "4",
                "--gradient_accumulation_steps", "1",
                "--evaluation_strategy", "no",
                "--save_strategy", "steps",
                "--save_steps", "24000",
                "--save_total_limit", "1",
                "--learning_rate", "1e-3",
                "--weight_decay", "0.",
                "--warmup_ratio", "0.03",
                "--lr_scheduler_type", "cosine",
                "--logging_steps", "1",
                "--tf32", "True",
                "--model_max_length", "2048",
                "--gradient_checkpointing", "True",
                "--dataloader_num_workers", "4",
                "--lazy_preprocess", "True",
                "--report_to", "wandb"
            ]
        }
    ]
}