Getting to Know InternLM2

This article reviews the development of large models, driven by growing data volumes and increasing compute, and the model families built on them such as Transformer, BERT, and GPT. It then walks through the training configuration examples for the InternLM 7B and 30B demos, introduces Lagent, a framework for building agents on top of large language models, and explains how the InternLM training framework enables efficient multi-GPU training and performance optimization.
  • The emergence and development of large models have been driven by growing data volumes, increasing compute, and algorithmic improvements. These models show remarkable performance on a wide range of tasks, including natural language processing, computer vision, and speech recognition. They are typically built on deep neural network architectures such as the Transformer, BERT, and GPT (Generative Pre-trained Transformer). Their strength lies in capturing more complex and abstract features and relationships in data: by learning a massive number of parameters, they generalize better across tasks and can perform well even without extensive domain-specific training data. They also face challenges, however, including enormous compute requirements, high training costs, dependence on large-scale data, and limited interpretability, so their application and development must balance performance, cost, and ethical considerations. InternLM-7B consists of a 7-billion-parameter base model and a chat model tailored for practical scenarios. It has two notable features: 1) it is trained on trillions of high-quality tokens, building a strong knowledge base; 2) it supports an 8k-token context window, allowing longer input sequences and stronger reasoning.

  • InternLM is an open-source, lightweight training framework designed to support large-model training without heavy dependencies. With a single codebase, it supports pretraining on large clusters of thousands of GPUs and fine-tuning on a single GPU, while delivering strong performance optimizations; when training on 1024 GPUs, InternLM achieves nearly 90% scaling efficiency. Based on the InternLM training framework, Shanghai AI Laboratory has released two open-source pretrained models: InternLM-7B and InternLM-20B. Lagent is a lightweight, open-source agent framework built on large language models; it lets users quickly turn an LLM into several kinds of agents and ships with typical tools that extend what the LLM can do. Through Lagent, the full capability of InternLM can be put to better use, as the sketch below illustrates.
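
  • To make the Lagent description above concrete, here is a minimal sketch that wraps an InternLM chat model as a ReAct-style agent with a Python-interpreter tool. The module paths and class names (HFTransformerCasualLM, ReAct, ActionExecutor, PythonInterpreter) follow early Lagent releases and may differ in newer versions, so treat them as assumptions rather than a definitive API reference:

    • # Minimal sketch: turn an InternLM chat model into a ReAct-style agent with Lagent.
      # Class names follow early Lagent releases and may have changed since; treat them
      # as assumptions, not a definitive API reference.
      from lagent.actions import ActionExecutor, PythonInterpreter
      from lagent.agents.react import ReAct
      from lagent.llms import HFTransformerCasualLM

      # Load the open-source InternLM chat model through Hugging Face Transformers.
      llm = HFTransformerCasualLM("internlm/internlm-chat-7b")

      # Give the agent a single tool: a Python interpreter for calculation-style tasks.
      agent = ReAct(
          llm=llm,
          action_executor=ActionExecutor(actions=[PythonInterpreter()]),
      )

      # The agent decides when to call the tool and when to answer directly.
      result = agent.chat("Solve 2*x + 3 = 11 for x.")
      # In the releases this sketch assumes, the returned object carries the final
      # answer in a `.response` field; adjust to your Lagent version if it differs.
      print(getattr(result, "response", result))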

  • A sample training configuration file for the 7B demo is shown below (a short note on the token budget it implies follows the config):

    • JOB_NAME = "7b_train"
      SEQ_LEN = 2048
      HIDDEN_SIZE = 4096
      NUM_ATTENTION_HEAD = 32
      MLP_RATIO = 8 / 3
      NUM_LAYER = 32
      VOCAB_SIZE = 103168
      MODEL_ONLY_FOLDER = "local:llm_ckpts/xxxx"
      # Ckpt folder format:
      # fs: 'local:/mnt/nfs/XXX'
      SAVE_CKPT_FOLDER = "local:llm_ckpts"
      LOAD_CKPT_FOLDER = "local:llm_ckpts/49"
      # boto3 Ckpt folder format:
      # import os
      # BOTO3_IP = os.environ["BOTO3_IP"] # boto3 bucket endpoint
      # SAVE_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm"
      # LOAD_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm/snapshot/1/"
      CHECKPOINT_EVERY = 50
      ckpt = dict(
          enable_save_ckpt=False,  # enable ckpt save.
          save_ckpt_folder=SAVE_CKPT_FOLDER,  # Path to save training ckpt.
          # load_ckpt_folder=LOAD_CKPT_FOLDER, # Ckpt path to resume training(load weights and scheduler/context states).
          # load_model_only_folder=MODEL_ONLY_FOLDER, # Path to initialize with given model weights.
          load_optimizer=True,  # Whether to load optimizer states when continuing training.
          checkpoint_every=CHECKPOINT_EVERY,
          async_upload=True,  # async ckpt upload (only works for boto3 ckpt).
          async_upload_tmp_folder="/dev/shm/internlm_tmp_ckpt/",  # path for temporary files during asynchronous upload.
          snapshot_ckpt_folder="/".join([SAVE_CKPT_FOLDER, "snapshot"]),  # directory for snapshot ckpt storage path.
          oss_snapshot_freq=int(CHECKPOINT_EVERY / 2),  # snapshot ckpt save frequency.
      )
      TRAIN_FOLDER = "/path/to/dataset"
      VALID_FOLDER = "/path/to/dataset"
      data = dict(
          seq_len=SEQ_LEN,
          # micro_num means the number of micro_batch contained in one gradient update
          micro_num=4,
          # packed_length = micro_bsz * SEQ_LEN
          micro_bsz=2,
          # defaults to the value of micro_num
          valid_micro_num=4,
          # defaults to 0, which disables evaluation
          valid_every=50,
          pack_sample_into_one=False,
          total_steps=50000,
          skip_batches="",
          rampup_batch_size="",
          # Datasets with fewer than 50 rows will be discarded
          min_length=50,
          # train_folder=TRAIN_FOLDER,
          # valid_folder=VALID_FOLDER,
      )
      grad_scaler = dict(
          fp16=dict(
              # the initial loss scale, defaults to 2**16
              initial_scale=2**16,
              # the minimum loss scale, defaults to None
              min_scale=1,
              # the number of steps to increase loss scale when no overflow occurs
              growth_interval=1000,
          ),
          # the multiplication factor for increasing loss scale, defaults to 2
          growth_factor=2,
          # the multiplication factor for decreasing loss scale, defaults to 0.5
          backoff_factor=0.5,
          # the maximum loss scale, defaults to None
          max_scale=2**24,
          # the number of overflows before decreasing loss scale, defaults to 2
          hysteresis=2,
      )
      hybrid_zero_optimizer = dict(
          # Enable low_level_optimizer overlap_communication
          overlap_sync_grad=True,
          overlap_sync_param=True,
          # bucket size for nccl communication params
          reduce_bucket_size=512 * 1024 * 1024,
          # grad clipping
          clip_grad_norm=1.0,
      )
      loss = dict(
          label_smoothing=0,
      )
      adam = dict(
          lr=1e-4,
          adam_beta1=0.9,
          adam_beta2=0.95,
          adam_beta2_c=0,
          adam_eps=1e-8,
          weight_decay=0.01,
      )
      
      lr_scheduler = dict(
          total_steps=data["total_steps"],
          init_steps=0,  # optimizer_warmup_step
          warmup_ratio=0.01,
          eta_min=1e-5,
          last_epoch=-1,
      )
      
      beta2_scheduler = dict(
          init_beta2=adam["adam_beta2"],
          c=adam["adam_beta2_c"],
          cur_iter=-1,
      )
      
      model = dict(
          checkpoint=False,  # The proportion of layers for activation checkpointing; optional values are True/False/[0-1]
          num_attention_heads=NUM_ATTENTION_HEAD,
          embed_split_hidden=True,
          vocab_size=VOCAB_SIZE,
          embed_grad_scale=1,
          parallel_output=True,
          hidden_size=HIDDEN_SIZE,
          num_layers=NUM_LAYER,
          mlp_ratio=MLP_RATIO,
          apply_post_layer_norm=False,
          dtype="torch.float16",  # Support: "torch.float16", "torch.half", "torch.bfloat16", "torch.float32", "torch.tf32"
          norm_type="rmsnorm",
          layer_norm_epsilon=1e-5,
          use_flash_attn=True,
          num_chunks=1,  # if num_chunks > 1, interleaved pipeline scheduler is used.
      )
      """
      zero1 parallel:
          1. if zero1 <= 0, the size of the zero process group is equal to the size of the dp process group,
              so parameters will be divided within the range of dp.
          2. if zero1 == 1, zero is not used, and all dp groups retain the full amount of model parameters.
          3. if zero1 > 1 and zero1 <= dp world size, the zero world size is a subset of the dp world size.
              For smaller models, it is usually a better choice to split the parameters within nodes with a setting <= 8.
      pipeline parallel (dict):
          1. size: int, the size of pipeline parallel.
          2. interleaved_overlap: bool, enable/disable communication overlap when using interleaved pipeline scheduler.
      tensor parallel: tensor parallel size, usually the number of GPUs per node.
      """
      parallel = dict(
          zero1=8,
          pipeline=dict(size=1, interleaved_overlap=True),
          sequence_parallel=False,
      )
      
      cudnn_deterministic = False
      cudnn_benchmark = False
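
  • The data section of the config above fixes the token budget of a single optimizer step: each micro-batch packs micro_bsz * SEQ_LEN tokens, micro_num micro-batches are accumulated per gradient update, and every data-parallel rank contributes its own micro-batches. The back-of-the-envelope sketch below uses the values from the 7B demo config; the data-parallel size is an illustrative assumption, since the config itself only fixes per-rank quantities:

    • # Back-of-the-envelope token throughput per optimizer step for the 7B demo config.
      # DP_SIZE is an assumption for illustration; the config only fixes per-rank values.
      SEQ_LEN = 2048
      MICRO_BSZ = 2      # sequences packed into one micro-batch
      MICRO_NUM = 4      # micro-batches accumulated per gradient update
      DP_SIZE = 128      # assumed number of data-parallel ranks (illustrative only)

      packed_length = MICRO_BSZ * SEQ_LEN               # tokens per micro-batch per rank
      tokens_per_rank_step = MICRO_NUM * packed_length  # tokens per optimizer step per rank
      tokens_per_global_step = tokens_per_rank_step * DP_SIZE

      print(f"packed_length          = {packed_length}")           # 4096
      print(f"tokens per rank/step   = {tokens_per_rank_step}")    # 16384
      print(f"tokens per global step = {tokens_per_global_step}")  # 2097152, about 2M tokens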
      
  • A sample training configuration file for the 30B demo is shown below (a note on the GPU partitioning it implies follows the config):

    • JOB_NAME = "30b_train"
      SEQ_LEN = 2048
      HIDDEN_SIZE = 6144
      NUM_ATTENTION_HEAD = 48
      MLP_RATIO = 8 / 3
      NUM_LAYER = 60
      VOCAB_SIZE = 103168
      MODEL_ONLY_FOLDER = "local:llm_ckpts/xxxx"
      # Ckpt folder format:
      # fs: 'local:/mnt/nfs/XXX'
      SAVE_CKPT_FOLDER = "local:llm_ckpts"
      LOAD_CKPT_FOLDER = "local:llm_ckpts/49"
      # boto3 Ckpt folder format:
      # import os
      # BOTO3_IP = os.environ["BOTO3_IP"] # boto3 bucket endpoint
      # SAVE_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm"
      # LOAD_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm/snapshot/1/"
      CHECKPOINT_EVERY = 50
      ckpt = dict(
          enable_save_ckpt=False,  # enable ckpt save.
          save_ckpt_folder=SAVE_CKPT_FOLDER,  # Path to save training ckpt.
          # load_ckpt_folder=LOAD_CKPT_FOLDER, # Ckpt path to resume training(load weights and scheduler/context states).
          # load_model_only_folder=MODEL_ONLY_FOLDER, # Path to initialize with given model weights.
          load_optimizer=True,  # Whether to load optimizer states when continuing training.
          checkpoint_every=CHECKPOINT_EVERY,
          async_upload=True,  # async ckpt upload (only works for boto3 ckpt).
          async_upload_tmp_folder="/dev/shm/internlm_tmp_ckpt/",  # path for temporary files during asynchronous upload.
          snapshot_ckpt_folder="/".join([SAVE_CKPT_FOLDER, "snapshot"]),  # directory for snapshot ckpt storage path.
          oss_snapshot_freq=int(CHECKPOINT_EVERY / 2),  # snapshot ckpt save frequency.
      )
      TRAIN_FOLDER = "/path/to/dataset"
      VALID_FOLDER = "/path/to/dataset"
      data = dict(
          seq_len=SEQ_LEN,
          # micro_num means the number of micro_batch contained in one gradient update
          micro_num=4,
          # packed_length = micro_bsz * SEQ_LEN
          micro_bsz=2,
          # defaults to the value of micro_num
          valid_micro_num=4,
          # defaults to 0, which disables evaluation
          valid_every=50,
          pack_sample_into_one=False,
          total_steps=50000,
          skip_batches="",
          rampup_batch_size="",
          # Datasets with fewer than 50 rows will be discarded
          min_length=50,
          # train_folder=TRAIN_FOLDER,
          # valid_folder=VALID_FOLDER,
      )
      grad_scaler = dict(
          fp16=dict(
              # the initial loss scale, defaults to 2**16
              initial_scale=2**16,
              # the minimum loss scale, defaults to None
              min_scale=1,
              # the number of steps to increase loss scale when no overflow occurs
              growth_interval=1000,
          ),
          # the multiplication factor for increasing loss scale, defaults to 2
          growth_factor=2,
          # the multiplication factor for decreasing loss scale, defaults to 0.5
          backoff_factor=0.5,
          # the maximum loss scale, defaults to None
          max_scale=2**24,
          # the number of overflows before decreasing loss scale, defaults to 2
          hysteresis=2,
      )
      hybrid_zero_optimizer = dict(
          # Enable low_level_optimizer overlap_communication
          overlap_sync_grad=True,
          overlap_sync_param=True,
          # bucket size for nccl communication params
          reduce_bucket_size=512 * 1024 * 1024,
          # grad clipping
          clip_grad_norm=1.0,
      )
      loss = dict(
          label_smoothing=0,
      )
      adam = dict(
          lr=1e-4,
          adam_beta1=0.9,
          adam_beta2=0.95,
          adam_beta2_c=0,
          adam_eps=1e-8,
          weight_decay=0.01,
      )
      
      lr_scheduler = dict(
          total_steps=data["total_steps"],
          init_steps=0,  # optimizer_warmup_step
          warmup_ratio=0.01,
          eta_min=1e-5,
          last_epoch=-1,
      )
      beta2_scheduler = dict(
          init_beta2=adam["adam_beta2"],
          c=adam["adam_beta2_c"],
          cur_iter=-1,
      )
      
      model = dict(
          checkpoint=False,  # The proportion of layers for activation checkpointing; optional values are True/False/[0-1]
          num_attention_heads=NUM_ATTENTION_HEAD,
          embed_split_hidden=True,
          vocab_size=VOCAB_SIZE,
          embed_grad_scale=1,
          parallel_output=True,
          hidden_size=HIDDEN_SIZE,
          num_layers=NUM_LAYER,
          mlp_ratio=MLP_RATIO,
          apply_post_layer_norm=False,
          dtype="torch.float16",  # Support: "torch.float16", "torch.half", "torch.bfloat16", "torch.float32", "torch.tf32"
          norm_type="rmsnorm",
          layer_norm_epsilon=1e-5,
          use_flash_attn=True,
          num_chunks=1,  # if num_chunks > 1, interleaved pipeline scheduler is used.
      )
      """
      zero1 parallel:
          1. if zero1 <= 0, the size of the zero process group is equal to the size of the dp process group,
              so parameters will be divided within the range of dp.
          2. if zero1 == 1, zero is not used, and all dp groups retain the full amount of model parameters.
          3. if zero1 > 1 and zero1 <= dp world size, the zero world size is a subset of the dp world size.
              For smaller models, it is usually a better choice to split the parameters within nodes with a setting <= 8.
      pipeline parallel (dict):
          1. size: int, the size of pipeline parallel.
          2. interleaved_overlap: bool, enable/disable communication overlap when using interleaved pipeline scheduler.
      tensor parallel: tensor parallel size, usually the number of GPUs per node.
      """
      parallel = dict(
          zero1=-1,
          tensor=4,
          pipeline=dict(size=1, interleaved_overlap=True),
          sequence_parallel=False,
      )
      cudnn_deterministic = False
      cudnn_benchmark = False
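
  • The parallel section above, together with its docstring, determines how GPUs are partitioned: the data-parallel size equals the world size divided by tensor * pipeline, and zero1 <= 0 makes the ZeRO group coincide with the data-parallel group. The sketch below derives these sizes from a given GPU count; the 32-GPU world size is an illustrative assumption, not part of the config:

    • # Sketch: derive the process-group layout implied by the 30B demo's parallel config.
      # The GPU count is an illustrative assumption; the rules follow the docstring above
      # (dp = world_size / (tensor * pipeline); zero1 <= 0 means zero group == dp group).
      def parallel_layout(world_size: int, tensor: int, pipeline: int, zero1: int) -> dict:
          assert world_size % (tensor * pipeline) == 0, "world size must be divisible by tp * pp"
          dp = world_size // (tensor * pipeline)  # data-parallel size
          if zero1 <= 0:
              zero = dp      # shard optimizer states across the whole dp group
          elif zero1 == 1:
              zero = 1       # ZeRO disabled: every dp rank keeps the full states
          else:
              assert dp % zero1 == 0, "zero1 must divide the dp size"
              zero = zero1   # shard within a subgroup of the dp group
          return {"dp": dp, "tp": tensor, "pp": pipeline, "zero": zero}

      # 30B demo config: tensor=4, pipeline size 1, zero1=-1, on an assumed 32-GPU job.
      print(parallel_layout(world_size=32, tensor=4, pipeline=1, zero1=-1))
      # -> {'dp': 8, 'tp': 4, 'pp': 1, 'zero': 8}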
      

30B Demo — InternLM 0.2.0 documentation
