6. Basic Level: Evaluating InternLM Models with OpenCompass

Basic Tasks (completing these tasks completes this level)

  • Use OpenCompass to evaluate the InternLM (Puyu) API, record the reproduction process, and take screenshots. (Note: when publishing a blog post for your homework, be sure to remove your own api_key!)
  • Use OpenCompass to evaluate the InternLM2.5-1.8B-Chat model on the C-Eval dataset, record the reproduction process, and take screenshots. (Optional)

Preparing the OpenCompass Evaluation Environment

This lesson focuses on evaluating large language models; evaluation methods for multimodal large models will be covered in later lessons.

OpenCompass supports two evaluation modes: API-based evaluation and direct local evaluation. API-based evaluation targets models deployed as API services, while direct local evaluation is for cases where the model weight files are available.

First, create a conda environment for evaluation on the dev machine provided by the training camp:

conda create -n opencompass python=3.10
conda activate opencompass

cd /root
git clone -b 0.3.3 https://github.com/open-compass/opencompass
cd opencompass
pip install -e .
pip install -r requirements.txt
pip install huggingface_hub==0.25.2

pip install importlib-metadata

For more usage instructions, see the official OpenCompass documentation.

Basic Task 1: Evaluating the InternLM (Puyu) API Model

Use OpenCompass to evaluate the InternLM (Puyu) API, record the reproduction process, and take screenshots.

  1. Set the InternLM API key

Open the official InternLM API page at https://internlm.intern-ai.org.cn/api/document to obtain an API key, then set it as an environment variable on the dev machine:

export INTERNLM_API_KEY=xxxxxxxxxxxxxxxxxxxxxxx # fill in the API key you applied for
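
Since the model config in the next step reads the key back with os.getenv('INTERNLM_API_KEY'), a quick check that the variable is actually visible to Python can save a failed run. A minimal sketch:

import os

# The variable name must match exactly what the config reads with os.getenv().
key = os.getenv('INTERNLM_API_KEY')
if not key:
    raise RuntimeError('INTERNLM_API_KEY is not set in this shell')
print(f'API key found ({len(key)} characters)')
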
  2. Configure the model

In a terminal, run cd /root/opencompass/ and then touch opencompass/configs/models/openai/puyu_api.py, open the file, and paste in the following code:

import os
from opencompass.models import OpenAISDK


internlm_url = 'https://internlm-chat.intern-ai.org.cn/puyu/api/v1/' # the API service URL obtained above
internlm_api_key = os.getenv('INTERNLM_API_KEY')

models = [
    dict(
        # abbr='internlm2.5-latest',
        type=OpenAISDK,
        path='internlm2.5-latest', # model name used when calling the service
        # replace with your own API key (read here from the environment variable set above)
        key=internlm_api_key, # API key
        openai_api_base=internlm_url, # service URL
        rpm_verbose=True, # whether to print the request rate
        query_per_second=0.16, # request rate limit (queries per second)
        max_out_len=1024, # maximum output length
        max_seq_len=4096, # maximum input length
        temperature=0.01, # generation temperature
        batch_size=1, # batch size
        retry=3, # number of retries
    )
]
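
Before launching the evaluation, it can be worth confirming that the key and the service URL actually work. Below is a minimal sketch (not part of OpenCompass; it assumes the openai Python package is installed, e.g. pip install openai) that calls the same OpenAI-compatible endpoint directly:

import os
from openai import OpenAI

# Point the standard OpenAI client at the InternLM-compatible endpoint used above.
client = OpenAI(
    api_key=os.getenv('INTERNLM_API_KEY'),
    base_url='https://internlm-chat.intern-ai.org.cn/puyu/api/v1/',
)
# One short chat request; any valid reply means the key and URL are usable.
resp = client.chat.completions.create(
    model='internlm2.5-latest',
    messages=[{'role': 'user', 'content': 'Hello'}],
    max_tokens=32,
)
print(resp.choices[0].message.content)
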
  3. Configure the dataset

In a terminal, run cd /root/opencompass/ and then touch opencompass/configs/datasets/demo/demo_cmmlu_chat_gen.py, open the file, and paste in the following code:

from mmengine import read_base

with read_base():
    from ..cmmlu.cmmlu_gen_c13365 import cmmlu_datasets


# take only the first sample of each sub-dataset for evaluation
for d in cmmlu_datasets:
    d['abbr'] = 'demo_' + d['abbr']
    d['reader_cfg']['test_range'] = '[0:1]' # only 1 sample per sub-dataset here, for a quick evaluation

This way, we evaluate on 1 sample from each sub-dataset of the CMMLU benchmark.
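
The test_range string behaves like an ordinary Python slice over each sub-dataset's test split, so the number of samples can be adjusted by editing it. As a hedged variation of the loop above (assuming you want a slightly less noisy quick run), the slice can simply be widened:

# Variation: evaluate the first 4 samples of each CMMLU sub-dataset instead of 1.
for d in cmmlu_datasets:
    d['abbr'] = 'demo_' + d['abbr']
    d['reader_cfg']['test_range'] = '[0:4]'  # first 4 samples per sub-dataset
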

  4. Run the evaluation task

After finishing the configuration, run the following in the terminal and wait for the results: python run.py --models puyu_api.py --datasets demo_cmmlu_chat_gen.py --debug

dataset                                           version    metric    mode      opencompass.models.OpenAISDK_opencompass_internlm2.5-latest
------------------------------------------------  ---------  --------  ------  -------------------------------------------------------------
demo_cmmlu-agronomy                               4c7f2c     accuracy  gen                                                             75.00
demo_cmmlu-anatomy                                ea09bf     accuracy  gen                                                             75.00
demo_cmmlu-ancient_chinese                        f7c97f     accuracy  gen                                                             75.00
demo_cmmlu-arts                                   dd77b8     accuracy  gen                                                            100.00
demo_cmmlu-astronomy                              1e49db     accuracy  gen                                                             50.00
demo_cmmlu-business_ethics                        dc78cb     accuracy  gen                                                            100.00
demo_cmmlu-chinese_civil_service_exam             1de82c     accuracy  gen                                                             75.00
demo_cmmlu-chinese_driving_rule                   b8a42b     accuracy  gen                                                             75.00
demo_cmmlu-chinese_food_culture                   2d568a     accuracy  gen                                                             50.00
demo_cmmlu-chinese_foreign_policy                 dc2427     accuracy  gen                                                            100.00
demo_cmmlu-chinese_history                        4cc7ed     accuracy  gen                                                             75.00
demo_cmmlu-chinese_literature                     af3c41     accuracy  gen                                                            100.00
demo_cmmlu-chinese_teacher_qualification          87de11     accuracy  gen                                                            100.00
demo_cmmlu-clinical_knowledge                     c55b1d     accuracy  gen                                                            100.00
demo_cmmlu-college_actuarial_science              d3c360     accuracy  gen                                                             50.00
demo_cmmlu-college_education                      df8790     accuracy  gen                                                             75.00
demo_cmmlu-college_engineering_hydrology          673f23     accuracy  gen                                                             50.00
demo_cmmlu-college_law                            524c3a     accuracy  gen                                                             75.00
demo_cmmlu-college_mathematics                    e4ebad     accuracy  gen                                                             50.00
demo_cmmlu-college_medical_statistics             55af35     accuracy  gen                                                            100.00
demo_cmmlu-college_medicine                       702f48     accuracy  gen                                                            100.00
demo_cmmlu-computer_science                       637007     accuracy  gen                                                            100.00
demo_cmmlu-computer_security                      932b6b     accuracy  gen                                                             75.00
demo_cmmlu-conceptual_physics                     cfc077     accuracy  gen                                                            100.00
demo_cmmlu-construction_project_management        968a4a     accuracy  gen                                                            100.00
demo_cmmlu-economics                              ddaf7c     accuracy  gen                                                             75.00
demo_cmmlu-education                              c35963     accuracy  gen                                                             50.00
demo_cmmlu-electrical_engineering                 70e98a     accuracy  gen                                                             75.00
demo_cmmlu-elementary_chinese                     cbcd6a     accuracy  gen                                                             75.00
demo_cmmlu-elementary_commonsense                 a67f37     accuracy  gen                                                            100.00
demo_cmmlu-elementary_information_and_technology  d34d2a     accuracy  gen                                                            100.00
demo_cmmlu-elementary_mathematics                 a9d403     accuracy  gen                                                             75.00
demo_cmmlu-ethnology                              31955f     accuracy  gen                                                             50.00
demo_cmmlu-food_science                           741d8e     accuracy  gen                                                             75.00
demo_cmmlu-genetics                               c326f7     accuracy  gen                                                             75.00
demo_cmmlu-global_facts                           0a1236     accuracy  gen                                                             75.00
demo_cmmlu-high_school_biology                    2be811     accuracy  gen                                                            100.00
demo_cmmlu-high_school_chemistry                  d63c05     accuracy  gen                                                             50.00
demo_cmmlu-high_school_geography                  5cd489     accuracy  gen                                                            100.00
demo_cmmlu-high_school_mathematics                6b2087     accuracy  gen                                                             25.00
demo_cmmlu-high_school_physics                    3df353     accuracy  gen                                                            100.00
demo_cmmlu-high_school_politics                   7a88d8     accuracy  gen                                                            100.00
demo_cmmlu-human_sexuality                        54ac98     accuracy  gen                                                             75.00
demo_cmmlu-international_law                      0f5d40     accuracy  gen                                                              0.00
demo_cmmlu-journalism                             a4f6a0     accuracy  gen                                                             75.00
demo_cmmlu-jurisprudence                          7843da     accuracy  gen                                                             75.00
demo_cmmlu-legal_and_moral_basis                  f906b0     accuracy  gen                                                            100.00
demo_cmmlu-logical                                15a71b     accuracy  gen                                                            100.00
demo_cmmlu-machine_learning                       bc6ad4     accuracy  gen                                                            100.00
demo_cmmlu-management                             e5e8db     accuracy  gen                                                             50.00
demo_cmmlu-marketing                              8b4c18     accuracy  gen                                                            100.00
demo_cmmlu-marxist_theory                         75eb79     accuracy  gen                                                             50.00
demo_cmmlu-modern_chinese                         83a9b7     accuracy  gen                                                             75.00
demo_cmmlu-nutrition                              adfff7     accuracy  gen                                                             75.00
demo_cmmlu-philosophy                             75e22d     accuracy  gen                                                             75.00
demo_cmmlu-professional_accounting                0edc91     accuracy  gen                                                            100.00
demo_cmmlu-professional_law                       d24af5     accuracy  gen                                                             75.00
demo_cmmlu-professional_medicine                  134139     accuracy  gen                                                             50.00
demo_cmmlu-professional_psychology                ec920e     accuracy  gen                                                            100.00
demo_cmmlu-public_relations                       70ee06     accuracy  gen                                                             25.00
demo_cmmlu-security_study                         45f96f     accuracy  gen                                                             75.00
demo_cmmlu-sociology                              485285     accuracy  gen                                                             25.00
demo_cmmlu-sports_science                         838cfe     accuracy  gen                                                            100.00
demo_cmmlu-traditional_chinese_medicine           3bbf64     accuracy  gen                                                             75.00
demo_cmmlu-virology                               8925bf     accuracy  gen                                                            100.00
demo_cmmlu-world_history                          57c97c     accuracy  gen                                                            100.00
demo_cmmlu-world_religions                        1d0f4b     accuracy  gen                                                            100.00
01/20 10:46:04 - OpenCompass - INFO - write summary to /root/0118/opencompass/outputs/default/20250120_101539/summary/summary_20250120_101539.txt
01/20 10:46:04 - OpenCompass - INFO - write csv to /root/0118/opencompass/outputs/default/20250120_101539/summary/summary_20250120_101539.csv

Basic Task 2: Evaluating InternLM2.5-1.8B-Chat on the C-Eval Dataset

To evaluate a locally deployed large language model, you first need the complete model weight files. For open-source models, these can be downloaded from platforms such as Hugging Face. You also need sufficient compute, for example at least one GPU with enough memory, since the model files are usually large. With the model and hardware in place, you specify the model path and related parameters in the evaluation config file, and the framework loads the model and runs the evaluation automatically. This approach requires more setup, including hardware resources, but the evaluation runs entirely locally, does not depend on network conditions, and lets you adjust model parameters more flexibly to study the model's behavior in depth. It is particularly well suited to developers who need to analyze model performance closely or iterate on a model.

Next, we take evaluating InternLM2.5-1.8B-Chat on the C-Eval dataset as an example to show how to evaluate a local model.

Setting Up the Evaluation Environment

Install the required packages:

cd /root/opencompass
conda activate opencompass
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 -c pytorch -c nvidia -y
apt-get update
apt-get install cmake
pip install protobuf==4.25.3
pip install huggingface-hub==0.23.2

To resolve some errors that can appear during local evaluation, we also need to reinstall a few Python libraries:

pip uninstall numpy -y
pip install "numpy<2.0.0,>=1.23.4"
pip uninstall pandas -y
pip install "pandas<2.0.0"
pip install onnxscript
pip uninstall transformers -y
pip install transformers==4.39.0

To make evaluation easier, first place the dataset locally:

cp /share/temp/datasets/OpenCompassData-core-20231110.zip /root/opencompass/
unzip OpenCompassData-core-20231110.zip

You should then see a data folder under the OpenCompass directory.
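
A quick way to confirm the archive was extracted where OpenCompass expects it is to check the data directory, for example with a short Python snippet (a sketch; it only verifies the folder exists and lists its contents):

import os

# OpenCompass resolves datasets relative to the repo root, under ./data.
data_dir = '/root/opencompass/data'
assert os.path.isdir(data_dir), f'{data_dir} not found - was the zip extracted here?'
print(sorted(os.listdir(data_dir))[:10])  # peek at the first few dataset folders
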

Loading a Local Model for Evaluation

In OpenCompass, the configuration files for models and datasets live under the configs folder. We can list all configurations related to InternLM and C-Eval with the list_configs tool:

python tools/list_configs.py internlm ceval

Open the hf_internlm2_5_1_8b_chat.py file under configs/models/hf_internlm/ in the opencompass folder and modify it as follows:

from opencompass.models import HuggingFacewithChatTemplate

models = [
    dict(
        type=HuggingFacewithChatTemplate,
        abbr='internlm2_5-1_8b-chat-hf',
        path='/share/new_models/Shanghai_AI_Laboratory/internlm2_5-1_8b-chat/',
        max_out_len=2048,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
    )
]

# python run.py --datasets ceval_gen --models hf_internlm2_5_1_8b_chat --debug
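
Because the full C-Eval run takes hours, it can be worth first checking that the local weight files load at all. The sketch below is not part of OpenCompass; it assumes the transformers version installed above and the weight path used in the config:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = '/share/new_models/Shanghai_AI_Laboratory/internlm2_5-1_8b-chat/'
# InternLM2.5 ships custom modeling code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, trust_remote_code=True, torch_dtype=torch.float16
).cuda()
# One short generation confirms that tokenizer, weights and GPU work together.
inputs = tokenizer('Hello', return_tensors='pt').to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
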

The following command evaluates InternLM2.5-1.8B-Chat on the C-Eval dataset. Since OpenCompass launches evaluation tasks in parallel by default, it is a good idea to start the first run in --debug mode and check for problems; in --debug mode tasks run sequentially and output is printed in real time.

python run.py --datasets ceval_gen --models hf_internlm2_5_1_8b_chat --debug
# If a rouge import error appears, run pip uninstall rouge and then pip install rouge==1.0.1 to fix it.

The evaluation is time-consuming (expect roughly 2-4 hours). Once it finishes, you will see the results summary:

[Screenshot: local evaluation results]
We can also use a config file to specify the datasets and models, for example:

cd /root/opencompass/configs/
touch eval_tutorial_demo.py

Open eval_tutorial_demo.py and paste in the following code:

from mmengine.config import read_base

with read_base():
    from .datasets.ceval.ceval_gen import ceval_datasets
    from .models.hf_internlm.hf_internlm2_5_1_8b_chat import models as hf_internlm2_5_1_8b_chat_models

datasets = ceval_datasets
models = hf_internlm2_5_1_8b_chat_models

This specifies the model and datasets to evaluate. Then run:

python run.py configs/eval_tutorial_demo.py --debug 
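
Since the full C-Eval evaluation takes hours, the same test_range trick used in the CMMLU demo above can also be applied here for a quick dry run before committing to the complete run. A hedged sketch of a trimmed-down eval_tutorial_demo.py:

from mmengine.config import read_base

with read_base():
    from .datasets.ceval.ceval_gen import ceval_datasets
    from .models.hf_internlm.hf_internlm2_5_1_8b_chat import models as hf_internlm2_5_1_8b_chat_models

datasets = ceval_datasets
# Dry-run only: keep 1 sample per C-Eval sub-dataset to verify the pipeline,
# then remove this loop for the real evaluation.
for d in datasets:
    d['reader_cfg']['test_range'] = '[0:1]'

models = hf_internlm2_5_1_8b_chat_models
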

Troubleshooting Notes

  1. Runtime error: ModuleNotFoundError: No module named 'rouge'
(opencompass) (list) root@intern-studio-50014188:~/0118/opencompass# python run.py --models puyu_api.py --datasets demo_cmmlu_chat_gen.py --debug
Traceback (most recent call last):
  File "/root/0118/opencompass/run.py", line 1, in <module>
    from opencompass.cli.main import main
  File "/root/0118/opencompass/opencompass/cli/main.py", line 16, in <module>
    from opencompass.utils.run import (fill_eval_cfg, fill_infer_cfg,
  File "/root/0118/opencompass/opencompass/utils/run.py", line 9, in <module>
    from opencompass.datasets.custom import make_custom_dataset_config
  File "/root/0118/opencompass/opencompass/datasets/__init__.py", line 71, in <module>
    from .longbench import *  # noqa: F401, F403
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/0118/opencompass/opencompass/datasets/longbench/__init__.py", line 1, in <module>
    from .evaluators import LongBenchClassificationEvaluator  # noqa: F401, F403
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/0118/opencompass/opencompass/datasets/longbench/evaluators.py", line 9, in <module>
    from rouge import Rouge
ModuleNotFoundError: No module named 'rouge'

Fix: uninstall and reinstall rouge:

# uninstall first
pip uninstall rouge
# then reinstall
pip install rouge
  2. Runtime error: ModuleNotFoundError: No module named 'h5py'
(opencompass) root@intern-studio-50014188:~/0118/opencompass# python run.py --models puyu_api.py --datasets demo_cmmlu_chat_gen.py --debug

Traceback (most recent call last):
  File "/root/0118/opencompass/run.py", line 1, in <module>
    from opencompass.cli.main import main
  File "/root/0118/opencompass/opencompass/cli/main.py", line 16, in <module>
    from opencompass.utils.run import (fill_eval_cfg, fill_infer_cfg,
  File "/root/0118/opencompass/opencompass/utils/run.py", line 9, in <module>
    from opencompass.datasets.custom import make_custom_dataset_config
  File "/root/0118/opencompass/opencompass/datasets/__init__.py", line 103, in <module>
    from .scicode import *  # noqa: F401, F403
  File "/root/0118/opencompass/opencompass/datasets/scicode.py", line 8, in <module>
    import h5py
ModuleNotFoundError: No module named 'h5py'

Fix:

pip install h5py
  3. Runtime error: AssertionError: assert len(all_gpu_ids) >= num_gpus
(opencompass) root@intern-studio-50014188:~/opencompass# python run.py --datasets ceval_gen --models hf_internlm2_5_1_8b_chat --debug
...
01/20 13:24:07 - OpenCompass - INFO - Partitioned into 1 tasks.
Traceback (most recent call last):
  File "/root/opencompass/run.py", line 4, in <module>
    main()
  File "/root/opencompass/opencompass/cli/main.py", line 308, in main
    runner(tasks)
  File "/root/opencompass/opencompass/runners/base.py", line 38, in __call__
    status = self.launch(tasks)
  File "/root/opencompass/opencompass/runners/local.py", line 101, in launch
    assert len(all_gpu_ids) >= num_gpus
AssertionError
Fix: this assertion fails when OpenCompass sees fewer GPUs than the num_gpus requested in the model's run_cfg. Make sure the dev machine actually has a GPU allocated (for example, check with nvidia-smi) before launching the local evaluation, or adjust num_gpus accordingly.

