Evaluate the internlm2-chat-1.8b model on the ceval dataset with OpenCompass, recording the reproduction process with screenshots.
Setting up the environment was genuinely painful: it took three attempts before it worked.
The concrete steps are as follows:
cd ~
studio-conda -o internlm-base -t opencompass_1  # this step takes quite a while
source activate opencompass_1
git clone -b 0.2.4 https://github.com/open-compass/opencompass
cd opencompass  # git clone creates a directory named "opencompass", not "opencompass_1"
pip install -r requirements.txt
pip install -e .
pip install protobuf
export MKL_SERVICE_FORCE_INTEL=1
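As I understand it, the `MKL_SERVICE_FORCE_INTEL=1` export works around the common conda-environment crash where mkl-service complains that `MKL_THREADING_LAYER=GNU` is incompatible with an already-loaded library. A quick sanity check (a minimal sketch, nothing OpenCompass-specific) that the variable is actually visible to Python subprocesses:

```shell
# Export the workaround variable and confirm a child Python process can see it;
# if this prints 1, run.py will inherit the same setting.
export MKL_SERVICE_FORCE_INTEL=1
python3 -c 'import os; print(os.environ["MKL_SERVICE_FORCE_INTEL"])'
```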
cp /share/temp/datasets/OpenCompassData-core-20231110.zip /root/opencompass/
unzip OpenCompassData-core-20231110.zip
python run.py \
    --datasets ceval_gen \
    --hf-path /share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b \
    --tokenizer-path /share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b \
    --tokenizer-kwargs padding_side='left' truncation='left' trust_remote_code=True \
    --model-kwargs trust_remote_code=True device_map='auto' \
    --max-seq-len 2048 \
    --max-out-len 16 \
    --batch-size 4 \
    --num-gpus 1 \
    --debug
PS: if run.py errors on --num-gpus, just replace it with the flag named in the error message (e.g. --hf-num-gpus).
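If I recall correctly, OpenCompass 0.2.x writes each run's predictions, per-dataset results, and the final summary table under `outputs/default/<timestamp>/` in the repo root (the exact layout is an assumption here). A small sketch for locating the newest run after evaluation finishes:

```shell
# Show the most recent run directory under outputs/, if any exists;
# prints "no runs yet" before the first evaluation has been launched.
latest=$(ls -dt outputs/default/*/ 2>/dev/null | head -n 1)
echo "${latest:-no runs yet}"
```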