D6 - OpenCompass Large Model Evaluation

Evaluation Results

Evaluate the InternLM-Chat-7B model on the C-Eval dataset with OpenCompass:

python run.py --datasets ceval_gen \
    --hf-path /share/temp/model_repos/internlm-chat-7b/ \
    --tokenizer-path /share/temp/model_repos/internlm-chat-7b/ \
    --tokenizer-kwargs padding_side='left' truncation='left' trust_remote_code=True \
    --model-kwargs trust_remote_code=True device_map='auto' \
    --max-seq-len 2048 --max-out-len 16 --batch-size 4 --num-gpus 1 --debug



The OpenCompass Evaluation Platform


Architecture of the OpenCompass open-source evaluation platform:


  • Model layer: the main categories of models involved in large-model evaluation; OpenCompass focuses on base models and chat models as its primary evaluation targets.
  • Capability layer: OpenCompass designs its evaluation dimensions along two axes, general capabilities and specialized capabilities. General capabilities are evaluated across dimensions such as language, knowledge, comprehension, reasoning, and safety; specialized capabilities are evaluated across dimensions such as long context, code, tool use, and knowledge augmentation.
  • Method layer: OpenCompass employs both objective and subjective evaluation. Objective evaluation conveniently measures a model on tasks with definite answers (multiple choice, fill-in-the-blank, closed-ended QA, etc.), while subjective evaluation measures users' actual satisfaction with model responses; OpenCompass supports both model-assisted and human-feedback-based subjective evaluation.
  • Tool layer: OpenCompass provides rich functionality for efficient, automated evaluation of large language models, including distributed evaluation, prompt engineering, integration with evaluation databases, leaderboard publishing, and report generation.
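The objective-evaluation idea in the method layer reduces, for fixed-answer tasks like C-Eval's multiple-choice questions, to exact-match scoring. A minimal illustrative sketch (not OpenCompass's actual evaluator code):

```python
# Exact-match accuracy for fixed-answer tasks (e.g. multiple choice).
# Illustrative only; OpenCompass implements its own evaluators.
def exact_match_accuracy(predictions, references):
    """Return the percentage of predictions that exactly match the reference answers."""
    assert len(predictions) == len(references)
    correct = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

print(exact_match_accuracy(["A", "C", "B", "D"], ["A", "B", "B", "D"]))  # 75.0
```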

OpenCompass evaluation pipeline design:


Challenges in large model evaluation:



Hands-on Practice

Environment and Installation

conda create --name opencompass --clone=/root/share/conda_envs/internlm-base
source activate opencompass

git clone https://github.com/open-compass/opencompass

cd opencompass
pip install -e .


Data Preparation

Copy the evaluation dataset archive and unzip it into data/:

cp /share/temp/datasets/OpenCompassData-core-20231110.zip /root/opencompass/

cd /root/opencompass
unzip OpenCompassData-core-20231110.zip

You should now see a data folder under opencompass.


List Supported Datasets and Models

List all configurations related to internlm and ceval:

python tools/list_configs.py internlm ceval


+--------------------------+--------------------------------------------------------+
| Model                    | Config Path                                            |
|--------------------------+--------------------------------------------------------|
| hf_internlm_20b          | configs/models/hf_internlm/hf_internlm_20b.py          |
| hf_internlm_7b           | configs/models/hf_internlm/hf_internlm_7b.py           |
| hf_internlm_chat_20b     | configs/models/hf_internlm/hf_internlm_chat_20b.py     |
| hf_internlm_chat_7b      | configs/models/hf_internlm/hf_internlm_chat_7b.py      |
| hf_internlm_chat_7b_8k   | configs/models/hf_internlm/hf_internlm_chat_7b_8k.py   |
| hf_internlm_chat_7b_v1_1 | configs/models/hf_internlm/hf_internlm_chat_7b_v1_1.py |
| internlm_7b              | configs/models/internlm/internlm_7b.py                 |
| ms_internlm_chat_7b_8k   | configs/models/ms_internlm/ms_internlm_chat_7b_8k.py   |
+--------------------------+--------------------------------------------------------+
+----------------------------+------------------------------------------------------+
| Dataset                    | Config Path                                          |
|----------------------------+------------------------------------------------------|
| ceval_clean_ppl            | configs/datasets/ceval/ceval_clean_ppl.py            |
| ceval_gen                  | configs/datasets/ceval/ceval_gen.py                  |
| ceval_gen_2daf24           | configs/datasets/ceval/ceval_gen_2daf24.py           |
| ceval_gen_5f30c7           | configs/datasets/ceval/ceval_gen_5f30c7.py           |
| ceval_ppl                  | configs/datasets/ceval/ceval_ppl.py                  |
| ceval_ppl_578f8d           | configs/datasets/ceval/ceval_ppl_578f8d.py           |
| ceval_ppl_93e5ce           | configs/datasets/ceval/ceval_ppl_93e5ce.py           |
| ceval_zero_shot_gen_bd40ef | configs/datasets/ceval/ceval_zero_shot_gen_bd40ef.py |
+----------------------------+------------------------------------------------------+
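The listing above is essentially keyword filtering over the known config names: a name is kept if it contains any of the given patterns. A sketch of that filtering idea (a hypothetical helper, not the real code of tools/list_configs.py):

```python
# Keep config names containing any of the keyword patterns, preserving order.
# Hypothetical helper illustrating how `python tools/list_configs.py internlm ceval`
# narrows the listing; the actual script has its own implementation.
def filter_configs(names, patterns):
    return [n for n in names if any(p in n for p in patterns)]

names = ["hf_internlm_chat_7b", "ceval_gen", "mmlu_gen", "hf_llama_7b"]
print(filter_configs(names, ["internlm", "ceval"]))  # ['hf_internlm_chat_7b', 'ceval_gen']
```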


Launch the Evaluation

Once OpenCompass is installed and the dataset is prepared as above, you can evaluate the InternLM-Chat-7B model on the C-Eval dataset with the following command.

Because OpenCompass launches evaluation in parallel by default, it is worth starting the first run in --debug mode to check for problems: in --debug mode, tasks execute sequentially and output is printed in real time.

python run.py --datasets ceval_gen --hf-path /share/temp/model_repos/internlm-chat-7b/ --tokenizer-path /share/temp/model_repos/internlm-chat-7b/ --tokenizer-kwargs padding_side='left' truncation='left' trust_remote_code=True --model-kwargs trust_remote_code=True device_map='auto' --max-seq-len 2048 --max-out-len 16 --batch-size 4 --num-gpus 1 --debug

Command breakdown:

--datasets ceval_gen \
--hf-path /share/temp/model_repos/internlm-chat-7b/ \  # path to the HuggingFace model
--tokenizer-path /share/temp/model_repos/internlm-chat-7b/ \  # path to the HuggingFace tokenizer (may be omitted if identical to the model path)
--tokenizer-kwargs padding_side='left' truncation='left' trust_remote_code=True \  # arguments for building the tokenizer
--model-kwargs device_map='auto' trust_remote_code=True \  # arguments for building the model
--max-seq-len 2048 \  # maximum sequence length the model accepts
--max-out-len 16 \  # maximum number of tokens to generate
--batch-size 4 \  # batch size
--num-gpus 1 \  # number of GPUs required to run the model
--debug
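The --tokenizer-kwargs and --model-kwargs flags take space-separated key=value pairs that are turned into keyword arguments for the tokenizer/model constructors. A sketch of how such pairs can be parsed into a Python dict (a hypothetical parser for illustration; OpenCompass does its own CLI parsing):

```python
import ast

# Parse key=value pairs such as ["padding_side='left'", "trust_remote_code=True"]
# into a dict, evaluating literals so 'left' becomes a str and True a bool.
# Hypothetical helper; not OpenCompass's actual argument parsing.
def parse_kwargs(pairs):
    out = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        try:
            out[key] = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            out[key] = value  # fall back to the raw string
    return out

print(parse_kwargs(["padding_side='left'", "trust_remote_code=True"]))
# {'padding_side': 'left', 'trust_remote_code': True}
```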

If everything is working, you should see "Starting inference process" printed on screen:

[2024-01-12 18:23:55,076] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...


When the evaluation completes, you will see output like the following:



01/21 15:15:54 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_geography]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 19/19 [00:00<00:00, 527760.11it/s]
[2024-01-21 15:15:54,813] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:08<00:00,  1.65s/it]
01/21 15:16:09 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-ideological_and_moral_cultivation]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 19/19 [00:00<00:00, 758969.30it/s]
[2024-01-21 15:16:09,984] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:06<00:00,  1.25s/it]
01/21 15:16:17 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_chinese]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 19/19 [00:00<00:00, 744782.95it/s]
[2024-01-21 15:16:17,618] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:24<00:00,  4.87s/it]
01/21 15:16:42 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-sports_science]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 19/19 [00:00<00:00, 705236.96it/s]
[2024-01-21 15:16:42,099] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:06<00:00,  1.38s/it]
01/21 15:16:49 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-basic_medicine]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 19/19 [00:00<00:00, 711533.71it/s]
[2024-01-21 15:16:49,085] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:06<00:00,  1.34s/it]
01/21 15:16:55 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-probability_and_statistics]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 571950.55it/s]
[2024-01-21 15:16:55,928] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:23<00:00,  4.61s/it]
01/21 15:17:19 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_mathematics]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 692637.36it/s]
[2024-01-21 15:17:19,146] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00,  3.22s/it]
01/21 15:17:35 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-discrete_mathematics]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 599186.29it/s]
[2024-01-21 15:17:35,369] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:06<00:00,  1.74s/it]
01/21 15:17:42 - OpenCompass - INFO - Start inferencing [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_geography]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 483958.15it/s]
[2024-01-21 15:17:42,407] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00,  1.69s/it]
01/21 15:17:47 - OpenCompass - INFO - time elapsed: 910.91s
01/21 15:17:54 - OpenCompass - DEBUG - Get class `NaivePartitioner` from "partitioner" registry in "opencompass"
01/21 15:17:54 - OpenCompass - DEBUG - An `NaivePartitioner` instance is built from registry, and its implementation can be found in opencompass.partitioners.naive
01/21 15:17:54 - OpenCompass - DEBUG - Key eval.runner.task.judge_cfg not found in config, ignored.
01/21 15:17:54 - OpenCompass - DEBUG - Key eval.runner.task.dump_details not found in config, ignored.
01/21 15:17:54 - OpenCompass - DEBUG - Additional config: {'eval': {'runner': {'task': {}}}}
01/21 15:17:54 - OpenCompass - INFO - Partitioned into 52 tasks.
01/21 15:17:54 - OpenCompass - DEBUG - Task 0: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-computer_network]
01/21 15:17:54 - OpenCompass - DEBUG - Task 1: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-operating_system]
01/21 15:17:54 - OpenCompass - DEBUG - Task 2: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-computer_architecture]
01/21 15:17:54 - OpenCompass - DEBUG - Task 3: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_programming]
01/21 15:17:54 - OpenCompass - DEBUG - Task 4: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_physics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 5: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_chemistry]
01/21 15:17:54 - OpenCompass - DEBUG - Task 6: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-advanced_mathematics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 7: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-probability_and_statistics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 8: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-discrete_mathematics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 9: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-electrical_engineer]
01/21 15:17:54 - OpenCompass - DEBUG - Task 10: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-metrology_engineer]
01/21 15:17:54 - OpenCompass - DEBUG - Task 11: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_mathematics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 12: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_physics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 13: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_chemistry]
01/21 15:17:54 - OpenCompass - DEBUG - Task 14: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_biology]
01/21 15:17:54 - OpenCompass - DEBUG - Task 15: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_mathematics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 16: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_biology]
01/21 15:17:54 - OpenCompass - DEBUG - Task 17: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_physics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 18: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_chemistry]
01/21 15:17:54 - OpenCompass - DEBUG - Task 19: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-veterinary_medicine]
01/21 15:17:54 - OpenCompass - DEBUG - Task 20: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_economics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 21: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-business_administration]
01/21 15:17:54 - OpenCompass - DEBUG - Task 22: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-marxism]
01/21 15:17:54 - OpenCompass - DEBUG - Task 23: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-mao_zdong_thought]
01/21 15:17:54 - OpenCompass - DEBUG - Task 24: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-education_science]
01/21 15:17:54 - OpenCompass - DEBUG - Task 25: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-teacher_qualification]
01/21 15:17:54 - OpenCompass - DEBUG - Task 26: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_politics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 27: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_geography]
01/21 15:17:54 - OpenCompass - DEBUG - Task 28: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_politics]
01/21 15:17:54 - OpenCompass - DEBUG - Task 29: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_geography]
01/21 15:17:54 - OpenCompass - DEBUG - Task 30: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-modern_chinese_history]
01/21 15:17:54 - OpenCompass - DEBUG - Task 31: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-ideological_and_moral_cultivation]
01/21 15:17:54 - OpenCompass - DEBUG - Task 32: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-logic]
01/21 15:17:54 - OpenCompass - DEBUG - Task 33: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-law]
01/21 15:17:54 - OpenCompass - DEBUG - Task 34: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-chinese_language_and_literature]
01/21 15:17:54 - OpenCompass - DEBUG - Task 35: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-art_studies]
01/21 15:17:54 - OpenCompass - DEBUG - Task 36: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-professional_tour_guide]
01/21 15:17:54 - OpenCompass - DEBUG - Task 37: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-legal_professional]
01/21 15:17:54 - OpenCompass - DEBUG - Task 38: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_chinese]
01/21 15:17:54 - OpenCompass - DEBUG - Task 39: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_history]
01/21 15:17:54 - OpenCompass - DEBUG - Task 40: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_history]
01/21 15:17:54 - OpenCompass - DEBUG - Task 41: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-civil_servant]
01/21 15:17:54 - OpenCompass - DEBUG - Task 42: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-sports_science]
01/21 15:17:54 - OpenCompass - DEBUG - Task 43: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-plant_protection]
01/21 15:17:54 - OpenCompass - DEBUG - Task 44: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-basic_medicine]
01/21 15:17:54 - OpenCompass - DEBUG - Task 45: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-clinical_medicine]
01/21 15:17:54 - OpenCompass - DEBUG - Task 46: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-urban_and_rural_planner]
01/21 15:17:54 - OpenCompass - DEBUG - Task 47: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-accountant]
01/21 15:17:54 - OpenCompass - DEBUG - Task 48: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-fire_engineer]
01/21 15:17:54 - OpenCompass - DEBUG - Task 49: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-environmental_impact_assessment_engineer]
01/21 15:17:54 - OpenCompass - DEBUG - Task 50: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-tax_accountant]
01/21 15:17:54 - OpenCompass - DEBUG - Task 51: [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-physician]
01/21 15:17:54 - OpenCompass - DEBUG - Get class `LocalRunner` from "runner" registry in "opencompass"
01/21 15:17:54 - OpenCompass - DEBUG - An `LocalRunner` instance is built from registry, and its implementation can be found in opencompass.runners.local
01/21 15:17:54 - OpenCompass - DEBUG - Get class `OpenICLEvalTask` from "task" registry in "opencompass"
01/21 15:17:54 - OpenCompass - DEBUG - An `OpenICLEvalTask` instance is built from registry, and its implementation can be found in opencompass.tasks.openicl_eval
/root/.conda/envs/opencompass/lib/python3.10/site-packages/colossalai/kernel/cuda_native/mha/flash_attn_2.py:21: UserWarning: FlashAttention only supports Ampere GPUs or newer.
  warnings.warn("FlashAttention only supports Ampere GPUs or newer.")
/root/.conda/envs/opencompass/lib/python3.10/site-packages/colossalai/kernel/cuda_native/mha/flash_attn_2.py:28: UserWarning: please install flash_attn from https://github.com/HazyResearch/flash-attention
  warnings.warn("please install flash_attn from https://github.com/HazyResearch/flash-attention")
/root/.conda/envs/opencompass/lib/python3.10/site-packages/colossalai/kernel/cuda_native/mha/mem_eff_attn.py:15: UserWarning: please install xformers from https://github.com/facebookresearch/xformers
  warnings.warn("please install xformers from https://github.com/facebookresearch/xformers")
/root/.conda/envs/opencompass/lib/python3.10/site-packages/torch/amp/autocast_mode.py:204: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
01/21 15:19:00 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-computer_network]: {'accuracy': 31.57894736842105}
01/21 15:19:00 - OpenCompass - INFO - time elapsed: 32.40s
01/21 15:19:45 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-operating_system]: {'accuracy': 36.84210526315789}
01/21 15:19:45 - OpenCompass - INFO - time elapsed: 22.78s
01/21 15:20:27 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-computer_architecture]: {'accuracy': 28.57142857142857}
01/21 15:20:27 - OpenCompass - INFO - time elapsed: 20.34s
01/21 15:21:02 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_programming]: {'accuracy': 32.432432432432435}
01/21 15:21:02 - OpenCompass - INFO - time elapsed: 16.26s
01/21 15:21:36 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_physics]: {'accuracy': 26.31578947368421}
01/21 15:21:36 - OpenCompass - INFO - time elapsed: 16.82s
01/21 15:22:03 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_chemistry]: {'accuracy': 16.666666666666664}
01/21 15:22:03 - OpenCompass - INFO - time elapsed: 13.34s
01/21 15:22:29 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-advanced_mathematics]: {'accuracy': 21.052631578947366}
01/21 15:22:29 - OpenCompass - INFO - time elapsed: 11.90s
01/21 15:22:55 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-probability_and_statistics]: {'accuracy': 38.88888888888889}
01/21 15:22:55 - OpenCompass - INFO - time elapsed: 13.46s
01/21 15:23:21 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-discrete_mathematics]: {'accuracy': 18.75}
01/21 15:23:21 - OpenCompass - INFO - time elapsed: 12.30s
01/21 15:23:47 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-electrical_engineer]: {'accuracy': 35.13513513513514}
01/21 15:23:47 - OpenCompass - INFO - time elapsed: 11.45s
01/21 15:24:13 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-metrology_engineer]: {'accuracy': 50.0}
01/21 15:24:13 - OpenCompass - INFO - time elapsed: 11.53s
01/21 15:24:37 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_mathematics]: {'accuracy': 22.22222222222222}
01/21 15:24:37 - OpenCompass - INFO - time elapsed: 10.91s
01/21 15:24:57 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_physics]: {'accuracy': 31.57894736842105}
01/21 15:24:57 - OpenCompass - INFO - time elapsed: 10.09s
01/21 15:25:20 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_chemistry]: {'accuracy': 15.789473684210526}
01/21 15:25:20 - OpenCompass - INFO - time elapsed: 9.58s
01/21 15:25:40 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_biology]: {'accuracy': 36.84210526315789}
01/21 15:25:40 - OpenCompass - INFO - time elapsed: 9.36s
01/21 15:26:02 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_mathematics]: {'accuracy': 26.31578947368421}
01/21 15:26:02 - OpenCompass - INFO - time elapsed: 10.00s
01/21 15:26:24 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_biology]: {'accuracy': 61.904761904761905}
01/21 15:26:24 - OpenCompass - INFO - time elapsed: 10.50s
01/21 15:26:45 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_physics]: {'accuracy': 63.1578947368421}
01/21 15:26:45 - OpenCompass - INFO - time elapsed: 8.96s
01/21 15:27:05 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_chemistry]: {'accuracy': 60.0}
01/21 15:27:05 - OpenCompass - INFO - time elapsed: 9.35s
01/21 15:27:28 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-veterinary_medicine]: {'accuracy': 47.82608695652174}
01/21 15:27:28 - OpenCompass - INFO - time elapsed: 10.60s
01/21 15:27:51 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-college_economics]: {'accuracy': 41.81818181818181}
01/21 15:27:51 - OpenCompass - INFO - time elapsed: 10.13s
01/21 15:28:10 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-business_administration]: {'accuracy': 33.33333333333333}
01/21 15:28:10 - OpenCompass - INFO - time elapsed: 7.27s
01/21 15:28:27 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-marxism]: {'accuracy': 68.42105263157895}
01/21 15:28:27 - OpenCompass - INFO - time elapsed: 7.32s
01/21 15:28:52 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-mao_zdong_thought]: {'accuracy': 70.83333333333334}
01/21 15:28:52 - OpenCompass - INFO - time elapsed: 11.94s
01/21 15:29:09 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-education_science]: {'accuracy': 58.620689655172406}
01/21 15:29:09 - OpenCompass - INFO - time elapsed: 6.43s
01/21 15:29:31 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-teacher_qualification]: {'accuracy': 70.45454545454545}
01/21 15:29:31 - OpenCompass - INFO - time elapsed: 10.19s
01/21 15:29:53 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_politics]: {'accuracy': 26.31578947368421}
01/21 15:29:53 - OpenCompass - INFO - time elapsed: 9.55s
01/21 15:30:30 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_geography]: {'accuracy': 47.368421052631575}
01/21 15:30:30 - OpenCompass - INFO - time elapsed: 19.33s
01/21 15:31:14 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_politics]: {'accuracy': 52.38095238095239}
01/21 15:31:14 - OpenCompass - INFO - time elapsed: 22.77s
01/21 15:32:03 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_geography]: {'accuracy': 58.333333333333336}
01/21 15:32:03 - OpenCompass - INFO - time elapsed: 24.21s
01/21 15:32:52 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-modern_chinese_history]: {'accuracy': 73.91304347826086}
01/21 15:32:52 - OpenCompass - INFO - time elapsed: 24.89s
01/21 15:33:43 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-ideological_and_moral_cultivation]: {'accuracy': 63.1578947368421}
01/21 15:33:43 - OpenCompass - INFO - time elapsed: 26.73s
01/21 15:34:32 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-logic]: {'accuracy': 31.818181818181817}
01/21 15:34:32 - OpenCompass - INFO - time elapsed: 25.49s
01/21 15:35:24 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-law]: {'accuracy': 25.0}
01/21 15:35:24 - OpenCompass - INFO - time elapsed: 26.11s
01/21 15:36:10 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-chinese_language_and_literature]: {'accuracy': 30.434782608695656}
01/21 15:36:10 - OpenCompass - INFO - time elapsed: 21.82s
01/21 15:36:47 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-art_studies]: {'accuracy': 60.60606060606061}
01/21 15:36:47 - OpenCompass - INFO - time elapsed: 16.89s
01/21 15:37:24 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-professional_tour_guide]: {'accuracy': 62.06896551724138}
01/21 15:37:24 - OpenCompass - INFO - time elapsed: 17.40s
01/21 15:37:58 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-legal_professional]: {'accuracy': 39.130434782608695}
01/21 15:37:58 - OpenCompass - INFO - time elapsed: 15.99s
01/21 15:38:25 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_chinese]: {'accuracy': 63.1578947368421}
01/21 15:38:25 - OpenCompass - INFO - time elapsed: 13.28s
01/21 15:38:59 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-high_school_history]: {'accuracy': 70.0}
01/21 15:38:59 - OpenCompass - INFO - time elapsed: 16.16s
01/21 15:39:31 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-middle_school_history]: {'accuracy': 59.09090909090909}
01/21 15:39:31 - OpenCompass - INFO - time elapsed: 15.31s
01/21 15:40:00 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-civil_servant]: {'accuracy': 53.191489361702125}
01/21 15:40:00 - OpenCompass - INFO - time elapsed: 12.74s
01/21 15:40:18 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-sports_science]: {'accuracy': 52.63157894736842}
01/21 15:40:18 - OpenCompass - INFO - time elapsed: 6.63s
01/21 15:40:35 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-plant_protection]: {'accuracy': 59.09090909090909}
01/21 15:40:35 - OpenCompass - INFO - time elapsed: 6.99s
01/21 15:40:51 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-basic_medicine]: {'accuracy': 47.368421052631575}
01/21 15:40:51 - OpenCompass - INFO - time elapsed: 7.18s
01/21 15:41:08 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-clinical_medicine]: {'accuracy': 40.909090909090914}
01/21 15:41:08 - OpenCompass - INFO - time elapsed: 7.81s
01/21 15:41:25 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-urban_and_rural_planner]: {'accuracy': 45.65217391304348}
01/21 15:41:25 - OpenCompass - INFO - time elapsed: 7.31s
01/21 15:41:44 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-accountant]: {'accuracy': 26.53061224489796}
01/21 15:41:44 - OpenCompass - INFO - time elapsed: 8.87s
01/21 15:41:45 - OpenCompass - DEBUG - Get class `OpenICLEvalTask` from "task" registry in "opencompass"
01/21 15:41:45 - OpenCompass - DEBUG - An `OpenICLEvalTask` instance is built from registry, and its implementation can be found in opencompass.tasks.openicl_eval
01/21 15:41:59 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-fire_engineer]: {'accuracy': 22.58064516129032}
01/21 15:41:59 - OpenCompass - INFO - time elapsed: 5.95s
01/21 15:42:00 - OpenCompass - DEBUG - Get class `OpenICLEvalTask` from "task" registry in "opencompass"
01/21 15:42:00 - OpenCompass - DEBUG - An `OpenICLEvalTask` instance is built from registry, and its implementation can be found in opencompass.tasks.openicl_eval
01/21 15:42:17 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-environmental_impact_assessment_engineer]: {'accuracy': 64.51612903225806}
01/21 15:42:17 - OpenCompass - INFO - time elapsed: 8.93s
01/21 15:42:18 - OpenCompass - DEBUG - Get class `OpenICLEvalTask` from "task" registry in "opencompass"
01/21 15:42:18 - OpenCompass - DEBUG - An `OpenICLEvalTask` instance is built from registry, and its implementation can be found in opencompass.tasks.openicl_eval
01/21 15:42:33 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-tax_accountant]: {'accuracy': 34.69387755102041}
01/21 15:42:33 - OpenCompass - INFO - time elapsed: 7.80s
01/21 15:42:34 - OpenCompass - DEBUG - Get class `OpenICLEvalTask` from "task" registry in "opencompass"
01/21 15:42:34 - OpenCompass - DEBUG - An `OpenICLEvalTask` instance is built from registry, and its implementation can be found in opencompass.tasks.openicl_eval
01/21 15:42:46 - OpenCompass - INFO - Task [opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b/ceval-physician]: {'accuracy': 40.816326530612244}
01/21 15:42:46 - OpenCompass - INFO - time elapsed: 5.45s
01/21 15:42:46 - OpenCompass - DEBUG - An `DefaultSummarizer` instance is built from registry, and its implementation can be found in opencompass.summarizers.default
dataset                                         version    metric         mode      opencompass.models.huggingface.HuggingFace_model_repos_internlm-chat-7b
----------------------------------------------  ---------  -------------  ------  -------------------------------------------------------------------------
ceval-computer_network                          db9ce2     accuracy       gen                                                                         31.58
ceval-operating_system                          1c2571     accuracy       gen                                                                         36.84
ceval-computer_architecture                     a74dad     accuracy       gen                                                                         28.57
ceval-college_programming                       4ca32a     accuracy       gen                                                                         32.43
ceval-college_physics                           963fa8     accuracy       gen                                                                         26.32
ceval-college_chemistry                         e78857     accuracy       gen                                                                         16.67
ceval-advanced_mathematics                      ce03e2     accuracy       gen                                                                         21.05
ceval-probability_and_statistics                65e812     accuracy       gen                                                                         38.89
ceval-discrete_mathematics                      e894ae     accuracy       gen                                                                         18.75
ceval-electrical_engineer                       ae42b9     accuracy       gen                                                                         35.14
ceval-metrology_engineer                        ee34ea     accuracy       gen                                                                         50
ceval-high_school_mathematics                   1dc5bf     accuracy       gen                                                                         22.22
ceval-high_school_physics                       adf25f     accuracy       gen                                                                         31.58
ceval-high_school_chemistry                     2ed27f     accuracy       gen                                                                         15.79
ceval-high_school_biology                       8e2b9a     accuracy       gen                                                                         36.84
ceval-middle_school_mathematics                 bee8d5     accuracy       gen                                                                         26.32
ceval-middle_school_biology                     86817c     accuracy       gen                                                                         61.9
ceval-middle_school_physics                     8accf6     accuracy       gen                                                                         63.16
ceval-middle_school_chemistry                   167a15     accuracy       gen                                                                         60
ceval-veterinary_medicine                       b4e08d     accuracy       gen                                                                         47.83
ceval-college_economics                         f3f4e6     accuracy       gen                                                                         41.82
ceval-business_administration                   c1614e     accuracy       gen                                                                         33.33
ceval-marxism                                   cf874c     accuracy       gen                                                                         68.42
ceval-mao_zedong_thought                        51c7a4     accuracy       gen                                                                         70.83
ceval-education_science                         591fee     accuracy       gen                                                                         58.62
ceval-teacher_qualification                     4e4ced     accuracy       gen                                                                         70.45
ceval-high_school_politics                      5c0de2     accuracy       gen                                                                         26.32
ceval-high_school_geography                     865461     accuracy       gen                                                                         47.37
ceval-middle_school_politics                    5be3e7     accuracy       gen                                                                         52.38
ceval-middle_school_geography                   8a63be     accuracy       gen                                                                         58.33
ceval-modern_chinese_history                    fc01af     accuracy       gen                                                                         73.91
ceval-ideological_and_moral_cultivation         a2aa4a     accuracy       gen                                                                         63.16
ceval-logic                                     f5b022     accuracy       gen                                                                         31.82
ceval-law                                       a110a1     accuracy       gen                                                                         25
ceval-chinese_language_and_literature           0f8b68     accuracy       gen                                                                         30.43
ceval-art_studies                               2a1300     accuracy       gen                                                                         60.61
ceval-professional_tour_guide                   4e673e     accuracy       gen                                                                         62.07
ceval-legal_professional                        ce8787     accuracy       gen                                                                         39.13
ceval-high_school_chinese                       315705     accuracy       gen                                                                         63.16
ceval-high_school_history                       7eb30a     accuracy       gen                                                                         70
ceval-middle_school_history                     48ab4a     accuracy       gen                                                                         59.09
ceval-civil_servant                             87d061     accuracy       gen                                                                         53.19
ceval-sports_science                            70f27b     accuracy       gen                                                                         52.63
ceval-plant_protection                          8941f9     accuracy       gen                                                                         59.09
ceval-basic_medicine                            c409d6     accuracy       gen                                                                         47.37
ceval-clinical_medicine                         49e82d     accuracy       gen                                                                         40.91
ceval-urban_and_rural_planner                   95b885     accuracy       gen                                                                         45.65
ceval-accountant                                002837     accuracy       gen                                                                         26.53
ceval-fire_engineer                             bc23f5     accuracy       gen                                                                         22.58
ceval-environmental_impact_assessment_engineer  c64e2d     accuracy       gen                                                                         64.52
ceval-tax_accountant                            3a5e3c     accuracy       gen                                                                         34.69
ceval-physician                                 6e277d     accuracy       gen                                                                         40.82
ceval-stem                                      -          naive_average  gen                                                                         35.09
ceval-social-science                            -          naive_average  gen                                                                         52.79
ceval-humanities                                -          naive_average  gen                                                                         52.58
ceval-other                                     -          naive_average  gen                                                                         44.36
ceval-hard                                      -          naive_average  gen                                                                         23.91
ceval                                           -          naive_average  gen                                                                         44.16
01/21 15:42:47 - OpenCompass - INFO - write summary to /root/opencompass/outputs/default/20240121_150157/summary/summary_20240121_150157.txt
01/21 15:42:47 - OpenCompass - INFO - write csv to /root/opencompass/outputs/default/20240121_150157/summary/summary_20240121_150157.csv
(opencompass) root@intern-studio:~/opencompass# 
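表格底部的聚合行(ceval-stem、ceval-hard 等)是各科目准确率的简单算术平均(naive_average)。下面用一个简短的 Python 示例验证本次运行中 ceval-hard 的数值,科目取值直接摘自上表;hard 子集的科目构成按 C-Eval 官方定义列出,仅作示意:

```python
# 从上表摘录的 C-Eval hard 子集各科目准确率(%)
hard_subjects = {
    "advanced_mathematics": 21.05,
    "discrete_mathematics": 18.75,
    "probability_and_statistics": 38.89,
    "college_chemistry": 16.67,
    "college_physics": 26.32,
    "high_school_mathematics": 22.22,
    "high_school_chemistry": 15.79,
    "high_school_physics": 31.58,
}

# naive_average:直接求算术平均,不做任何加权
naive_average = sum(hard_subjects.values()) / len(hard_subjects)
print(round(naive_average, 2))  # 23.91,与表中 ceval-hard 一致
```

可以看到结果与汇总表中的 ceval-hard = 23.91 吻合,说明聚合分数就是子科目分数的等权平均。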