Yi-Coder is a family of open-source AI coding models from 01.AI (零一万物), designed to make code generation, understanding, debugging, and completion more efficient.
Yi-Coder handles contexts of up to 128K tokens, capturing long-range dependencies effectively, which makes it suitable for understanding and generating complex, project-level code.
Yi-Coder supports 52 major programming languages, including Java, Python, C++, and JavaScript, and performs strongly at code generation and cross-file code completion.
The Yi-Coder series stands out on several code-generation benchmarks; on LiveCodeBench in particular, the 9B-parameter version leads in pass rate among models under 10B parameters.
Yi-Coder is also strong at code editing and completion, making it a good fit for integration into development projects to boost developer productivity.
GitHub repository: https://github.com/01-ai/Yi-Coder
I. Environment Setup
1. Python environment
Python 3.10 or later is recommended.
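A one-line check (nothing Yi-Coder-specific) confirms the interpreter meets this requirement:

import sys

# Fail fast if the interpreter is older than the recommended 3.10
assert sys.version_info >= (3, 10), f"Python 3.10+ recommended, found {sys.version}"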
2. Installing the pip packages
pip install torch==2.4.0+cu118 torchvision==0.19.0+cu118 torchaudio==2.4.0 --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
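If the install succeeds, a quick sanity check (a minimal sketch; the expected version string assumes the cu118 wheel above) confirms that PyTorch can see the GPU:

import torch

# Verify the CUDA build installed above and that a GPU is visible
print(torch.__version__)          # expected: 2.4.0+cu118
print(torch.cuda.is_available())  # should print True on a working GPU setup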
3. Download the Yi-Coder-9B model:
git lfs install
git clone https://www.modelscope.cn/models/01ai/Yi-Coder-9B
4. Download the Yi-Coder-9B-Chat model:
git lfs install
git clone https://www.modelscope.cn/models/01ai/Yi-Coder-9B-Chat
5. Download the Yi-Coder-1.5B model:
git lfs install
git clone https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B
6. Download the Yi-Coder-1.5B-Chat model:
git lfs install
git clone https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B-Chat
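After the clones finish, it is worth verifying that the weights load from the local directory rather than triggering a fresh download. A minimal sketch (the ./Yi-Coder-9B-Chat path assumes the clone ran in the current directory):

from modelscope import AutoTokenizer

# Load the tokenizer from the local clone; the path is an assumption based on the clone step above
tokenizer = AutoTokenizer.from_pretrained("./Yi-Coder-9B-Chat")
print(tokenizer("def quick_sort(arr):").input_ids)  # printing token ids confirms the files are intact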
II. Functional Testing
1. Running the test:
(1) Python invocation test via the Transformers API
from modelscope import AutoTokenizer, AutoModelForCausalLM
import torch


def load_model_and_tokenizer(model_path, device):
    """
    Load the tokenizer and model.

    Args:
        model_path (str): Path or model ID of the pre-trained model.
        device (str): Device to run inference on ('cuda' for GPU or 'cpu' for CPU).

    Returns:
        tokenizer: Loaded tokenizer object.
        model: Loaded model object.
    """
    try:
        tokenizer = AutoTokenizer.from_pretrained(model_path)
        # device_map="auto" already places the weights, so the extra .to(device)
        # in the original snippet is unnecessary (and errors on dispatched models)
        model = AutoModelForCausalLM.from_pretrained(
            model_path, device_map="auto", torch_dtype=torch.bfloat16
        ).eval()
        return tokenizer, model
    except Exception as e:
        raise RuntimeError(f"Failed to load model or tokenizer: {e}")


def generate_response(tokenizer, model, prompt, device):
    """
    Generate a response from the model for a given prompt.

    Args:
        tokenizer: Tokenizer object.
        model: Model object.
        prompt (str): The input prompt to generate a response for.
        device (str): Device to perform generation on ('cuda' or 'cpu').

    Returns:
        response (str): Generated response as a string.
    """
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    # Build the prompt in the chat format the model was trained on
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(device)
    generated_ids = model.generate(
        model_inputs.input_ids,
        attention_mask=model_inputs.attention_mask,
        max_new_tokens=1024,
        eos_token_id=tokenizer.eos_token_id
    )
    # Strip the prompt tokens so only newly generated tokens are decoded
    generated_ids = [
        output_ids[len(input_ids):]
        for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return response


def main():
    model_path = "01ai/Yi-Coder-9B-Chat"  # the local clone path from the download step also works
    device = "cuda"
    prompt = "Write a quick sort algorithm."
    try:
        tokenizer, model = load_model_and_tokenizer(model_path, device)
        response = generate_response(tokenizer, model, prompt, device)
        print(response)
    except Exception as e:
        print(f"An error occurred: {e}")


if __name__ == "__main__":
    main()
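The script above exercises the chat model; the base Yi-Coder-9B/1.5B checkpoints downloaded earlier can instead be driven as plain code-completion models, with no chat template. A minimal sketch (the prompt and generation settings here are illustrative, not from the official docs):

from modelscope import AutoTokenizer, AutoModelForCausalLM
import torch

# Base (non-chat) model: continue a code prefix directly, no chat template needed
model_path = "01ai/Yi-Coder-1.5B"  # the smaller base model keeps this test lightweight
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype=torch.bfloat16
).eval()

prompt = "# a quick sort implementation in Python\ndef quick_sort(arr):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output_ids = model.generate(inputs.input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))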
To be continued...
For more details, follow: 杰哥新技术