How Deepseek-MOE-16B-chat goes from input to output: a code walkthrough

Motivation

My previous posts tracing large models from input to output all covered plain transformer architectures, so this time the challenge is MoE code. The modifications here are based on the Deepseek-MOE-16B-chat source; compared with deepseek-llm-7b-chat the model structure changes little, and the main adjustments are in the MoE part. Please read the previous post first; content that is identical is not repeated here.

Model structure

# Note: this walkthrough was written and tested against transformers==4.46.0
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "/usr/downloads/deepseek-moe-16b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)

model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "你是谁?"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
# ' 我是DeepSeek Chat,一个基于DeepSeek大语言模型开发的智能AI机器人。我可以回答各种问题,包括但不限于科学、文化、历史、技术、生活等方面的问题。如果您有任何问题,欢迎随时向我提问。'

"""
DeepseekForCausalLM(
  (model): DeepseekModel(
    (embed_tokens): Embedding(102400, 2048)
    (layers): ModuleList(
      (0): DeepseekDecoderLayer(
        (self_attn): DeepseekAttention(
          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (k_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (v_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (rotary_emb): DeepseekRotaryEmbedding()
        )
        (mlp): DeepseekMLP(
          (gate_proj): Linear(in_features=2048, out_features=10944, bias=False)
          (up_proj): Linear(in_features=2048, out_features=10944, bias=False)
          (down_proj): Linear(in_features=10944, out_features=2048, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): DeepseekRMSNorm()
        (post_attention_layernorm): DeepseekRMSNorm()
      )
      (1-27): 27 x DeepseekDecoderLayer(
        (self_attn): DeepseekAttention(
          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (k_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (v_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (rotary_emb): DeepseekRotaryEmbedding()
        )
        (mlp): DeepseekMoE(
          (experts): ModuleList(
            (0-63): 64 x DeepseekMLP(
              (gate_proj): Linear(in_features=2048, out_features=1408, bias=False)
              (up_proj): Linear(in_features=2048, out_features=1408, bias=False)
              (down_proj): Linear(in_features=1408, out_features=2048, bias=False)
              (act_fn): SiLU()
            )
          )
          (gate): MoEGate()
          (shared_experts): DeepseekMLP(
            (gate_proj): Linear(in_features=2048, out_features=2816, bias=False)
            (up_proj): Linear(in_features=2048, out_features=2816, bias=False)
            (down_proj): Linear(in_features=2816, out_features=2048, bias=False)
            (act_fn): SiLU()
          )
        )
        (input_layernorm): DeepseekRMSNorm()
        (post_attention_layernorm): DeepseekRMSNorm()
      )
    )
    (norm): DeepseekRMSNorm()
  )
  (lm_head): Linear(in_features=2048, out_features=102400, bias=False)
)
"""

Loading the tokenizer and the model is exactly the same as for deepseek-llm-7b-chat, but the model structure has a few changes. Here are the differences that can be read straight off the printout:

  • To keep the total parameter count reasonable, hidden_size drops from 4096 to 2048.
  • Layer 0 is the same as before: a plain dense DeepseekMLP.
  • Layers 1-27 replace the MLP with a DeepseekMoE block.
  • Each of the 64 routed experts inside the MoE is itself an MLP, except the intermediate dimension is not expanded; it drops to 1408.
  • Each of layers 1-27 also carries a shared expert, again an MLP, with the intermediate dimension going from 2048 up to 2816.
  • So that each token can choose which experts to use, a Gate layer makes the selection (it is not visible in the printed architecture, but think about it: the input dimension is 2048 and the output dimension is 64, so the simplest choice is a linear layer; the MoEGate code below confirms this).

Parameter count

A quick parameter-count analysis. It can be read directly off the model structure; I am just recording it here.

419430400+2048+(4096+16777216+8650752*64+17301504+131072)*27+67239936+16777216+4096=16375728128
# only about 2.8B parameters are activated per token
419430400+2048+(4096+16777216+8650752*6+17301504+131072)*27+67239936+16777216+4096=2828650496
"""
# Note: none of the linear layers have a bias
# word embedding + lm_head
2048*102400*2=419430400
# RMSNorm after the final layer
2048
# parameters in layer 0
# LN (input_layernorm + post_attention_layernorm)
2048*2=4096
# self_attn
2048*2048*4=16777216
# MLP
2048*10944*3=67239936

# parameters in each of layers 1-27
# LN (input_layernorm + post_attention_layernorm)
2048*2=4096
# self_attn
2048*2048*4=16777216
# per routed expert (64 of them)
2048*1408*3=8650752
# shared expert
2048*2816*3=17301504
# gate
2048*64=131072
"""

Unchanged parts

The main flow diagram, the model wrapper, the MLP layer, RMSNorm, the attention block, and the rotary embedding are all unchanged; only the Llama names in the code are renamed to Deepseek.
This part is omitted here.

DecoderLayer

Only the initialization differs slightly here; the flow diagram stays the same.

class DeepseekDecoderLayer(torch.nn.Module):
    """A single transformer layer.

    Transformer layer takes input with size [s, b, h] and returns an
    output of the same size.
    """

    def __init__(self, config: DeepseekConfig, layer_idx, device=None):
        super(DeepseekDecoderLayer, self).__init__()
        # Layernorm on the input data.
        self.input_layernorm = DeepseekRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        # Self attention.
        self.self_attn = DeepseekAttention(config=config, layer_idx=layer_idx)

        # Layernorm on the attention output
        self.post_attention_layernorm = DeepseekRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        # MLP
        # ------------------------------------------------------------------------------
        # the only change in this layer
        # when layer_idx meets the condition below, use DeepseekMoE; otherwise use DeepseekMLP
        # with this config, layers 1 through 27 all end up using DeepseekMoE
        self.mlp = DeepseekMoE(config) if (config.n_routed_experts is not None and  \
                                           layer_idx >= config.first_k_dense_replace and layer_idx % config.moe_layer_freq == 0) \
                                        else DeepseekMLP(config)
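With the config values this checkpoint ships with (assumed here from my reading of its config.json: first_k_dense_replace=1, moe_layer_freq=1, n_routed_experts=64, 28 layers), the condition selects every layer except layer 0. A quick check:

# Assumed config values; adjust if your config.json differs.
first_k_dense_replace, moe_layer_freq, num_layers = 1, 1, 28
moe_layers = [idx for idx in range(num_layers)
              if idx >= first_k_dense_replace and idx % moe_layer_freq == 0]
print(moe_layers)  # [1, 2, ..., 27] -> layer 0 keeps the dense DeepseekMLP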

MOEGate

(figure: MoEGate flow diagram)


class MoEGate(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.top_k = config.num_experts_per_tok
        self.n_routed_experts = config.n_routed_experts

        self.scoring_func = config.scoring_func
        self.alpha = config.aux_loss_alpha
        self.seq_aux = config.seq_aux

        # topk selection algorithm
        self.norm_topk_prob = config.norm_topk_prob
        self.gating_dim = config.hidden_size
        self.weight = nn.Parameter(torch.empty((self.n_routed_experts, self.gating_dim)))
        self.reset_parameters()

    def reset_parameters(self) -> None:
        import torch.nn.init  as init
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
    
    def forward(self, hidden_states):
        bsz, seq_len, h = hidden_states.shape        
        ### compute gating score
        hidden_states = hidden_states.view(-1, h)# bsz*seq_len, h
        logits = F.linear(hidden_states, self.weight, None)# weight is (64, 2048), so this yields (bsz*seq_len, 64)
        scores = logits.softmax(dim=-1)
 
        ### select top-k experts
        topk_weight, topk_idx = torch.topk(scores, k=self.top_k, dim=-1, sorted=False)# bsz*seq_len, 6
        
        ### norm gate to sum 1
        if self.top_k > 1 and self.norm_topk_prob:
            denominator = topk_weight.sum(dim=-1, keepdim=True) + 1e-20
            topk_weight = topk_weight / denominator

        ### expert-level auxiliary (load-balancing) loss; it is only needed for training, so it is skipped in this inference walkthrough
        aux_loss = None
        return topk_idx, topk_weight, aux_loss

The overall operation follows the flow diagram and is fairly simple.
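To make the shapes concrete, the same gating math can be run standalone on random data (a sketch: the random weight below merely stands in for the learned MoEGate.weight):

import torch
import torch.nn.functional as F

bsz, seq_len, h, n_routed_experts, top_k = 1, 8, 2048, 64, 6
hidden_states = torch.randn(bsz, seq_len, h)
weight = torch.randn(n_routed_experts, h)               # stand-in for MoEGate.weight

flat = hidden_states.view(-1, h)                         # (bsz*seq_len, 2048)
scores = F.linear(flat, weight, None).softmax(dim=-1)    # (bsz*seq_len, 64)
topk_weight, topk_idx = torch.topk(scores, k=top_k, dim=-1, sorted=False)
print(topk_idx.shape, topk_weight.shape)                 # torch.Size([8, 6]) torch.Size([8, 6])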

DeepseekMoE

(figure: DeepseekMoE flow diagram)
As the flow diagram shows, the gate turns hidden_states into topk_idx and topk_weight; these two are fed together with hidden_states into moe_infer to get y, and y is added to the output of shared_experts to produce the final result. The flow is clear; the tricky part is moe_infer.


class DeepseekMoE(nn.Module):
    """
    A mixed expert module containing shared experts.
    """
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.num_experts_per_tok = config.num_experts_per_tok# 6
        self.experts = nn.ModuleList([DeepseekMLP(config, intermediate_size = config.moe_intermediate_size) for i in range(config.n_routed_experts)])
        self.gate = MoEGate(config)
        if config.n_shared_experts is not None:
            intermediate_size = config.moe_intermediate_size * config.n_shared_experts
            self.shared_experts = DeepseekMLP(config=config, intermediate_size = intermediate_size)
    
    def forward(self, hidden_states):# the part shown in the flow diagram
        identity = hidden_states
        orig_shape = hidden_states.shape
        topk_idx, topk_weight, aux_loss = self.gate(hidden_states)
        hidden_states = hidden_states.view(-1, hidden_states.shape[-1])# bsz*seq_len,h
        flat_topk_idx = topk_idx.view(-1)# bsz*seq_len*6
        y = self.moe_infer(hidden_states, flat_topk_idx, topk_weight.view(-1, 1)).view(*orig_shape)
        if self.config.n_shared_experts is not None:
            y = y + self.shared_experts(identity)
        return y
    
    @torch.no_grad()# no gradients needed: this walkthrough only covers inference
    def moe_infer(self, x, flat_topk_idx, topk_weight):
        expert_cache = torch.zeros_like(x)# bsz*seq_len, h
        idxs = flat_topk_idx.argsort()# indices that sort the flattened expert ids in ascending order
        tokens_per_expert = flat_topk_idx.bincount().cpu().numpy().cumsum(0)# cumulative count of (token, expert) assignments per expert
        token_idxs = idxs // self.num_experts_per_tok# map each sorted position back to its token row
        for i, end_idx in enumerate(tokens_per_expert):
            start_idx = 0 if i == 0 else tokens_per_expert[i-1]
            if start_idx == end_idx:
                continue
            expert = self.experts[i]
            exp_token_idx = token_idxs[start_idx:end_idx]
            expert_tokens = x[exp_token_idx]
            expert_out = expert(expert_tokens)
            expert_out.mul_(topk_weight[idxs[start_idx:end_idx]])
            expert_cache.scatter_reduce_(0, exp_token_idx.view(-1, 1).repeat(1, x.shape[-1]), expert_out, reduce='sum')
        return expert_cache

Let me first describe what moe_infer is doing, following "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer". As the figure below shows, it performs the right-hand part of the diagram: use the gate's result to find the corresponding experts for each token, multiply each expert's output by its weight, and finally sum the results.
(figure: MoE routing, from the sparsely-gated MoE paper)
If it were up to me, I would simply iterate token by token. But first, the official code.

import torch
from torch import nn
from transformers import AutoModelForCausalLM

model_name = "/usr/downloads/deepseek-moe-16b-chat"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
# basic setup: number of experts per token
num_experts_per_tok = 6
# what the gate would produce: 8 tokens, 6 experts each, flattened
flat_topk_idx = torch.tensor([12, 36, 39, 56, 58, 40, 6, 9, 26, 44, 50, 55, 8, 26, 30, 32, 63, 16, 6, 16, 28, 36, 49, 48, 30, 35, 57, 59, 61, 18, 28, 35, 40, 54, 55, 63, 2, 21, 56, 59, 63, 52, 23, 32, 33, 38, 39, 25]).to(model.device)
# the gate weight for each selected expert
topk_weight = torch.randn(48,1, dtype=torch.bfloat16).to(model.device)
# hidden states for the 8 tokens
hidden_states = torch.randn(8,2048, dtype=torch.bfloat16).to(model.device)
# grab the 64 routed experts of layer 1 from the model
experts = model.model.layers[1].mlp.experts

# the model's approach
def moe_infer(x, flat_topk_idx, topk_weight):
    expert_cache = torch.zeros_like(x)# bsz*seq_len, h
    idxs = flat_topk_idx.argsort()# indices that sort the flattened expert ids in ascending order
    tokens_per_expert = flat_topk_idx.bincount().cpu().numpy().cumsum(0)
    token_idxs = idxs // num_experts_per_tok
    for i, end_idx in enumerate(tokens_per_expert):
        start_idx = 0 if i == 0 else tokens_per_expert[i-1]
        if start_idx == end_idx:
            continue
        expert = experts[i]
        exp_token_idx = token_idxs[start_idx:end_idx]
        expert_tokens = x[exp_token_idx]
        expert_out = expert(expert_tokens)
        expert_out.mul_(topk_weight[idxs[start_idx:end_idx]])
        expert_cache.scatter_reduce_(0, exp_token_idx.view(-1, 1).repeat(1, x.shape[-1]), expert_out, reduce='sum')
    return expert_cache

final = moe_infer(hidden_states, flat_topk_idx, topk_weight)

After writing the token-by-token version, I noticed something I had not paid attention to before: in bfloat16 the order in which values are added affects the final result. The same is true for float16 and float32, while integer addition is unaffected.

import torch
a = torch.randn(10,40,dtype=torch.bfloat16)
b = torch.randn(10,40,dtype=torch.bfloat16)
c = torch.randn(10,40,dtype=torch.bfloat16)
aa = a + b + c
bb = a + c + b
aa == bb
"""
tensor([[ True, False,  True,  True, False,  True,  True,  True,  True, False,
         False, False,  True,  True,  True, False,  True,  True,  True,  True,
          True,  True, False,  True,  True,  True,  True,  True,  True,  True,
         False, False,  True,  True,  True,  True,  True,  True, False,  True]
         ....
"""
-0.2090-0.3535+0.8047
# 0.24219999999999997
-0.2090+0.8047-0.3535
# 0.24220000000000003
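As an aside, this is ordinary floating-point non-associativity. With plain Python floats, math.fsum gives a correctly rounded sum that does not depend on the order of the terms:

import math
vals = [-0.2090, -0.3535, 0.8047]
print(math.fsum(vals))              # correctly rounded sum
print(math.fsum(reversed(vals)))    # identical, regardless of summation order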

Token-by-token iteration: the wrong way

# with a different addition order, the result does not match exactly
def moe_infer_1(x, flat_topk_idx, topk_weight):
    expert_cache = torch.zeros_like(x)
    for i, token in enumerate(x):# iterate over tokens
        for j in range(6):# iterate over the 6 experts chosen for this token
            # print (flat_topk_idx[i*6+j])
            expert = experts[flat_topk_idx[i*6+j]]# look up the corresponding expert
            expert_out = expert(token)# run the expert on this token
            expert_cache[i] = expert_cache[i] + expert_out.mul_(topk_weight[i*6+j])# accumulate the weighted expert outputs
    return expert_cache

final1 = moe_infer_1(hidden_states, flat_topk_idx, topk_weight)# different addition order, so the result is not an exact match
(final == final1).all()
# tensor(False): not an exact element-wise match
# tokens 1 and 5 do match, because their expert ids happen to already be in ascending order

Token-by-token iteration: the right way

def moe_infer_2(x, flat_topk_idx, topk_weight):
    expert_cache = torch.zeros_like(x)
    for i, token in enumerate(x):
        aa, bb = flat_topk_idx[i*6:(i+1)*6].sort()# sort this token's experts first: aa holds the sorted expert ids, bb their original positions
        for j in range(6):
            expert = experts[aa[j]]
            expert_out = expert(token)
            expert_cache[i] = expert_cache[i] + expert_out.mul_(topk_weight[i*6 + bb[j]])
    return expert_cache
final2 = moe_infer_2(hidden_states, flat_topk_idx, topk_weight)# with the addition order matched, the result agrees exactly
(final == final2).all()
# tensor(True): every element matches
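If the goal is only to confirm that two implementations agree up to floating-point reordering, a tolerance-based comparison is a less brittle check than exact equality (a suggestion on top of the original code):

diff = (final.float() - final1.float()).abs()
print(diff.max())  # expected to be on the order of bf16 rounding error, not a structural mismatch
print(torch.allclose(final.float(), final1.float(), rtol=1e-2, atol=1e-2))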

The official approach

The official approach, annotated

for i, end_idx in enumerate(tokens_per_expert):
    start_idx = 0 if i == 0 else tokens_per_expert[i-1]# cumulative token count up to the previous expert
    if start_idx == end_idx:# equal cumulative counts mean no token selected this expert
        continue
    expert = experts[i]# i is the expert index
    exp_token_idx = token_idxs[start_idx:end_idx]# row indices of the tokens routed to this expert
    expert_tokens = x[exp_token_idx]# gather those tokens' hidden states
    expert_out = expert(expert_tokens)# run the expert on all of its tokens at once
    #print (expert_out.shape)
    #print (topk_weight[idxs[start_idx:end_idx]].shape)
    expert_out.mul_(topk_weight[idxs[start_idx:end_idx]])# look up each token's gate weight and scale the expert output
    expert_cache.scatter_reduce_(0, exp_token_idx.view(-1, 1).repeat(1, x.shape[-1]), expert_out, reduce='sum')# scatter-add each token's weighted output back into its row of the cache
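To make the index bookkeeping concrete, here is a tiny hand-sized example (2 tokens, 2 experts per token, 3 experts in total; all values made up). It also shows why integer division by num_experts_per_tok recovers the token row: the flattened expert ids are laid out token-major.

import torch

num_experts_per_tok = 2
# token 0 picked experts [2, 0]; token 1 picked experts [0, 1]
flat_topk_idx = torch.tensor([2, 0, 0, 1])

idxs = flat_topk_idx.argsort()                                        # e.g. tensor([1, 2, 3, 0]); the two 0-entries may swap
tokens_per_expert = flat_topk_idx.bincount().cpu().numpy().cumsum(0)  # array([2, 3, 4])
token_idxs = idxs // num_experts_per_tok                              # tensor([0, 1, 1, 0]) for the ordering above

# expert 0 handles token_idxs[0:2] -> tokens 0 and 1
# expert 1 handles token_idxs[2:3] -> token 1
# expert 2 handles token_idxs[3:4] -> token 0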

A few notes:

  • scatter_reduce_: see the separate write-up on PyTorch's scatter_add_ function; a minimal example follows after this list.
  • Iterating by expert needs at most 64 batched forward passes (one per expert), whereas iterating by token needs 6 separate expert calls per token, i.e. 6 * num_of_token small forward passes; when the number of tokens is large, the per-token version is far more expensive.
  • I have tried to write this as clearly as I can, but the indexing logic still rewards careful thought (the worked example above may help).
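A minimal scatter_reduce_ example in the same spirit (made-up shapes), showing how expert outputs that belong to the same token row get summed into the cache:

import torch

expert_cache = torch.zeros(3, 4)                  # 3 tokens, hidden size 4
expert_out = torch.ones(2, 4)                     # two expert outputs...
exp_token_idx = torch.tensor([1, 1])              # ...both routed to token 1

index = exp_token_idx.view(-1, 1).repeat(1, 4)    # repeat the row index across the hidden dim
expert_cache.scatter_reduce_(0, index, expert_out, reduce='sum')
print(expert_cache[1])                            # tensor([2., 2., 2., 2.])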

The last bit of code

That covers essentially all the code; finally, here are the imports and the actual invocation.

import copy
import re

import torch
import torch.nn.functional as F
from torch import nn
from typing import Optional, Tuple, List, Callable

from transformers.modeling_utils import PreTrainedModel
from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS
from configuration_deepseek import DeepseekConfig# note the small change: the config module file is configuration_deepseek (lowercase deepseek), not Deepseek
from transformers.activations import ACT2FN
import math

Save this code as deepseek.py, drop it into the deepseek-MOE-16b-chat model directory, and it can be used as-is.

from deepseek import *
from transformers import AutoTokenizer
model_path = "/usr/downloads/deepseek-moe-16b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = DeepseekForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

messages = [
    {"role": "user", "content": "你是谁?"}
]

response = model.chat(tokenizer, messages)
print (response)

The code comes to about 550 lines, versus roughly 1500 originally, so the small goal of cutting the code in half is basically achieved (success).

Closing remarks

I already knew how MoE computes its output (gate weights times MLP outputs), but had never thought about the concrete implementation. Deepseek was a good excuse to dig into it properly, and it is a clever idea. The code went from 1500+ lines down to under 600, simple and easy to follow. Success.
