The problem of the Hugging Face Inference API returning content that is too short

By default, the content returned by the Hugging Face Inference API is very short. This can be adjusted with the max_new_tokens parameter:

Detailed parameters

When sending your request, you should send a JSON-encoded payload. Here are all the options; a combined payload sketch follows the list below.

All parameters
inputs (required): a string to be generated from.
parameters: a dict containing the following keys:
top_k (Default: None). Integer defining the number of top tokens considered within the sample operation to create new text.
top_p (Default: None). Float defining the tokens that are within the sample operation of text generation. Tokens are added to the sample, from most probable to least probable, until the sum of their probabilities is greater than top_p.
temperature (Default: 1.0). Float (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always taking the highest score, and 100.0 gets close to uniform probability.
repetition_penalty (Default: None). Float (0.0-100.0). The more a token is used within the generation, the more it is penalized so that it is not picked in successive generation passes.
max_new_tokens (Default: None). Int (0-250). The number of new tokens to be generated; this does not include the input length, it is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and the length of the generated text.
max_time (Default: None). Float (0-120.0). The maximum amount of time in seconds that the query should take. Network overhead can add to this, so it is a soft limit. Use it in combination with max_new_tokens for best results.
return_full_text (Default: True). Bool. If set to False, the returned results will not contain the original query, which makes prompting easier.
num_return_sequences (Default: 1). Integer. The number of candidate sequences you want to be returned.
do_sample (Optional: True). Bool. Whether or not to use sampling; greedy decoding is used otherwise.
options: a dict containing the following keys:
use_cache (Default: true). Boolean. There is a cache layer on the Inference API to speed up requests we have already seen. Most models can use those results as-is, since models are deterministic (meaning the results will be the same anyway). However, if you use a non-deterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a genuinely new query.
wait_for_model (Default: false). Boolean. If the model is not ready, wait for it instead of receiving a 503. This limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as that limits the hanging in your application to known places.
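
To make the nesting concrete, here is a rough sketch of a combined payload built from the list above; the prompt and the specific values are illustrative assumptions, not recommendations:

payload = {
    "inputs": "please write a LRU cache in C++ ",
    "parameters": {
        "max_new_tokens": 250,      # ask for a longer completion (0-250)
        "temperature": 0.7,         # illustrative value; 1.0 is regular sampling
        "return_full_text": False,  # drop the original prompt from the result
    },
    "options": {
        "use_cache": True,          # reuse cached results for identical requests
        "wait_for_model": True,     # wait for a loading model instead of getting a 503
    },
}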

Python example:

import requests

API_URL = "https://api-inference.huggingface.co/models/xxxxxxxxx"
headers = {"Authorization": "Bearer xxxxxxxxxxx"}

def query(payload):
    # POST the JSON-encoded payload to the Inference API and return the parsed JSON response
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "please write a LRU cache in C++ ",
    # Raise max_new_tokens so the completion is not cut off at the default length
    "parameters": {"max_new_tokens": 250},
})

print(output)
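
For text-generation models the Inference API usually returns a list like [{"generated_text": "..."}], while errors come back as a dict with an "error" key; a minimal, defensive way to read the result under that assumption:

# Assumes the usual text-generation response shape; adjust if your model differs.
if isinstance(output, list) and output and "generated_text" in output[0]:
    print(output[0]["generated_text"])
else:
    print("Unexpected response:", output)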
