Note: PyTorch installed via conda is CPU-only; for a CUDA build, install it with pip instead.
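For example, a CUDA-enabled wheel can be installed from PyTorch's wheel index. A minimal sketch; the `cu121` tag below is an assumption, pick the tag matching your installed driver:

```shell
# Install a CUDA-enabled PyTorch wheel via pip.
# "cu121" is an example CUDA tag; see https://pytorch.org/get-started/locally/
# for the command matching your CUDA/driver version.
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```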
Hugging Face usage notes
Ways to use pretrained models
- The simplest option is the hosted Inference API: both the model files and the model execution stay on Hugging Face's servers.
- https://huggingface.co/docs/api-inference/quicktour
```python
from huggingface_hub.inference_api import InferenceApi

inference = InferenceApi("bert-base-uncased")
print(inference(inputs="The goal of life is [MASK]."))
```
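Under the hood the Inference API is a plain HTTPS endpoint per model id. A minimal sketch of the same call with `requests`; the `HF_API_TOKEN` environment variable name is an assumption, and the request is only sent when a token is actually set:

```python
import os
import requests

# Hosted endpoint for a given model id
API_URL = "https://api-inference.huggingface.co/models/bert-base-uncased"

# Hypothetical env var name holding your Hugging Face access token
token = os.environ.get("HF_API_TOKEN")
headers = {"Authorization": f"Bearer {token}"} if token else {}

payload = {"inputs": "The goal of life is [MASK]."}

if token:
    # POST the input text; the response is JSON (a list of fill-mask candidates)
    response = requests.post(API_URL, headers=headers, json=payload)
    print(response.json())
```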
- Via the transformers library: the model is downloaded into a local cache and runs on your own machine, so a local deep-learning framework (e.g. PyTorch) is required.
```python
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL import Image
import requests
import matplotlib.pyplot as plt
import os

# Work around "duplicate OpenMP runtime" crashes on Windows
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
plt.imshow(image)
plt.show()

# Local path to a manually downloaded copy of google/vit-base-patch16-224
model_path = r"S:\ai_cache\huggingface\manual_download_models\google\vit-base-patch16-224"
processor = ViTImageProcessor.from_pretrained(model_path)
model = ViTForImageClassification.from_pretrained(model_path)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
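The final `argmax` picks only the single best ImageNet class. To also report confidences, the logits can be passed through a softmax first. A minimal offline sketch of that post-processing step, where random numbers stand in for the real `outputs.logits`:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=1000)  # stand-in for outputs.logits[0] (1000 classes)

# Numerically stable softmax: subtract the max before exponentiating
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

# Indices of the five highest-probability classes, best first
top5 = np.argsort(probs)[::-1][:5]
for idx in top5:
    print(f"class {idx}: p={probs[idx]:.4f}")
```

With a real model you would apply the same transform to `outputs.logits[0].detach().numpy()` and map each index through `model.config.id2label`.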