Some problems I ran into when using the pretrained BLIP-2 model to extract features — specifically, the extracted features carry no gradient. The setup follows the official BLIP-2 feature extraction example:
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess
raw_image = Image.open("../docs/_static/merlion.png").convert("RGB")
caption = "a large fountain spewing water into the air"
# setup device to use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, txt_processors = load_model_and_preprocess(name="blip2_feature_extractor", model_type="pretrain", is_eval=True, device=device)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
text_input = txt_processors["eval"](caption)
sample = {"image": image, "text_input": [text_input]}
features_multimodal = model.extract_features(sample)
print(features_multimodal.multimodal_embeds.shape)
# torch.Size([1, 32, 768]), 32 is the number of queries
features_image = model.extract_features(sample, mode="image")
features_text = model.extract_features(sample, mode="text")
print(features_image.image_embeds.shape)
# torch.Size([1, 32, 768])
print(features_text.text_embeds.shape)
# torch.Size([1, 12, 768])
# low-dimensional projected features
print(features_image.image_embeds_proj.shape)
# torch.Size([1, 32, 256])
print(features_text.text_embeds_proj.shape)
# torch.Size([1, 12, 256])
similarity = (features_image.image_embeds_proj @ features_text.text_embeds_proj[:,0,:].t()).max()
print(similarity)
# tensor(0.3642) — .max() with no dim argument returns a 0-dim tensor
The crucial call here is model.extract_features(). I use it to extract features for a multimodal-retrieval proxy task: the extracted features are post-processed and fed into a loss function. After the loss is computed, autograd runs, and the gradients are backpropagated, the error is raised at the scaler.step(optimizer) step. The code:
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
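For reference, these three calls sit inside a standard torch.cuda.amp training step. A minimal self-contained sketch (the linear model, optimizer, and batch below are toy stand-ins, not the original retrieval code):

```python
import torch

# Toy stand-ins for the real model, optimizer, and batch (assumptions).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(4, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# GradScaler degenerates to a pass-through when CUDA is unavailable.
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

x = torch.randn(8, 4, device=device)
target = torch.randn(8, 2, device=device)

optimizer.zero_grad(set_to_none=True)
with torch.autocast(device_type=device, enabled=torch.cuda.is_available()):
    loss = torch.nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()
scaler.step(optimizer)  # the step where the error in question is raised
scaler.update()
```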
The error message reports NaN values, namely: "No inf checks were recorded for this optimizer."
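In my experience this assertion fires when scaler.scale(loss).backward() produced gradients for none of the optimizer's parameters, so no inf/NaN checks were ever recorded for it. A quick way to diagnose it is to list the parameters whose .grad is still None right after backward() (toy model below as a stand-in for the real network):

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for the real network
loss = model(torch.randn(8, 4)).sum()
loss.backward()
# Parameters that never received a gradient are what breaks scaler.step()
missing = [name for name, p in model.named_parameters()
           if p.requires_grad and p.grad is None]
print(missing)  # [] here: every parameter received a gradient
```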
After a day or two of struggling, I printed the gradients of the features and found the cause: the features coming out of extract_features() have no gradient. I also tried setting requires_grad = True on the downstream features, but that still did not fix the problem of the gradient being None.
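This is consistent with extract_features() being wrapped in @torch.no_grad() in the LAVIS source (which appears to be the case for the Q-Former models): tensors created under no_grad carry no grad_fn, and setting requires_grad = True afterwards only turns them into fresh leaf tensors, so backpropagation stops there and never reaches the model weights. A minimal demonstration:

```python
import torch

w = torch.randn(3, requires_grad=True)  # stands in for model weights

with torch.no_grad():        # mimics @torch.no_grad() on extract_features
    feat = w * 2             # feat has no grad_fn: detached from w

feat.requires_grad_(True)    # only makes feat a *new* leaf tensor
loss = feat.sum()
loss.backward()

print(feat.grad)  # gradients exist for feat itself...
print(w.grad)     # ...but w.grad is None: backward stopped at the new leaf
```

If gradients through these features are required, the workable route is to bypass the decorator — e.g. copy the body of extract_features into a grad-enabled forward, or call the vision encoder and Q-Former directly — rather than flipping requires_grad on the outputs.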