A Walkthrough of the Qwen2-VL Series Technical Report

Zhihu: Cassie
链接:https://zhuanlan.zhihu.com/p/721908386

Preface

On September 19, 2024, the Qwen team open-sourced the Qwen2-VL-72B model and released its technical report. This post briefly walks through the technical details of Qwen2-VL-72B, the "pride of domestic AI".

1. Contributions

(1) A multilingual model

[Figure: Qwen2-VL's performance on multilingual OCR tasks.]

(2) Support for inputs (image + video + text) of arbitrary resolution and aspect ratio, achieving SoTA results on visual benchmarks such as DocVQA, InfoVQA, RealWorldQA, MTVQA, and MathVista.

Current LVLMs resize input images to a fixed size, e.g., 224×224. The authors argue that this limits the model's ability to capture information at different scales, and that high-resolution images lose too much information in the process.
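To make the scale of that loss concrete, here is a rough back-of-the-envelope sketch in Python. The 14×14 patch size and the 2×2 token merge are taken from the Qwen2-VL report; the rounding rule is a simplification for illustration, not the model's actual preprocessing.

```python
# How much of a document photo survives a fixed 224x224 resize,
# and how the visual token count varies under dynamic resolution.
native_h, native_w = 2048, 1536          # a typical document photo
fixed_pixels = 224 * 224
kept = fixed_pixels / (native_h * native_w)
print(f"fixed resize keeps only {kept:.1%} of the pixels")  # ~1.6%

def num_visual_tokens(h, w, patch=14, merge=2):
    # ViT patch grid, then merge each 2x2 block of patch tokens
    # into a single token before the LLM (per the report).
    return (h // patch) * (w // patch) // (merge * merge)

print(num_visual_tokens(224, 224))    # 64 tokens
print(num_visual_tokens(2048, 1536))  # 3978 tokens
```

With dynamic resolution the token budget scales with the image instead of being fixed, which is what lets the model keep fine detail in high-resolution inputs.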

The vision encoder is under-trained: prior work has obtained performance gains by fine-tuning the vision encoder.

To improve the model's ability to handle images of different resolutions, the authors removed the absolute position embedding (PE) and introduced 2D Rotary Position Embedding (2D-RoPE).
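The report does not spell out the exact implementation, but the core idea of 2D-RoPE can be sketched in a few lines: split each token's channels into two halves and apply an ordinary 1D rotary embedding to each half, one driven by the patch's row index and the other by its column index. The snippet below is a minimal illustration under that assumption (interleaved channel pairing is just one of several RoPE conventions), not the authors' code.

```python
import torch

def rope_1d(x, pos, theta=10000.0):
    # Standard 1D rotary embedding over the last dim of x.
    # x: (num_tokens, dim), pos: (num_tokens,) integer positions.
    dim = x.shape[-1]
    freqs = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
    angles = pos[:, None].float() * freqs[None, :]   # (num_tokens, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]              # interleaved channel pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x, rows, cols):
    # 2D-RoPE: one half of the channels encodes the row index,
    # the other half encodes the column index.
    half = x.shape[-1] // 2
    return torch.cat([rope_1d(x[..., :half], rows),
                      rope_1d(x[..., half:], cols)], dim=-1)

# Patch grid for a 448x224 image with 14x14 patches: 32 rows x 16 cols.
h, w, dim = 32, 16, 64
rows = torch.arange(h).repeat_interleave(w)  # row index of each patch
cols = torch.arange(w).repeat(h)             # column index of each patch
q = torch.randn(h * w, dim)
print(rope_2d(q, rows, cols).shape)          # torch.Size([512, 64])
```

Because the attention scores under RoPE depend only on relative offsets along each axis, the same weights generalize to patch grids of any height and width, which is what makes arbitrary input resolutions workable.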
