Multimodal AI large model OmniLMM: full Docker installation walkthrough

This post walks through deploying OmniLMM step by step: creating a Docker container, installing Python 3.11, conda, and Git, downloading and configuring the required dependencies such as transformers, and finally getting the model running.

I'd been following this project for a long time but never had a GPU server to run it on. Now that I finally have one, time to deploy~~

Installing OmniLMM takes quite a few resources to prepare.

1. First, create a Docker container, update the package index, and install Python 3.11, Git, etc.

docker run -p xxxx:5000 --gpus all -it --name xxxx ubuntu bash

apt-get update

apt-get install -y python3.11

apt-get install -y git

2. Install conda. Download an installer (both links below work, but only one is needed; the steps that follow use the Anaconda one):

wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
wget -c https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh

3. Make the installer executable and run it

chmod 775 Anaconda3-2020.11-Linux-x86_64.sh

bash Anaconda3-2020.11-Linux-x86_64.sh

During installation, answer yes and press Enter at the prompts; the usual routine.

4. Once installed, update your shell config

vim ~/.bashrc

export PATH="/home/qinzuoyu/anaconda3/bin:$PATH"

source ~/.bashrc

The qinzuoyu/anaconda3 path is just an example; replace it with the install path you chose when running "Anaconda3-2020.11-Linux-x86_64.sh".

5. Verify the installed version

conda --version

6. Install OmniLMM, and install the Python dependencies into a fixed directory. (If you plan to package the whole container into an image later anyway, the target directory doesn't matter much~)

git clone https://github.com/OpenBMB/OmniLMM.git

cd OmniLMM

conda create -n OmniLMM python=3.11 -y

conda activate OmniLMM

pip3 install -r requirements.txt -t /home/aistudio/external-libraries

pip3 install transformers -t /home/aistudio/external-libraries
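Installing with `pip -t` drops the packages into a plain directory rather than the interpreter's site-packages, so Python will only find them if that directory is added to `sys.path` at runtime; this is why the test script later starts with `sys.path.append(...)`. A minimal, self-contained illustration of the mechanism (using a throwaway directory and a dummy module, not the real paths):

```python
import os
import sys
import tempfile

# Simulate an "external-libraries" directory holding a package that was
# installed with `pip install -t <dir>` (a dummy module stands in here).
libdir = tempfile.mkdtemp()
with open(os.path.join(libdir, 'dummy_lib.py'), 'w') as f:
    f.write('VALUE = 42\n')

# Without this line, `import dummy_lib` would raise ModuleNotFoundError.
sys.path.append(libdir)
import dummy_lib

print(dummy_lib.VALUE)  # prints 42
```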

7. Download the most important part!!!! The pretrained weights and the code. These are links I dug up online a while back. Note: the URLs contain `&` characters, so quote them in the shell or the command will be cut short:

wget -c -O OmniLMM-3B.zip 'https://bj.bcebos.com/v1/ai-studio-online/633083e6c00a42e0b7d38aa670e5dac051ee5370dd584ecbae488d1bf4b15379?responseContentDisposition=attachment%3B%20filename%3DOmniLMM-3B.zip&authorization=bce-auth-v1%2F5cfe9a5e1454405eb2a975c43eace6ec%2F2024-02-28T12%3A43%3A07Z%2F-1%2F%2F5c6ec0544d5e341eb12247218df5d036679fe4011390343230ee964696b239d3'


wget -c -O model-00002-of-00002.safetensors 'https://bj.bcebos.com/v1/ai-studio-online/55892464167448b88ed382dac075f3cb4bbce592d9924751a5fc4aaa7cc0e40a?responseContentDisposition=attachment%3B%20filename%3Dmodel-00002-of-00002.safetensors&authorization=bce-auth-v1%2F5cfe9a5e1454405eb2a975c43eace6ec%2F2024-02-28T12%3A50%3A53Z%2F-1%2F%2Fdb73074875db9c2497608cd762e79bb7fb794f31ccaf183d9069c6ae658eb78c'


wget -c -O model-00001-of-00002.safetensors 'https://bj.bcebos.com/v1/ai-studio-online/75d7a035975e470698b1722dbea05656c84486363eaf4e868d8d898482aced63?responseContentDisposition=attachment%3B%20filename%3Dmodel-00001-of-00002.safetensors&authorization=bce-auth-v1%2F5cfe9a5e1454405eb2a975c43eace6ec%2F2024-02-28T13%3A28%3A56Z%2F-1%2F%2Fe1babd79b7ce96d75f6b2c94313bb40acad74007c319621814bf4df109983871'

Once downloaded, put them in the /home/aistudio/data folder, then

unzip OmniLMM-3B.zip

These are only the OmniLMM-3B weights, but the results are already quite good~
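Since the weights arrive as a zip plus two separate .safetensors shards, it is easy to end up with a missing file. Here is a small hypothetical helper to sanity-check the model directory before loading; the two shard names come from the downloads above, and the idea that these are the only required files is an assumption:

```python
import os

# Files we expect in the model directory after unzipping. The two shard
# names come from the wget downloads above (assumed minimal file list).
EXPECTED_FILES = [
    'model-00001-of-00002.safetensors',
    'model-00002-of-00002.safetensors',
]

def missing_files(model_dir, expected=EXPECTED_FILES):
    """Return the expected model files that are absent from model_dir."""
    return [name for name in expected
            if not os.path.exists(os.path.join(model_dir, name))]
```

For example, `missing_files('/home/aistudio/data')` should return an empty list before you try `from_pretrained` on that directory.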

8. Put everything under /home/aistudio/ (or wherever you prefer; the scripts below reference these paths~)

9. Then I grabbed a ready-made Python test script found online

import sys
sys.path.append('/home/aistudio/external-libraries')  # where pip -t installed the deps

from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the model and tokenizer from the local weights directory.
model = AutoModel.from_pretrained('/home/aistudio/data', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('/home/aistudio/data', trust_remote_code=True)
model.eval().cuda()

image = Image.open('/home/aistudio/demo.jpg').convert('RGB')
question = '这张图片有哪些内容?'  # "What is in this image?"
msgs = [{'role': 'user', 'content': question}]

res, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)

print(res)

10. Tweak the code above slightly so it serves an HTTP endpoint that accepts an uploaded file, and you're basically done~

Vue side:

<div>
  <input type="file" @change="uploadImage" />
</div>

methods: {
  uploadImage(event) {
    const file = event.target.files[0];
    const formData = new FormData();
    formData.append('image', file);

    // POST the file to the backend endpoint with axios
    axios.post('http://127.0.0.1:5000/upload', formData, {
      headers: {
        'Content-Type': 'multipart/form-data'
      }
    })
    .then(response => {
      console.log(response.data.message);
    })
    .catch(error => {
      console.error(error);
    });
  }
}

Python side:

import sys
sys.path.append('/home/aistudio/external-libraries')
import datetime
import os

from flask import Flask, request, jsonify
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('/home/aistudio/data', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('/home/aistudio/data', trust_remote_code=True)
model.eval().cuda()

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = '/home/aistudio/image/'


@app.route('/upload', methods=['POST'])
def upload_image():
    if 'image' not in request.files:
        return jsonify({'message': 'No file part', 'success': False}), 400

    file = request.files['image']
    if file.filename == '':
        return jsonify({'message': 'No selected file', 'success': False}), 400

    # File-type validation could be added here.
    timestamp = datetime.datetime.now().timestamp()
    filename = '{}{}'.format(timestamp, file.filename)
    file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
    image = Image.open(os.path.join(app.config['UPLOAD_FOLDER'], filename)).convert('RGB')
    question = '这张图片有哪些内容?'  # "What is in this image?"
    msgs = [{'role': 'user', 'content': question}]
    res, context, _ = model.chat(
        image=image,
        msgs=msgs,
        context=None,
        tokenizer=tokenizer,
        sampling=True,
        temperature=0.7
    )
    return jsonify({'message': res, 'success': True}), 200


if __name__ == '__main__':
    # Bind to 0.0.0.0 so the port is reachable through docker's -p mapping.
    app.run(host='0.0.0.0', port=5000)
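One weak spot in the handler above: the saved name is built from `file.filename`, which comes straight from the client and may contain path separators or other unsafe characters. A hedged sketch of a safer name builder, a hypothetical stdlib-only helper rather than anything from OmniLMM or Flask:

```python
import datetime
import os
import re

def timestamped_name(original_name):
    """Build a collision-free, sanitized filename for an uploaded file."""
    base = os.path.basename(original_name)        # drop any path components
    base = re.sub(r'[^A-Za-z0-9._-]', '_', base)  # keep only safe characters
    return '{}_{}'.format(datetime.datetime.now().timestamp(), base)
```

For example, `timestamped_name('my photo.jpg')` yields something like `1709123456.789_my_photo.jpg`, which can replace the inline `filename` construction in the handler.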

The only downside is the sheer number of dependencies to download... about 20 GB in total~ painful.

And that's it, basic usage is up and running~ hope this helps!
