- Environment preparation:
- Assumes the NVIDIA driver, CUDA, and cuDNN are already installed
- Assumes Docker (docker-ce) is already installed
- Installing and testing the TensorFlow Serving Docker image
- Clone the source code used to verify that the installation succeeded:
git clone https://github.com/tensorflow/serving
- Deploying and testing the CPU version
- Download the image: docker pull tensorflow/serving:latest
- Running docker images should now show an extra tensorflow/serving image
- Start the TF Serving CPU test service with Docker; either of the following two commands works:
- docker run -dt -p 8501:8501 -v "/{directory where you cloned the source in the previous step}/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
- docker run -d -p 8501:8501 --mount type=bind,source=/home/bixian/work_space/tf_serving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu/,target=/models/half_plus_two -e MODEL_NAME=half_plus_two -t --name testserver tensorflow/serving
- Access the running service:
- curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
- If it returns { "predictions": [2.5, 3.0, 4.5] }, the test succeeded
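The half_plus_two test model simply computes y = 0.5 * x + 2 for each input, which is where those three predictions come from. A quick local sanity check in plain Python (no running server needed; the payload mirrors the curl command above):

```python
import json

# The half_plus_two test model computes y = 0.5 * x + 2 per input value.
payload = {"instances": [1.0, 2.0, 5.0]}  # same body the curl command sends
expected = [0.5 * x + 2.0 for x in payload["instances"]]
print(json.dumps({"predictions": expected}))  # {"predictions": [2.5, 3.0, 4.5]}
```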
- Deploying and testing the GPU version
- Install nvidia-docker2
- If nvidia-docker version 1.0 is installed, remove it first:
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker
- Add the repository:
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
- Install nvidia-docker2. If you have already customized /etc/docker/daemon.json, it may get overwritten here; reapply your changes after the installation finishes
sudo apt-get install -y nvidia-docker2
- You can also browse https://hub.docker.com/r/nvidia/cuda/ to find the Docker image matching your local CUDA version
- Reload the Docker daemon configuration:
sudo pkill -SIGHUP dockerd
- Check the nvidia-docker2 installation: sudo apt show nvidia-docker2
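As a reference for the daemon.json note above: on a default install, nvidia-docker2 registers the nvidia runtime by writing roughly the following to /etc/docker/daemon.json (shown here so you can merge your own settings back in if your file was replaced):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```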
- Install the nvidia/cuda image
- My system is Ubuntu 16.04 with CUDA 9.0, so I install the 9.0-devel tag:
docker pull nvidia/cuda:9.0-devel
- Test that nvidia-docker and the CUDA image run:
docker run --runtime=nvidia --rm nvidia/cuda:9.0-devel nvidia-smi
- Start the TensorFlow Serving GPU test service with Docker:
test_path=/{directory where you cloned the tf serving source}/serving/tensorflow_serving/servables/tensorflow/testdata
docker run --runtime=nvidia -p 8501:8501 \
--mount type=bind,source=$test_path/saved_model_half_plus_two_gpu,target=/models/half_plus_two \
-e MODEL_NAME=half_plus_two -t tensorflow/serving:1.12.0-gpu &
- Access the running service:
curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
- If it returns { "predictions": [2.5, 3.0, 4.5] }, the test succeeded
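The curl request above can equally be sent from Python. Below is a minimal sketch using only the standard library; build_predict_request and predict are illustrative helper names (not part of any TensorFlow Serving client API), and the predict call assumes the container started above is listening on localhost:8501:

```python
import json
import urllib.request

def build_predict_request(instances, host="localhost", port=8501,
                          model="half_plus_two"):
    # Build the same URL and JSON body that the curl example sends.
    url = "http://%s:%d/v1/models/%s:predict" % (host, port, model)
    body = json.dumps({"instances": instances}).encode("utf-8")
    return url, body

def predict(instances):
    # POST to the running container and return the "predictions" list.
    url, body = build_predict_request(instances)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]

url, body = build_predict_request([1.0, 2.0, 5.0])
print(url)  # http://localhost:8501/v1/models/half_plus_two:predict
# predict([1.0, 2.0, 5.0]) should return [2.5, 3.0, 4.5] once the service is up
```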
Getting Started with TensorFlow Serving, Part 1 (Deploying and testing TensorFlow Serving with Docker on Ubuntu 16.04)