Deploying TensorFlow Serving with nvidia-docker (CPU + GPU): environment preparation
Official documentation: TensorFlow Serving with Docker
1. Deploying the TensorFlow Serving CPU version with Docker
# Download the TensorFlow Serving Docker image and repo
docker pull tensorflow/serving
git clone https://github.com/tensorflow/serving
# Location of demo models
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
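TensorFlow Serving expects each model directory to contain numbered version subdirectories; the bundled half_plus_two testdata ships as version 00000123. A minimal sketch of that layout, built in a hypothetical scratch directory (no repo clone needed), just to illustrate the structure the bind mounts below point at:

```shell
# Sketch: the on-disk layout TensorFlow Serving expects for one model.
#   half_plus_two/            <- mounted at /models/<MODEL_NAME>
#     00000123/               <- integer version directory
#       saved_model.pb
#       variables/
MODELDIR=$(mktemp -d)/half_plus_two
mkdir -p "$MODELDIR/00000123/variables"
touch "$MODELDIR/00000123/saved_model.pb"
find "$MODELDIR" | sort
```

The server always loads the highest-numbered version under the model directory, which is how in-place model upgrades work.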
# Start TensorFlow Serving container and open the REST API port
# Method 1: run in the foreground; --rm removes the container on exit
sudo docker run -t --rm -p 8501:8501 -v "/home/mochen/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
# Method 2: same bind mount, but detached (-d) so the container runs in the background
sudo docker run -dt -p 8501:8501 -v "/home/mochen/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
# Method 3: detached, using the --mount syntax and a container name (testserver)
sudo docker run -d -p 8501:8501 --mount type=bind,source=/home/mochen/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu/,target=/models/half_plus_two -e MODEL_NAME=half_plus_two -t --name testserver tensorflow/serving
# Query the model using the predict API
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict
# Returns => { "predictions": [2.5, 3.0, 4.5] }
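The half_plus_two demo model computes y = x/2 + 2, which is where the predictions in the official example come from. A quick offline sanity check of those three numbers with awk, no running server needed:

```shell
# Recompute the expected predictions for inputs 1.0, 2.0, 5.0
# using the demo model's formula y = x/2 + 2 (hence "half_plus_two").
expected=$(echo "1.0 2.0 5.0" | awk '{printf "%g %g %g", $1/2+2, $2/2+2, $3/2+2}')
echo "$expected"
```

If the curl above returns different values, the server is most likely serving a different model than the one you intended to mount.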