# Export and serve a model on TensorFlow Serving

A lightweight, RESTful remote inference library for decoupling deep learning development and deployment. Includes serving a trained Keras model for pointer-based meter reading.

## Usage
- Follow the instructions in `save_keras_model.py` to export the model to a `SavedModel/1` folder: run `python save_keras_model.py --output_dir $(path) --model_version $(model_version)` in your shell to specify the output directory and model version for TensorFlow Serving (a sketch of this export step is shown after this list).
- Make sure the `tensorflow/serving` Docker image is available by running `docker images` and checking that `tensorflow/serving` is listed. If it is not, run `docker pull tensorflow/serving`; the download takes a couple of minutes.
- In your terminal, run `docker run -d --name serving_base tensorflow/serving` to start a Docker container on your local machine.
- Make a directory for the exported model and copy the model into it:

  ```bash
  mkdir -p /tmp/pointer_model
  cp -r $(Path_to_SavedModel) /tmp/pointer_model
  ```
- Run `docker cp /tmp/pointer_model serving_base:/models/pointer_model` to copy the exported model into the Docker container.
- (Optional) Stop the `serving_base` container by running `docker kill serving_base`.
- Run `docker run -p 8501:8501 --mount type=bind,source=/tmp/pointer_model,target=/models/pointer_model -e MODEL_NAME=pointer_model -t tensorflow/serving`. Inside the container, this launches `tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}` (you can verify that the model loaded with the status check sketched after this list).
- Now you can run inference on an image from your terminal with `python pointer_server_client.py -i $(input_image_path)` (see the client sketch after this list).
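
For reference, here is a minimal sketch of what the export step could look like, assuming TensorFlow 2.x; the weights file name and exact flag handling are assumptions, so follow the actual instructions in `save_keras_model.py`:

```python
# Hypothetical sketch of the SavedModel export; not the repo's actual script.
import argparse
import os
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument("--output_dir", required=True)
parser.add_argument("--model_version", default="1")
args = parser.parse_args()

# Load the trained Keras model (the file name here is an assumption).
model = tf.keras.models.load_model("pointer_model.h5")

# TensorFlow Serving expects <model_base_path>/<version>/saved_model.pb,
# so the version number becomes the final directory component.
export_path = os.path.join(args.output_dir, args.model_version)
tf.saved_model.save(model, export_path)
print("SavedModel written to", export_path)
```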
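
Once the serving container is up, you can confirm the model loaded by querying TensorFlow Serving's REST status endpoint (`GET /v1/models/<model_name>`), assuming the default port mapping above:

```python
# Check that pointer_model is loaded and available.
import requests

resp = requests.get("http://localhost:8501/v1/models/pointer_model")
resp.raise_for_status()
print(resp.json())  # reports the version state, e.g. "AVAILABLE"
```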
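
Finally, a rough sketch of what a REST client like `pointer_server_client.py` might do; the image size and pixel scaling here are placeholders and must match what the exported model actually expects:

```python
# Hedged sketch of a REST inference client; preprocessing is an assumption.
import argparse
import json

import numpy as np
import requests
from PIL import Image

parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input_image_path", required=True)
args = parser.parse_args()

# Load and preprocess the image (the 224x224 target size is a placeholder).
image = Image.open(args.input_image_path).convert("RGB").resize((224, 224))
batch = (np.asarray(image, dtype=np.float32) / 255.0)[np.newaxis, ...]

# TensorFlow Serving's REST predict API:
#   POST /v1/models/<model_name>:predict  with body {"instances": [...]}
payload = json.dumps({"instances": batch.tolist()})
resp = requests.post(
    "http://localhost:8501/v1/models/pointer_model:predict",
    data=payload,
)
resp.raise_for_status()
print(resp.json()["predictions"])
```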