Triton CV MME TensorFlow Backend
Implement a SageMaker Multi-Model Endpoint for TensorFlow vision models on an NVIDIA Triton Inference Server
This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
Amazon SageMaker Multi-Model Endpoint (MME) is a cost-effective way of running multiple models behind a single endpoint. SageMaker manages the process of loading the target model into memory when needed, which leads to better utilization of the container resources and reduces cost.
Multi-model endpoints are ideal when you have infrequently used models that can handle minor delays introduced by an occasional cold start.
NVIDIA Triton Inference Server is an open source software that provides high performance inference on a wide variety of CPU and GPU hardware and supports all the major ML frameworks. It has many built-in features to improve inference throughput and achieves better utilization of the resources.
The NVIDIA Triton Inference Server can now be deployed on GPU-based SageMaker ML instances. It supports the SageMaker MME APIs for dynamically loading and unloading models, which makes it possible to implement SageMaker multi-model endpoints.
This notebook shows how to deploy multiple TensorFlow models trained on the MNIST dataset to a SageMaker MME using the NVIDIA Triton Server.
Here we use two different instances of an existing model artifact. The model used here was pre-trained on the MNIST dataset. If you want to learn how to train the model, please see the TensorFlow script mode training and serving example.
Contents
Introduction to NVIDIA Triton Server
NVIDIA Triton Inference Server was developed specifically to enable scalable, cost-effective, and easy deployment of models in production. NVIDIA Triton Inference Server is open-source inference serving software that simplifies the inference serving process and provides high inference performance.
Some key features of Triton are:
- Support for multiple frameworks: Triton can be used to deploy models from all major frameworks. Triton supports TensorFlow, ONNX, PyTorch, and many other model formats.
- Model pipelines: Triton model ensemble represents a pipeline of one or more models or pre- / post-processing logic and the connection of input and output tensors between them. A single inference request to an ensemble will trigger the execution of the entire pipeline.
- Concurrent model execution: Multiple models (or multiple instances of the same model) can run simultaneously on the same GPU or on multiple GPUs for different model management needs.
- Dynamic batching: For models that support batching, Triton has multiple built-in scheduling and batching algorithms that combine individual inference requests together to improve inference throughput. These scheduling and batching decisions are transparent to the client requesting inference.
- Diverse CPUs and GPUs: The models can be executed on CPUs or GPUs for maximum flexibility and to support heterogeneous computing requirements.
Install TensorFlow. This notebook is tested with version 2.11.
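A minimal install cell might look like the following; pinning the tested 2.11 release is an assumption about your kernel, so skip the install if a suitable TensorFlow is already available:

```python
# Install the TensorFlow release this notebook was tested with.
!pip install -q "tensorflow==2.11.*"

import tensorflow as tf

print(tf.__version__)
```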
For this exercise we download a TensorFlow model pre-trained on the MNIST dataset from an Amazon S3 bucket. The model artifact is saved locally.
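A sketch of the download step, assuming the artifact lives at a placeholder S3 URI and is packaged as a `model.tar.gz` (replace the bucket and key with the location used in your account):

```python
import tarfile

from sagemaker.s3 import S3Downloader

# Placeholder S3 location of the pre-trained MNIST SavedModel artifact;
# replace with the bucket/key used in your account.
model_artifact_uri = "s3://<your-bucket>/tensorflow-mnist/model.tar.gz"

S3Downloader.download(model_artifact_uri, local_path="downloaded_model")

# Assumption: the artifact is a model.tar.gz containing the SavedModel folder.
with tarfile.open("downloaded_model/model.tar.gz") as tar:
    tar.extractall("downloaded_model")
```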
You should have already configured the default IAM role for running this notebook with access to the model artifacts and the NVIDIA Triton Server image in Amazon Elastic Container Registry (ECR).
Download the Triton Server image from Amazon ECR.
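One way to construct the image URI is sketched below; the registry account ID and image tag are placeholders, so consult the SageMaker documentation for the account mapping and Triton release that apply to your region:

```python
import boto3

region = boto3.Session().region_name

# SageMaker-hosted Triton images live in region-specific AWS accounts.
# <triton-account-id> and the image tag are placeholders; see the SageMaker
# documentation for the account mapping and available Triton releases.
triton_account_id = "<triton-account-id>"
triton_image_uri = (
    f"{triton_account_id}.dkr.ecr.{region}.amazonaws.com/"
    "sagemaker-tritonserver:23.02-py3"
)
print(triton_image_uri)
```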
Transform TensorFlow Model structure
The model that we want to deploy currently has the following structure:
```
00000000
├── saved_model.pb
├── assets/
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index
```
For Triton, the model needs to have the following structure:
```
<model-name>
├── config.pbtxt
└── 1
    └── model.savedmodel
        ├── saved_model.pb
        ├── assets/
        └── variables/
            ├── variables.data-00000-of-00001
            └── variables.index
```
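A sketch of one way to rearrange the extracted SavedModel into this layout; the local paths and the Triton model name `mnist-model-1` are placeholders, and the same steps can be repeated with a second model name to create the second copy used by the MME:

```python
import os
import shutil

model_name = "mnist-model-1"  # placeholder Triton model name
src_dir = "downloaded_model/00000000"  # extracted SavedModel from the step above
dst_dir = f"triton-models/{model_name}/1/model.savedmodel"

# Copy the SavedModel into the Triton-required directory layout.
os.makedirs(os.path.dirname(dst_dir), exist_ok=True)
shutil.copytree(src_dir, dst_dir, dirs_exist_ok=True)
```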
Create the config.pbtxt file
Triton requires a model configuration file named config.pbtxt.
We create one below in the local model folder so that it is packaged and uploaded with the model artifact.
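The cell below is a minimal sketch of a config.pbtxt for the TensorFlow SavedModel backend; the input/output tensor names and shapes are assumptions for a typical Keras MNIST classifier, so inspect your model's signature (for example with `saved_model_cli`) and adjust. It then packages the model folder and uploads it to an S3 prefix for the multi-model endpoint.

```python
import tarfile

import sagemaker
from sagemaker.s3 import S3Uploader

# Assumed tensor names/shapes for a typical Keras MNIST classifier; check the
# SavedModel signature (e.g. `saved_model_cli show --dir <path> --all`) and
# adjust name, data_type, and dims to match your model.
config_pbtxt = """
name: "mnist-model-1"
platform: "tensorflow_savedmodel"
max_batch_size: 0
input [
  {
    name: "input_1"
    data_type: TYPE_FP32
    dims: [-1, 28, 28, 1]
  }
]
output [
  {
    name: "output_1"
    data_type: TYPE_FP32
    dims: [-1, 10]
  }
]
"""

with open(f"triton-models/{model_name}/config.pbtxt", "w") as f:
    f.write(config_pbtxt.strip())

# Package the model folder and upload it to the S3 prefix the MME will serve from.
with tarfile.open(f"{model_name}.tar.gz", "w:gz") as tar:
    tar.add(f"triton-models/{model_name}", arcname=model_name)

sess = sagemaker.Session()
mme_prefix = f"s3://{sess.default_bucket()}/triton-mme-tf"
S3Uploader.upload(f"{model_name}.tar.gz", mme_prefix)
model_data_url = mme_prefix + "/"
```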
Define the serving container
In the container definition below, we need to pass in the following parameters; a sketch of the resulting definition follows the list.
- Image: Triton server image URI that supports deploying multi-model endpoints with GPUs.
- ModelDataUrl: URI of the S3 folder (prefix) that contains all the models the SageMaker multi-model endpoint will load and serve predictions from.
- Mode: Set to MultiModel
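A sketch of the container definition, reusing the image URI and S3 prefix constructed earlier in this notebook:

```python
container = {
    "Image": triton_image_uri,
    # S3 prefix holding all the model.tar.gz archives served by this MME.
    "ModelDataUrl": model_data_url,
    "Mode": "MultiModel",
}
```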
Create a model object using the container defined above
Create the model object using the Boto3 create_model API. We pass the container definition to the create_model API along with the model name and the execution role.
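A sketch of the create_model call; the SageMaker model name is a placeholder:

```python
import boto3
import sagemaker

sm_client = boto3.client("sagemaker")
role = sagemaker.get_execution_role()  # IAM role configured for this notebook

sm_model_name = "triton-mme-tf-mnist"  # placeholder SageMaker model name

create_model_response = sm_client.create_model(
    ModelName=sm_model_name,
    ExecutionRoleArn=role,
    PrimaryContainer=container,
)
print(create_model_response["ModelArn"])
```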
Deploy and test the Multi-Model endpoint
Create the multi-model endpoint configuration using the create_endpoint_config Boto3 API. We specify a GPU-accelerated instance type. For testing we use a single instance; in production scenarios we recommend an initial instance count of two or higher for high availability.
Create endpoint configuration
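A sketch of the endpoint configuration; the configuration name and instance type are assumptions you may want to change:

```python
endpoint_config_name = "triton-mme-tf-mnist-config"  # placeholder name

sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": sm_model_name,
            "InstanceType": "ml.g4dn.xlarge",  # GPU instance; adjust as needed
            "InitialInstanceCount": 1,  # use >= 2 in production for high availability
        }
    ],
)
```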
Create Multi-Model endpoint
Using the above endpoint configuration we create a new SageMaker endpoint and wait for the deployment to finish. The status changes to InService once the deployment is successful.
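A sketch of the endpoint creation and the wait step, using a boto3 waiter; the endpoint name is a placeholder:

```python
endpoint_name = "triton-mme-tf-mnist-endpoint"  # placeholder name

sm_client.create_endpoint(
    EndpointName=endpoint_name,
    EndpointConfigName=endpoint_config_name,
)

# Block until the endpoint reaches InService (raises if deployment fails).
sm_client.get_waiter("endpoint_in_service").wait(EndpointName=endpoint_name)
print(sm_client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"])
```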
Let's download some test data
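A sketch that loads a few MNIST test images with Keras and invokes the endpoint; the tensor name in the payload must match the (assumed) input name in config.pbtxt, and the TargetModel value is the archive name uploaded to the MME prefix earlier:

```python
import json

import boto3
import tensorflow as tf

runtime_client = boto3.client("sagemaker-runtime")

# A few MNIST test images as the inference payload.
(_, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
batch = (x_test[:2].astype("float32") / 255.0).reshape(2, 28, 28, 1)

# Triton HTTP/REST (KServe v2) style request; the tensor name must match the
# (assumed) `input` entry in config.pbtxt.
payload = {
    "inputs": [
        {
            "name": "input_1",
            "shape": list(batch.shape),
            "datatype": "FP32",
            "data": batch.flatten().tolist(),
        }
    ]
}

response = runtime_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel=f"{model_name}.tar.gz",  # the archive uploaded to the MME prefix
)
print(json.loads(response["Body"].read().decode("utf8")))
```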
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2, which is shown at the top of the notebook.