AWS Machine Learning Blog

Adding AI to your applications with ready-to-use models from AWS Marketplace

Machine learning (ML) lets enterprises unlock the true potential of their data, automate decisions, and transform their business processes to deliver exponential value to their customers. To help you take advantage of ML, Amazon SageMaker provides the ability to build, train, and deploy ML models quickly.

Until recently, if you used Amazon SageMaker, you could either choose from the optimized algorithms offered in Amazon SageMaker or bring your own algorithms and models. AWS Marketplace for Machine Learning increases the selection of ML algorithms and models: you can choose from hundreds of free and paid algorithms and model packages across a broad range of categories.

In this post, you learn how to deploy and perform inference on the Face Anonymizer model package from the AWS Marketplace for Machine Learning.

Overview

Model packages in AWS Marketplace are pre-trained machine learning models that can be used to perform batch as well as real-time inference. Because these model packages are pre-trained, you don’t have to worry about any of the following tasks:

  • Gathering training data
  • Writing an algorithm for training a model
  • Performing hyperparameter optimization
  • Training a model and getting it ready for production

Skipping these steps saves the time and money you would otherwise spend writing algorithms, finding datasets, engineering features, and training and tuning the model.

Algorithms and model packages from AWS Marketplace integrate seamlessly with Amazon SageMaker. To interact with them, you can use the AWS Management Console, the low-level Amazon SageMaker API, or the Amazon SageMaker Python SDK. You can use model packages to either stand up an Amazon SageMaker endpoint for performing real-time inference or run a batch transform job.

Amazon SageMaker provides a secure environment in which to use your data with third-party software. We recommend that you follow the principle of least privilege and lock down IAM permissions for your resources.

To follow along with this post, you need appropriate IAM permissions. For Amazon SageMaker IAM permissions and best practices, see the documentation. For more information about securing your machine learning workloads, watch the online tech talk Building Secure Machine Learning Environments Using Amazon SageMaker. The service helps secure your data in multiple ways:

  • Amazon SageMaker performs static and dynamic scans of all the algorithms and model packages for vulnerabilities to ensure data security.
  • Amazon SageMaker encrypts algorithm and model artifacts and other system artifacts in transit and at rest.
  • Requests to the Amazon SageMaker API and the console are made over a secure (HTTPS over TLS) connection.
  • Amazon SageMaker requires IAM credentials to access resources and data on your deployment, thus preventing the seller’s access to your data.
  • Amazon SageMaker isolates the deployed algorithm/model artifacts from internet access to secure your data. For more information, see Training and Inference Containers Run in Internet-Free Mode.

Walkthrough

There are many reasons why you might want to blur faces in images, such as ensuring anonymity and privacy. As a developer, you want to add this intelligence to your automation process without having to train a model.

After searching for pre-trained ML models on the internet, you come across AWS Marketplace for Machine Learning. A search for the keyword “face” returns a list of algorithms and model packages. You decide to try the Face Anonymizer model package by Figure Eight.

Before you deploy the model, you need to review the AWS Marketplace listing to understand the I/O interface of the model package and its pricing information. Open the listing and review the product overview, pricing, highlights, usage information, instance types with which the listing is compatible, and additional resources. To deploy the model, your AWS account must have a subscription to it.

Subscribe to the model package

On the listing page, choose Continue to Subscribe. Review the end user license agreement and software pricing, and when your organization agrees to the terms, choose Accept offer.

    • For AWS Marketplace IAM permissions,  see “Rule 1 Only those users who are authorized to accept a EULA on behalf of your organization should be allowed to procure (or subscribe to) a product in Marketplace” from my other blog post, Securing access to AMIs in AWS Marketplace.

Create a deployable model

After you subscribe to the listing from your AWS account, you can deploy the model package:

  1. Open the Configure your software page for Face Anonymizer. Leave Fulfillment method set to Amazon SageMaker and Software Version set to Version 1. For Region, choose us-east-2. At the bottom of the page is the Product ARN, which is required only if you deploy the model using the API. Because you are deploying the Amazon SageMaker endpoint using the console, you can ignore it.
  2. Choose View in SageMaker.
  3. Select the Face Anonymizer listing and then choose Create endpoint.
  4. In the Model settings section, specify the following parameters and then choose Next:
    1. For Model name, enter face-anonymizer.
    2. For IAM role, select an IAM role that has the necessary permissions.

    You just used a pre-trained model package from AWS Marketplace to create a deployable model. A deployable model has an IAM role associated with it, whereas a model package is a static entity that does not. Next, you deploy the model to perform inference.
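If you prefer to script this step instead of using the console, the deployable model corresponds to a CreateModel API request. The following is a minimal sketch; the ARNs are placeholders (substitute the Product ARN from the listing and your own IAM role), and the boto3 call is shown in a comment:

```python
# Sketch of the CreateModel request behind the console steps above.
# With boto3, you would run:
#   boto3.client("sagemaker", region_name="us-east-2").create_model(**create_model_params)
create_model_params = {
    "ModelName": "face-anonymizer",
    # Placeholder: substitute an IAM role with the necessary permissions
    "ExecutionRoleArn": "arn:aws:iam::111122223333:role/YourSageMakerRole",
    "Containers": [
        {
            # Placeholder: substitute the Product ARN of the model package
            # you subscribed to, shown on the Configure your software page
            "ModelPackageName": "arn:aws:sagemaker:us-east-2:111122223333:model-package/your-subscribed-package"
        }
    ],
    # AWS Marketplace model packages run with network isolation enabled
    "EnableNetworkIsolation": True,
}
```

This mirrors the console flow: the model name, the IAM role, and a single container that points at the subscribed model package.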

Deploy the model

  1. On the Create Endpoint page, configure the following fields:
    1. For Endpoint name and Endpoint configuration name, enter face-anonymizer.
    2. Under Production variants, choose Edit.
  2. In the Edit Production Variant dialog box, configure the following fields:
    1. For Instance type, select ml.c5.xlarge (the instance type with which the Face Anonymizer listing is compatible).
    2. Choose Save.
  3. Review the information as shown in the following screenshot and choose Create endpoint configuration.
  4. Choose Submit to create the endpoint.
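For readers scripting the deployment, the endpoint configuration and endpoint created in the console steps above map to two API requests. The following is a sketch; the variant name is an assumption (any name works), and the boto3 calls are shown in comments:

```python
# Sketch of the CreateEndpointConfig and CreateEndpoint requests behind
# the console steps above. With boto3:
#   client = boto3.client("sagemaker", region_name="us-east-2")
#   client.create_endpoint_config(**endpoint_config_params)
#   client.create_endpoint(**endpoint_params)
endpoint_config_params = {
    "EndpointConfigName": "face-anonymizer",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",  # assumed name; any variant name works
            "ModelName": "face-anonymizer",
            "InstanceType": "ml.c5.xlarge",  # the instance type this listing supports
            "InitialInstanceCount": 1,
        }
    ],
}
endpoint_params = {
    "EndpointName": "face-anonymizer",
    "EndpointConfigName": "face-anonymizer",
}
```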

Perform inference on the model

Each model package from AWS Marketplace has a specific input format, which is in its listing, in the Usage Information section. For example, the listing for Face Anonymizer states that the input must be base64-encoded and the payload sent for prediction should be in the following format:

Payload: 
{
	"instances": [{
		"image": {
			"b64": "BASE_64_ENCODED_IMAGE_CONTENTS"
		}
	}]
}

For this post, use the following image with the file name volunteers.jpg to perform anonymization.

The following section contains commands you can use from terminal to prepare data and to perform inference.

Perform base64-encoding

Because the payload must contain a base64-encoded image to perform real-time inference, you must first encode the image.

Linux command

encoded_string=$(base64 -w 0 volunteers.jpg)   # -w 0 disables line wrapping so the encoded string stays on one line

Windows – PowerShell commands

$base64string = [Convert]::ToBase64String([IO.File]::ReadAllBytes('./volunteers.jpg'))

Prepare payload

Use the following commands to prepare the payload and write it to a file.

Linux commands

payload="{\"instances\": [{\"image\": {\"b64\": \"$encoded_string\"}}]}"
echo "$payload" > input.json

Windows – PowerShell commands

$payload = -join('{"instances": [{"image": {"b64": "', $base64string, '"}}]}')
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False
[System.IO.File]::WriteAllLines('./input.json', $payload, $Utf8NoBomEncoding)
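If you would rather prepare the payload in Python than in shell or PowerShell, the encode-and-wrap steps above can be sketched as a single standard-library function (file names follow the walkthrough):

```python
import base64
import json

def build_payload(image_path: str, payload_path: str) -> dict:
    """Base64-encode an image and wrap it in the payload format the
    Face Anonymizer listing expects, writing the result as JSON."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    payload = {"instances": [{"image": {"b64": encoded}}]}
    with open(payload_path, "w") as f:
        json.dump(payload, f)
    return payload

# Example: build_payload("volunteers.jpg", "input.json")
```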

Now that the payload is ready, you can either perform a batch inference or a real-time inference.

Perform real-time inference

To perform real-time inference, execute the following command using the AWS CLI. For more information, see Installing the AWS CLI and Configuring the AWS CLI.

aws sagemaker-runtime invoke-endpoint --endpoint-name face-anonymizer --body fileb://input.json --content-type "application/json" --region us-east-2 output.json.out

After you execute the command, the output is available in the output.json.out file.

Perform batch inference

To perform a batch inference:

  1. Sign in to the AWS Management Console. Then you can either identify an Amazon S3 bucket to use, or create an S3 bucket in the same Region in which you deployed the model earlier.
  2. Upload the input.json file to the S3 bucket.
  3. To copy the path of the file, select the file and choose Copy Path.
  4. In the Amazon SageMaker console, choose Batch Transform Jobs, Create Batch Transform Job.
  5. Specify the following information and choose Create Job.
    1. For Job name, enter face-anonymization.
    2. For Model name, enter face-anonymizer.
    3. For Instance type, select ml.c5.xlarge.
    4. For Instance-count, enter 1.
    5. Under Input data configuration, for S3 location, specify the S3 path that you copied. It should look like the following pattern:
      s3://<your-bucket-name>/input.json 
    6. For Content type, enter application/json.
    7. For Output data configuration, specify the appropriate S3 output path. It should look like this:
      s3://<your-bucket-name>/output
  6. A message appears stating that the batch transform job was successfully created. After the status of the job changes to Completed, open the batch transform job, select the output data path under Output data configuration, and download the output file, which has the .out suffix.
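The batch transform job configured in the console above maps to a CreateTransformJob API request. The following is a sketch with a placeholder bucket name (substitute your own), with the boto3 call shown in a comment:

```python
# Sketch of the CreateTransformJob request behind the console steps above.
# With boto3:
#   boto3.client("sagemaker", region_name="us-east-2").create_transform_job(**transform_job_params)
# Replace your-bucket-name with the S3 bucket you created earlier.
transform_job_params = {
    "TransformJobName": "face-anonymization",
    "ModelName": "face-anonymizer",
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://your-bucket-name/input.json",
            }
        },
        "ContentType": "application/json",
    },
    "TransformOutput": {"S3OutputPath": "s3://your-bucket-name/output"},
    "TransformResources": {
        "InstanceType": "ml.c5.xlarge",
        "InstanceCount": 1,
    },
}
```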

Extract and visualize the output

Now that the output is available, you can extract and visualize it using the following commands.

Linux command

cat output.json.out | jq -r '.predictions[0].image.b64' | base64 --decode >output.jpg

Windows – PowerShell commands

$jsondata = Get-Content -Raw -Path 'output.json.out' | ConvertFrom-Json

$bytes = [Convert]::FromBase64String($jsondata.predictions.image.b64)

[IO.File]::WriteAllBytes('output.jpg', $bytes)

In the output.jpg image, you can see that the ML model identified and anonymized the faces in the image.
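The same extraction can be done in Python; here is a small sketch using only the standard library, following the output format shown above:

```python
import base64
import json

def extract_image(result_path: str, image_path: str) -> bytes:
    """Read the inference result, decode the base64 image found at
    predictions[0].image.b64, and write the bytes out as an image file."""
    with open(result_path) as f:
        result = json.load(f)
    image_bytes = base64.b64decode(result["predictions"][0]["image"]["b64"])
    with open(image_path, "wb") as f:
        f.write(image_bytes)
    return image_bytes

# Example: extract_image("output.json.out", "output.jpg")
```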

You successfully performed a real-time inference on a model created from a third-party model package from AWS Marketplace.

Cleaning up

Delete the endpoint and the endpoint configuration so that your account is no longer charged.

  1. To delete the endpoint:
    1. In the Amazon SageMaker console, choose Endpoints.
    2. Select the endpoint with the name face-anonymizer and choose Actions, Delete.
  2. To delete the endpoint configuration:
    1. In the Amazon SageMaker console, choose Endpoint configuration.
    2. Select the endpoint configuration with the name face-anonymizer and choose Actions, Delete.
  3. To delete the model:
    1. In the Amazon SageMaker console, choose Models.
    2. Select the model with the name face-anonymizer and choose Actions, Delete.
  4. If you subscribed to the listing simply to try the example in this post, you can unsubscribe from the listing. On the Your software subscriptions page, choose Cancel Subscription for the Face Anonymizer listing.
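If you prefer to script the cleanup, the resources can be deleted through the SageMaker API in the same order as above. A sketch (the boto3 calls are shown in comments):

```python
# Resources created in this walkthrough, in safe deletion order:
# the endpoint first, then its configuration, then the model.
# With boto3:
#   client = boto3.client("sagemaker", region_name="us-east-2")
#   for method, kwargs in cleanup_calls:
#       getattr(client, method)(**kwargs)
cleanup_calls = [
    ("delete_endpoint", {"EndpointName": "face-anonymizer"}),
    ("delete_endpoint_config", {"EndpointConfigName": "face-anonymizer"}),
    ("delete_model", {"ModelName": "face-anonymizer"}),
]
```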

Deploy a model and perform real-time and batch inference using a Jupyter notebook

This post demonstrated how to use the Amazon SageMaker console to stand up an Amazon SageMaker endpoint and use the AWS CLI to perform inference. If you prefer to try a model package using a Jupyter notebook, use the following steps:

  1. Create an Amazon SageMaker notebook instance.
  2. In the Amazon SageMaker console, under Notebook instances, in the Actions column for the notebook instance that you just created, choose Open Jupyter.
  3. In the notebook, choose SageMaker Examples.
  4. Under AWS Marketplace, choose Use for the Using_ModelPackage_Arn_From_AWS_Marketplace.ipynb sample notebook, and then follow the notebook's instructions. Use Shift+Enter to run each cell.

Pricing

AWS Marketplace contains the following pricing for model packages:

  • Free (no software pricing)
  • Free-trial (no software pricing for a limited trial period)
  • Paid

Apart from infrastructure costs, the Free-trial and Paid model packages have software pricing applicable for real-time Amazon SageMaker inference and Amazon SageMaker batch transform. You can find this information on the AWS Marketplace listing page in the Pricing Information section. Software pricing for third-party model packages may vary based on Region, instance type, and inference type.

Conclusion

This post took you through a use case and provided step-by-step instructions to start performing predictions on ML models created from third-party model packages from AWS Marketplace.

In addition to third-party model packages, AWS Marketplace also contains algorithms, which you can use to train a custom ML model by creating a training job or a hyperparameter tuning job. With third-party algorithms, you can choose from a variety of out-of-the-box algorithms; because they eliminate algorithm development effort and reduce time to deploy, you can focus on training and tuning the model with your own data. For more information, see Amazon SageMaker Resources in AWS Marketplace and Using AWS Marketplace for machine learning workloads.

If you are interested in selling an ML algorithm or a pre-trained model package, see Sell Amazon SageMaker Algorithms and Model Packages. You can also reach out to aws-mp-bd-ml@amazon.com. To see how algorithms and model packages can be packaged for listing in AWS Marketplace for Machine Learning, follow the creating_marketplace_products sample Jupyter notebook.

For a deep-dive demo of AWS Marketplace for machine learning, see the AWS online tech talk Accelerate Machine Learning Projects with Hundreds of Algorithms and Models in AWS Marketplace.

For a practical application that uses pre-trained machine learning models, see the Amazon re:Mars session on Accelerating Machine Learning Projects.


About the Authors

Kanchan Waikar is a Senior Solutions Architect at Amazon Web Services in the AWS Marketplace for machine learning group. She has over 13 years of experience building, architecting, and managing NLP and software development projects. She holds a master's degree in computer science (data science major), and she enjoys helping customers build solutions backed by AI/ML-based AWS services and partner solutions.