🤔 What is AI and how does it work?
🌈 Did you know that John McCarthy came up with the name "Artificial Intelligence" all the way back in 1956?
Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn on its own, without needing a human to encode every single command.
AI uses Machine Learning (ML) to 'mimic' human intelligence. The computer learns how to respond to certain actions, relying on algorithms and historical data to build a model. These models can then make predictions.
😨 What is Amazon Augmented AI?
Amazon Augmented AI (Amazon A2I) is a service that allows you to implement human review of Machine Learning (ML) predictions.
It also comes integrated with some of the Artificial Intelligence (AI) services such as Amazon Rekognition (computer vision platform) and Amazon Textract (extracts text and data from scanned documents) APIs. Best of all, you can also use any custom ML model or workflow.
😎 Where Amazon A2I comes into play
Businesses are using Machine Learning in a lot of applications. It covers major use cases with high speed and low cost by providing probabilistic output. A traditional program gives you deterministic output: the same set of inputs produces the same output every time. With probabilistic output, a given input doesn't have a single output that we can be 100% confident is correct - there can be many possible outputs, each with a different probability of being correct! These probabilities are the "confidence score".
To better illustrate what this means, let’s pretend you’re trying to check if a dog 🐶 is present in a photo 📸. Machine Learning is not going to tell you with 100% confidence whether a dog is present in a photo or not. Instead, it’s always going to tell you how confident it is, as a percentage. (Example: Maybe it’s 83% confident that a dog is in the below photo collage. This 83% is the probability output that your ML model gives you.)
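To make this concrete, here's a tiny Python sketch. (The `detect_dog` function and its fixed 83% score are made up for illustration; a real model would run inference on the image.)

```python
# Hypothetical model output: instead of a yes/no answer, an ML model
# returns a label together with a confidence score (a probability).
def detect_dog(photo):
    # A real model would run inference here; we return a fixed,
    # illustrative prediction for the dog-photo example above.
    return {"label": "dog", "confidence": 0.83}

prediction = detect_dog("photo_collage.jpg")
print(f"{prediction['confidence']:.0%} confident a {prediction['label']} is present")
```

The key point is that the output is a score you can act on, not a hard yes/no.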
Can you detect Canela, the silly service dog?
Customers want to build trust, both internal and external, that their Machine Learning models are performing as they expect, and to identify deviations when input data changes. When the Machine Learning model is not confident, customers may want a human to step in and make the call about whether there is a dog present in the photo or not. Without that option, customers may feel forced to choose between Machine Learning workflows and manual-only workloads. These customers are looking for solutions that allow humans and Machine Learning to work together in their workflows, for all kinds of different use cases. (Think of things like Image Moderation, Form Data Extraction, Media Analysis, Model Accuracy Monitoring, Text Classification, etc.)
At this point, you may be wondering why more people aren’t combining human intelligence🙋🏻♀️ and Machine Learning 💻. The answer is because it’s hard to do! First of all, you’d need a variety of talent such as ML scientists, engineers, and operations teams. Second, you’d need to manage a large number of reviewers. Third, you’d need to write custom software to also manage the review tasks. (It can be difficult to achieve high review accuracy, depending on the domain you’re working in.)
🧠 How it works
Your client application sends input data to your ML model, which then makes a prediction. At this point, you will have either a high-confidence prediction or a low-confidence prediction. High-confidence predictions can be returned immediately to the client application. Low-confidence predictions can automatically trigger a human review workflow. Once a human has reviewed it, the results are delivered to an S3 bucket that you specify, and you can create a CloudWatch Events rule to be notified when the review is complete. And from there, you can consume the data in your client application however you want.
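The routing logic above can be sketched in a few lines of Python. (The 0.80 threshold and the prediction format are assumptions carried over from the dog-photo example, not anything A2I prescribes.)

```python
CONFIDENCE_THRESHOLD = 0.80  # business-defined cutoff (assumption)

def route_prediction(prediction):
    """Return high-confidence predictions directly to the client;
    flag low-confidence ones for a human review workflow."""
    if prediction["confidence"] >= CONFIDENCE_THRESHOLD:
        return "return-to-client"
    return "trigger-human-review"

route_prediction({"label": "dog", "confidence": 0.91})  # "return-to-client"
route_prediction({"label": "dog", "confidence": 0.42})  # "trigger-human-review"
```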
😁🤟🏽 Let’s talk benefits
- Amazon A2I provides you an easy way to implement human review workflows.
- Pre-built workflows and UIs reduce your time to market. (Customers can choose from over 60 templates that we provide.) When businesses know there is a human backstop, they’re willing to put their ML models into production more quickly: rather than waiting several months for their models to be perfect, they know that any low-confidence result will be reviewed by a human.
- With Amazon A2I you not only have the software and technology that AWS provides, you also get to choose from multiple workforce options. (e.g. Private, Vendor, Amazon Mechanical Turk)
- Amazon A2I is cloud platform agnostic. It doesn’t matter which ML model you are using, or whether you are hosting a custom model on SageMaker or elsewhere. We don’t restrict customers using Amazon A2I in any way, so customers can use whatever ML model they want.
💁🏻♀️ Demo time!
Who's ready for a demo? I know I am!
Let’s head over to the AWS Console and search for the Amazon Augmented AI (Amazon A2I) service.
This will take you to the Amazon A2I console. From here, the first step is to Create a human review workflow.
Once there, choose a name, the location of your S3 bucket (where you want the human review results to be stored), and an IAM role.
Now we must select the Task Type. Do we want to use human review for document processing with Amazon Textract? Do we want to use Amazon Rekognition for image moderation? Or do we want to use a custom ML model?
For the purposes of this blog post, we select Amazon Rekognition.
A conditions modal comes up, asking us to define the business conditions on which human review should be triggered. This is where we want to define our initial configurations. Ask yourself, “Based on what business conditions do I want to trigger a human review?”
The first checkbox option allows us to set up the threshold configurations we want for any images in our ML model that come back with low-confidence results. The second checkbox option allows us to choose what percentage of images and their labels we wish to send for human review.
Let's get back to our pretend scenario in this blog post. In the screenshot below, you can see we've decided to have anything below 80% be sent for human review. Basically, this is the confidence score we've picked for automatically triggering human review.
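Taken together, the two checkbox options boil down to a simple rule. Here's a sketch of that logic in Python (the 80% threshold comes from the demo; the 5% random sample rate is a made-up value for illustration):

```python
import random

THRESHOLD = 0.80           # from the demo: anything below 80% goes to a human
RANDOM_SAMPLE_RATE = 0.05  # hypothetical: also audit 5% of confident results

def needs_human_review(confidence, rng=random.random):
    # Condition 1: low-confidence predictions always go to humans.
    if confidence < THRESHOLD:
        return True
    # Condition 2: randomly sample a percentage of all results so
    # reviewers can audit the model's confident answers too.
    return rng() < RANDOM_SAMPLE_RATE
```

The `rng` parameter just makes the random sampling easy to test; in practice you'd call `needs_human_review(confidence)` directly.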
Now we decide what Worker task template we want to use (a new template is created at this step). For today, we’re going to stick with the default template. Ask yourself, “Who should be able to review my task?”
Now we move on to the Worker task template design. Here we see that this section allows you to provide task instructions with examples for workers. (Workers will be viewing these instructions when they perform your task.) Ask yourself, “What should my UI look like?”
We move on to Workers, and select which workforce we want to use. We're going to select Amazon Mechanical Turk, and choose a sample price per task to pay the pretend Amazon Mechanical Turk contract workers.
You can also select the Private workforce option (if you want to bring in your own people) or Vendors (if you want to bring in vendors from AWS Marketplace).
Once we complete this setup, we will be given a Workflow ARN.
In order to send data to Amazon A2I, we’ll need to include the Workflow ARN in our API calls when we create a human loop. A human loop starts your human review workflow and sends data review tasks to human workers when the conditions specified in your flow definition are met. To make that API call, customers can use a Jupyter notebook or the AWS CLI (Command Line Interface) directly.
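As a rough sketch, here's what building that request might look like in Python. (The ARN, loop name, and S3 path are placeholders I made up; the `FlowDefinitionArn` field is where the Workflow ARN copied from the console goes.)

```python
import json

def build_human_loop_request(flow_definition_arn, loop_name, input_data):
    """Assemble the request body for starting a human loop.
    InputContent must be a JSON string describing the task object."""
    return {
        "HumanLoopName": loop_name,
        "FlowDefinitionArn": flow_definition_arn,
        "HumanLoopInput": {"InputContent": json.dumps(input_data)},
    }

request = build_human_loop_request(
    "arn:aws:sagemaker:us-east-1:123456789012:flow-definition/demo-workflow",
    "dog-photo-review-001",
    {"taskObject": "s3://my-demo-bucket/photo_collage.jpg"},
)

# With AWS credentials configured, the request could then be sent via boto3:
#   boto3.client("sagemaker-a2i-runtime").start_human_loop(**request)
```

The actual network call is left commented out since it needs live AWS credentials and a real flow definition.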
👉🏽 For more information about use cases like content moderation, check out our Jupyter notebook page on GitHub. For more information about integrating Amazon A2I into any custom ML workflow, see over 60 pre-built worker templates on the GitHub repo and use Amazon A2I with Custom Task Types.
☁️ In closing
Get started quickly with the Amazon A2I free tier. To learn more about Amazon A2I, check out the website and the developer guide.
¡Gracias por tu tiempo!
~Alejandra👩🏻💻 y Canela🐾