
Levels of abstraction within AWS ML offerings

I went to an interesting talk last night at the AWS Boulder Denver Meetup, where the presenter, Brett Mitchell, had built a sophisticated application using serverless and AWS machine learning technologies to make an animatronic parrot respond to images and speak phrases.  From what he said, the most difficult task was getting the hardware installed into the animatronic parrot; the software side of things was prototyped in a night.

But he also said something that I wanted to capture and expand on.  AWS is all about providing best-of-breed building blocks that you can pick and choose from to build your application.  Brett mentioned that there are four levels of AWS machine learning abstractions upon which you can build your application.  Here’s my take on what they are and when you should choose them.

  1. Low level compute, like EC2 (maybe even the bare metal offering, if performance is crucial).  At this level, AWS is providing elasticity in terms of compute, but you are entirely in control of the algorithms you implement, the language you use, and how to maintain availability.  With great control comes great responsibility.
  2. Library abstractions like MXNet, Spark ML, and TensorFlow.  You can use a prepackaged AMI, SageMaker, or Amazon EMR.  In any of those cases you are leveraging existing libraries and building on top of them.  You still own most of the model management and service availability if you are using the AMI, but the other alternatives remove much of that management hassle (there’s a SageMaker sketch after this list).
  3. Managed general purpose supervised learning, which is Amazon Machine Learning (hey, I recorded a course about that!).  In this case, you don’t have access to the model or the algorithms; it’s a black box.  This is aimed at non-machine-learning folks who have a general problem: “I have structured data I need to make predictions against” (see the prediction sketch after this list).
  4. Focused product offerings like image recognition and speech recognition, among others.  These services may require you to provide some training data.  Here you do nothing except leverage the service.  You have to work within the limits of what the service provides, including performance and tunability.  The tradeoff is that you get scale and high availability for a great price (in both dollars and effort).  A Rekognition sketch follows this list.
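To make level 2 concrete, here’s a minimal sketch of training a TensorFlow model with the SageMaker Python SDK.  The script name, role ARN, bucket, and instance type are placeholders, and the exact parameter names depend on your SDK version, so treat this as illustrative rather than copy-paste ready.

```python
# Sketch: running your own TensorFlow training script on SageMaker (level 2).
# Assumes a training script named train.py plus an IAM role and S3 bucket you own;
# parameter names follow the v2 SageMaker Python SDK and may differ in other versions.
from sagemaker.tensorflow import TensorFlow

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = TensorFlow(
    entry_point="train.py",        # your model code -- you still own the algorithm
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="2.4.1",
    py_version="py37",
)

# SageMaker provisions the instance, runs your script, and tears it down;
# you keep control of the model, but shed most of the server management.
estimator.fit({"training": "s3://your-bucket/training-data/"})
```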
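For level 3, the black-box nature shows up in the API: you hand Amazon Machine Learning a record of feature values and get a prediction back, with no access to the model underneath.  A rough boto3 sketch, with a made-up model ID, endpoint, and feature names:

```python
# Sketch: real-time prediction against an Amazon Machine Learning model (level 3).
# The model ID, endpoint URL, and feature names are placeholders; you get the real
# values after creating a model and enabling its real-time endpoint.
import boto3

client = boto3.client("machinelearning", region_name="us-east-1")

response = client.predict(
    MLModelId="ml-XXXXXXXXXXXX",              # placeholder model ID
    Record={"age": "42", "plan": "premium"},  # feature values are passed as strings
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)

# You get the prediction back; the model and algorithm stay a black box.
print(response["Prediction"])
```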
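And for level 4, a focused service like Rekognition reduces image recognition to a single call.  The bucket and file names below are placeholders:

```python
# Sketch: image recognition with Amazon Rekognition (level 4).
# No model to train or host -- you work within what the service exposes.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "your-bucket", "Name": "parrot.jpg"}},  # placeholders
    MaxLabels=10,
    MinConfidence=75,
)

for label in response["Labels"]:
    print(label["Name"], label["Confidence"])
```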

These options are all outlined nicely here.
