Building a Serverless Machine Learning Inference API with AWS Lambda



AWS Lambda and Amazon API Gateway can be combined to host serverless APIs. With support for Amazon EFS, AWS Lambda gained access to a persistent storage layer. This webinar demonstrates best practices for using these AWS services to build and deploy a serverless machine learning inference API.
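As a rough illustration of the pattern the webinar covers, the sketch below shows a minimal Lambda handler that loads a scikit-learn model from an EFS mount and serves predictions behind an API Gateway proxy integration. The mount path /mnt/ml, the file name model.joblib, and the JSON payload shape are assumptions made for this example, not values taken from the webinar.

```python
# Minimal sketch of a serverless inference handler.
# The EFS mount path, model file name, and request format are assumed for illustration.
import json
import joblib

MODEL_PATH = "/mnt/ml/model.joblib"  # EFS is mounted into the Lambda environment here (assumed path)

# Load the model once per container, outside the handler, so warm invocations reuse it.
_model = None

def _get_model():
    global _model
    if _model is None:
        _model = joblib.load(MODEL_PATH)
    return _model

def handler(event, context):
    # With an API Gateway proxy integration, the request body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    features = body.get("features", [])

    prediction = _get_model().predict([features])

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": prediction.tolist()}),
    }
```

Loading the model at module scope (rather than inside the handler) keeps cold starts expensive only once per container; subsequent warm invocations skip the EFS read entirely.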

8 thoughts on “Building a Serverless Machine Learning Inference API with AWS Lambda”
  1. I tried to do this, but it is quite difficult. I am stuck where AWS Lambda picks up some of my packages from EFS but not the rest. For example, abc is the main package and abc uses xyz; Lambda picks up abc but says xyz is not found, even though both are there.

  2. Wonderful walkthrough. So the flow is: set up an EFS from an EC2 instance, call Lambda with the pre-trained ML model, and then define a trigger. My question: suppose you are continuously updating the model, what would you recommend so the Lambda always refers to the latest trained model? Let's say you are retraining weekly. Is a viable solution to set up a separate EC2 cron job that triggers a similar layout (EC2/EFS/Lambda) but with a TRAIN-model Lambda, which then writes its output to the inference Lambda's model directory? Then, when loading the model, pick the one with the most recent date so it always defaults to the newest?
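As a hedged illustration of the approach this comment describes (a periodic training job writing versioned model files to a shared EFS directory, with the inference Lambda loading the newest one), the sketch below selects the most recently modified model file at load time. The directory /mnt/ml/models and the .joblib extension are assumptions for the example, not part of the original post.

```python
# Sketch: load the most recent model file from a shared EFS directory.
# The directory layout and file extension are assumed for illustration.
from pathlib import Path
import joblib

MODEL_DIR = Path("/mnt/ml/models")

def load_latest_model():
    # Candidate model files written by the (separate) training job.
    candidates = sorted(
        MODEL_DIR.glob("*.joblib"),
        key=lambda p: p.stat().st_mtime,  # newest modification time last
    )
    if not candidates:
        raise FileNotFoundError(f"No model files found in {MODEL_DIR}")
    return joblib.load(candidates[-1])
```

One caveat: if the handler caches the model across warm invocations, it will only pick up a newer file on a cold start unless the cache is refreshed, for example by periodically re-checking whether a newer file has appeared.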
