Portable Prediction Server | Deploy a Custom Model in an External Prediction Environment with PPS

Portable Prediction Servers allow you to run your own custom (non-DataRobot) models on the infrastructure of your choice while taking advantage of DataRobot ML production services for model validation and testing, model monitoring, and a first-class REST API. Models developed outside of DataRobot can be deployed to external infrastructure using the streamlined DataRobot deployment process.

Transcript:
Hi, this is Chris from DataRobot, and I’m excited to demo new functionality coming in our MLOps 7.0 release. Today, DataRobot MLOps supports serving predictions for both DataRobot-created and client-created custom models. In our last release we introduced the Portable Prediction Server, which allowed organizations to run DataRobot models on the infrastructure of their choice while enjoying the benefits of a first-class REST API as well as built-in support for model monitoring. In 7.0 we are expanding on this capability with support for client-created custom models. This now allows users to take advantage of the DataRobot MLOps model validation and testing workshop while retaining the flexibility of deploying on the infrastructure of their choice. Let’s walk through an example. I have a model artifact I have uploaded to MLOps, and I’m kicking off a test to validate the model’s robustness and performance.
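
If you prefer to script that validation step rather than use the UI, a minimal sketch with the DataRobot Python client might look like the following. `CustomModelTest` exists in the public client, but the exact signature can vary between client versions, and the token and IDs here are placeholders.

```python
# A minimal sketch, assuming the `datarobot` Python client is installed and
# configured; IDs and the token are placeholders, and the exact signature of
# CustomModelTest.create may differ between client releases.
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

# Kick off the built-in validation and testing run against the uploaded
# custom model version.
test = dr.CustomModelTest.create(
    custom_model_id="YOUR_CUSTOM_MODEL_ID",
    custom_model_version_id="YOUR_MODEL_VERSION_ID",
)
print(test.overall_status)  # e.g. "succeeded" once the checks pass
```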

Once complete, I can create a Deployment. This will provide me with both the artifacts I need to deploy this model remotely and a central location to review service health, accuracy, and drift in real time as predictions occur.

In the deployment, I navigate over to the Portable Prediction tab. The Portable Prediction Server for custom models is a Docker image and can be deployed in many configurations. To help get the user started, this tab provides scripts to both build the images and deploy them in a few simple steps. I’m going to download the package of these artifacts and scripts and, for this demo, run the model with the Monitoring Agent, using a RabbitMQ queuing service to report monitoring information back to DataRobot.
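
For context on how the agent learns about RabbitMQ: the Monitoring Agent reads a spool channel from its configuration file. The sketch below shows roughly what that section can look like; the key names, channel type, and URL are assumptions based on the agent’s channel-config format, and the configuration file shipped inside the downloaded package is authoritative.

```python
# A rough sketch (written from Python for consistency with the other
# examples) of pointing the Monitoring Agent at a RabbitMQ queue. All key
# names and values here are assumptions; defer to the config in the package.
from pathlib import Path

AGENT_CHANNEL_CONFIG = """\
channelConfigs:
  - type: "RABBITMQ_SPOOL"               # assumed channel type name
    details:
      name: "rabbit"
      queueUrl: "amqp://rabbitmq:5672"   # hostname of the RabbitMQ container
      queueName: "mlops-spooler"         # hypothetical queue name
"""

Path("mlops.agent.conf.yaml").write_text(AGENT_CHANNEL_CONFIG)
```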

Once I have downloaded and extracted the package, I can run the included script to build my images. This script builds two images: one for the prediction server, using the custom model artifact and environment definition, and one for the Monitoring Agent.
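
To see roughly what that script does, here is a conceptual sketch in Python; the image tags and build contexts are hypothetical, and the script shipped in the package is the source of truth.

```python
# Conceptual sketch of the two builds the bundled script performs: one image
# for the prediction server (custom model artifact + environment definition)
# and one for the Monitoring Agent. Tags and paths are hypothetical.
import subprocess

def build(tag: str, context: str) -> None:
    """Run `docker build` for one image and fail loudly on error."""
    subprocess.run(["docker", "build", "-t", tag, context], check=True)

# Image 1: the Portable Prediction Server wrapping the custom model.
build("custom-model-pps:latest", "./custom_model")

# Image 2: the MLOps Monitoring Agent that relays metrics back to DataRobot.
build("mlops-monitoring-agent:latest", "./monitoring_agent")
```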

Now that I have these images built, I’m launching a configuration of three containers using Docker Compose. This will launch the model in a container as well as the agent, and it will also download and launch the RabbitMQ container.
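
The sketch below approximates that three-container topology with a hand-written Compose file launched from Python. Service names, image tags, ports, and dependencies are hypothetical; in practice you would use the Compose configuration included in the downloaded package.

```python
# A conceptual sketch of the three-container topology the demo launches.
# The Compose file contents are an assumption; the file shipped in the
# package is authoritative.
import subprocess
from pathlib import Path

COMPOSE = """\
services:
  rabbitmq:            # queue the agent drains to report back to DataRobot
    image: rabbitmq:3
    ports: ["5672:5672"]
  model:               # Portable Prediction Server wrapping the custom model
    image: custom-model-pps:latest
    ports: ["8080:8080"]
    depends_on: [rabbitmq]
  agent:               # Monitoring Agent forwarding metrics to DataRobot MLOps
    image: mlops-monitoring-agent:latest
    depends_on: [rabbitmq]
"""

Path("docker-compose.yaml").write_text(COMPOSE)
subprocess.run(["docker", "compose", "up", "-d"], check=True)

# List the services so you can confirm all three containers are up.
subprocess.run(["docker", "compose", "ps"], check=True)
```

The final `docker compose ps` mirrors the quick check in the next step that all three containers are running.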

The script looks like it is finished, and I’ll quickly double-check that my three containers are running.

Now I’ll hop over and test with a few sample predictions. With those complete, I can see in MLOps that I’m already receiving Service Health and Data Drift information from the Deployment. For Accuracy, I’d follow up with an upload of my actuals for these predictions.
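
To make that last step concrete, here is a minimal sketch of scoring against the local server and then submitting actuals with the DataRobot Python client. The port, route, payload shape, feature names, association IDs, and deployment ID are all assumptions; the README bundled with the downloaded package and your deployment’s settings document the real values.

```python
# A minimal sketch, not the packaged tooling: score a few rows against the
# locally running Portable Prediction Server, then report actuals back to
# DataRobot so the deployment can compute Accuracy. The port, route, payload
# shape, feature names, and IDs below are assumptions.
import requests
import datarobot as dr

# 1) Send sample rows to the local prediction server (assumed to listen on
#    localhost:8080 with a /predictions route; check the package README).
rows = [
    {"feature_a": 1.2, "feature_b": "red"},   # hypothetical features
    {"feature_a": 3.4, "feature_b": "blue"},
]
resp = requests.post("http://localhost:8080/predictions", json=rows, timeout=30)
resp.raise_for_status()
print(resp.json())

# 2) Once real outcomes are known, upload actuals keyed by association ID so
#    MLOps can line them up with the predictions it already saw.
dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")
deployment = dr.Deployment.get("YOUR_DEPLOYMENT_ID")  # placeholder ID
deployment.submit_actuals([
    {"association_id": "row-1", "actual_value": 1},
    {"association_id": "row-2", "actual_value": 0},
])
```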
