
1. MLOps Brief Introduction


Hello and welcome to the channel. My name is Ram Sundar, and I'll be starting a new tutorial series on MLOps, since it is very popular and trending in the market; that's why I thought of creating a playlist for it. Along the way we will get to learn about many things, like the continuous integration and continuous delivery practices that come from DevOps. As we know, MLOps came into the market fairly recently and quickly captured a lot of popularity, largely because, once it is set up, we don't have to touch anything: our model will get trained automatically, and there are many other advantages to using MLOps.

So let's get started. First of all, this tutorial will consist of three to five different projects, and I will be implementing MLOps in different ways: using Azure ML, using MLflow, using Kubeflow, and so on. So it will be very useful for you to start from the top.

First, what is MLOps? MLOps stands for Machine Learning Operations, and it refers to every stage in the machine learning lifecycle. So what is the machine learning lifecycle? When you start a project, you begin by understanding the business, then you understand the data, and then you go stage by stage: you ingest data, you pre-process data, you do feature engineering, cleaning, and so on. All of those steps come under the machine learning lifecycle, and this is what we are going to automate: all of the machine learning stages will be automated.
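To make the stage-by-stage lifecycle concrete, here is a minimal sketch of it as chained steps. Everything here (the function names, the toy data, the trivial "model") is illustrative, not taken from any particular framework:

```python
# Toy sketch of a machine learning lifecycle as chained stages.
# Every name here is illustrative; a real pipeline would define
# these steps in a framework such as Kubeflow or Azure ML.

def ingest():
    # In practice this would read from a database or data lake.
    return [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]

def preprocess(rows):
    # Drop rows with missing values (none in this toy data).
    return [r for r in rows if r[0] is not None]

def feature_engineering(rows):
    # Derive a simple extra feature: the squared value.
    return [((x, x * x), y) for x, y in rows]

def train(dataset):
    # "Train" a trivial threshold model on the first feature.
    threshold = sum(x for (x, _), _ in dataset) / len(dataset)
    return lambda x: 1 if x > threshold else 0

def run_pipeline():
    # The whole lifecycle, executed one stage at a time.
    data = feature_engineering(preprocess(ingest()))
    return train(data)

model = run_pipeline()
print(model(3.5))  # a value above the mean is classified as 1
```

In a real MLOps setup each of these functions becomes a pipeline step that the orchestration tool runs and tracks for us.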

MLOps, then, is an approach for managing that entire machine learning lifecycle, from the development stage through to the ongoing production stage, where we keep monitoring the model. It is a strategy for streamlining machine learning training, packaging, validation, deployment, and monitoring, and as part of an MLOps approach we can run an ML project consistently from beginning to end. That's why people are using MLOps: to avoid a step-by-step manual process.

So the question is, why did MLOps come into the picture? The answer: acquiring and cleaning large amounts of data, setting up tracking and versioning for all the models and all the experiments we perform, tracking model training runs, and then setting up deployment and a monitoring pipeline for the model before going to production was taking a huge amount of time, along with many other costs. So, put simply, MLOps is a combination of data science, data engineering, and more traditional DevOps techniques.

The aim here will be to understand both the model and the organizational infrastructure, and how they sit together. So let's go to the next slide. First of all, as we know, the machine learning lifecycle has a model training phase, a packaging phase, a validation phase, a deployment phase, and a monitoring phase. Now we'll understand what all these stages are.

The training phase includes the initial data preparation, experimentation, and model optimization; that is what the training phase does. In the packaging and validation phase, we manage the model rollout, including model validation, testing, and the system architecture. The deployment and monitoring phase, also called the final phase, includes ongoing monitoring of the model beyond deployment. Any drop in model accuracy, or the presence of outliers, detected by us or by our program, can do real harm to our business; at that point the main goal is to stabilize things so that the model can be realigned or retrained.

Up till here we were running a Jupyter notebook locally, but when it comes to production we need to move everything to the cloud, and there everything will be treated as a pipeline.
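The monitoring idea described here can be sketched in a few lines: watch a rolling accuracy metric and flag the model for retraining when it drops below a threshold. The window size and threshold below are made-up illustrative values:

```python
from collections import deque

# Toy monitoring sketch: track recent prediction outcomes and
# flag the model for retraining when rolling accuracy drops.
# The window size and threshold are illustrative values only.

class AccuracyMonitor:
    def __init__(self, window=100, threshold=0.8):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.threshold = threshold

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_retraining(self):
        # Only judge once the window has some data in it.
        if len(self.outcomes) < 10:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold

monitor = AccuracyMonitor(window=20, threshold=0.8)
for _ in range(15):
    monitor.record(1, 1)          # model doing well
print(monitor.needs_retraining())  # False: rolling accuracy is 1.0
for _ in range(10):
    monitor.record(1, 0)          # sudden drop in accuracy
print(monitor.needs_retraining())  # True: rolling accuracy fell to 0.5
```

In production the `needs_retraining` check would be the trigger that kicks off the retraining pipeline instead of a human noticing the drop.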

This whole thing is a pipeline, and it will run one step at a time. So what is the main purpose? First of all, to lower the operational cost. It also helps data scientists, machine learning engineers, and developers to be more agile and strategic in their decisions, because you won't have to do the same thing multiple times. You know what usually happens: when we get some issue in production, we go and check what is happening in the model, asking why model performance is getting lower. Then we retrain, or we redo feature engineering again and again, or cleaning, and those kinds of things, and it all creates extra effort. To avoid this, we reuse a pipeline workflow to train new models: we build a script that understands these situations and automatically adapts based on changes in the data or in the business. The pipeline updates itself based on the latest changes, so we won't have to make any modification to it, and we can use it for retraining purposes.

Another point: we can't just deploy an offline-trained model as a prediction service. Whatever we have trained locally on our laptop, we can't use that model directly in production; we'll need a multi-step pipeline to automatically retrain and deploy the model. This pipeline adds complexity, because we need to automate the steps that data scientists do manually before deployment, namely training and validating new models. Suppose we get some change in the data, say from some regulation or business change: once the business changes, it automatically affects our data, so our model should adapt to those changes. This is where MLOps comes into the picture, and it makes things easy for us because everything, every single script, lives in one place.

Now, how does it lower our operational cost? Because we don't have to test our model manually every time, it reduces the manpower needed and it reduces our time, since managing infrastructure is very challenging nowadays with so many moving parts. Automating all of this in a multi-step pipeline reduces the complexity that data scientists and machine learning engineers usually face.

Now, what is DevOps? In simple terms, DevOps stands for Development and Operations, and MLOps is a subset of DevOps; there is a much bigger architecture that MLOps fits inside. To keep it simple: DevOps is a collaboration between the development and IT operations teams to take software into production and deployment in an automated, repeatable way. DevOps helps increase organizational speed and deliver software applications and services on time. The main advantage shows up when we want our software to be usable by many people without losing performance; that is where this concept came in, because the system will automatically scale according to the number of people using our services. Its main role is to make a developer's life easier by providing a platform where they can do continuous build and continuous delivery. That is what DevOps is.

Now, DevOps has several different stages, as you can see in the sheet: code, build, test, package, release, configure, and monitor. As you can tell from the names, some of these appear inside MLOps as well. Together they provide a solution in the form of set-up processes for building, testing, deploying, and managing large-scale software systems. That is what DevOps basically is, and now we'll see the key differences between it and MLOps. First, MLOps is more experimental in nature, and its execution phases are

different: as we saw on the previous slide, MLOps has its own set of phases, whereas DevOps has the seven phases above. That is the first main point. MLOps also concerns itself with the reproducibility of experiments, and it applies to the entire lifecycle. What do I mean by that? It covers everything from data gathering through model creation and continuous integration, whereas DevOps focuses mostly on deployment tasks such as build and release. In DevOps we use code version control to document all the modifications and additions that are made; in the MLOps case, however, code isn't the only thing. As I just said, for MLOps there are many other artifacts: orchestration, deployment, health logs, diagnostics, governance, business metrics, all of these things. Since we can't manage all of this locally, we have to deploy it in the cloud, so that different teams can come together and contribute their work; everything can then be maintained by the data scientists and data engineers, all the team members together, and they can monitor data shift, or you could say model drift, over time. Because data keeps changing: the world changes, the economy changes, values change, and that way the data changes too. Think of it this way: a model trained one year ago will not work for current data, right? Whatever issues occur around model governance and changing the model will be solved here; we'll cover what model governance is on the next slide. Finally, we can experiment by running the ML workflow with different sets of hyperparameters, different numbers of training steps and iterations, which is why machine learning operations is more experimental in nature.
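That kind of experimentation can be sketched as a simple grid sweep over hyperparameters. The parameter names and the scoring function below are illustrative stand-ins; a real run would train a model and return a validation metric:

```python
from itertools import product

# Toy hyperparameter sweep: try every combination of a small grid
# and keep the best-scoring run. The grid and the scoring function
# are illustrative stand-ins for a real training-and-validation run.

grid = {
    "learning_rate": [0.01, 0.1],
    "num_steps": [100, 200],
}

def run_experiment(learning_rate, num_steps):
    # Stand-in for "train a model and return validation accuracy".
    # Here we just pretend more effective steps score higher.
    return num_steps * learning_rate / (1 + num_steps * learning_rate)

best_score, best_params = -1.0, None
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    score = run_experiment(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)  # the combination with the highest score
```

An experiment tracker (MLflow, for example) would log every one of these runs so the whole sweep stays reproducible.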

So that is how we differentiate MLOps from DevOps. Now, what are the different levels of machine learning operations? To make it simple: at level zero, every single step is manual, including data analysis, data preparation, model training, and validation. It requires manual execution of every step and a manual transition from one step to another. A new model version will be deployed only a couple of times per year, because there is no tracking of anything, no continuous integration, and no continuous deployment. That's level zero.

Now, what's in level one? At level one, the model is automatically trained in production using fresh data, based on pipeline triggers. The model deployment step, which serves the trained and validated model as a prediction service for online prediction, is automated, as is the pipeline deployment. Those are the differences between level zero and level one.

In level two, every step is automated. You iteratively try out new ML algorithms and new modeling in the experimentation step, and they are orchestrated: the pipeline is continuously delivered, triggered automatically by a scheduler or in response to a trigger from the model registry, so model continuous delivery and monitoring keep happening; statistics are collected and the model is retrained based on them. That is the difference between level zero, level one, and level two.
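The level-one idea of pipeline triggers can be pictured as a small loop: whenever enough fresh data has accumulated, the training pipeline fires automatically. The batch threshold below is an arbitrary illustrative value:

```python
# Toy sketch of a level-one trigger: retrain automatically once
# enough fresh data has accumulated. The threshold is illustrative;
# real systems also trigger on schedules or registry events.

class RetrainTrigger:
    def __init__(self, batch_threshold=5):
        self.fresh_rows = []
        self.batch_threshold = batch_threshold
        self.retrain_count = 0

    def on_new_data(self, row):
        self.fresh_rows.append(row)
        if len(self.fresh_rows) >= self.batch_threshold:
            self.retrain()

    def retrain(self):
        # Stand-in for launching the real training pipeline.
        self.retrain_count += 1
        self.fresh_rows.clear()

trigger = RetrainTrigger(batch_threshold=5)
for i in range(12):
    trigger.on_new_data(i)
print(trigger.retrain_count)  # 2: fired at the 5th and 10th rows
```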

Now comes the next part, model governance, which I told you I would cover. Model governance becomes crucial because it is very important for us to ensure that the model's risks are clearly understood and that our finances stay clear of surprises. Why? Because we apply the model's output directly to our business, so it is directly connected: whatever the model outputs will impact our finances as well. Say you put a model into production: we need to know whether our model is giving correct output or not, and whether it has biases, all of those things. All of that comes under governance, and we have to take control of it.

Now come the model registry and the feature store. First, what is a model registry?

A model registry is a central repository for the models we have created, where we can publish the models that are ready to use. What happens is that when we train a model, we register it. What do I mean by registering? Once a model is headed for production, there is a part of the script that allows us to register it, so that we can keep track of our models later. Let's say we want to check the monthly level of accuracy: we go to the model registry, which keeps track of all the training runs and all the metrics we have logged. We can log all types of metrics, including accuracy, images, validation charts, accuracy charts, validation loss: everything we see during local training can be tracked there, which makes it very easy to review later. With the registry, data scientists can manage the lifespan of all the models in the business cooperatively with other teams and stakeholders. A trained model can be uploaded to the registry with a small amount of script; your model will be prepared, tested, and validated, and before deploying to production it should be in the registry.
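As a toy illustration of the registry concept (this is just the shape of the idea, not the API of MLflow or any real registry product), a registry maps a model name to versioned entries with their logged metrics:

```python
# Toy model registry: versioned models with logged metrics.
# Concept sketch only, not any real registry's API.

class ModelRegistry:
    def __init__(self):
        self.models = {}  # name -> list of version entries

    def register(self, name, model, metrics):
        versions = self.models.setdefault(name, [])
        entry = {"version": len(versions) + 1,
                 "model": model,
                 "metrics": metrics}
        versions.append(entry)
        return entry["version"]

    def latest(self, name):
        return self.models[name][-1]

    def metric_history(self, name, metric):
        # e.g. the monthly accuracy numbers mentioned above
        return [v["metrics"][metric] for v in self.models[name]]

registry = ModelRegistry()
registry.register("churn-model", object(), {"accuracy": 0.81})
registry.register("churn-model", object(), {"accuracy": 0.86})
print(registry.latest("churn-model")["version"])           # 2
print(registry.metric_history("churn-model", "accuracy"))  # [0.81, 0.86]
```

The model name "churn-model" and the stored fields are made up; the point is that every registered version carries its metrics, so reviewing accuracy over time is a lookup rather than a search through notebooks.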

Now comes the feature store, which you can think of as addressing a gap. What gap is that? It is a gap that exists in the machine learning operational lifecycle: in its absence, data scientists in an organization do a lot of duplicate work, creating the same features again and again, validating them again and again, across different use cases. This significantly increases both complexity and the time it takes to make changes; if you keep re-deriving all of these things every week, obviously it increases your turnaround time. Furthermore, data scientists who are new to the data may fail to notice an important feature for their modeling which has already been built by another team member, so they don't reuse it, assuming that since models keep changing, their features must keep changing too. But we need to understand the chronology here: a feature, or a feature's structure, is not something that always keeps changing; we keep the structure stable precisely so that it doesn't. The feature store supports agile development of features: it holds groups of features from different data sources, and we create and update datasets from those feature groups for training models. Since the groups aren't changing every time, we can reuse them from the feature store, which saves the time we would spend cleaning and regrouping those features. That is the gap inside our machine learning operation, and to avoid it we use a feature store. It's a very simple concept, just a few lines of code added to our machine learning operation, and the duplication is avoided. That's why, nowadays, feature stores and model registries are a crucial part of machine learning operations.
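A feature store can be pictured as shared, named feature groups that many training jobs reuse instead of re-deriving. Again, this is a toy sketch of the concept, not any real product's API; the group names and fields are made up:

```python
# Toy feature store: named feature groups computed once and
# reused across training jobs, instead of each data scientist
# re-deriving the same features. Concept sketch only.

class FeatureStore:
    def __init__(self):
        self.groups = {}  # group name -> {entity_id: feature dict}

    def register_group(self, name, features):
        self.groups[name] = features

    def build_training_set(self, group_names, entity_ids):
        # Join the requested groups into one row per entity.
        rows = []
        for eid in entity_ids:
            row = {"id": eid}
            for g in group_names:
                row.update(self.groups[g][eid])
            rows.append(row)
        return rows

store = FeatureStore()
store.register_group("customer_profile",
                     {1: {"age": 34}, 2: {"age": 51}})
store.register_group("purchase_stats",
                     {1: {"orders_90d": 3}, 2: {"orders_90d": 7}})

# Two different teams can now assemble training sets from the
# same shared groups without recomputing anything.
train = store.build_training_set(
    ["customer_profile", "purchase_stats"], [1, 2])
print(train[0])  # {'id': 1, 'age': 34, 'orders_90d': 3}
```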

Now comes the last slide: automation tools. What automation tools do we currently have in the market? We have Google's Kubeflow, we have MLflow, we have Azure ML, GitHub Actions, CML (Continuous Machine Learning), AWS SageMaker, and many more. I will be making projects on each of these automation tools one by one, so it will be easy for you to understand the gaps, understand what is different, and understand how long it takes to set up Kubeflow, how long it takes to set up MLflow, all of that. It will be very good learning for us, because we will get to know how the different platforms align with their frameworks.

So thank you for giving me your time, and have a good day. Also, please keep subscribing and liking; it will encourage me to make new videos on upcoming technologies, anything like MLOps. Thank you so much, bye!
This playlist contains an understanding of MLOps and related projects on the cloud!

DevOps brings together the best practices for software development and engineering, quality assurance, and IT operations.

MLOps will ensure model governance is established as a key part of the process, and that the model risks are clearly understood. This fits the machine learning model into wider conversations on risk management within the organization. It also frames the model as a tool to achieve wider business objectives. There is a range of MLOps software options available to help track the MLOps process.

MLOps tutorial: end to end, with projects on different concepts.

————————————————————————————————————-
#MachineLearning, #MLops #mlops, #Mlflow, #Azure, #AzureML, #YouTubeLearning,
————————————————————————————————————-

All playlists on my channel

Machine Learning Playlist:
Deep Learning Playlist:
AI Projects Playlist:
Stats & Probability Playlist:

—————————————————————————————————————
Connect with me here:
Github:
Facebook:
Instagram:

—————————————————————————————————————

THANKS & Love you all!!!

—————————————————————————————————————
00:00 hello welcome guys to take entertaining
00:02 Channel my name is Ram Sundar and I’ll
00:04 be starting with a new tutorial on ML
00:07 Ops since it is very popular and
00:08 trending in market so that’s why I
00:10 thought of like uh creating one playlist
00:12 for it and then because I will get to
00:15 learn about so many things like
00:16 continuous integration you know
00:18 continuous development like things
00:20 happen in devops so as we know like uh
00:23 mlops uh like came into Market recently
00:25 and Just Vibe up very like uh captured
00:29 so much popularity just because uh we
00:31 don’t have to touch anything and our
00:33 model will automatically get trained and
00:35 so many things there are so many
00:36 advantages of using mlops so let’s get
00:38 started so basically first of all I’ll
00:40 let you know uh like this tutorial
00:42 basically will consist of uh three or
00:44 five different type of uh projects so
00:46 basically I will be implementing uh
00:48 mlops in different ways so when using
00:51 you know Azure ml another using ml flow
00:53 queue flow like that so it will be very
00:55 useful for you to uh basically start
00:58 from top so let’s start
01:00 first of all what is envelope so ml Ops
01:03 is stands for like machine learning
01:04 operations so which refer to every stage
01:06 in machine learning life cycle so what
01:08 what is machine learning life cycle so
01:09 first of all so machine learning life
01:11 cycle cycle is like uh when you start
01:13 you know one project so how it starts so
01:15 basically you uh understand the business
01:17 you understand the data you uh go like
01:20 stage by stage you ingest data you
01:22 pre-process data you feature engineering
01:25 cleaning XYZ so those all come inside
01:27 machine learning life cycle okay so uh
01:29 this is what we’re going to automate it
01:31 so all the machine learning stages will
01:33 get automated now
01:34 how much amylose is like a approach for
01:37 you know managing the entire machine
01:39 learning lifecycle from development
01:40 stage to the block uh from development
01:43 stage to like ongoing production stage
01:45 and we will be monitoring it’s like it’s
01:47 a strategy of streaming line you know
01:49 machine machine learning training uh
01:51 packaging validation deployment
01:52 monitoring and like a part of MLS
01:55 approach so we can run ml project
01:57 consistently in this manner from uh
01:59 beginning to end so that’s why you know
02:01 uh people are using envelopes to
02:03 basically avoid you know step by step uh
02:06 manual process so the question is why ml
02:08 Ops came into Picture answer is so
02:10 acquiring and cleaning large amount of
02:12 data setting up you know tracking and
02:15 versioning of you know of all the models
02:18 you know experiments performed by us
02:20 model training runs and then setting up
02:22 deployment
02:24 and then monitoring pipeline for the
02:26 model and then go to production so uh
02:28 this was taking a huge you know time as
02:31 well as uh so many other things so so
02:33 basically uh in simple way mlux is a
02:36 combination of you know data combination
02:38 of data science data engineering more
02:40 like traditional devops techniques so
02:42 the aim will be here to understand both
02:44 the model and the organizational
02:45 infrastructure how basically they sit
02:47 together okay so let’s go to another
02:49 slide so first of all as we know uh in
02:52 you know machine learning life cycle
02:54 basically there is a training model
02:56 phase there is packaging phase there is
02:58 validation phase there is a deployment
03:00 phase there is monitoring phase now
03:02 we’ll understand basically what all
03:04 these stages are so in training phase
03:05 the training phase includes basically
03:07 initial data preparation uh
03:09 experimentation and model optimization
03:11 so this is what training phase does now
03:13 in package and validation phase what we
03:15 do the envelopes you know deployment
03:17 phase will include uh I mean this step
03:19 to manage model rollout including model
03:21 validation and testing and system
03:22 architectures so this will be like
03:24 packaging and validation step now in uh
03:26 deployment and monitoring phase
03:27 basically what we do this is also called
03:29 like a Final Phase that include ongoing
03:31 monitoring of model like Beyond
03:33 development deployment now any drop
03:35 in model accuracy or presence of outlier
03:38 or I mean that we have detected or I
03:40 mean our program has detected anyhow now
03:42 this will cause like uh this will cause
03:44 very you know uh very harm for our you
03:47 know business so that’s why and the cost
03:49 should be like stabilized and so the
03:52 model can be realigned or you know
03:53 retrained so that is our main goal will
03:55 be at that time so uh so basically up
03:58 till here we were running uh you know
04:00 Jupiter notebook locally but when it
04:02 comes to production uh we need to upload
04:03 it into cloud
04:12 so and it will be you know uh it will be
04:16 everything will be treated like a
04:17 pipeline so basically this whole thing
04:19 is a pipeline it will go one by one okay
04:24 these are the
04:28 uh
04:33 so basically invited script like
04:46 so see uh the purpose of important
04:49 purpose
04:53 literally
05:00 lower the operational cost First of all
05:02 okay now it will help help us and like
05:05 uh like data scientists machine learning
05:07 Engineers developer to be more agile and
05:09 strategic in their decision because you
05:11 know you won’t have to do same thing
05:13 multiple times because you know what
05:15 happens uh when we uh you know get some
05:18 issue in production we uh go and check
05:20 what is happening in model so why why
05:22 model performance is uh getting lower so
05:24 that time what we do we do retraining or
05:26 we do feature engineering again and
05:27 again or two or we do cleaning or those
05:29 type of thing basically so and it it
05:31 basically creates some more
05:33 Effectiveness so at that time what we’ll
05:35 do we will So to avoid this what we will
05:37 do we will reuse pipeline workflow to 10
05:39 new model so we’ll we’ll I mean build
05:42 some script that will automatically
05:43 understand things and it will basically
05:46 automatically keep changing based on you
05:48 know changes in data or you know some
05:50 business uh changes so basically then
05:52 what we will do uh I mean it will
05:54 automatically changes its structure
05:57 based on uh latest changes so basically
05:59 then we won’t have to do any
06:00 modification in the pipeline so that we
06:02 can use it for retraining purposes now
06:05 another thing is like we can’t just
06:08 deploy an offline train model uh as a
06:10 prediction service right like whatever
06:11 we have like trained in local our laptop
06:13 we just can’t like directly use that
06:16 model to uh you know in production so
06:18 we’ll need a multi-step pipeline to
06:20 automatically retrain deploy that model
06:22 so this pipeline add complexity because
06:24 we need to automate the state okay
06:26 automate the step that data scientists
06:28 do manually before deployment which is
06:30 train and validate new models okay so
06:32 that we want to automate so suppose we
06:34 got some you know uh changes in data or
06:37 say some regulation or business so data
06:40 will significantly change right so once
06:42 business changes like anything changes
06:44 so it will automatically affect in our
06:46 data so our model should adapt those
06:49 changes now so this is how mlops comes
06:52 into picture so it will be easy for us
06:54 to make uh like make it because
06:56 everything will be on premises like
06:58 every single script or every single
07:00 thing will be on in premise and that’s
07:02 why it will be easy for us now how it is
07:04 lowering our operational cost because uh
07:07 because not every time we have to like
07:08 test our model or you know a Manpower so
07:11 it will reduce our Manpower it will
07:13 reduce our time it will reduce our
07:15 because managing infrastructure is very
07:17 challenging nowadays because so many
07:18 things are there right so that’s why uh
07:20 automatic this and creating multi
07:23 -step reduce the complexity that usually
07:26 data scientist faces or machine learning
07:27 Engineers now comes what is devops so
07:30 basically in simple term if if I like
07:33 want to tell you uh what is devops so
07:35 basically it’s a development operation
07:37 like envelops is a subset of the box so
07:40 so basically there was a big very big
07:42 architecture which uh in which avelops
07:44 comes in so basically uh to be simple so
07:48 so in collaborative between you know
07:49 it’s a collaborative between development
07:51 and it operation team to make software
07:53 in production and deployment uh in an
07:55 automated way or you can say repetitive
07:57 way so devops help us like increase the
07:59 organizational speed delivers software
08:01 application service on time and uh you
08:04 know the the main advantage is when we
08:07 want to let our software used by you
08:10 know multiple uh people irrespective of
08:12 performance so that time there was
08:14 concept of picture because it will
08:16 automatically scale according to number
08:18 of people using our services so the most
08:20 main role is to make a developer life
08:22 easier by providing you know platform uh
08:25 where they can do continuous build and
08:26 continuous development so this is what
08:28 devops is so now devops have like five
08:31 different you know stages okay so as you
08:33 can see uh in the sheet uh so basically
08:35 they are code they are built their test
08:38 packages release configure tool and
08:40 monitor so as you can see by the by the
08:42 names some of them comes inside analogs
08:44 also so they must provide a solution in
08:47 form of setup processes building testing
08:49 deployment uh managing large scale
08:51 software systems and that that is what
08:53 basically devops and this is how it is
08:55 different from mlops okay so hope I’m
08:58 hoping that
09:01 that so so now we’ll see the main key I
09:04 mean key difference between analogs and
09:05 devops so basically as we can see uh
09:07 mlops is more experimental in nature and
09:09 they’re like uh execution phase is
09:11 different like we have seen in the
09:12 previous slide that uh like aminox has
09:16 so many phases but in devops we have
09:18 like seven phases okay so that is the
09:20 main thing first of all now uh
09:22 [Music]
09:22 um
09:28 [Music]
09:34 foreign
09:58 with so many you know reproducibility of
10:01 experiments so
10:03 so so MLS applies to like entire life
10:05 cycle so uh like like entire cycle what
10:07 do you mean by that from data Gathering
10:09 model creation like it continuous
10:11 integration
10:13 [Music]
10:24 whereas for devops it focuses mostly on
10:26 deployment tasks uh such as build and
10:29 deployment okay so we use code Version
10:31 Control in the
10:34 documentation of like all modification
10:35 and addition made to would have been
10:37 created however mnops uh for mloff’s
10:39 case code isn’t the only thing okay so
10:42 for uh for mlos like I just said there
10:45 are many things like orchestration
10:46 deployment Health locks Diagnostics
10:49 governance business metrics all of this
10:52 thing so since we can’t do this thing
10:54 locally we have to deploy it in Cloud
10:56 right Cloud so that different team can
10:58 come together and contribute their work
All of it can then be maintained by the data scientists and data engineers together, and the whole team can monitor data shift, or you can say model drift, over time. Data keeps changing: the world changes, the economy changes, values change, and with them the data changes too. Think of it this way: a model that was trained one year ago will not work well on current data, right? Whatever issues occur there, such as changing the model, are handled by model governance, which we'll cover in a later slide.
We can also experiment by running the ML workflow with different sets of hyperparameters, numbers of training steps, and iterations. That is why we say machine learning operations is more experimental in nature, and this is how we differentiate MLOps from DevOps.
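That experimental loop, re-running the same workflow over a grid of hyperparameters and recording every run, can be sketched like this (the training function is a stand-in I made up so the loop is runnable on its own; a real setup would call your actual training code and log runs to a tracking server):

```python
import itertools
import random

def train_and_score(lr, steps, seed=0):
    """Stand-in for a real training run: returns a fake validation
    score. The fixed seed keeps the sketch deterministic."""
    rng = random.Random(seed)
    return rng.random() * lr * steps  # hypothetical score, not a real metric

# MLOps is experimental: the same workflow is re-run over a grid of
# hyperparameters and every run is recorded for later comparison.
runs = []
for lr, steps in itertools.product([0.01, 0.1], [100, 500]):
    score = train_and_score(lr, steps)
    runs.append({"lr": lr, "steps": steps, "score": score})

best = max(runs, key=lambda r: r["score"])
print(best["lr"], best["steps"])  # best run under this toy score
```

The point is that every combination is tracked, so the "best" model is chosen from recorded evidence rather than memory.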
Now, what are the different levels of machine learning operations? To make it simple, at level 0 every single step is manual, including data analysis, data preparation, model training, and validation: each step, and each transition from one step to the next, requires manual execution. A new model version is deployed only a couple of times per year, because there is no tracking of anything, no continuous integration, and no continuous deployment. That is level 0.
12:27 what’s what’s in level one so basically
12:29 level one the model is automatically
12:30 trained in production and uh using the
12:32 phrase data based on uh life uh you know
12:34 pipeline triggered so the the model
12:37 deployment step which you know serves
12:39 the train and validated model as a
12:41 prediction service for online protection
12:42 is automated as well as pipeline
12:44 deployment
12:45 so these are the things that is uh that
12:47 is the difference between level zero and
12:49 level one so uh in level two what
At level 2, every step is automated. You iteratively try out new ML algorithms and new modeling in the experiment step, and the steps are orchestrated: pipeline continuous delivery is triggered automatically by a scheduler or in response to a trigger, the model lands in the model registry, and continuous delivery and monitoring keep happening. Monitoring collects statistics and, based on them, the model is retrained. That is the difference between level 0, level 1, and level 2.
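The level-2 behavior, where monitoring statistics automatically trigger retraining, can be sketched roughly as below. The function names, the statistics format, and the 0.9 threshold are all hypothetical, purely to illustrate the trigger logic:

```python
def evaluate_model(stats):
    """Pretend scoring step: in a real pipeline this would run the
    deployed model against freshly labelled data."""
    return stats["correct"] / stats["total"]

def maybe_retrain(stats, threshold=0.9):
    """Level-2 style automation: monitoring collects statistics and,
    when live accuracy drops below the threshold, the training
    pipeline is triggered again instead of a human doing it."""
    accuracy = evaluate_model(stats)
    if accuracy < threshold:
        return "trigger retraining pipeline"
    return "keep serving current model"

print(maybe_retrain({"correct": 850, "total": 1000}))  # accuracy 0.85 -> retrain
```

In a real system the return value would be replaced by an actual call into the orchestrator (a scheduler job, a pipeline run, etc.), but the decision logic is the same.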
Now comes another part, model governance, which I told you I would cover in this slide. Model governance has become more crucial, and it is very important for us to understand. It ensures that our model stays in order and that our finances stay clear, without wasted resources. Why? Because we apply the model's output to our business, so the model is directly connected to it: whatever the model says will impact our finances. Let's say you put a model into production and everything looked fine at first; we still have to check whether the model is giving correct output and whether it has biases. All of those things come under governance, and we have to take control of them.
Now come the model registry and the feature store. What is a model registry? Basically, a model registry is a central repository for the models we create, where we can publish models that are ready to use. When we train a model, we register it. What do I mean by registering? Once the model is headed for production, a small part of the script registers it so that we can keep track of it later. Say we want to check the model's accuracy month by month: we go to the model registry, which keeps track of all the training runs and of all the metrics we have logged. We can log all kinds of metrics, including accuracy, images, validation charts, accuracy charts, validation loss, everything we see in local training. With the registry, data scientists can manage the lifespan of all the models in the business cooperatively with other teams and stakeholders. A trained model can be uploaded to the registry with a small line of script, and models should be prepared, tested, and validated, and sit in the registry, before being deployed to production.
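To make the registry idea concrete, here is a toy in-memory sketch. The class and method names are my own invention; real registries (MLflow's, Azure ML's, and so on) persist this state and add stages such as Staging and Production:

```python
import datetime

class ModelRegistry:
    """Toy in-memory sketch of a model registry."""
    def __init__(self):
        self._models = {}

    def register(self, name, model, metrics):
        """Registering is 'a small line of script' after training:
        store the artifact plus every metric we logged, as a new version."""
        versions = self._models.setdefault(name, [])
        versions.append({
            "version": len(versions) + 1,
            "model": model,
            "metrics": metrics,
            "registered_at": datetime.datetime.now().isoformat(),
        })
        return versions[-1]["version"]

    def history(self, name):
        """Look back over all versions, e.g. to check accuracy month by month."""
        return [(v["version"], v["metrics"]) for v in self._models.get(name, [])]

registry = ModelRegistry()
registry.register("churn-model", model=object(), metrics={"accuracy": 0.91})
registry.register("churn-model", model=object(), metrics={"accuracy": 0.93})
print(registry.history("churn-model"))
```

The key property is that every trained version, with its logged metrics, stays queryable later, which is exactly what lets the team compare models over time before promoting one to production.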
The feature store addresses a gap in the machine learning operational lifecycle. In its absence, data scientists in an organization do a lot of duplicate work: creating the same features again and again, and validating them again and again for different use cases. This significantly increases both complexity and the time it takes to make changes. Beginning data scientists may also fail to include an important feature in their modeling that has already been built by another team member, simply because they didn't know it existed. And we need to understand the chronology of this modeling: a feature, or its structure, is not something that constantly changes; we define features so that they stay stable, which suits agile development of features. A feature store is a collection of feature groups, built from different data sources, from which we create and update datasets for training models. By reusing these pre-built groups from the feature store, we cut down the time spent cleaning and grouping features. That is the gap inside our machine learning operation, and to close it we use the feature store. It's a very simple concept, just a few lines of code added to our machine learning operation, and the duplication is avoided. That's why the feature store and the model registry are nowadays a crucial part of machine learning operations.
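The feature-store idea, compute a feature group once and let every team reuse it, can be sketched as a toy class. The names here are hypothetical; real feature stores (Feast, the SageMaker Feature Store, etc.) add storage, versioning, and online/offline serving on top of this core idea:

```python
class FeatureStore:
    """Toy sketch of a feature store: register a feature-group
    definition once, then let every team reuse it instead of
    re-deriving the same features per project."""
    def __init__(self):
        self._groups = {}

    def register_group(self, name, compute_fn):
        """One team defines and validates the feature group once."""
        self._groups[name] = compute_fn

    def get_features(self, name, raw_rows):
        # Every consumer gets the same, already-validated definition
        return [self._groups[name](row) for row in raw_rows]

store = FeatureStore()
# One team defines the feature group once...
store.register_group("user_spend",
                     lambda row: {"avg_spend": row["total"] / row["orders"]})
# ...other teams reuse it for their own training sets
rows = [{"total": 100.0, "orders": 4}, {"total": 90.0, "orders": 3}]
print(store.get_features("user_spend", rows))
```

Because the definition lives in one place, fixing or improving a feature automatically benefits every model that consumes it, which is exactly the duplication the transcript describes being avoided.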
Now comes the last slide: the automation tools currently on the market. We have Google's Kubeflow, MLflow, Azure ML, GitHub Actions, Continuous Machine Learning (CML), AWS SageMaker, and many more. I will be making projects on each of these automation tools one by one, so it will be easy for you to understand the gaps, understand the differences, and see how long it takes to set up Kubeflow versus MLflow, and so on. It will be very good learning for us, because we will get to know how the different platforms align with their frameworks. Thank you for giving me your time, and have a good day. And please keep subscribing and liking; it encourages me to make new videos on upcoming technologies like MLOps. Thank you so much, bye.
