
Cloud Native DevOps Explained


I want to start by laying out an example cloud-native application that I’ve architected and know how to build out. So, let’s start with the front-end. We’ll call this the UI portion here. Below that we’ve got the BFF (“Back-end For Front-end”).

So, this is serving the APIs for that UI to serve up information. So, the UI accesses the BFF and that, in turn, is going to access the microservice or back-end layer. So, in here let’s say “back end”. Now, obviously for higher-value services – let’s say that back-end goes out to something like AI capabilities and, in addition, maybe a database. So, Matt, as the expert, I’m going to hand this off to you. This is the application architecture that I want. How do I start migrating this over to a cloud-native approach, and what are the DevOps considerations that I need to take into account?

OK. So, you’ve already laid out some of the separation of concerns. You’ve got a component that is focused on delivering a user experience, which, again, can be containerized and packaged. You’ve then maybe got a back-end for front-end which is serving UI-friendly APIs and abstracting and orchestrating across a number of back-end services. So, you’ve got your three logical points. So, moving forward, what you typically do is take each component and start to break it into a pipeline that will enable you to apply some discipline around how you build, deploy, and test.

So, what we typically do here is use DevOps to create a pipeline, and this pipeline is going to consist of a number of stages that will take us through the lifecycle of building and packaging this component. So, typically the first step is to clone the code from your source code management, which is typically Git or some Git-based technology – GitHub, GitLab – and then the next step is to build the app. So, “Build App”. In this portion, when you’re actually building out the application, you have considerations: for a Node.js app you have things like NPM; for Java, you have to figure out the build process for that.

So, the pipeline is kind of configured to build each one of these components based on the programming language? Right. So, typically you have one pipeline per component and, as you correctly stated, if you’re building a UI and it’s got React in it, you’re going to use webpack to build the UI TypeScript code and package that into a form that is then ready to run. So, there are steps – and, again, with a Spring Boot app you’ll package it using Maven or Gradle, and we know that with Node.js you’d use NPM and various other steps. So, this part of the pipeline is about packaging the source code in the way that it’s needed to then be run. But then, typically, at this point the next step is to run a set of tests.
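For a Node.js component like the BFF, the clone and build stages described above might be sketched in a GitLab CI-style pipeline. This is an illustrative assumption, not something from the video: the stage names, Node version, and output paths are all hypothetical.

```yaml
# Hypothetical GitLab CI pipeline fragment for the BFF component.
# GitLab clones the repository automatically before each job runs,
# which covers the "clone the code" step.
stages:
  - build

build-app:
  stage: build
  image: node:18          # assumed Node.js version
  script:
    - npm ci              # install dependencies from package-lock.json
    - npm run build       # e.g. a webpack build of the TypeScript code
  artifacts:
    paths:
      - dist/             # pass the packaged output to later stages
```

A Java/Spring Boot component would swap the image for a JDK and run `mvn package` or `gradle build` instead; that per-language difference is why you typically end up with one pipeline per component.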

So, you run a set of unit tests against the code and you validate code coverage. This enables you to determine whether any code changes that have been made in the pipeline are valid. And again, these steps move along sequentially, but if any one of them fails it will stop the build, you’ll be informed as a developer, and then you’ll go back and fix the code or fix the test. So, just to clarify, at this level we’re going to do unit tests – so, tests within the app context, not really considering connections between the different components.
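A unit-test stage can be added to the same sketch. The test command and the idea of failing the job on low coverage are assumptions for illustration; the exact mechanism depends on your test framework.

```yaml
# Hypothetical unit-test stage: runs the tests and reports coverage.
# If any assertion fails, the job exits non-zero, the pipeline stops,
# and the developer is notified to fix the code or the test.
test-app:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test -- --coverage   # e.g. Jest with coverage reporting enabled
```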

Yeah. Today we’re not going to cover the integration story or performance testing, but typically when you’re building a pipeline you need to test the code that you’ve written using various techniques. Typically, you can use test-driven development, which is a concept we use in the Garage. So, you write the test first and then create the code to satisfy it. You can use other frameworks – most of the major programming models have good test frameworks around them, whether it’s Java, Node, or other languages. So, next step: again, one of the key things to drive for

is to get to a point of continuous delivery. This is a continuous integration pipeline, but if you fail the tests then that’s going to prevent this package of code from moving into a test environment. So, another common technique we use is code scanning – or vulnerability scanning, or security scanning. So, what we do here is look for vulnerabilities, we look at test coverage, and we apply quality gates. So, if your code isn’t of good enough quality, from a code analysis perspective, we can actually stop the build and say we’re not going to move this microservice further along the build process.
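The scanning stage can be wired in as a quality gate. This sketch assumes a SonarQube-style scanner purely as an example; any static-analysis tool that can fail the job works the same way.

```yaml
# Hypothetical code-scan stage acting as a quality gate.
code-scan:
  stage: scan
  image: sonarsource/sonar-scanner-cli   # assumed scanner image
  script:
    # Waiting on the quality gate makes the scanner fail the job if the
    # gate fails, stopping the microservice from moving further along
    # the build process, exactly as described above.
    - sonar-scanner -Dsonar.qualitygate.wait=true
```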

Right. So, if we were building this out – let’s say the BFF application was a container-based application running in IKS (IBM Cloud Kubernetes Service) – we have some capabilities that allow you to do that scanning, right? It’s the Vulnerability Advisor. So, would that exist in this phase then? So, you tested the code, then you… Yeah. Again, I’m lumping one or two different stages in here: you can do a vulnerability scan, you can do a code scan – it’s a common technique to make sure. The good thing about vulnerability scanning is you’re validating that there are no security holes in the Docker image, or the container image, as you build it.

Got it. OK. So, now that we’ve got up to the scanning phase, what’s our next phase – where are we going? The next step is to take the application that we built and tested and scanned,

and now we’re going to build it into an image. So, we call it “build image”. What this is doing is using the tools to package up the code that we built and put it inside a container. And once we’ve built the image, we then store that image out in an image registry with a tagged version that goes with it.

Right. So, I guess I got ahead of that right there – so, that’s where we would actually do that vulnerability scanning: once we’ve tested the code itself and done some scanning at that level, once we build the image, then something like Vulnerability Advisor… Right. So, you could have that as another stage, but, again, if the vulnerability posture is poor then you could prevent this moving forward, and that will inform the developers to either upgrade the level of base images they’re using or fix a number of the packages that they’ve included in them. So, basically, every step of the way, if anything fails you’re notified and you can go back and fix it. Right – and at the next stage, now you have an image, and the next thing is to deploy it.
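The build-image step packages the tested code into a container and pushes it to a registry with a version tag. The registry host and image name below are placeholders, not real endpoints.

```yaml
# Hypothetical build-image stage: package the app into a container
# image and store it in a registry with a tag tied to this build.
build-image:
  stage: package
  image: docker:24
  services:
    - docker:24-dind            # Docker-in-Docker so CI can build images
  script:
    - docker build -t registry.example.com/team/bff:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/team/bff:$CI_COMMIT_SHORT_SHA
    # An image vulnerability scan (e.g. Vulnerability Advisor) would
    # typically run as the next stage, gating on the scan result.
```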

So, what we’re looking to do is take that image and deploy it inside an OpenShift-managed platform, so it will move the container image from the image registry and deploy it. And there are a number of different techniques for deployment that are used. Some developers are using Helm, but the more modern approach is to use operators, so there’s a lifecycle around that component when it gets deployed. So, and then this deploy – let’s say I have a Kubernetes environment – you would deploy an application, let’s say the BFF application, into that Kubernetes environment, right? Yep.
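Whether it’s applied by the pipeline, a Helm chart, or an operator, deploying the BFF into a Kubernetes environment ultimately comes down to a manifest along these lines. The names, namespace, replica count, and image tag are illustrative assumptions.

```yaml
# Hypothetical Kubernetes Deployment for the BFF component.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bff
  namespace: dev              # CI typically deploys to the dev environment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bff
  template:
    metadata:
      labels:
        app: bff
    spec:
      containers:
        - name: bff
          image: registry.example.com/team/bff:1.0.3   # the tagged image
          ports:
            - containerPort: 8080
```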

OK, and I’m guessing at this phase this is still part of the developer flow – would this be the development environment that you’re pushing into, or the test environment? So, typically a continuous integration flow builds and packages the code up for the development environment.

In a few seconds we’ll talk a bit more about how we move that package of code from the container registry out into a test environment. Got it – so, right here, like that. Yep. So, the final step is to validate the health. So, what you’re really asking here is, “Is the container running?” Is it sending back operational information such that you can determine it’s healthy enough to validate not only that the tests have run, but that it actually started, it’s communicating with its dependent services, and it’s going to operate in the way that you’d expect it to.
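In Kubernetes, the “is the container running?” question is usually answered with liveness and readiness probes on the container spec. The `/health` paths and timings below are an assumed convention, not something mandated by the platform.

```yaml
# Hypothetical probe configuration for the BFF container spec.
# livenessProbe asks: did the process start and is it still alive?
# readinessProbe asks: is it ready to serve, dependencies included?
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health/ready     # can also check dependent services here
    port: 8080
  periodSeconds: 10
```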

Of course, yeah. So, this is where you connect it up to the different components and make sure they’re all working together seamlessly. This is where you would probably find issues with integration, or how the teams are connecting up with each other – API contracts and those kinds of things; those issues will start to bubble up in this space. Yes, and again, the health endpoint is important because you can hook that into operational tools like Sysdig and LogDNA and other monitoring that will give you a better feel for the current state of your applications as they run.

So, this has got us as far as the development cycle. The next step – and, again, this is starting to become common in the industry – is to use a technique called GitOps, where you would now say: I’ve got my application, I’ve built it, I’ve packaged it, I’ve tested it, I’ve validated it. What I’m now going to do is update a Git repo with the build number, the tagged version, and the reference to the image registry. And then GitOps can trigger a deployment of that image out into a test environment with all the other components that go with it. There are a number of GitOps tools out in the market, and one of the ones we use in the Garage is Argo CD, which allows you to monitor a Git repo via a webhook, and then it will pull the image, it will pull the deployment reference, and then package and deploy it ready for use in testing.

So, basically, the same discipline that developers have been applying forever with SCMs to manage different versions of their code – now operations teams are taking advantage of that same approach to basically operationalize the deployment of these actual images, containers, and applications. Absolutely, and it comes back to a point we made earlier: this is about discipline and repeatability. There are no humans in the middle of this process as you go through it, and the fewer humans touching these steps the better.
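In Argo CD, this GitOps flow is declared as an Application resource pointing at the Git repo that CI updates. The repo URL, paths, and namespaces below are placeholders for illustration.

```yaml
# Hypothetical Argo CD Application: watches a Git repo of manifests
# and syncs the newly tagged image out into the test environment.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bff-test
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/bff-deploy.git  # repo that CI updates
    targetRevision: main
    path: environments/test
  destination:
    server: https://kubernetes.default.svc
    namespace: test
  syncPolicy:
    automated: {}       # deploy automatically when CI commits a new tag
```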

Again, one of the things we often do with clients is work with them and discover that there’s some human process in the middle, and that really slows down your ability to execute. So, it’s about automation, discipline, and repeatability, and if you can get to this point and prove that this code is good enough to run in production, you can then start to move towards that golden milestone of continuous delivery. Right. So, once you’ve automated all of this, that’s when you can truly say you have CI/CD. That’s when you can finally get to that level.

OK. So, honestly, Matt, this was a great overview of all the concepts we’ve discussed. If you’ve enjoyed this video or have any comments, be sure to drop a like or a comment below. Be sure to subscribe, and stay tuned for more videos like this in the future.

In this lightboard video, Sai Vennam and Matt Perrins with IBM Cloud walk through a scenario of taking an existing application and migrating it over to use a cloud-native approach in order to take advantage of increased scalability and higher-level services. These two cloud native experts also demonstrate how to best use DevOps principles to manage the building, testing, and deployment of the application’s lifecycle.



