This is a video meant for non-technical audiences. If you have experience and knowledge of Kubernetes and want to install, administer or develop Kubernetes applications, please see the other technical tutorials in our series.
After virtualization took hold in the mid-2000s, the industry set out to explore solutions that could not only ISOLATE, but also precisely SPLIT, the capacity needed by each application. That exploration ended with the discovery of CONTAINERS.
While hypervisors virtualize the hardware, in the world of containers a container runtime virtualizes the Operating System.
What if we bundle the application and its basic necessities, and give this bundle just sufficient resources to run? As the application grows, the bundle is elastic and expands, consuming more resources. As the application shrinks, the elastic bundle contracts, freeing excess resources.
The essentials, that is, the application dependencies and the OS dependencies, bundled together with the application and running on the server, are what we call a container. Let’s examine this concept using a cross-sectional view of a server, as illustrated on the left of this slide. The bottom two layers, the hardware and the OS of the host machine, remain the same. The container runtime, in layer 3, catalogs the resources of the host machine and dispenses these resources to the containers above it. This architecture not only allows fine granularity in the size of the containers that can be run, but also provides the flexibility to run many types of containers on the same host machine.
For example, let us imagine our host machine has the Ubuntu Linux OS. In normal circumstances, running applications built for another distribution, such as CentOS Linux, would be cumbersome on this host machine. But with container technology we can overcome this problem and run them side by side on the same host, because all Linux containers share the host machine’s Linux kernel. Note that Windows and Mac OSX applications do not share that kernel, so they still require their own host or a virtual machine.
The ability to run containers of different types and sizes on the same host machine is illustrated on the right. You can see that the containers are not evenly sized, and that multiple sizes and types of containers run on the same host.
When running a CentOS Linux container on an Ubuntu Linux host machine, the entire CentOS Linux distribution need not be duplicated in the OS dependency layer. Only the capabilities CentOS Linux has in addition to the host OS are needed in the OS dependency layer – making this a very light layer.
This is an important distinction… the container is much lighter, and therefore more efficient, than a virtual machine.
Kubernetes was designed and created by engineers at Google. Google handed over Kubernetes as open-source software to the non-profit Cloud Native Computing Foundation. K8s is a short form of Kubernetes. In Greek, Kubernetes means helmsman, or pilot. Kubernetes itself is written in the language Go; its internal predecessor at Google, Borg, was written in C++.
A function transforms inputs into outputs. A microservice is a logical group of one or more functions. An application is a large collection of functions. Note, the terms function, microservice, and application are used interchangeably in the industry. For the purposes of this tutorial, think of a function as the smallest unit… a small number of functions as a microservice… and a large number of functions as an application.
The application, bundled with its application dependencies and OS dependencies, is called an image.
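For viewers who want a concrete glimpse of what that bundle looks like in practice, here is a minimal sketch of a Dockerfile, the recipe a container tool uses to build an image. The file names and package choices here are hypothetical, purely for illustration:

```dockerfile
# Hypothetical recipe for building an image.
FROM ubuntu:22.04                 # the OS dependency layer
RUN apt-get update && \
    apt-get install -y python3    # an application dependency
COPY app.py /app/app.py           # the application itself
CMD ["python3", "/app/app.py"]    # what runs when the image becomes a container
```

Each line adds one layer to the bundle; together, the layers form the image described above.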
A node is a server that has the OS and the container runtime installed on it. It is the physical machine on which the containers will run.
When an image is RUN on a node, it becomes a container.
This is an important distinction. The bundle itself is called an image. Only when the bundle is running on a node is it called a container. An image is static, whereas a container is dynamic.
A logical collection of containers is called a pod.
A group of similar pods that run simultaneously is called a replica set.
A collection of replica sets, grouped together to carry out a larger purpose, is called a service. Like a living plant, a service is generally self-sufficient and is a completely independent unit.
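The terms above map directly onto the configuration files Kubernetes users write. Here is a minimal, hypothetical manifest sketching how they fit together; all names, labels, and the image tag are invented for illustration:

```yaml
# Hypothetical Kubernetes manifest illustrating pod, replica set, and service.
apiVersion: apps/v1
kind: Deployment            # manages a replica set of identical pods
metadata:
  name: hello-app
spec:
  replicas: 3               # three similar pods running simultaneously
  selector:
    matchLabels:
      app: hello
  template:                 # the pod template: a logical collection of containers
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hello-app:1.0   # the image; it becomes a container when run on a node
---
apiVersion: v1
kind: Service               # groups the running pods behind one stable endpoint
metadata:
  name: hello-service
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
```

In practice, a Deployment creates and manages the replica set for you, and the Service routes traffic to whichever pods are currently running.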
The base flavor Red Hat offers, and the cheapest option, is the OpenShift Kubernetes Engine.
OpenShift Kubernetes Engine is basic open-source Kubernetes plus a good user interface. It also has many more monitoring, logging, and image-registry capabilities than basic Kubernetes.
You can see these ten additional capabilities in the right-most block of this diagram.
Additionally, to generate revenue, Red Hat requires that OpenShift’s nodes be built using Red Hat’s own Linux, RHCOS – Red Hat Enterprise Linux CoreOS. The only exception to this rule is that, of late, the Windows operating system has become supported for running OpenShift worker nodes.
The slides used in this presentation are available at https://www.slideshare.net/survivalinstincts/virtual-machines-containers-container-swarms-kubernetes-openshift