Benefits of Kubernetes | Scalability, High Availability, Disaster Recovery | Kubernetes Tutorial 16

    How Kubernetes makes high availability, scalability, disaster recovery possible – with simple animations.

    ► Subscribe To Me on Youtube: https://bit.ly/2z5rvTV

Kubernetes helps you achieve high availability (or no downtime), scalability (or high performance) and disaster recovery. 😎
    In this video I’ll go through a simplified visualization and explain step by step how K8s makes this possible.
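    A minimal sketch of the replication idea behind this (not taken from the video itself): a Deployment that runs several pod replicas is the basic building block for both high availability and scalability. The name my-app, the image and the port below are hypothetical.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                  # several identical pods: no single point of failure, more throughput
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:1.0    # hypothetical image
              ports:
                - containerPort: 8080

    If one pod or node dies, the Deployment's ReplicaSet brings a replacement up on a healthy node (the "no downtime" part); raising the replica count, or letting an autoscaler do it, is the scalability part.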

    ▬▬▬▬▬▬ T I M E S T A M P S ⏰ ▬▬▬▬▬▬
    0:00 – Intro
    0:38 – High availability and scalability
    4:17 – Disaster recovery
    6:27 – Kubernetes vs. AWS / Advantages of K8s

    ——————————————————————————————————-
    🔥 Maybe also interesting for you?
    What is Kubernetes? ► https://youtu.be/VnvRFRk_51k
    Kubernetes Components explained? ► https://youtu.be/Krpb44XR0bk
    Kubernetes Architecture explained? ► https://youtu.be/umXEmn3cMWY
    DevOps Tools Playlist ► https://bit.ly/2W9UEq6

    For any questions/issues/feedback, please leave me a comment and I will get back to you as soon as possible. Also please let me know what you want to learn about Docker & Kubernetes.

    #kubernetes #kubernetestutorial #devops #techworldwithnana
    ——————————————————————————————————-

This video is the 16th in a complete series for beginners. By the end of this tutorial series you will fully understand Docker and Kubernetes.

    ⭐️ Full Playlist: https://www.youtube.com/playlist?list=PLy7NrYWoggjwPggqtFsI_zMAwvG0SqYCb

    ————————————————————————————-

    The complete step-by-step guide to Docker and Kubernetes will include the following topics:

    🐳 DOCKER basics:
– Container concept
    – Why Docker? (image vs. traditional DevOps)
    – Install Docker on different operating systems
    – 8 basic commands you need to know (2 parts)
    – Docker vs. Virtual Machine
    – Docker in Practice: overview of the whole development process with Docker (development, continuous delivery, deployment); probably 3-5 videos, including Docker Compose, Dockerfile, and a private repository
    – Docker volumes in theory and practice

    KUBERNETES basics:
    – Main Kubernetes components (including Pod, Service, Ingress, Volumes, ConfigMap, Secrets, Deployment, StatefulSet)
– Kubernetes architecture for beginners (master, slave nodes & processes)
    – How Kubernetes makes high availability, scalability and disaster recovery possible
    – Minikube, Kubectl – set up the cluster
    – Kubectl basic commands – Demo
    – Configuration file (YAML) – syntax
    – Communication between the pods – basic networking concepts in Kubernetes
    – K8s Deployment in practice – example application deployment (pod + service + Ingress + secret)
    – K8s Volumes explained
    – K8s Namespaces

    ———————————————————————————–

    ✅ Connect with me
    Subscribe on Youtube: ► https://www.youtube.com/c/TechWorldwithNana?sub_confirmation=1
    DEV: ► https://dev.to/techworld_with_nana
    Instagram: ► https://www.instagram.com/techworld_with_nana/
    Twitter: ► https://twitter.com/Njuchi_/

    Legal Notice:
    Kubernetes and the Kubernetes logo are trademarks or registered trademarks of The Linux Foundation in the United States and/or other countries. The Linux Foundation and other parties may also have trademark rights in other terms used herein. This video is not accredited, certified, affiliated with, nor endorsed by Kubernetes or The Linux Foundation.



    24 COMMENTS

1. Great video, it could not be any simpler or clearer, with precise diagrams! As far as HA is concerned, two nodes within the primary DC serve the website content via INGRESS; however, I want a secondary DC running a similar workload, available either HOT/HOT or HOT/WARM. Just wondering how this will work in a disaster scenario and what needs to be taken care of. First, I will ensure data is replicated between the two DCs using some storage-level mechanism like SAN replication. Second, I will add a GTM (Global Traffic Manager) between the two DCs (a Global Server Load Balancer from BIGIP F5 or any other brand). Do I need to create another INGRESS in the DR site and add these two INGRESS entries under the GTM? In the normal scenario the user accesses the GTM and gets redirected to one of the INGRESSes based on how the GTM is configured. If the GTM is configured for round robin it will be a HOT/HOT setup, and if the GTM is configured to redirect traffic only to the primary site then it will be HOT/WARM.
      My question is: how are POD replicas created in the DR site as part of this setup? Will it be via one DEPLOYMENT, or do I need to create the secondary-site setup using another independent DEPLOYMENT?
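
    On the Deployment question above, one point that is safe to state: a standard Deployment is scoped to the cluster it is created in, so pod replicas in the DR site would normally come from a separate Deployment applied to the DR cluster (unless a multi-cluster/federation tool is used). A purely illustrative sketch of the DR-side manifest, with hypothetical names and a lower replica count for a HOT/WARM setup:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1                  # smaller footprint in the DR site for HOT/WARM; match the primary for HOT/HOT
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:1.0    # hypothetical image, same as in the primary site

    It would typically be applied against the DR cluster's own kubeconfig context, e.g. kubectl --context dr-cluster apply -f my-app.yaml, where dr-cluster is a hypothetical context name.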

2. Hello Nana – I have been following your videos for some time now; thank you for creating wonderful content and sharing it with us. Could you possibly create a video about a DR setup and moving workloads to the DR site in case of a disaster in the primary K8s cluster? Thanks.

3. Hi Nana, thanks for this excellent series! Question though: you mentioned that Ingress balances the request load, is itself load balanced, and that there's an instance of Ingress on each node. Where is the entry point for a request to "my-app"? Is there some sort of Ingress master that listens for requests to "my-app" and communicates with Ingress slaves on each node to determine how to balance the request load? Thanks in advance for any insights.
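
    For context on the entry-point question (general Kubernetes behaviour, not something specific to the video): the Ingress object itself is only a set of routing rules, and there is no Ingress master/slave hierarchy. The actual entry point is the ingress controller, usually exposed through a LoadBalancer or NodePort Service, which evaluates rules like the hypothetical sketch below. The host, port and ingress class are assumptions; my-app is taken from the comment.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app-ingress
    spec:
      ingressClassName: nginx             # hypothetical controller class
      rules:
        - host: my-app.example.com        # hypothetical external hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app          # the Service sitting in front of the my-app pods
                    port:
                      number: 8080        # hypothetical service port

    Spreading requests across the my-app pods is then handled by the Service and its endpoints, not by the Ingress rules themselves.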

    4. Hi Nana,
  It would be great if you could share your thoughts.

      Assume I have two clusters
      Cluster 1 at Site 1 (main site)
      Cluster 2 at Site 2 (DR site)
      Active-Active DR

  A request arrives at Cluster 1 for the API. The API tries to connect to the database.
      Assume all replicas of the DB in Cluster 1 are unreachable for some reason.
      Can I achieve a failover to the DB in the DR site?

  Generally speaking, I want the following:
      Prefer a local component to process a request. If it is not available, fail over to the DR site, continue further processing locally at the DR site, and trace the response path back into the main site, to the component that failed over to a component in DR.
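
    Not an answer from the video, just a hedged sketch of one building block for this kind of failover: an ExternalName Service can give the API a stable in-cluster DNS name for the database while the target behind it is switched to the DR site, either manually or by automation. All names below are hypothetical.

    apiVersion: v1
    kind: Service
    metadata:
      name: app-db                            # the name the API uses to reach the database
    spec:
      type: ExternalName
      externalName: db.dr-site.example.com    # hypothetical DNS name of the DR database; switch this on failover

    Whether this is enough depends on the database's own replication and promotion story, which Kubernetes does not solve by itself.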