3. Apache Kafka Fundamentals | Apache Kafka® Fundamentals

    In this video we’ll lay the foundation for Apache Kafka®, starting with its architecture; ZooKeeper’s role; topics, partitions, and segments; the commit log and streams; brokers and broker replication; producer basics; and consumers, consumer groups, and offsets.

    After you’ve watched the video, you can take a quick quiz to check what you’ve learned and get immediate feedback here: https://forms.gle/RDc84FbPeJ2CwCRP9

    As always you can visit us here: https://cnfl.io/kafka-training-certification

    ► Apache Kafka 101 course: https://cnfl.io/apache-kafka-fundamentals-course-training
    ► Learn about Apache Kafka on Confluent Developer: https://cnfl.io/confluent-developer-training
    ► Use CLOUD100 to get $100 of free Confluent Cloud usage: https://cnfl.io/cloud100-try-confluent-cloud-training
    ► Promo code details: https://cnfl.io/cloud100-details-training

    Subscribe: http://youtube.com/c/confluent?sub_confirmation=1
    Site: http://confluent.io
    GitHub: https://github.com/confluentinc
    Facebook: https://facebook.com/confluentinc
    Twitter: https://twitter.com/confluentinc
    LinkedIn: https://www.linkedin.com/company/confluent
    Instagram: https://www.instagram.com/confluent_inc

    Confluent, founded by the creators of Apache Kafka®, enables organizations to harness business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides various industries, from retail, logistics and manufacturing, to financial services and online social networking, a scalable, unified, real-time data pipeline that enables applications ranging from large volume data integration to big data analysis with Hadoop to real-time stream processing. To learn more, please visit http://confluent.io

    #apachekafka #kafka #confluent




    1. Thanks for this great video! I have a few questions that hopefully someone could help clarify 🙂
      1. How are brokers replicated? It sounds like replication is asynchronous, so I imagine that when the leader fails over there would be some small amount of message loss (because a replica would always lag the leader a tiny bit)?
      2. When a broker receives a message, does it write it to the log immediately, or does it do some kind of in-memory buffering and write in small batches? If so, what happens to the unflushed messages if that broker crashes?

      Just to clarify, I'm not criticizing Kafka; it's a great tool and I really liked working with it at my previous job. I'm just curious because I've heard various tech talks about how Kafka is used in different products. For example, Nubank, a financial startup, uses Kafka, and from their talk I got the feeling that they rely on Kafka being 100% reliable (as in, never losing messages), which surprises me.

      3. One last newbie question: apparently Kafka's ability to support high write throughput is partly due to its sequential writes, which avoid random disk seeks. But given that consumers almost always consume messages with a slight delay, does that mean that whenever a consumer pulls new messages it breaks this nice sequential pattern (because the broker needs to seek to a disk location other than the end of the log file)?
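      On the replication and flushing questions above: Kafka's durability is governed by a handful of broker and producer settings. A minimal sketch follows, assuming recent Kafka defaults; the values shown are illustrative, not recommendations.

      ```properties
      # --- broker / topic settings ---

      # Followers continuously fetch from the partition leader; a record is
      # considered "committed" once all in-sync replicas (ISR) have it.
      # Require at least this many in-sync replicas for acks=all writes:
      min.insync.replicas=2

      # A follower that hasn't caught up within this window is dropped
      # from the ISR.
      replica.lag.time.max.ms=30000

      # Disallow electing an out-of-sync replica as leader, which avoids
      # losing committed records on failover (at some availability cost).
      unclean.leader.election.enable=false

      # Kafka relies on replication rather than per-message fsync: writes go
      # to the OS page cache and are flushed in the background. These knobs
      # can force earlier flushes, at a throughput cost:
      #log.flush.interval.messages=10000
      #log.flush.interval.ms=1000

      # --- producer settings ---

      # Wait until the full ISR has the record before the send succeeds.
      acks=all
      enable.idempotence=true
      ```

      With acks=all and min.insync.replicas greater than 1, an acknowledged record exists on multiple brokers before the producer sees success, so a leader failover does not lose acknowledged messages; a crashed broker can lose data that was only in its page cache, but it re-syncs from the other replicas on recovery.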


    2. Hi,
      I'm a fresher, new to Kafka. Storing those topics needs persistent storage, and consuming a message doesn't delete it. My question: suppose I have X amount of persistent storage and the producer produces X amount of messages in Y days; what will happen to my storage after day Y?
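      Kafka addresses exactly this case with retention settings: old log segments are deleted (or compacted) once they exceed a configured age or size, regardless of whether consumers have read them. A sketch of the relevant broker settings, with illustrative values:

      ```properties
      # Delete log segments older than 7 days (the broker default).
      log.retention.hours=168

      # Or cap the total size retained per partition; -1 (the default)
      # means no size limit.
      log.retention.bytes=1073741824

      # Segments are the unit of deletion; each partition's log is split
      # into segment files of roughly this size.
      log.segment.bytes=1073741824

      # "delete" drops old segments; "compact" instead keeps only the
      # latest record per key.
      log.cleanup.policy=delete
      ```

      So in the scenario above, once the retention limits are reached, the oldest segments of each partition are removed and that storage is reused; a topic only grows without bound if retention is left unlimited.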

    3. Thanks, Tim, for the awesome explanation of Kafka terms and how they relate to each other. Only after watching this video could I understand the Kafka terms in real depth, and I no longer have to cram these terms again and again. 🙂