3. Apache Kafka Fundamentals | Apache Kafka® Fundamentals

https://cnfl.io/kafka-training-certification | In this video we’ll lay the foundation for Apache Kafka®, starting with its architecture; ZooKeeper’s role; topics, partitions, and segments; the commit log and streams; brokers and broker replication; producers basics; and consumers, consumer groups, and offsets.

After you’ve watched the video, you can take a quick quiz to check what you’ve learned and get immediate feedback here: https://forms.gle/RDc84FbPeJ2CwCRP9

As always you can visit us here: https://cnfl.io/kafka-training-certification

► Apache Kafka 101 course: https://cnfl.io/apache-kafka-fundamentals-course-training
► Learn about Apache Kafka on Confluent Developer: https://cnfl.io/confluent-developer-training
► Use CLOUD100 to get $100 of free Confluent Cloud usage: https://cnfl.io/cloud100-try-confluent-cloud-training
► Promo code details: https://cnfl.io/cloud100-details-training

Subscribe: http://youtube.com/c/confluent?sub_confirmation=1
Site: http://confluent.io
GitHub: https://github.com/confluentinc
Facebook: https://facebook.com/confluentinc
Twitter: https://twitter.com/confluentinc
LinkedIn: https://www.linkedin.com/company/confluent
Instagram: https://www.instagram.com/confluent_inc

Confluent, founded by the creators of Apache Kafka®, enables organizations to harness the business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides various industries, from retail, logistics, and manufacturing to financial services and online social networking, a scalable, unified, real-time data pipeline that enables applications ranging from large-volume data integration to big data analysis with Hadoop to real-time stream processing. To learn more, please visit http://confluent.io

#apachekafka #kafka #confluent


50 thoughts on “3. Apache Kafka Fundamentals | Apache Kafka® Fundamentals”
  1. When he mentioned timestamps, I was imagining a cop saying “You have the right to a time stamp. If you cannot afford a time stamp, one will be provided for you.”

  2. Hey 🙂 Your presentation and explanatory style is really great – excellent in fact, and just at the right level for me! Many thanks

  3. I can't believe I just watched a 24-minute technical video without yawning or pausing. This guy is good! 🏆🏆

  4. Fantastic video @Confluent and @TimBerglund. Confluent is amazing in the way that they are making Kafka easier for everyone to learn.

  5. Thanks for this great video! I have a few questions that hopefully someone could help clarify 🙂
    1. How are brokers replicated? It sounds like replication is async, so I imagine that when the leader fails over there could be a small amount of message loss (because a replica would always lag the primary a tiny bit)?
    2. When the broker receives a message, does it write it to the log immediately, or does it do some kind of in-memory buffering and write in small batches? And if so, what happens to the un-flushed messages if that broker crashes?

    Just to clarify, I'm not criticizing Kafka; it's a great tool and I really liked working with it in my previous job. I'm just curious because I've heard various tech talks about how Kafka is used in different products. For example, Nubank, a financial startup, uses Kafka, and from their talk I got the feeling that they rely on Kafka being 100% reliable (as in, never losing messages), which surprises me.

    3. One last noob question: apparently Kafka's high write throughput is partly due to its sequential writes, which avoid random disk seeks. But given that a consumer almost always consumes messages with a slight delay, does that mean that whenever a consumer pulls new messages it breaks this nice sequential pattern (because we need to seek to a different disk location than the end of the log file)?
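On sub-question 1: whether a leader failover loses acknowledged messages is tunable. With `acks=all` on the producer and `min.insync.replicas` on the topic/broker, a write is only acknowledged once enough in-sync replicas have it, so an acknowledged message survives leader failure. An illustrative (not exhaustive) sketch of the relevant settings, with example values:

```
# Producer configuration
acks=all                  # wait for all in-sync replicas before acknowledging
enable.idempotence=true   # avoid duplicates when the producer retries

# Topic/broker configuration
min.insync.replicas=2     # reject acked writes unless >= 2 replicas are in sync
```

On sub-question 2: the broker writes to the OS page cache and relies on replication, not an fsync per message, for durability; flush settings such as `log.flush.interval.messages` exist, but replication is the usual safety mechanism.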


  6. Amazing videos. Consider using patterns or other visual indicators in place of color to accommodate various forms of color blindness.

  7. First time learning about Kafka, learned everything I needed to know from your one video. You are a great presenter sir, thank you!

  8. 21:24 now you confused me. I thought that the consumers used long polling, but you described a short polling mechanism.

  9. Hi,
    I am a fresher and new to Kafka. Storing topics requires persistent storage, and consuming a message doesn't delete that message. Now my question is: suppose I have X amount of persistent storage and the producer produces X amount of messages in Y days; what will happen to my storage after the Yth day?
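On retention: Kafka doesn't delete messages on consumption; it deletes (or compacts) old log segments according to retention settings, so once the retention limits are hit the oldest segments are dropped regardless of whether anyone has consumed them. A sketch of the broker-level knobs involved (values illustrative; topic-level overrides such as `retention.ms` and `retention.bytes` also exist):

```
log.retention.hours=168         # delete segments older than 7 days...
log.retention.bytes=1073741824  # ...or once a partition exceeds ~1 GiB
log.cleanup.policy=delete       # "compact" instead keeps the latest record per key
```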

  10. Within a consumer group, if a consumer instance gets killed then how does Kafka handle that specific partition? Does it wait for some health check and reallocate that partition to a healthy consumer?
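On this question: the group coordinator detects the dead consumer via missed heartbeats (after `session.timeout.ms`) and triggers a rebalance, reassigning its partitions among the surviving group members. A simplified, hypothetical sketch of round-robin reassignment (Kafka's real assignors are range, round-robin, and sticky; this is not the actual broker code):

```python
def assign_partitions(partitions, consumers):
    """Round-robin partition assignment: a simplified stand-in for
    Kafka's real consumer-group assignors."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Three consumers share six partitions...
before = assign_partitions(range(6), ["c1", "c2", "c3"])
# ...then c2 dies and the coordinator rebalances among the survivors.
after = assign_partitions(range(6), ["c1", "c3"])
```

The surviving consumers pick up the dead member's partitions and resume from the last committed offsets, so no data is lost, though processing pauses briefly during the rebalance.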

  11. Just getting into Kafka now, and finally a perfectly simple, quick, nice explanation. Thank you for this video!

  12. Halfway through the video and already loving this guy. Super presentation and delivery skills. Kudos Tim!!

  13. Well Explained.
    Question: how does the disk space underneath the brokers/segments grow? Is that something producers or consumers need to worry about? What about the cloud offering, as SaaS or IaaS?

  14. Thanks Tim for the awesome explanation of Kafka terms and how they relate to each other. Only after watching this video could I understand Kafka terms in real depth, and I no longer have to cram these terms again and again. 🙂

  15. Thank you, Tim and the Confluent team, your tutorials are top-notch and really help people understand the subject. Wish you all the best!

  16. 21:10 – why does one need a consistent key for ensuring ordered messages if there's a timestamp associated with all produced messages? Wouldn't the timestamp be enough to know the ordering of a group of related messages?
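On the timestamp question: ordering is only guaranteed within a partition, and consumers read partitions independently, so timestamps alone can't give you ordered processing across partitions. The producer's default partitioner hashes the key (Kafka actually uses murmur2; the crc32 below is just a deterministic, hypothetical stand-in), so a consistent key pins all related messages to one partition and thus preserves their relative order:

```python
import zlib

NUM_PARTITIONS = 6

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    # Kafka's default partitioner uses murmur2; crc32 here is only a
    # stand-in to illustrate the deterministic key -> partition mapping.
    return zlib.crc32(key) % num_partitions

# Every message with the same key lands on the same partition,
# so those messages keep their relative order for the consumer.
p1 = partition_for(b"customer-42")
p2 = partition_for(b"customer-42")
```

Messages with different keys may land on different partitions and be consumed at different rates, which is why a timestamp alone doesn't reconstruct a global order.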

  17. Can the offset be synced across partitions so that we can have serial processing of the data ?

  18. Very helpful. As a bizarre side note, the speaker's voice sounds a lot like Weird Al Yankovic to me. Which is obviously a very good thing.
