https://cnfl.io/kafka-training-certification | In this video we’ll lay the foundation for Apache Kafka®, starting with its architecture; ZooKeeper’s role; topics, partitions, and segments; the commit log and streams; brokers and broker replication; producer basics; and consumers, consumer groups, and offsets.
After you’ve watched the video, you can take a quick quiz to check what you’ve learned and get immediate feedback here: https://forms.gle/RDc84FbPeJ2CwCRP9
As always you can visit us here: https://cnfl.io/kafka-training-certification
LEARN MORE
► Apache Kafka 101 course: https://cnfl.io/apache-kafka-fundamentals-course-training
► Learn about Apache Kafka on Confluent Developer: https://cnfl.io/confluent-developer-training
► Use CLOUD100 to get $100 of free Confluent Cloud usage: https://cnfl.io/cloud100-try-confluent-cloud-training
► Promo code details: https://cnfl.io/cloud100-details-training
CONNECT
Subscribe: http://youtube.com/c/confluent?sub_confirmation=1
Site: http://confluent.io
GitHub: https://github.com/confluentinc
Facebook: https://facebook.com/confluentinc
Twitter: https://twitter.com/confluentinc
LinkedIn: https://www.linkedin.com/company/confluent
Instagram: https://www.instagram.com/confluent_inc
ABOUT CONFLUENT
Confluent, founded by the creators of Apache Kafka®, enables organizations to harness the business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides various industries, from retail, logistics, and manufacturing to financial services and online social networking, a scalable, unified, real-time data pipeline that enables applications ranging from large-volume data integration to big data analysis with Hadoop to real-time stream processing. To learn more, please visit http://confluent.io
#apachekafka #kafka #confluent
When he mentioned timestamps, I was imagining a cop saying “You have the right to a time stamp. If you cannot afford a time stamp, one will be provided for you.”
Do Kafka topics have built-in filters to filter out messages?
Thanks Tim for this excellent video.
Hey 🙂 Your presentation and explanatory style is really great – excellent in fact, and just at the right level for me! Many thanks
And this summarizes my master's degree in 24 mins.
Can you say that a broker is a process?
wowwwwwww this video is sooo freakinggg goood
What's the maximum file size that Kafka can process?
I can't believe I just watched a 24-minute technical video without yawning or pausing. This guy is good! 🏆🏆
One of the best videos on Kafka basics and understanding of the Cluster
excellent tutorial. i hate kafka now
Excellent explanation. Thank you
Loved it. Thanks for making it easy to understand.
Thanks man
Fantastic video @Confluent and @TimBerglund. Confluent is amazing in the way they are making Kafka easier for everyone to learn.
Thanks for this great video! I have a few questions that hopefully someone could help clarify 🙂
1. How are brokers replicated? It sounds like replication is async, so I imagine that when the leader fails over there would be a small amount of message loss (because a replica would always lag the leader a tiny bit)?
2. When the broker receives a message, does it write it to the log immediately, or does it do some kind of in-memory buffering and write in small batches? If so, what happens to the unflushed messages if that broker crashes?
Just to clarify, I'm not criticizing Kafka; it's a great tool and I really liked working with it at my previous job. But I'm curious because I've heard various tech talks about how Kafka is used in different products. For example, Nubank, a financial startup, uses Kafka, and from their talk I got the feeling that they rely on Kafka being 100% reliable (as in, not losing messages), which surprises me.
3. One last noob question: apparently Kafka's high write throughput is partly due to its sequential writes, which avoid random disk seeks. But given that consumers almost always consume messages with a slight delay, does that mean that whenever a consumer pulls new messages it breaks this nice sequential mechanism (because we need to seek to a disk location other than the end of the log file)?
Thanks!
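For readers with the same durability questions: the answer depends on configuration. A minimal sketch of the producer and topic settings that control this (values here are illustrative, not recommendations):

```
# Producer: wait for all in-sync replicas to acknowledge a write
# before treating it as successful (strongest durability mode).
acks=all
enable.idempotence=true

# Topic: with 3 replicas, require at least 2 to be in sync, so an
# acknowledged message survives the loss of any single broker.
replication.factor=3
min.insync.replicas=2
```

With acks=all and min.insync.replicas=2, a leader failover does not lose acknowledged messages, because every acknowledged write already exists on at least one follower.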
Amazing videos. Consider using patterns or other visual indicators in place of color to accommodate various forms of color blindness.
thanks !!
This guy’s presentation skills should be made gold standard, period !!!
Who is this guy? Blew my mind away.
First time learning about Kafka, learned everything I needed to know from your one video. You are a great presenter sir, thank you!
Excellent explanation in a short video Tim. Really appreciate your effort. Thank you!!
Cool. Unbelievable that such valuable information is free.
21:24 Now you confused me. I thought that consumers used long polling, but you described a short-polling mechanism.
Very good
Hi,
I am a fresher, new to Kafka. Storing topics needs persistent storage, and consuming a message doesn't delete it. Now my question: suppose I have X amount of persistent storage and the producer produces X amount of messages in Y days. What will happen to my storage after the Y'th day?
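Since retention questions like this come up a lot: Kafka does not keep data forever by default; old log segments are deleted (or compacted) according to per-topic retention settings. An illustrative config fragment (values are examples, not defaults):

```
# Oldest log segments are deleted once either limit is exceeded.
cleanup.policy=delete
retention.ms=604800000       # keep messages for 7 days
retention.bytes=1073741824   # or cap each partition at ~1 GiB
```

With bounded retention the storage never grows past the configured cap; with unlimited retention (retention.ms=-1) you would indeed eventually fill the disk.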
Super course, very effective and well delivered. Thanks !
Awesome, very simple to understand, and managed to hold my attention.
Within a consumer group, if a consumer instance gets killed then how does Kafka handle that specific partition? Does it wait for some health check and reallocate that partition to a healthy consumer?
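For anyone wondering the same: yes, the group coordinator detects the dead consumer via missed heartbeats and triggers a rebalance, reassigning its partitions to the surviving members of the group. The timing is governed by consumer settings like these (illustrative values):

```
# If no heartbeat arrives within session.timeout.ms, the consumer is
# declared dead and its partitions are rebalanced to live members.
session.timeout.ms=45000
heartbeat.interval.ms=3000
# A consumer that stops calling poll() for this long is also evicted.
max.poll.interval.ms=300000
```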
Great talk!!
Wait, but how do the consumers of the consumer_offset topic keep track of where they are?
This was solid. Thank you!
Nobody ever taught Kafka in details like you did Tim. Much appreciated 👍🏻
Dear Confluent,
Can we have Tim do videos on Spark, Druid and Kubernetes too?
Great video! Simple to understand and managed to hold attention
Simply kafkaesque.
hahaha cornflower blue. awesome
Entering now in Kafka and finally a perfect simple rapid nice explanation. Thank you for this video!
Halfway through the video and already loving this guy. Super presentation and delivery skills. Kudos Tim!!
Well explained.
Question: how does the disk space underneath the brokers/segments grow? Is that something producers or consumers need to worry about? Is there a cloud offering as SaaS or IaaS?
Thanks, Tim, for the awesome explanation of Kafka terms and how they relate to each other. Only after watching this video could I understand Kafka terms in real depth, and I no longer have to cram these terms again and again. 🙂
Thanks, great breakdown and presentation.
Thank you, Tim and the Confluent team, your tutorials are top-notch and really help people understand the subject. Wish you all the best!
21:10 – why does one need a consistent key for ensuring ordered messages if there's a timestamp associated with all produced messages? Wouldn't the timestamp be enough to know the ordering of a group of related messages?
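A quick sketch of why the key matters (illustrative Python, not Kafka's actual murmur2 partitioner): partition assignment is a pure function of the key, and Kafka only guarantees ordering within a partition. Timestamps record when messages were produced, but if related messages land on different partitions they can still be consumed out of order regardless of their timestamps.

```python
import zlib

NUM_PARTITIONS = 6

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    # Deterministic hash of the key, modulo the partition count.
    # Kafka's default partitioner uses murmur2; crc32 stands in here
    # just to show that the mapping depends only on the key.
    return zlib.crc32(key) % num_partitions

# Every event for the same entity (e.g. one thermostat ID) maps to the
# same partition, preserving per-key ordering.
events = [b"thermostat-42", b"thermostat-42", b"thermostat-42"]
partitions = {partition_for(k) for k in events}
print(partitions)  # a single partition for all three events
```

So a consistent key gives you per-key ordering for free; sorting by timestamp after the fact would require the consumer to buffer and reorder across partitions itself.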
5:00 sounded an awful lot like consensus on Ethereum. Are the two architectures related?
I’m here for the t-shirt.
these videos are so good I download them just in case
Nicely explained. Thanks
Can the offset be synced across partitions so that we can have serial processing of the data?
Very helpful. As a bizarre side note, the speaker's voice sounds a lot like Weird Al Yankovic to me. Which is obviously a very good thing.