What is Apache Kafka®? (A Confluent Lightboard by Tim Berglund) + ksqlDB


    https://cnfl.io/what-is-kafka-tutorials | Apache Kafka® is an open source distributed streaming platform that allows you to build applications and process events as they occur. Tim Berglund (Senior Director of Developer Experience at Confluent) walks through how it works and important underlying concepts. As a real-time, scalable, and durable system, Kafka can be used for fault-tolerant storage as well as for other use cases, such as stream processing, centralized data management, metrics, log aggregation, event sourcing, and more.

    ► Apache Kafka 101 course: https://cnfl.io/apache-kafka-explained-101-course
    ► ksqlDB 101 course: https://cnfl.io/ksqldb-introduction-course-kafka-explained
    ► Learn about Apache Kafka on Confluent Developer: https://cnfl.io/confluent-developer-resources-to-learn-kafka
    ► Kafka Tutorials: https://cnfl.io/what-is-kafka-tutorials
    ► Use CLOUD100 to get $100 of free Confluent Cloud usage: https://cnfl.io/cloud100-try-confluent-cloud-with-code
    ► Promo code details: https://cnfl.io/cloud100-promo-code-disclaimers

    Subscribe: https://youtube.com/c/confluent?sub_confirmation=1
    Site: https://confluent.io
    GitHub: https://github.com/confluentinc
    Community Slack: https://cnfl.io/slack
    Facebook: https://facebook.com/confluentinc
    Twitter: https://twitter.com/confluentinc
    Linkedin: https://www.linkedin.com/company/confluent
    Instagram: https://www.instagram.com/confluent_inc

    Confluent, founded by the creators of Apache Kafka®, enables organizations to harness the business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides various industries, from retail, logistics, and manufacturing to financial services and online social networking, with a scalable, unified, real-time data pipeline that enables applications ranging from large-volume data integration to big data analysis with Hadoop to real-time stream processing. To learn more, please visit https://confluent.io

    #apachekafka #kafka #confluent




    1. Classically, a database record looks like a log record. Building databases at scale may have been difficult with hierarchically constructed database systems, but relational, object, and hybrid relational systems are quite capable of doing so. So "topic" is another name for "log"? Yes, the monolith gave rise to the distributed system architecture in order to combat that issue (among others), and the size of programs has been reduced so that a program performs only a single function. This architecture has been cut up and remodeled in a number of different forms over the past 15 years or so, and has never really been implemented completely optimally. I personally saw the flaws in the component structure and was willing to work through the issues, where many others were on a mission to continually change it (square-peg theory). So you're using the notion of a "topic" (log record) to communicate with single-function entities? Shouldn't that be the other way round? What happens in the case where topic A communicates with component A, which pushes to topic B, but the content is manipulated between component A and topic B? Where is the non-repudiation? OK. This is a Splunk-esque product…
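
    The "topic is another name for log" idea the comment questions can be sketched in a few lines of plain Python: a topic is an append-only sequence of records, and each consumer tracks its own read offset. This is a toy illustration of the abstraction, not Kafka's actual API; all names here are made up.

    ```python
    class Topic:
        """Toy model of a Kafka topic: an append-only log of records.

        Records are never modified in place; each consumer simply
        remembers how far it has read (its offset).
        """
        def __init__(self, name):
            self.name = name
            self._log = []  # append-only list of records

        def append(self, record):
            self._log.append(record)
            return len(self._log) - 1  # offset of the new record

        def read(self, offset):
            return self._log[offset:]  # all records from offset onward


    class Consumer:
        """Toy consumer holding its own offset into a topic."""
        def __init__(self, topic):
            self.topic = topic
            self.offset = 0

        def poll(self):
            records = self.topic.read(self.offset)
            self.offset += len(records)
            return records


    orders = Topic("orders")
    orders.append({"id": 1, "item": "book"})
    orders.append({"id": 2, "item": "pen"})

    c = Consumer(orders)
    first_batch = c.poll()   # returns both records
    second_batch = c.poll()  # nothing new has been appended, so empty
    ```

    Because the log is immutable and every record keeps its offset, a second consumer can replay the same topic from offset 0 independently, which is the property Kafka builds on.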

    2. Guys, stay away. I could tell something smelled off right from the beginning, but as soon as he started mentioning all those proprietary plugins and Confluent languages, that said it all. Bottom line: don't get tricked into believing it's all sweet when it's not. Also, databases are here to stay and work very well alongside streams and/or event-sourcing logs. What utter nonsense, and another way to try to lock people in…

    3. A slick presentation. But fit for purpose is important. "Some people are finding…" sounds a bit like "a lot of people are saying…" Kafka may work well for certain problems, but it is so much easier to see what is going on when your data is in a database and then goes somewhere, rather than sloshing around in microservices and topics along the way. And with a database you know that your data is on disk and backed up, not potentially adrift in memory. And "not writing code" works well if the pre-packaged stuff does exactly what you want, but otherwise it is going to be a worse outcome than writing code.

    4. Hello,
      Thanks for sharing this knowledge here; it is really very good, and your explaining skills are also very good. I really appreciate it. I have a request: can you share how to deploy a Kubernetes cluster for Confluent Kafka? It would help us. I watched all the training videos provided by Confluent but did not find how to deploy a Kubernetes cluster for Confluent Kafka, so please, I request this of you.

    5. Does Kafka Connect / Kafka Streams / ksqlDB work with Apache Kafka, or only with Confluent Kafka? I have set up an Apache Kafka server on a Windows machine, and it's working fine. Now, can I set up Kafka Connect / Kafka Streams / ksqlDB directly on Windows, or do I have to use a Docker container? BTW, Docker is not running on my Windows machine, as WSL is not allowed. I am thinking of connecting it with SQL Server or PostgreSQL, and my preferred language is Python.
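
    On the connector part of the question above: Kafka Connect ships with Apache Kafka itself, and a connector is configured by POSTing a JSON payload to the Connect REST API. Below is a sketch of such a payload for streaming a Postgres table into a topic; note that the JDBC source connector is a separate, Confluent-maintained plugin that must be installed into the Connect workers, and the hostnames, credentials, and table names are placeholders.

    ```python
    import json

    # Sketch of the JSON body you would POST to a running Kafka Connect
    # worker (e.g. http://localhost:8083/connectors) to create a JDBC
    # source connector. Connection details below are placeholders.
    connector = {
        "name": "postgres-orders-source",
        "config": {
            "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
            "connection.url": "jdbc:postgresql://localhost:5432/mydb",
            "connection.user": "myuser",
            "connection.password": "mypassword",
            "table.whitelist": "orders",
            "mode": "incrementing",            # detect new rows via a strictly increasing column
            "incrementing.column.name": "id",
            "topic.prefix": "postgres-",       # rows land in the topic "postgres-orders"
        },
    }

    payload = json.dumps(connector)
    ```

    The same payload shape works against any Kafka Connect installation, whether it points at an Apache Kafka or a Confluent-hosted cluster; only the connector plugin itself has to be present on the workers.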