This is a walkthrough of configuring #ApacheKafka #KafkaConnect to stream data from #ApacheKafka to a #database such as #MySQL.
It discusses common errors, how to place the #JDBC driver JAR correctly, how to deal with deserialisation, and how to use ksqlDB to apply a schema to schema-less data.
See for code and details.
*NOTE* How ksqlDB deals with keys changed in v0.10 (this video shows v0.8). See for details.
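As a sketch of the ksqlDB technique covered later in the video (applying a schema to schema-less JSON so the JDBC Sink can consume it) — note the topic name, column names, and types here are illustrative placeholders, not taken from the video:

```sql
-- Declare a stream over a schema-less JSON topic by stating the schema explicitly
-- (topic 'orders' and its columns are hypothetical examples):
CREATE STREAM ORDERS_JSON (ORDER_ID INT, ITEM VARCHAR, QTY INT)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- Re-serialise the data as Avro so the schema is registered in the
-- Schema Registry and the JDBC Sink connector can pick it up:
CREATE STREAM ORDERS_AVRO
  WITH (VALUE_FORMAT='AVRO') AS
  SELECT * FROM ORDERS_JSON;
```

The JDBC Sink would then be pointed at the Avro-serialised topic instead of the original JSON one.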
Table of contents:
* 00:00 Introduction
* 00:44 Populating some data into a test topic
* 02:55 Creating the JDBC Sink
* 04:47 Putting the JDBC driver in the correct place
* 07:45 JDBC Sink connector in action
* 08:52 Debugging the JDBC Sink connector
* 10:27 INSERT vs UPSERT
* 12:26 Dropping fields, adding metadata
* 14:32 Evolving the target table schema
* 16:21 JDBC Sink and schemas
* 18:18 Working with JSON data and the JDBC Sink
* 28:03 Applying a schema to JSON data with ksqlDB
* 34:01 Working with CSV data and the JDBC Sink
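For orientation, a minimal JDBC Sink configuration touching the points above (upsert mode, schema evolution) might look like this — the connector name, topic, and MySQL connection details are placeholder assumptions, not values from the video:

```json
{
  "name": "jdbc-sink-mysql",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:mysql://mysql:3306/demo",
    "connection.user": "connect",
    "connection.password": "secret",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "order_id",
    "auto.create": "true",
    "auto.evolve": "true"
  }
}
```

`insert.mode=upsert` with `pk.mode`/`pk.fields` covers the INSERT vs UPSERT discussion, while `auto.create`/`auto.evolve` let the connector create and evolve the target table schema.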
References:
* Confluent Hub:
* JDBC Sink connector docs:
* Learn more about Kafka Connect in this talk:
* Kafka Connect docs:
—
☁️ Confluent Cloud ☁️
Confluent Cloud is a managed Apache Kafka and Confluent Platform service. It scales to zero and lets you get started with Apache Kafka at the click of a mouse. You can sign up at and use code 60DEVADV for $60 towards your bill (small print: