
Announcing Tutorials for Apache Kafka

We’re excited to announce Tutorials for Apache Kafka®, a new area of our website for learning event streaming. Kafka Tutorials is a collection of common event streaming use cases, with each tutorial featuring an example scenario and several complete code solutions. It’s the fastest way to learn how to use Kafka with confidence.

We’re building this because we know that event streaming is a radically different way of thinking. It causes us to rethink the way we architect our programs and systems. Although it has heaps of benefits (immutability, information sharing, and fault tolerance, to name a few), it can be surprisingly difficult for newcomers to learn.

It doesn’t need to be that way.

For beginners, Kafka Tutorials reveals the “shape” of the problems that event streaming can solve, making it easier to recognize where you might apply it in your own work. Moreover, following each tutorial’s steps reliably takes you from zero to working code.

For the experienced, it’s a crucial reference guide that makes your work easier. Easily look up how to join a stream and a table when you’re rusty, or quickly recall how to merge discrete streams. Over time, we’ll introduce more advanced material that makes use of the entire stack.
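As a rough illustration of the kind of answer one of those lookups resolves to, here is a minimal Kafka Streams sketch that joins a stream with a table and then merges in a second stream. The topic names and string values are hypothetical, chosen only to keep the example short; the tutorials themselves show several complete solutions per use case.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class JoinAndMergeSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // A stream of order events and a table holding the latest record per customer.
        // Topic names are illustrative, not taken from a specific tutorial.
        KStream<String, String> orders = builder.stream("orders");
        KTable<String, String> customers = builder.table("customers");

        // Stream-table join: each order is enriched with the current
        // customer record that shares its key.
        KStream<String, String> enriched =
            orders.join(customers, (order, customer) -> order + " placed by " + customer);

        // Merging discrete streams: combine the enriched orders with a
        // second stream of the same type into a single stream.
        KStream<String, String> webOrders = builder.stream("web-orders");
        KStream<String, String> allOrders = enriched.merge(webOrders);

        allOrders.to("all-orders");
        // The topology would then be passed to a KafkaStreams instance to run.
    }
}
```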

Although it’s early, we’re building Kafka Tutorials for the long term. That’s why we’ve engineered the site around a flavor of literate programming. Each tutorial that you see on a page is backed by a single data structure. We’ve built programs that understand this shared structure—one to render the page, and another to test its content. That means that whenever we change a tutorial, it is automatically validated on a continuous integration system, so the code we publish actually works.

[Diagram: the data structure backing each tutorial]
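To make the idea concrete, here is a minimal sketch of what such a shared structure and its two consumers could look like. All names here are hypothetical; this is not the site’s actual code, just an illustration of one structure feeding both the renderer and the CI check.

```java
import java.util.List;

// Hypothetical shape of the single data structure behind a tutorial:
// an ordered list of steps, each pairing prose with a runnable snippet.
record Step(String prose, String snippet) {}
record Tutorial(String title, List<Step> steps) {}

class TutorialSite {
    // One program renders the structure into the page a reader sees...
    static String renderPage(Tutorial tutorial) {
        StringBuilder page = new StringBuilder("# " + tutorial.title() + "\n\n");
        for (Step step : tutorial.steps()) {
            page.append(step.prose()).append("\n\n")
                .append("    ").append(step.snippet()).append("\n\n"); // indented code block
        }
        return page.toString();
    }

    // ...and another walks the same structure on CI, executing every snippet
    // so that published code is known to work.
    static void verifyOnCi(Tutorial tutorial) {
        for (Step step : tutorial.steps()) {
            execute(step.snippet());
        }
    }

    private static void execute(String snippet) {
        // Placeholder for the CI harness that actually runs the snippet.
    }
}
```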

Lastly, Kafka Tutorials is a community-driven site. Its source code is available on GitHub. If you have a great idea for a new tutorial or can make an existing one better, we’d love your contributions.

Happy learning!

Apache, Apache Kafka, Kafka and the Kafka logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided.

Michael Drogalis is Confluent’s stream processing product lead, working on the direction and strategy behind all things compute-related. Before joining Confluent, Michael served as the CEO of Distributed Masonry, a software startup that built a streaming-native data warehouse. He is also the author of several popular open source projects, most notably the Onyx Platform.
