
Confluent will be at QCon NYC next week

Some of us from Confluent will be speaking at QCon NYC next week about Apache Kafka and Confluent’s stream data platform. Here are some things to look forward to from us.

Tutorial: Capturing and processing streaming data with Apache Kafka

Tuesday June 9th 9am-12pm

Kafka provides high-throughput, low-latency pub/sub messaging, and many large companies are quickly adopting it to handle their real-time streaming data at large scale. But what can you use it for, and how do you get started? Come to Confluent’s tutorial, conducted by our first engineer, Ewen Cheslack-Postava, on June 9th at 9am to find out.

We’ll start with an overview of Kafka, beginning with the basics. You’ll learn about Kafka’s unifying abstraction: a partitioned, replicated, low-latency commit log. Then we’ll discuss concrete applications of Kafka across multiple domains so you can see how Kafka can work for your company.
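To make that abstraction concrete, here is a minimal, purely illustrative Python sketch of a partitioned log (the class and method names are our own, not Kafka’s API). Real Kafka adds replication, persistence, and consumer groups on top of this idea.

```python
import hashlib

class ToyPartitionedLog:
    """A toy model of Kafka's core abstraction: a topic split into
    partitions, each an append-only log with per-partition offsets.
    Illustrative only; real Kafka handles replication, durability,
    and far more besides."""

    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Messages with the same key always land in the same partition,
        # which is how Kafka preserves per-key ordering.
        p = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1  # (partition, offset)

log = ToyPartitionedLog()
p1, o1 = log.produce("user-42", "clicked")
p2, o2 = log.produce("user-42", "purchased")
assert p1 == p2          # same key -> same partition
assert o2 == o1 + 1      # offsets grow monotonically within a partition
```

The key design point: ordering is guaranteed only within a partition, so choosing a good partitioning key (here, a user ID) is what preserves per-entity event order at scale.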

With a solid understanding of Kafka fundamentals, you’ll develop an end-to-end application that performs anomaly detection on streaming data to see how quickly you can get up and running with Kafka. The implementation will be broken into two parts. First, you’ll take an existing front-end application and instrument it with a Kafka producer to store user activity events in Kafka. Second, you’ll build a distributed, fault-tolerant service that detects and reports anomalies in the activity data.
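As a taste of the consumer-side logic, here is a hedged sketch of one simple anomaly-detection approach (a rolling z-score over a window of recent values). This is our own stand-in for illustration, not the tutorial’s actual implementation; the real service would read the events from Kafka rather than from a list.

```python
from collections import deque
import statistics

def detect_anomalies(events, window=20, threshold=3.0):
    """Flag values more than `threshold` standard deviations away from
    the rolling statistics of the previous `window` values."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(events):
        if len(recent) >= 2:  # stdev needs at least two samples
            mean = statistics.mean(recent)
            stdev = statistics.stdev(recent)
            if stdev > 0 and abs(value - mean) > threshold * stdev:
                anomalies.append((i, value))
        recent.append(value)  # the anomaly still enters the window
    return anomalies

# Steady activity around 10 events/sec, with one suspicious spike:
stream = [10, 11, 9, 10, 12, 11, 10, 95, 10, 11]
print(detect_anomalies(stream))  # the spike at index 7 is flagged
```

Note that once the spike enters the window it inflates the standard deviation, so the detector naturally tolerates a new, higher baseline rather than alarming forever; more robust services would use techniques like exponentially weighted statistics.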

By the end of the session, you’ll understand and be able to apply all the core functionality of Kafka.

And, the fun doesn’t stop there because you can still attend…

The Many Faces of Apache Kafka: Leveraging real-time data at scale

Thursday June 11th 1:40pm-2:30pm

If you are curious about how Kafka is adopted at large scale in production or if you are looking to learn how to adopt Kafka in practice, attend my talk at 1:40pm on June 11th.

Since we open-sourced Kafka more than four years ago, it has been adopted very widely, from web companies like Uber, Netflix, and LinkedIn to more traditional enterprises like Cerner, Goldman Sachs, and Cisco. These companies use Kafka in a variety of ways – as the infrastructure for ingesting high-volume log data into Hadoop, to collect operational metrics for monitoring and alerting applications, for low-latency messaging use cases, and to power near-real-time stream processing.

In this talk, you will learn how Kafka’s unique architecture allows it to be used both for real-time processing and as a bus for feeding batch systems like Hadoop. You will also learn how Kafka is fundamentally changing the way data flows through an organization, presenting new opportunities for processing data in real time that were not possible before. I will discuss how Kafka impacts the way data is integrated across a variety of data sources and systems.
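The reason one log can feed both worlds is that consumers track their own read positions. Here is a minimal toy model of that idea (the names are ours, not Kafka’s API): a fast poller and a bulk batch loader read the same records independently, each at its own pace.

```python
class ToyLog:
    """Append-only log where each consumer tracks its own offset.
    A toy model of how one Kafka topic can serve both a real-time
    consumer and a batch system like Hadoop without interference."""

    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)

    def read(self, offset, max_records=10):
        # Reading never removes data, so any number of consumers can
        # replay the log from any offset they choose.
        batch = self.records[offset:offset + max_records]
        return batch, offset + len(batch)  # (records, next offset)

log = ToyLog()
for i in range(5):
    log.append(f"event-{i}")

# A real-time consumer polls frequently in small batches...
rt_batch, rt_offset = log.read(offset=0, max_records=2)
# ...while a batch loader periodically reads everything accumulated so far.
all_batch, _ = log.read(offset=0, max_records=100)

assert rt_batch == ["event-0", "event-1"]
assert all_batch == [f"event-{i}" for i in range(5)]
```

Because the log retains data and consumers own their offsets, adding a new downstream system never disturbs existing readers – which is why the same topic can serve dashboards, alerting, and nightly Hadoop loads at once.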

Lastly, you can expect to learn how to go about adopting Kafka in your own company to leverage real-time data at scale in practice.

If you can’t make it to the tutorial or talk, feel free to ping me or Ewen if you’d like to talk about Apache Kafka or Confluent.
