
Confluent at VLDB 2015 | Building a Replicated Logging System with Apache Kafka

There has been renewed interest in using log-centric architectures to scale distributed systems that provide durability and high availability efficiently. In this approach, a collection of distributed servers operates on a replicated log that records state changes in sequential order. The log itself can then be treated as the "source of truth": when some of the servers fail and later come back, their state can be deterministically reconstructed by replaying this log upon recovery.
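To make the replay idea concrete, here is a minimal, self-contained sketch in plain Java (16+, for records). All names, such as LogRecord and KeyValueStateMachine, are illustrative only and are not taken from Kafka or from the talk; the point is simply that a state machine applying a sequenced log in order is rebuilt deterministically.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal sketch of log-replay recovery: the log of state changes is the
// source of truth, and a server's state is just the result of applying that
// log in order. All names here are illustrative.
public class LogReplayExample {

    // One sequenced entry in the replicated log: "set key to value".
    record LogRecord(long offset, String key, String value) {}

    // A trivial key-value state machine that applies log records in order.
    static class KeyValueStateMachine {
        private final Map<String, String> state = new HashMap<>();
        private long appliedUpTo = -1;

        void apply(LogRecord r) {
            // Applying records strictly in offset order keeps recovery deterministic.
            if (r.offset() != appliedUpTo + 1) {
                throw new IllegalStateException("out-of-order record at offset " + r.offset());
            }
            state.put(r.key(), r.value());
            appliedUpTo = r.offset();
        }

        Map<String, String> snapshot() {
            return Map.copyOf(state);
        }
    }

    public static void main(String[] args) {
        // The durable, replicated log (modeled here as an in-memory list).
        List<LogRecord> log = List.of(
                new LogRecord(0, "balance:alice", "100"),
                new LogRecord(1, "balance:bob", "50"),
                new LogRecord(2, "balance:alice", "70"));

        // A recovering server rebuilds its state by replaying the log from the start;
        // because application is deterministic, every replay yields the same state.
        KeyValueStateMachine recovered = new KeyValueStateMachine();
        log.forEach(recovered::apply);

        System.out.println(recovered.snapshot()); // prints the recovered key-value state
    }
}
```

Because every replica applies the same records in the same order, any replica that replays the log arrives at the same state, which is exactly what makes the log usable as the source of truth.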

Over the past few years of developing and operating Kafka, we have envisioned and exercised the idea of extending its commit-log structured architecture into a replicated logging system that serves as the underlying data flow backbone for a wide range of applications, such as data integration, commit log replication, and stream processing. At this year's Very Large Data Bases (VLDB) conference I will talk about our experience building such a replicated logging system with Kafka and will present several of its use cases.

If you happen to be attending VLDB and are interested in learning more about how to build a replicated log with Kafka, or how to deploy it as the commit log replication layer underlying your distributed stores, I invite you to attend my session or find me at the conference.
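As a flavor of how a Kafka topic can play the role of such a commit log, here is a hedged sketch that rebuilds a local key-value view by replaying one partition from the beginning. It uses the Kafka Java consumer API from releases newer than this post (assign, seekToBeginning, endOffsets, poll with a Duration), and the broker address, topic name, and string encoding are assumptions made purely for illustration.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Rebuild a local materialized view by replaying a Kafka topic partition from
// the beginning, treating the topic as the commit log / source of truth.
// Broker address, topic name, and the string encoding are illustrative assumptions.
public class ReplayFromKafka {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, String> state = new HashMap<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("commit-log", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToBeginning(Collections.singletonList(tp));

            // Replay every record written so far, then stop.
            long end = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
            while (consumer.position(tp) < end) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    // Records within a partition arrive in offset order, so applying
                    // them sequentially reconstructs the same state on every replay.
                    state.put(r.key(), r.value());
                }
            }
        }
        System.out.println("Recovered " + state.size() + " keys from the log");
    }
}
```

Within a partition Kafka preserves offset order, so sequential application gives the same deterministic recovery property as the plain-Java sketch above.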

Building a Replicated Logging System with Apache Kafka
Guozhang Wang, Confluent
10:30am – 12:00pm, Thursday, September 3, 2015
41st International Conference on Very Large Data Bases
Hilton Waikoloa Hotel | Kohala Coast, Hawai’i | August 31 – September 4, 2015

You may also be interested in these blog posts by Jay Kreps (Kafka co-creator):

Putting Apache Kafka To Use: A Practical Guide to Building a Stream Data Platform (Part 1)

Putting Apache Kafka To Use: A Practical Guide to Building a Stream Data Platform (Part 2)

Feel free to share your feedback, questions, and suggestions — about my conference talk or about Kafka in general — with us at any time via /contact or @ConfluentInc on Twitter.
