Presentation

Leveraging Kafka for Big Data in Real Time Bidding, Analytics, Machine Learning and Campaign Management for Globally Distributed Data Flows

Kafka Summit San Francisco 2016 | Users Track

Systems must now handle massive loads of bursty, high-frequency data: streaming data ingestion, IoT, analytics, and machine learning for complex use cases like real-time bidding (RTB), alongside intelligence and management applications that present highly aggregated views of global data. Meeting these demands requires simplifying the distribution and consistency problems in data flows.

Helena has been building large-scale distributed cloud-based systems for many years, and distributed big data systems for the last four, choosing Kafka and Scala for all of them. She will discuss how to simplify big data architecture and data flows with a collaborative set of supporting technologies, in the context of real-time data ingestion streams, analytics, and ML. Helena will show where and how Kafka can simplify distribution and data flows, and how CRDTs, CQRS, and Scala frameworks like Eventuate can help solve consistency and in-memory problems where low latency is critical. She will then walk through an example integrating Scala, Akka, Akka Streams, Eventuate, Kafka, and Cassandra, highlighting NoETL and functional programming (FP) in big data (two sketches of these ideas follow below).
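To give a flavor of the Kafka-plus-Akka-Streams integration the talk walks through, here is a minimal sketch of a streaming ingestion pipeline in Scala. It assumes the akka-stream-kafka (reactive-kafka) connector; the topic name "bid-events", the consumer group id, and the placeholder transform are illustrative only, and the Cassandra and Eventuate stages from the full example are omitted.

```scala
import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer

object IngestSketch extends App {
  implicit val system = ActorSystem("rtb-ingest")
  implicit val materializer = ActorMaterializer()

  // Consumer settings: broker address, group id, and offset reset are illustrative.
  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("rtb-analytics")
      .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")

  // Stream bid events from Kafka, apply a placeholder transform,
  // and hand each element to a sink (a real pipeline would write to Cassandra).
  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("bid-events"))
    .map(record => record.value)   // hypothetical per-event transform goes here
    .runWith(Sink.foreach(println))
}
```

And to illustrate how CRDTs help with consistency under concurrent, geographically distributed updates, here is a grow-only counter (G-Counter) sketched in plain Scala. This is the CRDT concept only, not Eventuate's actual API: each replica increments its own slot, and merging takes the per-replica maximum, so replicas converge to the same value without coordination.

```scala
// G-Counter CRDT sketch (concept illustration, not Eventuate's API).
final case class GCounter(counts: Map[String, Long] = Map.empty) {
  // Each replica only ever increments its own slot.
  def increment(replicaId: String, n: Long = 1): GCounter =
    copy(counts = counts.updated(replicaId, counts.getOrElse(replicaId, 0L) + n))

  def value: Long = counts.values.sum

  // Merge is element-wise max: commutative, associative, idempotent,
  // so concurrent updates converge regardless of delivery order.
  def merge(that: GCounter): GCounter =
    GCounter((counts.keySet ++ that.counts.keySet).map { id =>
      id -> math.max(counts.getOrElse(id, 0L), that.counts.getOrElse(id, 0L))
    }.toMap)
}

// Two replicas update concurrently, then merge to the same converged value.
object GCounterDemo extends App {
  val a = GCounter().increment("replica-a")
  val b = GCounter().increment("replica-b", 2)
  assert(a.merge(b).value == 3 && b.merge(a).value == 3)
}
```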
