
Scrapinghub

Scrapinghub Accelerates Its Next-Generation Web Scraping Service with Confluent Cloud

Each day, thousands of companies and more than a million developers rely on Scrapinghub tools and services to extract the data they need from the web. To strengthen its position as a market leader, Scrapinghub recently launched a new product, AutoExtract, which provides customers with AI-enabled, automated web data extraction at scale. Scrapinghub built AutoExtract on Confluent Cloud running on Google Cloud Platform (GCP), with an Apache Kafka®-based event-streaming backbone for its service architecture. These technologies were chosen to shorten time to market and to ensure reliability and scalability.

Challenge

Accelerate the delivery of a next-generation web scraping service, capable of handling growing customer demand with no downtime.

Solution

Use Confluent Cloud and Apache Kafka to implement a reliable, scalable event-streaming backbone that links web crawlers with AI-enabled data extraction components.
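As a rough illustration of how such a backbone can be wired up (a hypothetical sketch, not Scrapinghub's actual implementation), the snippet below uses the confluent-kafka Python client to publish crawled-page events to a Confluent Cloud topic that downstream extraction workers could consume. The topic name, credentials, and event schema are assumptions made for illustration only.

```python
# Hypothetical sketch: a crawler publishing fetched pages to a Kafka topic
# on Confluent Cloud for downstream AI extraction workers to consume.
# Topic name, credentials, and message shape are illustrative assumptions,
# not details taken from the case study.
import json
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<BOOTSTRAP_SERVER>",  # Confluent Cloud endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
})

def publish_crawled_page(url: str, html: str) -> None:
    """Emit one crawled-page event, keyed by URL."""
    event = {"url": url, "html": html}
    producer.produce(
        "crawled-pages",                      # illustrative topic name
        key=url.encode("utf-8"),
        value=json.dumps(event).encode("utf-8"),
    )

publish_crawled_page("https://example.com", "<html>...</html>")
producer.flush()  # block until all queued messages are delivered
```

Keying events by URL keeps all crawls of the same page in one partition, which is a common way to preserve per-page ordering for the extraction side.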

Results

  • Deployment time halved
  • Initial setup completed in minutes
  • 100% uptime post-launch
  • Latencies minimized with no cloud vendor lock-in

“A key advantage of Confluent Cloud in delivering AutoExtract is time to market. We didn’t have to set up a Kafka cluster ourselves or wait for our infrastructure team to do it for us. With Confluent Cloud we quickly had a state-of-the-art Kafka cluster up and running perfectly smoothly. And if we run into any issues, we have experts at Confluent to help us look into them and resolve them. That puts us in a great position as a team and as a company.”

Ian Duffy

DevOps Engineer at Scrapinghub
