Scrapy update: Better broad crawl performance

Read time: 3 mins
Posted on February 18, 2021
Open Source
By Nikita Vostretsov

When crawling the web, there's always a speed limit: a spider can't fetch pages faster than the host is willing to serve them. Serving pages consumes resources - CPU, disk, network bandwidth, etc. - and those resources cost money. Unrestricted serving combined with aggressive crawling is the worst combination: it can bring an application to a halt and deny service to its users. Taking all this into account, it is natural for hosts to limit their serving capacity.

This article explains which Scrapy settings help you honor those limits, and how to achieve better performance during broad crawls while staying within them.

Problem statement

First of all, we need a way to differentiate the entities behind domain names. The simplest and fastest rule is "entities never share an exact domain name": http://example1.com and http://example2.com are different entities, and so are http://www.example.com and http://about.example.com. Another rule is "entities never share an IP address": before deciding, Scrapy has to send DNS queries to resolve domain names to IP addresses. This solves the problem of different domain names being served from a single host, so http://www.example.com and http://about.example.com become the same entity.
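As an illustration of the two rules (this is not Scrapy's internal code, and the helper names are made up for the example):

```python
import socket
from urllib.parse import urlparse

def slot_key_by_domain(url: str) -> str:
    # "Entities never share an exact domain name": the host part of the
    # URL is the key, so www.example.com and about.example.com are
    # different entities.
    return urlparse(url).netloc

def slot_key_by_ip(url: str) -> str:
    # "Entities never share an IP": resolve the hostname first, so any
    # domain names served from the same host collapse into one entity.
    return socket.gethostbyname(urlparse(url).hostname)

print(slot_key_by_domain("http://www.example.com/page"))  # www.example.com
print(slot_key_by_ip("http://www.example.com/page"))      # the resolved IP
```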

For every entity there is a slot in the Downloader. The number of requests sent to an entity simultaneously is limited by the CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP setting; which one applies depends on how you chose to differentiate entities. Such requests can be called active, or running. Requests that have been enqueued into the Downloader but not yet sent to the host are called inactive, and every slot has a queue of inactive requests. After finishing an active request, and before running one from the inactive queue, Scrapy waits DOWNLOAD_DELAY seconds.
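In a project's settings.py this looks as follows (the values here are illustrative, not recommendations):

```python
# settings.py - illustrative values, tune them for your own crawl.

# At most 8 simultaneous (active) requests per domain-based entity.
CONCURRENT_REQUESTS_PER_DOMAIN = 8

# Or limit per resolved IP instead; when non-zero, this setting takes
# precedence over the per-domain limit.
# CONCURRENT_REQUESTS_PER_IP = 8

# Seconds to wait after an active request finishes before starting the
# next one from the slot's inactive queue.
DOWNLOAD_DELAY = 0.5
```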

This approach requires tuning to keep a balance between performance and honoring the limits. A less tuning-sensitive approach is implemented in the AutoThrottle extension; its documentation provides a very good background and a description of how it works.
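AutoThrottle is enabled through settings as well; a typical configuration looks like this (again, the values are only illustrative):

```python
# settings.py - let AutoThrottle adapt the delay to observed latencies.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 5.0         # initial download delay, in seconds
AUTOTHROTTLE_MAX_DELAY = 60.0          # ceiling for slow, high-latency hosts
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0  # average parallel requests per remote
AUTOTHROTTLE_DEBUG = True              # log throttling stats for every response
```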

Possible approaches

The Downloader doesn't decide which requests are enqueued into it; that is done by the Scheduler. The first implementation of this decision-making doesn't take entities into account. It works well when crawling a specific entity, but the situation is very different in broad crawls: the inactive queues of some slots grow too long, while the active requests of other slots are not running at full capacity.

After introducing the concept of entities to the Scheduler, there are different strategies for selecting the entity for the next request.

A round-robin algorithm can be used for request scheduling (a minimal Python sketch follows the list):

  • store all entities in a FIFO queue Q
  • when the next request should be scheduled, pop one entity E1 (from the top of Q)
  • issue a request to E1
  • push E1 back into Q (to the bottom)
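
Purely to illustrate the strategy above (this is not Scrapy's Scheduler, and all names are hypothetical):

```python
from collections import deque

class RoundRobinScheduler:
    """Cycle through entities, giving each one request per turn."""

    def __init__(self):
        self.entities = deque()  # the FIFO queue Q of entity keys
        self.pending = {}        # entity key -> deque of its requests

    def enqueue(self, entity, request):
        if entity not in self.pending:
            self.pending[entity] = deque()
            self.entities.append(entity)
        self.pending[entity].append(request)

    def next_request(self):
        for _ in range(len(self.entities)):
            # Pop one entity from the top of Q.
            entity = self.entities.popleft()
            if self.pending[entity]:
                # Issue one of its requests, push it back to the bottom.
                request = self.pending[entity].popleft()
                self.entities.append(entity)
                return request
            # No pending requests left: drop the entity from the cycle.
            del self.pending[entity]
        return None

scheduler = RoundRobinScheduler()
scheduler.enqueue("example1.com", "http://example1.com/a")
scheduler.enqueue("example2.com", "http://example2.com/a")
scheduler.enqueue("example1.com", "http://example1.com/b")
print(scheduler.next_request())  # http://example1.com/a
print(scheduler.next_request())  # http://example2.com/a
print(scheduler.next_request())  # http://example1.com/b
```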

This approach provides an equal flow of requests to every entity. For it to work well, every crawled entity should serve pages at the same speed. In the real world this is not the case: different hosts have different rendering times, and network latencies differ too.

Let's look at what happens in such a situation. Requests scheduled to a slow entity pile up in its slot's inactive queue, because round-robin keeps feeding it at the same rate as every faster entity. In the end, the target is to keep the Downloader's queues of inactive requests as short as possible, and the generic computer-science algorithm doesn't do that well in real-life conditions. A more task-specific approach is needed.

Instead of using a model, real information can be used: the Downloader knows the length of every slot's queue. Providing that knowledge to the Scheduler solves the problem.
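A hedged sketch of the idea follows; the queue_length method and the scheduler/downloader interface shown here are hypothetical, not Scrapy's real API:

```python
class DownloaderAwareScheduler:
    """Prefer the entity whose downloader slot currently holds the
    fewest inactive requests, instead of cycling blindly."""

    def __init__(self, downloader):
        # Hypothetical: downloader exposes queue_length(entity).
        self.downloader = downloader
        self.pending = {}  # entity key -> list of its requests

    def enqueue(self, entity, request):
        self.pending.setdefault(entity, []).append(request)

    def next_request(self):
        candidates = [e for e, reqs in self.pending.items() if reqs]
        if not candidates:
            return None
        # Pick the entity with the least work already queued downstream,
        # so slow hosts stop accumulating inactive requests.
        entity = min(candidates, key=self.downloader.queue_length)
        request = self.pending[entity].pop(0)
        if not self.pending[entity]:
            del self.pending[entity]
        return request
```

The design point is that the Scheduler no longer needs any model of host speed: the lengths of the Downloader's own queues are the measurement.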

Experimental results in Scrapy

Both of these approaches were implemented in Scrapy. To select the fastest one, the broadworm mode of scrapy-bench was used. In this mode, 1000 entities are emulated by 1000 domain aliases pointing to the same server, and the server introduces artificial delays on top of its serving time. The results are presented in the table below.

Algorithm            Items per second crawled    Speedup
Entity-unaware       2.34                        1x
Round-robin          7.56                        3x
Ask the downloader   23.12                       10x

Based on these numbers, it was decided not to keep the round-robin implementation in Scrapy's codebase, but you can still find it in the commit history.
