Scale up your Scrapy projects with Smart Proxy Manager

Read Time: 3 Mins
Posted on: July 1, 2021
Category: Handling Bans
By John Campbell

In this tutorial, you will learn how to use Smart Proxy Manager to scale up your existing Scrapy project so you can make more requests and extract more web data.

Scrapy is a very popular web crawling framework that can make your life much easier if you're a web data extraction developer. Scrapy can handle many web scraping jobs, including URL discovery, parsing, data cleaning, and custom data pipelines. But there's one thing Scrapy cannot do out of the box, and it has become a must if you want to extract large amounts of data reliably: proxy management.

In order to scale up your Scrapy project, you need a proxy solution.

I will show you how to take your existing Scrapy spider and boost it with proxies!

Getting started

For this example, I’m going to use the “Scrapy version” of the product extractor spider that contains two functions:

  1. A crawler to find product URLs
  2. A scraper that will actually extract the data

Here’s the Scrapy spider code:

from price_parser import Price
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from w3lib.html import remove_tags


class ProductSpider(CrawlSpider):
    name = 'product'
    start_urls = ['http://books.toscrape.com/catalogue/category/books/travel_2/index.html']
    rules = (
        Rule(LinkExtractor(restrict_css='article.product_pod > h3 > a'), callback='populate_item'),
    )

    def populate_item(self, response):
        book_title = response.css('div.product_main > h1::text').get()
        price_text = response.css('p.price_color::text').get()
        stock_info = response.css('p.availability').get()
        yield {
            'title': book_title,
            'price': self.clean_price(price_text),
            'stock': self.clean_stock(stock_info),
        }

    def clean_price(self, price_text):
        # price-parser handles currency symbols and separators
        return Price.fromstring(price_text).amount_float

    def clean_stock(self, stock_info):
        # Strip the surrounding HTML tags and trailing whitespace
        return remove_tags(stock_info).strip()
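The two cleaning helpers lean on the third-party price-parser and w3lib packages. To make what they do concrete, here is a minimal, standard-library-only sketch of the same cleaning logic; the regexes are my simplification for illustration, not what those libraries actually use:

```python
import re

def clean_price(price_text):
    # Pull the first decimal number out of a price string like '£51.77'.
    # A rough stand-in for price_parser.Price.fromstring(...).amount_float.
    match = re.search(r'\d+(?:\.\d+)?', price_text)
    return float(match.group()) if match else None

def clean_stock(stock_info):
    # Drop HTML tags and collapse whitespace, roughly what
    # w3lib.html.remove_tags plus .strip() achieves.
    text = re.sub(r'<[^>]+>', '', stock_info)
    return ' '.join(text.split())

print(clean_price('£51.77'))                                # 51.77
print(clean_stock('<p class="availability">In stock</p>'))  # In stock
```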

So let’s assume this is the spider you currently have, and it was working fine, delivering you precious data, for a while. Then you wanted to scale up and make more requests, which ultimately led to blocks, a low success rate, bad data quality, and so on.

The solution to overcome blocks and receive high-quality web data is proxies and how you use those proxies. Let’s see how you can integrate Smart Proxy Manager in this spider!

Scrapy + Smart Proxy Manager

The recommended way for integration is the official middleware. This is how you can install it:

pip install scrapy-crawlera

Then, add these settings in your Scrapy project file:

DOWNLOADER_MIDDLEWARES = {'scrapy_crawlera.CrawleraMiddleware': 610}
CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = '<API key>'

Notice that in order to use Smart Proxy Manager, you need an API key. But don’t worry: we offer a 14-day free trial (max 10K requests during the trial), so you can get your API key fast and try it out to make sure it works for you.

Optionally, you can also set the proxy URL if you requested a custom instance of Smart Proxy Manager:

CRAWLERA_URL = 'myinstance.zyte.com:8011'

Another way to set up Smart Proxy Manager is directly in the spider, like this:

class ProductSpider(CrawlSpider):
    crawlera_enabled = True
    crawlera_apikey = 'apikey'
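Smart Proxy Manager also accepts per-request controls through X-Crawlera-* request headers; for example, X-Crawlera-Session keeps a group of requests on the same outgoing IP. A small sketch of building such headers, assuming the header names from the Smart Proxy Manager documentation:

```python
def session_headers(session_id=None):
    # 'create' asks Smart Proxy Manager to start a new session;
    # reuse the session id it returns on follow-up requests.
    return {'X-Crawlera-Session': session_id or 'create'}

# In a Scrapy callback you would attach these to the outgoing request:
# yield scrapy.Request(url, headers=session_headers(), callback=self.parse)
print(session_headers())         # {'X-Crawlera-Session': 'create'}
print(session_headers('12345'))  # {'X-Crawlera-Session': '12345'}
```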

Settings recommendations

To achieve higher crawl rates when using Smart Proxy Manager, we recommend disabling the AutoThrottle extension and increasing the maximum number of concurrent requests. You may also want to increase the download timeout. Here is a list of settings that achieve that purpose:

CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 32
AUTOTHROTTLE_ENABLED = False
DOWNLOAD_TIMEOUT = 600

This simple integration takes care of everything you need to scale up your Scrapy project with proxies.

If you are tired of blocks or of managing different proxy providers, try Smart Proxy Manager.
