
Solution Overview

Part of Zyte API

The Web Scraping Ecosystem for Professional Developers

Overcome any web scraping challenge quickly and efficiently with Zyte's coherent web scraping ecosystem.

Step 1

Write your spider code with a scraping framework

We recommend you start with Scrapy, an open-source web scraping framework for Python created and maintained by Zyte. Check our Learn Scrapy tutorials, or join the Extract Community on Discord to connect with web scraping experts.

Step 2

Deploy to Scrapy Cloud (optional)

Host, monitor and QA your Scrapy spiders in Scrapy Cloud, the perfect solution for scaling web scraping projects quickly and reliably with zero vendor lock-in. Deploy code to Scrapy Cloud from your command line or directly from GitHub, or access AI-powered Smart Spider templates.
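The command-line path above uses shub, the Scrapy Cloud client; a minimal sketch (the project ID 12345 and the spider name are placeholders for values from your own Scrapy Cloud dashboard):

```shell
# Install and authenticate the Scrapy Cloud client (one-time setup)
pip install shub
shub login            # prompts for your Scrapy Cloud API key

# From the root of your Scrapy project, deploy to your project ID
shub deploy 12345     # placeholder project ID

# Schedule a spider run on Scrapy Cloud
shub schedule 12345/scraper
```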

Try Scrapy Cloud Free · Learn more
Step 3

Handle bans and blocks

Configure Scrapy requests to use Zyte API to automatically extract data from websites of any complexity level, using only the leanest tech needed to handle bans. Plus: render JavaScript, automate browser actions and take screenshots. Available as either a REST API or an HTTP API.
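Outside of Scrapy, the REST interface can be called directly; a hedged sketch (the endpoint, Basic-auth scheme, and the "browserHtml"/"httpResponseBody" fields follow Zyte's public docs, while the helper functions and the API key are placeholders):

```python
import base64
import json
import urllib.request

ZYTE_API_URL = "https://api.zyte.com/v1/extract"


def build_payload(url, render=False):
    """Build a Zyte API request body: raw HTTP fetch or headless-browser render."""
    if render:
        # Ask Zyte API to render the page in a headless browser
        return {"url": url, "browserHtml": True}
    # Fetch the raw HTTP response body (returned base64-encoded)
    return {"url": url, "httpResponseBody": True}


def fetch(url, api_key, render=False):
    """POST the payload to Zyte API; auth is HTTP Basic with the key as username."""
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    req = urllib.request.Request(
        ZYTE_API_URL,
        data=json.dumps(build_payload(url, render)).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Note that "httpResponseBody" comes back base64-encoded in the JSON reply, while "browserHtml" is plain HTML.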

Try Zyte API Free · Visit Docs
Step 4

Turn web pages into JSON with AI Extraction

Switch on automatic extraction for articles, product pages or job listings, and our patented ML will structure the data for you. Forget about writing (and fixing) parsing code for each website.


  • Massively reduce time to build spiders

  • Minimize maintenance overhead per site

  • Can be extended and overridden

Try Zyte API: Automatic Extraction · Learn more
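The automatic-extraction request differs only in the flag you send ("product" is one of the documented extraction types); a sketch, where the helper functions are placeholders and the parsing assumes the documented response shape:

```python
def build_extract_payload(url, data_type):
    """Request automatic extraction of e.g. 'product', 'article', or 'jobPosting'."""
    return {"url": url, data_type: True}


def parse_product(api_response):
    """Pull a few common fields out of a Zyte API product response."""
    product = api_response.get("product", {})
    return {
        "name": product.get("name"),
        "price": product.get("price"),
        "currency": product.get("currency"),
    }


if __name__ == "__main__":
    # A mocked response in the documented shape, to show the parsing step
    sample = {
        "url": "https://example.com/p/1",
        "product": {"name": "Widget", "price": "9.99", "currency": "USD"},
    }
    print(parse_product(sample))
    # → {'name': 'Widget', 'price': '9.99', 'currency': 'USD'}
```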
Step 5

Go 'All-in' on AI for product data

Our complete solution for extracting e-commerce product data with AI.


Radically scale your web data with our end-to-end solution for scraping product data with AI.


  • Write e-commerce spiders 3x faster

  • Generate 50% less maintenance overhead

  • Customise AI Extraction with Scrapy

Try Zyte API - AI Scraping · Learn more
Step 6

Ready to scale up?

When you want to level up and tackle advanced use cases or scale quickly, technology alone often isn’t enough. With our Enterprise plans, we combine our leading technology with our industry expertise to give you:


  • Developer-to-developer training and support

  • Enhanced SLAs

  • Enterprise pricing and volume discounts

Get a quote · Learn more
Step 7

Outsource to Zyte

For organizations that want to outsource some (or all) of their web data collection, Zyte offers web scraping services backed by 13+ years of experience and a team of 100+ web scraping experts. Let our experts understand your needs and build the best solution to get the data for you.

Zyte Data - Talk to us · Learn more

© Zyte Group Limited 2026
Example (Step 1): a minimal Scrapy spider that extracts one field.

from scrapy import Field, Item, Spider


class MyItem(Item):
    price = Field()


class Scraper(Spider):
    name = "scraper"
    start_urls = ["https://the-best-store.com/"]

    def parse(self, response, **kwargs):
        item = MyItem()
        item["price"] = response.css(".price_color::text").get()
        return item
Example (Step 3): a spider that routes its request through Zyte API with the scrapy-zyte-api plugin's "zyte_api" meta key, requesting Canadian geolocation and browser-rendered HTML.

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        yield scrapy.Request(
            url="https://quotes.toscrape.com/",
            meta={
                "zyte_api": {
                    "geolocation": "CA",
                    "browserHtml": True,
                },
            },
        )
Example (Step 4): a spider that requests both the raw HTTP response body and automatic product-list extraction via scrapy-zyte-api.

from scrapy import Request, Spider


class MySpider(Spider):
    name = "toscrape_com"

    def start_requests(self):
        yield Request(
            "https://books.toscrape.com/",
            meta={
                "zyte_api_automap": {
                    "httpResponseBody": True,
                    "productList": True,
                },
            },
        )

    def parse(self, response):
        http_response_body: bytes = response.body
        product_list = response.raw_api_response["productList"]