Scraping Swiss Army Knife: My personal fix for web setup fatigue using Docker, Scrapy and Zyte

Read Time
10 min
Posted on
February 5, 2026
Use case
Tired of repeating web scraping setup? Learn how a multi-arch Docker container with Scrapy, Zyte, Requests, and Pandas speeds up exploration and debugging.
By
Ayan Pahwa
Table of Contents

  • Introduction
  • The problem I was facing
  • The idea: A disposable scraping playground
  • How I use it (my actual workflow)
  • Why Docker (and why multi-arch)
  • When I don't use this container
  • Open to the community

If you’ve done any amount of web scraping, you’ll probably relate to this.


Every new scraping project starts the same way:


  • Create a Python virtual environment.

  • Install Scrapy.

  • Open scrapy shell.

  • Test XPath / CSS selectors.

  • Realize the site is JavaScript-heavy.

  • Install another library.

  • Maybe add Zyte.

  • Maybe add BeautifulSoup.

  • Maybe Pandas, too.

  • Fix version conflicts.

  • Repeat… again… and again…


I’m relatively new to the world of web scraping, and this setup loop was honestly taking more time than the actual scraping. I decided to fix it once for myself, and I ended up building something I now use every single time I approach a new scraping problem.


That project is Scraping Swiss Army Knife.

The problem I was facing

Web scraping is rarely “one-size-fits-all”.


Every website is different, whether in its HTML structure, anti-bot measures, rendering behavior, or data extraction needs.


Before writing any real code, I usually just want to:


  • Inspect the page.

  • Test selectors.

  • Fetch a few URLs.

  • See what breaks.

  • Decide which tools I actually need.

What’s inside?


Scraping & HTTP


  • Scrapy.

  • Zyte API with Scrapy integration.

  • requests.

  • BeautifulSoup4.

  • lxml.


Data and Exploration


  • Pandas.

  • Jupyter Notebook (CLI-based).


CLI Utilities


  • curl.

  • jq.

  • nano.

  • vim.


Platform


  • Python 3.12.

  • Linux.

  • Multi-arch (runs on Intel and Apple Silicon).


It is all bundled into one container.

The idea: A disposable scraping playground

What I really wanted was a ready-made scraping environment with all the common tools already installed: something that works on any machine (Intel or ARM), and something I can start, experiment with, throw away, and recreate anytime.


So I built a multi-architecture Docker container that acts like a scraping playground. For now, let’s call it Scraping Swiss Army Knife.


It is a Docker image that comes preloaded with the tools I most often need during the exploration phase of scraping.

How I use it (my actual workflow)

Make sure you have Docker installed and running on your workstation (Docker’s official documentation covers installation). Once that’s done, simply:


Step 1: Pull the container
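The pull is a one-liner:

```shell
docker pull iayanpahwa/scrapingswissarmyknife:latest
```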

Step 2: Run it
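An interactive run looks like this:

```shell
docker run -it iayanpahwa/scrapingswissarmyknife
```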

That drops me into a Linux shell with everything installed.


Step 3: Explore with Scrapy shell

I test selectors, inspect responses, and understand the structure before writing a single spider.
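Launching the shell is a single command:

```shell
# open an interactive Scrapy shell against the target page
scrapy shell https://example.com
```

Inside the shell, `response.status` and `response.css("h1::text").get()` are usually my first two checks.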




Step 4: Try Zyte (if needed)


If the site is JavaScript-heavy or protected, I rerun the container with my Zyte key:
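The key is passed in as an environment variable:

```shell
docker run -it \
  -e ZYTE_API_KEY=your_key_here \
  iayanpahwa/scrapingswissarmyknife
```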

Then inside Scrapy shell:
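A quick smoke test (note: `fetch` and `response` are Scrapy shell built-ins, so these lines only run inside the shell):

```python
# route the request through Zyte API via the zyte_api meta key
fetch("https://httpbin.org/html", meta={"zyte_api": True})
response.text[:300]
```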

This tells me immediately:


  • Do I need browser rendering?

  • How does Zyte solve the problem?


Step 5: Experiment freely


Sometimes, I switch to using requests and BeautifulSoup:
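When the page is static, plain requests plus BeautifulSoup is often enough:

```python
import requests
from bs4 import BeautifulSoup

# grab the page and parse it with the lxml backend
html = requests.get("https://example.com").text
soup = BeautifulSoup(html, "lxml")
print(soup.h1.text)
```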

Sometimes, I quickly test data handling with Pandas. Once I’m confident about:


  • the approach.

  • the tools.

  • the complexity.


…I exit the container.


That’s it. No environment cleanup, no dependency conflicts, no lingering mess. Next time I need it, I just summon it again.
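For the quick Pandas check mentioned above, a minimal sketch might look like this (the sample rows are illustrative, not from the article):

```python
import pandas as pd

# a couple of rows, as if just scraped from a product listing
rows = [
    {"title": "Widget A", "price": "19.99"},
    {"title": "Widget B", "price": "24.50"},
]
df = pd.DataFrame(rows)
df["price"] = df["price"].astype(float)  # quick type sanity check
print(df["price"].mean())
```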

Why Docker (and why multi-arch)

Docker gives me disposability.


I treat this container like a scratchpad:


  • Pull.

  • Test.

  • Kill.

  • Recreate.


And because it’s multi-architecture, it works the same on:


  • Intel laptops

  • M1/M2/M3 Macs

  • Linux servers

  • CI environments

When I don’t use this container

Important point:
This is not meant to replace your production scraping setup.


Once I know…


  • exactly which tools I need.

  • what libraries are required.

  • how the scraper should be structured.


… I go and build a clean, minimal project-specific environment.


This container is for:


  • exploration.

  • learning.

  • debugging.

  • experimentation.


Think of it as a sandbox, not the final product.

Open to the community

This project started as a personal solution, but if you’re a developer or web scraper, you might find it useful too.


If you think:


  • a tool is missing.

  • something can be improved.

  • another package should be included.


… I’d love contributions.

The GitHub repo is at https://github.com/apscrapes/scraping-swissarmy-knife.


Open an issue, send a PR, or suggest ideas. After all, web scraping is a moving target, and we’re all figuring it out together.


Web scraping is already hard enough. You shouldn’t be burning time before you even start.


For me, Scraping Swiss Army Knife removed friction from the very first step, and that alone made it worth building. If it saves you even one setup cycle, it’s done its job.


Happy scraping 🕸️

© Zyte Group Limited 2026