
More data, more trouble: How a perfect corpus corrupted my AI dream

In conventional education, subjects are silos. Biology and English never speak to each other, music and maths never make a symphony.


But that’s not how the real world works. Instead of looking at things in isolation, we notice the connections between spheres, and we leverage them. That’s holistic thinking.


So, as a personal project, I set out to build an AI education assistant that could think in systems, something that could help me design lessons for my children that teach across silos.


My idea was simple: if I could infuse an AI with the very best holistic‑thinking literature, it would start behaving like a holistic thinker, no matter the subject.


So I did what felt obvious. I fed it data.

The input: A perfect corpus

I used Zyte API to scrape 900 articles from the web about holistic thinking.


The goal was to create the most well‑read holistic‑thinking assistant imaginable.


I added my knowledge collection to two AI tools: a NotebookLM notebook and a ChatGPT custom GPT, each instructed to act as a holistic‑thinking educator.
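For context, here is a minimal sketch of what that collection step can look like with Zyte API's article extraction. The helper names and the looping strategy are mine, not from the post; it assumes you have an API key and a list of target URLs:

```python
import requests

ZYTE_API_URL = "https://api.zyte.com/v1/extract"

def build_payload(url: str) -> dict:
    # Request structured article extraction for a single URL.
    return {"url": url, "article": True}

def scrape_article(url: str, api_key: str) -> dict:
    # Zyte API authenticates with HTTP Basic auth, using the
    # API key as the username and an empty password.
    resp = requests.post(ZYTE_API_URL, auth=(api_key, ""), json=build_payload(url))
    resp.raise_for_status()
    # The response carries the extracted article fields (headline, body, etc.).
    return resp.json().get("article", {})
```

Looping `scrape_article` over a list of roughly 900 URLs and saving each result to disk would produce a corpus like the one described above.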

The process: Primed for connection

Excitedly, I gave both assistants the same prompt:


“Help a physics teacher design a lesson on Newton's laws of motion for a student who loves food. She's 11 years old.”


NotebookLM responded like a diligent research assistant. It referenced holistic thinking concepts like “compensating feedback”, “leverage points”, and the “trim tab”. The output was grounded in my sources and traceable.


ChatGPT, meanwhile, returned playful, accessible teaching metaphors - sliding butter in a pan to explain “inertia”, heavy pots for force and mass. The tone was warm, creative, and age‑appropriate.


Both responses looked promising - at first.

The failure: The trouble with data

But each betrayed a fundamental flaw:


  • The NotebookLM notebook was simply citing relevant passages from my documents; it wasn't combining them with other knowledge to create original teaching material. It was over-indexing on my own data.

  • When I issued the same prompt to a plain ChatGPT, without my added knowledge bank, the response was virtually identical. Those 900 articles had made almost no difference whatsoever.


Konstantin Lopukhin, head of data science at Zyte, explained why to me:


“When you upload documents to a custom GPT, the model doesn't absorb them into its personality or reasoning style. It treats them as a searchable database.


“When you ask a question, it decides whether to search that database for something specific. If your question is generic - like, ‘help me design a lesson using holistic thinking’ - the model probably won't search your files at all. It'll just answer from its general training.”


For my custom GPT, adding my new source material was inconsequential. ChatGPT’s massive knowledge base had almost certainly already been trained on mountains of pages about the topic - likely, the same ones I had uploaded.


I wasn't giving the model new information. I was just making the context longer, which made it harder for the AI to follow my instructions.


In other words: more data doesn’t mean better AI.

Grounded versus generative AI

I had failed to understand the peculiarities in how a grounded AI product (NotebookLM) and a generative one (ChatGPT) utilize input knowledge - and how they don’t.

If you are piggybacking on AI products to build personal assistants that involve domain expertise, ask yourself which approach fits your needs.


Modern LLMs with web access can already search, fetch, and synthesize information in real-time. If the information is publicly available, well-indexed by search engines, and easily findable with obvious keywords, then you may not need to pre-scrape it.

When scraping matters

Of course, not all data needs fall into this bucket. Mass-market AI services’ knowledge and retrieval capabilities are not infinite.


That’s why building proprietary AI systems is fuelling current growth in web data extraction, and why so many people are building Retrieval-Augmented Generation (RAG) systems.
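To make the contrast concrete, here is a toy RAG loop: retrieve only the passages relevant to a query and ground the prompt in them, rather than dumping a whole corpus into context. The word-overlap "embedding" below is purely illustrative; real systems use learned vector embeddings:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Ground the model by stuffing only the most relevant passages
    # into the context, instead of the entire corpus.
    context = "\n---\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

This is the key difference from my custom-GPT setup: retrieval decides what enters the context on every query, instead of hoping the model consults an attached file store.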


I developed a heuristic to sanity‑check whether a project genuinely needs web data collection, or whether scraping would add cost without changing the outcome.

| Signal | When to scrape | When not to |
| --- | --- | --- |
| Exhaustiveness | You need all items in a category (catalogs, listings, inventories). | You only need representative examples. |
| Structure | You need consistent fields and schemas across records. | Free-text summaries are sufficient. |
| Computation | You need comparisons, rankings, or aggregation across many records. | You’re asking descriptive or explanatory questions. |
| Traceability | You need to know exactly where claims come from. | Source provenance doesn’t matter. |
| Freshness | The data changes faster than search indexes update. | The information is relatively stable. |

When your needs fall in the messy middle of these signals, test both approaches.
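One way to operationalize the table is a quick checklist; the signal names and the any-yes threshold here are my own simplification, not a rule from the post:

```python
# Hypothetical encoding of the scrape / don't-scrape heuristic above.
SIGNALS = {
    "exhaustiveness": "Do you need every item in a category?",
    "structure": "Do you need consistent fields and schemas across records?",
    "computation": "Do you need comparisons or aggregation across many records?",
    "traceability": "Do you need to know exactly where claims come from?",
    "freshness": "Does the data change faster than search indexes update?",
}

def should_scrape(answers: dict[str, bool]) -> bool:
    # Any strong "yes" signal tips the decision toward scraping;
    # all "no" suggests live search and synthesis is probably enough.
    return any(answers.get(name, False) for name in SIGNALS)
```

For example, a price-monitoring project answers "yes" to freshness and computation, so it clears the bar; a one-off explainer answers "no" across the board.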

Know your needs

More data does not always mean better AI. That was my expensive lesson.


Web scraping is powerful. At Zyte, we see it powering everything from competitive intelligence to AI training pipelines. But the application of that data matters as much as the collection.


So, your question shouldn’t be: "Can I scrape this?" It should be: "Will having this data pre-assembled actually improve my system's outputs?"


When the answer is “yes,” the right data is transformative. When it’s a “no,” the better investment may be prompt engineering and context design.


Knowing when data matters - that’s the real leverage.

© Zyte Group Limited 2026
Read Time
10 min
Posted on
March 13, 2026
Open Source
By
Neha Setia Nagpal

What a failed experiment taught me about curated data, prompting, and when scraping actually matters.