
More data, more trouble: How a perfect corpus corrupted my AI dream

Read Time
10 min
Posted on
March 13, 2026
What a failed experiment taught me about curated data, prompting, and when scraping actually matters.

In conventional education, subjects are silos. Biology and English never speak to each other, music and maths never make a symphony.


But that’s not how the real world works. Instead of looking at things in isolation, we notice the connections between spheres, and we leverage them. That’s holistic thinking.


So, as a personal project, I set out to build an AI education assistant that could think in systems, something that could help me design lessons for my children that teach across silos.


My idea was simple: if I could infuse an AI with the very best holistic‑thinking literature, it would start behaving like a holistic thinker, no matter the subject.


So I did what felt obvious. I fed it data.

The input: A perfect corpus

I used Zyte API to scrape 900 articles from the web about holistic thinking.
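A call to Zyte API's article extraction endpoint looks roughly like this. This is a stdlib-only sketch: the `https://api.zyte.com/v1/extract` endpoint, the `"article": true` option, and basic auth with the API key as username are how Zyte API works, but the helper names are mine.

```python
import base64
import json
import urllib.request

API_ENDPOINT = "https://api.zyte.com/v1/extract"

def build_request(url: str, api_key: str) -> urllib.request.Request:
    # Zyte API authenticates via HTTP Basic auth: key as username, empty password
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    payload = json.dumps({"url": url, "article": True}).encode()
    return urllib.request.Request(
        API_ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )

def fetch_article(url: str, api_key: str) -> dict:
    # The response's "article" field holds the structured record
    # (headline, articleBody, and so on)
    with urllib.request.urlopen(build_request(url, api_key)) as resp:
        return json.loads(resp.read())["article"]
```

Looping `fetch_article` over 900 URLs and saving the `articleBody` fields gave me my corpus.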


The goal was to create the most well‑read holistic‑thinking assistant imaginable.


I added my knowledge collection to two AI tools - a NotebookLM notebook and a ChatGPT custom GPT - each instructed to act as a holistic‑thinking educator.

The process: Primed for connection

Excitedly, I gave both tools the same prompt:


“Help a physics teacher design a lesson on Newton's laws of motion for a student who loves food. She's 11 years old.”


NotebookLM responded like a diligent research assistant. It referenced holistic thinking concepts like “compensating feedback”, “leverage points”, and the “trim tab”. The output was grounded in my sources and traceable.


ChatGPT, meanwhile, returned playful, accessible teaching metaphors - sliding butter in a pan to explain “inertia”, heavy pots for force and mass. The tone was warm, creative, and age‑appropriate.


Both responses looked promising - at first.

The failure: The trouble with data

But each betrayed a fundamental flaw:


  • The NotebookLM notebook was simply citing relevant passages from my documents; it wasn't combining them with other knowledge to create original teaching. It was over-indexing on my own data.

  • When I issued the same prompt to a plain ChatGPT, without my added knowledge bank, the response was virtually identical. Those 900 articles had made almost no difference.


Konstantin Lopukhin, head of data science at Zyte, explained to me why:


“When you upload documents to a custom GPT, the model doesn't absorb them into its personality or reasoning style. It treats them as a searchable database.


“When you ask a question, it decides whether to search that database for something specific. If your question is generic - like, ‘help me design a lesson using holistic thinking’ - the model probably won't search your files at all. It'll just answer from its general training.”


For my custom GPT, adding my new source material was inconsequential. ChatGPT’s massive knowledge base had almost certainly already been trained on mountains of pages about the topic - likely, the same ones I had uploaded.


I wasn't giving the model new information. I was just making the context longer, which made it harder for the AI to follow my instructions.


In other words: more data doesn’t mean better AI.

Grounded versus generative AI

I had failed to understand how a grounded AI product (NotebookLM) and a generative one (ChatGPT) use input knowledge - and how they don't.


If you are piggybacking on AI products to build a personal assistant that involves domain expertise, ask yourself whether a grounded or a generative tool fits your needs.


Modern LLMs with web access can already search, fetch, and synthesize information in real-time. If the information is publicly available, well-indexed by search engines, and easily findable with obvious keywords, then you may not need to pre-scrape it.

When scraping matters

Of course, not all data needs fall into this bucket. Mass-market AI services’ knowledge and retrieval capabilities are not infinite.


That’s why building proprietary AI systems is fuelling current growth in web data extraction, and why so many people are building Retrieval-Augmented Generation (RAG) systems.
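A RAG pipeline does what my custom GPT wouldn't do reliably: it retrieves the most relevant passages for every query and places them in the prompt. Here is a stdlib-only sketch using bag-of-words cosine similarity - a real system would use embeddings and a vector store, and all names here are mine:

```python
import math
from collections import Counter

def vec(text: str) -> Counter:
    # Bag-of-words term counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Top-k passages by similarity to the query
    q = vec(query)
    return sorted(corpus, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Ground the model by stuffing retrieved passages into the prompt
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"
```

The key difference from my upload-and-hope approach: retrieval runs on every query, so the corpus actually shapes every answer.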


I developed a heuristic to sanity‑check whether a project genuinely needs web data collection, or whether scraping would add cost without changing the outcome.

| Signal | When to scrape | When not to |
| --- | --- | --- |
| Exhaustiveness | You need all items in a category (catalogs, listings, inventories). | You only need representative examples. |
| Structure | You need consistent fields and schemas across records. | Free-text summaries are sufficient. |
| Computation | You need comparisons, rankings, or aggregation across many records. | You’re asking descriptive or explanatory questions. |
| Traceability | You need to know exactly where claims come from. | Source provenance doesn’t matter. |
| Freshness | The data changes faster than search indexes update. | The information is relatively stable. |

When your needs sit in the messy middle of these signals, test both approaches.
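As a sketch, the heuristic reduces to a scoring function. The thresholds here are my own illustrative choices, not a validated rule:

```python
SIGNALS = ("exhaustiveness", "structure", "computation",
           "traceability", "freshness")

def scrape_decision(**needs: bool) -> str:
    # Count how many of the five signals point toward scraping
    score = sum(bool(needs.get(s)) for s in SIGNALS)
    if score >= 3:
        return "scrape"     # pre-assembled data will change the outcome
    if score == 0:
        return "prompt"     # lean on the model's own knowledge and search
    return "test both"      # the messy middle
```

My education-assistant project scored zero: I needed no exhaustive catalog, no schema, no aggregation, no provenance, and no freshness. The heuristic would have saved me 900 articles.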

Know your needs

More data does not always mean better AI. That was my expensive lesson.


Web scraping is powerful. At Zyte, we see it powering everything from competitive intelligence to AI training pipelines. But the application of that data matters as much as the collection.


So, your question shouldn’t be: "Can I scrape this?" It should be: "Will having this data pre-assembled actually improve my system's outputs?"


When the answer is “yes,” the right data is transformative. When it’s a “no,” the better investment may be prompt engineering and context design.


Knowing when data matters - that’s the real leverage.
