Rank Tracking After Google’s &num=100 Removal: A Practical Fix Using a SERP API

Google removed the &num=100 parameter, breaking many rank trackers. Learn how to build a resilient custom tracker with a SERP API that handles Google changes for you.

May 14, 2026
By SerpBase Team
Tags: rank tracking, google num=100, serp api, seo tools, scraping alternatives
In September 2025, Google quietly pulled the plug on the &num=100 URL parameter. For years, SEO tools had leaned on that single query parameter to fetch 100 organic results in one request. Overnight, dashboards went blank and scheduled reports started throwing errors. While the immediate scramble produced a few workarounds, the smarter response is to stop relying on Google’s URL parameters at all. Here’s why the change matters and how to build a rank tracker that won’t break the next time Google tweaks something.

What the num=100 Parameter Actually Did

Simply put, adding &num=100 to a Google search URL instructed the page to display up to 100 results instead of the default 10. For SEO tools, that meant one HTTP request could grab a whole page of rankings. Without it, you have to paginate—clicking “Next” multiple times—which multiplies requests, increases bandwidth, and raises the risk of getting blocked or served a CAPTCHA. The change wasn’t a mere test; Google fully disabled it across all search pages by mid-September 2025.
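To see why losing the parameter multiplies costs, consider the arithmetic of pagination. The sketch below assumes Google's classic `start` offset parameter for paging (0, 10, 20, …) and shows how a position on a paginated page maps back to an absolute rank; the helper names are illustrative, not part of any real tool.

```python
def paginated_offsets(total_results: int, page_size: int = 10) -> list[int]:
    """Offsets for a `start`-style paging parameter to cover `total_results`."""
    return list(range(0, total_results, page_size))

def absolute_rank(start: int, index_on_page: int) -> int:
    """Convert a zero-based index on a paginated page to a 1-based rank."""
    return start + index_on_page + 1

# Ten separate requests now replace the single &num=100 call:
print(paginated_offsets(100))   # [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
print(absolute_rank(30, 4))     # 35 — the 5th result on the page starting at offset 30
```

Ten requests where one used to suffice is exactly the 10x increase in bandwidth and block risk described above.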

Why Most Rank Trackers Fell Over

Many commercial and homegrown trackers hardcoded the parameter into their scraping logic. When Google started returning only 10 results regardless, those tools either failed outright or returned incomplete data. Some providers, like Ahrefs, pushed updates to paginate properly, but not everyone adapted overnight. Practitioners reported gaps in ranking histories and a spike in “data not available” notifications. The underlying problem, though, was never the parameter itself—it was the assumption that a single magic URL would keep working.

The Real Lesson: Don’t Build on Google’s URL Quirks

Google treats the SERP as a product surface, not an API. They change query parameters, result layouts, and rendering rules constantly. Building rank tracking by mimicking a browser and tweaking a URL is like building a house on sand. If your reporting depends on undocumented behavior, you’ll be playing whack‑a‑mole with Google’s anti‑scraping team forever. The smarter path is to abstract away the raw HTTP request entirely. That’s where a dedicated SERP API comes in.

How to Get Rank Data Without Worrying About Parameters

Instead of sending &num=100 to Google yourself, you send a clean query to a SERP API that handles the negotiation. For example, with SerpBase’s search endpoint you specify the keyword, number of results you need (e.g., 50 or 100), device type, and location. The API does the pagination, manages proxies, and returns structured JSON with organic positions, titles, URLs, and even SERP features like featured snippets or People Also Ask boxes. You don’t need to know which parameters Google accepts today.

A Quick API Call in Practice

You might send:

GET https://api.serpbase.com/v1/search?q=best+seo+tool&num=100&gl=us

And get back a consistent organic_results array with exactly 100 entries (assuming they exist). No &num=100 trickery on your side. The API stays current with whatever Google serves.
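Once you have that JSON back, extracting a rank is a few lines of code. This sketch assumes a response shape with an organic_results array of {position, title, url} objects, as described above; the find_rank helper and the exact field names are illustrative, so check the API documentation for the real schema before relying on them.

```python
import json

def find_rank(organic_results: list[dict], domain: str):
    """Return the 1-based position of the first result whose URL contains `domain`."""
    for entry in organic_results:
        if domain in entry["url"]:
            return entry["position"]
    return None  # domain not ranking within the fetched window

# Response shape assumed for illustration only.
sample = json.loads("""{
  "organic_results": [
    {"position": 1, "title": "Best SEO Tools 2026", "url": "https://example.com/seo-tools"},
    {"position": 2, "title": "Top SEO Software", "url": "https://competitor.io/tools"}
  ]
}""")
print(find_rank(sample["organic_results"], "example.com"))   # 1
print(find_rank(sample["organic_results"], "notranked.net")) # None
```

Note that the tracker code never touches a Google URL parameter; if Google changes how results are fetched, this function keeps working unchanged.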

Why This Approach Survives the Next Google Change

Because the API provider (SerpBase) bears the burden of adapting to Google’s UI shifts. When Google kills a parameter or redesigns the results page, the engineering team updates the parsing logic behind the scenes. You just keep calling the same endpoint. That’s the kind of boring reliability rank tracking should have.

Beyond Position Numbers: What Else You Can Track

A raw SERP API gives you more than a rank. You can monitor when your page appears in a featured snippet, track image results, or watch how local packs rearrange for a keyword. If you’re building AI‑powered content tools, you can use live Google results to ground factuality (as we covered in How to Ground AI Answers with Live Google Results). Suddenly, your rank tracker morphs into a competitive intelligence dashboard.
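Feature tracking reduces to checking where your domain appears in the structured response. A minimal sketch, assuming the response exposes featured_snippet and people_also_ask fields with URL attributes (hypothetical names for illustration — the real schema may differ):

```python
def serp_features_owned(response: dict, domain: str) -> list[str]:
    """List SERP features, beyond plain organic slots, where `domain` appears."""
    owned = []
    # Assumed field: a single featured-snippet object with a source URL.
    snippet = response.get("featured_snippet")
    if snippet and domain in snippet.get("url", ""):
        owned.append("featured_snippet")
    # Assumed field: a list of People Also Ask entries with source URLs.
    for item in response.get("people_also_ask", []):
        if domain in item.get("source_url", ""):
            owned.append("people_also_ask")
            break
    return owned

sample = {
    "featured_snippet": {"url": "https://example.com/guide"},
    "people_also_ask": [{"question": "What is a SERP API?",
                         "source_url": "https://other.io/faq"}],
}
print(serp_features_owned(sample, "example.com"))  # ['featured_snippet']
```

Logging this list alongside the position number each day is what turns a plain rank tracker into the competitive-intelligence dashboard described above.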

What About the Tools That Already “Fixed” It?

A handful of enterprise tools patched their scrapers to simulate pagination or use headless browsers. That works until Google changes how pagination tokens are generated or starts serving obfuscated HTML. Those patches are temporary. The fundamental technique—scraping Google’s web interface—remains as fragile as ever. If you’re evaluating a tool, ask whether it uses an external API or does direct scraping. The former is a strategic hedge; the latter is a future headache.

Start Simple, Then Scale

You probably don’t need 100 results for every keyword. Most B2B terms have value in the top 20. Track what matters daily, and pull deeper rankings weekly for your high‑value clusters. With an API, you can vary the num per request, controlling costs while still getting the data you need. When Google inevitably tweaks something again, your daily email won’t fill with error alerts—because the API layer already absorbed the change.
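The tiering logic above is a few lines of scheduling code. A sketch, assuming a simple two-tier model where priority keyword clusters get a deep 100-result pull once a week (Mondays here, arbitrarily) and everything else tracks the top 20 daily:

```python
from datetime import date

def results_needed(keyword_tier: str, today: date) -> int:
    """Depth of results to request: top 20 daily, deep pulls weekly for priority terms."""
    if keyword_tier == "priority" and today.weekday() == 0:  # Monday
        return 100
    return 20

# A priority keyword gets the deep pull on Monday, the cheap one otherwise:
print(results_needed("priority", date(2026, 5, 11)))  # 100 (a Monday)
print(results_needed("priority", date(2026, 5, 12)))  # 20
print(results_needed("standard", date(2026, 5, 11)))  # 20
```

Passing the returned value as the API's result-count parameter keeps daily costs low while still capturing deep ranking movement where it matters.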

Building a Tracker That Lasts

The num=100 episode isn’t an isolated event. It’s part of a long‑running effort to make scraping harder. The SEO industry keeps mistaking temporary workarounds for infrastructure. If you’re serious about consistent ranking data, invest in a parameter‑agnostic pipeline now. Use a SERP API as your data source, cache results intelligently, and focus your reporting on trends that actually inform business decisions. That way, the next time Google kills a parameter, you won’t even notice.