Update Your Google Crawler IP Range Endpoints
On 31st March 2026, Google announced that the JSON files listing their crawler IP ranges were moving from /search/apis/ipranges/ to /crawling/ipranges/ on developers.google.com. The reasoning made sense: these ranges cover crawlers used by Shopping, AdSense, Gemini, and other products beyond Search, so a /crawling/ path better reflects their scope.
What Google's blog post doesn't mention is that googlebot.json has also been renamed to common-crawlers.json. So updating the directory path alone isn't enough.
To complicate matters, the transition happened faster than expected. While Google promised a 6-month transition period, the old endpoints are already returning unexpected responses. If your systems pull from them automatically, they may be ingesting invalid data or failing silently.
What actually happened
On 7th April at 09:42:16 UTC, eight days after the announcement, the old URLs stopped returning IP range data.
The old URLs didn't redirect, and they didn't break. Instead, they quietly started returning a 200 OK response with a completely different JSON payload. For example, the response to https://developers.google.com/static/search/apis/ipranges/user-triggered-fetchers.json was the following:
```json
{
  "Action needed": "update the location you're fetching from to https://developers.google.com/static/crawling/ipranges/user-triggered-fetchers.json"
}
```
This matters because of how different HTTP status codes behave in practice.
A 301 or 308 redirect is transparent. Most HTTP clients and automated scripts follow redirects by default, so systems can continue working without any changes. A 404 or 410 is disruptive, but disruption is useful: it triggers alerts, teams investigate, and the issue gets resolved quickly.
A 200 OK response carrying valid but non-standard (custom) JSON that contains no IP data is a much harder problem to catch. Without proper data validation, an automated script might fetch the URL, receive a successful response, parse the JSON without complaint, find no IP prefixes, and continue with nothing to work with.
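To make the failure mode concrete, here is a hypothetical sketch of a lenient consumer. The payload shapes follow the responses shown above, but the `CrawlerFile` type and `extractPrefixes` helper are our own illustration, not code from any real pipeline:

```typescript
// A loose type that tolerates missing keys — common in quickly written scripts.
type CrawlerFile = { prefixes?: Array<{ ipv4Prefix?: string; ipv6Prefix?: string }> };

function extractPrefixes(payload: CrawlerFile): string[] {
  // The ?? fallback means a payload with no "prefixes" key yields [] instead of throwing.
  return (payload.prefixes ?? [])
    .map((p) => p.ipv4Prefix ?? p.ipv6Prefix)
    .filter((p): p is string => typeof p === 'string');
}

// A real crawler file produces usable data...
extractPrefixes({ prefixes: [{ ipv4Prefix: '66.249.64.0/27' }] }); // ['66.249.64.0/27']

// ...but the "Action needed" message parses just as happily and yields an empty allowlist.
const actionNeeded = JSON.parse(
  '{"Action needed": "update the location you\'re fetching from"}'
) as CrawlerFile;
extractPrefixes(actionNeeded); // [] — no error, no warning, nothing to alert on
```

The lenient `??` fallbacks are exactly what turns a format change into a silent empty allowlist rather than a crash.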
A second inconsistency
While investigating the old endpoints, we found a second issue.
Most of the old URLs now return the “Action needed” JSON message. But one file behaves differently: user-triggered-agents.json, which covers Google’s new Google-Agent crawler (associated with Project Mariner).
The old URL for this file still returns actual IP range data. It returns real prefixes, valid JSON, and no error message.
The problem is that the data is stale.
The old endpoint serves data with a creation timestamp of 2026-03-03T10:00:00, containing just 4 IP prefixes. The new endpoint has data from 2026-04-07T14:45:58 with 18 prefixes, including entirely new IPv4 ranges in the 74.125.232.x block and more granular IPv6 allocations.
So depending on which file you’re fetching, the old URLs either give you a polite JSON message with no IP data at all, or they give you real data that’s over five weeks out of date. Neither outcome is correct.
We expect Google will bring this in line with the other endpoints over time. In the meantime, if you’re relying on the agents file specifically, be aware that the failure mode is different: you won’t see a schema mismatch, you’ll just get quietly outdated data.
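For the stale-data case specifically, a freshness check on the creationTime field is a simple guard. This is a sketch under our own assumptions: the seven-day threshold and the `isStale` name are ours, not anything Google specifies, and you should pick a threshold that matches how often you refresh:

```typescript
// Maximum acceptable age for the creationTime field (assumption: 7 days).
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000;

export function isStale(creationTime: string, now: Date = new Date()): boolean {
  const created = Date.parse(creationTime);
  // Treat unparseable timestamps as stale, so they trigger investigation rather than pass.
  if (Number.isNaN(created)) {
    return true;
  }
  return now.getTime() - created > MAX_AGE_MS;
}

// The stale payload from the old user-triggered-agents.json endpoint would be flagged:
isStale('2026-03-03T10:00:00', new Date('2026-04-08T13:28:00Z')); // true
```

A check like this would have caught the agents-file problem even though the response was structurally valid.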
What this means for you
If any part of your infrastructure fetches Google’s crawler IP ranges automatically, it’s worth checking immediately.
That includes:
- Firewall allowlists that permit known Googlebot IPs
- WAF rules that use these ranges for bot classification
- Bot verification scripts that check incoming requests against Google’s published IPs
- CDN configurations with origin protection rules
- Log analysis pipelines that tag or filter traffic by crawler type
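For the bot-verification case, the core operation is checking whether a visitor's address falls inside one of the published CIDR ranges. The sketch below is illustrative only (IPv4-only, no input validation, and the example range is chosen to resemble Google's published blocks rather than quoted from them):

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipv4ToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | (parseInt(octet, 10) & 0xff), 0) >>> 0;
}

// Check whether an IPv4 address falls inside a CIDR range like "66.249.66.0/27".
export function inCidr(ip: string, cidr: string): boolean {
  const [network, bitsStr] = cidr.split('/');
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(network) & mask);
}

export function isKnownCrawlerIp(ip: string, prefixes: string[]): boolean {
  return prefixes.some((cidr) => inCidr(ip, cidr));
}

isKnownCrawlerIp('66.249.66.5', ['66.249.66.0/27']); // true
```

If this check is fed an empty prefix list (the silent-failure scenario above), every request fails verification, which is why the data source matters as much as the matching logic.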
If your systems were pulling from the old URLs between 7th April and whenever you make the switch, there’s likely a window where stale or empty IP data was ingested. After updating your endpoints, do a one-time refresh to cover any gaps from that period.
The failure mode depends on how your code handles the response. If you’re using strict data schemas (common in Go or Java), your system will likely crash outright when it tries to deserialise the unexpected JSON structure. That’s actually the better outcome, because at least you’ll know something’s wrong.
If you’re using loosely typed languages or flexible parsing, the failure is far more subtle. Your script will parse the response successfully, find no IP prefixes, and carry on with an empty allowlist in production. There’s no error and no warning. You might not notice until crawl rates drop, indexing stalls, or your bot classification reports start showing inconsistencies.
How to fix it
The migration path is simple. Update every reference from the old directory to the new one:
| Old path | New path |
|---|---|
| /search/apis/ipranges/googlebot.json | /crawling/ipranges/common-crawlers.json ❗️Important: Google renamed googlebot.json to common-crawlers.json. |
| /search/apis/ipranges/special-crawlers.json | /crawling/ipranges/special-crawlers.json |
| /search/apis/ipranges/user-triggered-fetchers.json | /crawling/ipranges/user-triggered-fetchers.json |
| /search/apis/ipranges/user-triggered-fetchers-google.json | /crawling/ipranges/user-triggered-fetchers-google.json |
| /search/apis/ipranges/user-triggered-agents.json | /crawling/ipranges/user-triggered-agents.json |
All file names stay the same except for googlebot.json, which has been renamed to common-crawlers.json. Only the directory path changes. The base URL remains https://developers.google.com/static/.
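If you have the old URLs scattered across configuration, a small helper can apply the table above mechanically. This is a hypothetical migration sketch; the `migrateUrl` name and structure are ours:

```typescript
// The one filename rename introduced alongside the directory move.
const RENAMES: Record<string, string> = {
  'googlebot.json': 'common-crawlers.json',
};

export function migrateUrl(oldUrl: string): string {
  const url = new URL(oldUrl);
  // Swap the directory path; the base URL and host stay the same.
  url.pathname = url.pathname.replace('/search/apis/ipranges/', '/crawling/ipranges/');
  // Apply the googlebot.json → common-crawlers.json rename where it applies.
  const file = url.pathname.split('/').pop() ?? '';
  if (RENAMES[file]) {
    url.pathname = url.pathname.replace(file, RENAMES[file]);
  }
  return url.toString();
}

migrateUrl('https://developers.google.com/static/search/apis/ipranges/googlebot.json');
// → 'https://developers.google.com/static/crawling/ipranges/common-crawlers.json'
```

The other four filenames pass through unchanged, matching the table above.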
Beyond updating the URLs, we'd recommend adding schema validation to whatever system consumes these files. A basic key-existence check is a start, but validating the full response structure ensures any unexpected changes are caught before they reach production. Standard schema validation libraries (such as Zod in TypeScript) are a good fit. Below, we provide plain TypeScript validation code as a starting point:
```typescript
export type IpPrefix = { ipv4Prefix: string } | { ipv6Prefix: string };

export interface ResponseData {
  creationTime: string;
  prefixes: IpPrefix[];
}

export function isValidIpPrefix(prefix: unknown): prefix is IpPrefix {
  if (typeof prefix !== 'object' || prefix === null) {
    return false;
  }
  // strict: only ipv4Prefix or ipv6Prefix must be present, never both at the same time
  if ('ipv4Prefix' in prefix && 'ipv6Prefix' in prefix) {
    return false;
  }
  const value =
    'ipv4Prefix' in prefix
      ? prefix.ipv4Prefix
      : 'ipv6Prefix' in prefix
        ? prefix.ipv6Prefix
        : undefined;
  // note: we could (should) also validate the actual IP address range format (CIDR notation)
  return typeof value === 'string' && value !== '';
}

export function ensureValidResponse(data: unknown): asserts data is ResponseData {
  if (
    typeof data !== 'object' ||
    data === null ||
    !('creationTime' in data) ||
    typeof data.creationTime !== 'string' ||
    !('prefixes' in data) ||
    !Array.isArray(data.prefixes) ||
    !data.prefixes.every(isValidIpPrefix)
  ) {
    throw new Error('Unexpected response data!');
  }
}

// example
const url = 'https://developers.google.com/static/crawling/ipranges/common-crawlers.json';

// note: one should also validate the status code before parsing the response as JSON
const data = await fetch(url).then((r) => r.json());

ensureValidResponse(data);

console.log(data);
```
This turns a silent failure into a loud one. If the response format changes again, or if any endpoint returns something your system doesn't expect, it'll flag it immediately rather than quietly ingesting bad data. The goal is for your pipeline to fail fast on anything that doesn't match the expected schema.
What we did at Merj
We spotted the endpoint behaviour change on 7th April, but we’d already completed our migration.
When Google published the announcement on 31st March, we updated our Search Engine IP Tracker and all of our server logging systems to use the new /crawling/ipranges/ endpoints. By the time the old URLs stopped serving valid data, our systems were already pointing to the correct locations.
Our monitoring flagged the change, we verified it matched what we expected, and that was it. No scramble, no broken allowlists, and no gaps in our crawler identification.
We’ve also shared our findings with partners at Vercel and Akamai to make sure their teams are aware of the endpoint changes and the inconsistencies we spotted during the transition.
If you’re a Merj client, your systems are already using the correct URLs. There’s nothing you need to do. We monitor Google’s crawler IP ranges (along with other major search engines and AI crawlers), detect changes as they happen, and keep your systems current automatically.
Timeline
| Date | Event |
|---|---|
| 11 February 2026, 00:00 UTC | Google announces the IP ranges URLs migration on [the Changelog page](https://developers.google.com/crawling/docs/changelog#updated-the-location-of-googles-ip-ranges-for-common-crawlers,-special-crawlers,-and-user-triggered-fetchers) in the Crawling infrastructure docs. Old URLs are described as remaining available, but updating is recommended. Exploring [the updated docs](https://developers.google.com/crawling/docs/crawlers-fetchers/google-common-crawlers) also reveals that Google renamed googlebot.json to common-crawlers.json, which is not explicitly mentioned in the changelog entry. |
| 31 March 2026, 00:00 UTC | Google announces the IP ranges URLs migration [in a blog post](https://developers.google.com/search/blog/2026/03/crawler-ip-ranges) on the Search Central Blog. Old URLs described as remaining available, with redirects “within 6 months.” No explicit mentions of the renaming of googlebot.json to common-crawlers.json. |
| 7 April 2026, 09:42 UTC | Old URLs stop returning IP range data. They begin serving a 200 OK JSON response with an “Action needed” message instead. |
| 7 April 2026, 09:49 UTC | Merj detects the change. There is no disruption to our systems, since we were already using data from the new endpoints. |
| 8 April 2026, ~06:00 UTC | During a deeper investigation, Merj discovers the inconsistent behaviour of the user-triggered-agents.json file at the old URL (stale data vs. “Action needed” message on other files). |
| 8 April 2026, 09:45 UTC | Google rolls back the breaking change: the old URLs once again return the same IP range data available at the new URLs. However, the user-triggered-agents.json file at the old URL still returns stale data (as of 8 April 2026, 13:28 UTC). |
What to do next
This is likely a transitional state. Google may still introduce proper HTTP redirects, or the “Action needed” response might be their way of giving teams a clear signal to move. Either way, we’ll update this post if the behaviour changes.
In the meantime:
- Update your IP address data endpoints.
- Add schema validation.
- Check for any data gaps from the transition window.
The risk of leaving it is a silent failure that’s harder to diagnose.