
Decoding SERP Divergence: Why Two People on the Same IP See Different Google Results


It’s tempting to think that if two people sit next to each other, both connected to the same Wi-Fi (or the same VPN endpoint), they should see the exact same Search Engine Results Page (SERP). In practice, that rarely happens. Modern search engines, Google in particular, are designed to personalise and optimise results at a microscopic level, which means IP parity is only one small piece of the signal puzzle. This post walks through the reasons why SERPs diverge even under the same IP, and adds one often-missed angle: how Incognito/private browsing actually changes (and doesn’t change) the equation.

The short answer: IP is coarse; search personalisation is fine-grained

An IP address gives search engines a crude estimate of where a query originated: useful for rough localisation, but far from definitive. Google itself describes the IP-derived “general area” as intentionally coarse (an area typically larger than 3 km² outside dense cities) and only one of several location signals it uses. When search intent calls for precision, higher-fidelity signals win. (Google Help)

So if two nearby people share an IP (same VPN exit, same home router), their results still diverge because the engine layers many other signals on top of IP: account history, session behaviour, device identity, precise device GPS/Wi-Fi signals, client rendering differences and randomised algorithm variants (A/B tests). Below I unpack the most important of those layers.

Six layers that create SERP divergence (briefly)

  1. Explicit profile personalisation: signed-in account history and Web & App Activity (WAA). (Google Help)
  2. Ephemeral session context: recent queries, clicks, dwell time and the search session’s short-term memory.
  3. Hyper-precise geolocation: device GPS/Wi-Fi triangulation via the Geolocation API when permissions are granted. This is far more accurate than IP. (MDN Web Docs)
  4. Deterministic device identification: fingerprinting and device hashes that survive cookie clearing. (atomicmail.io)
  5. Client environment & rendering: screen size, viewport, browser/JS behaviour and extensions that change what’s visible and which SERP features appear.
  6. Algorithmic experimentation & infra routing: live A/B/n testing and which cache/cluster served your query can introduce randomised differences.

Each layer can independently alter ranking, the appearance of rich results (carousels, Knowledge Panels, Featured Snippets), and the structure of the page, so even small differences add up fast.

1) Signed-in profiles and Web & App Activity: the heavy lifters

When users are signed in and have Web & App Activity enabled, Google stores a wide-ranging record of searches, Maps queries, YouTube views and other interactions. That history is actively used to tailor future search results and layout choices: not just which URL appears highest, but whether a video carousel, local pack or Knowledge Panel is promoted for you. Turn this service off and personalisation effects reduce, but they do not vanish entirely, because other signals remain. (Google Help)

For analysts: if one tester is signed in and the other is not, parity is essentially impossible.

2) Ephemeral session signals: one click can change the next SERP

Search engines treat each session as a learning opportunity. Clickstream events, such as which result you clicked, how long you stayed (dwell time) and whether you returned quickly to the results, are fed back as immediate quality signals. If User A clicks a result and lingers, while User B clicks a different result and bounces, the search system updates its model of what worked for each user and can change subsequent rankings within the same session. This is why two otherwise identical sessions can diverge after a single interaction. (The ephemeral layer is where “micro-personalisation” lives.)
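To make the feedback loop concrete, here is a minimal browser-side sketch in TypeScript of how a results page could instrument clicks and flag quick “pogo-stick” returns. The `a.result-link` selector, the threshold and the logging are invented for illustration; real engines compute these signals server-side from logged interaction data, not in page scripts like this.

```typescript
// Hypothetical illustration of click + dwell instrumentation on a SERP.
// Returns to the results page faster than the threshold suggest the
// clicked result did not satisfy the query ("pogo-sticking").
const POGO_THRESHOLD_MS = 10_000;

let lastClick: { url: string; at: number } | null = null;

// Record which result was clicked and when.
document.querySelectorAll<HTMLAnchorElement>('a.result-link').forEach((link) => {
  link.addEventListener('click', () => {
    lastClick = { url: link.href, at: performance.now() };
  });
});

// If the user comes straight back (back/forward cache restore), the elapsed
// time approximates dwell time on the clicked result.
window.addEventListener('pageshow', (event: PageTransitionEvent) => {
  if (event.persisted && lastClick) {
    const dwellMs = performance.now() - lastClick.at;
    const signal = dwellMs < POGO_THRESHOLD_MS ? 'bounce' : 'satisfied-click';
    console.log({ url: lastClick.url, dwellMs, signal }); // a real system would beacon this
    lastClick = null;
  }
});
```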

3) Hyper-precise geolocation trumps IP

If a browser or app is allowed to provide precise location via the HTML5 Geolocation API (or native device location services), that signal is treated as higher fidelity than IP-derived location. The Geolocation API reports coordinates with an accuracy value in metres; on mobile it can often place you within single-digit metres using GPS plus Wi-Fi triangulation. For local-intent queries (e.g., “coffee near me”), this precision radically alters rankings and Local Pack ordering, since proximity is a dominant factor in local ranking. If one user grants device location and the other does not, expect big divergence even on the same network. (MDN Web Docs)
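To see the fidelity gap yourself, the snippet below calls the standard Geolocation API directly; note the accuracy field, reported in metres. The timeout value and the logging are arbitrary choices for this sketch.

```typescript
// Standard HTML5 Geolocation API usage (runs in any browser page).
// `accuracy` is in metres: single-digit on a GPS-equipped phone,
// versus kilometre-scale for IP-derived location.
navigator.geolocation.getCurrentPosition(
  (position: GeolocationPosition) => {
    const { latitude, longitude, accuracy } = position.coords;
    console.log(`lat ${latitude}, lng ${longitude}, accuracy ±${accuracy} m`);
  },
  (error: GeolocationPositionError) => {
    // code 1 (PERMISSION_DENIED) is what you want in a controlled SERP test.
    console.warn(`Geolocation unavailable: code ${error.code}, ${error.message}`);
  },
  { enableHighAccuracy: true, timeout: 10_000 }
);
```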

Practical tip: IP + VPN can only control the general area. If you need repeatable local SERP tests, explicitly deny location permissions and standardise the browser environment (see protocol below).

4) Device fingerprinting: identity that survives cookie clearing

Cookies are easy to delete; device fingerprints are harder. Fingerprinting collects sets of non-sensitive technical signals (browser version, fonts, GPU rendering quirks, canvas/audio hashes, installed plugin lists) and combines them into a deterministic identifier. That identifier can be used to restore a profile even after cookies are removed. In short, clearing cookies does not necessarily make you a “new” user, and two physically different devices will almost always yield different fingerprints. Regulatory bodies treat some fingerprinting as personal data, but the technique remains a real reason two neighbouring users differ. (atomicmail.io)
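To illustrate the mechanism (not any specific vendor’s implementation), here is a single-signal canvas-hash sketch in TypeScript. Production fingerprinting combines dozens of such probes; this toy version only shows why two devices rarely hash alike.

```typescript
// One classic fingerprinting probe: hash how this device renders text
// to a canvas. Fonts, GPU, anti-aliasing and OS settings all subtly
// change the pixels, so the hash tends to differ between devices.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement('canvas');
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext('2d')!;
  ctx.textBaseline = 'top';
  ctx.font = '16px Arial';
  ctx.fillStyle = '#069';
  ctx.fillText('fingerprint-probe 🔍', 2, 2);
  // Serialise the pixels and hash them with the built-in SubtleCrypto API.
  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

canvasFingerprint().then((id) => console.log('device hash:', id));
```

Clearing cookies changes nothing in that output, which is exactly the point.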

5) Client rendering differences: what you see affects behaviour

Even if the underlying ranked list is identical, the layout presented to a user can differ because of device viewport, resolution and browser behaviour. A Featured Snippet, People Also Ask, or a large knowledge card can push organic links below the fold on a small screen while sitting above the fold on a desktop. That changes click behaviour (CTR) and therefore downstream ranking signals, a second-order effect that amplifies divergence. Browser extensions, JS blocking and accessibility settings also change which SERP features render at all. (So yes: screen size matters.) (Google Help)
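A quick way to observe this: the sketch below checks whether a given element sits below the fold for the current viewport. The `#search .g` selector is a guess at Google’s organic-result markup, which changes frequently, so treat it as a placeholder.

```typescript
// Is the selected element below the fold (i.e. outside the initial viewport)?
function isBelowFold(selector: string): boolean | null {
  const el = document.querySelector(selector);
  if (!el) return null; // selector didn't match; markup may have changed
  const { top } = el.getBoundingClientRect();
  return top > window.innerHeight; // innerHeight is the "fold" line
}

// On a 667 px-tall phone viewport, a Featured Snippet plus People Also Ask
// can push the first organic result past the fold; on a 1080 px desktop,
// the same ranked list may show it above the fold.
console.log('first organic result below fold?', isBelowFold('#search .g'));
```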

6) Algorithmic A/B testing and infrastructure variance

Search engines constantly test changes and route queries across global clusters for low latency. Randomised A/B or A/B/n experiments assign users to variant ranking systems, creating irreducible, intentional differences. Infrastructure routing and cache states mean the exact index snapshot serving your query can differ by cluster, producing micro-differences for freshness-sensitive queries. For analysts, this represents algorithmic noise you cannot remove, only measure. (Hence the need for repeated sampling and statistical averaging.) (The Verge)
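It helps to quantify that noise before interpreting any ranking change. A simple approach is to sample the same query repeatedly and compute the spread of a URL’s rank; the helper below is a generic sketch with made-up numbers, assuming you collect the ranks with your own tooling.

```typescript
// Summarise the rank of one URL for one query across repeated samples.
function rankStats(ranks: number[]): { mean: number; stdDev: number } {
  const mean = ranks.reduce((sum, r) => sum + r, 0) / ranks.length;
  const variance =
    ranks.reduce((sum, r) => sum + (r - mean) ** 2, 0) / ranks.length;
  return { mean, stdDev: Math.sqrt(variance) };
}

// Example: ten samples of one query over a day (fabricated numbers).
// A non-zero stdDev on an unchanged page is your baseline noise floor.
console.log(rankStats([3, 3, 4, 3, 5, 3, 4, 3, 3, 4])); // { mean: 3.5, stdDev: ~0.67 }
```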

Incognito / private browsing: the missing piece

A common assumption is that private browsing modes (Chrome’s Incognito, Firefox Private, Safari Private) remove personalisation. That’s not entirely true.

What Incognito does do: it limits what’s stored locally. Cookies and site data are discarded when the session ends; sign-in cookies aren’t retained; local storage is cleared. Chrome’s official help explains that Incognito limits what is saved to your device. (Google Help)

What Incognito does not do: it does not stop servers, network observers or some Google services from seeing the requests. Google and websites you visit can still use your IP, device signals and server-side logs to personalise or attribute activity for that session. Additionally, deterministic fingerprinting techniques and network identifiers can still be used to link activity to a persistent profile in some cases. In short: Incognito reduces client-side traces but does not make you invisible to the engine. (WIRED)

Two important, real-world markers illustrate this:

  • Google’s guidance explicitly recommends Incognito for temporarily preventing searches being saved to your account, but warns it doesn’t stop internet providers, sites, or other Google services from getting data. (Google Help)
  • Legal attention: Google settled a major class action alleging tracking in Incognito; the case underlines that private modes are not a panacea and that server-side data collection continued to be a concern. (AP News)

So how will Incognito change your SERP? It eliminates some persistent client signals (cookies, stored history), which can reduce some kinds of same-device personalisation; but unless you also change IP, deny precise location and eliminate fingerprintable features, the engine will still have multiple ways to differentiate your session from another’s. For many practical tests, Incognito reduces noise, but it does not guarantee parity.

A practical de-personalisation protocol for repeatable SERP testing

If you need to measure SERPs as uniformly as possible (competitive tracking, research), follow these steps:

  1. Use a clean browser profile that has never been signed into Google; use a fresh browser install or a dedicated VM.
  2. Deny location permissions and turn off device location services so the engine falls back to the IP-level signal only. (The HTML5 Geolocation API cannot report a position without explicit user permission, so denying it is reliable.) (MDN Web Docs)
  3. Disable or control extensions; standardise viewport and screen resolution across testers.
  4. Reduce fingerprinting entropy by using a standardised testing VM, a privacy-hardened browser profile, or a browser designed to reduce fingerprinting (e.g., Brave, or Firefox with anti-fingerprinting enabled). (atomicmail.io)
  5. Use the same IP/VPN endpoint across tests, remembering that a VPN controls only one signal. (Google Help)
  6. Repeat searches over time, collect many samples and average the results to filter algorithmic A/B noise; a minimal automation sketch covering these steps follows below. (The Verge)
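As a rough starting point, here is how steps 1-5 might look with Playwright (an assumed dependency, installed via npm i playwright; any comparable automation tool works). The result selector is a guess, Google may interpose consent pages, and automated scraping can breach terms of service, so treat this strictly as a sketch and prefer an approved data API where one is available.

```typescript
// Sketch: a throwaway, standardised browser context for SERP sampling.
import { chromium } from 'playwright';

async function sampleSerp(query: string): Promise<string[]> {
  const browser = await chromium.launch(); // fresh instance, no profile reuse
  const context = await browser.newContext({
    viewport: { width: 1366, height: 768 }, // standardised across testers
    locale: 'en-GB',                        // pin language so it isn't a variable
    permissions: [],                        // nothing granted: geolocation requests fail
  });
  const page = await context.newPage();
  await page.goto(`https://www.google.com/search?q=${encodeURIComponent(query)}`);
  // 'a h3' is a guess at organic-result headings; markup changes often.
  const urls = await page.$$eval('a h3', (headings) =>
    headings.map((h) => (h.closest('a') as HTMLAnchorElement).href)
  );
  await browser.close();
  return urls;
}

// Step 6: call this repeatedly over time and feed the observed ranks into
// a spread calculation like the rankStats sketch in section 6.
sampleSerp('coffee near me').then((urls) => console.log(urls.slice(0, 5)));
```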

This is not perfect (algorithmic experiments and cache routing remain sources of variance), but it dramatically reduces the largest, controllable sources of SERP divergence.

Final takeaway

SERP divergence under the same IP is not a bug; it’s an architectural feature. Modern search systems combine long-term profiles, session behaviour, precise device location, deterministic identifiers and UI rendering choices, and they do so intentionally, to serve what they estimate is more useful content for each user.

If your goal is reproducible SERP measurement, control the full stack (account state, local storage, device fingerprint surface, location permissions, viewport) and use repeated sampling to average out algorithmic noise. If your interest is in understanding user experience, accept that two nearby users can and probably will get intentionally different, personalised answers.
