If you’ve noticed your GA4 numbers feeling a bit “floaty” lately — fewer users, fewer conversions, attribution looking weird — you’re not imagining things. A big part of it is privacy: more people decline tracking, browsers block more stuff by default, and regulations have tightened expectations around consent.
That doesn’t mean we throw our hands up and stop measuring. It means we get cleaner about how we measure.
Consent Mode is basically Google’s way of letting your site respect user choices and still keep reporting usable. In plain terms: when someone says “no” to analytics/ads cookies, Consent Mode helps Google tags behave differently (more limited), and your platforms may use modelling to fill some gaps. When someone says “yes”, everything runs normally.
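For the technically inclined, the two behaviours correspond to two calls to the gtag consent API. A minimal sketch, assuming a site that loads gtag.js directly (the two-line stub mirrors Google's standard snippet; `onUserAcceptsAll` is hypothetical, standing in for whatever callback your consent banner exposes):

```javascript
// Sketch: the two consent states as gtag calls. On a real page,
// gtag.js is loaded from googletagmanager.com before this runs.
globalThis.dataLayer = globalThis.dataLayer || [];
function gtag() { dataLayer.push(arguments); }

// State 1 (and the safe starting point): assume "no" until told otherwise.
// ad_user_data / ad_personalization are the extra Consent Mode v2 signals.
gtag('consent', 'default', {
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  analytics_storage: 'denied',
});

// State 2: the banner records a "yes" (onUserAcceptsAll is a placeholder
// for your CMP's accept callback).
function onUserAcceptsAll() {
  gtag('consent', 'update', {
    ad_storage: 'granted',
    ad_user_data: 'granted',
    ad_personalization: 'granted',
    analytics_storage: 'granted',
  });
}
onUserAcceptsAll();

console.log(dataLayer.length); // 2: one default, one update
```

The key detail is that the `default` call runs before any tag sends data, and the `update` call runs whenever the user's choice changes.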
This article is the “non-technical” version of what we care about as a CRO team: making sure consent is handled properly without accidentally destroying funnel visibility or misreading experiment results.
Why we care (from a CRO perspective)
When consent isn’t implemented well, three bad things happen:
- **We lose signal in the funnel.** You can’t optimise what you can’t see. If key events (like “lead submitted” or “checkout complete”) stop firing reliably, we end up prioritising the wrong ideas and arguing about whose dashboard is “right.”
- **Experiments become harder to trust.** If Variant B happens to be served more often to users who decline tracking (or certain browsers), you’ll see weird differences that aren’t caused by the change you shipped. That’s how teams “learn” the wrong lesson.
- **Marketing and product get different stories.** Paid platforms, analytics tools, and backend data can drift apart. If we don’t align on what’s expected (and what’s not), every performance conversation turns into a reconciliation exercise.
Consent Mode doesn’t magically fix everything — but it reduces the chaos if it’s implemented and verified properly.
The simple mental model: two states, two behaviours
Forget the jargon. The way we think about it is:
- **Consented users:** we measure normally (within reason).
- **Non-consented users:** we measure less (and accept that some reporting relies on modelling or aggregated signals).
The goal isn’t “track everyone anyway.” The goal is to honour consent choices, keep tagging stable, and know exactly what changes in your reporting.
A practical rollout checklist (the stuff that prevents surprises)
Here’s the tight checklist we use before we’re comfortable calling Consent Mode “done.”
1) Map your consent banner decisions to your measurement needs
Your banner is a product decision, not just legal decoration. We want it to be clear, trustworthy, and not manipulative.
From a CRO ops angle, we also want to know:
- What counts as “accept” vs “reject”?
- Is there a “customise” path?
- Does the default setting mean analytics is off until opt-in, or allowed until opt-out?
We don’t need perfection here. We need clarity — and consistency.
2) Make sure tags load in the right order
This is the #1 issue we see: tags firing before consent status is known.
If the site loads analytics tags first and consent second, you’ll get messy behaviour:
- some users tracked when they shouldn’t be
- other users not tracked even after accepting
- session breaks and event drops that come and go
Even if someone else is implementing it (agency/dev), we always ask for confirmation that consent defaults apply before tags start sending data.
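One quick way to get that confirmation yourself is to inspect the ordering of commands in the dataLayer from the browser console. A sketch (the dataLayer is simulated here so the example is self-contained; real entries are `arguments` objects rather than arrays, but they index the same way, and `G-XXXXXXX` is a placeholder measurement ID):

```javascript
// Simulated dataLayer for a correctly ordered page: consent default
// is queued before the first config/event command.
const simulatedDataLayer = [
  ['consent', 'default', { analytics_storage: 'denied' }],
  ['js', new Date()],
  ['config', 'G-XXXXXXX'], // placeholder measurement ID
];

// Returns true only when a consent default exists AND it was queued
// before the first command that could send data (config or event).
function consentDefaultComesFirst(dl) {
  const consentIdx = dl.findIndex((c) => c[0] === 'consent' && c[1] === 'default');
  const tagIdx = dl.findIndex((c) => c[0] === 'config' || c[0] === 'event');
  return consentIdx !== -1 && tagIdx !== -1 && consentIdx < tagIdx;
}

console.log(consentDefaultComesFirst(simulatedDataLayer)); // true
```

If this ever returns false on a live page, that is exactly the “tags firing before consent status is known” problem described above.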
3) Confirm your key conversion events still behave
Pick your “money events” and test them deliberately:
- lead submit
- purchase complete
- signup complete
- booking confirmed
- quote request sent
Then test those events across:
- accept all
- reject all
- customise (if available)
- incognito / fresh browser
- Safari + Chrome (at minimum)
What we’re looking for is not “perfect tracking under reject” — that’s not the point. We’re looking for:
- stable behaviour
- expected gaps
- no accidental double-counting
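One way to keep that grid honest is to enumerate it, so no combination gets skipped. A minimal sketch (event and state names are placeholders, not GA4 standards; incognito/fresh-profile runs would layer on top of this matrix):

```javascript
// Sketch of the QA matrix: money events x consent states x browsers.
const events = ['lead_submit', 'purchase_complete', 'signup_complete', 'booking_confirmed', 'quote_request_sent'];
const consentStates = ['accept_all', 'reject_all', 'customise'];
const browsers = ['safari', 'chrome'];

const checks = [];
for (const event of events) {
  for (const consent of consentStates) {
    for (const browser of browsers) {
      checks.push({
        event,
        consent,
        browser,
        // Under "reject", gaps are expected: the check is that they are
        // the *expected* gaps, not random ones.
        expected: consent === 'accept_all' ? 'fires normally' : 'limited or absent (expected gap)',
      });
    }
  }
}

console.log(checks.length); // 5 x 3 x 2 = 30 manual checks
```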
4) Align GA4, Google Ads, and your backend (at least conceptually)
You will still see differences between platforms — that’s normal.
But we want to avoid the scary version of “differences,” where:
- Ads shows conversions, GA4 shows none
- GA4 shows conversions, CRM shows way fewer
- results change wildly week to week with no product reason
If you have a CRM or payment processor, even a simple weekly spot check (“does order count roughly match?”) is enough to catch major issues early.
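That spot check can be as simple as a drift threshold. A sketch, assuming the weekly backend and GA4 purchase counts are pulled by hand; the 20% tolerance and the numbers are illustrative, not a standard:

```javascript
// Weekly sanity check: flag the week for investigation when GA4's
// count drifts too far from the backend count (backend is the anchor).
function spotCheck(backendCount, ga4Count, tolerance = 0.2) {
  if (backendCount === 0) return ga4Count === 0 ? 'ok' : 'investigate';
  const drift = Math.abs(backendCount - ga4Count) / backendCount;
  return drift <= tolerance ? 'ok' : 'investigate';
}

console.log(spotCheck(140, 118)); // ~16% gap, within tolerance: 'ok'
console.log(spotCheck(140, 60));  // ~57% gap: 'investigate'
```

Some gap is normal under consent constraints; the point is catching the weeks where the gap suddenly changes size.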
5) Decide what “good enough” reporting looks like
This is underrated. Teams get stuck because they expect GA4 to behave like 2019 Universal Analytics.
Instead, we define:
- Which metrics we’ll optimise on (often: backend conversions, qualified leads, revenue)
- Which metrics are directionally useful (GA4 sessions/users, event counts)
- Which are “nice to have” but not decision-critical (some attribution breakdowns)
That keeps CRO decisions grounded.
How Consent Mode affects experimentation (and how we adapt)
Consent changes the shape of your data. So when we run tests, we do a couple of simple things:
- **Use one primary metric we trust.** Ideally something that’s tied to a real outcome (qualified lead, revenue, activation), not a fragile “button click” event.
- **Keep guardrails honest.** Bounce rate and “engagement” can be noisy under consent constraints. If we use guardrails, we prefer ones that are less dependent on perfect client-side tracking (like backend conversion rate, refund rate, support tickets).
- **Sanity check by device + browser.** Safari often behaves differently. If a result only “wins” on one browser family, we treat it as a yellow flag until verified.
The takeaway
Consent Mode isn’t a growth hack. It’s housekeeping — but it’s high-leverage housekeeping.
When it’s done well, we get:
- clearer funnels (even if slightly smaller)
- fewer “mystery drops” in reporting
- more trustworthy experiments
- better alignment between marketing, product, and analytics