Running A/B Tests Without Violating GDPR: A Technical Breakdown

Most testing tools store visitor identifiers in persistent cookies, and setting those cookies often requires consent under GDPR and the ePrivacy rules. Here is how to run experiments compliantly — without sacrificing experiment validity.

When the General Data Protection Regulation came into force in 2018, many growth teams assumed it had little to do with A/B testing. A/B testing is about website optimization, not personal data — so the thinking went. That assumption has proven costly. Regulatory guidance from France's CNIL, the UK's ICO, and the Dutch Autoriteit Persoonsgegevens has since clarified that experiment assignment cookies often constitute personal data processing requiring a lawful basis. In most cases, that basis is consent.

This article explains the technical mechanics of how standard A/B testing tools work, why they create GDPR exposure, and what architectures avoid those problems without breaking experiment validity. The goal is a practical guide for engineering and product teams, not a legal opinion.


How Traditional A/B Testing Tools Handle Visitor Identity

Every A/B testing tool needs to solve the same problem: show the same visitor the same variant on every page view, across multiple sessions if necessary. The standard solution is a persistent first-party cookie containing a random visitor identifier. The tool reads that cookie, determines which bucket the visitor belongs to, and renders the corresponding variant.
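The read-or-create cookie flow can be sketched as follows. This is an illustrative sketch, not any particular vendor's implementation: the cookie name `_exp_uid` and the FNV-1a hash are assumptions chosen to keep the example dependency-free.

```typescript
// Deterministic bucketing: the same visitor ID always maps to the same variant.
// FNV-1a is used here only because it is tiny and dependency-free; any stable
// hash with reasonable dispersion works.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // keep it an unsigned 32-bit value
  }
  return hash;
}

function assignVariant(visitorId: string, variants: string[]): string {
  return variants[fnv1a(visitorId) % variants.length];
}

// In the browser, the visitor ID would come from a read-or-create cookie
// (illustrative name `_exp_uid`) — the step that triggers consent requirements:
//
//   let id = document.cookie.match(/_exp_uid=([^;]+)/)?.[1];
//   if (!id) {
//     id = crypto.randomUUID();
//     document.cookie = `_exp_uid=${id}; max-age=31536000; path=/`; // 12 months
//   }
//   const variant = assignVariant(id, ["control", "treatment"]);
```

The persistent cookie write in the comment above is exactly what the next section shows to be the compliance problem; the bucketing function itself is harmless.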

That cookie does not contain a name or email address. It is, on its face, anonymous. But under GDPR, pseudonymous identifiers — random strings that consistently identify a device or browser over time — qualify as personal data when they can be used to single out an individual, even without linking to an identity. Recital 26 of the Regulation makes this explicit: data that allows identification by reasonably available means is personal data, regardless of whether that identification actually occurs.

The persistent nature of the cookie is what creates the problem. A session cookie that disappears when the browser closes is far less likely to constitute personal data than a cookie with a 12-month expiry that builds up behavioral signals across hundreds of page views. Regulators have taken the position that the latter typically requires consent under Article 6(1)(a) or, in the case of ePrivacy-covered cookies, under the national rules implementing the ePrivacy Directive.

The practical consequence is significant. If you require opt-in consent before setting experiment cookies, you will only be able to assign consenting visitors to variants. In many European markets, consent rates for non-essential cookies run between 40 and 60 percent. Your experiment population is immediately halved, which roughly doubles the traffic required to reach the same statistical power. Tests that would take three weeks now take six.

The Three Technical Approaches

Once you accept that consent-gated cookies are the problem, three main technical architectures emerge. Each involves genuine trade-offs between privacy compliance, experiment validity, and engineering complexity.

Server-Side Assignment with Session-Scoped State

In a server-side testing architecture, variant assignment happens before the page is served to the browser. The server determines which variant to render based on attributes available without cookies: IP address prefix, User-Agent string, request headers, URL parameters, or a short-lived session token set at the start of a browsing session. The rendered HTML already contains the correct variant — no client-side swap occurs, so there is no flicker, and no persistent identifier needs to be stored in the browser.
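A minimal sketch of request-time assignment in TypeScript (Node), assuming a short-lived session token is available on the request; the function name and experiment key are illustrative.

```typescript
import { createHash } from "node:crypto";

// Assign a variant at request time from a short-lived session token.
// Nothing is persisted in the browser: the same token yields the same
// variant for the life of the session, and a new session may land elsewhere.
function serverSideVariant(
  sessionToken: string,
  experimentKey: string,
  variants: string[],
): string {
  const digest = createHash("sha256")
    .update(`${experimentKey}:${sessionToken}`)
    .digest();
  // Use the first 4 bytes of the digest as an unsigned 32-bit bucket index.
  return variants[digest.readUInt32BE(0) % variants.length];
}

// The server renders the chosen variant directly into the HTML,
// so no client-side swap (and no flicker) occurs.
const variant = serverSideVariant("sess-7f3a", "homepage-hero", ["control", "treatment"]);
```

Keying the hash on the experiment name as well as the token keeps assignments independent across concurrent experiments.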

The privacy advantage is real. If assignment is based on request-time attributes that are not stored persistently and not combined with other data to build profiles, the processing can often be conducted under a legitimate interests basis or structured as technical necessity. The limitation is cross-session consistency. If a visitor closes their browser and returns the next day, they may be assigned to a different variant. For tests measuring single-session conversions, this is acceptable. For tests measuring return visitor behavior or multi-session funnels, it introduces noise.

Consent-Before-Assignment with Graceful Degradation

The most legally straightforward approach is simply to not assign visitors to experiment variants until they have consented to non-essential cookies. The experiment tool integrates with your Consent Management Platform. When consent is granted, the CMP fires an event, the testing script sets the assignment cookie, and the visitor enters the experiment population on their next page view.
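The consent-then-assign handshake can be sketched as a small state machine. The class and the CMP event name in the comment are assumptions for illustration; real CMPs each expose their own event API.

```typescript
// A visitor is not assigned until the CMP reports consent. Before that,
// getVariant() returns null and the page renders the default experience.
class ConsentGatedAssigner {
  private variant: string | null = null;

  constructor(private readonly pickVariant: () => string) {}

  // Wire this to your CMP's consent event, e.g.:
  //   window.addEventListener("cmp:consent-granted", () => assigner.onConsentGranted());
  // (the event name "cmp:consent-granted" is illustrative, not a standard)
  onConsentGranted(): void {
    if (this.variant === null) {
      this.variant = this.pickVariant();
      // Only at this point is it safe to persist the assignment cookie.
    }
  }

  getVariant(): string | null {
    return this.variant;
  }
}
```

The null-before-consent contract makes the compliance behavior auditable: any code path that reads an assignment before the CMP fires sees no variant at all.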

This architecture is clean from a compliance standpoint. The drawback, as noted above, is reduced experiment population size. To mitigate this, teams using this approach should focus their experiments on post-consent flows — onboarding sequences, in-app pages, or checkout flows where users have already authenticated and consented. The number of testable surfaces is smaller, but experiment validity is uncompromised within that scope.

Anonymized Cohort Assignment

A third approach uses deterministic hashing rather than stored identifiers. Instead of generating a random visitor ID and persisting it, the system computes a hash of attributes available on every request — for example, a combination of the page URL, a daily rotating salt, and a subset of the User-Agent string. This hash deterministically maps a visitor to a variant without storing anything. The same visitor will receive the same variant on the same day because the inputs to the hash are consistent.
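A sketch of that hashing scheme follows. In production the daily salt should be a random server-side secret rotated every 24 hours; deriving it from the calendar date, as the usage example below does for brevity, would make the mapping predictable.

```typescript
import { createHash } from "node:crypto";

// Map a request to a variant with no stored identifier: the inputs are
// attributes present on every request plus a salt that rotates daily,
// so the assignment is stable within a day and unlinkable across days.
function cohortVariant(
  pageUrl: string,
  userAgentPrefix: string, // e.g. the UA string truncated to a coarse prefix
  dailySalt: string,       // in production: a random secret rotated every 24h
  variants: string[],
): string {
  const digest = createHash("sha256")
    .update(`${dailySalt}|${pageUrl}|${userAgentPrefix}`)
    .digest();
  return variants[digest.readUInt32BE(0) % variants.length];
}

// Same inputs, same salt, same variant; once the salt rotates and the old
// value is discarded, the previous mapping cannot be reconstructed.
const today = new Date().toISOString().slice(0, 10); // illustrative salt only
const v = cohortVariant("https://example.com/pricing", "Mozilla/5.0", today, ["control", "treatment"]);
```

Note that visitors sharing the same coarse attributes land in the same bucket — hence "cohort" assignment rather than individual assignment.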

The privacy benefit is that no identifier is stored in the browser, and no single attribute is sufficient to re-identify the visitor. The daily rotating salt ensures the mapping changes over time, limiting the window during which any particular cohort assignment could be linked to a behavioral trail. Webyn's own implementation uses this approach for visitors in ePrivacy-covered jurisdictions who have not yet interacted with the consent banner.

What CNIL's 2022 Guidance Actually Says

France's data protection authority published updated guidance on audience measurement and A/B testing cookies in 2022. The key distinction CNIL draws is between cookies used for audience measurement conducted strictly for the benefit of the site operator and cookies used for any form of cross-site or cross-service profiling. For the former category, CNIL created a limited exemption from the consent requirement, provided specific conditions are met.

Those conditions include: the data must not be shared with third parties, it must not be combined with data from other sources, it must be processed only for statistical purposes related to the site, and the scope must be limited to a single site or a group of sites operated by the same entity. CNIL's cookie recommendations also cap the tracker's lifetime at 13 months (with no automatic extension on repeat visits) and limit retention of the collected data to 25 months.

A/B testing cookies fit within this exemption only if the testing tool does not share variant assignment data with its own analytics infrastructure, does not build cross-customer visitor profiles, and limits data retention to the duration necessary for the experiment. Many commercial testing tools fail at least one of these conditions because their business model involves cross-customer behavioral intelligence.

Tools that process experiment data entirely on the customer's own infrastructure — or that commit contractually to no cross-customer data aggregation — are more likely to qualify. Webyn processes all experiment assignment data on EU infrastructure under a strict data processing agreement that prohibits cross-customer analysis. This is the architecture that allows us to operate under the CNIL exemption for applicable use cases.

Structuring Your Data Processing Agreement

Regardless of the technical approach, the relationship between a website operator and an A/B testing vendor is a controller-processor relationship under GDPR Article 28. The vendor processes personal data on behalf of the operator. This requires a written Data Processing Agreement that specifies: the subject matter and duration of processing, the nature and purpose of processing, the type of personal data involved, the categories of data subjects, and the obligations and rights of the controller.

Key provisions to review in any DPA for A/B testing tools include sub-processor lists, data transfer mechanisms for non-EU processing, data retention and deletion schedules, breach notification timelines, and audit rights. The sub-processor list is often where compliance problems hide. An A/B testing vendor may use cloud providers, CDN services, or analytics backends whose own data processing is not fully scoped in the main DPA.

When reviewing a vendor's DPA, check whether experiment assignment events are categorized as personal data or aggregate statistics. A DPA that treats assignment events as anonymized aggregate data may create a false sense of security if the underlying implementation actually stores persistent identifiers.

Consent Banners and Experiment Timing

For teams using the consent-before-assignment approach, the interaction between the consent banner itself and the A/B testing script deserves careful attention. A common mistake is loading the testing script unconditionally in the page head, allowing it to run before consent is collected, and relying on an internal flag to prevent cookie setting. This approach depends entirely on the testing tool's own compliance logic and is difficult to audit.

A more robust pattern is to load the testing script conditionally, only after a consent event has been fired by the CMP. In Google Tag Manager, this means placing the testing tool tag inside a consent-gated trigger. The advantage is that the testing script never executes in a non-consented context, eliminating the risk that a future script update might change the consent check behavior.

The trade-off is a slightly more complex tag setup and the possibility that the testing script loads asynchronously after the CMP resolution, which can cause a brief delay before variant assignment. For most modern testing tools, this delay is under 50 milliseconds and has no meaningful effect on experiment results. The flicker risk, however, is real and should be measured in your specific implementation.

Testing on Logged-In Users

For SaaS products and e-commerce platforms where a significant portion of experiment traffic consists of authenticated users, a different set of options is available. Logged-in users have already provided consent as part of account creation, and their user identifier can serve as the basis for experiment assignment without any cookie dependency.

Server-side assignment based on user ID hash is both technically clean and GDPR-compliant, provided the user's consent at account creation explicitly covered product experimentation. This is worth verifying in your terms of service and privacy policy: does your account consent flow include a disclosure that you use behavioral experimentation to improve the product? If not, adding one is a low-effort compliance improvement.

Experiment assignment by user ID also enables more sophisticated testing designs, including holdout groups that persist across feature releases and multi-cell experiments that would be impractical with session-scoped cookies.
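The user-ID approach, including a persistent holdout group, can be sketched as below. The key strings and the 10% holdout fraction are assumptions for the example; keying the holdout on the user ID alone (not the experiment) is what makes it persist across feature releases.

```typescript
import { createHash } from "node:crypto";

// A global holdout keyed only on the user ID: membership is stable
// across every experiment and feature release.
function isInHoldout(userId: string, holdoutFraction = 0.1): boolean {
  const digest = createHash("sha256").update(`global-holdout:${userId}`).digest();
  return digest.readUInt32BE(0) / 0xffffffff < holdoutFraction;
}

// Assign authenticated users by hashing the user ID with the experiment key.
// Holdout members return null and always see the default experience.
function userVariant(
  userId: string,
  experimentKey: string,
  variants: string[],
): string | null {
  if (isInHoldout(userId)) return null;
  const digest = createHash("sha256")
    .update(`${experimentKey}:${userId}`)
    .digest();
  return variants[digest.readUInt32BE(0) % variants.length];
}
```

No cookie is involved at any point: the assignment is recomputable from data the server already holds under the account relationship.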

What to Audit Right Now

If you are running A/B tests on a site with European traffic, the following audit steps will surface most common compliance gaps in under an hour. Open your site in a browser with developer tools, clear all cookies, and load a page without interacting with the consent banner. Check the Cookies panel — if any cookies with the pattern of a testing tool identifier have been set before consent, you have a compliance issue that needs immediate remediation.
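For the cookie check, a quick filter over `document.cookie` can flag common testing-tool identifier patterns. The patterns below are examples drawn from widely known tools and may be outdated; verify against your own vendor's documented cookie names.

```typescript
// Flag cookie names matching known testing-tool identifier patterns.
// Run before interacting with the consent banner: any match set at that
// point indicates pre-consent tracking.
function suspectTestingCookies(
  cookieHeader: string, // pass document.cookie in the browser
  patterns: RegExp[],
): string[] {
  return cookieHeader
    .split(";")
    .map((c) => c.trim().split("=")[0])
    .filter((name) => name.length > 0 && patterns.some((p) => p.test(name)));
}

// Example patterns for some well-known tools (verify against vendor docs):
const knownPatterns = [/^optimizely/i, /^_vwo/i, /^_gaexp/];

// In the browser console:
//   suspectTestingCookies(document.cookie, knownPatterns)
```

An empty result is necessary but not sufficient: server-set cookies and localStorage entries should be checked in the same pass.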

Second, check your Data Processing Agreement with your testing vendor for the provisions described above. If the DPA does not address cross-customer data aggregation explicitly, contact your vendor's legal team for written clarification before your next data protection audit.

Third, review your Privacy Policy. It should disclose your use of A/B testing technology, the type of data processed, the legal basis for processing, and the retention period. Many privacy policies describe analytics cookies but omit the specific mention of experimentation platforms.

Running experiments on European traffic is not incompatible with GDPR compliance. It requires a technical architecture that respects the regulation's requirements and a vendor relationship structured to maintain accountability. The tools to do this correctly exist — the work is in implementing them deliberately rather than defaulting to out-of-the-box configurations designed for US-first markets.

GDPR-compliant testing from day one

Webyn is built for European traffic. EU-only infrastructure, no cross-customer data aggregation, and a Data Processing Agreement designed to satisfy CNIL, ICO, and APD requirements.

Talk to Our Team