The End of Online Anonymity
The Quiet Consensus
Across the UK, the EU, and beyond, governments are converging on a similar vision: an internet where everyone is identified and verified. The stated justifications vary (protecting children, fighting terrorism, preventing fraud) but the destination is the same. Online anonymity, once a default of digital life, is being systematically dismantled.
I find the speed of this shift less remarkable than the silence surrounding it. Where is the public debate? Where are the referendums? Decisions that will shape how billions of people communicate are being made in regulatory offices and behind closed committee doors. Most citizens will only notice when they are asked to scan their face to access a website.
The UK’s Online Safety Act
In July 2025, the UK’s Online Safety Act began enforcement of its age verification provisions. The law requires any platform hosting content deemed harmful to children (not only pornography but also content related to self-harm, eating disorders, or suicide) to implement “robust” age checks. Ofcom, the regulator, can fine non-compliant services up to £18 million or 10% of global turnover, whichever is higher, and can block access to services entirely.
The range of affected platforms is broader than most anticipated. Reddit, Bluesky, Discord, X, Spotify, and major dating apps have all implemented age verification systems in response. These typically involve submitting government ID or biometric facial scans to third-party verification services like Persona or Yoti. Some users discovered they could bypass photo-based checks using images from video games, which says something about the reliability of these systems. More significantly, VPN downloads in the UK surged immediately after enforcement began. A petition to repeal the law gathered over 500,000 signatures and was debated in Parliament.
The Wikimedia Foundation launched a judicial review against Wikipedia’s potential designation as a “Category 1” service, which would subject it to the strictest requirements, including potential identity verification of editors. The foundation warned that this would compromise the site’s open editing model and invite state manipulation. It lost the initial challenge but signaled it would restrict UK access rather than comply. The question of whether an encyclopedia’s editors must verify their identity captures something important about where this is heading.
The Act also contains provisions requiring platforms to scan encrypted messages for child abuse material and terrorism content, though the government stated it would not enforce this until it becomes “technically feasible.” This is a polite fiction. What they mean is that the power exists in law, waiting to be activated when political conditions allow. Signal and WhatsApp have stated they would exit the UK market rather than compromise encryption. The provision remains on the books regardless.
I want to be clear about what this means. A government has written into law the power to read your private messages. They are simply waiting. The fact that they have not yet used it makes it more dangerous, because the infrastructure of control is being assembled quietly, piece by piece, while we argue about whether it will actually be enforced.
The EU Digital Identity Wallet
The European Union is approaching mandatory identification from a different angle. Rather than forcing platforms to verify users, it is building the infrastructure for citizens to verify themselves. The EU Digital Identity Wallet, mandated by Regulation 2024/1183, requires every Member State to offer a digital wallet to citizens and residents by late 2026. By late 2027, a wide range of private services must accept it: banks, telecom providers, digital infrastructure, healthcare services, and also Very Large Online Platforms and “gatekeepers” under the Digital Markets Act.
The wallet stores government-issued person identification data alongside “electronic attestations of attributes” like professional qualifications, driving licenses, or residence status. The Commission presents this as user-controlled and privacy-preserving. Users can choose what to share.
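The Commission’s “choose what to share” framing describes what is usually called selective disclosure. A toy sketch of the idea, in Python, with entirely hypothetical attribute names and structure (the real wallet uses signed attestations, not a plain dictionary):

```python
# Hypothetical wallet contents: government-attested attributes held
# on the user's device. Names are illustrative, not from the regulation.
WALLET = {
    "given_name": "Alice",
    "date_of_birth": "1990-01-01",
    "age_over_18": True,
    "driving_licence": "B",
}

def present(requested: list[str], approved: list[str]) -> dict:
    # Only attributes that are both requested by the service AND
    # approved by the user leave the wallet; the rest stay local.
    return {k: WALLET[k] for k in requested if k in approved and k in WALLET}

# A platform asks for name and age; the user approves only the age flag.
shared = present(requested=["given_name", "age_over_18"],
                 approved=["age_over_18"])
# shared == {"age_over_18": True}
```

The mechanism is genuinely privacy-preserving in isolation; the essay’s argument is about what happens to the “approved” step once acceptance of the wallet becomes mandatory.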
I am skeptical of this framing. In practice, once the infrastructure exists and acceptance becomes mandatory, the choice narrows considerably. A bank must accept the wallet. A social media platform must accept it if users request authentication through it. What begins as voluntary tends to become expected, then required for full participation. We have seen this pattern before. Email was optional until it was required for employment. Smartphones were luxuries until they became prerequisites for banking. The “choice” to use digital identity will follow the same trajectory: technically voluntary, practically mandatory.
The 2030 Digital Decade target is explicit: 80% of EU citizens using a digital ID solution. This is a policy goal, and governments do not set goals they do not intend to pursue. The wallet is positioned as convenience (no more remembering passwords, easy cross-border services) but convenience is how mandatory systems are introduced. You can still pay with cash, in theory. Try buying a plane ticket with it.
Chat Control
The most aggressive EU proposal, the Child Sexual Abuse Regulation (CSAR), commonly called Chat Control, would mandate that platforms scan all private messages for child sexual abuse material. This includes end-to-end encrypted communications, which would require either breaking encryption or implementing client-side scanning before encryption occurs. Neither option preserves the privacy that encryption is meant to provide.
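What “client-side scanning before encryption” means architecturally can be shown in a few lines. This is a deliberately minimal sketch, not any proposed implementation: real schemes use perceptual hashes (PhotoDNA-style), not SHA-256, and the function names here are invented.

```python
import hashlib

# Hypothetical blocklist of known-content hashes, pushed to the device
# by the provider. SHA-256 is purely illustrative; real proposals use
# perceptual hashing, which is fuzzier and hence error-prone.
BLOCKLIST = {"2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"}

def send_message(plaintext: bytes, encrypt, report) -> bytes:
    # The scan runs on the sender's device BEFORE encryption:
    # end-to-end encryption still happens, but the content has already
    # been checked against a list the user cannot see or audit.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        report(digest)          # flagged to the provider/authority
    return encrypt(plaintext)   # ciphertext goes out either way

flagged = []
# Toy "encryption" (byte reversal) stands in for a real E2EE layer.
ciphertext = send_message(b"foo", encrypt=lambda m: m[::-1],
                          report=flagged.append)
```

The point the sketch makes concrete: the encryption step is untouched, yet the confidentiality guarantee is gone, because inspection happens upstream of it.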
The European Parliament’s own impact assessment concluded that no current technology can detect CSAM without unacceptably high error rates affecting legitimate communications. The Council of the EU’s legal service warned that generalized scanning of all citizens’ communications violates fundamental rights to privacy. The European Court of Human Rights ruled in February 2024 that requiring weakened encryption “cannot be regarded as necessary in a democratic society.”
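Why “unacceptably high error rates” follow from even an accurate scanner is a base-rate problem. The numbers below are illustrative assumptions, not figures from any official assessment, but the structure of the arithmetic holds for any plausible values:

```python
# Illustrative assumptions only (not from the impact assessment):
messages_per_day = 10_000_000_000   # assumed EU-wide daily message volume
prevalence       = 1e-6             # assumed fraction of messages that are CSAM
false_pos_rate   = 0.001            # a scanner with 99.9% specificity
true_pos_rate    = 0.90             # and 90% sensitivity

actual    = messages_per_day * prevalence                  # real offending messages
caught    = actual * true_pos_rate                         # correctly flagged
false_pos = (messages_per_day - actual) * false_pos_rate   # innocent messages flagged

# Because innocent messages vastly outnumber offending ones,
# flags are overwhelmingly false alarms.
precision = caught / (caught + false_pos)
```

With these assumptions, roughly ten million legitimate messages are flagged per day against about nine thousand real ones; well under 0.1% of flags are genuine. This is the mechanism behind the legal service’s objection, independent of how good the classifier gets.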
Despite this, the proposal has not been abandoned. Denmark, assuming the Council presidency in 2025, gave it “high priority.” Although mandatory scanning was eventually walked back after opposition from Germany, the Netherlands, Poland, and Austria, new compromise texts continue to emerge. Patrick Breyer, a former German MEP who has tracked the legislation closely, described the latest version as “Chat Control 2.0 through the back door,” with language obliging providers to take “all appropriate risk mitigation measures” that could be interpreted as requiring scanning anyway.
The pattern is instructive. When a proposal faces sufficient opposition, it retreats and returns in modified form. The destination remains the same; only the path changes. I have watched this cycle repeat for years in surveillance policy. The same goals, the same beneficiaries, the same disregard for opposition. They simply wait for exhaustion or distraction, then try again.
The Child Protection Justification
These initiatives share a common justification: protecting children. This is effective politics. Few people will publicly oppose measures framed as protecting minors from exploitation. Anyone who raises concerns about privacy or civil liberties is implicitly positioned as indifferent to child abuse. I have been in conversations where merely raising these concerns draws suspicion. The rhetorical trap is deliberate.
Let me be direct. I am not dismissing the reality of online harms to children. Child sexual abuse material is a genuine problem and its distribution has scaled with digital technology. Platform design that exploits young users’ psychology for engagement is real. Age-inappropriate content reaching minors is real. These are legitimate concerns that warrant serious responses. But when the solution to a specific problem is universal surveillance of everyone, I start asking who actually benefits.
The question is whether universal identification of all internet users is a proportionate or effective response. Nothing in the UK’s age verification regime prevents a determined minor from using a VPN. The systems generate false positives that flag legal communications while determined bad actors adapt. The surveillance infrastructure, once built, can be repurposed. Powers granted to fight child abuse have historically expanded to other categories of content deemed harmful.
What the child protection framing does most effectively is shift the burden of proof. Critics must explain why protecting children is not worth some loss of privacy. This is backwards, and I suspect intentionally so. Given the magnitude of what is being proposed (the end of anonymous digital communication) proponents should demonstrate that their approach will actually achieve its stated goals without disproportionate harm. They have not done this. They cannot do this, because the evidence does not support their case. So they rely on emotional appeals instead.
The Structure of the Shift
What we are witnessing is structural transformation. The individual pieces (age verification in the UK, digital wallets in the EU, encrypted scanning proposals) are connected by a shared logic. Each normalizes the principle that participation in digital life requires identification. Each creates infrastructure that subsequent policies can leverage. Each is justified by appeals to safety that make opposition politically costly.
This happens through accretion rather than proclamation. No government announces it is ending online anonymity. Instead, you must verify your age to view certain content. Then you must verify to post. Then platforms must verify all users. Then verification must use government-issued credentials. Each step seems modest; the cumulative effect is transformation. If any government had proposed this system in full from the start, there would have been public outrage. By fragmenting it into small steps, each appearing reasonable in isolation, they achieve what would otherwise be politically impossible.
The private sector is complicit because cooperation simplifies compliance and reduces liability. But I think this undersells its active role. Platforms would rather verify users than face regulatory fines, yes. But large platforms also benefit from verification requirements that smaller competitors cannot afford to implement. They are using regulatory power to eliminate competition. The verification companies lobbied for these requirements: they helped create a problem, then sold themselves as the solution.
Who Benefits
Follow the money. Follow the power. The beneficiaries of mandatory identification are always the same: those who already have resources and want more control.
Governments gain surveillance capacity they have sought for decades. The dream of a legible, trackable citizenry predates the internet; digital technology makes it achievable. Law enforcement agencies have lobbied consistently for these powers, and they are obtaining them. Their interest in child protection is real, but secondary. The primary interest is control, and child protection provides the political cover.
Large platforms benefit from regulatory regimes they have the resources to implement while smaller competitors struggle. Compliance costs function as barriers to entry. When Reddit or Discord can afford integration with verification services and legal teams to handle the requirements, they survive. Forums and small communities cannot. The UK has already seen cycling enthusiast forums and non-profit hosting services shut down rather than face compliance costs they cannot absorb. This consolidation is a feature, not a bug. Concentrated markets are easier to regulate and more profitable for those who remain.
Verification service providers like Yoti and Persona have captured a new market created by regulation. The more expansive the verification requirements, the larger their addressable market. They did not discover this market; they manufactured it through lobbying and then positioned themselves to profit from the result. I struggle to view this as anything other than a racket.
Those who lose are predictable too: they are the people without institutional power. Anonymous speech has historically protected whistleblowers, dissidents, abuse survivors, and anyone whose safety depends on not being identified to those with power over them. Teenagers exploring their sexuality in jurisdictions hostile to LGBTQ people. Citizens criticizing their governments. Workers organizing against employers. The value of anonymity is diffuse and its beneficiaries cannot lobby effectively for it. They do not have money to donate to campaigns or lawyers to write amicus briefs. They just need to be left alone, which is precisely what these systems will not permit.
What Remains
I am not going to pretend this trajectory can be easily reversed. The coalition supporting these measures (governments, large platforms, verification companies) is well-funded and institutionally powerful. Privacy advocates are outspent by orders of magnitude. Public opinion is ambivalent at best; safety arguments resonate, and the costs of surveillance feel abstract until they are personally experienced. By the time people feel those costs, the infrastructure will already be in place.
What I can say is that the current moment is one of path dependency. Choices being made now will shape the internet for decades. Infrastructure built for age verification can serve other purposes. Identification requirements normalized for one category of content expand to others. The ratchet rarely moves backward. Once you have built the machinery for universal identification, you do not dismantle it because the political winds shift.
Those of us who think anonymous communication has value should be clear-eyed about what is happening. The framing around child protection makes opposition politically difficult. The incremental nature of implementation makes each step seem reasonable in isolation. The structural incentives favor identification over anonymity. We are watching a fundamental shift in the social contract of digital life, and it is happening largely without public deliberation about whether this is the internet we want. This is by design. Public deliberation is a risk they would rather not take.
I do not know how to reverse it. I am not sure it can be reversed. But I refuse to pretend this is progress or that the people pushing these systems have our interests at heart. What is being built is a cage, and the bars are being installed while we are told they are for our protection.
I think it is worth naming what is being lost, even if naming it cannot save it.