Data as Leverage
The Hidden Cost of the Data Economy
Every day, billions of people generate enormous amounts of data without thinking twice. A search query, a message, a purchase, a location logged almost by accident when an app opens. It all accumulates. And that digital trail, analyzed at industrial scale, has become one of the most valuable resources in the modern economy. Most people have only a vague idea of what gets collected, what it’s used for, and what it actually means that a handful of companies know this much about this many people.
The Scale of Collection
What gets collected today goes far beyond basic demographics. We’re talking about something close to a complete mirror of a person’s life: who they talk to and how often, where they go throughout the day, what they buy, what they read, how long they spend on any given thing. Heart rate. Sleep patterns. Voice. In some cases, gait.
Individually, most of this feels harmless. Aggregated, it tells an extraordinarily detailed story. Systems exist today that can infer someone’s mood, political leanings, or financial situation before that person has consciously articulated any of those things. It’s a portrait that sometimes knows the individual better than the individual knows themselves.
The Power Asymmetry
The relationship between people and the companies collecting their data is structurally lopsided. On one side: teams of data scientists, massive infrastructure, models refined over years. On the other: a cookie banner and a privacy policy that runs to forty pages and that nobody reads.
This asymmetry matters because data is leverage. Whoever holds it can predict preferences with disconcerting accuracy, craft messages designed to influence purchasing decisions or political opinions, and in some contexts make automated decisions about access to credit, employment, or services. When one party knows the other this well, the capacity for manipulation exists even when it isn’t being exercised deliberately. That alone changes the dynamic.
Beyond Individual Harm
It’s tempting to frame this as a problem of individual incidents: the breach that exposes personal records, the stolen identity. But the deeper risks are systemic. Understanding how populations respond to certain kinds of messages makes it possible to run sophisticated influence campaigns that corrode public debate. No conspiracy required. Optimization is enough.
There’s another effect that gets less attention. Surveillance changes behavior. When people know or suspect they’re being watched, they self-censor. That flattens the diversity of opinion and reduces the willingness to experiment or dissent. At a collective level, that’s not trivial. And the compounding advantages that flow to whoever controls these data streams create winner-take-all dynamics that entrench the same actors year after year.
The “Nothing to Hide” Fallacy
“I have nothing to hide” is probably the most common argument in this space, and the easiest to take apart. Privacy doesn’t work like a secret you’re keeping. It works like context. The same information can be completely appropriate in one setting and genuinely harmful in another. What you share with a doctor, a close friend, or a partner is not information that should be automatically accessible to an employer or an insurer.
There’s something else the argument misses. Data that seems harmless today can become dangerous under different political or social conditions. History offers plenty of examples. Privacy protects people from shifts in power that haven’t happened yet. Giving it up because the present feels stable is a bet that the future will always look the same.
Regulation and Its Limits
Europe’s GDPR and California’s CCPA are genuine steps forward. They’ve created real rights and imposed fines that have, in certain cases, actually stung. That deserves acknowledgment. But regulation has clear structural limits.
Enforcement agencies typically operate with resources nowhere near the scale of the problem they’re overseeing. The consent model we have, with its endless banners and multi-page policies, doesn’t produce real consent. It produces fatigue and reflexive clicking. Companies with global operations have room to structure themselves in ways that reduce regulatory exposure. And the most telling problem is timing: by the time a regulator manages to act on a specific practice, that practice has usually been entrenched for years.
Technical Approaches
Technology created a large part of this problem and can help contain it, though nobody should mistake that for a complete solution. Techniques like differential privacy and homomorphic encryption make it possible to draw useful conclusions from datasets without exposing any individual’s information. Data minimization, collecting only what’s strictly necessary, reduces the attack surface. And end-to-end encryption, which platforms like Signal have offered by default for years, means that even the service provider can’t access the content of communications.
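To make one of these ideas concrete, here is a minimal sketch of the simplest building block of differential privacy, the Laplace mechanism, applied to a counting query. The dataset, predicate, and epsilon values are illustrative assumptions, not a recipe; real deployments track a cumulative privacy budget across queries and rely on audited libraries rather than hand-rolled noise.

```python
import numpy as np

def private_count(values, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1: adding or removing one person's record
    changes the true answer by at most 1, so Laplace noise with scale
    1/epsilon is enough to satisfy epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative data: ages pulled from a hypothetical user log.
ages = [23, 45, 31, 52, 38, 61, 29, 44]

# Smaller epsilon means more noise and a stronger privacy guarantee.
for epsilon in (0.1, 0.5, 2.0):
    noisy = private_count(ages, lambda a: a > 40, epsilon)
    print(f"epsilon={epsilon}: noisy count = {noisy:.1f} (true count is 4)")
```

The design choice worth noticing is that the protection comes from calibrated randomness rather than access control: the analyst still gets a useful aggregate, while any single person’s presence in the data remains statistically deniable.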
These approaches expand what’s technically possible. Decentralized systems that keep data under user control point toward a different model entirely. None of them solve the problem on their own, but they shift what’s achievable and, over time, what gets considered acceptable.
Individual and Collective Action
The problem is structural, and the response has to be too. Individual actions still matter. Reducing your digital footprint, using encrypted messaging, choosing tools with better privacy practices. The catch is that these options demand a level of technical knowledge, and a willingness to give up convenience, that puts them out of reach for most people.
Some argue the market will eventually reward companies that handle user data responsibly. I’m skeptical of that. The economic incentives behind surveillance remain powerful, and companies built on data extraction aren’t going to change their model voluntarily. The ones that prioritize privacy tend to remain niche players, without the resources that accumulate on the side of the market financed by behavioral advertising. Regulatory pressure has achieved more than market forces, and even that has moved slowly and unevenly.
Where This Leaves Us
Data protection is, at its core, a question about what kind of society we want to live in. There’s a real choice between a world where algorithmic prediction gradually erodes human autonomy and one where technology is designed around personal dignity. That’s a political and cultural argument, not just a legal one.
Changing the trajectory isn’t easy. The incentives sustaining the surveillance system are robust, and the people who benefit from it have considerable resources to defend it. Those working for privacy are outfunded and outgunned. Regulatory capture is a genuine risk.
The trajectory isn’t fixed, though. The GDPR, for all its limitations, demonstrated that regulation can impose real costs on those who ignore it. End-to-end encryption has spread far enough that certain forms of surveillance are now genuinely harder to carry out. Public awareness has grown. These are modest advances against significant headwinds, but they suggest that sustained pressure can shift things. The question is whether that pressure can be maintained.