AI and the Illusion of Neutrality
The Myth of the Objective Machine
There is a persistent idea that AI is neutral. Objective. Free from the messiness of human prejudice. The implicit promise is that algorithms, unlike people, just crunch numbers and spit out the mathematically optimal answer. I find this deeply misleading, and a little dangerous. Every AI system is the product of human choices: what data goes in, what gets optimized for, what content gets flagged or removed, and whose money is paying for all of it. The question of who controls these systems is inseparable from the question of what they will become.
The Geopolitics of AI Development
Training large language models has quietly become a matter of national strategy. In the United States, companies like OpenAI, Google, and Meta have driven development, while their relationships with government have grown steadily closer and less formal. When a major AI company’s owner has a seat at the table with the incoming administration, the boundary between private enterprise and state apparatus starts to look like a technicality.
France positioned Mistral partly as a sovereignty play, making explicit what most prefer to leave implicit: building your own model means not having to trust someone else’s. China’s approach with DeepSeek is even blunter about state alignment. What this fragmentation along national lines actually reveals is that there is no neutral AI. There never was. Every model reflects the priorities, red lines, and funding interests of whoever built it.
Bias as a Feature
When a language model trains on internet text, it absorbs the biases encoded in that text. This is not a malfunction. The biases are structural, reflecting existing concentrations of power and historical inequalities. When human moderators apply content policies during training, their values get baked into the model’s behavior. When topics get flagged as off-limits, the model learns to sidestep or distort those discussions in ways that serve someone’s interests, even if that isn’t the stated intention.
This happens at several levels at once. Training data skews toward dominant perspectives. Optimization objectives favor commercial viability. Reinforcement learning from human feedback encodes the preferences of annotators, many of them workers in Kenya or the Philippines hired through outsourcing firms, doing repetitive, often disturbing work for low wages. And safety measures end up protecting the powerful at least as often as they protect users. Cumulatively, these systems actively shape what gets said online: amplifying some voices, muffling others, normalizing assumptions that go unexamined precisely because they feel so natural.
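The claim that annotator preferences get baked in is not a metaphor. In a typical RLHF pipeline, the reward model is trained directly on which of two responses an annotator preferred, and the language model is then tuned to maximize that learned reward. The sketch below uses the common pairwise (Bradley-Terry) formulation as an illustration; the variable names and the exact setup are assumptions for clarity, not any particular lab’s implementation.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry style) reward-model loss.

    reward_chosen / reward_rejected are the reward model's scores for the
    response the annotator preferred and the one they rejected. Minimizing
    this loss pushes the model to score annotator-preferred responses higher,
    so the annotators' judgments literally become the optimization target.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```

Whatever systematic leanings the annotator pool has, for whatever reason they have them, flow straight through that loss into the model’s behavior.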
The Red Lines Question
Every AI system has things it won’t touch. Topics it won’t engage, positions it won’t take. These restrictions are usually framed as safety measures, and some of them genuinely are. Nobody seriously objects to an AI refusing to help synthesize nerve agents. The problem is that the line between safety and censorship is blurry, and the people drawing it aren’t accountable to you.
When a chatbot declines to discuss certain political events, you’re entitled to ask whether that’s protection or suppression. When it consistently frames economic questions in a particular register, that’s not neutrality, that’s a choice someone made. The frustrating part is the opacity. Users don’t receive a disclosure telling them which topics the system avoids or which perspectives it won’t represent. They just get the outputs, shaped by constraints they can’t see and had no part in setting.
The Environmental Cost of Redundancy
There’s an environmental dimension to all this that doesn’t get nearly enough attention. Training a GPT-4-scale model requires electricity equivalent to roughly a thousand homes’ annual usage and produces thousands of metric tons of carbon. That cost gets paid again every time a company or government decides they need their own model for competitive or strategic reasons.
The result is dozens of competing systems trained in parallel, each with a massive footprint. Data centers for AI consume enormous amounts of electricity, much of it still generated from coal or gas, and they need substantial water for cooling. One estimate puts global AI water consumption on track to reach 6.6 billion cubic meters by 2027. The obvious question, whether shared infrastructure would be more rational than every actor duplicating the same compute at massive environmental cost, rarely enters the conversation. The current trajectory just treats those costs as someone else’s problem.
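For a sense of where figures like the ones above come from, here is a minimal back-of-envelope sketch. Every input is an illustrative assumption chosen to sit in the commonly cited range, not a measurement from any specific model or grid.

```python
# Back-of-envelope footprint estimate. All constants are assumptions.
TRAINING_ENERGY_MWH = 10_000       # assumed order of magnitude for one frontier-scale training run
HOUSEHOLD_USE_MWH_PER_YEAR = 10.5  # rough average annual electricity use of a US home
GRID_INTENSITY_T_PER_MWH = 0.4     # assumed metric tons of CO2e per MWh of grid electricity

household_years = TRAINING_ENERGY_MWH / HOUSEHOLD_USE_MWH_PER_YEAR
emissions_tonnes = TRAINING_ENERGY_MWH * GRID_INTENSITY_T_PER_MWH

print(f"~{household_years:,.0f} household-years of electricity")  # roughly a thousand
print(f"~{emissions_tonnes:,.0f} metric tons CO2e")               # several thousand
```

The point is less the exact numbers, which vary widely by model and grid, than the multiplier: whatever one training run costs, the current approach pays it again for every redundant run.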
Concentration and Dependency
The resource requirements for frontier AI effectively function as a market barrier, concentrating capability in a handful of actors. Only the largest companies and the wealthiest states can afford to train at the frontier; everyone else becomes a user of systems they don’t control and can’t audit.
Countries without domestic AI capability depend on foreign systems that may embed values or interests misaligned with their own populations. Smaller companies depend on providers who might be direct competitors, or who operate under political pressures they don’t share. Individual users interact with systems whose biases they have no real way to identify. Open-source models offer some counterweight here, though even running them at scale requires significant infrastructure. The playing field is not level, and the actors with the most compute have the most influence over what gets built, what it values, and who it ultimately serves. This is not a new pattern. It is what usually happens when critical infrastructure concentrates.
Toward Accountability
Getting past the myth of neutral AI means treating these systems as what they actually are: artifacts of human decisions and power structures. That starts with demanding real transparency about training data, optimization objectives, and content policies. Users deserve to understand what assumptions are embedded in the tools they rely on.
The standard next step is governance: regulation, oversight, accountability mechanisms. I am genuinely skeptical of how much traction that gets in practice. The companies developing frontier models have substantial resources to shape regulation, and they are already using them. The technical complexity involved makes meaningful oversight hard even for regulators who are trying. Jurisdiction-shopping is easy when international coordination is weak. And the people who understand these systems well enough to oversee them effectively often work for the companies being overseen.
I don’t have a clean answer. These problems are considerably easier to name than to fix. What I’m fairly confident about is that the current trajectory concentrates power in ways that should worry anyone who cares about accountability, regardless of where they sit politically. A small number of companies and governments are making decisions that will shape the information environment for decades, with limited public input and limited oversight. Whether that’s changeable, and how, I honestly don’t know.
What Remains
The choices being made right now will matter. Accepting the current distribution of power by default is itself a choice. Engaging is hard and might not work. It’s still the only real alternative to letting these decisions be made entirely by the people who already have the most power over AI development.