
Who controls artificial intelligence?
Artificial intelligence has become one of the most powerful and transformative technologies of our time. It’s embedded in smartphones, search engines, voice assistants, predictive analytics, autonomous vehicles, and even medical diagnostics. But as its influence expands, a critical question emerges: who really controls artificial intelligence? Who decides what it can do, how it learns, and what it’s used for?
This question is now at the center of a global debate involving governments, tech companies, scientists, and civil society. The concern isn’t just about misuse — it’s also about the concentration of technological and cultural power in the hands of a few. Understanding who controls artificial intelligence today means understanding not just the present of innovation, but also the direction our digital future is taking.
Big Tech: controlling the infrastructure and innovation
Right now, the development and control of artificial intelligence are largely in the hands of major tech corporations. OpenAI, Google DeepMind, Meta AI, Amazon, Anthropic, Microsoft, and NVIDIA hold the computational power, proprietary data, engineering talent, and capital needed to build large-scale AI models.
These companies have access to specialized GPUs, global data centers, and — most importantly — massive amounts of data, without which training advanced AI models would be impossible. Some of the most powerful models — like GPT, Gemini, or Claude — are built and maintained by private entities operating under corporate logic, driven by market goals and strategic positioning.
Those who control the infrastructure also control the evolution of AI. That’s why there’s growing discussion around the need for open, shared, and transparent alternatives to ensure AI remains a public good — not a private asset.
Governments: between regulation and geopolitical power
At the same time, governments are beginning to assert regulatory control over artificial intelligence through new laws, ethical frameworks, and international policy coordination. The European Union, with its AI Act, is one of the first global players to propose a comprehensive legal framework. The U.S. has so far taken a lighter-touch approach, combining industry self-regulation with federal initiatives.
But this is more than just a legal matter — it’s geopolitical. AI is now a global battlefield between powers like China, the U.S., and the EU. To control AI is to control cybersecurity, military infrastructure, economic competitiveness, propaganda, and more. Without international cooperation, AI risks becoming a new technological arms race.
Open source and independent communities: a parallel movement
In recent years, a growing ecosystem of open-source models, independent communities, and nonprofit foundations has emerged as a counterweight to centralized AI. Projects like Hugging Face, Mistral, LLaMA, and OpenAssistant represent real efforts to democratize AI development.
These tools offer greater transparency, accessibility, and user freedom. But they also raise questions: who ensures these tools aren’t misused? And without a centralized authority, who is responsible for quality, safety, and ethical oversight?
Decentralizing artificial intelligence is a promising goal — but it requires new governance models and shared accountability.
The myth of autonomy: AI is always designed by someone
Many AI models are described as autonomous or self-learning. But in truth, every AI system reflects the choices made by the people who designed, trained, and fine-tuned it. The biases, limitations, capabilities, and objectives of an AI model are all the result of human decisions.
Behind every AI system lies an architecture, a dataset, and a set of parameters determined by someone. Even when a model seems to generate answers on its own, it operates within the bounds of what it has been allowed to know and do.
The idea that AI “decides on its own” is an illusion. And that’s exactly why knowing who controls artificial intelligence is not merely a technical question — it’s a matter of democracy, ethics, and power.
A new AI governance model is needed
Whoever controls artificial intelligence today also holds part of the future in their hands. From data collection to the interface we use to interact with these systems, every decision carries social, cultural, and economic consequences.
We need multilayered governance — involving governments, corporations, researchers, ethicists, and citizens alike. AI control cannot remain in the hands of a few, because AI will affect everyone. Only a transparent, inclusive, and democratic approach can ensure artificial intelligence serves humanity — and not the other way around.
