Can machines think? Alan Turing opened his 1950 paper "Computing Machinery and Intelligence" with that question. Decades later, those same terms sit within a far more complex conceptual frame. Technological development has profoundly expanded and strained what we understand by machine, thought, and intelligence.

Today, if we understand "thinking" as the ability to process information, generate inferences, and produce autonomous outputs from data, we can say that machines already "think" in a functional sense. Another question is whether they understand, are conscious, or experience the world. But from an operational perspective, they perform cognitive tasks we historically associated only with humans.

The relevant question is no longer whether they can think, but how they should do it. And that question has not gone unanswered. Over the past decades, multiple principles have been formulated to guide the responsible development and use of artificial intelligence and to avoid judgment traps such as algorithmic complacency.

AI principles emerge precisely to establish an ethical and regulatory framework for the development, implementation, and use of these systems, both globally and within public and private organizations. They are not meant as a late correction applied once the technology is already deployed; they should be part of the process from the beginning.

The question is no longer whether machines can think, but how they should decide and with what accountability.

The international framework for responsible AI

What matters is not only that these principles exist, but that they come from very different contexts and still converge on similar ideas. International institutions, technical associations, governments, and major companies have reached remarkably aligned conclusions in the key documents and recommendations I reviewed.

In 2019, the OECD adopted the first intergovernmental recommendation on artificial intelligence, backed by a broad group of states. Its principles established that AI should promote inclusive growth, respect human rights, and be transparent, robust, and accountable. With this, AI stopped being only a technological issue and became an institutional one.

In 2021, UNESCO approved the first global Recommendation on the Ethics of Artificial Intelligence. This text expands the focus toward protecting human rights, cultural diversity, gender equality, and environmental sustainability. AI is understood here not only as technological infrastructure, but as a social phenomenon with impact on education, culture, and democratic cohesion.

This shift is key. We are not only talking about algorithmic efficiency, but about compatibility with democratic and legal frameworks.

From ethics to regulation

The European Commission took an additional step by moving the debate into the regulatory arena, first with its guidelines on "trustworthy AI" and later with the approval of the AI Act. The European Union thus turned principles into binding obligations graded by risk level.

Here we can see a leap in maturity. Principles stop being aspirations and become legal architecture. The greater the potential impact on fundamental rights, the stricter the requirements for traceability, documentation, oversight, and control. It is not just about declaring values, but about structuring responsibilities.

Design with principles, not afterthoughts

From the technical domain, the IEEE, through its Ethically Aligned Design initiative, introduced a particularly relevant idea: ethics cannot be added as a superficial layer at the end of development. It is not enough to audit a system once deployed or add post hoc controls. Responsibility must be integrated from the design of the technical architecture itself.

This approach, known as ethics by design, starts from a clear premise: technical decisions are also normative decisions. Data selection, metric definition, validation criteria, error tolerance thresholds, or oversight mechanisms are not neutral choices. They determine how the system behaves and, therefore, how it affects people.
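A minimal sketch in Python makes that premise tangible. Everything in it is invented (the scores, the groups, the thresholds); the only thing that changes between the two runs is the error tolerance threshold, yet the share of qualified applicants each group wrongly loses shifts unevenly.

# Hypothetical model scores for applicants who are in fact qualified.
# All numbers are illustrative only.
scores = {
    "group_a": [0.52, 0.61, 0.72, 0.80, 0.91],
    "group_b": [0.48, 0.55, 0.63, 0.71, 0.88],
}

def false_negative_rate(qualified_scores, threshold):
    """Share of qualified applicants the system would wrongly reject."""
    rejected = sum(1 for s in qualified_scores if s < threshold)
    return rejected / len(qualified_scores)

for threshold in (0.5, 0.7):
    print(f"threshold = {threshold}")
    for group, group_scores in scores.items():
        print(f"  {group}: FNR = {false_negative_rate(group_scores, threshold):.2f}")

At a threshold of 0.5, the two groups lose 0% and 20% of their qualified applicants; at 0.7, they lose 40% and 60%. Neither threshold is "correct" in a purely technical sense: choosing between them is choosing who bears the cost of the system's errors.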

IEEE emphasizes prioritizing human well-being as a core principle for autonomous and intelligent systems. That implies anticipating risks and assessing potential impacts before a system goes into production. It means recognizing that algorithmic and system design can reinforce or mitigate inequality, increase or reduce vulnerabilities, and generate or erode trust.

In this context, ethics stops being an external framework applied from outside and becomes a structural condition of design itself.

Let’s be realistic: maturity is not only about formulating general principles, but about translating them into concrete technical specifications. As in other fields, responsibility and design cannot be separated without weakening the system.

Corporate self-regulation

Major companies such as Google, Microsoft, and IBM have published their own responsible AI principles. Across them, fairness, safety, privacy, inclusion, and accountability appear repeatedly.

The discourse converges with international consensus. However, maturity depends on implementation. Publishing principles is relatively easy. Translating them into auditable processes and effective governance structures is much harder.

In broad terms, three levels reflect the normative and organizational evolution of artificial intelligence:

Level 1: Declarative

The organization publishes ethical principles or commitments. There is normative intent, but not necessarily clear mechanisms for application. This can lead to what some authors call ethics-washing, meaning ethical declarations that improve public image without substantially changing practices.

Level 2: Procedural

Principles are translated into concrete internal processes: review committees, audits, mandatory documentation, impact assessments, and systematic risk management. Here, ethics starts becoming operational.

Level 3: Regulatory

There are binding legal obligations, external oversight, and possible sanctions for non-compliance. The European AI Act currently represents one of the most advanced examples of this level.

The global trend points toward a transition from voluntary declarations to models where regulation and corporate governance complement and reinforce each other.

The double-edged nature of artificial intelligence

This leads me to think that the conversation about principles does not happen in a vacuum. Artificial intelligence does not only optimize business processes or improve medical diagnostics. It has also become a strategic tool in geopolitics and cybersecurity.

Systems capable of generating disinformation at scale, automating attacks, or exploiting vulnerabilities amplify the power of states and non-state organizations.

The same technology that helps detect threats can be used to design them. The same analytical capacity that improves efficiency can be used to influence democratic processes or destabilize critical infrastructure.

In today’s geopolitical scenario, with active conflicts and growing confrontation in both physical and digital domains, which are increasingly interconnected, this possibility is no longer hypothetical.

Companies like Astelia, which use artificial intelligence to improve cybersecurity, offer a recent example. Their proposal is not to generate endless lists of vulnerabilities, but to identify which ones can actually be exploited in a specific environment and to prioritize the response.

These types of solutions highlight two realities. On one hand, AI helps reduce uncertainty and better protect complex systems. On the other, it makes clear we are dealing with sensitive environments: critical infrastructure and networks exposed to increasingly automated attacks.

AI accelerates processes and reduces costs. But it does the same for those who use it to attack. The line between protection and offense is increasingly thin.

In this context, principles are not symbolic gestures. They are a necessity. The more capable the technology becomes, the more important it is to know who controls it, under what limits, and with what responsibility.

Human oversight as AI scales

In a recent interview, Sam Altman, CEO of OpenAI, suggested that AI systems could eventually perform functions associated with senior executives. Beyond the headline, what matters is not whether that happens exactly in that form, but what it implies for one of the most repeated principles across normative frameworks: meaningful human oversight.

If AI can analyze complex scenarios, optimize resources, coordinate strategic decisions, or propose action plans faster than an executive team, what does oversight actually mean?

Many regulatory frameworks, including the AI Act, insist on maintaining meaningful human oversight in high-risk systems. Does oversight mean validating every decision? Auditing outcomes afterward? Defining strategic limits beforehand? Or simply retaining the capacity to intervene in exceptional situations?

As AI climbs the functional hierarchy of organizations, oversight cannot be reduced to a symbolic signature or formal presence in the process. It must translate into real decision traceability, clear responsibility boundaries, and the actual ability to correct, modify, or stop the system when needed.

At this point, the principle stops being abstract. It becomes an organizational design problem.
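As an illustration of what that design problem could look like in code, here is a minimal sketch in Python. All names, thresholds, and decisions are hypothetical; the point is only that traceability, a named approver for high-impact decisions, and the ability to halt the system can be structural properties of the architecture rather than promises in a policy document.

import datetime

class OversightGate:
    def __init__(self, risk_threshold):
        self.risk_threshold = risk_threshold
        self.halted = False
        self.audit_log = []  # decision traceability: every event is recorded

    def halt(self, reason):
        # The actual ability to stop the system when needed.
        self.halted = True
        self._log("system_halted", reason)

    def submit(self, decision, risk_score, approver=None):
        if self.halted:
            self._log("rejected_while_halted", decision)
            return False
        if risk_score >= self.risk_threshold:
            # Responsibility boundary: above the threshold, nothing
            # proceeds without a named human approver on record.
            if approver is None:
                self._log("pending_human_approval", decision)
                return False
            self._log("approved_by_" + approver, decision)
            return True
        self._log("auto_approved", decision)
        return True

    def _log(self, event, detail):
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append((stamp, event, detail))

gate = OversightGate(risk_threshold=0.8)
gate.submit("reorder stock", risk_score=0.2)                      # auto-approved, logged
gate.submit("shut down production line", risk_score=0.95)         # held for a human
gate.submit("shut down production line", risk_score=0.95, approver="ops_lead")

A gate like this makes oversight auditable: who approved what, when, and under which risk level is a matter of record, not recollection.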

Back to Turing: how far should machines decide for us?

In his 1950 paper, Alan Turing set aside the metaphysical question of what thinking really means and proposed a more practical criterion: observe behavior. If a machine behaves in ways indistinguishable from a human under certain conditions, we can recognize a form of functional intelligence.

That shift was decisive. Turing was not trying to define consciousness, but to evaluate observable effects. And that approach remains relevant today. We do not regulate artificial consciousness. We regulate systems that classify, recommend, predict, and make decisions with real consequences.

The issue is no longer ontological, but normative. It is not about determining whether a machine "thinks" in a human sense, but about establishing under what conditions it can operate, with what limits, and under what responsibility.

That is where the debate on responsible AI reaches its true dimension. Maturity is not measured by the sophistication of ethical discourse, but by the ability to translate principles into technical architecture, effective regulation, and clear organizational structures.

Alan Turing, author of "Computing Machinery and Intelligence" (1950), where he posed the famous question: can machines think?

The challenge of applying AI principles

Defining principles is relatively easy. Operationalizing them is hard.

The real difficulty is not in claiming AI should be fair, but in determining what fairness means in a recommendation system. The problem, examined in detail, is not in demanding transparency, but in translating it into technical explainability that people can actually understand. At some point, we must stop merely discussing human oversight and start designing architectures where that oversight is effective rather than symbolic.
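To see why this is hard, consider one possible, deliberately narrow, reading of fairness for a recommender: exposure parity across provider groups. The Python sketch below, with invented data, tests only that reading.

from collections import Counter

# Hypothetical log: the provider group behind each recommendation served.
served = ["large_seller", "large_seller", "small_seller", "large_seller",
          "small_seller", "large_seller", "large_seller", "large_seller"]

def exposure_shares(groups):
    """Share of recommendation slots that went to each provider group."""
    counts = Counter(groups)
    return {g: n / len(groups) for g, n in counts.items()}

print(exposure_shares(served))
# {'large_seller': 0.75, 'small_seller': 0.25}

Exposure parity is only one candidate definition; equal opportunity, calibration, or individual fairness would demand different tests and can conflict with one another. That underdetermination is exactly why "be fair" is a starting point for engineering work, not a specification.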

The debate has therefore moved beyond philosophy and become structural. We are no longer discussing whether principles are needed. We are discussing how to integrate them into technical architecture, business models, and institutional governance.

If in 1950 the question was whether machines could think, today the question is different:

Do we know how far machines should be allowed to decide for us?