
When AI Enters the Boardroom: Who Controls Whom?


“I’m sorry, Dave. I’m afraid I can’t do that.”

— HAL 9000, 2001: A Space Odyssey

When Stanley Kubrick released 2001: A Space Odyssey in 1968, the idea of an artificial intelligence capable of making decisions—and even defying its creators—seemed like a philosophical and futuristic exercise. Today, that fiction has taken a concrete shape: algorithms are now involved in critical boardroom decisions, influence credit policies, guide business strategies, and shape hiring and firing decisions.

In recent years, Artificial Intelligence (AI) has ceased to be a technological abstraction and become a structural force within organizations. What used to be confined to innovation or IT departments now demands the direct attention of executive leadership and board directors.

Board Priorities from 2022 to 2025

A review of annual reports from leading consultancies (EY, McKinsey, Deloitte, PwC) reveals a clear shift: between 2022 and 2025, AI moved from a peripheral technology topic to a standing priority on board agendas.

Where Is AI Impacting Organizations?

AI is not just an IT matter—it’s a systemic transformation driver. Its impact is visible at multiple levels:

Strategic

AI shapes merger decisions, prices risk, and influences investment².

Tactical

It redesigns workflows, automates decisions, and reshapes HR, supply chains, and legal functions.

Cultural

It challenges managerial intuition, alters perceptions of authority, and provokes existential anxiety about human relevance³.

Three Uncomfortable Questions (and no, none of them are about whether AI will “replace us”)

  1. Who makes the final decision? The CEO, the committee, or the algorithm?
  2. If an error occurs, who is held accountable? The programmer, the vendor, or the company?
  3. Does the board have the maturity to understand what is opaque or biased in algorithmic recommendations?

These are not rhetorical questions. AI is decentralizing responsibility and eroding traditional notions of control. Boards that fail to address these issues risk becoming passive accomplices to decisions they don’t understand.

More concerning still: in many companies, ignorance about AI isn’t a knowledge gap—it’s a strategic choice. Understanding the technology means assuming responsibility for its consequences. In some cases, ignoring AI has become a form of political self-preservation.

The New Digital Leviathan and the End of Authority

Philosopher Yuval Noah Harari stated:

“AI is the first technology in history that can strip power away from us.”⁶

This is not just about efficiency—it’s about power.

And herein lies a dangerous paradox: boards that have long operated on binary logic (approve or disapprove) now face a challenge that demands a new mindset—one that interprets, audits, iterates, and regulates with ethics and systemic vision.

As the World Economic Forum aptly notes, the true challenge of AI is not in the code, but in the adaptive governance of the decisions it influences⁷.

When AI Becomes Ideology

Adopting AI is not enough. Boards must have the courage to say “no” when necessary.

In some industries, AI has already ceased to be a mere tool and become an ideology—promising neutrality, accuracy, and cost savings beyond question. But what happens when it reinforces historical biases? When it turns decision criteria into black boxes?

And what happens when its outcomes are “technically sound” but socially or ethically disastrous?

According to Stanford HAI’s 2024 AI Index Report, only 30% of large companies systematically test their algorithms for racial, gender, or socioeconomic bias⁸.

What If Governance Fails?

If a scandal arises from an AI-driven decision—like the 2019 Apple Card credit-limit algorithm accused of gender bias⁹—who will be fired? The algorithm? The third-party vendor? The CTO? Or, as usual, no one?

AI not only decentralizes decision-making—it dilutes accountability. Without digital accountability mechanisms, boards risk becoming mere spectators in an automated theater.

What Profiles Should Join the Board?

This is not about adding a data scientist to the board. It’s about assembling a board that understands:

  • The strategic potential of AI to generate growth and differentiation.
  • The pitfalls of AI, such as algorithmic bias, auditability gaps, and tech dependency.
  • The ethical, reputational, and legal implications of AI-driven decisions.

According to MIT Sloan Management Review, AI-literate boards outperform in innovation, agility, and talent retention⁴. Yet these profiles remain rare—and worse, they are not always welcome. Traditional dynamics of cohesion, easy consensus, and seniority still dominate many boardrooms.

And What About the Other Players?

If boards bear responsibility, pressure should also come from the outside. But:

  • Are shareholders demanding growth or responsibility? Do they see AI as innovation or headcount reduction?
  • Do regulators understand what they’re regulating?
  • Are board certification programs truly educating directors—or just selling outdated manuals and prepackaged formulas?

Current director education programs ignore critical aspects of AI. The WEF’s Board of the Future report shows that fewer than 20% of formal board training programs cover AI in a practical or applied manner¹⁰.

Possible—and Urgent—Paths Forward

  • Ongoing education programs on AI applied to strategy and governance.
  • Crisis simulations and AI scenario workshops in risk, innovation, and compliance committees.
  • Creation of tech advisory boards with voices from outside the comfort zone—people who challenge, not just validate.

In Summary: What AI Is Telling Boards

AI is not asking for permission to enter. It’s already in. And the fundamental question is no longer whether we should regulate it, but rather: Who is leading the conversation within the boardroom?

The organizations that will thrive are not necessarily the ones investing the most in technology—but those building leadership capable of judging, questioning, and integrating it responsibly.
