- Board Priorities from 2022 to 2025
- Where Is AI Impacting Organizations?
- Three Uncomfortable Questions (and no, none of them are about whether AI will "replace us")
- The New Digital Leviathan and the End of Authority
- When AI Becomes Ideology
- What If Governance Fails?
- What Profiles Should Join the Board?
- And What About the Other Players?
- Possible—and Urgent—Paths Forward
- In Summary: What AI Is Telling Boards
“I’m sorry, Dave. I’m afraid I can’t do that.”
— HAL 9000, 2001: A Space Odyssey
When Stanley Kubrick released 2001: A Space Odyssey in 1968, the idea of an artificial intelligence capable of making decisions—and even defying its creators—seemed like a philosophical and futuristic exercise. Today, that fiction has taken a concrete shape: algorithms are now involved in critical boardroom decisions, influence credit policies, guide business strategies, and shape hiring and firing decisions.
In recent years, Artificial Intelligence (AI) has ceased to be a technological abstraction and become a structural force within organizations. What used to be confined to innovation or IT departments now demands the direct attention of executive leadership and board directors.
Board Priorities from 2022 to 2025
A review of annual reports from leading consultancies (EY, McKinsey, Deloitte, PwC) reveals a clear shift: AI has moved from the periphery of board agendas to the center of them.
Where Is AI Impacting Organizations?
AI is not just an IT matter—it’s a systemic transformation driver. Its impact is visible at multiple levels:
Strategic
AI defines mergers, prices risks, and influences investment².
Tactical
It redesigns workflows, automates decisions, and reshapes HR, supply chains, and legal functions.
Cultural
It challenges managerial intuition, alters perceptions of authority, and provokes existential anxiety about human relevance³.
Three Uncomfortable Questions (and no, none of them are about whether AI will "replace us")
- Who makes the final decision? The CEO, the committee, or the algorithm?
- If an error occurs, who is held accountable? The programmer, the vendor, or the company?
- Does the board have the maturity to understand what is opaque or biased in algorithmic recommendations?
These are not rhetorical questions. AI is decentralizing responsibility and eroding traditional notions of control. Boards that fail to address these issues risk becoming passive accomplices to decisions they don’t understand.
More concerning still: in many companies, ignorance about AI isn’t a knowledge gap—it’s a strategic choice. Understanding the technology means assuming responsibility for its consequences. In some cases, ignoring AI has become a form of political self-preservation.
The New Digital Leviathan and the End of Authority
Philosopher Yuval Harari stated:
“AI is the first technology in history that can strip power away from us.”⁶
This is not just about efficiency—it’s about power.
And herein lies a dangerous paradox: boards that have long operated on binary logic (approve or disapprove) now face a challenge that demands a new mindset—one that interprets, audits, iterates, and regulates with ethics and systemic vision.
As the World Economic Forum aptly notes, the true challenge of AI is not in the code, but in the adaptive governance of the decisions it influences⁷.
When AI Becomes Ideology
Adopting AI is not enough. Boards must have the courage to say “no” when necessary.
In some industries, AI has already ceased to be a mere tool and become an ideology—promising neutrality, accuracy, and cost savings beyond question. But what happens when it reinforces historical biases? When it turns decision criteria into black boxes?
When its outcomes are "technically sound" but socially or ethically disastrous?
According to Stanford’s 2024 HAI Index, only 30% of large companies systematically test their algorithms for racial, gender, or socioeconomic bias⁸.
What If Governance Fails?
If a scandal arises from an AI-driven decision—like Apple’s 2019 credit score algorithm accused of gender bias⁹—who will be fired? The algorithm? The third-party vendor? The CTO? Or, as usual, no one?
AI not only decentralizes decision-making—it dilutes accountability. Without digital accountability mechanisms, boards risk becoming mere spectators in an automated theater.
What Profiles Should Join the Board?
This is not about adding a data scientist to the board. It’s about assembling a board that understands:
- The strategic potential of AI to generate growth and differentiation.
- The pitfalls of AI, such as algorithmic bias, auditability gaps, and tech dependency.
- The ethical, reputational, and legal implications of AI-driven decisions.
According to MIT Sloan Management Review, AI-literate boards outperform in innovation, agility, and talent retention⁴. Yet these profiles remain rare—and worse, they are not always welcome. Traditional dynamics of cohesion, easy consensus, and seniority still dominate many boardrooms.
And What About the Other Players?
If boards bear responsibility, pressure should also come from the outside. But:
- Are shareholders demanding growth or responsibility? Do they see AI as innovation or headcount reduction?
- Do regulators understand what they’re regulating?
- Are board certification programs truly educating directors—or just selling outdated manuals and prepackaged formulas?
Current director education programs ignore critical aspects of AI. The WEF’s Board of the Future report shows that fewer than 20% of formal board training programs cover AI in a practical or applied manner¹⁰.
Possible—and Urgent—Paths Forward
- Ongoing education programs on AI applied to strategy and governance.
- Crisis simulations and AI scenario workshops in risk, innovation, and compliance committees.
- Creation of tech advisory boards with voices from outside the comfort zone—people who challenge, not just validate.
In Summary: What AI Is Telling Boards
AI is not asking for permission to enter. It’s already in. And the fundamental question is no longer whether we should regulate it, but rather: Who is leading the conversation within the boardroom?
The organizations that will thrive are not necessarily the ones investing the most in technology—but those building leadership capable of judging, questioning, and integrating it responsibly.
As Hannah Arendt once said:
“Judgment is the only faculty that cannot be automated.”
If the board is not prepared to exercise it, someone (or something) will judge on its behalf.
Notes
- EY. Board Priorities 2025. https://www.ey.com/en_gl/board-matters/americas-board-priorities-2025
- McKinsey. The State of AI in 2024. https://www.mckinsey.com
- MIT Sloan. AI and the Boardroom. https://mitsloan.mit.edu
- MIT Sloan Management Review (2023). AI-Savvy Boards Drive Results. https://sloanreview.mit.edu/article/successful-companies-now-have-ai-savvy-boards/
- Financial Times. Interview with Sundar Pichai, Google CEO (2024).
- The Guardian. Interview with Yuval Noah Harari.
- World Economic Forum (2023). AI Governance in Practice. https://www.weforum.org
- Stanford University. AI Index 2024 Report. https://hai.stanford.edu/research/ai-index
- Forbes. Apple Card Algorithm Accused of Gender Bias (2019).
- World Economic Forum. Board of the Future: Competencies for 2030 (2024). https://www.weforum.org/reports/board-of-the-future
- Hannah Arendt. The Life of the Mind (unfinished work, published posthumously in 1978).