When AI “Replaces” Human Judgment: Lessons from Albania for Today’s Leaders

September 2025. Albania startled the world with a move that sounded more like science fiction than governance. The government introduced Diella, an artificial intelligence system, as its new “minister” for public procurement.

Image: Diella

On paper, the logic was flawless: machines don’t take bribes, don’t tire, and don’t bend under political pressure. Entrust procurement to an algorithm and you eliminate corruption at its root.

But beneath the headlines, the decision triggered a deeper debate — one that resonates far beyond Tirana’s parliament: If AI can replace human judgment in public contracts, can it also replace leaders in business? And if so, what happens to leadership itself?

The Temptation: Letting AI Decide for Us

For executives and team leaders, the temptation is easy to recognize. Today’s organizations are drowning in data, constrained by time, and pressured by relentless change. AI promises a seductive solution: let the machine decide.

This temptation is reinforced by real evidence. In one large-scale field study of customer-support agents, generative AI boosted worker productivity by 14% on average and by more than 30% among less experienced employees. For routine, information-heavy work, algorithms can indeed outperform humans. Source: NBER, 2023.

It is a short step from there to imagining that AI should also take over decision-making in management — whether choosing vendors, allocating budgets, or even shaping team strategies.

The Danger: When Leaders Abdicate Responsibility

Yet here lies the danger. Algorithms process information, but they do not carry accountability. They cannot stand before a team to explain why a difficult trade-off was made. They cannot balance efficiency against fairness, speed against long-term trust.

When leaders hide behind AI, two risks emerge:

  1. Loss of legitimacy. Teams can sense when decisions are reduced to “the system said so.” That erodes trust in leaders, because leadership is about judgment, not delegation to code.
  2. Invisible bias. Training data and design choices significantly influence the outputs of AI systems. Without human oversight, algorithmic bias becomes institutional bias — harder to see, harder to challenge, and easier to excuse as “objective.”

Psychologists call this automation bias: the human tendency to over-trust machine outputs even when they are flawed. In governance and in business, that bias is dangerous. It replaces reflection with compliance.

Diella’s Limits: What Albania Overlooked

Albania’s choice of Diella is more than a symbolic gesture. It exposes critical gaps in AI capability and neglected design choices that every leader should study carefully:

  • Contextual understanding. AI excels at pattern recognition but cannot interpret context. Procurement decisions often hinge on non-quantifiable factors — supplier reputation, geopolitical dynamics, or community impact — that cannot be reduced to data points.
  • Data bias. If the training data reflects historical inequalities, the system will reproduce them. An “unbiased” algorithm might still favor large contractors because past procurement data skews in their direction, reinforcing concentration instead of fostering fair competition (a dynamic sketched in code after this list).
  • Cybersecurity vulnerabilities. Procurement is a high-stakes target. A compromised algorithm could be manipulated by hostile actors, resulting in decisions that look legitimate but serve hidden interests. Without robust cyber safeguards, Diella could become a vector for corruption, not a cure.
  • Neglected human interfaces. Perhaps most critically, Albania overlooked the human-AI interface. Who audits Diella’s decisions? Who can challenge them? How do citizens or losing bidders appeal? By neglecting these governance mechanisms, the government risks alienating stakeholders and reducing trust — the very opposite of its stated objective.
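
To make the data-bias point concrete, here is a minimal sketch in Python. The data, features (price competitiveness, firm size), and weights are purely synthetic assumptions for illustration, not anything known about Diella: a scoring model fitted to historically skewed award records learns to rate large firms higher even when the bids themselves are identical.

```python
# Minimal sketch with synthetic data: a model trained on skewed history
# reproduces the skew. Feature names and weights are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Two features per historical bid: price competitiveness (higher = cheaper)
# and whether the bidder was a large firm.
price_score = rng.normal(0, 1, n)
is_large_firm = rng.integers(0, 2, n)

# Simulated history: price mattered, but large firms also won disproportionately
# often for reasons unrelated to merit (the hidden bias in the training data).
logit = 1.0 * price_score + 1.5 * is_large_firm - 0.75
won = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([price_score, is_large_firm])
model = LogisticRegression().fit(X, won)

print("learned weights [price, large_firm]:", model.coef_[0])
# Identical price, different firm size: the model scores the big bidder higher.
print("small firm win score:", model.predict_proba([[1.0, 0]])[0, 1])
print("large firm win score:", model.predict_proba([[1.0, 1]])[0, 1])
```

The learned weight on firm size comes out strongly positive: the model quietly turns historical skew into policy, which is precisely how algorithmic bias becomes institutional bias.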

These limitations illustrate the broader danger of treating AI as a replacement for judgment rather than a tool for augmentation.

Drawing the Line: Where AI Belongs and Where It Does Not

So where is the boundary?

  • Where AI excels: crunching vast datasets, modeling scenarios, testing assumptions, and spotting anomalies (see the sketch after this list). It thrives in complexity too great for human cognition alone.
  • Where humans are irreplaceable: setting vision, making value-laden trade-offs, navigating organizational conflict, and explaining decisions in human language that earns trust.
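
To illustrate the anomaly-spotting strength named above, here is a minimal sketch using scikit-learn's IsolationForest on synthetic contract values; the amounts and contamination rate are assumptions for demonstration, not a real procurement pipeline.

```python
# Minimal sketch: flagging contract values that deviate from the usual pattern.
# All amounts are synthetic; the contamination rate is an illustrative assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Typical contract values cluster around 100k; a few extreme bids are mixed in.
normal_bids = rng.normal(100_000, 15_000, size=(500, 1))
odd_bids = np.array([[9_000.0], [480_000.0], [1_200_000.0]])
bids = np.vstack([normal_bids, odd_bids])

detector = IsolationForest(contamination=0.01, random_state=0).fit(bids)
flags = detector.predict(bids)  # -1 marks an anomaly, 1 marks a normal bid

print("flagged bid values:", np.sort(bids[flags == -1].ravel()))
```

The division of labor is the point: the machine surfaces outliers cheaply and at scale, while judging whether a flagged bid is fraud, error, or a legitimate exception remains a human call.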

This distinction is not just philosophical. It is embedded in regulation. The EU AI Act requires human oversight for high-risk applications such as procurement, finance, or HR. Oversight is not symbolic: it means human beings must remain answerable for outcomes. Source: European Commission, 2024.

Globally, regulators are grappling with the same dilemmas. From the U.S. NIST AI Risk Management Framework to OECD guidelines on trustworthy AI, the consensus is clear: AI must be framed by governance, transparency, and human accountability.

For leaders, the lesson is equally clear: AI can inform, but it cannot absolve.

Albania as a Mirror for Business Leadership

The Albania case is not a distant oddity. It is a mirror held up to every boardroom and leadership team.

  • The promise: faster, cleaner, seemingly objective decisions.
  • The peril: erosion of responsibility and blind acceptance of hidden biases.

In organizations, the parallel is stark. A CEO might be tempted to let AI allocate resources. A team leader might use AI scores to decide promotions. The result is often the same: an efficient answer with no legitimacy, no context, and no ownership.

Leadership is not about producing answers. It is about carrying responsibility for answers. Albania’s Diella reminds us what happens when we forget that distinction.

Three Tests for Leaders Before Delegating to AI

Before handing over any decision to AI, leaders should ask themselves three questions:

  1. Is this a data problem or a values problem? If the task is purely data-driven — detecting anomalies, running forecasts — AI is an asset. If it involves fairness, trust, or competing principles, only humans can decide.
  2. Who owns the decision? Every algorithm is designed by someone. Leaders must ensure clear accountability chains: who sets the rules, who validates the data, and who is answerable for mistakes.
  3. Can I explain this to my team? If you cannot translate an AI-supported decision into clear reasoning your team accepts, trust will erode regardless of accuracy.

These tests are simple — but they mark the difference between a leader who uses AI wisely and a leader who abdicates judgment.

What This Means for Leaders and Consultants

For consultants, HR executives, and senior leaders, Albania’s experiment is not entertainment. It is a stress test for leadership.

AI is not the enemy. Nor is it the savior. It is a force multiplier. In the right domains, it enhances performance; in the wrong ones, it hollows out leadership itself.

The job of today’s leader is not to compete with machines, but to design partnerships: AI as amplifier, human as arbiter.

Why Aviad Goz AI Exists

This is precisely the challenge Aviad Goz AI was built to address. Unlike generic AI tools that try to provide ready-made answers, Aviad Goz AI is deliberately designed as a thinking partner for leaders.

It solves the problems surfaced above in three ways:

  • It prevents abdication. Instead of producing decisions, it poses questions along the NEWS Compass® directions: reconnecting you to purpose (East), clarifying direction (North), uncovering obstacles (South), and structuring execution (West). The NEWS Compass®, developed by Aviad Goz, is a unique framework that helps individuals and organizations identify their authentic direction and core motivations, surface the critical roadblocks in their way, and create practical solutions that propel them to the next level.
  • It counters automation bias. By design, Aviad Goz AI demands reflection. It won’t let leaders default to “the system decided”; it forces them to articulate their own reasoning.
  • It safeguards accountability. Every interaction reinforces that the leader, not the algorithm, owns the judgment.

This is not accidental. Aviad Goz AI was developed in collaboration with MARGA, blending decades of N.E.W.S.® Navigation expertise with advanced AI engineering.

The partnership ensures the product is not just another chatbot but a tailored leadership instrument: safe, reflective, and aligned with the core purpose of leadership — to think, to decide, and to be accountable.

Closing Thought

Albania’s experiment with Diella highlights both the promise and the peril of AI in decision-making. In the best-case scenario, AI makes leaders sharper, more informed, and better prepared. In the worst-case scenario, it turns leadership into automated compliance — efficient, but blind and brittle.

The difference between these futures is not the technology itself, but the quality of oversight and accountability leaders choose to maintain.

For leaders today, the message is stark: those who let AI think for them will lose trust; those who think with AI will earn it.

That is the conviction behind Aviad Goz AI, created with N.E.W.S.® and MARGA — to ensure artificial intelligence strengthens, rather than replaces, the wisdom at the heart of leadership.