Who is actually accountable for your AI?
Most leaders still answer this with silence.
And that silence is risky.
In 2025, Deloitte reports that over 55 percent of AI incidents are linked to unclear ownership, not model failure [1].
McKinsey shows that companies without clear AI accountability are twice as likely to pause or shut down AI programs after audits or public pressure [2].
PwC adds that regulators now expect named roles for AI responsibility, not shared blame [3].
This is where recent academic research becomes very clear.
Batool, Zowghi, and Bano (2025) describe what they call the accountability gap in AI governance [4].
AI governance is no longer a technical task.
It is a multi-stakeholder system.
Their research highlights three practical pillars:
1. WHO is accountable
Not just developers, but also product owners, data providers, vendors, and third-party auditors.
2. WHAT is governed
Both the system (the code) and the data (what feeds and shapes decisions).
3. HOW accountability works
A shift toward artifact-based governance: real tools, design logs, model cards, and audits. Not vague ethics slides.
I have also noticed:
Too many organizations still practice symbolic governance.
They talk about ethics, but do not embed it into the AI artifacts themselves.
The real question is no longer “Is this AI ethical?”
The real question is:
Who is accountable for this AI artifact at this stage, today?
This becomes even more important with the EU AI Act, which will require traceability, named ownership, and documented proof.
At Centralink, we help organizations move from talk to structure. From values to real governance tools.
If regulators asked tomorrow, could you name the accountable role for each AI system?
Contact info@centralink.nl for a free consulting session.
References
[1] Deloitte. (2025). AI governance and accountability risks.
[2] McKinsey & Company. (2025). Scaling AI responsibly.
[3] PwC. (2025). Board level accountability for AI.
[4] Batool, S., Zowghi, D., & Bano, M. (2025). Accountability in AI governance. AI and Ethics.
