Most companies are already using AI. But many still do not have a clear AI policy.
McKinsey found 88% of organizations now use AI in at least one business function, and 51% have seen at least one negative consequence from AI use (often inaccuracy). KPMG highlights a governance gap: only 34% report having an organizational policy or guidance for using generative AI tools. In the Netherlands, EY reports 42% of employees fear job loss due to AI, and only 24% are satisfied with employer AI training. That is a trust problem, not a tech problem.
AI policies fail mostly because they are vague, too strict, or hard to find. A policy that works is practical and visible: it should give people confidence to use AI safely, not scare them into silence.
A strong AI policy usually includes:
- Approved tools (no personal accounts for work)
- Clear use cases and red lines (especially around HR, legal, and finance)
- A simple privacy rule: never paste confidential or personal data into AI tools
- Human review of AI output before sending or publishing
- Training, support, and regular updates, because AI changes weekly
Research on generative AI governance supports the same direction: clear guardrails plus accountability and human oversight reduce risk and improve adoption.
If you want a usable AI policy template and rollout plan for your team in the Netherlands, Centralink can help.
