Why every board needs an AI expert

By Jeremy Bossenger

Artificial intelligence is no longer a niche tool. It sits inside core workflows, decision systems, and customer experiences. For boards in South Africa and beyond, that raises a key question: who on your board can interrogate AI decisions with the same rigor used for financial, legal, and cyber issues?

The answer should be an appointed AI expert with clear accountability.

The AI risk landscape boards must own

AI impacts nearly every part of an enterprise. Seven key risk areas demand board oversight:

1. Model risk

AI systems can hallucinate, drift, or behave inconsistently across groups. When they influence pricing, lending, hiring, or safety, failures become enterprise risks – not IT glitches.

2. Regulatory and legal exposure

Boards must ensure AI systems comply with POPIA and King IV in South Africa, as well as global frameworks such as the EU’s AI Act. Weak documentation or oversight can increase liability.

3. Bias and discrimination

Training data may encode historical inequities. Unchecked, this produces unfair outcomes that harm people and damage brand equity – posing ethical, reputational, and legal risks.

4. Data security and IP leakage

Generative tools can leak confidential data through prompts, outputs, or logs. Third-party model providers and plug-ins expand attack surfaces and licensing complexity.

5. Algorithmic collusion

Pricing or bidding algorithms that learn from market signals can inadvertently breach competition law. Boards must ensure proper testing and oversight.

6. Operational fragility

Dependence on a single model provider or scarce AI hardware creates single points of failure. Resilience planning is essential.

7. Workforce and social impact

Automation reshapes roles and incentives. Without clear plans for reskilling and transparent communication, organisations risk disengagement and backlash.

Why boards need an AI expert

Modern AI risk is technical, socio-technical, and strategic. Boards need someone who can translate between engineers, risk teams, and directors – someone able to challenge optimistic assumptions and set measurable guardrails. The role is not to run projects, but to govern them.

What the AI expert should do

1. Establish an AI governance framework

Align management with standards such as the NIST AI Risk Management Framework and ISO/IEC 42001. High-impact use cases may require model inventories, risk classification, and approval gates.

2. Advocate for model lifecycle controls

Ensure data provenance, bias testing, red-teaming for safety and misuse, performance monitoring, and documented fallback plans. Each model release should tie to explicit risk acceptance.

3. Strengthen legal and ethical compliance

Map AI use to POPIA obligations and global regimes. Mandate privacy by design, consent management, explainability, and audit trails that can stand up in court.

4. Secure the AI supply chain

Vet model vendors, hosting providers, and open-source components. Negotiate contracts that cover confidentiality, restrictions on training with your data, uptime, and incident response.

5. Build resilience and incident response

Run tabletop exercises for model failure or data leakage. Define thresholds for halting automated actions and escalating to human oversight.

6. Integrate AI with strategy and value creation

Maintain an AI portfolio with ROI targets, customer safeguards, and brand alignment. Track both upside (revenue lift, productivity) and downside (complaints, regulatory findings).

7. Uplift board literacy

Run teach-ins so every director can interpret AI risk reports and challenge management. Mature boards treat AI oversight like cybersecurity and audits – not as a black box.

The South African and global imperative

South African boards already recognise technology governance as a board duty. The rise of generative and predictive AI heightens obligations under POPIA and intersects with transformation and fairness goals. Globally, regulators, investors, and customers are moving from encouragement to enforcement.

Boards unable to explain how their AI works – or how it’s governed – will be exposed. Appointing a director with deep AI governance expertise is no longer optional. It’s the fastest way to close the oversight gap, protect people and data, and turn AI from a headline risk into a durable advantage.

Jeremy Bossenger is a Director at BossJansen Executive Search
