PARMI NATESAN AND PRIEUR DU PLESSIS: AI — is your board prepared?
King IV makes it clear that governing technology and data is a board responsibility
Artificial Intelligence (AI) has emerged as a crucial new oversight area, but even understanding what the issues are is a challenge.
The launch of ChatGPT late in 2022 set off a particularly spectacular version of Gartner’s hype cycle. The hullabaloo, rich with exaggerated claims about AI, nevertheless raises significant issues for boards and organisations alike.
Those issues derive directly from principles 11 and 12 of the King IV Report on Corporate Governance, which respectively require the governance of risk, and of technology and information, in line with the setting and achievement of strategic objectives.
As a first step, boards must step back from the overblown narrative about AI, and undertake a thorough and clear-headed investigation into what it actually is, and what it can do. Simply put, AI uses algorithms to process large amounts of data to generate insights.
AI is a product of human programmers, and as such it will contain a set of biases, inconsistencies and downright faults. Its conclusions are also the product of the quality of the data it uses; “garbage in, garbage out” remains true. AI is also constantly evolving, so the board needs to keep up to date with developments.
These caveats aside, AI is a genuinely exciting technology that is already generating useful insights for companies. The benefits and potential benefits are legion, and include enhanced efficiency and productivity by better identification of bottlenecks in business processes.
Customers benefit from greater efficiencies and the company’s ability to recognise what they want more rapidly; employees benefit because AI can take on a lot of the drudge work and make their jobs more fulfilling. AI can also pinpoint potential innovations.
As an aside, many companies argue that AI-driven chatbots enhance the customer experience. In reality, though, anybody who has used them knows that the day an interaction with one of these infuriating pieces of technology is useful or pleasant is a long way off; the benefit is all the company’s!
As the AI bandwagon continues to roll and adoption rates grow, how should boards approach their oversight role? Here are some suggestions:
- Focus on upskilling. King IV makes it clear that governing technology and data is a board responsibility, but boards remain relatively uninformed about both. AI is a particularly complex and constantly developing technological area, and the board should ensure it includes individuals with deep understanding. But all board members need help to become more proficient, not only in the technology itself but also in data management.
- Be proactive and put AI oversight on the board agenda. An important part of the discussion is how AI is being used in the organisation, and how it might be used. This needs a thorough discussion with management, and integration into the organisation’s strategy and operational planning. The board needs to distinguish between short-term and long-term AI benefits and strategies.
- Understand the risks. It is important to understand that AI is not a single thing, but a complex and shifting ecosystem of programmers, third-party technology vendors and employees. The resulting risk profile is equally complex and is changing all the time. An alarm bell: McKinsey’s 2019 global AI survey indicates that while AI adoption is increasing apace, under half (41%) of respondents said their organisations “comprehensively identify and prioritise” the risks of AI. The 2022 survey indicates that the dial has hardly moved on the mitigation of AI risk. The picture is probably even worse in SA.
The specific risks of AI identified in the McKinsey survey are cybersecurity, regulatory compliance, personal privacy, “explainability” (the ability to explain how AI came to its decisions), organisational reputation, equity and fairness, workforce displacement, physical safety, national security and political stability.
It is particularly important to emphasise that AI is something of a “black box”; its inner workings are extremely difficult to fathom and yet may ultimately expose anybody using it to unexpected risks, many of them deriving from bias in relation to gender, race and other “protected characteristics”.
Another risk is that AI systems rely on many third parties, creating a risk to business continuity. Cybersecurity is also a big risk — where business goes, cybercriminals will be sure to follow.
AI is here to stay, but boards need to be wary of the hype and understand precisely its changing role in the organisation, and thus the risks. Eternal vigilance, it turns out, is the price not only of liberty.
• Natesan and Du Plessis are respectively CEO and facilitator of the Institute of Directors SA.