Stock image. Picture: 123/RF/NICOELNINO

Artificial intelligence (AI) is increasingly embedded in applications, platforms and software, helping to unlock value and drive impact in sectors ranging from retail and manufacturing to healthcare and financial services.

Benefits include the automation of largely administrative and manual tasks, freeing up workers to focus on higher-value work. Healthcare professionals, for example, could spend less time on administration and more time with their patients.

AI can also detect unusual behaviour on accounts and reduce fraud in financial services, and enable predictive maintenance in manufacturing. These examples are just scratching the surface of the advantages AI can offer across almost every industry.

There can be no leadership without tech leadership, and no governance without digital governance.
Lionel Moyal, global partner solutions director at Microsoft SA.

However, as more businesses experience these benefits and the use of AI becomes ubiquitous, it will be more critical than ever to leverage it responsibly. Responsible AI has in recent years become a central theme in the enterprise AI market as more companies struggle with challenges in governance, security and compliance.

Research has indicated that responsible innovation is top of mind. From as far back as 2019, organisations have identified the two most important requirements when investing in AI and machine-learning technology as the ability of AI systems to ensure data security and privacy, and the level of transparency of how systems work and are trained. It’s a clear shift from earlier years, when scalability, performance and easy-to-use tools topped investment priorities.

This shift makes it essential for legislation and governance structures to keep up with the pace of innovation to ensure businesses can harness the value of AI in a responsible way. In SA, this is underpinned by the King IV Report on Corporate Governance, which promotes and advances standards of corporate governance for better business.

Incorporating AI as an underpinning element of corporate DNA

A key component of the King IV Report is technology governance and security, because technology alone no longer acts simply as an enabler; it is recognised as part of corporate DNA and provides the platform on which businesses operate. Many of the current and future opportunities organisations are able to harness are a direct result of technology — making it critical for governance structures to regulate emerging technologies such as AI in a way that will benefit people and business.

As it now stands, only one of the 17 principles of the King IV Report — Principle 12 — explicitly addresses the governance of technology and information in a way that supports the organisation in setting and achieving its strategic objectives. Each of the principles embodies the journey towards good corporate governance and acts as a guide to what businesses should aim to achieve.

Because of the fundamental role technology — and particularly technologies like AI — plays in modern organisations, each principle should incorporate a technological and digital element. As a case in point: there can be no leadership without tech leadership, and no governance without digital governance, so this should be built into the principles that SA businesses work towards and measure themselves against.

The good news is that approaching the principles of the report with a digital focus presents an opportunity for businesses and business leaders to gain a deeper understanding of AI capabilities at board level.

SA has an innovation mindset and is well placed to take advantage of this technology, but it is critical for business leaders not to delegate the responsibility for AI and other technologies elsewhere in the business — they need to understand the risks and opportunities associated with these technologies, and look at the business through this lens and with a technological mindset.

Microsoft has developed a set of AI principles to help business leaders understand these risks and opportunities. The principles aim to act as an ethical framework against which AI solutions are developed and deployed for the benefit of all, and are made up of four core principles: fairness, reliability and safety, privacy and security, and inclusiveness. These are underpinned by two foundational principles: transparency and accountability.  

Applying these principles with a technological mindset and an understanding of both the risks and opportunities of AI has the potential not only to drive widespread business value, but also to create benefits for broader society.

Moyal is global partner solutions director at Microsoft SA.

