Picture: Reuters/Dado Ruvic

In 2016 Microsoft CEO Satya Nadella stood beside Saqib Shaikh, a software engineer who is blind, to demonstrate a pair of smart glasses developed to assist people with visual impairments. The glasses used advanced machine learning and facial recognition technology to describe the world around the wearer in real time.

This was not merely a technological marvel; it was a moment that reframed how we think about artificial intelligence (AI). Nadella later reflected on the experience in a Slate article, writing, “The beauty of machines and humans working in tandem gets lost in the discussion about whether AI is a good thing or a bad thing. The debate should be about the values instilled in the people and institutions creating this technology.” 

Since then, Microsoft has embarked on a deliberate, values-driven journey to ensure AI innovation is grounded in ethics and human impact. This commitment has materialised in concrete actions: the formation of the Aether Committee, the creation of the Office of Responsible AI, the publication of Microsoft’s six AI principles, and the development of internal AI standards now in their second version. Microsoft’s experience has demonstrated that innovation and core values are not mutually exclusive but rather essential partners in building technology that serves society. 

This philosophy is now mirrored in an emerging regulatory framework: the EU’s AI Act. As the world’s first comprehensive AI regulation, the EU AI Act represents a watershed moment. It establishes a risk-based approach that seeks to strike a balance between fostering innovation and mitigating the potential harms of AI. Like Microsoft’s internal governance efforts, the Act acknowledges that AI systems are not monolithic and must be regulated proportionately based on their use and impact. 

The EU AI Act’s phased enforcement is both deliberate and ambitious. In February the Act began enforcing requirements related to organisational AI literacy and the prohibition of certain high-risk practices. By August new rules for general-purpose AI will take effect, acknowledging the unique challenges posed by foundational models that serve as the backbone of countless downstream applications. A year later, in August 2026, the Act will begin regulating high-risk AI systems (those that significantly affect health, safety or fundamental rights) with robust requirements for transparency, human oversight and data governance. 

The global implications are clear. Any organisation offering AI products or services in the EU, or whose outputs are used there, will be subject to the Act. In essence, the EU AI Act sets a new international benchmark for responsible AI. Its application should not be dismissed as a regulatory obligation alone; rather, it is an opportunity to elevate AI practices worldwide. 

At Microsoft, extensive efforts are already under way to ensure AI offerings align with the AI Act’s requirements. This is where collaboration becomes critical. As the regulatory environment evolves, businesses need expert guidance to interpret, implement and operationalise these new standards in ways suited to their unique capabilities and contexts.

The deep expertise in digital transformation, regulatory alignment and risk management rooted in the experience of iqbusiness is an example of how the technical must be made meaningful. It is vital to understand that adopting responsible AI isn’t simply a matter of compliance: it is about building trust, demonstrating accountability and driving sustainable innovation for the long term. 

For businesses, this begins with governance. Aligning with the EU AI Act requires embedding ethical principles into the very fabric of the organisation. It means defining roles and responsibilities across leadership and technical teams, training employees to understand the implications of AI systems, and ensuring decisions made by AI are explainable and auditable. It also means integrating AI governance into broader strategic frameworks from enterprise risk management to data privacy and digital transformation initiatives. 

Risk management is another critical pillar. The EU AI Act places strong emphasis on identifying, assessing and mitigating the risks associated with AI, particularly in high-impact areas. Organisations must understand how their AI systems interact with sensitive data, assess potential harms including bias and unfair outcomes, and ensure those risks are addressed throughout the system’s life cycle. These responsibilities cannot be isolated and must be embedded into enterprise-wide processes with appropriate oversight and accountability. 

It will be the early movers that gain the most advantage here. Businesses that align early and intentionally with the EU AI Act can position themselves as leaders in ethical AI. By building transparency, fairness and human-centred design into their AI systems, they can not only meet regulatory requirements but also build trust with users, regulators and wider society. 

Our shared vision is one in which responsible AI is the norm, not the exception. The EU AI Act is a crucial milestone, but it is only the beginning. As part of this beginning, the local SA and broader African AI community should, as it did with privacy regulation, look to leverage the best practices and learning established by its EU counterparts, and apply these evolutions and iterations within appropriate governance structures and ethical frameworks. 

We believe this is a moment for industry leaders to step up; not only to comply, but to shape the future of AI in ways that are inclusive, equitable and profoundly human. 

• Watson is senior corporate counsel at Microsoft, and Craker CEO at iqbusiness.

