Navigating the ethical pitfalls of artificial intelligence
A recent Microsoft event showcased the prospect of virtually trying on clothes for size, and raised broader questions about data safety
Imagine trying on your clothes virtually via a mobile app before making the purchase.
Artificial intelligence (AI) will doubtless soon make this a reality, allowing you to skip the queues and cut out the hassle of returning items that has become synonymous with online shopping.
Matthew Drinkwater, director of the London College of Fashion’s Innovation Agency, told a Microsoft conference in Paris recently that the agency had conducted a trial with a luxury scarf designer in 2016 in which clients were able to try on the scarves virtually.
"Are consumers ready to virtually try products on? In this case it didn’t happen that much," he admitted. But what they did do was try on the scarf virtually and then go out to the retailer to buy one.
"At the moment we can very accurately scan the face, and within the next six to 12 months we will be able to do that for the body. What will this mean for fashion?" he asks.
And what will it mean for personal data security? What are the pitfalls of this development and what is being done to protect consumers from the misuse of AI?
Enter "ethics and AI", which was a theme explored on the same day that Drinkwater and multiple industry experts took to the stage of the conference at Station F in Paris, the world’s biggest business incubator campus for start-ups.
The amount of data collected through technology is no small matter, as the controversies over privacy and the misuse of personal data by social media giants such as Facebook have shown.
In Europe the issue is chiefly governed by the EU General Data Protection Regulation (GDPR), which aims to protect EU citizens from data breaches. The GDPR crucially extends its powers to companies processing the data of any person living in Europe.
In SA, data protection is governed by the Protection of Personal Information Act, 2013. Popi, as it is known, was signed into law by then president Jacob Zuma on November 19 2013, but has yet to come into force. Most of its provisions will come into effect only when the information regulator is fully operational. The draft regulations were published for public comment in the second half of 2017. The final regulations are yet to be promulgated.
In terms of data protection in an age of rapid technological growth, the aim is to have "integrity by design".
Kavitha Babu, director and regional attorney for Microsoft in Europe, says the question should not be what computers can do, but rather what they should do. The industry needs to carefully consider the societal issues raised by sophisticated technology and AI.
"We cannot afford to look at it with uncritical eyes," she says, adding that every ethical issue that humanity has faced is a potential ethical issue for a computer.
She outlines six values that AI has to respect as it augments human ingenuity — fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability.
Navigating the ethical pitfalls raised by AI will require a "shared understanding" of the guiding principles and the development of a common framework to guide researchers and developers of next-generation technologies. Babu says the complexity of AI technologies has fuelled fears that they might create unintended outcomes.
In terms of major privacy breaches, Facebook has had an extremely turbulent year as it faced a huge backlash, especially after the Cambridge Analytica data scandal over the misuse of millions of Facebook users’ personal data for political purposes. At the weekend the Irish Data Protection Commission announced a probe into the social media giant for failing to report a security breach to the regulator within 72 hours, as required by the GDPR.
Babu says privacy has to be a business imperative, and that it is a key pillar of building trust in any AI initiative. AI systems have to take into consideration how personal data engages with the system "while it is being built".
"Computers need to remain accountable to people, so that the people who actually develop the technology continue to remain accountable to users," she says.
Murray Hunter of the Right2Know Campaign says that though SA law addresses the issue, it requires the watchdog (in the form of the information regulator) to be fully operational.
He says Popi is exactly what SA needs, as everything would then be in place for effective regulation. Until it is, consumer protection laws and the Cybercrimes Bill afford only limited protection.
With less than six months before SA’s next election, it is also crucial that the information regulator be given teeth before voters cast their ballots, says Hunter, because there is a growing body of case studies of elections being influenced by the misuse of data.
In her address at the Paris conference, Babu aptly used the cliché: with great power comes great responsibility.
*The writer’s trip to Paris was sponsored by Microsoft