Anyone listening to the evasive piffle being spouted at the state capture commission will be amazed at the kind of nonsense civil servants can apparently get up to. Here I am wincing at ending my opening sentence with a preposition, and our government officials are happily spending billions on trains that don’t fit the tracks, coal that doesn’t burn, crappy ambulances with three wheels, and public infrastructure that exists only in their febrile imaginations.

My current favourite shenanigan (reported by Marianne Thamm in the Daily Maverick, I think, though these stories get lifted so quickly by other sites that it’s sometimes difficult to identify the origin) is the R1.6bn spent in six months in 2020 by the SA Police Service on personal protective equipment (PPE).

The best detail in the story — if by "best" we mean "most absurd and disheartening" — was that "the amount includes R11m paid out on a verbal order in May 2020 to ‘nonprofit’ company, Dr Love Foundation, ‘who could not be traced on Treasury’s database of small business development’, the audit noted".

Connoisseurs of dodgy company names, fresh from the joy of Digital Vibes and the White Spiritual Boy Trust, will love the Dr Love Foundation. But what’s also interesting are the supposedly rigorous checks and balances that exist to make sure these sorts of deals are legitimate.

In this case, we are told, "the tender for cloth masks and sachets of hand sanitiser was awarded after ‘the supplier wrote a correspondence to [the police] stating that his foundation intends to offer services and assistance to [the police] in the current fight against the pandemic’".

Seems legit. I mean, "intends to" is good enough for me.

Another lighthearted detail is that "the delegation of authority within the [police] does not provide for verbal authorisation on transactions of more than R500,000. Despite that, ‘Thirty four (34) orders to the value of R1,620,964,361.20 were issued on the basis of a verbal authorisation, and without obtaining sufficient number of quotations,’ reported major-general DT Nkosi, [the police’s] chief audit executive."

I do like the punctilious way they write 34 in words and numerals, and the precision of the 20 cents. It’s like criminals who borrow ill-fitting suits to appear in court, and hope they’ll fool the judge into thinking they’re upstanding citizens.

Seriously, what’s the point of having rules and systems if we’re going to ignore them? Is it because the rules aren’t clear, aren’t good enough, or are just not communicated properly?

I know, no rules are going to be good enough if we’re putting corrupt, criminal chancers in charge. But at least we have a semblance of legality to point at when someone catches the miscreants in the act, a way to contextualise and evaluate the extent of their culpability.

Imagine how much worse it’s going to be when that edifice isn’t actually there, meaning we can’t even spot when business and government are doing something bad. And, even worse, nobody really cares about it.

A recent survey, conducted in February and March by data analytics company Fico and market intelligence firm Corinium, asked 100 artificial intelligence-focused leaders from the financial services sector, with "20 executives each from the US, Latin America, Europe, the Middle East and Africa, and the Asia Pacific regions", about how their companies use AI.

"The executives, serving in roles ranging from chief data officer to chief AI officer, represent enterprises that bring in more than $100m in annual revenue and were asked about how their companies ensure AI is used responsibly and ethically," tech news site ZDNet reports.

I invite you to imagine the potential for abuse of AI systems in business and government, and indeed abuse by AI systems with unconscious bias built into them. We’ve seen how readily, and seemingly effortlessly, people bypass rigid, codified rules intended to safeguard our nation’s resources. And this is with actual legislation.

The survey of people who use AI in their business revealed that "almost 70% of respondents could not explain how specific AI model decisions or predictions are made, and only 35% said their organisation made an effort to use AI in a way that was transparent and accountable.

"Just 22% told the survey their organisation had an AI ethics board that could make decisions about the fairness of the technology they used. The other 78% said they were ‘poorly equipped to ensure the ethical implications of using new AI systems’."

This is a dream come true for aspirant thieves. "Nearly 80% said they had significant difficulty in getting other senior executives to even consider or prioritise ethical AI usage practices."

Well, of course not! Imagine how much more fun the Guptas could have had if the ministers and businesspeople they co-opted into their capture of the SA state didn’t have pesky ethical codes to circumvent.

Scarier still, more than 65% of respondents said their businesses had ineffective processes in place to check that AI projects complied with any regulations.

Ganna Pogrebna, lead for behavioural data science at the Alan Turing Institute, is quoted as saying: "At the moment, companies decide for themselves whatever they think is ethical and unethical, which is extremely dangerous."

And "respondents overwhelmingly said there was no consensus about what responsibility companies had in deploying ethical AI, particularly AI that ‘may impact people’s livelihoods or cause injury or death’".

Most said they had no responsibility to ensure their use of AI was ethical, beyond "regulatory compliance".

This isn’t just alarmist nitpicking. AI is being used to make vital decisions about people’s lives, from admitting them to educational institutions, to granting them loans or job interviews. In the US, for example, a company uses AI to review video submissions by aspirant students.

According to The Hechinger Report, a newsroom that reports on education, it evaluates applicants based on "a five-point scale in areas such as openness, motivation, agreeableness and ‘neuroticism’".

And in SA, as one example, AI is used by the financial sector for a range of ends, such as managing risk and fraud, and generating new revenue opportunities. We need to think about the ethical ramifications of this, and where that responsibility lies.

But this isn’t a column demonising AI, though there are some frightening examples out of autocratic nations such as China of how it’s used to control citizens. AI is also used to do many good things: screening for cancer risk, combating world hunger and climate change, spotting fake news and, despite the many documented instances of bias in AI, even fighting inequality and poverty.

In a Forbes list of projects in which AI is used for good, the most fascinating is perhaps the World Bee Project, which "hopes to learn how to help bees survive and thrive by gathering data through internet-of-things sensors, microphones, and cameras on hives. The data is then uploaded to the cloud and analysed by artificial intelligence to identify patterns or trends that could direct early interventions to help bees survive."

No, it’s not AI that worries me. It’s the infinite criminal potential of non-artificial intelligence.

It’s the fact that, when AI inevitably becomes an integral part of government processes, it’s going to open up a whole new, largely unregulated playing field for those public servants who might have missed the first full flush of state capture.

Forget the distraction of communications minister Stella Ndabeni-Abrahams and her ridiculous fourth industrial revolution blazers. It’s robot capture that we might have to watch out for.
