Some automation industry experts estimate that 70% of large organisations worldwide already have some form of people analytics to analyse what employees and customers are saying, doing and even feeling, says the writer. Picture: THINKSTOCK

Walking into the distribution warehouse of a major SA retailer, visitors are greeted by a buzz of voices saying things like: “Yes, yes, okay”, and “Repeat, repeat”. The people appear to be talking to themselves but they are not. Wearing headsets and wristband devices, they are responding to instructions from a “voice-picking” system that tells them which items to collect from the various shelves, where to find them and how many to pick.

This kind of technology has been used in SA since about 2015 and is credited with significant efficiency improvements and reduced errors in the fast-moving consumer goods environment, where its hands-free, eyes-free features reportedly make it easier and quicker for human operators to locate the selected products.

Wearable devices of this nature can also monitor the individual’s productivity and time management to the second. They can instantly detect that a particular worker is not meeting the product-picking targets set for the day and flag this for a performance discussion. Such devices bring the concept of clocking in and clocking out — and monitoring what happens in between — to new heights of precision.

This might have strong appeal for employers, but what about employees? What if they feel uncomfortable about being constantly monitored and consider this an invasion of privacy? Can the employer insist on the use of such wearables as a condition of employment?

Such questions are increasingly coming up in the evolving world of work, where “people analytics” are more widely used than many of us realise. Some automation industry experts estimate that 70% of large organisations worldwide already have some form of people analytics to analyse what employees and customers are saying, doing and even feeling.

In the field of “sentiment analysis”, for instance, there is a tool that can read people’s e-mails and report on what mood they are in. Another tool analyses voices to determine how trustworthy a person is. Then there is a tool that allows companies to monitor their employees’ internal networks by keeping tabs on their day-to-day contacts, so they can restructure networks that are inefficient or unproductive.

The legal position of employers and employees in workplaces where people analytics are used is something we are all going to have to watch closely as usage becomes more pervasive and perhaps more personal.

One of the key points employers should keep in mind is that the use of such tools must be fair: there must be mechanisms in place for employees to understand any decisions made as a result of people analytics, as well as a means to challenge such decisions. Most importantly, employees must be able to appeal on a human level — to real people.

Human oversight of machines is going to be a critical focus as we move ever deeper into the age of artificial intelligence (AI) and robotics.

Employers will have to think carefully about accountability in the event of a machine or AI program malfunctioning and causing some kind of damage or harm. Someone has to manage the machines, and if a human being is responsible for the malfunctioning of software or hardware that caused the problem, then misconduct or poor performance would almost certainly come into play.

As the law stands, employers are vicariously liable for the wrongful acts of their employees or agents if these acts occur during the course of employment. In the future, employers could also find themselves being held liable for the wrongful acts of their autonomous robots.

In the meantime, companies should be giving some serious thought to upskilling their human resources practitioners, who will have to be more astute than ever in anticipating and managing the impact of technological changes on the workforce. Wearable devices, for instance, come with a host of employee-related implications, especially for retraining, reskilling and health and safety. With robotic wearables becoming more prevalent, the chances of being injured by a robot increase, making occupational health and safety ever more important.

Yet another aspect for employers to consider is that AI can be discriminatory if not properly programmed. In a job selection process, for example, an algorithm could apply otherwise objective criteria in ways that produce biased outcomes. In SA, where the Employment Equity Act prohibits unfair discrimination, such algorithms would need to be carefully configured to ensure compliance.

Then there is the possibility of pushback from organised labour. Trade unions are likely to view the increasing use of robotics as a threat to human job security and, in the case of people analytics tools, perhaps to employee privacy too. A great deal of work will be needed to secure buy-in from organised labour by demonstrating the anticipated gains from AI and robotics, including robotic wearables. Factors such as increases in the number of jobs, higher wages and better working hours and conditions will be important in these conversations.

Given the pace at which robotics and AI are entering the world of work, these conversations should already be happening in earnest. Technology will not wait for the human element to catch up.

• Raphulu is a partner in the Bowmans employment and benefits practice.