The issue of how artificial intelligence (AI) should be used by companies for HR purposes is firmly back under the spotlight, with the Trades Union Congress (TUC) calling for regulation to be updated to keep pace with the rapid advances in AI capabilities in recent years.
Chief among these concerns is the use of AI to closely surveil employees and, relatedly, to make high-risk decisions, such as whether or not to fire an employee. TUC general secretary Frances O’Grady has commented that “AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions… Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment.” These fears are not hypothetical; concerningly, they are based on actual cases from the world of work. For example, a court in Amsterdam ruled last week that Uber must reinstate six of its drivers who were banned from the app by an algorithm that mistakenly accused them of fraudulent activity, without the company providing proper evidence to support its decision.
The introduction of regulations to prevent companies from automating such life-altering decisions and imposing ‘Big Brother’ surveillance is the right thing to do. It will prevent the kind of harm to workers that, in the long term, harms companies too. If workers feel that they are being watched over not by managers who want them to succeed but solely by algorithms, their morale will inevitably suffer; worse still, the prospect of losing their job purely at the whim of imperfect technology is likely to cause immense stress. Workers who are unhappy in this way will ultimately be unproductive: those who suspect that their employers do not have their best interests at heart are unlikely to feel they owe much in return, and will mentally check out and bide their time until they exit the company. As the Amsterdam ruling on the Uber drivers demonstrates, a lack of human review of decisions to fire a worker can cause companies to lose perfectly able and hardworking employees for no good reason whatsoever.
Similar consequences will follow if employers attempt to track every last movement and action of their employees, something that technological advances are enabling at an ever more granular level. While organisations have every right to ensure that their employees keep time well and deliver results, it is also important that they trust their workforce as far as possible and resist the urge to police the five-minute tea breaks that give employees a valuable opportunity to briefly rest their minds. Numerous studies have demonstrated that excessive, intrusive monitoring does more harm than good purely in terms of productivity, not to mention workers’ rights and wellbeing. It makes employees feel insecure, lowers their morale and sense of autonomy, and leads to poor mental health, burnout, and ultimately to businesses suffering higher staff turnover and significant financial costs.
Frances O’Grady is completely correct that AI has the potential to enhance productivity and working lives: it is perfectly possible to engineer technology with the wellbeing of employees firmly at its heart, and this should be encouraged. There is great potential for AI to automate the more dreary and mechanical administrative tasks, freeing up more time for employees to devote to the creative or intellectually demanding aspects of their jobs. Employers can also benefit from technology that collects data on employee sentiment anonymously, then uses AI to analyse this data, detect trends towards problems such as presenteeism or burnout, and produce real-time recommendations on how best to address them. The ability such technology offers to stay ahead of the curve and tackle these problems before it is too late can, for example, help to reduce an organisation’s staff turnover.
The important point is that this should be done in a transparent and ethical way, with AI used as a source of insights and advice rather than as a replacement for human judgment on crucial HR decisions. Such decisions ultimately require uniquely human moral reasoning that is beyond the narrowly analytical capabilities of AI. Provided they keep the ‘human’ in HR firmly emphasised, companies can use AI to foster a happy and productive workforce, and during these testing times they should feel empowered to do so.
About the Author
Pierre Lindmark is the CEO & co-founder of Winningtemp, an AI-powered engagement, performance management, social praise and e-learning platform that helps managers create cohesive, engaged and high-performing teams. With clients in over 40 countries, it combines automated, light-touch smart surveys with goal management, e-learning, appraisals, and check-ins.