What would you say if I told you that ‘imperfect’ artificial intelligence (AI) can do more for your business than facial recognition ever will?
Your response might be that facial recognition is needed for 100% accuracy in data insights, and that this accuracy is essential for keeping up with the competition. You may also tell me that it’s necessary to meet growing customer expectations, or even list the things you ‘just can’t do’ without it.
But one thing you won’t be able to argue is that facial recognition technology is good for compliance. The use of facial recognition is on the rise and causing a stir across Europe, with 11 EU nations reportedly already using it, and the EU’s data protection watchdog warning that member states are not ready for AI-powered surveillance.
Tech is supposed to solve problems, not cause bigger ones, and decision makers are wrong to treat compliance as a secondary consideration next to insights and analytics (which are ultimately about creating more revenue). In fact, accepting a marginal reduction in accuracy is the only way to ensure compliance, protect privacy rights, and shield you and your business from the reputational damage caused by data leaks and the storage of personal data that doesn’t belong to you.
A radical cultural shift in thinking about artificial intelligence needs to take place: we must move away from standard practice towards a more transparent, honest, and ethical model, in which businesses are open about the data they collect and accept a small reduction in accuracy in order to protect their customers.
How Does Video Monitoring AI Learn?
AI paired with a visual element, for example CCTV cameras used for smart video monitoring or for facial recognition, has become increasingly popular in industries like urban planning and real estate.
The tech allows users to gather analytics on the number and type of vehicles in a space, and on where people go and how long they spend there; it can even collect demographic data such as age, gender and race. Much of the AI currently in use is developed through supervised learning, and the ethical pulse of these algorithms comes from the data scientists behind them: the data scientists manually teach the AI which defining characteristics correspond with which type of person. The AI’s ‘judgment’ is therefore no better than any ordinary person’s.
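To make that concrete, here is a minimal sketch of face-free footfall counting using OpenCV’s stock HOG person detector (the video file name is a placeholder, and this is an illustration of the approach, not Fyma’s actual pipeline). The detector is itself a product of supervised learning: it was trained on images that humans labelled as containing a person or not, and it locates whole-body silhouettes, so no faces or biometric identifiers are extracted.

```python
import cv2

# OpenCV's built-in pedestrian detector: a HOG feature extractor plus
# an SVM trained, via supervised learning, on human-labelled images
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("footfall.mp4")  # placeholder clip
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))  # smaller frames detect faster
    # Each box is a whole-body rectangle (x, y, w, h): no identity,
    # no face crop, just a count per frame
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    print(f"frame {frame_idx}: {len(boxes)} people in view")
    frame_idx += 1
cap.release()
```

Aggregating those per-frame counts over time yields footfall and dwell-time statistics without a single identifiable image ever being stored.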
When it comes to protecting individuals from algorithmic biases and adhering to regulation, what you DON’T teach your AI is as important as what you DO teach it.
The fact of the matter is that AI does not need to collect or analyse personal, biometric data to be effective. To store data correctly and remain compliant, simply avoid gathering personally identifiable data without consent – i.e. people’s faces – altogether.
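In practice, the cleanest way to honour that principle is to redact faces at the point of capture, before any frame is stored or analysed. Below is a minimal sketch assuming only the stock Haar cascade face detector that ships with OpenCV; note that locating a face region in order to blur it is not the same as recognising whose face it is. Like any detector it can miss faces, so treat this as an illustration of the pattern rather than a production anonymisation system.

```python
import cv2

# Stock face detector bundled with OpenCV, used here only to locate
# regions to redact, never to identify anyone
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def anonymise(frame):
    """Blur every detected face so the stored frame holds no biometric data."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        # A heavy Gaussian blur makes the region useless for recognition
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```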
It is still possible to achieve over 90% accuracy while using algorithms that do not recognise faces, and there needs to be a shift in compliance culture where organisations accept this as the best outcome – not 100%. These organisations have a social responsibility to protect the individuals they are monitoring, and we must encourage more decision makers to take this on.
Reinstating Trust
The private sector in particular needs to show leadership on this issue, at the very least to avoid continual backlash from unhappy customers who are vehemently opposed to facial recognition technology.
Businesses need to do more to reassure their customers and to address pressing issues like algorithmic bias. Through the self-regulated diversification of data teams, the elimination of faces from image and video monitoring datasets, and the setting of realistic expectations about accuracy and algorithmic improvement over time, we can start to restore trust. Despite widespread scepticism about Meta’s intentions, the announcement this November that Facebook will shut down its use of facial recognition technology and delete 1bn ‘faceprints’ is a serious step in the right direction.
Promising not to use identifiable data is simply not enough – we need to see action. At a time when cybercrime is rife and set to keep rising, collecting identifiable data goes beyond being theoretically problematic; it poses a real danger to individuals and their privacy rights.
The key to addressing current challenges lies in accepting a marginal reduction in AI accuracy as a good thing – a lesser cost to any business than putting its customers at risk.
About the Author
Karen K Burns is at Fyma. We show you what happens inside your shopping centre, business park or retail centre, inside or outside: recognising and counting customers, their direction of travel, dwell time and demographics – all in a GDPR-compliant way. Fyma’s custom-created algorithms give you actionable insights and help you check – and improve – your conversion rates, safety, marketing campaigns and even your centre layout. Get in touch via www.fyma.ai and book a demo with us.
Featured image: ©Fractal Pictures
