Can surveillance technology really be ethical?

Video surveillance, and facial recognition in particular, has been criticised for its implications for privacy and security, as well as for racial and gender biases that disproportionately affect certain sectors of society.

As a result, numerous human rights groups worldwide have called for the technology to be banned. In the US, Congress has been asked to outlaw facial recognition, but the Federal Communications Commission (FCC) currently considers this unlikely. The picture is similar in Europe, where the EU is planning to build an international facial recognition system. 

Where the problem lies 

The problems with surveillance lie in how the technology is programmed and applied, not in the technology itself. They can therefore be addressed through transparency and diversity.  

It is also important to recognise that facial recognition is not the only way to implement surveillance technology. We can secure people and property without relying on a technology that creates more problems than it solves. Artificial intelligence (AI) is clearly central to advanced surveillance, but there are approaches that avoid these negative side effects while still keeping people, buildings, property and data safe.  

The positive side to surveillance 

Surveillance should never be a goal in and of itself: safety and security are the desired outcomes, and surveillance for surveillance's sake is one of the chief causes of negative attitudes towards it. We need to leave behind the idea of an omnipresent panopticon that mines and stores all data and footage, which breeds scepticism and system avoidance. Intrusive surveillance renders the technology useless: people become so wary of where their image is held, and by whom, that they would rather avoid certain places altogether. The technology then does the opposite of what it set out to do, protecting neither them nor wider society. 

New innovations allow us to move past fears of misuse and bias in surveillance. One example is similarity search, a video surveillance feature that identifies things, not people. Searching for objects, such as a blue hat or a purple backpack, is not prone to the inherent issues of facial recognition, and so cannot misidentify people based on their race or gender. 
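
To illustrate the idea, here is a minimal sketch of object-level similarity search; it is not Ava's implementation, and the detection format, embedding source and threshold are assumptions made for the example:

```python
# Minimal sketch of object-level similarity search: compare an embedding of the
# query (e.g. "blue hat") against embeddings of objects detected in footage.
# The detection format and the similarity threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_similar_objects(query_embedding: np.ndarray,
                         detections: list,
                         threshold: float = 0.8) -> list:
    """Return detections whose embedding is close to the query embedding.

    Each detection is assumed to look like:
    {"label": "backpack", "camera": "lobby-2", "timestamp": "...", "embedding": np.ndarray}
    """
    return [d for d in detections
            if cosine_similarity(query_embedding, d["embedding"]) >= threshold]
```

The point is that the system matches the appearance of objects, never identities.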

Another way to tackle bias, and so make surveillance more ethical, is for the technology to look for anomalous behaviour rather than physical attributes. Instead of trying to locate someone who looks "suspect", the system can raise an alert when the post is delivered at 11.02am instead of the usual 10.57am. It is not the most exciting example, but it demonstrates the accuracy and attention to detail now possible, and that precision can save lives: an alert that a cyclist is lying face down on the ground rather than upright on their bike is a matter of life and death. Used in this way, surveillance technology provides insight into behaviour in a holistic rather than invasive manner. Cameras can also generate heatmaps that reveal areas of high traffic as well as suspicious activity, leading to better business intelligence and better outcomes. 
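
As a rough illustration of that kind of timing check (a hedged sketch only; the three-minute tolerance is an assumption, not a product feature):

```python
# Sketch of a timing-based anomaly rule: flag an event whose arrival time
# drifts from its expected time. The tolerance value is an illustrative choice.
from datetime import time

def minutes_of_day(t: time) -> float:
    """Convert a wall-clock time to minutes since midnight."""
    return t.hour * 60 + t.minute + t.second / 60

def is_timing_anomaly(observed: time, expected: time,
                      tolerance_minutes: float = 3.0) -> bool:
    """Return True when an event deviates from its expected time beyond the tolerance."""
    return abs(minutes_of_day(observed) - minutes_of_day(expected)) > tolerance_minutes

# The article's example: the post usually arrives at 10.57am but turns up at 11.02am.
print(is_timing_anomaly(time(11, 2), time(10, 57)))  # True -> raise an alert
```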

Another strength of this approach is that it can flag the possibility of an incident before it occurs. Footage reviewed after the fact has some value, for law enforcement or insurance claims, but there is far more potential in identifying an incident before or as it happens, based on anomalies that AI can recognise. For example, a student might be on the roof of a school building out of hours, at a time when nobody normally is, or should be, there. The technology recognises this as an anomalous, high-risk situation and pre-emptively alerts the relevant authorities. Real-time notification speeds up the response, because operators follow events as they unfold rather than reconstructing them afterwards. It also greatly reduces the hours spent trawling through footage to build a timeline of an incident, which in turn means reams of unnecessary footage of people who aren't suspects doesn't need to be stored. 
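
A minimal sketch of such an out-of-hours rule might look like the following; the zone name, permitted hours and notification callback are assumptions for illustration, not a description of any vendor's system:

```python
# Hedged sketch of an out-of-hours presence rule: if a person is detected in a
# restricted zone outside its permitted hours, raise a real-time alert rather
# than storing footage for later review. All names and hours are illustrative.
from dataclasses import dataclass
from datetime import datetime, time
from typing import Callable

@dataclass
class Detection:
    zone: str           # e.g. "school-roof"
    object_class: str   # e.g. "person"
    timestamp: datetime

# Assumed policy: when people are allowed in each zone.
PERMITTED_HOURS = {"school-roof": (time(8, 0), time(17, 0))}

def check_out_of_hours(det: Detection, notify: Callable[[str], None]) -> None:
    """Alert when a person appears in a zone outside its permitted hours."""
    window = PERMITTED_HOURS.get(det.zone)
    if det.object_class != "person" or window is None:
        return
    start, end = window
    if not (start <= det.timestamp.time() <= end):
        notify(f"Out-of-hours presence in {det.zone} at {det.timestamp.isoformat()}")

# Example: someone on the school roof at 10.40pm triggers a notification.
check_out_of_hours(Detection("school-roof", "person", datetime(2022, 3, 1, 22, 40)), print)
```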

Privacy and consent 

Privacy concerns grow in step with the advancement of technology; people are increasingly conscious of the ways technology permeates our lives, especially without our explicit consent, which breeds mistrust and apprehension. That is why advancements in surveillance need to go hand in hand with transparency and privacy: we have to make clear who owns the data collected from cameras and surveillance systems, and where it might be shared thereafter. We also have to ensure that data is stored securely and is inaccessible to third parties, so as to keep data ownership and monopolisation in check. It is important to safeguard people both as members of society and as individuals whose personal image appears on camera. 

Augmentation, not replacement 

It may seem alarming to hear about these advancements and assume they mean the replacement of people, but this is not the case. AI is not a big bad machine that makes all the decisions for us, taking away our power to think, feel and act. It is a virtual sidekick that helps us keep the people and things around us safe and secure. Technology is a tool to make our lives easier, not obsolete. 

Video is here to stay 

Video security plays a crucial role in everyday life, shielding people and businesses from danger. What we need is to improve the way we protect ourselves through technology, and surveillance can be, and is being, made better. Monitoring our surroundings with similarity search and find-fast features, free of implicit and explicit biases, is achievable, and despite what the headlines may say, greater privacy and confidentiality are likely the future of video technology.


About the Author

Sam Lancia is a Co-founder and Head of Video Engineering at Ava Security. Sam has over 15 years' experience as an engineer, and previously served as Head of App Development and Head of Spark Board Development. At Ava, Sam works to make video security simpler, smarter and more transparent. In the four years since Sam co-founded the company, Ava has grown its suite of cameras and video management tools across numerous industries: commercial, education, federal, healthcare and retail.
