Real-life Minority Report Features AI – What About Privacy?

Artificial intelligence is getting ready for another big role – predicting who is going to commit a crime. Does that sound like it belongs in a sci-fi movie?

Israeli AI company Cortica will start working alongside Best Group in India to analyze terabytes of CCTV footage. The end goal is to improve public safety on the streets, at bus stops, and in train stations – places that are more prone to crime.

Now, we’re all pretty much accustomed to CCTV cameras on the streets, and we know various technologies are deployed behind them. Numerous cities, for instance, run facial recognition software to track suspects, or read license plates to track down and find cars. What Cortica wants to do, however, goes several steps further, because it looks for behavioral anomalies.

Have you ever watched “Lie to Me”? It’s a show featuring Tim Roth in the role of Cal Lightman, an expert who studies facial expressions and involuntary body language. The fleeting facial expressions are called microexpressions, and they’re actually used by military and law enforcement to tell who’s lying and what their intentions are. Lightman is, obviously, extremely successful in fighting the bad guys and helping the authorities and his clients time and time again. But what if there were an AI that could do what Roth’s character does, or what specially trained law enforcement officers can do?

Humans are prone to error. Maybe the detective blinks at the wrong time, or maybe they simply fail to catch a certain microexpression and release a suspect. An AI would be far more exact.

But before that AI becomes an expert, it needs to learn. And it will do that on Indian streets. Cortica believes this needs to happen the same way humans learn – unsupervised. Instead of using a neural network, Cortica went back to basics and studied how a rat’s brain reacts to particular stimuli, before building a system that simulates the processes that happen in actual brains.

This results in an AI that can be “held accountable” for the mistakes it makes. Well, not really, but at the very least programmers can trace back what pushed the AI to make a certain decision. The regular neural networks behind most of today’s AIs don’t allow researchers to do this and have to be retrained from the ground up when they err.
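Cortica hasn’t published how its system actually works, so the following is only a rough, minimal sketch of the two ideas described above: learning what “normal” looks like from unlabeled footage, and keeping an audit trail so a flagged decision can be traced back to what triggered it. Everything here – the `TraceableAnomalyDetector` class, the behavioral features, the z-score threshold – is a hypothetical illustration, not Cortica’s method.

```python
import numpy as np

# Hypothetical behavioral features extracted from CCTV footage
# (e.g. walking speed, seconds spent loitering, direction changes per minute).
FEATURES = ["walking_speed", "loiter_seconds", "direction_changes"]

class TraceableAnomalyDetector:
    """Flags behavioral anomalies and records why each flag was raised,
    so a reviewer can trace a decision back to the feature that triggered it."""

    def __init__(self, z_threshold=3.0):
        self.z_threshold = z_threshold
        self.mean_ = None
        self.std_ = None

    def fit(self, X):
        # Unsupervised step: learn what "normal" looks like from unlabeled clips.
        X = np.asarray(X, dtype=float)
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0) + 1e-9
        return self

    def explain(self, x):
        # Per-feature z-scores: the audit trail behind every decision.
        z = (np.asarray(x, dtype=float) - self.mean_) / self.std_
        return {name: round(float(score), 2) for name, score in zip(FEATURES, z)}

    def predict(self, x):
        scores = self.explain(x)
        triggers = {k: v for k, v in scores.items() if abs(v) > self.z_threshold}
        return ("anomaly" if triggers else "normal"), triggers


# Usage: fit on statistics from ordinary footage, then query one new observation.
rng = np.random.default_rng(0)
normal_clips = rng.normal([1.4, 20.0, 3.0], [0.2, 5.0, 1.0], size=(500, 3))
detector = TraceableAnomalyDetector().fit(normal_clips)
label, reasons = detector.predict([1.3, 95.0, 4.0])  # unusually long loitering
print(label, reasons)  # e.g. ('anomaly', {'loiter_seconds': 15.0})
```

The `explain` step is what the traceability point is about: when the detector flags a clip, a reviewer can see exactly which feature pushed it over the threshold, instead of retraining an opaque model from scratch.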

In its current state, this is far from being an effective tool, but its makers believe it has high chances of success. Eventually, police could be directed to areas where dangerous situations are likely to happen.

Years ago, a Tom Cruise movie called Minority Report made waves precisely because it featured a system that could predict crimes and prevent them from happening. Sure, there was no AI involved there and the system was much more bizarre, but the uneasiness it left people with is pretty much the same.

Crime-predicting AI – the pros and cons

Now, the pros of such a system are obvious – lowering crime rates, preventing incidents, tracking criminals much faster, and so on. But there are also plenty of cons to it.

As with any AI in the world, what matters most are the intentions of those using it. Put this particular tool in the hands of an all-controlling government and you have a situation with a high chance of persecution of citizens.

There’s also the matter of privacy. In the past decades, the need for privacy has grown. Admittedly, privacy means something different to each of us; it means one thing in the United States and another in China, for instance, because it’s conditioned by societal norms.

Take the NSA scandal, for example. When Edward Snowden spoke out about the mass surveillance practices of the US National Security Agency, the public split in two. There was the group that denounced the NSA because everyone deserves their privacy, and there was the group that shrugged its shoulders, saying they had nothing to hide and that no one who isn’t committing crimes should be afraid.

Knowing that you are being watched everywhere you go can affect people’s psyches in multiple ways. For instance, vulnerable people may feel safer walking down the street knowing that a crime-predicting tool is at work. On the other hand, a constant state of surveillance can alter the way we act when we’re in public.

Several decades ago, in the Communist era in Eastern Europe, when surveillance was the norm, people whispered in their own homes so the neighbors wouldn’t hear them talking, for fear of being reported to the authorities. Their behavior changed in accordance with their need for privacy. Similar changes might happen anywhere a crime-predicting AI is used to analyze CCTV footage.

What if my inner anger at a personal situation is misinterpreted by the AI as an intention to commit some sort of crime? What if words are taken out of context and I get picked up by the police? Humans aren’t perfect, but neither are AIs.

Another aspect is the fact that surveillance impairs mental health and performance, leading to heightened levels of stress, anxiety, and fatigue. It can also push individuals toward conformity to such a degree that their personalities are affected – they will do anything in their power not to stand out.

Following the NSA mass surveillance scandal a few years back, one researcher published a study showing that the search rate on Wikipedia for various terms – terms like “jihad” or “chemical weapon” – went down considerably. Out of fear of ending up on some federal watchlist, Wikipedia users stifled even their curiosity.

On the other hand, studies have shown that employees are more productive when they know they are being watched – a phenomenon known as the Hawthorne Effect – much like crime rates go down in areas where cameras are present. This indicates that not everything is black and white and that there are plenty of gray shades in between.

It seems we come back to the same question over and over again – are you willing to surrender your privacy for a perceived sense of heightened safety? How much of your privacy are you willing to relinquish?

The reality is that we can’t stop these types of technologies from being developed, but we can make sure that when they are deployed, we keep our representatives aware of the need for privacy. After all, these technologies can only be used to the extent that policies allow.
