Rand Hindi: Snowden Changed Public Perception on Encryption – Interview

A few days ago, on the morning of the first day of the ITBN conference in Budapest, I had the opportunity to have a chat with one of the speakers at the event – Dr. Rand Hindi.

Young and dressed unlike any of the other speakers, Dr. Hindi stands out in more ways than one. He started coding when he was 10 years old and founded his first startup just four years later. By the age of 21, he had a Ph.D. Now he's a data scientist, an entrepreneur, and plays an important role in shaping the way the world reacts to all things digital.

I wanted to discuss encryption, the threats that arise around it, privacy, and Artificial Intelligence with him, and Dr. Hindi had many interesting things to say. Without revealing too much, here is our interview.


Question: Hello! I wanted to chat with you about encryption, more specifically the end-to-end encryption that is now in almost all of our apps, a trend kickstarted by the NSA scandal. Ever since then, and especially in recent months, we've seen law enforcement agencies and politicians trying to get around these protections and demanding backdoors by playing the "terrorist card." How do you believe this idea should be treated?

Answer: In France, I am part of something called the National Digital Council, where we advise the government and the general public on anything digital. Just last week we published a report on our view of encryption and, when it comes to end-to-end encryption, what we believe is that there is no such thing as a backdoor only for the good guys. If you build a backdoor into an encryption system, you're actually opening the possibility for criminals to use it too. So you're not protecting against the actual criminals, because they can build their own apps, and you're opening the door for other people to abuse it.

The second thing is that you cannot ban crypto – you cannot forbid someone from building their own encrypted app, because the algorithms are public, they're open source. The whole culture in the crypto community is to publish everything so that people can try to break it. Anybody today could, in a weekend, build their own WhatsApp-style encrypted app. That's actually what most terrorist groups do – they have their own messaging apps that they build themselves. They don't care if you ban WhatsApp – it's not going to make a difference in how they operate. So if you're banning crypto, you're basically banning it for good people and giving exclusive use of the technology to the bad guys.
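To illustrate how low that barrier really is, here is a minimal sketch of the kind of freely available end-to-end encryption primitive anyone can build a messaging app around. The library choice (the open-source PyNaCl bindings to libsodium) and the names are my own illustration, not something Dr. Hindi referenced.

```python
# A minimal sketch of end-to-end encryption between two parties using PyNaCl.
# Only public keys are ever exchanged; the plaintext never leaves the endpoints.
from nacl.public import PrivateKey, Box

# Each party generates a keypair locally.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

A weekend project would still need message delivery and key exchange around this, but the cryptography itself is exactly this accessible.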

 

Q: So it’s basically an excuse…

A: Yeah, it’s an excuse. I think the only reason they want to bypass the end-to-end encryption is because of mass surveillance. There’s this wrong belief that mass surveillance actually works, whereas in fact, what we’ve seen in every single case of terrorist attacks is that individual, highly targeted, human surveillance, augmented by technology, is much more effective. Mass surveillance doesn’t work. It has never proven to be efficient in any kind of way.

So what we’re telling people nowadays is “don’t try to put backdoors”, instead focus on targeting specific people and then put all the resources necessary in that. Because you could probably crack someone’s phone, spend a few tens of millions of dollars, computing power and a few months to break the encryption code for something like a messaging app for one person, but you couldn’t do that for multiple persons. It’s not even that we’re preventing individual surveillance, it’s we’re preventing mass surveillance. This is the topic – it’s not about preventing individual surveillance, but mass surveillance.

 

Q: Following the London terrorist attack, law enforcement complained about not having access to WhatsApp messages, which are protected by end-to-end encryption.

A: But there are many ways you can get around it, especially on Android phones. iPhones, I think, are more secure in the sense that they are controlled by Apple's hardware – it's very hard for someone to inject malware into the phone. For Android phones, there was a study recently showing that a large number of popular consumer Android phones had spyware in them because people on the assembly line added it. Google had nothing to do with it, and there was nothing it could do about it, because it gives Android to the manufacturer, who is then responsible for making sure it doesn't get tampered with. Today, what most people are doing is not trying to break the crypto, but trying to put spyware on the phone itself to intercept messages before they are encrypted.

Rand Hindi, ITBN 2017

Q: How would implementing such rules against encryption affect our lives and the secrets we try to keep offline?

A: I’ll give you an example. The same algorithms are used to secure your messages, but also to secure what you do online. If you break crypto, you also break the security of online systems. You can log into any computer, you can see what people are doing on the Internet – no one will ever be safe again without crypto. Period. The governments themselves need crypto. Everybody needs it. And you know what’s funny? Every government official uses those encrypted messaging apps between themselves so it’s very hypocritical.

 

Q: There’s been a lot of talk about encryption and cyber security since Snowden, including mass surveillance. Do you think that anything has changed since then?

A: I believe consumer perception has changed. You have to keep in mind that before Snowden, hardly anyone used an encrypted messaging app. Everyone used SMS; no one really thought about it. But after Snowden, and after WhatsApp got bought by Facebook, Telegram came out, which was basically a copycat of WhatsApp – same design, same features, but with encryption added. All of a sudden, people started paying attention. Overnight, it had tens of millions of users. Because of that, WhatsApp decided to do the same and added end-to-end encryption by default, which many consider the stronger form of encryption.

The entire perception now is that your messaging app should be encrypted. If you're building a messaging app today that's not encrypted… it's not going to get used. And that happened in six months. I believe that the more we show people they can have the same technology with privacy, the more they're going to demand it. Keep in mind that people shouldn't have to care – it should be the default everywhere.

It’s one of those things that the random person will always choose convenience over privacy, which is the reason why we should make privacy by design.

 

Q: How about emails? Google had a project about further securing emails but ended up open sourcing it.

A: If we send each other a message through Gmail accounts, it's encrypted. But it's not encrypted on Google's servers; it's encrypted in transit, which is different. With email, if you don't do this, everyone can read your messages. Gmail has a feature where Gmail-to-Gmail emails are encrypted on the way, but not end-to-end, because Google still has them. Email, I think, is very interesting – it's very hard to imagine end-to-end encrypted email, because there are so many providers and you need everyone to play along.

The reason we can do it in apps is that WhatsApp doesn't work with Telegram – people don't send WhatsApp messages to Telegram users – whereas email is an open protocol. Everyone is building their own email clients. You need everyone to follow, which I think they should, but it's a lot less likely that every single email server will upgrade to a secure system. You might get to maybe 80% end-to-end encrypted email, but never to 100%.
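The distinction Dr. Hindi draws between "encrypted in transit" and "end-to-end encrypted" can be shown in a few lines. In this hypothetical sketch (standard Python library only; the server name, addresses, and password are placeholders), the connection to the mail server is protected with TLS, yet the provider still sees the message body in plaintext.

```python
# Transport encryption only: TLS protects the connection to the mail server,
# but the server (and any relaying provider) still handles the plaintext body.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("This text is only encrypted on the wire.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                  # upgrade the connection to TLS
    server.login("alice@example.com", "app-password")  # placeholder credentials
    server.send_message(msg)                           # the provider can still read the body
```

End-to-end encryption would mean encrypting the body itself before it ever reaches the server, which is exactly the step the whole email ecosystem would have to agree on.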

 

Rand Hindi, ITBN 2017

Q: Experts believe Artificial Intelligence will have a growing role in cybersecurity and encryption. What’s your opinion?

A: I think AI is key to that. Today, most security is done through machine learning. The amount of data that you need to handle and process to detect a potential threat is huge and a human alone can no longer do it. I’m very confident that machine learning applied to cyber security is going to be a really big topic for the next ten years.
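As a concrete, if toy, illustration of machine learning applied to threat detection, here is a hypothetical sketch using scikit-learn's Isolation Forest on synthetic network-flow features. The data, features, and thresholds are my own assumptions for illustration, not anything Dr. Hindi described.

```python
# Toy anomaly detection for security monitoring: flag unusual network flows.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend features per flow: [bytes sent, connection duration in seconds]
normal_traffic = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(1000, 2))
suspicious = np.array([[50_000, 0.1], [45_000, 0.2]])  # bursty, exfiltration-like flows

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies
print(model.predict(suspicious))  # expected: [-1 -1]
```

The point is the scale, not the model: a pipeline like this can score millions of flows per hour, which no human analyst could do by hand.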

But then there is the other side as well – how do you encrypt machine learning systems so that you can guarantee privacy? A lot of people today are starting to work on encrypted neural networks and encrypted machine learning, where both the data you feed the machine and the predictions it makes are encrypted, using things like homomorphic encryption. I'm pretty confident that this will also be a trend in machine learning.
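The homomorphic-encryption idea he mentions – computing on data while it stays encrypted – can be sketched with the open-source python-paillier (`phe`) library. The "model" below is just a hypothetical weighted sum; the library choice and the numbers are my own illustration.

```python
# A tiny sketch of homomorphic encryption: the server computes a prediction
# on encrypted inputs and never sees the plaintext data.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The user encrypts their features before sending them anywhere.
features = [2.0, 5.0, 1.0]
encrypted = [public_key.encrypt(x) for x in features]

# The server computes a weighted sum directly on the ciphertexts.
weights = [0.4, 0.1, -0.3]
encrypted_score = sum(w * x for w, x in zip(weights, encrypted))

# Only the user, holding the private key, can read the prediction.
print(private_key.decrypt(encrypted_score))  # ≈ 1.0  (0.4*2 + 0.1*5 - 0.3*1)
```

Paillier only supports additions and scalar multiplications, so real encrypted neural networks need heavier schemes, but the privacy property is the same: the party doing the computation learns nothing about the data.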

 

Q: OK, so we have machine learning to detect the threats we already know, but can we get AI to detect new ones? How accurate do you think an AI could get at detecting new threats?

A: That’s hard because machine learning and AI today can only be as good at performing a task as they were taught to do. The task of learning to recognize a new threat is a very complicated one which I believe requires general intelligence because you need to zoom out and take a system’s view of what you need, which an AI is incapable of doing.

The best combination today would be a human saying "I think this could be a threat" – the human giving the parameters and the AI looking for it. But the human still needs to give that direction for the AI to look into. I don't believe AI can find a new threat if that threat is unrelated to anything that existed before.

 

Q: There was a recent study where an AI scanned 35,000 faces of people on a dating site and could then tell with high accuracy whether the people in the pictures were gay or not. Since there are some 70 countries where being gay is illegal, that poses quite a few issues. Do you believe a line needs to be drawn between what AI should and shouldn't be allowed to do?

A: You know, the issue isn't that you could recognize that someone was gay; the issue is that the study was flawed. It's bad science. The data set was very biased – there were only white Americans, for example. The way they taught the machine included human bias – the people selected as the training sample were people the researcher thought looked gay. It's not an objective data set; it's very biased. The study looks like a joke. And when you look behind it, the person behind the study actually sells profiling services and technology, so it looks more like a PR stunt.

The worst thing that could happen is someone using that, deciding a person is gay, and targeting them because of it, creating real issues. It's a bad machine learning algorithm that could create problems for someone simply because it didn't work.

 

Q: So there are lines that should be drawn…

A: But this is forbidden anyway in Europe. When you look at the GDPR, automated processing of things like religion, sexual orientation, and so on is forbidden. You would need explicit consent from the user to even do that. It doesn't prevent you from building an app and taking a picture of someone, but it is illegal.

Rand Hindi, ITBN 2017

Q: Tell me about Snips – how is it better than other smart home technology?

A: At Snips, we sell technology, B2B, to companies that want to add voice to their products. Let's imagine you're building a coffee machine or a TV and you want people to talk to the TV – we sell the technology to do that.

There are other companies who do this – Amazon, Google – offering similar technologies. The difference is that we're the only company on the planet today whose technology runs on the device you're talking to. Everybody else processes the voice in the cloud, and this is very important for multiple reasons.

The first one is that we guarantee privacy by design. You have to keep in mind that voice is a biometric marker – it can identify you personally, and you cannot change your voice. By processing in the cloud, you're taking the risk that someone can steal your voice, and we've already seen people use AI to mimic someone else's voice – so there's a big issue around identity theft. By doing this on the device, because we don't have to send anything to the cloud, obviously nobody can access the data aside from you.

The second thing is that it makes us the only voice technology on the planet today that is GDPR compliant – because nobody else can comply. If you have a Google Home device, theoretically, if you really follow the law, every single person who comes into your house has to give consent for their voice to be captured. Imagine having ten people over, and each of them having to give consent before starting to talk. When you do it on the device, you don't have to worry about consent, because the data never leaves.

 

Q: Do you believe the concept of privacy needs to be redefined? What we considered private a decade ago is no longer so because we share things on Facebook, photos, etc.

A: I believe this is an anomaly, and that for many, many years people didn't realize that digital privacy was as important as physical privacy. When you're home, you still want to close the door when you're talking to someone privately, and the same when you go to the bathroom. That is never going to change. It's just that people didn't realize what their digital footprints would say about them. On your phone, your email is dematerialized – it doesn't look real, you can't touch it or taste it. But this is now changing. I'm convinced that over the next ten years privacy is going to become a major topic, and companies that fail to offer privacy by design will be dead.

 

Q: I was having a conversation about browser cookies and how ad companies have all this information about us, even if it's anonymous, after tracking us across all websites. What if someone managed to trace that ID number that ad companies target ads to back to a person – how could that affect us?

A: You can install plugins on your computer to show how you are being tracked – Mozilla has one, I think. Do Not Track failed because it lacked adoption; when you have so many parties involved, it's hard to move everyone towards it. But this will change in the near future.
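For context, Do Not Track was technically just a single HTTP header that browsers could attach to requests and that websites were free to ignore, which is part of why adoption never materialized. A hypothetical request setting it, using the `requests` library, looks like this:

```python
# Sending the (now largely ignored) Do Not Track signal: a single HTTP header.
import requests

response = requests.get("https://example.com", headers={"DNT": "1"})
print(response.status_code)
```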

Because of the GDPR, companies have no choice but to change. It turns out that when you look at the law on profiling, you need to get consent, and you need to be able to offer a service that doesn't require profiling and that people can adopt easily. Ad trackers need to offer an easy way to opt out of tracking – not on each website, but at the level of the ad tracking service itself.

The simple fact that we have a new law in Europe that applies to all countries will also create a common understanding of privacy among the general public. Why would we have this major law if it weren't important?
