How To Fool AI Into Seeing Something That Isn’t There

From Wired:

Our machines are littered with security holes, because programmers are human. Humans make mistakes. In building the software that drives these computing systems, they allow code to run in the wrong place. They let the wrong data into the right place. They let in too much data. All this opens doors through which hackers can attack, and they do.

But even when artificial intelligence supplants those human programmers, risks remain. AI makes mistakes, too. As described in a new paper from researchers at Google and OpenAI, the artificial intelligence startup recently bootstrapped by Tesla founder Elon Musk, these risks are apparent in the new breed of AI that is rapidly reinventing our computing systems, and they could be particularly problematic as AI moves into security cameras, sensors, and other devices spread across the physical world. “This is really something that everyone should be thinking about,” says OpenAI researcher and ex-Googler Ian Goodfellow, who wrote the paper alongside current Google researchers Alexey Kurakin and Samy Bengio.
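To make the threat concrete: the attacks studied in this line of work rely on adversarial perturbations, tiny changes to an input that a person would never notice but that push a classifier to the wrong answer. Below is a minimal, purely illustrative sketch of one widely used gradient-based attack, the fast gradient sign method from earlier work by Goodfellow and colleagues; the model, image, and label are random placeholders (assumptions for the sketch), whereas a real attack would target a trained classifier and real photos.

```python
import torch
import torch.nn as nn

# Placeholder classifier for illustration; in practice this would be a
# trained image model (e.g. a face or object recognizer).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, epsilon=0.01):
    """Nudge each pixel of x by at most epsilon in the direction that
    increases the model's loss on label y (fast gradient sign method)."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()          # gradient of the error w.r.t. the pixels
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Illustrative input: a random "image" and an arbitrary label (not real data).
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([7])
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

The point of the sketch is the size of the change: each pixel moves by at most `epsilon`, so the perturbed image looks identical to a human, yet on a trained network the prediction can flip to something that simply isn't there.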

With the rise of deep neural networks—a form of AI that can learn discrete tasks by analyzing vast amounts of data—we’re moving toward a new dynamic where we don’t so much program our computing services as train them. Inside Internet giants like Facebook and Google and Microsoft, this is already starting to happen. Feeding them millions upon millions of photos, Mark Zuckerberg and company are training neural networks to recognize faces on the world’s most popular social network. Using vast collections of spoken words, Google is training neural nets to identify commands spoken into Android phones. And in the future, this is how we’ll build our intelligent robots and our self-driving cars.
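"Training" here means fitting a network's weights to labeled examples rather than hand-writing rules. The loop below is a minimal PyTorch sketch of that idea; the random tensors stand in for real photos and labels, and the tiny model is a placeholder, not any company's actual system.

```python
import torch
import torch.nn as nn

# Placeholder classifier over 3x64x64 "photos" and 1,000 hypothetical labels.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128),
                      nn.ReLU(), nn.Linear(128, 1000))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for "millions upon millions of photos": random images and labels.
images = torch.rand(256, 3, 64, 64)
labels = torch.randint(0, 1000, (256,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong is the network right now?
    loss.backward()                         # compute gradients of that error
    optimizer.step()                        # nudge the weights to reduce it
```

Nothing in the loop encodes what a face (or a spoken command) looks like; the behavior comes entirely from the data, which is exactly why carefully crafted inputs like the adversarial examples above can lead the system astray.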

Today, neural nets are quite good at recognizing …

Continue Reading