Three researchers from Kyushu University have published a paper describing a means of reliably fooling AI-based image classifiers with a single well-placed pixel.
It’s part of the wider field of “adversarial perturbation” research, which aims to disrupt machine-learning models; the field started with some modest achievements, but has been gaining ground ever since.
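The paper itself attacks deep convolutional networks using differential evolution; as a minimal sketch of the underlying idea, here is a random search for a single-pixel change that flips a classifier’s prediction, run against a toy linear model (the 8×8 images, the model, and all names are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "classifier": a fixed two-class linear model over a
# flattened 8x8 grayscale image. (The paper attacks real CNNs.)
W = rng.normal(size=(2, 64))

def predict(img):
    """Return the predicted class (0 or 1) for an 8x8 image."""
    return int(np.argmax(W @ img.flatten()))

def one_pixel_attack(img, target, trials=2000):
    """Random search over (row, col, value) for a single-pixel change
    that flips the prediction to `target`. Returns the adversarial
    image, or None if no flip was found within the trial budget."""
    for _ in range(trials):
        r, c = rng.integers(0, 8, size=2)
        v = rng.uniform(0.0, 1.0)
        candidate = img.copy()
        candidate[r, c] = v
        if predict(candidate) == target:
            return candidate
    return None

img = rng.uniform(0.0, 1.0, size=(8, 8))
original = predict(img)
adv = one_pixel_attack(img, target=1 - original)
```

The attack may fail on a given image (hence the `None` return); the published method uses differential evolution rather than pure random search precisely to make the single-pixel search reliable.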
Robot law pioneer Ryan Calo (previously) has published a “roadmap” for an “artificial intelligence policy…to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration.”
Calo cites a lot of our favorites, like Cathy O’Neil and Julia Angwin.
The OpenAI researchers were intrigued by a claim that self-driving cars would be intrinsically hard to fool (tricking them into sudden braking maneuvers, say), because “they capture images from multiple scales, angles, perspectives, and the like.”
So they created a set of image-presentation techniques that reliably trick image classifiers even across those variations, showing that the claim doesn’t hold.
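The trick for surviving multiple viewing conditions is to optimize the adversarial perturbation against the *average* behavior of the classifier over a family of transformations, rather than a single view. A crude sketch of that idea, using circular shifts as the stand-in transformations and a toy linear classifier (everything here is an illustrative assumption, not OpenAI’s actual setup):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 64))  # toy two-class linear "classifier" (illustrative only)

def logits(img):
    return W @ img.flatten()

def predict(img):
    return int(np.argmax(logits(img)))

def shift(img, dr, dc):
    """Stand-in image transformation: circular shift by (dr, dc)."""
    return np.roll(np.roll(img, dr, axis=0), dc, axis=1)

def eot_step(img, target, shifts, eps=0.05):
    """One signed-gradient step on the target-class margin averaged over
    transformations -- a crude expectation-over-transformation update."""
    grad = np.zeros_like(img)
    for dr, dc in shifts:
        # Gradient of the margin w.r.t. the shifted image (constant for a
        # linear model), mapped back through the inverse shift.
        g = (W[target] - W[1 - target]).reshape(8, 8)
        grad += shift(g, -dr, -dc)
    return np.clip(img + eps * np.sign(grad / len(shifts)), 0.0, 1.0)

img = rng.uniform(size=(8, 8))
target = 1 - predict(img)
shifts = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
adv = img
for _ in range(20):
    adv = eot_step(adv, target, shifts)
```

Because every update pushes the margin that is *averaged* over the transformation family, the resulting image tends to fool the model under any of those shifts, not just one.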
TrueFace.AI knows if it’s looking at a real face or just a photo of one. (Image: Ian Waldie/Getty Images.) By Molly Sequin, 2017-07-08 UTC. Facial recognition technology is more prevalent than ever before. It’s being used to identify people in airports, put a stop to child sex trafficking, and shame jaywalkers.