Three researchers from Kyushu University have published a paper describing a means of reliably fooling AI-based image classifiers with a single well-placed pixel.
It’s part of the wider field of “adversarial perturbation,” which seeks to disrupt machine-learning models; the field started with some modest achievements, but has been gaining ground ever since.
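To get a feel for the idea, here is a minimal sketch of an adversarial perturbation against a toy linear classifier. This is not the paper's one-pixel method, and the model, weights, and perturbation budget are all made up for illustration; the point is just that a small, deliberately chosen nudge to the input can flip the model's decision.

```python
import numpy as np

# Toy linear classifier: score = w . x; predict class 1 if score > 0.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
x = rng.normal(size=64)

orig_pred = int(w @ x > 0)

# Perturb each feature in the direction that pushes the score past the
# decision boundary (the sign of the gradient of the score w.r.t. x).
# eps is chosen to be just large enough to flip the sign of the score.
eps = 2.0 * abs(w @ x) / np.sum(np.abs(w))
direction = -np.sign(w) if orig_pred == 1 else np.sign(w)
x_adv = x + (eps + 1e-6) * direction

adv_pred = int(w @ x_adv > 0)
print(orig_pred, adv_pred)  # the prediction flips after the perturbation
```

Real attacks on deep networks work the same way in spirit, but use the network's gradients (or, for the one-pixel attack, a gradient-free search) to find the smallest change that crosses a decision boundary.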
Robot law pioneer Ryan Calo (previously) has published a “roadmap” for an “artificial intelligence policy…to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration.”
Calo cites a lot of our favorites, like Cathy O’Neil and Julia Angwin, and neatly…
The OpenAI researchers were intrigued by a claim that self-driving cars would be intrinsically hard to fool (tricking them into sudden braking maneuvers, say), because “they capture images from multiple scales, angles, perspectives, and the like.”
So they created a set of image-presentation techniques that reliably trick image classifiers, showing that their…