Fooling image recognition software

Image recognition software can be fooled by the addition of a simple ‘adversarial image patch’ next to any object, according to a recent report.

Adversarial patches are universal: they can be added to any scene to attack it, work under a wide variety of transformations, remain effective when printed out and photographed, and even when the patches are small they cause image classifiers to ignore the other items in the scene and report a chosen target class.
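
The paper itself should be consulted for the exact method, but the core idea can be sketched in a few lines of PyTorch: optimise the pixels of a small patch so that, pasted into scenes at random positions and rotations, it drives a pretrained classifier towards a chosen target class. The model choice, transformation ranges, learning rate, and the use of random noise in place of real training images below are illustrative assumptions rather than the authors' setup.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# A frozen, pretrained classifier stands in for the software under attack.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
TARGET = torch.tensor([859])      # ImageNet class 859: "toaster"

# The patch's pixels are the only parameters being optimised.
patch = torch.rand(3, 70, 70, requires_grad=True)   # roughly 10% of a 224x224 scene
optimizer = torch.optim.Adam([patch], lr=0.05)

def paste_randomly(image: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Paste the patch at a random position and rotation, so the optimised
    pattern stays effective under the transformations a camera introduces."""
    angle = float(torch.empty(1).uniform_(-45.0, 45.0))
    rotated = TF.rotate(patch.clamp(0, 1).unsqueeze(0), angle).squeeze(0)
    _, ph, pw = rotated.shape
    y = int(torch.randint(0, image.shape[1] - ph + 1, (1,)))
    x = int(torch.randint(0, image.shape[2] - pw + 1, (1,)))
    out = image.clone()
    out[:, y:y + ph, x:x + pw] = rotated
    return out

for step in range(1000):
    scene = torch.rand(3, 224, 224)          # stand-in for a real photograph
    attacked = paste_randomly(scene, patch)
    logits = model(((attacked - MEAN) / STD).unsqueeze(0))
    loss = F.cross_entropy(logits, TARGET)   # push the prediction towards "toaster"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```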

The report, written by Tom Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer and published on the academic preprint site arXiv, uses the example of a banana that is correctly identified on its own with 97 per cent confidence, but is reclassified by the image recognition software as a toaster with 99 per cent confidence once a 2D adversarial image patch is placed next to it.

The pattern consistently fooled image recognition software when it took up at least 10 per cent of the scene, whilst a photograph of a real toaster was far less likely to fool the software, even at a larger scale.
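
The scale effect the researchers describe could, under the same assumptions, be probed by pasting a trained patch into a scene at increasing sizes and watching the classifier's output; the patch file name and the random stand-in scene here are hypothetical.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

patch = torch.load("patch.pt")    # hypothetical file holding a trained patch
scene = torch.rand(3, 224, 224)   # stand-in for a photograph of a banana

for frac in (0.05, 0.10, 0.20):
    side = int((frac * 224 * 224) ** 0.5)    # side length giving that share of the area
    scaled = F.interpolate(patch.clamp(0, 1).unsqueeze(0), size=(side, side),
                           mode="bilinear", align_corners=False).squeeze(0)
    test = scene.clone()
    test[:, :side, :side] = scaled           # paste in the top-left corner
    with torch.no_grad():
        probs = model(((test - MEAN) / STD).unsqueeze(0)).softmax(dim=1)
    conf, cls = probs.max(dim=1)
    print(f"patch at {frac:.0%} of scene -> class {int(cls)}, {float(conf):.1%} confidence")
```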

The authors warn that because the patches are universal, an attacker does not need to know in advance what image they are attacking; such a patch could be widely distributed across the Internet for others to print out and use.
