Apple has defended its plans to implement a new feature on US iPhones that would scan for child abuse imagery, after facing backlash from both customers and privacy activists.
Last week BCS, the Chartered Institute for IT, said that it is impossible to build a system like this that can be used only for child abuse images.
But the tech giant assured customers that it would use the technology only to scan for child sexual abuse material (CSAM).
The system, called “NeuralHash”, would scan images uploaded to iCloud for suspected child abuse material by comparing the pictures against a database of known abuse images.
If the software believes illegal imagery is stored on the phone, it automatically alerts a team of human reviewers, who then contact law enforcement if the material can be verified.
Rather than using traditional “hash-matching” technology – used by online child abuse watchdogs such as the Internet Watch Foundation – the feature will use machine learning techniques to identify the offending images, according to the technical white paper published by Apple.
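For illustration, the traditional hash-matching the article contrasts with Apple's approach works roughly as in the minimal Python sketch below: an image file is reduced to a fixed fingerprint and checked against a database of fingerprints of known abuse images. The hash value, file paths and helper names here are hypothetical, and the sketch uses an exact cryptographic hash; this is not NeuralHash, which aims to recognise an image even after edits such as resizing or re-compression.

```python
import hashlib

# Hypothetical database of fingerprints of known abuse images.
# In practice such a list would be supplied by a watchdog organisation,
# not hard-coded; the value below is a placeholder.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_hash(path: str) -> str:
    """Return the SHA-256 fingerprint of a file's raw bytes."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

def is_known_image(path: str) -> bool:
    """True if the file's fingerprint appears in the known-image database."""
    return file_hash(path) in KNOWN_HASHES
```

Because an exact hash changes completely if a single byte of the file changes, this style of matching only catches identical copies of known images; perceptual approaches like Apple's are intended to close that gap.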
“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands,” said the company in an FAQ document on the new technology. “We will continue to refuse them in the future.”
Apple assured customers that the technology would be limited to detecting CSAM stored in iCloud and that it would “not accede to any government’s request to expand it.” This means the company would not scan a user’s photo album, only images shared on iCloud.
“Furthermore, Apple conducts human review before making a report to NCMEC [the US National Center for Missing & Exploited Children],” it added. “In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.”