Apple is set to introduce new features on US iPhones that scan for child abuse imagery.
The features are due to launch in the US later this year as part of updates to iOS 15 and iPadOS 15.
The system, dubbed “NeuralHash”, will scan photos uploaded to iCloud for images it believes are related to child abuse.
The feature will compare the pictures to a database of known abuse images.
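To make the idea of database matching concrete, the sketch below shows the traditional approach: compute a digest of each picture and check it against a set of known values. This is an illustration only – the hash value and file layout are hypothetical placeholders, and Apple’s system works differently, as explained below.

```python
import hashlib
from pathlib import Path

# Placeholder digest standing in for a database of known abuse
# imagery; real systems obtain such hashes from a clearinghouse.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    sha = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

def flagged_files(library: Path) -> list[Path]:
    """List files whose digest appears in the known-hash database."""
    return [p for p in library.rglob("*.jpg") if file_digest(p) in KNOWN_HASHES]
```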
If the software believes illegal imagery is stored on a phone, it will automatically alert a team of human reviewers, who will then contact law enforcement if the material can be verified.
Rather than using traditional “hash-matching” technology – used by online child abuse watchdogs such as the Internet Watch Foundation – the feature will use machine learning techniques to identify the offending images, according to the technical whitepaper published by Apple.
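A machine-learning-derived fingerprint behaves differently from a cryptographic digest: visually similar images are meant to produce nearby fingerprints, so matching becomes a bit-distance comparison rather than an equality check. The sketch below illustrates only that matching step, treating each fingerprint as an opaque integer; the model that would produce the fingerprints is not public, and the threshold shown is an arbitrary placeholder.

```python
# Sketch of perceptual-hash matching: an image "matches" if its
# fingerprint differs from a known hash in at most `threshold` bits.
# The fingerprints stand in for the output of a learned model;
# Apple has not published NeuralHash's internals.

def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known(fingerprint: int, known: set[int], threshold: int = 4) -> bool:
    """True if the fingerprint is within `threshold` bits of any known hash."""
    return any(hamming_distance(fingerprint, k) <= threshold for k in known)

# A recompressed copy of an image should land within a few bits of
# the original's fingerprint and still match.
original = 0b1011_0110_1100_0011
recompressed = original ^ 0b0000_0100_0000_0001  # two bits flipped
print(matches_known(recompressed, {original}))   # True
```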
Apple did not disclose whether these new features will be available outside of the US.
“At Apple, our goal is to create technology that empowers people and enriches their lives – while helping them stay safe,” said an Apple spokesperson. “We want to help protect children from predators who use communication tools to recruit and exploit them and limit the spread of child sexual abuse material (CSAM).”
They added: “This program is ambitious, and protecting children is an important responsibility. These efforts will evolve and expand over time.”
"We know this crime can only be combated if we are steadfast in our dedication to protecting children,” said John Clark, the president and chief executive of the National Centre for Missing & Exploited Children. “We can only do this because technology partners, like Apple, step up and make their dedication known."
"On the surface this seems like a good idea, it maintains privacy whilst detecting exploitation,"
said Adam Leon Smith, Chair of BCS, the Chartered Institute for IT’s Software Testing group. "Unfortunately, it is impossible to build a system like this that only works for child abuse images.
“It is easy to envisage Apple being forced to use the same technology to detect political memes or text messages. Fundamentally this breaks the promise of end-to-end encryption, which is exactly what many governments want – except for their own messages of course.
"It also will not be very difficult to create false positives. Imagine if someone sends you a seemingly innocuous image on the internet, that ends up being downloaded and reviewed by Apple and flagged as child abuse. That's not going to be a pleasant experience."
"As technology providers continue to degrade encryption for the masses, criminals and people with legitimately sensitive content will just stop using their services."
He added: "It is trivial to encrypt your own data without relying on Apple, Google and other big technology providers."
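As an illustration of that point, a file can be encrypted locally with an off-the-shelf library before it ever reaches a cloud service. A minimal sketch using the widely available cryptography package for Python follows; the filenames are placeholders.

```python
from cryptography.fernet import Fernet

# Generate a key and keep it locally; only the key holder can decrypt.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the file's contents before handing it to any cloud service.
with open("photo.jpg", "rb") as f:          # placeholder filename
    token = fernet.encrypt(f.read())
with open("photo.jpg.enc", "wb") as f:
    f.write(token)

# Decrypting requires the same key, which the provider never sees.
with open("photo.jpg.enc", "rb") as f:
    original = fernet.decrypt(f.read())
```

Because the provider stores only ciphertext, neither automated scanning nor human review can see the content without the locally held key.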