
What is NeuralHash? Breaking Down Apple’s New CSAM-Detection Tool

Apple has delayed plans to launch its CSAM-detection technology, NeuralHash, which it announced last month, citing feedback from policy groups and customers.

Early last month, Apple announced that it would launch a new technology called NeuralHash by the end of the year. The technology would scan users’ iCloud Photos libraries for images matching known CSAM (child sexual abuse material) without, Apple says, compromising user privacy. And, for obvious reasons, there is no way to opt out of the on-device scanning itself.

The announcement did not go over well with other tech giants, consumers, or security and privacy experts. Keep reading to learn more.

Read Also: SharePlay: Apple’s Latest Game to Stay Relevant Unveiled At WWDC 2021

What is NeuralHash, and How Does it Work?

Apple announced a technology that will allow it to detect and report known child abuse material to authorities in a way that, the company claims, preserves user privacy.

As per a TechCrunch report published in early August (1), CSAM detection is one of a cluster of new features aimed at better protecting children who use Apple’s services from online harm, including filters to block potentially sexually explicit pictures (2) sent and received via a child’s iMessage account. Another feature will intervene when a user attempts to search for CSAM-related terms via Search and Siri.

It is worth highlighting that most cloud services, such as Google, Dropbox, and Microsoft, already scan their users’ files for content that may violate their terms of service or be potentially illegal, like CSAM. Apple, however, has long resisted scanning its users’ files in the cloud, instead offering users an option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple stated that NeuralHash works on users’ devices and can detect whether a user is uploading known child abuse imagery to its servers, without decrypting the photos until a threshold is met and a sequence of checks to verify the content has cleared.

Notably, news of Apple’s technology had leaked earlier, on August 5th, when Matthew Green, a cryptography professor at Johns Hopkins University, revealed its existence in a series of tweets (3).

The news met resistance from privacy advocates, security experts, and consumers accustomed to Apple’s approach to security and privacy, which most other consumer tech companies don’t match.

Meanwhile, Apple tried to calm fears by pointing to the several layers of encryption involved, a process designed to pass through multiple steps before anything ever reaches Apple’s final manual review.

According to reports, Apple will roll out the new technology in iOS 15 and macOS Monterey, both set to launch in the coming weeks. The system works by converting the images on users’ devices into a unique string of letters and numbers, called a hash. Normally, even a slight modification to a picture changes its hash and can prevent matching. According to Apple, however, NeuralHash ensures that visually similar or identical images, such as cropped or lightly edited copies, end up with the same hash.
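To make the hashing idea concrete, here is a toy “average hash” in Python. To be clear, this is not Apple’s NeuralHash, which derives its hash from a neural network; it is only a minimal sketch of the general principle that small edits leave most of the hash unchanged, and the file names in the usage comment are placeholders.

```python
# Toy "average hash" sketch, for illustration only -- not Apple's NeuralHash.
# It shows the general idea: small edits to an image leave most hash bits
# unchanged, so near-duplicates land close to (or at) the same hash.
from PIL import Image  # Pillow


def average_hash(path, hash_size=8):
    """Shrink to hash_size x hash_size, grayscale, then record one bit per
    pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a, b):
    """Count differing bits; a small distance suggests visually similar images."""
    return bin(a ^ b).count("1")


# Hypothetical usage (file names are placeholders):
# h1 = average_hash("photo.jpg")
# h2 = average_hash("photo_cropped.jpg")
# print(hamming_distance(h1, h2))  # small for near-duplicate images
```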

Before a user uploads an image to iCloud Photos, its hash is matched against a database of known hashes of child abuse imagery provided by child protection organizations such as the National Center for Missing & Exploited Children (NCMEC). The matching uses a cryptographic technique known as private set intersection, which detects a hash match without revealing the image or alerting the user.
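Conceptually, the matching step boils down to a membership test like the sketch below, though in Apple’s design the comparison runs inside a private set intersection protocol so neither side sees the other’s data in the clear. The hash values here are made up purely for illustration.

```python
# Plain, non-private version of the matching step, for illustration only.
# In Apple's design this comparison happens inside a private set intersection
# protocol, so the device never learns the database contents and the server
# learns nothing about photos that don't match.

known_hashes = {0x3A5F9C01, 0x7B22D4E8}  # hypothetical database of known hashes


def matches_known_database(image_hash):
    """The membership test that the PSI protocol performs obliviously."""
    return image_hash in known_hashes


# Conceptually, each photo's hash would be checked at upload time:
# if matches_known_database(image_hash):
#     ...record an encrypted match result for the threshold check below...
```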

And even though Apple receives the results, it cannot read them on their own. The company uses another cryptographic technique called threshold secret sharing, which lets it decrypt the content only once a user crosses a threshold of known child abuse imagery in their iCloud Photos.

While Apple has not disclosed the threshold, it gave an example: if a secret is split into a thousand pieces and the threshold is ten pictures of child abuse content, the secret can be reconstructed from any ten of those pieces.
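Apple has not published the exact construction, but the example maps naturally onto a classic scheme like Shamir’s secret sharing. The sketch below illustrates the “thousand pieces, threshold of ten” idea under that assumption; it is a generic illustration, not Apple’s implementation.

```python
# Minimal Shamir-style secret sharing over a prime field, for illustration.
# It shows the "1,000 shares, threshold of 10" idea: fewer than 10 shares
# reveal nothing, while any 10 reconstruct the secret.
import secrets

PRIME = 2**127 - 1  # a large prime; all arithmetic is done modulo this


def split_secret(secret, n_shares, threshold):
    """Evaluate a random degree-(threshold - 1) polynomial at x = 1..n_shares."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]

    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, poly(x)) for x in range(1, n_shares + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the original secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split_secret(secret=123456789, n_shares=1000, threshold=10)
assert reconstruct(shares[:10]) == 123456789  # any 10 shares are enough
# Nine or fewer shares give no information about the secret.
```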

At that point, Apple can decrypt the matching pictures, manually verify the content, disable the user’s account, and report it to NCMEC, which then passes the report to law enforcement.

The tech giant claims that the process is more privacy-oriented than scanning files in the cloud, as NeuralHash only searches for known, not new, child abuse imagery. Apple further stated that there is a one-in-one-trillion chance of a false positive, and that an appeals process is in place in case an account is falsely flagged.

Apple has also published the full technical details of how NeuralHash operates on its official portal; the documentation has been reviewed by cryptography experts and praised by child protection organizations (4, 5).

However, even though the new feature has received broad support as an effort to curb child sexual abuse, it still involves elements of surveillance that many would feel uneasy handing over to an algorithm. Moreover, several security experts have called for more public discussion before Apple launches the technology (6).

Read Also: Facebook’s Ill-Fated Fight Against Privacy as Apple Remains Firm

Why Did Apple Decide to Launch Now and Not Sooner?

According to the US-based tech giant, its privacy-preserving CSAM detection technology didn’t exist until now. However, consumer companies such as Apple have also faced significant pressure from the US government and its allies to weaken or backdoor the encryption they use to protect users’ data to allow officials and authorities to investigate serious crimes (7, 8).

For a long time, tech giants have refused to backdoor their systems, but they have met resistance when trying to lock governments out further. Even though data stored in iCloud is encrypted in a way that even Apple can’t access, a 2020 Reuters report (9) said that Apple dropped a plan to encrypt users’ full phone backups to iCloud after the FBI raised concerns that it would harm its investigations.

Other reports also suggest that even though the announcement came as a surprise to many, the technology did not come out of nowhere. It seems that those at the top of Apple felt the winds of worldwide tech regulation might shift toward an outright ban on some encryption methods in some of its biggest markets.

Notably, in October last year, then US Attorney General Bill Barr joined representatives from the UK, India, Australia, New Zealand, Japan, and Canada in signing a letter (10) raising considerable concerns about how the implementation of encryption technology poses “sizeable challenges to public safety, including highly vulnerable members of the societies such as sexually exploited children.” In short, the letter effectively asked tech companies to get creative about how they tackle the issue.

Yet the news of Apple’s new CSAM detection feature, announced without any public discussion, ignited anxieties that the technology could be abused to flood victims with child abuse imagery and get their accounts flagged and shut down. Apple downplayed those concerns and stated that a manual review would check for any evidence of possible misuse.

Apple added that it would roll out NeuralHash in the US first and has not said when, or whether, it will roll it out internationally.

It is also worth highlighting that until recently, companies such as Facebook had to switch off their child abuse detection tools across Europe after the practice was inadvertently banned there (11). Apple stated that the feature is technically optional, since users can choose not to use iCloud Photos.

Read Also: Apple’s App Tracking Transparency and Feud with Facebook

The Public Outcry

Advocacy groups and experts gave almost uniformly negative feedback on the effort and raised concerns that it could open new avenues of abuse for actors such as governments to detect on-device data they deem objectionable (12).

Consequently, earlier this month, Apple announced a delay to its plan to roll out its CSAM detection technology, citing feedback from policy groups and customers.

The feedback has largely been negative. Notably, the Electronic Frontier Foundation stated earlier this week that it had amassed more than 25,000 signatures from consumers. Moreover, more than 100 policy and rights groups, including the American Civil Liberties Union, have called on Apple to abandon its plan to roll out the new technology (13).

In a statement last week, Apple told TechCrunch (14) that it had announced “plans for features last month meant to help protect children from abusers who use tools for communication to recruit and exploit them and restrict the spread of child sexual abuse material,” and that, based on feedback from advocacy groups, consumers, researchers, and others, it has decided to take more time over the coming months to collect input and make changes before rolling out these critically important child safety features.

Even though Apple has claimed that its NeuralHash technology is more privacy-friendly than other cloud providers’ approaches, security and privacy experts have expressed concern that highly resourced actors such as governments could abuse the system (15), whether to implicate innocent victims or to repurpose it to find other material that authoritarian nation-states deem offensive.

Within a few weeks of Apple’s announcement, researchers also showed that they could create “hash collisions” with NeuralHash, effectively tricking the system into believing that two entirely different pictures were the same (16).
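As a rough intuition for why perceptual hashes can collide, consider the toy average hash from earlier (not NeuralHash, and nothing like the published attacks in sophistication): two clearly different images can hash identically because the hash throws away most of the image’s information.

```python
# Toy collision demo using the simple average-hash idea, for illustration only.
# The published NeuralHash collisions were adversarial images crafted against
# Apple's actual model, which is far harder than this.


def average_hash_from_pixels(pixels):
    """One bit per pixel: is it brighter than the mean of the image?"""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


# Two clearly different 2x2 "images" (flattened gray values) collide, because
# the hash keeps only each pixel's relationship to the mean.
image_a = [0, 0, 255, 255]
image_b = [40, 40, 200, 200]
assert image_a != image_b
assert average_hash_from_pixels(image_a) == average_hash_from_pixels(image_b)
```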

The announcement has reportedly also caused some internal controversy at Apple (17).

In an interview, Emma Llanso, a director at the Center for Democracy & Technology (CDT), stated: “What Apple showed with its announcement is that there are technical weaknesses that they are willing to create. Everything appears so out of step with everything the company had previously been doing and saying.”

Apple declined to comment for that interview, but the company has said it would refuse requests from governments to use its system to check users’ phones for anything other than illegal child sexual abuse material.

Apple employees and outsiders have highlighted the company’s stand against the FBI in 2016, when it successfully fought a court order to create a new tool to crack into a terrorism suspect’s iPhone. Back then, the company argued that the government would inevitably use such a tool to break into other devices for other reasons.

However, Apple’s stance did not catch on more broadly, and since then the tide worldwide has leaned toward greater scrutiny of private communication.

With less publicity, Apple has made several technical decisions over the past few years that help authorities, including, as mentioned above, dropping a plan to encrypt iCloud backups (18) and agreeing to store Chinese users’ data within China.

The fundamental issue with Apple’s new plan to scan for child abuse pictures, critics say, is that the company is making a critical policy decision that it can later be forced to change now that the capability exists, in exactly the way the company warned would happen if it broke into the terrorism suspect’s phone.

The tech giant stated that it would scan only in the US, with other nations to be added one by one; only when pictures are set to upload to iCloud; and only for photos already identified by NCMEC and a small number of other groups.

However, critics added, any country’s authorities or courts could demand that any of these elements be expanded, and some of those countries, such as China, represent huge and hard-to-resist markets.

Police and other agencies will cite recent laws requiring “technical assistance” in investigating crimes, such as those in the US and Australia, to press the company to expand the new technology, the EFF stated.

According to Kurt Opsahl, the EFF’s general counsel, “the infrastructure required to launch Apple’s proposed changes makes it harder to believe that additional surveillance is not technically feasible.”

“If Apple demonstrates that, even in one market, it can conduct on-device content filtering, we can expect government authorities and lawmakers to consider it appropriate to demand its use in their own markets, and potentially for an expanded scope of material,” stated Neil Brown, a tech lawyer at UK-based Decoded Legal (19).

Read Also: Apple: The Perfect Nemesis of Tesla?

Closing Remarks

In short, the issue is not that Apple is looking for ways to prevent the proliferation of CSAM while making as few concessions to device security as possible. It is that Apple is unilaterally making a critical choice that will affect billions of customers, while also pushing its counterparts toward similar solutions, and doing so without external public input on the possible implications or assured safeguards.

Over the past month, experts have found that Apple’s NeuralHash is not as airtight as hoped. And earlier this month, Apple said it will delay the launch “to take additional time over the upcoming months to collect input and make amendments before rolling out these essential child safety features.”

According to TechCrunch’s Lucas Matney (20), the likely reason Apple announced the delay on a Friday morning ahead of a long weekend was to ensure that as few people as possible noticed it. And it is easy to see why: the delay is a significant embarrassment for Apple. As with any such delayed launch, it also suggests that the tech giant’s internal team was not adequately prepared and lacked the ideological diversity to assess the full scope of the problem it was dealing with.

This is not a dig at how Apple builds its teams, but it is a dig at Apple trying to solve an issue like this inside its walled garden while adhering to its annual iOS release schedule.

Apple has increasingly sought to make privacy its key selling point. As a consequence of this productization, it has pushed the development of its other privacy-focused features toward the same secrecy that surrounds its surface-level design changes. In June, the company announced iCloud+, which raised eyebrows because certain new privacy-focused features would only be available to iPhone users who pay for additional subscription services.

Of course, Apple can’t canvass public opinion on its every product decision. However, it should still treat its wide-ranging and trail-blazing security choices a bit differently than an average product.

The tech giant’s lack of engagement with experts and advocacy groups regarding NeuralHash is quite egregious. It also raises questions about whether the company fully appreciates how its choices on iOS can affect the broader internet.

Even though delaying the launch of the feature was a good choice, we can all hope that the company takes some time to reflect more broadly.
