The Broken Promise of Apple’s Announced Forbidden-photo Reporting System – And How To Fix It

by Ran Canetti and Gabriel Kaptchuk

Last week, Apple announced that it would deploy two systems to US iPhones with the release of iOS 15. The announcement promotes these systems as standard-bearers for preventing child exploitation while preserving the privacy and liberty of its law-abiding users.

The two systems are very different. The first is a new machine-learning powered tool that alerts parents when their children send or receive iMessage photos that might contain nudity or other material that the system considers sexually explicit. The second system will scan each user’s iCloud photos and compare them to a database of known instances of Child Sexual Abuse Material (CSAM), which Apple will obtain from the National Center for Missing & Exploited Children (NCMEC) [1]. If, and only if, a substantial number of matches are found on a user’s account, the system will release the images to Apple personnel, who will determine whether to flag the user to NCMEC (who may in turn notify law enforcement).

While both systems raise serious concerns [2], this post focuses only on the lack of accountability mechanisms in the iCloud photo scanning protocol. Indeed, this appears to be an example where the best intentions, combined with the best technological talent, still produce a system that is a potential public safety hazard. The heart of the problem is that there is no way for the user’s devices or the general public to verify properties of the images Apple is scanning for, and there is no transparency into NCMEC’s internal process. The same is also true of current CSAM scanning techniques, which do not claim to provide any privacy to users. But Apple’s new proposal is accompanied by significant security and privacy claims; the lack of accountability mechanisms makes these claims ring hollow.

We will explore cryptographic mechanisms that would increase public confidence that the system can actually deliver on its technical promise. While our suggestions cannot address all of the issues in Apple’s proposal, they illustrate that building a better technical system is both necessary and possible.

Background

To explain this rather stark criticism of a system announced by a company that has historically been a privacy bulwark, let us start with a brief (and inevitably partial) summary of the current state of affairs with CSAM scanning and reporting.  

It is current practice for companies that offer cloud storage of personal photos (or social media platforms, like Facebook) to regularly scan the photos stored on their servers to make sure that these photos contain no CSAM. This is done roughly as follows (a simplified sketch of the matching step appears after the list):

  • First, NCMEC selects a subset of the photos in its CSAM database that it deems appropriate for serving as potential matches.  
  • Next, NCMEC applies a special “hashing algorithm” to each selected photo (e.g., PhotoDNA or Facebook’s PDQ). The algorithm produces a short digest (or hash) of the photo that carries the following remarkable properties [3]: (1) given a digest, it is difficult to retrieve the original photo or anything “close” to it, and (2) the digest is “sticky”: modifications of an image that do not significantly change it do not change the digest.
  • Next, NCMEC shares the list of CSAM digests with the cloud storage provider. This is done in confidence, so the hash list is not publicly available. NCMEC shares these digests instead of the actual images because technology companies are not legally allowed to store these images.
  • The cloud storage provider applies the hashing algorithm (in the case of PhotoDNA, obtained under license and non-disclosure agreements) to each stored personal photo and compares the digest against NCMEC’s list. If a match is found then the provider’s personnel reviews the photo, and, if deemed necessary, then reports the images to NCMEC, which may choose to involve law enforcement. [4] Once the reporting process (and possibly subsequent investigations) has concluded, the storage provider must promptly delete these images, as NCMEC is the only private organization that can store CSAM images long term.
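
To make the matching step concrete, here is a highly simplified sketch. Real perceptual hash functions such as PhotoDNA and PDQ are far more sophisticated (and PhotoDNA’s details are not public); the toy “average hash” and every function name below are our own illustration of the compare-against-a-list flow, not the actual algorithms.

```python
# A toy sketch of server-side digest matching (NOT PhotoDNA/PDQ).
# Assumes the Pillow library; all names here are illustrative only.
from PIL import Image

def toy_digest(path: str, size: int = 8) -> int:
    """Downscale to grayscale and threshold each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def scan_library(photo_paths, known_digests, max_distance=5):
    """Flag photos whose digest is 'close' to any digest on the provided list."""
    flagged = []
    for path in photo_paths:
        d = toy_digest(path)
        if any(hamming(d, known) <= max_distance for known in known_digests):
            flagged.append(path)
    return flagged
```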

This process leaves a lot to be desired, both in terms of protecting the privacy of people’s personal photos, and in terms of transparency and trustworthiness of the process of flagging potential offenders. Indeed, users must allow their storage providers full access to all their photos. Furthermore, unintentional errors or even malicious behavior in the process of generating and comparing the digests may go undetected, causing undue harm to innocent users. (Note that this is an intricate, multi-step process that involves both human judgement and non-trivial algorithms, so a lot can go wrong here.) The one “saving grace” here is that the shortcomings are clear and well-documented: users should have no expectation of privacy from their storage providers, and it is clear that complete (in fact, blind) trust is put in NCMEC for the creation of the CSAM digests.

Now, Apple has for a long time wanted to enable “end-to-end encryption” for users’ photos stored on iCloud. (Here “end-to-end” means that only the user’s phone has the key that allows decrypting the photos. Furthermore, the user’s phone will not surrender this key even to Apple’s servers.) So the privacy of the user’s pictures would be preserved.

However, end-to-end encryption would make the scanning process described above impossible, as the Apple server would only see encrypted versions of the images.  Members of law enforcement and organizations like NCMEC are concerned that this would turn iCloud into a “safe haven”  for CSAM distributors and habitual consumers. This would feed the vicious cycle whereby the ability to store CSAM in a way that is not accessible to law enforcement creates more demand, which in turn encourages more of the depicted horrific acts to take place. There have thus been fervent demands from across the political spectrum that companies like Apple find a way to prevent their technology platforms from turning into a safe haven for CSAM. The proposed system appears to be Apple’s response to these demands.

Apple’s Proposed System

So, here’s the new system in a nutshell (a brief sketch of the threshold idea follows the list):

  • NCMEC prepares the digests as before (with the hash algorithm replaced by a new proprietary one of Apple’s own design).
  • The user’s phone then computes the digests of all of the user’s iCloud photos. The major difference from the prior system is that the user’s phone now must compute the digest on its own, as encryption prevents the server from doing so.
  • Apple’s server and the user’s phone then collaboratively run a cryptographic protocol that determines whether a significant number of the images held on the user’s account match the digests provided by NCMEC. If the protocol determines that this is the case, then the user is flagged and the decryption keys for the matched images are disclosed to the Apple server.
  • If the protocol determines that the user’s store does not contain a significant number of matching images, then the Apple server only learns that the user is not flagged; in particular the server does not learn how many matches actually occurred [5] or which photos are involved, if any.
  • The user’s phone learns nothing either way – in particular it does not learn whether the user has been flagged. 
  • Flagged photos are then decrypted and inspected by an Apple employee, and, if the matches are deemed to be CSAM, NCMEC is notified. This part is the same as before.
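
One standard way to obtain this “reveal only after enough matches” behavior is threshold secret sharing, a mechanism Apple’s published technical summary also describes using. The sketch below illustrates only that threshold idea with a textbook Shamir scheme over a prime field; it is not Apple’s voucher construction, and the field size, threshold, and API are our own illustrative choices.

```python
# Minimal Shamir threshold sharing, to illustrate "reveal only after t matches".
# A textbook sketch, not Apple's construction; parameters are illustrative.
import random

P = 2**127 - 1  # a Mersenne prime; fine for a toy example

def share_secret(secret: int, threshold: int, num_shares: int):
    """Split `secret` into shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# One share would be released per matching photo; with fewer than `threshold`
# matches the server learns nothing about the secret (here, a toy key).
key = 123456789
shares = share_secret(key, threshold=30, num_shares=1000)
assert reconstruct(shares[:30]) == key   # enough matches: key recovered
assert reconstruct(shares[:29]) != key   # too few: garbage, with overwhelming probability
```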

Notably, as part of the protocol, Apple publishes an encoding of the digest list that fixes the contents of the list for the rest of the protocol, while also keeping the contents of the list hidden.  (In cryptographic lingo, Apple essentially posts a commitment to the hashes.) We will return to this point later.
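
For intuition, a commitment can be as simple as hashing the digest list together with a secret salt: publishing the hash fixes the list (binding) without revealing it (hiding). The sketch below is purely illustrative; Apple’s actual encoding is woven into the setup of the set intersection protocol and is considerably more involved.

```python
# A minimal salted-hash commitment sketch: binding (the list cannot be changed
# after the fact) and hiding (the digests stay secret until opened).
# Purely illustrative; not Apple's actual encoding.
import hashlib, os

def commit(digests: list) -> tuple:
    salt = os.urandom(32)
    h = hashlib.sha256(salt)
    for d in sorted(digests):        # each d is a bytes digest
        h.update(d)
    return h.digest(), salt          # (public commitment, secret opening)

def verify_opening(commitment: bytes, digests: list, salt: bytes) -> bool:
    h = hashlib.sha256(salt)
    for d in sorted(digests):
        h.update(d)
    return h.digest() == commitment
```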

The process described in the last four bullets above has been termed a fuzzy, threshold, private set-intersection protocol with associated data. Below we will refer to it as the “set intersection protocol”.

It should be noted that the protocol is run between the Apple server and the user’s Apple device, both of which run Apple code that will likely not be public. Just like with scanning performed on the server, there does not appear to be a way for users to both use the cloud storage and opt out of the scanning.

The set-intersection protocol is quite sophisticated. In particular, it has been designed by some of the best cryptographic professionals in the world. Apple also went through the effort of providing for independent reviews by other leading cryptographers who are experts in such protocols. Indeed, it appears to be very well designed — as a stand-alone protocol for its specified task.

However, this protocol is only one component of the system. Furthermore, the rest of the system is constructed in a way that undermines the security guarantees the protocol provides, and does not take advantage of the potential benefits it enables.

What’s wrong with Apple’s system?

The crux of the matter is that the system does not provide any mechanism for the user’s phone to verify any properties of the digests used by Apple’s servers, nor does it provide any mechanism for Apple to verify any properties of the digests it receives from NCMEC.  Worse still, the system does not allow for public verifiability at any level.  Specifically:

  1. The system provides no way for the user, or the user’s phone, to verify that the digests used by Apple’s server in the protocol are indeed the same digests that NCMEC provided to Apple. This means that the user has no recourse against an absent-minded (or malicious) operator at Apple who modifies the digests provided by NCMEC – say in order to check whether the user has some specific (innocent) photos. Even worse, the system guarantees that the user’s phone will not even know if such an erroneous or malicious match took place. (The analysis documentation provided by Apple briefly mentions that one can potentially extend the system to allow NCMEC to audit it, but this extension is not included. Furthermore, even with this extension users’ phones would be unable to verify the validity of the digests.)
  2. There is no infrastructure in place that would allow Apple to verify that the hashes provided to it by NCMEC were generated via a process that would have passed public scrutiny. Instead there is full and blind trust in the ethical and technical competence of NCMEC in selecting the photos, computing the digests, and transferring them intact to Apple. While NCMEC is certainly well intentioned, accountability and transparency mechanisms are crucial for removing any doubt from the public’s mind and provide protections against individual rogue actors.
  3. There is no way for the general public to verify that the entire system operates as intended and claimed. In particular:
    • There is no way for the general public to verify that the digests provided by NCMEC to Apple have been properly generated and that the original photos are CSAM.
    • There is no way for the general public to verify that the digests used by Apple’s servers in the set intersection protocol are the same as the ones provided by NCMEC.
    • Because Apple generally does not release its code publicly, there is no way for the general public to even verify that the user’s phone runs the protocol described by Apple as intended and does not disclose the user’s decryption key in other ways.

We note that the second and third points apply also to the existing CSAM detection system described earlier – only the first point is specific to Apple’s new system. Still, the existing system does not claim to give any privacy or other protections to users, whereas this new system touts its security and privacy properties, thus giving its users (and the general public) a false sense of security.

Furthermore, the complete opacity of the process of generating and using the digests is antithetical to the principles of transparency and accountability, which are so crucial in building trustworthy governance — especially when designing systems that can so easily be co-opted by states for clandestine surveillance, or simply by overly zealous law enforcement.

Building-In Accountability and Transparency

At first glance, it may appear that all the flaws we discussed above in Apple’s system might be inevitable. After all, any policing system that is aimed at catching criminals will require both hidden elements and the trust of the public. Thus, it might seem that we would need to give up public scrutiny of these elements, and thereby risk some of our liberties, in order to allow law enforcement to keep us and our children safe. Indeed, perhaps risking some personal liberty to address this kind of heinous crime is a good exchange.

We argue that this trade-off is not necessary.  Specifically, we argue that:

  1. It is feasible to construct a system that increases public confidence in Apple, without revealing the CSAM digests or users’ photos:
    • Everyone can publicly verify that the digests used by Apple’s server in the set intersection protocol are the same digests provided by NCMEC.
  2. It is feasible to construct a system that increases public confidence in the information NCMEC provides Apple without disclosing the photos themselves.  For example:
    • Everyone can verify that Apple’s explicit-content detection algorithm (see Footnote 2) would have flagged the images processed by NCMEC as sexually explicit.
    • NCMEC could (cryptographically) assert the provenance of the photos it chooses to include, and certify the process by which each photo was categorized as CSAM.
  3. It is feasible to have the user’s phone verify the validity of the committed digests provided by Apple and refuse to participate in the set intersection protocol if the proof fails to verify.
  4. Verifying that the user’s phone does not expose the secret decryption keys in unintended ways can be done in a relatively straightforward way if Apple agrees to disclose the source code for the phone side of the protocol. [6]

Given this technical feasibility, we believe it is irresponsible of Apple to deploy the system in its current, broken state. 

Let us elaborate:

Apple’s current system provides no way to audit the commitment Apple pushes to users’ devices against NCMEC’s database — even NCMEC couldn’t do it! [7] Adding this feature is simple and is even contemplated by the authors of the specification; its omission is somewhat perplexing.

Once the protocol includes a proper auditing mechanism, Apple could use zero-knowledge proofs [8] to demonstrate that it could pass an audit. Verifying this zero-knowledge proof would essentially turn anyone into an auditor (without needing to see the secret information a real auditor would need), significantly increasing the public’s confidence that the system is functioning properly. While technically challenging, deploying such a system is likely within our current cryptographic capabilities. [9] Crucially, Apple could then enhance the current set intersection protocol by having users’ phones require a proof that the committed digests are valid, and refuse to participate in the set intersection protocol if this proof does not verify.
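
To make one concrete accountability hook tangible: if NCMEC signed the commitment to the digest list, a phone could check that signature against a pinned NCMEC public key and refuse to run the protocol otherwise. The sketch below is our suggestion, not part of Apple’s design; it assumes the Python `cryptography` package and the function names are invented. Note that a signature by itself only shows that the commitment originated with NCMEC; binding the committed list to the parameters Apple actually feeds into the set intersection protocol is exactly what the zero-knowledge proof described above would add.

```python
# Sketch (our suggestion, not Apple's design): NCMEC signs the digest-list
# commitment, and the phone refuses to run the set intersection protocol
# unless the commitment it receives carries a valid NCMEC signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

# NCMEC side (done once per published hash list):
ncmec_key = Ed25519PrivateKey.generate()
ncmec_pub = ncmec_key.public_key()

def ncmec_sign_commitment(commitment: bytes) -> bytes:
    return ncmec_key.sign(commitment)

# Phone side (the NCMEC public key would ship pinned in the OS):
def phone_should_participate(commitment: bytes, signature: bytes,
                             pinned_ncmec_pub: Ed25519PublicKey) -> bool:
    try:
        pinned_ncmec_pub.verify(signature, commitment)
        return True    # the commitment really comes from NCMEC: proceed
    except InvalidSignature:
        return False   # refuse to run the protocol
```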

Using cryptographic techniques to increase the confidence in NCMEC is more challenging because there is no technical definition of CSAM that can be written as an algorithm. In essence, while our cryptographic tools could correctly balance accountability and secrecy, they require a level of clarity and formalism that the social problem of CSAM does not admit. This means that, at least with current technology, we would need to rely on properties of the images beyond their pixels.

Apple has claimed that it has a machine learning system that can detect sexually explicit material, which it will integrate into iMessage in iOS 15. If we assume that (1) the algorithm has high precision and accuracy, and (2) all CSAM images are sexually explicit, NCMEC could prove (using zero-knowledge) that all of the images it is including in the hash list would be flagged by the new detection algorithm, without revealing the images themselves. [10] While it would still be possible for NCMEC to slip other sexually explicit, non-CSAM images into the hash list, including non-explicit images would be difficult. [11] The proof would only be as convincing as the machine learning algorithm’s accuracy and robustness, about which we currently know nothing.

An alternative method would be to have NCMEC prove the provenance of these images as a source of their authenticity, using a chain-of-custody approach. As CSAM passes through the detection, review, and collection process, each organization and individual that handled the image would use a digital signature to certify their involvement with the process. These signatures would come from law enforcement, courts, and technology companies. NCMEC could then generate a zero-knowledge proof that all of the images in the database were properly signed, and reveal aggregate statistics about the sources of these images. [12] At the very least, this would allow the public to better understand how the hash list comes into existence and how responsibility for the integrity of the list is distributed.
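
A minimal sketch of what one such chain-of-custody record might look like: each organization that handles an image signs the image’s digest together with the signatures accumulated so far, and anyone holding the record can re-check the whole chain. This is our own illustration (again assuming the Python `cryptography` package), not a deployed format; in the actual proposal the digests and signatures would sit inside a zero-knowledge proof rather than being revealed.

```python
# Illustrative chain-of-custody record for one image digest: each handler signs
# the digest plus the chain so far. Not a deployed format.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def add_custody_link(chain: list, image_digest: bytes,
                     org_name: str, org_key: Ed25519PrivateKey) -> list:
    transcript = image_digest + b"".join(sig for _, _, sig in chain)
    signature = org_key.sign(hashlib.sha256(transcript).digest())
    return chain + [(org_name, org_key.public_key(), signature)]

def verify_chain(chain: list, image_digest: bytes) -> bool:
    transcript = image_digest
    for org_name, org_pub, signature in chain:
        try:
            org_pub.verify(signature, hashlib.sha256(transcript).digest())
        except InvalidSignature:
            return False
        transcript += signature
    return True

# Example: law enforcement, then NCMEC, certify handling of one digest.
le_key, ncmec_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
digest = hashlib.sha256(b"...image bytes...").digest()
chain = add_custody_link([], digest, "law-enforcement", le_key)
chain = add_custody_link(chain, digest, "NCMEC", ncmec_key)
assert verify_chain(chain, digest)
```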

In either case, Apple could further enhance the set intersection protocol by having users’ phones verify the proof asserting appropriate behavior of NCMEC, and refuse to participate in the protocol if the proof does not verify.

Are Transparency and Accountability Enough?

Apple’s announcement was greeted by a chorus of criticism covering many of these issues from experts across multiple domains: civil society organizations, technology company leaders, and privacy researchers. At the time of writing, an open letter denouncing the system has accumulated over 7,500 signatures.

They have correctly noted that there are other problems with Apple’s proposal beyond its lack of accountability. For instance, it seems likely that we will soon see calls for this technology to be applied to other circumstances, just as happened in the UK. More autocratic countries will call for this technology’s use to detect content that they deem impermissible, such as political protest. Indeed, countries might have their law enforcement take on the role of NCMEC, and potentially also that of Apple’s employee reviewers. This move by Apple, a long-time defender of users’ privacy, shifts the Overton window on the privacy debate; the ramifications of this shift will be seismic. In other words, there appear to be no safeguards against having Apple’s cool new cryptographic tool co-opted as a tool of oppression. No amount of cryptographic improvements and zero-knowledge proofs can stop this kind of mission creep.

We do not aim to take a stand regarding whether any version of Apple’s new protocol should be deployed at all, even with all the precautions and verifications proposed here.  In fact, we are still fighting this out between ourselves.  But, we agree that any deployed system must not be vulnerable to covert abuse; determining if a system is vulnerable will come down to which properties of the system can or cannot be verified – both publicly and by the user’s phone.

Finally, this episode has reminded us of an old truth about cryptographic protocols: a proof of security is only as useful as the definition of security it relies on. When the definition is too narrow or covers too little of the system — as is the case here, where the analysis is scoped to only part of the system — provable security does not prevent abuse.

Ran Canetti is a professor of Computer Science at Boston University and the director of the Center for Reliable Information Systems and Cyber Security. Visit his personal website at https://www.cs.tau.ac.il/~canetti/. Gabe Kaptchuk is a research professor of Computer Science at Boston University. Visit his personal website at kaptchuk.com.

“iPhone 6 with lock” by MatthewKeys is licensed with CC BY-ND 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nd/2.0/

Footnotes

[1] The National Center for Missing & Exploited Children (NCMEC) is a private non-profit co-founded by Congress and child safety advocates; NCMEC continues to be partially funded by the US Congress. As part of its activities, NCMEC has been performing the laudable job of keeping track of CSAM photos. In fact, NCMEC is the only private US entity that Federal law allows to store this material.

[2] The sexually explicit image detection algorithm is not available yet, so all that we know is from Apple’s announcement – which raises a number of questions. As per the announcement, the detection algorithm scans the pictures sent and received on a child’s phone, and attempts to detect pictures that contain nudity or other objectionable material. Still, there was no mention of making the system open to public scrutiny or audit for accuracy or even basic validity, nor for potential racial or cultural biases in the determinations. Also, the design does not appear to take into consideration children who live in abusive households, or who otherwise have good reasons not to share their private lives with their parents. Indeed, making such a system available to their parents might make these children even more vulnerable to abuse. On the bright side, however, deployment of this tool does not appear to be mandatory. More importantly, Apple said that flagging of photos will remain private to the child and the parents. Neither Apple nor any other third party will be notified.

[3] These are the properties we hope that these hash functions have. In practice, there has been some debate about whether the current hash functions actually provide these properties.

[4] Having a representative of the storage provider make the value judgement that the flagged photo is indeed CSAM is a crucial part of the process. In particular, it makes sure that the reporting of the user to NCMEC (and potentially law enforcement) is deemed voluntary, rather than compelled by law enforcement. This distinction can play a crucial role in determining whether the flagged CSAM can be used as evidence in court, should the user eventually be prosecuted.

[5] In practice, this is a little more complicated. Apple has included a mechanism that prevents the exact number of matches from being learned, but it is possible that Apple could get a vague idea of the number of matches.

[6] It might also be doable while keeping the code hidden using zero-knowledge proofs, but doing so with reasonable efficiency would require additional research. We do not further explore this point here.

[7] Despite the indication given in the specification, the current protocol actually doesn’t allow auditors to validate the correctness of Apple’s work at all. That is, even if NCMEC wanted to validate that Apple included only exactly the hashes NCMEC sent it, it could not: the cryptographic design of the protocol prevents checking whether Apple has surreptitiously added images for which the system will scan. The technical documentation instead says that the system will use non-cryptographic means to ensure the correctness of the hash list, which amounts to a pinky swear. Needless to say, this is a problem. From a technical perspective, this is relatively easy to change. The issue is that for each empty slot in the hash table, Apple should generate an elliptic curve point at random. However, there is no way for an auditor to validate that the randomness used is honest; it could be that this point encodes the hash of some other image that either Apple (or law enforcement acting through Apple) would like to detect. Importantly, the same assumptions that allow for hiding the hash list make it impossible to audit the system. Fixing this problem simply requires making the randomness in the protocol come from an auditable source (e.g., a pseudorandom function chosen by NCMEC, or using polynomial interpolation to limit the degrees of freedom and random oracles when running the algorithm).
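
To illustrate the first suggested fix, the filler for each empty slot could be derived from a pseudorandom function keyed by NCMEC, so that an auditor holding the key can recompute every filler value and confirm that nothing extra was slipped into the table. The sketch below uses HMAC-SHA256 as the PRF and is purely illustrative; the real protocol would additionally need to map these outputs onto elliptic curve points.

```python
# Illustrative "auditable randomness" for the empty slots: derive each filler
# from a PRF keyed by NCMEC so an auditor can recompute and check every slot.
# The real protocol would map these outputs to elliptic curve points.
import hmac, hashlib

def filler_for_slot(ncmec_prf_key: bytes, slot_index: int) -> bytes:
    return hmac.new(ncmec_prf_key, slot_index.to_bytes(8, "big"),
                    hashlib.sha256).digest()

def audit_table(table: dict, occupied_slots: set, ncmec_prf_key: bytes) -> bool:
    """Check that every slot not holding a real digest contains the PRF filler."""
    return all(table[i] == filler_for_slot(ncmec_prf_key, i)
               for i in table if i not in occupied_slots)
```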

[8] Zero-knowledge proofs are a cryptographic technique that allows a prover to convince a verifier that some output (in this case the commitment produced by Apple) was honestly generated according to some publicly agreed-upon algorithm (i.e., the algorithm specified in the documentation) without revealing the inputs that were fed into the algorithm (the hash list). Importantly, anyone could verify these proofs, and therefore check that Apple acted honestly. Overviews of zero-knowledge proofs have been written targeting many audiences, including lawyers, the general public, and children.

[9] The system would involve NCMEC producing a vector commitment to the hash list it produced and signing it. An opening to this commitment would then be provided to Apple, who could prove using non-interactive zero-knowledge that the input set used to generate the set-intersection’s public parameters is the same set contained in the commitment. This zero-knowledge proof would need to reason over a circuit containing a permutation proof, an opening to the commitment scheme, and the public parameter generation algorithm. The only meaningful bottleneck (both theoretically and practically) would be the use of random oracles in mapping the images to elliptic curve points. Using SNARKs, Apple could optimize the system to only be computationally expensive for the server and make verification efficient.

[10] The concrete efficiency of this approach is a function of the complexity of the machine learning scanning algorithm, about which we currently know nothing. Some academic research has already investigated computing these types of functions with secure multiparty computation, a closely related cryptographic technique. Their results hint that this approach is likely more practical than it might appear at first glance.

[11] This assumes that it is difficult to find non-sexually explicit images that would be labeled as explicit. With most machine learning algorithms, it is possible to find adversarially generated images that are classified improperly. However, there is a good chance that adversarially modifying a non-explicit image to be classified as sexually explicit would change the image hash, making it useless.

[12] The biggest barrier to deploying such a system in practice is the large public key infrastructure that would be required. Specifically, all the stakeholder organizations would need to maintain signing keys. While this might seem like a more concretely tractable problem than proving the correct evaluation of the neural network, we remind the reader that creating robust public key infrastructure has proven a surprisingly difficult task in the past.
