Here’s how Apple handles child abuse images on iCloud and email

A search warrant issued in a Homeland Security Investigations case provides a look at how Apple detects and reports child abuse images uploaded to iCloud or sent via its email servers, while protecting the privacy of innocent customers.

The first stage of detection is automated, using a system common to most technology companies.

For every child abuse image already identified by the authorities, a “hash” is created. This is effectively a digital signature for that image, and technology companies can have their systems automatically scan for images that match these hashes.
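As a rough illustration of the idea, the sketch below hashes image bytes and checks them against a hypothetical list of known hashes. SHA-256 is used purely to keep the example simple; real systems rely on perceptual hashes (such as Microsoft’s PhotoDNA) that still match after resizing or re-encoding, and every name and hash value here is invented for the example.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Digest acting as the image's 'digital signature'.

    SHA-256 keeps this example simple; production systems use
    perceptual hashes (e.g. PhotoDNA) that survive resizing and
    re-encoding.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical hash list of known images, derived here from dummy bytes
# purely so the example runs; a provider would load real hashes from a
# clearinghouse database.
KNOWN_HASHES = {image_hash(b"previously-identified-image-bytes")}

def matches_known_image(data: bytes) -> bool:
    """True if the uploaded image's hash appears in the known list."""
    return image_hash(data) in KNOWN_HASHES

print(matches_known_image(b"previously-identified-image-bytes"))  # True
print(matches_known_image(b"ordinary-holiday-photo-bytes"))       # False
```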


Forbes explains what happens when a match is found:

Once a hash match is found, all a technology company has to do is contact the relevant authority, typically the National Center for Missing and Exploited Children (NCMEC). The NCMEC is a nonprofit organization that serves as the national clearinghouse for law enforcement for information on the online sexual exploitation of children. It typically calls the police after being alerted to illegal content, often [prompting] a criminal investigation.
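Put together, the automated step amounts to “on match, file a report with the clearinghouse.” The sketch below is a speculative outline of that flow, reusing image_hash and matches_known_image from the earlier snippet; MatchReport and submit_to_ncmec are invented names, and NCMEC’s actual intake (the CyberTipline) is not an API that any of this reflects.

```python
from dataclasses import dataclass

@dataclass
class MatchReport:
    account_id: str
    matched_hash: str

def submit_to_ncmec(report: MatchReport) -> None:
    # Stand-in for a provider's reporting integration; in reality this
    # would be a formal report to NCMEC, not a print statement.
    print(f"Reported hash {report.matched_hash} "
          f"for account {report.account_id}")

def handle_upload(account_id: str, data: bytes) -> None:
    """Automated step: hash the upload and, on a match, notify NCMEC."""
    if matches_known_image(data):  # from the earlier sketch
        submit_to_ncmec(MatchReport(account_id, image_hash(data)))
```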

However, Apple appears to go a step further, manually reviewing the images to confirm they are suspect before providing law enforcement with the name, address, and phone number associated with the relevant Apple ID.
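Apple’s variant can be pictured as a human-review gate inserted between the hash match and any disclosure of account details. Again, this is a speculative sketch with invented names (review_queue, release_subscriber_info); it only illustrates the ordering the article describes.

```python
from queue import Queue

review_queue: "Queue[tuple[str, str]]" = Queue()

def handle_upload_with_review(account_id: str, data: bytes) -> None:
    """Apple-style variant: a match is queued for a human check
    rather than reported automatically."""
    if matches_known_image(data):  # from the earlier sketch
        review_queue.put((account_id, image_hash(data)))

def release_subscriber_info(account_id: str) -> None:
    # Hypothetical helper: hand the account holder's name, address,
    # and phone number to law enforcement.
    print(f"Disclosing subscriber details for {account_id}")

def process_next_review(reviewer_confirms: bool) -> None:
    """Only a confirmed match leads to disclosure; a false hash hit
    (or a hash-list error) is simply discarded."""
    account_id, _matched_hash = review_queue.get()
    if reviewer_confirms:
        release_subscriber_info(account_id)
```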

Apple’s approach seems ideal. Because images are reviewed only after they match the hash of a known image, there should be a very low risk of innocent people’s photos being intercepted and viewed. In addition, the iPhone maker appears to perform a manual check before reporting, which acts as a safeguard against a hash error and ensures the company only hands over an Apple ID owner’s personal data once the match has been confirmed.