Apple walks a privacy tightrope to detect child abuse in iCloud


For years, technology companies have been caught between two impulses: the need to encrypt users’ data to protect their privacy, and the need to find the worst abuse on their platforms. Now Apple has introduced a new cryptographic system that aims to solve that problem, detecting child abuse imagery stored in iCloud while, in theory, introducing no new forms of privacy violation. In doing so, it has also opened a rift between privacy and cryptography experts who see its work as an innovative new solution and those who see it as a dangerous capitulation to government surveillance.

Today, Apple introduced a new set of technical measures in iMessage, iCloud, Siri, and Search, all of which the company says are designed to prevent child abuse. A new opt-in setting in family iCloud accounts will use machine learning to detect nudity in images sent in iMessage. The system can also block those images from being sent or received, display warnings, and in some cases alert parents that a child viewed or sent them. Siri and Search will now display a warning if they detect that someone is searching for or viewing child sexual abuse material (also known as CSAM), and will offer options to seek help for that behavior or to report what was found. A rough sketch of how such an on-device check might fit together follows below.
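As a purely illustrative sketch of the warn/blur/notify flow described above, here is a short Python example. The classifier score, threshold, policy fields, and function names are all assumptions made for illustration; none of this is Apple's implementation, which runs as an on-device machine-learning model inside iMessage.

```python
from dataclasses import dataclass

@dataclass
class MessagePolicy:
    """Hypothetical per-child settings for the opt-in family iCloud feature."""
    warn_child: bool = True
    notify_parents: bool = False  # reserved for younger children in Apple's described design

def handle_image(classifier_score: float, policy: MessagePolicy,
                 threshold: float = 0.9) -> list[str]:
    """Decide how to handle an image given an on-device classifier's nudity score.

    The score, threshold, and action names are illustrative assumptions; only
    the overall blur/warn/notify flow mirrors the feature as described.
    """
    if classifier_score < threshold:
        return ["deliver"]
    actions = ["blur"]
    if policy.warn_child:
        actions.append("warn_child")
    if policy.notify_parents:
        actions.append("notify_parents")
    return actions

# Example: a flagged image for a child whose parents opted into notifications.
print(handle_image(0.97, MessagePolicy(notify_parents=True)))
```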

But in Apple’s most technically innovative, and most controversial, new feature, iPhones, iPads, and Macs will now also integrate a system that checks images uploaded to iCloud in the United States against known child sexual abuse imagery. That feature uses a cryptographic process that runs partly on the device and partly on Apple’s servers to detect those images and report them to the National Center for Missing and Exploited Children (NCMEC), and ultimately to US law enforcement.

Apple argues that none of these new CSAM-related features jeopardizes user privacy, and that even the iCloud detection mechanism uses clever cryptography to prevent Apple’s scanning system from accessing any visible images that are not CSAM. The system was designed and analyzed in collaboration with Dan Boneh, a cryptographer at Stanford University, and Apple’s announcement of the feature includes endorsements from several other well-known cryptography experts.

“I believe that the Apple PSI system provides an excellent balance between privacy and practicality, and will be very helpful in identifying CSAM content while maintaining a high degree of user privacy and minimizing false positives,” Benny Pinkas, an Israeli cryptographer at Bar-Ilan University who reviewed Apple’s system, wrote in a statement to Wired.

Child safety organizations also immediately praised Apple’s moves, arguing that they strike a necessary balance that “brings us closer to justice for survivors whose most traumatic moments are disseminated online,” Julie Cordua, chief executive officer of the child safety advocacy group Thorn, wrote in a statement to Wired.

Other cloud storage providers, from Microsoft to Dropbox, already perform detection on images uploaded to their servers. But by adding any kind of image analysis to user devices, some privacy critics argue, Apple has taken a step toward a troubling new form of surveillance and weakened its historically strong privacy stance in the face of pressure from law enforcement.

“I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for offensive content and conditionally reporting it to the authorities is a very, very slippery slope,” said Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptographic software company Symbolic Software. “I will definitely switch to an Android phone if this continues.”

Apple’s new system is not a straightforward scan of user images, either on their devices or on Apple’s iCloud servers. Instead, it is a clever and complex new form of image analysis designed to prevent Apple from ever seeing those photos unless they have already been determined to be part of a collection of multiple CSAM images uploaded by a user. The system “hashes” every image a user sends to iCloud, converting the files into strings of characters uniquely derived from those images. Then, like older CSAM detection systems such as PhotoDNA, it compares them against a large collection of known CSAM image hashes provided by NCMEC to find any matches.
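To make that hashing-and-matching step concrete, here is a minimal, hypothetical Python sketch. It substitutes an ordinary cryptographic hash (SHA-256) for a perceptual hash like PhotoDNA or NeuralHash, and an in-memory set for the NCMEC hash list; the function and variable names are invented for illustration, and none of this is Apple's code.

```python
import hashlib
from pathlib import Path

# Stand-in for a perceptual hash such as PhotoDNA or NeuralHash. A real
# perceptual hash is robust to crops and recoloring; SHA-256 is not, and is
# used here only to illustrate the matching flow.
def image_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Stand-in for the database of known CSAM hashes provided by a clearinghouse
# such as NCMEC (entries here are placeholders).
KNOWN_HASHES: set[str] = set()

def find_matches(upload_dir: Path) -> list[Path]:
    """Return uploaded images whose hashes appear in the known-hash set."""
    return [
        image for image in upload_dir.glob("*.jpg")
        if image_hash(image) in KNOWN_HASHES
    ]

if __name__ == "__main__":
    for match in find_matches(Path("uploads")):
        print(f"hash match: {match}")
```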

Apple also uses a new form of hashing called NeuralHash, which the company says can still match images despite alterations such as cropping or recoloring. Just as important for preventing evasion, the system never actually downloads the NCMEC hashes to the user’s device. Instead, it uses cryptographic techniques to convert them into a so-called “blind database” that is downloaded to the user’s phone or PC and contains only seemingly meaningless strings of characters derived from those hashes. This blinding prevents any user from obtaining the hash values and using them to bypass the system’s detection.
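The sketch below illustrates the blinding idea in simplified form. It uses HMAC with a server-held secret as a stand-in; Apple's actual protocol performs the blinding with elliptic-curve operations inside a private set intersection (PSI) scheme, so this is an assumption-laden illustration of the concept, not the real construction.

```python
import hmac
import hashlib
import secrets

# Hypothetical server-side secret. In Apple's described protocol the blinding
# is done with elliptic-curve math inside a PSI scheme, not HMAC.
SERVER_BLINDING_KEY = secrets.token_bytes(32)

def blind(known_hash: bytes) -> bytes:
    """Derive a blinded token from a known image hash using the server secret.

    Without SERVER_BLINDING_KEY, the blinded token reveals nothing useful about
    the underlying hash, so shipping the blinded database to devices does not
    hand users a list of hashes they could test their own images against.
    """
    return hmac.new(SERVER_BLINDING_KEY, known_hash, hashlib.sha256).digest()

# Server side: blind every known hash before distributing the database.
known_hashes = [b"\x01" * 32, b"\x02" * 32]           # placeholder values
blinded_database = {blind(h) for h in known_hashes}   # what devices receive

# A device holding only blinded_database cannot recover known_hashes, and it
# cannot even check one of its own hashes against the set on its own, because
# it lacks SERVER_BLINDING_KEY; the comparison happens in the server-assisted
# part of the protocol.
print(len(blinded_database), "blinded entries shipped to the device")
```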

