The Tchebycheff Approach
Last night I read an article on Ars Technica that explained how Apple will install software onto American iPhones that will scan users' phones for child sex images through a hashing technology called NeuralHash. I posted a link to the Ars Technica article onto my Instagram story, and asked my friends for their thoughts on the topic. The insightful replies that I received inspired me to do some more research on the topic, and also to categorize my own thoughts with this blog post.
Since I’ve been an adult, the majority of my electronic products have been purchased from Apple. I’ve owned almost every iteration of the iPhone, and currently own an Apple Watch, a MacBook Pro, a MacBook Air, AirPods, and an Apple TV. In addition to this, I’ve championed the iPhone’s privacy benefits to my friends and colleagues, with the crux of my arguments relying on Apple’s ardent policy of data living on-device. As a direct result of this, I wish to be clear up front: if Apple forces this onto its consumers, then I will sell all of my Apple products and move my business elsewhere.
The Apple Privacy Letter provides a layman’s overview of how the proposed software will work:
“Apple’s proposed technology works by continuously monitoring photos saved or shared on the user’s iPhone, iPad, or Mac. One system detects if a certain number of objectionable photos is detected in iCloud storage and alerts the authorities. Another notifies a child’s parents if iMessage is used to send or receive photos that a machine learning algorithm considers to contain nudity. Because both checks are performed on the user’s device, they have the potential to bypass any end-to-end encryption that would otherwise safeguard the user’s privacy.”
After the news concerning this new feature broke, Apple followed up with a technical explanation of how the software will function:
“The hashing technology, called NeuralHash, analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value.”
In the explanation above, Apple itself uses the term “nearly identical”. This is because NeuralHash is based on a perceptual hashing function: a fingerprint of a multimedia file derived from various features of its content. Unlike cryptographic hash functions, which rely on the avalanche effect (small changes in the input leading to drastic changes in the output), perceptual hashes are “close” to one another if the underlying features are similar.
The reason I make this distinction is that perceptual hashing algorithms are notoriously vulnerable to adversarial attacks [1] [2].
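To make the contrast concrete, here is a minimal, illustrative sketch in Python. It uses a toy average-hash as a stand-in for NeuralHash (this is my own simplification, not Apple's algorithm): a small, uniform brightening of an image leaves the perceptual hash untouched, while the cryptographic hash of the same bytes changes completely.

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 "grayscale image" and a slightly brightened copy.
image   = [12, 200, 34, 180, 90, 15, 220, 47, 130, 60, 10, 240, 75, 190, 25, 160]
tweaked = [p + 2 for p in image]

# The cryptographic hashes diverge completely (avalanche effect)...
print(hashlib.sha256(bytes(image)).hexdigest())
print(hashlib.sha256(bytes(tweaked)).hexdigest())

# ...while the perceptual hashes stay identical (Hamming distance 0),
# because every pixel and the mean shifted by the same amount.
print(hamming(average_hash(image), average_hash(tweaked)))
```

That tolerance to small changes is what lets a perceptual hash match resized or re-encoded copies of the same image, and it is also the slack that adversarial attacks exploit.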
Apple, hypocritically, tries to frame this egregious invasion of privacy in the first section of their System Overview by stating:
“Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.”
The wording of this statement is a direct example of doublethink, a term coined by George Orwell in his increasingly prescient book Nineteen Eighty-Four. Apple is stating that it will protect consumer privacy by inspecting more of your data than it ever has before. Quite Orwellian, indeed.
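Rhetoric aside, what the quoted System Overview describes boils down to a threshold check over perceptual-hash matches, performed on your own hardware. The following is a deliberately oversimplified, hypothetical sketch: Apple's actual design blinds the hash database and wraps the comparison in private set intersection and threshold secret sharing, none of which is modeled here, and the specific values are placeholders of mine.

```python
# Hypothetical, heavily simplified sketch of threshold-based on-device matching.
# The hash values and the threshold below are placeholders, not real data.

KNOWN_CSAM_HASHES = {0x1A2B3C, 0x4D5E6F}   # stand-in for the NCMEC-provided database
REPORTING_THRESHOLD = 30                    # hypothetical number of required matches

def scan_photo_library(photo_hashes, known=KNOWN_CSAM_HASHES,
                       threshold=REPORTING_THRESHOLD):
    """Count how many of the device's photo hashes appear in the known set,
    and report only once that count crosses the threshold."""
    matches = sum(1 for h in photo_hashes if h in known)
    if matches >= threshold:
        return f"{matches} matches: safety vouchers become readable for human review"
    return f"{matches} matches: below threshold, nothing is revealed"

# Example: a library with two matching hashes stays below the threshold.
print(scan_photo_library([0x1A2B3C, 0x4D5E6F, 0x777777]))
```

The cryptography changes who can learn the result, not where the scanning happens; the decision about your photos is still made on the device you own.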
In effect, Apple is introducing a backdoor that squanders all of the goodwill it has built with its now obviously PR-framed privacy commercials [1] [2].
I understand the need to reduce the amount of child abuse and child sex images that are shared online, and I agree that we, as a society, should work to do so. I don’t agree, however, with the forfeiture of one’s own privacy as the price, especially when the demand comes from a company that claims to hold privacy as a fundamental human right.
I urge others to get and stay informed on this topic, and on any other issue that threatens digital privacy. A good resource is the Electronic Frontier Foundation, which describes itself as the leading nonprofit defending digital privacy, free speech, and innovation.
You can sign an open letter (I did) protesting Apple’s development and forced rollout of this feature to the consumers of its devices. The general mission statement of the letter is:
Our Request
We, the undersigned, ask that:
- Apple Inc.’s deployment of its proposed content monitoring technology is halted immediately.
- Apple Inc. issue a statement reaffirming their commitment to end-to-end encryption and to user privacy.
Apple’s current path threatens to undermine decades of work by technologists, academics and policy advocates towards strong privacy-preserving measures being the norm across a majority of consumer electronic devices and use cases. We ask that Apple reconsider its technology rollout, lest it undo that important work.
You can find out how to sign the letter at the link provided above, or sign it directly via GitHub.