Privacy Concerns for Dual-Use AI Image Clarity Tools

Written by M.Rosenquist, 2 years ago

AI is a powerful tool. The original photo (left) was cleaned up with an AI deep-learning algorithm, restoring tremendous clarity (image source: Murilo Gustineli).

The AI researchers describe their work in the paper Towards Real-World Blind Face Restoration with Generative Facial Prior (https://arxiv.org/pdf/2101.04061), and code is available for others to try on their project webpage: https://xinntao.github.io/projects/gfpgan.

The GFP-GAN system (a Generative Facial Prior, GFP, combined with a Generative Adversarial Network, GAN), published by Xintao Wang, Yu Li, Honglun Zhang, and Ying Shan, restores images far better than previous AI systems. The results are nothing short of impressive.

As a privacy professional, when I see these transformational examples I have grave concerns about undesired monitoring of the population: the ability to clean up distant or low-quality surveillance images could be used to identify and track people.

Digital cameras are widely deployed by businesses and governments. A major limitation is the clarity of images at a distance, which makes it very difficult to positively identify subjects. With AI image-enhancement tools, identifying people at great distances or with poor-resolution cameras could be automated at scale. That could allow tracking people wherever they go, cataloging everyone they speak with, and, if eventually combined with lip reading, eavesdropping on conversations at a distance.
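A toy numpy sketch (my own illustration, not the paper's algorithm) shows why restoring distant faces is fundamentally ill-posed: downsampling is many-to-one, so two visibly different high-resolution faces can produce the same low-resolution image. A restorer must therefore hallucinate the missing detail from a learned facial prior, which is exactly what GFP-GAN's generative prior provides.

```python
import numpy as np

# Average-pooling by 4x stands in for imaging a face at a distance
# with a low-resolution sensor.
def downsample(img, factor=4):
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)

# Two clearly different 16x16 "faces" ...
face_a = rng.random((16, 16))
face_b = face_a + rng.normal(0, 0.1, (16, 16))  # different high-frequency detail
# ... can still yield identical 4x4 low-res images if their per-block
# averages match. Force that here to make the point exact:
face_b += np.kron(downsample(face_a) - downsample(face_b), np.ones((4, 4)))

assert not np.allclose(face_a, face_b)                      # distinct at high res
assert np.allclose(downsample(face_a), downsample(face_b))  # identical at low res
```

Since the low-resolution image cannot distinguish the two faces, any "restoration" is a guess, and a system confident enough to automate identification from such guesses is exactly what worries me.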

However, you may be shocked to know that I am equally excited, because this is also a potentially PRIVACY-ENHANCING technology! The same type of AI can be used to perturb clear images in ways that undermine facial-recognition algorithms.

Imagine this tech embedded in privacy-supporting cameras that modify pixels in ways unnoticeable to the human eye but that thwart AI systems from conducting bulk identification of people in the video feed. Humans would still see unblurred images, but automated processes would be prevented from harvesting identifiable personal data at scale. Such a usage could strike a desirable balance between security and privacy.
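The mechanism behind this idea is the well-known adversarial-example effect: tiny, bounded pixel changes can move an image a long way in a recognizer's feature space. The sketch below is a toy, using a random linear embedding as a hypothetical stand-in for a real face-recognition network (it is not the GFP-GAN method), but it shows the amplification: each pixel moves by an imperceptible amount, yet the embedding shifts dramatically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a face-recognition model: a fixed random
# linear embedding of the image. Real recognizers are deep networks,
# but the amplification effect is the same in spirit.
W = rng.normal(size=(32, 64 * 64))

def embed(img):
    return W @ img.ravel()

img = rng.random((64, 64))  # a "clear" image on a 0-1 pixel scale

# FGSM-style perturbation: nudge every pixel by at most eps in the
# direction that most increases one embedding coordinate. eps = 0.004
# is far below a visible change on a 0-1 scale.
eps = 0.004
direction = np.sign(W[0]).reshape(64, 64)
adv = np.clip(img + eps * direction, 0.0, 1.0)

pixel_change = np.abs(adv - img).max()
embed_shift = np.linalg.norm(embed(adv) - embed(img))

assert pixel_change <= eps + 1e-12       # invisible to a human viewer
assert embed_shift > 100 * pixel_change  # large move in recognition space
```

A privacy-supporting camera would need this effect to hold against the specific recognizers deployed in the wild, which is an ongoing research problem, but the asymmetry between human and machine perception is what makes the idea plausible at all.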

It is up to everyone to decide how such tools will be used.


Comments

This looks terrific. Can it just be run with code blocks in Colab?


Not sure, I didn't run the code myself.

M.Rosenquist

Technological developments are very helpful in our lives and always make it easier to do what we want.


AI is a double-edged sword. It can both enhance and undermine security and privacy.


It is an exciting feature of AI. To make the effect more robust, the perturbations should be irreversible, like one-way hash functions.


This type of arms race has been going on for some time now.
