These new tools could help protect our pictures from AI

Plus: The race to find a better way to label AI.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Earlier this year, when I realized how ridiculously easy generative AI has made it to manipulate people’s images, I maxed out the privacy settings on my social media accounts and swapped my Facebook and Twitter profile pictures for illustrations of myself.
 
The revelation came after playing around with Stable Diffusion–based image editing software and various deepfake apps. With a headshot plucked from Twitter and a few clicks and text prompts, I was able to generate deepfake porn videos of myself and edit the clothes out of my photo. As a female journalist, I’ve experienced more than my fair share of online abuse. I was trying to see how much worse it could get with new AI tools at people’s disposal.

While nonconsensual deepfake porn has been used to torment women for years, the latest generation of AI makes it an even bigger problem. These systems are much easier to use than previous deepfake tech, and they can generate images that look completely convincing.

Image-to-image AI systems, which allow people to edit existing images using generative AI, “can be very high quality … because it’s basically based off of an existing single high-res image,” Ben Zhao, a computer science professor at the University of Chicago, tells me. “The result that comes out of it is the same quality, has the same resolution, has the same level of details, because oftentimes [the AI system] is just moving things around.” 

You can imagine my relief when I learned about a new tool that could help people protect their images from AI manipulation. PhotoGuard was created by researchers at MIT and works like a protective shield for photos. It alters them in ways that are imperceptible to us but stop AI systems from tinkering with them. If someone tries to edit an image that has been “immunized” by PhotoGuard using an app based on a generative AI model such as Stable Diffusion, the result will look unrealistic or warped. Read my story about it.
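
To get an intuition for how this kind of “immunization” works, here is a minimal sketch of the general idea rather than the MIT team’s actual code: nudge the pixels by a tiny, bounded amount so that a generative model’s internal representation of the image is thrown off. The encoder, target latent, and hyperparameters below are placeholders, and a generic projected-gradient loop stands in for PhotoGuard’s specific attacks.

```python
# Illustrative sketch only -- not PhotoGuard's implementation.
# `encoder` stands in for a diffusion model's image encoder (e.g. a VAE);
# `target_latent` could be the latent of a plain gray image.
import torch
import torch.nn.functional as F

def immunize(image, encoder, target_latent, eps=8/255, step=1/255, iters=40):
    """Add a perturbation bounded by `eps` (L-infinity) that pushes the
    image's latent toward `target_latent`, so AI edits come out warped."""
    perturbed = image.clone().detach()
    for _ in range(iters):
        perturbed.requires_grad_(True)
        loss = F.mse_loss(encoder(perturbed), target_latent)
        grad, = torch.autograd.grad(loss, perturbed)
        with torch.no_grad():
            perturbed = perturbed - step * grad.sign()          # step toward the target latent
            delta = torch.clamp(perturbed - image, -eps, eps)   # keep the change imperceptible
            perturbed = torch.clamp(image + delta, 0.0, 1.0)
    return perturbed.detach()
```

The real system goes after the editing pipeline in more sophisticated ways, but the principle is the same: the change is invisible to people and disruptive to the model.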

Another tool that works in a similar way is called Glaze. But rather than protecting people’s photos, it helps artists prevent their copyrighted works and artistic styles from being scraped into training data sets for AI models. Some artists have been up in arms ever since image-generating AI models like Stable Diffusion and DALL-E 2 entered the scene, arguing that tech companies scrape their intellectual property and use it to train such models without compensation or credit.

Glaze, which was developed by Zhao and a team of researchers at the University of Chicago, helps them address that problem. Glaze “cloaks” images, applying subtle changes that are barely noticeable to humans but prevent AI models from learning the features that define a particular artist’s style. 

Zhao says Glaze corrupts AI models’ image generation processes, preventing them from spitting out an infinite number of images that look like work by particular artists. 

PhotoGuard has a demo online that works with Stable Diffusion, and artists will soon have access to Glaze. Zhao and his team are currently beta testing the system and will allow a limited number of artists to sign up to use it later this week. 

But these tools are neither perfect nor sufficient on their own. You could still take a screenshot of an image protected with PhotoGuard and use an AI system to edit it, for example. And while they show that neat technical fixes to the problem of AI image editing exist, they will make little difference unless tech companies adopt tools like them widely. Right now, the images we post online are fair game to anyone who wants to abuse or manipulate them using AI.

The most effective way to prevent our images from being manipulated by bad actors would be for social media platforms and AI companies to provide ways for people to immunize their images that work with every updated AI model. 

In a voluntary pledge to the White House, leading AI companies have pinky-promised to “develop” ways to detect AI-generated content. However, they did not promise to adopt them. If they are serious about protecting users from the harms of generative AI, actually adopting such measures is perhaps the most crucial first step.

Deeper Learning

Cryptography may offer a solution to the massive AI-labeling problem

Watermarking AI-generated content is generating a lot of buzz as a neat policy fix for mitigating the potential harms of generative AI. But there’s a problem: the best options currently available for identifying material created by artificial intelligence are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

Meet C2PA: Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who—or what—created it. Read more from Tate Ryan-Mosley here.
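
To give a flavor of the cryptography involved, here is a minimal sketch of the underlying idea rather than the actual C2PA manifest format, which is far more elaborate: hash the content, attach a claim about who or what created it, and sign the bundle so anyone holding the matching public key can verify it later. The metadata layout and key handling below are illustrative assumptions.

```python
# Illustrative sketch of a provenance claim -- not the real C2PA format.
import json, hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def make_claim(content: bytes, creator: str, private_key) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,  # who -- or what -- created the content
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": private_key.sign(payload).hex()}

def verify_claim(content: bytes, record: dict, public_key) -> bool:
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    public_key.verify(bytes.fromhex(record["signature"]), payload)  # raises if tampered with
    return record["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

key = ed25519.Ed25519PrivateKey.generate()
record = make_claim(b"image bytes here", "camera-or-model-id", key)
assert verify_claim(b"image bytes here", record, key.public_key())
```

The signed claim travels with the content like a nutrition label, and any tampering with either the content or the claim breaks verification.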

Bits and Bytes

The AI-powered, totally autonomous future of war is here
A nice look at how a US Navy task force is using robotics and AI to prepare for the next age of conflict, and how defense startups are building tech for warfare. The military has embraced automation, even though many thorny ethical questions remain. (Wired)

Extreme heat and droughts are driving opposition to AI data centers 
The data centers that power AI models use up millions of gallons of water a year. Tech companies are facing increasing opposition to these facilities all over the world, and as natural resources are growing scarcer, governments are also starting to demand more information from them. (Bloomberg)

This Indian startup is sharing AI’s rewards with data annotators 
Cleaning up data sets that are used to train AI language models can be a harrowing job with little respect. Karya, a nonprofit, calls itself “the world’s first ethical data company” and is funneling its profits to poor rural areas in India. It offers workers compensation many times above the Indian average. (Time)

Google is using AI language models to train robots
The tech company is using a model trained on data from the web to help robots execute tasks and recognize objects they have not been trained on. Google hopes this method will make robots better at adjusting to the messy real world. (The New York Times)
