Artificial intelligence

A new way to train AI systems could keep them safer from hackers

July 10, 2020
[Image: CT scan of an elderly man with an old occipital infarct. Callista Images / Getty]

The context: One of the greatest unsolved flaws of deep learning is its vulnerability to so-called adversarial attacks. These perturbations, which can look random and are often undetectable to the human eye, can make an AI system go completely awry when added to its input. Stickers strategically placed on a stop sign, for example, can trick a self-driving car into seeing a 45-mile-per-hour speed limit sign, while stickers on a road can confuse a Tesla into veering into the wrong lane.
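To make the idea concrete, below is a minimal sketch of one common way such perturbations are crafted, the fast gradient sign method (FGSM). This is an illustrative example, not the specific attacks used against stop signs or Teslas; the `model`, `x`, and `y` names are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return a copy of input x nudged in the direction that most increases
    the model's loss, bounded by epsilon so the change stays nearly invisible."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient: a tiny, structured tweak
    # that can nonetheless flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Despite the perturbation being capped at a small epsilon per pixel, the model's output can change drastically, which is exactly what makes these attacks so hard to spot.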

Safety critical: Most adversarial research focuses on image recognition systems, but deep-learning-based image reconstruction systems are vulnerable too. This is particularly troubling in health care, where such systems are often used to reconstruct medical images like CT or MRI scans from raw scanner measurements (x-ray projections in the case of CT). A targeted adversarial attack could cause such a system to reconstruct a tumor in a scan where there isn't one.

The research: Bo Li (named one of this year’s MIT Technology Review Innovators Under 35) and her colleagues at the University of Illinois at Urbana-Champaign are now proposing a new method for training such deep-learning systems to be more robust to these attacks and thus more trustworthy in safety-critical scenarios. They pit the neural network responsible for image reconstruction against another neural network responsible for generating adversarial examples, in a style similar to GAN algorithms. Through iterative rounds, the adversarial network attempts to fool the reconstruction network into producing things that aren’t part of the original data, or ground truth. The reconstruction network continuously tweaks itself to avoid being fooled, making it safer to deploy in the real world.
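The sketch below illustrates what such a GAN-style alternating training loop could look like. It is a simplified assumption of the setup described above, not the authors' code: `recon_net`, `attacker_net`, the optimizers, and the epsilon bound are all hypothetical names and values.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(recon_net, attacker_net, measurements, ground_truth,
                              recon_opt, attack_opt, eps=0.05):
    # 1) The attacker network proposes a bounded perturbation to the raw
    #    measurements and is updated to maximize the reconstruction error.
    delta = eps * torch.tanh(attacker_net(measurements))
    recon_adv = recon_net(measurements + delta)
    attack_loss = -F.mse_loss(recon_adv, ground_truth)  # minimize the negative = maximize error
    attack_opt.zero_grad()
    attack_loss.backward()
    attack_opt.step()

    # 2) The reconstruction network is then updated to stay close to the
    #    ground truth even under the attacker's freshly improved perturbation.
    delta = eps * torch.tanh(attacker_net(measurements)).detach()
    recon_loss = F.mse_loss(recon_net(measurements + delta), ground_truth)
    recon_opt.zero_grad()
    recon_loss.backward()
    recon_opt.step()
    return recon_loss.item()
```

Alternating these two updates over many rounds is what pushes the reconstruction network toward outputs that remain faithful to the ground truth even when its inputs have been tampered with.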

The results: When the researchers tested their adversarially trained neural network on two popular image data sets, it was able to reconstruct the ground truth better than other neural networks that had been “fail-proofed” with different methods. The results still aren’t perfect, however, which shows the method still needs refinement. The work will be presented next week at the International Conference on Machine Learning. (Read this week’s Algorithm for tips on how I navigate AI conferences like this one.)

