Using AI to Outwit Malicious AI

3AI December 13, 2020

Robust Intelligence is among a crop of companies that offer to protect clients from attempts to deceive their machine-learning systems.

IN SEPTEMBER 2019, the National Institute of Standards and Technology issued its first-ever warning about an attack on a commercial artificial intelligence algorithm.

Security researchers had devised a way to attack a Proofpoint product that uses machine learning to identify spam emails. The system produced email headers that included a “score” of how likely a message was to be spam. But analyzing these scores, along with the contents of messages, made it possible to build a clone of the machine-learning model and craft spam messages that evaded detection.
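Neither Proofpoint nor the researchers have published code, but the pattern they described, cloning a model from its leaked scores and then iterating against the clone, is straightforward to sketch. The snippet below is a minimal illustration of that model-extraction idea; `query_spam_score` is a hypothetical stand-in for the score exposed in the email header, and the probe set and threshold are assumptions, not details from the advisory.

```python
# Minimal sketch of model extraction from a leaked score (not Proofpoint's system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def query_spam_score(message: str) -> float:
    """Hypothetical stand-in for the spam score leaked in the email header."""
    raise NotImplementedError("replace with queries to the real, remote filter")

def build_surrogate(probe_messages):
    # Query the target once per probe message and record the leaked scores.
    scores = np.array([query_spam_score(m) for m in probe_messages])
    labels = (scores >= 0.5).astype(int)   # assume score >= 0.5 means "spam"

    # Fit a simple local clone on the resulting (message, label) pairs.
    # (Assumes the probes cover both spam-like and clean-looking wording.)
    vectorizer = TfidfVectorizer()
    surrogate = LogisticRegression(max_iter=1000)
    surrogate.fit(vectorizer.fit_transform(probe_messages), labels)
    return vectorizer, surrogate

def passes_clone(vectorizer, surrogate, candidate: str) -> bool:
    # An attacker can iterate on spam wording offline against the clone
    # until it looks clean, then send the result past the real filter.
    return surrogate.predict(vectorizer.transform([candidate]))[0] == 0
```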

The vulnerability notice may be the first of many. As AI is used more widely, new opportunities to exploit weak spots in the technology are emerging as well. That’s given rise to companies that probe AI systems for vulnerabilities, with the goal of catching malicious input before it can wreak havoc.

Startup Robust Intelligence is one such company. Over Zoom, Yaron Singer, its cofounder and CEO, demonstrates a program that uses AI to outwit the AI that reads checks, an early application of modern machine learning.

Singer’s program automatically tweaks the intensity of a few pixels that make up the numbers and letters written on the check. This alters what a widely used commercial check-scanning algorithm perceives. A scammer equipped with such a tool could empty a target’s bank account by modifying a legitimate check to add several zeros before depositing it.
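Singer did not describe how his program chooses which pixels to touch, but a toy version of the idea is easy to imagine: repeatedly nudge individual pixel intensities and check whether the scanner's output has flipped to the amount the attacker wants. The sketch below assumes a hypothetical black-box `read_amount` function and a crude random search; it illustrates the general technique, not Robust Intelligence's actual tool.

```python
# Toy black-box pixel perturbation against a hypothetical check reader.
import numpy as np

def read_amount(image: np.ndarray) -> str:
    """Hypothetical stand-in for a commercial check-scanning model we can only query."""
    raise NotImplementedError

def perturb_check(image: np.ndarray, target_amount: str,
                  queries: int = 500, step: int = 25,
                  rng=np.random.default_rng(0)):
    adv = image.astype(int)                               # work in a wider dtype
    for _ in range(queries):                              # bounded query budget
        y = rng.integers(0, image.shape[0])
        x = rng.integers(0, image.shape[1])
        # Nudge one pixel's intensity up or down by a small amount.
        adv[y, x] = np.clip(adv[y, x] + rng.choice([-step, step]), 0, 255)
        if read_amount(adv.astype(np.uint8)) == target_amount:
            return adv.astype(np.uint8)                   # the model now misreads the check
    return None                                           # query budget exhausted
```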

“In a lot of applications, very, very small changes can lead to dramatically different results,” says Singer, a professor at Harvard who is running his company while on sabbatical in San Francisco. “But the problem runs deeper; it’s just the very nature of how we perform machine learning.”

Robust Intelligence’s tech is being used by companies including PayPal and NTT Data, as well as a large ride-share company; Singer says he can’t describe exactly how it is being used, for fear of tipping off would-be adversaries.

The company sells two tools: one that can be used to probe an AI algorithm for weaknesses and another that automatically intercepts potentially problematic inputs—a kind of AI firewall. The probing tool can run an algorithm many times, examining the inputs and outputs and seeking ways to trick it.
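Robust Intelligence has not published how either product works internally. As a rough illustration of the "AI firewall" idea, though, the pattern is a thin wrapper that runs cheap screening checks on every input before it reaches the protected model. The sketch below is generic; the `pixel_range_check` example screen is an assumption, not the company's method.

```python
# Generic sketch of an "AI firewall" wrapper, assuming simple per-input checks.
from dataclasses import dataclass
from typing import Any, Callable

# Each check inspects an input and returns (ok, reason).
Check = Callable[[Any], tuple[bool, str]]

@dataclass
class FirewalledModel:
    model: Callable[[Any], Any]      # the production model being protected
    checks: list[Check]              # cheap screens run before every prediction

    def predict(self, x: Any) -> Any:
        for check in self.checks:
            ok, reason = check(x)
            if not ok:
                # Block the request and surface why, rather than letting a
                # potentially adversarial input reach the model.
                raise ValueError(f"input rejected: {reason}")
        return self.model(x)

def pixel_range_check(x) -> tuple[bool, str]:
    # Example screen: reject images whose values fall outside the range
    # seen during training, a crude signal of tampering or drift.
    import numpy as np
    arr = np.asarray(x)
    return bool(arr.min() >= 0 and arr.max() <= 255), "pixel values out of expected range"
```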

Such threats are not just theoretical. Researchers have shown how adversarial algorithms can trick real-world AI systems, including autonomous driving systems, text-mining programs, and computer vision code. In one oft-mentioned case, a group of MIT students 3D-printed a turtle that Google software recognized as a rifle, thanks to subtle markings on its surface.

“If you’re developing machine-learning models right now, then you really have no way to do some kind of red teaming, or penetration testing, for your machine-learning models,” Singer says.

Singer’s research focuses on perturbing the input of a machine-learning system to make it misbehave and designing systems to be safe in the first place. Tricking AI systems relies on the fact that they learn from examples and pick up subtle changes in ways that humans do not. By trying multiple carefully chosen inputs—for example, showing altered faces to a face-recognition system—and seeing how the system responds, an “adversarial” algorithm can infer what tweaks to make in order to produce an error or a particular result.
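One published family of techniques for this kind of query-based probing estimates, from score changes alone, which direction to push the input. The sketch below shows that finite-difference style estimation against a hypothetical `true_label_confidence` oracle; it is a generic illustration of the approach, not Singer's algorithm.

```python
# Estimating an attack direction purely from a model's scores (query access only).
import numpy as np

def true_label_confidence(image: np.ndarray) -> float:
    """Hypothetical oracle: the deployed model's confidence in the correct label."""
    raise NotImplementedError

def estimate_attack_direction(image, n_probes: int = 100, sigma: float = 2.0,
                              rng=np.random.default_rng(0)) -> np.ndarray:
    base = true_label_confidence(image)
    grad = np.zeros_like(image, dtype=float)
    for _ in range(n_probes):
        noise = rng.normal(size=image.shape)
        # Correlate each random probe with the change it causes in the score.
        delta = true_label_confidence(np.clip(image + sigma * noise, 0, 255)) - base
        grad += delta * noise
    return grad / n_probes

def craft_adversarial(image, epsilon: float = 4.0) -> np.ndarray:
    # Step against the estimated gradient to push the true-label score down,
    # keeping the perturbation small enough that a human barely notices.
    direction = -np.sign(estimate_attack_direction(image))
    return np.clip(image + epsilon * direction, 0, 255)
```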

Along with the check-fooling system, Singer demonstrates a way of outwitting an online fraud-detection system as part of probing for weaknesses. This fraud system looks for signs that someone making a transaction is actually a bot, based on a wide range of characteristics, including the browser, the operating system, the IP address, and the time.

Singer also shows how his company’s tech can deceive commercial image-recognition and face-recognition systems with subtle tweaks to a photo. The face-recognition system concludes that a subtly doctored photo of Benjamin Netanyahu actually shows basketball player Julius Barnes. Singer gives the same pitch to prospective customers worried about how their newfangled AI systems could be subverted, and what that might do to their reputation.

Some big companies that use AI are starting to develop their own AI defenses. Facebook, for instance, has a “red team” that tries to hack its AI systems to identify weak spots.

Zico Kolter, chief scientist at the Bosch Center for Artificial Intelligence, says research on defending AI systems is still at an early stage. The German company hired Kolter, an associate professor at Carnegie Mellon who works on designing systems so that they are provably robust, to help ensure that the AI systems it is developing, including those used in automotive products, are not vulnerable. Kolter says most efforts to protect commercial systems aim to head off attacks rather than make sure they are not vulnerable.

In October, Bosch and 11 other companies, including Microsoft, Nvidia, IBM, and MITRE, released a software framework for probing AI systems for weak spots. A Gartner report from 2019 predicts that by 2022, 30 percent of all cyberattacks on AI systems will themselves leverage some form of AI.

Aleksander Madry, an associate professor at MIT who works on machine learning, says it still isn’t clear how to guarantee that AI systems are safe. He adds that the vulnerabilities being exploited reflect a more fundamental weakness with modern AI. But making AI systems more robust may also improve their intelligence. In a paper to be presented at a conference later this month, he and colleagues show that image-recognition algorithms that can withstand adversarial attacks also can be applied more effectively to new tasks, making them more useful.

The alien way that AI systems work doesn’t just make them vulnerable to attack; it also means they will fail in surprising ways. This is likely to result in problems in areas like medical imaging and finance, Madry says. “AI models are just very eager students, and they will do everything they can to solve this narrow problem,” he says. “Every company [using AI] needs to think about that.”
