Brain Hacking May Mitigate Computer Users’ Risky Behavior


By Joyce E. Cutler

Security researchers and software developers are hacking the brains of computer users to figure out how to best alert them to dangerous behaviors that put companies at risk.

Finding non-intrusive ways to warn users away from risky computer behavior—without annoying them to the point that they become numb to cautionary messages—is the holy grail of user-based computer security. Companies with that chalice would be able to greatly reduce the human-error risks that so often open corporate systems to cybercriminals, causing data breaches and misuse of stored personal data.

Many computer users automatically swat away repetitive dialog-box warnings of impending doom, especially when they are engaged in another activity. Now, engineers are using data analytics based on user tracking to discover what might help users pay attention to warnings. They are exploring promising techniques, such as changing background colors in warning notifications and switching formats to distinguish substantive security warnings from mundane messages. Tapping people’s brains helps the engineers design more effective user interfaces.


Rather than relying solely on training users to be more vigilant about security threats, security user interfaces must be designed to be compatible with, not opposed to, “the way our brains work,” Anthony Vance, a Brigham Young University information systems associate professor, said at the recent Enigma security conference in Oakland, Calif.

Christopher Novak, Verizon RISK Team director, told Bloomberg BNA that it’s challenging to design a system that identifies and isolates areas where people usually slip up so it becomes harder for them to repeat that behavior.

“When you think of the carbon-based life forms we’re trying to educate, they’re sitting there at their desk taking a training class and their phone’s going off, their other phone’s vibrating, their email pops up. We need to find a way to make it real, make it interactive and bring them into the experience,” Novak said.

Only Human

One big problem in security design “is that security isn’t usually the primary task: people don’t sit down at the computer to ‘do security,’” Serge Egelman, director of usable security and privacy research at the University of California, Berkeley-affiliated International Computer Science Institute, told Bloomberg BNA.

“Invariably they’re trying to do something else when security gets in the way,” he said. “All too often, system designers and developers don’t really think about the burdens they place on their users.”

The brain is trained to tune out things a person sees repeatedly, the researchers said. Once users are habituated to warnings, they are likely to click past them automatically without registering the content, they said. Alerts, then, aren’t seen as warnings so much as screen wallpaper in the daily digital experience.

One audience member at the conference, who identified himself as a Netflix Inc. engineer, said software engineers need to avoid an “arms race” stream of increasingly annoying security messages that “ends up looking like Times Square or Las Vegas Strip.” But what works to eliminate distractions needs to be tested on users, he said.

Endless Annoyance

“There isn’t an easy answer,” Lorrie Cranor, a Carnegie Mellon University professor of computer science and of engineering and public policy, told Bloomberg BNA.

One improvement would be making security software “as automatic as possible, so you don’t have to do anything to be protected,” said Cranor, the director of the school’s CyLab Usable Privacy and Security Laboratory. “To the extent that this just works, everybody wins.”

Reducing the frequency of security interruptions to users would also help, Cranor said. “There’s many times your computer interrupts you with a security message when it doesn’t need to,” she said. With smarter design, computers could run background checks and interrupt users only when necessary with a more meaningful message, she said.
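Cranor’s suggestion amounts to a triage policy: resolve routine findings silently in the background, and interrupt only for decisions the user must actually make. A minimal sketch of that idea in TypeScript follows; the severity levels, types, and handler names are invented for illustration and don’t come from any particular product.

```typescript
// Hypothetical triage policy: handle routine security findings
// automatically and interrupt the user only when a decision is required.
// All names and categories here are illustrative assumptions.

type Finding = {
  description: string;
  severity: "info" | "routine" | "needs-decision";
};

function autoRemediate(finding: Finding): void {
  // The "just works" path: e.g., quarantine a file or block a connection
  // in the background, logging the action for later audit.
  console.log(`Handled silently: ${finding.description}`);
}

function askUser(finding: Finding): void {
  // The rare, meaningful interruption Cranor describes.
  console.warn(`Action required: ${finding.description}`);
}

function triage(findings: Finding[]): void {
  for (const finding of findings) {
    if (finding.severity === "needs-decision") {
      askUser(finding);
    } else {
      autoRemediate(finding);
    }
  }
}

triage([
  { description: "Outdated TLS cipher on internal host", severity: "routine" },
  { description: "New device requests VPN access", severity: "needs-decision" },
]);
```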

Hacking the Brain

Functional magnetic resonance imaging (fMRI)—which measures brain activity through blood flow changes—and users willing to participate in experiments offer a way to test how distractions work and how best to eliminate them.

Vance and other researchers used fMRI to see what happens to the brain when a user is interrupted during a task. They discovered, among other things, that people are lousy at multitasking.

Working with Alphabet Inc.’s Google Chrome engineers, researchers identified good and bad times to present security messages over the course of a web browsing session. Security warnings popped up at high-task times, such as while watching a video, and at low-task times, such as while waiting for a program to load. The less engaged users were in the content or task before them, the better the security warnings worked, Vance said.

“The brain isn’t good at handling interruptions,” Vance said. So if security warnings are displayed without regard to a user’s task, the brain will see them as interfering interruptions rather than priority issues needing attention, he said.

For security warnings, just as in comedy, timing is everything. “If you have any insight at all into what the user is doing, the workflow of the users, then display the message at a time when the user is less engaged,” Vance said.
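That timing insight translates directly into code. As a minimal sketch, assume a web application that can observe its own input events (this is not the researchers’ or Chrome’s actual implementation; the idle heuristic, threshold, and function names are invented): non-urgent warnings go into a queue that is flushed only after the user has gone quiet for a few seconds.

```typescript
// Hypothetical sketch: defer non-urgent security warnings until the user
// appears idle, instead of interrupting mid-task. The threshold, names, and
// the idle heuristic are illustrative assumptions.

const IDLE_THRESHOLD_MS = 5000; // treat 5s without input as a low-task moment

let lastInputAt = Date.now();
const pendingWarnings: string[] = [];

// Recent input events serve as a cheap proxy for task engagement.
for (const eventName of ["mousemove", "keydown", "scroll"]) {
  window.addEventListener(eventName, () => {
    lastInputAt = Date.now();
  });
}

function queueWarning(message: string): void {
  pendingWarnings.push(message);
}

// Check once a second whether the user has gone quiet, then flush the queue.
setInterval(() => {
  if (Date.now() - lastInputAt >= IDLE_THRESHOLD_MS) {
    while (pendingWarnings.length > 0) {
      // A real UI would render a styled dialog; alert() keeps the sketch short.
      alert(`Security notice: ${pendingWarnings.shift()}`);
    }
  }
}, 1000);
```

A truly urgent warning, such as an active phishing page, would bypass the queue entirely; the deferral applies only to messages that can safely wait.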

Wiggles Work

A wiggle goes a long way toward getting users not to tune out a message that looks a lot like every other message, Vance said.

Animating a message, changing its border color, or otherwise differentiating it meant that users responded to warnings more frequently than those who saw the same old dialog box, Vance, Cranor, and Egelman found in separate studies. The implication is that security messages should look different or require users to act differently so they don’t simply swipe warnings away, Vance said.

“The potential danger is when this automatic response to something that pops up in your face carries over to a relatively rare security message,” Vance said.
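This kind of “polymorphic” warning can be approximated in a few lines of DOM code. The sketch below assumes a warning element already exists on the page; the color list and animation class names are invented for illustration and are not the studies’ actual interface.

```typescript
// Hypothetical polymorphic warning: vary the dialog's border color and
// entrance animation on each display so repeated exposures don't all look
// identical. Colors and CSS class names are illustrative assumptions.

const BORDER_COLORS = ["#d32f2f", "#f57c00", "#7b1fa2", "#1976d2"];
const ANIMATIONS = ["shake", "slide-in", "fade-pulse"]; // assumed CSS classes

function pickRandom<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

function showPolymorphicWarning(dialog: HTMLElement, message: string): void {
  dialog.textContent = message;

  // Change the look on every display to resist habituation.
  dialog.style.borderColor = pickRandom(BORDER_COLORS);
  dialog.classList.remove(...ANIMATIONS);
  dialog.classList.add(pickRandom(ANIMATIONS));

  dialog.hidden = false;
}
```

The variation is the point: because no two exposures look quite the same, the automatic swipe-away response has less chance to take hold.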

To contact the reporter on this story: Joyce E. Cutler in San Francisco at JCutler@bna.com

To contact the editor responsible for this story: Donald Aplin at daplin@bna.com

Copyright © 2017 The Bureau of National Affairs, Inc. All Rights Reserved.