So Now We Have to Worry About Artificially-Intelligent Cybercriminals?



Companies may now have to set their sights on defending against cybercriminals armed with artificial intelligence. Even as companies adopt AI to help fight cyberattacks, criminals are pursuing the same machine-learning capabilities.

Many cybersecurity companies are starting to invest in or implement AI in their security solutions, and it is giving their security teams a significant boost, according to a recently released report commissioned by McAfee.

However, the use of AI and machine-learning technologies isn't limited to the good guys. Cybercriminals are starting to use these solutions to sift through large amounts of data to "classify victims that have weaker defenses" so they can get the maximum "return on their investment," Steve Grobman, chief technology officer for McAfee, told Bloomberg BNA.
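To make the idea of "classifying victims that have weaker defenses" concrete, here is a minimal, hypothetical sketch of the kind of data-driven scoring Grobman describes. The feature names and training data are invented for illustration, and the same classification technique is what defenders use to rank which of their own hosts to harden first.

```python
# Hypothetical sketch only: score hosts by how "weak" they look, using
# invented features. The same classification technique lets defenders
# prioritize hardening, or lets attackers rank targets, as described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fabricated training data: each row describes a host as
# [open_ports, days_since_last_patch, exposed_services].
X_train = rng.integers(0, 100, size=(500, 3)).astype(float)
# Toy labels: 1 = host fell in a past penetration-test exercise, 0 = it held up.
y_train = (X_train[:, 1] > 60).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Rank a batch of new hosts by predicted probability of being a soft target.
X_new = rng.integers(0, 100, size=(5, 3)).astype(float)
scores = model.predict_proba(X_new)[:, 1]
for features, score in sorted(zip(X_new.tolist(), scores), key=lambda t: -t[1]):
    print(f"host {features} -> weakness score {score:.2f}")
```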

Cybersecurity companies often look for innovative ways to stop hackers and cybercriminals from disrupting business continuity, stealing valuable intellectual property and sensitive data, or hijacking systems for ransom. One only has to look at the recent international Petya and WannaCry ransomware attacks, which crippled thousands of computers, to see how far cybercriminals will go to collect their bounty.

Humans and machines working together to create better cybersecurity protections can be a "game changer" for the industry, the McAfee report said. Machine learning and AI allow machines "to automate the discovery of new attacks," while freeing security teams to use their unique skills to get ahead of hackers and cybercriminals.
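As an illustration of what "automate the discovery of new attacks" can mean in practice (a generic sketch, not McAfee's product), an unsupervised anomaly detector can learn what normal activity looks like and flag events that don't fit, so human analysts only review the outliers:

```python
# Generic sketch, not any vendor's implementation: an unsupervised model
# learns "normal" activity and flags outliers for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Fabricated baseline traffic: [requests_per_min, bytes_out_kb, failed_logins].
normal = rng.normal(loc=[50.0, 200.0, 1.0], scale=[10.0, 50.0, 1.0], size=(1000, 3))
# Fabricated suspicious events: bursty exfiltration and brute-force logins.
suspicious = np.array([[400.0, 5000.0, 30.0], [350.0, 4500.0, 25.0]])

detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal)

events = np.vstack([normal[:5], suspicious])
for event, flag in zip(events.tolist(), detector.predict(events)):
    label = "ANOMALY -> route to analyst" if flag == -1 else "normal"
    print(f"{label}: {[round(x, 1) for x in event]}")
```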

Grobman told Bloomberg BNA that AI and machine learning won't replace cybersecurity teams; rather, "it will change the way that cybersecurity professionals will do their jobs." AI cybersecurity solutions will be able to collect and analyze "data at massive scale" and "remove the need for humans to do the mundane and repeatable tasks," he said.

So much "energy is required" to execute those tasks that it is "preventing the professionals from looking at" cybersecurity risks in depth, Grobman said. Freed from that work, security professionals will instead be able to focus on what cybercriminals are doing "when it is never seen before," he said.

On the attackers' side, for example, AI and machine learning may allow cybercriminals to conduct spear-phishing campaigns at massive scale, Grobman said. AI allows a hacker to "automate the creation of custom content to build a spear-phishing attack to a massive number of individuals," which previously would have had a low "victim conversion rate," he said. Now, hackers can possibly get a "higher rate of return on their attack" and a "bigger return on their investment," he said.

At the end of the day, companies should not see AI and machine learning as a cybersecurity "silver bullet," and should be aware that cybercriminals may use the technology to launch attacks in the future, Grobman said.
