Hackers are weaponizing Artificial Intelligence

Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links: humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then to design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitor, composing and distributing more phishing tweets and luring in far more victims over the same period.

The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users.
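
A quick back-of-the-envelope calculation, sketched in Python below, shows what those figures mean in practice (the numbers are the rounded ones reported above, so treat the output as approximate):

    # Rough throughput comparison using the figures reported above.
    # (attempts, victims, tweets per minute), as stated in the article.
    snap_r = (800, 275, 6.75)    # SNAP_R, the automated spear-phisher
    human  = (129, 49, 1.075)    # Forbes writer Thomas Fox-Brewster

    for name, (attempts, victims, rate) in [("SNAP_R", snap_r), ("human", human)]:
        minutes = attempts / rate             # approximate time spent sending
        print(f"{name}: {victims} victims in ~{minutes:.0f} min, "
              f"about {victims / minutes:.2f} victims per minute")

The machine's edge is volume: roughly 2.3 victims a minute against about 0.4 for the human, even though the human's per-tweet hit rate was actually slightly higher.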

The era of artificial intelligence is upon us, yet if an informal poll of Black Hat attendees conducted by security firm Cylance is to be believed, a surprising number of infosec professionals are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It’s a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is already being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.

“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, in an interview with Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.” These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.

Scales of intelligence

Marc Goodman, author of Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It, says he isn’t surprised that so many Black Hat attendees see weaponized AI as being imminent, as it’s been part of cyber attacks for years.

“What does strike me as a bit odd is that 62 percent of infosec professionals are making an AI prediction,” Goodman told Gizmodo. “AI is defined by many different people many different ways. So I’d want further clarity on specifically what they mean by AI.”

Indeed, this is likely the issue on which expert opinions diverge.

The funny thing about artificial intelligence is that our conception of it changes as time passes, and as our technologies increasingly match human intelligence in many important ways. At the most fundamental level, intelligence describes the ability of an agent, whether it be biological or mechanical, to solve complex problems. We possess many tools with this capability, and we have for quite some time, but we almost instantly start to take these tools for granted once they appear.

Centuries ago, for example, the prospect of a calculating machine that could crunch numbers millions of times faster than a human would’ve most certainly been considered a radical technological advance, yet few today would consider the lowly calculator anything particularly special. Similarly, the ability to win at chess was once considered a high mark of human intelligence, but ever since Deep Blue defeated Garry Kasparov in 1997, this cognitive skill has lost its former luster. And so on and so forth with each passing breakthrough in AI.

Today, rapid-fire developments in machine learning (whereby systems learn from data and improve with experience without being explicitly programmed), natural language processing, neural networks (systems modeled on the human brain), and many other fields are likewise lowering the bar on our perception of what constitutes machine intelligence. In a few years, artificial personal assistants (like Siri or Alexa), self-driving cars, and disease-diagnosing algorithms will likewise lose, unjustifiably, their AI allure. We’ll start to take these things for granted, and disparage these forms of AI for not being perfectly human. But make no mistake—modern tools like machine intelligence and neural networks are a form of artificial intelligence, and to believe otherwise is something we do at our own peril; if we dismiss or ignore the power of these tools, we may be blindsided by those who are eager to exploit AI’s full potential, hackers included.

A related problem is that the term artificial intelligence conjures futuristic visions and sci-fi fantasies that are far removed from our current realities.

“The term AI is often misconstrued, with many people thinking of Terminator robots trying to hunt down John Connor—but that’s not what AI is,” said Wallace. “Rather, it’s a broad topic of study around the creation of various forms of intelligence that happen to be artificial.”

Wallace says there are many different realms of AI, with machine learning being a particularly important subset at the current moment.

“In our line of work, we use narrow machine learning—which is a form of AI—when trying to apply intelligence to a specific problem,” he told Gizmodo. “For instance, we use machine learning when trying to determine if a file or process is malicious or not. We’re not trying to create a system that would turn into SkyNet. Artificial intelligence isn’t always what the media and science fiction have depicted it as, and when we [infosec professionals] talk about AI, we’re talking about broad areas of study that are much simpler and far less terrifying.”
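
Wallace doesn’t describe Cylance’s actual models, but a minimal sketch of the kind of narrow machine learning he’s talking about, scoring files as malicious or benign from static features, might look like the following (the features and labels here are synthetic placeholders, not real malware data):

    # Sketch of narrow ML for malware triage. Synthetic data only;
    # this is an illustration, not any vendor's real pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((1000, 4))          # pretend: size, entropy, imports, strings
    y = (X[:, 1] > 0.7).astype(int)    # toy rule standing in for real labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

In a real deployment the features would come from static and dynamic analysis of the file, and the labels from known-good and known-bad corpora.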

Evil intents

These modern tools may be less terrifying than clichéd Terminator visions, but in the hands of the wrong individuals, they can still be pretty scary.

Deepak Dutt, founder and CEO of Zighra, a mobile security startup, says there’s a high likelihood that sophisticated AI will be used for cyberattacks in the near future, and that it might already be in use by countries such as Russia, China, and some Eastern European countries. In terms of how AI could be used in nefarious ways, Dutt has no shortage of ideas.

“Artificial intelligence can be used to mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on, which can be used for hacking [a person’s] accounts,” Dutt told Gizmodo. “It can also be used to automatically monitor e-mails and text messages, and to create personalized phishing mails for social engineering attacks [phishing scams are an illicit attempt to obtain sensitive information from an unsuspecting user]. AI can be used for mutating malware and ransomware more easily, and to search more intelligently and dig out and exploit vulnerabilities in a system.”
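
Much of the PII mining Dutt describes reduces to pattern extraction at scale, and the same primitive powers defensive data-loss-prevention scanners. A toy version, with patterns deliberately simplified for illustration:

    # Toy PII scanner of the sort found in data-loss-prevention tools.
    # Patterns are simplified; real scanners add validation and locale
    # handling (phone and date formats vary widely by country).
    import re

    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "dob":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    }

    def scan(text):
        return {kind: pat.findall(text) for kind, pat in PII_PATTERNS.items()}

    print(scan("Reach jane.doe@example.com or 555-867-5309, born 1/2/1984"))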

Dutt suspects that AI is already being used for cyberattacks, and that criminals are already using some sort of machine learning capabilities, for example, by automatically creating personalized phishing e-mails.

“But what is new is the sophistication of AI in terms of new machine learning techniques like Deep Learning, which can be used to achieve the scenarios I just mentioned with a higher level of accuracy and efficiency,” he said. Deep Learning, also known as hierarchical learning, is a subfield of machine learning that utilizes large neural networks. It has been applied to computer vision, speech recognition, social network filtering, and many other complex tasks, often producing results superior to human experts.
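
To make “large neural networks” slightly more concrete, here is a deliberately tiny fully connected network on synthetic data; real deep-learning systems stack many more layers over vastly more data, but the training loop is recognizably the same:

    # A deliberately tiny neural network, for illustration only.
    import numpy as np
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.random((2000, 20)).astype("float32")   # synthetic feature vectors
    y = (X.sum(axis=1) > 10).astype("float32")     # toy binary labels

    model = keras.Sequential([
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))         # [loss, accuracy]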

“Also the availability of large amounts of social network and public data sets (Big Data) helps. Advanced machine learning and Deep Learning techniques and tools are easily available now on open source platforms—this combined with the relatively cheap computational infrastructure effectively enables cyberattacks with higher sophistication.”

These days, the overwhelming majority of cyberattacks are automated, according to Goodman. The human hacker going after an individual target is far rarer, and the more common approach now is to automate attacks with tools of AI and machine learning—everything from scripted Distributed Denial of Service (DDoS) attacks to ransomware, criminal chatbots, and so on. While it can be argued that automation is fundamentally unintelligent (conversely, a case can be made that some forms of automation, particularly those involving large sets of complex tasks, are indeed a form of intelligence), it’s the prospect of a machine intelligence orchestrating these automated tasks that’s particularly alarming. An AI can produce complex and highly targeted scripts at a rate and level of sophistication far beyond any individual human hacker.

Indeed, the possibilities seem almost endless. In addition to the criminal activities already described, AIs could be used to target vulnerable populations, perform rapid-fire hacks, develop intelligent malware, and so on.

Staffan Truvé, CEO of the Swedish Institute of Computer Science (SICS) and Chief Scientist at Recorded Future, says that, as AI matures and becomes more of a commodity, the “bad guys,” as he puts it, will start using it to improve the performance of attacks, while also cutting costs. Unlike many of his colleagues, however, Truvé says that AI is not really being used by hackers at the moment, claiming that simpler algorithms (e.g. for self-modifying code) and automation schemes (e.g. to enable phishing schemes) are working just fine.

“I don’t think AI has quite yet become a standard part of the toolbox of the bad guys,” Truvé told Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks already is that the traditional methods still work—if you get what you need from a good old fashioned brute force approach then why take the time and money to switch to something new?”

AI on AI

With AI now part of the modern hacker’s toolkit, defenders are having to come up with novel ways of defending vulnerable systems. Thankfully, security professionals have a rather potent and obvious countermeasure at their disposal, namely artificial intelligence itself. Trouble is, this is bound to produce an arms race between the rival camps. Neither side really has a choice, as the only way to counter the other is to increasingly rely on intelligent systems.

“For security experts, this is a Big Data problem—we’re dealing with tons of data—more than a single human could possibly produce,” said Wallace. “Once you’ve started to deal with an adversary, you have no choice but to use weaponized AI yourself.”

To stay ahead of the curve, Wallace recommends that security firms conduct their own internal research, and develop their own weaponized AI to fight and test their defenses. He calls it “an iron sharpens iron” approach to computer security. The Pentagon’s advanced research wing, DARPA, has already adopted this approach, organizing grand challenges in which AI developers pit their creations against each other in a virtual game of Capture the Flag. The process is very Darwinian, and reminiscent of yet another approach to AI development—evolutionary algorithms. For hackers and infosec professionals, it’s survival of the fittest AI.
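
The details of DARPA’s challenges aside, the evolutionary idea itself is easy to sketch: keep a population of candidates, score them, and breed the fittest. A toy version with an invented fitness function (evolving bitstrings toward all ones) shows the loop:

    # Toy evolutionary algorithm: survival of the fittest, in miniature.
    # The fitness function is a placeholder; in the security setting it
    # would score how well a candidate attacks or defends.
    import random

    random.seed(1)
    GENES, POP, GENERATIONS = 32, 50, 60

    def fitness(bits):                # placeholder objective: count of 1s
        return sum(bits)

    def mutate(bits, rate=0.02):
        return [b ^ (random.random() < rate) for b in bits]

    def crossover(a, b):
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP // 5]       # the fittest fifth survives
        pop = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(POP - len(elite))
        ]
    print("best fitness:", fitness(max(pop, key=fitness)), "out of", GENES)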

Goodman agrees, saying “we will out of necessity” be using increasing amounts of AI “for everything from fraud detection to countering cyberattacks.” And in fact, several start-ups are already doing this, partnering with IBM Watson to combat cyber threats, says Goodman.

“AI techniques are being used today by defenders to look for patterns—the antivirus companies have been doing this for decades—and to do anomaly detection as a way to automatically detect if a system has been attacked and compromised,” said Truvé.
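
Truvé doesn’t name specific tooling, but one common off-the-shelf form of the anomaly detection he mentions is an isolation forest fit to a host’s normal telemetry; anything the model isolates easily gets flagged. A sketch on synthetic data:

    # Anomaly detection over synthetic host telemetry. Illustrative
    # only; not any vendor's actual method.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Pretend columns: [requests per minute, logins per hour]
    normal = rng.normal(loc=[100, 5], scale=[10, 1], size=(500, 2))
    spikes = rng.normal(loc=[400, 30], scale=[20, 3], size=(5, 2))
    X = np.vstack([normal, spikes])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    flags = model.predict(X)          # -1 means anomaly, 1 means normal
    print("flagged:", int((flags == -1).sum()), "of", len(X), "samples")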

At his company, Recorded Future, Truvé is using AI techniques to do natural language processing to, for example, automatically detect when an attack is being planned and discussed on criminal forums, and to predict future threats. 
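
The article doesn’t say how Recorded Future’s models work internally, but a bare-bones stand-in for the idea, flagging forum posts that read like attack planning, could be a TF-IDF text classifier; the handful of training snippets below are invented purely for illustration:

    # Bare-bones text classifier for attack-planning language.
    # The tiny training set is invented; a real system would train on
    # large labeled corpora in multiple languages.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = [
        "selling fresh botnet access, DDoS for hire",      # planning
        "anyone have an exploit for this CMS version?",    # planning
        "great writeup on patching your router firmware",  # benign
        "how do I set up a home VPN for privacy?",         # benign
    ]
    labels = [1, 1, 0, 0]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(posts, labels)
    print(clf.predict_proba(["looking to rent a stresser this week"])[:, 1])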

“Bad guys [with AI] will continue to use the same attack vectors as today, only in a more efficient manner, and therefore the AI-based defense mechanisms being developed now will to a large extent also be usable against AI-based attacks,” he said.

Dutt recommends that infosec teams continuously monitor the cyber attack activities of hackers and learn from them, continuously “innovate with a combination of supervised and unsupervised learning based defense strategies to detect and thwart attacks at the first sign,” and, like in any war, adopt superior defenses and strategy.

The bystander effect

So our brave new world of AI-enabled hacking awaits, with criminals becoming increasingly capable of targeting vulnerable users and systems. Computer security firms will likewise lean on AI in a never-ending effort to keep up. Eventually, these tools will escape human comprehension and control, working at lightning-fast speeds in an emerging digital ecosystem. It’ll get to a point where both hackers and infosec professionals have no choice but to hit the “go” button on their respective systems, and simply hope for the best. A consequence of AI is that humans are increasingly being kept out of the loop.

Source: http://gizmodo.com
