Wednesday, September 13, 2017

How Artificial Intelligence Will Make Cyber Criminals More ‘Efficient’

The era of artificial intelligence is upon us, though there’s plenty of debate over how AI should be defined, much less over whether we should start worrying about an apocalyptic robot uprising. The latter question recently ignited a highly publicized dispute between Elon Musk and Mark Zuckerberg, with Zuckerberg arguing that it was irresponsible to “try to drum up these doomsday scenarios.”

In the near term, however, it seems more than likely that AI will be weaponized by hackers in criminal organizations and governments to enhance now-familiar forms of cyberattack, such as identity theft and DDoS attacks.

A recent survey has found that a majority of cybersecurity professionals believe that artificial intelligence will be used to power cyberattacks in the coming year. Cybersecurity firm Cylance conducted the survey at this year’s Black Hat USA conference and found that 62 percent of respondents believe that “there is high possibility that AI could be used by hackers for offensive purposes.”

Artificial intelligence can be used to automate elements of cyberattacks, making it even easier for human hackers (who need food and sleep) to conduct a higher rate of attacks with greater efficacy, writes Jeremy Straub, an assistant professor of computer science at North Dakota State University who has studied AI decision-making. For example, Straub notes that AI could be used to gather and organize the databases of personal information needed to launch spearphishing attacks, reducing the workload for cybercriminals. Eventually, AI may enable more adaptive and resilient attacks that respond to the efforts of security professionals and seek out new vulnerabilities without human input.

Rudimentary forms of AI, such as simple automation, have already been used to perpetrate cyberattacks at massive scale, including last October’s DDoS attack that shut down large swathes of the internet.

“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, to Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.”

The flip side of these predictions is that, even as AI is used by malicious actors and nation-states to generate a greater number of attacks, AI will likely prove to be the best hope for countering the next generation of cyberattacks. The implication is that security professionals need to keep up in their arms race with hackers, staying apprised of the latest and most advanced attacker tactics and creating smarter solutions in response.
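To make that defensive side of the arms race a bit more concrete, here is a minimal sketch of one machine-learning building block defenders commonly rely on: an unsupervised anomaly detector trained on historical login telemetry. The feature set, thresholds, and synthetic data below are illustrative assumptions for this sketch, not anything described in the article or attributed to Cylance.

```python
# Minimal sketch of ML-assisted defense (illustrative assumptions only):
# an unsupervised anomaly detector flags login events that look unlike
# the historical traffic it was trained on.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event:
# [hour_of_day, failed_attempts, megabytes_transferred]
normal_logins = np.column_stack([
    rng.normal(13, 3, 500),   # mostly business-hours activity
    rng.poisson(0.2, 500),    # the occasional failed attempt
    rng.normal(5, 2, 500),    # modest data transfer
])

suspicious_logins = np.array([
    [3.0, 11, 480.0],         # 3 a.m., many failures, large outbound transfer
    [2.5, 9, 350.0],
])

# Fit on historical (mostly benign) events, then score new ones.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_logins)

scores = detector.predict(np.vstack([normal_logins[:3], suspicious_logins]))
print(scores)  # 1 = looks normal, -1 = flagged as anomalous
```

Real deployments work on far richer telemetry (network flows, process behavior, email metadata), but the principle is the same: the defender’s version of “scale” is using models to automate triage so human analysts can keep up.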

For the time being, however, cybersecurity professionals have observed hackers sticking to tried-and-true methods.

“I don’t think AI has quite yet become a standard part of the toolbox of the bad guys,” Staffan Truvé, CEO of the Swedish Institute of Computer Science, said to Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks already is that the traditional methods still work—if you get what you need from a good old fashioned brute force approach then why take the time and money to switch to something new?”



from iDrop News http://ift.tt/2h2QKcR
