By omic, March 6, 2022

AI and the Evolving Threat Landscape

Advances in computing power and in theoretical and practical concepts in AI research, as well as breakthroughs in cyber security, promise that machine-learning algorithms and techniques will be a key part of cyberdefence – and possibly even attack. Human hackers whose machines competed in 2016 and 2017 are now evolving their technology, working in tandem with machines to win other hacking competitions and take on new challenges. (A notable example is Team Shellphish and its open-source exploit-automation tool “angr”.)

From a defensive point of view, cyber-security professionals already use a great deal of automation and machine-powered analysis, yet the offensive use of automated capabilities is also on the rise. The majority of information-security professionals (62 per cent) surveyed by Cylance at Black Hat USA 2017 thought that hackers would weaponize AI and begin using it offensively in 2018. And at DEFCON in 2017, a data scientist from Endgame (a US endpoint-security vendor) demonstrated and released a malware-manipulation environment for OpenAI Gym, the open-source toolkit for reinforcement-learning algorithms. Endgame created an automated tool that learns how to mask a malicious file from anti-virus engines by changing just a few bytes of its code in a way that preserves its malicious capability. This allows it to evade common security measures, which typically rely on file signatures – much like a fingerprint – to detect a malicious file.

AI-powered attacks will outpace human response teams and outwit current legacy-based defenses; the mutually dependent partnership of human and AI will therefore be the bedrock of defense strategies in the future. The battleground of the future is digital, and AI is the undisputed weapon of choice. There is no silver bullet for the generational challenge of cyber security, but one thing is clear: only AI can play AI at its own game. The technology is available, and the time to prepare is now.
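The brittleness of signature-based detection described above can be sketched in a few lines: a toy detector fingerprints a file with SHA-256, and a single-byte mutation is enough to make the fingerprint stop matching. The byte string here is invented purely for illustration.

```python
import hashlib

def file_signature(data: bytes) -> str:
    """Return a SHA-256 'fingerprint' of the file contents."""
    return hashlib.sha256(data).hexdigest()

# Invented stand-in for a malicious binary (MZ is the Windows executable magic).
original = b"MZ\x90\x00 pretend this is a malicious binary payload"
sig = file_signature(original)

# Flip a single byte: the functionality of a real binary could be preserved
# (e.g. by editing padding), but the hash-based signature no longer matches.
mutated = bytearray(original)
mutated[10] ^= 0xFF
assert file_signature(bytes(mutated)) != sig
```

This is exactly why evasion tools only need to change "just a few bytes": any exact-match signature scheme is defeated by the smallest semantics-preserving edit, which is what pushes defenders toward behavioural and ML-based detection.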
The use of adversarial artificial intelligence will impact the security landscape in three key ways:

1 – Impersonation of trusted users

AI attacks will be highly tailored yet operate at scale. This malware will be able to learn the nuances of an individual’s behavior and language by analyzing email and social media communications, and to use that knowledge to replicate a user’s writing style, crafting messages that appear highly credible. Messages written by AI malware will therefore be almost impossible to distinguish from genuine communications. Since the majority of attacks enter our systems through our inboxes, even the most cyber-aware computer user will be vulnerable.
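One simple way to model the writing-style analysis described above is character n-gram profiling, a standard stylometry technique: texts by the same author tend to share short character sequences. The snippets below are invented, and a real system would use far richer models, but the sketch shows the principle.

```python
from collections import Counter
import math

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams; a crude 'style fingerprint'."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram profiles (0.0 to 1.0)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented samples: two casual notes in one "voice", one formal stranger.
alice_known = "Hey team, quick heads up: I'll push the fix after lunch. Cheers, A."
alice_new = "Quick heads up team, pushing the fix right after lunch. Cheers, A."
stranger = "Dear Sir or Madam, please find attached the requested invoice."

base = ngram_profile(alice_known)
same_author = cosine(base, ngram_profile(alice_new))
diff_author = cosine(base, ngram_profile(stranger))
```

The same measurement cuts both ways: an attacker can optimize generated text until it scores close to the victim's profile, while a defender can flag messages whose style drifts from the claimed sender's baseline.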

2 – Blending into the background

Sophisticated threat actors can often maintain a long-term presence in their target environments for months at a time without being detected. They move slowly and cautiously to evade traditional security controls, and their attacks are often tailored to specific individuals and organizations. AI will likewise be able to learn the dominant communication channels and the best ports and protocols to use to move around a system, discreetly blending in with routine activity. This ability to disguise itself amid the noise will mean that it can spread expertly within a digital environment and stealthily compromise more devices than ever before. AI malware will also be able to analyze vast volumes of data at machine speed, rapidly identifying which data sets are valuable and which are not. This will save the (human) attacker a great deal of time and effort.

3 – Faster attacks with more effective consequences

Today’s most sophisticated attacks require skilled technicians to research their target, identify individuals of interest, understand their social network and observe over time how they interact with digital platforms. In tomorrow’s world, an offensive AI will be able to achieve the same level of sophistication in a fraction of the time, and at many times the scale.

In this fast-changing scenario, a new approach to securing cyberspace using deception technology is needed. The basic idea behind this technology is that, however good a security system is, sooner or later it will be compromised. The aim of deception technology is to prevent a cyber criminal who has managed to infiltrate a network from doing any significant damage. The technology works by generating traps, or deception decoys, that mimic legitimate technology assets throughout the infrastructure. These decoys can run in a virtual or real operating-system environment and are designed to trick the cyber criminal into thinking they have discovered a way to escalate privileges and steal credentials. Once a trap is triggered, notifications are broadcast to a centralized deception server, which records the affected decoy and the attack vectors used by the cyber criminal.
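The trap-and-notify flow described above can be sketched as a minimal network decoy: since no legitimate user or service should ever connect to it, any connection at all is treated as an attack indicator and reported. The alert format and the in-process list standing in for the centralized deception server are invented for illustration.

```python
import socket
import threading

alerts = []  # stand-in for the centralized deception server's event log

def run_decoy(ready: threading.Event, port_holder: list):
    """A minimal decoy listener: log the source of the first connection as an alert."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # OS picks a free port for the decoy 'service'
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, addr = srv.accept()  # nobody legitimate should ever reach this
    alerts.append({"decoy_port": port_holder[0], "source": addr})
    conn.close()
    srv.close()

ready = threading.Event()
port_holder = []
t = threading.Thread(target=run_decoy, args=(ready, port_holder))
t.start()
ready.wait()

# Simulated attacker probing the decoy during lateral movement.
probe = socket.create_connection(("127.0.0.1", port_holder[0]))
probe.close()
t.join()
```

Real deception platforms go much further, presenting believable fake credentials and services, but the core signal is the same: decoys have near-zero false positives because touching one is itself the evidence.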

Attivo Networks, a current leader in deception technology, offers tools designed to address the current and future threats outlined above. The Attivo ThreatDefend platform is built specifically for these current and evolving threats and uses machine-learning algorithms. The ThreatDefend Deception Platform is a modular solution comprising Attivo BOTsink engagement servers, decoys and deceptions, the ThreatStrike™ endpoint deception suite, ThreatPath™ for attack-path visibility, ThreatOps™ incident-response orchestration playbooks, and the Attivo Central Manager (ACM), which together create comprehensive early detection and an active defense against cyber threats.

With several new such tools in development, and competitions fuelling innovation, the rules of the game have changed. It is not hard to imagine that the next few steps on this evolutionary ladder could produce an autonomous system that adapts, learns new environments and identifies flaws it can exploit. The defender therefore needs to think like an attacker, and predict and adapt to the situation accordingly. May the good guy win.
