Artificial intelligence – or AI for short – is a powerful technology. It has been used to develop programs and systems that exhibit traits akin to human behaviour, such as the ability to adapt to changing environments or respond intelligently to situations. AI technologies have no doubt helped build cybersecurity solutions and enhanced people’s lives for the better in one way or another.
However, it has also allowed hackers to leverage that same technology to develop intelligent malware programs and use them to execute stealthy attacks.
AI In Cybersecurity
Over the years, security experts have conducted plenty of research to understand the capabilities of AI and incorporate them into security solutions. Today, users have AI-enabled security tools and products that can detect and respond to cybersecurity incidents with little to no human input.
It has further applications in the following areas:
In Modelling Behaviour
Organizations can use AI to monitor the behaviour of system users. The ultimate goal is to pinpoint account takeover attacks – attacks in which malicious employees steal the login details of other users and use those accounts to commit cybercrimes. The idea is that AI learns a user’s activities over time and expects the behaviour to remain consistent from there. Any break from the norm will cause it to respond by locking out the user or immediately alerting system admins about the change.
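The idea above can be sketched with a toy statistical baseline. This is a minimal illustration only: real products learn from many behavioural signals, not just login hours, and the threshold here is an arbitrary assumption.

```python
from statistics import mean, stdev

def build_baseline(login_hours):
    """Learn a user's typical login hour from historical activity."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour deviates far from the learned norm."""
    mu, sigma = baseline
    return abs(hour - mu) > threshold * sigma

# A user who normally logs in around 9am suddenly logs in at 3am.
baseline = build_baseline([9, 10, 9, 8, 10, 9])
print(is_anomalous(3, baseline))   # the 3am login breaks the pattern
print(is_anomalous(9, baseline))   # a 9am login is consistent
```

In a real system the response to a flagged login would be the lockout or admin alert described above, rather than a printed value.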
Use In Antivirus Products
Antivirus tools with AI capabilities can behave in a similar fashion to the previous point, though at the level of a network or system: the AI detects programs that are behaving differently from how they typically behave.
These tools can spot this because malware operates in ways that differ markedly from standard computer operations.
By using machine learning to learn how legitimate programs interact with an operating system, these products can pinpoint malware with ease. Furthermore, they can block malware from accessing any additional resources – effectively killing it.
This differs from traditional antivirus software, which scans a signature database to determine whether a file matches a known threat. In theory, hackers can bypass signature-based scanning with new or modified malware, so long as users aren’t relying on AI-enhanced antivirus software.
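The contrast between the two approaches can be sketched as follows. The signature entry, action names, and weights are invented for illustration; a real product would use a trained model over far richer telemetry.

```python
import hashlib

# Signature-based: only catches samples already in the database.
KNOWN_SIGNATURES = {"<sha256 of a known sample>"}  # hypothetical entry

def signature_scan(file_bytes, signatures):
    return hashlib.sha256(file_bytes).hexdigest() in signatures

# Behaviour-based: scores runtime actions instead of file contents,
# so a brand-new sample can still be flagged. The weights below are
# toy values standing in for a trained model.
SUSPICIOUS_WEIGHTS = {
    "writes_to_system_dir": 0.4,
    "disables_security_service": 0.5,
    "encrypts_many_files": 0.6,
}

def behaviour_scan(observed_actions, threshold=0.5):
    score = sum(SUSPICIOUS_WEIGHTS.get(a, 0.0) for a in observed_actions)
    return score >= threshold
```

A never-before-seen sample sails past `signature_scan`, but if it starts disabling security services and encrypting files, `behaviour_scan` still flags it.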
Automated Networks and System Analysis
The issue with manual analysis is that it’s nearly impossible to perform well, given the large volume of data generated by user activity. Cybercriminals exploit this gap, using command-and-control (referred to as C2) tactics to penetrate network defenses undetected.
In this capacity, AI enhances these defensive measures by applying anomaly detection, keyword matching, and statistical monitoring to automated network and system analysis. Together, these ensure continuous monitoring and prompt identification of attempted break-ins.
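As a rough sketch, keyword matching and a simple rate statistic might be combined over log lines like this. The patterns, log format, and threshold are illustrative assumptions, not a real rule set.

```python
import re
from collections import Counter

# Hypothetical known-bad patterns for keyword matching.
SUSPICIOUS_PATTERNS = [r"failed password", r"/etc/passwd", r"base64 -d"]

def scan_logs(lines, rate_threshold=5):
    """Flag lines matching known-bad patterns, plus source IPs that
    appear suspiciously often (a crude monitoring statistic).
    Assumes each log line starts with the source IP."""
    hits = [l for l in lines
            if any(re.search(p, l) for p in SUSPICIOUS_PATTERNS)]
    ips = Counter(l.split()[0] for l in lines if l.strip())
    noisy = [ip for ip, n in ips.items() if n >= rate_threshold]
    return hits, noisy

lines = ["10.0.0.1 failed password for root"] * 5 + ["10.0.0.2 GET /index.html"]
hits, noisy = scan_logs(lines)
print(len(hits), noisy)  # five keyword hits; 10.0.0.1 is hammering the host
```

Running such checks continuously, rather than as a periodic manual review, is what makes the prompt identification described above feasible at scale.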
Lastly, AI can be used to scan emails. This matters because email communication is often cybercriminals’ point of attack: it’s where many malicious links and attachments are sent in the hope that unsuspecting people will click on them.
Symantec states that 54.6 percent of received email messages are spam or could contain malicious attachments or links.
Having AI scan emails is a huge benefit. AI can use machine learning to identify phishing emails and handle them swiftly, and it can go a step further by simulating clicks on sent links and using anomaly detection techniques to spot suspicious activity.
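A heavily simplified sketch of ML-based phishing triage: learn word counts from labelled examples, then pick the label whose vocabulary best overlaps a new message. Real filters use far larger corpora and proper probabilistic models; the sample emails below are invented.

```python
from collections import Counter

def train(labelled_emails):
    """labelled_emails: list of (text, label) pairs, label 'phish' or 'ham'."""
    counts = {"phish": Counter(), "ham": Counter()}
    for text, label in labelled_emails:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Score each label by how strongly its training words overlap the message."""
    words = text.lower().split()
    scores = {
        label: sum(c[w] for w in words) / (sum(c.values()) or 1)
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

model = train([
    ("verify your account password urgently", "phish"),
    ("click here to claim your prize", "phish"),
    ("meeting agenda attached for tomorrow", "ham"),
    ("lunch plans this friday", "ham"),
])
print(classify("urgently verify your password", model))  # phish
print(classify("agenda for the meeting", model))         # ham
```

The real value, as noted above, comes from acting on the verdict automatically – quarantining the message or detonating its links in a sandbox.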
How Hackers Are Using AI
Even though we owe a lot to AI, hackers are now turning to the same technology to weaponize their malware and attacks and counter these advancements. This has taken shape in various ways.
One common method is concealing malicious code in benign applications.
The concealed code is executed at a specific point – for example, a set time after the program has been installed, or once enough users have subscribed to the application. The code stays hidden until then through the use of AI models that derive private keys to control when the malware is triggered.
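This key-derivation trick resembles publicly described research such as IBM’s DeepLocker. A harmless sketch of the concept: the payload is XOR-encrypted under a key hashed from the trigger condition, so inspecting the shipped program reveals neither the trigger nor the payload. Everything here (the hostname and payload text) is invented for illustration.

```python
import hashlib

def derive_key(trigger_value):
    # The key exists only as a hash of the trigger condition; the
    # plaintext condition never appears in the shipped program.
    return hashlib.sha256(trigger_value.encode()).digest()

def xor(data, key):
    # Simple repeating-key XOR, enough to illustrate the locking idea.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock a (harmless, for this demo) payload to a target attribute.
blob = xor(b"harmless demo payload", derive_key("target-hostname"))

# Victim side: the payload only decrypts in the matching environment.
wrong = xor(blob, derive_key("some-other-host"))    # unreadable bytes
right = xor(blob, derive_key("target-hostname"))    # original payload
```

Because the wrong environment yields only garbage bytes, defenders analysing the binary in a sandbox never see the malicious behaviour.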
Through Authentication Methods
Another avenue hackers can leverage is predefining an application feature as an AI trigger for executing cyberattacks. Such a trigger can even be embedded in authentication features like voice or visual recognition.
Because those features are standard, every time they are used, hackers are given an opportunity to attack. Worse, these models can remain present for years without being detected, allowing hackers to wait until applications are most vulnerable before striking.
Through Machine Learning
Machine learning, as we know, is one of AI’s big selling points. Hackers are also aware of this and can leverage the same models to create adaptable attacks and intelligent malware programs.
During attacks, hackers can have these programs collect knowledge about what prevented a successful attack and retain what proved useful. AI-based attacks won’t succeed on the first attempt in most situations, but their adaptive abilities allow hackers to succeed in subsequent attempts.
Through Smart Malware
Lastly, cybercriminals can use AI to create smart malware that exploits unmitigated vulnerabilities. When such an attack comes across a patched vulnerability, it adapts and tries to compromise the system through different attacks instead.
Furthermore, AI can create malware that mimics trusted system components; this is used for stealth attacks. In this type of attack, AI-enabled malware can automatically learn an organization’s computing environment, its patch update lifecycle, its communication protocols, and the periods when the system is least protected.
Hackers can use that information to perform stealth attacks that blend into an organization’s security environment. These attacks are dangerous because hackers can break into a system and leave it at any point in time.
As great as AI has been for all of us, this is an important reminder that hackers can use the same tools too.