Tuesday 24th January 2023

BLOG: How AI Is Leveraged Across The Threat Landscape

Artificial intelligence (AI) is increasingly used to provide solutions across a diverse set of industries that rely on a growing number of connected devices to meet demand for real-time personalised services. This trend is driven by advancements in cloud computing and developments in machine learning algorithms that allow businesses to improve their operational efficiency through the automation of processes and the generation of valuable analytics.[1]

In cybersecurity, machine learning is increasingly used by companies such as Orpheus to analyse large volumes of data to identify threats and minimise an organisation's attack surface by scanning for vulnerabilities across endpoints, servers, and Dark Web discussion forums.[2]

Whilst this offers many security benefits, threat actors are taking advantage of the same AI technologies to enhance the efficacy of their techniques and procedures. It is therefore likely that the increasing use of AI technologies by cybersecurity practitioners will foster similar adoption by threat actors and vice versa. Orpheus assess that AI technologies are unlikely to radically alter the tactics used by threat actors, but will instead act as force multipliers that massively increase the efficacy of existing techniques.

Focused cyber threat intelligence services and products such as those provided by Orpheus are therefore vital in gaining enhanced visibility of such threats as they emerge and are key to prioritising their mitigation.

Executive Summary:

Advancements in AI are increasingly being integrated into the techniques and procedures used by adversaries across the threat landscape, acting as a force multiplier that drastically improves the potential scale and efficacy of their operations. This report offers a succinct overview of how threat actors have been observed utilising AI technologies to date. Key points are summarised below:

  • Deepfake technology is increasingly being used by state-backed groups to hinder attribution and conduct ever more sophisticated, highly scalable operations within the information space.
  • Cybercriminal groups have been observed leveraging procedurally generated media in highly credible phishing and vishing campaigns.
  • AI and machine learning techniques are being applied to improve the efficacy of password-guessing tools, with scope to expand this to testing malware and rehearsing compromises in virtual environments.
  • Threat actors are manipulating ranking algorithms for SEO poisoning to facilitate initial access, with the potential to exploit other legitimate AI services to generate behavioural outcomes outside the scope of their intended functionality.

What do we mean by AI?

In its simplest form, AI is the application of computer science and data engineering to facilitate some form of problem-solving capable of operating beyond the pre-defined code set by human operators, with the intention of emulating abstract human thinking. Encompassed within the field of AI are various sub-disciplines such as machine learning, which refers to the use of algorithms and statistical models to create systems capable of making predictions or classifications based on input data.[3]
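As a simple illustration of this idea, the hypothetical sketch below trains a small classifier on a handful of labelled examples and asks it to predict a label for an input it has never seen; the decision rule is inferred from data rather than hand-coded. The features and data are invented purely for illustration.

```python
# Minimal sketch of supervised machine learning: the model infers a
# decision rule from labelled examples rather than following
# hand-written if/else logic. Toy, invented data; illustrative only.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vectors, e.g. [request_rate, payload_entropy]
X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = suspicious (labels we supply)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)            # learn patterns from the data
print(model.predict([[0.85, 0.75]]))   # predict for an unseen input -> [1]
```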

Whilst forms of software automation can use AI, most of the automated processes leveraged by threat actors use more traditional software that manipulates data in line with its pre-coded functionality. AI-driven automation, by contrast, goes beyond manipulating data in a predictable way, instead deriving new understandings of how to interpret that data by identifying patterns that can form the basis for predictions.[4]

How adversaries use AI:

Across the threat landscape, adversaries with a broad spectrum of motivations are already utilising various AI technologies to conduct ever more sophisticated and scalable operations.

Information Operations:

Amongst the most prolific uses of AI across the threat landscape is deepfake technology, which allows threat actors to create procedurally generated images of entirely fictitious people using Generative Adversarial Networks (GANs). These technologies enable threat actors to create seemingly credible malicious profiles featuring images of people that cannot be cross-referenced to any real person, even using reverse image searches. This makes attribution harder for security researchers, as there are no legitimate user accounts that GAN images can be connected with to expose the malicious profile as fraudulent and fictitious.
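For readers curious about the mechanics, the toy sketch below shows the adversarial training loop at the heart of a GAN: a generator produces synthetic samples while a discriminator tries to tell them apart from real ones, each improving against the other. The tensors here are random stand-ins; production face generators apply the same principle at vastly larger scale, and this sketch reflects no specific threat actor tooling.

```python
# Minimal sketch of a Generative Adversarial Network (GAN): a generator
# learns to produce synthetic samples that a discriminator cannot
# distinguish from real ones. Toy vectors stand in for images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> fake sample
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit
loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(200):
    real = torch.randn(8, 32) + 2.0   # stand-in for real training images
    fake = G(torch.randn(8, 16))      # generator output from random noise

    # Discriminator update: label real samples 1, generated samples 0
    d_loss = loss_fn(D(real), torch.ones(8, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call fakes "real"
    g_loss = loss_fn(D(fake), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```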

Orpheus have previously reported on an information campaign associated with individuals in the US military, targeting groups across the Middle East and Russia, that leveraged fake Facebook accounts with GAN images as recently as November 2022. It is assessed that state-led malicious information campaigns leveraging this type of AI technology are likely to increase in frequency and scale in line with rising geopolitical tensions between Western states led by the US and authoritarian regimes such as Russia with conflicting interests.

Phishing and Vishing:

Financially motivated threat actors are also known to use similar AI technology to produce other forms of procedurally generated synthetic media, commonly referred to as deepfakes, that are used to masquerade as legitimate individuals. For example, cybercriminal groups have been identified using the same natural language processing technology found in many automated customer-service applications to automate and improve the credibility of phishing campaigns.[5] Additionally, there are several reports of cybercriminal groups leveraging commercially available deepfake voice impersonation tools that can alter or clone voices in real time to conduct vishing operations.[6]

Orpheus have previously reported on several instances of fraudulent vishing calls made to individuals or organisations, purporting to be from reputable companies, designed to induce victims to disclose personal information and credentials, with one operation targeting Morgan Stanley client accounts as recently as March 2022.

Already-convincing vishing operations such as these are only likely to become more effective as deepfake technology proliferates, enabling threat actors to clone the voice of a person potentially known to the victim from a relatively small audio sample. Rather than revolutionising how threat actors engage in malicious operations, AI tools such as these act as force multipliers, improving on established phishing and vishing techniques by enhancing their credibility.

Machine Learning and SEO Poisoning:

Threat actors might also leverage machine learning to improve their operational efficacy by constructing their own virtual environments in which they can model the deployment of their malware, essentially enabling them to rehearse and hone their techniques and procedures to overcome defenders' security features.[7] On a smaller scale, cybercriminal groups have already been observed using AI and machine learning techniques to improve their password-guessing tools, resulting in an increase in the frequency and success rates of criminal hacking attempts.[8]
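To illustrate the password-guessing point without reproducing attacker tooling, the toy sketch below trains a character-level bigram model on a tiny, invented list of leaked passwords and uses it to rank candidate guesses so that the most human-like strings are tried first; this mirrors the statistical approach described in public research, at miniature scale.

```python
# Illustrative sketch: a character-level bigram model trained on a
# (hypothetical) leaked-password list scores candidate guesses, so the
# most "human-like" strings are tried first. Toy data; educational only.
from collections import defaultdict
import math

leaked = ["password1", "letmein", "passw0rd", "sunshine1"]  # invented corpus

counts = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip("^" + pw, pw + "$"):   # ^ and $ mark start/end
        counts[a][b] += 1

def score(candidate):
    """Log-likelihood of a candidate under the bigram model."""
    total = 0.0
    for a, b in zip("^" + candidate, candidate + "$"):
        seen = sum(counts[a].values())
        total += math.log((counts[a][b] + 1) / (seen + 128))  # add-1 smoothing
    return total

guesses = ["pass1234", "zqxjkv9!", "sunshine2"]
print(sorted(guesses, key=score, reverse=True))  # human-like guesses rank first
```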

Threat actors might also seek to manipulate legitimate AI technology to compromise corporate environments by poisoning AI models with inaccurate data that leads to the generation of malicious outcomes. Machine learning models rely on correctly labelled and organised datasets to build accurate and repeatable algorithms, so altering the dataset a model is trained on can lead to unexpected outcomes. Threat actors could therefore potentially introduce benign files that appear similar to malware tools into the dataset of an AI-based antivirus program, essentially training the AI to ignore malicious files that might be used in future operations.
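A minimal, synthetic demonstration of this poisoning effect is sketched below: flipping a fraction of "malicious" labels to "benign" in a classifier's training set measurably degrades what it learns, the same failure mode as the antivirus scenario above in miniature. All data here is randomly generated for illustration.

```python
# Toy demonstration of training-data poisoning: mislabelling part of a
# classifier's training set degrades what it learns. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = "malicious", 0 = "benign"

clean = LogisticRegression().fit(X, y)

y_poisoned = y.copy()
malicious = np.where(y == 1)[0]
flip = rng.choice(malicious, size=len(malicious) // 2, replace=False)
y_poisoned[flip] = 0                      # mislabel malware as benign

poisoned = LogisticRegression().fit(X, y_poisoned)

X_test = rng.normal(size=(200, 5))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", poisoned.score(X_test, y_test))  # lower
```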

Another avenue for harm is the potential for threat actors to reverse-engineer a particular machine learning algorithm; by studying how it operates, they could even uncover the datasets used to train the AI, which may include personal data.[9] Understanding how legitimate AI technologies function will also enable threat actors to better manipulate them to generate behavioural outcomes outside the scope of their intended functionality. For instance, Orpheus have already reported on multiple instances of threat actors using Search Engine Optimisation (SEO) poisoning to gain initial access by manipulating Google's indexing algorithm to ensure that malicious websites or files such as PDFs rank highly for related searches, increasing traffic and enhancing their apparent credibility. This process may become much easier and more effective if threat actors apply machine learning SEO tools to their operations.[10]
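The training-data exposure risk can also be shown in miniature. In the hedged sketch below, an overfitted model is measurably more confident on records it was trained on than on unseen ones; that confidence gap is the signal that membership-inference techniques exploit. All data is synthetic and the scenario is purely illustrative.

```python
# Minimal sketch of training-data exposure: an overfitted model is more
# confident on records it was trained on, which a simple
# membership-inference test can detect. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_members = rng.normal(size=(100, 8))   # stand-in for personal records
y_members = rng.integers(0, 2, size=100)
X_outside = rng.normal(size=(100, 8))   # records never seen in training

model = RandomForestClassifier(random_state=1).fit(X_members, y_members)

conf_in = model.predict_proba(X_members).max(axis=1).mean()
conf_out = model.predict_proba(X_outside).max(axis=1).mean()
print(f"avg confidence on training records: {conf_in:.2f}")   # noticeably higher
print(f"avg confidence on unseen records:   {conf_out:.2f}")  # lower
```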

By Matthew Sheldon, Orpheus Cyber Research Analyst

[1] https://plat.ai/blog/artificial-intelligence-market-size-and-growth/

[2] https://orpheus-cyber.com/blog-how-artificial-intelligence-and-machine-learning-help-cybersecurity-in-2021/

[3] https://www.ibm.com/uk-en/topics/artificial-intelligence

[4] https://redmarker.ai/post/automation-vs-ai-ml

[5] https://www.mandiant.com/resources/podcasts/how-adversaries-are-leveraging-ai

[6] https://www.bankinfosecurity.com/deepfakes-voice-impersonators-used-in-vishing-as-a-service-a-18050

[7] https://www.techtarget.com/searchsecurity/tip/How-hackers-use-AI-and-machine-learning-to-target-enterprises

[8] https://demakistech.com/how-hackers-use-ai-and-machine-learning-to-target-enterprises/

[9] https://www.unite.ai/how-hackers-are-wielding-artificial-intelligence/

[10] https://surferseo.com/blog/seo-machine-learning-algorithms-in-surfer/
