Chatbots Say Plenty About New Threats to Data
Chatbots are becoming a useful customer interaction and support tool for businesses. These bots are powered by artificial intelligence and allow customers to ask simple questions, pay bills, or dispute transactions; they’re cheaper than hiring more call center personnel, and they’re popping up everywhere.
As with most other innovations, threat actors have found a use for them too.
A number of recent security incidents have involved the abuse of a chatbot to steal customers’ personal or payment card information, or to post offensive messages in a business’s channel that threaten its reputation. There is potential for worse. Attackers can find inroads through chatbots by exploiting vulnerabilities in the underlying code to take a man-in-the-middle position and steal data as it traverses the wire, or by sending users links to exploits that open access to the backend database where information is stored. Attackers may also mimic chatbots, impersonating an existing business’s messaging to interact with customers directly and steal personal information that way.
It’s an array of risks and threats that can hide in an innocuous communication channel and is challenging to mitigate.
Flashpoint analysts believe that as businesses integrate chatbots into their platforms, threat actors will continue to leverage them in malicious campaigns targeting individuals and businesses across multiple industries. Moreover, threat actors will likely evolve their methods as businesses move to enhance chatbot security.
Few Chatbot Attacks Made Public
Further complicating matters, many attacks go unreported. Those that are made public, however, provide useful insight into how attackers are leveraging chatbots.
In June, Ticketmaster UK disclosed a breach of personal and payment card data belonging to 40,000 international customers. The threat actor group, identified as Magecart, targeted JavaScript built by service provider Inbenta for Ticketmaster UK’s chatbot. Inbenta said in a statement that a piece of custom JavaScript used to collect personal information and payment card data for the Ticketmaster chatbot was exploited; the code, which had been supplied more than nine months earlier, was disabled immediately upon disclosure.
Microsoft and Tinder have also experienced issues with chatbots. In Microsoft’s case, its AI chatbot Tay, released in 2016, was reportedly commandeered by threat actors who led it to spout anti-Semitic and racist abuse, an attack methodology classified as “pollution in communication channels.”
On the popular dating app Tinder, cybercriminals used a chatbot to conduct fraudulent activity by impersonating a woman who asked victims to enter their payment card information to become verified on the platform.
Mitigations and Assessment
Awareness of the potential risks related to chatbots is still low. Attackers, for their part, likely had not set out to exploit chatbot vulnerabilities, but in targeting the supply chain or scanning code for bugs, they found a readily available and relatively new attack vector with direct access to users and their information. In addition to man-in-the-middle attacks and chatbot impersonation, attackers can use chatbots in phishing and other social engineering scams, send users links that redirect to malicious domains, steal information, or access protected networks.
Since most of these attacks are, at bottom, attacks against software, tried-and-tested security hygiene goes a long way as a mitigation. A good starting point is requiring multi-factor authentication to verify a user’s identity before any personal or payment card data is exchanged through a chatbot.
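As a rough illustration of that gate, the sketch below refuses to discuss billing details until the user supplies a valid one-time code. It assumes TOTP via the pyotp library; the UserStore class and handle_billing_request function are hypothetical names, not part of any particular chatbot platform.

```python
# Minimal sketch: gate sensitive chatbot actions behind a TOTP check.
# Only the pyotp TOTP verification is a real library call; UserStore and
# handle_billing_request are hypothetical stand-ins for an actual backend.
import pyotp


class UserStore:
    """Hypothetical lookup for per-user TOTP secrets (e.g., held in a vault)."""

    def totp_secret(self, user_id: str) -> str:
        raise NotImplementedError


def handle_billing_request(user_id: str, otp_code: str, store: UserStore) -> str:
    totp = pyotp.TOTP(store.totp_secret(user_id))
    if not totp.verify(otp_code, valid_window=1):
        # Refuse to exchange personal or payment data until identity is verified.
        return "We couldn't verify that code. Please try again."
    return "Identity verified. How can we help with your bill?"
```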
Monitoring for and deploying software updates and security patches is imperative. Organizations should also encrypt conversations between the user and the chatbot, which is essential to warding off the loss of personal data.
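One way to picture that conversation-level encryption is the minimal sketch below, which wraps each message in symmetric encryption on top of the transport, assuming a session key has already been shared over a secure channel. It uses the cryptography library’s Fernet primitive; the send_to_bot and receive_from_user functions are illustrative names only.

```python
# Minimal sketch: encrypt each chatbot message at the application layer,
# assuming a per-session symmetric key shared over a secure channel.
# This supplements, not replaces, TLS on the transport itself.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, provisioned per session
session_cipher = Fernet(key)


def send_to_bot(plaintext: str) -> bytes:
    # Encrypt the user's message so intermediaries see only ciphertext.
    return session_cipher.encrypt(plaintext.encode("utf-8"))


def receive_from_user(token: bytes) -> str:
    # The chatbot backend decrypts with the same session key.
    return session_cipher.decrypt(token).decode("utf-8")
```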
Companies may also consider breaking messages into smaller pieces and encrypting each piece individually rather than the message as a whole; this makes offline decryption following a memory-leak attack much more difficult for an attacker. Appropriately storing and securing the data collected by chatbots is just as crucial: companies can encrypt any stored data, and retention rules can limit how long the chatbot keeps it.
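The sketch below shows one way the chunked-encryption idea and a retention rule could look in practice, assuming AES-GCM from the cryptography library. The chunk size, retention window, and record layout are illustrative assumptions, not prescriptions.

```python
# Rough sketch: split a message into small chunks and encrypt each chunk
# under its own nonce, so a leaked fragment exposes at most one chunk.
# Also shows a simple retention rule for stored chat records.
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK_SIZE = 64                      # bytes per chunk (illustrative)
RETENTION_SECONDS = 7 * 24 * 3600    # e.g., purge stored chats after a week


def encrypt_chunks(message: bytes, key: bytes) -> list[tuple[bytes, bytes]]:
    # key should be a 256-bit key, e.g. from AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)
    chunks = []
    for i in range(0, len(message), CHUNK_SIZE):
        nonce = os.urandom(12)       # unique nonce per chunk
        chunks.append((nonce, aead.encrypt(nonce, message[i:i + CHUNK_SIZE], None)))
    return chunks


def purge_expired(records: list[dict]) -> list[dict]:
    # Drop stored chat records older than the retention window.
    # Each record is assumed to carry a "stored_at" Unix timestamp.
    cutoff = time.time() - RETENTION_SECONDS
    return [r for r in records if r["stored_at"] >= cutoff]
```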
Finally, the rise in chatbot-related attacks should also reinforce the need for continuous end-user education to counter social engineering.