Blog
Laying the Groundwork for Combating AI-Powered Cybercrime
By Delfina Chain
Artificial intelligence (AI) is already being applied to diverse use cases, from consumer-oriented devices—such as voice-controlled personal assistants and self-directed vacuum cleaners—to groundbreaking business applications that optimize everything from drug discovery to financial portfolio management. So naturally, there is growing interest within the information security community around how we can leverage AI—which encompasses the concepts of machine learning (ML) and deep learning (DL)—to combat cyber threats.
The effectiveness and scalability of cybersecurity-related tasks, such as malware and spam detection, have already been enhanced by AI, and many expect ongoing AI innovations to have a transformative impact on cyber defense capabilities. However, security practitioners must also recognize that the rise of AI presents a potent opportunity for cybercriminals to optimize their malicious activities. Much like the rise of cybercrime-as-a-service offerings in the underground economy, threat-actor adoption of AI technology is expected to lower barriers to entry for lower-skilled actors seeking to conduct advanced malicious operations.
A report from the Future of Humanity Institute emphasizes the potential for AI to be used toward beneficial and harmful ends within the cyber realm, which is amplified by its efficiency, scalability, diffusibility, and potential to exceed human capabilities. Potential uses of AI among cybercriminals could include the development of highly evasive malware, the ability for automated systems to exhibit human-like behavior during denial-of-service attacks, and the optimization of activities such as vulnerability discovery and target prioritization. Fortunately, defenders have a leg up over adversaries in this arms race to harness the power of AI technology, largely due to the time- and resource-intensive nature of deploying AI at its current stage in development.
Implications for Defenders
The purpose of intelligence is to inform a course of action. For defenders, this course of action should be guided by the level of risk (likelihood x potential impact) posed by a threat. The best way to evaluate how likely a threat is to manifest is by monitoring threat-actor activity on the deep-and-dark-web (DDW) forums, underground marketplaces, and encrypted chat services on which they exchange resources and discuss their tactics, techniques, and procedures (TTPs).
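The risk calculus above (likelihood × potential impact) can be expressed as a minimal sketch. The threat names and scores below are hypothetical illustrations, not real assessments, and the scales (likelihood 0–1, impact 0–10) are assumed for the example:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Risk modeled as likelihood (0-1) times potential impact (0-10)."""
    return likelihood * impact

# Hypothetical threats with (likelihood, impact) estimates
threats = {
    "ai-evasive-malware": (0.2, 9.0),   # emerging, high potential impact
    "credential-stuffing": (0.8, 5.0),  # common, moderate impact
}

# Prioritize the course of action by descending risk
ranked = sorted(threats, key=lambda t: risk_score(*threats[t]), reverse=True)
print(ranked)  # credential-stuffing (4.0) outranks ai-evasive-malware (1.8)
```

A simple multiplicative model like this is enough to show why monitoring threat-actor activity matters: intelligence updates the likelihood estimate, which directly reorders defensive priorities.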
Cybercriminal abuse of technology is nothing new, and by gaining visibility into adversaries’ ongoing efforts to develop more advanced TTPs, defenders can better anticipate and defend against evolving attack methods. Flashpoint analysts often observe cybercriminals abusing legitimate technologies in a number of ways, ranging from the use of pirated versions of the Cobalt Strike threat-emulation software to elude server fingerprinting, to the use of tools designed to aid visually impaired or dyslexic individuals in order to bypass CAPTCHA and deliver automated spam. Flashpoint analysts also observe adversaries adapting their TTPs in response to evolving security technologies, such as the rise of ATM shimmers in response to EMV-chip technology. In all of these instances, Flashpoint analysts provided customers with the technical and contextual details needed to take proactive action in defending their networks against these TTPs.
When adversaries’ abuse of AI technology begins to escalate, their activity within DDW and encrypted channels will be one of the earliest and most telling indicators. By establishing access to the resources needed to keep a finger on the pulse of the cybercriminal underground, defenders lay the groundwork to be among the first to know when threat actors develop new ways of abusing AI and other emerging technologies.
Delfina Chain
Senior Associate, Customer Engagement and Development
As a senior customer engagement and development associate, Delfina leverages her public- and private-sector experience to help clients in Latin America understand the region’s threat landscape, identify risks relevant to their organization, and mitigate these risks using actionable business risk intelligence (BRI). Prior to joining Flashpoint, Delfina worked as a cybersecurity, technology, and risk management analyst for the Argentine government. She has also worked in the financial services industry, focusing on project analysis, product implementation, compliance, and the adoption of internal anti-money laundering and anti-terrorist financing measures. She holds a law degree from Universidad de San Andres.