In this talk, we describe DeepLocker, a novel class of highly targeted and evasive attacks powered by artificial intelligence (AI). As cybercriminals increasingly weaponize AI, cyber defenders must understand the mechanisms and implications of the malicious use of AI in order to stay ahead of these threats and deploy appropriate defenses.
DeepLocker was developed as a proof of concept by IBM Research in order to understand how several AI and malware techniques already seen in the wild could be combined to create a highly evasive new breed of malware, one that conceals its malicious intent until it reaches a specific victim. It achieves this by using a deep neural network (DNN) AI model to hide its attack payload in benign carrier applications, such that the payload is unlocked if, and only if, the intended target is reached. DeepLocker leverages several attributes for target identification, including visual, audio, geolocation, and system-level features. In contrast to existing evasive and targeted malware, this approach makes it extremely challenging to reverse engineer the benign carrier software and recover the mission-critical secrets, including the attack payload and the specifics of the target.
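To make the keying mechanism concrete, below is a minimal, benign Python sketch of the general idea, built only on the standard library. The feature extractor is a stand-in for a real DNN (in a real system, embeddings of similar observations of the same target would be close, and quantization would snap them to an identical vector; this stand-in is merely deterministic per exact input). The target attribute, the placeholder payload, and the toy cipher are all illustrative assumptions on our part, not DeepLocker's actual implementation.

    # Benign conceptual sketch: deriving a decryption key from model output
    # rather than storing it in the program. Names are hypothetical.
    import hashlib
    from typing import List

    def extract_features(observation: str) -> List[int]:
        """Stand-in for a DNN feature extractor. Here a hash makes the
        output deterministic per exact input; a real DNN would map similar
        observations of the same target to nearby embeddings, which
        quantization would collapse to one stable vector."""
        digest = hashlib.sha256(observation.encode()).digest()
        return [b // 64 for b in digest]  # quantize each byte to 4 buckets

    def derive_key(features: List[int]) -> bytes:
        """Hash the quantized features into a 32-byte key. The key exists
        nowhere in the program; it is recomputed from the environment."""
        return hashlib.sha256(bytes(features)).digest()

    def xor_stream(data: bytes, key: bytes) -> bytes:
        """Toy symmetric cipher: SHA-256 in counter mode as a keystream.
        (A real design would use an authenticated cipher such as AES-GCM.)"""
        stream = bytearray()
        counter = 0
        while len(stream) < len(data):
            block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            stream.extend(block)
            counter += 1
        return bytes(d ^ k for d, k in zip(data, stream))

    # "Attacker" side: encrypt a placeholder payload against the target's
    # attributes, then ship only the ciphertext in the carrier.
    TARGET_OBSERVATION = "alice@example-corp"  # hypothetical target attribute
    secret = xor_stream(b"(benign placeholder payload)",
                        derive_key(extract_features(TARGET_OBSERVATION)))

    # "Carrier" side: decryption yields the payload only on the intended
    # target; on anyone else the result is meaningless bytes.
    for observed in ("bob@other-org", "alice@example-corp"):
        candidate = xor_stream(secret, derive_key(extract_features(observed)))
        print(observed, "->", candidate)

The design point the sketch illustrates is that the decryption key appears nowhere in the binary: it is recomputed at runtime from the target's attributes, so static analysis of the carrier reveals neither the payload nor the identity of the target.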
We will perform a live demonstration of a proof-of-concept implementation of DeepLocker, in which we camouflage well-known ransomware inside a benign application such that it remains undetected by malware analysis tools, including antivirus engines and malware sandboxes. We will discuss the technical details, implications, and use cases of DeepLocker. More importantly, we will share countermeasures that could help defend against this type of attack in the wild.