
Tiki-Taka: Attacking and Defending Deep Learning-based Intrusion Detection Systems

Chaoyun Zhang, Xavier Costa-Perez, Paul Patras

BibTeX:

@inproceedings{10.1145/3411495.3421359,
  author    = {Zhang, Chaoyun and Costa-Perez, Xavier and Patras, Paul},
  title     = {Tiki-Taka: Attacking and Defending Deep Learning-Based Intrusion Detection Systems},
  year      = {2020},
  isbn      = {9781450380843},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3411495.3421359},
  doi       = {10.1145/3411495.3421359},
  booktitle = {Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop},
  pages     = {27--39},
  numpages  = {13},
  location  = {Virtual Event, USA},
  series    = {CCSW '20}
}

Abstract:

Neural networks are increasingly important in the development of Network Intrusion Detection Systems (NIDS), as they have the potential to achieve high detection accuracy while requiring limited feature engineering. Deep learning-based detectors can, however, be vulnerable to adversarial examples, whereby attackers who may be oblivious to the precise mechanics of the targeted NIDS add subtle perturbations to malicious traffic features, with the aim of evading detection and disrupting critical systems in a cost-effective manner. Defending against such adversarial attacks is therefore of high importance, but requires addressing daunting challenges.
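To make the threat concrete, the following minimal sketch (not taken from the paper) shows how a gradient-based attack in the FGSM family could perturb only attacker-controllable traffic features. The `model`, the [0, 1] feature scaling, and the binary `mask` of mutable (e.g., time-based) features are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_masked(model, x, y, eps, mask):
    """Illustrative FGSM-style evasion sketch (not the paper's code).

    model: a trained detector returning class logits
    x:     traffic feature vectors, assumed scaled to [0, 1]
    y:     ground-truth labels (1 = malicious)
    eps:   perturbation budget
    mask:  1 for features the attacker can alter (e.g., time-based), else 0
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the detector's loss,
    # touching only realistically mutable features.
    x_adv = x + eps * x.grad.sign() * mask
    return x_adv.clamp(0.0, 1.0).detach()
```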
In this paper, we introduce Tiki-Taka, a general framework that (i) assesses the robustness of state-of-the-art deep learning-based NIDS against adversarial manipulations, and (ii) incorporates our proposed defense mechanisms to increase the NIDS' resistance to attacks employing such evasion techniques. Specifically, we select five cutting-edge adversarial attack mechanisms to subvert three popular malicious traffic detectors that employ neural networks. We experiment with a publicly available dataset and consider both one-to-all and one-to-one classification scenarios, i.e., discriminating illicit vs. benign traffic and identifying specific types of anomalous traffic among many observed, respectively. The results obtained reveal that, under realistic constraints, attackers can evade NIDS with success rates of up to 35.7%, by altering only time-based features of the traffic generated. To counteract these weaknesses, we propose three defense mechanisms, namely: model voting ensembling, ensemble adversarial training, and query detection. To the best of our knowledge, our work is the first to propose defenses against adversarial attacks targeting NIDS. We demonstrate that, when employing the proposed methods, intrusion detection rates can be improved to nearly 100% against most types of malicious traffic, and attacks with potentially catastrophic consequences (e.g., botnet) can be thwarted. This confirms the effectiveness of our solutions and makes the case for their adoption when designing robust and reliable deep anomaly detectors.
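As an illustration of the first defense, a majority-vote ensemble can be sketched as follows. This is a generic reading of "model voting ensembling", assuming independently trained detectors with scikit-learn-style `predict()` methods, not the paper's exact implementation.

```python
import numpy as np

def ensemble_vote(detectors, x):
    """Flag traffic as malicious only when a majority of independently
    trained detectors agree; assumes each detector exposes predict()
    returning 0 (benign) or 1 (malicious) per sample.
    """
    votes = np.stack([d.predict(x) for d in detectors])  # (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

The intuition is that an adversarial example crafted against one model often fails to transfer to every ensemble member, so a majority vote is harder to evade than any single detector.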
