IoT Malware Data Augmentation using a Generative Adversarial Network.

J Carter, S Mancoridis, P Protopapas… - HICSS, 2024
Abstract
Behavioral malware detection has been shown to be an effective method for detecting malware running on computing hosts. Machine learning (ML) models are often used for this task; they take representative behavioral data from a device and classify each observation as malicious or benign. Although these models can perform well, ML models in security are often trained on imbalanced datasets, which yields poor real-world efficacy because the models favor the overrepresented class. Thus, we need a way to augment the underrepresented class. Common data augmentation techniques include SMOTE, data resampling/upsampling, and generative algorithms. In this work, we explore generative algorithms for this task and compare the results to those obtained with SMOTE and upsampling. Specifically, we feed the less-represented class of data into a Generative Adversarial Network (GAN) to create enough realistic synthetic data to balance the dataset. We show that using a GAN to balance a dataset that favors benign data helps a shallow neural network achieve a higher Area Under the Receiver Operating Characteristic Curve (AUC) and a lower False Positive Rate (FPR).
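
The paper's own model and data are not reproduced here; the following is a minimal sketch of the kind of GAN-based augmentation step the abstract describes, assuming tabular behavioral feature vectors. The feature dimension, latent dimension, network sizes, training hyperparameters, and the helper name augment_minority are illustrative assumptions, not details taken from the paper.

```python
# Minimal GAN sketch for augmenting an underrepresented (malware) class of
# tabular behavioral features. All dimensions and hyperparameters below are
# illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

FEATURE_DIM = 32   # assumed size of one behavioral feature vector
LATENT_DIM = 16    # assumed size of the generator's noise input


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, FEATURE_DIM),
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
        )

    def forward(self, x):
        return self.net(x)


def augment_minority(minority_x, n_synthetic, epochs=200, batch_size=64):
    """Train a GAN on minority-class samples and return synthetic samples."""
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(epochs):
        idx = torch.randint(0, minority_x.size(0), (batch_size,))
        real = minority_x[idx]
        fake = gen(torch.randn(batch_size, LATENT_DIM))

        # Discriminator step: push real samples toward label 1, fakes toward 0.
        d_loss = bce(disc(real), torch.ones(batch_size, 1)) + \
                 bce(disc(fake.detach()), torch.zeros(batch_size, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to make the discriminator label fakes as real.
        g_loss = bce(disc(gen(torch.randn(batch_size, LATENT_DIM))),
                     torch.ones(batch_size, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    with torch.no_grad():
        return gen(torch.randn(n_synthetic, LATENT_DIM))


# Usage: balance a hypothetical dataset of 1000 benign vs. 100 malware samples
# by generating 900 synthetic malware observations.
malware_x = torch.randn(100, FEATURE_DIM)  # placeholder minority-class data
synthetic = augment_minority(malware_x, n_synthetic=900)
balanced_malware = torch.cat([malware_x, synthetic], dim=0)
```

The balanced minority class would then be combined with the benign data to train the downstream classifier (a shallow neural network in the paper), which is where the reported AUC and FPR improvements are measured.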