Due to the ever-increasing threat of cyber-attacks against critical cyber
infrastructure, organizations are focusing on building their cybersecurity
knowledge base. A salient source of cybersecurity knowledge is the Common
Vulnerabilities and Exposures (CVE) list, which details vulnerabilities found
in a wide range of software and hardware. However, these vulnerabilities often
lack a mitigation strategy that would prevent an attacker from exploiting them.
A well-known cybersecurity risk management framework, MITRE ATT&CK, offers
mitigation techniques for many malicious tactics. Despite the tremendous
benefits that both CVEs and the ATT&CK framework can provide for key
cybersecurity stakeholders (e.g., analysts, educators, and managers), the two
resources remain disconnected. We propose a model, named the CVE Transformer
(CVET), to label CVEs with one of ten MITRE ATT&CK tactics. The CVET model
contains a fine-tuning and self-knowledge distillation design applied to the
state-of-the-art pre-trained language model RoBERTa. Empirical results on a
gold-standard dataset suggest that our proposed enhancements improve
classification performance as measured by F1-score. These results can help
cybersecurity stakeholders enrich their collected CVEs with preliminary MITRE
ATT&CK tactic information.
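
To illustrate the core idea, the following is a minimal sketch of fine-tuning a RoBERTa classifier over ten ATT&CK tactic labels with a simple self-knowledge distillation term. It assumes the Hugging Face transformers and PyTorch libraries; the loss weighting, temperature, and training-loop details are illustrative assumptions, not the paper's exact CVET configuration.

```python
# Hypothetical sketch: RoBERTa fine-tuning for 10-class ATT&CK tactic labeling
# with a self-knowledge distillation loss. Hyperparameters (alpha, T) are
# illustrative assumptions, not the CVET paper's reported settings.
import torch
import torch.nn.functional as F
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

NUM_TACTICS = 10  # one label per CVE, drawn from ten MITRE ATT&CK tactics

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=NUM_TACTICS
)

def distillation_loss(batch_texts, labels, teacher_logits, alpha=0.5, T=2.0):
    """Cross-entropy on the hard tactic labels plus KL divergence against
    softened logits produced by the model itself in a prior pass."""
    inputs = tokenizer(batch_texts, padding=True, truncation=True,
                       return_tensors="pt")
    outputs = model(**inputs, labels=labels)
    ce_loss = outputs.loss  # standard fine-tuning objective

    # Self-knowledge distillation: match the model's own softened targets.
    student_log_probs = F.log_softmax(outputs.logits / T, dim=-1)
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    kd_loss = F.kl_div(student_log_probs, teacher_probs,
                       reduction="batchmean") * (T * T)

    return alpha * ce_loss + (1.0 - alpha) * kd_loss
```

In a self-knowledge distillation setup of this kind, teacher_logits would typically be the model's own softened predictions saved from an earlier pass (e.g., the previous epoch), so no separate teacher network is required.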
