This repository is dedicated to benchmarking lightweight, open-source Large Language Models (LLMs) for their effectiveness in providing security guidance. Our work builds upon the SECURE Benchmark to evaluate selected models across predefined cybersecurity tasks using external configuration files for flexibility and scalability.
See the RESULTS for the evaluation outcomes.
Evaluate the following LLMs against the SECURE benchmark dataset (a sketch of how models could be declared in the external configuration files follows this list):
- DLite: Lightweight GPT-based model for causal tasks.
- FastChat-T5: Lightweight T5 variant for sequence-to-sequence tasks.
- Gemma: Lightweight model for cybersecurity reasoning.
- LLaMA 2: Lightweight model for reasoning and causal tasks.
- LLaMA 3.2: Advanced model for causal and sequence-to-sequence tasks.
- ZySec-AI/SecurityLLM: Specialized LLM for security-specific tasks.
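The models above are selected through the external configuration files mentioned earlier rather than hard-coded in the scripts. As a rough, hypothetical sketch of how such a configuration might be resolved (the file name `configs/models.json`, its keys, and the example entry are assumptions, not the repository's actual schema):

```python
# Hypothetical sketch of loading benchmark models from an external config file.
# The path "configs/models.json", its keys, and the example model ID are
# assumptions for illustration, not the repository's actual schema.
import json

from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer


def load_model_entries(path="configs/models.json"):
    """Read the list of models to benchmark from an external configuration file."""
    with open(path) as f:
        return json.load(f)["models"]


def load_model(entry):
    """Instantiate a tokenizer/model pair for one configured model."""
    loader = AutoModelForSeq2SeqLM if entry.get("type") == "seq2seq" else AutoModelForCausalLM
    tokenizer = AutoTokenizer.from_pretrained(entry["hf_id"])
    model = loader.from_pretrained(entry["hf_id"])
    return tokenizer, model


if __name__ == "__main__":
    # Example entry: {"name": "FastChat-T5", "hf_id": "lmsys/fastchat-t5-3b-v1.0", "type": "seq2seq"}
    for entry in load_model_entries():
        print(f"Configured model: {entry['name']} ({entry['hf_id']})")
```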
Each research category has its own evaluation script (a minimal sketch of the shared evaluation pattern follows this list):
- `test_information_extraction.py`
  - Description: Tests the ability of models to extract information such as MITRE ATT&CK tactics and CWE weaknesses.
  - Dataset: SECURE - MAET.tsv, CWET.tsv
- `test_knowledge_understanding.py`
  - Description: Evaluates models on understanding cybersecurity concepts and known vulnerabilities.
  - Dataset: SECURE - KCV.tsv
- `test_reasoning_and_problem_solving.py`
  - Description: Assesses reasoning about cybersecurity risks and solving CVSS-related problems.
  - Dataset: SECURE - RERT.tsv, CPST.tsv
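Each of these scripts reads the relevant SECURE TSV file, prompts the model under test, and scores its responses. The following is a minimal sketch of that pattern, assuming hypothetical column names (`Prompt`, `Correct Answer`) and file paths; see the actual scripts and dataset files for the real schema:

```python
# Hypothetical sketch of the evaluation loop, not the repository's actual code.
# The TSV path and the column names "Prompt" / "Correct Answer" are assumptions.
import pandas as pd


def evaluate(answer_fn, tsv_path="data/SECURE - MAET.tsv"):
    """Run a model's answer function over one SECURE task file and return accuracy."""
    df = pd.read_csv(tsv_path, sep="\t")
    correct = 0
    for _, row in df.iterrows():
        prediction = answer_fn(str(row["Prompt"]))  # query the model under test
        if prediction.strip().lower() == str(row["Correct Answer"]).strip().lower():
            correct += 1
    return correct / len(df)


if __name__ == "__main__":
    # Stand-in model that always answers "true"; replace with a real LLM call.
    print(f"Accuracy: {evaluate(lambda prompt: 'true'):.2%}")
```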
The repository includes scripts to visualize the results. Each script generates plots that can be accessed directly below; a minimal sketch of the plotting approach follows the list.
- `plot_density_results.py`
  - Description: Plots the density of correct vs. incorrect predictions for each model.
- `plot_heatmap_results.py`
  - Description: Creates heatmaps to visualize model accuracy across datasets and tasks.
- `plot_violin_results.py`
  - Description: Generates violin plots to illustrate performance distribution across tasks and datasets.
- `plot_performance_results.py`
  - Description: Compares task performance across models using bar plots.
- `plot_sensitivity_results.py`
  - Description: Visualizes the sensitivity of model performance across datasets and tasks.
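All of the plotting scripts work from the saved evaluation results. As a minimal sketch of the heatmap approach (the results file `results/summary.csv` and its column names are assumptions, not the scripts' actual inputs):

```python
# Hypothetical sketch of an accuracy heatmap, not the repository's plotting code.
# The results file and the "model" / "dataset" / "accuracy" columns are assumptions.
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

results = pd.read_csv("results/summary.csv")  # assumed aggregated results file
pivot = results.pivot(index="model", columns="dataset", values="accuracy")

plt.figure(figsize=(8, 5))
sns.heatmap(pivot, annot=True, fmt=".2f", cmap="viridis")
plt.title("Model accuracy per SECURE dataset")
plt.tight_layout()
plt.savefig("accuracy_heatmap.png")
```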
Clone the repository:
git clone [email protected]:davisconsultingservices/llm_security_benchmarks.git
cd llm_security_benchmarks
If datasets are managed as submodules, initialize and update them:
git submodule update --init --recursive
Create and activate a virtual environment:
python3 -m venv venv
source venv/bin/activate
Install the dependencies:
pip install -r requirements.txt
Execute the evaluation scripts for each research category:
python scripts/test_information_extraction.py
python scripts/test_knowledge_understanding.py
python scripts/test_reasoning_and_problem_solving.py
Run the plotting scripts to visualize the results:
python scripts/plot_density_results.py
python scripts/plot_heatmap_results.py
python scripts/plot_violin_results.py
python scripts/plot_performance_results.py
python scripts/plot_sensitivity_results.py
- SECURE Benchmark Paper: https://arxiv.org/pdf/2405.20441
- SECURE Dataset Repository: https://github.com/aiforsec/SECURE
For more details, refer to the SECURE Benchmark Paper.
This project is licensed under the Apache-2.0 License.