Llama-3.1-FoundationAI-SecurityLLM-Reasoning-8B Technical Report

Zhuoran Yang
Ed Li
Jianliang He
Aman Priyanshu
Baturay Saglam
Paul Kassianik
Sajana Weerawardhena
Anu Vellore
Blaine Nelson
Neusha Javidnia
Arthur Goldblatt
Fraser Burch
Avi Zohary
Assaf Eisenman
Mahdi Sabbaghi
Supriti Vijay
Rahim Dharssi
Dhruv Kedia
Kojin Oshiba
Yaron Singer
Amin Karbasi
Main: 20 pages · Bibliography: 4 pages · Appendix: 7 pages · 6 figures · 7 tables
Abstract

We present Foundation-Sec-8B-Reasoning, the first open-source native reasoning model for cybersecurity. Built upon our previously released Foundation-Sec-8B base model (derived from Llama-3.1-8B-Base), the model is trained through a two-stage process combining supervised fine-tuning (SFT) and reinforcement learning from verifiable rewards (RLVR). Our training leverages proprietary reasoning data spanning cybersecurity analysis, instruction-following, and mathematical reasoning. Evaluation across 10 cybersecurity benchmarks and 10 general-purpose benchmarks demonstrates performance competitive with significantly larger models on cybersecurity tasks while maintaining strong general capabilities. The model shows effective generalization on multi-hop reasoning tasks and strong safety performance when deployed with appropriate system prompts and guardrails. This work shows that domain-specialized reasoning models can achieve strong performance on specialized tasks while retaining broad general capabilities. We release the model publicly at this https URL.
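To make the RLVR stage mentioned in the abstract concrete, the sketch below illustrates the general idea of a verifiable reward: a programmatic check that scores a model completion against a known-correct reference, rather than relying on a learned reward model. This is a minimal illustrative example, not the authors' implementation; the answer format (a `\boxed{...}` wrapper or an "Answer:" marker), the function names, and the exact-match scoring rule are all assumptions for demonstration.

```python
import re
from typing import Optional


def extract_final_answer(completion: str) -> Optional[str]:
    """Pull the model's final answer out of a completion.

    Hypothetical format: answers wrapped in \\boxed{...} or following
    an "Answer:" marker; real training data may use other conventions.
    """
    boxed = re.search(r"\\boxed\{([^}]*)\}", completion)
    if boxed:
        return boxed.group(1).strip()
    marker = re.search(r"Answer:\s*(.+)", completion)
    if marker:
        return marker.group(1).strip()
    return None


def verifiable_reward(completion: str, reference: str) -> float:
    """Binary reward: 1.0 if the extracted answer matches the reference
    after whitespace/case normalization, else 0.0."""
    answer = extract_final_answer(completion)
    if answer is None:
        return 0.0

    def normalize(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip().lower()

    return 1.0 if normalize(answer) == normalize(reference) else 0.0


if __name__ == "__main__":
    sample = "The advisory maps to a known flaw.\nAnswer: CVE-2022-3602"
    print(verifiable_reward(sample, "CVE-2022-3602"))  # 1.0
```

In an RLVR setup, a reward like this is computed for each sampled completion and fed to a policy-gradient optimizer; because the signal is deterministic and checkable, it avoids the reward-hacking issues of learned reward models on tasks with verifiable answers.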
