Jakob Nyberg, Pontus Johnson (KTH Royal Institute of Technology)

We implemented and evaluated an automated cyber defense agent. The agent takes security alerts as input and uses reinforcement learning to learn a policy for executing predefined defensive measures. The defender policies were trained in an environment intended to simulate a cyber attack. In the simulation, an attacking agent attempts to capture targets in the environment, while the defender attempts to protect them by enabling defenses. The environment was modeled using attack graphs based on the Meta Attack Language. We assumed that defensive measures have downtime costs, meaning that the defender agent was penalized for using them. We also assumed that the environment was equipped with an imperfect intrusion detection system that occasionally produces erroneous alerts based on the environment state. To evaluate the setup, we trained the defender agent at different levels of intrusion detection system noise, as well as with different attacker strategies and attack graph sizes. In our experiments, the defender agent using policies trained with reinforcement learning outperformed agents using heuristic policies. The experiments also demonstrated that the learned policies could generalize across different attacker strategies. However, the performance of the learned policies decreased as the attack graphs grew in size.
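The setup described above can be illustrated with a toy environment. This is only a minimal sketch under assumed simplifications, not the authors' implementation: the attack graph is reduced to a linear chain of attack steps, the attacker advances one step per tick unless blocked, each enabled defense incurs a per-tick downtime penalty, and the observation is a noisy intrusion-detection alert vector with assumed false-positive/false-negative rates. All class names, parameters, and reward values are hypothetical.

```python
import random


class ToyAttackGraphEnv:
    """Hypothetical sketch of the paper's setting: an attacker progresses
    along a chain of attack steps toward a target; the defender may enable
    a defense on any step, which blocks the attacker there but costs
    downtime each tick. Observations are noisy IDS alerts per step."""

    def __init__(self, n_steps=5, fp_rate=0.1, fn_rate=0.1,
                 downtime_cost=0.1, seed=0):
        self.n = n_steps
        self.fp, self.fn = fp_rate, fn_rate      # assumed IDS error rates
        self.cost = downtime_cost                # assumed downtime penalty
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.compromised = [False] * self.n      # attacker progress
        self.defended = [False] * self.n         # enabled defenses
        self.frontier = 0                        # next step attacker tries
        self.done = False
        return self._alerts()

    def _alerts(self):
        # Imperfect IDS: flip each true compromise flag with fp/fn probability.
        return [
            (not c) if self.rng.random() < (self.fn if c else self.fp) else c
            for c in self.compromised
        ]

    def step(self, action):
        """action: index of a step to defend, or None to wait."""
        if action is not None:
            self.defended[action] = True
        # Attacker advances unless the next step is defended.
        if self.frontier < self.n and not self.defended[self.frontier]:
            self.compromised[self.frontier] = True
            self.frontier += 1
        # Reward: downtime penalty per enabled defense; large loss on capture.
        reward = -self.cost * sum(self.defended)
        if self.frontier == self.n:              # final target captured
            reward -= 10.0
            self.done = True
        return self._alerts(), reward, self.done
```

A simple heuristic baseline in this sketch would defend the first step that raises an alert; an RL policy would instead be trained to trade the downtime penalty against the capture loss under the alert noise.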
