Jakob Nyberg, Pontus Johnson (KTH Royal Institute of Technology)

We implemented and evaluated an automated cyber defense agent. The agent takes security alerts as input and uses reinforcement learning to learn a policy for executing predefined defensive measures. The defender policies were trained in an environment intended to simulate a cyber attack. In the simulation, an attacking agent attempts to capture targets in the environment, while the defender attempts to protect them by enabling defenses. The environment was modeled using attack graphs based on the Meta Attack Language. We assumed that defensive measures incur downtime costs, meaning that the defender agent is penalized for using them. We also assumed that the environment is equipped with an imperfect intrusion detection system that occasionally produces erroneous alerts based on the environment state. To evaluate the setup, we trained the defender agent with different volumes of intrusion detection system noise. We also trained agents against different attacker strategies and attack graph sizes. In experiments, the defender agent using policies trained with reinforcement learning outperformed agents using heuristic policies. Experiments also demonstrated that the learned policies generalized across different attacker strategies. However, the performance of the learned policies decreased as the attack graphs increased in size.
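To make the setup concrete, the following is a minimal sketch of the kind of simulation the abstract describes: an attacker expands through a toy attack graph while the defender, observing alerts from an imperfect intrusion detection system, may enable defenses at a downtime cost. The graph, node names, rates, and reward values are all illustrative assumptions, not the authors' actual environment (which is built from Meta Attack Language attack graphs).

```python
import random

# Hypothetical toy attack graph (assumed for illustration): each node maps
# to the attack steps reachable from it. Leaf nodes are the attacker's targets.
ATTACK_GRAPH = {
    "internet": ["webserver"],
    "webserver": ["database", "fileshare"],
    "database": [],   # target
    "fileshare": [],  # target
}
TARGETS = {"database", "fileshare"}

class ToyAttackSimulation:
    """Sketch of the simulated environment: the defender is penalized both
    for enabling defenses (downtime cost) and for captured targets, and only
    observes the environment through noisy IDS alerts."""

    def __init__(self, fpr=0.1, fnr=0.1, downtime_cost=0.2, seed=0):
        self.rng = random.Random(seed)
        self.fpr, self.fnr = fpr, fnr          # IDS false positive / negative rates
        self.downtime_cost = downtime_cost
        self.compromised = {"internet"}        # attacker's entry point
        self.defended = set()

    def observe(self):
        # Imperfect IDS: each node's true status is flipped with some probability,
        # so the defender sometimes sees erroneous alerts.
        alerts = {}
        for node in ATTACK_GRAPH:
            truth = node in self.compromised
            flip = self.rng.random() < (self.fnr if truth else self.fpr)
            alerts[node] = truth ^ flip
        return alerts

    def step(self, defend_node=None):
        reward = 0.0
        if defend_node is not None:
            self.defended.add(defend_node)
            reward -= self.downtime_cost       # using a defense is penalized
        # Attacker advances to one reachable, undefended attack step per step.
        frontier = [c for n in self.compromised
                    for c in ATTACK_GRAPH[n]
                    if c not in self.compromised and c not in self.defended]
        if frontier:
            captured = self.rng.choice(frontier)
            self.compromised.add(captured)
            if captured in TARGETS:
                reward -= 1.0                  # penalty for a captured target
        return self.observe(), reward
```

A policy (learned or heuristic) would map the noisy alert dictionary to a `defend_node` choice each step; the trade-off the paper studies is visible even here, since defending "webserver" early stops the attack entirely but costs downtime every time a defense is enabled.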
