
GenAI in the SOC


GenAI in the SOC Paper - Acceleration v. Responsibility


Abstract

Generative AI is increasingly integrated into Security Operations Centers (SOCs) to accelerate alert triage, investigation, and response workflows.

This paper analyzes both the opportunities and risks of this integration, focusing on the fundamental tension between operational acceleration and the need for human responsibility, validation, and accountability.


Core Argument

Generative AI should be treated as a decision-support system, not as an autonomous decision-maker.

While AI can significantly accelerate analysis and pattern recognition, the final responsibility for security-critical decisions must remain with human experts.


Conceptual Model

Paradigm Shift

This model illustrates the transition from deterministic, rule-based systems to probabilistic, AI-assisted analysis, emphasizing the shift in epistemic certainty and the continued necessity of human oversight.


Human-in-the-Loop Architecture

Human-in-the-Loop

This architecture demonstrates how generative AI integrates into SOC workflows as an analytical layer, augmenting human analysts while preserving human decision authority in incident response.
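The separation the paper argues for, AI as an analytical layer with decision authority reserved for the human analyst, can be sketched in a few lines of code. This is an illustrative sketch only; all names (`AISuggestion`, `triage`, the stub callables) are hypothetical and not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    """Output of the generative-AI analytical layer (hypothetical structure)."""
    alert_id: str
    classification: str   # e.g. "likely-phishing"
    rationale: str        # model-generated explanation, shown to the analyst

def triage(alert_id: str, suggest, human_decide) -> str:
    """AI proposes, the human disposes: the model never acts on its own."""
    suggestion = suggest(alert_id)    # analytical layer: accelerates triage
    return human_decide(suggestion)   # decision authority: stays with the human

# Stub callables standing in for the model and the analyst:
ai = lambda a: AISuggestion(a, "likely-phishing",
                            "sender domain registered yesterday")
analyst = lambda s: "escalate" if s.classification == "likely-phishing" else "close"
print(triage("ALERT-42", ai, analyst))  # → escalate
```

The key design point is that `triage` has no code path that executes a response action from the model's suggestion alone; the suggestion is data handed to a human, mirroring the decision-support role described above.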


About the Paper

This repository contains the full research paper, supporting figures, and supplementary material.

The work focuses on the role of generative AI in incident response and explores how efficiency gains can be achieved without compromising accountability and correctness in security-critical environments.


Repository Structure

See Repository Structure for an overview of the project layout.

  • /paper/ → Full paper (PDF)
  • /paper/source/ → Source files (ODT)
  • /figures/ → Diagrams and conceptual models
  • /appendix/ → Terminology and supporting material
  • /assets/ → Visual assets (social preview, cover)

Repository Layout Overview

.
├─ README.md
├─ LICENSE
├─ CITATION.cff
├─ .gitignore
│
├─ paper/
│  ├─ ESKme-GenAI-im-SOC-Paper-EN-v1.0.pdf
│  ├─ ESKme-GenAI-im-SOC-Paper-DE-v1.0.pdf
│  └─ source/
│     ├─ ESKme-GenAI-im-SOC-Paper-EN-v1.0.odt
│     └─ ESKme-GenAI-im-SOC-Paper-DE-v1.0.odt
│
├─ figures/
│  ├─ paradigm-shift-ai-soc-en.jpg
│  ├─ paradigm-shift-ai-soc-de.jpg
│  └─ human-in-the-loop-architecture.png
│
├─ appendix/
│  └─ terminology.md
│
└─ assets/
   ├─ cover-en.jpg
   ├─ cover-de.jpg
   └─ social-preview.png
    

Citation

If you use this work, please cite it using the information provided in CITATION.cff.


License

This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).


ESKme ∴ Ethical.Shift.Keeper.//me