# CITATION.cff
cff-version: 1.2.0
title: "GenAI in the SOC Paper - Acceleration v. Responsibility"
message: "If you use this work, please cite it as below."
# CFF 1.2.0 restricts the top-level type to "software" or "dataset";
# the article itself is described in preferred-citation below.
type: software
authors:
  - name: "ESKme"
    website: "https://ESKme.net"
repository-code: "https://github.com/ESKme/genai-in-the-soc-paper"
license: "CC-BY-SA-4.0"
abstract: >
  This paper analyzes the integration of generative artificial intelligence into Security Operations
  Centers (SOCs), with a specific focus on incident response processes.
  The central objective is to examine how generative AI can accelerate analytical workflows such as
  alert triage, log interpretation, and contextual threat analysis, while maintaining the necessary
  level of human oversight, accountability, and decision authority.
  The work introduces a conceptual framework describing the tension between operational acceleration
  and responsibility in AI-assisted security environments. It argues that generative AI should be
  understood as a decision-support system rather than an autonomous decision-making entity.
  Based on this perspective, the paper outlines a human-in-the-loop architecture in which AI acts as
  an analytical layer within the SOC workflow, augmenting but not replacing human expertise.
  In addition, key risks associated with generative AI in security contexts are discussed, including
  hallucinations, automation bias, lack of explainability, and adversarial manipulation.
  The results demonstrate that while generative AI has the potential to significantly improve
  efficiency in incident response, its responsible use requires clearly defined governance models,
  validation mechanisms, and a strict separation between support and authority.
keywords:
  - cybersecurity
  - generative-ai
  - soc
  - incident-response
  - ai-in-security
  - human-in-the-loop
  - decision-support
  - security-operations-center
  - threat-detection
  - ai-governance
date-released: 2026-04-13
version: "1.0"
preferred-citation:
  type: article
  title: "GenAI in the SOC Paper - Acceleration v. Responsibility"
  authors:
    - name: "ESKme"
  year: 2026
  url: "https://github.com/ESKme/genai-in-the-soc-paper"