Commit 419e2e5: Add 2_0_vulns/LLM00_Preface.md (#466)
## Letter from the Project Leads
The OWASP Top 10 for Large Language Model Applications started in 2023 as a community-driven effort to highlight and address security issues specific to AI applications. Since then, the technology has continued to spread across industries and applications, and so have the associated risks. As LLMs are embedded more deeply in everything from customer interactions to internal operations, developers and security professionals are discovering new vulnerabilities—and ways to counter them.
The 2023 list was a major success in raising awareness and building a foundation for secure LLM usage, but we've learned even more since then. For this new 2025 version, we worked with a larger, more diverse group of contributors from around the world, all of whom helped shape this list. The process involved brainstorming sessions, voting, and real-world feedback from professionals in the thick of LLM application security, who contributed new entries and refined existing ones. Each voice was critical to making this release as thorough and practical as possible.
### What’s New in the 2025 Top 10
The 2025 list reflects a better understanding of existing risks and introduces critical updates on how LLMs are used in real-world applications today. For instance, **Unbounded Consumption** expands on what was previously Denial of Service to include risks around resource management and unexpected costs—a pressing issue in large-scale LLM deployments.
The **Vector and Embeddings** entry responds to the community’s requests for guidance on securing Retrieval-Augmented Generation (RAG) and other embedding-based methods, now core practices for grounding model outputs.
We’ve also added **System Prompt Leakage** in response to strong community demand, addressing an area with documented real-world exploits. Many applications were built on the assumption that system prompts are securely isolated, but recent incidents have shown that developers cannot safely assume the information in these prompts remains secret.
**Excessive Agency** has been expanded, given the increased use of agentic architectures that can give the LLM more autonomy. With LLMs acting as agents or in plug-in settings, unchecked permissions can lead to unintended or risky actions, making this entry more critical than ever.
### Moving Forward
Like the technology itself, this list is a product of the open-source community’s insights and experiences. It has been shaped by contributions from developers, data scientists, and security experts across sectors, all committed to building safer AI applications. We’re proud to share this 2025 version with you, and we hope it provides you with the tools and knowledge to secure LLMs effectively.
Thank you to everyone who helped bring this together and those who continue to use and improve it. We’re grateful to be part of this work with you.
#### Steve Wilson
Project Lead
OWASP Top 10 for Large Language Model Applications
LinkedIn: https://www.linkedin.com/in/wilsonsd/
#### Ads Dawson
Technical Lead & Vulnerability Entries Lead
OWASP Top 10 for Large Language Model Applications
LinkedIn: https://www.linkedin.com/in/adamdawson0/
