
Commit d16c7fd

Broken link in LLM01 (#457)
It looks like the Embrace the Red URL is returning a 404 error. I have updated the URL, even though it looks like a typo on their end.
1 parent 0acb794 commit d16c7fd
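
A quick way to confirm the behavior described in the commit message is to request both URLs and compare status codes. Below is a minimal sketch in Python, using only the standard library; the two URLs are taken verbatim from the diff below, and the actual responses depend on the live Embrace the Red site:

```python
import urllib.request
import urllib.error

# URLs exactly as they appear in the diff below: the old link reportedly
# returns 404; the new one keeps the trailing "./" typo on the site's end.
OLD_URL = "https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection"
NEW_URL = "https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./"

def http_status(url: str) -> int:
    """Issue a HEAD request and return the HTTP status code."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request) as response:
            return response.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 404 for the broken link

for url in (OLD_URL, NEW_URL):
    print(http_status(url), url)
```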

1 file changed, 1 addition, 1 deletion


2_0_vulns/LLM01_PromptInjection.md

@@ -52,7 +52,7 @@ Prompt injection vulnerabilities are possible due to the nature of generative AI
 ### Reference Links
 
 1. [ChatGPT Plugin Vulnerabilities - Chat with Code](https://embracethered.com/blog/posts/2023/chatgpt-plugin-vulns-chat-with-code/) **Embrace the Red**
-2. [ChatGPT Cross Plugin Request Forgery and Prompt Injection](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection) **Embrace the Red**
+2. [ChatGPT Cross Plugin Request Forgery and Prompt Injection](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./) **Embrace the Red**
 3. [Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection](https://arxiv.org/pdf/2302.12173.pdf) **Arxiv**
 4. [Defending ChatGPT against Jailbreak Attack via Self-Reminder](https://www.researchsquare.com/article/rs-2873090/v1) **Research Square**
 5. [Prompt Injection attack against LLM-integrated Applications](https://arxiv.org/abs/2306.05499) **Cornell University**
