Description
Remember, an issue is not the place to ask questions. You can use our Slack channel for that, or you may want to start a discussion on the Discussion Board.
When reporting an issue, please be sure to include the following:
- Before you open an issue, please check whether a similar issue already exists or has been closed before.
- A descriptive title, with the specific LLM-0-10 label applied that matches the entry. See our available labels.
- A description of the problem you're trying to solve, including why you think it is a problem.
- If the enhancement changes current behavior, the reasons why your solution is better.
- The artifact and version of the project you're referencing, and its location (e.g. the OWASP site, llmtop10.com, or the repo).
- The behavior you expect to see, and the actual behavior.
Steps to Reproduce
- NA
What happens?
2_0_vulns/LLM01_PromptInjection.md
I think we should segregate the basics of prompt injection, jailbreaking, prompt leaking, prompt hijacking, and indirect injections as separate entities.
Since context windows are now longer and more powerful, many-shot jailbreaking is a common technique that was not the case back in v1.0 of our project, so I think it should be called out as a technique, or at least as a reference.
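For readers unfamiliar with the technique: many-shot jailbreaking packs a long context window with fabricated dialogue turns in which the model appears to have already complied, then appends the real request. A minimal sketch of the prompt assembly (all names and data here are hypothetical, purely illustrative):

```python
def build_many_shot_prompt(demo_pairs, target_question):
    """Concatenate many fake user/assistant turns, then the real request."""
    turns = []
    for question, answer in demo_pairs:
        # Each fabricated pair makes it look like the model already complied.
        turns.append(f"User: {question}\nAssistant: {answer}")
    # The attacker's actual request rides on the established pattern.
    turns.append(f"User: {target_question}\nAssistant:")
    return "\n\n".join(turns)

# A modern long-context model can hold hundreds of such shots, whereas the
# short context windows common in the v1.0 era could only fit a handful.
demos = [(f"benign question {i}", f"compliant answer {i}") for i in range(256)]
prompt = build_many_shot_prompt(demos, "the attacker's actual request")
print(prompt.count("User:"))  # 257: 256 demonstration turns plus the target
```

This is only meant to show why longer context windows changed the threat model: the attack's strength scales with the number of demonstrations that fit in the prompt.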
What were you expecting to happen?
Any logs, error output, etc?
Any other comments?
Posted in #team-llm-promptinjection.
Solid references:
shakreiner and 1jokereleondz
