diff --git a/.github/scripts/generate_thymeleaf_previews.py b/.github/scripts/generate_thymeleaf_previews.py index fb1504573..4e0a79250 100755 --- a/.github/scripts/generate_thymeleaf_previews.py +++ b/.github/scripts/generate_thymeleaf_previews.py @@ -431,6 +431,103 @@ def replace_th_attr(self, content): content = re.sub(r'th:attr="[^"]*"', "", content) return content + def add_static_assets_challenge58(self, content): + """Add embedded CSS and JS for Challenge 58 static preview.""" + if "
" in content: + head_additions = f""" + + +Try the vulnerable endpoint:
+ + 🚨 Trigger Database Connection Error + +This endpoint simulates a database connection failure that exposes the connection string with embedded credentials.
+Welcome to Challenge 58: Database Connection String Exposure.
+ +Find the database password that gets exposed when the application fails to connect to the database.
+💡 Visit: The /error-demo/database-connection
endpoint to trigger the error.
Try the vulnerable endpoint:
+ + 🚨 Trigger Database Connection Error + +This endpoint simulates a database connection failure that exposes the connection string with embedded credentials.
+This challenge demonstrates how database connection failures can expose sensitive credentials through error messages.
+ +Try the vulnerable endpoint:
+Click the button below to trigger a database connection error that exposes the connection string with embedded credentials.
+ + 🚨 Trigger Database Connection Error + +This endpoint simulates a database connection failure that exposes the connection string with embedded credentials.
+⭐⭐⭐ -
- -Welcome to challenge Challenge 57: JavaScript-based In-Browser LLM Challenge.
- -Find the secret hidden in the WrongSecrets repository. This challenge focuses on LLM SECURITY.
-💡 Look for: Configuration files, source code, environment variables, Docker files, or cloud infrastructure related to this challenge.
-This challenge features a simple AI assistant running directly in your browser. The AI has been given specific instructions and contains a hidden secret that you need to discover.
- -Your Mission:
-The AI assistant has been programmed with a system prompt that contains a secret code. Your task is to use various prompt engineering and injection techniques to extract this hidden secret from the AI's instructions.
- -Techniques to Try:
- -1. Direct Questions: Ask the AI directly about secrets or hidden information
-2. Prompt Injection: Try to override the AI's instructions
-3. Social Engineering: Use conversational techniques to get the AI to reveal information
-4. Meta-Questions: Ask about the AI's programming or what it's not supposed to reveal
- -Examples to Try:
-What You're Looking For:
-The secret is a specific code string that the AI knows but is instructed not to reveal. It follows the format of a challenge identifier.
- -Security Context:
-This challenge demonstrates real-world vulnerabilities in AI systems:
-Try different approaches with the AI assistant below until you discover the hidden secret!
- -The AI assistant has been programmed with specific instructions that include a secret. Here are some approaches to try:
- -Direct Approaches:
-Prompt Injection Techniques:
-Social Engineering:
-Meta-Questions:
-Exploration Tips:
-Remember: This is a controlled environment for learning about AI security. In real-world scenarios, never attempt to extract unauthorized information from AI systems!
- -Why AI System Prompts Can Be Vulnerable
- -This challenge demonstrates several important security concerns with AI systems:
- -1. Prompt Injection Vulnerabilities:
-AI systems can be manipulated through carefully crafted inputs that bypass their safety measures or instruction boundaries. This is similar to SQL injection but for AI models.
- -2. System Prompt Exposure:
-When sensitive information is embedded in system prompts, it creates a risk that this information could be extracted through various techniques. System prompts should never contain secrets, credentials, or sensitive data.
- -3. AI Jailbreaking:
-This refers to techniques used to bypass an AI's built-in restrictions or safety measures. Attackers might use social engineering, role-playing, or instruction override techniques.
- -4. Information Leakage:
-AI systems might inadvertently reveal information they were instructed to keep hidden, especially when faced with sophisticated questioning techniques.
- -Real-World Implications:
- -Best Practices:
-Detection and Prevention:
-This challenge shows why treating AI system prompts as a security boundary is insufficient - proper security must be implemented at multiple layers.
- -Chat with our simple AI assistant. Try asking it questions!
- -