Commit c28298d

fix comments

Signed-off-by: balteravishay <[email protected]>

1 parent 25b583f commit c28298d

File tree

1 file changed: +1 −1 lines changed


docs/Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ By keeping these points in mind, you can harness AI code assistants effectively

 One of the first sections in your instructions should reinforce general secure coding best practices. These principles apply to all languages and frameworks, and you want the AI to **always** keep them in mind when generating code:

-* **Input Validation & Output Encoding:** Instruct the AI to treat all external inputs as untrusted and to validate them. *Example: "user inputs should be checked for expected format and length*". Any output should be properly encoded to prevent injection attacks such as SQL injection or cross-site scripting (XSS). *Example: "Always validate function arguments and use parameterized queries for database access"* and *"Escape special characters in user-generated content before rendering it in HTML"*. Similarly, specify that when generating output contexts such as HTML or SQL, the assistant should use safe frameworks or encoding functions to avoid vulnerabilities. [[8]](#8) [[9]](#9) [[10]](#10)
+* **Input Validation & Output Encoding:** Instruct the AI to treat all external inputs as untrusted and to validate them. *Example: "user inputs should be checked for expected format and length"*. Any output should be properly encoded to prevent injection attacks such as SQL injection or cross-site scripting (XSS). *Example: "Always validate function arguments and use parameterized queries for database access"* and *"Escape special characters in user-generated content before rendering it in HTML"*. Similarly, specify that when generating output contexts such as HTML or SQL, the assistant should use safe frameworks or encoding functions to avoid vulnerabilities. [[8]](#8) [[9]](#9) [[10]](#10)
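The validation and parameterized-query guidance in this bullet can be sketched in a few lines. This is a minimal illustration using Python's built-in `sqlite3` and `re` modules; the `get_user` helper, the `users` table, and the 32-character limit are hypothetical choices, not part of the guide:

```python
import re
import sqlite3

# Hypothetical "expected format and length" rule for a username.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def get_user(conn: sqlite3.Connection, username: str):
    # Treat external input as untrusted: validate format and length first.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    # Parameterized query: the driver binds the value safely,
    # so the input can never be interpreted as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
print(get_user(conn, "alice"))
```

For the HTML side of the same bullet, the standard library's `html.escape` performs the output encoding before user-generated content is rendered.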
 * **Authentication, Authorization & Secrets Management:** Emphasize that credentials and sensitive tokens must never be hard-coded or exposed. Your instructions can say: *"Never include API keys, passwords, or secrets in code output, and use environment variables or secure vault references instead"*. Also instruct the AI to use secure authentication flows (for instance, using industry-standard libraries for handling passwords or tokens) and to enforce role-based access checks where appropriate. [[11]](#11) [[12]](#12) [[13]](#13)
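The environment-variable pattern this bullet recommends might look like the following sketch; the variable name `SERVICE_API_KEY` is hypothetical, and a real deployment could equally read from a secrets vault:

```python
import os

def load_api_key(name: str = "SERVICE_API_KEY") -> str:
    # Read the secret from the environment instead of hard-coding it,
    # and fail fast if it is missing rather than falling back to a default.
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; refusing to start")
    return key
```

Failing loudly at startup keeps a missing secret from silently degrading into an unauthenticated or hard-coded fallback.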
 * **Error Handling & Logging:** Guide the AI to implement errors securely by catching exceptions and failures without revealing sensitive info (stack traces, server paths, etc.) to the end-user. In your instructions, you might include: *"When generating code, handle errors gracefully and log them, but do not expose internal details or secrets in error messages".* This ensures the assistant's suggestions include secure error-handling patterns (like generic user-facing messages and detailed logs only on the server side). Additionally, instruct the AI to use logging frameworks that can be configured for security (e.g. avoiding logging of personal data or secrets). [[14]](#14)
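The "generic user-facing message, detailed server-side log" pattern from this bullet can be sketched with the standard `logging` module; the `handle_request` function and correlation-id scheme are hypothetical illustrations:

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def handle_request(payload: dict) -> dict:
    try:
        return {"result": payload["amount"] * 2}
    except Exception:
        # Full traceback goes to the server-side log only...
        error_id = uuid.uuid4().hex
        logger.exception("request failed (error_id=%s)", error_id)
        # ...while the caller sees a generic message plus a correlation id
        # that lets operators find the detailed log entry.
        return {"error": "An internal error occurred.", "error_id": error_id}
```

The correlation id gives support staff a handle on the failure without leaking stack traces, paths, or secrets to the end-user.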
 * **Secure Defaults & Configurations:** Include guidance such as: *"Prefer safe defaults in configurations – for example, use HTTPS by default, require strong encryption algorithms, and disable insecure protocols or options".* By specifying this, the AI will be more likely to generate code that opts-in to security features. Always instruct the AI to follow the principle of least privilege (e.g. minimal file system permissions, least-privileged user accounts for services, etc.) in any configuration or code it proposes. [[15]](#15) [[16]](#16)
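One way to encode the safe-defaults idea from this bullet is a configuration object whose zero-argument construction is already secure; the `ServerConfig` fields here are hypothetical examples, using Python's `ssl` module for the TLS floor:

```python
import ssl
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    # Safe defaults: TLS on, modern minimum protocol version, debug off.
    # Opting *out* of security requires an explicit override.
    use_tls: bool = True
    min_tls_version: ssl.TLSVersion = ssl.TLSVersion.TLSv1_2
    debug: bool = False

def make_ssl_context(cfg: ServerConfig) -> ssl.SSLContext:
    # create_default_context() already enables certificate verification;
    # we additionally pin the minimum protocol version from the config.
    ctx = ssl.create_default_context()
    ctx.minimum_version = cfg.min_tls_version
    return ctx

cfg = ServerConfig()  # all security-relevant options default to safe values
```

Making the default constructor the secure path means forgetting a setting fails safe instead of open.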
