Commit 6ef53ab

fix lint
Signed-off-by: balteravishay <[email protected]>
1 parent c791dfe commit 6ef53ab

1 file changed: +4 -6 lines changed


docs/Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md

Lines changed: 4 additions & 6 deletions
@@ -4,7 +4,7 @@ AI code assistants can significantly speed up development, but they need guidance
 
 ---
 
-## TL;DR
+## TL;DR
 
 Short on time? Here's what really matters:
 
@@ -35,11 +35,12 @@ One of the first sections in your instructions should reinforce general secure coding practices
 
 Modern software heavily relies on third-party libraries and dependencies. It's crucial that your AI assistant's instructions cover supply chain security, ensuring that suggested dependencies and build processes are secure:
 
-* **Safe Dependency Selection:** Instruct the AI to prefer well-vetted, reputable libraries when suggesting code that pulls in external packages. For example: *"Use popular, community-trusted libraries for common tasks (and avoid adding obscure dependencies if a standard library or well-known package can do the same job)".* Emphasize evaluating packages before use – as a developer would manually. [[22]](#22)
+* **Safe Dependency Selection:** Instruct the AI to prefer well-vetted, reputable libraries when suggesting code that pulls in external packages. For example: *"Use popular, community-trusted libraries for common tasks (and avoid adding obscure dependencies if a standard library or well-known package can do the same job)".* Emphasize evaluating packages before use – as a developer would manually. [[22]](#22)
 * **Use Package Managers & Lock Versions:** Your instructions should tell the AI to use proper package management. For instance: *"Always use the official package manager for the given language (npm, pip, Maven, etc.) to install libraries, rather than copying code snippets".* Also, instruct it to specify version ranges or exact versions that are known to be secure. By doing so, the AI will generate code that, for example, uses a `requirements.txt` or `package.json` entry, which aids in maintaining supply chain integrity. [[23]](#23)
 * **Stay Updated & Monitor Vulnerabilities:** Include guidance for keeping dependencies up-to-date. For example: *"When suggesting dependency versions, prefer the latest stable release and mention updating dependencies regularly to patch vulnerabilities"*. [[24]](#24) [[25]](#25)
 * **Generate Software Bill of Materials (SBOM):** Instruct the AI to create and maintain SBOMs for better visibility into your software supply chain. For example: *"Generate a Software Bill of Materials (SBOM) by using tools that support standard formats like SPDX or CycloneDX".* You can also mention provenance tracking: *"Where applicable, use in-toto attestations or similar frameworks to create verifiable records of your build and deployment processes".* This ensures comprehensive tracking of what goes into your software and provides the foundation for ongoing vulnerability monitoring and incident response. (A minimal sketch of such a record follows this hunk.) [[26]](#26)
 * **Integrity Verification:** To further secure the supply chain, you can instruct the assistant to show how to verify what it uses. For instance: *"When adding important external resources (scripts, containers, etc.), include steps to verify integrity (like checksum verification or signature validation) if applicable".* (See the checksum sketch after this hunk.) [[27]](#27)
+
 ---
 
 ## **Platform and Runtime Security Considerations**
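
To make the SBOM bullet above concrete, here is a minimal sketch, assuming Python and the standard library only, of the kind of CycloneDX-style record such an instruction asks the assistant to produce. The field set is deliberately reduced and the example component is hypothetical; in practice the instruction should point the assistant at a dedicated SBOM generator.

```python
# Illustrative sketch only (standard library): a stripped-down CycloneDX-style
# SBOM document. Real projects should generate SBOMs with dedicated tooling;
# this only shows the shape of the record the instruction asks for.
import json
import uuid
from datetime import datetime, timezone


def minimal_sbom(components: list[dict]) -> str:
    """Render a minimal CycloneDX-style JSON SBOM for the given components."""
    bom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
        "components": [
            {"type": "library", "name": c["name"], "version": c["version"]}
            for c in components
        ],
    }
    return json.dumps(bom, indent=2)


# Hypothetical component list, for illustration only.
print(minimal_sbom([{"name": "requests", "version": "2.32.3"}]))
```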
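
Likewise, the integrity-verification bullet can be illustrated with a small Python sketch: verify a downloaded artifact against a known-good SHA-256 digest before using it. The digest value and file path below are hypothetical placeholders; for Python dependencies specifically, `pip install --require-hashes -r requirements.txt` enforces the same idea at install time.

```python
# Minimal sketch of checksum verification before trusting an external artifact.
# EXPECTED_SHA256 and the file path are hypothetical placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"


def verify_sha256(path: Path, expected: str) -> None:
    """Raise RuntimeError if the file's SHA-256 digest does not match."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"checksum mismatch for {path}: got {digest}")


verify_sha256(Path("vendor/tool.tar.gz"), EXPECTED_SHA256)
```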
@@ -61,7 +62,7 @@ It's valuable to dedicate individual paragraphs to language-specific security considerations
 * **C/C++ (Memory-Unsafe Languages):** For languages without automatic memory safety, instruct the AI to be extra cautious with memory management. *"In C or C++ code, always use bounds-checked functions (e.g., `strncpy` over `strcpy`), avoid dangerous functions like `gets`, and include buffer size constants to prevent overflow. Enable compiler defenses (stack canaries, fortify source, DEP/NX) in any build configurations you suggest"*. By giving such instructions, the assistant might prefer safer standard library calls or even suggest modern C++ classes (`std::vector` instead of raw arrays) to reduce manual memory handling. It will also acknowledge when an operation is risky, possibly inserting comments like "// ensure no buffer overflow". [[37]](#37) [[38]](#38)
 * **Rust, Go, and Memory-Safe Languages:** If the project involves memory-safe languages (Rust, Go, Java, C\#, etc.), you can note that the AI should leverage their safety features. *"In Rust code, avoid using `unsafe` blocks unless absolutely necessary and document any `unsafe` usage with justification".* Memory-safe-by-default languages enforce a lot at compile time, but you should still have the AI follow the best practices of those ecosystems. For example, instruct: *"In any memory-safe language, prefer using safe library functions and types; don't circumvent their safety without cause".* If a language offers tools to verify memory access, direct the AI assistant to use them while building or testing your code. For example: *"In Go code, use the data race detector when building the application".* [[39]](#39)
 * **Python and Dynamic Languages:** Python, JavaScript, and other high-level languages manage memory for you, but come with their own security pitfalls. In your instructions, emphasize things like avoiding exec/eval with untrusted input in Python and being careful with command execution. *"For Python, do not use `exec`/`eval` on user input and prefer safe APIs (e.g., use the `subprocess` module with `shell=False` to avoid shell injection)".* Additionally, mention type checking or the use of linters: *"Follow PEP 8 and use type hints, as this can catch misuse early".* For JavaScript/TypeScript, you might add: *"When generating Node.js code, use prepared statements for database queries (just like any other language) and encode any data that goes into HTML to prevent XSS".* These instructions incorporate known best practices (like those from OWASP cheat sheets) directly into the AI's behavior. (A short `subprocess` sketch follows this hunk.) [[40]](#40)
-* **Java/C\# and Enterprise Languages:** In languages often used for large applications, you might focus on frameworks and configurations. *"For Java, when suggesting web code (e.g., using Spring), make sure to use built-in security annotations and avoid old, vulnerable libraries (e.g., use `BCryptPasswordEncoder` rather than writing a custom password hash)".* For C\#, similarly: *"Use .NET's cryptography and identity libraries instead of custom solutions".* Also instruct about managing object deserialization (both Java and C\# have had vulnerabilities in this area): *"Never suggest turning off security features like XML entity security or type checking during deserialization".* These language-specific notes guide the AI to incorporate the well-known secure patterns of each ecosystem. [[41]](#41)
+* **Java/C\# and Enterprise Languages:** In languages often used for large applications, you might focus on frameworks and configurations. *"For Java, when suggesting web code (e.g., using Spring), make sure to use built-in security annotations and avoid old, vulnerable libraries (e.g., use `BCryptPasswordEncoder` rather than writing a custom password hash)".* For C\#, similarly: *"Use .NET's cryptography and identity libraries instead of custom solutions".* Also instruct about managing object deserialization (both Java and C\# have had vulnerabilities in this area): *"Never suggest turning off security features like XML entity security or type checking during deserialization".* These language-specific notes guide the AI to incorporate the well-known secure patterns of each ecosystem. [[41]](#41)
 
 ---
 
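
As a concrete instance of the Python bullet above, here is a minimal sketch of running an external command with `subprocess` and an argument list rather than a shell string; the `grep` invocation and the file name are hypothetical stand-ins for whatever the assistant is asked to run.

```python
# Minimal sketch of the "subprocess with shell=False" advice. Passing an
# argument list (not a command string) means the user-supplied value is never
# interpreted by a shell, so metacharacters like ";" or "&&" cannot inject
# extra commands.
import subprocess


def search_log(pattern: str, log_file: str = "app.log") -> str:
    """Run grep safely: the pattern is passed as a single argv element."""
    result = subprocess.run(
        ["grep", "--", pattern, log_file],  # shell=False is the default
        capture_output=True,
        text=True,
        check=False,  # grep exits 1 when nothing matches; not an error here
    )
    return result.stdout


# Even a hostile pattern like '"; rm -rf / #' is treated as literal text.
print(search_log("ERROR"))
```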
@@ -178,6 +179,3 @@ encryption can expose passwords, personal information, and financial data... If
 <a id="46">[46]</a> "Automated vulnerability scanners or approaches like chatbots that critically question the generated source code ('source code critics') can reduce the risk" (ANSSI, BSI - [AI Coding Assistants](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/ANSSI_BSI_AI_Coding_Assistants.pdf?__blob=publicationFile&v=7))
 
 <a id="47">[47]</a> "... post-processing the output ... has a measurable impact on code quality, and is LLM-agnostic... Presumably, non-LLM static analyzers or linters may be integrated as part of the code generation procedure to provide checks along the way and avoid producing code that is visibly incorrect or dangerous" (Frontiers - [A systematic literature review on the impact of AI models on the security of code generation](https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2024.1386720/full))
-
-
-