Small edits to Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md
* add link to Kiro steering documentation, sort tools alphabetically
* adopt terminology more applicable to professional software development ("You are the Pilot" -> "You are the Developer")
* reinforce that the developer is responsible for potential harms of the code; link to the IFIP Code of Ethics and Professional Conduct for the definition of "harm". IFIP is the leading multinational, apolitical organization in Information & Communications Technologies and Sciences.

Signed-off-by: Matt Wilson <[email protected]>
docs/Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md: 4 additions & 2 deletions
```diff
@@ -2,7 +2,7 @@
 by the OpenSSF Best Practices and the AI/ML Working Groups, 2025-08-01
 
-AI code assistants can significantly speed up development. However, they need guidance to produce **secure** and robust code. This guide explains how to create custom instructions (e.g. [GitHub Copilot instructions file](https://docs.github.com/en/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot), [Cline instructions file](https://docs.cline.bot/enterprise-solutions/custom-instructions), [Cursor rules](https://docs.cursor.com/context/rules), [Claude markdown](https://docs.anthropic.com/en/docs/claude-code/common-workflows#create-an-effective-claude-md-file), etc.). These instructions ensure the AI assistant accounts for application code security, supply chain safety, and platform or language-specific considerations. They also help embed a "security conscience" into the tool. In practice, this means fewer vulnerabilities making it into your codebase. Remember that these instructions should be kept concise, specific, and actionable. The goal is to influence the AI's behaviour without overwhelming it. [[wiz2025a]](#wiz2025a)
+AI code assistants can significantly speed up development. However, they need guidance to produce **secure** and robust code. This guide explains how to create custom instructions (e.g. [Claude markdown](https://docs.anthropic.com/en/docs/claude-code/common-workflows#create-an-effective-claude-md-file), [Cline instructions file](https://docs.cline.bot/enterprise-solutions/custom-instructions), [Cursor rules](https://docs.cursor.com/context/rules), [GitHub Copilot instructions file](https://docs.github.com/en/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot), [Kiro steering](https://kiro.dev/docs/steering/), etc.). These instructions ensure the AI assistant accounts for application code security, supply chain safety, and platform or language-specific considerations. They also help embed a "security conscience" into the tool. In practice, this means fewer vulnerabilities making it into your codebase. Remember that these instructions should be kept concise, specific, and actionable. The goal is to influence the AI's behaviour without overwhelming it. [[wiz2025a]](#wiz2025a)
 
 These recommendations are based on expert opinion and various recommendations in the literature. We encourage experimentation and feedback to improve these recommendations. We, as an industry, are together learning how to best use these tools.
```
```diff
@@ -12,7 +12,7 @@ These recommendations are based on expert opinion and various recommendations in
 Short on time? Here's what really matters:
 
-* **You Are the Pilot – AI is the Co-pilot:** The developer (you) remains in full control of the code. Critically evaluate and edit AI-generated code just as you would code written by a human colleague and never blindly accept suggestions. [[anssibsi2024a]](#anssibsi2024a)
+* **You Are the Developer – AI is the Apprentice:** The developer (you) remains in full control of the code, and you are responsible for harms that may be caused by the code. Critically evaluate and edit AI-generated code just as you would code written by a human colleague and never blindly accept suggestions. [[ifip2021]](#ifip2021) [[anssibsi2024a]](#anssibsi2024a)
 * **Apply Engineering Best Practices Always:** AI-generated code isn't a shortcut around engineering processes such as code reviews, testing, static analysis, documentation, and version control discipline. [[markvero2025a]](#markvero2025a)
 * **Be Security-Conscious:** Assume AI-written code can have bugs or vulnerabilities, because it often does. AI coding assistants can introduce security issues like using outdated cryptography or outdated dependencies, ignoring error handling, or leaking secrets. Check for any secrets or sensitive data in the suggested code. Make sure dependency suggestions are safe and not pulling in known vulnerable packages. [[shihchiehdai2025a]](#shihchiehdai2025a), [[anssibsi2024b]](#anssibsi2024b)
 * **Guide the AI:** AI is a powerful assistant, but it works best with your guidance. Write clear prompts that specify security requirements. Don't hesitate to modify or reject AI outputs. Direct your AI tool to build its own instructions file based on this guide. [[swaroopdora2025a]](#swaroopdora2025a) [[haoyan2025a]](#haoyan2025a)
```
```diff
@@ -152,6 +152,8 @@ To strengthen your AI assistant's guidance, you should point it toward establish
 <a id="wiz2025a">[wiz2025a]</a> "Rules files are an emerging pattern to allow you to provide standard guidance to AI coding assistants. You can use these rules to establish project, company, or developer specific context, preferences, or workflows" (Wiz Research - [Rules Files for Safer Vibe Coding](https://www.wiz.io/blog/safer-vibe-coding-rules-files))
 
+<a id="ifip2021">[ifip2021]</a> "'harm' means negative consequences, especially when those consequences are significant and unjust." (International Federation for Information Processing - [IFIP Code of Ethics and Professional Conduct](https://inria.hal.science/hal-03266380v1))
+
 <a id="anssibsi2024a">[anssibsi2024a]</a> "AI coding assistants are no substitute for experienced developers. An unrestrained use of the tools can have severe security implications." (ANSSI, BSI - [AI Coding Assistants](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/ANSSI_BSI_AI_Coding_Assistants.pdf?__blob=publicationFile&v=7))
 
 <a id="markvero2025a">[markvero2025a]</a> "on average, we could successfully execute security exploits on around half of the correct programs generated by each LLM; and in less popular backend frameworks, models further struggle to generate correct and secure applications" (Mark Vero, Niels Mündler, Victor Chibotaru, Veselin Raychev, Maximilian Baader, Nikola Jovanović, Jingxuan He, Martin Vechev - [Can LLMs Generate Correct and Secure Backends?](https://arxiv.org/abs/2502.11844))
```
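The guide being edited above describes custom instruction files that carry standing guidance to an AI coding assistant. A minimal, hypothetical sketch of such a file is shown below; the filename conventions vary by tool (see the links in the diff), and the wording here is illustrative rather than prescribed by the guide:

```markdown
# Project instructions for the AI code assistant

- Treat all generated code as untrusted until reviewed: run tests,
  linters, and static analysis before merging.
- Never hard-code secrets, tokens, or credentials; read them from
  environment variables or a secrets manager.
- Prefer maintained, well-known dependencies; pin versions and check
  them against known-vulnerability databases before adding them.
- Use current, well-vetted cryptographic libraries; never implement
  custom cryptography.
- Handle errors explicitly and validate untrusted input at trust
  boundaries.
```

Each bullet restates a theme the guide itself emphasizes (secrets, dependencies, cryptography, error handling); a real instructions file should stay similarly concise and actionable.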