Update Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md #954

Open · wants to merge 2 commits into base `main`

@@ -2,7 +2,7 @@

by the OpenSSF Best Practices and the AI/ML Working Groups, 2025-08-01

-AI code assistants can significantly speed up development. However, they need guidance to produce **secure** and robust code. This guide explains how to create custom instructions (e.g. [GitHub Copilot instructions file](https://docs.github.com/en/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot), [Cline instructions file](https://docs.cline.bot/enterprise-solutions/custom-instructions), [Cursor rules](https://docs.cursor.com/context/rules), [Claude markdown](https://docs.anthropic.com/en/docs/claude-code/common-workflows#create-an-effective-claude-md-file), etc.). These instructions ensure the AI assistant accounts for application code security, supply chain safety, and platform or language-specific considerations. They also help embed a "security conscience" into the tool. In practice, this means fewer vulnerabilities making it into your codebase. Remember that these instructions should be kept concise, specific, and actionable. The goal is to influence the AI's behaviour without overwhelming it. [[wiz2025a]](#wiz2025a)
+AI code assistants can significantly speed up development. However, they need guidance to produce **secure** and robust code. This guide explains how to create custom instructions (e.g. [Claude markdown](https://docs.anthropic.com/en/docs/claude-code/common-workflows#create-an-effective-claude-md-file), [Cline instructions file](https://docs.cline.bot/enterprise-solutions/custom-instructions), [Cursor rules](https://docs.cursor.com/context/rules), [GitHub Copilot instructions file](https://docs.github.com/en/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot), [Kiro steering](https://kiro.dev/docs/steering/), etc.). These instructions ensure the AI assistant accounts for application code security, supply chain safety, and platform or language-specific considerations. They also help embed a "security conscience" into the tool. In practice, this means fewer vulnerabilities making it into your codebase. Remember that these instructions should be kept concise, specific, and actionable. The goal is to influence the AI's behaviour without overwhelming it. [[wiz2025a]](#wiz2025a)

These recommendations are based on expert opinion and various recommendations in the literature. We encourage experimentation and feedback to improve these recommendations. We, as an industry, are together learning how to best use these tools.
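
As a concrete illustration of the instruction-file formats linked above, here is a minimal sketch of a repository instructions file. The path follows GitHub Copilot's repository-instructions convention; the individual rules are illustrative examples written for this sketch, not text taken from the guide:

```markdown
<!-- .github/copilot-instructions.md (illustrative sketch) -->
# Instructions for AI code assistants

- Validate and sanitize all external input; use parameterized
  queries for any SQL.
- Never hard-code secrets, tokens, or credentials; read them from
  environment variables or the project's secret manager.
- Suggest only actively maintained dependencies, pinned to specific
  versions and free of known vulnerabilities.
- Include error handling and unit tests with every generated function.
```

Keeping the file this short mirrors the guide's own advice that instructions stay concise, specific, and actionable.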

@@ -12,7 +12,7 @@ These recommendations are based on expert opinion and various recommendations in

Short on time? Here's what really matters:

-* **You Are the Pilot – AI is the Co-pilot:** The developer (you) remains in full control of the code. Critically evaluate and edit AI-generated code just as you would code written by a human colleague and never blindly accept suggestions. [[anssibsi2024a]](#anssibsi2024a)
+* **You Are the Developer – AI is the Assistant:** The developer (you) remains in full control of the code, and you are responsible for any harms that may be caused by the code. Critically evaluate and edit AI-generated code just as you would code written by a human colleague and never blindly accept suggestions in situations where that could eventually cause harm. [[ifip2021]](#ifip2021) [[anssibsi2024a]](#anssibsi2024a)
* **Apply Engineering Best Practices Always:** AI-generated code isn't a shortcut around engineering processes such as code reviews, testing, static analysis, documentation, and version control discipline. [[markvero2025a]](#markvero2025a)
* **Be Security-Conscious:** Assume AI-written code can have bugs or vulnerabilities, because it often does. AI coding assistants can introduce security issues like using outdated cryptography or outdated dependencies, ignoring error handling, or leaking secrets. Check for any secrets or sensitive data in the suggested code. Make sure dependency suggestions are safe and not pulling in known vulnerable packages. [[shihchiehdai2025a]](#shihchiehdai2025a), [[anssibsi2024b]](#anssibsi2024b)
* **Guide the AI:** AI is a powerful assistant, but it works best with your guidance. Write clear prompts that specify security requirements. Don't hesitate to modify or reject AI outputs. Direct your AI tool to build its own instructions file based on this guide. [[swaroopdora2025a]](#swaroopdora2025a) [[haoyan2025a]](#haoyan2025a)
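
To make the "Be Security-Conscious" bullet above concrete, here is a hedged before-and-after sketch in Python. The endpoint, environment variable name, and helper functions are invented for illustration and do not come from the guide:

```python
import hashlib
import os

import requests

# Patterns an assistant might plausibly suggest, kept here as comments:
#   API_KEY = "sk-live-123456"              # hard-coded secret
#   digest = hashlib.md5(data).hexdigest()  # outdated algorithm
#   resp = requests.get(url)                # no timeout, errors ignored

# Reviewed replacements:
API_KEY = os.environ["PAYMENTS_API_KEY"]  # raises KeyError early if unset


def fingerprint(data: bytes) -> str:
    """Hash with SHA-256 rather than the outdated MD5."""
    return hashlib.sha256(data).hexdigest()


def fetch_invoices() -> list[dict]:
    """Call a (hypothetical) API with a timeout and explicit error handling."""
    resp = requests.get(
        "https://api.example.com/invoices",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,  # assistants often omit timeouts
    )
    resp.raise_for_status()  # surface failures instead of ignoring them
    return resp.json()
```
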
@@ -152,6 +152,8 @@ To strengthen your AI assistant's guidance, you should point it toward establish

<a id="wiz2025a">[wiz2025a]</a> "Rules files are an emerging pattern to allow you to provide standard guidance to AI coding assistants. You can use these rules to establish project, company, or developer specific context, preferences, or workflows" (Wiz Research - [Rules Files for Safer Vibe Coding](https://www.wiz.io/blog/safer-vibe-coding-rules-files))

<a id="ifip2021">[ifip2021]</a> "'harm' means negative consequences, especially when those consequences are significant and unjust." (International Federation for Information Processing - [IFIP Code of Ethics and Professional Conduct](https://inria.hal.science/hal-03266380v1))

<a id="anssibsi2024a">[anssibsi2024a]</a> "AI coding assistants are no substitute for experienced developers. An unrestrained use of the tools can have severe security implications." (ANSSI, BSI - [AI Coding Assistants](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/ANSSI_BSI_AI_Coding_Assistants.pdf?__blob=publicationFile&v=7))

<a id="markvero2025a">[markvero2025a]</a> "on average, we could successfully execute security exploits on around half of the correct programs generated by each LLM; and in less popular backend frameworks, models further struggle to generate correct and secure applications (Mark Vero, Niels Mündler, Victor Chibotaru, Veselin Raychev, Maximilian Baader, Nikola Jovanović, Jingxuan He, Martin Vechev - [Can LLMs Generate Correct and Secure Backends?](https://arxiv.org/abs/2502.11844))"