
Commit 4c8a62b

Update docs/Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md
Co-authored-by: David A. Wheeler <[email protected]>
Signed-off-by: Avishay Balter <[email protected]>
1 parent bf8f93c commit 4c8a62b

File tree

1 file changed (+8, -0 lines)


docs/Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md

Lines changed: 8 additions & 0 deletions
@@ -179,3 +179,11 @@ encryption can expose passwords, personal information, and financial data... If
 <a id="46">[46]</a> "Automated vulnerability scanners or approaches like chatbots that critically question the generated source code ('source code critics') can reduce the risk" (ANSSI, BSI - [AI Coding Assistants](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/ANSSI_BSI_AI_Coding_Assistants.pdf?__blob=publicationFile&v=7))
 
 <a id="47">[47]</a> "... post-processing the output ... has a measurable impact on code quality, and is LLM-agnostic... Presumably, non-LLM static analyzers or linters may be integrated as part of the code generation procedure to provide checks along the way and avoid producing code that is visibly incorrect or dangerous" (Frontiers - [A systematic literature review on the impact of AI models on the security of code generation](https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2024.1386720/full))
+
+<a id="48">[48]</a> "These 30 tests generated a total of 2.23 million packages in response to our prompts, of which 440,445 (19.7%) were determined to be hallucinations, including 205,474 unique non-existent packages (i.e. packages that do not exist in PyPI or npm repositories and were distinct entries in the hallucination count, irrespective of their multiple occurrences)" (Spracklen - [We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs](https://arxiv.org/abs/2406.10279))
+
+<a id="49">[49]</a> "A new class of supply chain attacks named 'slopsquatting' has emerged from the increased use of generative AI tools for coding and the model's tendency to 'hallucinate' non-existent package names. The term slopsquatting was coined by security researcher [Seth Larson](https://mastodon.social/@andrewnez/114302875075999244) as a spin on typosquatting, an attack method that tricks developers into installing malicious packages by using names that closely resemble popular libraries. Unlike typosquatting, slopsquatting doesn't rely on misspellings. Instead, threat actors could create malicious packages on indexes like PyPI and npm named after ones commonly made up by AI models in coding examples" (Toulas - [AI-hallucinated code dependencies become new supply chain risk](https://www.bleepingcomputer.com/news/security/ai-hallucinated-code-dependencies-become-new-supply-chain-risk/))
+
+<a id="50">[50]</a> "3 of the 4 models ... proved to be highly adept in detecting their own hallucinations with detection accuracy above 75%. Table 2 displays the recall and precision values for this test, with similarly strong performance across the 3 proficient models. This phenomenon implies that each model’s specific error patterns are detectable by the same mechanisms that generate them, suggesting an inherent self-regulatory capability. The indication that these models have an implicit understanding of their own generative patterns that could be leveraged for self-improvement is an important finding for developing mitigation strategies." (Spracklen - [We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs](https://arxiv.org/abs/2406.10279))
+
+<a id="51">[51]</a> "Across all the examined LLMs, the persona/memetic proxy approach has led to the highest average number of security weaknesses among all the evaluated prompting techniques excluding the baseline prompt that does not include any security specifications." (Tony - [Prompting Techniques for Secure Code Generation: A Systematic Investigation](https://arxiv.org/abs/2407.07064v2))
