@@ -5455,7 +5455,10 @@ A significant amount of AI security work today focuses on ML and LLMs;
 we will take the same focus here.
 
 First, let's discuss using AI systems to help write code.
-Unfortunately, AI-generated code often contains vulnerabilities.
+It's *vital* to *not* blindly trust AI systems to write code.
+Instead, when using them, actively engage with the tools, rephrase questions,
+and carefully check their results.
+This is because AI-generated code often contains vulnerabilities.
 This should be expected; such systems are typically trained on
 code with vulnerabilities and they don't understand their context of use.
 One study found that participants using an AI assistant wrote significantly
@@ -5466,13 +5469,12 @@ Another found 35.8% of code snippets contained vulnerabilities
 AI-generated code will probably get better over time,
 but perfection is unlikely.
 Even worse, LLM systems often hallucinate package names that don't exist.
-Attackers can then perform *slopsquatting*, that is,
-they create malicious packages with those LLM-hallucinated fake names
+Attackers sometimes perform *slopsquatting* attacks, that is,
+attackers create malicious packages with those LLM-hallucinated fake names
 as a dangerous trap for the unwary
 [Gooding2025](https://socket.dev/blog/slopsquatting-how-ai-hallucinations-are-fueling-a-new-class-of-supply-chain-attacks).
-It's *vital* to *not* blindly trust AI systems to write code.
-Instead, when using them actively engage with the tools, rephrase questions,
-and carefully check their results.
+Again, don't blindly trust AI systems to write code; take steps such as
+carefully checking their results.
 
 Now let's discuss how to build more secure software systems that *use* ML.
 Building ML systems often involve several processes, namely