@@ -5455,7 +5455,10 @@ A significant amount of AI security work today focuses on ML and LLMs;
 we will take the same focus here.
 
 First, let's discuss using AI systems to help write code.
-Unfortunately, AI-generated code often contains vulnerabilities.
+It's *vital* to *not* blindly trust AI systems to write code.
+Instead, when using them, actively engage with the tools, rephrase questions,
+and carefully check their results.
+This is because AI-generated code often contains vulnerabilities.
 This should be expected; such systems are typically trained on
 code with vulnerabilities and they don't understand their context of use.
 One study found that participants using an AI assistant wrote significantly
@@ -5466,13 +5469,12 @@ Another found 35.8% of code snippets contained vulnerabilities
 AI-generated code will probably get better over time,
 but perfection is unlikely.
 Even worse, LLM systems often hallucinate package names that don't exist.
-Attackers can then perform *slopsquatting*, that is,
-they create malicious packages with those LLM-hallucinated fake names
+Attackers sometimes perform *slopsquatting* attacks, that is,
+attackers create malicious packages with those LLM-hallucinated fake names
 as a dangerous trap for the unwary
 [Gooding2025](https://socket.dev/blog/slopsquatting-how-ai-hallucinations-are-fueling-a-new-class-of-supply-chain-attacks).
-It's *vital* to *not* blindly trust AI systems to write code.
-Instead, when using them actively engage with the tools, rephrase questions,
-and carefully check their results.
+Again, don't blindly trust AI systems to write code; take steps such as
+carefully checking their results.
 
 Now let's discuss how to build more secure software systems that *use* ML.
 Building ML systems often involves several processes, namely
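The "carefully check their results" advice above applies to dependencies as well as code: one slopsquatting mitigation is to verify LLM-suggested package names against a vetted list before installing anything. A minimal sketch follows; the function name, the allowlist, and the misspelled package `requessts` are all illustrative assumptions, not from the text (in practice you would also consult the package registry itself and review a package's age and maintainers).

```python
import re

def unvetted_packages(requested, allowlist):
    """Return the requested package names that are NOT on a vetted allowlist.

    Names are normalized per PEP 503 (lowercase; runs of '-', '_', and '.'
    are equivalent), so 'Requests' and 'requests' compare equal.
    """
    def norm(name):
        return re.sub(r"[-_.]+", "-", name).lower()

    vetted = {norm(p) for p in allowlist}
    return [p for p in requested if norm(p) not in vetted]

# Toy example: "requessts" stands in for a plausible LLM hallucination.
print(unvetted_packages(["Requests", "requessts"],
                        allowlist=["requests", "numpy"]))  # ['requessts']
```

Any name this flags should be investigated by hand rather than passed straight to `pip install`; a static allowlist only catches names you have never vetted, so it complements, not replaces, checking the registry.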