This research proposal presents a methodological framework for the adversarial exploitation of hallucinated package names (slopsquatting attacks) in software supply chains. It analyzes how code-generating LLMs recommend non-existent package names (with reported hallucination rates of 5.2% to 21.7%) and proposes adversarial techniques that exploit these vulnerabilities by registering malicious packages under the hallucinated names. The framework includes threat modeling, generative models for realistic package names, validation methodologies, and defensive countermeasures for securing AI-assisted development workflows.
Key findings
Current code-generating LLMs exhibit package hallucination rates ranging from 5.2% to 21.7%, creating substantial attack opportunities.
Slopsquatting represents a paradigm shift from traditional typosquatting: it exploits the generative characteristics of LLMs rather than human typing errors.
The proposed framework leverages adversarial machine learning techniques to optimize hallucinated package names for maximum attack success probability.
Hallucinated packages exhibit systematic patterns, such as resembling realistic URL-based module paths in Go.
Implementation-ready defensive mechanisms include real-time hallucination detection and package registry monitoring systems.
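The real-time hallucination detection mechanism above can be sketched as a pre-installation check: extract the dependencies an LLM-generated snippet imports and flag any name absent from a vetted registry snapshot. This is a minimal illustration, not the proposal's implementation; the function names (`top_level_imports`, `flag_unknown_packages`) and the static allowlist are assumptions for the example, and a production system would instead query the live package registry (e.g., PyPI) and apply the proposal's monitoring heuristics.

```python
import ast

def top_level_imports(source: str) -> set[str]:
    """Collect top-level package names imported by a Python snippet."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                names.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            # Skip relative imports (level > 0); they reference local modules.
            if node.module and node.level == 0:
                names.add(node.module.split(".")[0])
    return names

def flag_unknown_packages(source: str, known_packages: set[str]) -> list[str]:
    """Return imported package names missing from the vetted snapshot."""
    return sorted(top_level_imports(source) - known_packages)

if __name__ == "__main__":
    # Hypothetical vetted snapshot; a real check would consult the registry.
    vetted = {"requests", "numpy"}
    snippet = "import requests\nfrom flask import Flask\nimport numpy.linalg"
    print(flag_unknown_packages(snippet, vetted))  # flags "flask"
```

A gate like this would run before `pip install` on any AI-suggested dependency; names it flags are candidates for the registry-monitoring stage rather than automatic rejection, since a missing name may simply postdate the snapshot.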
Limitations & open questions
Empirical validation is still pending, as this document is an iterative research design proposal (Version 1.0).
Analysis is limited to specific LLM architectures and may not generalize to all future models or programming languages.
Ethical constraints may limit real-world testing of attack methodologies in production environments.