The Attack Surface of Enterprise LLM Deployments: Prompt Injection, Data Poisoning, and Model Theft
APA Citation
Romano, V. et al. (2024). The Attack Surface of Enterprise LLM Deployments: Prompt Injection, Data Poisoning, and Model Theft. *Network and Distributed System Security Symposium*. https://doi.org/10.14722/ndss.2024.25678
View original paper →What Did This Cybersecurity Research Find?
This cybersecurity AI security study cataloged and tested attack vectors against 30 enterprise LLM deployments (RAG systems, customer-facing chatbots, internal knowledge assistants). Cybersecurity teams defending LLM deployments face three primary attack classes: prompt injection succeeded against 78% of tested systems, training data extraction succeeded in 23%, and indirect prompt injection via poisoned documents affected 45% of RAG-based deployments.
Key Findings
- 1Direct prompt injection succeeded against 78% of tested enterprise LLM deployments
- 2Indirect prompt injection via poisoned RAG documents affected 45% of systems
- 3Training data extraction succeeded in 23% of fine-tuned model deployments
- 4Input validation and output filtering reduced successful prompt injection to 12%
- 5No single defense eliminated all attack vectors; layered controls were required
How Does This Apply to Cybersecurity Careers?
AI security is a rapidly growing specialization within cybersecurity. Application security engineers need to add LLM-specific testing to their assessment methodology.
Who Should Read This?
Frequently Asked Questions
What did this cybersecurity research find?
This cybersecurity AI security study cataloged and tested attack vectors against 30 enterprise LLM deployments (RAG systems, customer-facing chatbots, internal knowledge assistants). Cybersecurity teams defending LLM deployments face three primary attack classes: prompt injection succeeded against 78% of tested systems, training data extraction succeeded in 23%, and indirect prompt injection via poisoned documents affected 45% of RAG-based deployments.
How is this research relevant to cybersecurity careers?
AI security is a rapidly growing specialization within cybersecurity. Application security engineers need to add LLM-specific testing to their assessment methodology.
Where was this cybersecurity research published?
This study was published in Network and Distributed System Security Symposium in 2024. The DOI is 10.14722/ndss.2024.25678. Access the original paper through the publisher link above.
Explore Related Cybersecurity Resources
Was this page helpful?
Get cybersecurity career insights delivered weekly
Join cybersecurity professionals receiving weekly intelligence on threats, job market trends, salary data, and career growth strategies.
Get Cybersecurity Career Intelligence
Weekly insights on threats, job trends, and career growth.
Unsubscribe anytime. More options