OWASP Top 10 risks for LLMs (2025 update)

Key takeaways

  • The 2025 OWASP Top 10 for LLMs provides the latest view of the most critical risks in large language model applications.
  • New categories such as excessive agency, system prompt leakage, and misinformation reflect real-world deployment lessons.
  • Mitigation requires a mix of technical measures (validation, rate limiting, provenance checks) and governance (policies, oversight, supply chain assurance).
  • Security programs that encompass AI applications must adapt to LLM-specific risks rather than relying only on traditional application security practices.
  • Invicti supports these efforts with proof-based scanning and dedicated LLM application security checks, including prompt injection, insecure output handling, and system prompt leakage.

Introduction: Modern AI security needs modern threat models

As organizations adopt large language model (LLM) applications at scale, security risks are evolving just as quickly. The OWASP Foundation’s Top 10 for LLM Applications (part of the OWASP GenAI Security project) offers a structured way to understand and mitigate these threats. First published in 2023, the list has been updated for 2025 to reflect real-world incidents, changes in deployment practices, and emerging attack techniques in what could be the fastest-moving space in the history of cybersecurity.

For enterprises, these categories serve as both a warning and a guide. They highlight how LLM security is about far more than just protecting the models themselves – you also need to test and secure their entire surrounding ecosystem, from training pipelines to plugins, deployment environments, and host applications. The updated list also emphasizes socio-technical risks such as excessive agency and misinformation.

OWASP Top 10 for LLMs

  1. LLM01:2025 Prompt Injection
  2. LLM02:2025 Sensitive Information Disclosure
  3. LLM03:2025 Supply Chain
  4. LLM04:2025 Data and Model Poisoning
  5. LLM05:2025 Improper Output Handling
  6. LLM06:2025 Excessive Agency
  7. LLM07:2025 System Prompt Leakage
  8. LLM08:2025 Vector and Embedding Weaknesses
  9. LLM09:2025 Misinformation
  10. LLM10:2025 Unbounded Consumption

What’s new in 2025 vs earlier iterations

The 2025 edition builds on the original list with new categories that reflect emerging attack techniques, lessons from real-world deployments, and the growing use of LLMs in production environments. It also streamlines and broadens earlier entries to focus on the risks most relevant to today’s applications, while consolidating categories that overlapped in practice.

Here’s how the latest update compares to the initial version at a glance:

  • Prompt Injection remains the #1 risk.
  • New in 2025: Excessive Agency, System Prompt Leakage, Vector/Embedding Weaknesses, Misinformation, Unbounded Consumption.
  • Rank changes: Sensitive Information Disclosure (up from #6 to #2), Supply Chain (broadened and up from #5 to #3), Output Handling (down from #2 to #5).
  • Broadened scope: Training Data Poisoning has evolved into Data and Model Poisoning.
  • Folded into broader categories: Insecure Plugin Design, Overreliance, Model Theft, Model Denial of Service.

The OWASP Top 10 for large language model applications in detail (2025 edition)

LLM01:2025 Prompt Injection

Definition Manipulating LLM inputs to override instructions, extract data, or trigger harmful actions
How it happens Direct user prompts, hidden instructions in documents, or indirect injection via external sources
Potential consequences Data leakage, bypass of safety controls, execution of malicious tasks and code
Mitigation strategies Input sanitization, layered validation, sandboxing, user training, continuous red-teaming

Invicti includes checks for LLM prompt injection and related downstream vulnerabilities such as LLM server-side request forgery (SSRF) and LLM command injection, simulating adversarial inputs to detect exploitable conditions.

Want to learn more about prompt injection? Get the Invicti e-book: Prompt Injection Attacks on Applications That Use LLMs

LLM02:2025 Sensitive Information Disclosure

Definition LLMs exposing private, regulated, or confidential information
How it happens Memorization of training data, crafted queries
Potential consequences Data loss, compliance violations, reputational damage
Mitigation strategies Data minimization, access controls, monitoring outputs, differential privacy

LLM03:2025 Supply Chain

Definition Risks in third-party, open-source, or upstream LLM components and services
How it happens Malicious dependencies, compromised APIs, unverified model sources
Potential consequences Backdoors, poisoned data, unauthorized access
Mitigation strategies Vet dependencies, verify provenance, apply supply chain security controls

LLM04:2025 Data and Model Poisoning

Definition Malicious or manipulated data corrupting training or fine-tuning
How it happens Insertion of adversarial or backdoor data
Potential consequences Unsafe outputs, embedded exploits, biased behavior
Mitigation strategies Provenance checks, anomaly detection, continuous evaluation

LLM05:2025 Improper Output Handling

Definition Passing untrusted LLM outputs directly to downstream systems
How it happens No validation or sandboxing of responses
Potential consequences Injection attacks, workflow manipulation, code execution
Mitigation strategies Output validation, execution sandboxing, monitoring

Invicti detects insecure output handling by identifying unsafe model responses that could impact downstream applications.

LLM06:2025 Excessive Agency

Definition Granting LLMs too much control over sensitive actions or tools
How it happens Poorly designed integrations, unchecked tool access
Potential consequences Unauthorized operations, privilege escalation
Mitigation strategies Principle of least privilege, usage monitoring, guardrails

Invicti highlights tool usage exposure in LLM-integrated applications.

LLM07:2025 System Prompt Leakage

Definition Exposure of hidden instructions or system prompts
How it happens Adversarial queries, side-channel analysis
Potential consequences Bypass of guardrails, disclosure of sensitive logic
Mitigation strategies Masking, randomized prompts, monitoring outputs

Invicti detects LLM system prompt leakage during dynamic testing.

LLM08:2025 Vector and Embedding Weaknesses

Definition Exploiting weaknesses in embeddings or vector databases
How it happens Malicious embeddings, data pollution, injection in retrieval-augmented generation
Potential consequences Biased or manipulated responses, security bypass
Mitigation strategies Validate embeddings, sanitize inputs, secure vector stores

LLM09:2025 Misinformation

Definition Generation or amplification of false or misleading content
How it happens Prompt manipulation, reliance on low-quality data
Potential consequences Disinformation, compliance failures, reputational harm
Mitigation strategies Human review, fact-checking, monitoring for misuse

LLM10:2025 Unbounded Consumption

Definition Resource exhaustion or uncontrolled cost growth from LLM use
How it happens Flooding requests, complex prompts, recursive loops
Potential consequences Denial of service, cost spikes, degraded performance
Mitigation strategies Rate limiting, autoscaling protections, cost monitoring

Business impacts and risk management outcomes

LLM-related risks extend beyond technical security flaws to directly affect business outcomes. Here’s how the major LLM risks map to business impacts:

  • Prompt injection and improper output handling can expose sensitive data or trigger unauthorized actions, creating regulatory and financial liabilities. 
  • Sensitive information disclosure or supply chain weaknesses can compromise intellectual property and erode customer trust. 
  • Data and model poisoning can distort outputs and weaken competitive advantage, while unbounded consumption can inflate costs or disrupt availability. 
  • Socio-technical risks such as excessive agency and misinformation can lead to reputational harm and compliance failures.

The 2025 OWASP list underscores that managing LLM risks requires aligning technical defenses with enterprise priorities: safeguarding data, ensuring resilience, controlling costs, and maintaining confidence in AI-driven services.

Compliance landscape and regulatory considerations

LLM-related risks also intersect with existing compliance requirements. Data disclosure issues map directly to GDPR, HIPAA, and CCPA obligations, while broader systemic risks align with frameworks such as the EU AI Act, NIST AI RMF, and ISO standards. For organizations in regulated industries, securing LLM applications is not just best practice but a legal and regulatory necessity.

Security and governance strategies to mitigate LLM risks

Enterprises should approach LLM security as an integral part of their broader application security programs. Beyond individual security vulnerabilities, CISOs need clear and actionable steps that combine technical defenses with governance practices.

Key LLM security strategies for security professionals:

  • Integrate automated LLM detection and vulnerability scanning into broader AppSec programs to keep pace with rapid adoption.
  • Establish secure data pipelines by applying provenance checks, vetting third-party sources, and monitoring for anomalies.
  • Enforce rigorous input and output validation to prevent injection and leakage, and use sandboxing for untrusted model responses.
  • Harden deployment environments by securing APIs, containers, and CI/CD pipelines with least-privilege access and secrets management.
  • Strengthen identity and access management with strong authentication, authorization, and role-based controls across all LLM components.
  • Build governance frameworks with policies, accountability structures, and mandatory staff training on AI risk awareness.
  • Implement continuous monitoring, auditing, and red-teaming to stress-test defenses and simulate real-world attacks.

Conclusion: Applying the 2025 OWASP LLM Top 10 in your organization

The OWASP Top 10 for LLM Applications (2025) is a vital resource for organizations adopting generative AI. By framing risks across technical, operational, and socio-technical dimensions, it provides a structured guide to securing LLM applications. As with web and API security, success depends on combining accurate technical testing with governance and oversight.

Invicti’s proof-based scanning and LLM-specific security checks support this by validating real risks and reducing noise, helping enterprises strengthen security across both traditional applications and LLM-connected environments.

Next steps to take

  • See all the LLM security checks available in Invicti DAST
  • Get a demo of LLM detection and security scanning on the Invicti Platform
  • Make LLM security a systematic part of your application security program

FAQs about the OWASP Top 10 for LLMs

What exactly is the OWASP Top 10 for LLM Applications (2025)?

It’s OWASP’s updated list of the most critical security risks for LLM-based applications, covering emerging threats such as prompt injection, system prompt leakage, excessive agency, and misinformation.

How is this different from the traditional OWASP Top 10 for web apps?

The main OWASP top 10 highlights web application security risks like injection vulnerabilities, XSS, or insecure design. The LLM Top 10 initiative focuses on threats unique to AI systems, including prompt injection, data and model poisoning, improper output handling, and supply chain risks.

What are the highest priority threats among the Top 10?

While all are significant, prompt injection has been the #1 risk since the list was first compiled. Other crucial risk categories include sensitive information disclosure, supply chain risks, improper output handling, and excessive agency.

How can organizations start mitigating these LLM risks today?

Start with automated LLM detection and security scanning to identify exploitable vulnerabilities early. Build on this by applying threat modeling, enforcing input and output validation, using least privilege for integrations, vetting data and upstream sources, and establishing strong governance and oversight.

Why do executives need to care about these risks?

Because these risks go beyond technical flaws to include compliance, legal, reputational, regulatory, and business continuity impacts, making them a critical issue for enterprise leadership.

How can Invicti help with LLM security?

Invicti supports organizations with proof-based scanning and dedicated LLM security checks, including prompt injection, insecure output handling, system prompt leakage, and tool usage exposure. This helps teams validate real risks and strengthen security across AI-driven applications.

The post OWASP Top 10 risks for LLMs (2025 update) appeared first on Invicti.

Post a Comment

0 Comments