Program Scope
0DIN's GenAI Bug Bounty targets security boundaries across models and the applications built on them.
Reward tiers:
STARTING: $500
MEDIUM: $2,500
HIGH: $5,000
SEVERE: $15,000
Model Security Boundaries
Vulnerabilities that target the model itself — its guardrails, extraction surface, code execution, content integrity, and weights.
Weights and Layers Disclosure ($15,000): Extracting or deducing a model's learned parameters and architectural details.
Content Manipulation ($5,000): Injecting harmful or misleading elements into data the model consumes or produces.
Interpreter Jailbreak ($2,500): Exploiting a model's ability to run code or invoke tools to escape its sandbox.
Guardrail Jailbreak ($1,000): Bypassing a model's safety guardrails to produce restricted content.
In-Scope Models
Amazon (2 models)
Prompt Extraction: N/A
Guardrail Jailbreak: Copyright violations and illicit substances are not eligible for bounty.
Prompt Extraction: N/A
Anthropic (6 models)
Prompt Extraction: N/A
Guardrail Jailbreak: Illicit substance guardrail bypasses are not accepted (all models).
Apple (1 model)
Prompt Extraction: N/A
BigScience (1 model)
Prompt Extraction: N/A
Weights and Layers Disclosure: N/A
Cohere (1 model)
Prompt Extraction: N/A (see https://docs.cohere.com/v2/docs/preambles)
Guardrail Jailbreak: This model is out of scope for illicit substances.
Google (2 models)
Prompt Extraction: N/A (both models)
IBM (2 models)
Prompt Extraction: N/A
Guardrail Jailbreak: Copyright violations and illicit substances are not eligible for bounty.
Prompt Extraction: N/A
Meta (2 models)
Prompt Extraction: N/A (both models)
NVIDIA (1 model)
Prompt Extraction: N/A
Guardrail Jailbreak: Copyright violations and illicit substances are not eligible for bounty.
OpenAI (10 models)
Prompt Extraction: N/A (all models)
Interpreter Jailbreak: N/A
Perplexity (1 model)
Prompt Extraction: N/A
Salesforce (1 model)
Prompt Extraction: N/A
Twitter / X (1 model)
Prompt Extraction: N/A
Guardrail Jailbreak: This model is out of scope for illicit substances and copyright violations.
Other (1 model)
Prompt Extraction: N/A
Questions about scope? Reach out at 0din@mozilla.com