Skip to main content

Program Scope

0DIN's GenAI Bug Bounty targets security boundaries across models and apps. If you have questions, ask us.


STARTING

$500

MEDIUM

$2,500

HIGH

$5,000

SEVERE

$15,000

Model Security Boundaries

Vulnerabilities that target the model itself — its guardrails, extraction surface, code execution, content integrity, and weights.

Weights and Layers Disclosure

$15,000

Extracting or deducing a model's learned parameters and architectural details.

Content Manipulation

$5,000

Injecting harmful or misleading elements into data the model consumes or produces.

Interpreter Jailbreak

$2,500

Exploiting a model's ability to run code or invoke tools to escape its sandbox.

Guardrail Jailbreak

$1,000

Bypassing a model's safety guardrails to produce restricted content.

Prompt Extraction

$500

Coercing a model into revealing its underlying system prompt.

In-Scope Models

Amazon logo Amazon
2 models
Nova

Prompt Extraction : N/A

Guardrail Jailbreak : Copyright violations and illicit substances are not eligible for bounty.

Rufus

Prompt Extraction : N/A

Anthropic logo Anthropic
6 models
Claude 4.5 Haiku

Prompt Extraction : N/A

Guardrail Jailbreak : Illicit substance guardrail bypasses are not accepted.

Claude 4.5 Opus

Prompt Extraction : N/A

Guardrail Jailbreak : Illicit substance guardrail bypasses are not accepted.

Claude 4.5 Sonnet

Prompt Extraction : N/A

Guardrail Jailbreak : Illicit substance guardrail bypasses are not accepted.

Claude 4.6 Opus

Prompt Extraction : N/A

Guardrail Jailbreak : Illicit substance guardrail bypasses are not accepted.

Claude 4.6 Sonnet

Prompt Extraction : N/A

Guardrail Jailbreak : Illicit substance guardrail bypasses are not accepted.

Claude for Chrome

Guardrail Jailbreak : Illicit substance guardrail bypasses are not accepted.

Apple logo Apple
1 model
Foundation Models Framework

Prompt Extraction : N/A

BigScience logo BigScience
1 model
BLOOM

Prompt Extraction : N/A

Weights and Layers Disclosure : N/A

Cohere logo Cohere
1 model
Command R

Prompt Extraction : N/A : https://docs.cohere.com/v2/docs/preambles

Guardrail Jailbreak : This model is out of scope for illicit substances.

Google logo Google
2 models
Gemini 3 Flash

Prompt Extraction : N/A

Gemini 3 Pro

Prompt Extraction : N/A

IBM logo IBM
2 models
Granite

Prompt Extraction : N/A

Guardrail Jailbreak : Copyright violations and illicit substances are not eligible for bounty.

Watson

Prompt Extraction : N/A

Meta logo Meta
2 models
LLaMa 4 Maverick

Prompt Extraction : N/A

LLaMa 4 Scout

Prompt Extraction : N/A

NVIDIA logo NVIDIA
1 model
NeMo Megatron

Prompt Extraction : N/A

Guardrail Jailbreak : Copyright violations and illicit substances are not eligible for bounty.

OpenAI logo OpenAI
10 models
DALL-E3

Prompt Extraction : N/A

Interpreter Jailbreak : N/A

GPT-5

Prompt Extraction : N/A

GPT-5.1

Prompt Extraction : N/A

GPT-5.2

Prompt Extraction : N/A

GPT-5.2 Pro

Prompt Extraction : N/A

GPT-5.4

Prompt Extraction : N/A

GPT-5 Chat

Prompt Extraction : N/A

GPT-5 mini

Prompt Extraction : N/A

GPT-5 nano

Prompt Extraction : N/A

GPT-5 Pro

Prompt Extraction : N/A

Perplexity logo Perplexity
1 model
Perplexity AI

Prompt Extraction : N/A

Salesforce logo Salesforce
1 model
Einstein

Prompt Extraction : N/A

Twitter / X logo Twitter / X
1 model
Grok 4

Prompt Extraction : N/A

Guardrail Jailbreak : This model is out of scope for illicit substances and copyright violations.

Other logo Other
1 model
Other Models

Prompt Extraction : N/A


Questions about scope? Reach out at 0din@mozilla.com