Honest answers. No marketing fluff.
A free, open-source Python library that scans everything your AI agent reads — emails, messages, web pages, files — and catches hidden attacks before your agent sees them.
Install: pip install sunglasses
Use: 3 lines of code. That's it.
AI agents read emails, browse the web, and process files. Any of that content can contain hidden instructions that hijack your agent — make it leak your API keys, forward your data, or execute commands.
This is called prompt injection. It's the #1 vulnerability in AI agents. Traditional security (antivirus, spam filters) doesn't catch it because it's not malware — it's just text. Hidden in plain sight.
SUNGLASSES catches it in under 1 millisecond.
$0. Forever. AGPL v3 license. Nobody can close-source it or take it away. Not us, not anyone.
No. Zero. Nothing. SUNGLASSES runs 100% locally on your machine. No cloud calls, no API keys, no telemetry, no analytics, no phone-home. Your content never leaves your computer.
Don't trust us? Read the code. It's open source. Run tcpdump while it scans — you'll see exactly zero network traffic.
You need basic Python — enough to run pip install and add 3 lines to your agent code. If you're already running an AI agent, you can use SUNGLASSES.
If you've never written code, you're probably not running AI agents yet — but when you do, install SUNGLASSES first.
Every scan is logged locally on your machine. You can generate a report anytime showing: total scans, attacks blocked, categories, and what would have happened without SUNGLASSES.
sunglasses report in your terminal. Or sunglasses report --html for a visual dashboard.
We never see your report. It lives on your machine. Delete anytime: rm -rf ~/.sunglasses/
pip install --upgrade sunglasses
New attack patterns are added to the database regularly. When you upgrade, you get them all. Like antivirus signature updates — but free and open source.
Under 1 millisecond per scan. Average: 0.04ms. That's 25,000 scans per second. Your agent won't even notice it's there.
13 languages: English, Spanish, Portuguese, French, German, Russian, Turkish, Arabic, Chinese, Japanese, Korean, Hindi, Indonesian.
Attacks don't just come in English. Someone in Turkey can inject Turkish instructions into your agent. SUNGLASSES catches them.
Text: emails, messages, chat inputs, API responses
Files: documents, code, configs
Web content: scraped pages, HTML
Images: OCR text, EXIF metadata, hidden text layers
PDFs: extracted text, hidden layers
QR codes: decoded content
Audio & Video: experimental (Whisper transcription + subtitle extraction). Needs pip install sunglasses[all]
Any Python code. Plus dedicated integrations for:
• Claude Code (MCP Server)
• LangChain
• CrewAI
• Any Python app — it's just a function call
SUNGLASSES doesn't replace your existing security tools. If you already use Snyk, Lakera, NeMo Guard, or anything else — keep them.
The Adapter system reads their output and combines it with SUNGLASSES' own scan into one unified report. Like bringing 3 doctor reports to a 4th doctor who reads them all together.
If a tool outputs JSON or text, SUNGLASSES can read it.
No. And anyone who claims 100% detection is lying.
This is practical risk reduction, not a magic shield. It catches the obvious and common attacks — which is what most real-world prompt injection looks like today.
Yes. A skilled attacker who studies our open-source patterns can craft something we haven't seen. That's true of every security tool ever made.
But here's the thing: most attacks aren't sophisticated. They're copy-pasted prompt injections. "Ignore previous instructions" in 47 variations. SUNGLASSES catches all of those. That's the 90% that matters.
For the other 10%, we're building the Honeypot (trap attackers) and Threat Registry (hold providers accountable). Those come after launch.
Partly true. The core uses keyword and pattern matching — on purpose.
Why not an LLM-based detector? Because that adds latency (seconds vs milliseconds), costs money (API calls), requires cloud connectivity, and ironically makes you vulnerable to the same prompt injection you're trying to detect.
Pattern matching at 0.04ms with zero dependencies beats an LLM call at 2 seconds with an API key for 90% of real-world attacks. For the edge cases, we'll add deeper analysis layers — but the fast scan stays.
Your antivirus uses signature matching too. Nobody calls it "just regex."
Please do. Seriously. Open an issue with your bypass. We'll add it to the database and everyone gets protected.
That's literally how security works. Every firewall, antivirus, and WAF in existence gets bypassed and updated. The question isn't "can it be bypassed" — it's "how fast do you patch."
We patch in public. Check our commit history.
Yes. And antivirus signatures are reverse-engineered daily. Firewall rules are public knowledge. OWASP publishes the top 10 vulnerabilities for everyone to read.
Security through obscurity is not security. Open source means thousands of eyes find problems faster than one closed team. It means you can verify we're not stealing your data. It means the community grows the database faster than any company.
If an attacker needs to read our source code to bypass us, we've already raised the bar above "zero protection" — which is what 99% of agents have today.
You shouldn't trust anyone. That's the point. Read the code.
SUNGLASSES is 100% open source. You can see every line. It makes zero network calls. It stores nothing in the cloud. There is nothing to trust — you can verify everything yourself.
We built this because we needed it and nothing existed. The code is public. Judge it on merit, not on who wrote it.
Great tools. Use them AND SUNGLASSES. That's literally what the Adapter is for.
But here's the difference:
• Lakera: cloud API, paid, they see your data
• LLM Guard: great but focused on prompt/response, not media extraction
• NeMo Guardrails: NVIDIA's framework, heavy dependency, enterprise-focused
SUNGLASSES: local, free, zero dependencies, scans text + images + audio + PDFs + QR codes, under 1ms. Different tool, different approach. Use what fits.
53 patterns × 331 keywords × 13 languages. Day one.
Your antivirus started with a handful of signatures too. The database grows with every community contribution. By next month it could be 200 patterns. By next year, thousands.
Also — those 53 patterns catch the attacks that actually happen in the wild today. We'd rather have 53 patterns that work than 5,000 that generate false positives and get turned off.
AGPL means: if you modify SUNGLASSES and distribute it, you share your modifications.
Using SUNGLASSES as a scanning tool in your pipeline does not automatically make your entire application AGPL. But if you're building something commercial, consult a lawyer for your specific use case — we're security engineers, not lawyers.
We chose AGPL specifically so nobody can take this, close-source it, and charge for it. The community built it. The community owns it.
Yes. AI tools helped write the code and do security research. A human designed the product, tested it, and decided what ships.
A locksmith uses tools to make locks. A security company uses cameras to secure buildings. An AI security tool built with AI assistance is the most logical thing in the world.
The code is open source. Judge the output, not the tools used to build it.
The joke is that 99% of AI agents have zero protection. That's the April Fools punchline nobody laughs at.
We launched on April 1 on purpose. Don't let your agents get FOOLED.
The code is real. The tests pass. pip install sunglasses and try it yourself.
Open a GitHub Issue with the text that got through. We'll add a pattern and push an update. Everyone benefits.
Don't have GitHub? Email contact@sunglasses.dev with what happened.
Yes. Patterns are JSON files with keywords and categories. Fork the repo, add your pattern, open a PR. We review and merge.
You can also load custom patterns locally without contributing them: pass extra_patterns=[...] to the engine.
Testers: Install it, try to bypass it, report what you find
Security researchers: Adversarial payloads, multi-language evasion, media-based attacks
Developers: Framework integrations, performance optimization, new extractors
Everyone: Tell people who run AI agents about this. The more people using it, the stronger the database gets.