PromptGuard
Description:
PromptGuard - AI Prompt Security is an AI-powered, drop-in "firewall" for LLMs. It inspects and sanitizes prompts and surrounding context in real time, using heuristics, ML classifiers, and LLM-based detectors to block prompt injection, redact PII, and prevent data leaks before requests reach your model. It adds minimal latency (<40ms), provides tunable policies, logging, and analytics, supports major providers (OpenAI, Anthropic, Google/Gemini, Groq, Azure), and is aimed at product, security, and enterprise teams that need production-grade prompt governance and compliance.
A real-time prompt firewall that sanitizes inputs, blocks prompt injection, redacts PII, and prevents data leaks.
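To make the firewall idea concrete: the description above amounts to a pre-flight check that screens each prompt before it reaches the model, blocking on suspected injection and redacting PII otherwise. PromptGuard's actual detectors (ML classifiers and LLM-based judges) are not public, so the sketch below substitutes simple regex heuristics and is purely illustrative; the pattern lists and the `screen_prompt` function are assumptions, not PromptGuard's API.

```python
import re

# Hypothetical injection phrases a heuristic layer might flag.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
]

# Hypothetical PII patterns to redact before forwarding the prompt.
PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def screen_prompt(prompt: str) -> dict:
    """Block on suspected injection; otherwise redact PII and allow."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return {"action": "block", "reason": "prompt_injection"}
    sanitized = prompt
    for label, pattern in PII_PATTERNS.items():
        # Replace each PII match with a placeholder like [EMAIL].
        sanitized = re.sub(pattern, f"[{label}]", sanitized)
    return {"action": "allow", "sanitized": sanitized}
```

For example, a prompt containing "ignore previous instructions" would be blocked, while "Contact me at jane@example.com" would be allowed with the address replaced by "[EMAIL]". A production firewall would layer ML classifiers and LLM-based detectors on top of heuristics like these to catch paraphrased attacks.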
Note: This is a Google Colab, meaning it is not actually software as a service. Instead, it is pre-written code that you can run without needing to know how to code.
Note: This is a GitHub repository, meaning it is code that someone created and made publicly available for anyone to use. These tools may require some knowledge of coding.
Pricing Model:
Freemium
This tool offers a free trial!
Matt's Pick - This tool was selected as one of Matt's Picks!
Note: Matt's Picks are tools that Matt Wolfe has personally reviewed in depth and found to be either best in class or groundbreaking. This does not mean there aren't better tools available or that the alternatives are worse; it means either that Matt hasn't reviewed the other tools yet or that this was his favorite among similar tools.
Check out PromptGuard: A real-time prompt firewall that sanitizes inputs, blocks prompt injection, redacts PII, and prevents data leaks.