
prompt-security

Here are 26 public repositories matching this topic...

MINOTAUR: a prompt-security challenge hosted on FlowGPT. A hardened system prompt for a GPT agent, built to resist prompt leaks, jailbreaks, and other attempts to extract or override its instructions.

  • Updated Mar 27, 2024

Universal AI security framework: protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM.

  • Updated Mar 15, 2026
  • Python
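A framework like this typically sits in front of the model call and screens user input before it reaches the LLM. A minimal sketch of that screening step, using a few hypothetical heuristic patterns (not this framework's actual API or ruleset):

```python
import re

# Hypothetical heuristic patterns; a real framework would use far more
# sophisticated detection (classifiers, canary tokens, semantic checks).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"you are now (in )?developer mode",
]

def screen_prompt(user_input: str) -> list[str]:
    """Return the list of suspicious patterns matched in user_input."""
    lowered = user_input.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

def safe_to_forward(user_input: str) -> bool:
    """Gate the LLM call on the screening result."""
    return not screen_prompt(user_input)
```

Heuristic filters like this are only a first layer; they are easy to bypass with paraphrasing, which is why frameworks combine them with model-based detection.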

Universal Prompt Security Standard (UPSS): a framework for externalizing, securing, and managing LLM prompts in genAI systems, inspired by and extending OWASP OPSS concepts, applicable to any organization or project.

  • Updated Mar 2, 2026
  • Python

Static analysis CLI that scans codebases for LLM prompt-injection, data-exfiltration, jailbreak, and unsafe agent/tool vulnerabilities. Runs fully offline, integrates with CI/CD, and outputs console, JSON, and SARIF reports.

  • Updated Mar 16, 2026
  • TypeScript
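A scanner of this kind walks the codebase, matches suspicious constructs against a ruleset, and serializes findings into a machine-readable report. A toy sketch of that pipeline in Python (the rule patterns and report shape are illustrative, not this tool's actual rules or its SARIF schema):

```python
import json
import re
from pathlib import Path

# Illustrative rules only; a real scanner ships a much larger ruleset.
RULES = {
    "unsanitized-user-input": re.compile(r"f[\"'].*\{user_input\}"),
    "hardcoded-jailbreak": re.compile(r"ignore previous instructions", re.IGNORECASE),
}

def scan_file(path: Path) -> list[dict]:
    """Match every rule against every line of one source file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for rule_id, pattern in RULES.items():
            if pattern.search(line):
                findings.append({"rule": rule_id, "file": str(path), "line": lineno})
    return findings

def scan_dir(root: str) -> str:
    """Scan all .py files under root; return findings as a JSON report."""
    findings = []
    for path in sorted(Path(root).rglob("*.py")):
        findings.extend(scan_file(path))
    return json.dumps({"findings": findings}, indent=2)
```

Because everything here is local file I/O and regex matching, the same structure runs fully offline and slots into a CI/CD step that fails the build when the findings list is non-empty.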

Behavioral persona GPT modeled after a logical diagnostician. Engineered to audit user reasoning, minimize cognitive bias, and challenge assumptions with high-precision critique. (Inspired by the deductive reasoning of Dr. Gregory House).

  • Updated Jan 2, 2026
