CyberArk announced the launch of FuzzyAI, an open-source framework that has jailbroken every major tested AI model. FuzzyAI helps organisations identify and address AI model vulnerabilities, like guardrail bypassing and harmful output generation, in cloud-hosted and in-house AI models. To understand first-hand how organizations can adopt AI while mitigating cyber risks, Black Hat Europe 2024 attendees can explore the tool’s capabilities and applications.
Why FuzzyAI?
AI models are transforming industries with innovative applications in customer interactions, internal process improvements and automation. Internal usage of these models also presents new security challenges for which most organisations are unprepared.
FuzzyAI helps solve some of these challenges by offering organisations a systematic approach to testing AI models against various adversarial inputs, uncovering potential weak points in their security systems and making AI development and deployment safer. At the heart of FuzzyAI is a powerful fuzzer – a tool that reveals software defects and vulnerabilities – capable of exposing vulnerabilities found via more than ten distinct attack techniques, from bypassing ethical filters to exposing hidden system prompts. Key features of FuzzyAI include:
* Comprehensive fuzzing: FuzzyAI probes AI models with various attack techniques to expose vulnerabilities like bypassing guardrails, information leakage, prompt injection or harmful output generation.
* An extensible framework: Organisations and researchers can add their own attack methods to tailor tests for domain-specific vulnerabilities.
* Community collaboration: A growing community-driven ecosystem ensures continuous adversarial techniques and defence mechanisms advancements.
“The launch of FuzzyAI underlines CyberArk’s commitment to AI security and helps organisations take a significant step forward in addressing the security issues inherent in the evolving landscape of AI model usage,” said Peretz Regev, Chief Product Officer at CyberArk. “Developed by CyberArk Labs, FuzzyAI has demonstrated the ability to jailbreak every major tested AI model. FuzzyAI empowers organisations and researchers to identify weaknesses and actively fortify their AI systems against emerging threats.”