OWASP AI Exchange

Computer and Network Security

owaspai.org: the go-to resource for AI security, feeding straight into international standards. Open source. 200 pages.

About us

The OWASP AI Exchange at owaspai.org is a collaborative working document to advance the development of global AI security standards and regulations. It provides a comprehensive overview of AI threats, vulnerabilities, and controls to foster alignment among different standardization initiatives, including the EU AI Act, ISO/IEC 27090 (AI security), the OWASP ML Top 10, the OWASP LLM Top 10, and OpenCRE - which we want to use to provide the AI Exchange content through the security chatbot OpenCRE-Chat. Our mission is to be the authoritative source for consensus, foster alignment, and drive collaboration among initiatives - NOT to set a standard. By doing so, we provide a safe, open, and independent place for everyone to find and share insights.

Website
https://owaspai.org/
Industry
Computer and Network Security
Company size
51-200 employees
Type
Nonprofit

Updates

  • Learn about the lethal trifecta in agentic AI: the three things that make up the nightmare of data extraction using cross-user prompt injection: https://lnkd.in/eRXewxYS

    The biggest threat in Agentic AI? Cross-user prompt injection. Here’s the perfect storm:
    1️⃣ It takes only one malicious instruction in any of the input data to perform a prompt injection that manipulates your agent running in the security context of a privileged user (say an admin).
    2️⃣ There is hardly a watertight way to detect prompt injections.
    3️⃣ You may have incident response and see something suspicious, but if the attack extracts sensitive data, you are already too late.
    4️⃣ It is in developers' interest to allow agents to do many things (such as running commands) - increasing the attack surface.
    5️⃣ It is in developers' interest to provide agents with access to many systems and data - increasing the blast radius.
    6️⃣ Agents are there to communicate with the world, so there's typically a way to send extracted information to the outside.
    What to do based on the above?
    💉 Limit agents' access to untrusted data dynamically - depending on the task at hand.
    🔍 At least do your best on in-line prompt injection detection (both input and output). There's a duty of care.
    🤝 Let dev and ops work together on prompt injection alerts and runbooks.
    🛑 Instruct developers and admins to harden agents in system access and in actions allowed. Zero model trust makes blast radius control critical!
    🏰 Maximise defence in depth by regarding all your agents as potential malicious actors.
    The attached picture is a slide from a Software Improvement Group training I regularly provide, based on the OWASP AI Exchange:
    1. The attacker creates a public issue in a programming platform, containing an instruction.
    2. A developer asks agentic AI to summarize new issues.
    3. The issue gets processed by an agent that also executes the instruction.
    4. That malicious instruction retrieves a secret token and sends it to the attacker.
    Easy. This could have been prevented by hardening the agent for the task of summarizing issues. You don't need to send emails for that.
    The key is to incentivize admins and developers to actually perform this hardening, as it does not make their lives easier. For more information, use the OWASP AI Exchange and learn for example about Simon Willison's lethal trifecta as a threat model. Good luck! #ai #agenticai #security
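    The per-task hardening advice above can be sketched in code: give an agent only the tools its current task needs, so a prompt-injected instruction (like sending email during issue summarization) simply has no tool to call. This is a minimal illustration under assumed names - `TASK_TOOLS`, `run_tool`, and the task and tool names are all hypothetical, not part of any specific agent framework or the OWASP AI Exchange itself.

    ```python
    # Hypothetical per-task tool allow-list: each task maps to the
    # minimal set of tools it legitimately needs (least privilege).
    TASK_TOOLS = {
        "summarize_issues": {"read_issue", "post_summary"},
        "triage_alerts": {"read_alert", "create_ticket"},
    }

    def run_tool(task: str, tool: str, call):
        """Execute a tool call only if the tool is allow-listed for the task.

        Blocks anything outside the allow-list, limiting the blast radius
        even when the agent itself has been manipulated by injected input.
        """
        allowed = TASK_TOOLS.get(task, set())
        if tool not in allowed:
            raise PermissionError(
                f"tool {tool!r} not allowed for task {task!r}"
            )
        return call()

    # Legitimate call: reading an issue while summarizing issues is allowed.
    summary_input = run_tool("summarize_issues", "read_issue",
                             lambda: "issue body text")

    # Injected call: "send_email" is not allow-listed for this task,
    # so the exfiltration attempt raises PermissionError instead of running.
    try:
        run_tool("summarize_issues", "send_email", lambda: "secret token")
    except PermissionError as e:
        print(f"blocked: {e}")
    ```

    The design point is that the allow-list is enforced outside the model, in ordinary code, so it holds regardless of what the prompt says - consistent with the "zero model trust" stance above.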

  • OWASP AI Exchange reposted this

    AI Security Roundup. As 2025 comes to a close, I thought it would be a good time to re-share my Resilient Cyber AI Security Resources. As I’ve immersed myself in AI security the last several years along with others in the community, I’ve been fortunate to cover many excellent frameworks, guides, and conversations. This is a roundup which I’ll be doing a refresh of as we get into 2026 too, with many new and additional resources, deep dives, and conversations. This includes:
    Conversations with folks such as 💰 Mike Privette and Chenxi Wang, Ph.D. on the market impact that AI is having, as well as venture, startups, and analysis.
    Deep dives into key frameworks such as the OWASP AI Exchange, the LLM Top 10, and MITRE’s ATLAS with Rob van der Veer, Steve Wilson, and Christina Liaghati, PhD.
    Insightful conversations with Sounil Yu, Walter Haydock, and Helen Oakley about critical AI risks and vulnerabilities.
    Perspectives from Grant Oviatt, Dylan Williams, and Filip Stojkovski on how AI is changing SecOps, along with thoughts from Lior Div and Nathan Burke on the future of the SOC - who, by the way, just announced the largest Series A EVER in Cyber.
    And much more, so I hope folks enjoy it and keep an eye out for the next roundup I’ll be putting out as we get closer to the new year! 🎉 https://lnkd.in/efyXeHzg

  • The AI Exchange is going places.

    Here’s what I learned from teaching 40 top executives about AI. Last week I taught an AI security program for @Polynome and Abu Dhabi School of Management (ADSM) to senior leaders. I came home with a notebook full of insights — here are the ones worth sharing:
    🔹 1. A global open-source gem is hiding in plain sight. The Falcon family of GenAI models, built in the UAE, is world-class and quite open. Yet almost none of my clients mention it. It deserves far more attention globally.
    🔹 2. The UAE is not “catching up” — it’s competing at the top. The country is building a strong AI ecosystem, attracting global talent, and moving with impressive speed. It ranks among the most ambitious AI nations in the world.
    🔹 3. The OWASP AI Exchange is becoming a true common language. Again it proved to be the most practical foundation for teaching AI security. The community behind it continues to make a big difference.
    🔹 4. Even an AI chicken farm can reveal real vulnerabilities. We ran a security attack exercise on an AI-powered chicken farm. Chickens may not have privacy, but the group still identified three real AI weaknesses. Lesson: taking 5 minutes to ask “what could go wrong?” always pays off.
    🔹 5. Every organisation thinks it’s behind in AI. And the truth? Everybody is struggling. This is reassuring for many leaders and worth repeating.
    🔹 6. It’s okay not to have an answer. One of the most honest lessons: professionals do not need to improvise answers to AI questions. Admitting “I’ll get back to you later” builds more trust than pretending.
    🔹 7. The UAE Cybersecurity Operations Center is straight out of science fiction. His Excellency Dr. Al Kuwaiti gave us an exclusive tour. Imagine wall-to-wall screens showing live cyber attacks around the world. It feels like standing on the bridge of the USS Enterprise.
    🔹 8. And yes — the Fajr Adhan still gives me goosebumps. It wakes you early, but it remains one of the most beautiful sounds to start the day with.
    Huge congratulations to the inaugural cohort. And thank you Polynome, Dr. Ahmed Dabbagh, and the crew for the warm hospitality. #ai #aisecurity Software Improvement Group OWASP® Foundation

  • OWASP AI Exchange reposted this

    Introducing: the A-word. AI is becoming a bit of a fixation. Maybe we should sometimes avoid the word and call it ‘A-word’ instead — just to remind ourselves not to obsess. Last Thursday, I opened the BSides Amsterdam conference with an AI talk (surprise), and after that there were zero presentations on AI! And honestly, I think that is great. Yes, we need to deal with AI to use it well and to manage its risks - but it has also become a distraction from the thousand other things that matter in our work and lives. It even pushes us to treat AI as a goal in itself — and that is not what we need:
    📈 We should not just focus on how to apply AI, but on solving real business problems.
    🌟 We need to stop idolising AI: thinking that it's going to solve everything. Only bet on the AI horse if you’re well-informed.
    😥 We should not build separate processes and frameworks just for AI, but integrate it into what already works.
    🔐 We had better not focus security only on exotic AI attacks. Many real risks are simple, such as prompt security.
    🗣️ Not every story, solution, or opinion needs “AI” in it to be valuable. It has become a bit too much.
    Of course, there are moments when we must talk about the ‘A-word’. I do it all the time - while trying to keep in mind the notions above. Try it. And if this resonates, please spread it to your connections for awareness. Let's stay grounded. #ai

  • OWASP AI Exchange reposted this

    Last week, I had the privilege of attending IEEE TrustCom 2025, where I presented two papers from our ATHENE Center project RoMa (Robustness in Machine Learning). One paper dives into the evolving threat landscape of evasion attacks in continual learning systems, a critical area as AI systems increasingly adapt and grow over time. After each continual learning step (e.g., adding a new class), the effectiveness of evasion attacks can shift, typically becoming more effective (or staying as effective) - so they do transfer across CL steps. Adversarial training, while resource-intensive and limited to scenarios where you control both model and data, isn’t a foolproof defense. In the second paper we explored the transferability of evasion attacks and how to assess the risk of susceptibility, a foundation for the risk assessment framework Disesdi Susanna Cox 🕷️ and I further developed in our recent arXiv paper. Beyond the research, I was proud to promote the OWASP AI Exchange and our mission: making AI secure worldwide. As AI systems become more dynamic, so must our defenses. Let’s build trustworthy AI together! #SecureAI #ContinualLearning #AdversarialAttacks #RiskEstimation #OWASPAIExchange

  • Our own Jolly is presenting at IWWOF on AI safety and security.

    AI is changing the game for small and midsize enterprises (SMEs). It brings new ways to grow, create, and connect. But here’s the truth: digital transformation without safety can put everything at risk. That’s why IWWOF would like to invite you to this event: “AI Safety Isn’t Optional in Today’s Cybersecurity Landscape for SMEs.” Together, we’ll explore:
    🔹 Why AI safety needs to be part of your cybersecurity culture
    🔹 How everyday AI tools can expose sensitive data (and what to do about it)
    🔹 Practical steps to make AI safety part of your cybersecurity culture
    Whether you’re already using AI or just getting started, this session will help you feel confident, secure, and ready to lead your business into the future.
    ✨ Event info:
    📅 Date: 13.11.2025
    ⏰ Time: 16:30 to 17:30
    📍 Venue: Business Turku, Tykistökatu 4 B, ElectroCity (street level), 20520 Turku
    🌍 Mode: Hybrid
    🔗 Link for online participation: https://lnkd.in/ghyZpRfk
    👉 Register for on-site participation: https://lnkd.in/gWhSf3KQ
    Let’s build a safer, smarter future together! #AI #AISafety #Cybersecurity #SMEs #DigitalTransformation #IWWOF

  • Great work by Michael Novack: using NotebookLM to put together a video explaining evasion attacks and the clever innovations that AI Exchange star members Niklas Bunzel and Disesdi Susanna Cox 🕷️ have just published.

    Thanks Disesdi Susanna Cox 🕷️ for your paper on quantifying the attack space for AI systems. A really interesting approach, instead of just trying various attacks and hoping for the best. I made a video with NotebookLM that helped me understand it, so I thought it might help others. I did read the paper afterwards to validate the video's accuracy. Original paper: https://lnkd.in/eXWd7hew

  • Our very own Iryna Schwindt is leading thoughts on the recent 'Women in Cybersecurity Podcast'. She has been, and still is, an important asset to our team.

    Today we are releasing a new episode with Iryna Schwindt - Lead Secure-by-Design Manager at Vodafone Group, where she embeds security controls and guardrails across digital channels and AI-driven products serving millions of users. With a career spanning roles in cybersecurity engineering, risk management, and cloud security across Azure, AWS, and GCP, Iryna has become a leading voice on AI risk management, AI red teaming, and responsible AI implementation. During our conversation, she shares her inspiring journey into cybersecurity — from her early research on cryptography and embedded systems to leading Vodafone’s secure AI initiatives — and provides actionable advice for women entering or advancing in this dynamic field. Full episode: https://lnkd.in/gFf4KFi4 #WCP #womenincybersecurity #security #ai #career
