Tag Banner

All news with #ai red teaming tag

Thu, November 6, 2025

CIO’s First Principles: A Reference Guide to Securing AI

🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.

read more →

Thu, November 6, 2025

Multi-Turn Adversarial Attacks Expose LLM Weaknesses

🔍 Cisco AI Defense's report shows open-weight large language models remain vulnerable to adaptive, multi-turn adversarial attacks even when single-turn defenses appear effective. Using over 1,000 prompts per model and analyzing 499 simulated conversations of 5–10 exchanges, researchers found iterative strategies such as Crescendo, Role-Play and Refusal Reframe drove failure rates above 90% in many cases. The study warns that traditional safety filters are insufficient and recommends strict system prompts, model-agnostic runtime guardrails and continuous red-teaming to mitigate risk.

read more →

Wed, November 5, 2025

Addressing the AI Black Box with Prisma AIRS 2.0 Platform

🔒 Prisma AIRS 2.0 presents a unified AI security platform that addresses the “AI black box” by combining AI Model Security and automated AI Red Teaming. It inventories models, inference datasets, applications and agents in real time, inspects model artifacts within CI/CD and model registries, and conducts continuous, context-aware adversarial testing. The platform integrates curated threat intelligence and governance mappings to deliver auditable risk scores and prioritized remediation guidance for enterprise teams.

read more →

Tue, October 28, 2025

Prisma AIRS 2.0: Unified Platform for Secure AI Agents

🔒 Prisma AIRS 2.0 is a unified AI security platform that delivers end-to-end visibility, risk assessment and automated defenses across agents, models and development pipelines. It consolidates Protect AI capabilities to provide posture and runtime protections for AI agents, model scanning and API-first controls for MLOps. The platform also offers continuous, autonomous red teaming and a managed MCP Server to embed threat detection into workflows.

read more →

Thu, October 16, 2025

CISOs Brace for an Escalating AI-versus-AI Cyber Fight

🔐AI-enabled attacks are rapidly shifting the threat landscape, with cybercriminals using deepfakes, automated phishing, and AI-generated malware to scale operations. According to Foundry's 2025 Security Priorities Study and CSO reporting, autonomous agents can execute full attack chains at machine speed, forcing defenders to adopt AI as a copilot backed by rigorous human oversight. Organizations are prioritizing human risk, verification protocols, and training to counter increasingly convincing AI-driven social engineering.

read more →

Fri, October 10, 2025

Autonomous AI Hacking and the Future of Cybersecurity

⚠️AI agents are now autonomously conducting cyberattacks, chaining reconnaissance, exploitation, persistence, and data theft at machine speed and scale. In 2025 public demonstrations—from XBOW’s mass submissions on HackerOne in June, to DARPA teams and Google’s Big Sleep in August—along with operational reports from Ukraine’s CERT and vendors, show these systems rapidly find and weaponize new flaws. Criminals have operationalized LLM-driven malware and ransomware, while tools like HexStrike‑AI, Deepseek, and Villager make automated attack chains broadly available. Defenders can also leverage AI to accelerate vulnerability research and operationalize VulnOps, continuous discovery/continuous repair, and self‑healing networks, but doing so raises serious questions about patch correctness, liability, compatibility, and vendor relationships.

read more →

Tue, October 7, 2025

AI-Powered Breach and Attack Simulation for Validation

🔍 AI-powered Breach and Attack Simulation (BAS) converts the flood of threat intelligence into safe, repeatable tests that validate defenses across real environments. The article argues that integrating AI with BAS lets teams operationalize new reports in hours instead of weeks, delivering on-demand validation, clearer risk prioritization, measurable ROI, and board-ready assurance. Picus Security positions this approach as a practical step-change for security validation.

read more →

Tue, September 23, 2025

Six Novel Ways to Apply AI in Cybersecurity Defense

🛡️ AI is being applied across security operations in novel ways to predict, simulate, and deter attacks. Experts from BforeAI, NopalCyber, Hughes, XYPRO, AirMDR, and Kontra outline six approaches — predictive scoring, GAN-driven attack simulation, AI analyst assistants, micro-deviation detection, automated triage and response, and proactive generative deception — that aim to reduce alert fatigue, accelerate investigations, and increase attacker costs. Successful deployments depend on accurate ground truth data, continuous model updates, and significant compute and engineering investment.

read more →

Wed, September 17, 2025

Blueprint for Building Safe and Secure AI Agents at Scale

🔒 Azure outlines a layered blueprint for building trustworthy, enterprise-grade AI agents. The post emphasizes identity, data protection, built-in controls, continuous evaluation, and monitoring to address risks like data leakage, prompt injection, and agent sprawl. Azure AI Foundry introduces Entra Agent ID, cross-prompt injection classifiers, risk and safety evaluations, and integrations with Microsoft Purview and Defender. Join Microsoft Secure on September 30 to learn about Foundry's newest capabilities.

read more →

Wed, September 17, 2025

Check Point Acquires Lakera to Build AI Security Stack

🔐 Check Point has agreed to acquire Lakera, an AI-native security platform focused on protecting agentic AI and LLM-based deployments, in a deal expected to close in Q4 2025 for an undisclosed sum. Lakera’s Gandalf adversarial engine reportedly leverages over 80 million attack patterns and delivers detection rates above 98% with sub-50ms latency and low false positives. Check Point will embed Lakera into the Infinity architecture, initially integrating into CloudGuard WAF and GenAI Protect, offering near-immediate, API-based protection as an add-on for existing customers.

read more →

Wed, September 17, 2025

Securing AI: End-to-End Protection with Prisma AIRS

🔒Prisma AIRS offers unified, AI-native security across the full AI lifecycle, from model development and training to deployment and runtime monitoring. The platform focuses on five core capabilities—model scanning, posture management, AI red teaming, runtime security and agent protection—to detect and mitigate threats such as prompt injection, data poisoning and tool misuse. By consolidating workflows and sharing intelligence across Prisma, it aims to simplify operations, accelerate remediation and reduce total cost of ownership so organizations can deploy bravely.

read more →

Wed, September 17, 2025

Deploying Agentic AI: Five Steps for Red-Teaming Guide

🛡️ Enterprises adopting agentic AI must update red‑teaming practices to address a rapidly expanding and interactive attack surface. The article summarizes the Cloud Security Alliance’s Agentic AI Red Teaming Guide and corroborating research that documents prompt injection, multi‑agent manipulation, and authorization hijacking as practical threats. It recommends five pragmatic steps—change attitude, continually test guardrails and governance, broaden red‑team skill sets, widen the solution space, and adopt modern tooling—and highlights open‑source and commercial tools such as AgentDojo and Agentgateway. The overall message: combine automated agents with human creativity, embed security in design, and treat agentic systems as sociotechnical operators rather than simple software.

read more →

Wed, September 17, 2025

OWASP LLM AI Cybersecurity and Governance Checklist

🔒 OWASP has published an LLM AI Cybersecurity & Governance Checklist to help executives and security teams identify core risks from generative AI and large language models. The guidance categorises threats and recommends a six-step strategy covering adversarial risk, threat modeling, inventory and training. It also highlights TEVV, model and risk cards, RAG, supplier audits and AI red‑teaming to validate controls. Organisations should pair these measures with legal and regulatory reviews and clear governance.

read more →

Tue, September 2, 2025

The AI Fix Ep. 66: AI Mishaps, Breakthroughs and Safety

🧠 In episode 66 of The AI Fix, hosts Graham Cluley and Mark Stockley walk listeners through a rapid-fire roundup of recent AI developments, from a ChatGPT prompt that produced an inaccurate anatomy diagram to a controversial Stanford sushi hackathon. They cover a Google Gemini bug that generated self-deprecating responses, criticisms that gave DeepSeek poor marks on existential-risk mitigation, and a debunked pregnancy-robot story. The episode also celebrates a genuine scientific advance: a team of AI agents that designed novel COVID-19 nanobodies, and considers how unusual collaborations and growing safety work could change the broader AI risk landscape.

read more →

Wed, August 27, 2025

Agent Factory: Top 5 Agent Observability Practices

🔍 This post outlines five practical observability best practices to improve the reliability, safety, and performance of agentic AI. It defines agent observability as continuous monitoring, detailed tracing, and logging of decisions and tool calls combined with systematic evaluations and governance across the lifecycle. The article highlights Azure AI Foundry Observability capabilities—evaluations, an AI Red Teaming Agent, Azure Monitor integration, CI/CD automation, and governance integrations—and recommends embedding evaluations into CI/CD, performing adversarial testing before production, and maintaining production tracing and alerts to detect drift and incidents.

read more →

Fri, August 22, 2025

Bruce Schneier to Spend Academic Year at Munk School

📚 Bruce Schneier will spend the 2025–26 academic year at the University of Toronto’s Munk School as an adjunct. He will organize a reading group on AI security in the fall and teach his cybersecurity policy course in the spring. He intends to collaborate with Citizen Lab, the Law School, and the Schwartz Reisman Institute, and to participate in Toronto’s academic and cultural life. He describes the opportunity as exciting.

read more →

Wed, August 20, 2025

Logit-Gap Steering Reveals Limits of LLM Alignment

⚠️ Unit 42 researchers Tony Li and Hongliang Liu introduce Logit-Gap Steering, a new framework that exposes how alignment training produces a measurable refusal-affirmation logit gap rather than eliminating harmful outputs. Their paper demonstrates efficient short-path suffix jailbreaks that achieved high success rates on open-source models including Qwen, LLaMA, Gemma and the recently released gpt-oss-20b. The findings argue that internal alignment alone is insufficient and recommend a defense-in-depth approach with external safeguards and content filters.

read more →

Fri, June 13, 2025

Layered Defenses Against Indirect Prompt Injection

🔒 Google GenAI Security Team outlines a layered defense strategy to mitigate indirect prompt injection attacks that hide malicious instructions in external content like emails, documents, and calendar invites. They combine model hardening in Gemini 2.5 with adversarial training, purpose-built ML classifiers, and "security thought reinforcement" to keep models focused on user tasks. Additional system controls include markdown sanitization, suspicious URL redaction via Google Safe Browsing, a Human-In-The-Loop confirmation framework for risky actions, and contextual end-user mitigation notifications that complement Gmail protections.

read more →