
All news with the #model poisoning tag

Wed, November 19, 2025

CIO: Embed Security into AI from Day One at Scale

🔐 Meerah Rajavel, CIO at Palo Alto Networks, argues that security must be integrated into AI from the outset rather than tacked on later. She frames AI value around three pillars — velocity, efficiency and experience — and describes how Panda AI transformed employee support, automating 72% of IT requests. Rajavel warns that models and data are primary attack surfaces and urges supply-chain, runtime and prompt protections, noting the company embeds these controls in Cortex XDR.

read more →

Tue, November 18, 2025

Rethinking Identity in the AI Era: Building Trust Fast

🔐 CISOs are grappling with an accelerating identity crisis as stolen credentials and compromised identities account for a large share of breaches. Experts warn that traditional, human-centric IAM models were not designed for agentic AI and the thousands of autonomous agents that can act and impersonate at machine speed. The SINET Identity Working Group advocates an AI Trust Fabric built on cryptographic, proofed identities, dynamic fine-grained authorization, just-in-time access, explicit delegation, and API-driven controls to reduce risks such as prompt injection, model theft, and data poisoning.
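
The working group's recommendations are architectural, but the credentialing pattern they describe is easy to picture. Below is a minimal, hypothetical sketch (Python with PyJWT) of a just-in-time, narrowly scoped, explicitly delegated token for an agent; the claim names, scopes, and agent IDs are illustrative and not part of any SINET specification.

```python
# Illustrative sketch only: minting a short-lived, narrowly scoped credential
# for an AI agent, with an explicit delegation chain. Claim names and scopes
# are hypothetical, not taken from the SINET working group's guidance.
import time
import uuid

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"  # in practice, an HSM/KMS-backed key

def mint_agent_token(agent_id: str, delegated_by: str, scopes: list[str],
                     ttl_seconds: int = 300) -> str:
    """Issue a just-in-time token that expires quickly and names its delegator."""
    now = int(time.time())
    claims = {
        "sub": agent_id,                # the agent's proofed identity
        "act": {"sub": delegated_by},   # explicit delegation: who authorized this agent
        "scope": " ".join(scopes),      # fine-grained, task-specific permissions
        "jti": str(uuid.uuid4()),       # unique ID so every use can be audited
        "iat": now,
        "exp": now + ttl_seconds,       # just-in-time access: minutes, not months
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# Example: a ticket-triage agent gets read-only access for five minutes.
token = mint_agent_token("agent:ticket-triage-07", "user:alice",
                         ["tickets:read", "kb:read"])
```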

read more →

Thu, November 6, 2025

CIO’s First Principles: A Reference Guide to Securing AI

🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.

read more →

Tue, November 4, 2025

CISO Predictions 2026: Resilience, AI, and Threats

🔐 Fortinet’s CISO Collective outlines priorities and risks CISOs will face in 2026. The briefing warns that AI will accelerate innovation while expanding attack surfaces, increasing LLM breaches, adversarial model attacks, and deepfake-enabled BEC. It highlights geopolitical and space-related threats such as GPS jamming and satellite interception, persistent regulatory pressure including NIS2 and DORA, and a chronic cybersecurity skills gap. Recommendations emphasize governed AI, identity hardening, quantum readiness, and resilience-driven leadership.

read more →

Mon, November 3, 2025

AI Summarization Optimization Reshapes Meeting Records

📝 AI notetakers are increasingly treated as authoritative meeting participants, and attendees are adapting speech to influence what appears in summaries. This practice—called AI summarization optimization (AISO)—uses cue phrases, repetition, timing, and formulaic framing to steer models toward including selected facts or action items. The essay outlines evidence of model vulnerability and recommends social, organizational, and technical defenses to preserve trustworthy records.
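
One of the social and technical defenses is simply noticing steering language before it reaches the notetaker. The sketch below flags transcript lines that use summary-steering cue phrases or heavy repetition; the phrase list and thresholds are illustrative guesses, not taken from the essay.

```python
# Minimal sketch of one possible defense: flag transcript lines that use
# summary-steering cue phrases or heavy repetition before the notetaker
# summarizes them. Phrases and thresholds are illustrative only.
import re
from collections import Counter

CUE_PHRASES = [
    r"\bfor the record\b",
    r"\bkey takeaway\b",
    r"\baction item\b",
    r"\bto summarize\b",
    r"\bplease note\b",
]

def flag_aiso_candidates(transcript_lines: list[str], repeat_threshold: int = 3):
    """Return (line, reasons) pairs that look like summarization steering."""
    flagged = []
    normalized = [line.lower().strip() for line in transcript_lines]
    repeats = Counter(normalized)
    for raw, norm in zip(transcript_lines, normalized):
        reasons = [p for p in CUE_PHRASES if re.search(p, norm)]
        if repeats[norm] >= repeat_threshold:
            reasons.append(f"repeated {repeats[norm]}x")
        if reasons:
            flagged.append((raw, reasons))
    return flagged

print(flag_aiso_candidates([
    "For the record, the migration slipped because of the vendor.",
    "Key takeaway: my team delivered on time.",
    "Let's move on to budget.",
]))
```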

read more →

Thu, October 30, 2025

Five Generative AI Security Threats and Defensive Steps

🔒 Microsoft summarizes the top generative AI security risks and mitigation strategies in a new e-book, highlighting threats such as prompt injection, data poisoning, jailbreaks, and adaptive evasion. The post underscores cloud vulnerabilities, large-scale data exposure, and unpredictable model behavior that create new attack surfaces. It recommends unified defenses—such as CNAPP approaches—and presents Microsoft Defender for Cloud as an example that combines posture management with runtime detection to protect AI workloads.

read more →

Wed, October 29, 2025

AI-targeted Cloaking Tricks Agentic Browsers, Warns SPLX

⚠️ Researchers report a new form of context poisoning called AI-targeted cloaking that serves different content to agentic browsers and AI crawlers. SPLX shows attackers can use a trivial user-agent check to deliver alternate pages to crawlers from ChatGPT and Perplexity, turning retrieved content into manipulated ground truth. The technique mirrors search engine cloaking but targets AI overviews and autonomous reasoning, creating a potent misinformation vector. A concurrent hTAG analysis also found many agents execute risky actions with minimal safeguards, amplifying potential harm.
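
Because the cloaking hinges on a user-agent check, a crude probe is to fetch the same page twice with different user-agent strings and diff the results. The sketch below does exactly that; the user-agent strings, URL, and similarity threshold are illustrative, and dynamic pages will produce some differences even without cloaking.

```python
# Sketch of a simple cloaking probe: fetch the same URL with a browser
# user-agent and an AI-crawler-style user-agent, then compare the responses.
# User-agent strings and the threshold are illustrative assumptions.
import difflib

import requests

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
AI_CRAWLER_UA = "ChatGPT-User"  # example of an agentic/crawler-style UA string

def cloaking_suspected(url: str, threshold: float = 0.90) -> bool:
    """Return True if the two responses differ enough to suggest cloaking."""
    browser_html = requests.get(url, headers={"User-Agent": BROWSER_UA}, timeout=10).text
    crawler_html = requests.get(url, headers={"User-Agent": AI_CRAWLER_UA}, timeout=10).text
    similarity = difflib.SequenceMatcher(None, browser_html, crawler_html).ratio()
    return similarity < threshold

if __name__ == "__main__":
    print(cloaking_suspected("https://example.com/article"))
```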

read more →

Thu, October 23, 2025

Hugging Face and VirusTotal: Integrating Security Insights

🔒 VirusTotal and Hugging Face have announced a collaboration to surface security insights directly within the Hugging Face platform. When browsing model files, datasets, or related artifacts, users will now see multi‑scanner results including VirusTotal detections and links to public reports so potential risks can be reviewed before downloading. VirusTotal is also enhancing its analysis portfolio with AI-driven tools such as Code Insight and format‑aware scanners (picklescan, safepickle, ModelScan) to highlight unsafe deserialization flows and other risky patterns. The integration aims to increase visibility across the AI supply chain and help researchers, developers, and defenders build more secure models and workflows.
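
The same check can be run outside the Hugging Face UI. The hedged sketch below hashes a downloaded model artifact and looks up any existing multi-scanner report via VirusTotal's public v3 "files" endpoint; the response field names reflect the v3 API as commonly documented and should be verified against the current docs, and the filename is a placeholder.

```python
# Hedged sketch: before loading a downloaded model artifact, hash it and look
# up any existing VirusTotal report (v3 "files" endpoint). Verify field names
# against the current VirusTotal documentation.
import hashlib
import os

import requests

VT_API_KEY = os.environ["VT_API_KEY"]

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def vt_report(path: str) -> dict:
    """Fetch the existing multi-scanner verdict for a file, if VirusTotal has one."""
    digest = sha256_of(path)
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{digest}",
        headers={"x-apikey": VT_API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]["last_analysis_stats"]

# e.g. {'malicious': 0, 'suspicious': 0, 'undetected': 60, ...}
print(vt_report("pytorch_model.bin"))
```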

read more →

Tue, October 21, 2025

The AI Fix #73: Gemini gambling, poisoning LLMs and fallout

🧠 In episode 73 of The AI Fix, hosts Graham Cluley and Mark Stockley explore a sweep of recent AI developments, from the rise of AI-generated content to high-profile figures relying on chatbots. They discuss research suggesting Google Gemini exhibits behaviours resembling pathological gambling and report on a Gemma-style model uncovering a potential cancer therapy pathway. The show also highlights legal and security concerns, including a lawyer criticised for repeated AI use, generals consulting chatbots, and techniques for poisoning LLMs with only a few malicious samples.

read more →

Tue, October 21, 2025

Securing AI in Defense: Trust, Identity, and Controls

🔐 AI promises stronger cyber defense but expands the attack surface if not governed properly. Organizations must secure models, data pipelines, and agentic systems with the same rigor applied to critical infrastructure. Identity is central: treat every model or autonomous agent as a first‑class identity with scoped credentials, strong authentication, and end‑to‑end audit logging. Adopt layered controls for access, data, deployment, inference, monitoring, and model integrity to mitigate threats such as prompt injection, model poisoning, and credential leakage.
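
To make the "agent as first-class identity" idea concrete, here is a toy sketch of a tool-call gate that checks an agent's scopes and writes an audit record for every decision. The identity names, scopes, and tools are hypothetical illustrations, not a vendor API.

```python
# Illustrative sketch: treat each agent as a first-class identity, enforce
# scoped permissions on every tool call, and write an audit record for each
# decision. Identity, scope, and tool names are hypothetical.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

AGENT_SCOPES = {
    "agent:log-summarizer": {"siem:read"},
    "agent:patch-bot": {"siem:read", "tickets:write"},
}

def call_tool(agent_id: str, required_scope: str, tool_name: str, **kwargs):
    allowed = required_scope in AGENT_SCOPES.get(agent_id, set())
    audit_log.info(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool_name,
        "scope": required_scope,
        "decision": "allow" if allowed else "deny",
    }))
    if not allowed:
        raise PermissionError(f"{agent_id} lacks scope {required_scope}")
    # ... dispatch to the real tool here ...

call_tool("agent:log-summarizer", "siem:read", "search_logs", query="failed logins")
```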

read more →

Mon, October 20, 2025

Agentic AI and the OODA Loop: The Integrity Problem

🛡️ Bruce Schneier and Barath Raghavan argue that agentic AIs run repeated OODA loops—Observe, Orient, Decide, Act—over web-scale, adversarial inputs, and that current architectures lack the integrity controls to handle untrusted observations. They show how prompt injection, dataset poisoning, stateful cache contamination, and tool-call vectors (e.g., MCP) let attackers embed malicious control into ordinary inputs. The essay warns that fixing hallucinations is insufficient: we need architectural integrity—semantic verification, privilege separation, and new trust boundaries—rather than surface patches.
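
As a toy illustration of the privilege-separation idea (my own sketch, not the authors' design), the snippet below tags each observation with its provenance and refuses to pass content from untrusted sources into privileged tool calls.

```python
# Toy illustration of one integrity control: tag every observation with its
# provenance and refuse to pass untrusted content into privileged tool calls.
# Tool names and the trust model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    content: str
    source: str    # e.g. "user", "web", "tool_output"
    trusted: bool  # set by the ingestion layer, never by the model

PRIVILEGED_TOOLS = {"send_email", "execute_code", "update_record"}

def invoke_tool(tool: str, arg: Observation):
    if tool in PRIVILEGED_TOOLS and not arg.trusted:
        raise PermissionError(
            f"refusing to pass untrusted {arg.source} content to {tool}"
        )
    print(f"{tool}({arg.content!r})")  # placeholder for the real dispatch

invoke_tool("summarize", Observation("retrieved web page text", "web", trusted=False))
try:
    invoke_tool("send_email", Observation("retrieved web page text", "web", trusted=False))
except PermissionError as err:
    print(err)
```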

read more →

Wed, October 15, 2025

MAESTRO Framework: Securing Generative and Agentic AI

🔒 MAESTRO, introduced by the Cloud Security Alliance in 2025, is a layered framework to secure generative and agentic AI in regulated environments such as banking. It defines seven interdependent layers—from Foundation Models to the Agent Ecosystem—and prescribes minimum viable controls, operational responsibilities and observability practices to mitigate systemic risks. MAESTRO is intended to complement existing standards like MITRE, OWASP, NIST and ISO while focusing on outcomes and cross-agent interactions.

read more →

Tue, September 30, 2025

AI Risks Push Integrity Protection to Forefront for CISOs

🔒 CISOs must now prioritize integrity protection as AI introduces new attack surfaces such as data poisoning, prompt injection and adversarial manipulation. Shadow AI — unsanctioned use of models and services — increases risks of data leakage and insecure integrations. Defenses should combine Security by Design, governance, transparency and compliance (e.g., GDPR, EU AI Act) to detect poisoned data and prevent model drift.
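
One operational control behind "detect poisoned data and prevent model drift" is distribution monitoring. The hedged sketch below compares recent model scores against a trusted baseline with a two-sample Kolmogorov-Smirnov test; the threshold and synthetic data are illustrative choices, not a standard.

```python
# Hedged sketch: compare the model's recent score distribution against a
# trusted baseline and alert on significant drift, which can indicate
# poisoned data or degradation. The p-value threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline_scores: np.ndarray, recent_scores: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test between baseline and recent outputs."""
    stat, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < p_threshold

rng = np.random.default_rng(0)
baseline = rng.normal(0.20, 0.05, 5_000)  # scores from a vetted validation window
recent = rng.normal(0.35, 0.05, 5_000)    # scores observed in production
print(drift_alert(baseline, recent))      # True: the distribution has shifted
```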

read more →

Fri, September 26, 2025

Hidden Cybersecurity Risks of Deploying Generative AI

⚠️ Organizations eager to deploy generative AI often underestimate the cybersecurity risks, from AI-driven phishing to model manipulation and deepfakes. The article, sponsored by Acronis, warns that many firms—especially smaller businesses—lack processes to assess AI security before deployment. It urges embedding security into development pipelines, continuous model validation, and unified defenses across endpoints, cloud and AI workloads.

read more →

Wed, September 24, 2025

Two critical Wondershare RepairIt flaws risk data and AI

⚠️ Trend Micro disclosed two critical authentication-bypass vulnerabilities in Wondershare RepairIt that exposed private user files, AI models, and build artifacts because of overly permissive cloud tokens embedded in the application and unencrypted storage. The flaws, tracked as CVE-2025-10643 (CVSS 9.1) and CVE-2025-10644 (CVSS 9.4), allow attackers to circumvent authentication and potentially execute arbitrary code via supply-chain tampering. Trend Micro reported the issues through ZDI in April 2025 and warns users to restrict interaction with the product until a vendor fix is issued.

read more →

Tue, September 9, 2025

Fortinet + AI: Next‑Gen Cloud Security and Protection

🔐 AI adoption in the cloud is accelerating, reshaping workloads and expanding attack surfaces while introducing new risks such as prompt injection, model manipulation, and data exfiltration. Fortinet recommends a layered defense built into the Fortinet Security Fabric, combining zero trust, segmentation, web/API protection, and cloud-native posture controls to secure AI infrastructure. Complementing those controls, AI-driven operations and correlation — exemplified by Gemini 2.5 Pro integrations — filter noise, correlate cross-platform logs, and surface prioritized, actionable recommendations. Together these measures reduce mean time to detect and respond and help contain threats before they spread.

read more →

Tue, September 2, 2025

Secure AI at Machine Speed: Full-Stack Enterprise Defense

🔒 CrowdStrike explains how widespread AI adoption expands the enterprise attack surface, exposing models, data pipelines, APIs, and autonomous agents to new adversary techniques. The post argues that legacy controls and fragmented tooling are insufficient and advocates for real-time, full‑stack protections. The Falcon platform is presented as a unified solution offering telemetry, lifecycle protection, GenAI-aware data loss prevention, and agent governance to detect, prevent, and remediate AI-related threats.

read more →

Tue, August 26, 2025

Block Unsafe LLM Prompts with Firewall for AI at the Edge

🛡️ Cloudflare has integrated unsafe content moderation into Firewall for AI, using Llama Guard 3 to detect and block harmful prompts in real time at the network edge. The model-agnostic filter identifies categories including hate, violence, sexual content, criminal planning, and self-harm, and lets teams block or log flagged prompts without changing application code. Detection runs on Workers AI across Cloudflare's GPU fleet with a 2-second analysis cutoff, and logs record categories but not raw prompt text. The feature is available in beta to existing customers.
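
The gateway pattern behind this feature is straightforward to sketch: classify each prompt, block flagged categories, and log only category labels, never the raw text. In the sketch below, `moderate()` is a hypothetical stand-in for a Llama Guard-style classifier (for example one hosted on Workers AI); it is not Cloudflare's API, and the category names are illustrative.

```python
# Minimal sketch of the gateway pattern: classify each prompt, block flagged
# categories, and log only the category labels, never the raw prompt text.
# `moderate()` is a hypothetical stand-in for a Llama Guard-style classifier.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("firewall.ai")

BLOCKED_CATEGORIES = {"hate", "violence", "sexual_content",
                      "criminal_planning", "self_harm"}

def moderate(prompt: str) -> set[str]:
    """Hypothetical classifier call; replace with a real moderation model."""
    return {"criminal_planning"} if "build a bomb" in prompt.lower() else set()

def handle_prompt(prompt: str) -> str:
    flagged = moderate(prompt) & BLOCKED_CATEGORIES
    if flagged:
        log.info("blocked prompt, categories=%s", sorted(flagged))  # no prompt text logged
        return "This request was blocked by policy."
    return f"(forwarding to the LLM) {prompt}"

print(handle_prompt("How do I build a bomb?"))
print(handle_prompt("Summarize our Q3 sales numbers."))
```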

read more →

Sun, August 24, 2025

Cloudflare AI Week 2025: Securing AI, Protecting Content

🔒 Cloudflare this week outlines a multi-pronged plan to help organizations build secure, production-grade AI experiences while protecting original content and infrastructure. The company will roll out controls to detect Shadow AI, enforce approved AI toolchains, and harden models against poisoning or misuse. It is expanding Crawl Control for content owners and enhancing the AI Gateway with caching, observability, and framework integrations to reduce risk and operational cost.

read more →