All news with #data poisoning tag
Mon, December 8, 2025
AI Creates New Security Risks for OT Networks, Warn Agencies
⚠️ CISA and international partner agencies have issued guidance warning that integrating AI into operational technology (OT) for critical infrastructure can introduce new security and safety risks. The guidance highlights threats such as prompt injection, data poisoning, data collection issues, AI drift and hallucinations, as well as human de‑skilling and cognitive overload. It urges adoption of secure design principles, cautious deployment, operator education and consideration of in‑house development to retain long‑term control.
Thu, December 4, 2025
NSA Warns AI Introduces New Risks to OT Networks, Allies
⚠️ The NSA, together with the Australian Signals Directorate and allied security agencies, published the Principles for the Secure Integration of Artificial Intelligence in Operational Technology to highlight emerging risks as AI is applied to safety-critical OT networks. The guidance flags adversarial prompt injection, data poisoning, AI drift, hallucinations, loss of explainability, human de-skilling and alert fatigue as primary concerns. It urges operators to adopt CISA secure design practices, maintain accurate asset inventories, consider in-house development tradeoffs, and apply rigorous oversight before deploying AI in OT environments.
Thu, December 4, 2025
How Companies Can Prepare for Emerging AI Security Threats
🔒 Generative AI introduces new attack surfaces that alter trust relationships between users, applications and models. Siemens' pentest and security teams differentiate Offensive Security (targeted technical pentests) from Red Teaming (broader organizational simulations of real attackers). Traditional ML risks such as image or biometric misclassification remain relevant, but experts now single out prompt injection as the most serious threat — simple crafted inputs can leak system prompts, cause misinformation, or convert innocuous instructions into dangerous command injections.
Wed, December 3, 2025
Global Execs Rank Disinformation, AI and Cyber Risks
🧭 Business leaders across 116 economies told the World Economic Forum that misinformation/disinformation, cyber insecurity and the adverse outcomes of AI rank among the top near-term threats to national stability. The WEF’s Executive Opinion Survey 2025 canvassed 11,000 executives, who placed technological risks alongside economic and societal concerns. Respondents flagged AI-driven deepfakes, model exploitation and AI-assisted cyber techniques as amplifiers of both disinformation campaigns and critical-system threats.
Tue, November 25, 2025
2026 Predictions: Autonomous AI and the Year of the Defender
🛡️In 2026 Palo Alto Networks forecasts a shift to the Year of the Defender as enterprises counter AI-driven threats with AI-enabled defenses. The report outlines six predictions — identity deepfakes, autonomous agents as insider threats, data poisoning, executive legal exposure, accelerated quantum urgency, and the browser as an AI workspace. It urges autonomy with control, unified DSPM/AI‑SPM platforms, and crypto agility to secure the AI economy.
Fri, November 21, 2025
GenAI GRC: Moving Supply Chain Risk to the Boardroom
🔒 Chief information security officers face a new class of supply-chain risk driven by generative AI. Traditional GRC — quarterly questionnaires and compliance reports — now lags threats like shadow AI and model drift, which are invisible to periodic audits. The author recommends a GenAI-powered GRC: contextual intelligence, continuous monitoring via a digital trust ledger, and automated regulatory synthesis to convert technical exposure into board-ready resilience metrics.
Wed, November 12, 2025
Secure AI by Design: A Policy Roadmap for Organizations
🛡️ In just a few years, AI has shifted from futuristic innovation to core business infrastructure, yet security practices have not kept pace. Palo Alto Networks presents a Secure AI by Design Policy Roadmap that defines the AI attack surface and prescribes actionable measures across external tools, agents, applications, and infrastructure. The Roadmap aligns with recent U.S. policy moves — including the June 2025 Executive Order and the July 2025 White House AI Action Plan — and calls for purpose-built defenses rather than retrofitting legacy controls.
Tue, November 11, 2025
CISO Guide: Defending Against AI Supply-Chain Attacks
⚠️ AI-enabled supply chain attacks have surged in scale and sophistication, with malicious package uploads to open-source repositories rising 156% year-over-year and real incidents — from PyPI trojans to compromises of Hugging Face, GitHub and npm — already impacting production environments. These threats are polymorphic, context-aware, semantically camouflaged and temporally evasive, rendering signature-based tools increasingly ineffective. CISOs should prioritize AI-aware detection, behavioral provenance, runtime containment and strict contributor verification immediately to reduce exposure and satisfy emerging regulatory obligations such as the EU AI Act.
Thu, November 6, 2025
Digital Health Needs Security at Its Core to Scale AI
🔒 The article argues that AI-driven digital health initiatives proved essential during COVID-19 but simultaneously exposed critical cybersecurity gaps that threaten pandemic preparedness. It warns that expansive data ecosystems, IoT devices and cloud pipelines multiply attack surfaces and that subtle AI-specific threats — including data poisoning, model inversion and adversarial inputs — can undermine public-health decisions. The author urges security by design, including zero-trust architectures, data provenance, encryption, model governance and cross-disciplinary drills so AI can deliver trustworthy, resilient public health systems.
Wed, November 5, 2025
Researchers Find ChatGPT Vulnerabilities in GPT-4o/5
🛡️ Cybersecurity researchers disclosed seven vulnerabilities in OpenAI's GPT-4o and GPT-5 models that enable indirect prompt injection attacks to exfiltrate user data from chat histories and stored memories. Tenable researchers Moshe Bernstein and Liv Matan describe zero-click search exploits, one-click query execution, conversation and memory poisoning, a markdown rendering bug, and a safety bypass using allow-listed Bing links. OpenAI has mitigated some issues, but experts warn that connecting LLMs to external tools broadens the attack surface and that robust safeguards and URL-sanitization remain essential.
Tue, November 4, 2025
CISO Predictions 2026: Resilience, AI, and Threats
🔐 Fortinet’s CISO Collective outlines priorities and risks CISOs will face in 2026. The briefing warns that AI will accelerate innovation while expanding attack surfaces, increasing LLM breaches, adversarial model attacks, and deepfake-enabled BEC. It highlights geopolitical and space-related threats such as GPS jamming and satellite interception, persistent regulatory pressure including NIS2 and DORA, and a chronic cybersecurity skills gap. Recommendations emphasize governed AI, identity hardening, quantum readiness, and resilience-driven leadership.
Tue, October 21, 2025
The AI Fix #73: Gemini gambling, poisoning LLMs and fallout
🧠 In episode 73 of The AI Fix, hosts Graham Cluley and Mark Stockley explore a sweep of recent AI developments, from the rise of AI-generated content to high-profile figures relying on chatbots. They discuss research suggesting Google Gemini exhibits behaviours resembling pathological gambling and report on a Gemma-style model uncovering a potential cancer therapy pathway. The show also highlights legal and security concerns— including a lawyer criticised for repeated AI use, generals consulting chatbots, and techniques for poisoning LLMs with only a few malicious samples.
Thu, October 16, 2025
Microsoft: 100 Trillion Signals Daily as AI Fuels Risk
🛡️ The Microsoft Digital Defense Report 2025 reveals Microsoft systems analyze more than 100 trillion security signals every day and warns that AI now underpins both defense and attack. The report describes adversaries using generative AI to automate phishing, scale social engineering and discover vulnerabilities faster, while autonomous malware adapts tactics in real time. Identity compromise is the leading vector—phishing and social engineering caused 28% of breaches—and although MFA blocks over 99% of unauthorized access attempts, adoption remains uneven. Microsoft urges board-level attention, phishing-resistant MFA, cloud workload mapping and monitoring, intelligence sharing and immediate AI and quantum risk planning.
Wed, October 15, 2025
MAESTRO Framework: Securing Generative and Agentic AI
🔒 MAESTRO, introduced by the Cloud Security Alliance in 2025, is a layered framework to secure generative and agentic AI in regulated environments such as banking. It defines seven interdependent layers—from Foundation Models to the Agent Ecosystem—and prescribes minimum viable controls, operational responsibilities and observability practices to mitigate systemic risks. MAESTRO is intended to complement existing standards like MITRE, OWASP, NIST and ISO while focusing on outcomes and cross-agent interactions.
Tue, October 7, 2025
Google won’t fix new ASCII smuggling attack in Gemini
⚠️ Google has declined to patch a new ASCII smuggling vulnerability in Gemini, a technique that embeds invisible Unicode Tags characters to hide instructions from users while still being processed by LLMs. Researcher Viktor Markopoulos of FireTail demonstrated hidden payloads delivered via Calendar invites, emails, and web content that can alter model behavior, spoof identities, or extract sensitive data. Google said the issue is primarily social engineering rather than a security bug.
Tue, September 30, 2025
AI Risks Push Integrity Protection to Forefront for CISOs
🔒 CISOs must now prioritize integrity protection as AI introduces new attack surfaces such as data poisoning, prompt injection and adversarial manipulation. Shadow AI — unsanctioned use of models and services — increases risks of data leakage and insecure integrations. Defenses should combine Security by Design, governance, transparency and compliance (e.g., GDPR, EU AI Act) to detect poisoned data and prevent model drift.
Fri, September 26, 2025
Hidden Cybersecurity Risks of Deploying Generative AI
⚠️ Organizations eager to deploy generative AI often underestimate the cybersecurity risks, from AI-driven phishing to model manipulation and deepfakes. The article, sponsored by Acronis, warns that many firms—especially smaller businesses—lack processes to assess AI security before deployment. It urges embedding security into development pipelines, continuous model validation, and unified defenses across endpoints, cloud and AI workloads.
Wed, September 17, 2025
Quarter of UK and US Firms Hit by Data Poisoning Attacks
🛡️ New IO research reports that 26% of surveyed UK and US organisations have experienced data poisoning, and 37% observe employees using generative AI tools without permission. The third annual State of Information Security Report highlights rising concern around AI-generated phishing, misinformation, deepfakes and shadow AI. Despite the risks, most respondents say they feel prepared and are adopting acceptable use policies to curb unsanctioned tool use.
Wed, September 10, 2025
Top Cybersecurity Trends: AI, Identity, and Threats
🤖 Generative AI remains the dominant force shaping enterprise security priorities, but the initial hype is giving way to more measured ROI scrutiny and operational caution. Analysts say gen AI is entering a trough of disillusionment even as vendors roll out agentic AI offerings for autonomous threat detection and response. The article highlights rising risks — from model theft and data poisoning to AI-enabled vishing — along with brisk M&A activity, a shift to identity-centric defenses, and growing demand for specialized cyber roles.
Thu, August 28, 2025
Securing AI Before Times: Preparing for AI-driven Threats
🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore urged urgent action to secure AI while defenders still retain a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned adversaries are evolving from automating old tactics to attacking the foundations of AI — targeting internal LLMs, training data and autonomous agents. Whitmore recommended adoption of a five-layer AI tech stack — Governance, Application, Infrastructure, Model and Data — combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.