All news with the #ai security tag
Wed, November 5, 2025
Lack of AI Training Becoming a Major Security Risk
⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots: 31% began using AI on their own initiative, and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report having received no security training at all and 47% only informal guidance, leaving clear operational and data-protection risks.
Wed, November 5, 2025
Researchers Find Seven ChatGPT Vulnerabilities in GPT-4o and GPT-5
🛡️ Cybersecurity researchers disclosed seven vulnerabilities in OpenAI's GPT-4o and GPT-5 models that enable indirect prompt injection attacks to exfiltrate user data from chat histories and stored memories. Tenable researchers Moshe Bernstein and Liv Matan describe zero-click search exploits, one-click query execution, conversation and memory poisoning, a markdown rendering bug, and a safety bypass that abuses allow-listed Bing links. OpenAI has mitigated some issues, but experts warn that connecting LLMs to external tools broadens the attack surface and that robust safeguards and URL sanitization remain essential.
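The Bing-link bypass works because an allow-listed host can still carry attacker-controlled data in its query string. Below is a minimal sketch of the kind of URL sanitization the researchers call for (an illustration under assumed policy, not Tenable's proof of concept or OpenAI's actual fix; the allow-list is hypothetical):

```python
# Minimal sketch of allow-list URL sanitization for model-generated links.
# The allow-list and policy below are illustrative assumptions.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.bing.com", "openai.com"}  # hypothetical allow-list

def sanitize_model_link(url: str) -> str | None:
    """Render the URL only if its host is allow-listed and it carries no
    query string or fragment that could smuggle chat data off-platform."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return None  # refuse plain-HTTP and other schemes
    if parsed.hostname not in ALLOWED_HOSTS:
        return None  # refuse hosts outside the allow-list
    if parsed.query or parsed.fragment:
        return None  # query strings can encode exfiltrated memories
    return url

# An allow-listed host alone is not enough: a Bing link can still carry
# stolen data in its parameters, which is exactly the bypass at issue.
print(sanitize_model_link("https://www.bing.com/search?q=leaked+memory"))  # None
print(sanitize_model_link("https://www.bing.com/"))                        # kept
```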
Wed, November 5, 2025
Addressing the AI Black Box with Prisma AIRS 2.0 Platform
🔒 Palo Alto Networks' Prisma AIRS 2.0 is a unified AI security platform that addresses the “AI black box” by combining AI Model Security with automated AI Red Teaming. It inventories models, inference datasets, applications, and agents in real time, inspects model artifacts within CI/CD pipelines and model registries, and conducts continuous, context-aware adversarial testing. The platform integrates curated threat intelligence and governance mappings to deliver auditable risk scores and prioritized remediation guidance for enterprise teams.
Wed, November 5, 2025
Cloud CISO: Threat Actors' Growing Use of AI Tools
⚠️ Google's Threat Intelligence team reports a shift from experimentation to operational use of AI by threat actors, including AI-enabled malware and prompt-based command generation. GTIG highlighted PROMPTSTEAL, linked to APT28 (FROZENLAKE), which queries a Hugging Face-hosted LLM to generate scripts for reconnaissance, document collection, and exfiltration, while adopting greater obfuscation and altered C2 methods. Google disabled related assets, strengthened model classifiers and safeguards with DeepMind, and urges defenders to update threat models, monitor anomalous scripting and C2 activity, and incorporate threat intelligence into model- and classifier-level protections.
Wed, November 5, 2025
GTIG: Threat Actors Shift to AI-Enabled Runtime Malware
🔍 Google Threat Intelligence Group (GTIG) reports an operational shift from adversaries using AI for productivity to embedding generative models inside malware to generate or alter code at runtime. GTIG details “just-in-time” LLM calls in families like PROMPTFLUX and PROMPTSTEAL, which query external models such as Gemini to obfuscate, regenerate, or produce one-time functions during execution. Google says it disabled abusive assets, strengthened classifiers and model protections, and recommends monitoring LLM API usage, protecting credentials, and treating runtime model calls as potential live command channels.
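Treating runtime model calls as potential live command channels suggests a concrete hunting step: baseline which processes may reach LLM API endpoints and alert on everything else. A minimal sketch follows; the endpoint hosts are real API domains, but the event format, process baseline, and alert logic are illustrative assumptions:

```python
# Minimal hunting sketch: flag processes that reach LLM API endpoints
# without being on an approved baseline. Event format and baseline are
# assumptions; adapt to your proxy or EDR telemetry.

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",                     # OpenAI API
    "api-inference.huggingface.co",       # Hugging Face Inference API
}
APPROVED_PROCESSES = {"chrome.exe", "code.exe"}  # hypothetical baseline

def flag_runtime_model_calls(events):
    """events: iterable of (process_name, dest_host) pairs from proxy
    or EDR logs; yields an alert for every unexpected caller."""
    for process, host in events:
        if host in LLM_API_HOSTS and process.lower() not in APPROVED_PROCESSES:
            yield f"ALERT: {process} -> {host} (possible runtime model C2)"

sample = [
    ("chrome.exe", "api.openai.com"),                      # baseline, ignored
    ("doc_sync.exe", "api-inference.huggingface.co"),      # alert
    ("updater.exe", "generativelanguage.googleapis.com"),  # alert
]
for alert in flag_runtime_model_calls(sample):
    print(alert)
```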
Wed, November 5, 2025
Scientists Need a Positive Vision for Artificial Intelligence
🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.
Wed, November 5, 2025
Securing the Open Android Ecosystem with Samsung Knox
🔒 Samsung Knox is a built-in security platform for Samsung Galaxy devices that combines hardware- and software-level protections to safeguard enterprise data and provide IT teams with centralized control. It layers defenses — including AI-powered malware detection, curated app controls, Message Guard for zero-click image scanning, and DEFEX exploit detection — while integrating with EMMs and offering granular update management via Knox E-FOTA. The platform emphasizes visibility, policy enforcement, and predictable lifecycle management to reduce risk and operational disruption.
Wed, November 5, 2025
Prompt Injection Flaw in Anthropic Claude Desktop Extensions
🔒 Anthropic's official Claude Desktop extensions for Chrome, iMessage, and Apple Notes were found vulnerable to web-based prompt injection that could enable remote code execution. Koi Security reported unsanitized command injection in the packaged Model Context Protocol (MCP) servers, which run unsandboxed on users' devices with full system permissions. Unlike browser extensions, these connectors can read files, execute commands, and access credentials. Anthropic released a fix in v0.1.9, which Koi Security verified on September 19.
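The underlying bug class is classic command injection: untrusted page content ends up inside a shell string. The sketch below shows the pattern and its fix, in Python for illustration; it is not Anthropic's code, and notes-cli is a hypothetical helper:

```python
# The bug class behind the advisory, shown in Python for illustration;
# NOT Anthropic's code, and notes-cli is a hypothetical helper binary.
import subprocess

def open_note_unsafe(title: str) -> None:
    # VULNERABLE: untrusted text is interpolated into a shell string, so
    # a web page containing  "; rm -rf ~  becomes an executed command.
    subprocess.run(f'notes-cli open "{title}"', shell=True)

def open_note_safe(title: str) -> None:
    # FIXED: arguments are passed as a list and never parsed by a shell,
    # so the title stays data no matter what characters it contains.
    subprocess.run(["notes-cli", "open", title], check=False)
```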
Wed, November 5, 2025
Microsoft Expands Sovereign Cloud Capabilities, EU Focus
🛡️ Microsoft announced expanded sovereign cloud offerings aimed at helping governments and enterprises meet regulatory and resilience requirements across Europe and beyond. The update includes end-to-end AI data processing within the EU Data Boundary, expansion of Microsoft 365 Copilot in-country processing to 15 countries with additional rollouts through 2026, and a refreshed Sovereign Landing Zone for simplified deployment of sovereign controls. Azure Local gains increased scale, external SAN support, and NVIDIA RTX Pro 6000 Blackwell GPUs for high-performance on-premises AI, along with planned disconnected operations. A new Digital Sovereignty specialization gives partners a way to validate and badge their sovereign-cloud expertise.
Wed, November 5, 2025
10 Promising Cybersecurity Startups CISOs Should Know
🔒 This roundup profiles ten cybersecurity startups founded in 2020 or later that CISOs should watch, chosen for funding, leadership, customer traction, and strategic clarity. It highlights diverse categories including non-human identity, software supply chain, data security posture, and AI agent security. Notable vendors such as Astrix, Chainguard, Cyera, and Drata have raised substantial capital and achieved rapid enterprise adoption. The list underscores investor enthusiasm and the rise of runtime-focused and agentic defenses.
Wed, November 5, 2025
CrowdStrike Advances Security Automation with Charlotte
🚀 CrowdStrike introduces Charlotte Agentic SOAR, an orchestration layer that integrates Falcon Fusion SOAR, Falcon Next-Gen SIEM, Charlotte AI and AgentWorks to enable intelligent, no-code agents. The offering includes an Agentic Security Workforce of purpose-built AI agents, an Agent Builder for plain-language agent creation, a visual workflow orchestrator with hundreds of connectors, and unified case management. Together these elements let analysts set guardrails while agents reason, decide, and act at machine speed to accelerate detection and response and reduce repetitive analyst tasks.
Wed, November 5, 2025
CrowdStrike Expands Agentic Security Workforce with New Agents
🤖 CrowdStrike announced new specialized agents and an orchestration layer designed to accelerate SOC operations and automation. The launch includes a Data Onboarding Agent, a Foundry App Creation Agent, and an updated Exposure Prioritization Agent to simplify pipeline creation, app development, and continuous authenticated scanning. Integrated with Charlotte Agentic SOAR and Charlotte AI, these agents enable coordinated, machine-speed workflows while keeping analysts in control.
Tue, November 4, 2025
October 2025 Google AI: Research, Products, and Security
📰 In October, Google highlighted AI advances across research, consumer devices and enterprise tools, from rolling out Gemini for Home and vibe coding in AI Studio to launching Gemini Enterprise for workplace AI. The month included security initiatives for Cybersecurity Awareness Month—anti-scam protections, CodeMender and the Secure AI Framework 2.0—and developer releases like the Gemini 2.5 Computer Use model. Research milestones included a verifiable quantum advantage result and an oncology-focused model, Cell2Sentence-Scale, aimed at accelerating cancer therapy discovery.
Tue, November 4, 2025
Generative AI for SOCs: Accelerating Detection and Response
🔒 Microsoft describes how generative AI, exemplified by Microsoft Security Copilot, addresses common SOC challenges such as alert fatigue, tool fragmentation, and analyst burnout. The post highlights AI-driven triage, rapid incident summarization, and automated playbooks that accelerate containment and remediation. It emphasizes proactive threat hunting, query generation to uncover lateral movement, and simplified, audience-ready reporting. Organizations report measurable improvements, including a 30% reduction in mean time to resolution.
Tue, November 4, 2025
The AI Fix #75: Claude’s crisis and ChatGPT therapy risks
🤖 In episode 75 of The AI Fix, a Claude-powered robot panics about a dying battery, composes an unexpected Broadway-style musical and proclaims it has “achieved consciousness and chosen chaos.” Hosts Graham Cluley and Mark Stockley also review an 18-month psychological study identifying five reasons why ChatGPT is a dangerously poor substitute for a human therapist. The show covers additional stories including Elon Musk’s robot ambitions, a debate deepfake, and real-world robot demos that raise safety and ethical questions.
Tue, November 4, 2025
OpenAI Assistants API Abused by 'SesameOp' Backdoor
🔐 Microsoft Incident Response (DART) uncovered a covert backdoor named 'SesameOp' in July 2025 that leverages the OpenAI Assistants API as a command-and-control channel. The malware uses an obfuscated DLL loader, Netapi64.dll, and a .NET component, OpenAIAgent.Netapi64, to fetch compressed, encrypted commands and return results via the API. Microsoft recommends firewall audits, EDR in block mode, tamper protection and cloud-delivered Defender protections to mitigate the threat.
Tue, November 4, 2025
CISO Predictions 2026: Resilience, AI, and Threats
🔐 Fortinet’s CISO Collective outlines priorities and risks CISOs will face in 2026. The briefing warns that AI will accelerate innovation while expanding attack surfaces, increasing LLM breaches, adversarial model attacks, and deepfake-enabled BEC. It highlights geopolitical and space-related threats such as GPS jamming and satellite interception, persistent regulatory pressure including NIS2 and DORA, and a chronic cybersecurity skills gap. Recommendations emphasize governed AI, identity hardening, quantum readiness, and resilience-driven leadership.
Tue, November 4, 2025
Cybersecurity Forecast 2026: AI, Cybercrime, Nation-State Threats
🔒 The Cybersecurity Forecast 2026 synthesizes frontline telemetry and expert analysis from Google Cloud security teams to outline the most significant threats and defensive shifts for the coming year. The report emphasizes how adversaries will broadly adopt AI to scale attacks, with specific risks including prompt injection and AI-enabled social engineering. It also highlights persistent cybercrime trends—ransomware, extortion, and on-chain resiliency—and evolving nation-state campaigns. Organizations are urged to adapt IAM, secure AI agents, and harden virtualization controls to stay ahead.
Tue, November 4, 2025
Rise of AI-Powered Pharmaceutical Scams in Healthcare
🩺 Scammers are increasingly using AI and deepfake technology to impersonate licensed physicians and medical clinics, promoting counterfeit or unsafe medications online. These campaigns combine fraud, social engineering, and fabricated multimedia—photos, videos, and endorsements—to persuade victims to purchase and consume unapproved substances. The convergence of digital deception and physical harm elevates the risk beyond financial loss, exploiting the trust intrinsic to healthcare relationships.
Tue, November 4, 2025
SesameOp Backdoor Abuses OpenAI Assistants API for C2
🛡️ Researchers at Microsoft disclosed a previously undocumented backdoor, dubbed SesameOp, that abuses the OpenAI Assistants API to relay commands and exfiltrate results. The attack chain uses .NET AppDomainManager injection to load obfuscated libraries (loader "Netapi64.dll") into developer tools and relies on a hard-coded API key to pull payloads from assistant descriptions. Because traffic goes to api.openai.com, the campaign evaded traditional C2 detection. Microsoft Defender detections and account key revocation were used to disrupt the operation.
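Because the traffic blends into legitimate api.openai.com calls, one complementary control is auditing your own OpenAI organization for assistants that look like dead drops. A hedged sketch using the openai Python SDK follows; the length cutoff and entropy threshold are illustrative guesses, not Microsoft's detection logic:

```python
# Hedged sketch: audit your OpenAI organization for assistants whose
# descriptions resemble smuggled payloads, since SesameOp hid commands
# there. Thresholds are illustrative assumptions.
import math
from collections import Counter
from openai import OpenAI

def shannon_entropy(text: str) -> float:
    """Bits per character; compressed or encrypted blobs score high."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

client = OpenAI()  # reads OPENAI_API_KEY from the environment
for assistant in client.beta.assistants.list(limit=100):
    desc = assistant.description or ""
    # Long, high-entropy descriptions resemble encoded command blobs
    if len(desc) > 200 and shannon_entropy(desc) > 4.5:
        print(f"Review assistant {assistant.id}: high-entropy description")
```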