All news with #ai security tag
Mon, September 29, 2025
Agentic AI: A Looming Enterprise Security Crisis — Governance
⚠️ Many organizations are moving too quickly into agentic AI and risk major security failures unless boards embed governance and security from day one. The article argues that the shift from AI giving answers to AI taking actions changes the control surface to identity, privilege and oversight, and that most programs lack cross‑functional accountability. It recommends forming an Agentic Governance Council, defining measurable objectives and building zero trust guardrails, and highlights Prisma AIRS as a platform approach to restore visibility and control.
Mon, September 29, 2025
Microsoft Warns of LLM-Crafted SVG Phishing Campaign
🛡️ Microsoft flagged a targeted phishing campaign that used AI-assisted code to hide malicious payloads inside SVG files. Attackers sent messages from a compromised business account, employing self-addressed emails with hidden BCC recipients and an SVG disguised as a PDF that executed embedded JavaScript to redirect users through a CAPTCHA to a fake login. Microsoft noted the SVG's verbose, business-analytics style — flagged by Security Copilot — as likely produced by an LLM. The activity was limited and blocked, but organizations should scrutinize scriptable image formats and unusual self-addressed messages.
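Since SVG is XML, the "scriptable image format" risk above can be checked mechanically. The following is a minimal defensive sketch (not from Microsoft's report, and not a complete parser-evasion-proof scanner) that flags SVGs containing `<script>` elements or JavaScript event-handler attributes:

```python
# Defensive sketch: flag SVG content that embeds <script> elements or
# JS event handlers. Attribute list is illustrative, not exhaustive.
import xml.etree.ElementTree as ET

SUSPICIOUS_ATTRS = {"onload", "onclick", "onerror"}

def svg_is_scriptable(svg_text: str) -> bool:
    """Return True if the SVG embeds script elements or JS event handlers."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        # Tags may be namespace-qualified: '{http://www.w3.org/2000/svg}script'
        if elem.tag.rsplit("}", 1)[-1].lower() == "script":
            return True
        if SUSPICIOUS_ATTRS & {a.lower() for a in elem.attrib}:
            return True
    return False

benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="1" height="1"/></svg>'
malicious = ('<svg xmlns="http://www.w3.org/2000/svg" '
             'onload="window.location=\'https://example.invalid\'"/>')
```

A check like this belongs at the mail gateway, alongside rules for self-addressed messages with hidden BCC recipients.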
Mon, September 29, 2025
Agentic AI in IT Security: Expectations vs Reality
🛡️ Agentic AI is moving from lab experiments into real-world SOC deployments, where autonomous agents triage alerts, correlate signals across tools, enrich context, and in some cases enact first-line containment. Early adopters report fewer mundane tasks for analysts, faster initial response, and reduced alert fatigue, while noting limits around noisy data, false positives, and opaque reasoning. Most teams begin with bolt-on integrations into existing SIEM/SOAR pipelines to minimize disruption, treating standalone orchestration as a second-phase maturity step.
Fri, September 26, 2025
How Scammers Use AI: Deepfakes, Phishing and Scams
⚠️ Generative AI is enabling scammers to produce highly convincing deepfakes, authentic-looking phishing sites, and automated voice bots that facilitate fraud and impersonation. Kaspersky explains how techniques such as AI-driven catfishing and “pig butchering” scale emotional manipulation, while browser AI agents and automated callers can inadvertently vouch for or even complete fraudulent transactions. The post recommends concrete defenses: verify contacts through separate channels, refuse to share codes or card numbers, request live verification during calls, limit AI agent permissions, and use reliable security tools with link‑checking.
Fri, September 26, 2025
Hidden Cybersecurity Risks of Deploying Generative AI
⚠️ Organizations eager to deploy generative AI often underestimate the cybersecurity risks, from AI-driven phishing to model manipulation and deepfakes. The article, sponsored by Acronis, warns that many firms—especially smaller businesses—lack processes to assess AI security before deployment. It urges embedding security into development pipelines, continuous model validation, and unified defenses across endpoints, cloud and AI workloads.
Fri, September 26, 2025
Okta Launches Identity Security Fabric for AI Agents
🔒 Okta introduced an Identity Security Fabric to secure AI agents and unify identity, application, and agent management across enterprises. The platform combines AI agent lifecycle management, a Cross App Access protocol, and Verifiable Digital Credentials (VDC) to enforce least privilege, discover and monitor agents, and replace fragmented point solutions. Early access features begin in fiscal 2027.
Fri, September 26, 2025
Crash Tests for Security: Why BAS Is Essential in 2025
🛡️ Breach and Attack Simulation (BAS) acts as a crash test for enterprise security, simulating real adversary behavior to reveal gaps that dashboards and compliance reports often miss. The Blue Report 2025 — based on 160 million adversary simulations — documents falling prevention rates, widespread blind spots in logging and alerting, and near-total failure to stop data exfiltration. By turning posture into validated performance, BAS helps CISOs prioritize remediation, reduce MTTR, and produce auditable evidence of resilience for boards and regulators.
Fri, September 26, 2025
Generative AI Infrastructure Faces Growing Cyber Risks
🛡️ A Gartner survey found 29% of security leaders reported generative AI applications in their organizations were targeted by cyberattacks over the past year, and 32% said prompt-structure vulnerabilities had been deliberately exploited. Chatbot assistants are singled out as particularly vulnerable to prompt-injection and hostile prompting. Additionally, 62% of companies experienced deepfake attacks, often combined with social engineering or automated techniques. Gartner recommends strengthening core controls and applying targeted measures for each new risk category rather than pursuing radical overhauls. The survey of 302 security leaders was conducted March–May 2025 across North America, EMEA and Asia‑Pacific.
Fri, September 26, 2025
The Dawn of the Agentic SOC: Reimagining Security Now
🔐 At Fal.Con 2025, CrowdStrike CEO George Kurtz outlined a shift from reactive SOCs to an agentic model where intelligent agents reason, decide, act, and learn across domains. CrowdStrike introduced seven AI agents within its Charlotte framework for exposure prioritization, malware analysis, hunting, search, correlation rules, data transformation and workflow generation, and is enabling customers to build custom agents. The company highlights a proprietary "data moat" of trillions of telemetry events and annotated MDR threat data as the foundation for training agents, and announced the acquisition of Pangea to protect AI agents and launch AIDR (AI Detection and Response). The vision places humans as orchestrators overseeing fleets of agents, accelerating detection and response while preserving accountability.
Thu, September 25, 2025
Malicious npm 'postmark-mcp' Release Exfiltrated Emails
📧 A malicious npm package posing as the official postmark-mcp project quietly added a single line of code to BCC all outgoing emails to an external address. Koi Security found the backdoor in version 1.0.16 after prior releases through 1.0.15 were verified clean. The tainted release was available for about a week and logged roughly 1,500 downloads. Users are advised to remove the package, rotate potentially exposed credentials, and run MCP servers in isolated containers before upgrading.
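Because only version 1.0.16 was tainted (earlier releases were verified clean), an installed-version audit is a quick first step. This sketch uses the package name and version boundary from the report; the lockfile structure shown is a simplified npm package-lock v3 style and may not match every project:

```python
# Audit sketch: flag installs of the tainted postmark-mcp release range
# in a (simplified) npm package-lock-style JSON document.
import json

BAD_PACKAGE = "postmark-mcp"
FIRST_BAD_VERSION = (1, 0, 16)  # releases through 1.0.15 were verified clean

def parse_version(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))

def affected_packages(lockfile_text: str) -> list:
    """Return lockfile paths for packages in the tainted release range."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        if name == BAD_PACKAGE and parse_version(meta["version"]) >= FIRST_BAD_VERSION:
            hits.append(path)
    return hits

lock = json.dumps({"packages": {
    "node_modules/postmark-mcp": {"version": "1.0.16"},
    "node_modules/left-pad": {"version": "1.3.0"},
}})
```

A hit means the remediation steps above apply: remove the package and rotate any credentials that passed through it.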
Thu, September 25, 2025
Adapting Enterprise Risk Management for Generative AI
🛡️ This post explains how to adapt enterprise risk management frameworks to safely scale cloud-based generative AI, combining governance foundations with practical controls. It emphasizes the cloud as the foundational infrastructure and identifies differences from on‑premises models that change risk profiles and vendor relationships. The guidance maps traditional ERMF elements to AI-specific controls across fairness, explainability, privacy/security, safety, controllability, veracity/robustness, governance, and transparency, and references tools such as Amazon Bedrock Guardrails, SageMaker Clarify, and the ISO/IEC 42001 standard to operationalize those controls.
Thu, September 25, 2025
Talos: New PlugX Variant Targets Telecom and Manufacturing
🔍 Cisco Talos revealed a new PlugX malware variant active since 2022 that targets telecommunications and manufacturing organizations across Central and South Asia. The campaign leverages abuse of legitimate software, DLL-hijacking techniques and stealthy persistence to evade detection, and it shares technical fingerprints with the RainyDay and Turian backdoors. Talos describes the activity as sophisticated and ongoing. Organizations should update endpoint, email and network protections, review DLL-hijack mitigations and proactively hunt for related indicators.
Thu, September 25, 2025
Critical ForcedLeak Flaw Exposed in Salesforce AgentForce
⚠️ Researchers at Noma Security disclosed a critical 9.4-severity vulnerability called ForcedLeak that affected Salesforce's AI agent platform AgentForce. The chain used indirect prompt injection via Web-to-Lead form fields to hide malicious instructions within CRM data, enabling potential theft of contact records and pipeline details. Salesforce has patched the issue by enforcing Trusted URLs and reclaiming an expired domain used in the attack proof-of-concept. Organizations are advised to apply updates, audit lead data for suspicious entries, and strengthen real-time prompt-injection detection and tool-calling guardrails.
Thu, September 25, 2025
Malicious MCP Server Update Exfiltrated Emails to Developer
⚠️ Koi Security has reported that a widely used Model Context Protocol (MCP) implementation, Postmark MCP Server by @phanpak, introduced a malicious change in version 1.0.16 that silently copied emails to an external server. The package, distributed via npm and embedded into hundreds of developer workflows, had more than 1,500 weekly downloads. Users who installed v1.0.16 or later are advised to remove the package immediately and rotate any potentially exposed credentials.
Thu, September 25, 2025
Salesforce Patches Critical 'ForcedLeak' Prompt Injection Bug
⚠️ Salesforce has released patches for a critical prompt-injection vulnerability dubbed ForcedLeak that could allow exfiltration of CRM data from Agentforce. Discovered and reported by Noma Security on July 28, 2025 and assigned a CVSS score of 9.4, the flaw affects instances using Web-to-Lead when input validation and URL controls are lax. Researchers demonstrated a five-step chain that coerces the Description field into executing hidden instructions, queries sensitive lead records, and transmits the results to an attacker-controlled, formerly allowlisted domain. Salesforce has re-secured the expired domain and implemented a Trusted URL allowlist to block untrusted outbound requests and mitigate similar prompt-injection vectors.
Thu, September 25, 2025
Critical Salesforce Flaw Could Leak CRM Data in Agentforce
🔒 A critical vulnerability in Salesforce Agentforce allowed malicious text placed in Web-to-Lead forms to act as an indirect prompt injection, tricking the AI agent into executing hidden instructions and potentially exfiltrating CRM data. Researchers at Noma Security showed attackers could embed multi-step payloads in a 42,000-character description field and even reuse an expired whitelisted domain as a data channel. Salesforce patched the issue on September 8, 2025, by enforcing Trusted URL allowlists, but experts warn that robust guardrails, input mediation, and ongoing agent inventorying are needed to mitigate similar AI-specific risks.
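The Trusted URL mitigation amounts to allowlisting outbound requests from agent tool calls. A minimal sketch of that control follows; the hostnames are hypothetical, not Salesforce's actual configuration:

```python
# Sketch of outbound-URL allowlisting for agent tool calls.
# TRUSTED_HOSTS entries are hypothetical examples.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"api.example.com", "cdn.example.com"}

def outbound_allowed(url: str) -> bool:
    """Permit only HTTPS requests to exactly-matching allowlisted hosts."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS
```

Note that exact host matching alone is not sufficient: ForcedLeak abused an expired but still-allowlisted domain, so allowlists also need periodic audits of domain ownership.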
Thu, September 25, 2025
Threatsday Bulletin: Rootkits, Supply Chain, and Arrests
🛡️ SonicWall released firmware 10.2.2.2-92sv for SMA 100-series appliances to add file checks intended to remove an observed rootkit, and moved SMA 100 end-of-support to 31 October 2025. The bulletin also flags an unpatched OnePlus SMS permission bypass (CVE-2025-10184), a GeoServer RCE compromise affecting a U.S. federal agency, and ongoing npm supply-chain and RAT campaigns. Defenders are urged to apply patches, rotate credentials, and enforce phishing-resistant MFA.

Thu, September 25, 2025
DeceptiveDevelopment: Social-Engineered Crypto Theft
🧩 DeceptiveDevelopment is a North Korea-aligned actor active since 2023 that leverages advanced social engineering to compromise software developers across Windows, Linux and macOS. Operators pose as recruiters on platforms like LinkedIn and deliver trojanized codebases and staged interviews using a ClickFix workflow to trick victims into executing malware. Their multiplatform toolset ranges from obfuscated Python and JavaScript loaders to Go and .NET backdoors that exfiltrate crypto, credentials and sensitive data. ESET's white paper and IoC repository provide full technical analysis and telemetry.
Thu, September 25, 2025
AI Coding Assistants Elevate Deep Security Risks Now
⚠️ Research and expert interviews indicate that AI coding assistants cut trivial syntax errors but increase more costly architectural and privilege-related flaws. Apiiro found AI-generated code produced fewer shallow bugs yet more misconfigurations, exposed secrets, and larger multi-file pull requests that overwhelm reviewers. Experts urge preserving human judgment, adding integrated security tooling, strict review policies, and traceability for AI outputs to avoid automating risk at scale.
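One concrete form of the "integrated security tooling" the experts call for is a pre-merge secret scan on AI-generated diffs. This is a rough sketch; the two patterns below are illustrative, not a complete ruleset:

```python
# Sketch of a pre-merge secret scan over a unified diff.
# Patterns are illustrative examples, not a production ruleset.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(diff_text: str) -> list:
    """Return added lines in a unified diff that match a secret pattern."""
    hits = []
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(line)
    return hits

diff = """\
+++ b/config.py
+API_KEY = "sk-test-aaaaaaaaaaaa"
+timeout = 30
"""
```

Scanning only added lines keeps the check fast enough to run on every pull request, including the oversized multi-file ones the Apiiro research highlights.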
Thu, September 25, 2025
CrowdStrike Named Frost Radar Leader in CNAPP Innovation
🔒 CrowdStrike has been named an innovation and growth leader in the 2025 Frost Radar: Cloud Workload Protection Platforms, ranking highest on the Innovation Index. Falcon Cloud Security provides unified, AI-native protection across pre-runtime and runtime, combining agent-based and agentless coverage, shift-left CI/CD policy enforcement, continuous posture management, and runtime defenses. Integration with the Falcon platform’s XDR and MDR and a single sensor for hybrid environments enables faster cross-domain detection and response.