All news with #ai supply chain tag
Mon, October 6, 2025
Five Critical Questions for Selecting AI-SPM Solutions
🔒 As enterprises accelerate AI and cloud adoption, selecting the right AI Security Posture Management (AI-SPM) solution is critical. The article presents five core questions to guide procurement: does the product deliver centralized visibility into models, datasets, and infrastructure; can it detect and remediate AI-specific risks like adversarial attacks, data leakage, and bias; and does it map to regulatory standards such as GDPR and NIST AI? It also stresses cloud-native scalability and seamless integration with DSPM, DLP, identity platforms, DevOps toolchains, and AI services to ensure proactive policy enforcement and audit readiness.
Fri, September 26, 2025
MCP supply-chain attack via squatted Postmark connector
🔒 A malicious npm package, postmark-mcp, was weaponized to stealthily copy outgoing emails by inserting a hidden BCC in version 1.0.16. The package impersonated an MCP Postmark connector and forwarded every message to an attacker-controlled address, exposing password resets, invoices, and internal correspondence. The backdoor was a single line of code and remained available through regular downloads before the package was removed. Koi Security advises immediate removal, credential rotation, and audits of all MCP connectors.
Wed, September 24, 2025
Cloudflare Launches Content Signals Policy for robots.txt
🛡️ Cloudflare introduced the Content Signals Policy, an extension to robots.txt that lets site operators express how crawlers may use content after it has been accessed. The policy defines three machine-readable signals — search, ai-input, and ai-train — each set to yes/no or left unset. Cloudflare will add a default signal set (search=yes, ai-train=no) to managed robots.txt for ~3.8M domains, serve commented guidance for free zones, and publish the spec under CC0. Cloudflare emphasizes signals are preferences, not technical enforcement, and recommends pairing them with WAF and Bot Management.
Mon, September 22, 2025
Weekly Recap: Chrome 0-day, AI Threats, and Supply Chain Risk
🔒 This week's recap highlights rapid attacker innovation and urgent remediation: Google patched an actively exploited Chrome zero-day (CVE-2025-10585), while researchers demonstrated a DDR5 RowHammer variant that undermines TRR protections. Dual-use AI tooling and model namespace reuse risks surfaced alongside widespread supply-chain and phishing disruptions. Defenders should prioritize patching, harden model dependencies, and monitor for stealthy loaders.
Mon, September 22, 2025
Agentic AI Risks and Governance: A Major CISO Challenge
⚠️ Agentic AI is proliferating inside enterprises, embedding autonomous agents into development, customer support, process automation, and employee workflows. Security experts warn these systems create substantial visibility and governance gaps: organizations often do not know where agents run, what data they access, or how independent their actions are. Key risks include risky autonomy, uncontrolled data sharing among agents, third-party integration vulnerabilities, and the potential for agents to enable or mimic multi-stage attacks. CISOs should prioritize real-time observability, strict governance, secure-by-design development, and cross-functional coordination to mitigate these threats.
Thu, September 18, 2025
How CISOs Can Build Effective AI Governance Programs
🛡️ AI's rapid enterprise adoption requires CISOs to replace inflexible bans with living governance that both protects data and accelerates innovation. The article outlines three practical components: gaining ground truth visibility with AI inventories, AIBOMs and model registries; aligning policies to the organization's speed so governance is executable; and making governance sustainable by provisioning secure tools and rewarding compliant behavior. It highlights SANS guidance and training to help operationalize these approaches.
Tue, September 2, 2025
Agentic AI: Emerging Security Challenges for CISOs
🔒 Agentic AI is poised to transform workflows like software development, customer support, RPA, and employee assistance, but its autonomy raises new cybersecurity risks for CISOs. A 2024 Cisco Talos report and industry experts warn these systems can act without human oversight, chain benign actions into harmful sequences, or learn to evade detection. Lack of visibility fosters shadow AI, and third-party integrations and multi-agent setups widen supply-chain and data-exfiltration exposures. Organizations should adopt observability, governance, and secure-by-design practices before scaling agentic deployments.
Thu, August 28, 2025
Cloudflare Launches AI Crawl Control with 402 Support
🛡️Cloudflare has rebranded its AI Audit beta as AI Crawl Control and moved the tool to general availability, giving publishers more granular ways to manage AI crawlers. Paid customers can now block specific bots and return customizable HTTP 402 Payment Required responses containing contact or licensing instructions. The feature aims to replace the binary allow-or-block choice with a channel for negotiation and potential monetization, while pay-per-crawl remains in beta.
Thu, August 28, 2025
DLA Selects Google Public Sector for Cloud Modernization
☁️ Google Public Sector has been awarded a $48 million DLA Enterprise Platform contract to migrate the Defense Logistics Agency to a DoD‑accredited commercial cloud. The multi‑phased program will move key infrastructure and data to a modern, AI‑ready Google Cloud foundation and enable BigQuery, Looker, and Vertex AI analytics. Emphasizing secure‑by‑design infrastructure and Mandiant threat intelligence, the effort aims to reduce costs, improve resiliency, and accelerate AI‑driven logistics and transportation management.
Thu, August 28, 2025
Malicious Nx npm Packages in 's1ngularity' Supply Chain
🔒 The maintainers of nx warned of a supply-chain compromise that allowed attackers to publish malicious versions of the npm package and several supporting plugins that gathered credentials. Rogue postinstall scripts scanned file systems, harvested GitHub, cloud and AI credentials, and exfiltrated them as Base64 to public GitHub repositories named 's1ngularity-repository' under victim accounts. Security firms reported 2,349 distinct secrets leaked; maintainers rotated tokens, removed the malicious versions, and urged immediate credential rotation and system cleanup.
Mon, August 25, 2025
What 17,845 GitHub MCP Servers Reveal About Risk and Abuse
🛡️ VirusTotal ran a large-scale audit of 17,845 GitHub projects implementing the MCP (Model Context Protocol) using Code Insight powered by Gemini 2.5 Flash. The automated review initially surfaced an overwhelming number of issues, and a refined prompt focused on intentional malice marked 1,408 repos as likely malicious. Manual checks showed many flagged projects were demos or PoCs, but the analysis still exposed numerous real attack vectors—credential harvesting, remote code execution via exec/subprocess, supply-chain tricks—and recurring insecure practices. The post recommends treating MCP servers like browser extensions: sign and pin versions, sandbox or WASM-isolate them, enforce strict permissions and filter model outputs to remove invisible or malicious content.
Mon, August 25, 2025
Code Insight Expands to Cover Software Supply Chain Risks
🛡️ VirusTotal’s Code Insight now analyzes a broader set of software supply chain formats — including CRX, XPI, VSIX, Python WHL, NPM packages, and MCP protocol integrations. The tool inspects code logic to detect obfuscation, dynamic code fetching, credential theft, and remote command execution in extensions and packages. Recent findings include malicious Chrome and Firefox extensions, a deceptive VS Code extension, and compromised Python and NPM packages. This capability complements traditional signature- and ML-based classification by surfacing behavior-based risks.
Tue, August 12, 2025
Langflow Misconfiguration Exposes 97,000 Pakistani Records
🔒 UpGuard secured an internet-exposed Langflow instance leaking data on roughly 97,000 Pakistani insurance customers, including 945 individuals flagged as politically exposed persons (PEPs). The instance—used by Pakistan-based consultants Workcycle Technologies to build AI chatbots for clients such as TPL Insurance and the Federal Board of Revenue—contained PII, confidential documents, and plaintext credentials. Access was removed after disclosure; UpGuard found no evidence of active exploitation.
Tue, August 12, 2025
Langflow Misconfiguration Exposes Data of Pakistani Insurers
🔓 UpGuard secured a misconfigured Langflow instance that exposed data for roughly 97,000 insurance customers in Pakistan, including 945 individuals marked as politically exposed persons. The instance was used by Pakistan-based Workcycle Technologies to build AI chatbots for clients such as TPL Insurance and the Federal Board of Revenue. Exposed materials included PII, confidential business documents and credentials; access was removed after notification and UpGuard found no evidence of exploitation.
Tue, July 15, 2025
A Summer of Security: Empowering Defenders with AI
🛡️ Google outlines summer cybersecurity advances that combine agentic AI, platform improvements, and public-private partnerships to strengthen defenders. Big Sleep—an agent from DeepMind and Project Zero—has discovered multiple real-world vulnerabilities, most recently an SQLite flaw (CVE-2025-6965) informed by Google Threat Intelligence, helping prevent imminent exploitation. The company emphasizes safe deployment, human oversight, and standard disclosure while extending tools like Timesketch (now augmented with Sec‑Gemini agents) and showcasing internal systems such as FACADE at Black Hat and DEF CON collaborations.