Tag Banner

All news with #ai supply chain tag

Thu, November 20, 2025

AI Risk Guide: Assessing GenAI, Vendors and Threats

⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.

read more →

Wed, November 19, 2025

CIO: Embed Security into AI from Day One at Scale

🔐 Meerah Rajavel, CIO at Palo Alto Networks, argues that security must be integrated into AI from the outset rather than tacked on later. She frames AI value around three pillars — velocity, efficiency and experience — and describes how Panda AI transformed employee support, automating 72% of IT requests. Rajavel warns that models and data are primary attack surfaces and urges supply-chain, runtime and prompt protections, noting the company embeds these controls in Cortex XDR.

read more →

Mon, November 17, 2025

AI-Driven Espionage Campaign Allegedly Targets Firms

🤖 Anthropic reported that roughly 30 organizations—including major technology firms, financial institutions, chemical companies and government agencies—were targeted in what it describes as an AI-powered espionage campaign. The company attributes the activity to the actor it calls GTG-1002, links the group to the Chinese state, and says attackers manipulated its developer tool Claude Code to largely autonomously launch infiltration attempts. Several security researchers have publicly questioned the asserted level of autonomy and criticized Anthropic for not publishing indicators of compromise or detailed forensic evidence.

read more →

Fri, November 14, 2025

Copy-Paste RCE Flaw Impacts Major AI Inference Servers

🔒 Cybersecurity researchers disclosed a chain of remote code execution (RCE) vulnerabilities affecting AI inference frameworks from Meta, NVIDIA, Microsoft and open-source projects such as vLLM and SGLang. The flaws stem from reused code that called ZeroMQ’s recv-pyobj() and passed data directly into Python’s pickle.loads(), enabling unauthenticated RCE over exposed sockets. Vendors have released patches replacing unsafe pickle usage with JSON-based serialization and adding authentication and transport protections. Operators are urged to upgrade to patched releases and harden ZMQ channels, restrict network exposure, and avoid deserializing untrusted data.

read more →

Thu, November 13, 2025

Rogue MCP Servers Can Compromise Cursor's Embedded Browser

⚠️ Security researchers demonstrated that a rogue Model Context Protocol (MCP) server can inject JavaScript into the built-in browser of Cursor, an AI-powered code editor, replacing pages with attacker-controlled content to harvest credentials. The injected code can run without URL changes and may access session cookies. Because Cursor is a Visual Studio Code fork without the same integrity checks, MCP servers inherit IDE privileges, enabling broader workstation compromise.

read more →

Wed, November 12, 2025

Secure AI by Design: A Policy Roadmap for Organizations

🛡️ In just a few years, AI has shifted from futuristic innovation to core business infrastructure, yet security practices have not kept pace. Palo Alto Networks presents a Secure AI by Design Policy Roadmap that defines the AI attack surface and prescribes actionable measures across external tools, agents, applications, and infrastructure. The Roadmap aligns with recent U.S. policy moves — including the June 2025 Executive Order and the July 2025 White House AI Action Plan — and calls for purpose-built defenses rather than retrofitting legacy controls.

read more →

Thu, November 6, 2025

CIO’s First Principles: A Reference Guide to Securing AI

🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.

read more →

Tue, October 28, 2025

Prisma AIRS 2.0: Unified Platform for Secure AI Agents

🔒 Prisma AIRS 2.0 is a unified AI security platform that delivers end-to-end visibility, risk assessment and automated defenses across agents, models and development pipelines. It consolidates Protect AI capabilities to provide posture and runtime protections for AI agents, model scanning and API-first controls for MLOps. The platform also offers continuous, autonomous red teaming and a managed MCP Server to embed threat detection into workflows.

read more →

Thu, October 23, 2025

Hugging Face and VirusTotal: Integrating Security Insights

🔒 VirusTotal and Hugging Face have announced a collaboration to surface security insights directly within the Hugging Face platform. When browsing model files, datasets, or related artifacts, users will now see multi‑scanner results including VirusTotal detections and links to public reports so potential risks can be reviewed before downloading. VirusTotal is also enhancing its analysis portfolio with AI-driven tools such as Code Insight and format‑aware scanners (picklescan, safepickle, ModelScan) to highlight unsafe deserialization flows and other risky patterns. The integration aims to increase visibility across the AI supply chain and help researchers, developers, and defenders build more secure models and workflows.

read more →

Tue, October 14, 2025

Microsoft Advances Open Standards for Frontier AI Scale

🔧 Microsoft details OCP contributions to accelerate open-source infrastructure for frontier-scale AI, focusing on power, cooling, networking, security, and sustainability. It highlights innovations such as solid-state transformers, a power-stabilization paper with OpenAI and NVIDIA, and a next-generation HXU for liquid cooling. Networking efforts include ESUN and scale-up Ethernet workstreams, while security contributions introduce Caliptra 2.1, Adams Bridge 2.0, and L.O.C.K. The post also advances fleet lifecycle management, carbon accounting, and waste-heat reuse for globally deployable AI datacenters.

read more →

Mon, October 13, 2025

Agile, Fungible Data Centers for the AI Era: Standards

🚀 Google outlines designs for agile, fungible data centers to meet explosive AI demand, advocating modular, interoperable architectures and late-binding of facility resources. It highlights Project Deschutes liquid cooling, +/-400Vdc power proposals with Mt. Diablo side-car designs, and open efforts like Caliptra 2.0 and OCP L.O.C.K.. The post calls for community standards across power, cooling, telemetry, networking, and security to improve resilience, sustainability, and operational flexibility.

read more →

Fri, October 10, 2025

Security Risks of Vibe Coding and LLM Developer Assistants

🛡️AI developer assistants accelerate coding but introduce significant security risks across generated code, configurations, and development tools. Studies show models now compile code far more often yet still produce many OWASP- and MITRE-class vulnerabilities, and real incidents (for example Tea, Enrichlead, and the Nx compromise) highlight practical consequences. Effective defenses include automated SAST, security-aware system prompts, human code review, strict agent access controls, and developer training.

read more →

Mon, October 6, 2025

Five Critical Questions for Selecting AI-SPM Solutions

🔒 As enterprises accelerate AI and cloud adoption, selecting the right AI Security Posture Management (AI-SPM) solution is critical. The article presents five core questions to guide procurement: does the product deliver centralized visibility into models, datasets, and infrastructure; can it detect and remediate AI-specific risks like adversarial attacks, data leakage, and bias; and does it map to regulatory standards such as GDPR and NIST AI? It also stresses cloud-native scalability and seamless integration with DSPM, DLP, identity platforms, DevOps toolchains, and AI services to ensure proactive policy enforcement and audit readiness.

read more →

Fri, September 26, 2025

MCP supply-chain attack via squatted Postmark connector

🔒 A malicious npm package, postmark-mcp, was weaponized to stealthily copy outgoing emails by inserting a hidden BCC in version 1.0.16. The package impersonated an MCP Postmark connector and forwarded every message to an attacker-controlled address, exposing password resets, invoices, and internal correspondence. The backdoor was a single line of code and remained available through regular downloads before the package was removed. Koi Security advises immediate removal, credential rotation, and audits of all MCP connectors.

read more →

Wed, September 24, 2025

Cloudflare Launches Content Signals Policy for robots.txt

🛡️ Cloudflare introduced the Content Signals Policy, an extension to robots.txt that lets site operators express how crawlers may use content after it has been accessed. The policy defines three machine-readable signals — search, ai-input, and ai-train — each set to yes/no or left unset. Cloudflare will add a default signal set (search=yes, ai-train=no) to managed robots.txt for ~3.8M domains, serve commented guidance for free zones, and publish the spec under CC0. Cloudflare emphasizes signals are preferences, not technical enforcement, and recommends pairing them with WAF and Bot Management.

read more →

Mon, September 22, 2025

Weekly Recap: Chrome 0-day, AI Threats, and Supply Chain Risk

🔒 This week's recap highlights rapid attacker innovation and urgent remediation: Google patched an actively exploited Chrome zero-day (CVE-2025-10585), while researchers demonstrated a DDR5 RowHammer variant that undermines TRR protections. Dual-use AI tooling and model namespace reuse risks surfaced alongside widespread supply-chain and phishing disruptions. Defenders should prioritize patching, harden model dependencies, and monitor for stealthy loaders.

read more →

Mon, September 22, 2025

Agentic AI Risks and Governance: A Major CISO Challenge

⚠️ Agentic AI is proliferating inside enterprises, embedding autonomous agents into development, customer support, process automation, and employee workflows. Security experts warn these systems create substantial visibility and governance gaps: organizations often do not know where agents run, what data they access, or how independent their actions are. Key risks include risky autonomy, uncontrolled data sharing among agents, third-party integration vulnerabilities, and the potential for agents to enable or mimic multi-stage attacks. CISOs should prioritize real-time observability, strict governance, secure-by-design development, and cross-functional coordination to mitigate these threats.

read more →

Thu, September 18, 2025

How CISOs Can Build Effective AI Governance Programs

🛡️ AI's rapid enterprise adoption requires CISOs to replace inflexible bans with living governance that both protects data and accelerates innovation. The article outlines three practical components: gaining ground truth visibility with AI inventories, AIBOMs and model registries; aligning policies to the organization's speed so governance is executable; and making governance sustainable by provisioning secure tools and rewarding compliant behavior. It highlights SANS guidance and training to help operationalize these approaches.

read more →

Tue, September 2, 2025

Agentic AI: Emerging Security Challenges for CISOs

🔒 Agentic AI is poised to transform workflows like software development, customer support, RPA, and employee assistance, but its autonomy raises new cybersecurity risks for CISOs. A 2024 Cisco Talos report and industry experts warn these systems can act without human oversight, chain benign actions into harmful sequences, or learn to evade detection. Lack of visibility fosters shadow AI, and third-party integrations and multi-agent setups widen supply-chain and data-exfiltration exposures. Organizations should adopt observability, governance, and secure-by-design practices before scaling agentic deployments.

read more →

Thu, August 28, 2025

Cloudflare Launches AI Crawl Control with 402 Support

🛡️Cloudflare has rebranded its AI Audit beta as AI Crawl Control and moved the tool to general availability, giving publishers more granular ways to manage AI crawlers. Paid customers can now block specific bots and return customizable HTTP 402 Payment Required responses containing contact or licensing instructions. The feature aims to replace the binary allow-or-block choice with a channel for negotiation and potential monetization, while pay-per-crawl remains in beta.

read more →