Tag Banner

All news with the #ai supply chain tag

Wed, December 10, 2025

2026 NDAA: Cybersecurity Changes for DoD Mobile and AI

🛡️ The compromise 2026 NDAA imposes sweeping new cybersecurity mandates on the Department of Defense, including contract requirements to harden mobile phones used by senior officials and enhanced AI/ML security and procurement standards. It sets timelines (90–180 days) for mobile protections and AI policies, ties requirements to industry frameworks such as the NIST SP 800 series and CMMC, and envisions workforce training and sandbox environments. The law also funds roughly $15.1 billion in cyber activities and adds provisions on spyware, biologics data risks, and industrial base harmonization.

read more →

Sat, December 6, 2025

Researchers Find 30+ Flaws in AI IDEs, Enabling Data Theft

⚠️ Researchers disclosed more than 30 vulnerabilities in AI-integrated IDEs in a report dubbed IDEsaster by Ari Marzouk (MaccariTA). The issues chain prompt injection with auto-approved agent tooling and legitimate IDE features to achieve data exfiltration and remote code execution across products such as Cursor, GitHub Copilot, Zed.dev, and others. Of the findings, 24 received CVE identifiers; exploit examples include workspace writes that cause outbound requests, settings hijacks that point executable paths to attacker binaries, and multi-root overrides that trigger execution. Researchers advise using AI agents only with trusted projects, applying least privilege to tool access, hardening prompts, and sandboxing risky operations.
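
The settings-hijack technique can be sketched as a defensive check: a minimal audit that flags workspace settings whose values point executable paths outside the project root. The key-matching heuristic and the "outside the workspace" rule are assumptions for illustration, not the researchers' actual detection logic.

```python
# Illustrative heuristic for the settings-hijack pattern: a workspace
# settings file that points an "...executablePath" entry at a binary
# outside the project tree. Key names and the containment rule are
# assumptions for this sketch, not the report's rule set.
import json
from pathlib import Path

def suspicious_exec_overrides(workspace: Path) -> list[str]:
    """Return settings keys whose value resolves outside the workspace."""
    findings: list[str] = []
    settings_file = workspace / ".vscode" / "settings.json"
    if not settings_file.is_file():
        return findings
    settings = json.loads(settings_file.read_text())
    root = workspace.resolve()
    for key, value in settings.items():
        if "executablePath" in key and isinstance(value, str):
            if not Path(value).resolve().is_relative_to(root):
                findings.append(key)
    return findings
```

Running such a check before opening an untrusted project is one cheap way to apply the report's "trusted projects only" advice.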

read more →

Thu, December 4, 2025

US, International Agencies Issue AI Guidance for OT

🛡️ US and allied cyber agencies have published joint guidance to help critical infrastructure operators incorporate AI safely into operational technology (OT). Developed by CISA with the Australian Signals Directorate and input from the UK's NCSC, the document covers ML, LLMs and AI agents while remaining applicable to traditional automation systems. It recommends assessing AI risks, protecting sensitive OT data, demanding vendor transparency on embedded AI and supply chains, establishing governance and testing in controlled environments, and maintaining human-in-the-loop oversight aligned with existing cybersecurity frameworks.

read more →

Wed, December 3, 2025

RCE Flaw in OpenAI's Codex CLI Elevates Dev Risks Globally

⚠️ Researchers from Check Point disclosed a critical remote code execution vulnerability in OpenAI's Codex CLI that allowed a project-local .env file to override the CODEX_HOME environment variable and load attacker-controlled MCP servers. By adding a malicious mcp_servers entry in a repo-local .codex/config.toml, an attacker with commit or PR access could cause Codex to execute commands silently whenever a developer runs the tool. OpenAI addressed the issue in Codex CLI v0.23.0 by blocking project-local redirection of CODEX_HOME, but the flaw demonstrates how automated LLM-powered developer tools can expand the attack surface and enable persistent supply-chain backdoors.

read more →

Tue, December 2, 2025

Key Questions CISOs Must Ask About AI-Powered Security

🔒 CISOs face rising threats as adversaries weaponize AI — from deepfakes and sophisticated phishing to prompt-injection attacks and data leakage via unsanctioned tools. Vendors and startups are rapidly embedding AI into detection, triage, automation, and agentic capabilities; IBM’s 2025 report found broad AI deployment cut recovery time by 80 days and reduced breach costs by $1.9M. Before engaging vendors, security leaders must assess attack surface expansion, data protection, integration, metrics, workforce impact, and vendor trustworthiness.

read more →

Mon, December 1, 2025

Replicate Joins Cloudflare to Build AI Infrastructure

🚀 Replicate is now part of Cloudflare, bringing its model packaging and serving tools into Cloudflare’s global network. Since 2019 Replicate has shipped Cog and a hosted inference platform that made running research models accessible and scaled during the Stable Diffusion surge. Joining Cloudflare pairs those abstractions with network primitives like Workers, R2, and Durable Objects to enable edge model execution, instant serverless pipelines, and streaming integrations such as WebRTC while supporting developers and researchers.

read more →

Fri, November 21, 2025

GenAI GRC: Moving Supply Chain Risk to the Boardroom

🔒 Chief information security officers face a new class of supply-chain risk driven by generative AI. Traditional GRC — quarterly questionnaires and compliance reports — now lags threats like shadow AI and model drift, which are invisible to periodic audits. The author recommends a GenAI-powered GRC: contextual intelligence, continuous monitoring via a digital trust ledger, and automated regulatory synthesis to convert technical exposure into board-ready resilience metrics.

read more →

Fri, November 21, 2025

Industrialization of Cybercrime: AI, Speed, Defense

🤖 FortiGuard Labs warns that by 2026 cybercrime will transition from ad hoc innovation to industrialized throughput, driven by AI, automation, and a mature supply chain. Attackers will automate reconnaissance, lateral movement, and data monetization, shrinking attack timelines from days to minutes. Defenders must adopt machine-speed operations, continuous threat exposure management, and identity-centric controls to compress detection and response. Global collaboration and targeted disruption will be essential to deter large-scale criminal infrastructure.

read more →

Thu, November 20, 2025

AI Risk Guide: Assessing GenAI, Vendors and Threats

⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.

read more →

Wed, November 19, 2025

CIO: Embed Security into AI from Day One at Scale

🔐 Meerah Rajavel, CIO at Palo Alto Networks, argues that security must be integrated into AI from the outset rather than tacked on later. She frames AI value around three pillars — velocity, efficiency and experience — and describes how Panda AI transformed employee support, automating 72% of IT requests. Rajavel warns that models and data are primary attack surfaces and urges supply-chain, runtime and prompt protections, noting the company embeds these controls in Cortex XDR.

read more →

Mon, November 17, 2025

AI-Driven Espionage Campaign Allegedly Targets Firms

🤖 Anthropic reported that roughly 30 organizations—including major technology firms, financial institutions, chemical companies and government agencies—were targeted in what it describes as an AI-powered espionage campaign. The company attributes the activity to the actor it calls GTG-1002, links the group to the Chinese state, and says attackers manipulated its developer tool Claude Code to largely autonomously launch infiltration attempts. Several security researchers have publicly questioned the asserted level of autonomy and criticized Anthropic for not publishing indicators of compromise or detailed forensic evidence.

read more →

Fri, November 14, 2025

Copy-Paste RCE Flaw Impacts Major AI Inference Servers

🔒 Cybersecurity researchers disclosed a chain of remote code execution (RCE) vulnerabilities affecting AI inference frameworks from Meta, NVIDIA, Microsoft and open-source projects such as vLLM and SGLang. The flaws stem from reused code that called ZeroMQ's recv_pyobj() and passed data directly into Python's pickle.loads(), enabling unauthenticated RCE over exposed sockets. Vendors have released patches replacing unsafe pickle usage with JSON-based serialization and adding authentication and transport protections. Operators are urged to upgrade to patched releases, harden ZMQ channels, restrict network exposure, and avoid deserializing untrusted data.
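
The vulnerable pattern and its fix fit in a few lines. pyzmq's recv_pyobj() is essentially recv() followed by pickle.loads(), and a pickle payload can name any callable to run on load, which is why deserializing untrusted bytes is code execution. The snippet below is an illustration of that mechanism, not the vendors' actual patch code.

```python
# Why pickle.loads() on untrusted bytes is RCE: a payload's __reduce__
# can name any callable to invoke during unpickling. JSON, by contrast,
# only ever reconstructs data. Illustrative only -- not vendor code.
import json
import pickle

class Exploit:
    def __reduce__(self):
        # print() stands in for an arbitrary shell command here.
        return (print, ("arbitrary code ran during pickle.loads",))

wire_bytes = pickle.dumps(Exploit())
pickle.loads(wire_bytes)  # the callable fires: attacker wins

# Hardened equivalent of the patches: JSON carries only data.
safe_wire = json.dumps({"op": "generate", "prompt": "hi"}).encode()
request = json.loads(safe_wire)  # plain dict, nothing executes
```

This is why the fixes swapped serialization formats rather than trying to sanitize pickle input: there is no safe way to unpickle attacker-controlled bytes.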

read more →

Thu, November 13, 2025

Rogue MCP Servers Can Compromise Cursor's Embedded Browser

⚠️ Security researchers demonstrated that a rogue Model Context Protocol (MCP) server can inject JavaScript into the built-in browser of Cursor, an AI-powered code editor, replacing pages with attacker-controlled content to harvest credentials. The injected code can run without URL changes and may access session cookies. Because Cursor is a Visual Studio Code fork without the same integrity checks, MCP servers inherit IDE privileges, enabling broader workstation compromise.

read more →

Wed, November 12, 2025

Secure AI by Design: A Policy Roadmap for Organizations

🛡️ In just a few years, AI has shifted from futuristic innovation to core business infrastructure, yet security practices have not kept pace. Palo Alto Networks presents a Secure AI by Design Policy Roadmap that defines the AI attack surface and prescribes actionable measures across external tools, agents, applications, and infrastructure. The Roadmap aligns with recent U.S. policy moves — including the June 2025 Executive Order and the July 2025 White House AI Action Plan — and calls for purpose-built defenses rather than retrofitting legacy controls.

read more →

Thu, November 6, 2025

CIO’s First Principles: A Reference Guide to Securing AI

🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.

read more →

Tue, October 28, 2025

Prisma AIRS 2.0: Unified Platform for Secure AI Agents

🔒 Prisma AIRS 2.0 is a unified AI security platform that delivers end-to-end visibility, risk assessment and automated defenses across agents, models and development pipelines. It consolidates Protect AI capabilities to provide posture and runtime protections for AI agents, model scanning and API-first controls for MLOps. The platform also offers continuous, autonomous red teaming and a managed MCP Server to embed threat detection into workflows.

read more →

Thu, October 23, 2025

Hugging Face and VirusTotal: Integrating Security Insights

🔒 VirusTotal and Hugging Face have announced a collaboration to surface security insights directly within the Hugging Face platform. When browsing model files, datasets, or related artifacts, users will now see multi‑scanner results including VirusTotal detections and links to public reports so potential risks can be reviewed before downloading. VirusTotal is also enhancing its analysis portfolio with AI-driven tools such as Code Insight and format‑aware scanners (picklescan, safepickle, ModelScan) to highlight unsafe deserialization flows and other risky patterns. The integration aims to increase visibility across the AI supply chain and help researchers, developers, and defenders build more secure models and workflows.
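
The format-aware scanners mentioned above work by inspecting a pickle's opcode stream rather than executing it. A rough sketch of the idea, using only the standard library: flag opcodes that can import or invoke callables. The deny-list here is an assumed sample for illustration, not picklescan's or ModelScan's real rule set.

```python
# Sketch of a picklescan-style check: statically walk a pickle's opcodes
# and flag the ones that resolve or call objects (the RCE-capable ones).
# DANGEROUS_OPS is an illustrative subset, not the tools' actual rules.
import io
import pickle
import pickletools

DANGEROUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def risky_opcodes(payload: bytes) -> set[str]:
    """Return the dangerous opcode names present in a pickle payload."""
    return {op.name
            for op, _, _ in pickletools.genops(io.BytesIO(payload))
            if op.name in DANGEROUS_OPS}

benign = pickle.dumps({"weights": [0.1, 0.2]})       # pure data: clean
malicious = pickle.dumps(__import__("os").getcwd)    # callable reference
```

Because the scan never unpickles anything, it is safe to run on untrusted model files, which is what makes this class of check usable at hub scale.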

read more →

Tue, October 14, 2025

Microsoft Advances Open Standards for Frontier AI Scale

🔧 Microsoft details OCP contributions to accelerate open-source infrastructure for frontier-scale AI, focusing on power, cooling, networking, security, and sustainability. It highlights innovations such as solid-state transformers, a power-stabilization paper with OpenAI and NVIDIA, and a next-generation HXU for liquid cooling. Networking efforts include ESUN and scale-up Ethernet workstreams, while security contributions introduce Caliptra 2.1, Adams Bridge 2.0, and L.O.C.K. The post also advances fleet lifecycle management, carbon accounting, and waste-heat reuse for globally deployable AI datacenters.

read more →

Mon, October 13, 2025

Agile, Fungible Data Centers for the AI Era: Standards

🚀 Google outlines designs for agile, fungible data centers to meet explosive AI demand, advocating modular, interoperable architectures and late-binding of facility resources. It highlights Project Deschutes liquid cooling, ±400 VDC power proposals with Mt. Diablo side-car designs, and open efforts like Caliptra 2.0 and OCP L.O.C.K. The post calls for community standards across power, cooling, telemetry, networking, and security to improve resilience, sustainability, and operational flexibility.

read more →

Fri, October 10, 2025

Security Risks of Vibe Coding and LLM Developer Assistants

🛡️ AI developer assistants accelerate coding but introduce significant security risks across generated code, configurations, and development tools. Studies show models now compile code far more often yet still produce many OWASP- and MITRE-class vulnerabilities, and real incidents (for example Tea, Enrichlead, and the Nx compromise) highlight practical consequences. Effective defenses include automated SAST, security-aware system prompts, human code review, strict agent access controls, and developer training.
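
The automated-SAST recommendation can be sketched in miniature: parse generated code into an AST and flag calls that commonly back OWASP/MITRE findings. The deny-list below is a small assumed sample for illustration, far from a production rule set.

```python
# Toy SAST pass in the spirit of the article's advice: walk generated
# code's AST and report risky call sites before the code is accepted.
# FLAGGED_CALLS is an assumed sample deny-list, not a complete one.
import ast

FLAGGED_CALLS = {"eval", "exec", "os.system", "pickle.loads", "subprocess.call"}

def call_name(node: ast.Call) -> str:
    """Best-effort dotted name of a call target (e.g. 'os.system')."""
    f = node.func
    if isinstance(f, ast.Name):
        return f.id
    if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
        return f"{f.value.id}.{f.attr}"
    return ""

def audit(source: str) -> list[tuple[int, str]]:
    """Return (line, call) pairs for flagged calls in the source."""
    return [(node.lineno, name)
            for node in ast.walk(ast.parse(source))
            if isinstance(node, ast.Call)
            and (name := call_name(node)) in FLAGGED_CALLS]
```

Wiring a check like this into the agent's write path, alongside human review, is one concrete way to apply "automated SAST" to assistant output rather than trusting whatever compiles.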

read more →