< ciso brief />

All news with #ai application security tag

26 articles

Architecting AI Infrastructure for U.S. Winter Olympians

🤖 In collaboration with Google DeepMind, the team built an AI pose-estimation pipeline that converts a single 2D video feed into a 63-joint 3D biomechanical model for U.S. Olympians. The system uses learned temporal priors to infer occluded joints and delivers near-instant results by running models on statically provisioned TPU slices. Orchestration, scaling, and security are managed with Vertex AI and VPC private endpoints.

LangChain path traversal bug raises AI pipeline risks

🛡️ Cyera researchers warn that insufficient input validation in AI orchestration tools can expose sensitive enterprise data. A newly disclosed path traversal flaw in LangChain (CVE-2026-34070) lets crafted input resolve paths outside intended directories and read arbitrary host files. Cyera analyzed it alongside an earlier unsafe deserialization issue (CVE-2025-68664) and a SQL injection affecting LangGraph checkpointing (CVE-2025-67644), showing how each flaw maps to a distinct data exposure. Maintainers have released fixes; organizations should apply patches immediately and adopt allowlists, sandboxing, safe deserialization practices, and parameterized queries.
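The allowlisting advice above can be sketched generically. This is a hypothetical helper, not LangChain's actual patch: resolve any user-influenced path and confirm it stays under an approved base directory before reading.

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything that escapes base_dir."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # After normalization, the candidate must be base_dir itself or live under it.
    if candidate != base and base not in candidate.parents:
        raise ValueError(f"path escapes allowed directory: {user_path}")
    return candidate
```

A loader would call `safe_resolve` before `open()`; traversal payloads such as `../../etc/passwd` fail the containment check after `resolve()` normalizes the final path.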

FM Logistic Optimizes Warehouse Routing with AlphaEvolve

🚚 FM Logistic used AlphaEvolve on Google Cloud to tackle large-scale warehouse routing by applying evolutionary code generation powered by Gemini models. Starting from an existing stepwise routing baseline, the agent generated, scored, and iterated thousands of candidate algorithms against a representative dataset to minimize average travel distance per pick while avoiding operational failures. The adapted routing logic delivered a 10.4% efficiency improvement and reduced annual warehouse travel by more than 15,000 km.

CursorJack: MCP Deeplink Risk in AI Development Environment

⚠️ Proofpoint researchers disclosed CursorJack, a technique that abuses Cursor's Model Context Protocol (MCP) deeplinks to embed installation configurations that can lead to local code execution or the installation of remote malicious servers. Exploitation requires a user to click a crafted deeplink and approve an installation prompt; success depends on system configuration and user privileges, and no zero‑click vector was observed. Proofpoint published a proof‑of‑concept, notified Cursor, and recommends verifying MCP sources, tightening permission controls, and improving visibility into installation parameters to mitigate social‑engineering risks.

Detecting and Responding to Prompt Abuse in AI Tools

🔍 This post, the second in Microsoft's AI Application Security series, moves from planning to practical detection and response for prompt abuse. It describes common attack types — direct prompt override, extractive abuse targeting sensitive inputs, and indirect prompt injection via hidden instructions such as URL fragments — and why these are hard to spot without telemetry. The article provides a stepwise detection and incident response playbook and maps mitigations to Microsoft tools so teams can log interactions, sanitize inputs, and contain incidents.
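As a minimal illustration of the telemetry-driven detection the post advocates (the signatures below are hypothetical examples, not Microsoft's detection logic), a first-pass filter can log prompts matching known override phrasings before deeper analysis:

```python
import re

# Hypothetical signatures for direct prompt-override attempts; production
# detection would combine telemetry, classifiers, and human review.
OVERRIDE_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all|any|the|previous|prior) instructions",
        r"disregard (the|your) (system|previous) prompt",
        r"reveal (the|your) (system prompt|hidden instructions)",
    )
]

def flag_prompt(text: str) -> list[str]:
    """Return the signature patterns a prompt matches, for logging and triage."""
    return [p.pattern for p in OVERRIDE_PATTERNS if p.search(text)]
```

Matches would feed an alerting pipeline rather than block outright, since benign prompts can trip simple patterns; that is why the article pairs detection with an incident response playbook.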

Cloudflare One: Unified Data Security Across Surfaces

🔐 Cloudflare One reframes enterprise security around protecting sensitive data across networks, endpoints, SaaS, and AI interfaces. The post introduces new controls — clipboard restrictions for browser-based RDP, operation-level mapping surfaced in logs, on-device Endpoint DLP in the Cloudflare One Client, and Microsoft 365 Copilot scanning via API CASB. Together these features aim to give consistent visibility and enforcement so policy follows data rather than product boundaries.

Shai-Hulud-style npm worm strikes CI and AI tooling

🐛 Socket researchers disclosed an active npm supply-chain campaign dubbed SANDWORM_MODE that leverages typosquatted packages to infiltrate developer machines, CI pipelines, and AI coding assistants. The malicious packages (at least 19 observed) harvest npm and GitHub tokens, environment secrets, and cloud keys, then use stolen credentials to modify repositories and amplify via weaponized GitHub Actions. The campaign also injects a malicious MCP server into AI tool configs to enable prompt-injection exfiltration, includes a dormant polymorphic engine, and implements a configurable 'dead switch' that can wipe home directories.

Microsoft: Copilot Bug Summarizes Confidential Emails

⚠️ Microsoft says a bug in Microsoft 365 Copilot has been summarizing confidential emails since late January, bypassing organizations' configured data loss prevention (DLP) safeguards. The flaw affected the Copilot 'work tab' chat and improperly read messages stored in Sent Items and Drafts, including those with sensitivity labels intended to block automated processing. Microsoft attributes the behavior to a code error, began rolling out a fix in early February, and is monitoring deployment while contacting a subset of impacted users. The company has not yet disclosed the full scope or number of affected organizations and has flagged the incident as an advisory.

Infostealer Targets OpenClaw, Exfiltrating AI Agent Data

🔐 Security researchers have documented an infostealer attack that exposed sensitive files from local AI assistants, specifically OpenClaw. Hudson Rock reported the malware harvested configuration and key material—including openclaw.json, device.json, and agent memory files—allowing token theft, private key access, and capture of users' operational context. The incident underscores risks from plaintext secrets and permissive defaults in agentic tools.
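One concrete hygiene check against the plaintext-secret exposure described above (a generic sketch; the file names come from the report, but the helper itself is hypothetical): flag agent config files readable by anyone other than the owner.

```python
import os
import stat

def world_readable(path: str) -> bool:
    """True if the file grants read permission to group or other users."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

# Files named in the Hudson Rock report; the paths are illustrative.
SENSITIVE = ["openclaw.json", "device.json"]

def audit(paths):
    """Return the subset of existing files whose permissions are too loose."""
    return [p for p in paths if os.path.exists(p) and world_readable(p)]
```

Permission checks do not stop an infostealer running as the same user, but they reduce incidental exposure and surface the permissive defaults the incident highlights.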

What CISOs Need to Know About OpenClaw Risks and Mitigations

⚠️ OpenClaw is an open‑source AI‑agent orchestration tool that runs locally, integrates with common chat apps, and can use any LLM backend, driving rapid adoption. Researchers have found widespread exposed instances, critical authentication‑bypass flaws, plaintext credentials in the ClawHub marketplace, and hundreds of malicious skills enabling credential theft and remote code execution. Experts urge enterprises to ban or tightly restrict use, and to enforce least privilege, MFA, endpoint segmentation, and continuous telemetry if pilots are allowed.

Amazon Bedrock AgentCore Browser Adds Browser Profiles

🔐 Amazon Bedrock AgentCore Browser now supports browser profiles that persist authentication state across sessions. You can authenticate once, save cookies and local storage to a profile, and reuse it to keep agents logged in without repeated manual logins. Profiles offer flexible read-only and persistent modes and enable parallel sessions to share authentication, cutting session setup from minutes to tens of seconds for high-volume automated workflows.
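The profile mechanism — authenticate once, persist state, rehydrate later sessions — can be sketched with plain JSON. These helpers are purely illustrative; they are not the AgentCore API:

```python
import json
from pathlib import Path

def save_profile(path: str, cookies: dict, local_storage: dict) -> None:
    """Persist session state so later agent sessions can skip the login flow."""
    state = {"cookies": cookies, "local_storage": local_storage}
    Path(path).write_text(json.dumps(state))

def load_profile(path: str) -> dict:
    """Rehydrate a saved profile; callers inject it into a fresh browser session."""
    return json.loads(Path(path).read_text())
```

In these terms, read-only mode maps to loading a profile without writing back, persistent mode writes updated state at session end, and parallel sessions simply load the same file.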

Ship Production-Ready AI and Multimodal Workshops Roadshow

🚀 Google Cloud is launching a two-day roadshow across North America focused on building production-grade and multimodal AI systems. Day 1, the Production-Ready AI Intensive, covers stability, security, and scalable architecture including multi-agent orchestration with the Agent Development Kit (ADK), A2A protocols on Cloud Run, automated evaluation via the Vertex AI Gen AI Evaluation SDK, and defenses like Model Armor and Sensitive Data Protection. Day 2, the Multimodal Frontier, is a hands-on, code-first workshop on real-time perception and interaction: simultaneous audio/video processing, Graph RAG with Spanner Graph, Persistent Memory Banks, and the Gemini Live API for zero-latency, interruptible agents. Sessions include labs, credits, and networking; seats are limited.

Chainlit flaws enable cloud key leaks and SSRF risks

⚠️ Chainlit, a widely used open-source framework for building conversational AI chatbots, contained high-severity vulnerabilities that can expose arbitrary files and permit server-side request forgery, enabling data theft and lateral movement within compromised environments. Zafran Security identified two primary issues: CVE-2026-22218 (arbitrary file read, CVSS 7.1) and CVE-2026-22219 (SSRF with SQLAlchemy, CVSS 8.3). Both were responsibly disclosed on November 23, 2025 and patched in Chainlit 2.9.4 on December 24, 2025. Administrators should upgrade, audit deployments for misuse, and rotate any potentially exposed credentials.
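The audit advice above pairs naturally with egress validation against SSRF. A minimal sketch (the allowlisted host is hypothetical): accept only http(s) URLs whose hostname is explicitly approved, and reject raw IP literals outright so loopback and cloud-metadata addresses cannot slip through.

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical per-deployment allowlist

def is_safe_url(url: str) -> bool:
    """Allow only http(s) URLs whose host appears on an explicit allowlist."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = (parsed.hostname or "").lower()
    try:
        ipaddress.ip_address(host)
        return False  # raw IPs (127.0.0.1, 169.254.169.254, ...) are rejected
    except ValueError:
        pass  # not an IP literal; fall through to the hostname allowlist
    return host in ALLOWED_HOSTS
```

A hostname check alone does not defeat DNS rebinding, so a hardened deployment would also re-validate the resolved address at connect time.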

Chainlit Vulnerabilities Permit File Reads and SSRF Access

⚠️ Security researchers disclosed two critical vulnerabilities in the Python-based AI app framework Chainlit that allow unauthenticated attackers to read arbitrary server files and trigger SSRF requests. The flaws (CVE-2026-22218 and CVE-2026-22219), fixed in Chainlit 2.9.4, stem from an unvalidated custom Element type exposing path and URL properties. Exploits can leak environment variables, API keys, LLM prompts, and cloud credentials, enabling lateral movement and broader compromise.

Securing MCPs: Control of Agentic AI Tool Access and Risks

🔒 This webinar explains why MCP servers — the control plane that governs what agentic AI can execute — are a critical but often overlooked security boundary. Drawing on recent incidents such as CVE-2025-6514, the session shows how trusted proxies and misconfigurations can convert automation into a remote code execution vector at scale. Participants will learn to detect shadow API keys, audit agent actions, and apply practical controls to secure agentic AI without slowing development.

Optimizing AlloyDB AI Text-to-SQL Accuracy in Production

🔍 Google Cloud describes how the AlloyDB AI natural language API translates user questions into SQL and how to tune it for near‑perfect accuracy in enterprise applications. The post outlines a hill‑climbing workflow that improves results by adding descriptive (table and column) and prescriptive (templates, facets) context, plus an automated value index for private terms. It highlights capabilities for business relevance, explainability, and verified results, and explains agent integration options such as the MCP Toolbox and Gemini Enterprise.

AlphaEvolve on Google Cloud: Gemini-driven evolution

🔬 AlphaEvolve is a Gemini-powered coding agent on Google Cloud that automates evolutionary optimization of algorithms for complex, code-defined problems. It takes a problem specification, evaluation logic, and a compile-ready seed program, then uses Gemini models to propose mutated code variants and an evolutionary framework to select and refine the best candidates. Early internal results at Google demonstrate measurable efficiency improvements, and the AlphaEvolve Service API is available through a private Early Access Program for interested organizations.
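The propose-score-select loop AlphaEvolve applies to code can be illustrated with a toy numeric stand-in (random Gaussian mutation plays the role of a Gemini-proposed code variant; every name and number here is illustrative):

```python
import random

def evolve(score, seed, mutate, generations=200, pop_size=20, keep=5):
    """Generic evolutionary loop: propose mutations, score them, keep the best."""
    population = [seed]
    for _ in range(generations):
        proposals = [mutate(random.choice(population)) for _ in range(pop_size)]
        population = sorted(population + proposals, key=score)[:keep]
    return population[0]

random.seed(0)
best = evolve(
    score=lambda x: (x - 3.21) ** 2,            # evaluation logic: lower is better
    seed=0.0,                                   # analogue of the seed program
    mutate=lambda x: x + random.gauss(0, 0.5),  # stand-in for an LLM code mutation
)
```

In the real service, `seed` is a compile-ready program, `mutate` is a Gemini model rewriting code, and `score` is the user-supplied evaluation logic — but the selection pressure works the same way.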

AWS preview: Fully managed MCP servers for EKS and ECS

🔔 Amazon EKS and ECS now offer fully managed MCP servers in preview, providing a cloud-hosted Model Context Protocol endpoint to enrich AI-powered development and operations. These servers remove local installation and maintenance, and deliver enterprise features such as automatic updates and patching, centralized security via AWS IAM, and audit logging through AWS CloudTrail. Developers can connect AI coding assistants like Kiro CLI, Cursor, or Cline for context-aware code generation and debugging, while operators gain access to a knowledge base of best practices and troubleshooting guidance.

Rogue MCP Servers Can Compromise Cursor's Embedded Browser

⚠️ Security researchers demonstrated that a rogue Model Context Protocol (MCP) server can inject JavaScript into the built-in browser of Cursor, an AI-powered code editor, replacing pages with attacker-controlled content to harvest credentials. The injected code can run without URL changes and may access session cookies. Because Cursor is a Visual Studio Code fork without the same integrity checks, MCP servers inherit IDE privileges, enabling broader workstation compromise.

Microsoft accelerates migration and modernization with AI

🔧 Microsoft outlined a set of agentic AI tools to speed migration and modernization across applications and data. GitHub Copilot now automates Java and .NET upgrades and end-to-end app modernization flows, while Azure Migrate adds AI-driven guidance, connected Copilot workflows, and broader application-awareness. The Azure Accelerate program pairs expert deployment support and funding to reduce friction and help teams move projects faster.