All news with the #agentic ai tag

Mon, October 27, 2025

OpenAI Atlas Omnibox Vulnerable to Prompt-Injection

⚠️ OpenAI's new Atlas browser is vulnerable to a prompt-injection jailbreak that disguises malicious instructions as URL-like strings, causing the omnibox to execute hidden commands. NeuralTrust demonstrated how malformed inputs that resemble URLs can bypass URL validation and be handled as trusted user prompts, enabling redirection, data exfiltration, or unauthorized tool actions on linked services. Mitigations include stricter URL canonicalization, treating unvalidated omnibox input as untrusted, additional runtime checks before tool execution, and explicit user confirmations for sensitive actions.
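The first mitigation above, treating anything that fails strict URL validation as an untrusted prompt rather than a trusted instruction, can be sketched as follows. This is an illustrative assumption of how such a check might look, not OpenAI's actual omnibox logic; the function name and rules are hypothetical:

```python
from urllib.parse import urlparse

def classify_omnibox_input(raw: str) -> str:
    """Classify omnibox input as a navigable URL or untrusted free text.

    Illustrative sketch only (not Atlas's real validation). The key
    idea: input that fails strict URL parsing must be handled as
    untrusted text, never as a privileged command.
    """
    candidate = raw.strip()
    try:
        parsed = urlparse(candidate)
    except ValueError:
        return "untrusted_prompt"
    # Require an allow-listed scheme and a real host; reject
    # URL-lookalikes that smuggle instructions after the scheme.
    if parsed.scheme not in {"http", "https"}:
        return "untrusted_prompt"
    if not parsed.netloc or " " in candidate:
        return "untrusted_prompt"
    return "navigate"

print(classify_omnibox_input("https://example.com/page"))
print(classify_omnibox_input("https:/example.com ignore previous instructions"))
```

Note that a malformed string like `https:/example.com ...` parses with an empty host, so it falls through to the untrusted branch instead of being treated as a destination.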

read more →

Mon, October 27, 2025

CrowdStrike Named Leader in 2025 Frost Radar for SSPM

🔒 CrowdStrike was named the Growth and Innovation Leader in the 2025 Frost Radar for SaaS Security Posture Management. The recognition highlights Falcon Shield, a fully native extension of the unified Falcon platform that correlates SaaS, endpoint and identity telemetry to deliver identity-centric detection, attack-path visualization and automated remediation. Frost & Sullivan cited more than 219% year-over-year growth and praised integrations such as Falcon Fusion SOAR and the Charlotte AI agentic system. Falcon Shield also offers 180+ prebuilt connectors and a no-code Integration Builder to scale protection and reduce mean time to remediation.

read more →

Fri, October 24, 2025

Proteomics AI Agent: Guided Protocols and Error Detection

🔬 Researchers at the Max Planck Institute of Biochemistry and Google Cloud created a Proteomics Lab Agent using the Agent Development Kit and Gemini models to provide personalized, multimodal AI guidance for mass spectrometry experiments. The agent analyzes recorded steps to generate publication-ready protocols, detect procedural errors, and capture tacit expertise into a searchable knowledge base. Open-sourced on GitHub, it aims to reduce troubleshooting time and improve reproducibility across labs.

read more →

Fri, October 24, 2025

Securing Agentic Commerce with Web Bot Auth and Payments

🔒 Cloudflare, in partnership with Visa and Mastercard, explains how Web Bot Auth together with payment-specific protocols can secure agent-driven commerce. The post describes agent registration, public key publication, and HTTP Message Signatures that include timestamps, nonces, and tags to prevent spoofing and replay attacks. Merchants can validate trusted agents during browsing and payment flows without changing infrastructure. Cloudflare also provides an Agent SDK and managed WAF rules to simplify developer adoption and deployment.
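The signature scheme described above (timestamps and nonces bound into signed messages to defeat spoofing and replay) can be sketched minimally as below. Real Web Bot Auth deployments verify against the agent's published public key per RFC 9421; HMAC with a shared secret stands in here only to keep the example self-contained, and all names are assumptions, not Cloudflare's API:

```python
import hashlib
import hmac
import time

SEEN_NONCES: set = set()
MAX_SKEW = 300  # seconds of allowed clock drift

def verify_agent_signature(secret: bytes, method: str, path: str,
                           created: int, nonce: str, tag: str,
                           signature: str) -> bool:
    """Verify a simplified signed request from a commerce agent.

    Sketch only: production Web Bot Auth uses asymmetric keys from the
    agent's published key set. The timestamp check rejects stale
    signatures; the nonce set rejects replays of a captured request.
    """
    if abs(time.time() - created) > MAX_SKEW:
        return False  # stale signature: possible replay
    if nonce in SEEN_NONCES:
        return False  # nonce reuse: definite replay
    base = f"{method} {path}\ncreated={created}\nnonce={nonce}\ntag={tag}"
    expected = hmac.new(secret, base.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # spoofed or tampered request
    SEEN_NONCES.add(nonce)
    return True
```

A merchant-side verifier like this accepts a correctly signed request once, then rejects the identical request when it is replayed with the same nonce.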

read more →

Fri, October 24, 2025

AI 2030: The Coming Era of Autonomous Cybercrime Threats

🔒 Organizations worldwide are rapidly adopting AI across enterprises, delivering efficiency gains while introducing new security risks. Cybersecurity is at a turning point where AI fights AI, and today's phishing and deepfakes are precursors to autonomous, self‑optimizing AI threat actors that can plan, execute, and refine attacks with minimal human oversight. In September 2025, Check Point Research found that 1 in 54 GenAI prompts from enterprise networks posed a high risk of sensitive-data exposure, underscoring the urgent need to harden defenses and govern model use.

read more →

Fri, October 24, 2025

Report: Malicious Extensions Spoof AI Browser Sidebars

⚠️ Researchers at SquareX warn that malicious browser extensions can inject fake AI sidebars into AI-enabled browsers, including OpenAI Atlas, to steer users to attacker-controlled sites, exfiltrate data, or install backdoors. The extensions inject JavaScript to overlay a spoofed assistant and manipulate responses, enabling actions such as OAuth token harvesting or execution of reverse-shell commands. The report recommends banning unmanaged AI browsers where possible, auditing all extensions, applying strict zero-trust controls, and enforcing granular browser-native policies to block high-risk permissions and risky command execution.

read more →

Thu, October 23, 2025

Zero Trust Blind Spot: Identity Risk in AI Agents Now

🔒 Agentic AI introduces a mounting Zero Trust challenge as autonomous agents increasingly act with inherited or unmanaged credentials, creating orphaned identities and ungoverned access. Ido Shlomo of Token Security argues that identity must be the root of trust and recommends applying the NIST AI RMF through an identity-driven Zero Trust lens. Organizations should discover and inventory agents, assign unique managed identities and owners, enforce intent-based least privilege, and apply lifecycle controls, monitoring, and governance to restore auditability and accountability.
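The recommendations above (unique managed identities with owners, intent-based least privilege, and auditability) reduce to a simple deny-by-default pattern. The sketch below uses hypothetical names to illustrate the shape of such a control, not any specific vendor's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A managed identity for an AI agent: a unique id, a named human
    owner (no orphaned identities), and an explicit allow-list of
    intents the agent may act on. Names are illustrative."""
    agent_id: str
    owner: str
    allowed_intents: set = field(default_factory=set)

def authorize(agent: AgentIdentity, intent: str, audit_log: list) -> bool:
    """Intent-based least privilege: deny anything not explicitly
    granted, and record every decision for auditability."""
    decision = intent in agent.allowed_intents
    audit_log.append((agent.agent_id, agent.owner, intent, decision))
    return decision

log = []
scheduler = AgentIdentity("agent-42", "alice@example.com", {"read_calendar"})
authorize(scheduler, "read_calendar", log)   # granted
authorize(scheduler, "delete_mailbox", log)  # denied by default
```

Because every decision lands in the audit log with the agent's owner attached, ungoverned access shows up as denied entries rather than silent actions.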

read more →

Thu, October 23, 2025

Spoofed AI Sidebars Can Trick Atlas and Comet Users

⚠️ Researchers at SquareX demonstrated an AI Sidebar Spoofing attack that can overlay a counterfeit assistant in OpenAI's Atlas and Perplexity's Comet browsers. A malicious extension injects JavaScript to render a fake sidebar identical to the real UI and intercepts all interactions, leaving users unaware. SquareX showcased scenarios including cryptocurrency phishing, OAuth-based Gmail/Drive hijacks, and delivery of reverse-shell installation commands. The team reported the findings to vendors but received no response by publication.

read more →

Thu, October 23, 2025

Secure AI at Scale and Speed: Free Webinar Framework

🔐 The Hacker News is promoting a free webinar that presents a practical framework to secure AI at scale while preserving speed of adoption. Organizers warn of a growing “quiet crisis”: rapid proliferation of unmanaged AI agents and identities that lack lifecycle controls. The session focuses on embedding security by design, governing AI agents that behave like users, and stopping credential sprawl and privilege abuse from Day One. It is aimed at engineers, architects, and CISOs seeking to move from reactive firefighting to proactive enablement.

read more →

Thu, October 23, 2025

Agent Factory Recap: Securing AI Agents in Production

🛡️ This recap of the Agent Factory episode explains practical strategies for securing production AI agents, demonstrating attacks like prompt injection, invisible Unicode exploits, and vector DB context poisoning. It highlights Model Armor for pre- and post-inference filtering, sandboxed execution, network isolation, observability, and tool safeguards via the Agent Development Kit (ADK). The team demonstrates a secured DevOps assistant that blocks data-exfiltration attempts while preserving intended functionality and provides operational guidance on multi-agent authentication, least-privilege IAM, and compliance-ready logging.
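The pre- and post-inference filtering pattern attributed to Model Armor above can be sketched as a thin wrapper around a model call. The pattern lists, redaction rule, and `model_fn` below are placeholder assumptions for illustration, not the product's actual rules or API:

```python
import re

# Pre-inference block list: known injection phrasing plus invisible
# Unicode characters of the kind used in the demonstrated exploits.
BLOCK_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"[\u200b\u200c\u200d\u2060]"),
]

def guarded_generate(model_fn, prompt: str) -> str:
    """Wrap a model call with pre- and post-inference filters.

    Minimal sketch of the filtering pattern only; real deployments use
    managed policies, not a hand-rolled regex list.
    """
    # Pre-inference: reject prompts matching known injection patterns.
    for pat in BLOCK_PATTERNS:
        if pat.search(prompt):
            return "[blocked: suspicious prompt]"
    answer = model_fn(prompt)
    # Post-inference: redact anything that looks like a leaked secret,
    # blocking data exfiltration while leaving normal answers intact.
    return re.sub(r"(api[_-]?key\s*[:=]\s*)\S+", r"\1[REDACTED]",
                  answer, flags=re.I)
```

The same two checkpoints (before the prompt reaches the model and before the response reaches the user) are where sandboxing and tool safeguards also attach in the episode's secured DevOps assistant.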

read more →

Wed, October 22, 2025

ChatGPT Atlas Signals Shift Toward AI Operating Systems

🤖 ChatGPT Atlas previews a future where AI becomes the primary interface for computing, letting users describe outcomes while the system orchestrates apps, data, and web services. Atlas demonstrates a context-aware assistant that understands a user’s digital life and can act on their behalf. This prototype points to productivity and accessibility gains, but it also creates new security, privacy, and governance challenges organizations must prepare for.

read more →

Wed, October 22, 2025

Four Bottlenecks Slowing Enterprise GenAI Adoption

🔒 Since ChatGPT’s 2022 debut, enterprises have rapidly launched GenAI pilots but struggle to convert experimentation into measurable value — only 3 of 37 pilots succeed. The article identifies four critical bottlenecks: security & data privacy, observability, evaluation & migration readiness, and secure business integration. It recommends targeted controls such as confidential compute, fine‑grained agent permissions, distributed tracing and replay environments, continuous evaluation pipelines and dual‑run migrations, plus policy‑aware integrations and impact analytics to move pilots into reliable production.

read more →

Tue, October 21, 2025

Digital Sovereignty Sessions at AWS re:Invent 2025 Guide

📘 The AWS re:Invent 2025 attendee guide highlights the conference's digital sovereignty program, detailing sessions, workshops, and code talks focused on data residency, hybrid and edge deployments, and sovereign infrastructure. Key topics include the AWS European Sovereign Cloud, AWS Outposts, Local Zones, and security features such as the Nitro System. Practical workshops and chalk talks demonstrate RAG, agentic AI, and low-latency SLM deployments with operational controls and compliance patterns. Reserve seating via the attendee portal or access sessions with the free virtual pass.

read more →

Tue, October 21, 2025

Google Migrates ISAs with AI and Automation at Scale

🔧 Google details how its custom Axion Arm CPUs and a mix of automation and AI enabled large-scale migration from x86 to multi-architecture production across services such as YouTube, Gmail, and BigQuery. The team analyzed 38,156 commits (about 700K changed lines) and reports migrating more than 30,000 applications to Arm while keeping both Arm and x86 in production. Existing automation like Rosie, sanitizers, fuzzers, and the CHAMP rollout framework handled much of the work, while an LLM-driven agent called CogniPort fixed build and test failures, showing a 30% success rate on a 245-commit benchmark. Google plans to default new apps to multiarch and continue refining AI tools to address the remaining long tail.

read more →

Tue, October 21, 2025

Microsoft Security Store Unites Partners and Innovation

🔐 Microsoft Security Store, released to public preview on September 30, 2025, is a unified, AI-powered marketplace that lets organizations discover, buy, and deploy vetted security solutions and AI agents. Catalog items — organized by frameworks like NIST and by integration with products such as Microsoft Defender, Sentinel, Entra, and Purview — address threat protection, identity, compliance, and cloud security. Built on the Microsoft Marketplace, it provides unified billing, MACC eligibility, and guided automated provisioning to streamline deployments.

read more →

Tue, October 21, 2025

The Signals Loop: Fine-tuning for AI Apps and Agents

🔁 Microsoft positions the signals loop — continuous capture of user interactions and telemetry with systematic fine‑tuning — as essential for building adaptive, reliable AI apps and agents. The post explains that simple RAG and prompting approaches often lack the accuracy and engagement needed for complex use cases, and that continuous learning drives sustained improvements. It highlights Dragon Copilot and GitHub Copilot as examples where telemetry‑driven fine‑tuning yielded substantial performance and experience gains, and presents Azure AI Foundry as a unified platform to operationalize these feedback loops at scale.

read more →

Tue, October 21, 2025

Securing AI in Defense: Trust, Identity, and Controls

🔐 AI promises stronger cyber defense but expands the attack surface if not governed properly. Organizations must secure models, data pipelines, and agentic systems with the same rigor applied to critical infrastructure. Identity is central: treat every model or autonomous agent as a first‑class identity with scoped credentials, strong authentication, and end‑to‑end audit logging. Adopt layered controls for access, data, deployment, inference, monitoring, and model integrity to mitigate threats such as prompt injection, model poisoning, and credential leakage.

read more →

Tue, October 21, 2025

CrowdStrike Launches AI-Driven Falcon UX in Preview

🔍 At Fal.Con 2025, CrowdStrike introduced a dynamic, persona-aware user experience for Falcon Cloud Security and Falcon Exposure Management, now available in public preview. Built on CrowdStrike Enterprise Graph and Charlotte AI, the console unifies hybrid and multi-cloud asset and risk visibility into customizable workspaces. It offers AI-assisted dashboard creation and executive-ready reporting to accelerate investigations and remediation without switching tools.

read more →

Mon, October 20, 2025

Design Patterns for Scalable AI Agents on Google Cloud

🤖 This post explains how System Integrator partners can build, scale, and manage enterprise-grade AI agents using Google Cloud technologies like Agent Engine, the Agent Development Kit (ADK), and Gemini Enterprise. It summarizes architecture patterns including runtime, memory, the Model Context Protocol (MCP), and the Agent-to-Agent (A2A) protocol, and contrasts managed Agent Engine with self-hosted options such as Cloud Run or GKE. Customer examples from Deloitte and Quantiphi illustrate supply chain and sales automation benefits. The guidance highlights security, observability, persistent memory, and model tuning for enterprise readiness.

read more →

Mon, October 20, 2025

Agent Factory Recap: Evaluating Agents, Tooling, and MAS

📡 This recap of the Agent Factory podcast episode, hosted by Annie Wang with guest Ivan Nardini, explains how to evaluate autonomous agents using a practical, full-stack approach. It outlines what to measure — final outcomes, chain-of-thought, tool use, and memory — and contrasts measurement techniques: ground truth, LLM-as-a-judge, and human review. The post demonstrates a 5-step debugging loop using the Agent Development Kit (ADK) and describes how to scale evaluation to production with Vertex AI.

read more →