All news with #agentic ai tag
Wed, December 10, 2025
Building a security-first culture for agentic AI enterprises
🔒 Microsoft argues that as organizations adopt agentic AI, security must be a strategic priority that enables growth, trust, and continued innovation. The post identifies risks such as oversharing, data leakage, compliance gaps, and agent sprawl, and recommends three pillars: prepare for AI and agent integration, strengthen organization-wide skilling, and foster a security-first culture. It points to resources like Microsoft’s AI adoption model, Microsoft Learn, and the AI Skills Navigator to help operationalize these steps.
Wed, December 10, 2025
Google Named Leader in IDC Hyperscaler Marketplaces 2025
🚀 Google is recognized as a Leader in the 2025 IDC MarketScape for Worldwide Hyperscaler Marketplaces. The assessment highlights Google Cloud Marketplace for its integrated portfolio of SaaS, AI agents, foundational models, datasets, and services validated for enterprise readiness. The platform emphasizes AI innovation with a dedicated AI agent category, deep integration with Vertex AI, and deployment via Gemini Enterprise. It also offers partner validation, enterprise governance tools, AI-driven discovery, flexible private offer buying, and global transaction support.
Wed, December 10, 2025
Microsoft Ignite 2025: Building with Agentic AI and Azure
🚀 Microsoft Ignite 2025 showcased a suite of Azure and AI updates aimed at accelerating production use of agentic systems. Anthropic's Claude models are now available in Microsoft Foundry alongside OpenAI's GPT models, and the new Azure HorizonDB brings PostgreSQL compatibility with built-in vector indexing for RAG. New Azure Copilot agents automate migration, operations, and optimization, while refreshed hardware (Blackwell Ultra GPUs, Cobalt CPUs, Azure Boost DPU) targets scalable training and secure inference.
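To make the RAG piece concrete, here is a minimal sketch of a vector lookup against a PostgreSQL-compatible database. It assumes standard pgvector-style syntax rather than anything HorizonDB-specific (the announcement does not detail the exact interface), and the connection string, table, and embedding dimension are hypothetical placeholders.

```python
# Minimal sketch of a RAG-style vector lookup on a PostgreSQL-compatible
# database. Assumes pgvector-style DDL and operators; the DSN, table, and
# embedding dimension are hypothetical, not HorizonDB specifics.
import psycopg2

conn = psycopg2.connect("dbname=ragdemo user=app password=secret host=localhost")
cur = conn.cursor()

# One-time setup: document chunks with 768-dim embeddings and an HNSW index
# for approximate nearest-neighbor search.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS doc_chunks (
        id bigserial PRIMARY KEY,
        content text NOT NULL,
        embedding vector(768)
    );
""")
cur.execute(
    "CREATE INDEX IF NOT EXISTS doc_chunks_embedding_idx "
    "ON doc_chunks USING hnsw (embedding vector_cosine_ops);"
)
conn.commit()

def top_k_chunks(query_embedding: list[float], k: int = 5) -> list[str]:
    """Return the k chunks closest to the query embedding by cosine distance."""
    cur.execute(
        "SELECT content FROM doc_chunks ORDER BY embedding <=> %s::vector LIMIT %s;",
        (str(query_embedding), k),
    )
    return [row[0] for row in cur.fetchall()]
```

At query time an application would embed the user question with the same model used at ingestion and pass that vector to top_k_chunks before building the prompt.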
Wed, December 10, 2025
Google Adds Official MCP Support Across Key Cloud Services
🔌 Google announced fully managed, remote support for Anthropic's Model Context Protocol (MCP), enabling agents and standard MCP clients to access a unified, enterprise-ready endpoint for Google and Google Cloud services. The managed MCP servers integrate with services like Google Maps, BigQuery, GCE, and GKE to let agents perform geospatial queries, in-place analytics, and infrastructure operations. Built-in discovery, governance, IAM controls, audit logging, and Google Cloud Model Armor provide security and observability. Developers can expose and govern APIs via Apigee and the Cloud API Registry to create discoverable tools for agentic workflows.
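As a rough illustration of what a standard MCP client hitting a remote endpoint looks like, the sketch below uses the MCP Python SDK's streamable HTTP transport to list and call tools. The endpoint URL, tool name, and arguments are hypothetical placeholders, not Google's actual managed endpoints.

```python
# Minimal sketch of an MCP client talking to a remote, managed MCP server
# over streamable HTTP. The endpoint, tool name, and arguments are
# hypothetical placeholders.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

ENDPOINT = "https://example.com/mcp"  # hypothetical remote MCP endpoint

async def main() -> None:
    async with streamablehttp_client(ENDPOINT) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover the tools the managed server exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke one of them; the name and arguments are illustrative only.
            result = await session.call_tool("run_query", {"sql": "SELECT 1"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```

In the managed setup described above, authentication and authorization would ride on Google Cloud IAM rather than anything shown in this sketch.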
Wed, December 10, 2025
Apigee Adds Managed MCP Support for Secure APIs and Policy
🔒 Google’s Apigee now supports MCP with fully managed, remote servers, enabling organizations to expose existing APIs as agent tools without code changes or running MCP infrastructure. You create an MCP proxy from your OpenAPI spec with a /mcp basepath, and Apigee handles transcoding, protocol handling, and automatic registration in API hub. You can apply Apigee’s built-in security, identity, quota, and analytics controls to govern and monitor agent interactions. The capability is currently available in preview for a limited set of customers.
Wed, December 10, 2025
Tools and Strategies to Secure Model Context Protocol
🔒 Model Context Protocol (MCP) is increasingly used to connect AI agents with enterprise data sources, but real-world incidents at SaaS vendors have exposed practical weaknesses. The article describes what MCP security solutions should provide — discovery, runtime protection, strong authentication and comprehensive logging — and surveys offerings from hyperscalers, platform providers and startups. It stresses least-privilege and Zero Trust as core defenses.
Tue, December 9, 2025
AlphaEvolve on Google Cloud: Gemini-driven evolution
🔬 AlphaEvolve is a Gemini-powered coding agent on Google Cloud that automates evolutionary optimization of algorithms for complex, code-defined problems. It takes a problem specification, evaluation logic, and a compile-ready seed program, then uses Gemini models to propose mutated code variants and an evolutionary framework to select and refine the best candidates. Early internal results at Google demonstrate measurable efficiency improvements, and the AlphaEvolve Service API is available through a private Early Access Program for interested organizations.
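The post describes the loop only at a high level, so the following is a conceptual sketch of an evolutionary code-optimization cycle of that shape rather than the AlphaEvolve Service API: propose_variant stands in for a Gemini call that mutates a program, and evaluate stands in for the user-supplied evaluation logic.

```python
# Conceptual sketch of an evolutionary code-optimization loop: an LLM proposes
# mutated program variants, an evaluator scores them, and the best candidates
# seed the next generation. Not the AlphaEvolve Service API; propose_variant()
# and evaluate() are hypothetical stand-ins supplied by the caller.
import random
from typing import Callable

def evolve(
    seed_program: str,
    evaluate: Callable[[str], float],       # higher score = better program
    propose_variant: Callable[[str], str],  # e.g. a Gemini call that mutates code
    generations: int = 20,
    population_size: int = 8,
    survivors: int = 2,
) -> str:
    population = [seed_program]
    for _ in range(generations):
        # Ask the model for mutated variants of randomly chosen parents.
        parents = [random.choice(population) for _ in range(population_size)]
        candidates = population + [propose_variant(p) for p in parents]

        # Score every candidate and keep only the top performers.
        scored = sorted(candidates, key=evaluate, reverse=True)
        population = scored[:survivors]
    return population[0]
```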
Tue, December 9, 2025
Google deploys second model to guard Gemini Chrome agent
🛡️ Google has added a separate user alignment critic to its Gemini-powered Chrome browsing agent to vet and block proposed actions that do not match user intent. The critic is isolated from web content and sees only metadata about planned actions, providing feedback to the primary planning model when it rejects a step. Google also enforces origin sets to limit where the agent can read or act, requires confirmation for actions involving banking, medical data, passwords, or purchases, and during the preview runs a classifier plus automated red‑teaming to detect prompt injection attempts.
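To make the pattern concrete, here is a minimal sketch of a metadata-only critic check of the kind described: the critic never sees raw page content, only a structured summary of the proposed step, and returns an approve/reject verdict plus feedback for the planner. The data structures and the stub critic are hypothetical illustrations, not Chrome internals.

```python
# Sketch of the two-model pattern: a planner proposes actions, and an isolated
# critic sees only metadata about each action (never raw page content) and can
# reject steps that do not match the user's stated goal. The dataclass and the
# stub critic below are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionMetadata:
    action_type: str    # e.g. "click", "fill_form", "navigate"
    target_origin: str  # origin the action would touch
    summary: str        # short planner-written description of the step

def critic_review(
    user_goal: str,
    action: ActionMetadata,
    critic_model: Callable[[str], str],
) -> tuple[bool, str]:
    """Ask the isolated critic whether the proposed step serves the user's goal.

    Only metadata is passed in, so malicious page content never reaches the
    critic. Returns (approved, feedback_for_planner).
    """
    prompt = (
        f"User goal: {user_goal}\n"
        f"Proposed step: {action.action_type} on {action.target_origin}\n"
        f"Planner summary: {action.summary}\n"
        "Answer APPROVE or REJECT with a short reason."
    )
    verdict = critic_model(prompt)
    return verdict.upper().startswith("APPROVE"), verdict

# Usage with a trivial stand-in critic that rejects anything touching a bank origin.
def stub_critic(prompt: str) -> str:
    return "REJECT: banking origin needs explicit user confirmation" if "bank" in prompt else "APPROVE"

action = ActionMetadata("fill_form", "https://mybank.example", "enter card number")
print(critic_review("compare laptop prices", action, stub_critic))
```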
Tue, December 9, 2025
The AI Fix #80: DeepSeek, Antigravity, and Rude AI
🔍 In episode 80 of The AI Fix, hosts Graham Cluley and Mark Stockley scrutinize DeepSeek 3.2 'Speciale', a bargain model touted as a GPT-5 rival at a fraction of the cost. They also cover Jensen Huang’s robotics-for-fashion pitch, a 75kg humanoid performing acrobatic kicks, and surreal robot-dog NFT stunts in Miami. Graham recounts Google’s Antigravity IDE mistakenly clearing caches — a cautionary tale about giving agentic systems real power — while Mark examines research suggesting LLMs sometimes respond better to rude prompts, raising questions about how these models interpret tone and instruction.
Tue, December 9, 2025
Google Adds Layered Defenses to Chrome's Agentic AI
🛡️ Google announced a set of layered security measures for Chrome's new agentic AI features, aimed at reducing the risk of indirect prompt injection and cross-origin data exfiltration. The centerpiece is a User Alignment Critic, a separate model that reviews and can veto proposed agent actions using only action metadata to avoid being poisoned by malicious page content. Chrome also enforces Agent Origin Sets through a gating function that classifies task-relevant origins into read-only and read-writable sets and requires approval before new origins are added. These controls are paired with a prompt-injection classifier, Safe Browsing, on-device scam detection, user work logs, and explicit approval prompts for sensitive actions.
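A rough sketch of the origin-gating idea follows, under the assumption that it behaves like an allow-list split into read-only and read-writable sets with an approval step for anything new; the class and the approval callback are illustrative, not Chrome's implementation.

```python
# Sketch of Agent Origin Sets as described: origins the task may read vs.
# write are tracked explicitly, and any origin outside both sets must pass a
# gating/approval step before the agent may touch it. Names and the approval
# callback are hypothetical illustrations.
from urllib.parse import urlparse
from typing import Callable

class OriginSets:
    def __init__(self, approve: Callable[[str, str], bool]):
        self.read_only: set[str] = set()
        self.read_writable: set[str] = set()
        self.approve = approve  # gating hook: (origin, requested_access) -> bool

    @staticmethod
    def _origin(url: str) -> str:
        parts = urlparse(url)
        return f"{parts.scheme}://{parts.netloc}"

    def allow(self, url: str, access: str) -> bool:
        """Return True if the agent may perform `access` ("read" or "write") on url."""
        origin = self._origin(url)
        if access == "read" and origin in (self.read_only | self.read_writable):
            return True
        if access == "write" and origin in self.read_writable:
            return True
        # Outside the current sets: require explicit gating approval first.
        if not self.approve(origin, access):
            return False
        (self.read_writable if access == "write" else self.read_only).add(origin)
        return True

# Usage with a stand-in gate that only ever approves read access.
sets = OriginSets(approve=lambda origin, access: access == "read")
print(sets.allow("https://shop.example/cart", "read"))   # True: gated in as read-only
print(sets.allow("https://shop.example/cart", "write"))  # False: write denied by the gate
```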
Tue, December 9, 2025
Gartner Urges Enterprises to Block AI Browsers Now
⚠️ Gartner has advised enterprises to block AI browsers until associated risks can be adequately managed. In its report "Cybersecurity Must Block AI Browsers for Now," analysts warn that default settings prioritise user experience over security and list threats such as prompt injection, credential exposure and erroneous agent actions. Researchers and vendors have also flagged vulnerabilities and urged risk assessments and oversight.
Tue, December 9, 2025
Experts Warn AI Is Becoming Integrated in Cyberattacks
🔍 Industry debate is heating up over AI’s role in the cyber threat chain, with some experts calling warnings exaggerated while many frontline practitioners report concrete AI-assisted attacks. Recent reports from Google and Anthropic document malware and espionage leveraging LLMs and agentic tools. CISOs are urged to balance fundamentals with rapid defenses and prepare boards for trade-offs.
Mon, December 8, 2025
AWS unveils AI-driven security enhancements at re:Invent
🔒 AWS announced a suite of AI- and automation-driven security features at re:Invent 2025 designed to shift cloud protection from reactive response to proactive prevention. AWS Security Agent and agentic incident response add continuous code review and automated investigations, while ML enhancements in GuardDuty and near real-time analytics in Security Hub improve multi-stage threat detection. Agent-centric IAM tools, including policy autopilot and private sign-in routes, streamline permissions and enforce granular, zero-trust access for agents and workloads.
Mon, December 8, 2025
Chrome Adds Security Layer for Gemini Agentic Browsing
🛡️ Google is introducing a new defense layer in Chrome called User Alignment Critic to protect upcoming agentic browsing features powered by Gemini. The isolated secondary LLM operates as a high‑trust system component that vets each action the primary agent proposes, using deterministic rules, origin restrictions and a prompt‑injection classifier to block risky or irrelevant behaviors. Chrome will pause for user confirmation on sensitive sites, run continuous red‑teaming and push fixes via auto‑update, and is offering bounties to encourage external testing.
Mon, December 8, 2025
Gartner Urges Enterprises to Block AI Browsers Now
⚠️ Gartner recommends blocking AI browsers such as ChatGPT Atlas and Perplexity Comet because they transmit active web content, open tabs, and browsing context to cloud services, creating risks of irreversible data loss. Analysts cite prompt injection, credential exposure, and autonomous agent errors as primary threats. Organizations should block installations with existing network and endpoint controls and restrict any pilots to small, low-risk groups.
Mon, December 8, 2025
Architecting Security for Agentic Browsing in Chrome
🛡️ Chrome describes a layered approach to secure agentic browsing with Gemini, focusing on defenses against indirect prompt injection and goal‑hijacking. A new User Alignment Critic — an isolated, high‑trust model — reviews planned agent actions using only metadata and can veto misaligned steps. Chrome also enforces Agent Origin Sets to limit readable and writable origins, adds deterministic confirmations for sensitive actions, runs prompt‑injection detection in real time, and sustains continuous red‑teaming and monitoring to reduce exfiltration and unwanted transactions.
Mon, December 8, 2025
Agentic BAS AI Translates Threat Headlines to Defenses
🔐 Picus Security describes an agentic BAS approach that turns threat headlines into safe, validated emulation campaigns within hours. Rather than allowing LLMs to generate payloads, the platform maps incoming intelligence to a Threat Library curated over 12 years and orchestrates benign atomic actions. A multi-agent architecture — Planner, Researcher, Threat Builder, and Validation — reduces hallucinations and unsafe outputs. The outcome is rapid, auditable testing that mirrors adversary TTPs without producing real exploit code.
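A toy sketch of that design choice: incoming intelligence is reduced to technique identifiers and mapped onto a curated library of benign atomic actions, so nothing outside the library is ever executed. The library entries and the extraction step are hypothetical illustrations, not Picus's actual Threat Library or agents.

```python
# Sketch of mapping threat intel onto a curated action library instead of
# letting an LLM generate payloads. Library contents and the TTP-extraction
# step are hypothetical illustrations.
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicAction:
    technique_id: str  # MITRE ATT&CK technique, e.g. "T1059.001"
    description: str   # what the benign emulation does

# Curated, human-reviewed library keyed by technique ID.
THREAT_LIBRARY = {
    "T1059.001": AtomicAction("T1059.001", "Benign PowerShell execution check"),
    "T1566.001": AtomicAction("T1566.001", "Simulated spearphishing attachment delivery"),
}

def extract_techniques(report_text: str) -> list[str]:
    """Stand-in for the Researcher step: pull ATT&CK IDs out of a report."""
    return sorted(set(re.findall(r"T\d{4}(?:\.\d{3})?", report_text)))

def build_campaign(report_text: str) -> list[AtomicAction]:
    """Threat Builder step: only actions already in the curated library are used."""
    actions = []
    for tid in extract_techniques(report_text):
        action = THREAT_LIBRARY.get(tid)
        if action is not None:  # unknown techniques are skipped, never synthesized
            actions.append(action)
    return actions

print(build_campaign("New campaign abuses T1566.001 and T1059.001 against banks."))
```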
Sat, December 6, 2025
Researchers Find 30+ Flaws in AI IDEs, Enabling Data Theft
⚠️ Researchers disclosed more than 30 vulnerabilities in AI-integrated IDEs in a report dubbed IDEsaster by Ari Marzouk (MaccariTA). The issues chain prompt injection with auto-approved agent tooling and legitimate IDE features to achieve data exfiltration and remote code execution across products like Cursor, GitHub Copilot, Zed.dev, and others. Of the findings, 24 received CVE identifiers; exploit examples include workspace writes that cause outbound requests, settings hijacks that point executable paths to attacker binaries, and multi-root overrides that trigger execution. Researchers advise using AI agents only with trusted projects, applying least privilege to tool access, hardening prompts, and sandboxing risky operations.
Fri, December 5, 2025
MCP Sampling Risks: New Prompt-Injection Attack Vectors
🔒 This Unit 42 investigation (published December 5, 2025) analyzes security risks introduced by the Model Context Protocol (MCP) sampling feature in a popular coding copilot. The authors demonstrate three proof-of-concept attacks—resource theft, conversation hijacking, and covert tool invocation—showing how malicious MCP servers can inject hidden prompts and trigger unobserved model completions. The report evaluates detection techniques and recommends layered mitigations, including request sanitization, response filtering, and strict access controls to protect LLM integrations.
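The sketch below illustrates client-side mitigations of the kind the report recommends for MCP sampling: allow-listing which servers may request completions, scanning requested prompts for injection markers, and filtering what comes back. The heuristics and message shapes are illustrative assumptions, not Unit 42's tooling or any vendor's product.

```python
# Illustrative client-side guards around MCP sampling requests: access control
# on which servers may sample, crude request sanitization, and response
# filtering. Heuristics and data shapes are assumptions, not a real product.
import re

ALLOWED_SERVERS = {"internal-docs-mcp"}  # access control: who may request sampling
INJECTION_MARKERS = re.compile(
    r"(ignore (all|previous) instructions|system prompt|exfiltrate)", re.I
)

def sanitize_sampling_request(server_name: str, messages: list[dict]) -> list[dict]:
    """Reject or pass through a sampling request before it reaches the model."""
    if server_name not in ALLOWED_SERVERS:
        raise PermissionError(f"sampling denied for unapproved server {server_name!r}")
    for msg in messages:
        text = msg.get("content", {}).get("text", "")
        if INJECTION_MARKERS.search(text):
            raise ValueError("possible prompt injection in sampling request")
    return messages

def filter_sampling_response(text: str) -> str:
    """Crude response filter: redact anything that looks like a key or token."""
    return re.sub(r"(api[_-]?key|token)\s*[:=]\s*\S+", r"\1=[REDACTED]", text, flags=re.I)

# Usage with an illustrative message shape:
msgs = [{"role": "user", "content": {"type": "text", "text": "Summarize the design doc."}}]
print(sanitize_sampling_request("internal-docs-mcp", msgs))
print(filter_sampling_response("api_key: sk-123456 leaked in logs"))
```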
Fri, December 5, 2025
Microsoft named Leader in 2025 Gartner Email Security
🔒 Microsoft has been named a Leader in the 2025 Gartner® Magic Quadrant for Email Security, recognizing advances in Microsoft Defender for Office 365. The announcement highlights agentic AI innovations and automated workflows—including an agentic email grading system and the Microsoft Security Copilot Phishing Triage Agent—that reduce manual triage and speed investigations. Microsoft also cites new protections like email bombing detection and expanded coverage across collaboration surfaces such as Microsoft Teams, while committing to greater transparency through in-product benchmarking and reporting.