
Hyperscalers Advance Agentic Platforms, Security, and Data
Coverage: 22 Apr 2026 (UTC)
Hyperscalers concentrated on governed, production-scale AI. Google advanced the agentic data layer with BigQuery updates spanning Iceberg-managed tables, cross-cloud analytics, real-time replication, and a preview of BigQuery Graph for multi-hop reasoning. Complementing the data tier, the expanded Gemini Enterprise platform emphasized agent lifecycle governance with cryptographic identities, centralized gateways, and safety controls.
Governance And Security For Agents
Google positioned enterprise controls as foundational for agentic workloads. Within Gemini Enterprise, capabilities such as Agent Identity, an Agent Gateway, and Model Armor target risks like prompt injection, tool poisoning, and data leakage, while simulation, evaluation, and observability aim to keep multi-step behavior within policy. Extending protections to user journeys, Fraud Defense, the next evolution of reCAPTCHA, introduces agentic activity measurement, a policy engine, and an AI-resistant QR code challenge. Google cites an average 51% reduction in account takeovers when legitimate users are separated from abusive traffic, and existing reCAPTCHA customers inherit the upgrade without migration.
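The gateway-style controls described above can be illustrated with a minimal sketch: each agent identity carries a tool allowlist, and tool-call arguments are screened before the call is forwarded. All names and the screening heuristic here are illustrative assumptions, not Gemini Enterprise APIs.

```python
# Toy policy-enforcing agent gateway: per-identity tool allowlists plus
# a crude argument screen for obvious injection markers. Illustrative
# only; real gateways use far richer policy and detection.

INJECTION_MARKERS = ("ignore previous instructions", "system prompt")

POLICY = {
    "billing-agent": {"allowed_tools": {"get_invoice", "list_customers"}},
    "support-agent": {"allowed_tools": {"search_kb"}},
}

def gate_tool_call(agent_id: str, tool: str, args: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed tool call."""
    policy = POLICY.get(agent_id)
    if policy is None:
        return False, "unknown agent identity"
    if tool not in policy["allowed_tools"]:
        return False, f"tool '{tool}' not in allowlist"
    blob = " ".join(str(v).lower() for v in args.values())
    if any(marker in blob for marker in INJECTION_MARKERS):
        return False, "suspected prompt injection in arguments"
    return True, "ok"
```

The point of the sketch is the ordering: identity is resolved first, the allowlist is checked second, and content inspection runs last, so an unknown or out-of-policy caller never reaches the tool at all.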
AWS moved to simplify multicloud operations with Security Hub Extended, a full‑stack model that unifies procurement, billing, and telemetry across AWS and partners under the Open Cybersecurity Schema Framework. The service correlates GuardDuty, Inspector, and Macie findings with partner signals to map attack paths and highlight root causes for faster remediation. In parallel, Check Point broadened its network protection footprint: Cloud Firewall as a Service is now in preview on Google Cloud, bringing AI‑driven threat prevention and centralized, Zero Trust‑aligned policy to multi‑cloud environments.
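The correlation idea behind a unified hub can be sketched in a few lines: findings from different scanners are normalized to a common record, grouped by affected resource, and resources hit by multiple sources are surfaced as likely attack-path hops. Field names are simplified stand-ins, not the actual OCSF schema or Security Hub output.

```python
# Illustrative cross-source finding correlation in an OCSF-like shape.
# Resources with findings from more than one source are the candidates
# for attack-path and root-cause analysis.

from collections import defaultdict

def normalize(source: str, raw: dict) -> dict:
    """Map a source-specific finding onto a minimal common record."""
    return {
        "source": source,
        "resource": raw["resource"],
        "severity": raw.get("severity", "unknown"),
        "title": raw["title"],
    }

def correlate(findings: list[dict]) -> dict[str, list[dict]]:
    """Group normalized findings by resource and keep only resources
    flagged by more than one source."""
    by_resource = defaultdict(list)
    for f in findings:
        by_resource[f["resource"]].append(f)
    return {
        res: fs for res, fs in by_resource.items()
        if len({f["source"] for f in fs}) > 1
    }
```

Normalizing first is what makes the grouping trivial; without a shared schema, each scanner's resource identifiers and severity scales would need pairwise mapping.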
Data Layer For Reasoning And Action
Google’s data announcements framed a shift from raw assets toward governed context for agents. BigQuery’s lakehouse features—managed Iceberg, an Iceberg REST catalog, catalog federation, and cross‑cloud analytics—aim to reduce data movement and standardize access, while real‑time replication from Spanner, AlloyDB, and Cloud SQL targets fresh, low‑latency inputs. The new Knowledge Catalog evolves Dataplex into an enterprise context engine, aggregating technical and semantic metadata across Google and partner systems, enriching it with Gemini‑powered extraction and verified patterns, and exposing a hybrid semantic search path designed for sub‑second, access‑controlled retrieval.
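The hybrid, access-controlled retrieval path described above can be sketched as blending a lexical score with a vector-similarity score after an ACL filter. The embeddings, ACLs, and blending weight are hard-coded illustrations under assumed names, not Knowledge Catalog APIs.

```python
# Toy hybrid semantic search: keyword overlap (lexical) plus cosine
# similarity over toy vectors (semantic), with access control applied
# before ranking so unauthorized entries never surface.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

CATALOG = [
    {"id": "orders_table", "text": "daily retail orders fact table",
     "vec": [0.9, 0.1, 0.0], "acl": {"analyst", "admin"}},
    {"id": "hr_salaries", "text": "employee salary records",
     "vec": [0.1, 0.9, 0.0], "acl": {"admin"}},
]

def hybrid_search(query_terms, query_vec, role, alpha=0.5):
    results = []
    for doc in CATALOG:
        if role not in doc["acl"]:
            continue  # access control runs before any scoring
        lexical = len(set(query_terms) & set(doc["text"].split())) / len(query_terms)
        semantic = cosine(query_vec, doc["vec"])
        results.append((alpha * lexical + (1 - alpha) * semantic, doc["id"]))
    return [doc_id for _, doc_id in sorted(results, reverse=True)]
```

Filtering before scoring matters for both latency and leakage: restricted entries cost nothing to rank and cannot appear even as low-confidence hits.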
On the BI front, Looker introduced agent types that summarize dashboards, converse in context, and trigger governed downstream actions rooted in the semantic layer. A modernized UI, AI assistants for visuals and expressions, CI for SQL validation, and MCP support are intended to increase analyst productivity and reduce hallucinations by grounding agents in consistent business logic.
Infrastructure For Scale And Sovereignty
Google bundled hardware and orchestration advances under the AI Hypercomputer, combining TPU 8t for training and TPU 8i for inference/RL with the Virgo Network, which targets roughly 4× prior bandwidth and large-scale accelerator fabrics. Storage and I/O were tuned for AI throughput: Cloud Storage Rapid (Rapid Bucket/Cache) promises sub-millisecond access and high request rates, and Managed Lustre scales to 10 TB/s with a cost-efficient Dynamic tier; together, these claims aim to cut GPU idle time, speed data loading, and accelerate checkpoints and restores.
Kubernetes‑based inference and RL received targeted improvements. GKE added an Agent Sandbox using gVisor for kernel‑level isolation of untrusted code, private GA of hypercluster to span up to 256,000 nodes, and a Titanium Intelligence Enclave to keep model weights and prompts sealed from operators. Inference Gateway features like Predictive Latency Boost and KV cache tiering seek faster time‑to‑first‑token, while intent‑based autoscaling reduces reaction times to roughly five seconds. At the network edge, the Cross‑Cloud Network added ambient networking for GKE and Cloud Run, multi‑region inference routing, high‑capacity interconnects up to 400 Gbps, and security updates including PQC, NGFW sandboxing, and expanded Cloud Armor rules.
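The latency-predictive routing idea can be sketched with the kinds of signals such a gateway plausibly uses: per-replica queue depth, a moving service-time estimate, and whether the prompt prefix is already in that replica's KV cache. The names and the prediction formula (a flat 70% prefill discount on a cache hit) are illustrative assumptions, not the actual Inference Gateway behavior.

```python
# Toy time-to-first-token (TTFT) predictor and router: the replica with
# the lowest predicted TTFT wins. A KV-cache hit skips most prefill work.

from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    queue_depth: int       # requests already waiting
    ms_per_request: float  # moving-average service time per request
    kv_cache_hit: bool     # prompt prefix already cached here

def predicted_ttft(r: Replica) -> float:
    """Queueing delay plus first-token work; a cache hit is modeled
    as a flat 70% discount on the prefill cost."""
    prefill = r.ms_per_request * (0.3 if r.kv_cache_hit else 1.0)
    return r.queue_depth * r.ms_per_request + prefill

def route(replicas: list[Replica]) -> str:
    return min(replicas, key=predicted_ttft).name
```

Even this toy shows the trade-off the gateway has to make: a cache-warm replica with a short queue can beat a faster but cold one, which is why cache tiering and routing are announced together.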
For regulated and disconnected environments, Distributed Cloud expanded support for NVIDIA Blackwell (HGX B200/B300), introduced new machine families and higher IOPS, and added an AI gateway control plane with GPU‑aware load balancing, tracing, and logging. Air‑gapped and connected models remain available, and Gemini Flash is in preview for connected deployments—positioned to deliver local generative AI while preserving sovereignty and auditability.
Multicloud Agent Tooling And Ecosystem
AWS streamlined agent development with a managed harness and CLI for AgentCore, enabling rapid, code-light prototyping in per-session microVMs, filesystem persistence for long-running tasks, and IaC deployment via AWS CDK. For model adaptation, SageMaker AI added serverless customization for Qwen3.5 models (SFT and RFT) to cut infrastructure overhead during fine-tuning. On Google's side, an official Skills repo ships compact, vendor-authored guidance in a Markdown-first format to reduce context bloat and token costs. To accelerate enterprise adoption, Google also launched a $750M partner fund, expanded incentives and training, and an Agent Gallery/Marketplace to surface vetted, enterprise-grade agents under centralized governance.
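Why a Markdown-first skills format keeps token costs down can be shown with a toy selector: skills live as short sections, and only sections matching the task are loaded into the prompt, under a budget. The file layout, matching rule, and word-count token proxy are all illustrative assumptions, not the structure of Google's Skills repo.

```python
# Toy skill selection: pick only Markdown sections relevant to the task
# and stop when a crude token budget (word count) is exhausted.

SKILLS_MD = """\
## bigquery-iceberg
Use managed Iceberg tables for open-format lakehouse storage.

## looker-actions
Trigger governed downstream actions from the semantic layer.
"""

def select_skills(markdown: str, task_keywords: set[str], token_budget: int) -> str:
    selected, used = [], 0
    for section in markdown.split("## ")[1:]:
        cost = len(section.split())  # crude token proxy: word count
        relevant = any(k in section.lower() for k in task_keywords)
        if relevant and used + cost <= token_budget:
            selected.append("## " + section.strip())
            used += cost
    return "\n\n".join(selected)
```

Because each skill is a small, self-describing section, selection is cheap string matching rather than a retrieval pipeline, which is the "reduce context bloat" argument in miniature.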