All news with #axion tag
Thu, November 6, 2025
Google Cloud Previews Axion-Based N4A General-Purpose VM Series
🚀 Google Cloud has introduced the Axion-based N4A VM series in preview, positioned as the most cost-effective N-series to date with up to 2× better price-performance and 80% better performance-per-watt versus comparable x86 VMs. Available on Compute Engine, GKE, Dataproc and Batch, N4A supports up to 64 vCPUs, 512 GB DDR5, 50 Gbps networking, Custom Machine Types and new Hyperdisk storage profiles (Balanced, Throughput, ML). Early customers report substantial cost and performance gains.
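For a sense of how provisioning might look, here is a minimal sketch using the google-cloud-compute Python client to request an N4A VM with a Hyperdisk Balanced boot disk. The machine-type name, zone, and image family are illustrative assumptions rather than details from the announcement; check the Compute Engine documentation for the shapes and regions actually offered in preview.

```python
# Minimal sketch, assuming the google-cloud-compute client library
# (pip install google-cloud-compute) and Application Default Credentials.
from google.cloud import compute_v1


def create_n4a_instance(project: str, zone: str = "us-central1-a",
                        name: str = "n4a-demo") -> None:
    """Request an N4A VM with a Hyperdisk Balanced boot disk."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            # Arm64 image family, since Axion is an Arm CPU.
            source_image="projects/debian-cloud/global/images/family/debian-12-arm64",
            disk_type=f"zones/{zone}/diskTypes/hyperdisk-balanced",
            disk_size_gb=50,
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        # "n4a-standard-16" is an assumed shape name; confirm against the docs.
        machine_type=f"zones/{zone}/machineTypes/n4a-standard-16",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation completes
```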
Thu, November 6, 2025
Google Cloud Announces Axion-Powered C4A Metal Bare-Metal Arm Instances
🔧 Google Cloud is introducing C4A metal, a bare-metal instance class powered by its Arm-based Axion processors, entering preview soon. Designed for workloads that require direct hardware access and Arm-native compatibility, C4A metal delivers 96 vCPUs, 768 GB DDR5 memory, up to 100 Gbps networking, and support for Google Cloud Hyperdisk variants. C4A metal targets Android development, automotive simulation, CI/CD, security workloads, and custom hypervisors by eliminating nested virtualization overhead and preserving Arm instruction-set parity.
Thu, November 6, 2025
Google Cloud Announces Ironwood TPUs and Axion VMs
🚀 Google Cloud announced general availability of Ironwood, its seventh-generation TPU, alongside a new family of Arm-based Axion VMs. Ironwood is optimized for large-scale training, reinforcement learning, and high-volume, low-latency inference, with claims of 10x peak performance over TPU v5p and multi-fold efficiency gains versus TPU v6e (Trillium). The architecture supports superpods of up to 9,216 chips, 9.6 Tb/s inter-chip interconnect, up to 1.77 PB of shared HBM, and Optical Circuit Switching for dynamic fabric routing. Complementary software and orchestration updates, including Cluster Director, MaxText improvements, vLLM support, and the GKE Inference Gateway, aim to reduce time-to-first-token and serving costs, while Axion N4A/C4A instances provide Arm-based CPU options for cost-sensitive inference and data-prep workloads.
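As a small illustration of the vLLM serving path mentioned above, the sketch below uses vLLM's offline Python API. The model name is a placeholder, and running it against Ironwood TPUs rather than GPUs or CPUs is an assumption, since the announcement only states that vLLM support is part of the stack.

```python
# Minimal sketch, assuming vLLM's offline inference API (pip install vllm).
# The model name is a placeholder; hardware backend selection is left to vLLM.
from vllm import LLM, SamplingParams

prompts = ["Summarize the Axion N4A announcement in one sentence."]
sampling = SamplingParams(temperature=0.2, max_tokens=128)

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder model
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```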
Tue, October 21, 2025
Google Migrates Workloads Across ISAs with AI and Automation at Scale
🔧 Google details how its custom Axion Arm CPUs and a mix of automation and AI enabled large-scale migration from x86 to multi-architecture production across services such as YouTube, Gmail, and BigQuery. The team analyzed 38,156 commits (about 700K changed lines) and reports migrating more than 30,000 applications to Arm while keeping both Arm and x86 in production. Existing automation such as Rosie, sanitizers, fuzzers, and the CHAMP rollout framework handled much of the work, while an LLM-driven agent called CogniPort fixed build and test failures, showing a 30% success rate on a 245-commit benchmark. Google plans to make multi-architecture builds the default for new applications and to keep refining its AI tooling to address the remaining long tail.
Fri, October 17, 2025
Axion C4A and N4 VMs Now GA for Cloud SQL Enterprise
🚀 Google has made Axion-powered C4A and Intel-based N4 virtual machines generally available for Cloud SQL Enterprise Plus and Enterprise editions, promising substantial gains in throughput and price-performance. Hyperdisk Balanced storage is supported on both families to boost I/O, increase throughput, and allow independent configuration of capacity, throughput, and IOPS. Customer tests report lower costs, reduced latency, and large throughput gains. These machines are available in select regions; check Cloud SQL pricing and region documentation for details.
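For illustration only, here is a hedged sketch of creating an Enterprise Plus instance through the Cloud SQL Admin API with the google-api-python-client library. The tier name is a placeholder (the exact C4A- and N4-based tier names come from the Cloud SQL machine-series documentation), and HYPERDISK_BALANCED as the data-disk type is an assumption rather than a value quoted in the announcement.

```python
# Hedged sketch, assuming google-api-python-client and Application Default
# Credentials. Tier and data-disk-type values below are assumptions.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

body = {
    "name": "axion-demo-pg",
    "region": "us-central1",              # assumed region with C4A support
    "databaseVersion": "POSTGRES_16",
    "settings": {
        "edition": "ENTERPRISE_PLUS",
        "tier": "db-perf-optimized-N-8",  # placeholder; use the documented C4A/N4 tier
        "dataDiskType": "HYPERDISK_BALANCED",  # assumed enum name for Hyperdisk Balanced
        "dataDiskSizeGb": "100",
    },
}

operation = sqladmin.instances().insert(project="my-project", body=body).execute()
print(operation["name"])  # name of the long-running create operation
```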