
All news with #google tag

Fri, November 7, 2025

Deploy n8n on Cloud Run for Serverless AI Workflows

🚀 Deploy the official n8n Docker image to Cloud Run in minutes to run scalable, serverless AI workflows. Cloud Run scales from zero and persists data in Cloud SQL while you only pay for active usage. The post shows how to call Gemini as the agent LLM and optionally connect workflows to Google Workspace via OAuth for Gmail, Calendar, and Drive. For production, follow the n8n docs to add Secret Manager, Cloud SQL, and Terraform-based deployment.

read more →

Fri, November 7, 2025

Ericsson Secures Data Integrity with Dataplex Governance

🔒 Ericsson has implemented a global data governance framework using Dataplex Universal Catalog on Google Cloud to ensure data integrity, discoverability, and compliance across its Managed Services operation. The program standardized a business glossary, automated quality checks with incident-driven alerts, and visualized column-level lineage to support analytics, AI, and automation at scale. It balances defensive compliance with offensive innovation and embeds stewardship through Ericsson’s Data Operating Model.

read more →

Fri, November 7, 2025

Leak: Google Gemini 3 Pro and Nano Banana 2 Launch Plans

🤖 Google appears set to release two new models: Gemini 3 Pro, optimized for coding and general use, and Nano Banana 2 (codenamed GEMPIX2), focused on realistic image generation. Gemini 3 Pro was listed on Vertex AI as "gemini-3-pro-preview-11-2025" and is expected to begin rolling out in November with a reported 1 million token context window. Nano Banana 2 was also spotted on the Gemini site and could ship as early as December 2025.

read more →

Fri, November 7, 2025

Agent Factory Recap: Build AI Apps in Minutes with Google

🤖 This recap of The Agent Factory features Logan Kilpatrick from Google DeepMind demonstrating vibe coding in Google AI Studio, a Build workflow that turns a natural-language app idea into a live prototype in under a minute. Live demos included a virtual food photographer, grounding with Google Maps, the AI Studio Gallery, and a speech-driven "Yap to App" pair programmer. The episode also surveyed agent ecosystem updates—Veo 3.1, Anthropic Skills, and Gemini improvements—and highlighted the shift from models to action-capable systems.

read more →

Fri, November 7, 2025

Build Your First AI Agent Workforce with Google's ADK

🤖 Google’s open-source Agent Development Kit (ADK) simplifies creating autonomous AI agents that use LLMs such as Gemini as their reasoning core. The post presents three hands-on codelabs that guide developers through building a personal assistant agent, adding custom and third-party tools, and orchestrating multi-agent workflows. Each lab demonstrates practical patterns—scaffolding an agent, integrating tools like Google Search and LangChain components, and using Workflow Agents and session state to pass information—so teams can progress from experiment to production-ready agent systems.
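The agent-plus-tools pattern the codelabs teach can be illustrated without the library itself. The sketch below is a toy tool-routing loop in plain Python; every name in it (`Agent`, `register_tool`, `run`, the keyword dispatch) is illustrative and does not reflect the actual ADK API — see Google's ADK codelabs for the real interfaces:

```python
# Toy illustration of the agent-plus-tools pattern described above.
# All names here are hypothetical; the real ADK API differs.
from typing import Callable, Dict


class Agent:
    """A toy agent that routes a request to a registered tool."""

    def __init__(self, name: str):
        self.name = name
        self.tools: Dict[str, Callable[[str], str]] = {}
        self.state: Dict[str, str] = {}  # stands in for ADK session state

    def register_tool(self, tool_name: str, fn: Callable[[str], str]) -> None:
        self.tools[tool_name] = fn

    def run(self, request: str) -> str:
        # A real agent would ask an LLM (e.g. Gemini) which tool to call;
        # here we route on a simple keyword match for demonstration.
        for tool_name, fn in self.tools.items():
            if tool_name in request.lower():
                self.state["last_tool"] = tool_name  # persists across turns
                return fn(request)
        return "No matching tool."


agent = Agent("assistant")
agent.register_tool("search", lambda q: f"search results for: {q}")
agent.register_tool("calendar", lambda q: "next event: standup at 9:00")

print(agent.run("search for ADK codelabs"))
print(agent.state["last_tool"])  # session state carries between turns
```

In the real kit, the keyword dispatch is replaced by the LLM's own reasoning, and session state is what lets Workflow Agents pass information between steps.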

read more →

Fri, November 7, 2025

Google Adds Maps Form to Report Review Extortion Scams

📍 Google has introduced a dedicated form for businesses on Google Maps to report extortion attempts where threat actors post inauthentic negative reviews and demand payment to remove them. The move targets review bombing schemes that flood profiles with fake one-star reviews and then coerce owners, often via third-party messaging apps. Google also highlighted related threats — from job and AI impersonation scams to malicious VPN apps and fraud recovery cons — and advised practical precautions for affected merchants and users.

read more →

Thu, November 6, 2025

November 2025 Fraud and Scams Advisory — Key Trends

🔔 Google’s Trust & Safety team published a November 2025 advisory describing rising online scam trends, attacker tactics, and recommended defenses. Analysts highlight key categories — online job scams, negative review extortion, AI product impersonation, malicious VPNs, fraud recovery scams, and seasonal holiday lures — and note increased misuse of AI to scale fraud. The advisory outlines impacts including financial theft, identity fraud, and device or network compromise, and recommends protections such as 2‑Step Verification, Gmail phishing defenses, Google Play Protect, and Safe Browsing Enhanced Protection.

read more →

Thu, November 6, 2025

Inside Ironwood: Google's Co‑Designed TPU AI Stack

🚀 The Ironwood TPU stack is a co‑designed hardware and software platform that scales from massive pre‑training to low‑latency inference. It combines dense MXU compute, ample HBM3E memory, and a high‑bandwidth ICI/OCS interconnect with compiler-driven optimizations in XLA and native support for JAX and PyTorch. Pallas and Mosaic enable hand‑tuned kernels for peak performance, while observability and orchestration tools address resilience and efficiency across pods and superpods.

read more →

Thu, November 6, 2025

Google Cloud Announces Ironwood TPUs and Axion VMs

🚀 Google Cloud announced general availability of Ironwood, its seventh-generation TPU, alongside a new family of Arm-based Axion VMs. Ironwood is optimized for large-scale training, reinforcement learning, and high-volume, low-latency inference, with claims of 10x peak performance over TPU v5p and multi-fold efficiency gains versus TPU v6e (Trillium). The architecture supports superpods up to 9,216 chips, 9.6 Tb/s inter‑chip interconnect, up to 1.77 PB shared HBM, and Optical Circuit Switching for dynamic fabric routing. Complementary software and orchestration updates — including Cluster Director, MaxText improvements, vLLM support, and GKE Inference Gateway — aim to reduce time-to-first-token and serving costs, while Axion N4A/C4A instances provide Arm-based CPU options for cost-sensitive inference and data-prep workloads.
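The superpod figures quoted above are internally consistent, which a quick back-of-the-envelope check confirms (this is arithmetic on the reported numbers, not an official spec breakdown):

```python
# Back-of-the-envelope check of the superpod figures quoted above.
chips = 9216                  # chips per superpod, as reported
shared_hbm_pb = 1.77          # shared HBM across the superpod, in petabytes

# 1 PB = 1e6 GB in decimal units, so per-chip HBM is:
hbm_per_chip_gb = shared_hbm_pb * 1e6 / chips
print(round(hbm_per_chip_gb))  # → 192, i.e. ~192 GB of HBM per chip
```

That ~192 GB per chip is the figure the shared-pool number implies.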

read more →

Thu, November 6, 2025

AI-Powered Malware Emerges: Google Details New Threats

🛡️ Google Threat Intelligence Group (GTIG) reports that cybercriminals are actively integrating large language models into malware campaigns, moving beyond mere tooling to generate, obfuscate, and adapt malicious code. GTIG documents new families — including PROMPTSTEAL, PROMPTFLUX, FRUITSHELL, and PROMPTLOCK — that query commercial APIs to produce or rewrite payloads and evade detection. Researchers also note attackers use social‑engineering prompts to trick LLMs into revealing sensitive guidance and that underground marketplaces increasingly offer AI-enabled “malware-as-a-service,” lowering the bar for less skilled threat actors.

read more →

Thu, November 6, 2025

Build Your First AI Travel Assistant with Gemini Today

🚀 This codelab walks developers through building a functional travel chatbot using Google's Gemini via the Vertex AI SDK. It explains how to connect a web frontend to Gemini, craft system instructions to shape assistant behavior, and enable function-calling to fetch live data such as geocoding and weather. No advanced ML expertise is required; the lab provides step-by-step code samples, API usage, and practical recommendations for iterating prompts so you can build a working demo and carry it toward production.
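The function-calling flow the codelab describes has two halves: a schema that tells the model what it may call, and local code that executes the call the model emits. A minimal sketch, with no network access — the `get_weather` function, its dispatch, and the simulated model call are all illustrative stand-ins, though the declaration shape mirrors the JSON function declarations used by the Gemini/Vertex AI SDKs:

```python
# Sketch of the function-calling loop: declare a tool schema for the model,
# then execute the function the model asks for and return the result.
import json

get_weather_declaration = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}


def get_weather(city: str) -> dict:
    # In the codelab this would call a live weather API; stubbed here.
    return {"city": city, "temp_c": 18, "conditions": "partly cloudy"}


TOOLS = {"get_weather": get_weather}


def handle_function_call(call: dict) -> str:
    """Run the function the model requested and serialize the result,
    which would then be sent back to the model as the function response."""
    fn = TOOLS[call["name"]]
    return json.dumps(fn(**call["args"]))


# Simulate the model emitting a function call for a user weather question:
model_call = {"name": "get_weather", "args": {"city": "Lisbon"}}
print(handle_function_call(model_call))
```

In the full flow, the serialized result goes back to Gemini, which uses it to compose the final natural-language answer.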

read more →

Thu, November 6, 2025

Google Warns: AI-Enabled Malware Actively Deployed

⚠️ Google’s Threat Intelligence Group has identified a new class of AI-enabled malware that leverages large language models at runtime to generate and obfuscate malicious code. Notable families include PromptFlux, which uses the Gemini API to rewrite its VBScript dropper for persistence and lateral spread, and PromptSteal, a Python data miner that queries Qwen2.5-Coder-32B-Instruct to create on-demand Windows commands. GTIG observed PromptSteal used by APT28 in Ukraine, while other examples such as PromptLock, FruitShell and QuietVault demonstrate varied AI-driven capabilities. Google warns this "just-in-time AI" approach could accelerate malware sophistication and democratize cybercrime.

read more →

Thu, November 6, 2025

Leading Bug Bounty Programs and Market Shifts 2025

🔒 Bug bounty programs remain a core component of security testing in 2025, drawing external researchers to identify flaws across web, mobile, AI, and critical infrastructure. Leading platforms like Bugcrowd, HackerOne, Synack and vendors such as Apple, Google, Microsoft and OpenAI have broadened scopes and increased payouts. Firms now reward full exploit chains and emphasize human-led reconnaissance over purely automated scanning. Programs also support regulatory compliance in critical sectors.

read more →

Wed, November 5, 2025

Vertex AI Agent Builder: Build, Scale, Govern Agents

🚀 Vertex AI Agent Builder is Google Cloud's integrated platform to build, scale, and govern production AI agents. The update expands the Agent Development Kit (ADK) and Agent Engine with configurable context layers to reduce token usage, an adaptable plugins framework, and new language SDK support including Go. Production features include observability, evaluation tools, simplified deployment via the ADK CLI, and strengthened governance with native agent identities and Model Armor protections.

read more →

Wed, November 5, 2025

Google: PROMPTFLUX malware uses Gemini to self-write

🤖 Google researchers disclosed a VBScript threat named PROMPTFLUX that queries Gemini via a hard-coded API key to request obfuscated VBScript designed to evade static detection. A 'Thinking Robot' component logs AI responses to %TEMP% and writes updated scripts to the Windows Startup folder to maintain persistence. Samples include propagation attempts to removable drives and mapped network shares, and variants that rewrite their source on an hourly cadence. Google assesses the malware as experimental and currently lacking known exploit capabilities.
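The persistence behavior described — freshly written scripts landing in the Startup folder — is straightforward to hunt for. A minimal defensive sketch, assuming you can enumerate the folder; the path handling, extensions, and age window are illustrative and should be adapted to your own telemetry:

```python
# Minimal hunt for the persistence pattern described above: script files
# recently written to a (Windows Startup-style) folder. Extensions and the
# age window are illustrative; feed this from EDR telemetry in production.
import time
from pathlib import Path

SCRIPT_EXTS = {".vbs", ".vbe", ".js", ".ps1"}


def recent_startup_scripts(startup_dir: str, max_age_hours: float = 24) -> list:
    """Return names of script files in startup_dir modified within max_age_hours."""
    cutoff = time.time() - max_age_hours * 3600
    hits = [
        p.name
        for p in Path(startup_dir).iterdir()
        if p.suffix.lower() in SCRIPT_EXTS and p.stat().st_mtime >= cutoff
    ]
    return sorted(hits)


# Demo against a temporary directory standing in for the Startup folder:
import tempfile

with tempfile.TemporaryDirectory() as d:
    Path(d, "updater.vbs").write_text("' dropped script")
    Path(d, "readme.txt").write_text("benign")
    print(recent_startup_scripts(d))  # → ['updater.vbs']
```

For the hourly-rewrite variants, repeated recent modification times on the same Startup-folder script are themselves a signal.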

read more →

Wed, November 5, 2025

Google: New AI-Powered Malware Families Deployed

⚠️ Google's Threat Intelligence Group reports a surge in malware that integrates large language models to enable dynamic, mid-execution changes—what Google calls "just-in-time" self-modification. Notable examples include the experimental PromptFlux VBScript dropper and the PromptSteal data miner, plus operational threats like FruitShell and QuietVault. Google disabled abused Gemini accounts, removed assets, and is hardening model safeguards while collaborating with law enforcement.

read more →

Wed, November 5, 2025

GTIG Report: AI-Enabled Threats Transform Cybersecurity

🔒 The Google Threat Intelligence Group (GTIG) released a report documenting a clear shift: adversaries are moving beyond benign productivity uses of AI and are experimenting with AI-enabled operations. GTIG observed state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, tailored phishing lure creation and data exfiltration. Threats described include AI-powered, self-modifying malware, prompt-engineering to bypass safety guardrails, and underground markets selling advanced AI attack capabilities. Google says it has disrupted malicious assets and applied that intelligence to strengthen classifiers and its AI models.

read more →

Wed, November 5, 2025

Cloud CISO: Threat Actors' Growing Use of AI Tools

⚠️ Google's Threat Intelligence team reports a shift from experimentation to operational use of AI by threat actors, including AI-enabled malware and prompt-based command generation. GTIG highlighted PROMPTSTEAL, linked to APT28 (FROZENLAKE), which queries a Hugging Face LLM to generate scripts for reconnaissance, document collection, and exfiltration, while adopting greater obfuscation and altered C2 methods. Google disabled related assets, strengthened model classifiers and safeguards with DeepMind, and urges defenders to update threat models, monitor anomalous scripting and C2, and incorporate threat intelligence into model- and classifier-level protections.

read more →

Wed, November 5, 2025

GTIG: Threat Actors Shift to AI-Enabled Runtime Malware

🔍 Google Threat Intelligence Group (GTIG) reports an operational shift from adversaries using AI for productivity to embedding generative models inside malware to generate or alter code at runtime. GTIG details “just-in-time” LLM calls in families like PROMPTFLUX and PROMPTSTEAL, which query external models such as Gemini to obfuscate, regenerate, or produce one‑time functions during execution. Google says it disabled abusive assets, strengthened classifiers and model protections, and recommends monitoring LLM API usage, protecting credentials, and treating runtime model calls as potential live command channels.
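The recommendation to treat runtime model calls as potential live command channels reduces, in practice, to egress monitoring: flag processes contacting LLM API endpoints that have no business doing so. A minimal sketch — the hostnames, process allowlist, and log-record shape are illustrative and should be mapped onto your own proxy or EDR logs:

```python
# Sketch of the "treat runtime model calls as a potential C2 channel" advice:
# flag processes contacting LLM API endpoints that are not on an allowlist.
# Hostnames, allowlist, and log format are illustrative examples.

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face hosted inference
}
ALLOWED_PROCESSES = {"chrome.exe", "approved_ml_service"}


def flag_llm_calls(events: list) -> list:
    """events: dicts with 'process' and 'dest_host' from egress logs.
    Returns events where an unexpected process calls an LLM API host."""
    return [
        e for e in events
        if e["dest_host"] in LLM_API_HOSTS
        and e["process"] not in ALLOWED_PROCESSES
    ]


events = [
    {"process": "chrome.exe", "dest_host": "generativelanguage.googleapis.com"},
    {"process": "wscript.exe", "dest_host": "generativelanguage.googleapis.com"},
    {"process": "svchost.exe", "dest_host": "example.com"},
]
for hit in flag_llm_calls(events):
    print(f"ALERT: {hit['process']} -> {hit['dest_host']}")
```

A script host like `wscript.exe` reaching an LLM API is exactly the PROMPTFLUX-style pattern GTIG describes; credential protection matters here too, since these families embed hard-coded API keys.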

read more →

Wed, November 5, 2025

Securing the Open Android Ecosystem with Samsung Knox

🔒 Samsung Knox is a built-in security platform for Samsung Galaxy devices that combines hardware- and software-level protections to safeguard enterprise data and provide IT teams with centralized control. It layers defenses — including AI-powered malware detection, curated app controls, Message Guard for zero-click image scanning, and DEFEX exploit detection — while integrating with EMMs and offering granular update management via Knox E-FOTA. The platform emphasizes visibility, policy enforcement, and predictable lifecycle management to reduce risk and operational disruption.

read more →