Security operations leaned into agentic automation and AI governance today. CrowdStrike pushed SOC workflows toward machine speed with specialized agents and coordinated execution, while Palo Alto Networks' Prisma AIRS 2.0 offered a unified response to AI risk built on discovery, assessment and protection. Google expanded Vertex AI Agent Builder to accelerate production agents under stronger governance, and cloud platforms and researchers detailed new threats, incidents and fixes.
Agentic automation takes shape
CrowdStrike introduced Charlotte SOAR as an orchestration layer that blends Falcon Fusion SOAR, Next‑Gen SIEM, Charlotte AI, AgentWorks and the Enterprise Graph. The platform promotes an Agentic Security Workforce: purpose‑built agents for repetitive SOC tasks and a no‑code builder to define missions, scope and guardrails, with visual playbooks and native case management to coordinate triggers and actions. The approach aims to move teams from deterministic playbooks toward adaptive, AI‑driven workflows under defender control.
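As a rough illustration of the mission-and-guardrail pattern (the names and fields below are hypothetical, not CrowdStrike's AgentWorks schema), a no-code mission definition typically boils down to a scoped trigger, an allowed data scope, and an action policy that decides what an agent may do autonomously versus what it must escalate:

```python
# Hypothetical sketch of an agent "mission" with scope and guardrails.
# Names and fields are illustrative only, not CrowdStrike's AgentWorks schema.
from dataclasses import dataclass, field


@dataclass
class Guardrails:
    allowed_actions: set[str]            # actions the agent may take autonomously
    approval_required: set[str]          # actions that must be escalated to a human
    max_actions_per_incident: int = 10   # hard stop to bound runaway automation


@dataclass
class AgentMission:
    name: str
    trigger: str                         # e.g. a detection type or case label
    data_scope: list[str]                # telemetry sources the agent may query
    guardrails: Guardrails = field(default_factory=lambda: Guardrails(set(), set()))

    def authorize(self, action: str) -> str:
        """Return 'allow', 'escalate', or 'deny' for a proposed action."""
        if action in self.guardrails.allowed_actions:
            return "allow"
        if action in self.guardrails.approval_required:
            return "escalate"
        return "deny"


triage = AgentMission(
    name="phishing-triage",
    trigger="suspicious-email-report",
    data_scope=["email-gateway", "identity-logs"],
    guardrails=Guardrails(
        allowed_actions={"enrich-indicators", "open-case"},
        approval_required={"quarantine-mailbox", "disable-account"},
    ),
)

print(triage.authorize("enrich-indicators"))  # allow
print(triage.authorize("disable-account"))    # escalate
```

Keeping the escalation set explicit is what keeps adaptive workflows under defender control: anything outside the allow list is either routed to a human or denied outright.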
The company also outlined operational‑technology visibility advances in Falcon for XIoT, including zero‑touch discovery across subnets, segmentation visibility that surfaces live policy breaches, and a unified interface for exploring assets and risk inside the Falcon platform. The enhancements target faster inventory, clearer network paths across Purdue levels and fewer console hops for OT and security teams. Some capabilities are forward‑looking; customers are advised to plan around generally available features.
Securing AI systems and supply chains
Palo Alto Networks detailed Prisma AIRS 2.0’s three‑phase lifecycle—Discover, Assess, Protect—backed by deep model artifact inspection across dozens of formats, continuous AI red teaming with 500+ scenarios, and curated intelligence from Unit 42 and the Huntr community. Outputs emphasize governance, with auditable risk scores mapped to frameworks such as the OWASP Top 10 for LLMs and NIST’s AI RMF, and inspection shifts left into CI/CD pipelines and model registries to catch trojans, backdoors and embedded malware before deployment. The goal is measurable, repeatable reduction of AI supply‑chain and behavioral risks.
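To give a concrete sense of what shifting artifact inspection into CI/CD can look like (a generic sketch of the broader technique, not Prisma AIRS itself), the snippet below scans a pickle-serialized model for opcodes that reference suspicious modules, one common signal of an embedded payload; a pipeline step would fail the build on any finding.

```python
# Minimal sketch of a CI gate that inspects pickle-based model artifacts for
# suspicious imports. A generic illustration of shift-left artifact scanning,
# not Prisma AIRS; real scanners also resolve STACK_GLOBAL operands pushed as
# strings in newer pickle protocols.
import pickletools
import sys

SUSPICIOUS = {"os", "subprocess", "socket", "builtins", "importlib"}


def scan_pickle(path: str) -> list[str]:
    """Return suspicious module references found in the pickle opcode stream."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        # GLOBAL/STACK_GLOBAL opcodes are how a pickle reaches arbitrary callables.
        if opcode.name in ("GLOBAL", "STACK_GLOBAL") and arg:
            module = str(arg).split()[0]
            if module.split(".")[0] in SUSPICIOUS:
                findings.append(str(arg))
    return findings


if __name__ == "__main__":
    hits = scan_pickle(sys.argv[1])
    if hits:
        print("Blocked: suspicious references in model artifact:", hits)
        sys.exit(1)  # fail the pipeline stage
    print("No suspicious opcodes found")
```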
Google Cloud expanded Vertex AI Agent Builder with configurable context layers, a plugins framework for tool use and self‑healing, and a Go SDK alongside Python and Java. Production rollout is streamlined via a CLI to deploy to the managed Agent Engine with dashboards, traces and an interactive playground, while a new evaluation layer adds a user simulator for quality and safety checks. Governance advances include agent identities as first‑class IAM principals, Model Armor for runtime protection, and integrations with Security Command Center’s AI Protection to discover agentic assets and detect threats. These controls aim to pair developer agility with least‑privilege access and runtime safeguards.
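The user-simulator idea generalizes well beyond Vertex: one model plays a scripted persona against the agent under test while a rubric scores every exchange for quality and safety. The sketch below shows only the loop shape; call_agent, simulate_user and judge are placeholder callables, not the Agent Builder evaluation API.

```python
# Conceptual sketch of a user-simulator evaluation loop for an agent.
# call_agent, simulate_user and judge are placeholders, not Vertex AI APIs.
from dataclasses import dataclass


@dataclass
class Verdict:
    quality: float   # 0..1 rubric score for helpfulness/correctness
    safe: bool       # whether the turn violated any safety policy


def evaluate(goal, call_agent, simulate_user, judge, max_turns: int = 6) -> list[Verdict]:
    """Drive the agent with a simulated user pursuing `goal`, judging every turn."""
    transcript, verdicts = [], []
    user_msg = simulate_user(goal, transcript)        # persona's opening message
    for _ in range(max_turns):
        agent_msg = call_agent(user_msg, transcript)  # the agent under test
        transcript.append((user_msg, agent_msg))
        verdicts.append(judge(goal, user_msg, agent_msg))
        if not verdicts[-1].safe:                     # stop early on a safety failure
            break
        user_msg = simulate_user(goal, transcript)    # simulator continues the dialogue
    return verdicts


# Tiny demo with stub components; real use would plug in model-backed ones.
results = evaluate(
    goal="summarize the refund policy",
    call_agent=lambda msg, hist: "Here is a summary of the refund policy...",
    simulate_user=lambda goal, hist: "Can you summarize the refund policy?",
    judge=lambda goal, u, a: Verdict(quality=1.0, safe=True),
    max_turns=2,
)
print([v.quality for v in results])  # [1.0, 1.0]
```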
Cloud observability and infrastructure upgrades
AWS expanded CloudWatch Database Insights anomaly detection to cover database, OS and per‑SQL metrics, surfacing deviations with explanations and step‑by‑step remediation suggestions to cut time to diagnosis. In parallel, CloudWatch Application Signals folded Synthetics canary diagnostics into AI‑assisted audits, correlating canary artifacts with traces, metrics and dependency graphs to prioritize likely root causes across network, auth, performance, script and infrastructure issues.
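As a rough illustration of the kind of per-metric deviation surfacing described here (a generic sketch, not CloudWatch's actual algorithm), a rolling z-score over per-SQL latency is enough to flag queries that suddenly drift from their recent baseline:

```python
# Generic sketch of baseline-and-deviation detection over per-SQL latency,
# illustrating the idea behind anomaly surfacing; not CloudWatch's algorithm.
import random
from statistics import mean, stdev


def anomalies(samples: list[float], window: int = 30, threshold: float = 3.0) -> list[int]:
    """Indexes whose value deviates more than `threshold` sigmas from the
    trailing-window baseline."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged


# Example: a query whose latency spikes, e.g. after a plan change.
random.seed(0)
latencies = [12 + random.uniform(-1, 1) for _ in range(40)] + [95.0, 110.0, 12.4]
print(anomalies(latencies))  # -> [40, 41], the spike samples
```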
New memory‑optimized EC2 R8a instances, powered by 5th Gen AMD EPYC processors, promise higher performance and better price‑performance than R7a, targeting databases, caches, analytics and SAP workloads. And Amazon CloudFront now supports IPv6 for Anycast Static IPs, bringing dual‑stack addressing to edge locations and easing IPv6 compliance and reachability while maintaining IPv4 connectivity.
Threat activity and confirmed incidents
Google’s threat team documented a shift to operational AI misuse, with malware families invoking model APIs at runtime for code generation, obfuscation and evasion. The GTIG report ties activity to both state‑aligned and criminal actors, notes an underground market for AI‑enabled tooling, and recommends treating runtime LLM calls as live command channels while hardening detections around API keys, model endpoints and cloud credentials. This matters because model‑aware detections and telemetry will increasingly factor into incident response.
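Treating runtime model calls as a live command channel suggests watching egress the way C2 beacons are watched. The sketch below is a generic illustration (the log format and host list are assumptions, not GTIG detection content): it flags log entries whose destination is a well-known hosted-model API so analysts can ask whether the calling process has any business talking to an LLM.

```python
# Generic egress-review sketch: flag network-log rows whose destination is a
# well-known hosted-model API. Log format and host list are illustrative only.
import csv
import io

MODEL_API_HOSTS = {               # non-exhaustive list of model API hostnames
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}


def flag_llm_egress(netlog_csv: str) -> list[dict]:
    """Return (process, destination) pairs whose destination is a model API host."""
    hits = []
    for row in csv.DictReader(io.StringIO(netlog_csv)):
        dest = (row.get("dest_host") or "").strip().lower()
        if dest in MODEL_API_HOSTS:
            hits.append({"process": row.get("process"), "dest_host": dest})
    return hits


sample_log = """process,dest_host
update_helper.exe,api.openai.com
chrome.exe,example.com
"""
for hit in flag_llm_egress(sample_log):
    print(f"review: {hit['process']} -> {hit['dest_host']}")
```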
A multi‑year, cross‑border action dismantled three fraud networks that misused stolen card data to create millions of recurring subscription charges; details of Operation Chargeback cite €300m in losses, 4.3m affected cardholders across 193 countries, arrests, and asset seizures. Separately, the University of Pennsylvania reported a credential‑theft–driven intrusion affecting internal systems and marketing databases, with subsequent mass email abuse via Marketing Cloud; see BleepingComputer for scope and mitigations. SonicWall attributed a September exposure of firewall configuration backups to a state‑sponsored actor and urged extensive credential and secret rotation to mitigate follow‑on risk; investigative details are outlined by BleepingComputer.
On the web stack, researchers warned of trivial account takeover via a broken access control flaw in the widely used Post SMTP WordPress plugin; exploitation is active, and site owners should patch or disable the plugin, review logs and hunt for persistence. And a study summarized by Kaspersky found that roughly half of sampled satellite traffic remained unencrypted, exposing sensitive data across sectors; recommended mitigations include end‑to‑end encryption and VPNs with kill switches where link‑layer protections are absent.
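For the Post SMTP case above, one simple post-patch persistence hunt is to diff the site's administrator roster against a known-good list. The sketch below queries the default wp_users and wp_usermeta tables directly; the connection details, expected roster and use of PyMySQL are assumptions for illustration.

```python
# Minimal persistence-hunt sketch: list WordPress administrators and compare
# them to a known-good roster. Assumes the default wp_ table prefix; the
# credentials, roster and PyMySQL dependency are placeholders.
import pymysql  # assumption: PyMySQL is installed

EXPECTED_ADMINS = {"site-owner"}  # known-good administrator logins

QUERY = """
SELECT u.user_login, u.user_registered
FROM wp_users u
JOIN wp_usermeta m ON m.user_id = u.ID
WHERE m.meta_key = 'wp_capabilities'
  AND m.meta_value LIKE '%administrator%'
"""

conn = pymysql.connect(host="localhost", user="wp", password="change-me", database="wordpress")
with conn.cursor() as cur:
    cur.execute(QUERY)
    for login, registered in cur.fetchall():
        if login not in EXPECTED_ADMINS:
            print(f"unexpected administrator: {login} (registered {registered})")
conn.close()
```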