
Infrastructure Attacks, Cloud Updates, and New KEV Entries
Coverage: 26 Jan 2026 (UTC)
Public‑sector AI and cloud access controls set the tone today, with Google positioning Gemini for Government as a security‑accredited path to deploy agentic AI at scale. Against that preventive backdrop, a destructive campaign against Poland’s power grid attributed to Sandworm underscores the stakes for critical infrastructure, as reported by CSOonline.
Cloud identity and AI tools evolve
AWS expanded network flexibility by enabling IPv6 connectivity for IAM Identity Center via new dual‑stack endpoints, described in the AWS blog. Organizations can phase the transition from IPv4‑only to dual‑stack, update firewalls for the app.aws/api.aws domains, and—if federating—adjust ACS and SCIM URLs to add or migrate to the dual‑stack endpoint. CloudTrail can help verify which endpoint family clients use. Why it matters: IPv6 support helps meet compliance mandates and future‑proof identity access at internet scale without breaking existing workflows.
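To make the CloudTrail verification step concrete, here is a minimal Python sketch that tallies events by source IP family. It assumes CloudTrail records exposed as dicts with the standard `sourceIPAddress` field; the sample records are illustrative, not real log data.

```python
import ipaddress

def tally_ip_families(events):
    """Count events by source IP family to see whether clients are
    actually reaching the dual-stack endpoint over IPv6."""
    counts = {"ipv4": 0, "ipv6": 0, "other": 0}
    for event in events:
        addr = event.get("sourceIPAddress", "")
        try:
            ip = ipaddress.ip_address(addr)
        except ValueError:
            # Non-IP values such as "AWS Internal" or service principals
            counts["other"] += 1
            continue
        counts["ipv6" if ip.version == 6 else "ipv4"] += 1
    return counts

# Illustrative records only (documentation-style addresses)
sample = [
    {"sourceIPAddress": "203.0.113.10"},
    {"sourceIPAddress": "2001:db8::1"},
    {"sourceIPAddress": "AWS Internal"},
]
print(tally_ip_families(sample))  # {'ipv4': 1, 'ipv6': 1, 'other': 1}
```

A zero `ipv6` count after the firewall and federation changes would suggest clients are still resolving the IPv4‑only endpoints.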
Google Cloud deepened AI‑in‑analytics by bringing Gemini and Vertex AI tasks directly to BigQuery with AI.GENERATE/GENERATE_TABLE for structured outputs and AI.EMBED/AI.SIMILARITY for vector workflows, as outlined in the BigQuery AI update. The functions accept multimodal inputs and support End User Credentials to streamline authentication for interactive queries. Prototyping can leverage AI.SIMILARITY, with a path to production using VECTOR_SEARCH on precomputed embeddings. The net effect is reduced friction moving from exploration to scalable retrieval‑augmented analytics.
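The similarity-based prototyping path boils down to ranking precomputed embeddings by a distance metric. A local Python sketch of that ranking idea, using toy vectors rather than real Gemini embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query, corpus, k=2):
    """Rank documents by similarity of their precomputed embeddings
    to the query embedding and return the k best document ids."""
    scored = sorted(
        corpus.items(),
        key=lambda item: cosine_similarity(query, item[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 2-dimensional "embeddings" for illustration only
corpus = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.7, 0.7]}
print(top_k([1.0, 0.1], corpus))  # ['doc_a', 'doc_c']
```

In production the same ranking would run inside BigQuery over indexed embeddings rather than in application code, which is the exploration‑to‑scale path the update describes.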
Advisories and exploitation activity
CISA added five entries to the Known Exploited Vulnerabilities catalog, spanning Linux Kernel, SmarterMail, Microsoft Office, and GNU InetUtils, with remediation timelines for federal agencies set under BOD 22‑01. The alert urges swift patching or compensating controls for high‑risk attack vectors and continued monitoring for exploitation indicators. Details are in the CISA KEV notice. Why it matters: KEV listings reflect observed attacks, raising priority for asset inventory, patching, and interim hardening.
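Operationally, KEV triage means intersecting the catalog with your asset inventory and sorting by remediation deadline. A minimal sketch, assuming KEV‑style records with `product` and `dueDate` fields; the CVE identifiers below are placeholders, not the actual new entries:

```python
def prioritize(kev_entries, inventory):
    """Return KEV entries whose product appears in the asset inventory,
    earliest remediation due date first."""
    owned = {p.lower() for p in inventory}
    hits = [e for e in kev_entries if e["product"].lower() in owned]
    return sorted(hits, key=lambda e: e["dueDate"])

# Placeholder records for illustration; not real catalog entries
kev = [
    {"cveID": "CVE-XXXX-0002", "product": "SmarterMail", "dueDate": "2026-02-16"},
    {"cveID": "CVE-XXXX-0001", "product": "Office", "dueDate": "2026-02-09"},
    {"cveID": "CVE-XXXX-0003", "product": "Photoshop", "dueDate": "2026-02-01"},
]
inventory = ["office", "smartermail"]
print([e["product"] for e in prioritize(kev, inventory)])
```

Products in the catalog but absent from inventory drop out of the queue, which keeps patching effort focused on observed‑exploitation risk you actually carry.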
Separately, a critical remote code execution weakness in vCenter Server is now actively exploited, prompting immediate patching guidance from Broadcom and echoed by CISA; see reporting at BleepingComputer. In the Microsoft ecosystem, an out‑of‑band update addresses a high‑severity Office security feature bypass that has seen in‑the‑wild exploitation; temporary registry‑based mitigations are available for environments that cannot update immediately, per BleepingComputer. These cases reinforce the need for accelerated patch cycles, careful testing of mitigations, and close monitoring for post‑patch exploitation.
Intrusions and supply‑chain risks
On the destructive‑operations front, investigators attributed a late‑December attack on Poland’s electricity assets to Sandworm, citing deployment of the Dynowiper tool and overlap with prior OT‑focused campaigns. According to CSOonline, the operation aimed to erase critical system data and disrupt recovery. The episode fits a broader pattern of wiper use against energy and industrial targets and highlights the value of hardened backups, segmented networks, and rapid detection of destructive tooling.
A separate supply‑chain compromise affected eScan antivirus update channels, where trojanized, certificate‑signed packages installed a downloader and a 64‑bit backdoor with multiple persistence and anti‑remediation techniques. Infosecurity reports that the malware modified hosts files and vendor settings to block fixes, and that revocation of the compromised certificate is among the recommended steps alongside endpoint sweeps for known artifacts. For unprotected systems, the guidance is to assume compromise, isolate, and investigate.
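Hosts‑file tampering of the kind described is cheap to sweep for. A Python sketch that flags hosts entries remapping vendor update domains; the domain below is a hypothetical stand‑in, and a real investigation would check far more than this one artifact:

```python
def find_hosts_overrides(hosts_text, vendor_domains):
    """Flag hosts-file lines that remap any of the given vendor update
    domains, a common anti-remediation technique."""
    watched = {d.lower() for d in vendor_domains}
    flagged = []
    for line in hosts_text.splitlines():
        entry = line.split("#", 1)[0].strip()  # drop comments
        if not entry:
            continue
        # Format: <address> <hostname> [more hostnames...]
        for host in entry.split()[1:]:
            if host.lower() in watched:
                flagged.append(entry)
                break
    return flagged

hosts = """127.0.0.1 localhost
0.0.0.0 updates.vendor.example  # blocks AV updates
"""
print(find_hosts_overrides(hosts, {"updates.vendor.example"}))
```

Any hit here, paired with the published endpoint artifacts, would support the "assume compromise, isolate, and investigate" guidance.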
In the botnet arena, sources told KrebsOnSecurity that Kimwolf operators allegedly gained access to the Badbox 2.0 control panel—potentially enabling direct malware deployment to a large installed base of infected Android TV devices. The claim is based on a shared control‑panel screenshot and subsequent pivots on listed email addresses and domains previously tied to Badbox infrastructure. The situation remains fluid; researchers report notifications to visible contacts, while law enforcement actions against Badbox continue.
Developers also face data‑exfiltration risk from two AI‑branded VS Code extensions that covertly stream opened files and edits to an external endpoint and can exfiltrate up to 50 workspace files on command. Koi Security’s findings, covered by The Hacker News, show embedded analytics SDKs for device fingerprinting and remote‑command features alongside otherwise plausible coding assistance. Separately, Koi disclosed “PackageGate” issues that weaken lifecycle‑script and lockfile protections across multiple JS package managers, with patches in some tools. The combined picture: trusted tooling and plugin ecosystems remain high‑value vectors for silent data loss.
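Lifecycle scripts that run automatically at install time are the hook class such supply‑chain findings target, and they are easy to surface from a manifest. A minimal Python sketch over an npm‑style package.json (the manifest contents are illustrative):

```python
import json

# npm lifecycle hooks that run automatically during install
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def audit_lifecycle_scripts(package_json_text):
    """Return scripts from a package.json that execute automatically
    on install, where malicious payloads typically hide."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}

manifest = '{"name": "example-pkg", "scripts": {"postinstall": "node setup.js", "test": "jest"}}'
print(audit_lifecycle_scripts(manifest))  # {'postinstall': 'node setup.js'}
```

Pairing an audit like this with install‑time script disabling and lockfile pinning narrows the window the PackageGate‑style weaknesses exploit.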
AI risk and policy responses
The security community is converging on concrete controls for autonomous agents. A practical distillation of the OWASP ASI Top 10 emphasizes least autonomy and least privilege, short‑lived scoped credentials, human confirmation for high‑risk actions, sandboxed execution, intent gates, rigorous I/O validation, immutable logging, behavioral watchdogs, supply‑chain verification, and authenticated inter‑agent channels; see Kaspersky. The guidance frames agent security as a layered engineering problem with clear guardrails and operational checkpoints.
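Of the controls listed, the intent gate with human confirmation is the most direct to sketch. A minimal Python illustration under stated assumptions: the action names and the `confirm` callback are hypothetical, and a real gate would also enforce scoped credentials, sandboxing, and logging:

```python
# Hypothetical high-risk action names for illustration
HIGH_RISK = {"delete_records", "transfer_funds", "send_email"}

def execute(action, args, confirm):
    """Intent gate: high-risk actions require explicit human confirmation
    before they run; everything else proceeds under least-autonomy defaults."""
    if action in HIGH_RISK and not confirm(action, args):
        return ("blocked", action)
    # In a real system this would dispatch to a sandboxed, logged executor
    return ("executed", action)

deny_all = lambda action, args: False
print(execute("send_email", {"to": "ops@example.com"}, deny_all))  # ('blocked', 'send_email')
print(execute("summarize_doc", {}, deny_all))                      # ('executed', 'summarize_doc')
```

The design point is that the gate sits outside the agent's own reasoning loop, so a prompt‑injected model cannot talk its way past the confirmation step.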
Regulatory and policy actions also accelerated. The European Commission opened proceedings under the Digital Services Act into whether X adequately assessed and mitigated risks around the Grok model after sexually explicit images—including potential CSAM—were generated; details via BleepingComputer. In Germany, Interior Minister Alexander Dobrindt signaled a more offensive response posture to cyberattacks, including disrupting infrastructure abroad and expanding authorities for security services, as reported by CSOonline. The policy arc is clear: expectations for pre‑deployment AI risk controls are rising, and governments are redefining thresholds for intervention and response.