< ciso brief />
Canvas Extortion, Dirty Frag Patches, And Cloud Security Updates

Coverage: 11 May 2026 (UTC)

Major education and developer ecosystems faced targeted pressure, while kernel maintainers raced to address a new Linux privilege-escalation chain. Cloud providers introduced performance gains and management upgrades for AI-era workloads, and a US policy adjustment extended security update windows for banned routers to limit exposure. The day underscored both the scale of active threat activity and the industry’s push to harden infrastructure and streamline secure operations.

Extortion and Supply Chain Pressure on Education and DevOps

The ShinyHunters group escalated an extortion campaign following an intrusion at Instructure’s Canvas platform, according to Infosecurity. Researchers report the April 25 breach led to theft of roughly 275 million records spanning 8,809 institutions and more than 3.65 TB of data after a vulnerability in the Free‑For‑Teacher edition was exploited. After an initial site-wide demand, the actor adopted school‑by‑school pressure, defacing about 330 institutional Canvas login pages and threatening full data release on May 12 absent settlements. Instructure reportedly applied patches rather than negotiate, while guidance emphasized immediate steps for institutions and users: rotate Canvas‑related passwords, enable MFA where available, inform communities to expect phishing and spoofed prompts, and monitor financial and credit activity over time. The incident touches universities, colleges, school districts, corporate training environments, and stage/test instances, elevating the risk of long‑term misuse of personal data.

Separately, Checkmarx confirmed that its Jenkins Application Security Testing plugin was compromised and a malicious build published to the Jenkins Marketplace, as detailed by BleepingComputer. The rogue artifact (2026.5.09) was uploaded outside the official release pipeline after attackers known as TeamPCP accessed GitHub repositories, reportedly reusing credentials from a prior supply‑chain compromise. The backdoored build is believed to harvest secrets where installed. The vendor is releasing a clean version, has shared IOCs, and advises reverting to plugin version 2.0.13‑829.vc72453fa_1c16, rotating all secrets, and investigating for lateral movement or persistence; installations of the rogue build should be treated as compromised. The sequence reflects persistent targeting of application security vendors and the downstream risk to CI/CD pipelines when release credentials and artifact provenance controls falter.
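For quick triage, the version numbers in the advisory can be checked mechanically. The sketch below is hypothetical: the plugin directory name under JENKINS_HOME and the manifest layout are assumptions, while the rogue build (2026.5.09) and the rollback target (2.0.13‑829.vc72453fa_1c16) come from the report above.

```shell
#!/bin/sh
# Hypothetical triage check for the backdoored Checkmarx AST plugin build.
# The directory name "checkmarx-ast-scanner" and manifest path are assumptions;
# confirm the plugin's actual artifact ID in your Jenkins installation.

BAD_VERSION="2026.5.09"                    # rogue build published outside the release pipeline
SAFE_VERSION="2.0.13-829.vc72453fa_1c16"   # version the vendor advises reverting to

# Returns success (exit 0) when the supplied version string is the rogue build.
is_bad_version() {
    [ "$1" = "$BAD_VERSION" ]
}

check_plugin() {
    manifest="${JENKINS_HOME:-/var/lib/jenkins}/plugins/checkmarx-ast-scanner/META-INF/MANIFEST.MF"
    if [ ! -f "$manifest" ]; then
        echo "plugin not installed"
        return 0
    fi
    version=$(awk -F': ' '/^Plugin-Version:/ {print $2}' "$manifest" | tr -d '\r')
    if is_bad_version "$version"; then
        echo "COMPROMISED: $version installed; rotate all secrets and revert to $SAFE_VERSION"
    else
        echo "OK: $version"
    fi
}

check_plugin
```

A match should trigger the full response the vendor describes: secret rotation and a hunt for lateral movement, not just a plugin downgrade.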

Kernel Patching and Policy Moves

Microsoft researchers disclosed a Linux local privilege‑escalation technique dubbed Dirty Frag that chains CVE‑2026‑43284 (IPsec ESP) and CVE‑2026‑43500 (RxRPC), enabling escalation from low‑privileged accounts to root by corrupting page‑cache‑backed memory, with active exploitation reported in the wild, per Dirty Frag. The ESP flaw was patched on May 8, while RxRPC fixes were still being deployed, affecting major distributions and container platforms cited by researchers. Recommended mitigations include disabling esp4, esp6, and rxrpc modules where not required, tightening and monitoring local shell and container access, applying vendor patches promptly, and watching for abnormal privilege escalation signals. Successful exploitation may introduce persistent changes that require thorough incident response, including potential reimaging.
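Where IPsec ESP and RxRPC are genuinely unused, the module-disable mitigation can be expressed as a modprobe configuration fragment. This is a minimal sketch assuming the module names cited in the advisory; validate that nothing on the host depends on them before applying.

```
# /etc/modprobe.d/disable-dirty-frag.conf
# "install <module> /bin/false" makes any load attempt fail, which also blocks
# on-demand autoloading that a plain "blacklist" entry would not prevent.
install esp4 /bin/false
install esp6 /bin/false
install rxrpc /bin/false
```

After writing the file, unload any already-loaded modules with `modprobe -r esp4 esp6 rxrpc` and confirm with `lsmod`; hosts where unloading fails because the modules are in use need the vendor patches instead.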

The US Federal Communications Commission extended the period during which suppliers of banned foreign‑made consumer‑grade routers may deliver software and firmware security updates to US users until at least January 1, 2029, according to Infosecurity. The allowance is narrowly scoped to harm‑mitigating updates—such as vulnerability patches and compatibility fixes—and excludes new features. The extension also applies to certain foreign‑made drone systems and components banned earlier, with devices that have conditional DoD or DHS approvals remaining exceptions. The move aims to reduce risks posed by unpatched, end‑of‑life devices while longer‑term replacement and mitigation efforts proceed.

Google Cloud: High-Performance Data and Secure AI Foundations

Google introduced Cloud Storage Rapid, a family of object storage capabilities for data‑intensive AI and analytics, featuring Rapid Bucket for ultra‑low latency and high throughput, and Rapid Cache to accelerate reads from existing buckets without code changes; measured gains include fewer blocked GPU hours and faster checkpoint operations, as described in Cloud Storage Rapid. In parallel, Google outlined a topology‑aware, probabilistic availability model for industrial‑scale TPU training that shifts reliability from per‑instance to cluster‑level guarantees, enabling predictable capacity slices for trillion‑parameter workloads and improving overall training goodput, as detailed in Cluster reliability.

Google expanded Database Center into a unified, AI‑native hub for managing large database fleets, with a Gemini‑powered interface that correlates signals and validates recommendations via a Testing agent to forecast the performance impact of changes before rollout, per Database Center. The company also announced general availability of PostgreSQL 18 in AlloyDB alongside a paid Extended Support offering that bridges community end‑of‑life with three additional years of High and Critical CVE protection, SLA coverage for eligible clusters, and the ability to create new clusters on legacy major versions, as outlined in AlloyDB PG18.

For the public sector, Google described a coordinated approach to move agencies from pilots to agentic systems by combining the AI Hypercomputer architecture, eighth‑generation TPUs, Virgo Networking, and expanded Google Distributed Cloud capabilities with compliance‑ready data and security controls—including Cloud Armor and Model Armor authorizations—to support resilient, scalable, and secure deployments, per Agentic foundation. Complementing this, Google Threat Intelligence Group reported industrialized adversary use of generative AI—from likely AI‑developed zero‑day exploits to polymorphic malware and agentic orchestration—and mapped supply‑chain risks to SAIF taxonomy categories, while highlighting defensive AI agents and disrupted operations, in the GTIG report.

AWS and Partners: GPU, Networking, and App Security Updates

AWS expanded access to high‑end training instances in managed notebooks: EC2 P4de is now available on SageMaker Studio in Tokyo, Singapore, and Frankfurt, offering larger GPU memory per device and reported gains of up to 60% in training performance and about 20% lower training cost versus P4d, per P4de expansion. In the US East (N. Virginia) region, EC2 P6‑B200 instances, each equipped with eight NVIDIA Blackwell GPUs, are generally available for SageMaker Studio notebooks, targeting fine‑tuning and interactive development of large models with up to 2x training performance over P5en, according to P6-B200 on Studio.

For mid‑sized experimentation and inference, AWS broadened regional availability of EC2 G6e (L40s GPUs) and G6 (L4 GPUs) in SageMaker Studio notebooks, with reported performance gains over prior generations to support generative AI fine‑tuning, interactive training, and latency‑sensitive inference, per G6e expansion and G6 expansion.

Networking throughput for multi‑AZ architectures improved as ENA Express added support for traffic between Availability Zones, lifting single‑flow cross‑AZ bandwidth up to 25 Gbps using the SRD protocol for both TCP and UDP, per ENA Express. At the application edge, AWS WAF introduced dynamic label interpolation to forward WAF classification signals and contextual attributes in headers or response bodies using a single rule, simplifying rule management and enabling adaptive responses, as described in WAF labels.
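ENA Express is opt-in per network interface. As a hedged configuration sketch using the AWS CLI option that predates this cross‑AZ change (the ENI ID is a placeholder; verify current flag syntax against AWS documentation before use):

```
# Enable ENA Express (SRD), including for UDP, on an existing network interface.
# eni-0123456789abcdef0 is a placeholder ID.
aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0123456789abcdef0 \
    --ena-srd-specification 'EnaSrdEnabled=true,EnaSrdUdpSpecification={EnaSrdUdpEnabled=true}'
```

With the setting enabled on interfaces at both ends, eligible TCP and UDP flows between Availability Zones can use SRD and reach the higher single-flow bandwidth described above.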

To streamline modernization during migrations, AWS Transform added an agentic AI capability to analyze source code, generate Dockerfiles, build images, publish to Amazon ECR, emit Terraform and Helm artifacts, and integrate CVE scanning, supporting at‑scale replatforming to containers alongside rehosting paths, per AWS Transform. For AI development workflows, Anthropic’s native console, APIs, and early‑access features are now accessible directly through AWS accounts via the generally available Claude Platform on AWS—operated by Anthropic outside the AWS security boundary but integrated with IAM, consolidated billing, and CloudTrail, as outlined in Claude Platform.

Beyond AWS, Microsoft and Red Hat highlighted Azure Red Hat OpenShift as a foundation for moving AI from pilots to production with unified governance, identity, and security; customer deployments include Banco Bradesco and Topicus, with features such as OpenShift Virtualization, Zero Trust identity, Confidential Containers, expanded NVIDIA GPU support, and new regional availability, per Azure Red Hat OpenShift.