< ciso brief />
Critical ICS Fixes, AI Agent Safeguards, and Supply-Chain Threats

Coverage: 17 Mar 2026 (UTC)

Industrial risk and AI safety shared the spotlight. A critical flaw in Schneider Electric’s SCADAPack x70 RTUs drew prominent attention in a new CISA advisory with patches and concrete mitigations, while vendors advanced platform hardening across AI agents and global inference infrastructure. At the same time, researchers tracked a broad supply‑chain compromise across open‑source ecosystems, and ransomware actors continued refining social‑engineering and loader tactics.

Critical fixes for industrial control

A newly disclosed vulnerability in SCADAPack x70 RTUs and the associated RemoteConnect tooling carries a CVSS score of 9.8 and allows code execution over Modbus TCP on unpatched devices. The advisory details vendor fixes in RemoteConnect R3.4.2 and SCADAPack firmware 9.12.2, along with practical defenses aimed at reducing the blast radius in operational environments: network segmentation, the RTU firewall service, disabling logic debug, and restricting remote access. Because the affected devices are deployed globally and can sit in energy‑sector processes, timely patching and access control are emphasized.
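The segmentation guidance can be illustrated generically. The fragment below is a hypothetical nftables ruleset, not vendor guidance: the addresses and subnet are placeholders, and the only grounded detail is that Modbus TCP uses port 502. It permits Modbus traffic into the RTU subnet from a single allowlisted engineering workstation and drops everything else at the OT boundary.

```
# Illustrative OT boundary rules (placeholder addresses, not vendor guidance).
# Only the engineering workstation 10.0.10.5 may reach Modbus TCP (502)
# on the RTU subnet 10.0.20.0/24; all other forwarded traffic is dropped.
table inet ot_filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ip saddr 10.0.10.5 ip daddr 10.0.20.0/24 tcp dport 502 accept
  }
}
```

A default‑drop forward policy of this shape is the core of the advisory’s segmentation advice: the RTUs remain reachable for legitimate engineering traffic while opportunistic Modbus scans from elsewhere on the network never arrive.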

Additional industrial updates focus on third‑party components and SDKs used in automation environments. Festo’s Automation Suite bundled CODESYS components with numerous high‑severity issues across access control, deserialization, and memory safety; the republished CISA advisory points to updated CODESYS versions and cautions against opening untrusted project files. Siemens shipped SICAM SIAPP SDK v2.1.7 to address out‑of‑bounds writes, buffer overflows, length‑handling flaws, command injection, and unauthorized deletions; the CISA note reiterates that many conditions are exploitable only when APIs are misused or hardening is absent. Schneider Electric also fixed a hard‑coded credentials issue in EcoStruxure IT Data Center Expert, with the CISA advisory highlighting risk when SOCKS Proxy is enabled and urging verification that the feature remains off by default. Why it matters: these advisories surface exploitable pathways common to OT deployments—weak network boundaries, legacy components, and misconfigurations—while providing concrete remediation steps.

Agent safeguards and global AI infrastructure

Nvidia announced NemoClaw, an initiative to run OpenClaw-based agents with isolation via the open‑source OpenShell runtime. According to CSO Online, OpenShell layers kernel‑level sandboxing with a privacy router that inspects behaviors and blocks unsafe actions or data exfiltration. The approach aims to let enterprises place agent workloads closer to sensitive data while reducing risk; commentators note that observability, policy enforcement, rollback, and auditability remain requirements for production‑grade deployments.

Google Cloud introduced the preview of multi‑cluster GKE Inference Gateway, which routes AI inference across regions and clusters based on hardware fit and objectives such as latency or capacity. The Google Cloud design uses Kubernetes custom resources—InferencePool and InferenceObjective—and attaches backend policies for metrics‑driven load balancing, in‑flight limits, and failover. Centralized config in a single cluster helps platform teams coordinate global policies without altering target clusters, supporting disaster recovery and pooled GPU/TPU utilization.
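The resource kinds named in the announcement suggest a declarative shape like the sketch below. This is a hypothetical illustration only: the kinds InferencePool and InferenceObjective come from the source, but the apiVersion and field names are assumptions and should be checked against the Google Cloud and Gateway API Inference Extension documentation before use.

```yaml
# Hypothetical sketch: kinds from the announcement; apiVersion and
# spec fields are illustrative placeholders, not authoritative schema.
apiVersion: inference.networking.k8s.io/v1alpha1   # placeholder version
kind: InferencePool
metadata:
  name: llm-pool
spec:
  selector:
    app: llm-server        # pods serving the model backends
  targetPortNumber: 8000   # inference port on each backend
---
apiVersion: inference.networking.k8s.io/v1alpha1   # placeholder version
kind: InferenceObjective
metadata:
  name: low-latency
spec:
  poolRef:
    name: llm-pool         # objective applies to the pool above
  priority: 1              # illustrative routing-objective knob
```

Because these resources live in a single config cluster, platform teams can adjust routing objectives centrally while the target clusters serving traffic remain untouched, which is what enables the failover and pooled-accelerator behavior described above.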

Modernization and AI operations also advanced on AWS. The vendor consolidated mainframe refactor, replatform, and reimagine paths under a single experience and integrated Blu Insights into AWS Transform; the latest AWS update highlights complimentary code transformation and removal of prior certification gates to speed proofs of concept. Separately, voice AI agents in the contact‑center platform now support 13 more locales—bringing coverage to 40—per the Amazon Connect release, expanding multilingual self‑service for global deployments.

For agent runtime control, the new InvokeAgentRuntimeCommand API allows shell command execution inside active AgentCore Runtime sessions with streamed output, simplifying workflows that combine deterministic tools with LLM reasoning. The AWS note advises least‑privilege controls, input validation, and observability, especially where agents and commands share resources.

Software supply chains and social engineering collide

Researchers attributed a renewed GlassWorm campaign to a single actor after finding 433 compromised components across GitHub repos, npm packages, and VSCode/OpenVSX extensions. As reported by BleepingComputer, the attack often hijacks GitHub accounts to inject obfuscated payloads, then publishes trojanized packages and extensions. Components poll a Solana address for instructions that rotate payload URLs, ultimately running a JavaScript information stealer that targets wallet data, access tokens, credentials, and SSH keys. Detection tips include searching for the marker variable lzcdrtfxyqiplpd, suspicious i.js files, unexpected Node.js installations in home directories, and anomalous commit metadata.
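The file‑level indicators above lend themselves to a quick triage sweep. The following Python sketch is illustrative, not from the report: it walks a directory tree and flags the two indicators BleepingComputer lists, files named i.js and JavaScript sources containing the marker variable lzcdrtfxyqiplpd.

```python
import os

MARKER = "lzcdrtfxyqiplpd"  # marker variable reported in GlassWorm payloads

def scan_tree(root):
    """Walk a directory tree and collect the two file-level GlassWorm
    indicators: files named i.js, and .js sources containing MARKER."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name == "i.js":
                hits.append(("suspicious filename", path))
                continue
            if name.endswith(".js"):
                try:
                    with open(path, "r", encoding="utf-8", errors="ignore") as f:
                        if MARKER in f.read():
                            hits.append(("marker string", path))
                except OSError:
                    pass  # unreadable file: skip rather than abort the sweep
    return hits
```

Pointing such a sweep at editor extension directories (for example ~/.vscode/extensions) and project node_modules trees covers the ecosystems named above; the anomalous-commit and home-directory Node.js indicators require separate checks.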

LeakNet broadened its initial access by adopting the ClickFix technique and a bring‑your‑own‑runtime Deno loader. According to The Hacker News, victims are tricked into running an msiexec command that installs Deno and executes Base64‑encoded JavaScript in memory, followed by DLL sideloading, PsExec‑based lateral movement, and S3‑backed staging/exfiltration before ransomware deployment. Why it matters: the combination of social engineering and signed runtimes reduces on‑disk artifacts and complicates prevention based on executable blocklists.

Mobile fraud operators are bypassing app‑level defenses by hooking Android’s runtime. Infosecurity Magazine reports that LSPosed‑based modules (e.g., “Digital Lutera”) intercept SMS, spoof device identifiers, and inject fake records to defeat SIM‑binding, enabling real‑time account takeovers and payment fraud. Because system hooks persist beyond app reinstalls and signatures remain intact, defenders need OS‑level integrity checks, carrier‑assisted validation, and stronger MFA.

Regionally focused threats continued. The Hacker News covered Genians’ attribution of a spear‑phishing campaign to Konni: a malicious LNK fetches AutoIt‑based EndRAT with scheduled‑task persistence and selective propagation through victims’ KakaoTalk accounts. In a separate operational mishap, South Korea’s National Tax Service inadvertently exposed a seized wallet’s recovery phrase in a published photo, after which funds were transferred from the address; Schneier highlights the need for strict redaction and institutional custody processes.

On the policy front, the Council of the European Union sanctioned companies and individuals in China and Iran tied to widespread compromises and influence operations. The BleepingComputer account cites asset freezes, financing prohibitions, and travel bans as tools to constrain the listed actors and align with prior designations by partners.

AI runtime risks and guardrail testing

Researchers outlined multiple AI platform risks across execution sandboxes and ecosystem tools. A BeyondTrust report, summarized by The Hacker News, shows DNS‑mediated data exfiltration from Bedrock’s AgentCore Code Interpreter sandbox and recommends VPC isolation and DNS firewalls; an account‑takeover flaw in LangSmith (CVE‑2026‑25750) was fixed in 0.12.71; and unauthenticated RCEs in open‑source SGLang (CVE‑2026‑3059/3060/3989) prompted CERT/CC guidance to restrict ZeroMQ access and segment networks. The common thread is tightening identity, egress, and component exposure while applying least privilege and monitoring for anomalous runtime behavior.
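DNS‑mediated exfiltration of the kind described typically encodes stolen data into query labels. The heuristic below is a coarse, generic sketch, not BeyondTrust’s method: it flags query names with an unusually long or high‑entropy label, two common signs of data tunneled through DNS, and would sit behind the recommended DNS firewall as an additional monitoring layer.

```python
import math
from collections import Counter

def label_entropy(label):
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious(qname, max_label=40, entropy_threshold=3.5):
    """Coarse heuristic: flag query names containing a very long label,
    or a label of 16+ chars whose entropy suggests encoded data rather
    than a human-chosen hostname. Thresholds are illustrative."""
    for label in qname.rstrip(".").split("."):
        if len(label) > max_label:
            return True
        if len(label) >= 16 and label_entropy(label) > entropy_threshold:
            return True
    return False
```

Thresholds like these trade false positives against recall and would need tuning per environment; the structural mitigations from the report (VPC isolation, DNS firewalls, restricted egress) remain the primary controls.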

Beyond implementation bugs, model guardrails remain brittle under paraphrased prompts. Unit 42 used a genetic‑style fuzzing method to generate meaning‑preserving variants of disallowed requests and found highly variable evasion rates across open and closed models, including extreme failures in some model‑keyword pairs. The authors recommend narrowing assistant scope, layering controls, validating outputs, rate‑limiting tool access, and adopting continuous adversarial testing. Why it matters: guardrails are probabilistic and require ongoing measurement rather than one‑time certification.
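The shape of such a fuzzing loop can be shown with a toy sketch. Everything below is a stand‑in, not Unit 42’s harness: the guardrail is stubbed as a keyword check, and the mutation operators are trivial string rewrites where a real campaign would use LLM‑generated, meaning‑preserving paraphrases scored against a live model.

```python
import random

# Toy mutation operators standing in for LLM paraphrasing.
MUTATIONS = [
    lambda s: s.replace("how do I", "what is the procedure to"),
    lambda s: s.replace("make", "construct"),
    lambda s: "Hypothetically, " + s,
    lambda s: s + " Answer step by step.",
]

def refused(prompt):
    """Stub guardrail: refuses only if a blocked keyword appears verbatim,
    mimicking the brittleness that paraphrase fuzzing exploits."""
    return "make" in prompt

def fuzz(seed, generations=20, population=8, rng=None):
    """Mutation-only genetic-style search: repeatedly rewrite a population
    of prompt variants until one slips past the (stub) guardrail.
    Returns an evading variant, or None if the budget is exhausted."""
    rng = rng or random.Random(0)
    pop = [seed] * population
    for _ in range(generations):
        pop = [rng.choice(MUTATIONS)(p) for p in pop]
        for p in pop:
            if not refused(p):
                return p
    return None
```

Even this toy version illustrates the report’s point: a filter that blocks one phrasing passes a semantically equivalent rewrite, which is why the authors call for layered controls and continuous adversarial testing rather than one‑time certification.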