Incidents
A ransomware-as-a-service group is expanding its playbook. In a post on its leak site, MedusaLocker openly invited penetration testers and insiders to supply entry points into corporate networks, according to Fortra. The advert prioritizes candidates who already hold valid access to enterprise environments, effectively courting initial access brokers and compromised employees. That approach outsources one of the most time‑consuming stages of an intrusion—gaining a foothold—allowing operators to move more quickly to ransomware deployment and extortion. Prior government reporting has linked MedusaLocker activity to weaknesses in remote desktop protocol (RDP), underscoring the practical importance of hardening internet‑facing remote access. The operational takeaway is direct: minimize exposed RDP and VPN services, keep them patched, apply strong authentication and least privilege, and monitor for insider misuse. Where penetration testing is used, keep it authorized, scoped, and monitored, and rigorously evaluate third‑party access to avoid becoming a candidate for resale by brokers.
Platforms
On the consumer side of large platforms, a newly introduced location‑sharing feature is drawing scrutiny. Check Point examined Instagram’s Friend Map, an opt‑in capability that aggregates and visualizes where users go. The analysis warns that pattern‑of‑life maps raise meaningful safety risks, from stalking and doxxing to targeted crime, and can also surface behavioral insights that aid social engineering. The guidance is pragmatic: disable sharing or restrict it to trusted contacts, regularly audit app and device location permissions, and consider enterprise policies that limit persistent location collection for staff in sensitive roles. The broader lesson is that small changes in default sharing or visualized context can turn benign data into a security liability.
In enterprise cloud security, Fortinet outlined five common gaps and how its Lacework FortiCNAPP aims to address them: fragmented visibility, slow or siloed misconfiguration detection, disjointed control‑plane and runtime protections, uneven application‑layer defenses, and weak integration with development pipelines. The post highlights continuous asset discovery and cross‑cloud inventory, CSPM that ranks findings and maps to frameworks such as CIS, NIST, and PCI, and runtime capabilities like file integrity and process behavior monitoring. It also describes cloud detection and response using Kubernetes audit trails and provider logs, with composite alerts correlating host telemetry and cloud events to raise fidelity. For apps and APIs, the platform integrates WAF, bot mitigation, and behavioral analysis, and ties into CI/CD to scan infrastructure‑as‑code and images pre‑deployment. Fortinet positions this consolidation as a way to reduce tooling sprawl and accelerate investigation and remediation without slowing cloud delivery.
Policies
Critical infrastructure operators received new guidance on a foundational control: asset inventory. CISA, working with U.S. and international partners, published a joint document to help owners and operators of operational technology (OT) environments create and maintain authoritative inventories and taxonomies. The guidance focuses on industrial control systems, instrumentation, and automation across sectors such as water, energy, manufacturing, and transportation. It aligns with the Cross‑Sector Cybersecurity Performance Goals and recommends standardized taxonomies, continuous inventory processes, and maintaining source‑of‑truth asset records. Improved visibility supports faster incident response, better vulnerability management, and more consistent risk prioritization, strengthening resilience and interoperability across organizations and sectors.
Research
Automation and AI safety were at the center of the latest Smashing Security episode. One segment examined demonstrations where crafted Google Calendar invitations can trigger Workspace agents or connected assistants, turning a calendar entry into a control surface for smart‑home devices. The discussion points to default behaviors and broad agent permissions as risk multipliers, and suggests mitigations such as disabling auto‑accept, tightening agent privileges with least‑privilege policies, and monitoring automation logs. Another segment reviewed a reported case in which an individual followed an unsafe ChatGPT recommendation involving pesticide as seasoning, resulting in medical treatment—an example of harmful hallucinations and the limits of content filters. The practical advice remains consistent: treat AI outputs as unverified until validated, insert human oversight where recommendations can affect safety, and restrict automated actions to the minimal scope necessary.