All news with #appsec tag
Tue, December 9, 2025
Further Hardening of Mali GPU Drivers with SELinux
🔒 Google’s Android Security and Privacy team collaborated with Arm to analyze the Mali GPU driver and implement SELinux-based IOCTL filtering that reduces the kernel driver's attack surface. The team categorized IOCTLs as unprivileged, instrumentation, and restricted, and used a staged rollout—first opt-in testing via a gpu_harden attribute, then opt-out with a gpu_debug domain—to validate behavior in real devices. The post provides step-by-step guidance for vendors to adopt a platform-level macro, define device-specific IOCTL lists, and enforce policy to keep deprecated and debug IOCTLs unreachable in production.
Tue, December 9, 2025
Festo LX Appliance XSS Vulnerability (CVE-2021-23414)
⚠️ Festo SE & Co. KG's LX Appliance contains a cross-site scripting (XSS) vulnerability tied to the video.js library (CVE-2021-23414) that can allow crafted course content to execute scripts in high-privilege user sessions. The issue affects LX Appliance versions prior to June 2023 and has a CVSS v3.1 base score of 6.1. Festo coordinated disclosure with CERT@VDE and published advisory FSA-202301. Administrators should update affected appliances and apply recommended network isolation and secure remote access controls.
Mon, December 8, 2025
Debunking Common Cloud Security Misconceptions Today
🔒 In a December 8, 2025 Fortinet post, Ali Bidabadi and Carl Windsor dispel persistent myths about cloud security and emphasize the shared responsibility model. They warn that simple misconfigurations — not sophisticated attacks — often cause large exposures and that cloud-native controls alone leave gaps. The authors recommend adopting CNAPP, third-party NGFW and WAF solutions, and continuous visibility to reduce risk across multi-cloud and hybrid environments.
Fri, December 5, 2025
AI Agents in CI/CD Can Be Tricked into Privileged Actions
⚠️ Researchers at Aikido Security discovered that AI agents embedded in CI/CD workflows can be manipulated to execute high-privilege commands by feeding user-controlled strings (issue bodies, PR descriptions, commit messages) directly into prompts. Workflows pairing GitHub Actions or GitLab CI/CD with tools like Gemini CLI, Claude Code, OpenAI Codex or GitHub AI Inference are at risk. The attack, dubbed PromptPwnd, can cause unintended repository edits, secret disclosure, or other high-impact actions; the researchers published detection rules and a free scanner to help teams remediate unsafe workflows.
Fri, December 5, 2025
The CISO Paradox: Enabling Innovation, Managing Risk
🔐 CISOs must stop being the “department of no” and enable rapid product delivery without introducing new risks. Security needs to be embedded early through close collaboration with product teams, clear business-aligned risk tolerances, and pragmatic guardrails. Assign a dedicated security partner to each product, integrate CI/CD and Infrastructure-as-Code enforcement, and automate policy checks so safe changes proceed while risky ones fail with actionable remediation.
Thu, December 4, 2025
Skills Shortages Outpace Headcount in Cybersecurity 2025
🔍 ISC2’s 2025 Cybersecurity Workforce Study, based on responses from more than 16,000 professionals, reports that 59% of organizations now face critical or significant cyber-skills shortages, up from 44% last year. Technical gaps are most acute in AI (41%), cloud security (36%), risk assessment (29%) and application security (28%), with governance, risk and compliance and security engineering each at 27%. The survey cites a dearth of talent (30%) and budget shortfalls (29%) as leading causes and links shortages to concrete impacts—88% reported at least one significant security incident. Despite concerns, headcount appears to be stabilizing and many professionals view AI as an opportunity for specialization and career growth.
Mon, December 1, 2025
The CISO’s Paradox: Enabling Innovation While Managing Risk
🔒 Security leaders must shift from gatekeeper to partner, embedding practical risk controls early in product lifecycles so teams can deliver fast without exposing the business. By defining business-language risk tolerances, standardizing identity and logging, and automating guardrails in CI/CD and infrastructure-as-code, governance becomes an accelerator rather than a bottleneck. Pre-vetted, secure-by-default templates, runtime shielding and risk-based telemetry make the secure path easier for developers while preserving production resilience.
Tue, November 25, 2025
How CloudGuard WAF Reduces Risk and Total Cost of Ownership
🔒 Check Point's CloudGuard WAF combines high prevention accuracy with reduced operational overhead to lower risk and total cost of ownership. In the WAF Comparison Project 2024–25 (1,040,242 legitimate requests across 692 sites, 13 vendors) it delivered ~99.4% detection and ~0.8% false positives. That accuracy, paired with less manual tuning and faster false-positive triage, cuts hidden expenses and breach exposure while protecting apps and APIs.
Mon, November 24, 2025
GhostAd: Hidden Google Play Adware Draining Devices
🔍 Check Point's Harmony Mobile Detection Team discovered a broad Android adware campaign on Google Play that operated as a persistent background advertising engine. Masquerading as benign utilities and emoji editors, the apps continued running after closure or reboot, quietly consuming battery and mobile data. The campaign, dubbed GhostAd, comprised at least 15 related apps, with five still available at discovery.
Thu, November 20, 2025
An Open Letter to Cybersecurity Vendors and Investors
🔊 The cybersecurity market is awash in noise: vendors and investors chase flashy pitches while the long-standing vulnerabilities that cause real breaches remain neglected. The author argues CISOs don’t buy technology so much as they buy reduced risk and confidence, so purchases must fit roadmaps, integrate cleanly, and be sustainable. He prioritizes visibility, identity, automation that empowers people, and tools that reinforce fundamentals like patching and segmentation. Hype, overlapping products, and complexity are rejected in favor of practical reliability.
Wed, November 19, 2025
Fortinet Adds AI-Driven Managed IPS Rules for AWS Cloud
🔒 Fortinet is an official launch partner for third-party rules on AWS Network Firewall, introducing Fortinet Managed IPS Rules powered by FortiGuard AI-Powered Security Services. The managed service uses AI/ML from FortiGuard Labs to automatically translate global threat telemetry into continuously updated IPS rules, removing manual tuning and improving detection timeliness. Deployment is fast via AWS Marketplace and integrates natively with AWS Network Firewall, helping teams scale protection across cloud workloads while supporting compliance objectives.
Wed, November 19, 2025
Cloudflare Outage Highlights Risks of Single-Vendor Reliance
🔍 An intermittent outage at Cloudflare on Nov. 18 briefly disrupted many major websites and forced some customers to pivot DNS and routing to preserve availability. Those provisional workarounds may have exposed origin infrastructure by bypassing edge protections such as WAFs and bot management. Security teams should review OWASP-related logs, emergency DNS changes, and any ad hoc services or devices introduced during the outage. The incident underscores single-vendor risk and the need for formal fallback plans.
Wed, November 19, 2025
Application Containment and Ringfencing for Zero Trust
🔒 Ringfencing, or granular application containment, enforces least privilege for authorized software by restricting file, registry, network, and interprocess access. It complements allowlisting by preventing misuse of trusted tools that attackers commonly weaponize, such as scripting engines and archivers. Effective rollout uses a monitoring agent, simulated denies, and phased enforcement to minimize operational disruption. Properly applied, containment reduces lateral movement, blocks mass exfiltration and ransomware encryption while preserving business workflows.
Mon, November 17, 2025
Best-in-Class GenAI Security: CloudGuard WAF Meets Lakera
🔒 The rise of generative AI introduces new attack surfaces that conventional security stacks were never designed to address. This post outlines how pairing CloudGuard WAF with Lakera's AI-risk controls creates layered protection by inspecting prompts, model interactions, and data flows at the application edge. The integrated solution aims to prevent prompt injection, sensitive-data leakage, and harmful content generation while maintaining application availability and performance.
Tue, November 11, 2025
Pixnapping vulnerability: Android screen-snooping risk
🔒 A newly disclosed exploit named Pixnapping (CVE-2025-48561) allows a malicious Android app with no special permissions to read screen pixels from other apps and reconstruct sensitive content. The attack chains intent-based off-screen rendering, translucent overlays, and a GPU compression timing side channel to infer pixel values. Google issued a September patch but researchers bypassed it, and a more robust fix is planned.
Fri, November 7, 2025
Expanding CloudGuard: Securing GenAI Application Platforms
🔒 Check Point expands CloudGuard to protect GenAI applications by extending the ML-driven, open-source CloudGuard WAF that learns from live traffic. The platform moves beyond traditional static WAFs to secure web interactions, APIs (REST, GraphQL) and model-integrated endpoints with continuous learning and high threat-prevention accuracy. This evolution targets modern attack surfaces introduced by generative AI workloads and APIs.
Thu, November 6, 2025
November 2025 Fraud and Scams Advisory — Key Trends
🔔 Google’s Trust & Safety team published a November 2025 advisory describing rising online scam trends, attacker tactics, and recommended defenses. Analysts highlight key categories — online job scams, negative review extortion, AI product impersonation, malicious VPNs, fraud recovery scams, and seasonal holiday lures — and note increased misuse of AI to scale fraud. The advisory outlines impacts including financial theft, identity fraud, and device or network compromise, and recommends protections such as 2‑Step Verification, Gmail phishing defenses, Google Play Protect, and Safe Browsing Enhanced Protection.
Tue, November 4, 2025
Microsoft Teams Vulnerabilities Expose Trust Abuse Today
🔒 Check Point Research identified multiple vulnerabilities in Microsoft Teams that could let attackers impersonate executives, manipulate message content, and spoof in-app notifications. The flaws exploit trust mechanisms built into real-time collaboration features used by more than 320 million monthly active users, turning expectations of authenticity into an attack vector. Researchers emphasize that trust alone isn’t a security strategy and urge rapid remediation by vendors and mitigations by organizations. Administrators should prioritize updates, review messaging policies, and increase user awareness to reduce exposure.
Tue, November 4, 2025
Modern Software Supply-Chain Attacks and Impact Today
🔒 Modern supply-chain incidents like the Chalk and Debug hijacks show that impact goes far beyond direct financial theft. Response teams worldwide paused work, scanned environments, and executed remediation efforts even though researchers at Socket Security traced the attackers' on-chain haul to roughly $600. The larger cost is operational disruption, repeated investigations, and erosion of trust across OSS ecosystems. Organizations must protect people, registries, and CI/CD pipelines to contain downstream contamination.
Fri, October 31, 2025
OpenAI Unveils Aardvark: GPT-5 Agent for Code Security
🔍 OpenAI has introduced Aardvark, an agentic security researcher powered by GPT-5 that autonomously scans source code repositories to identify vulnerabilities, assess exploitability, and propose targeted patches that can be reviewed by humans. Embedded in development pipelines, the agent monitors commits and incoming changes continuously, prioritizes threats by severity and likely impact, and attempts controlled exploit verification in sandboxed environments. Using OpenAI Codex for patch generation, Aardvark is in private beta and has already contributed to the discovery of multiple CVEs in open-source projects.