All news with #ai security tag
Thu, December 4, 2025
Skills Shortages Outpace Headcount in Cybersecurity 2025
🔍 ISC2’s 2025 Cybersecurity Workforce Study, based on responses from more than 16,000 professionals, reports that 59% of organizations now face critical or significant cyber-skills shortages, up from 44% last year. Technical gaps are most acute in AI (41%), cloud security (36%), risk assessment (29%) and application security (28%), with governance, risk and compliance and security engineering each at 27%. The survey cites a dearth of talent (30%) and budget shortfalls (29%) as leading causes and links shortages to concrete impacts—88% reported at least one significant security incident. Despite concerns, headcount appears to be stabilizing and many professionals view AI as an opportunity for specialization and career growth.
Thu, December 4, 2025
Building a Production-Ready AI Security Foundation
🔒 This guide presents a practical defense-in-depth approach to move generative AI projects from prototype to production by protecting the application, data, and infrastructure layers. It includes hands-on labs demonstrating how to deploy Model Armor for real-time prompt and response inspection, implement Sensitive Data Protection pipelines to detect and de-identify PII, and harden compute and storage with private VPCs, Secure Boot, and service perimeter controls. Reusable templates, automated jobs, and integration blueprints help teams reduce prompt injection, data leakage, and exfiltration risk while aligning operational controls with compliance and privacy expectations.
Thu, December 4, 2025
Protecting LLM Chats from the Whisper Leak Attack Today
🛡️ Recent research shows the “Whisper Leak” attack can infer the topic of LLM conversations by analyzing timing and packet patterns during streaming responses. Microsoft’s study tested 30 models and thousands of prompts, finding topic-detection accuracy from 71% to 100% for some models. Providers including OpenAI, Mistral, Microsoft Azure, and xAI have added invisible padding to network packets to disrupt these timing signals. Users can further protect sensitive chats by using local models, disabling streaming output, avoiding untrusted networks, or using a trusted VPN and up-to-date anti-spyware.
Thu, December 4, 2025
Indirect Prompt Injection: Hidden Risks to AI Systems
🔐 The article explains how indirect prompt injection — malicious instructions embedded in external content such as documents, images, emails and webpages — can manipulate AI tools without users seeing the exploit. It contrasts indirect attacks with direct prompt injection and cites CrowdStrike's analysis of over 300,000 adversarial prompts and 150 techniques. Recommended defenses include detection, input sanitization, allowlisting, privilege separation, monitoring and user education to shrink this expanding attack surface.
Thu, December 4, 2025
How Companies Can Prepare for Emerging AI Security Threats
🔒 Generative AI introduces new attack surfaces that alter trust relationships between users, applications and models. Siemens' pentest and security teams differentiate Offensive Security (targeted technical pentests) from Red Teaming (broader organizational simulations of real attackers). Traditional ML risks such as image or biometric misclassification remain relevant, but experts now single out prompt injection as the most serious threat — simple crafted inputs can leak system prompts, cause misinformation, or convert innocuous instructions into dangerous command injections.
Wed, December 3, 2025
Android expands in-call scam protection to banks and fintech
🔒 Android is expanding its pilot for in-call scam protection that detects when users launch participating financial apps while screen sharing during calls from unsaved numbers. The feature warns users, offers a one-tap end-call and stop-sharing option, and enforces a 30-second pause to disrupt social engineering. After UK success and pilots in Brazil and India, Google is rolling pilots with US fintechs including Cash App and banks like JPMorganChase.
Wed, December 3, 2025
Fortinet Named Challenger in Gartner Email Security MQ
📧 Fortinet was named a Challenger in the 2025 Gartner Magic Quadrant for Email Security, reflecting continued progress across its email protection portfolio. FortiMail Email Security and FortiMail Workspace Security combine AI-native detection, sandboxing, DMARC, enhanced BEC and account takeover defenses, and flexible on-premises and cloud deployment options. The company positions this suite as a cost-effective, integrated alternative that also extends protection to web browsers, cloud storage, and collaboration apps.
Wed, December 3, 2025
Brazil Hit by WhatsApp Worm and RelayNFC Fraud Campaign
🔒 Water Saci has shifted to a layered infection chain that uses HTA files and malicious PDFs delivered via WhatsApp to deploy a banking trojan in Brazil. The actors moved from PowerShell to a Python-based worm that propagates through WhatsApp Web, while an MSI/AutoIt installer and process-hollowing techniques load the trojan only on Portuguese (Brazil) systems. Trend Micro links the behavior to Casbaneiro-style features and notes possible use of code-translation or AI tools to port scripts. In parallel, a React Native Android strain named RelayNFC executes real-time NFC APDU relays to enable contactless payment fraud.
Wed, December 3, 2025
Adversarial Poetry Bypasses AI Guardrails Across Models
✍️ Researchers from Icaro Lab (DexAI), Sapienza University of Rome, and Sant’Anna School found that short poetic prompts can reliably subvert AI safety filters, in some cases achieving 100% success. Using 20 crafted poems and the MLCommons AILuminate benchmark across 25 proprietary and open models, they prompted systems to produce hazardous instructions — from weapons-grade plutonium to steps for deploying RATs. The team observed wide variance by vendor and model family, with some smaller models surprisingly more resistant. The study concludes that stylistic prompts exploit structural alignment weaknesses across providers.
Wed, December 3, 2025
Secure Integration of AI into Operational Technology
🔒 CISA and the Australian Signals Directorate released joint guidance, Principles for the Secure Integration of Artificial Intelligence in Operational Technology, to help critical infrastructure owners and operators balance AI benefits with OT safety and reliability. The guidance focuses on ML, LLMs, and AI agents while remaining applicable to traditional statistical and logic-based systems. It emphasizes four core areas—Understand AI, Assess AI Use in OT, Establish AI Governance, and Embed Safety and Security—and recommends integrating AI considerations into incident response and compliance activities.
Wed, December 3, 2025
Guide: Secure Integration of AI in Operational Technology
🔒 The Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre published a joint guide outlining four principles to safely integrate AI into operational technology (OT). The guidance emphasizes educating personnel, assessing AI uses and data risks, establishing governance, and embedding safety and security. It focuses on ML, LLMs, and AI agents while remaining applicable to other automation approaches. CISA and international partners encourage OT owners and operators to adopt these risk-informed practices to protect critical infrastructure.
Wed, December 3, 2025
AI Phishing Factories: Tools Fueling Modern BEC Attacks
🔒 Today's low-cost AI services have industrialized cybercrime, enabling novice actors to produce highly convincing BEC and phishing content at scale. Tools such as WormGPT, FraudGPT, and SpamGPT remove traditional barriers by generating personalized messages, exploit code, and automated delivery that evade static filters. Defensive detection alone is insufficient when signatures continually mutate; organizations must protect identity and neutralize credential exposure. Join the webinar to learn targeted signatures and access-point controls to stop attacks even after a click.
Wed, December 3, 2025
Global Execs Rank Disinformation, AI and Cyber Risks
🧭 Business leaders across 116 economies told the World Economic Forum that misinformation/disinformation, cyber insecurity and the adverse outcomes of AI rank among the top near-term threats to national stability. The WEF’s Executive Opinion Survey 2025 canvassed 11,000 executives, who placed technological risks alongside economic and societal concerns. Respondents flagged AI-driven deepfakes, model exploitation and AI-assisted cyber techniques as amplifiers of both disinformation campaigns and critical-system threats.
Wed, December 3, 2025
AI, Automation and Integration: Cyber Protection 2026
🔒 In 2025 threat actors increasingly used AI—deepfakes, automated scripts, and AI-generated lures—to scale ransomware, phishing, and data-exfiltration attacks, exposing gaps between siloed security and backup tools. Publicly disclosed ransomware victims rose sharply and phishing remained the dominant initial vector, overwhelming legacy protections. Organizations are moving to AI-driven automation and unified detection, response, and recovery platforms to shorten dwell time and streamline compliance.
Wed, December 3, 2025
Chopping AI Down to Size: Practical AI for Security
🪓 Security teams face a pivotal moment as AI becomes embedded across products while core decision-making remains opaque and vendor‑controlled. The author urges building and tuning small, controlled AI‑assisted utilities so teams can define training data, risk criteria, and behavior rather than blindly trusting proprietary models. Practical skills — basic Python, ML literacy, and active model engagement — are framed as essential. The piece concludes with an invitation to a SANS 2026 keynote for deeper, actionable guidance.
Wed, December 3, 2025
Picklescan Flaws Enable Malicious PyTorch Model Execution
⚠️ Picklescan, a Python pickle scanner, has three critical flaws that can be abused to execute arbitrary code when loading untrusted PyTorch models. Discovered by JFrog researchers, the issues — a file-extension bypass (CVE-2025-10155), a ZIP CRC bypass (CVE-2025-10156) and an unsafe-globals bypass (CVE-2025-10157) — let attackers present malicious models as safe. The vulnerabilities were responsibly disclosed on June 29, 2025 and fixed in Picklescan 0.0.31 on September 9; users should upgrade and review model-loading practices and downstream automation that accepts third-party models.
Wed, December 3, 2025
AI Security Posture Management: A Practical Buyer's Guide
🔒 AI-SPM is emerging to protect AI/ML pipelines, cloud-hosted models and large datasets without moving data. The guide outlines core capabilities — agentless access, data classification, pipeline protection, model monitoring and compliance checks — and summarizes offerings from vendors such as Cyera, LegitSecurity, Microsoft, Orca and Palo Alto Networks. It also advises reviewing standards like MITRE ATLAS and OWASP LLM when evaluating tools.
Tue, December 2, 2025
The AI Fix #79 — Gemini 3, poetry jailbreaks, robot safety
🎧 In episode 79 of The AI Fix, hosts Graham Cluley and Mark Stockley examine the latest surprises from Gemini 3, including boastful comparisons, hallucinations about the year, and reactions from industry players. They also discuss an arXiv paper proposing adversarial poetry as a universal jailbreak for LLMs and the ensuing debate over its provenance. Additional segments cover robot-versus-appliance antics, a controversial AI teddy pulled from sale after disturbing interactions with children, and whether humans need safer robots — or stricter oversight.
Tue, December 2, 2025
ChatGPT Experiences Worldwide Outage; Conversations Lost
⚠️OpenAI's ChatGPT experienced a global outage that caused errors and disappearing conversations for many users. Many reported seeing messages such as "something seems to have gone wrong" and "There was an error generating a response," while some conversations vanished and new messages kept loading indefinitely. DownDetector recorded over 30,000 reports, and OpenAI acknowledged elevated errors and said engineers were working on a fix. Service began returning as of 15:14 ET, though performance remained slow.
Tue, December 2, 2025
Build Forward-Thinking Cybersecurity Teams for Tomorrow
🧠 The democratization of advanced attack capabilities means cybersecurity leaders must rethink talent strategies now. Ann Johnson argues the primary vulnerability in an AI-transformed landscape is human: teams must combine technical expertise with cognitive diversity to interrogate and adapt to probabilistic AI outputs. Organizations should change hiring, onboarding, retention, and continuous upskilling to create resilient, future-ready security teams.