All news with #ai security tag
Thu, December 11, 2025
How CISOs Justify Security Investments to the Board
🔒 CISOs must position security investments as strategic enablers that directly support corporate objectives rather than as purely technical upgrades. Presentations should connect proposed solutions to outcomes like entering new markets, protecting margins, ensuring compliance, and improving resilience. Use concrete scenarios, cost models, and recovery timelines to show how investments reduce the probability and impact of incidents while improving operational stability. Tailor messaging to the board’s maturity and speak in terms of risk, return, and shareholder value.
Thu, December 11, 2025
Smashing Security 447 — AI Abuse, Stalking and Museum Heist
🤖 On episode 447 of the Smashing Security podcast, Graham Cluley and guest Jenny Radcliffe explore how generative AI can enable stalking — reporting that Grok was used to doxx people, outline stalking strategies, and share revenge‑porn tips. They also recount the audacious Louvre crown jewels heist, where thieves abused assumptions about what ‘looks normal’. Graham additionally interviews Rob Edmondson about how Microsoft 365 misconfigurations and over‑privileged accounts create serious security exposures. The episode emphasizes practical lessons in threat modelling and access hygiene.
Wed, December 10, 2025
Google Ads Lead to ChatGPT/Grok Guides Installing AMOS
⚠️ Security researchers warn of a macOS infostealer campaign that uses Google search ads to push users toward publicly shared ChatGPT and Grok conversations containing malicious installation instructions. According to Kaspersky and Huntress, the ClickFix attack spoofs troubleshooting guides and decodes a base64 payload into a bash script that prompts for a password, then uses it to install the AMOS infostealer with root privileges. Users are urged not to execute commands copied from online chats and to verify any instructions before running them.
Wed, December 10, 2025
Building a security-first culture for agentic AI enterprises
🔒 Microsoft argues that as organizations adopt agentic AI, security must be a strategic priority that enables growth, trust, and continued innovation. The post identifies risks such as oversharing, data leakage, compliance gaps, and agent sprawl, and recommends three pillars: prepare for AI and agent integration, strengthen organization-wide skilling, and foster a security-first culture. It points to resources like Microsoft’s AI adoption model, Microsoft Learn, and the AI Skills Navigator to help operationalize these steps.
Wed, December 10, 2025
Palo Alto Networks Joins Google Unified Security Recommended
🤝 Google Cloud announced that Palo Alto Networks has joined the Google Unified Security Recommended program, bringing validated integrations across endpoint, network, and access security to deepen interoperability and choice for customers. The integration ingests telemetry from Cortex XDR, VM‑Series NGFWs and Prisma Access into Google Security Operations to drive AI-powered analytics, threat hunting and faster investigation and response. Customers can execute automated playbook actions and procure qualified solutions via the Google Cloud Marketplace for streamlined deployment.
Wed, December 10, 2025
How Staff+ Security Engineers Can Force-Multiply Impact
🔧 Staff+ security engineers should move from being individual problem-solvers to force multipliers by enabling others, automating enforcement, and shaping security strategy. The article recommends practical mechanisms—policy-as-code, paved paths, mentorship trees—and disciplined delegation to scale impact. It urges embedding security via shift-left practices, reusable reference architectures, and cautious AI-assisted tooling. During incidents, act as an orchestrator, set inflection points, and bridge teams with leadership to preserve strategic influence.
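As a concrete illustration of the policy-as-code mechanism the article recommends, here is a minimal, hypothetical sketch of a pre-deployment check such as might run in a CI pipeline. The resource shapes and rule names are invented for illustration and do not come from any specific tool.

```python
# Hypothetical policy-as-code check: flag resources that violate simple
# security rules before deployment. Resource shapes and rules here are
# illustrative assumptions, not taken from any real policy engine.

def check_resources(resources):
    """Return a list of (resource_name, violation) tuples."""
    violations = []
    for res in resources:
        if res.get("type") == "bucket" and res.get("public"):
            violations.append((res["name"], "public bucket is forbidden"))
        if res.get("type") == "firewall_rule" and "0.0.0.0/0" in res.get("allowed_cidrs", []):
            violations.append((res["name"], "rule open to the whole internet"))
    return violations

# An example infrastructure plan with two violations and one clean entry.
plan = [
    {"name": "logs", "type": "bucket", "public": True},
    {"name": "ssh-anywhere", "type": "firewall_rule", "allowed_cidrs": ["0.0.0.0/0"]},
    {"name": "internal-db", "type": "firewall_rule", "allowed_cidrs": ["10.0.0.0/8"]},
]

for name, why in check_resources(plan):
    print(f"BLOCK {name}: {why}")
```

The point of the pattern is that the rules live in version control and run automatically, so a Staff+ engineer's judgment is enforced at scale rather than applied one review at a time.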
Wed, December 10, 2025
Behind the Breaches: Case Studies of Modern Threat Actors
🔍 This analysis examines leaked communications and recent incidents to reveal how modern threat actors organize, adapt and blur the lines between criminal, contractor and researcher roles. Leaked BlackBasta chats show internal discord, leadership opacity, technical debt and disputes over revenue and workload. The EncryptHub case highlights a solo operator who both distributed malware and submitted vulnerability disclosures to Microsoft, illustrating the growing hybridization of actor identities. Finally, BlackLock’s open recruitment for "traffers" demonstrates how the ransomware supply chain is becoming commoditized and industrialized.
Wed, December 10, 2025
When Quantum Computing Meets AI: The Next Cyber Battleground
🧠 The convergence of AI and quantum computing is poised to redefine computing, cybersecurity and geopolitical power. Quantum machine learning can accelerate model training and enable real-time simulation by exploiting qubits' parallelism, while quantum key distribution promises communication that is far more resistant to interception. At the same time, this synergy raises risks: quantum-capable adversaries could undermine current cryptography and enable advanced cyberattacks.
Wed, December 10, 2025
Gartner Urges Enterprises to Block AI Browsers Now
⚠️ Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts strongly recommend that enterprises block AI browsers for the foreseeable future, citing both known vulnerabilities and additional risks inherent to an immature technology. They warn of irreversible, non‑auditable data loss when browsers send active web content, tab data and browsing history to cloud services, and of prompt‑injection attacks that can cause fraudulent actions. Concrete flaws—such as unencrypted OAuth tokens in ChatGPT Atlas and the Comet 'CometJacking' issue—underscore that traditional controls are insufficient; Gartner advises blocking installs with existing network and endpoint controls, restricting pilots to small, low‑risk groups, and updating AI policies.
Wed, December 10, 2025
FBI Alerts on AI-Assisted Fake Kidnapping Video Scams
⚠️ The FBI is warning of AI-assisted fake kidnapping scams that use fabricated images, video, and audio to extort victims. Criminal actors typically send texts claiming a loved one has been abducted and follow with multimedia that appears genuine but often contains subtle inaccuracies. Examples include missing tattoos, incorrect body proportions, and other mismatches, and attackers may use time-limited messages to pressure victims. Observers note the technique is currently of uncertain effectiveness but likely to be automated and scaled as AI tools improve.
Wed, December 10, 2025
Google Patches Zero-Click Gemini Enterprise Vulnerability
🔒 Google has patched a zero-click vulnerability in Gemini Enterprise and Vertex AI Search that could have allowed attackers to exfiltrate corporate data via hidden instructions embedded in shared Workspace content. Discovered by Noma Security in June 2025 and dubbed "GeminiJack," the flaw exploited Retrieval-Augmented Generation (RAG) retrieval to execute indirect prompt injection without any user interaction. Google updated how the systems interact, separated Vertex AI Search from Gemini Enterprise, and changed retrieval and indexing workflows to mitigate the issue.
Wed, December 10, 2025
November 2025: Ransomware and GenAI Drive Cyber Attacks
🛡️ In November 2025, organizations faced an average of 2,003 cyber-attacks per week, a 3% rise from October and 4% above November 2024. Check Point Research attributes the increase to a surge in ransomware, broader attack surfaces and growing exposure from internal use of generative AI tools. The education sector was hit hardest, averaging 4,656 attacks per organization per week. These trends elevate operational, data and recovery risks across industries.
Wed, December 10, 2025
Webinar: Exploiting Cloud Misconfigurations in AWS, AI & K8s
🔒 The Cortex Cloud team at Palo Alto Networks is hosting a technical webinar that dissects three recent cloud investigations and demonstrates practical defenses. Speakers will reveal the mechanics of AWS identity misconfigurations, techniques attackers use to hide malicious artifacts by mimicking AI model naming, and how overprivileged Kubernetes entities are abused. The session emphasizes Code-to-Cloud detection, runtime intelligence, and audit-log analysis to close visibility gaps; register to attend the live deep dive.
Wed, December 10, 2025
Polymorphic AI Malware: Hype vs. Practical Reality Today
🧠 Polymorphic AI malware is more hype than breakthrough: attackers are experimenting with LLMs, but practical advantages over traditional polymorphic techniques remain limited. AI mainly accelerates tasks—debugging, translating samples, generating boilerplate, and crafting convincing phishing lures—reducing the skill barrier and increasing campaign tempo. Many AI-assisted variants are unstable or detectable in practice; defenders should focus on behavioral detection, identity protections, and response automation rather than fearing instant, reliable self‑rewriting malware.
Wed, December 10, 2025
2026 NDAA: Cybersecurity Changes for DoD Mobile and AI
🛡️ The compromise 2026 NDAA directs large new cybersecurity mandates for the Department of Defense, including contract requirements to harden mobile phones used by senior officials and enhanced AI/ML security and procurement standards. It sets timelines (90–180 days) for mobile protections and AI policies, ties requirements to industry frameworks such as NIST SP 800 and CMMC, and envisions workforce training and sandbox environments. The law also funds roughly $15.1 billion in cyber activities and adds provisions on spyware, biologics data risks, and industrial base harmonization.
Wed, December 10, 2025
Designing an Internet Teens Want: Access Over Bans
🧑‍💻 A Google‑commissioned study by youth specialists Livity centers the voices of over 7,000 European teenagers to show how adolescents want technology designed with people in mind. Teens report widespread, routine use of AI for learning and creativity and ask for clear, age‑appropriate guidance rather than blanket bans. The report recommends default-on safety and privacy controls, curriculum-level AI and media literacy, clearer reporting and labeling, and parental support programs.
Wed, December 10, 2025
Tools and Strategies to Secure Model Context Protocol
🔒 Model Context Protocol (MCP) is increasingly used to connect AI agents with enterprise data sources, but real-world incidents at SaaS vendors have exposed practical weaknesses. The article describes what MCP security solutions should provide — discovery, runtime protection, strong authentication and comprehensive logging — and surveys offerings from hyperscalers, platform providers and startups. It stresses least-privilege and Zero Trust as core defenses.
Tue, December 9, 2025
Microsoft Patch Tuesday — December 2025 Security Fixes
🛡️ Microsoft released its final Patch Tuesday of 2025, addressing 56 vulnerabilities including one actively exploited zero-day, CVE-2025-62221, and two publicly disclosed bugs. The zero-day is a privilege escalation in the Windows Cloud Files Mini Filter Driver, a core component used by cloud sync services such as OneDrive. Three flaws received Microsoft’s Critical rating, including two Office bugs exploitable via Outlook’s Preview Pane. Administrators should prioritize updates for the flagged privilege escalation issues and apply patches promptly.
Tue, December 9, 2025
Partners Fuel Innovation with Cortex XSIAM & Prisma SASE
🚀 Palo Alto Networks announced that partners voted Cortex XSIAM as CRN’s 2025 Product of the Year for Security Operations Platform/SIEM and Prisma SASE as a 2025 Tech Innovator. Solution providers credited XSIAM’s AI-driven approach for sweeping the evaluation — leading in technology, revenue and customer need — and praised its ability to shift SOCs from tool management to outcome delivery. Partners highlighted Prisma SASE’s multicloud architecture, unified policies and AI copilot as essential for securing hybrid workforces, informed by feedback from over 70,000 customers and the recent Prisma SASE 4.0 release. Palo Alto frames these awards as validation of platform convergence and continued partner enablement.
Tue, December 9, 2025
Changing the Physics of Cyber Defense with Graphs Today
🔍 John Lambert of MSTIC argues defenders should model infrastructure as directed graphs of credentials, entitlements, dependencies and logs so they can trace the attacker’s “red thread.” He introduces the algebras of defense—graphs, relational tables, anomalies, and vectors over time—that let analysts and AI ask domain-specific questions like blast radius or path to crown jewels. Lambert also emphasizes preventative hygiene: asset and entitlement management, deprecating legacy systems, segmentation, and phishing-resistant MFA. He urges collaborative intelligence and AI-enabled tooling to shift advantage back to defenders.
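Lambert's "blast radius" question can be sketched as reachability over a directed access graph: nodes are identities and hosts, and an edge A → B means compromise of A yields access to B via some credential, entitlement, or dependency. The graph below is invented purely for illustration.

```python
# Sketch of the "blast radius" query over a directed graph of access.
# An edge A -> B means "compromise of A gives access to B". The graph
# below is a made-up example, not real infrastructure.

from collections import deque

edges = {
    "phished-user": ["workstation"],
    "workstation": ["file-share", "jump-host"],
    "jump-host": ["prod-db"],   # a path to the crown jewels
    "file-share": [],
    "prod-db": [],
    "backup-admin": ["prod-db"],
}

def blast_radius(graph, start):
    """Everything reachable from a compromised starting node (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(blast_radius(edges, "phished-user")))
# An analyst (or an AI assistant over the same graph) can then ask:
# does any path from this compromise reach "prod-db"?
```

The same graph supports the inverse question (which identities can reach a crown-jewel asset), which is what makes entitlement cleanup and segmentation measurable rather than aspirational.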