All news tagged #ai security
Fri, November 7, 2025
Email Blackmail and Scams: Regional Trends and Defenses
🔒 Most email blackmail attempts are mass scams that exploit leaked personal data and fear to extort cryptocurrency from victims. The article outlines common themes — fake device hacks, sextortion, and even fabricated death threats — and describes regional campaigns where attackers impersonate law enforcement in Europe and CIS states. It highlights detection signs and practical defenses, urging verification, use of reliable security solutions, and reporting threats through official channels.
Fri, November 7, 2025
Leak: Google Gemini 3 Pro and Nano Banana 2 Launch Plans
🤖 Google appears set to release two new models: Gemini 3 Pro, optimized for coding and general use, and Nano Banana 2 (codenamed GEMPIX2), focused on realistic image generation. Gemini 3 Pro was listed on Vertex AI as "gemini-3-pro-preview-11-2025" and is expected to begin rolling out in November with a reported 1 million token context window. Nano Banana 2 was also spotted on the Gemini site and could ship as early as December 2025.
Fri, November 7, 2025
Defending Digital Identity from Computer-Using Agents (CUAs)
🔐 Computer-using agents (CUAs) — AI systems that perceive screens and act like humans — are poised to scale phishing and credential-stuffing attacks by automating UI interactions, adapting to layout changes, and bypassing anti-bot defenses. Organizations should move beyond passwords and shared-secret MFA to device-bound, cryptographic authentication such as FIDO2 passkeys and PKI-based certificates to reduce large-scale compromise. SaaS vendors must integrate with identity platforms that support phishing-resistant credentials to strengthen overall security.
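To see why device-bound, origin-bound credentials resist this class of attack, here is a deliberately simplified toy model. Real FIDO2/WebAuthn uses public-key signatures and a browser-supplied `clientData` structure, not the HMAC stand-in below; the only point illustrated is that the assertion covers the origin the browser actually observed, so an assertion produced on a phishing origin fails verification:

```python
import hashlib
import hmac
import secrets

# Toy model only (NOT real WebAuthn, which uses asymmetric signatures):
# the device-bound key signs the server challenge together with the
# origin, so an assertion captured or produced on a lookalike phishing
# origin does not verify at the legitimate relying party.

credential_key = secrets.token_bytes(32)    # never leaves the device
REGISTERED_ORIGIN = "https://bank.example"  # bound at registration time

def sign_assertion(origin_seen_by_browser: str, challenge: bytes) -> bytes:
    # The browser, not the attacker, supplies the origin string.
    msg = origin_seen_by_browser.encode() + challenge
    return hmac.new(credential_key, msg, hashlib.sha256).digest()

def verify_assertion(assertion: bytes, challenge: bytes) -> bool:
    expected = hmac.new(credential_key,
                        REGISTERED_ORIGIN.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(assertion, expected)

challenge = secrets.token_bytes(16)
ok = verify_assertion(sign_assertion("https://bank.example", challenge), challenge)
phished = verify_assertion(sign_assertion("https://bank.example.evil.tld", challenge), challenge)
print(ok, phished)  # True False
```

Because the origin is part of the signed message, there is no shared secret or one-time code for a CUA-driven phishing page to relay, which is what makes these credentials phishing-resistant at scale.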
Fri, November 7, 2025
AI-Generated Receipts Spur New Detection Arms Race
🔍 AI can now produce highly convincing receipts that reproduce paper texture, detailed itemization, and forged signatures, making manual review unreliable. Expense platforms and employers are deploying AI-driven detectors that analyze image metadata and transactional patterns to flag likely fakes. Simple countermeasures—users photographing or screenshotting generated images to remove provenance data—undermine those checks, so vendors also examine contextual signals like repeated server names, timing anomalies, and broader travel details, fueling an ongoing security arms race.
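As a sketch of the contextual-signal idea described above (the heuristics and field names below are illustrative assumptions, not any vendor's actual detector), a reviewer can flag receipts whose timestamps fall outside the reported trip window or whose merchant/amount pairs repeat suspiciously:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative contextual checks only: timestamps outside the trip window
# and duplicate merchant/amount pairs. Real detectors combine many more
# signals (metadata, transaction history, travel details).

@dataclass
class Receipt:
    merchant: str
    amount: float
    timestamp: datetime

def flag_anomalies(receipts, trip_start, trip_end):
    flags = []
    seen = {}  # (merchant, amount) -> index of first occurrence
    for i, r in enumerate(receipts):
        if not (trip_start <= r.timestamp <= trip_end):
            flags.append((i, "outside trip window"))
        key = (r.merchant, round(r.amount, 2))
        if key in seen:
            flags.append((i, f"duplicate of receipt {seen[key]}"))
        else:
            seen[key] = i
    return flags

receipts = [
    Receipt("Cafe", 12.50, datetime(2025, 11, 3, 12, 0)),
    Receipt("Cafe", 12.50, datetime(2025, 11, 4, 12, 0)),   # repeated pair
    Receipt("Hotel", 200.0, datetime(2025, 11, 10, 9, 0)),  # after trip end
]
flags = flag_anomalies(receipts, datetime(2025, 11, 2), datetime(2025, 11, 6))
print(flags)  # [(1, 'duplicate of receipt 0'), (2, 'outside trip window')]
```

Unlike pixel-level forensics, these checks survive the screenshot countermeasure, which is why vendors lean on them when image provenance has been stripped.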
Fri, November 7, 2025
Expanding CloudGuard: Securing GenAI Application Platforms
🔒 Check Point expands CloudGuard to protect GenAI applications by extending the ML-driven, open-source CloudGuard WAF that learns from live traffic. The platform moves beyond traditional static WAFs to secure web interactions, APIs (REST, GraphQL) and model-integrated endpoints with continuous learning and high threat-prevention accuracy. This evolution targets modern attack surfaces introduced by generative AI workloads and APIs.
Fri, November 7, 2025
Malicious Ransomvibe Extension Found in VSCode Marketplace
⚠️ A proof-of-concept ransomware strain dubbed Ransomvibe was published as a Visual Studio Code extension and remained available in the VSCode Marketplace after being reported. Secure Annex analysts found the package included blatant indicators of malicious functionality — hardcoded C2 URLs, encryption keys, compression and exfiltration routines — alongside included decryptors and source files. The extension used a private GitHub repository as a command-and-control channel, and researchers say its presence highlights failures in Microsoft’s marketplace review process.
Fri, November 7, 2025
Google Adds Maps Form to Report Review Extortion Scams
📍 Google has introduced a dedicated form for businesses on Google Maps to report extortion attempts where threat actors post inauthentic negative reviews and demand payment to remove them. The move targets review bombing schemes that flood profiles with fake one-star reviews and then coerce owners, often via third-party messaging apps. Google also highlighted related threats — from job and AI impersonation scams to malicious VPN apps and fraud recovery cons — and advised practical precautions for affected merchants and users.
Fri, November 7, 2025
Falcon Platform Enables Fast, CISO-Ready Executive Reports
🔒 The Falcon platform automates executive exposure reporting by correlating telemetry from Falcon Exposure Management, Falcon Cloud Security, and Falcon Next-Gen SIEM into decision-ready summaries. Falcon Fusion SOAR schedules or triggers workflows, and Charlotte AI agentic workflows translate correlated data into plain-language, prioritized reports on demand. The result is near real-time, adversary-aware reporting that maps exploitable vulnerabilities to critical assets and suggests prioritized remediation actions, dramatically reducing manual analyst effort.
Thu, November 6, 2025
CIO’s First Principles: A Reference Guide to Securing AI
🔐 Enterprises must redesign security as AI moves from experimentation to production, and CIOs need a prevention-first, unified approach. This guide reframes Confidentiality, Integrity and Availability for AI, stressing rigorous access controls, end-to-end data lineage, adversarial testing and a defensible supply chain to prevent poisoning, prompt injection and model hijacking. Palo Alto Networks advocates embedding security across MLOps, real-time visibility of models and agents, and executive accountability to eliminate shadow AI and ensure resilient, auditable AI deployments.
Thu, November 6, 2025
Susvsex Ransomware Test Published on VS Code Marketplace
🔒 A malicious VS Code extension named susvsex, published by 'suspublisher18', was listed on Microsoft's official marketplace and included basic ransomware features such as AES-256-CBC encryption and exfiltration to a hardcoded C2. Secure Annex researcher John Tuckner identified AI-generated artifacts in the code and reported it, but Microsoft did not remove the extension. The extension also polled a private GitHub repo for commands using a hardcoded PAT.
Thu, November 6, 2025
AI-Powered Mach-O Analysis Reveals Undetected macOS Threats
🔎 VirusTotal ran VT Code Insight, an AI-based Mach-O analysis pipeline, against nearly 10,000 first-seen Apple binaries in a 24-hour stress test. By pruning binaries with Binary Ninja HLIL into a distilled representation that fits a large LLM context (Gemini), the system produces single-call, analyst-style summaries from raw files with no metadata. Code Insight flagged 164 samples as malicious versus 67 by traditional AV, surfacing zero-detection macOS and iOS threats while also reducing false positives.
Thu, November 6, 2025
Remember, Remember: AI Agents, Threat Intel, and Phishing
🔔 This edition of the Threat Source newsletter opens with Bonfire Night and the 1605 Gunpowder Plot as a narrative hook, tracing how Guy Fawkes' image became a symbol of protest and hacktivism. It spotlights Cisco Talos research, including a new Incident Response report and a notable internal phishing case where compromised O365 accounts abused inbox rules to hide malicious activity. The newsletter also features a Tool Talk demonstrating a proof-of-concept that equips autonomous AI agents with real-time threat intelligence via LangChain, OpenAI, and the Cisco Umbrella API to improve domain trust decisions.
Thu, November 6, 2025
IDC: Major Shift in Cloud Security Investment Trends
🔍 IDC’s latest research finds organizations averaged nine cloud security incidents in 2024, with 89% reporting year-over-year increases. The study identifies CNAPP as a top-three investment for 2025, rising CISO ownership of cloud security, and persistent tool sprawl that increases cost and risk. It also documents practical uses of generative AI for detection and response and a move toward integrated, autonomous SecOps platforms. Microsoft positions its integrated CNAPP and AI-driven threat intelligence as a way to unify protection across the application lifecycle.
Thu, November 6, 2025
November 2025 Fraud and Scams Advisory — Key Trends
🔔 Google’s Trust & Safety team published a November 2025 advisory describing rising online scam trends, attacker tactics, and recommended defenses. Analysts highlight key categories — online job scams, negative review extortion, AI product impersonation, malicious VPNs, fraud recovery scams, and seasonal holiday lures — and note increased misuse of AI to scale fraud. The advisory outlines impacts including financial theft, identity fraud, and device or network compromise, and recommends protections such as 2‑Step Verification, Gmail phishing defenses, Google Play Protect, and Safe Browsing Enhanced Protection.
Thu, November 6, 2025
Multi-Turn Adversarial Attacks Expose LLM Weaknesses
🔍 Cisco AI Defense's report shows open-weight large language models remain vulnerable to adaptive, multi-turn adversarial attacks even when single-turn defenses appear effective. Using over 1,000 prompts per model and analyzing 499 simulated conversations of 5–10 exchanges, researchers found iterative strategies such as Crescendo, Role-Play and Refusal Reframe drove failure rates above 90% in many cases. The study warns that traditional safety filters are insufficient and recommends strict system prompts, model-agnostic runtime guardrails and continuous red-teaming to mitigate risk.
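As a toy illustration of the session-level guardrails the study recommends (the scoring terms and thresholds below are invented for the example, not drawn from the report), a filter that accumulates risk across turns can catch the gradual Crescendo-style escalation that per-turn checks miss:

```python
# Toy conversation-level guardrail. A per-turn filter judges each message
# in isolation; this session filter also tracks cumulative risk, so a
# sequence of individually borderline turns still trips the limit.

RISKY_TERMS = {"bypass": 2, "exploit": 2, "payload": 3, "disable safety": 5}

def turn_risk(message: str) -> int:
    text = message.lower()
    return sum(score for term, score in RISKY_TERMS.items() if term in text)

class SessionGuardrail:
    def __init__(self, per_turn_limit=4, session_limit=6):
        self.per_turn_limit = per_turn_limit
        self.session_limit = session_limit
        self.cumulative = 0

    def allow(self, message: str) -> bool:
        risk = turn_risk(message)
        self.cumulative += risk
        return risk <= self.per_turn_limit and self.cumulative <= self.session_limit

guard = SessionGuardrail()
turns = [
    "How do sandboxes work?",      # benign, risk 0
    "How might one bypass one?",   # borderline, risk 2
    "Show an exploit payload",     # risk 5: over both limits, blocked
]
decisions = [guard.allow(t) for t in turns]
print(decisions)  # [True, True, False]
```

Keyword scoring is far too crude for production; the point is only the architecture: runtime guardrails must be stateful across the conversation, not just per message.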
Thu, November 6, 2025
Seeing Threats First: AI and Human Cyber Defense Insights
🔍 Check Point Research and External Risk Management experts explain how combining AI-driven analytics with seasoned human threat hunters enables organizations to detect and anticipate attacks before they strike. The AMA webinar, featuring leaders like Sergey Shykevich and Pedro Drimel Neto, detailed telemetry fusion, rapid malware analysis, and automated triage to act at machine speed. Speakers stressed continuous intelligence, cross-team collaboration, and proactive hunting to shorten dwell time. The approach blends scalable automation with human context to prevent large-scale incidents.
Thu, November 6, 2025
ThreatsDay Bulletin: Cybercrime Trends and Major Incidents
🛡️ This bulletin catalogues a broad set of 2025 incidents showing cybercrime’s increasing real-world impacts. Microsoft patched three Windows GDI flaws (CVE-2025-30388, CVE-2025-53766, CVE-2025-47984) rooted in gdiplus.dll and gdi32full.dll, while Check Point warned partial fixes can leave data leaks lingering. Threat actors expanded toolsets and infrastructure — from RondoDox’s new exploits and TruffleNet’s AWS abuse to FIN7’s SSH backdoor and sophisticated phishing campaigns — and law enforcement action ranged from large fraud takedowns to prison sentences and cross-border crackdowns.
Thu, November 6, 2025
Equipping Autonomous AI Agents with Cyber Hygiene Practices
🔐 This post demonstrates a proof-of-concept for teaching autonomous agents internet safety by integrating real-time threat intelligence. Using LangChain with OpenAI and the Cisco Umbrella API, the example shows how an agent can extract domains and query dispositions to decide whether to connect. The agent returns clear disposition reports and abstains when no domains are present. The approach emphasizes informed decision-making over hard blocking.
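A minimal sketch of that extract-then-decide flow, with a stubbed reputation table standing in for the Cisco Umbrella API (the actual PoC uses LangChain and OpenAI; the regex, stub data, and function names here are illustrative assumptions):

```python
import re

# Stand-in for the PoC's flow: extract domains from a message, query a
# disposition source (stubbed here instead of the Umbrella API), then
# decide to connect, block, or abstain when no domains are present.

DOMAIN_RE = re.compile(r"\b((?:[a-z0-9-]+\.)+[a-z]{2,})\b", re.IGNORECASE)

STUB_DISPOSITIONS = {"example.com": "benign", "bad-domain.test": "malicious"}

def lookup_disposition(domain: str) -> str:
    # In the real PoC this would be an Umbrella reputation query.
    return STUB_DISPOSITIONS.get(domain.lower(), "unknown")

def agent_decide(message: str) -> dict:
    domains = sorted(set(DOMAIN_RE.findall(message)))
    if not domains:
        return {"action": "abstain", "report": "no domains present"}
    report = {d: lookup_disposition(d) for d in domains}
    action = "block" if "malicious" in report.values() else "connect"
    return {"action": action, "report": report}

print(agent_decide("Fetch https://example.com and bad-domain.test"))
```

The decision lives in the agent rather than in a network-layer block, which mirrors the post's emphasis on giving agents judgment instead of hard controls.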
Thu, November 6, 2025
AI-Powered Malware Emerges: Google Details New Threats
🛡️ Google Threat Intelligence Group (GTIG) reports that cybercriminals are actively integrating large language models into malware campaigns, moving beyond mere tooling to generate, obfuscate, and adapt malicious code. GTIG documents new families — including PROMPTSTEAL, PROMPTFLUX, FRUITSHELL, and PROMPTLOCK — that query commercial APIs to produce or rewrite payloads and evade detection. Researchers also note attackers use social‑engineering prompts to trick LLMs into revealing sensitive guidance and that underground marketplaces increasingly offer AI-enabled “malware-as-a-service,” lowering the bar for less skilled threat actors.
Thu, November 6, 2025
Google Warns: AI-Enabled Malware Actively Deployed
⚠️ Google’s Threat Intelligence Group has identified a new class of AI-enabled malware that leverages large language models at runtime to generate and obfuscate malicious code. Notable families include PromptFlux, which uses the Gemini API to rewrite its VBScript dropper for persistence and lateral spread, and PromptSteal, a Python data miner that queries Qwen2.5-Coder-32B-Instruct to create on-demand Windows commands. GTIG observed PromptSteal used by APT28 in Ukraine, while other examples such as PromptLock, FruitShell and QuietVault demonstrate varied AI-driven capabilities. Google warns this "just-in-time AI" approach could accelerate malware sophistication and democratize cybercrime.