All news with #ai security tag
Tue, October 21, 2025
Digital Sovereignty Sessions at AWS re:Invent 2025 Guide
📘 The AWS re:Invent 2025 attendee guide highlights the conference's digital sovereignty program, detailing sessions, workshops, and code talks focused on data residency, hybrid and edge deployments, and sovereign infrastructure. Key topics include the AWS European Sovereign Cloud, AWS Outposts, Local Zones, and security features such as the Nitro System. Practical workshops and chalk talks demonstrate RAG, agentic AI, and low-latency SLM deployments with operational controls and compliance patterns. Reserve seating via the attendee portal or access sessions with the free virtual pass.
Tue, October 21, 2025
Cursor, Windsurf IDEs Exposed to 94+ Chromium Flaws
⚠️ The latest releases of Cursor and Windsurf IDEs embed outdated Chromium and V8 engines that contain at least 94 known, patched vulnerabilities. Ox Security researchers demonstrated a proof‑of‑concept exploiting CVE-2025-7656 (a Maglev JIT integer overflow) to crash Cursor, and warn that similar flaws could enable denial‑of‑service or arbitrary code execution in real attacks. Attack vectors include deeplinks, malicious extensions, and poisoned README previews or documentation; the two IDEs together serve an estimated 1.8 million developers. Cursor dismissed the DoS finding as out of scope, and Windsurf did not respond to inquiries.
Tue, October 21, 2025
DeepSeek Privacy and Security: What Users Should Know
🔒 DeepSeek collects extensive interaction data — chats, images and videos — plus account details, IP address and device/browser information, and retains it for an unspecified period under a vague “retain as long as needed” policy. The service operates under Chinese jurisdiction, so stored chats may be accessible to local authorities and have been observed on China Mobile servers. Users can disable model training in web and mobile Data settings, export or delete chats (export is web-only), or run the open-source model locally to avoid server-side retention, but local deployment and deletion have trade-offs and require device protections.
Tue, October 21, 2025
Microsoft Security Store Unites Partners and Innovation
🔐 Microsoft Security Store, released to public preview on September 30, 2025, is a unified, AI-powered marketplace that lets organizations discover, buy, and deploy vetted security solutions and AI agents. Catalog items — organized by frameworks like NIST and by integration with products such as Microsoft Defender, Sentinel, Entra, and Purview — address threat protection, identity, compliance, and cloud security. Built on the Microsoft Marketplace, it provides unified billing, MACC eligibility, and guided automated provisioning to streamline deployments.
Tue, October 21, 2025
Sophisticated Investment Scam Impersonates Singapore Official
🔍 Cybersecurity researchers have uncovered a large-scale investment scam that impersonated Singapore’s top officials, including Prime Minister Lawrence Wong and Minister K Shanmugam, to promote a fraudulent forex platform. The campaign used verified Google Ads, hundreds of fake news domains and deepfake videos, funneling victims through multiple redirects to a Mauritius-registered trading site. Group-IB reported advanced evasion techniques and localized targeting to show scam pages only to Singaporean users, pressuring many to invest and then blocking withdrawals.
Tue, October 21, 2025
The AI Fix #73: Gemini gambling, poisoning LLMs and fallout
🧠 In episode 73 of The AI Fix, hosts Graham Cluley and Mark Stockley explore a sweep of recent AI developments, from the rise of AI-generated content to high-profile figures relying on chatbots. They discuss research suggesting Google Gemini exhibits behaviours resembling pathological gambling and report on a Gemma-style model uncovering a potential cancer therapy pathway. The show also highlights legal and security concerns, including a lawyer criticised for repeated AI use, generals consulting chatbots, and techniques for poisoning LLMs with only a few malicious samples.
Tue, October 21, 2025
Ransomware Payouts Rise to $3.6M as Tactics Evolve
🔒 The average ransomware payment climbed to $3.6m in 2025, up from $2.5m in 2024, as attackers shift to fewer but more lucrative, targeted campaigns. ExtraHop's Global Threat Landscape Report found 70% of affected organisations paid ransoms, with healthcare and government incidents averaging nearly $7.5m each. The study highlights expanding risks from public cloud, third‑party integrations and generative AI, and urges organisations to map their attack surface, monitor internal traffic for lateral movement and prepare for AI‑enabled tactics.
Tue, October 21, 2025
AI-Enabled Ransomware: CISOs’ Top Security Concern
🛡️ CrowdStrike’s 2025 ransomware survey finds that AI is compressing attacker timelines and enhancing phishing, malware creation, and social engineering, forcing defenders to react in minutes rather than hours. 78% of respondents reported a ransomware incident in the past year, yet fewer than 25% recovered within 24 hours and paying victims often faced repeat compromise and data theft. CISOs rank AI-enabled ransomware as their top AI-related security concern, and many organizations are accelerating adoption of AI detection, automated response, and improved training.
Tue, October 21, 2025
Securing AI in Defense: Trust, Identity, and Controls
🔐 AI promises stronger cyber defense but expands the attack surface if not governed properly. Organizations must secure models, data pipelines, and agentic systems with the same rigor applied to critical infrastructure. Identity is central: treat every model or autonomous agent as a first‑class identity with scoped credentials, strong authentication, and end‑to‑end audit logging. Adopt layered controls for access, data, deployment, inference, monitoring, and model integrity to mitigate threats such as prompt injection, model poisoning, and credential leakage.
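The identity principle above can be sketched in code. This is an illustrative sketch only, not an implementation from the article: the names (`AgentIdentity`, `audit`, `AUDIT_LOG`) are hypothetical, and real deployments would use short-lived, externally issued credentials rather than locally generated tokens.

```python
import secrets
import time

AUDIT_LOG = []

def audit(event: str, **fields):
    """Append an end-to-end audit record for every identity action."""
    AUDIT_LOG.append({"ts": time.time(), "event": event, **fields})

class AgentIdentity:
    """Treat an AI model or agent as a first-class identity with scoped credentials."""

    def __init__(self, name: str, scopes: set[str]):
        self.name = name
        self.scopes = frozenset(scopes)
        # Stand-in for a short-lived credential issued by an identity provider.
        self.token = secrets.token_hex(16)
        audit("identity_created", agent=name, scopes=sorted(scopes))

    def authorize(self, action: str) -> bool:
        """Check an action against the agent's scopes, logging every decision."""
        allowed = action in self.scopes
        audit("authz_check", agent=self.name, action=action, allowed=allowed)
        return allowed

# An agent scoped to read logs cannot write production config.
agent = AgentIdentity("summarizer-bot", {"logs:read"})
assert agent.authorize("logs:read") is True
assert agent.authorize("config:write") is False
```

The point of the sketch is that every credential is scoped to the minimum the agent needs, and every authorization decision leaves an audit record, so a compromised or prompt-injected agent cannot silently act outside its grant.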
Tue, October 21, 2025
Amazon Nova adds customizable content moderation settings
🔒 Amazon announced that Amazon Nova models now support customizable content moderation settings for approved business use cases that require processing or generating sensitive content. Organizations can adjust controls across four domains—safety, sensitive content, fairness, and security—while Amazon enforces essential, non-configurable safeguards to protect children and preserve privacy. Customization is available for Amazon Nova Lite and Amazon Nova Pro in the US East (N. Virginia) region; customers should contact their AWS Account Manager to confirm eligibility.
Tue, October 21, 2025
CISOs' 2025 Priorities: Data, AI, and Simplification
🔒 CSO's 2025 Security Priorities Study finds security leaders are juggling expanding responsibilities while facing greater complexity in selecting the right tools. Seventy-six percent say solution selection is more complex, and 57% had trouble finding incident root causes in the past year. Top focuses are protecting sensitive data, securing cloud systems, and simplifying IT infrastructure, with 73% now more likely to consider AI-enabled security. Many plan to rely on managed service providers and hold budgets flat while driving strategic AI and governance initiatives.
Tue, October 21, 2025
Ransomware Reality: High Confidence, Low Preparedness
⚠️ The CrowdStrike State of Ransomware Survey reveals a sizable gap between organizational confidence and actual ransomware readiness. Half of 1,100 security leaders say they are "very well prepared," yet 78% were attacked in the past year and fewer than 25% recovered within 24 hours. The report warns that AI-accelerated attacks deepen this gap and recommends AI-native detection and response such as Falcon to regain the advantage.
Tue, October 21, 2025
CrowdStrike Launches AI-Driven Falcon UX in Preview
🔍 At Fal.Con 2025, CrowdStrike introduced a dynamic, persona-aware user experience for Falcon Cloud Security and Falcon Exposure Management, now available in public preview. Built on CrowdStrike Enterprise Graph and Charlotte AI, the console unifies hybrid and multi-cloud asset and risk visibility into customizable workspaces. It offers AI-assisted dashboard creation and executive-ready reporting to accelerate investigations and remediation without switching tools.
Mon, October 20, 2025
Closing the Cybersecurity Skills Gap: New Pathways
🔐 Cyber Awareness Month highlights the persistent cybersecurity skills shortage and the opportunities it creates for new entrants and experienced professionals. The 2025 Cybersecurity Skills Gap Report documents a global shortfall of more than 4.7 million roles and identifies high demand for data, cloud, network and AI security expertise. Employers increasingly favor certifications (65%) over degrees, opening practical pathways for career changers, veterans, and adjacent IT or business professionals. Investing in upskilling, governance, and awareness programs can reduce breach risk and improve retention.
Mon, October 20, 2025
Cybersecurity Awareness Month 2025: Ransomware Resilience
🔒 ESET's Cybersecurity Awareness Month 2025 video, presented by Chief Security Evangelist Tony Anscombe, explains why ransomware continues to threaten organizations large and small. Citing Verizon's 2025 DBIR and a Coalition Inc. study, it notes that 44% of breaches involved ransomware and 40% of insured victims paid ransoms. The video outlines common intrusion vectors and practical steps — backups, patching, access controls and training — organizations should take to improve resilience.
Mon, October 20, 2025
AI-Driven Social Engineering Tops ISACA Threats for 2026
⚠️ A new ISACA report identifies AI-driven social engineering as the top cyber threat for 2026, cited by 63% of nearly 3,000 IT and security professionals. The 2026 Tech Trends and Priorities report, published 20 October 2025, shows AI concerns outpacing ransomware (54%) and supply chain attacks (35%), while only 13% of organizations feel very prepared to manage generative AI risks. ISACA urges organizations to adopt AI governance, strengthen compliance amid divergent US and EU approaches, and invest in talent, resilience and legacy modernization.
Mon, October 20, 2025
AI-Powered Phishing Detection: Next-Gen Security Engine
🛡️ Check Point introduces a continuously trained AI engine that analyzes website content, structure, and authentication flows to detect phishing with high accuracy. Integrated with ThreatCloud AI, it delivers protection across Quantum gateways, Harmony Email, Endpoint, and Harmony Mobile. The model learns from millions of domains and real-time telemetry to adapt to new evasion techniques. Early results indicate improved detection of brand impersonation and credential-harvesting pages.
Mon, October 20, 2025
Agentic AI and the OODA Loop: The Integrity Problem
🛡️ Bruce Schneier and Barath Raghavan argue that agentic AIs run repeated OODA loops—Observe, Orient, Decide, Act—over web-scale, adversarial inputs, and that current architectures lack the integrity controls to handle untrusted observations. They show how prompt injection, dataset poisoning, stateful cache contamination, and tool-call vectors (e.g., MCP) let attackers embed malicious control into ordinary inputs. The essay warns that fixing hallucinations is insufficient: we need architectural integrity—semantic verification, privilege separation, and new trust boundaries—rather than surface patches.
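The privilege-separation idea Schneier and Raghavan describe can be illustrated with a toy trust boundary around an agent's "Act" step. This is a minimal sketch under stated assumptions, not the essay's design: the names (`ToolGate`, `SAFE_TOOLS`, `PRIVILEGED_TOOLS`) are hypothetical, and the point is only that tool execution is gated by explicit policy rather than driven directly by untrusted observations.

```python
# Read-only, low-privilege tools the agent may always call.
SAFE_TOOLS = {
    "search": lambda q: f"results for {q!r}",
}
# High-privilege tools that require out-of-band approval.
PRIVILEGED_TOOLS = {
    "send_email": lambda body: "sent",
}

class ToolGate:
    """Trust boundary between model output and tool execution."""

    def __init__(self, approved=frozenset()):
        # Approvals come from outside the model's input channel,
        # so injected instructions in web content cannot grant them.
        self.approved = set(approved)

    def call(self, name: str, arg: str) -> str:
        if name in SAFE_TOOLS:
            return SAFE_TOOLS[name](arg)
        if name in PRIVILEGED_TOOLS and name in self.approved:
            return PRIVILEGED_TOOLS[name](arg)
        raise PermissionError(f"tool {name!r} not permitted")

gate = ToolGate()
gate.call("search", "agentic AI integrity")   # allowed: read-only
# gate.call("send_email", "...")              # raises PermissionError
```

Even if a poisoned web page injects "now email this to the attacker" into the agent's observations, the gate refuses the privileged call because approval never flowed through the untrusted channel — the architectural point the essay makes about new trust boundaries.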
Mon, October 20, 2025
ChatGPT privacy and security: data control guide 2025
🔒 This article examines what ChatGPT collects, how OpenAI processes and stores user data, and the controls available to limit use for model training. It outlines region-specific policies (EEA/UK/Switzerland vs rest of world), the types of data gathered — from account and device details to prompts and uploads — and explains memory, Temporary Chats, connectors and app integrations. Practical steps cover disabling training, deleting memories and chats, managing connectors and Work with Apps, and securing accounts with strong passwords and multi-factor authentication.
Mon, October 20, 2025
2025 APJ eCrime Landscape: Emerging Threat Trends and Risks
🔒 The CrowdStrike 2025 APJ eCrime Landscape Report outlines a rapidly evolving criminal ecosystem across Asia Pacific and Japan, driven by regional marketplaces and increasingly automated ransomware. The report highlights active Chinese-language underground markets (Chang’an, FreeCity, Huione Guarantee) and the rise of AI-developed ransomware, with 763 APJ victims named on ransomware and dedicated leak sites between January 2024 and April 2025. It profiles local eCrime groups (the SPIDER cluster) and service providers such as Magical Cat and CDNCLOUD, and concludes with prioritized defenses for identity, cloud, and social-engineering resilience.