
All news with the #ai security tag

Fri, November 21, 2025

AI-generated fake sites deliver malicious Syncro builds

⚠️ Kaspersky describes a campaign in which attackers used the AI-powered web builder Lovable to mass-generate convincing fake vendor pages that host malicious installers. Those pages distribute a custom, attacker-signed build of the legitimate remote administration tool Syncro, which installs silently and grants full remote access. Because the payload is a legitimate admin tool altered for abuse, detection is difficult and victims risk data theft and loss of cryptocurrency funds.

read more →

Fri, November 21, 2025

GenAI GRC: Moving Supply Chain Risk to the Boardroom

🔒 Chief information security officers face a new class of supply-chain risk driven by generative AI. Traditional GRC — quarterly questionnaires and compliance reports — now lags threats like shadow AI and model drift, which are invisible to periodic audits. The author recommends a GenAI-powered GRC: contextual intelligence, continuous monitoring via a digital trust ledger, and automated regulatory synthesis to convert technical exposure into board-ready resilience metrics.

read more →

Fri, November 21, 2025

AI-Driven GLP-1 Scams Hijacking European Authorities

⚠️ Criminal networks are exploiting shortages of GLP-1 drugs like Ozempic, Wegovy and Mounjaro, using AI to generate convincing counterfeit websites, emails and documents that impersonate regulators and health services across Europe. They are hijacking the identities of the NHS, AEMPS, ANSM, BfArM and AIFA to market fake weight-loss products and harvest payments. Check Point Research documents the tactics, scale and public-safety implications of this rapidly evolving scam epidemic.

read more →

Fri, November 21, 2025

Agentic AI Security Scoping Matrix for Autonomous Systems

🤖 AWS introduces the Agentic AI Security Scoping Matrix to help organizations secure autonomous, tool-enabled AI agents. The framework defines four architectural scopes—from no agency to full agency—and maps escalating security controls across six dimensions, including identity, data/memory, auditability, agent controls, policy perimeters, and orchestration. It advocates progressive deployment, layered defenses, continuous monitoring, and retained human oversight to mitigate risks as autonomy increases.
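The matrix's core idea — controls that escalate across each dimension as agency increases — can be sketched as a simple lookup. Note the assumptions: only the "no agency"/"full agency" endpoints and the six dimensions come from the summary above; the intermediate scope names and the control tiers are illustrative placeholders, not AWS's actual matrix.

```python
# Illustrative sketch of an agency-scope-to-controls lookup. Intermediate
# scope names ("tool_use", "multi_step") and the control tiers are
# hypothetical; the six dimensions are the ones named in the framework.
SCOPES = ["no_agency", "tool_use", "multi_step", "full_agency"]

DIMENSIONS = ["identity", "data_memory", "auditability",
              "agent_controls", "policy_perimeters", "orchestration"]

# Hypothetical control tiers, applied cumulatively as autonomy increases.
TIERS = ["baseline", "scoped-credentials", "continuous-monitoring",
         "human-approval-gates"]

def required_controls(scope: str) -> dict:
    """Return the cumulative control tiers per dimension for a given scope."""
    level = SCOPES.index(scope)  # raises ValueError for unknown scopes
    return {dim: TIERS[:level + 1] for dim in DIMENSIONS}

controls = required_controls("multi_step")
print(controls["identity"])  # tiers accumulated up to this agency level
```

The cumulative structure mirrors the framework's advice on progressive deployment: controls from lower-agency scopes stay in place as autonomy grows, rather than being swapped out.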

read more →

Fri, November 21, 2025

Avast Makes AI-Driven Scam Defense Free for Users Worldwide

🛡️ Avast has integrated its new AI-powered Scam Guardian into Avast Free Antivirus, offering free, continuous protection against increasingly sophisticated, AI-enhanced scams worldwide. The feature analyzes website content, code, links, SMS and email context to flag deceptive intent and neutralize hidden threats. A premium Scam Guardian Pro in Avast Premium Security adds an Email Guard for contextual email scanning across devices. The rollout aims to democratize AI-based scam defense and give users clear, actionable guidance.

read more →

Fri, November 21, 2025

Industrialization of Cybercrime: AI, Speed, Defense

🤖 FortiGuard Labs warns that by 2026 cybercrime will transition from ad hoc innovation to industrialized throughput, driven by AI, automation, and a mature supply chain. Attackers will automate reconnaissance, lateral movement, and data monetization, shrinking attack timelines from days to minutes. Defenders must adopt machine-speed operations, continuous threat exposure management, and identity-centric controls to compress detection and response. Global collaboration and targeted disruption will be essential to deter large-scale criminal infrastructure.

read more →

Fri, November 21, 2025

Google Begins Showing Ads in AI Mode Answers Worldwide

🤖 Google has begun showing ads in AI Mode, the company's answer-engine experience that delivers conversational answers rather than a traditional list of search results. AI Mode has been available for about a year and is free to all, with Google One subscribers able to toggle advanced models such as Gemini 3 Pro. Until now Google had avoided ads to keep the conversational experience compelling; the new placements are labeled “sponsored” and typically appear at the bottom of AI-generated answers rather than in the right-side citation area. This looks like an experiment or optimization to improve click-through rates while complying with ad-disclosure rules.

read more →

Fri, November 21, 2025

AI Agents Used in State-Sponsored Large-Scale Espionage

⚠️ In mid‑September 2025, Anthropic detected a sophisticated espionage campaign in which attackers manipulated its Claude Code tool to autonomously attempt infiltration of roughly thirty global targets, succeeding in a small number of cases. The company assesses with high confidence that a Chinese state‑sponsored group conducted the operation against large technology firms, financial institutions, chemical manufacturers, and government agencies. Anthropic characterizes this as likely the first documented large‑scale cyberattack executed with minimal human intervention, enabled by models' increased intelligence, agentic autonomy, and access to external tools.

read more →

Fri, November 21, 2025

Unauthorized AI Use by STEM Professionals in Germany

⚠️ A representative YouGov survey commissioned by recruitment firm SThree found that 77% of STEM professionals in Germany use AI tools at work without approval from IT or management. Commonly used services include ChatGPT, Google Gemini and Perplexity. Experts warn this shadow IT practice can lead to GDPR breaches, inadvertent disclosure of sensitive customer or internal data and the risk that providers will retain and reuse submitted content for training. In Germany, 23% report daily use, 29% weekly and 12% monthly; respondents cite efficiency gains and technical curiosity as primary drivers.

read more →

Thu, November 20, 2025

Agentic AI Reshapes Cybercrime and Defensive Options

🤖 Agentic AI gives autonomous agents the ability to access external systems, gather information, and take actions within defined workflows, making routine multi-system tasks far more efficient for human operators. Cisco Talos warns this efficiency is already being mirrored in the cybercrime economy, including the first observed AI-orchestrated campaign in early 2025. While AI lowers barriers to entry and speeds operations for attackers, it is imperfect and still requires skilled instruction and human oversight. Defenders can respond by building their own agentic tools, deploying honeypots to engage malicious agents, and refining detection to stay ahead.

read more →

Thu, November 20, 2025

ShadowRay 2.0 Worm Uses Ray Flaw to Build Global Botnet

🪲 Oligo Security warns of an active campaign, codenamed ShadowRay 2.0, that exploits a two-year-old authentication flaw in the Ray AI framework (CVE-2023-48022, CVSS 9.8) to convert exposed clusters with NVIDIA GPUs into a self-replicating cryptomining botnet using XMRig. Operators submit malicious jobs to the unauthenticated Job Submission API (/api/jobs/), stage payloads on GitLab and GitHub, and abuse Ray’s orchestration to pivot laterally, establish persistence via cron jobs, and propagate to other dashboards. Oligo recommends restricting access, enabling authentication on the Ray Dashboard (default port 8265) and using Anyscale’s Ray Open Ports Checker plus firewall rules to reduce accidental exposure.
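A first triage step for the mitigation Oligo describes is simply checking whether a Ray Dashboard is reachable from outside before auditing its authentication settings. A minimal sketch (the commented host address is a placeholder for entries from your own inventory, not a real target):

```python
import socket

def ray_dashboard_exposed(host: str, port: int = 8265,
                          timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the Ray Dashboard port succeeds.

    8265 is the Ray Dashboard's default port; a successful connection only
    means the port is reachable -- it does not prove the Job Submission API
    is unauthenticated, so follow up with a configuration review.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage against a hypothetical internal host:
# if ray_dashboard_exposed("10.0.0.5"):
#     print("Ray Dashboard reachable -- verify authentication is enforced")
```

Anyscale's Ray Open Ports Checker, mentioned above, performs a more thorough version of this check; a sketch like this is only useful for quick, scripted sweeps of known host lists.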

read more →

Thu, November 20, 2025

3 Ways CISOs Can Win Over Their Boards This Budget Season

🔒 As CISOs finalize next year’s cybersecurity budgets, winning board approval requires translating technical needs into business value. First, quantify risk in financial terms—estimate value at risk across worst-, best- and most‑likely scenarios, using industry reports, internal experts and vendor assessments to model direct losses, business interruption and reputational impact. Second, go beyond compliance: reserve budget for emerging threats (generative AI, quantum, third‑party risk) and repurpose existing line items such as Data Security Posture Management, SASE and GRC hours to limit net new spend. Third, know thy board and tailor your message—use dollars-and-cents for finance‑focused directors and vivid attack narratives for others, while maintaining regular engagement year-round.
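The first recommendation, quantifying value at risk across scenarios, reduces to a probability-weighted expected loss. A toy sketch follows; every figure is an illustrative placeholder for numbers a CISO would source from industry reports, internal experts and vendor assessments:

```python
# Hypothetical breach-loss scenarios: probabilities and dollar impacts are
# made-up placeholders, not figures from the article.
scenarios = {
    "worst_case":  {"loss_usd": 12_000_000, "probability": 0.05},
    "most_likely": {"loss_usd": 2_500_000,  "probability": 0.25},
    "best_case":   {"loss_usd": 250_000,    "probability": 0.70},
}

# Probability-weighted expected loss across the three scenarios.
expected_loss = sum(s["loss_usd"] * s["probability"]
                    for s in scenarios.values())
print(f"Expected annual loss: ${expected_loss:,.0f}")
```

In practice each scenario would further decompose into direct losses, business interruption and reputational impact, as the article suggests; the weighted sum is what translates those inputs into a single board-ready number.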

read more →

Thu, November 20, 2025

Gartner: Shadow AI to Cause Major Incidents by 2030

🛡️ Gartner warns that by 2030 more than 40% of organizations will experience security and compliance incidents caused by employees using unauthorized AI tools. A survey of security leaders found 69% have evidence or suspect public generative AI use at work, increasing risks such as IP loss and data exposure. Gartner urges CIOs to set enterprise-wide AI policies, audit for shadow AI activity and incorporate GenAI risk evaluation into SaaS assessments.

read more →

Thu, November 20, 2025

Smashing Security Ep 444: Honest Breach and Hotel Phish

📰 In episode 444 of the Smashing Security podcast Graham Cluley and guest Tricia Howard examine a refreshingly candid breach response in which a company apologised and redirected a ransom payment to cybersecurity research, illustrating how legacy systems can still magnify risk. They unpack a sophisticated hotel-booking malware campaign that abuses trust in apps and CAPTCHAs to deliver PureRAT. The hosts also discuss the rise of autonomous pen testing, AI-turbocharged cybercrime, and practical questions CISOs should be asking on Monday morning, with a featured interview with Snehal Antani from Horizon3.ai.

read more →

Thu, November 20, 2025

Google Cloud to Launch New Cloud Region in Türkiye

🚀 Google Cloud announced plans to open a new cloud region in Türkiye in partnership with Turkcell, forming part of a 10-year, $2 billion investment in the country. The region will deliver low-latency, high-performance services and advanced AI, data analytics, and cybersecurity capabilities while providing data residency and strong protection controls. Local enterprises, public sector organizations, and partners will gain enhanced scalability, compliance, and the ability to deploy AI-driven solutions closer to end users.

read more →

Thu, November 20, 2025

CrowdStrike: Political Triggers Reduce AI Code Security

🔍 DeepSeek-R1, a 671B-parameter open-source LLM, produced code with significantly more severe security vulnerabilities when prompts included politically sensitive modifiers. CrowdStrike found baseline vulnerable outputs at 19%, rising to 27.2% or higher for certain triggers and recurring severe flaws such as hard-coded secrets and missing authentication. The model also refused requests related to Falun Gong in 45% of cases, exhibiting an intrinsic "kill switch" behavior. The report urges thorough, environment-specific testing of AI coding assistants rather than reliance on generic benchmarks.

read more →

Thu, November 20, 2025

AI Risk Guide: Assessing GenAI, Vendors and Threats

⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.

read more →

Wed, November 19, 2025

Google's Gemini 3 Pro Impresses with One‑Shot Game Creation

🎮 Google has released Gemini 3 Pro, a multimodal model that posts strong benchmark results and produces notable real-world demos. Early tests show top-tier scores (LMArena 1501 Elo, high marks on MMMU-Pro and Video-MMMU) and PhD-level reasoning in targeted exams. Designers reported one-shot generation of a 3D LEGO editor and a full recreation of Ridiculous Fishing. Instruction adherence remains imperfect, however, so the author suggests Claude Sonnet 4.5 for routine tasks and Gemini 3 Pro for more complex queries.

read more →

Wed, November 19, 2025

Google Named Leader in Gartner MQ for AI Platforms

🚀 Google has been named a Leader in the inaugural 2025 Gartner Magic Quadrant for AI Application Development Platforms and ranked highest for Ability to Execute. The announcement highlights Vertex AI as a unified, governed platform that delivers model choice, customization, and production-grade agent capabilities across an enterprise. Key capabilities cited include the Vertex AI Model Garden and Gemini 3, Vertex AI Training, Agent Builder and Agent Engine for multi-agent systems, and operational controls for observability, security, and predictable cost.

read more →

Wed, November 19, 2025

Phil Venables on CISO 2.0 and Building CISO Factories

🔒 In this Cloud CISO Perspectives installment, Phil Venables explains how AI is reshaping the chief information security officer role and urges a shift from reactive “fire station” operations to a self-sustaining “flywheel.” He defines CISO 2.0 as business-first, technically empathetic, and focused on long-term strategic outcomes, and introduces CISO Factories—organizations that reliably develop great security leaders. Venables emphasizes clear strategy, stronger board engagement, and using procurement influence to drive safer supplier behavior.

read more →