
All news with the #ai security tag

Thu, October 30, 2025

LinkedIn to Use EU, UK and Other Profiles for AI Training

🔒 Microsoft-owned LinkedIn will begin using profile details, public posts and feed activity from users in the UK, EU, Switzerland, Canada and Hong Kong to train generative AI models and to support personalised ads across Microsoft starting 3 November 2025. Private messages are excluded. Users can opt out via Settings & Privacy > Data Privacy by setting the "Data for Generative AI Improvement" toggle to Off. Organisations should update social media policies and remind staff to review their advertising and data-sharing settings.

read more →

Thu, October 30, 2025

Amazon Bedrock AgentCore Browser Adds Web Bot Auth Preview

🔐 Amazon Bedrock AgentCore Browser now previews Web Bot Auth, a draft IETF protocol that cryptographically identifies AI agents to websites. The feature automatically generates credentials, signs HTTP requests with private keys, and registers verified agent identities to reduce CAPTCHA interruptions and human intervention in automated workflows. It streamlines verification across major providers such as Akamai, Cloudflare, and HUMAN Security, and is available in nine AWS Regions on a consumption-based pricing model with no upfront costs.
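The request signing described above follows the HTTP Message Signatures model (RFC 9421) that Web Bot Auth builds on. The sketch below shows roughly how a signature base is assembled over covered request components and then signed; it is illustrative only, and uses a stdlib HMAC as a stand-in for the asymmetric key (e.g. Ed25519) a real agent would register with verifiers.

```python
import base64
import hashlib
import hmac

# Illustrative sketch only: Web Bot Auth builds on HTTP Message
# Signatures (RFC 9421). A real agent signs with a registered
# asymmetric private key; HMAC here is a stdlib stand-in so the
# structure of the signature base is runnable.

def signature_base(method: str, authority: str, path: str, created: int) -> str:
    """Assemble the canonical signature base over covered components."""
    return "\n".join([
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": ("@method" "@authority" "@path");created={created}',
    ])

def sign_request(key: bytes, method: str, authority: str, path: str, created: int) -> str:
    """Sign the signature base and return a base64 value for a Signature header."""
    base = signature_base(method, authority, path, created)
    mac = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

sig = sign_request(b"demo-key", "GET", "example.com", "/", 1730246400)
print(sig)
```

A verifier that knows the agent's registered key recomputes the same base from the received request and checks the signature, which is what lets sites skip the CAPTCHA for verified agents.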

read more →

Wed, October 29, 2025

AI Literacy Is Critical for Cybersecurity Readiness

🔒 Artificial intelligence is reshaping cybersecurity, creating both enhanced defensive capabilities and new risks that require broad AI literacy. The White House's America’s AI Action Plan and Fortinet’s 2025 Cybersecurity Global Skills Gap Report show strong adoption—97% of organizations use or plan to use AI in security—yet 48% cite lack of staff expertise as a major barrier. Fortinet recommends targeted training, policies for generative AI use, and its Security Awareness modules to help close the gap and reduce threat exposure.

read more →

Wed, October 29, 2025

Fortinet Expands Unified SASE with Global POPs and AI

🚀 Fortinet announced enhancements to Fortinet Unified SASE, expanding its global footprint to over 170 points of presence and embedding AI-powered operations. FortiAI-Assist automates diagnostics and remediation to accelerate mean time to resolution, while an agentless Secure Browser and SaaS Security Posture Management extend DLP and compliance controls across 80+ SaaS apps. These updates aim to boost performance, simplify operations, and strengthen data protection for distributed workforces.

read more →

Wed, October 29, 2025

AI-targeted Cloaking Tricks Agentic Browsers, Warns SPLX

⚠️ Researchers report a new form of context-poisoning called AI-targeted cloaking that serves different content to agentic browsers and AI crawlers. SPLX shows attackers can use a trivial user-agent check to deliver alternate pages to crawlers from ChatGPT and Perplexity, turning retrieved content into manipulated ground truth. The technique mirrors search engine cloaking but targets AI overviews and autonomous reasoning, creating a potent misinformation vector. A concurrent hTAG analysis also found many agents execute risky actions with minimal safeguards, amplifying potential harm.
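The "trivial user-agent check" SPLX describes can be sketched in a few lines; the crawler names below are illustrative, not an authoritative list. Seeing the mechanism also suggests a simple defensive test: fetch the same URL with a browser user agent and an AI-crawler user agent and diff the responses.

```python
# Hypothetical sketch of user-agent cloaking: a server returns one
# page to human browsers and another to AI crawlers. Marker strings
# are illustrative examples only.

AI_CRAWLER_MARKERS = ("ChatGPT", "GPTBot", "PerplexityBot")

def select_page(user_agent: str) -> str:
    """Return the 'manipulated' page for AI crawlers, the real one otherwise."""
    if any(marker.lower() in user_agent.lower() for marker in AI_CRAWLER_MARKERS):
        return "<html>manipulated content served only to AI agents</html>"
    return "<html>legitimate content seen by human visitors</html>"

# Defensive check: request with both identities and compare.
human = select_page("Mozilla/5.0 (Windows NT 10.0) Firefox/131.0")
bot = select_page("Mozilla/5.0 AppleWebKit/537.36; GPTBot/1.1")
print(human != bot)  # differing responses are a cloaking signal
```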

read more →

Wed, October 29, 2025

PhantomRaven npm Campaign Uses Invisible Dependencies

🕵️ Researchers at Koi Security uncovered an ongoing npm credential-harvesting campaign called PhantomRaven, active since August 2025, that steals npm tokens, GitHub credentials and CI/CD secrets. The attacker hides malicious payloads using Remote Dynamic Dependencies (RDD), fetching code from attacker-controlled servers at install time to bypass static scans. The campaign leveraged slopsquatting—registering package names that AI coding assistants tend to hallucinate—to increase installs; Koi found 126 infected packages with about 20,000 downloads and at least 80 still live at publication.
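Because RDD works by pointing a dependency specifier at an attacker-controlled URL rather than the npm registry, one coarse audit is to flag URL-based specifiers in a manifest. This is a heuristic sketch, not Koi's tooling; the sample manifest and domain are fabricated for illustration.

```python
import json

# Defensive heuristic: flag npm dependency specifiers that resolve
# outside the registry (http(s):// or git URLs) — the install-time
# fetch mechanism that Remote Dynamic Dependencies abuse.

REMOTE_PREFIXES = ("http://", "https://", "git://", "git+")

def find_remote_deps(package_json: str) -> list[tuple[str, str]]:
    """Return (name, spec) pairs whose specifier is a remote URL."""
    manifest = json.loads(package_json)
    flagged = []
    for field in ("dependencies", "devDependencies", "optionalDependencies"):
        for name, spec in manifest.get(field, {}).items():
            if spec.startswith(REMOTE_PREFIXES):
                flagged.append((name, spec))
    return flagged

sample = json.dumps({
    "dependencies": {
        "left-pad": "^1.3.0",
        "ui-helpers": "http://evil.example/pkg.tgz",  # fabricated example
    }
})
print(find_remote_deps(sample))
```

A URL specifier is not proof of compromise—some legitimate packages use git dependencies—but in CI it is cheap to surface for review.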

read more →

Wed, October 29, 2025

Preparing for the Digital Battlefield of Identity Risk

🔒 BeyondTrust's 2026 predictions argue that the next major breaches will stem from unmanaged identity debt rather than simple phishing. The report highlights three identity-driven threats: agentic AI acting as privileged deputies vulnerable to prompt manipulation, automated "account poisoning" in financial systems, and long-dormant "ghost" identities surfacing in legacy IAM. The authors recommend an identity-first posture with strict least-privilege, context-aware controls, real-time auditing, and stronger identity governance.

read more →

Wed, October 29, 2025

Open-Source b3 Benchmark Boosts LLM Security Testing

🛡️ The UK AI Security Institute (AISI), Check Point and Lakera have launched b3, an open-source benchmark to assess and strengthen the security of backbone LLMs that power AI agents. b3 focuses on the specific LLM calls within agent workflows where malicious inputs can trigger harmful outputs, using 10 representative "threat snapshots" combined with a dataset of 19,433 adversarial attacks from Lakera’s Gandalf initiative. The benchmark surfaces vulnerabilities such as system prompt exfiltration, phishing link insertion, malicious code injection, denial-of-service and unauthorized tool calls, making LLM security more measurable, reproducible and comparable across models and applications.

read more →

Wed, October 29, 2025

Practical AI Tactics for GRC: Opportunities and Risks

🔍 Join a free expert webinar that translates rapid AI advances into practical, actionable tactics for Governance, Risk, and Compliance (GRC) teams. The session will showcase real-world examples of AI improving compliance workflows, early lessons from agentic AI deployments, and the common risks teams often overlook. Expect clear guidance on mitigation strategies, regulatory gaps, and how to prepare your team to make AI a competitive compliance advantage.

read more →

Wed, October 29, 2025

BSI Warns of Growing AI Governance Gap in Business

⚠️ The British Standards Institution warns of a widening AI governance gap as many organisations accelerate AI adoption without adequate controls. An AI-assisted review of 100+ annual reports and two polls of 850+ senior leaders found strong investment intent but sparse governance: only 24% have a formal AI program and just 47% use formal processes. The report highlights weaknesses in incident management, training-data oversight and inconsistent approaches across markets.

read more →

Wed, October 29, 2025

Cybersecurity Awareness Month 2025: Deepfakes and Trust

🔍 Advances in AI and deepfake technology make it increasingly difficult to tell what’s real online, enabling convincingly fake videos, images and audio that scammers exploit to deceive individuals and organizations. Threat actors use deepfakes of public figures to promote bogus investments, create synthetic nudes to extort victims and deploy fake voices and videos to trick employees into wiring corporate funds. Watch ESET Chief Security Evangelist Tony Anscombe outline practical defenses to recognize and resist deepfakes, and explore other Cybersecurity Awareness Month videos on authentication, patching, ransomware and shadow IT.

read more →

Wed, October 29, 2025

Identity Crisis at the Perimeter: AI-Driven Impersonation

🛡️ Organizations face an identity crisis as generative AI and vast troves of breached personal data enable realistic digital doppelgangers. Attackers now automate hyper-personalized phishing, smishing and vishing, clone voices, and run coordinated multi-channel campaigns that reference real colleagues and recent projects. The article urges a shift to “never trust, always verify,” with radical visibility, rapid detection and phishing-resistant authentication such as FIDO2. It also warns of emerging agentic AI and recommends strict least-privilege controls plus continuous red-teaming.

read more →

Wed, October 29, 2025

Top 7 Agentic AI Use Cases Transforming Cybersecurity

🔐 Agentic AI is presented as a practical cybersecurity capability that can operate without direct human supervision, handling high-volume, time-sensitive tasks at machine speed. Industry leaders from Zoom, Dell Technologies and Deloitte highlight seven priority use cases — from autonomous threat detection and SOC augmentation to real-time zero‑trust enforcement — that capitalize on AI's scale and speed. The technology aims to reduce alert fatigue, accelerate mitigation, and free human teams for strategic work.

read more →

Tue, October 28, 2025

AI-Driven Malicious SEO and the Fight for Web Trust

🛡️ The article explains how malicious SEO operations use keyword stuffing, purchased backlinks, cloaking and mass-produced content to bury legitimate sites in search results. It warns that generative AI now amplifies this threat by producing tens of thousands of spam articles, spinning up fake social accounts and enabling more sophisticated cloaking. Defenders must deploy AI-based detection, graph-level backlink analysis and network behavioral analytics to spot coordinated abuse. The piece emphasizes proactive, ecosystem-wide monitoring to protect trust and legitimate businesses online.
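The "graph-level backlink analysis" mentioned above can be illustrated with a toy heuristic: a site whose inbound links come from a densely interlinked cluster (a link farm) scores higher than one linked by independent sites. The graph, domains, and scoring rule below are made up for illustration and are far simpler than production detection.

```python
from itertools import combinations

# Toy "link farm" heuristic: for a target site, measure what fraction
# of its linkers also link to each other. Dense mutual linking among
# linkers is a coordinated-abuse signal. All domains are fabricated.

def farm_score(backlinks: dict[str, set[str]], target: str) -> float:
    """Fraction of linker pairs that also link to one another."""
    linkers = [site for site, outs in backlinks.items() if target in outs]
    pairs = list(combinations(linkers, 2))
    if not pairs:
        return 0.0
    mutual = sum(
        1 for a, b in pairs
        if b in backlinks.get(a, set()) or a in backlinks.get(b, set())
    )
    return mutual / len(pairs)

graph = {
    "farm1.example": {"farm2.example", "spam-shop.example"},
    "farm2.example": {"farm1.example", "spam-shop.example"},
    "farm3.example": {"farm1.example", "spam-shop.example"},
    "blog.example": {"legit-shop.example"},
    "news.example": {"legit-shop.example"},
}
print(farm_score(graph, "spam-shop.example"))   # dense cluster → high
print(farm_score(graph, "legit-shop.example"))  # independent linkers → low
```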

read more →

Tue, October 28, 2025

Check Point's AI Cloud Protect with NVIDIA BlueField

🔒 Check Point has made AI Cloud Protect powered by NVIDIA BlueField available for enterprise deployment, offering DPU-accelerated security for cloud AI workloads. The solution aims to inspect and protect GenAI traffic and prompts to reduce data exposure risks while integrating with existing cloud environments. It targets prompt manipulation and infrastructure attacks at scale and is positioned for organizations building AI factories.

read more →

Tue, October 28, 2025

Securing the AI Factory: Palo Alto Networks and NVIDIA

🔒 Palo Alto Networks outlines a platform-centric approach to protect the enterprise AI Factory, announcing integration of Prisma AIRS with NVIDIA BlueField DPUs. The collaboration embeds distributed zero-trust security directly into infrastructure, delivering agentless, penalty-free runtime protection and real-time workload threat detection. Validated on NVIDIA RTX PRO Server and optimized for BlueField‑3, with BlueField‑4 forthcoming, the solution ties into Strata Cloud Manager and Cortex for end-to-end visibility and control, aiming to secure AI operations at scale without compromising performance.

read more →

Tue, October 28, 2025

Google for Startups: AI Cohort Boosts LATAM Cybersecurity

🔐 Google selected 11 startups for its inaugural Google for Startups Accelerator: AI for Cybersecurity in Latin America, a ten-week program that pairs founders with Google's technical resources, mentorship, and product support. The cohort — drawn from Brazil, Chile, Colombia, and Mexico — focuses on AI-driven solutions across threat detection, compliance automation, fraud prevention, and protections for AI agents. Participants will receive hands-on guidance to scale, validate, and deploy tools that reduce cyber risk across the region.

read more →

Tue, October 28, 2025

Investment Scams Mimicking Crypto and Forex Surge in Asia

🔍 Group-IB's research warns of a rapid rise in fake investment platforms across Asia that mimic cryptocurrency and forex exchanges to defraud victims. Organized, cross-border groups recruit via social media and messaging apps, deploying polished trading interfaces, automated chatbots and complex back-end systems to extract payments. The report maps two analytical models — Victim Manipulation Flow and Multi-Actor Fraud Network — and urges banks and regulators to monitor reused infrastructure and tighten KYC controls.

read more →

Tue, October 28, 2025

Researchers Expose GhostCall and GhostHire Campaigns

🔍 Kaspersky details two linked campaigns, GhostCall and GhostHire, that target Web3 and blockchain professionals worldwide and emphasize macOS-focused infection chains and social-engineering lures. The attacks deploy a range of payloads — DownTroy, CosmicDoor, RooTroy and others — to harvest secrets, escalate access, and persist. Guidance stresses user vigilance, strict dependency vetting, and centralized secrets management. Kaspersky links the activity to the BlueNoroff/Lazarus cluster and notes the actor has increasingly used generative AI to craft imagery and accelerate malware development.

read more →

Tue, October 28, 2025

Enabling a Safe Agentic Web with reCAPTCHA Controls

🔐 Google Cloud outlines a pragmatic framework to secure the emerging agentic web while preserving smooth user experiences. The post details how reCAPTCHA and Google Cloud combine agent and user identity, continuous behavior analysis, and AI-resistant mitigations such as mobile-device attestations. It highlights enabling safe agentic commerce via protocols like AP2 and tighter integration with cloud AI services.

read more →