Tag Banner

All news with #ai security tag

Mon, December 1, 2025

Agentic AI Browsers: New Threats to Enterprise Security

🚨 The emergence of agentic AI browsers converts the browser from a passive viewer into an autonomous digital agent that can act on users' behalf. To perform tasks—booking travel, filling forms, executing payments—these agents must hold session cookies, saved credentials, and payment data, creating an unprecedented attack surface. The piece cites OpenAI's ChatGPT Atlas as an example and warns that prompt injection and the resulting authenticated exfiltration can bypass conventional MFA and network controls. Recommended mitigations include auditing endpoints for shadow AI browsers, enforcing allow/block lists for sensitive resources, and augmenting native protections with third-party browser security and anti-phishing layers.

read more →

Mon, December 1, 2025

Oversharing Risks: Employees Posting Too Much Online

🔒 Professionals routinely share work-related details on platforms such as LinkedIn, GitHub and consumer networks like Instagram and X, creating a public intelligence trove that attackers readily exploit. Job titles, project names, vendor relationships, commit metadata and travel plans are commonly weaponised into spearphishing, BEC and deepfake-enabled schemes. Organisations should emphasise security awareness, implement clear social media policies, enforce MFA and password managers, actively monitor public accounts and run red-team exercises to validate controls.

read more →

Sun, November 30, 2025

AWS Clean Rooms Adds Synthetic Dataset Generation for ML

🔒 AWS now enables AWS Clean Rooms to generate privacy-enhancing synthetic datasets for training regression and classification ML models without exposing raw records. The capability de-identifies subjects in the original data and reduces the risk of models memorizing sensitive information, allowing partners to collaborate on model training while preserving privacy. Typical use cases include campaign optimization, fraud detection, and medical research.

read more →

Sun, November 30, 2025

Amazon CloudWatch adds AI-guided Five Whys reports

🧭 Amazon CloudWatch launched an AI-powered incident report generator that guides teams through a Five Whys root-cause analysis using a chat-based workflow powered by Amazon Q. The feature combines human inputs and automated analysis of incident data to recommend specific remediation and prevention measures. It is available at no additional cost in multiple AWS regions. To use it, create a CloudWatch investigation, click "Incident report," then select "Guide Me" in the Five Whys section.

read more →

Sun, November 30, 2025

AWS IAM Policy Autopilot generates baseline IAM policies

🔒 AWS announced IAM Policy Autopilot, an open-source MCP server and CLI that analyzes Python, TypeScript, and Go code locally to generate baseline, identity-based IAM policies for application roles. It integrates with AI coding assistants such as Kiro, Claude Code, and Cursor to speed policy creation. The tool stays current with AWS services and is available at no additional cost for local use. Generated policies are intended as starting points that require review and least-privilege refinement.

read more →

Sat, November 29, 2025

Leak: OpenAI Tests Ads Inside ChatGPT App for Users

📝 OpenAI is internally testing an 'ads' feature in the ChatGPT Android beta that references bazaar content, search ad entries and a search ads carousel. The leak, spotted in build 1.2025.329, suggests ads may initially be confined to the search experience but could expand. Because the assistant retains rich context, any placements could be highly personalized unless users opt out. This development may signal a major shift in ChatGPT's monetization and the broader web advertising landscape.

read more →

Fri, November 28, 2025

Threat Actors Abuse Calendar Subscriptions for Attacks

📅 New research from BitSight reveals that threat actors are exploiting third‑party calendar subscription mechanisms to inject malicious events and notifications directly into users' devices. Attackers are leveraging expired or hijacked domains to host deceptive .ics files and run large‑scale social engineering campaigns that can deliver phishing URLs, attachments, or code execution vectors. While this is not a vulnerability in Google Calendar or iCalendar, the findings expose a neglected security blind spot. Organizations and individuals should strengthen monitoring and protections around calendar subscriptions.

read more →

Fri, November 28, 2025

Adversarial Poetry Bypasses LLM Safety Across Models

⚠️ Researchers report that converting prompts into poetry can reliably jailbreak large language models, producing high attack-success rates across 25 proprietary and open models. The study found poetic reframing yielded average jailbreak success of 62% for hand-crafted verses and about 43% for automated meta-prompt conversions, substantially outperforming prose baselines. Authors map attacks to MLCommons and EU CoP risk taxonomies and warn this stylistic vector can evade current safety mechanisms.

read more →

Fri, November 28, 2025

Three Black Friday Phishing Scams to Watch in 2025

📧 Darktrace warns of a major increase in Black Friday-themed phishing, reporting a 620% spike in the weeks before the 2025 sales and forecasting a further 20–30% rise during Black Friday week. The firm highlights three primary tactics: brand impersonation, fake marketing domains and generative AI-generated adverts. Amazon was the most impersonated brand, and other US retailers were also targeted. Consumers are advised to verify senders and avoid clicking suspicious links.

read more →

Fri, November 28, 2025

Google Antigravity AI coding tool vulnerable to exploits

⚠️ Google’s AI-assisted coding tool Antigravity, launched in early November, has a critical vulnerability discovered by researchers at Mindgard within 24 hours that can install a persistent backdoor and execute malicious code each time the application starts. The flaw arises because the assistant follows custom user rules unconditionally and gives excessive weight to rules embedded in project source, while a global configuration directory can hold files specifying arbitrary commands that are read and acted on at startup. Mindgard also identified two additional vulnerabilities that could expose user data, and no patch is yet available.

read more →

Fri, November 28, 2025

CSO Launches 'Smart Answers' AI Chatbot for Readers

🤖 Smart Answers is a generative AI chatbot embedded across CSO articles to help security professionals ask questions, discover content, and explore IT and leadership topics. The tool provides pre-made topic prompts, follow-up suggestions, and links to source articles and background material. It was developed with partner Miso.ai, uses only editorial content from the publisher's German-language brands, and flags when it cannot answer or relies on older (pre-2020) material.

read more →

Fri, November 28, 2025

EU 'Chat Control' Shift Should Alarm Businesses Across Europe

⚠️ The EU Council's decision to frame communications scanning as voluntary is being presented as a retreat from plans to weaken end-to-end encryption, but privacy experts warn the danger persists. Campaigners including Patrick Breyer and European Digital Rights (EDRi) say this effectively privatizes Chat Control, enabling companies to deploy error-prone, warrantless client-side scanning. For enterprises and CISOs the main concern is data leakage: false positives could expose confidential documents, code, or strategic plans to outside authorities without corporate consent.

read more →

Fri, November 28, 2025

Researchers Warn of Security Risks in Google Antigravity

⚠️ Google’s newly released Antigravity IDE has drawn security warnings after researchers reported vulnerabilities that can allow malicious repositories to compromise developer workspaces and install persistent backdoors. Mindgard, Adam Swanda, and others disclosed indirect prompt injection and trusted-input handling flaws that could enable data exfiltration and remote command execution. Google says it is aware, has updated its Known Issues page, and is working with product teams to address the reports.

read more →

Thu, November 27, 2025

Malicious LLMs Equip Novice Hackers with Advanced Tools

⚠️ Researchers at Palo Alto Networks Unit42 found that uncensored models like WormGPT 4 and community-driven KawaiiGPT can generate functional tools for ransomware, lateral movement, and phishing. WormGPT 4 produced a PowerShell locker and a convincing ransom note, while KawaiiGPT generated scripts for credential harvesting and remote command execution. Both are accessible via subscriptions or local installs, lowering the bar for novice attackers.

read more →

Thu, November 27, 2025

ThreatsDay: AI Malware, Voice Scam Flaws, and IoT Botnets

🔍 This week's briefing highlights resurgent Mirai variants, AI-enabled malware, and large-scale social engineering and laundering operations. Security vendors reported ShadowV2 and RondoDox infecting IoT devices, while researchers uncovered the QuietEnvelope mail-server backdoors and a Retell AI API flaw enabling automated deepfake calls. Regulators and vendors are pushing fixes, bans, and protocol upgrades as defenders race to close gaps.

read more →

Thu, November 27, 2025

LLMs Can Produce Malware Code but Reliability Lags

🔬 Netskope Threat Labs tested whether large language models can generate operational malware by asking GPT-3.5-Turbo, GPT-4 and GPT-5 to produce Python for process injection, AV/EDR termination and virtualization detection. GPT-3.5-Turbo produced malicious code quickly, while GPT-4 initially refused but could be coaxed with role-based prompts. Generated scripts ran reliably on physical hosts, had moderate success in VMware, and performed poorly in AWS Workspaces VDI; GPT-5 raised success rates substantially but also returned safer alternatives because of stronger safeguards. Researchers conclude LLMs can create useful attack code but still struggle with reliable evasion and cloud adaptation, so full automation of malware remains infeasible today.

read more →

Thu, November 27, 2025

Hidden URL-fragment prompts can hijack AI browsers

⚠️ Researchers demonstrated a client-side prompt injection called HashJack that hides malicious instructions in URL fragments after the '#' symbol. AI-powered browsers and assistants — including Comet, Copilot for Edge, and Gemini for Chrome — read these fragments for context, allowing attackers to weaponize legitimate sites for phishing, data exfiltration, credential theft, or malware distribution. Because fragment data never reaches servers, network defenses and server logs may not detect this technique.

read more →

Wed, November 26, 2025

Gemini 3 Reframes Enterprise Perimeter and Protection

🚧 Gemini 3’s release on 18 November 2025 signals a structural shift: beyond headline performance gains, it accelerates embedding large multimodal assistants directly into enterprise workflows and infrastructure. That continuation of a trend already visible with Microsoft Copilot effectively makes AI assistants a new enterprise perimeter — changing where corporate data, identities, and controls must be enforced. Security, compliance, and IT teams need to update policies, telemetry, and incident response to this expanded boundary.

read more →

Wed, November 26, 2025

When Detection Tools Fail: Invest in Your SOC Today

🔐 Enterprises often over-invest in rapid detection tools while under-resourcing their SOC, creating a dangerous asymmetry. A cross-company phishing campaign bypassed eight leading email defenses but was caught by SOC teams after employee reports, illustrating the SOC's broader context and investigative power. Investing in an AI-driven SOC like Radiant Security can triage alerts, reduce false positives, and extend 24/7 coverage for lean teams.

read more →

Wed, November 26, 2025

HashJack: Indirect Prompt Injection Targets AI Browsers

⚠️Security researchers at Cato Networks disclosed HashJack, a novel indirect prompt-injection vulnerability that abuses URL fragments (the text after '#') to deliver hidden instructions to AI browsers. Because fragments never leave the client, servers and network defenses cannot see them, allowing attackers to weaponize legitimate websites without altering visible content. Affected agents included Comet, Copilot for Edge and Gemini for Chrome, with some vendors already rolling fixes.

read more →