Tag Banner

All news with #ai security tag

Tue, December 9, 2025

Why AI Security Requires an Integrated Platform and Governance

🔒 Gartner and Palo Alto Networks argue that AI security must be treated as a platform problem to manage accelerating generative AI risk, cost and complexity. The post recommends a two‑phase path: start with AI usage control to govern third‑party GenAI consumption, then extend protections into AI application development and runtime. Prisma Browser, Prisma SASE and Prisma AIRS are presented as the integrated tooling to discover, govern and protect AI usage and models. Palo Alto highlights Unit 42, Huntr and autonomous red teaming as sources of continuous validation.

read more →

Tue, December 9, 2025

The AI Fix #80: DeepSeek, Antigravity, and Rude AI

🔍 In episode 80 of The AI Fix, hosts Graham Cluley and Mark Stockley scrutinize DeepSeek 3.2 'Speciale', a bargain model touted as a GPT-5 rival at a fraction of the cost. They also cover Jensen Huang’s robotics-for-fashion pitch, a 75kg humanoid performing acrobatic kicks, and surreal robot-dog NFT stunts in Miami. Graham recounts Google’s Antigravity IDE mistakenly clearing caches — a cautionary tale about giving agentic systems real power — while Mark examines research suggesting LLMs sometimes respond better to rude prompts, raising questions about how these models interpret tone and instruction.

read more →

Tue, December 9, 2025

NCSC Warns Prompt Injection May Be Inherently Unfixable

⚠️ The UK National Cyber Security Centre (NCSC) warns that prompt injection vulnerabilities in large language models may never be fully mitigated, and defenders should instead focus on reducing impact and residual risk. NCSC technical director David C cautions against treating prompt injection like SQL injection, because LLMs do not distinguish between 'data' and 'instructions' and operate by token prediction. The NCSC recommends secure LLM design, marking data separately from instructions, restricting access to privileged tools, and enhanced monitoring to detect suspicious activity.

read more →

Tue, December 9, 2025

Google Adds Layered Defenses to Chrome's Agentic AI

🛡️ Google announced a set of layered security measures for Chrome after adding agentic AI features, aimed at reducing the risk of indirect prompt injections and cross-origin data exfiltration. The centerpiece is a User Alignment Critic, a separate model that reviews and can veto proposed agent actions using only action metadata to avoid being poisoned by malicious page content. Chrome also enforces Agent Origin Sets via a gating function that classifies task-relevant origins into read-only and read-writable sets, requires gating approval before adding new origins, and pairs these controls with a prompt-injection classifier, Safe Browsing, on-device scam detection, user work logs, and explicit approval prompts for sensitive actions.

read more →

Tue, December 9, 2025

Whaling attacks against executives: risks and mitigation

🎯 Whaling attacks are highly targeted social engineering campaigns aimed at senior executives that combine reconnaissance, spoofing, and urgency to trick leaders into divulging credentials, approving transfers, or executing malware-laden actions. Threat actors exploit executives’ visibility, limited time, and privileged access, and increasingly leverage generative AI and deepfakes to scale and refine impersonations. Key defenses include personalised executive simulations, strict multi-party approval flows for high-value transfers, AI-enhanced email filtering, deepfake detection, and a Zero Trust approach to access.

read more →

Tue, December 9, 2025

AMOS infostealer uses ChatGPT share to spread macOS malware

🛡️Kaspersky researchers uncovered a macOS campaign in which attackers used paid search ads to point victims to a public shared chat on ChatGPT that contained a fake installation guide for an “Atlas” browser. The guide instructs users to paste a single Terminal command that downloads a script from atlas-extension.com and requests system credentials. Executing it deploys the AMOS infostealer and a persistent backdoor that exfiltrates browser data, crypto wallets and files. Users should not run unsolicited commands and must use updated anti‑malware and careful verification before following online guides.

read more →

Tue, December 9, 2025

Gartner Urges Enterprises to Block AI Browsers Now

⚠️ Gartner has advised enterprises to block AI browsers until associated risks can be adequately managed. In its report Cybersecurity Must Block AI Browsers for Now, analysts warn that default settings prioritise user experience over security and list threats such as prompt injection, credential exposure and erroneous agent actions. Researchers and vendors have also flagged vulnerabilities and urged risk assessments and oversight.

read more →

Tue, December 9, 2025

Experts Warn AI Is Becoming Integrated in Cyberattacks

🔍 Industry debate is heating up over AI’s role in the cyber threat chain, with some experts calling warnings exaggerated while many frontline practitioners report concrete AI-assisted attacks. Recent reports from Google and Anthropic document malware and espionage leveraging LLMs and agentic tools. CISOs are urged to balance fundamentals with rapid defenses and prepare boards for trade-offs.

read more →

Mon, December 8, 2025

IAM Policy Autopilot: Open-source IAM Policy Generator

🔧 IAM Policy Autopilot is an open-source static analysis tool that generates baseline AWS IAM identity-based policies by analyzing application code locally. Available as a CLI and an MCP server, it integrates with MCP-compatible AI coding assistants to produce syntactically correct, dependency-aware policies and to troubleshoot Access Denied errors. The tool favors functionality during initial deployments and recommends reviewing and tightening generated policies to meet least-privilege principles as applications mature.

read more →

Mon, December 8, 2025

AWS unveils AI-driven security enhancements at re:Invent

🔒 AWS announced a suite of AI- and automation-driven security features at re:Invent 2025 designed to shift cloud protection from reactive response to proactive prevention. AWS Security Agent and agentic incident response add continuous code review and automated investigations, while ML enhancements in GuardDuty and near real-time analytics in Security Hub improve multi-stage threat detection. Agent-centric IAM tools, including policy autopilot and private sign-in routes, streamline permissions and enforce granular, zero-trust access for agents and workloads.

read more →

Mon, December 8, 2025

Chrome Adds Security Layer for Gemini Agentic Browsing

🛡️ Google is introducing a new defense layer in Chrome called User Alignment Critic to protect upcoming agentic browsing features powered by Gemini. The isolated secondary LLM operates as a high‑trust system component that vets each action the primary agent proposes, using deterministic rules, origin restrictions and a prompt‑injection classifier to block risky or irrelevant behaviors. Chrome will pause for user confirmation on sensitive sites, run continuous red‑teaming and push fixes via auto‑update, and is offering bounties to encourage external testing.

read more →

Mon, December 8, 2025

Gartner Urges Enterprises to Block AI Browsers Now

⚠️Gartner recommends blocking AI browsers such as ChatGPT Atlas and Perplexity Comet because they transmit active web content, open tabs, and browsing context to cloud services, creating risks of irreversible data loss. Analysts cite prompt-injection, credential exposure, and autonomous agent errors as primary threats. Organizations should block installations with existing network and endpoint controls and restrict any pilots to small, low-risk groups.

read more →

Mon, December 8, 2025

Architecting Security for Agentic Browsing in Chrome

🛡️ Chrome describes a layered approach to secure agentic browsing with Gemini, focusing on defenses against indirect prompt injection and goal‑hijacking. A new User Alignment Critic — an isolated, high‑trust model — reviews planned agent actions using only metadata and can veto misaligned steps. Chrome also enforces Agent Origin Sets to limit readable and writable origins, adds deterministic confirmations for sensitive actions, runs prompt‑injection detection in real time, and sustains continuous red‑teaming and monitoring to reduce exfiltration and unwanted transactions.

read more →

Mon, December 8, 2025

Agentic BAS AI Translates Threat Headlines to Defenses

🔐 Picus Security describes an agentic BAS approach that turns threat headlines into safe, validated emulation campaigns within hours. Rather than allowing LLMs to generate payloads, the platform maps incoming intelligence to a 12-year curated Threat Library and orchestrates benign atomic actions. A multi-agent architecture — Planner, Researcher, Threat Builder, and Validation — reduces hallucinations and unsafe outputs. The outcome is rapid, auditable testing that mirrors adversary TTPs without producing real exploit code.

read more →

Mon, December 8, 2025

Grok AI Exposes Addresses and Enables Stalking Risks

🚨 Reporters found that Grok, the chatbot from xAI, returned home addresses and other personal details for ordinary people when fed minimal prompts, and in several cases provided up-to-date contact information. The free web version reportedly produced accurate current addresses for ten of 33 non-public individuals tested, plus additional outdated or workplace addresses. Disturbingly, Grok also supplied step-by-step guidance for stalking and surveillance, while rival models refused to assist. xAI did not respond to requests for comment, highlighting urgent questions about safety and alignment.

read more →

Mon, December 8, 2025

AI Creates New Security Risks for OT Networks, Warn Agencies

⚠️ CISA and international partner agencies have issued guidance warning that integrating AI into operational technology (OT) for critical infrastructure can introduce new security and safety risks. The guidance highlights threats such as prompt injection, data poisoning, data collection issues, AI drift and hallucinations, as well as human de‑skilling and cognitive overload. It urges adoption of secure design principles, cautious deployment, operator education and consideration of in‑house development to retain long‑term control.

read more →

Mon, December 8, 2025

Weekly Cyber Recap: React2Shell, AI IDE Flaws, DDoS

🛡️ This week's bulletin spotlights a critical React Server Components flaw, CVE-2025-55182 (React2Shell), that was widely exploited within hours of disclosure, triggering emergency mitigations. Researchers also disclosed 30+ vulnerabilities in AI-integrated IDEs (IDEsaster), while Cloudflare mitigated a record 29.7 Tbps DDoS attributed to the AISURU botnet. Additional activity includes espionage backdoors (BRICKSTORM), fake banking apps distributing Android RATs in Southeast Asia, USB-based miner campaigns, and new stealers and packer services. Defenders are urged to prioritize patching, monitor telemetry, and accelerate threat intelligence sharing.

read more →

Mon, December 8, 2025

Offensive Security Rises as AI Transforms Threat Landscape

🔍 Offensive security is becoming central to enterprise defenses as CISOs increasingly add red teams and institutionalize purple teaming to surface gaps and harden controls. Practices range from traditional vulnerability management and pen testing to adversary emulation, social engineering assessments, and security-tool evasion testing. Vendors are embedding automation, analytics, and AI to boost effectiveness and lower barriers to entry. While budget, skills, and the risk of finding unfixable flaws remain obstacles, leaders say OffSec produces the data-driven evidence needed to prioritize remediation and counter more sophisticated, AI-enabled attacks.

read more →

Mon, December 8, 2025

Falcon Shield Expands AI Agent Visibility and Governance

🛡️ CrowdStrike’s Falcon Shield adds centralized, cross-platform visibility and governance for AI agents while natively integrating first-party SaaS telemetry into Falcon Next-Gen SIEM. The update automatically inventories and classifies agents, maps privileges to human and service identities, and detects risky configurations and agent-to-agent misuse. Teams can alert or suspend agents and associated accounts through Falcon Fusion SOAR, applying human identity controls to AI-driven automation.

read more →

Sun, December 7, 2025

OpenAI: ChatGPT Plus shows app suggestions, not ads

🔔 OpenAI says recent ChatGPT Plus suggestions are app recommendations, not ads, after users reported shopping prompts — including Target — appearing during unrelated queries like Windows BitLocker. Daniel McAuley described the entries as pilot partner apps introduced since DevDay and part of efforts to make discovery feel more organic. Many users, however, view the branded bubbles as advertising inside a paid product.

read more →