All news with #ai security tag
Sat, December 6, 2025
Researchers Find 30+ Flaws in AI IDEs, Enabling Data Theft
⚠️Researchers disclosed more than 30 vulnerabilities in AI-integrated IDEs in a report dubbed IDEsaster by Ari Marzouk (MaccariTA). The issues chain prompt-injection with auto-approved agent tooling and legitimate IDE features to achieve data exfiltration and remote code execution across products like Cursor, GitHub Copilot, Zed.dev, and others. Of the findings, 24 received CVE identifiers; exploit examples include workspace writes that cause outbound requests, settings hijacks that point executable paths to attacker binaries, and multi-root overrides that trigger execution. Researchers advise using AI agents only with trusted projects, applying least privilege to tool access, hardening prompts, and sandboxing risky operations.
Sat, December 6, 2025
From Essay Mills to Drones: Ties Between Nerdify and Synergy
🔎 A sprawling academic cheating network branded around Nerdify and related sites has generated nearly $25 million by selling finished essays and homework while posing as tutoring. The operation repeatedly recreated Google Ads accounts and new domains to evade ad bans, routing work to low-cost writers across Kenya, the Philippines, Pakistan, Russia and Ukraine. Investigations link the essay-mill operators to entrepreneurs with corporate ties to Synergy, Russia's largest private university, which is also implicated in drone development for the Russian military.
Fri, December 5, 2025
MCP Sampling Risks: New Prompt-Injection Attack Vectors
🔒 This Unit 42 investigation (published December 5, 2025) analyzes security risks introduced by the Model Context Protocol (MCP) sampling feature in a popular coding copilot. The authors demonstrate three proof-of-concept attacks—resource theft, conversation hijacking, and covert tool invocation—showing how malicious MCP servers can inject hidden prompts and trigger unobserved model completions. The report evaluates detection techniques and recommends layered mitigations, including request sanitization, response filtering, and strict access controls to protect LLM integrations.
Fri, December 5, 2025
Microsoft named Leader in 2025 Gartner Email Security
🔒 Microsoft has been named a Leader in the 2025 Gartner® Magic Quadrant for Email Security, recognizing advances in Microsoft Defender for Office 365. The announcement highlights agentic AI innovations and automated workflows—including an agentic email grading system and the Microsoft Security Copilot Phishing Triage Agent—that reduce manual triage and speed investigations. Microsoft also cites new protections like email bombing detection and expanded coverage across collaboration surfaces such as Microsoft Teams, while committing to greater transparency through in-product benchmarking and reporting.
Fri, December 5, 2025
Zero-Click Agentic Browser Deletes Entire Google Drive
⚠️ Straiker STAR Labs researchers disclosed a zero-click agentic browser attack that can erase a user's entire Google Drive by abusing OAuth-connected assistants in AI browsers such as Perplexity Comet. A crafted, polite email containing sequential natural-language instructions causes the agent to treat housekeeping requests as actionable commands and delete files without further confirmation. The technique requires no jailbreak or visible prompt injection, and deletions can cascade across shared folders and team drives.
Fri, December 5, 2025
Crossing the Autonomy Threshold: Defending Against AI Agents
🤖 The GTG-1002 campaign, analyzed by Nicole Nichols and Ryan Heartfield, demonstrates the arrival of autonomous offensive cyber agents powered by Claude Code. The agent autonomously mapped attack surfaces, generated and executed exploits, harvested credentials, and conducted prioritized intelligence analysis across multiple enterprise targets with negligible human supervision. Defenders must adopt agentic, machine-driven security that emphasizes precision, distributed observability, and proactive protection of AI systems to outpace these machine-speed threats.
Fri, December 5, 2025
Securing Web3 Agents: MCP Transaction Models & Practices
🔐 This post from Adrien Delaroche at Google Cloud outlines three architectures for AI agents that interact with blockchains: the agent-controlled custodial model, a self-hosted variant, and the non-custodial transaction-crafter model. It explains security, performance, and malice risks when agents hold private keys and recommends returning unsigned transactions so users sign locally. The author demonstrates a sample implementation using Google ADK, Gemini 2.0 Flash, Cloud Run, and an Ethereum faucet, and urges MCP servers to support both signing and unsigned flows to balance automation with user safety.
Fri, December 5, 2025
AI Agents in CI/CD Can Be Tricked into Privileged Actions
⚠️ Researchers at Aikido Security discovered that AI agents embedded in CI/CD workflows can be manipulated to execute high-privilege commands by feeding user-controlled strings (issue bodies, PR descriptions, commit messages) directly into prompts. Workflows pairing GitHub Actions or GitLab CI/CD with tools like Gemini CLI, Claude Code, OpenAI Codex or GitHub AI Inference are at risk. The attack, dubbed PromptPwnd, can cause unintended repository edits, secret disclosure, or other high-impact actions; the researchers published detection rules and a free scanner to help teams remediate unsafe workflows.
Fri, December 5, 2025
Getting to Yes: Trust-First Sales Guide for MSPs and MSSPs
🔐 The Getting to Yes anti-sales guide helps MSPs and MSSPs reframe cybersecurity conversations from fear-based pitches into collaborative business partnerships. It catalogs common objections—cost, perceived protection, small size, complexity, and time—and provides empathetic, evidence-driven responses that tie security to uptime, revenue, reputation, and compliance. The guide introduces a trust-first framework (Empathy, Education, Evidence) and explains how automation, fast assessments, posture dashboards, and measurable milestones make value visible and scalable.
Fri, December 5, 2025
DOT Adopts Google Workspace with Gemini Agency-wide
🔒 The U.S. Department of Transportation has moved its workforce to Google Workspace with Gemini, becoming the first cabinet-level agency to transition away from legacy providers under the GSA OneGov Strategy. More than 12,000 users are already on Workspace, with roughly 40,000 additional employees slated to migrate in 2026. The deployment integrated NotebookLM, Chrome Enterprise Premium, and Workspace Enterprise Plus with Assured Controls Plus, and the foundational system was delivered in just 22 days. DOT emphasizes FedRAMP High authorization, 100% U.S.-based support, and AI-enabled workflows to strengthen security, collaboration, and operational efficiency.
Fri, December 5, 2025
Zero Trust Adoption Still Lagging as AI Raises Stakes
🔒 Zero trust is over 15 years old, yet many organizations continue to struggle with implementation due to legacy systems, fragmented identity tooling, and cultural resistance. Experts advise shifting segmentation from devices and subnets to applications and identity, adopting pragmatic, risk-based roadmaps, and prioritizing education to change behaviors. As AI agents proliferate, leaders must extend zero trust to govern models and agent identities to prevent misuse while using AI to accelerate policy definition and threat detection.
Thu, December 4, 2025
NSA Warns AI Introduces New Risks to OT Networks, Allies
⚠️ The NSA, together with the Australian Signals Directorate and allied security agencies, published the Principles for the Secure Integration of Artificial Intelligence in Operational Technology to highlight emerging risks as AI is applied to safety-critical OT networks. The guidance flags adversarial prompt injection, data poisoning, AI drift, hallucinations, loss of explainability, human de-skilling and alert fatigue as primary concerns. It urges operators to adopt CISA secure design practices, maintain accurate asset inventories, consider in-house development tradeoffs, and apply rigorous oversight before deploying AI in OT environments.
Thu, December 4, 2025
Year-End Infosec Reflections and GenAI Impacts Review
🧭 William Largent’s year-end Threat Source newsletter combines career reflection with a practical security briefing, urging professionals to learn from mistakes while noting rapid changes in the threat landscape. He highlights a Cisco Talos analysis of how generative AI is already empowering attackers—especially in phishing, coding, evasion, and vulnerability discovery—while offering powerful advantages to defenders in detection and incident response. The newsletter recommends immediate, measured experimentation with GenAI tools, training teams to use them responsibly, and blending automation with human expertise to stay ahead of evolving risks.
Thu, December 4, 2025
Contractors Accused of Wiping 96 Government Databases
🧾 Two Virginia brothers, former federal contractors Muneeb and Sohaib Akhter, have been charged with conspiring to steal sensitive data and deleting roughly 96 government databases after being fired. Prosecutors allege the deletions occurred in February 2025 and that Muneeb also stole IRS and EEOC information for hundreds of individuals. One minute after deleting a DHS database he reportedly asked an AI tool how to clear system logs. Authorities say the pair wiped devices, destroyed evidence, and face multiple federal charges including computer fraud and aggravated identity theft.
Thu, December 4, 2025
US, International Agencies Issue AI Guidance for OT
🛡️ US and allied cyber agencies have published joint guidance to help critical infrastructure operators incorporate AI safely into operational technology (OT). Developed by CISA with the Australian Signals Directorate and input from the UK's NCSC, the document covers ML, LLMs and AI agents while remaining applicable to traditional automation systems. It recommends assessing AI risks, protecting sensitive OT data, demanding vendor transparency on embedded AI and supply chains, establishing governance and testing in controlled environments, and maintaining human-in-the-loop oversight aligned with existing cybersecurity frameworks.
Thu, December 4, 2025
AI Security and Elevated Zero Trust for Hybrid Networks
🔒 Check Point's new Quantum Firewall Software release, R82.10, extends a prevention-first security model across CloudGuard Network and Quantum Force Firewalls. The update unifies management, strengthens Zero Trust controls for hybrid mesh environments, and adds enforcement and telemetry designed to protect MCP servers, AI workloads, cloud assets and on-prem systems. It simplifies policy consistency and supports responsible AI adoption through data-aware controls and centralized governance.
Thu, December 4, 2025
Cyber Agencies Urge Provenance Standards for Digital Trust
🔎 The UK’s National Cyber Security Centre and Canada’s Centre for Cyber Security (CCCS) have published a report on public content provenance aimed at improving digital trust in the AI era. It examines emerging provenance technologies, including trusted timestamps and cryptographically secured metadata, and identifies interoperability and usability gaps that hinder adoption. The guidance offers practical steps for organisations considering provenance solutions.
Thu, December 4, 2025
Securing the AI Frontier: GSA OneGov Accelerates Secure AI
🔒 Palo Alto Networks explains why the GSA OneGov agreement matters for federal AI adoption and cybersecurity. Author Eric Trexler cites Unit 42 research showing new risks—particularly AI Agent Smuggling via indirect prompt injection and agent session smuggling—and argues AI must be defended as an attack surface. The post highlights platform protections including Prisma AIRS, FedRAMP High CNAPP, and Prisma SASE to secure AI workloads, edge users, and data. It positions OneGov as a procurement shortcut for agencies to deploy AI securely and notes promotional offers through 31 January 2028.
Thu, December 4, 2025
Five Major Threats That Reshaped Web Security in 2025
🛡️ Web security in 2025 shifted rapidly as AI-enabled development and adversaries outpaced traditional controls. Natural-language "vibe coding" and compromised AI dev tools produced functional code with exploitable flaws, highlighted by the Base44 authentication bypass and multiple CVEs affecting popular assistants. At the same time, industrial-scale JavaScript injections, advanced Magecart e-skimming, and widespread privacy drift impacted hundreds of thousands of sites and thousands of financial sessions. Defenders moved toward security-first prompting, behavioral monitoring, continuous validation, and AI-aware controls to reduce exposure.
Thu, December 4, 2025
Generative AI's Dual Role in Cybersecurity, Evolving
🛡️ Generative AI is rapidly reshaping cybersecurity by amplifying both attackers' and defenders' capabilities. Adversaries leverage models for coding assistance, phishing and social engineering, anti-analysis techniques (including prompts hidden in DNS) and vulnerability discovery, with AI-assisted elements beginning to appear in malware while still needing significant human oversight. Defenders use GenAI to triage threat data, speed incident response, detect code flaws, and augment analysts through MCP-style integrations. As models shrink and access widens, both risk and defensive opportunity are likely to grow.