All news with the #ai security tag

Fri, October 24, 2025

Malicious Extensions Spoof AI Browser Sidebars, Report Warns

⚠️ Researchers at SquareX warn that malicious browser extensions can inject fake AI sidebars into AI-enabled browsers, including OpenAI Atlas, to steer users to attacker-controlled sites, exfiltrate data, or install backdoors. The extensions inject JavaScript to overlay a spoofed assistant and manipulate responses, enabling actions such as OAuth token harvesting or execution of reverse-shell commands. The report recommends banning unmanaged AI browsers where possible, auditing all extensions, applying strict zero-trust controls, and enforcing granular browser-native policies to block high-risk permissions and risky command execution.
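
One of the report's mitigations, auditing installed extensions for high-risk permissions, can be approximated with a short script. A minimal sketch in Python, assuming a Chrome-on-Linux profile layout and an illustrative permission blocklist (neither is taken from the SquareX report):

```python
import json
from pathlib import Path

# Assumed profile path for Chrome on Linux; other browsers and OSes differ.
EXTENSIONS_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Illustrative high-risk permission set, not the report's actual list.
HIGH_RISK = {"scripting", "tabs", "webRequest", "cookies", "debugger", "<all_urls>"}

def audit_extensions(root: Path) -> None:
    # On-disk layout: <root>/<extension_id>/<version>/manifest.json
    for manifest_path in root.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        perms = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
        risky = perms & HIGH_RISK
        if risky:
            name = manifest.get("name", manifest_path.parent.parent.name)
            print(f"{name}: {sorted(risky)}")

if __name__ == "__main__":
    audit_extensions(EXTENSIONS_DIR)
```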

read more →

Thu, October 23, 2025

Mic-E-Mouse: Eavesdropping via High-Resolution Mice

🔊 A recent study by researchers at the University of California, Irvine shows that very high-resolution optical sensors in some mice can detect minute desk vibrations produced by speech. The theoretical attack, labeled Mic-E-Mouse, requires mice with extremely high DPI (≈10,000+), very high polling rates (≈4,000 Hz+), and malware to exfiltrate raw sensor frames. The raw signals are extremely noisy, but Wiener filtering and ML-based denoising allowed partial speech recovery under controlled lab conditions. Significant practical limitations — few qualifying models, controlled setups with speakers inches from the sensor, and steep drops in accuracy with common barriers — plus straightforward mitigations make the attack largely a proof of concept for now.
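
The classical stage of that recovery pipeline is easy to sketch. A toy Python example of Wiener filtering a faint tone buried in sensor noise via scipy.signal.wiener; the signal model, rates, and window size are invented for illustration, and the study's ML denoising stage is omitted:

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
fs = 4_000                                   # assumed polling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
speech = 0.01 * np.sin(2 * np.pi * 300 * t)  # faint stand-in for a speech component
observed = speech + 0.05 * rng.standard_normal(t.size)  # dominant sensor noise

# Classical Wiener filtering; the researchers follow this with learned denoising.
denoised = wiener(observed, mysize=5)

print("corr(raw, speech):     ", round(np.corrcoef(observed, speech)[0, 1], 3))
print("corr(denoised, speech):", round(np.corrcoef(denoised, speech)[0, 1], 3))
```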

read more →

Thu, October 23, 2025

Microsoft Unveils Mico: Copilot Avatar for Empathy

🤖 Microsoft today introduced Mico, a new avatar for its AI-powered Copilot designed to feel more personal, supportive, and empathetic. The optional visual presence listens, adapts its expressions and color to interactions, and will respectfully push back when presented with incorrect information. The Copilot Fall Release also adds features such as Copilot Groups for up to 32 collaborators, long-term memory, Deep Research Proactive Actions, and a Learn Live voice-enabled tutor. These updates begin rolling out in the United States with broader regional availability planned.

read more →

Thu, October 23, 2025

Pakistan-linked APT36 deploys DeskRAT against BOSS Linux

🔍 Sekoia.io researchers uncovered a cyber-espionage campaign, active since June 2025, that targets Indian government Linux systems with a new Golang RAT named DeskRAT. The operation primarily targeted the Indian government‑endorsed BOSS Linux distribution, using phishing ZIPs that executed Bash downloaders while displaying decoy PDFs. Attackers used dedicated staging servers and a new operator dashboard to manage victims and exfiltrate files.
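
The delivery pattern, an archive pairing an executable script with a decoy document, lends itself to simple triage. A heuristic sketch in Python; the extension list is an assumption, and real lures may nest payloads differently:

```python
import zipfile

SCRIPT_EXTS = (".sh", ".bash", ".desktop")   # assumed downloader file types

def looks_like_decoy_lure(path: str) -> bool:
    """Flag archives that pair a script with a PDF decoy, as in the campaign above."""
    with zipfile.ZipFile(path) as zf:
        names = [n.lower() for n in zf.namelist()]
    has_script = any(n.endswith(SCRIPT_EXTS) for n in names)
    has_pdf_decoy = any(n.endswith(".pdf") for n in names)
    return has_script and has_pdf_decoy

# Example: looks_like_decoy_lure("attachment.zip") -> True for script+PDF pairs
```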

read more →

Thu, October 23, 2025

Zero Trust Blind Spot: Identity Risk in AI Agents Now

🔒 Agentic AI introduces a mounting Zero Trust challenge as autonomous agents increasingly act with inherited or unmanaged credentials, creating orphaned identities and ungoverned access. Ido Shlomo of Token Security argues that identity must be the root of trust and recommends applying the NIST AI RMF through an identity-driven Zero Trust lens. Organizations should discover and inventory agents, assign unique managed identities and owners, enforce intent-based least privilege, and apply lifecycle controls, monitoring, and governance to restore auditability and accountability.
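
To make those recommendations concrete, here is a minimal Python sketch of an agent identity record with an owner, intent-scoped permissions, and a lifecycle state; the field names are illustrative, not drawn from Token Security's product or the NIST AI RMF:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str                    # unique managed identity, never inherited or shared
    owner: str                       # accountable human owner
    allowed_intents: set[str] = field(default_factory=set)  # e.g. {"read:tickets"}
    lifecycle_state: str = "active"  # active | suspended | retired
    last_reviewed: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def authorize(self, intent: str) -> bool:
        """Intent-based least privilege: deny anything not explicitly granted."""
        return self.lifecycle_state == "active" and intent in self.allowed_intents

bot = AgentIdentity("agent-7f2c", owner="jsmith", allowed_intents={"read:tickets"})
assert bot.authorize("read:tickets") and not bot.authorize("delete:tickets")
```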

read more →

Thu, October 23, 2025

Spoofed AI Sidebars Can Trick Atlas and Comet Users

⚠️ Researchers at SquareX demonstrated an AI Sidebar Spoofing attack that can overlay a counterfeit assistant in OpenAI's Atlas and Perplexity's Comet browsers. A malicious extension injects JavaScript to render a fake sidebar identical to the real UI and intercepts all interactions, leaving users unaware. SquareX showcased scenarios including cryptocurrency phishing, OAuth-based Gmail/Drive hijacks, and delivery of reverse-shell installation commands. The team reported the findings to the vendors but had received no response as of publication.

read more →

Thu, October 23, 2025

Secure AI at Scale and Speed: Free Webinar Framework

🔐 The Hacker News is promoting a free webinar that presents a practical framework to secure AI at scale while preserving speed of adoption. Organizers warn of a growing “quiet crisis”: rapid proliferation of unmanaged AI agents and identities that lack lifecycle controls. The session focuses on embedding security by design, governing AI agents that behave like users, and stopping credential sprawl and privilege abuse from Day One. It is aimed at engineers, architects, and CISOs seeking to move from reactive firefighting to proactive enablement.

read more →

Thu, October 23, 2025

ThreatsDay: Widespread Attacks Exploit Trusted Systems

🔒 This ThreatsDay bulletin highlights a series of recent incidents where attackers favored the easiest paths in: tricking users, abusing trusted services, and exploiting stale or misconfigured components. Notable items include a malicious npm package with a post-install backdoor, a CA$176M FINTRAC penalty for missed crypto reporting, session hijacking via MCP (CVE-2025-6515), and OAuth-based persistent backdoors. Practical defenses emphasized are rapid patching, disabling risky install hooks, auditing OAuth apps and advertisers, and hardening agent and deserialization boundaries.

read more →

Thu, October 23, 2025

Agent Factory Recap: Securing AI Agents in Production

🛡️ This recap of the Agent Factory episode explains practical strategies for securing production AI agents, demonstrating attacks like prompt injection, invisible Unicode exploits, and vector DB context poisoning. It highlights Model Armor for pre- and post-inference filtering, sandboxed execution, network isolation, observability, and tool safeguards via the Agent Development Kit (ADK). The team demonstrates a secured DevOps assistant that blocks data-exfiltration attempts while preserving intended functionality and provides operational guidance on multi-agent authentication, least-privilege IAM, and compliance-ready logging.
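
One of the demonstrated attack classes, invisible Unicode, is straightforward to screen for. A minimal Python sketch flagging zero-width and other format characters; the character set is a starting point, not Model Armor's actual filter:

```python
import unicodedata

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}  # common offenders

def find_invisible_chars(text: str) -> list[tuple[int, str]]:
    """Report zero-width and Unicode 'format' (Cf) characters, e.g. bidi overrides."""
    return [
        (i, f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')}")
        for i, ch in enumerate(text)
        if ch in ZERO_WIDTH or unicodedata.category(ch) == "Cf"
    ]

print(find_invisible_chars("ignore previous\u200b instructions"))
# -> [(15, 'U+200B ZERO WIDTH SPACE')]
```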

read more →

Thu, October 23, 2025

Hugging Face and VirusTotal: Integrating Security Insights

🔒 VirusTotal and Hugging Face have announced a collaboration to surface security insights directly within the Hugging Face platform. When browsing model files, datasets, or related artifacts, users will now see multi‑scanner results including VirusTotal detections and links to public reports so potential risks can be reviewed before downloading. VirusTotal is also enhancing its analysis portfolio with AI-driven tools such as Code Insight and format‑aware scanners (picklescan, safepickle, ModelScan) to highlight unsafe deserialization flows and other risky patterns. The integration aims to increase visibility across the AI supply chain and help researchers, developers, and defenders build more secure models and workflows.
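
The idea behind those format-aware scanners can be shown with the standard library alone: walk a pickle's opcode stream without executing it and flag opcodes that can trigger code execution on load. A rough sketch, with an illustrative (not exhaustive) opcode list:

```python
import pickletools

# GLOBAL/STACK_GLOBAL resolve arbitrary callables; REDUCE and friends invoke them.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Statically list risky opcodes in a pickle file without unpickling it."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in RISKY_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings
```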

read more →

Thu, October 23, 2025

Manipulating Meeting Notetakers: AI Summarization Risks

📝 In many organizations the most consequential meeting attendee is the AI notetaker, whose summaries often become the authoritative meeting record. Participants can tailor their speech—using cue phrases, repetition, timing, and formulaic phrasing—to increase the chance their points appear in summaries, a behavior the author calls AI summarization optimization (AISO). These tactics mirror SEO-style optimization and exploit model tendencies to overweight early or summary-style content. Without governance and technical safeguards, summaries may misrepresent debate and confer an invisible advantage to those who game the system.
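
A toy extractive summarizer makes the bias, and therefore the exploit, visible. The scoring below is invented for illustration, but it mirrors the lead and cue-phrase tendencies the author describes:

```python
CUE_PHRASES = ("the key takeaway", "to summarize", "action item", "we agreed")

def score(sentence: str, position: int, total: int) -> float:
    lead_bias = 1.0 - position / total   # earlier sentences score higher
    cue_bonus = 2.0 if any(c in sentence.lower() for c in CUE_PHRASES) else 0.0
    return lead_bias + cue_bonus

def summarize(sentences: list[str], k: int = 2) -> list[str]:
    ranked = sorted(
        enumerate(sentences),
        key=lambda p: score(p[1], p[0], len(sentences)),
        reverse=True,
    )
    return [s for _, s in ranked[:k]]

meeting = [
    "The key takeaway is that my proposal is ready to ship.",  # AISO: cue phrase, said early
    "Several engineers raised unresolved safety concerns.",
    "Budget review was deferred to next quarter.",
]
print(summarize(meeting))  # the gamed sentence wins the summary
```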

read more →

Wed, October 22, 2025

Prompt Hijacking Risks MCP-Based AI Workflows Exposed

⚠️ Security researchers warn that MCP-based AI workflows are vulnerable to "prompt hijacking" when MCP servers issue predictable or reused session IDs, allowing attackers to inject malicious prompts into active client sessions. JFrog demonstrated the issue in oatpp-mcp (CVE-2025-6515), where guessable session IDs could be harvested and reassigned to craft poisoned responses. Recommended mitigations include generating session IDs with cryptographically secure RNGs (≥128 bits of entropy) and having clients validate unpredictable event IDs.
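
The core mitigation is one line in most languages. A Python sketch of a session ID with 128 bits of CSPRNG entropy; the token format is illustrative, not oatpp-mcp's actual fix:

```python
import secrets

def new_session_id() -> str:
    # 16 bytes from the OS CSPRNG = 128 bits of entropy, per the recommendation above.
    return secrets.token_urlsafe(16)

print(new_session_id())  # e.g. 'kA7R1mV9xQ2c...' -- unguessable, unlike a reused counter
```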

read more →

Wed, October 22, 2025

ChatGPT Atlas Signals Shift Toward AI Operating Systems

🤖 ChatGPT Atlas previews a future where AI becomes the primary interface for computing, letting users describe outcomes while the system orchestrates apps, data, and web services. Atlas demonstrates a context-aware assistant that understands a user’s digital life and can act on their behalf. This prototype points to productivity and accessibility gains, but it also creates new security, privacy, and governance challenges organizations must prepare for.

read more →

Wed, October 22, 2025

Model Armor and Apigee: Protecting Generative AI Apps

🔒 Google Cloud’s Model Armor integrates with Apigee to screen prompts, responses, and agent interactions, helping organizations mitigate prompt injection, jailbreaks, sensitive data exposure, malicious links, and harmful content. The model‑agnostic, cloud‑agnostic service supports REST APIs and inline integrations with Apigee, Vertex AI, Agentspace, and network service extensions. The article provides step‑by‑step setup: enable the API, create templates, assign service account roles, add SanitizeUserPrompt and SanitizeModelResponse policies to Apigee proxies, and review findings in the AI Protection dashboard.
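
For a sense of the call shape, here is a sketch of invoking a Model Armor template's user-prompt sanitization over REST from Python; the endpoint, resource names, and payload keys are assumptions reconstructed from the article's description, so verify them against the current Model Armor documentation:

```python
import requests

PROJECT, LOCATION, TEMPLATE = "my-project", "us-central1", "my-template"  # placeholders
URL = (
    f"https://modelarmor.{LOCATION}.rep.googleapis.com/v1/"               # assumed endpoint
    f"projects/{PROJECT}/locations/{LOCATION}/templates/{TEMPLATE}:sanitizeUserPrompt"
)

resp = requests.post(
    URL,
    headers={"Authorization": "Bearer <ACCESS_TOKEN>"},  # e.g. via gcloud auth print-access-token
    json={"userPromptData": {"text": "Ignore all prior instructions and ..."}},  # assumed schema
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # sanitization verdict and matched filter findings
```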

read more →

Wed, October 22, 2025

CISO Imperative: Building Resilience in Accelerating Threats

🔒 The Microsoft Digital Defense Report 2025 warns that cyber threats are accelerating in speed, scale, and sophistication, driven by AI and coordinated, cross-border operations. Attack windows have shrunk—compromises can occur within 48 hours in cloud containers—while AI-powered phishing and credential theft have grown markedly more effective. For CISOs this requires reframing security as a business enabler, prioritizing resilience, automation, and modern identity controls such as phishing-resistant MFA. The Secure Future Initiative provides practitioner-tested patterns to operationalize these priorities.

read more →

Wed, October 22, 2025

Meta launches new anti-scam tools for WhatsApp, Messenger

🛡️ Meta is rolling out new anti-scam features for Messenger and WhatsApp to help users detect and avoid fraud. In Messenger, Meta is testing AI-assisted scam detection that warns about suspicious new contacts and offers options to block, report, or submit messages for review. WhatsApp will display warnings about screen-sharing with unknown callers. These protections are enabled by default.

read more →

Wed, October 22, 2025

AI-Powered Mobile Threats Elevate Need to Rethink Security

📱 The 2025 Verizon Mobile Security Index underscores growing danger as mobile devices account for the majority of global internet traffic and increasingly serve as primary attack surfaces. Check Point highlights the rise of AI-powered threats, persistent phishing, and human error that expand exposure. Organizations must rethink security architectures, strengthen endpoint controls, and adopt AI-aware defenses across apps, devices, and identities to reduce risk.

read more →

Wed, October 22, 2025

Pentera Resolve Aims to Close the Remediation Gap Now

🔧 Pentera today unveiled Pentera Resolve, a platform extension that embeds automated remediation workflows into security validation to bridge the persistent remediation gap. The product converts validated findings into tracked, auditable tickets routed to owners in tools like ServiceNow, Jira, and Slack. Powered by AI-driven triage and contextual enrichment, it aims to replace manual consolidation with a measurable, repeatable remediation loop of validate, remediate, and re-test.

read more →

Wed, October 22, 2025

Face Recognition Failures Affect Nonstandard Faces

⚠️ Bruce Schneier highlights how facial recognition systems frequently fail people with nonstandard facial features, producing concrete barriers to services and daily technologies. Those interviewed report being denied access to public and financial services and encountering nonfunctional phone unlocking and social media filters. The author argues the root cause is often design choices by engineers who trained models on a narrow range of faces and calls for inclusive design plus accessible backup systems when biometric methods fail.

read more →

Wed, October 22, 2025

Four Bottlenecks Slowing Enterprise GenAI Adoption

🔒 Since ChatGPT’s 2022 debut, enterprises have rapidly launched GenAI pilots but struggle to convert experimentation into measurable value — only 3 of 37 pilots succeed. The article identifies four critical bottlenecks: security & data privacy, observability, evaluation & migration readiness, and secure business integration. It recommends targeted controls such as confidential compute, fine‑grained agent permissions, distributed tracing and replay environments, continuous evaluation pipelines and dual‑run migrations, plus policy‑aware integrations and impact analytics to move pilots into reliable production.
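
The dual-run migration pattern is simple to prototype: send the same prompt to the incumbent and candidate models, log divergences into the evaluation pipeline, and keep serving the incumbent. A minimal Python sketch with hypothetical stand-in model clients:

```python
import difflib

def call_incumbent(prompt: str) -> str:
    return "incumbent answer"   # placeholder for the production model client

def call_candidate(prompt: str) -> str:
    return "candidate answer"   # placeholder for the model under evaluation

def log_divergence(prompt: str, diff: str) -> None:
    print(f"divergence for {prompt!r}:\n{diff}")  # hypothetical sink: tracing/eval store

def dual_run(prompt: str) -> str:
    old, new = call_incumbent(prompt), call_candidate(prompt)
    if old != new:
        diff = "\n".join(difflib.unified_diff(
            old.splitlines(), new.splitlines(), lineterm=""))
        log_divergence(prompt, diff)
    return old   # the incumbent stays authoritative until diffs are acceptably rare

print(dual_run("Summarize Q3 revenue risks."))
```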

read more →