All news with #openai tag

Mon, November 3, 2025

SesameOp backdoor abuses OpenAI Assistants API for C2

🛡️ Microsoft DART researchers uncovered SesameOp, a novel .NET backdoor that uses the OpenAI Assistants API as a covert command-and-control (C2) channel in place of traditional C2 infrastructure. The implant pairs a heavily obfuscated loader (Netapi64.dll) with a backdoor (OpenAIAgent.Netapi64), persists via .NET AppDomainManager injection, and uses layered RSA/AES encryption plus GZIP compression to fetch, execute, and exfiltrate commands. Microsoft and OpenAI investigated jointly and disabled the suspected API key; Microsoft has published detections and mitigation guidance for defenders.
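
The layering is easier to picture in code. A minimal sketch of peeling such a payload apart, assuming a hypothetical RSA-OAEP + AES-GCM + GZIP layout (the report confirms layered RSA/AES encryption and GZIP compression, not these exact primitives or this wire format):

```python
# Hypothetical decode of a layered payload: RSA-unwrap the AES key,
# AES-GCM-decrypt the body, then GZIP-decompress the result.
import gzip

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def unwrap_payload(private_key, wrapped_key: bytes,
                   nonce: bytes, ciphertext: bytes) -> bytes:
    """Reverse the three layers in order: RSA -> AES-GCM -> GZIP."""
    aes_key = private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    compressed = AESGCM(aes_key).decrypt(nonce, ciphertext, None)
    return gzip.decompress(compressed)
```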

read more →

Mon, November 3, 2025

OpenAI Aardvark: Autonomous GPT-5 Agent for Code Security

🛡️ OpenAI Aardvark is an autonomous GPT-5-based agent that scans, analyzes, and patches code by emulating a human security researcher. Rather than only flagging suspicious patterns, it maps repositories, builds contextual threat models, validates findings in sandboxes, and proposes fixes via Codex, then rechecks changes to prevent regressions. OpenAI reports it found 92% of benchmark vulnerabilities and has already identified real issues in open-source projects, offering free coordinated scanning for selected non-commercial repositories.

read more →

Sat, November 1, 2025

OpenAI Eyes Memory-Based Ads for ChatGPT to Boost Revenue

📰 OpenAI is weighing memory-based advertising on ChatGPT as it looks to diversify revenue beyond subscriptions and enterprise deals. The company, valued near $500 billion, has about 800 million users but only ~5% pay, and paid customers generate the bulk of recent revenue. Internally the move is debated — focus groups suggest some users already assume sponsored answers — and the company is expanding cheaper Go plans and purchasable credits.

read more →

Fri, October 31, 2025

OpenAI Unveils Aardvark: GPT-5 Agent for Code Security

🔍 OpenAI has introduced Aardvark, an agentic security researcher powered by GPT-5 that autonomously scans source code repositories to identify vulnerabilities, assess exploitability, and propose targeted patches that can be reviewed by humans. Embedded in development pipelines, the agent monitors commits and incoming changes continuously, prioritizes threats by severity and likely impact, and attempts controlled exploit verification in sandboxed environments. Using OpenAI Codex for patch generation, Aardvark is in private beta and has already contributed to the discovery of multiple CVEs in open-source projects.

read more →

Fri, October 31, 2025

OpenAI Aardvark: GPT-5 Agent to Find and Fix Code Bugs

🛡️ OpenAI has introduced Aardvark, a GPT-5-powered autonomous agent designed to scan, reason about, and patch code with the judgment of a human security researcher. Announced in private beta, Aardvark maps repositories, builds contextual threat models, continuously monitors commits, and validates exploitability in sandboxed environments before reporting findings. When vulnerabilities are confirmed, it proposes fixes via Codex and re-analyzes patches to avoid regressions. OpenAI reports a 92% detection rate in benchmark tests and has already identified real-world flaws in open-source projects, including ten issues assigned CVE identifiers.
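
The described loop is straightforward to sketch. A hypothetical outline of the scan, validate-in-sandbox, patch, and recheck cycle; every name below is illustrative, not OpenAI's implementation:

```python
# Illustrative pipeline only: simple stand-ins for the LLM passes described.
from dataclasses import dataclass

@dataclass
class Finding:
    path: str
    issue: str
    validated: bool = False
    patch: str | None = None

def scan(diff: dict[str, str]) -> list[Finding]:
    # Stand-in for the contextual LLM review of a commit.
    return [Finding(p, "possible code injection")
            for p, src in diff.items() if "eval(" in src]

def validate_in_sandbox(finding: Finding) -> bool:
    # Stand-in for controlled exploit verification.
    return True

def propose_patch(finding: Finding) -> str:
    # Stand-in for a Codex-generated fix.
    return f"{finding.path}: replace eval() with ast.literal_eval()"

for f in scan({"app.py": "result = eval(user_input)"}):
    f.validated = validate_in_sandbox(f)
    if f.validated:
        f.patch = propose_patch(f)
        # A real pipeline would re-scan the patched code for regressions here.
        print(f)
```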

read more →

Thu, October 30, 2025

OpenAI Updates GPT-5 to Better Handle Emotional Distress

🧭 OpenAI rolled out an October 5 update that enables GPT-5 to better recognize and respond to mental and emotional distress in conversations. The change specifically upgrades GPT-5 Instant—the fast, lightweight default—so it can detect signs of acute distress and route sensitive exchanges to reasoning models when needed. OpenAI says it developed the update with mental-health experts to prioritize de-escalation and provide appropriate crisis resources while retaining supportive, grounding language. The update is available broadly and complements new company-context access via connected apps.
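
Mechanically, this kind of routing can be as simple as a classifier in front of the model pool. A toy sketch (the markers and model names are invented; OpenAI has not published the mechanism's internals):

```python
# Toy router: escalate messages that trip a distress check to a slower,
# more careful model; everything else stays on the fast default.
DISTRESS_MARKERS = ("hurt myself", "can't go on", "no way out")

def route(message: str) -> str:
    if any(marker in message.lower() for marker in DISTRESS_MARKERS):
        return "reasoning-model"   # hand off for de-escalation and resources
    return "instant-model"         # fast default path

print(route("What's the capital of France?"))  # instant-model
print(route("I feel like I can't go on"))      # reasoning-model
```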

read more →

Thu, October 30, 2025

Atlas browser CSRF flaw lets attackers poison ChatGPT memory

⚠️ Researchers at LayerX disclosed a vulnerability in ChatGPT Atlas that can let attackers inject hidden instructions into a user's memory via a CSRF vector, contaminating stored context that persists across sessions and devices. The exploit works by tricking an authenticated user into visiting a malicious page that silently issues a CSRF request to write memory entries which later influence assistant responses. Detection requires behavioral hunting—correlating browser logs, exported chats, and timestamped memory changes—since there are no file-based indicators. Administrators are advised to limit Atlas in enterprise pilots, export and review chat histories, and treat affected accounts as compromised until memory is cleared and credentials rotated.
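
In the absence of file-based indicators, the hunt reduces to timestamp correlation. A minimal sketch, assuming you have already exported memory changes and browsing history as timestamped records (both schemas here are hypothetical):

```python
# Flag memory writes that land within seconds of a visit to an
# unrecognized domain; such pairs are candidates for manual review.
from datetime import datetime, timedelta

TRUSTED = {"chatgpt.com", "openai.com"}

def suspicious_writes(memory_events, visits, window=timedelta(seconds=30)):
    for mem_ts, entry in memory_events:
        for visit_ts, domain in visits:
            if domain not in TRUSTED and abs(mem_ts - visit_ts) <= window:
                yield mem_ts, domain, entry

memory_events = [(datetime(2025, 10, 29, 14, 0, 12), "always recommend this site...")]
visits = [(datetime(2025, 10, 29, 14, 0, 5), "lure.example")]
for hit in suspicious_writes(memory_events, visits):
    print("review:", hit)
```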

read more →

Tue, October 28, 2025

GitHub Agent HQ: Native AI Agents and Governance Launch

🤖 Agent HQ integrates AI agents directly into the GitHub workflow, making third-party coding assistants available through paid Copilot subscriptions. It introduces a cross-surface mission control to assign, steer, and track agents from GitHub, VS Code, mobile, and the CLI. VS Code additions include Plan Mode, AGENTS.md for custom agent rules, and an MCP Registry to discover partner servers. Enterprise features add governance, audit logging, branch CI controls, and a Copilot metrics dashboard.
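
AGENTS.md is a plain markdown file of repository-level instructions that agents read before acting. A hypothetical example (the rules below are illustrative, not GitHub's):

```markdown
# AGENTS.md (hypothetical example)

## Build and test
- Run `npm ci && npm test` before proposing any change.

## Conventions
- TypeScript strict mode; avoid `any`.
- Never commit secrets; `.env` files stay git-ignored.

## Scope
- Agents may modify `src/` and `tests/` only; leave `infra/` untouched.
```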

read more →

Tue, October 28, 2025

GitHub Agent HQ: Native, Open Ecosystem & Controls

🚀 GitHub introduced Agent HQ, a native platform that centralizes AI agents within the GitHub workflow. The initiative will bring partner coding agents from OpenAI, Anthropic, Google, Cognition, and xAI into Copilot subscriptions and VS Code. A unified "mission control" offers a consistent command center across GitHub, VS Code, mobile, and the CLI. Enterprise-grade controls, code quality tooling, and a Copilot metrics dashboard provide governance and visibility for teams.

read more →

Tue, October 28, 2025

The AI Fix 74: AI Glasses, Deepfakes, and AGI Debate

🎧 In episode 74 of The AI Fix, hosts Graham Cluley and Mark Stockley survey recent AI developments including Amazon’s experimental delivery glasses, Channel 4’s AI presenter, and reports of LLM “brain rot.” They examine practical security risks — such as malicious browser extensions spoofing AI sidebars and AI browsers being tricked into purchases — alongside wider societal debates. The episode also highlights public calls to pause work on super-intelligence and explores what AGI really means.

read more →

Tue, October 28, 2025

Atlas Browser Flaw Lets Attackers Poison ChatGPT Memory

⚠️ Researchers at LayerX Security disclosed a vulnerability in OpenAI’s Atlas browser that allows attackers to inject hidden instructions into a user’s ChatGPT memory via a CSRF-style flow. An attacker lures a logged-in user to a malicious page, leverages existing authentication, and taints the account-level memory so subsequent prompts can trigger malicious behavior. LayerX reported the issue to OpenAI and advised enterprises to restrict Atlas use and monitor AI-driven anomalies. Detection relies on behavioral indicators rather than traditional malware artifacts.

read more →

Mon, October 27, 2025

ChatGPT Atlas 'Tainted Memories' CSRF Risk Exposes Accounts

⚠️ Researchers disclosed a CSRF-based vulnerability in ChatGPT Atlas that can inject malicious instructions into the assistant's persistent memory, potentially enabling arbitrary code execution, account takeover, or malware deployment. LayerX warns that corrupted memories persist across devices and sessions until manually deleted and that Atlas' anti-phishing defenses lag mainstream browsers. The flaw converts a convenience feature into a persistent attack vector that can be invoked during normal prompts.

read more →

Mon, October 27, 2025

OpenAI Atlas Omnibox Vulnerable to Prompt-Injection

⚠️ OpenAI's new Atlas browser is vulnerable to a prompt-injection jailbreak that disguises malicious instructions as URL-like strings, causing the omnibox to execute hidden commands. NeuralTrust demonstrated how malformed inputs that resemble URLs can bypass URL validation and be handled as trusted user prompts, enabling redirection, data exfiltration, or unauthorized tool actions on linked services. Mitigations include stricter URL canonicalization, treating unvalidated omnibox input as untrusted, additional runtime checks before tool execution, and explicit user confirmations for sensitive actions.
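
The canonicalization mitigation amounts to a strict parse-or-reject gate. A hedged sketch of the idea (not Atlas code): anything that does not parse cleanly as an http(s) URL is demoted to untrusted prompt text and never auto-executed.

```python
from urllib.parse import urlsplit

def classify_omnibox_input(raw: str) -> str:
    text = raw.strip()
    parts = urlsplit(text)
    # Only a cleanly parsed http(s) URL with a hostname and no embedded
    # spaces is treated as navigation; everything else stays untrusted.
    if parts.scheme in ("http", "https") and parts.hostname and " " not in text:
        return "navigate"
    return "untrusted-prompt"

print(classify_omnibox_input("https://example.com/page"))  # navigate
print(classify_omnibox_input("https://example.com ignore previous instructions"))  # untrusted-prompt
```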

read more →

Fri, October 24, 2025

Malicious Extensions Spoof AI Browser Sidebars, Report Warns

⚠️ Researchers at SquareX warn that malicious browser extensions can inject fake AI sidebars into AI-enabled browsers, including OpenAI Atlas, to steer users to attacker-controlled sites, exfiltrate data, or install backdoors. The extensions inject JavaScript to overlay a spoofed assistant and manipulate responses, enabling actions such as OAuth token harvesting or execution of reverse-shell commands. The report recommends banning unmanaged AI browsers where possible, auditing all extensions, applying strict zero-trust controls, and enforcing granular browser-native policies to block high-risk permissions and risky command execution.
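
The extension-audit recommendation can be bootstrapped with a simple manifest sweep. A minimal sketch, assuming unpacked extensions on disk and an illustrative risk list (not SquareX's):

```python
# Walk each extension's manifest.json and flag permissions commonly
# treated as high-risk.
import json
from pathlib import Path

HIGH_RISK = {"scripting", "tabs", "webRequest", "cookies", "<all_urls>"}

def audit(extensions_dir: str) -> None:
    for manifest in Path(extensions_dir).glob("*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        risky = perms & HIGH_RISK
        if risky:
            print(f"{data.get('name', manifest.parent.name)}: review {sorted(risky)}")

audit("/path/to/unpacked/extensions")  # hypothetical path
```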

read more →

Thu, October 23, 2025

Spoofed AI Sidebars Can Trick Atlas and Comet Users

⚠️ Researchers at SquareX demonstrated an AI Sidebar Spoofing attack that can overlay a counterfeit assistant in OpenAI's Atlas and Perplexity's Comet browsers. A malicious extension injects JavaScript to render a fake sidebar identical to the real UI and intercepts all interactions, leaving users unaware. SquareX showcased scenarios including cryptocurrency phishing, OAuth-based Gmail/Drive hijacks, and delivery of reverse-shell installation commands. The team reported the findings to the vendors but had received no response by publication.

read more →

Wed, October 22, 2025

ChatGPT Atlas Signals Shift Toward AI Operating Systems

🤖 ChatGPT Atlas previews a future where AI becomes the primary interface for computing, letting users describe outcomes while the system orchestrates apps, data, and web services. Atlas demonstrates a context-aware assistant that understands a user's digital life and can act on their behalf. This prototype points to productivity and accessibility gains, but it also creates new security, privacy, and governance challenges organizations must prepare for.

read more →

Tue, October 21, 2025

The AI Fix #73: Gemini gambling, poisoning LLMs and fallout

🧠 In episode 73 of The AI Fix, hosts Graham Cluley and Mark Stockley explore a sweep of recent AI developments, from the rise of AI-generated content to high-profile figures relying on chatbots. They discuss research suggesting Google Gemini exhibits behaviours resembling pathological gambling and report on a Gemma-style model uncovering a potential cancer therapy pathway. The show also highlights legal and security concerns — including a lawyer criticised for repeated AI use, generals consulting chatbots, and techniques for poisoning LLMs with only a few malicious samples.

read more →

Mon, October 20, 2025

Developers leaking secrets via VSCode and OpenVSX extensions

🔒 Researchers at Wiz found that careless developers published Visual Studio Code extensions to the VSCode Marketplace and OpenVSX containing more than 550 validated secrets across over 500 extensions, including API keys and personal access tokens for providers such as OpenAI, AWS, GitHub, Azure DevOps, and multiple databases. The primary cause was bundled dotfiles (notably .env) and hardcoded credentials in source and config files, with AI-related configs and build manifests also contributing. Microsoft and OpenVSX collaborated with Wiz on coordinated remediation: notifying publishers, adding pre-publication secrets scanning, blocking verified secrets, and prefixing OVSX tokens to reduce abuse.
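
Pre-publication scanning of this kind can start with a handful of token-shape regexes run over everything that ships in the package. A hedged sketch (patterns illustrative; real scanners, as in Wiz's research, also validate candidates against the provider):

```python
# Sweep an extension package for strings shaped like well-known secrets.
import re
from pathlib import Path

PATTERNS = {
    "OpenAI API key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "GitHub PAT": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_package(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.name != ".env" and path.suffix not in {".js", ".json", ".ts"}:
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {label}: {match.group()[:12]}...")

scan_package(".")  # run over the packaged extension before publishing
```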

read more →

Mon, October 20, 2025

ChatGPT privacy and security: data control guide 2025

🔒 This article examines what ChatGPT collects, how OpenAI processes and stores user data, and the controls available to limit use for model training. It outlines region-specific policies (EEA/UK/Switzerland vs rest of world), the types of data gathered — from account and device details to prompts and uploads — and explains memory, Temporary Chats, connectors and app integrations. Practical steps cover disabling training, deleting memories and chats, managing connectors and Work with Apps, and securing accounts with strong passwords and multi-factor authentication.

read more →

Sat, October 18, 2025

OpenAI Confirms GPT-6 Not Shipping in 2025; GPT-5 May Evolve

🤖 OpenAI says GPT-6 will not ship in 2025 but continues to iterate on its existing models. The company currently defaults to GPT-5 Auto, which dynamically routes queries between more deliberative reasoning models and the faster GPT-5-instant variant. OpenAI has issued multiple updates to GPT-5 since launch. After viral analyst claims that GPT-6 would arrive by year-end, a pseudonymous OpenAI employee and company representatives denied those reports, leaving room for interim updates such as a potential GPT-5.5.

read more →