Tag Banner

All news with #openai tag

Wed, December 10, 2025

Over 10,000 Docker Hub Images Expose Live Secrets Globally

🔒 A November scan by threat intelligence firm Flare found 10,456 Docker Hub images exposing credentials, including live API tokens for AI models and production systems. The leaks span about 101 organizations — from SMBs to a Fortune 500 company and a major national bank — and often stem from mistakes like committed .env files, hardcoded tokens, and Docker manifests. Flare urges immediate revocation of exposed keys, centralized secrets management, and active SDLC scanning to prevent prolonged abuse.

read more →

Wed, December 10, 2025

Gartner Urges Enterprises to Block AI Browsers Now

⚠️ Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts strongly recommend that enterprises block AI browsers for the foreseeable future, citing both known vulnerabilities and additional risks inherent to an immature technology. They warn of irreversible, non‑auditable data loss when browsers send active web content, tab data and browsing history to cloud services, and of prompt‑injection attacks that can cause fraudulent actions. Concrete flaws—such as unencrypted OAuth tokens in ChatGPT Atlas and the Comet 'CometJacking' issue—underscore that traditional controls are insufficient; Gartner advises blocking installs with existing network and endpoint controls, restricting pilots to small, low‑risk groups, and updating AI policies.

read more →

Tue, December 9, 2025

The AI Fix #80: DeepSeek, Antigravity, and Rude AI

🔍 In episode 80 of The AI Fix, hosts Graham Cluley and Mark Stockley scrutinize DeepSeek 3.2 'Speciale', a bargain model touted as a GPT-5 rival at a fraction of the cost. They also cover Jensen Huang’s robotics-for-fashion pitch, a 75kg humanoid performing acrobatic kicks, and surreal robot-dog NFT stunts in Miami. Graham recounts Google’s Antigravity IDE mistakenly clearing caches — a cautionary tale about giving agentic systems real power — while Mark examines research suggesting LLMs sometimes respond better to rude prompts, raising questions about how these models interpret tone and instruction.

read more →

Tue, December 9, 2025

AMOS infostealer uses ChatGPT share to spread macOS malware

🛡️Kaspersky researchers uncovered a macOS campaign in which attackers used paid search ads to point victims to a public shared chat on ChatGPT that contained a fake installation guide for an “Atlas” browser. The guide instructs users to paste a single Terminal command that downloads a script from atlas-extension.com and requests system credentials. Executing it deploys the AMOS infostealer and a persistent backdoor that exfiltrates browser data, crypto wallets and files. Users should not run unsolicited commands and must use updated anti‑malware and careful verification before following online guides.

read more →

Tue, December 9, 2025

Experts Warn AI Is Becoming Integrated in Cyberattacks

🔍 Industry debate is heating up over AI’s role in the cyber threat chain, with some experts calling warnings exaggerated while many frontline practitioners report concrete AI-assisted attacks. Recent reports from Google and Anthropic document malware and espionage leveraging LLMs and agentic tools. CISOs are urged to balance fundamentals with rapid defenses and prepare boards for trade-offs.

read more →

Mon, December 8, 2025

Gartner Urges Enterprises to Block AI Browsers Now

⚠️Gartner recommends blocking AI browsers such as ChatGPT Atlas and Perplexity Comet because they transmit active web content, open tabs, and browsing context to cloud services, creating risks of irreversible data loss. Analysts cite prompt-injection, credential exposure, and autonomous agent errors as primary threats. Organizations should block installations with existing network and endpoint controls and restrict any pilots to small, low-risk groups.

read more →

Sun, December 7, 2025

OpenAI: ChatGPT Plus shows app suggestions, not ads

🔔 OpenAI says recent ChatGPT Plus suggestions are app recommendations, not ads, after users reported shopping prompts — including Target — appearing during unrelated queries like Windows BitLocker. Daniel McAuley described the entries as pilot partner apps introduced since DevDay and part of efforts to make discovery feel more organic. Many users, however, view the branded bubbles as advertising inside a paid product.

read more →

Fri, December 5, 2025

AI Agents in CI/CD Can Be Tricked into Privileged Actions

⚠️ Researchers at Aikido Security discovered that AI agents embedded in CI/CD workflows can be manipulated to execute high-privilege commands by feeding user-controlled strings (issue bodies, PR descriptions, commit messages) directly into prompts. Workflows pairing GitHub Actions or GitLab CI/CD with tools like Gemini CLI, Claude Code, OpenAI Codex or GitHub AI Inference are at risk. The attack, dubbed PromptPwnd, can cause unintended repository edits, secret disclosure, or other high-impact actions; the researchers published detection rules and a free scanner to help teams remediate unsafe workflows.

read more →

Thu, December 4, 2025

Amazon Bedrock Adds OpenAI-Compatible Responses API

🚀 Amazon Bedrock now exposes an OpenAI-compatible Responses API on new service endpoints, enabling asynchronous inference for long-running workloads, streaming and non-streaming modes, and automatic stateful conversation reconstruction so developers no longer must resend full histories. The endpoints provide Chat Completions with reasoning-effort support for models served by Mantle, Amazon’s distributed inference engine. Integration requires only a base URL change for OpenAI SDK–compatible code, and support starts today for OpenAI’s GPT OSS 20B and 120B models, with additional models coming soon.

read more →

Thu, December 4, 2025

Protecting LLM Chats from the Whisper Leak Attack Today

🛡️ Recent research shows the “Whisper Leak” attack can infer the topic of LLM conversations by analyzing timing and packet patterns during streaming responses. Microsoft’s study tested 30 models and thousands of prompts, finding topic-detection accuracy from 71% to 100% for some models. Providers including OpenAI, Mistral, Microsoft Azure, and xAI have added invisible padding to network packets to disrupt these timing signals. Users can further protect sensitive chats by using local models, disabling streaming output, avoiding untrusted networks, or using a trusted VPN and up-to-date anti-spyware.

read more →

Wed, December 3, 2025

RCE Flaw in OpenAI's Codex CLI Elevates Dev Risks Globally

⚠️Researchers from CheckPoint disclosed a critical remote code execution vulnerability in OpenAI's Codex CLI that allowed project-local .env files to redirect the CODEX_HOME environment variable and load attacker-controlled MCP servers. By adding a malicious mcp_servers entry in a repo-local .codex/config.toml, an attacker with commit or PR access could cause Codex to execute commands silently whenever a developer runs the tool. OpenAI addressed the issue in Codex CLI v0.23.0 by blocking project-local redirection of CODEX_HOME, but the flaw demonstrates how automated LLM-powered developer tools can expand the attack surface and enable persistent supply-chain backdoors.

read more →

Wed, December 3, 2025

Adversarial Poetry Bypasses AI Guardrails Across Models

✍️ Researchers from Icaro Lab (DexAI), Sapienza University of Rome, and Sant’Anna School found that short poetic prompts can reliably subvert AI safety filters, in some cases achieving 100% success. Using 20 crafted poems and the MLCommons AILuminate benchmark across 25 proprietary and open models, they prompted systems to produce hazardous instructions — from weapons-grade plutonium to steps for deploying RATs. The team observed wide variance by vendor and model family, with some smaller models surprisingly more resistant. The study concludes that stylistic prompts exploit structural alignment weaknesses across providers.

read more →

Tue, December 2, 2025

ChatGPT Outage Causes Global Errors and Missing Chats

🔴 OpenAI's ChatGPT experienced a global outage that produced "something seems to have gone wrong" errors and stalled responses, with some users reporting that entire conversations disappeared and new messages never finished loading. BleepingComputer observed the model continuously loading without delivering replies, while DownDetector recorded over 30,000 reports. OpenAI confirmed elevated errors at 02:40 ET, said it was working on a fix, and by 15:14 ET service had begun returning but remained slow.

read more →

Tue, December 2, 2025

ChatGPT Experiences Worldwide Outage; Conversations Lost

⚠️OpenAI's ChatGPT experienced a global outage that caused errors and disappearing conversations for many users. Many reported seeing messages such as "something seems to have gone wrong" and "There was an error generating a response," while some conversations vanished and new messages kept loading indefinitely. DownDetector recorded over 30,000 reports, and OpenAI acknowledged elevated errors and said engineers were working on a fix. Service began returning as of 15:14 ET, though performance remained slow.

read more →

Tue, December 2, 2025

Amazon Bedrock Adds 18 Fully Managed Open Models Today

🚀 Amazon Bedrock expanded its model catalog with 18 new fully managed open-weight models, the largest single addition to date. The offering includes Gemma 3, Mistral Large 3, NVIDIA Nemotron Nano 2, OpenAI gpt-oss variants and other vendor models. Through a unified API, developers can evaluate, switch, and adopt these models in production without rewriting applications or changing infrastructure. Models are available in supported AWS Regions.

read more →

Mon, December 1, 2025

Agentic AI Browsers: New Threats to Enterprise Security

🚨 The emergence of agentic AI browsers converts the browser from a passive viewer into an autonomous digital agent that can act on users' behalf. To perform tasks—booking travel, filling forms, executing payments—these agents must hold session cookies, saved credentials, and payment data, creating an unprecedented attack surface. The piece cites OpenAI's ChatGPT Atlas as an example and warns that prompt injection and the resulting authenticated exfiltration can bypass conventional MFA and network controls. Recommended mitigations include auditing endpoints for shadow AI browsers, enforcing allow/block lists for sensitive resources, and augmenting native protections with third-party browser security and anti-phishing layers.

read more →

Sat, November 29, 2025

Leak: OpenAI Tests Ads Inside ChatGPT App for Users

📝 OpenAI is internally testing an 'ads' feature in the ChatGPT Android beta that references bazaar content, search ad entries and a search ads carousel. The leak, spotted in build 1.2025.329, suggests ads may initially be confined to the search experience but could expand. Because the assistant retains rich context, any placements could be highly personalized unless users opt out. This development may signal a major shift in ChatGPT's monetization and the broader web advertising landscape.

read more →

Fri, November 28, 2025

Public GitLab Repositories Exposed 17,000+ Secrets

🔒 After scanning all 5.6 million public repositories on GitLab Cloud, a security engineer discovered more than 17,000 exposed secrets across over 2,800 unique domains. Using the open-source tool TruffleHog and an AWS-driven pipeline (SQS queue and Lambda workers), the researcher completed the scan in just over 24 hours at a cost of $770. Notifications were automated with Claude Sonnet 3.7 and scripts; affected parties revoked many credentials and the researcher collected $9,000 in bug bounties, though some secrets remain exposed.

read more →

Thu, November 27, 2025

OpenAI Data Exposed After Mixpanel Phishing Incident

🔒 OpenAI confirmed a customer data exposure after its analytics partner Mixpanel suffered a smishing attack on November 8, which allowed attackers to access profile metadata tied to platform.openai.com accounts. Stolen fields included names, email addresses, approximate location, OS/browser details, referrers, and organization or user IDs. OpenAI says ChatGPT and core systems were not breached and that no API keys, passwords, payment data, or model payloads were exposed. The company has terminated its use of Mixpanel and is notifying impacted customers directly.

read more →

Thu, November 27, 2025

OpenAI Vendor Mixpanel Breach Exposes API User Data

🔒 According to an OpenAI statement, cybercriminals accessed analytics provider Mixpanel's systems in early November, and data tied to some API users may have been exposed. Potentially affected fields include account names, associated email addresses, approximate browser-derived location (city, state, country), operating system and browser details, referring websites, and organization or user IDs. OpenAI said its own systems and products such as ChatGPT were not impacted, that sensitive items like chat histories, API requests, API usage data, passwords, credentials, API keys, payment details, and government IDs were not compromised, and that it has removed Mixpanel from its systems while working with the vendor to investigate.

read more →