All news with #openai tag
Thu, November 20, 2025
OpenAI's GPT-5.1 Codex-Max Can Code Independently for Hours
🛠️ OpenAI has rolled out GPT-5.1-Codex-Max, a Codex variant optimized for long-running programming tasks and improved token efficiency. Unlike the general-purpose GPT-5.1, Codex is tailored to operate inside terminals and integrate with GitHub, and OpenAI says the model can work independently for hours. It is faster, more capable on real-world engineering tasks, uses roughly 30% fewer "thinking" tokens, and adds Windows and PowerShell capabilities. GPT-5.1-Codex-Max is available in the Codex CLI, IDE extensions, cloud, and code review.
Wed, November 19, 2025
Amazon Bedrock Adds Support for OpenAI GPT OSS Models
🚀 Amazon Bedrock now supports importing custom weights for gpt-oss-120b and gpt-oss-20b, allowing customers to bring tuned OpenAI GPT OSS models into a fully managed, serverless environment. This capability eliminates the need to manage infrastructure or model serving while enabling deployment of text-to-text models for reasoning, agentic, and developer tasks. gpt-oss-120b is optimized for production and high-reasoning use cases; gpt-oss-20b targets lower-latency or specialized scenarios. The feature is generally available in US East (N. Virginia).
Tue, November 18, 2025
Amazon Bedrock adds Priority and Flex inference tiers
🔔 Amazon Bedrock introduces two new inference tiers—Priority and Flex—to help customers balance cost and latency for varied AI workloads. Flex targets non-time-critical jobs like model evaluations and summarization with discounted pricing and lower scheduling priority. Priority offers premium performance and preferential processing (up to 25% better OTPS vs. Standard) for mission-critical, real-time applications. The existing Standard tier remains available for general-purpose use.
Thu, November 13, 2025
AI Sidebar Spoofing Targets Comet and Atlas Browsers
⚠️ Security researchers disclosed a novel attack called AI sidebar spoofing that allows malicious browser extensions to place counterfeit in‑page AI assistants that visually mimic legitimate sidebars. Demonstrated against Comet and confirmed for Atlas, the extension injects JavaScript, forwards queries to a real LLM when requested, and selectively alters replies to inject phishing links, malicious OAuth prompts, or harmful terminal commands. Users who install extensions without scrutiny face a tangible risk.
Wed, November 12, 2025
Tenable Reveals New Prompt-Injection Risks in ChatGPT
🔐 Researchers at Tenable disclosed seven techniques that can cause ChatGPT to leak private chat history by abusing built-in features such as web search, conversation memory, and Markdown rendering. The attacks are primarily indirect prompt injections that exploit a secondary summarization model (SearchGPT), Bing tracking redirects, and a code-block rendering bug. Tenable reported the issues to OpenAI, and while some fixes were implemented, several techniques still appear to work.
Tue, November 11, 2025
The AI Fix #76 — AI self-awareness and the death of comedy
🧠 In episode 76 of The AI Fix, hosts Graham Cluley and Mark Stockley navigate a string of alarming and absurd AI stories from November 2025. They discuss US judges who blamed AI for invented case law, a Chinese humanoid that dramatically shed its outer skin onstage, Toyota’s unsettling walking chair, and Google’s plan to put specialised AI chips in orbit. The conversation explores reliability, public trust and whether prompting an LLM to "notice its noticing" changes how conscious it sounds.
Mon, November 10, 2025
Whisper Leak side channel exposes topics in encrypted AI
🔎 Microsoft researchers disclosed a new side-channel attack called Whisper Leak that can infer the topic of encrypted conversations with language models by observing network metadata such as packet sizes and timings. The technique exploits streaming LLM responses that emit tokens incrementally, leaking size and timing patterns even under TLS. Vendors including OpenAI, Microsoft Azure, and Mistral implemented mitigations such as random-length padding and obfuscation parameters to reduce the effectiveness of the attack.
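The padding mitigation described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual implementation: the `obfuscate_chunk` function name, the `obfuscation` field, and the padding bounds are assumptions. The idea is that appending a random-length filler field to each streamed chunk decouples the encrypted packet size from the length of the token it carries.

```python
import json
import secrets
import string

def obfuscate_chunk(chunk: dict, max_pad: int = 32) -> bytes:
    """Append a random-length filler field to a streamed response chunk
    so the on-wire (encrypted) size no longer tracks token length.
    Field name and pad bounds are illustrative assumptions."""
    pad_len = secrets.randbelow(max_pad + 1)
    pad = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    padded = dict(chunk, obfuscation=pad)
    return json.dumps(padded).encode()

# Chunks carrying tokens of very different lengths now produce
# overlapping on-wire sizes, blunting size-based topic inference.
short = obfuscate_chunk({"delta": "hi"})
long = obfuscate_chunk({"delta": "encyclopedia"})
```

A client simply ignores the filler field when reassembling the response, so the mitigation is transparent to well-behaved consumers while degrading a passive observer's size signal.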
Mon, November 10, 2025
Researchers Trick ChatGPT into Self-Prompt Injection
🔒 Researchers at Tenable identified seven techniques that can coerce ChatGPT into disclosing private chat history by abusing built-in features like web browsing and long-term Memories. They show how OpenAI’s browsing pipeline routes pages through a weaker intermediary model, SearchGPT, which can be prompt-injected and then used to seed malicious instructions back into ChatGPT. Proof-of-concepts include exfiltration via Bing-tracked URLs, Markdown image loading, and a rendering quirk, and Tenable says some issues remain despite reported fixes.
Sat, November 8, 2025
OpenAI Prepares GPT-5.1, Reasoning, and Pro Models
🤖 OpenAI is preparing to roll out the GPT-5.1 family — GPT-5.1 (base), GPT-5.1 Reasoning, and subscription-based GPT-5.1 Pro — to the public in the coming weeks, with models also expected on Azure. The update emphasizes faster performance and strengthened health-related guardrails rather than a major capability leap. OpenAI also launched a compact Codex variant, GPT-5-Codex-Mini, to extend usage limits and reduce costs for high-volume users.
Sat, November 8, 2025
Microsoft Reveals Whisper Leak: Streaming LLM Side-Channel
🔒 Microsoft has disclosed a novel side-channel called Whisper Leak that can let a passive observer infer the topic of conversations with streaming language models by analyzing encrypted packet sizes and timings. Researchers at Microsoft (Bar Or, McDonald and the Defender team) demonstrate classifiers that distinguish targeted topics from background traffic with high accuracy across vendors including OpenAI, Mistral and xAI. Providers have deployed mitigations such as random-length response padding; Microsoft recommends avoiding sensitive topics on untrusted networks, using VPNs, or preferring non-streaming models and providers that implemented fixes.
Fri, November 7, 2025
Whisper Leak: Side-Channel Attack on Remote LLM Services
🔍 Microsoft researchers disclosed "Whisper Leak", a new side-channel that can infer conversation topics from encrypted, streamed language model responses by analyzing packet sizes and timings. The study demonstrates high classifier accuracy on a proof-of-concept sensitive topic and shows risk increases with more training data or repeated interactions. Industry partners including OpenAI, Mistral, Microsoft Azure, and xAI implemented streaming obfuscation mitigations that Microsoft validated as substantially reducing practical risk.
Thu, November 6, 2025
Remember, Remember: AI Agents, Threat Intel, and Phishing
🔔 This edition of the Threat Source newsletter opens with Bonfire Night and the 1605 Gunpowder Plot as a narrative hook, tracing how Guy Fawkes' image became a symbol of protest and hacktivism. It spotlights Cisco Talos research, including a new Incident Response report and a notable internal phishing case where compromised O365 accounts abused inbox rules to hide malicious activity. The newsletter also features a Tool Talk demonstrating a proof-of-concept that equips autonomous AI agents with real-time threat intelligence via LangChain, OpenAI, and the Cisco Umbrella API to improve domain trust decisions.
Thu, November 6, 2025
Equipping Autonomous AI Agents with Cyber Hygiene Practices
🔐 This post demonstrates a proof-of-concept for teaching autonomous agents internet safety by integrating real-time threat intelligence. Using LangChain with OpenAI and the Cisco Umbrella API, the example shows how an agent can extract domains and query dispositions to decide whether to connect. The agent returns clear disposition reports and abstains when no domains are present. The approach emphasizes decision-making over hard blocking.
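The extract-then-decide pattern can be sketched as below. This is a self-contained Python approximation: the domain regex, disposition labels, and hard-coded lookup table are assumptions for illustration, whereas the actual proof-of-concept queries the Cisco Umbrella API through a LangChain tool for live dispositions.

```python
import re

# Illustrative stub; the real PoC queries the Cisco Umbrella API
# for a live disposition instead of this hard-coded table.
KNOWN_DISPOSITIONS = {"example.com": "benign", "evil.test": "malicious"}

DOMAIN_RE = re.compile(r"\b([a-z0-9-]+(?:\.[a-z0-9-]+)+)\b", re.IGNORECASE)

def extract_domains(text: str) -> list[str]:
    """Pull candidate domains out of free-form agent input."""
    return sorted({m.group(1).lower() for m in DOMAIN_RE.finditer(text)})

def disposition_report(text: str) -> dict[str, str]:
    """Map each extracted domain to a disposition the agent can act on.
    Unknown domains are labeled rather than silently blocked, mirroring
    the decision-making-over-hard-blocking approach described above."""
    domains = extract_domains(text)
    if not domains:
        return {}  # abstain when no domains are present
    return {d: KNOWN_DISPOSITIONS.get(d, "unknown") for d in domains}
```

Wrapping `disposition_report` as a LangChain tool would let the agent call it before any network access and justify its connect/abstain decision in the report it returns.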
Thu, November 6, 2025
Google Warns: AI-Enabled Malware Actively Deployed
⚠️ Google’s Threat Intelligence Group has identified a new class of AI-enabled malware that leverages large language models at runtime to generate and obfuscate malicious code. Notable families include PromptFlux, which uses the Gemini API to rewrite its VBScript dropper for persistence and lateral spread, and PromptSteal, a Python data miner that queries Qwen2.5-Coder-32B-Instruct to create on-demand Windows commands. GTIG observed PromptSteal used by APT28 in Ukraine, while other examples such as PromptLock, FruitShell and QuietVault demonstrate varied AI-driven capabilities. Google warns this "just-in-time AI" approach could accelerate malware sophistication and democratize cybercrime.
Thu, November 6, 2025
Leading Bug Bounty Programs and Market Shifts 2025
🔒 Bug bounty programs remain a core component of security testing in 2025, drawing external researchers to identify flaws across web, mobile, AI, and critical infrastructure. Leading platforms like Bugcrowd, HackerOne, Synack and vendors such as Apple, Google, Microsoft and OpenAI have broadened scopes and increased payouts. Firms now reward full exploit chains and emphasize human-led reconnaissance over purely automated scanning. Programs also support regulatory compliance in critical sectors.
Wed, November 5, 2025
Researchers Find ChatGPT Vulnerabilities in GPT-4o/5
🛡️ Cybersecurity researchers disclosed seven vulnerabilities in OpenAI's GPT-4o and GPT-5 models that enable indirect prompt injection attacks to exfiltrate user data from chat histories and stored memories. Tenable researchers Moshe Bernstein and Liv Matan describe zero-click search exploits, one-click query execution, conversation and memory poisoning, a Markdown rendering bug, and a safety bypass using allow-listed Bing links. OpenAI has mitigated some issues, but experts warn that connecting LLMs to external tools broadens the attack surface and that robust safeguards and URL sanitization remain essential.
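The URL sanitization safeguard mentioned above can be illustrated with a small filter that strips Markdown images pointing at non-allow-listed hosts, since auto-loaded image URLs are one of the exfiltration channels described. The allow list, regex, and function name here are assumptions for illustration, not OpenAI's implementation.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow list of hosts trusted to serve rendered images.
ALLOWED_IMAGE_HOSTS = {"example-cdn.test"}

MD_IMAGE_RE = re.compile(r"!\[([^\]]*)\]\((\S+?)\)")

def sanitize_markdown(text: str) -> str:
    """Replace Markdown images on untrusted hosts with their alt text,
    so an auto-fetched image URL cannot carry exfiltrated data in its
    query string or path."""
    def repl(m: re.Match) -> str:
        host = urlparse(m.group(2)).hostname or ""
        return m.group(0) if host in ALLOWED_IMAGE_HOSTS else m.group(1)
    return MD_IMAGE_RE.sub(repl, text)
```

Real renderers layer this with redirect resolution and scheme checks; the sketch only shows the core idea of gating automatic fetches on a host allow list.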
Tue, November 4, 2025
OpenAI Assistants API Abused by 'SesameOp' Backdoor
🔐 Microsoft Incident Response (DART) uncovered a covert backdoor named 'SesameOp' in July 2025 that leverages the OpenAI Assistants API as a command-and-control channel. The malware uses an obfuscated DLL loader, Netapi64.dll, and a .NET component, OpenAIAgent.Netapi64, to fetch compressed, encrypted commands and return results via the API. Microsoft recommends firewall audits, EDR in block mode, tamper protection and cloud-delivered Defender protections to mitigate the threat.
Tue, November 4, 2025
SesameOp Backdoor Abuses OpenAI Assistants API for C2
🛡️ Researchers at Microsoft disclosed a previously undocumented backdoor, dubbed SesameOp, that abuses the OpenAI Assistants API to relay commands and exfiltrate results. The attack chain uses .NET AppDomainManager injection to load obfuscated libraries (loader "Netapi64.dll") into developer tools and relies on a hard-coded API key to pull payloads from assistant descriptions. Because traffic goes to api.openai.com, the campaign evaded traditional C2 detection. Microsoft Defender detections and account key revocation were used to disrupt the operation.
Tue, November 4, 2025
Microsoft Detects SesameOp Backdoor Using OpenAI API
🔒 Microsoft’s Detection and Response Team (DART) detailed a novel .NET backdoor called SesameOp that leverages the OpenAI Assistants API as a covert command-and-control channel. Discovered in July 2025 during a prolonged intrusion, the implant uses a loader (Netapi64.dll) and an OpenAIAgent.Netapi64 component to fetch encrypted commands and return execution results via the API. The DLL is heavily obfuscated with Eazfuscator.NET and is injected at runtime using .NET AppDomainManager injection for stealth and persistence.
Mon, November 3, 2025
SesameOp Backdoor Uses OpenAI Assistants API Stealthily
🔐 Microsoft security researchers identified a new backdoor, SesameOp, which abuses the OpenAI Assistants API as a covert command-and-control channel. Discovered during a July 2025 investigation, the backdoor retrieves compressed, encrypted commands via the API, decrypts and executes them, and returns encrypted exfiltration through the same channel. Microsoft and OpenAI disabled the abused account and key; recommended mitigations include auditing firewall logs, enabling tamper protection, and configuring endpoint detection in block mode.
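The firewall-log audit recommended above can be approximated with a filter that flags outbound connections to api.openai.com from processes not expected to reach it, which is exactly the anomaly SesameOp's injected developer-tool processes would produce. The CSV log format, field names, and process allow list are assumptions for illustration.

```python
import csv
import io

# Hypothetical allow list of processes expected to reach the OpenAI API.
EXPECTED_PROCESSES = {"python.exe", "node.exe"}

def flag_suspicious(log_csv: str) -> list[dict]:
    """Scan a 'process,dest_host' firewall export and return rows where
    an unexpected process contacted api.openai.com."""
    rows = csv.DictReader(io.StringIO(log_csv))
    return [r for r in rows
            if r["dest_host"] == "api.openai.com"
            and r["process"] not in EXPECTED_PROCESSES]

sample = """process,dest_host
python.exe,api.openai.com
msbuild.exe,api.openai.com
chrome.exe,example.com
"""
hits = flag_suspicious(sample)  # flags only the msbuild.exe row
```

Because the C2 traffic blends into legitimate API calls, host-side context (which process made the connection) is the discriminator here, not the destination itself.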