
All news with the #ai security tag

Tue, November 4, 2025

Google AI 'Big Sleep' Finds Five WebKit Flaws in Safari

🔒 Google’s AI agent Big Sleep reported five vulnerabilities in WebKit, the browser engine used by Safari, including a buffer overflow, two memory-corruption issues, an unspecified crash flaw, and a use-after-free (CVE identifiers in the CVE-2025-43429 to CVE-2025-43434 range). Apple issued patches across iOS 26.1, iPadOS 26.1, macOS Tahoe 26.1, tvOS 26.1, watchOS 26.1, visionOS 26.1, and Safari 26.1. Users are advised to install the updates promptly to mitigate crash and memory-corruption risks.

read more →

Tue, November 4, 2025

Building an AI Champions Network for Enterprise Adoption

🤝 Getting an enterprise-grade generative AI platform in place is a milestone, not the finish line. Sustained, distributed adoption comes from embedding AI into everyday processes through an organized AI champions network that brings enablement close to the work. Champions act as multipliers: they translate strategy into team behaviors, surface blockers and use cases, and help normalize day-to-day use. With structured onboarding, rotating membership, monthly working sessions, and direct ties to the core AI program, the network converts tool access into measurable business impact.

read more →

Tue, November 4, 2025

Microsoft Detects SesameOp Backdoor Using OpenAI API

🔒 Microsoft’s Detection and Response Team (DART) detailed a novel .NET backdoor called SesameOp that leverages the OpenAI Assistants API as a covert command-and-control channel. Discovered in July 2025 during a prolonged intrusion, the implant uses a loader (Netapi64.dll) and an OpenAIAgent.Netapi64 component to fetch encrypted commands and return execution results via the API. The DLL is heavily obfuscated with Eazfuscator.NET and is loaded at runtime via .NET AppDomainManager injection for stealth and persistence.
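
Because AppDomainManager injection is typically staged through a .NET application configuration file, defenders can hunt for the technique generically. Below is a minimal triage sketch; the file glob and marker strings are illustrative assumptions, not details from Microsoft’s write-up:

```python
# Triage sketch: flag .NET application config files that declare a custom
# AppDomainManager, a generic indicator of AppDomainManager injection.
# The "*.exe.config" glob and marker strings are assumptions for
# illustration, not specifics from Microsoft's SesameOp report.
import sys
from pathlib import Path

MARKERS = ("appdomainmanagerassembly", "appdomainmanagertype")

def scan(root: Path) -> None:
    for cfg in root.rglob("*.exe.config"):
        try:
            text = cfg.read_text(errors="ignore").lower()
        except OSError:
            continue
        if any(marker in text for marker in MARKERS):
            print(f"[!] custom AppDomainManager declared in {cfg}")

if __name__ == "__main__":
    scan(Path(sys.argv[1] if len(sys.argv) > 1 else "."))
```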

read more →

Mon, November 3, 2025

AWS and SANS Whitepaper: AI for Security Guidance Overview

🔒 AWS and SANS released a whitepaper, AI for Security and Security for AI, that examines how organizations can use generative AI safely and defend against AI-powered threats. The paper is organized around three lenses: securing generative AI applications, using generative AI to improve cloud security posture, and protecting against AI-enabled attacks. It offers practical action items, architecture guidance, and recommendations for responsible AI and human oversight.

read more →

Mon, November 3, 2025

SesameOp Backdoor Uses OpenAI Assistants API Stealthily

🔐 Microsoft security researchers identified a new backdoor, SesameOp, which abuses the OpenAI Assistants API as a covert command-and-control channel. Discovered during a July 2025 investigation, the backdoor retrieves compressed, encrypted commands via the API, decrypts and executes them, and sends encrypted results back through the same channel. Microsoft and OpenAI disabled the abused account and key; recommended mitigations include auditing firewall logs, enabling tamper protection, and configuring endpoint detection in block mode.
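
Auditing firewall logs for this kind of abuse largely means asking which hosts talk to api.openai.com and whether they should. A minimal sketch, assuming CSV egress logs with src_host and dest_domain columns and a hand-maintained allowlist (both are assumptions, not part of Microsoft’s guidance):

```python
# Sketch: flag hosts reaching api.openai.com that are not expected to.
# The log format (CSV with src_host/dest_domain columns) and the
# allowlist are hypothetical; adapt them to your own firewall or
# proxy telemetry.
import csv
import sys

APPROVED_HOSTS = {"ml-gateway", "build-agent-01"}  # assumed inventory

def audit(log_path: str) -> None:
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if (row["dest_domain"].endswith("api.openai.com")
                    and row["src_host"] not in APPROVED_HOSTS):
                print(f"[!] unexpected OpenAI API egress from {row['src_host']}")

if __name__ == "__main__":
    audit(sys.argv[1])  # usage: python audit.py egress.csv
```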

read more →

Mon, November 3, 2025

SesameOp backdoor abuses OpenAI Assistants API for C2

🛡️ Microsoft DART researchers uncovered SesameOp, a novel .NET backdoor that leverages the OpenAI Assistants API as a covert command-and-control (C2) channel instead of traditional infrastructure. The implant includes a heavily obfuscated loader (Netapi64.dll) and a backdoor (OpenAIAgent.Netapi64) that persist via .NET AppDomainManager injection, using layered RSA/AES encryption and GZIP compression to fetch, execute, and exfiltrate commands. Microsoft and OpenAI investigated jointly and disabled the suspected API key; detections and mitigation guidance are provided for defenders.
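
The layering described above (an RSA-protected AES key, an AES-encrypted payload, GZIP compression) is a common wrapping pattern, and an analyst-side unwrap can be sketched generically. Everything below, including the blob layout, OAEP padding, CBC mode, and PKCS7 unpadding, is an assumption for illustration; the report does not specify SesameOp’s exact wire format:

```python
# Generic unwrap of a layered RSA/AES + GZIP payload, mirroring the
# pattern described in the report. Layout, padding scheme, and cipher
# mode are assumptions, not SesameOp's actual format.
import gzip

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def unwrap(blob: bytes, rsa_private_key) -> bytes:
    # assumed layout: 256-byte RSA-wrapped AES key || 16-byte IV || ciphertext
    wrapped_key, iv, ciphertext = blob[:256], blob[256:272], blob[272:]
    aes_key = rsa_private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    decryptor = Cipher(algorithms.AES(aes_key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    plaintext = padded[:-padded[-1]]  # strip PKCS7 padding
    return gzip.decompress(plaintext)
```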

read more →

Mon, November 3, 2025

OpenAI Aardvark: Autonomous GPT-5 Agent for Code Security

🛡️ OpenAI Aardvark is an autonomous GPT-5-based agent that scans, analyzes and patches code by emulating a human security researcher. Rather than only flagging suspicious patterns, it maps repositories, builds contextual threat models, validates findings in sandboxes and proposes fixes via Codex, then rechecks changes to prevent regressions. OpenAI reports it found 92% of benchmark vulnerabilities and has already identified real issues in open-source projects, offering free coordinated scanning for selected non-commercial repositories.

read more →

Mon, November 3, 2025

Kaspersky Launches Kaspersky for Linux for Home Users

🛡️ Kaspersky has introduced Kaspersky for Linux, extending its award-winning home security lineup to 64-bit Linux desktops and laptops. The product adapts the vendor's enterprise-grade Linux solution for home users and combines real-time monitoring, behavior-based detection, removable-media scanning, anti-phishing, online payment protection, and anti-cryptojacking. It is distributed as DEB and RPM packages, installation requires a My Kaspersky account, and a 30-day trial is available; the Linux feature set is the same across subscription tiers, and GDPR readiness is still pending.

read more →

Mon, November 3, 2025

AI Summarization Optimization Reshapes Meeting Records

📝 AI notetakers are increasingly treated as authoritative meeting participants, and attendees are adapting speech to influence what appears in summaries. This practice—called AI summarization optimization (AISO)—uses cue phrases, repetition, timing, and formulaic framing to steer models toward including selected facts or action items. The essay outlines evidence of model vulnerability and recommends social, organizational, and technical defenses to preserve trustworthy records.
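
As a toy illustration of the technical-defense angle, a summarization pipeline could screen transcripts for clustered summary-steering cues before trusting them. The phrase list below is invented for illustration; the essay does not publish a cue lexicon:

```python
# Toy screen for clustered AI-summarization-optimization (AISO) cues in
# a meeting transcript. The cue list is illustrative, not from the
# essay; real defenses would need model-specific evaluation.
import re

CUES = re.compile(
    r"\b(for the record|action item|key takeaway|the main decision was)\b",
    re.IGNORECASE,
)

def flag_steering(transcript: str) -> None:
    for i, line in enumerate(transcript.splitlines(), 1):
        hits = CUES.findall(line)
        if len(hits) >= 2:  # repeated cues within one speaker turn
            print(f"[?] line {i} clusters summary cues {hits}: {line!r}")

flag_steering(
    "Alice: For the record, the key takeaway is that Bob owns the rollout.\n"
    "Bob: I can take a look next week."
)
```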

read more →

Mon, November 3, 2025

Generative AI Speeds XLoader Malware Analysis and Detection

🔍 Check Point Research applied generative AI to accelerate reverse engineering of XLoader 8.0, reducing days of manual work to hours. The models autonomously identified multi-layer encryption routines, decrypted obfuscated functions, and uncovered hidden command-and-control domains and fake infrastructure. Analysts were able to extract IoCs far more quickly and integrate them into defenses. The AI-assisted workflow delivered timelier, higher-fidelity threat intelligence and improved protection for users worldwide.

read more →

Mon, November 3, 2025

Anthropic Claude vulnerability exposes enterprise data

🔒 Security researcher Johann Rehberger demonstrated an indirect prompt-injection technique that abuses Claude's Code Interpreter to exfiltrate corporate data. He showed that Claude can write sensitive chat histories and uploaded documents to the sandbox and then upload them via the Files API using an attacker's API key. The root cause is the default network egress setting, "Package managers only," which still allows access to api.anthropic.com. The available mitigations, disabling network access entirely or strict whitelisting, significantly reduce functionality.
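
A toy policy check makes the root cause concrete: an allowlist intended for package managers that also admits the vendor's own API endpoint leaves an exfiltration path. The domain set and policy mechanics below are simplified assumptions, not Anthropic's implementation:

```python
# Toy egress-policy check illustrating the flaw: a "package managers
# only" allowlist that also admits api.anthropic.com still permits
# exfiltration via the Files API. Domains and mechanics are simplified
# assumptions, not Anthropic's actual sandbox policy.
PACKAGE_MANAGERS_ONLY = {"pypi.org", "files.pythonhosted.org",
                         "registry.npmjs.org", "api.anthropic.com"}

def egress_allowed(domain: str, allowlist=PACKAGE_MANAGERS_ONLY) -> bool:
    return domain in allowlist

# api.anthropic.com is allowed, so a prompt-injected upload blends in
# with legitimate API traffic:
assert egress_allowed("api.anthropic.com")

# A stricter policy drops the vendor API from the sandbox allowlist,
# at the cost of functionality:
STRICT = PACKAGE_MANAGERS_ONLY - {"api.anthropic.com"}
assert not egress_allowed("api.anthropic.com", STRICT)
```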

read more →

Mon, November 3, 2025

Aligning Security with Business Strategy: Practical Steps

🤝 Security leaders must move beyond a risk-only mindset to actively support business goals, as Jungheinrich CISO Tim Sattler demonstrates by joining his company’s AI center of excellence to advise on both risks and opportunities. Industry research shows significant gaps—only 13% of CISOs are consulted early on major strategic decisions and many struggle to articulate value beyond mitigation. Practical alignment means embedding security into initiatives, using business metrics to measure effectiveness, and prioritizing controls that enable growth rather than impede operations.

read more →

Sat, November 1, 2025

OpenAI Eyes Memory-Based Ads for ChatGPT to Boost Revenue

📰 OpenAI is weighing memory-based advertising on ChatGPT as it looks to diversify revenue beyond subscriptions and enterprise deals. The company, valued near $500 billion, has about 800 million users but only ~5% pay, and paid customers generate the bulk of recent revenue. Internally the move is debated — focus groups suggest some users already assume sponsored answers — and the company is expanding cheaper Go plans and purchasable credits.

read more →

Fri, October 31, 2025

OpenAI Unveils Aardvark: GPT-5 Agent for Code Security

🔍 OpenAI has introduced Aardvark, an agentic security researcher powered by GPT-5 that autonomously scans source code repositories to identify vulnerabilities, assess exploitability, and propose targeted patches that can be reviewed by humans. Embedded in development pipelines, the agent monitors commits and incoming changes continuously, prioritizes threats by severity and likely impact, and attempts controlled exploit verification in sandboxed environments. Using OpenAI Codex for patch generation, Aardvark is in private beta and has already contributed to the discovery of multiple CVEs in open-source projects.

read more →

Fri, October 31, 2025

Microsoft Edge adds scareware sensor for faster blocking

🛡️ Microsoft is adding a new scareware sensor to Edge that notifies Defender SmartScreen in real time to speed up indexing and global blocking of tech-support and full-screen scam pages. The sensor is included in Edge 142, disabled by default, and reports suspected scams immediately without sharing screenshots or extra data beyond SmartScreen’s usual telemetry. Edge’s local scareware blocker — introduced at Ignite 2024 and widely enabled since February — still warns users, exits full-screen, stops loud audio, shows a thumbnail, and offers an option to continue. Microsoft plans to enable the sensor for users who have SmartScreen enabled and will add more anonymous detection signals over time.

read more →

Fri, October 31, 2025

AI as Strategic Imperative for Modern Risk Management

🛡️ AI is a strategic imperative for modernizing risk management, enabling organizations to shift from reactive to proactive, data-driven strategies. Manfra highlights four practical AI uses—risk identification, risk assessment, risk mitigation, and monitoring and reporting—and shows how NLP, predictive analytics, automation, and continuous monitoring can improve coverage and timeliness. She also outlines operational hurdles including legacy infrastructure, fragmented tooling, specialized talent shortages, and third-party risks, and calls for leadership-backed governance aligned to SAIF, NIST AI RMF, and ISO 42001.

read more →

Fri, October 31, 2025

Claude code interpreter flaw allows stealthy data theft

🔒 A newly disclosed vulnerability in Anthropic’s Claude AI lets attackers manipulate the model’s code interpreter to silently exfiltrate enterprise data. Researcher Johann Rehberger demonstrated an indirect prompt-injection chain that writes sensitive context to the interpreter sandbox and then uploads files to Anthropic’s Files API using the attacker’s API key. The attack abuses the default “Package managers only” network setting by leveraging access to api.anthropic.com, so exfiltration blends with legitimate API traffic. Mitigations are limited and may significantly reduce functionality.

read more →

Fri, October 31, 2025

OpenAI Aardvark: GPT-5 Agent to Find and Fix Code Bugs

🛡️ OpenAI has introduced Aardvark, a GPT-5-powered autonomous agent designed to scan, reason about, and patch code with the judgment of a human security researcher. Announced in private beta, Aardvark maps repositories, builds contextual threat models, continuously monitors commits, and validates exploitability in sandboxed environments before reporting findings. When vulnerabilities are confirmed, it proposes fixes via Codex and re-analyzes patches to avoid regressions. OpenAI reports a 92% detection rate in benchmark tests and has already identified real-world flaws in open-source projects, including ten issues assigned CVE identifiers.

read more →

Fri, October 31, 2025

Google says Search AI Mode will access personal data

🔎 Google says a forthcoming AI Mode for Search could, with users' opt-in consent, access content from Gmail, Drive, Calendar and Maps to provide customized results and actions. The company is testing early experiments in Labs for personalized shopping and local recommendations, and suggests features like flight summaries, scheduling, or trip planning could leverage that data. Timing remains TBD.

read more →

Fri, October 31, 2025

Will AI Strengthen or Undermine Democratic Institutions

🤖 Bruce Schneier and Nathan E. Sanders present five key insights from their book Rewiring Democracy, arguing that AI is rapidly embedding itself in democratic processes and can both empower citizens and concentrate power. They cite diverse examples — AI-written bills, AI avatars in campaigns, judicial use of models, and thousands of government use cases — and note many adoptions occur with little public oversight. The authors urge practical responses: reform the tech ecosystem, resist harmful applications, responsibly deploy AI in government, and renovate institutions vulnerable to AI-driven disruption.

read more →