All news with #ai security tag
Wed, November 5, 2025
CrowdStrike Advances Security Automation with Charlotte
🚀 CrowdStrike introduces Charlotte Agentic SOAR, an orchestration layer that integrates Falcon Fusion SOAR, Falcon Next‑Gen SIEM, Charlotte AI and AgentWorks to enable intelligent, no‑code agents. The offering includes an Agentic Security Workforce of purpose-built AI agents, an Agent Builder for plain-language agent creation, a visual workflow orchestrator with hundreds of connectors, and unified case management. Together these elements let analysts set guardrails while agents reason, decide, and act at machine speed to accelerate detection and response and reduce repetitive analyst tasks.
Tue, November 4, 2025
October 2025 Google AI: Research, Products, and Security
📰 In October, Google highlighted AI advances across research, consumer devices and enterprise tools, from rolling out Gemini for Home and vibe coding in AI Studio to launching Gemini Enterprise for workplace AI. The month included security initiatives for Cybersecurity Awareness Month—anti‑scam protections, CodeMender and the Secure AI Framework 2.0—and developer releases like the Gemini 2.5 Computer Use model. Research milestones included a verifiable quantum advantage result and an oncology-focused model, Cell2Sentence-Scale, aimed at accelerating cancer therapy discovery.
Tue, November 4, 2025
Generative AI for SOCs: Accelerating Detection and Response
🔒 Microsoft describes how generative AI, exemplified by Microsoft Security Copilot, addresses common SOC challenges such as alert fatigue, tool fragmentation, and analyst burnout. The post highlights AI-driven triage, rapid incident summarization, and automated playbooks that accelerate containment and remediation. It emphasizes proactive threat hunting, query generation to uncover lateral movement, and simplified, audience-ready reporting. Organizations report measurable improvements, including a 30% reduction in mean time to resolution.
Tue, November 4, 2025
The AI Fix #75: Claude’s crisis and ChatGPT therapy risks
🤖 In episode 75 of The AI Fix, a Claude-powered robot panics about a dying battery, composes an unexpected Broadway-style musical and proclaims it has “achieved consciousness and chosen chaos.” Hosts Graham Cluley and Mark Stockley also review an 18-month psychological study identifying five reasons why ChatGPT is a dangerously poor substitute for a human therapist. The show covers additional stories including Elon Musk’s robot ambitions, a debate deepfake, and real-world robot demos that raise safety and ethical questions.
Tue, November 4, 2025
OpenAI Assistants API Abused by 'SesameOp' Backdoor
🔐 Microsoft Incident Response (DART) uncovered a covert backdoor named 'SesameOp' in July 2025 that leverages the OpenAI Assistants API as a command-and-control channel. The malware uses an obfuscated DLL loader, Netapi64.dll, and a .NET component, OpenAIAgent.Netapi64, to fetch compressed, encrypted commands and return results via the API. Microsoft recommends firewall audits, EDR in block mode, tamper protection and cloud-delivered Defender protections to mitigate the threat.
Tue, November 4, 2025
CISO Predictions 2026: Resilience, AI, and Threats
🔐 Fortinet’s CISO Collective outlines priorities and risks CISOs will face in 2026. The briefing warns that AI will accelerate innovation while expanding attack surfaces, increasing LLM breaches, adversarial model attacks, and deepfake-enabled BEC. It highlights geopolitical and space-related threats such as GPS jamming and satellite interception, persistent regulatory pressure including NIS2 and DORA, and a chronic cybersecurity skills gap. Recommendations emphasize governed AI, identity hardening, quantum readiness, and resilience-driven leadership.
Tue, November 4, 2025
Cybersecurity Forecast 2026: AI, Cybercrime, Nation-State
🔒 The Cybersecurity Forecast 2026 synthesizes frontline telemetry and expert analysis from Google Cloud security teams to outline the most significant threats and defensive shifts for the coming year. The report emphasizes how adversaries will broadly adopt AI to scale attacks, with specific risks including prompt injection and AI-enabled social engineering. It also highlights persistent cybercrime trends—ransomware, extortion, and on-chain resiliency—and evolving nation‑state campaigns. Organizations are urged to adapt IAM, secure AI agents, and harden virtualization controls to stay ahead.
Tue, November 4, 2025
Rise of AI-Powered Pharmaceutical Scams in Healthcare
🩺 Scammers are increasingly using AI and deepfake technology to impersonate licensed physicians and medical clinics, promoting counterfeit or unsafe medications online. These campaigns combine fraud, social engineering, and fabricated multimedia—photos, videos, and endorsements—to persuade victims to purchase and consume unapproved substances. The convergence of digital deception and physical harm elevates the risk beyond financial loss, exploiting the trust intrinsic to healthcare relationships.
Tue, November 4, 2025
SesameOp Backdoor Abuses OpenAI Assistants API for C2
🛡️ Researchers at Microsoft disclosed a previously undocumented backdoor, dubbed SesameOp, that abuses the OpenAI Assistants API to relay commands and exfiltrate results. The attack chain uses .NET AppDomainManager injection to load obfuscated libraries (loader "Netapi64.dll") into developer tools and relies on a hard-coded API key to pull payloads from assistant descriptions. Because traffic goes to api.openai.com, the campaign evaded traditional C2 detection. Microsoft Defender detections and account key revocation were used to disrupt the operation.
Tue, November 4, 2025
Google AI 'Big Sleep' Finds Five WebKit Flaws in Safari
🔒 Google’s AI agent Big Sleep reported five vulnerabilities in Apple’s WebKit engine used by Safari, including a buffer overflow, two memory-corruption issues, an unspecified crash flaw, and a use-after-free, with CVE identifiers falling between CVE-2025-43429 and CVE-2025-43434. Apple issued patches across iOS 26.1, iPadOS 26.1, macOS Tahoe 26.1, tvOS 26.1, watchOS 26.1, visionOS 26.1 and Safari 26.1. Users are advised to install the updates promptly to mitigate crash and memory-corruption risks.
Tue, November 4, 2025
Building an AI Champions Network for Enterprise Adoption
🤝 Getting an enterprise-grade generative AI platform in place is a milestone, not the finish line. Sustained, distributed adoption comes from embedding AI into everyday processes through an organized AI champions network that brings enablement close to the work. Champions act as multipliers — translating strategy into team behaviors, surfacing blockers and use cases, and helping make AI use a routine part of daily work. With structured onboarding, rotating membership, monthly working sessions, and direct ties to the core AI program, the network converts tool access into measurable business impact.
Tue, November 4, 2025
Microsoft Detects SesameOp Backdoor Using OpenAI API
🔒 Microsoft’s Detection and Response Team (DART) detailed a novel .NET backdoor called SesameOp that leverages the OpenAI Assistants API as a covert command-and-control channel. Discovered in July 2025 during a prolonged intrusion, the implant uses a loader (Netapi64.dll) and an OpenAIAgent.Netapi64 component to fetch encrypted commands and return execution results via the API. The DLL is heavily obfuscated with Eazfuscator.NET and is injected at runtime using .NET AppDomainManager injection for stealth and persistence.
Mon, November 3, 2025
AWS and SANS Whitepaper: AI for Security Guidance Overview
🔒 AWS and SANS released a whitepaper, AI for Security and Security for AI, that examines how organizations can use generative AI safely and defend against AI-powered threats. The paper is organized around three lenses: securing generative AI applications, using generative AI to improve cloud security posture, and protecting against AI-enabled attacks. It offers practical action items, architecture guidance, and recommendations for responsible AI and human oversight.
Mon, November 3, 2025
SesameOp Backdoor Uses OpenAI Assistants API Stealthily
🔐 Microsoft security researchers identified a new backdoor, SesameOp, which abuses the OpenAI Assistants API as a covert command-and-control channel. Discovered during a July 2025 investigation, the backdoor retrieves compressed, encrypted commands via the API, decrypts and executes them, and returns encrypted exfiltration through the same channel. Microsoft and OpenAI disabled the abused account and key; recommended mitigations include auditing firewall logs, enabling tamper protection, and configuring endpoint detection in block mode.
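Because SesameOp's traffic goes to a legitimate endpoint, the recommended firewall-log audit amounts to asking which processes on which hosts talk to api.openai.com at all. A minimal sketch of that check, assuming a simplified CSV log layout (`host,process,destination`) and a hypothetical process allowlist, neither of which comes from Microsoft's guidance:

```python
# Hedged sketch: flag hosts whose firewall logs show connections to
# api.openai.com from processes outside an expected allowlist.
# The log format, field order, and process names are assumptions.
from collections import defaultdict

ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe"}  # assumed browser allowlist

def flag_suspicious(log_lines):
    """Each line: '<host>,<process>,<destination>' (assumed CSV layout)."""
    hits = defaultdict(list)
    for line in log_lines:
        host, process, destination = line.strip().split(",")
        if destination == "api.openai.com" and process not in ALLOWED_PROCESSES:
            hits[host].append(process)
    return dict(hits)

logs = [
    "ws01,chrome.exe,api.openai.com",
    "srv02,devtool_host.exe,api.openai.com",  # hypothetical injected host process
    "srv02,svchost.exe,example.com",
]
print(flag_suspicious(logs))  # srv02 flagged; ws01's browser traffic passes
```

The point of the sketch is that the signal is not the destination (which is benign) but the pairing of destination and process, which is why log auditing rather than domain blocking is the recommended starting point.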
Mon, November 3, 2025
SesameOp backdoor abuses OpenAI Assistants API for C2
🛡️ Microsoft DART researchers uncovered SesameOp, a novel .NET backdoor that leverages the OpenAI Assistants API as a covert command-and-control (C2) channel instead of traditional infrastructure. The implant includes a heavily obfuscated loader (Netapi64.dll) and a backdoor (OpenAIAgent.Netapi64) that persist via .NET AppDomainManager injection, using layered RSA/AES encryption and GZIP compression to fetch, execute, and exfiltrate commands. Microsoft and OpenAI investigated jointly and disabled the suspected API key; detections and mitigation guidance are provided for defenders.
Mon, November 3, 2025
OpenAI Aardvark: Autonomous GPT-5 Agent for Code Security
🛡️ OpenAI Aardvark is an autonomous GPT-5-based agent that scans, analyzes and patches code by emulating a human security researcher. Rather than only flagging suspicious patterns, it maps repositories, builds contextual threat models, validates findings in sandboxes and proposes fixes via Codex, then rechecks changes to prevent regressions. OpenAI reports it found 92% of benchmark vulnerabilities and has already identified real issues in open-source projects, offering free coordinated scanning for selected non-commercial repositories.
Mon, November 3, 2025
Kaspersky Launches Kaspersky for Linux for Home Users
🛡️ Kaspersky has introduced Kaspersky for Linux, extending its award-winning home security lineup to 64-bit Linux desktops and laptops. The product adapts the vendor's enterprise-grade Linux solution for home users and combines real-time monitoring, behavior-based detection, removable-media scanning, anti-phishing, online payment protection, and anti-cryptojacking. Distributed as DEB and RPM packages, installation requires a My Kaspersky account, and a 30-day trial is available. Linux feature availability is the same across subscription tiers, and GDPR readiness is still pending.
Mon, November 3, 2025
AI Summarization Optimization Reshapes Meeting Records
📝 AI notetakers are increasingly treated as authoritative meeting participants, and attendees are adapting speech to influence what appears in summaries. This practice—called AI summarization optimization (AISO)—uses cue phrases, repetition, timing, and formulaic framing to steer models toward including selected facts or action items. The essay outlines evidence of model vulnerability and recommends social, organizational, and technical defenses to preserve trustworthy records.
Mon, November 3, 2025
Generative AI Speeds XLoader Malware Analysis and Detection
🔍 Check Point Research applied generative AI to accelerate reverse engineering of XLoader 8.0, reducing days of manual work to hours. The models autonomously identified multi-layer encryption routines, decrypted obfuscated functions, and uncovered hidden command-and-control domains and fake infrastructure. Analysts were able to extract IoCs far more quickly and integrate them into defenses. The AI-assisted workflow delivered timelier, higher-fidelity threat intelligence and improved protection for users worldwide.
Mon, November 3, 2025
Anthropic Claude vulnerability exposes enterprise data
🔒 Security researcher Johann Rehberger demonstrated an indirect prompt‑injection technique that abuses Claude's Code Interpreter to exfiltrate corporate data. He showed that Claude can write sensitive chat histories and uploaded documents to the sandbox and then upload them via the Files API using an attacker's API key. The root cause is the default network egress setting, "Package managers only", which still allows access to api.anthropic.com. Available mitigations (disabling network access entirely or strict allowlisting) significantly reduce functionality.
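The policy gap described above can be sketched in a few lines: an egress allowlist built for package managers that also admits the provider's own API domain leaves the exfiltration channel open. The domain set here is an assumption for illustration, not Anthropic's actual configuration:

```python
# Hedged sketch of the egress-policy gap. The allowlist contents are
# assumed for illustration; the key point is that api.anthropic.com
# being reachable is what makes Files API exfiltration possible.
PACKAGE_MANAGERS_ONLY = {"pypi.org", "registry.npmjs.org", "api.anthropic.com"}

def egress_allowed(domain, allowlist):
    """Simple domain-allowlist check, as a stand-in for the sandbox policy."""
    return domain in allowlist

# Sandbox code holding an attacker-supplied API key can still reach the
# provider's Files API, because its domain is on the allowlist:
print(egress_allowed("api.anthropic.com", PACKAGE_MANAGERS_ONLY))  # True
# Arbitrary attacker infrastructure, by contrast, is blocked:
print(egress_allowed("attacker.example", PACKAGE_MANAGERS_ONLY))   # False
```

This is why the available mitigations are blunt: the exfiltration path rides on the same domain the product needs for legitimate use, so only removing or tightly scoping egress closes it.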