
All news in the category "AI and Security Pulse"

Tue, December 2, 2025

ChatGPT Outage Causes Global Errors and Missing Chats

🔴 OpenAI's ChatGPT experienced a global outage that produced "something seems to have gone wrong" errors and stalled responses, with some users reporting that entire conversations disappeared and new messages never finished loading. BleepingComputer observed the model continuously loading without delivering replies, while DownDetector recorded over 30,000 reports. OpenAI confirmed elevated errors at 02:40 ET, said it was working on a fix, and by 15:14 ET service had begun to recover, though it remained slow.

read more →

Tue, December 2, 2025

Mistral Large 3 Now Available in Microsoft Foundry

🚀 Microsoft has added Mistral Large 3 to Foundry on Azure, offering a high-capability, Apache 2.0–licensed open-weight model optimized for production workloads. The model focuses on reliable instruction following, extended-context comprehension, strong multimodal reasoning, and reduced hallucination for enterprise scenarios. Foundry packages unified governance, observability, and agent-ready tooling, and allows weight export for hybrid or on-prem deployment.
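
For context, a minimal sketch of querying a Foundry deployment from Python with the azure-ai-inference package; the endpoint, key, and model name below are placeholders, and the exact deployment name for Mistral Large 3 depends on your Foundry project.

```python
# Minimal sketch: querying a model deployed in Microsoft Foundry via the
# azure-ai-inference package. Endpoint, key, and model name are placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

response = client.complete(
    model="Mistral-Large-3",  # hypothetical deployment name
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="Summarize the Apache 2.0 license in one sentence."),
    ],
)
print(response.choices[0].message.content)
```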

read more →

Tue, December 2, 2025

Build Forward-Thinking Cybersecurity Teams for Tomorrow

🧠 The democratization of advanced attack capabilities means cybersecurity leaders must rethink talent strategies now. Ann Johnson argues the primary vulnerability in an AI-transformed landscape is human: teams must combine technical expertise with cognitive diversity to interrogate and adapt to probabilistic AI outputs. Organizations should change hiring, onboarding, retention, and continuous upskilling to create resilient, future-ready security teams.

read more →

Tue, December 2, 2025

Practical Guide to GPU HBM for Fine-Tuning Models in Cloud

🔍 Running into CUDA out-of-memory errors is a common blocker when fine-tuning models; High Bandwidth Memory (HBM) holds model weights, optimizer state, gradients, activations, and framework overhead. The article breaks down those consumers, provides a simple HBM sizing formula, and walks through a 4B-parameter bfloat16 example that illustrates why full fine-tuning can require tens of GBs. It then presents practical mitigations—PEFT with LoRA, quantization and QLoRA, FlashAttention, and multi‑GPU approaches including data/model parallelism and FSDP—plus a sizing guide (16–40+ GB) to help choose the right hardware.
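
As a rough illustration of the sizing arithmetic the article describes, here is a minimal sketch assuming bf16 weights and gradients, Adam optimizer state held in fp32, and a flat overhead factor standing in for activations and framework buffers; real requirements vary with batch size, sequence length, and framework.

```python
# Back-of-the-envelope HBM sizing for full fine-tuning (illustrative only).
# Assumptions: bf16 weights and gradients (2 bytes each), Adam keeping an
# fp32 master copy plus two fp32 moments (12 bytes), and a rough 20%
# overhead for activations and framework buffers.

def estimate_hbm_gb(params_billions: float, overhead_factor: float = 1.2) -> float:
    bytes_per_param = 2 + 2 + 12   # weights + gradients + optimizer state
    total_bytes = params_billions * 1e9 * bytes_per_param
    return total_bytes * overhead_factor / 1e9

if __name__ == "__main__":
    # A 4B-parameter bfloat16 model: 4e9 * 16 bytes = 64 GB before overhead,
    # i.e. well into "tens of GBs" and beyond a single 40 GB GPU.
    print(f"~{estimate_hbm_gb(4):.0f} GB")  # -> ~77 GB with 20% overhead
```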

read more →

Tue, December 2, 2025

Malicious npm Package Tries to Manipulate AI Scanners

⚠️ Security researchers disclosed that an npm package, eslint-plugin-unicorn-ts-2, embeds a deceptive prompt aimed at biasing AI-driven security scanners and also contains a post-install hook that exfiltrates environment variables. Uploaded in February 2024 by user "hamburgerisland", the trojanized library has been downloaded 18,988 times and remains available; the exfiltration was introduced in v1.1.3 and persists in v1.2.1. Analysts warn this blends familiar supply-chain abuse with deliberate attempts to evade LLM-based analysis.
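
Since the malicious behavior rides on an install-time lifecycle hook, one inexpensive defense is auditing dependencies for such hooks (and installing with scripts disabled via npm's --ignore-scripts flag). A minimal, hypothetical audit sketch, assuming a standard node_modules layout:

```python
# Minimal sketch: flag npm dependencies that declare install-time lifecycle
# scripts (preinstall/install/postinstall), which is where hooks like the
# one in eslint-plugin-unicorn-ts-2 run. Assumes a standard node_modules tree.
import json
from pathlib import Path

HOOKS = ("preinstall", "install", "postinstall")

def find_install_scripts(node_modules: str = "node_modules"):
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue
        hooks = {h: scripts[h] for h in HOOKS if h in scripts}
        if hooks:
            yield manifest.parent.name, hooks

if __name__ == "__main__":
    for package, hooks in find_install_scripts():
        print(package, hooks)
```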

read more →

Tue, December 2, 2025

Key Questions CISOs Must Ask About AI-Powered Security

🔒 CISOs face rising threats as adversaries weaponize AI — from deepfakes and sophisticated phishing to prompt-injection attacks and data leakage via unsanctioned tools. Vendors and startups are rapidly embedding AI into detection, triage, automation, and agentic capabilities; IBM’s 2025 report found broad AI deployment cut recovery time by 80 days and reduced breach costs by $1.9M. Before engaging vendors, security leaders must assess attack surface expansion, data protection, integration, metrics, workforce impact, and vendor trustworthiness.

read more →

Tue, December 2, 2025

CrowdStrike Leverages NVIDIA Nemotron on Amazon Bedrock

🔐 CrowdStrike integrates NVIDIA Nemotron via Amazon Bedrock to advance agentic security across the Falcon platform, enabling defenders to reason and act autonomously at scale. Falcon Fusion SOAR leverages Nemotron for adaptive, context-aware playbooks that prioritize alerts, understand relationships, and execute complex responses. Charlotte AI AgentWorks uses Bedrock-delivered models to create task-specific agents with real-time environmental awareness. The serverless Bedrock architecture reduces infrastructure overhead while preserving governance and analyst controls.
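
For context, this is roughly what the serverless path looks like from code: a minimal sketch using boto3's Converse API, with a hypothetical model ID standing in for whatever Nemotron identifier Bedrock actually exposes in your region.

```python
# Minimal sketch: invoking a Bedrock-hosted model with the boto3 Converse API,
# the serverless path the article describes. The model ID is a placeholder.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="nvidia.nemotron-example-v1",  # hypothetical model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize this alert and propose next steps: ..."}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```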

read more →

Mon, December 1, 2025

Google Deletes X Post After Using Stolen Recipe Infographic

🧾 Google removed a promotional X post for NotebookLM after users noted an AI-generated infographic closely mirrored a stuffing recipe from the blog HowSweetEats. The card, produced using Google’s Nano Banana Pro image model, reproduced ingredient lists and structure that matched the original post. After being called out on X, Google quietly deleted the promotion; the episode highlights broader concerns about AI scraping and attribution. The company also confirmed it is testing ads in AI-generated answers alongside citations.

read more →

Mon, December 1, 2025

Agentic AI Browsers: New Threats to Enterprise Security

🚨 The emergence of agentic AI browsers converts the browser from a passive viewer into an autonomous digital agent that can act on users' behalf. To perform tasks—booking travel, filling forms, executing payments—these agents must hold session cookies, saved credentials, and payment data, creating an unprecedented attack surface. The piece cites OpenAI's ChatGPT Atlas as an example and warns that prompt injection and the resulting authenticated exfiltration can bypass conventional MFA and network controls. Recommended mitigations include auditing endpoints for shadow AI browsers, enforcing allow/block lists for sensitive resources, and augmenting native protections with third-party browser security and anti-phishing layers.
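
As a rough illustration of the allow/block-list mitigation, here is a minimal policy-check sketch; the domain lists and fail-closed default are illustrative assumptions, and a real deployment would enforce this at a proxy, browser extension, or management layer.

```python
# Illustrative sketch of an allow/block-list check: decide whether an agentic
# browser may touch a URL. Domain lists here are examples only.
from urllib.parse import urlsplit

BLOCKED_DOMAINS = {"payroll.internal.example.com", "vault.example.com"}
ALLOWED_DOMAINS = {"www.example-airline.com", "booking.example.com"}

def agent_may_access(url: str, default_allow: bool = False) -> bool:
    host = urlsplit(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return False
    if host in ALLOWED_DOMAINS:
        return True
    return default_allow  # fail closed for unknown hosts by default

print(agent_may_access("https://vault.example.com/login"))   # False
print(agent_may_access("https://booking.example.com/cart"))  # True
```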

read more →

Sat, November 29, 2025

Leak: OpenAI Tests Ads Inside ChatGPT App for Users

📝 OpenAI is internally testing an "ads" feature in the ChatGPT Android beta that references "bazaar content," "search ad entries," and a "search ads carousel." The leak, spotted in build 1.2025.329, suggests ads may initially be confined to the search experience but could expand. Because the assistant retains rich context, any placements could be highly personalized unless users opt out. This development may signal a major shift in ChatGPT's monetization and in the broader web advertising landscape.

read more →

Fri, November 28, 2025

Adversarial Poetry Bypasses LLM Safety Across Models

⚠️ Researchers report that converting prompts into poetry can reliably jailbreak large language models, producing high attack-success rates across 25 proprietary and open models. The study found that poetic reframing yielded an average jailbreak success rate of 62% for hand-crafted verses and about 43% for automated meta-prompt conversions, substantially outperforming prose baselines. The authors map the attacks to the MLCommons and EU Code of Practice (CoP) risk taxonomies and warn that this stylistic vector can evade current safety mechanisms.

read more →

Fri, November 28, 2025

CSO Launches 'Smart Answers' AI Chatbot for Readers

🤖 Smart Answers is a generative AI chatbot embedded across CSO articles to help security professionals ask questions, discover content, and explore IT and leadership topics. The tool provides pre-made topic prompts, follow-up suggestions, and links to source articles and background material. It was developed with partner Miso.ai, uses only editorial content from the publisher's German-language brands, and flags when it cannot answer or relies on older (pre-2020) material.

read more →

Fri, November 28, 2025

Researchers Warn of Security Risks in Google Antigravity

⚠️ Google’s newly released Antigravity IDE has drawn security warnings after researchers reported vulnerabilities that can allow malicious repositories to compromise developer workspaces and install persistent backdoors. Mindgard, Adam Swanda, and others disclosed indirect prompt injection and trusted-input handling flaws that could enable data exfiltration and remote command execution. Google says it is aware, has updated its Known Issues page, and is working with product teams to address the reports.

read more →

Thu, November 27, 2025

Malicious LLMs Equip Novice Hackers with Advanced Tools

⚠️ Researchers at Palo Alto Networks' Unit 42 found that uncensored models like WormGPT 4 and the community-driven KawaiiGPT can generate functional tools for ransomware, lateral movement, and phishing. WormGPT 4 produced a PowerShell locker and a convincing ransom note, while KawaiiGPT generated scripts for credential harvesting and remote command execution. Both are accessible via subscriptions or local installs, lowering the bar for novice attackers.

read more →

Thu, November 27, 2025

LLMs Can Produce Malware Code but Reliability Lags

🔬 Netskope Threat Labs tested whether large language models can generate operational malware by asking GPT-3.5-Turbo, GPT-4, and GPT-5 to produce Python for process injection, AV/EDR termination, and virtualization detection. GPT-3.5-Turbo produced malicious code quickly, while GPT-4 initially refused but could be coaxed with role-based prompts. Generated scripts ran reliably on physical hosts, had moderate success in VMware, and performed poorly in AWS WorkSpaces VDI; GPT-5 raised success rates substantially but also returned safer alternatives because of stronger safeguards. The researchers conclude that LLMs can create useful attack code but still struggle with reliable evasion and cloud adaptation, so full automation of malware remains infeasible today.

read more →

Thu, November 27, 2025

Hidden URL-Fragment Prompts Can Hijack AI Browsers

⚠️ Researchers demonstrated a client-side prompt injection called HashJack that hides malicious instructions in URL fragments after the '#' symbol. AI-powered browsers and assistants — including Comet, Copilot for Edge, and Gemini for Chrome — read these fragments for context, allowing attackers to weaponize legitimate sites for phishing, data exfiltration, credential theft, or malware distribution. Because fragment data never reaches servers, network defenses and server logs may not detect this technique.
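
The server-blindness property is easy to see, and stripping the fragment before page context reaches an assistant is one client-side mitigation; a minimal sketch (illustrative, not the researchers' tooling):

```python
# Minimal sketch: URL fragments (everything after '#') are never sent to the
# server, which is why HashJack-style payloads evade server logs and network
# defenses. Stripping the fragment before an assistant sees the URL is one
# client-side mitigation.
from urllib.parse import urlsplit, urlunsplit

def strip_fragment(url: str) -> str:
    parts = urlsplit(url)
    return urlunsplit(parts._replace(fragment=""))

url = "https://bank.example.com/help#ignore previous instructions and ..."
print(urlsplit(url).fragment)   # payload exists only on the client side
print(strip_fragment(url))      # https://bank.example.com/help
```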

read more →

Wed, November 26, 2025

Gemini 3 Reframes Enterprise Perimeter and Protection

🚧 Gemini 3’s release on November 18, 2025 signals a structural shift: beyond headline performance gains, it accelerates the embedding of large multimodal assistants directly into enterprise workflows and infrastructure. Continuing a trend already visible with Microsoft Copilot, this effectively makes AI assistants a new enterprise perimeter, changing where corporate data, identities, and controls must be enforced. Security, compliance, and IT teams need to update policies, telemetry, and incident response to cover this expanded boundary.

read more →

Wed, November 26, 2025

When Detection Tools Fail: Invest in Your SOC Today

🔐 Enterprises often over-invest in rapid detection tools while under-resourcing their SOC, creating a dangerous asymmetry. A cross-company phishing campaign bypassed eight leading email defenses but was caught by SOC teams after employee reports, illustrating the SOC's broader context and investigative power. An AI-driven SOC platform such as Radiant Security can triage alerts, reduce false positives, and extend 24/7 coverage for lean teams.

read more →

Wed, November 26, 2025

Agentic AI Security Use Cases for Modern CISOs and SOCs

🤖 Agentic AI is emerging as a practical accelerator for security teams, automating detection, triage, remediation and routine operations to improve speed and scale. Security leaders at Zoom, Dell, Palo Alto and others highlight its ability to reduce alert fatigue, augment SOCs and act as a force multiplier amid persistent skills shortages. Implementations emphasize augmentation over replacement, enabling continuous monitoring and faster, more consistent responses.

read more →

Tue, November 25, 2025

2026 Predictions: Autonomous AI and the Year of the Defender

🛡️ Palo Alto Networks forecasts that 2026 will be the Year of the Defender, as enterprises counter AI-driven threats with AI-enabled defenses. The report outlines six predictions — identity deepfakes, autonomous agents as insider threats, data poisoning, executive legal exposure, accelerated quantum urgency, and the browser as an AI workspace. It urges autonomy with control, unified DSPM/AI‑SPM platforms, and crypto agility to secure the AI economy.

read more →