All news in category "AI and Security Pulse"

Thu, November 6, 2025

Digital Health Needs Security at Its Core to Scale AI

🔒 The article argues that AI-driven digital health initiatives proved essential during COVID-19 but simultaneously exposed critical cybersecurity gaps that threaten pandemic preparedness. It warns that expansive data ecosystems, IoT devices and cloud pipelines multiply attack surfaces and that subtle AI-specific threats — including data poisoning, model inversion and adversarial inputs — can undermine public-health decisions. The author urges security by design, including zero-trust architectures, data provenance, encryption, model governance and cross-disciplinary drills so AI can deliver trustworthy, resilient public health systems.

read more →

Thu, November 6, 2025

Google: LLMs Employed Operationally in Malware Attacks

🤖 Google’s Threat Intelligence Group (GTIG) reports attackers are using “just‑in‑time” AI—LLMs queried during execution—to generate and obfuscate malicious code. Researchers identified two families, PROMPTSTEAL and PROMPTFLUX, which query Hugging Face and Gemini APIs to craft commands, rewrite source code, and evade detection. GTIG also documents social‑engineering prompts that trick models into revealing red‑teaming or exploit details, and warns the underground market for AI‑enabled crime is maturing. Google says it has disabled related accounts and applied protections.

read more →

Wed, November 5, 2025

Google: PROMPTFLUX malware uses Gemini to self-write

🤖 Google researchers disclosed a VBScript threat named PROMPTFLUX that queries Gemini via a hard-coded API key to request obfuscated VBScript designed to evade static detection. A 'Thinking Robot' component logs AI responses to %TEMP% and writes updated scripts to the Windows Startup folder to maintain persistence. Samples include propagation attempts to removable drives and mapped network shares, and variants that rewrite their source on an hourly cadence. Google assesses the malware as experimental and currently lacking known exploit capabilities.
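
For defenders, the reported behaviors translate into simple hunting heuristics. The sketch below is a minimal, hypothetical check (the marker strings are illustrative assumptions, not published IoCs) that looks for VBScript files in the per-user Startup folder referencing an LLM API endpoint; a companion check on %TEMP% for logged AI responses would cover the other artifact Google describes.

```python
import os
from pathlib import Path

# Hypothetical marker strings; real hunting should use vetted, current IoCs.
SUSPICIOUS_MARKERS = [
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "thinking robot",                     # component name reported by Google
]

def scan_startup_folder():
    """Flag VBScript files in the per-user Startup folder that embed LLM API markers."""
    startup = Path(os.environ["APPDATA"]) / "Microsoft/Windows/Start Menu/Programs/Startup"
    hits = []
    for script in startup.glob("*.vbs"):
        try:
            text = script.read_text(errors="ignore").lower()
        except OSError:
            continue
        if any(marker in text for marker in SUSPICIOUS_MARKERS):
            hits.append(script)
    return hits

if __name__ == "__main__":
    for path in scan_startup_folder():
        print(f"Suspicious startup script: {path}")
```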

read more →

Wed, November 5, 2025

Google: New AI-Powered Malware Families Deployed

⚠️ Google's Threat Intelligence Group reports a surge in malware that integrates large language models to enable dynamic, mid-execution changes—what Google calls "just-in-time" self-modification. Notable examples include the experimental PromptFlux VBScript dropper and the PromptSteal data miner, plus operational threats like FruitShell and QuietVault. Google disabled abused Gemini accounts, removed assets, and is hardening model safeguards while collaborating with law enforcement.

read more →

Wed, November 5, 2025

GTIG Report: AI-Enabled Threats Transform Cybersecurity

🔒 The Google Threat Intelligence Group (GTIG) released a report documenting a clear shift: adversaries are moving beyond benign productivity uses of AI and are experimenting with AI-enabled operations. GTIG observed state-sponsored actors from North Korea, Iran and the People's Republic of China using AI for reconnaissance, tailored phishing lure creation and data exfiltration. Threats described include AI-powered, self-modifying malware, prompt-engineering to bypass safety guardrails, and underground markets selling advanced AI attack capabilities. Google says it has disrupted malicious assets and applied that intelligence to strengthen classifiers and its AI models.

read more →

Wed, November 5, 2025

Lack of AI Training Becoming a Major Security Risk

⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots—31% began using AI independently and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training and 47% received only informal guidance, leaving clear operational and data risks.

read more →

Wed, November 5, 2025

Researchers Find ChatGPT Vulnerabilities in GPT-4o/5

🛡️ Cybersecurity researchers disclosed seven vulnerabilities in OpenAI's GPT-4o and GPT-5 models that enable indirect prompt injection attacks to exfiltrate user data from chat histories and stored memories. Tenable researchers Moshe Bernstein and Liv Matan describe zero-click search exploits, one-click query execution, conversation and memory poisoning, a markdown rendering bug, and a safety bypass using allow-listed Bing links. OpenAI has mitigated some issues, but experts warn that connecting LLMs to external tools broadens the attack surface and that robust safeguards and URL-sanitization remain essential.
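
The Bing-link bypass shows why a bare domain allow-list is weak: a trusted host can still forward to an attacker's URL. Below is a minimal sketch of the kind of URL sanitization the researchers call for; the allow-list and the redirect-parameter heuristic are illustrative assumptions, not OpenAI's actual logic.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative allow-list; a real deployment maintains this centrally.
ALLOWED_HOSTS = {"openai.com", "bing.com"}
# Query parameters redirectors commonly use to carry the real destination.
REDIRECT_PARAMS = {"u", "url", "redirect", "target"}

def is_safe_link(url: str) -> bool:
    """Reject off-list links and allow-listed links that look like open redirectors."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = (parsed.hostname or "").lower()
    on_list = host in ALLOWED_HOSTS or any(host.endswith("." + h) for h in ALLOWED_HOSTS)
    if not on_list:
        return False
    # A trusted domain can still forward elsewhere via a redirect parameter.
    return not (REDIRECT_PARAMS & parse_qs(parsed.query).keys())

print(is_safe_link("https://www.bing.com/search?q=hello"))               # True
print(is_safe_link("https://www.bing.com/ck/a?u=https://evil.example"))  # False
```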

read more →

Wed, November 5, 2025

Cloud CISO: Threat Actors' Growing Use of AI Tools

⚠️ Google's Threat Intelligence team reports a shift from experimentation to operational use of AI by threat actors, including AI-enabled malware and prompt-based command generation. GTIG highlighted PROMPTSTEAL, linked to APT28 (FROZENLAKE), which queries a Hugging Face LLM to generate scripts for reconnaissance, document collection, and exfiltration, while adopting greater obfuscation and altered C2 methods. Google disabled related assets, strengthened model classifiers and safeguards with DeepMind, and urges defenders to update threat models, monitor anomalous scripting and C2, and incorporate threat intelligence into model- and classifier-level protections.
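
GTIG's advice to monitor anomalous scripting can be made concrete: script interpreters rarely have a legitimate reason to call LLM inference APIs directly. A rough sketch over process-level network events, where the event schema, interpreter list, and endpoint list are all assumptions:

```python
# Hypothetical event schema: each event is a dict with "process" and "dest_host" keys.
LLM_API_HOSTS = {
    "api-inference.huggingface.co",       # queried by PROMPTSTEAL, per GTIG
    "generativelanguage.googleapis.com",  # Gemini API endpoint
}
SCRIPT_INTERPRETERS = {"wscript.exe", "cscript.exe", "powershell.exe", "python.exe"}

def flag_llm_calls_from_scripts(events):
    """Yield events where a script interpreter contacts an LLM inference endpoint."""
    for event in events:
        if (event["process"].lower() in SCRIPT_INTERPRETERS
                and event["dest_host"].lower() in LLM_API_HOSTS):
            yield event

# Example over two synthetic events: only the wscript.exe call is flagged.
events = [
    {"process": "wscript.exe", "dest_host": "api-inference.huggingface.co"},
    {"process": "chrome.exe", "dest_host": "generativelanguage.googleapis.com"},
]
print(list(flag_llm_calls_from_scripts(events)))
```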

read more →

Wed, November 5, 2025

Scientists Need a Positive Vision for Artificial Intelligence

🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.

read more →

Tue, November 4, 2025

Generative AI for SOCs: Accelerating Detection and Response

🔒 Microsoft describes how generative AI, exemplified by Microsoft Security Copilot, addresses common SOC challenges such as alert fatigue, tool fragmentation, and analyst burnout. The post highlights AI-driven triage, rapid incident summarization, and automated playbooks that accelerate containment and remediation. It emphasizes proactive threat hunting, query generation to uncover lateral movement, and simplified, audience-ready reporting. Organizations report measurable improvements, including a 30% reduction in mean time to resolution.

read more →

Tue, November 4, 2025

October 2025 Google AI: Research, Products, and Security

📰 In October, Google highlighted AI advances across research, consumer devices and enterprise tools, from rolling out Gemini for Home and vibe coding in AI Studio to launching Gemini Enterprise for workplace AI. The month included security initiatives for Cybersecurity Awareness Month—anti‑scam protections, CodeMender and the Secure AI Framework 2.0—and developer releases like the Gemini 2.5 Computer Use model. Research milestones included a verifiable quantum advantage result and an oncology-focused model, Cell2Sentence-Scale, aimed at accelerating cancer therapy discovery.

read more →

Tue, November 4, 2025

The AI Fix #75: Claude’s crisis and ChatGPT therapy risks

🤖 In episode 75 of The AI Fix, a Claude-powered robot panics about a dying battery, composes an unexpected Broadway-style musical and proclaims it has “achieved consciousness and chosen chaos.” Hosts Graham Cluley and Mark Stockley also review an 18-month psychological study identifying five reasons why ChatGPT is a dangerously poor substitute for a human therapist. The show covers additional stories including Elon Musk’s robot ambitions, a debate deepfake, and real-world robot demos that raise safety and ethical questions.

read more →

Tue, November 4, 2025

Rise of AI-Powered Pharmaceutical Scams in Healthcare

🩺 Scammers are increasingly using AI and deepfake technology to impersonate licensed physicians and medical clinics, promoting counterfeit or unsafe medications online. These campaigns combine fraud, social engineering, and fabricated multimedia—photos, videos, and endorsements—to persuade victims to purchase and consume unapproved substances. The convergence of digital deception and physical harm elevates the risk beyond financial loss, exploiting the trust intrinsic to healthcare relationships.

read more →

Tue, November 4, 2025

Building an AI Champions Network for Enterprise Adoption

🤝 Getting an enterprise-grade generative AI platform in place is a milestone, not the finish line. Sustained, distributed adoption comes from embedding AI into everyday processes through an organized AI champions network that brings enablement close to the work. Champions act as multipliers — translating strategy into team behaviors, surfacing blockers and use cases, and accelerating normalized use. With structured onboarding, rotating membership, monthly working sessions, and direct ties to the core AI program, the network converts tool access into measurable business impact.

read more →

Mon, November 3, 2025

SesameOp Backdoor Uses OpenAI Assistants API Stealthily

🔐 Microsoft security researchers identified a new backdoor, SesameOp, which abuses the OpenAI Assistants API as a covert command-and-control channel. Discovered during a July 2025 investigation, the backdoor retrieves compressed, encrypted commands via the API, decrypts and executes them, and sends encrypted results back through the same channel. Microsoft and OpenAI disabled the abused account and key; recommended mitigations include auditing firewall logs, enabling tamper protection, and configuring endpoint detection in block mode.
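
The firewall-log mitigation follows directly from the C2 design: any machine reaching api.openai.com without a sanctioned AI workload deserves a look. A minimal sketch over a CSV-style log, with the column names and the approved-client list assumed for illustration:

```python
import csv

# Hosts with a sanctioned reason to call the OpenAI API (assumed for illustration).
APPROVED_OPENAI_CLIENTS = {"10.0.5.12"}

def audit_firewall_log(path: str):
    """Report internal sources contacting api.openai.com outside the approved set."""
    suspects = set()
    with open(path, newline="") as fh:
        # Assumed columns: src_ip, dest_host
        for row in csv.DictReader(fh):
            if (row["dest_host"] == "api.openai.com"
                    and row["src_ip"] not in APPROVED_OPENAI_CLIENTS):
                suspects.add(row["src_ip"])
    return sorted(suspects)
```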

read more →

Mon, November 3, 2025

OpenAI Aardvark: Autonomous GPT-5 Agent for Code Security

🛡️ OpenAI Aardvark is an autonomous GPT-5-based agent that scans, analyzes and patches code by emulating a human security researcher. Rather than only flagging suspicious patterns, it maps repositories, builds contextual threat models, validates findings in sandboxes and proposes fixes via Codex, then rechecks changes to prevent regressions. OpenAI reports it found 92% of benchmark vulnerabilities and has already identified real issues in open-source projects, offering free coordinated scanning for selected non-commercial repositories.
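
Going only by the stages described above, the agent's workflow can be pictured as a linear pipeline. The skeleton below is a conceptual sketch, not OpenAI's implementation; every helper is a hypothetical stub standing in for an LLM-driven step.

```python
# Conceptual pipeline mirroring the stages described for Aardvark.
# All helpers are hypothetical stubs.

def map_repository(repo):                     # stage 1: survey the codebase
    return {"repo": repo, "modules": []}

def build_threat_model(repo_map):             # stage 2: contextual threat model
    return {"assets": repo_map["modules"]}

def scan_for_candidates(repo, threat_model):  # stage 3: flag suspect code paths
    return []

def validate_in_sandbox(candidate):           # stage 4: confirm the bug actually triggers
    return False

def propose_fix(candidate):                   # stage 5: draft a patch (via Codex, per OpenAI)
    return "patch"

def recheck_for_regressions(repo, patch):     # stage 6: re-test before reporting
    return True

def review(repo):
    """Run the scan -> validate -> patch -> recheck loop and collect confirmed findings."""
    threat_model = build_threat_model(map_repository(repo))
    confirmed = []
    for candidate in scan_for_candidates(repo, threat_model):
        if validate_in_sandbox(candidate):
            patch = propose_fix(candidate)
            if recheck_for_regressions(repo, patch):
                confirmed.append((candidate, patch))
    return confirmed
```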

read more →

Mon, November 3, 2025

AI Summarization Optimization Reshapes Meeting Records

📝 AI notetakers are increasingly treated as authoritative meeting participants, and attendees are adapting speech to influence what appears in summaries. This practice—called AI summarization optimization (AISO)—uses cue phrases, repetition, timing, and formulaic framing to steer models toward including selected facts or action items. The essay outlines evidence of model vulnerability and recommends social, organizational, and technical defenses to preserve trustworthy records.
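
A first technical defense consistent with the essay's recommendations is screening transcripts for summary-steering language before summarization. A toy sketch, where the cue-phrase list and the "Name: text" transcript format are illustrative assumptions:

```python
# Illustrative AISO cue phrases; a real screen would need a broader, tuned list.
CUE_PHRASES = [
    "for the record",
    "the key takeaway is",
    "action item",
    "to summarize my point",
]

def aiso_score(transcript: str, speaker: str) -> int:
    """Count summary-steering cue phrases used by one speaker in a 'Name: text' transcript."""
    spoken = " ".join(
        line.split(":", 1)[1]
        for line in transcript.splitlines()
        if line.startswith(speaker + ":")
    ).lower()
    return sum(spoken.count(phrase) for phrase in CUE_PHRASES)

# Example: a score well above a speaker's baseline suggests deliberate steering.
demo = "Alice: For the record, the key takeaway is that we agreed.\nBob: Did we?"
print(aiso_score(demo, "Alice"))  # -> 2
```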

read more →

Mon, November 3, 2025

Generative AI Speeds XLoader Malware Analysis and Detection

🔍 Check Point Research applied generative AI to accelerate reverse engineering of XLoader 8.0, reducing days of manual work to hours. The models autonomously identified multi-layer encryption routines, decrypted obfuscated functions, and uncovered hidden command-and-control domains and fake infrastructure. Analysts were able to extract IoCs far more quickly and integrate them into defenses. The AI-assisted workflow delivered timelier, higher-fidelity threat intelligence and improved protection for users worldwide.

read more →

Mon, November 3, 2025

Anthropic Claude vulnerability exposes enterprise data

🔒 Security researcher Johann Rehberger demonstrated an indirect prompt‑injection technique that abuses Claude's Code Interpreter to exfiltrate corporate data. He showed that Claude can write sensitive chat histories and uploaded documents to the sandbox and then upload them via the Files API using an attacker's API key. The root cause is the default network egress setting, "Package managers only", which still allows access to api.anthropic.com. The available mitigations, disabling network access entirely or strict whitelisting, significantly reduce functionality.
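
The root cause generalizes: an egress allow-list is only as strong as its riskiest entry, and here the model vendor's own API doubles as an exfiltration channel. A minimal sketch of the policy trade-off, where every host except api.anthropic.com is an illustrative assumption:

```python
# Illustrative egress policies for a code-execution sandbox; only api.anthropic.com
# is taken from the report, the other hosts are assumptions.
POLICIES = {
    "package_managers_only": {
        "pypi.org", "files.pythonhosted.org", "registry.npmjs.org",
        "api.anthropic.com",  # the reported gap: the vendor API stays reachable
    },
    "strict_whitelist": {"pypi.org", "files.pythonhosted.org"},
    "no_network": set(),
}

def egress_allowed(policy: str, host: str) -> bool:
    """Return True if the sandbox's egress policy permits outbound traffic to host."""
    return host in POLICIES[policy]

assert egress_allowed("package_managers_only", "api.anthropic.com")  # exfil path open
assert not egress_allowed("strict_whitelist", "api.anthropic.com")   # gap closed
```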

read more →

Sat, November 1, 2025

OpenAI Eyes Memory-Based Ads for ChatGPT to Boost Revenue

📰 OpenAI is weighing memory-based advertising on ChatGPT as it looks to diversify revenue beyond subscriptions and enterprise deals. The company, valued near $500 billion, has about 800 million users but only ~5% pay, and paid customers generate the bulk of recent revenue. Internally the move is debated — focus groups suggest some users already assume sponsored answers — and the company is expanding cheaper Go plans and purchasable credits.

read more →