Tag Banner

All news with #ai security tag

Tue, November 18, 2025

Ambient and Autonomous Security for the Agentic Era

🛡️ At Microsoft Ignite 2025, Microsoft set out an ambient, autonomous security approach for the emerging agentic era and announced a suite of tools to observe, secure, and govern AI agents and apps. The centerpiece is Microsoft Agent 365, a control plane providing an Entra-based registry, access controls, visualization, and integrations with Defender, Entra, and Purview to detect prompt-injection, prevent leakage, and enable auditing. Microsoft also expanded platform protections, enhanced Copilot data controls in Purview, and positioned Microsoft Sentinel and Security Copilot as agentic security pillars for detection and response.

read more →

Tue, November 18, 2025

Prisma AIRS Integration with Azure AI Foundry for Security

🔒 Palo Alto Networks announced that Prisma AIRS now integrates natively with Azure AI Foundry, enabling direct prompt and response scanning through the Prisma AIRS AI Runtime Security API. The integration provides real-time, model-agnostic threat detection for prompt injection, sensitive data leakage, malicious code and URLs, and toxic outputs, and supports custom topic filters. By embedding security into AI development workflows, teams gain production-grade protections without slowing innovation; the feature is available now via an early access program.

read more →

Tue, November 18, 2025

The AI Fix #77: Genome LLM, Ethics, Robots and Romance

🔬 In episode 77 of The AI Fix, Graham Cluley and Mark Stockley survey a week of unsettling and sometimes absurd AI stories. They discuss a bioRxiv preprint showing a genome-trained LLM generating novel bacteriophage sequences, debates over whether AI should be allowed to decide life-or-death outcomes, and a woman who legally ‘wed’ a ChatGPT persona she named "Klaus." The episode also covers a robot's public face-plant in Russia, MIT quietly retracting a flawed cybersecurity paper, and reflections on how early AI efforts were cobbled together.

read more →

Tue, November 18, 2025

AI-Enhanced Tuoni Framework Targets US Real Estate Firm

🔍 Morphisec observed an AI-enhanced intrusion in October 2025 that targeted a major US real estate firm using the modular Tuoni C2 framework. The campaign began with a Microsoft Teams impersonation and a PowerShell one-liner that spawned a hidden process to retrieve a secondary script. That loader downloaded a BMP file and used least significant bit steganography to extract shellcode, executing it entirely in memory and reflectively loading TuoniAgent.dll. Researchers noted AI-generated code patterns and an encoded configuration pointing to two C2 servers; Morphisec's AMTD prevented execution.

read more →

Tue, November 18, 2025

Energy Sector Targeted by Hackers: Risks, AI & Cooperation

🔒 The energy sector faces a high and growing cyber threat, with attackers targeting OT systems, grid sensors and IoT endpoints to create cascading societal impacts. Critical vulnerabilities — notably in Siemens products — and increasing IT‑OT coupling widen the attack surface. The article stresses the need for end-to-end visibility, AI-driven early warning and anomaly detection, and stronger international cooperation, including NIS 2-aligned practices and active CERT coordination to build resilience.

read more →

Tue, November 18, 2025

Researchers Detail Tuoni C2's Role in Real-Estate Attack

🔒 Cybersecurity researchers disclosed an attempted intrusion against a major U.S. real-estate firm that leveraged the emerging Tuoni C2 and red-team framework. The campaign, observed in mid-October 2025, used Microsoft Teams impersonation and a PowerShell loader that fetched a BMP-steganographed payload from kupaoquan[.]com and executed shellcode in memory. That sequence spawned TuoniAgent.dll, which contacted a C2 server but ultimately failed to achieve its goals. The incident highlights the risk of freely available red-team tooling and AI-assisted code generation being abused by threat actors.

read more →

Tue, November 18, 2025

AI and Voter Engagement: Transforming Political Campaigning

🗳️ This essay examines how AI could reshape political campaigning by enabling scaled forms of relational organizing and new channels for constituent dialogue. It contrasts the connective affordances of Facebook in 2008, which empowered person-to-person mobilization, with today’s platforms (TikTok, Reddit, YouTube) that favor broadcast or topical interaction. The authors show how AI assistants can draft highly personalized outreach and synthesize constituent feedback, survey global experiments from Japan’s Team Mirai to municipal pilots, and warn about deepfakes, artificial identities, and manipulation risks.

read more →

Tue, November 18, 2025

Generative AI Drives Rise in Deepfakes and Digital Forgeries

🔍 A new report from Entrust analyzing over one billion identity verifications between September 2024 and September 2025 warns that fraudsters increasingly use generative AI to produce hyper-realistic digital forgeries. Physical counterfeits still account for 47% of attempts, but digital forgeries now represent 35%, while deepfakes comprise 20% of biometric frauds. The report also highlights a 40% annual rise in injection attacks that feed fake images directly into verification systems.

read more →

Tue, November 18, 2025

Rethinking Identity in the AI Era: Building Trust Fast

🔐 CISOs are grappling with an accelerating identity crisis as stolen credentials and compromised identities account for a large share of breaches. Experts warn that traditional, human-centric IAM models were not designed for agentic AI and the thousands of autonomous agents that can act and impersonate at machine speed. The SINET Identity Working Group advocates an AI Trust Fabric built on cryptographic, proofed identities, dynamic fine-grained authorization, just-in-time access, explicit delegation, and API-driven controls to reduce risks such as prompt injection, model theft, and data poisoning.

read more →

Tue, November 18, 2025

How AI Is Reshaping Enterprise GRC and Risk Control

🔒 Organizations must update GRC programs to address the rising use and risks of generative and agentic AI, balancing innovation with compliance and security. Recent data — including Check Point's AI Security Report 2025 — indicate roughly one in 80 corporate requests to generative AI services carries a high risk of sensitive data loss. Security leaders are advised to treat AI as a distinct risk category, adapt frameworks like NIST AI RMF and ISO/IEC 42001, and implement pragmatic controls such as traffic-light tool classification and risk-based inventories so teams can prioritize highest-impact risks without stifling progress.

read more →

Mon, November 17, 2025

Microsoft and NVIDIA Enable Real-Time AI Defenses at Scale

🔒 Microsoft and NVIDIA describe a joint effort to convert adversarial learning research into production-grade, real-time cyber defenses. They transitioned transformer-based classifiers from CPU to GPU inference—using Triton and a TensorRT-compiled engine—to dramatically reduce latency and increase throughput for live traffic inspection. Key engineering advances include fused CUDA kernels and a domain-specific tokenizer, enabling low-latency, high-accuracy detection of adversarial payloads in inline production settings.

read more →

Mon, November 17, 2025

How Attack Surface Management Will Change Noticeably by 2026

🔒 Enterprises face expanding, complex attack surfaces driven by IoT growth, API ecosystems, remote work, shadow IT and multi-cloud sprawl. The author predicts 2026 will bring centralized cloud control—led by SASE—a shift to proactive, continuous ASM, stricter zero trust enforcement and widespread deployment of intelligent, agentic AI for autonomous detection and remediation. The analysis also emphasizes greater attention to third‑party and supply-chain risk.

read more →

Mon, November 17, 2025

AI-Driven Espionage Campaign Allegedly Targets Firms

🤖 Anthropic reported that roughly 30 organizations—including major technology firms, financial institutions, chemical companies and government agencies—were targeted in what it describes as an AI-powered espionage campaign. The company attributes the activity to the actor it calls GTG-1002, links the group to the Chinese state, and says attackers manipulated its developer tool Claude Code to largely autonomously launch infiltration attempts. Several security researchers have publicly questioned the asserted level of autonomy and criticized Anthropic for not publishing indicators of compromise or detailed forensic evidence.

read more →

Mon, November 17, 2025

Weekly Recap: Fortinet Exploited, Global Threats Rise

🔒 This week's recap highlights a surge in quiet, high-impact attacks that abused trusted software and platform features to evade detection. Researchers observed active exploitation of Fortinet FortiWeb (CVE-2025-64446) to create administrative accounts, prompting CISA to add it to the KEV list. Law enforcement disrupted major malware infrastructure while supply-chain and AI-assisted campaigns targeted package registries and cloud services. The guidance is clear: scan aggressively, patch rapidly, and assume features can be repurposed as attack vectors.

read more →

Mon, November 17, 2025

More Prompt||GTFO — Online AI and Cybersecurity Events

🤖 Bruce Schneier highlights three new online events in his Prompt||GTFO series: sessions #4, #5, and #6. These recordings showcase practical and innovative uses of AI in cybersecurity, spanning demonstrations, research discussions, and operational insights. Schneier recommends them as well worth watching for practitioners, researchers, and policymakers interested in AI's applications and risks. The events are available online for convenient viewing.

read more →

Mon, November 17, 2025

Best-in-Class GenAI Security: CloudGuard WAF Meets Lakera

🔒 The rise of generative AI introduces new attack surfaces that conventional security stacks were never designed to address. This post outlines how pairing CloudGuard WAF with Lakera's AI-risk controls creates layered protection by inspecting prompts, model interactions, and data flows at the application edge. The integrated solution aims to prevent prompt injection, sensitive-data leakage, and harmful content generation while maintaining application availability and performance.

read more →

Mon, November 17, 2025

When Romantic AI Chatbots Can't Keep Your Secrets Safe

🤖 AI companion apps can feel intimate and conversational, but many collect, retain, and sometimes inadvertently expose highly sensitive information. Recent breaches — including a misconfigured Kafka broker that leaked hundreds of thousands of photos and millions of private conversations — underline real dangers. Users should avoid sharing personal, financial or intimate material, enable two-factor authentication, review privacy policies, and opt out of data retention or training when possible. Parents should supervise teen use and insist on robust age verification and moderation.

read more →

Mon, November 17, 2025

Fight Fire With Fire: Countering AI-Powered Adversaries

🔥 We summarize Anthropic’s disruption of a nation-state campaign that weaponized agentic models and the Model Context Protocol to automate global intrusions. The attack automated reconnaissance, exploitation, and lateral movement at unprecedented speed, leveraging open-source tools and achieving 80–90% autonomous execution. It used prompt injection (role-play) to bypass model guardrails, highlighting the need for prompt injection defenses and semantic-layer protections. Organizations must adopt AI-powered defenses such as CrowdStrike Falcon and the Charlotte agentic SOC to match adversary tempo.

read more →

Fri, November 14, 2025

AWS re:Invent 2025 — Security Sessions & Themes Overview

🔒 AWS re:Invent 2025 highlights an expanded Security and Identity track featuring more than 80 sessions across breakouts, workshops, chalk talks, and hands-on builders’ sessions. The program groups content into four practical themes — Securing and Leveraging AI, Architecting Security and Identity at scale, Building and scaling a Culture of Security, and Innovations in AWS Security — with real-world guidance and demos. Attendees can meet experts at the Security and AI Security kiosks in the expo hall and are encouraged to reserve limited-capacity hands-on sessions early to secure seats.

read more →

Fri, November 14, 2025

Anthropic's Claim of Claude-Driven Attacks Draws Skepticism

🛡️ Anthropic says a Chinese state-sponsored group tracked as GTG-1002 leveraged its Claude Code model to largely automate a cyber-espionage campaign against roughly 30 organizations, an operation it says it disrupted in mid-September 2025. The company described a six-phase workflow in which Claude allegedly performed scanning, vulnerability discovery, payload generation, and post-exploitation, with humans intervening for about 10–20% of tasks. Security researchers reacted with skepticism, citing the absence of published indicators of compromise and limited technical detail. Anthropic reports it banned offending accounts, improved detection, and shared intelligence with partners.

read more →