All news with the #ai security tag

Wed, September 17, 2025

Rethinking AI Data Security: A Practical Buyer's Guide

🛡️ Generative AI is now central to enterprise work, but rapid adoption has exposed gaps in legacy security models that were not designed for last‑mile behaviors. The piece argues buyers must reframe evaluations around real-world AI use — inside browsers and across sanctioned and shadow tools — and prioritize solutions offering real-time monitoring, contextual enforcement, and low‑friction deployment. It warns against blunt blocking and promotes nuanced controls such as redaction, just‑in‑time warnings, and conditional approvals to protect data while preserving productivity.
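The "nuanced controls" the piece favors over blunt blocking can be pictured as a small policy function. The sketch below is illustrative only (the patterns, action names, and sanctioned/unsanctioned split are assumptions, not any vendor's implementation): it redacts sensitive matches, attaches a just-in-time warning for sanctioned tools, and holds requests to unsanctioned tools for conditional approval.

```python
import re

# Hypothetical patterns a DLP-style control might treat as sensitive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def evaluate_prompt(text: str, destination_sanctioned: bool):
    """Return (action, text): redact sensitive matches, warn on
    sanctioned tools, hold unsanctioned ones for approval."""
    redacted, hits = text, []
    for label, pat in PATTERNS.items():
        if pat.search(redacted):
            hits.append(label)
            redacted = pat.sub(f"[REDACTED {label}]", redacted)
    if not hits:
        return ("allow", text)
    if destination_sanctioned:
        return ("warn-and-redact", redacted)   # just-in-time warning
    return ("hold-for-approval", redacted)     # conditional approval
```

The point of the shape, per the article, is that most requests pass through untouched and productivity is preserved; only sensitive content triggers the graduated responses.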

read more →

Wed, September 17, 2025

Quarter of UK and US Firms Hit by Data Poisoning Attacks

🛡️ New IO research reports that 26% of surveyed UK and US organisations have experienced data poisoning, and 37% observe employees using generative AI tools without permission. The third annual State of Information Security Report highlights rising concern around AI-generated phishing, misinformation, deepfakes and shadow AI. Despite the risks, most respondents say they feel prepared and are adopting acceptable use policies to curb unsanctioned tool use.

read more →

Wed, September 17, 2025

Deploying Agentic AI: A Five-Step Red-Teaming Guide

🛡️ Enterprises adopting agentic AI must update red‑teaming practices to address a rapidly expanding and interactive attack surface. The article summarizes the Cloud Security Alliance’s Agentic AI Red Teaming Guide and corroborating research that documents prompt injection, multi‑agent manipulation, and authorization hijacking as practical threats. It recommends five pragmatic steps—change attitude, continually test guardrails and governance, broaden red‑team skill sets, widen the solution space, and adopt modern tooling—and highlights open‑source and commercial tools such as AgentDojo and Agentgateway. The overall message: combine automated agents with human creativity, embed security in design, and treat agentic systems as sociotechnical operators rather than simple software.

read more →

Wed, September 17, 2025

CrowdStrike Launches Threat AI: Agentic Threat Intel

🔍 CrowdStrike unveiled Threat AI, described as the industry’s first agentic threat intelligence system, built on the Falcon platform to reason, hunt, and act across adversary activity. The initial agents — a Malware Analysis Agent and a Hunt Agent — automate complex workflows like reversing, classification, retrohunting, and continuous threat hunting to surface actionable recommendations. CrowdStrike also released a Threat Intelligence Browser Extension for Chrome to provide intelligence in analysts’ workflows, aiming to accelerate investigations and help SOCs respond at machine speed.

read more →

Wed, September 17, 2025

CrowdStrike Secures AI Across the Enterprise with Partners

🔒 CrowdStrike describes how the Falcon platform delivers unified visibility and lifecycle defense across the full AI stack, from GPUs and training data to inference pipelines and SaaS agents. The post highlights integrations with NVIDIA, AWS, Intel, Dell, Meta, and Salesforce to extend protection into infrastructure, data, models, and applications. It also introduces agentic defense via Charlotte AI for autonomous triage and rapid response, and emphasizes governance controls to prevent data leaks and adversarial manipulation.

read more →

Wed, September 17, 2025

OWASP LLM AI Cybersecurity and Governance Checklist

🔒 OWASP has published an LLM AI Cybersecurity & Governance Checklist to help executives and security teams identify core risks from generative AI and large language models. The guidance categorises threats and recommends a six-step strategy covering adversarial risk, threat modeling, inventory and training. It also highlights TEVV, model and risk cards, RAG, supplier audits and AI red‑teaming to validate controls. Organisations should pair these measures with legal and regulatory reviews and clear governance.

read more →

Tue, September 16, 2025

Microsoft Purview Updates for Fabric: Securing Data for AI

🔒 Microsoft announced Purview innovations for Fabric at FabCon to unify discovery, protection, and governance across Azure, Microsoft 365, and Microsoft Fabric. New generally available controls include Information Protection policies for Fabric items, DLP for structured data in OneLake, and Insider Risk Management for Fabric. Preview features add DSPM data risk assessments and enhanced Copilot controls, while the Unified Catalog gains finer metadata, tagging, and data‑quality workflows to improve discoverability and trust.

read more →

Tue, September 16, 2025

Villager: Chinese AI Pen-Testing Tool Nears 11,000 PyPI Downloads

🧭 Villager, an AI-native penetration testing framework developed by Chinese group Cyberspike, has reached nearly 11,000 downloads on PyPI just two months after release. The tool integrates Kali Linux utilities with DeepSeek AI models and operates as a Model Context Protocol (MCP) client to automate red team workflows. Researchers at Straiker reported that Villager can spin up on-demand Kali containers, automate browser testing, use a database of more than 4,200 prompts for decision-making, and deploy self-destructing containers — features that lower the barrier to sophisticated attacks and raise concerns about dual-use abuse.

read more →

Tue, September 16, 2025

Check Point to Acquire Lakera, Expanding AI Security

🚀 Check Point is acquiring Lakera to build a comprehensive AI security stack for enterprises adopting generative and AI-driven applications. The move aims to protect the emerging AI attack surface by combining Check Point's security platform with Lakera's AI threat analysis and model-protection capabilities. Customers should expect integrated defenses for models, data, and pipelines, increased visibility into model behavior, and tools for managing model risk and compliance.

read more →

Tue, September 16, 2025

The AI Fix — Episode 68: Merch, Hoaxes and AI Rights

🎧 In episode 68 of The AI Fix, hosts Graham Cluley and Mark Stockley blend news, commentary and light-hearted banter while launching a new merch store. The discussion covers real-world harms from AI-generated hoaxes that sent Manila firefighters to a non-existent fire, Albania appointing an AI-made minister, and reports of the so-called 'godfather of AI' being spurned by ChatGPT. They also explore wearable telepathic interfaces like AlterEgo, the rise of AI rights advocacy, and listener support options including ad-free subscriptions and merch purchases.

read more →

Tue, September 16, 2025

Google Announces AP2: Protocol for Agent-Led Payments

🤖 Google introduced the Agent Payments Protocol (AP2), an open standard developed with more than 60 payments and technology firms to enable secure, agent-initiated transactions across platforms. AP2 extends A2A and MCP, using cryptographically signed Mandates and verifiable credentials to prove authorization, ensure authenticity, and provide a non-repudiable audit trail. The protocol supports cards, real-time bank transfers, and crypto.
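The sign-then-verify flow behind a Mandate can be sketched in a few lines. This is a toy, not the AP2 spec: HMAC over a canonical JSON serialization stands in for the asymmetric, verifiable-credential signatures the protocol actually calls for, and the field names are invented.

```python
import hashlib
import hmac
import json

def sign_mandate(mandate: dict, key: bytes) -> dict:
    """Serialize a mandate deterministically and attach a signature,
    giving a tamper-evident record of what the user authorized."""
    payload = json.dumps(mandate, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"mandate": mandate, "signature": sig}

def verify_mandate(signed: dict, key: bytes) -> bool:
    """Recompute the signature; any edit to the mandate fails the check."""
    payload = json.dumps(signed["mandate"], sort_keys=True,
                         separators=(",", ":")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["signature"], expected)
```

Deterministic serialization (sorted keys, fixed separators) matters: without it, the same mandate could produce different byte strings and spurious verification failures.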

read more →

Tue, September 16, 2025

Villager: AI-Native Red-Teaming Tool Raises Alarms

⚠ Villager is an AI-native red-teaming framework from a shadowy Chinese developer, Cyberspike, that has been downloaded more than 10,000 times in roughly two months. The tool automates reconnaissance, exploitation, payload generation, and lateral movement in a single pipeline, integrating Kali toolsets with DeepSeek AI models, and is published on PyPI. Security firms warn the automation compresses days of skilled activity into minutes, creating dual-use risks for both legitimate testers and malicious actors and raising supply-chain and detection concerns.

read more →

Tue, September 16, 2025

HMRC Tax Refund Phishing Reports Decline Sharply in 2025

📉 Bridewell's analysis of FOI data shows a marked fall in HMRC-impersonation phishing reports in the first half of 2025, with 41,202 incidents versus 102,226 in 2024 and 152,995 in 2023. Email-based attacks drove most of the decline while SMS phishing rose. The firm warns AI-enhanced social engineering is increasing and advises users to pause, avoid suspicious links and verify communications via official channels.

read more →

Tue, September 16, 2025

AI-Powered ZTNA Protects the Hybrid Future with Agility

🔒 Enterprises face a paradox: AI promises intelligent, automated access control, but hybrid complexity and legacy systems are blocking adoption. Teams report being buried in manual policy creation, vendor integrations and constant firefighting despite mature platforms like Palo Alto Networks, Netskope and Zscaler. AI-driven ZTNA shifts the model from policy-first to behavior-first, building behavioral baselines that generate context-aware policies and can wrap legacy apps without invasive changes. Success requires operational bandwidth, reliable data and a mindset shift to treat access control as a business enabler rather than a compliance burden.
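The policy-first to behavior-first shift the piece describes can be reduced to a toy model (illustrative only; class and threshold are invented): instead of hand-writing rules, the system observes which user-app pairs are routine and flags requests that fall outside that baseline.

```python
from collections import Counter

class BehaviorBaseline:
    """Toy behavior-first check: learn a user's typical access
    pattern, then flag requests that deviate from it."""

    def __init__(self):
        self.seen = Counter()  # (user, app) -> observation count

    def observe(self, user: str, app: str) -> None:
        """Record one legitimate access during the learning period."""
        self.seen[(user, app)] += 1

    def is_anomalous(self, user: str, app: str, min_count: int = 3) -> bool:
        """A pair seen fewer than min_count times is out-of-baseline."""
        return self.seen[(user, app)] < min_count
```

A real deployment would weigh device, location, and time-of-day signals and feed anomalies into policy generation; the sketch only shows why no manual rule list is needed for the common case.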

read more →

Tue, September 16, 2025

Securing the Agentic Era: Astrix's Agent Control Plane

🔒 Astrix introduces the industry's first Agent Control Plane (ACP) to enable secure-by-design deployment of autonomous AI agents across the enterprise. ACP issues short-lived, precisely scoped credentials and enforces just-in-time, least-privilege access while centralizing inventory and activity trails. The platform streamlines policy-driven approvals for developers, speeds audits for security teams, and reduces compliance and operational risk by discovering non-human identities (NHIs) and remediating excessive privileges in real time.

read more →

Tue, September 16, 2025

CISOs Assess Practical Limits of AI for Security Ops

🤖 Security leaders report early wins from AI in detection, triage, and automation, but emphasize limits and oversight. Prioritizing high-value telemetry for real-time detection while moving lower-priority logs to data lakes improves signal-to-noise and shortens response times, according to Myke Lyons. Financial firms are experimenting with agentic AI to block business email compromise in real time, yet researchers and practitioners warn of missed detections and 'ghost alerts.' Organizations that treat AI as a copilot with governance, explainability, and institutional context see more reliable, safer outcomes.

read more →

Tue, September 16, 2025

CrowdStrike Falcon: Building an Agentic Security Platform

🚀 The CrowdStrike Falcon fall release reframes the platform as an Agentic Security Platform, introducing four core innovations: Enterprise Graph, Charlotte AI AgentWorks, the Agent Collaboration framework (powered by MCP), and an AI-native console. Enterprise Graph unifies telemetry into a real-time, AI-ready data layer to give humans and agents shared context. Charlotte AI AgentWorks delivers a no-code environment to design, test, deploy, and govern mission-specific security agents at scale, while MCP enables secure, orchestrated multi-agent collaboration.

read more →

Tue, September 16, 2025

CrowdStrike to Acquire Pangea to Secure Enterprise AI

🔒 CrowdStrike announced its intent to acquire Pangea to deliver the industry’s first AI detection and response (AIDR) capability, securing enterprise AI use and development across data, models, agents, identities, infrastructure, and interactions. Unveiled at Fal.Con 2025 by Michael Sentonas, the deal will integrate Pangea’s prompt‑layer and interaction security with the Falcon platform to provide unified visibility, governance, and enforcement across the AI lifecycle. The combined solution targets prompt injection, model manipulation, shadow AI and sensitive data exfiltration while enabling developers and security teams to innovate faster with built‑in safeguards.

read more →

Mon, September 15, 2025

Code Assistant Risks: Indirect Prompt Injection and Misuse

🛡️ Unit 42 describes how IDE-integrated AI code assistants can be abused to insert backdoors, leak secrets, or produce harmful output by exploiting features like chat, auto-complete, and context attachment. The report highlights an indirect prompt injection vector where attackers contaminate public or third‑party data sources; when that data is attached as context, malicious instructions can hijack the assistant. It recommends reviewing generated code, controlling attached context, adopting standard LLM security practices, and contacting Unit 42 if compromise is suspected.
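One mitigation in the spirit of "controlling attached context" is to screen third-party data before the assistant sees it. The sketch below is a crude heuristic, not Unit 42's guidance; real defenses combine provenance checks, content isolation, and human review, and the patterns here are illustrative.

```python
import re

# Instruction-like phrases that have no business appearing in data
# attached as context (patterns are illustrative, not exhaustive).
SUSPECT = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"(run|execute) .* (curl|wget|powershell)", re.I),
    re.compile(r"exfiltrate|send .* to http", re.I),
]

def screen_context(chunks: list[str]) -> tuple[list[str], list[str]]:
    """Partition attached context into clean and flagged chunks
    before any of it reaches the code assistant."""
    clean, flagged = [], []
    for chunk in chunks:
        target = flagged if any(p.search(chunk) for p in SUSPECT) else clean
        target.append(chunk)
    return clean, flagged
```

A determined attacker can phrase injected instructions to evade keyword lists, which is why the report's other recommendations, reviewing generated code and limiting what gets attached at all, still apply.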

read more →

Mon, September 15, 2025

Amazon GuardDuty Protection Plans and Threat Detection

🔐 Amazon GuardDuty centralizes continuous threat detection across AWS using AI/ML and integrated threat intelligence. It offers optional protection plans—S3, EKS, Runtime Monitoring, Malware Protection for EC2 and S3, RDS, and Lambda—that extend detections to service-specific telemetry and runtime behaviors. Built-in Extended Threat Detection correlates signals into high-confidence attack sequences and maps findings to MITRE ATT&CK, providing prioritized remediation guidance.

read more →