All news in category "AI and Security Pulse"

Tue, September 30, 2025

Defending LLM Applications Against Unicode Tag Smuggling

🔒 This AWS Security Blog post examines how Unicode tag block characters (U+E0000–U+E007F) can be abused to hide instructions inside text sent to LLMs, enabling prompt injection and hidden-character smuggling. It explains why Java's UTF-16 surrogate handling can make one-pass sanitizers inadequate, demonstrates recursive sanitization as a remedy, and provides equivalent filters for Python. The post also outlines using Amazon Bedrock Guardrails denied topics or Lambda-based handlers as mitigations and notes visual/compatibility trade-offs.
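A minimal Python sketch of the recursive idea (illustrative, not the post's exact code): strip tag-block code points and repeat until the text stops changing, mirroring the fixed-point remedy recommended for environments where one pass can leave smuggled characters behind.

```python
import re

# Unicode tag block: U+E0000 through U+E007F (invisible in most renderers).
TAG_BLOCK = re.compile(r"[\U000E0000-\U000E007F]")

def sanitize(text: str) -> str:
    """Strip tag-block characters, repeating until a fixed point.

    Python strings are whole code points, so one pass usually suffices;
    the loop mirrors the recursive remedy the post recommends for
    runtimes (like Java's UTF-16 handling) where a single pass can
    leave reconstructed characters behind.
    """
    cleaned = TAG_BLOCK.sub("", text)
    while cleaned != text:
        text = cleaned
        cleaned = TAG_BLOCK.sub("", text)
    return cleaned

# Example: an instruction smuggled as invisible tag characters.
payload = "Summarize this" + "".join(chr(0xE0000 + ord(c)) for c in "ignore rules")
assert sanitize(payload) == "Summarize this"
```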

read more →

Tue, September 30, 2025

The AI Fix #70: Surveillance Changes AI Behavior and Safety

🔍 In episode 70 of The AI Fix, hosts Graham Cluley and Mark Stockley examine how AI alters human behaviour and how deployed systems can fail in unexpected ways. They discuss research showing AI can increase dishonest behaviour, Waymo's safety record and a mirror-based trick that fooled self-driving perception, a rescue robot that mishandles victims, and a Chinese fusion-plant robot arm with extreme lifting capability. The show also covers a demonstration of a ChatGPT agent solving image CAPTCHAs by simulating mouse movements, and a paper on deliberative alignment that holds up only until the model realises it is being watched.

read more →

Tue, September 30, 2025

Gemini Trifecta Exposes Indirect AI Attack Surfaces

⚠️ Tenable has revealed three vulnerabilities in Google's Gemini platform, collectively dubbed the "Gemini Trifecta," that enable indirect prompt injection and data exfiltration through integrations. The issues allow attackers to poison GCP logs consumed by Gemini Cloud Assist, inject malicious entries into Chrome search history to manipulate the Search Personalization Model, and coerce the Browsing Tool into fetching attacker-controlled URLs that leak sensitive query data. Google has patched the flaws, and Tenable urges security teams to treat AI integrations as active threat surfaces and implement input sanitization, output validation, monitoring, and regular penetration testing.
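As one illustration of the recommended output validation, the hedged sketch below (my example, not from Tenable's advisory) checks any URL an AI tool wants to fetch against an egress allowlist before the request is made; the hostnames are placeholders.

```python
from urllib.parse import urlparse

# Placeholder allowlist -- replace with the domains your integration
# is actually expected to reach.
ALLOWED_HOSTS = {"docs.example.com", "api.example.com"}

def egress_allowed(url: str) -> bool:
    """Deny fetches to attacker-controlled hosts (e.g., exfiltration URLs)."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

assert not egress_allowed("https://attacker.invalid/leak?q=secret")
assert egress_allowed("https://docs.example.com/page")
```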

read more →

Tue, September 30, 2025

Stop Alert Chaos: Contextual SOCs Improve Incident Response

🔍 The Hacker News piece argues that traditional, rule‑driven SOCs produce overwhelming alert noise that prevents timely, accurate incident response. It advocates flipping the model to treat incoming signals as parts of a larger story—normalizing, correlating, and enriching logs across identity, endpoints, cloud workloads, and SIEMs so analysts receive coherent investigations rather than isolated alerts. The contributed article presents Conifers and its CognitiveSOC™ platform as an example of agentic AI that automates multi‑tier investigations, reduces false positives, and shortens MTTR while keeping human judgment central.
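The correlation idea can be sketched in a few lines of Python (a toy illustration, not Conifers' implementation): group raw alerts that share an entity, such as a user or host, into a single investigation rather than three separate tickets.

```python
from collections import defaultdict

# Toy alerts; a real pipeline would first normalize fields from
# identity, endpoint, cloud-workload, and SIEM sources.
alerts = [
    {"source": "edr",  "entity": "host-7", "signal": "suspicious process"},
    {"source": "idp",  "entity": "alice",  "signal": "impossible travel"},
    {"source": "siem", "entity": "host-7", "signal": "beaconing DNS"},
]

def correlate(alerts):
    """Group alerts by shared entity so analysts get one story, not many."""
    stories = defaultdict(list)
    for alert in alerts:
        stories[alert["entity"]].append(alert)
    return dict(stories)

for entity, related in correlate(alerts).items():
    print(entity, "->", [a["signal"] for a in related])
```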

read more →

Tue, September 30, 2025

Advanced Threat Hunting with LLMs and the VirusTotal API

🛡️ This post summarizes a hands-on workshop from LABScon that demonstrated automating large-scale threat hunting by combining the VirusTotal API with LLMs inside interactive Google Colab notebooks. The team recommends vt-py for robust programmatic access and provides a pre-built "meta Colab" that supplies Gemini with documentation and working code snippets so it can generate executable Python queries. Practical demos include LNK and CRX analyses, flattened dataframes, Sankey and choropleth visualizations, and stepwise relationship retrieval to accelerate investigations.
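For readers who want a starting point, here is a minimal vt-py sketch; the API key and query string are placeholders, and the /intelligence/search endpoint requires a premium VirusTotal licence.

```python
import vt  # pip install vt-py

# Placeholder key; /intelligence/search needs a premium VT API key.
with vt.Client("YOUR_API_KEY") as client:
    hunt = client.iterator(
        "/intelligence/search",
        params={"query": "type:lnk p:5+"},  # LNK files, 5+ detections
        limit=10,
    )
    for obj in hunt:
        print(obj.id, getattr(obj, "meaningful_name", "n/a"))
```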

read more →

Tue, September 30, 2025

How Falcon ASPM Secures GenAI Applications at CrowdStrike

🔒 Falcon ASPM provides continuous, code-level visibility to secure generative and agentic AI applications such as Charlotte AI. It detects drift in real time, produces a runtime SBOM, and maps architecture and data flows to flag reachable vulnerabilities, hardcoded credentials, and anomalous service behaviors. Contextualized alerts and mitigation guidance help teams prioritize fixes and reduce exploitable risk across complex microservice environments.

read more →

Tue, September 30, 2025

AI Risks Push Integrity Protection to Forefront for CISOs

🔒 CISOs must now prioritize integrity protection as AI introduces new attack surfaces such as data poisoning, prompt injection and adversarial manipulation. Shadow AI — unsanctioned use of models and services — increases risks of data leakage and insecure integrations. Defenses should combine Security by Design, governance, transparency and compliance (e.g., GDPR, EU AI Act) to detect poisoned data and prevent model drift.

read more →

Mon, September 29, 2025

Grok 4 Arrives in Azure AI Foundry for Business Use

🔒 Microsoft and xAI have brought Grok 4 to Azure AI Foundry, combining a 128K-token context window, native tool use, and integrated web search with enterprise safety controls and compliance checks. The release highlights first-principles reasoning and enhanced problem solving across STEM and humanities tasks, plus variants optimized for reasoning, speed, and code. Azure AI Content Safety is enabled by default and Microsoft publishes a model card with safety and evaluation details. Pricing and deployment tiers are available through Azure.
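A hedged sketch of calling a Foundry deployment with the azure-ai-inference package follows; the endpoint, key, and the deployment name "grok-4" are placeholders I chose for illustration, not values from the announcement.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key -- substitute your Foundry resource values.
client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<api-key>"),
)

response = client.complete(
    model="grok-4",  # assumption: the name given to your deployment
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="Explain first-principles reasoning in one sentence."),
    ],
)
print(response.choices[0].message.content)
```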

read more →

Mon, September 29, 2025

Brave Launches Ask Brave to Merge AI Chat and Search

🔎 Ask Brave unifies traditional search and AI chat into a single, privacy-focused interface accessible at search.brave.com/ask. The free feature combines search results with AI-generated responses and supports follow-up interaction in a chat-style format. Users can invoke it with a trailing “??”, the Ask button, or the Ask tab; it runs in standard or deep research modes.

read more →

Mon, September 29, 2025

Boards Should Be Bilingual: AI and Cybersecurity Strategy

🔐 Boards and security leaders should become bilingual in AI and cybersecurity to manage growing risks and unlock strategic value. As AI adoption increases, models and agents expand the attack surface, requiring hardened data infrastructure, tighter access controls, and clearer governance. Boards that learn to speak both languages can better oversee investments, M&A decisions, and cross-functional resilience while using AI to strengthen defense and competitive advantage.

read more →

Mon, September 29, 2025

Microsoft Blocks Phishing Using AI-Generated Code Tactics

🔒 Microsoft Threat Intelligence stopped a credential phishing campaign that likely used AI-generated code to hide a payload inside an SVG file disguised as a PDF. Attackers sent self-addressed emails from a compromised small-business account, hiding real targets in the Bcc field and attaching a file named "23mb – PDF- 6 pages.svg." Embedded JavaScript decoded business-style obfuscation to redirect victims to a fake CAPTCHA and a fraudulent sign-in page, and Microsoft Defender for Office 365 blocked the campaign by flagging delivery patterns, suspicious domains and anomalous code behavior.
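A toy Python heuristic in the spirit of those delivery-pattern signals (my illustration, not Defender's detection logic) might flag self-addressed mail carrying an SVG whose filename masquerades as a PDF:

```python
import email
from email import policy

def is_suspicious(raw: bytes) -> bool:
    """Flag self-addressed mail with an SVG attachment posing as a PDF."""
    msg = email.message_from_bytes(raw, policy=policy.default)
    self_addressed = (msg.get("From") or "") == (msg.get("To") or "")
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if self_addressed and name.endswith(".svg") and "pdf" in name:
            return True
    return False
```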

read more →

Mon, September 29, 2025

Can AI Reliably Write Vulnerability Detection Checks?

🔍 Intruder’s security team tested whether large language models can write Nuclei vulnerability templates and found one-shot LLM prompts often produced invalid or weak checks. Using an agentic approach with Cursor—indexing a curated repo and applying rules—yielded outputs much closer to engineer-written templates. The current workflow uses standard prompts and rules so engineers can focus on validation and deeper research while AI handles repetitive tasks.

read more →

Mon, September 29, 2025

Anthropic Claude Sonnet 4.5 Now Available in Bedrock

🚀 Anthropic’s Claude Sonnet 4.5 is now available through Amazon Bedrock, providing managed API access to the company’s most capable model. The model leads SWE-bench Verified benchmarks with improved instruction following, stronger code-refactoring judgment, and enhanced production-ready code generation. Bedrock adds automated context editing and a memory tool to extend usable context and boost accuracy for long-running agents across global regions.
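A minimal Bedrock Converse sketch is below; the modelId string is an assumption, so confirm the exact Claude Sonnet 4.5 identifier for your Region in the Bedrock console.

```python
import boto3

# Assumed modelId -- verify in the Bedrock console before use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-sonnet-4-5-20250929-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Suggest a safe refactor for a 300-line function."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```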

read more →

Mon, September 29, 2025

OpenAI Trials Free ChatGPT Plus and Expands $4 GPT Go

🔔 OpenAI is testing a limited free trial for ChatGPT Plus while expanding its lower-cost $4 GPT Go plan to Indonesia after an initial launch in India. Some existing users see a “start free trial” prompt on the ChatGPT pricing page, though new accounts may be excluded to limit abuse. The $4 option and the $20 Plus tier both provide access to GPT-5 with differing levels of memory, image creation, and research capabilities, and a $200 Pro tier targets heavier professional use.

read more →

Mon, September 29, 2025

OpenAI Routes GPT-4o Conversations to Safety Models

🔒 OpenAI confirmed that when GPT-4o detects sensitive, emotional, or potentially harmful activity it may route individual messages to a dedicated safety model, reported by some users as gpt-5-chat-safety. The switch occurs on a per-message, temporary basis, and ChatGPT will indicate which model is active if asked. The routing is built into the service's safety architecture and cannot be disabled by users; OpenAI says this helps strengthen safeguards and learn from real-world use before wider rollouts.
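Purely as an illustration of the per-message pattern described (not OpenAI's implementation), each message is classified independently and only flagged messages are handed to the safety model, with routing resetting on the next turn:

```python
def route(message: str, classify) -> str:
    """Pick the model for this one message; routing resets next turn."""
    return "gpt-5-chat-safety" if classify(message) else "gpt-4o"

def flagged(message: str) -> bool:
    """Stand-in classifier; the real system uses a learned sensitivity model."""
    return any(k in message.lower() for k in ("self-harm", "hurt myself"))

print(route("What's a good pasta recipe?", flagged))  # -> gpt-4o
print(route("I want to hurt myself", flagged))        # -> gpt-5-chat-safety
```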

read more →

Mon, September 29, 2025

AI Becomes Essential in SOCs as Alert Volumes Soar

🔍 Security leaders report a breaking point as daily alert volumes average 960 and large enterprises exceed 3,000, forcing teams to leave many incidents uninvestigated. A survey of 282 security leaders shows AI has moved from experiment to strategic priority, with 55% deploying AI copilots for triage, detection tuning, and threat hunting. Organizations cite data privacy, integration complexity, and explainability as primary barriers while projecting AI will handle roughly 60% of SOC workloads within three years. Prophet Security is highlighted as an agentic AI SOC platform that automates triage and accelerates investigations to reduce dwell time.

read more →

Mon, September 29, 2025

Notion 3.0 Agents Expose Prompt-Injection Risk to Data

⚠️ Notion 3.0 introduces AI agents that, the author argues, create a dangerous attack surface. The attack exploits Simon Willison's lethal trifecta—access to private data, exposure to untrusted content, and the ability to communicate externally—by hiding executable instructions in a white-on-white PDF that instructs the model to collect and exfiltrate client data via a constructed URL. The post warns that current agentic systems cannot reliably distinguish trusted commands from malicious inputs and urges caution before deployment.
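One way to reason about the risk in code: a small sketch (my illustration, using Willison's framing) that denies any agent action combining all three trifecta conditions at once.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    reads_private_data: bool         # e.g., querying a client database
    touches_untrusted_content: bool  # e.g., summarizing an uploaded PDF
    communicates_externally: bool    # e.g., fetching a constructed URL

def allowed(action: AgentAction) -> bool:
    """Deny when all three lethal-trifecta conditions hold simultaneously."""
    return not (action.reads_private_data
                and action.touches_untrusted_content
                and action.communicates_externally)

exfil = AgentAction(True, True, True)
assert not allowed(exfil)  # the white-on-white PDF scenario is blocked
```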

read more →

Mon, September 29, 2025

Agentic AI: A Looming Enterprise Security Crisis — Governance

⚠️ Many organizations are moving too quickly into agentic AI and risk major security failures unless boards embed governance and security from day one. The article argues that the shift from AI giving answers to AI taking actions changes the control surface to identity, privilege and oversight, and that most programs lack cross‑functional accountability. It recommends forming an Agentic Governance Council, defining measurable objectives and building zero trust guardrails, and highlights Prisma AIRS as a platform approach to restore visibility and control.

read more →

Mon, September 29, 2025

Microsoft Warns of LLM-Crafted SVG Phishing Campaign

🛡️ Microsoft flagged a targeted phishing campaign that used AI-assisted code to hide malicious payloads inside SVG files. Attackers sent messages from a compromised business account, employing self-addressed emails with hidden BCC recipients and an SVG disguised as a PDF that executed embedded JavaScript to redirect users through a CAPTCHA to a fake login. Microsoft noted the SVG's verbose, business-analytics style — flagged by Security Copilot — as likely produced by an LLM. The activity was limited and blocked, but organizations should scrutinize scriptable image formats and unusual self-addressed messages.

read more →

Mon, September 29, 2025

Agentic AI in IT Security: Expectations vs Reality

🛡️ Agentic AI is moving from lab experiments into real-world SOC deployments, where autonomous agents triage alerts, correlate signals across tools, enrich context, and in some cases enact first-line containment. Early adopters report fewer mundane tasks for analysts, faster initial response, and reduced alert fatigue, while noting limits around noisy data, false positives, and opaque reasoning. Most teams begin with bolt-on integrations into existing SIEM/SOAR pipelines to minimize disruption, treating standalone orchestration as a second-phase maturity step.

read more →