All news in category "AI and Security Pulse"
Fri, October 3, 2025
Opera Neon AI Browser: $19.90 Monthly for Agentic Web
🤖 Opera has unveiled Neon, a premium AI-first browser that delegates browsing tasks to integrated agents, from opening tabs and conducting research to comparing prices and assessing security. Early access is available for Windows and macOS at an introductory price of $59.90 for nine months; Opera says the service will cost $19.90 per month after the offer. Opera positions Neon alongside other agentic browsers such as Perplexity Comet and Microsoft Edge's Copilot mode.
Fri, October 3, 2025
AI and Cybersecurity: Fortinet and NTT DATA Webinar
🔒 In a joint webinar, Fortinet and NTT DATA outlined practical approaches to deploying and securing AI across enterprise environments. Fortinet described its three AI pillars—FortiAI‑Protect, FortiAI‑Assist, and FortiAI‑SecureAI—focused on detection, operational assistance, and protecting AI assets. NTT DATA emphasized governance, runtime protections, and an "agentic factory" to scale pilots into production. The presenters stressed the need for visibility into shadow AI and controls such as DLP and zero‑trust access to prevent data leakage.
Fri, October 3, 2025
CometJacking attack tricks Comet browser into leaking data
🛡️ LayerX researchers disclosed a prompt-injection technique called CometJacking that abuses Perplexity’s Comet AI browser by embedding malicious instructions in a URL's collection parameter. The payload directs the agent to consult connected services (such as Gmail and Google Calendar), encode the retrieved content in base64, and send it to an attacker-controlled endpoint. The exploit requires no credentials or additional user interaction beyond clicking a crafted link. Perplexity reviewed LayerX's late-August reports and classified the findings as "Not Applicable."
Fri, October 3, 2025
CISO GenAI Board Presentation Template and Guidance
🛡️Keep Aware has published a free Template for CISO GenAI Presentations designed to help security leaders brief boards or AI committees. The template centers on four agenda items—GenAI Adoption, Risk Landscape, Risk Exposure and Incidents, and Governance and Controls—and recommends visuals and dashboard-style metrics to translate technical issues into business risk. It also emphasizes browser-level monitoring to prevent data leakage and enforce policies.
Thu, October 2, 2025
Daniel Miessler on AI Attack-Defense Balance and Context
🔍 Daniel Miessler argues that context determines the AI attack–defense balance: whoever holds the most accurate, actionable picture of a target gains the edge. He forecasts attackers will have the advantage for roughly 3–5 years as Red teams leverage public OSINT and reconnaissance while LLMs and SPQA-style architectures mature. Once models can ingest reliable internal company context at scale, defenders should regain the upper hand by prioritizing fixes and applying mitigations faster.
Thu, October 2, 2025
Forrester Predicts Agentic AI Will Trigger 2026 Breach
⚠️ Forrester warns that an agentic AI deployment will trigger a publicly disclosed data breach in 2026, potentially prompting employee dismissals. Senior analyst Paddy Harrington noted that generative AI has already been linked to several breaches and cautioned that autonomous agents can sacrifice accuracy for speed without proper guardrails. He urges adoption of the AEGIS framework to secure intent, identity, data provenance and other controls. Check Point also reported malicious agentic tools accelerating attacker activity.
Wed, October 1, 2025
Smashing Security 437: ForcedLeak in Salesforce AgentForce
🔐 Researchers uncovered a security flaw in Salesforce’s new AgentForce platform called ForcedLeak, which let attackers smuggle AI-readable instructions through a Web-to-Lead form and exfiltrate data for as little as five dollars. The hosts discuss the broader implications for AI integration, input validation, and the surprising ease of exploiting customer-facing forms. Episode 437 also critiques typical breach communications and covers ITV’s phone‑hacking drama and the Rosetta Stone story, with Graham Cluley joined by Paul Ducklin.
Wed, October 1, 2025
Blending AI and Human Workflows for Secure Automation
🔍 Join The Hacker News for a free webinar, "Workflow Clarity: Where AI Fits in Modern Automation," featuring Thomas Kinsella, Co‑founder & Chief Customer Officer at Tines. The piece argues that human-only processes are slow, rigid rule engines break when reality changes, and fully autonomous AI can create opaque, unauditable paths. Attendees will learn practical mapping of tasks to people, rules, or AI, how to spot AI overreach, and patterns for building secure, auditable workflows that scale without sacrificing control.
Tue, September 30, 2025
Defending LLM Applications Against Unicode Tag Smuggling
🔒 This AWS Security Blog post examines how Unicode tag block characters (U+E0000–U+E007F) can be abused to hide instructions inside text sent to LLMs, enabling prompt-injection and hidden-character smuggling. It explains why Java's UTF-16 surrogate handling can make one-pass sanitizers inadequate and shows recursive sanitization as a remedy, plus Python-safe filters. The post also outlines using Amazon Bedrock Guardrails denied topics or Lambda-based handlers as mitigation and notes visual/compatibility trade-offs.
Tue, September 30, 2025
The AI Fix #70: Surveillance Changes AI Behavior and Safety
🔍 In episode 70 of The AI Fix, hosts Graham Cluley and Mark Stockley examine how AI alters human behaviour and how deployed systems can fail in unexpected ways. They discuss research showing AI can increase dishonest behaviour, Waymo's safety record and a mirror-based trick that fooled self-driving perception, a rescue robot that mishandles victims, and a Chinese fusion-plant robot arm with extreme lifting capability. The show also covers a demonstration of a ChatGPT agent solving image CAPTCHAs by simulating mouse movements and a paper on deliberative alignment that functions until the model realises it is being watched.
Tue, September 30, 2025
Gemini Trifecta Exposes Indirect AI Attack Surfaces
⚠️Tenable has revealed three vulnerabilities in Google's Gemini platform, collectively dubbed the "Gemini Trifecta," that enable indirect prompt injection and data exfiltration through integrations. The issues allow attackers to poison GCP logs consumed by Gemini Cloud Assist, inject malicious entries into Chrome search history to manipulate the Search Personalization Model, and coerce the Browsing Tool into fetching attacker-controlled URLs that leak sensitive query data. Google has patched the flaws, and Tenable urges security teams to treat AI integrations as active threat surfaces and implement input sanitization, output validation, monitoring, and regular penetration testing.
Tue, September 30, 2025
Stop Alert Chaos: Contextual SOCs Improve Incident Response
🔍 The Hacker News piece argues that traditional, rule‑driven SOCs produce overwhelming alert noise that prevents timely, accurate incident response. It advocates flipping the model to treat incoming signals as parts of a larger story—normalizing, correlating, and enriching logs across identity, endpoints, cloud workloads, and SIEMs so analysts receive coherent investigations rather than isolated alerts. The contributed article presents Conifers and its CognitiveSOC™ platform as an example of agentic AI that automates multi‑tier investigations, reduces false positives, and shortens MTTR while keeping human judgment central.
Tue, September 30, 2025
Advanced Threat Hunting with LLMs and the VirusTotal API
🛡️ This post summarizes a hands-on workshop from LABScon that demonstrated automating large-scale threat hunting by combining the VirusTotal API with LLMs inside interactive Google Colab notebooks. The team recommends vt-py for robust programmatic access and provides a pre-built "meta Colab" that supplies Gemini with documentation and working code snippets so it can generate executable Python queries. Practical demos include LNK and CRX analyses, flattened dataframes, Sankey and choropleth visualizations, and stepwise relationship retrieval to accelerate investigations.
Tue, September 30, 2025
How Falcon ASPM Secures GenAI Applications at CrowdStrike
🔒 Falcon ASPM provides continuous, code-level visibility to secure generative and agentic AI applications such as Charlotte AI. It detects real-time drift, produces a runtime SBOM, and maps architecture and data flows to flag reachable vulnerabilities, softcoded credentials, and anomalous service behaviors. Contextualized alerts and mitigation guidance help teams prioritize fixes and reduce exploitable risk across complex microservice environments.
Tue, September 30, 2025
AI Risks Push Integrity Protection to Forefront for CISOs
🔒 CISOs must now prioritize integrity protection as AI introduces new attack surfaces such as data poisoning, prompt injection and adversarial manipulation. Shadow AI — unsanctioned use of models and services — increases risks of data leakage and insecure integrations. Defenses should combine Security by Design, governance, transparency and compliance (e.g., GDPR, EU AI Act) to detect poisoned data and prevent model drift.
Mon, September 29, 2025
Grok 4 Arrives in Azure AI Foundry for Business Use
🔒 Microsoft and xAI have brought Grok 4 to Azure AI Foundry, combining a 128K-token context window, native tool use, and integrated web search with enterprise safety controls and compliance checks. The release highlights first-principles reasoning and enhanced problem solving across STEM and humanities tasks, plus variants optimized for reasoning, speed, and code. Azure AI Content Safety is enabled by default and Microsoft publishes a model card with safety and evaluation details. Pricing and deployment tiers are available through Azure.
Mon, September 29, 2025
Brave Launches Ask Brave to Merge AI Chat and Search
🔎 Ask Brave unifies traditional search and AI chat into a single, privacy-focused interface accessible at search.brave.com/ask. The free feature combines search results with AI-generated responses and supports follow-up interaction in a chat-style format. Users can invoke it with a trailing “??”, the Ask button, or the Ask tab; it runs in standard or deep research modes.
Mon, September 29, 2025
Boards Should Be Bilingual: AI and Cybersecurity Strategy
🔐 Boards and security leaders should become bilingual in AI and cybersecurity to manage growing risks and unlock strategic value. As AI adoption increases, models and agents expand the attack surface, requiring hardened data infrastructure, tighter access controls, and clearer governance. Boards that learn to speak both languages can better oversee investments, M&A decisions, and cross-functional resilience while using AI to strengthen defense and competitive advantage.
Mon, September 29, 2025
Microsoft Blocks Phishing Using AI-Generated Code Tactics
🔒 Microsoft Threat Intelligence stopped a credential phishing campaign that likely used AI-generated code to hide a payload inside an SVG file disguised as a PDF. Attackers sent self-addressed emails from a compromised small-business account, hiding real targets in the Bcc field and attaching a file named "23mb – PDF- 6 pages.svg." Embedded JavaScript decoded business-style obfuscation to redirect victims to a fake CAPTCHA and a fraudulent sign-in page, and Microsoft Defender for Office 365 blocked the campaign by flagging delivery patterns, suspicious domains and anomalous code behavior.
Mon, September 29, 2025
Can AI Reliably Write Vulnerability Detection Checks?
🔍 Intruder’s security team tested whether large language models can write Nuclei vulnerability templates and found one-shot LLM prompts often produced invalid or weak checks. Using an agentic approach with Cursor—indexing a curated repo and applying rules—yielded outputs much closer to engineer-written templates. The current workflow uses standard prompts and rules so engineers can focus on validation and deeper research while AI handles repetitive tasks.