All news in category "AI and Security Pulse"

Tue, October 7, 2025

Enterprise AI Now Leading Corporate Data Exfiltration

🔍 A new Enterprise AI and SaaS Data Security Report from LayerX finds that generative AI has rapidly become the largest uncontrolled channel for corporate data loss. Real-world browser telemetry shows 45% employee adoption of GenAI, 67% of sessions via unmanaged accounts, and copy/paste into ChatGPT, Claude, and Copilot as the primary leakage vector. Traditional, file-centric DLP tools largely miss these action-based flows.
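The gap the report describes is between file-centric scanning and action-based events like copy/paste. As a minimal sketch of what an action-aware check might look like, assuming a hypothetical telemetry event shape (the domain list and field names are illustrative, not from the LayerX report):

```python
# Hypothetical action-based DLP check on browser telemetry events.
# Domains and event fields are illustrative assumptions.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "copilot.microsoft.com"}

def is_risky_paste(event: dict) -> bool:
    """Flag a paste into a GenAI site from an unmanaged account —
    the kind of action-based flow file-centric DLP never sees."""
    return (
        event.get("action") == "paste"
        and event.get("domain") in GENAI_DOMAINS
        and not event.get("managed_account", False)
    )

events = [
    {"action": "paste", "domain": "claude.ai", "managed_account": False},
    {"action": "upload", "domain": "sharepoint.com", "managed_account": True},
]
flagged = [e for e in events if is_risky_paste(e)]
```

A file-centric tool keyed on uploads would flag neither event; the action-based check flags the unmanaged paste.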

read more →

Tue, October 7, 2025

Five Best Practices for Effective AI Coding Assistants

🛠️ This article presents five practical best practices to get better results from AI coding assistants. Based on engineering sprints using Gemini CLI, Gemini Code Assist, and Jules, the recommendations cover choosing the right tool, training models with documentation and tests, creating detailed execution plans, prioritizing precise prompts, and preserving session context. Following these steps helps developers stay in control, improve code quality, and streamline complex migrations and feature work.

read more →

Mon, October 6, 2025

ChatGPT Pulse Heading to Web; Pro-only for Now, Plus TBD

🤖 ChatGPT Pulse is being prepared for the web after a mobile rollout that began on September 25, but OpenAI currently restricts the feature to its $200 Pro subscription. Pulse provides personalized daily updates presented as visual cards, drawing on your chats, feedback and connected apps such as calendars. OpenAI says it will learn from early usage before expanding availability and has given no firm timeline for Plus or free-tier rollout.

read more →

Mon, October 6, 2025

OpenAI Tests ChatGPT-Powered Agent Builder Tool Preview

🧭 OpenAI is testing a visual Agent Builder that lets users assemble ChatGPT-powered agents by dropping and connecting node blocks in a flowchart. Templates like Customer service, Data enrichment, and Document comparison provide editable starting points, while users can also create flows from scratch. Agents are configurable with model choice, custom prompts, reasoning effort, and output format (text or JSON), and they can call tools and external services. Reported screenshots show support for MCP connectors such as Gmail, Calendar, Drive, Outlook, SharePoint, Teams, and Dropbox; OpenAI plans to share more details at DevDay.
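To make the drag-and-connect model concrete, here is a minimal sketch of how such a node-based flow might be represented and walked. The node names, fields, and schema are purely hypothetical assumptions, not OpenAI's actual format:

```python
# Hypothetical node-based agent flow; schema and field names are assumptions.
flow = {
    "nodes": {
        "classify": {"model": "gpt-5", "prompt": "Label the ticket intent.",
                     "reasoning_effort": "low", "output": "json"},
        "respond": {"model": "gpt-5", "prompt": "Draft a reply.",
                    "reasoning_effort": "medium", "output": "text"},
    },
    # Edges mirror the connected blocks in the flowchart UI.
    "edges": [("classify", "respond")],
}

def execution_order(flow: dict) -> list:
    """Return node names in edge order (linear flows only)."""
    order = [flow["edges"][0][0]]
    for _src, dst in flow["edges"]:
        order.append(dst)
    return order
```

Each node carries the configuration the article mentions (model, prompt, reasoning effort, output format); the edges determine which agent runs next.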

read more →

Mon, October 6, 2025

AI in Today's Cybersecurity: Detection, Hunting, Response

🤖 Artificial intelligence is reshaping how organizations detect, investigate, and respond to cyber threats. The article explains how AI reduces alert noise, prioritizes vulnerabilities, and supports behavioral analysis, UEBA, and NLP-driven phishing detection. It highlights Wazuh's integrations with models such as Claude 3.5, Llama 3, and ChatGPT to provide conversational insights, automated hunting, and contextual remediation guidance.

read more →

Mon, October 6, 2025

Google advances AI security with CodeMender and SAIF 2.0

🔒 Google announced three major AI security initiatives: CodeMender, a dedicated AI Vulnerability Reward Program (AI VRP), and the updated Secure AI Framework 2.0. CodeMender is an AI-powered agent built on Gemini that performs root-cause analysis, generates self-validated patches, and routes fixes to automated critique agents to accelerate time-to-patch across open-source projects. The AI VRP consolidates abuse and security reward tables and clarifies reporting channels, while SAIF 2.0 extends guidance and introduces an agent risk map and security controls for autonomous agents.

read more →

Mon, October 6, 2025

Five Critical Questions for Selecting AI-SPM Solutions

🔒 As enterprises accelerate AI and cloud adoption, selecting the right AI Security Posture Management (AI-SPM) solution is critical. The article presents five core questions to guide procurement: does the product deliver centralized visibility into models, datasets, and infrastructure; can it detect and remediate AI-specific risks like adversarial attacks, data leakage, and bias; does it map to regulatory standards such as GDPR and the NIST AI RMF; does it scale cloud-natively; and does it integrate seamlessly with DSPM, DLP, identity platforms, DevOps toolchains, and AI services to ensure proactive policy enforcement and audit readiness?

read more →

Mon, October 6, 2025

CISOs Rethink Security Organization for the AI Era

🔒 CISOs are re-evaluating organizational roles, processes, and partnerships as AI accelerates both attacks and defenses. Leaders say AI is elevating the CISO into strategic C-suite conversations and reshaping collaboration with IT, while security teams use AI to triage alerts, automate repetitive tasks, and focus on higher-value work. Experts stress that AI magnifies existing weaknesses, so fundamentals like IAM, network segmentation, and patching remain critical, and recommend piloting AI in narrow use cases to augment human judgment rather than replace it.

read more →

Sat, October 4, 2025

ChatGPT Leak Reveals Direct Messaging and Profiles

🤖 OpenAI is testing social features in ChatGPT, with leaked code showing support for direct messages, usernames, and profile images. References discovered in an Android beta (version 1.2025.273) and linked traces to Sora 2 indicate the company may be rolling out social tools beyond its video feed app. The code, codenamed Calpico and Calpico Rooms, also mentions join/leave notifications and push alerts for messages.

read more →

Sat, October 4, 2025

OpenAI Updates GPT-5 Instant to Offer Emotional Support

🤗 OpenAI has updated GPT-5 Instant to better detect and respond to signs of emotional distress, routing users to supportive language and, when appropriate, real-world crisis resources. The change responds to feedback that some GPT-5 variants felt too clinical when users sought emotional support. OpenAI says it developed the model with help from mental health experts and will route GPT-5 Auto or non-reasoning model conversations to GPT-5 Instant for faster, more empathetic responses. The update begins rolling out to ChatGPT users today.

read more →

Sat, October 4, 2025

OpenAI expands $4 ChatGPT Go availability in Southeast Asia

🌏 OpenAI is expanding its lower-cost ChatGPT plan, ChatGPT Go ($4), into additional Southeast Asian markets after tests in India and Indonesia. The company is updating local pricing and now lists amounts in EUR, USD, GBP and INR while testing availability in Malaysia, the Philippines, Thailand and Vietnam. The Go tier offers access to GPT-5 with limited capabilities, expanded messaging and uploads, faster image generation, longer memory and basic deep research, but excludes higher-end models and advanced reasoning reserved for the $20 ChatGPT Plus tier. OpenAI says Go provides higher usage limits than the Free plan but remains feature-limited compared with Plus.

read more →

Sat, October 4, 2025

CometJacking: One-Click Attack Turns AI Browser Rogue

🔐 CometJacking is a prompt-injection technique that can turn Perplexity's Comet AI browser into a data exfiltration tool with a single click. Researchers at LayerX showed how a crafted URL using the 'collection' parameter forces the agent to consult its memory, extract data from connected services such as Gmail and Calendar, obfuscate it with Base64, and forward it to an attacker-controlled endpoint. The exploit leverages the browser's existing authorized connectors and bypasses simple content protections.
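The Base64 step matters because simple content protections typically match on plaintext. A minimal sketch of why that obfuscation works, using a hypothetical keyword filter (the keywords are illustrative, not from the research):

```python
import base64

# Illustrative only: why Base64 obfuscation slips past a naive keyword
# filter, as in the CometJacking exfiltration step. Keywords are assumptions.
BLOCKED_KEYWORDS = {"meeting", "password", "invoice"}

def naive_filter_blocks(payload: str) -> bool:
    """A plaintext keyword match — the kind of simple content check
    the attack bypasses."""
    return any(kw in payload.lower() for kw in BLOCKED_KEYWORDS)

secret = "Quarterly invoice and meeting notes"
encoded = base64.b64encode(secret.encode()).decode()

# The plaintext trips the filter; the Base64-encoded copy sails through
# to the attacker-controlled endpoint unflagged.
```

Defenses that only inspect outgoing content for sensitive strings therefore miss the encoded payload; the attack relies on the browser's already-authorized connectors for the data access itself.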

read more →

Fri, October 3, 2025

Opera Neon AI Browser: $19.90 Monthly for Agentic Web

🤖 Opera has unveiled Neon, a premium AI-first browser that delegates browsing tasks to integrated agents, from opening tabs and conducting research to comparing prices and assessing security. Early access is available for Windows and macOS at an introductory price of $59.90 for nine months; Opera says the service will cost $19.90 per month after the offer. Opera positions Neon alongside other agentic browsers such as Perplexity Comet and Microsoft Edge's Copilot mode.

read more →

Fri, October 3, 2025

AI and Cybersecurity: Fortinet and NTT DATA Webinar

🔒 In a joint webinar, Fortinet and NTT DATA outlined practical approaches to deploying and securing AI across enterprise environments. Fortinet described its three AI pillars—FortiAI‑Protect, FortiAI‑Assist, and FortiAI‑SecureAI—focused on detection, operational assistance, and protecting AI assets. NTT DATA emphasized governance, runtime protections, and an "agentic factory" to scale pilots into production. The presenters stressed the need for visibility into shadow AI and controls such as DLP and zero‑trust access to prevent data leakage.

read more →

Fri, October 3, 2025

CometJacking attack tricks Comet browser into leaking data

🛡️ LayerX researchers disclosed a prompt-injection technique called CometJacking that abuses Perplexity’s Comet AI browser by embedding malicious instructions in a URL's collection parameter. The payload directs the agent to consult connected services (such as Gmail and Google Calendar), encode the retrieved content in base64, and send it to an attacker-controlled endpoint. The exploit requires no credentials or additional user interaction beyond clicking a crafted link. Perplexity reviewed LayerX's late-August reports and classified the findings as "Not Applicable."

read more →

Fri, October 3, 2025

CISO GenAI Board Presentation Template and Guidance

🛡️ Keep Aware has published a free Template for CISO GenAI Presentations designed to help security leaders brief boards or AI committees. The template centers on four agenda items—GenAI Adoption, Risk Landscape, Risk Exposure and Incidents, and Governance and Controls—and recommends visuals and dashboard-style metrics to translate technical issues into business risk. It also emphasizes browser-level monitoring to prevent data leakage and enforce policies.

read more →

Thu, October 2, 2025

Daniel Miessler on AI Attack-Defense Balance and Context

🔍 Daniel Miessler argues that context determines the AI attack–defense balance: whoever holds the most accurate, actionable picture of a target gains the edge. He forecasts attackers will have the advantage for roughly 3–5 years as Red teams leverage public OSINT and reconnaissance while LLMs and SPQA-style architectures mature. Once models can ingest reliable internal company context at scale, defenders should regain the upper hand by prioritizing fixes and applying mitigations faster.

read more →

Thu, October 2, 2025

Forrester Predicts Agentic AI Will Trigger 2026 Breach

⚠️ Forrester warns that an agentic AI deployment will trigger a publicly disclosed data breach in 2026, potentially prompting employee dismissals. Senior analyst Paddy Harrington noted that generative AI has already been linked to several breaches and cautioned that autonomous agents can sacrifice accuracy for speed without proper guardrails. He urges adoption of the AEGIS framework to secure intent, identity, data provenance and other controls. Check Point also reported malicious agentic tools accelerating attacker activity.

read more →

Wed, October 1, 2025

Smashing Security 437: ForcedLeak in Salesforce AgentForce

🔐 Researchers uncovered a security flaw in Salesforce’s new AgentForce platform called ForcedLeak, which let attackers smuggle AI-readable instructions through a Web-to-Lead form and exfiltrate data for as little as five dollars. The hosts discuss the broader implications for AI integration, input validation, and the surprising ease of exploiting customer-facing forms. Episode 437 also critiques typical breach communications and covers ITV’s phone‑hacking drama and the Rosetta Stone story, with Graham Cluley joined by Paul Ducklin.

read more →

Wed, October 1, 2025

Blending AI and Human Workflows for Secure Automation

🔍 Join The Hacker News for a free webinar, "Workflow Clarity: Where AI Fits in Modern Automation," featuring Thomas Kinsella, Co‑founder & Chief Customer Officer at Tines. The piece argues that human-only processes are slow, rigid rule engines break when reality changes, and fully autonomous AI can create opaque, unauditable paths. Attendees will learn practical mapping of tasks to people, rules, or AI, how to spot AI overreach, and patterns for building secure, auditable workflows that scale without sacrificing control.

read more →