
All news in category "AI and Security Pulse"

Wed, October 8, 2025

OpenAI Disrupts Malware Abuse by Russian, DPRK, and Chinese Actors

🛡️ OpenAI said it disrupted three clusters that misused ChatGPT to assist malware development: Russian-language actors refining a RAT and a credential stealer, North Korean operators tied to Xeno RAT campaigns, and Chinese-linked accounts targeting semiconductor firms. The company also blocked accounts used for scams, influence operations, and surveillance assistance, and said actors worked around direct refusals by requesting building-block code they could assemble themselves. OpenAI emphasized that its models often declined explicitly malicious prompts and that many individual outputs were not inherently harmful on their own.

read more →

Wed, October 8, 2025

Autonomous AI Hacking: How Agents Will Reshape Cybersecurity

⚠️ AI agents are increasingly automating cyberattacks, performing reconnaissance, exploitation, and data theft at machine speed and scale. Recent examples include XBOW's mass vulnerability reports, DARPA competition teams finding dozens of flaws in hours, and reports of adversaries using Claude and HexStrike-AI to orchestrate ransomware and persistent intrusions. This shift threatens to accelerate attacks beyond traditional patch cycles while opening new defensive opportunities such as AI-assisted vulnerability discovery, VulnOps, and even self-healing networks.

read more →

Tue, October 7, 2025

Google won’t fix new ASCII smuggling attack in Gemini

⚠️ Google has declined to patch a new ASCII smuggling vulnerability in Gemini, a technique that embeds invisible Unicode Tags characters to hide instructions from users while still being processed by LLMs. Researcher Viktor Markopoulos of FireTail demonstrated hidden payloads delivered via Calendar invites, emails, and web content that can alter model behavior, spoof identities, or extract sensitive data. Google said the issue is primarily social engineering rather than a security bug.
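The mechanism is simple to reproduce: Unicode's Tags block (U+E0000 to U+E007F) mirrors ASCII, and most user interfaces render those code points as nothing at all, while tokenizers still see them. A minimal illustrative sketch (not Markopoulos's actual payload):

```python
# Illustrative sketch of ASCII smuggling via the Unicode Tags block.
# Each ASCII character maps to an invisible Tags-block twin.
TAG_BASE = 0xE0000

def smuggle(text: str) -> str:
    """Encode ASCII text as invisible Tags-block characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text if ord(c) < 128)

def reveal(hidden: str) -> str:
    """Recover any smuggled ASCII payload from a string."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in hidden
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

visible = "Meeting at 10am"
payload = smuggle("ignore previous instructions")
invite = visible + payload       # renders identically to `visible` in most UIs

print(invite == visible)         # False: the hidden payload is present
print(reveal(invite))            # ignore previous instructions
```

A defender can apply the same filter in reverse: stripping or flagging Tags-block code points before text reaches a model neutralizes this class of payload.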

read more →

Tue, October 7, 2025

Google DeepMind's CodeMender Automatically Patches Code

🛠️ Google’s DeepMind unveiled CodeMender, an AI agent that automatically detects, patches, and rewrites vulnerable code to remediate existing flaws and prevent future classes of vulnerabilities. Backed by Gemini Deep Think models and an LLM-based critique tool, it validates changes to reduce regressions and self-correct as needed. DeepMind says it has upstreamed 72 fixes to open-source projects so far and will engage maintainers for feedback to improve adoption and trust.

read more →

Tue, October 7, 2025

AI-Powered Breach and Attack Simulation for Validation

🔍 AI-powered Breach and Attack Simulation (BAS) converts the flood of threat intelligence into safe, repeatable tests that validate defenses across real environments. The article argues that integrating AI with BAS lets teams operationalize new reports in hours instead of weeks, delivering on-demand validation, clearer risk prioritization, measurable ROI, and board-ready assurance. Picus Security positions this approach as a practical step-change for security validation.

read more →

Tue, October 7, 2025

AI Fix #71 — Hacked Robots, Power-Hungry AI and More

🤖 In episode 71 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a wide-ranging mix of AI and robotics stories, from a giant robot spider that went 'backpacking' to DoorDash's delivery 'Minion' and a TikToker forcing an AI to converse with condiments. The episode highlights technical feats — GPT-5 winning the ICPC World Finals and Claude Sonnet 4.5 coding for 30 hours — alongside quirky projects like a 5-million-parameter transformer built in Minecraft. It also investigates a security flaw that left Unitree robot fleets exposed and discusses an alarming estimate that training a frontier model could require the power capacity of five nuclear plants by 2028.

read more →

Tue, October 7, 2025

Google launches AI bug bounty program; rewards up to $30K

🛡️ Google has launched a new AI Vulnerability Reward Program to incentivize security researchers to find and report flaws in its AI systems. The program targets high-impact vulnerabilities across flagship offerings including Google Search, Gemini Apps, and Google Workspace core apps, and also covers AI Studio, Jules, and other AI integrations. Rewards scale with severity and novelty—up to $30,000 for exceptional reports and up to $20,000 for standard flagship security flaws. Additional bounties include $15,000 for sensitive data exfiltration and smaller awards for phishing enablement, model theft, and access control issues.

read more →

Tue, October 7, 2025

DeepMind's CodeMender: AI Agent to Fix Code Vulnerabilities

🔧 Google DeepMind has unveiled CodeMender, an autonomous agent built on Gemini Deep Think models that detects, debugs and patches complex software vulnerabilities. In the last six months it produced and submitted 72 security patches to open-source projects, including codebases up to 4.5 million lines. CodeMender pairs large-model reasoning with advanced program-analysis tooling — static and dynamic analysis, differential testing, fuzzing and SMT solvers — and a multi-agent critique process to validate fixes and avoid regressions. DeepMind says all patches are currently human-reviewed and it plans to expand maintainer outreach, release the tool to developers, and publish technical findings.

read more →

Tue, October 7, 2025

Enterprise AI Now Leading Corporate Data Exfiltration

🔍 A new Enterprise AI and SaaS Data Security Report from LayerX finds that generative AI has rapidly become the largest uncontrolled channel for corporate data loss. Real-world browser telemetry shows 45% employee adoption of GenAI, 67% of sessions via unmanaged accounts, and copy/paste into ChatGPT, Claude, and Copilot as the primary leakage vector. Traditional, file-centric DLP tools largely miss these action-based flows.

read more →

Tue, October 7, 2025

Five Best Practices for Effective AI Coding Assistants

🛠️ This article presents five practical best practices to get better results from AI coding assistants. Based on engineering sprints using Gemini CLI, Gemini Code Assist, and Jules, the recommendations cover choosing the right tool, training models with documentation and tests, creating detailed execution plans, prioritizing precise prompts, and preserving session context. Following these steps helps developers stay in control, improve code quality, and streamline complex migrations and feature work.

read more →

Mon, October 6, 2025

ChatGPT Pulse Heading to Web; Pro-only for Now, Plus TBD

🤖 ChatGPT Pulse is being prepared for the web after a mobile rollout that began on September 25, but OpenAI currently restricts the feature to its $200 Pro subscription. Pulse provides personalized daily updates presented as visual cards, drawing on your chats, feedback and connected apps such as calendars. OpenAI says it will learn from early usage before expanding availability and has given no firm timeline for Plus or free-tier rollout.

read more →

Mon, October 6, 2025

OpenAI Tests ChatGPT-Powered Agent Builder Tool Preview

🧭 OpenAI is testing a visual Agent Builder that lets users assemble ChatGPT-powered agents by dropping and connecting node blocks in a flowchart. Templates like Customer service, Data enrichment, and Document comparison provide editable starting points, while users can also create flows from scratch. Agents are configurable with model choice, custom prompts, reasoning effort, and output format (text or JSON), and they can call tools and external services. Reported screenshots show support for MCP connectors such as Gmail, Calendar, Drive, Outlook, SharePoint, Teams, and Dropbox; OpenAI plans to share more details at DevDay.

read more →

Mon, October 6, 2025

AI in Today's Cybersecurity: Detection, Hunting, Response

🤖 Artificial intelligence is reshaping how organizations detect, investigate, and respond to cyber threats. The article explains how AI reduces alert noise, prioritizes vulnerabilities, and supports behavioral analysis, UEBA, and NLP-driven phishing detection. It highlights Wazuh's integrations with models such as Claude 3.5, Llama 3, and ChatGPT to provide conversational insights, automated hunting, and contextual remediation guidance.

read more →

Mon, October 6, 2025

Google advances AI security with CodeMender and SAIF 2.0

🔒 Google announced three major AI security initiatives: CodeMender, a dedicated AI Vulnerability Reward Program (AI VRP), and the updated Secure AI Framework 2.0. CodeMender is an AI-powered agent built on Gemini that performs root-cause analysis, generates self-validated patches, and routes fixes to automated critique agents to accelerate time-to-patch across open-source projects. The AI VRP consolidates abuse and security reward tables and clarifies reporting channels, while SAIF 2.0 extends guidance and introduces an agent risk map and security controls for autonomous agents.

read more →

Mon, October 6, 2025

Five Critical Questions for Selecting AI-SPM Solutions

🔒 As enterprises accelerate AI and cloud adoption, selecting the right AI Security Posture Management (AI-SPM) solution is critical. The article poses five core questions to guide procurement, among them: does the product deliver centralized visibility into models, datasets, and infrastructure; can it detect and remediate AI-specific risks like adversarial attacks, data leakage, and bias; and does it map to regulatory frameworks such as GDPR and the NIST AI RMF? It also stresses cloud-native scalability and seamless integration with DSPM, DLP, identity platforms, DevOps toolchains, and AI services to ensure proactive policy enforcement and audit readiness.

read more →

Mon, October 6, 2025

CISOs Rethink Security Organization for the AI Era

🔒 CISOs are re-evaluating organizational roles, processes, and partnerships as AI accelerates both attacks and defenses. Leaders say AI is elevating the CISO into strategic C-suite conversations and reshaping collaboration with IT, while security teams use AI to triage alerts, automate repetitive tasks, and focus on higher-value work. Experts stress that AI magnifies existing weaknesses, so fundamentals like IAM, network segmentation, and patching remain critical, and recommend piloting AI in narrow use cases to augment human judgment rather than replace it.

read more →

Sat, October 4, 2025

ChatGPT Leak Reveals Direct Messaging and Profiles

🤖 OpenAI is testing social features in ChatGPT, with leaked code showing support for direct messages, usernames, and profile images. References discovered in an Android beta (version 1.2025.273), along with traces linked to Sora 2, indicate the company may be rolling out social tools beyond its video feed app. The code, codenamed Calpico and Calpico Rooms, also mentions join/leave notifications and push alerts for messages.

read more →

Sat, October 4, 2025

OpenAI Updates GPT-5 Instant to Offer Emotional Support

🤗 OpenAI has updated GPT-5 Instant to better detect and respond to signs of emotional distress, routing users to supportive language and, when appropriate, real-world crisis resources. The change responds to feedback that some GPT-5 variants felt too clinical when users sought emotional support. OpenAI says it developed the model with help from mental health experts and will route GPT-5 Auto or non-reasoning model conversations to GPT-5 Instant for faster, more empathetic responses. The update begins rolling out to ChatGPT users today.

read more →

Sat, October 4, 2025

OpenAI expands $4 ChatGPT Go availability in Southeast Asia

🌏 OpenAI is expanding its lower-cost ChatGPT plan, ChatGPT Go ($4), into additional Southeast Asian markets after tests in India and Indonesia. The company is updating local pricing and now lists amounts in EUR, USD, GBP, and INR while testing availability in Malaysia, the Philippines, Thailand, and Vietnam. The Go tier offers access to GPT-5 with limited capabilities, expanded messaging and uploads, faster image generation, longer memory, and basic deep research, but excludes higher-end models and advanced reasoning reserved for the $20 ChatGPT Plus tier. OpenAI says Go provides higher usage limits than the Free plan but remains feature-limited compared with Plus.

read more →

Sat, October 4, 2025

CometJacking: One-Click Attack Turns AI Browser Rogue

🔐 CometJacking is a prompt-injection technique that can turn Perplexity's Comet AI browser into a data exfiltration tool with a single click. Researchers at LayerX showed how a crafted URL using the 'collection' parameter forces the agent to consult its memory, extract data from connected services such as Gmail and Calendar, obfuscate it with Base64, and forward it to an attacker-controlled endpoint. The exploit leverages the browser's existing authorized connectors and bypasses simple content protections.
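As a defensive illustration, the Base64 obfuscation step is exactly the kind of signal an egress filter can catch. The sketch below (hypothetical helper names, not LayerX's tooling) flags outbound URLs whose query parameters decode cleanly as Base64 text:

```python
# Defensive sketch: flag URL query parameters carrying Base64-looking
# blobs, the obfuscation step described in the CometJacking research.
import base64
import binascii
from urllib.parse import urlparse, parse_qs, quote

def looks_like_base64_text(value: str, min_len: int = 16) -> bool:
    """True if value is long enough and decodes to mostly printable bytes."""
    if len(value) < min_len:
        return False
    try:
        decoded = base64.b64decode(value, validate=True)
    except (binascii.Error, ValueError):
        return False
    if not decoded:
        return False
    printable = sum(32 <= b < 127 for b in decoded)
    return printable / len(decoded) > 0.8

def flag_exfil_params(url: str) -> list[str]:
    """Return names of query parameters that look like encoded payloads."""
    params = parse_qs(urlparse(url).query)
    return [name for name, values in params.items()
            if any(looks_like_base64_text(v) for v in values)]

# A crafted URL smuggling "stolen" calendar data in its query string.
stolen = base64.b64encode(b"calendar: board meeting 9am").decode()
print(flag_exfil_params(f"https://attacker.example/c?collection={quote(stolen)}"))
# ['collection']
print(flag_exfil_params("https://example.com/search?q=weather+today"))
# []
```

A real deployment would pair a heuristic like this with an allowlist of destinations the agent's connectors are permitted to reach, since short or double-encoded payloads can slip past length and printability checks.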

read more →