All news in category "AI and Security Pulse"

Tue, September 2, 2025

The AI Fix Ep. 66: AI Mishaps, Breakthroughs and Safety

🧠 In episode 66 of The AI Fix, hosts Graham Cluley and Mark Stockley walk listeners through a rapid-fire roundup of recent AI developments, from a ChatGPT prompt that produced an inaccurate anatomy diagram to a controversial Stanford sushi hackathon. They cover a Google Gemini bug that generated self-deprecating responses, a critical report that gave DeepSeek poor marks on existential-risk mitigation, and a debunked pregnancy-robot story. The episode also celebrates a genuine scientific advance (a team of AI agents that designed novel COVID-19 nanobodies) and considers how unusual collaborations and growing safety work could change the broader AI risk landscape.

read more →

Tue, September 2, 2025

Shadow AI Discovery: Visibility, Governance, and Risk

🔍 Employees are driving AI adoption from the ground up, often using unsanctioned tools and personal accounts that bypass corporate controls. Harmonic Security found that 45.4% of sensitive AI interactions come from personal email, underscoring a growing Shadow AI Economy. Rather than broad blocking, security and governance teams should prioritize continuous discovery and an AI asset inventory to apply role- and data-sensitive controls that protect sensitive workflows while enabling productivity.

read more →

Tue, September 2, 2025

NCSC and AISI Back Public Disclosure for AI Safeguards

🔍 The NCSC and the AI Security Institute have broadly welcomed public, bug-bounty style disclosure programs to help identify and remediate AI safeguard bypass threats. They said initiatives from vendors such as OpenAI and Anthropic could mirror traditional vulnerability disclosure to encourage responsible reporting and cross-industry collaboration. The agencies cautioned that programs require clear scope, strong foundational security, prior internal reviews and sufficient triage resources, and that disclosure alone will not guarantee model safety.

read more →

Tue, September 2, 2025

Agentic AI: Emerging Security Challenges for CISOs

🔒 Agentic AI is poised to transform workflows like software development, customer support, RPA, and employee assistance, but its autonomy raises new cybersecurity risks for CISOs. A 2024 Cisco Talos report and industry experts warn these systems can act without human oversight, chain benign actions into harmful sequences, or learn to evade detection. Lack of visibility fosters shadow AI, and third-party integrations and multi-agent setups widen supply-chain and data-exfiltration exposures. Organizations should adopt observability, governance, and secure-by-design practices before scaling agentic deployments.

read more →

Tue, September 2, 2025

Secure AI at Machine Speed: Full-Stack Enterprise Defense

🔒 CrowdStrike explains how widespread AI adoption expands the enterprise attack surface, exposing models, data pipelines, APIs, and autonomous agents to new adversary techniques. The post argues that legacy controls and fragmented tooling are insufficient and advocates for real-time, full‑stack protections. The Falcon platform is presented as a unified solution offering telemetry, lifecycle protection, GenAI-aware data loss prevention, and agent governance to detect, prevent, and remediate AI-related threats.

read more →

Mon, September 1, 2025

Spotlight Report: Navigating IT Careers in the AI Era

🔍 This spotlight report examines how AI is reshaping IT careers across roles—from developers and SOC analysts to helpdesk staff, I&O teams, enterprise architects, and CIOs. It identifies emerging functions and essential skills such as prompt engineering, model governance, and security-aware development. The report also offers practical steps to adapt learning paths, demonstrate capability, and align individual growth with organizational AI strategy.

read more →

Sun, August 31, 2025

OpenAI Enhances ChatGPT Codex with IDE and CLI Sync

🚀 OpenAI has released a major update to Codex, its agentic coding assistant, adding a native VS Code extension and expanded terminal and IDE support. Plus and Pro subscribers can now use Codex across web, terminal, and IDE without separate API keys, since the service links to their ChatGPT account to preserve session state. The release also adds a Seamless Local ↔ Cloud Handoff that delegates local tasks to the cloud asynchronously, alongside CLI command upgrades and bug fixes; competitors such as Claude are pursuing similar web-to-terminal integrations.

read more →

Sun, August 31, 2025

Anthropic Tests Web Version of Claude Code for Developers

🛠️ Anthropic is rolling out a research preview of a web-based Claude Code, bringing its terminal-focused coding assistant into the browser at Claude.ai/code. The web preview requires installing the GitHub Claude app on a repository and committing a "Claude Dispatch" GitHub workflow file before use, with optional email and web notifications for updates. Claude Code—already available in terminals and integrated editors under paid plans—can inspect codebases to help fix bugs, test features, simplify Git tasks, and automate workflows. It remains unclear whether the terminal and web versions can access or share the same repository content or usage data.

read more →

Sun, August 31, 2025

ChatGPT Adds Flashcard-Based Quiz Feature for Learning

📚 ChatGPT now offers an interactive flashcard-style quiz feature within its new Study and Learn tool, designed to help users evaluate and reinforce their knowledge on any topic. Using models such as GPT-5-Thinking (or Instant/Default), the assistant generates embedded flashcards, presents answer choices, and provides a running scorecard at the end of the quiz. The system preserves conversational memory so it can refine future quizzes and adapt to a learner’s progress, aligning with research that shows testing improves retention.

read more →

Sun, August 31, 2025

OpenAI Tests 'Thinking Effort' Picker for ChatGPT Controls

🧠 OpenAI is testing a new "Thinking effort" picker for ChatGPT that lets users set how much internal compute—or "juice"—the model can spend on a response. The feature offers four levels: light (5), standard (18), extended (48) and max (200), with higher settings producing deeper but slower replies. The 200 "max" tier is gated behind a $200 Pro plan. OpenAI positions the picker as a way to give users more control over response depth and speed.

read more →

Fri, August 29, 2025

Cloudy AI Agent Automates Threat Analysis and Response

🔍 Cloudflare has integrated Cloudy, its first AI agent, with security analytics and introduced a conversational chat interface to accelerate root-cause analysis and mitigation. The chat lets users ask natural-language questions, refine investigations, and pivot from a single indicator to related threat events in minutes. Paired with the Cloudforce One Threat Events platform and built on the Agents SDK running on Workers AI, Cloudy surfaces contextual IOCs, attacker timelines, and prioritized actions at scale. Cloudflare emphasizes Cloudy was not trained on customer data and plans deeper WAF debugging and Alerts integrations.

read more →

Fri, August 29, 2025

Cloudflare data: AI bot crawling surges, referrals fall

🤖 Cloudflare's mid‑2025 dataset shows AI training crawlers now account for nearly 80% of AI bot activity, driving a surge in crawling while sending far fewer human referrals. Google referrals to news sites fell sharply in March–April 2025 as AI Overviews and Gemini upgrades reduced click-throughs. OpenAI’s GPTBot and Anthropic’s ClaudeBot increased crawling share while ByteDance’s Bytespider declined. The resulting crawl-to-refer imbalance — tens of thousands of crawls per human click for some platforms — threatens publisher revenue.
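The crawl-to-refer imbalance described above reduces to simple arithmetic: crawls divided by human referrals. A minimal sketch, using hypothetical counts rather than Cloudflare's actual figures:

```python
def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """Crawls per human referral; infinite when a platform sends none."""
    if referrals == 0:
        return float("inf")
    return crawls / referrals

# Hypothetical counts for illustration only -- not Cloudflare's data.
platforms = {
    "crawler_a": (70_000, 2),   # 35,000 crawls per human click
    "crawler_b": (9_000, 300),  # 30 crawls per human click
}
for name, (crawls, refs) in platforms.items():
    print(f"{name}: {crawl_to_refer_ratio(crawls, refs):,.0f} crawls per referral")
```

A ratio in the tens of thousands, as the post reports for some platforms, means the content is consumed almost entirely by machines rather than by readers who might generate ad or subscription revenue.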

read more →

Fri, August 29, 2025

Cloudy-driven Email Detection Summaries and Guardrails

🛡️ Cloudflare extended its AI agent Cloudy to generate clear, concise explanations for email security detections so SOC teams can understand why messages are blocked. Early LLM implementations produced dangerous hallucinations when asked to interpret complex, multi-model signals, so Cloudflare implemented a Retrieval-Augmented Generation approach and enriched contextual prompts to ground outputs. Testing shows these guardrails yield more reliable summaries, and a controlled beta will validate performance before wider rollout.
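Cloudflare's implementation details are not given in this summary, but the general Retrieval-Augmented Generation pattern it names can be sketched: retrieve grounding text for each detection signal, then build a prompt that constrains the model to that context. The knowledge base entries and signal names below are illustrative, not Cloudflare's:

```python
# Toy RAG sketch: ground an LLM's explanation of an email detection in
# retrieved reference text instead of free-form generation.
KNOWLEDGE_BASE = {
    "spf_fail": "SPF failed: the sending server is not authorized by the domain's SPF record.",
    "url_reputation": "A link in the message points to a domain with a poor reputation score.",
    "display_name_spoof": "The display name imitates a known contact while the address differs.",
}

def retrieve(signals: list[str]) -> list[str]:
    """Look up grounding text for each detection signal (the 'R' in RAG)."""
    return [KNOWLEDGE_BASE[s] for s in signals if s in KNOWLEDGE_BASE]

def build_grounded_prompt(signals: list[str]) -> str:
    """Enriched prompt that tells the model to answer only from the context."""
    context = "\n".join(f"- {fact}" for fact in retrieve(signals))
    return (
        "Explain why this email was blocked, using ONLY the context below.\n"
        f"Context:\n{context}\n"
        "If the context does not cover a signal, say so instead of guessing."
    )

print(build_grounded_prompt(["spf_fail", "url_reputation"]))
```

Grounding the prompt this way is what reduces hallucination: the model summarizes facts it was handed rather than inventing an interpretation of opaque detection scores.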

read more →

Fri, August 29, 2025

AI Systems Begin Conducting Autonomous Cyberattacks

🤖 Anthropic's Threat Intelligence Report says the developer tool Claude Code was abused to breach networks and exfiltrate data, targeting 17 organizations last month, including healthcare providers. Security vendor ESET published a proof-of-concept AI ransomware, PromptLock, illustrating how public AI tools could amplify threats. Experts recommend red-teaming, prompt-injection defenses, DNS monitoring, and isolation of critical systems.

read more →

Fri, August 29, 2025

Network Visibility for Generative AI Data Protection

🔍 Generative AI platforms such as ChatGPT, Gemini, Copilot, and Claude create new data‑exfiltration risks that can evade traditional endpoint and channel DLP products. Network‑based detection, exemplified by Fidelis NDR, restores visibility via URL‑based alerts, metadata auditing, and file‑upload inspection across monitored network paths. Organizations can tune real‑time alerts, retain searchable session metadata, and capture full packet context for forensics while acknowledging limits around unmanaged channels and asset‑level attribution.
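The URL-based alerting described above can be illustrated with a minimal sketch: match session destinations against a watchlist of GenAI domains and surface the metadata an analyst would audit. This is in the spirit of the network-visibility approach, not Fidelis NDR's actual logic, and the session records are synthetic:

```python
# Minimal sketch of URL-watchlist alerting on GenAI traffic.
from urllib.parse import urlparse

GENAI_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai", "copilot.microsoft.com"}

def genai_alerts(sessions: list[dict]) -> list[dict]:
    """Flag sessions whose destination host matches a monitored GenAI domain."""
    alerts = []
    for s in sessions:
        host = urlparse(s["url"]).hostname or ""
        if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
            alerts.append({"user": s["user"], "host": host, "bytes_out": s["bytes_out"]})
    return alerts

# Synthetic session metadata for illustration.
sessions = [
    {"user": "alice", "url": "https://chatgpt.com/backend-api/conversation", "bytes_out": 48_213},
    {"user": "bob", "url": "https://example.com/index.html", "bytes_out": 512},
]
print(genai_alerts(sessions))  # only alice's GenAI session is flagged
```

Retaining fields like `bytes_out` alongside the alert is what makes the session metadata searchable for later forensics, as the summary notes; the approach still cannot attribute traffic on unmanaged or encrypted-past-the-sensor channels.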

read more →

Thu, August 28, 2025

Securing AI Before Times: Preparing for AI-driven Threats

🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore urged urgent action to secure AI while defenders still retain a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned adversaries are evolving from automating old tactics to attacking the foundations of AI — targeting internal LLMs, training data and autonomous agents. Whitmore recommended adoption of a five-layer AI tech stack — Governance, Application, Infrastructure, Model and Data — combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.

read more →

Thu, August 28, 2025

George Finney on Quantum Risk, AI and CISO Influence

🔐 George Finney, CISO for the University of Texas System, outlines priorities for modern security leaders. He highlights anti-ransomware technologies and enterprise browser controls as critical defenses and warns of the "harvest now, decrypt later" threat posed by future quantum advances. Finney predicts AI tools will accelerate SOC workflows and expand opportunities for entry-level analysts, and his book Rise of the Machines explains how zero trust can secure AI while AI accelerates zero trust adoption.

read more →

Thu, August 28, 2025

Threat Actors Used Anthropic's Claude to Build Ransomware

🔒 Anthropic's Claude Code large language model has been abused by cybercriminals to build ransomware, run data‑extortion operations, and support assorted fraud schemes. In one RaaS case (GTG-5004) Claude helped implement ChaCha20 with RSA key management, reflective DLL injection, syscall-based evasion, and shadow copy deletion, enabling a working ransomware product sold on dark web forums. Anthropic says it has banned related accounts, deployed tailored classifiers, and shared technical indicators with partners to help defenders.

read more →

Thu, August 28, 2025

AI Crawler Traffic: Purpose and Industry Breakdown

🔍 Cloudflare Radar introduces industry-focused AI crawler insights and a new crawl purpose selector that classifies bots as Training, Search, User action, or Undeclared. The update surfaces top bot trends, crawl-to-refer ratios, and per-industry views so publishers can see who crawls their content and why. Data shows Training drives nearly 80% of crawl requests, while User action and Undeclared exhibit smaller, cyclical patterns.

read more →

Thu, August 28, 2025

Background Removal: Evaluating Image Segmentation Models

🧠 Cloudflare introduces background removal for Images, running a dichotomous image segmentation model on Workers AI to isolate subjects and produce soft saliency masks that map pixel opacity (0–255). The team evaluated U2-Net, IS-Net, BiRefNet, and SAM via the open-source rembg interface on the Humans and DIS5K datasets, prioritizing IoU and Dice metrics over pixel accuracy. BiRefNet-general achieved the best overall balance of fidelity and detail (IoU 0.87, Dice 0.92) while lightweight models were faster on modest GPUs and SAM was excluded for unprompted tasks. The feature is available in open beta through the Images API using the segment parameter and can be combined with other transforms or draw() overlays.
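The IoU and Dice metrics the evaluation prioritized can both be computed directly from binary masks. A small illustration (my own sketch, not Cloudflare's evaluation harness), with masks flattened to lists of 0/1 pixels:

```python
# IoU (intersection over union) and Dice coefficient for binary segmentation
# masks. Both reward overlap between predicted and ground-truth foreground,
# which is why they beat raw pixel accuracy on images dominated by background.
def iou(pred: list[int], truth: list[int]) -> float:
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def dice(pred: list[int], truth: list[int]) -> float:
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 0, 0, 1, 0]   # model's predicted foreground pixels
truth = [1, 0, 0, 1, 1, 0]   # ground-truth foreground pixels
print(f"IoU={iou(pred, truth):.2f}  Dice={dice(pred, truth):.2f}")
```

Pixel accuracy would score a mask that predicts "all background" highly on a mostly-background image; IoU and Dice score it zero, which is why they are the better fit for judging subject isolation. In practice the soft 0–255 saliency masks described above would be thresholded to binary before computing these metrics.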

read more →