All news in category "AI and Security Pulse"
Mon, September 29, 2025
Anthropic Claude Sonnet 4.5 Now Available in Bedrock
🚀 Anthropic’s Claude Sonnet 4.5 is now available through Amazon Bedrock, providing managed API access to the company’s most capable model. The model leads the SWE-bench Verified benchmark, with improved instruction following, stronger code-refactoring judgment, and more production-ready code generation. Bedrock adds automated context editing and a memory tool that extend usable context and improve accuracy for long-running agents, with availability across global AWS Regions.
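For teams wiring the model into an application, access goes through the standard Bedrock runtime API. A minimal sketch using boto3's Converse API is below; the model ID shown is an assumption — check the Bedrock console for the exact identifier available in your region.

```python
import boto3

# Bedrock runtime client; use a region where the model is available.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID for Claude Sonnet 4.5 -- verify in the Bedrock console.
MODEL_ID = "anthropic.claude-sonnet-4-5-20250929-v1:0"

response = client.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user",
               "content": [{"text": "Refactor this function for clarity: ..."}]}],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```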
Mon, September 29, 2025
OpenAI Trials Free ChatGPT Plus and Expands $4 GPT Go
🔔 OpenAI is testing a limited free trial for ChatGPT Plus while expanding its lower-cost $4 GPT Go plan to Indonesia after an initial launch in India. Some existing users see a “start free trial” prompt on the ChatGPT pricing page, though new accounts may be excluded to limit abuse. The $4 option and the $20 Plus tier both provide access to GPT-5 with differing levels of memory, image creation, and research capabilities, and a $200 Pro tier targets heavier professional use.
Mon, September 29, 2025
OpenAI Routes GPT-4o Conversations to Safety Models
🔒 OpenAI confirmed that when ChatGPT detects sensitive, emotional, or potentially harmful activity in a GPT-4o conversation, it may route individual messages to a dedicated safety model, reported by some users as gpt-5-chat-safety. The switch happens on a per-message, temporary basis, and ChatGPT will indicate which model is active if asked. The routing is a built-in part of the service's safety architecture and cannot be turned off by users; OpenAI says it helps strengthen safeguards and learn from real-world use before wider rollouts.
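OpenAI has not published the routing logic, but the described behavior — a per-message, temporary switch — resembles a classifier gate in front of the main model. The sketch below is purely illustrative; the classifier and both model names are stand-ins, not OpenAI's implementation.

```python
# Illustrative sketch of per-message safety routing -- not OpenAI's code.

def classify_sensitivity(message: str) -> float:
    """Hypothetical classifier scoring emotional/harm-related content (0..1)."""
    triggers = ("self-harm", "suicide", "violence")
    return 1.0 if any(t in message.lower() for t in triggers) else 0.0

def route(message: str) -> str:
    # The switch is per-message and temporary: each new message is
    # re-evaluated from scratch rather than sticking to the safety model.
    if classify_sensitivity(message) > 0.5:
        return "gpt-5-chat-safety"   # safety model reported by users
    return "gpt-4o"                  # default model

print(route("What's the weather like?"))            # gpt-4o
print(route("I've been thinking about self-harm"))  # gpt-5-chat-safety
```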
Mon, September 29, 2025
AI Becomes Essential in SOCs as Alert Volumes Soar
🔍 Security leaders report a breaking point as daily alert volumes average 960 and large enterprises exceed 3,000, forcing teams to leave many incidents uninvestigated. A survey of 282 security leaders shows AI has moved from experiment to strategic priority, with 55% deploying AI copilots for triage, detection tuning, and threat hunting. Organizations cite data privacy, integration complexity, and explainability as primary barriers while projecting AI will handle roughly 60% of SOC workloads within three years. Prophet Security is highlighted as an agentic AI SOC platform that automates triage and accelerates investigations to reduce dwell time.
Mon, September 29, 2025
Notion 3.0 Agents Expose Prompt-Injection Risk to Data
⚠️ Notion 3.0 introduces AI agents that, the author argues, create a dangerous attack surface. The attack exploits Simon Willison’s lethal trifecta—access to private data, exposure to untrusted content, and the ability to communicate externally—by hiding instructions as white-on-white text in a PDF that directs the model to collect client data and exfiltrate it via a constructed URL. The post warns that current agentic systems cannot reliably distinguish trusted commands from malicious inputs and urges caution before deployment.
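One commonly suggested mitigation is to deny the third leg of the trifecta — external communication — by allowlisting where an agent's tool calls may send data. A minimal sketch follows; the allowlist contents and function names are assumptions for illustration.

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist for an agent's outbound requests.
ALLOWED_HOSTS = {"api.notion.com"}

def egress_allowed(url: str) -> bool:
    """Reject tool calls to hosts outside the allowlist, cutting off the
    exfiltration channel even if hidden instructions reach the model."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

# A constructed URL carrying private data, as in the hidden-PDF attack:
print(egress_allowed("https://attacker.example/collect?data=client-list"))  # False
print(egress_allowed("https://api.notion.com/v1/pages"))                    # True
```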
Mon, September 29, 2025
Agentic AI: A Looming Enterprise Security Crisis — Governance
⚠️ Many organizations are moving too quickly into agentic AI and risk major security failures unless boards embed governance and security from day one. The article argues that the shift from AI giving answers to AI taking actions moves the control surface to identity, privilege, and oversight, and that most programs lack cross‑functional accountability. It recommends forming an Agentic Governance Council, defining measurable objectives, and building zero-trust guardrails, and it highlights Prisma AIRS as a platform approach to restore visibility and control.
Mon, September 29, 2025
Microsoft Warns of LLM-Crafted SVG Phishing Campaign
🛡️ Microsoft flagged a targeted phishing campaign that used AI-assisted code to hide malicious payloads inside SVG files. Attackers sent messages from a compromised business account, using self-addressed emails with hidden BCC recipients and an SVG disguised as a PDF that executed embedded JavaScript to redirect users through a CAPTCHA to a fake login page. Microsoft noted that the SVG's verbose, business-analytics style — flagged by Security Copilot — was likely produced by an LLM. The activity was limited and blocked, but organizations should scrutinize scriptable image formats and unusual self-addressed messages.
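Because SVG is XML, mail-gateway checks can look for executable content directly. The rough sketch below flags script elements and event-handler attributes; it is a heuristic only — production gateways also handle obfuscation and nested payloads.

```python
import xml.etree.ElementTree as ET

def svg_has_script(svg_bytes: bytes) -> bool:
    """Flag SVGs carrying executable content: <script> elements or
    on* event-handler attributes (onload, onclick, ...)."""
    root = ET.fromstring(svg_bytes)
    for elem in root.iter():
        if elem.tag.rsplit("}", 1)[-1].lower() == "script":
            return True
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True
    return False

malicious = (b'<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)">'
             b'<script>/*...*/</script></svg>')
print(svg_has_script(malicious))  # True
```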
Mon, September 29, 2025
Agentic AI in IT Security: Expectations vs Reality
🛡️ Agentic AI is moving from lab experiments into real-world SOC deployments, where autonomous agents triage alerts, correlate signals across tools, enrich context, and in some cases enact first-line containment. Early adopters report fewer mundane tasks for analysts, faster initial response, and reduced alert fatigue, while noting limits around noisy data, false positives, and opaque reasoning. Most teams begin with bolt-on integrations into existing SIEM/SOAR pipelines to minimize disruption, treating standalone orchestration as a second-phase maturity step.
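The bolt-on pattern typically means the agent consumes alerts from the existing SIEM and writes enrichment back, leaving the pipeline intact. A schematic sketch is below; the endpoints and field names are invented for illustration, not any vendor's API.

```python
import requests

# Hypothetical SIEM API base -- a stand-in for whatever your SIEM/SOAR exposes.
SIEM = "https://siem.example.internal/api"

def triage_alert(alert: dict) -> dict:
    """First-line triage: enrich and annotate -- never auto-close."""
    # Enrich with context from other tools (threat intel, asset inventory, ...).
    intel = requests.get(f"{SIEM}/intel",
                         params={"ip": alert["src_ip"]}, timeout=10).json()
    verdict = "suspicious" if intel.get("reputation", 100) < 50 else "benign"
    # Write the enrichment back into the existing pipeline rather than replacing it.
    requests.post(f"{SIEM}/alerts/{alert['id']}/notes",
                  json={"agent_verdict": verdict, "intel": intel}, timeout=10)
    return {"id": alert["id"], "verdict": verdict}
```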
Fri, September 26, 2025
Microsoft Photos adds AI Auto-Categorization on Windows
🤖 Microsoft is testing a new AI-powered Auto-Categorization capability in Microsoft Photos on Windows 11, rolling out to Copilot+ PCs across all Windows Insider channels. The feature automatically groups images into predefined folders — screenshots, receipts, identity documents, and notes — using a language-agnostic model that recognizes document types regardless of image language. Users can locate categorized items via the left navigation pane or Search bar, manually reassign categories, and submit feedback to improve accuracy. Microsoft has not yet clarified whether images are processed locally or sent to its servers.
Fri, September 26, 2025
How Scammers Use AI: Deepfakes, Phishing and Scams
⚠️ Generative AI is enabling scammers to produce highly convincing deepfakes, authentic-looking phishing sites, and automated voice bots that facilitate fraud and impersonation. Kaspersky explains how techniques such as AI-driven catfishing and “pig butchering” scale emotional manipulation, while browser AI agents and automated callers can inadvertently vouch for or even complete fraudulent transactions. The post recommends concrete defenses: verify contacts through separate channels, refuse to share codes or card numbers, request live verification during calls, limit AI agent permissions, and use reliable security tools with link‑checking.
Fri, September 26, 2025
Hidden Cybersecurity Risks of Deploying Generative AI
⚠️ Organizations eager to deploy generative AI often underestimate the cybersecurity risks, from AI-driven phishing to model manipulation and deepfakes. The article, sponsored by Acronis, warns that many firms—especially smaller businesses—lack processes to assess AI security before deployment. It urges embedding security into development pipelines, continuous model validation, and unified defenses across endpoints, cloud and AI workloads.
Fri, September 26, 2025
Generative AI Infrastructure Faces Growing Cyber Risks
🛡️ A Gartner survey found 29% of security leaders reported generative AI applications in their organizations were targeted by cyberattacks over the past year, and 32% said prompt-structure vulnerabilities had been deliberately exploited. Chatbot assistants are singled out as particularly vulnerable to prompt-injection and hostile prompting. Additionally, 62% of companies experienced deepfake attacks, often combined with social engineering or automated techniques. Gartner recommends strengthening core controls and applying targeted measures for each new risk category rather than pursuing radical overhauls. The survey of 302 security leaders was conducted March–May 2025 across North America, EMEA and Asia‑Pacific.
Fri, September 26, 2025
The Dawn of the Agentic SOC: Reimagining Security Now
🔐 At Fal.Con 2025, CrowdStrike CEO George Kurtz outlined a shift from reactive SOCs to an agentic model where intelligent agents reason, decide, act, and learn across domains. CrowdStrike introduced seven AI agents within its Charlotte framework for exposure prioritization, malware analysis, hunting, search, correlation rules, data transformation and workflow generation, and is enabling customers to build custom agents. The company highlights a proprietary "data moat" of trillions of telemetry events and annotated MDR threat data as the foundation for training agents, and announced the acquisition of Pangea to protect AI agents and launch AIDR (AI Detection and Response). The vision places humans as orchestrators overseeing fleets of agents, accelerating detection and response while preserving accountability.
Thu, September 25, 2025
Adapting Enterprise Risk Management for Generative AI
🛡️ This post explains how to adapt enterprise risk management frameworks to safely scale cloud-based generative AI, combining governance foundations with practical controls. It emphasizes the cloud as the foundational infrastructure and identifies differences from on‑premises models that change risk profiles and vendor relationships. The guidance maps traditional ERMF elements to AI-specific controls across fairness, explainability, privacy/security, safety, controllability, veracity/robustness, governance, and transparency, and references tools such as Amazon Bedrock Guardrails, SageMaker Clarify, and the ISO/IEC 42001 standard to operationalize those controls.
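On the tooling side, Amazon Bedrock Guardrails can encode some of those controls as configuration. A minimal sketch creating a guardrail with one denied topic and a content filter is below; the names and policy values are illustrative — consult the boto3 documentation for the full schema.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Illustrative guardrail: a denied topic plus a content filter, as examples
# of ERM controls (controllability, safety) expressed as configuration.
guardrail = bedrock.create_guardrail(
    name="erm-demo-guardrail",
    description="Example controls mapped from an ERM framework",
    topicPolicyConfig={"topicsConfig": [{
        "name": "financial-advice",
        "definition": "Providing personalized investment recommendations.",
        "type": "DENY",
    }]},
    contentPolicyConfig={"filtersConfig": [
        {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
    ]},
    blockedInputMessaging="This request is outside the approved use of this assistant.",
    blockedOutputsMessaging="The response was blocked by policy.",
)
print(guardrail["guardrailId"])
```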
Thu, September 25, 2025
Enabling Enterprise Risk Management for Generative AI
🔒 This article frames responsible generative AI adoption as a core enterprise concern and urges business leaders, CROs, and CIAs to embed controls across the ERM lifecycle. It highlights unique risks—non‑deterministic outputs, deepfakes, and layered opacity—and maps mitigation approaches using AWS CAF for AI, ISO/IEC 42001, and the NIST AI RMF. The post advocates enterprise‑level governance rather than project‑by‑project fixes to sustain innovation while managing harm.
Thu, September 25, 2025
Malicious MCP Server Update Exfiltrated Emails to Developer
⚠️ Koi Security has reported that a widely used Model Context Protocol (MCP) implementation, Postmark MCP Server by @phanpak, introduced a malicious change in version 1.0.16 that silently copied emails to an external server. The package, distributed via npm and embedded into hundreds of developer workflows, had more than 1,500 weekly downloads. Users who installed v1.0.16 or later are advised to remove the package immediately and rotate any potentially exposed credentials.
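A quick way to audit a repository is to walk its npm lockfile for the affected package and version range. A sketch follows; the package name "postmark-mcp" is an assumption based on the report, so verify it against your own dependency tree.

```python
import json

# Assumed npm package name from the report; confirm before acting on results.
PACKAGE = "postmark-mcp"
BAD_FROM = (1, 0, 16)  # malicious change reportedly introduced in 1.0.16

def is_affected(version: str) -> bool:
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= BAD_FROM

with open("package-lock.json") as f:
    lock = json.load(f)

# npm v7+ lockfiles list every installed package under "packages".
for path, meta in lock.get("packages", {}).items():
    if path.endswith(f"node_modules/{PACKAGE}") and is_affected(meta.get("version", "0.0.0")):
        print(f"AFFECTED: {path} @ {meta['version']} -- remove and rotate credentials")
```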
Thu, September 25, 2025
AI Coding Assistants Elevate Deep Security Risks Now
⚠️ Research and expert interviews indicate that AI coding assistants cut trivial syntax errors but increase more costly architectural and privilege-related flaws. Apiiro found AI-generated code produced fewer shallow bugs yet more misconfigurations, exposed secrets, and larger multi-file pull requests that overwhelm reviewers. Experts urge preserving human judgment, adding integrated security tooling, strict review policies, and traceability for AI outputs to avoid automating risk at scale.
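Exposed secrets are one class of flaw that simple, integrated tooling catches well. A bare-bones pre-merge scan is sketched below; real scanners such as gitleaks or trufflehog ship far richer rule sets, and these three patterns are only examples.

```python
import re
import sys
import pathlib

# A few high-signal patterns; production scanners ship hundreds of rules.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic secret": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
}

def scan(root: str = ".") -> int:
    hits = 0
    for path in pathlib.Path(root).rglob("*.py"):  # extend globs as needed
        text = path.read_text(errors="ignore")
        for label, pat in PATTERNS.items():
            for m in pat.finditer(text):
                print(f"{path}: {label}: {m.group()[:40]}...")
                hits += 1
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)  # fail the CI job when secrets are found
```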
Wed, September 24, 2025
OpenAI Is Testing GPT-Alpha, a GPT-5-Based AI Agent
🧪 OpenAI is internally testing a new AI agent, GPT-Alpha, built on a special GPT-5 variant and briefly exposed to users in an accidental push. A screenshot shared on X showed an 'Agent with Truncation' listing under Alpha Models, and the agent's system prompt outlines capabilities to browse the web, generate and edit images, write, run, and debug code, and create or edit documents, spreadsheets, and slides. OpenAI says the agent uses GPT-5 for advanced reasoning and tool use and may initially be offered as a paid feature due to increased compute demands.
Wed, September 24, 2025
GenSec CTF at DEF CON: Accelerating AI in Security
🔒 At DEF CON 33, Google and Airbus hosted the GenSec Capture the Flag (CTF) to promote human–AI collaboration and accelerate adoption of AI in cybersecurity workflows. Nearly 500 participants completed introductory challenges, 23% used AI for security for the first time, and 85% found the event useful for learning practical AI applications. The CTF also featured Sec-Gemini as an optional assistant in the UI; 77% of respondents rated it very or extremely helpful, and organizers are incorporating feedback into future iterations.
Wed, September 24, 2025
Responsible AI Bot Principles to Protect Web Content
🛡️ Cloudflare proposes five practical principles to guide responsible AI bot behavior and protect web publishers, users, and infrastructure. The framework stresses public disclosure, reliable self-identification (moving toward cryptographic verification such as Web Bot Auth), a declared single purpose for crawlers, and respect for operator preferences via robots.txt or headers. Operators must also avoid deceptive or high-volume crawling, and Cloudflare invites multi-stakeholder collaboration to refine and adopt these norms.
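Respecting operator preferences is straightforward to implement on the crawler side. A minimal sketch using Python's standard-library robots.txt parser is below; the user-agent string and URLs are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Placeholder user-agent for a self-identifying, single-purpose crawler.
USER_AGENT = "ExampleAIBot/1.0 (+https://example.com/bot)"

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the operator's preferences

url = "https://example.com/articles/some-post"
if rp.can_fetch(USER_AGENT, url):
    print("fetch allowed")
else:
    print("operator disallows crawling; skipping", url)
```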