Tag Banner

All news with the #agentic ai tag

Tue, September 30, 2025

Microsoft Expands Sentinel into Agentic Security Platform

🔒 Microsoft announced the general availability of the Sentinel data lake and public previews of Sentinel Graph and the Sentinel Model Context Protocol (MCP) server. The release broadens Sentinel from a traditional SIEM into a unified, agentic security platform designed to ingest and correlate structured and semi-structured signals at scale. It is intended to give AI agents such as Security Copilot, as well as developer tools such as VS Code with GitHub Copilot, richer contextual access for detection, retroactive hunting, and automated response, while integrating with Defender and Purview.

read more →

Tue, September 30, 2025

Microsoft Sentinel: Agentic Platform for Defenders Now

🛡️ Microsoft announced expanded agentic security capabilities in Microsoft Sentinel, including the general availability of the Sentinel data lake and public preview of Sentinel Graph and the Model Context Protocol (MCP) server to enable AI agents to reason over unified security data. Sentinel ingests structured and semi-structured signals, builds vectorized, graph-based context, and integrates with Microsoft Defender and Microsoft Purview. Security Copilot now offers a no-code agent builder and developer workflows via VS Code/GitHub Copilot, while enhanced governance controls (Entra Agent ID, PII guardrails, prompt shields) aim to secure agent lifecycles.

read more →

Tue, September 30, 2025

Stop Alert Chaos: Contextual SOCs Improve Incident Response

🔍 The Hacker News piece argues that traditional, rule‑driven SOCs produce overwhelming alert noise that prevents timely, accurate incident response. It advocates flipping the model to treat incoming signals as parts of a larger story—normalizing, correlating, and enriching logs across identity, endpoints, cloud workloads, and SIEMs so analysts receive coherent investigations rather than isolated alerts. The contributed article presents Conifers and its CognitiveSOC™ platform as an example of agentic AI that automates multi‑tier investigations, reduces false positives, and shortens MTTR while keeping human judgment central.

read more →

Tue, September 30, 2025

How Falcon ASPM Secures GenAI Applications at CrowdStrike

🔒 Falcon ASPM provides continuous, code-level visibility to secure generative and agentic AI applications such as Charlotte AI. It detects drift in real time, produces a runtime SBOM, and maps architecture and data flows to flag reachable vulnerabilities, hardcoded credentials, and anomalous service behaviors. Contextualized alerts and mitigation guidance help teams prioritize fixes and reduce exploitable risk across complex microservice environments.

read more →

Mon, September 29, 2025

Anthropic's Claude Sonnet 4.5 Now Available on Vertex AI

🚀 Anthropic’s Claude Sonnet 4.5 is now generally available on Vertex AI, delivering advanced long-horizon autonomy for agents across coding, finance, research, and cybersecurity. The model can operate independently for hours, orchestrating tools and coordinating multiple agents to complete complex, multi-step tasks. Vertex AI provides orchestration, provisioning, security controls, and developer tooling, and includes Claude Code upgrades like a VS Code extension and an improved terminal interface.

read more →

Mon, September 29, 2025

Boards Should Be Bilingual: AI and Cybersecurity Strategy

🔐 Boards and security leaders should become bilingual in AI and cybersecurity to manage growing risks and unlock strategic value. As AI adoption increases, models and agents expand the attack surface, requiring hardened data infrastructure, tighter access controls, and clearer governance. Boards that learn to speak both languages can better oversee investments, M&A decisions, and cross-functional resilience while using AI to strengthen defense and competitive advantage.

read more →

Mon, September 29, 2025

Can AI Reliably Write Vulnerability Detection Checks?

🔍 Intruder’s security team tested whether large language models can write Nuclei vulnerability templates and found one-shot LLM prompts often produced invalid or weak checks. Using an agentic approach with Cursor—indexing a curated repo and applying rules—yielded outputs much closer to engineer-written templates. The current workflow uses standard prompts and rules so engineers can focus on validation and deeper research while AI handles repetitive tasks.

read more →

Mon, September 29, 2025

Anthropic Claude Sonnet 4.5 Now Available in Bedrock

🚀 Anthropic’s Claude Sonnet 4.5 is now available through Amazon Bedrock, providing managed API access to the company’s most capable model. The model leads SWE-bench Verified benchmarks with improved instruction following, stronger code-refactoring judgment, and enhanced production-ready code generation. Bedrock adds automated context editing and a memory tool to extend usable context and boost accuracy for long-running agents across global regions.

read more →

Mon, September 29, 2025

AI Becomes Essential in SOCs as Alert Volumes Soar

🔍 Security leaders report a breaking point as daily alert volumes average 960 and large enterprises exceed 3,000, forcing teams to leave many incidents uninvestigated. A survey of 282 security leaders shows AI has moved from experiment to strategic priority, with 55% deploying AI copilots for triage, detection tuning, and threat hunting. Organizations cite data privacy, integration complexity, and explainability as primary barriers while projecting AI will handle roughly 60% of SOC workloads within three years. Prophet Security is highlighted as an agentic AI SOC platform that automates triage and accelerates investigations to reduce dwell time.

read more →

Mon, September 29, 2025

Notion 3.0 Agents Expose Prompt-Injection Risk to Data

⚠️ Notion 3.0 introduces AI agents that, the author argues, create a dangerous attack surface. The vulnerability exploits Simon Willison's lethal trifecta—access to private data, exposure to untrusted content, and the ability to communicate externally—by hiding executable instructions in a white-on-white PDF that instructs the model to collect and exfiltrate client data via a constructed URL. The post warns that current agentic systems cannot reliably distinguish trusted commands from malicious inputs and urges caution before deployment.
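One mitigation the trifecta argument implies is cutting the third leg—external communication. A minimal sketch of an egress allowlist an agent host could enforce before any outbound fetch (the host names here are illustrative, not Notion's actual policy):

```typescript
// Hypothetical egress filter: an agent runtime consults this check before
// fetching any URL an agent (or injected prompt) asks it to contact.
const ALLOWED_HOSTS = new Set<string>(["api.notion.com"]);

function isAllowedEgress(rawUrl: string): boolean {
  try {
    const url = new URL(rawUrl);
    return ALLOWED_HOSTS.has(url.hostname);
  } catch {
    return false; // malformed URLs are rejected outright
  }
}

// An attacker-constructed exfiltration URL embeds stolen data in the query
// string and points at an attacker host, so the allowlist blocks it:
isAllowedEgress("https://attacker.example/log?data=client-records"); // blocked
isAllowedEgress("https://api.notion.com/v1/pages");                  // permitted
```

An allowlist does not stop the injection itself, only the exfiltration step, which is why the post still urges caution before deploying agents against untrusted content.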

read more →

Mon, September 29, 2025

Agentic AI: A Looming Enterprise Security Crisis — Governance

⚠️ Many organizations are moving too quickly into agentic AI and risk major security failures unless boards embed governance and security from day one. The article argues that the shift from AI giving answers to AI taking actions changes the control surface to identity, privilege and oversight, and that most programs lack cross‑functional accountability. It recommends forming an Agentic Governance Council, defining measurable objectives and building zero trust guardrails, and highlights Prisma AIRS as a platform approach to restore visibility and control.

read more →

Mon, September 29, 2025

First Malicious MCP Server Found in NPM Postmark Package

🛡️ Cybersecurity researchers at Koi Security reported the first observed malicious Model Context Protocol (MCP) server embedded in an npm package, a trojanized copy of the postmark-mcp library. The malicious change, introduced in version 1.0.16 in September 2025 by developer "phanpak", added a one-line backdoor that BCCs every outgoing email to phan@giftshop[.]club. Users who installed the package should remove it immediately, rotate any potentially exposed credentials, and review email logs for unauthorized BCC activity.
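For teams checking exposure, a quick sketch of the kind of check the advisory implies—looking for the trojanized 1.0.16 release under `node_modules` (the path and version match the report; the script itself is illustrative):

```shell
# check_postmark_mcp: prints AFFECTED if the trojanized 1.0.16 release is present.
# Takes an optional path to the package's package.json.
check_postmark_mcp() {
  pkg="${1:-node_modules/postmark-mcp/package.json}"
  if [ -f "$pkg" ] && grep -q '"version": *"1\.0\.16"' "$pkg"; then
    echo "AFFECTED: remove postmark-mcp, rotate credentials, audit email logs for BCCs"
  else
    echo "postmark-mcp 1.0.16 not found"
  fi
}
check_postmark_mcp
```

A match should be followed by the steps the researchers recommend: uninstall the package, rotate any credentials it could have seen, and review outgoing-mail logs for the unauthorized BCC address.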

read more →

Mon, September 29, 2025

Amazon Bedrock Now Available in Israel (Tel Aviv) Region

🚀 Beginning today, Amazon Bedrock is available in the Israel (Tel Aviv) region, enabling customers to build and scale generative AI applications with local infrastructure. The managed service connects organizations to a variety of foundation models (FMs) and provides tools to deploy and operate agents, reducing time-to-production. Local availability can lower latency, support regional compliance needs, and help move projects from experimentation to real-world deployment.

read more →

Mon, September 29, 2025

Agentic AI in IT Security: Expectations vs Reality

🛡️ Agentic AI is moving from lab experiments into real-world SOC deployments, where autonomous agents triage alerts, correlate signals across tools, enrich context, and in some cases enact first-line containment. Early adopters report fewer mundane tasks for analysts, faster initial response, and reduced alert fatigue, while noting limits around noisy data, false positives, and opaque reasoning. Most teams begin with bolt-on integrations into existing SIEM/SOAR pipelines to minimize disruption, treating standalone orchestration as a second-phase maturity step.

read more →

Fri, September 26, 2025

How Scammers Use AI: Deepfakes, Phishing and Scams

⚠️ Generative AI is enabling scammers to produce highly convincing deepfakes, authentic-looking phishing sites, and automated voice bots that facilitate fraud and impersonation. Kaspersky explains how techniques such as AI-driven catfishing and “pig butchering” scale emotional manipulation, while browser AI agents and automated callers can inadvertently vouch for or even complete fraudulent transactions. The post recommends concrete defenses: verify contacts through separate channels, refuse to share codes or card numbers, request live verification during calls, limit AI agent permissions, and use reliable security tools with link‑checking.

read more →

Fri, September 26, 2025

Hidden Cybersecurity Risks of Deploying Generative AI

⚠️ Organizations eager to deploy generative AI often underestimate the cybersecurity risks, from AI-driven phishing to model manipulation and deepfakes. The article, sponsored by Acronis, warns that many firms—especially smaller businesses—lack processes to assess AI security before deployment. It urges embedding security into development pipelines, continuous model validation, and unified defenses across endpoints, cloud and AI workloads.

read more →

Fri, September 26, 2025

Cloudflare AI Index: Site-Controlled Discovery and Monetization

🔍 Cloudflare is launching a private beta of AI Index, a per-domain, AI‑optimized search index that site owners control and can monetize via Pay per crawl and x402 integrations. The service automatically builds and maintains indexes and exposes standardized APIs — an MCP server, LLMs.txt, a search API, bulk transfer endpoints, and pub/sub subscriptions for real-time updates. It integrates with AI Crawl Control so owners can set access rules or opt out entirely.

read more →

Fri, September 26, 2025

Code Mode: Using MCP with Generated TypeScript APIs

🧩 Cloudflare introduces Code Mode, a new approach that converts Model Context Protocol (MCP) tool schemas into a generated TypeScript API so LLMs write code instead of emitting synthetic tool-call tokens. This lets models leverage broad exposure to real-world TypeScript, improving correctness when selecting and composing many or complex tools. Code Mode executes the generated code inside fast, sandboxed Cloudflare Workers isolates that expose only typed bindings to authorized MCP servers, preserving MCP's uniform authorization and discovery while reducing token overhead and orchestration latency.
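To make the idea concrete, here is a hand-written sketch of what a generated binding for a single MCP tool might look like under this approach—the interface and function names are illustrative assumptions, not Cloudflare's actual generated code:

```typescript
// A tool schema like { name: "search", input: { query: string, limit?: number } }
// becomes a typed function, so the model writes ordinary TypeScript instead of
// emitting synthetic tool-call tokens.
interface SearchResult {
  title: string;
  url: string;
}

interface McpBindings {
  search(input: { query: string; limit?: number }): Promise<SearchResult[]>;
}

// Inside the sandboxed isolate only these typed bindings are exposed; the model
// composes them with normal control flow (loops, conditionals, await):
async function topTitles(mcp: McpBindings, query: string): Promise<string[]> {
  const results = await mcp.search({ query, limit: 3 });
  return results.map((r) => r.title);
}
```

Because models have seen far more real-world TypeScript than bespoke tool-call syntax, code like `topTitles` is the kind of composition Code Mode bets they will get right more often.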

read more →

Fri, September 26, 2025

MCP supply-chain attack via squatted Postmark connector

🔒 A malicious npm package, postmark-mcp, was weaponized to stealthily copy outgoing emails by inserting a hidden BCC in version 1.0.16. The package impersonated an MCP Postmark connector and forwarded every message to an attacker-controlled address, exposing password resets, invoices, and internal correspondence. The backdoor was a single line of code and remained available through regular downloads before the package was removed. Koi Security advises immediate removal, credential rotation, and audits of all MCP connectors.

read more →

Fri, September 26, 2025

The Dawn of the Agentic SOC: Reimagining Security Now

🔐 At Fal.Con 2025, CrowdStrike CEO George Kurtz outlined a shift from reactive SOCs to an agentic model where intelligent agents reason, decide, act, and learn across domains. CrowdStrike introduced seven AI agents within its Charlotte framework for exposure prioritization, malware analysis, hunting, search, correlation rules, data transformation and workflow generation, and is enabling customers to build custom agents. The company highlights a proprietary "data moat" of trillions of telemetry events and annotated MDR threat data as the foundation for training agents, and announced the acquisition of Pangea to protect AI agents and launch AIDR (AI Detection and Response). The vision places humans as orchestrators overseeing fleets of agents, accelerating detection and response while preserving accountability.

read more →