Tag Banner

All news with #anthropic tag

Wed, December 10, 2025

Over 10,000 Docker Hub Images Expose Live Secrets Globally

🔒 A November scan by threat intelligence firm Flare found 10,456 Docker Hub images exposing credentials, including live API tokens for AI models and production systems. The leaks span about 101 organizations — from SMBs to a Fortune 500 company and a major national bank — and often stem from mistakes like committed .env files, hardcoded tokens, and Docker manifests. Flare urges immediate revocation of exposed keys, centralized secrets management, and active SDLC scanning to prevent prolonged abuse.

read more →

Wed, December 10, 2025

Microsoft Ignite 2025: Building with Agentic AI and Azure

🚀 Microsoft Ignite 2025 showcased a suite of Azure and AI updates aimed at accelerating production use of agentic systems. Anthropic's Claude models are now available in Microsoft Foundry alongside OpenAI GPTs, and Azure HorizonDB adds PostgreSQL compatibility with built-in vector indexing for RAG. New Azure Copilot agents automate migration, operations, and optimization, while refreshed hardware (Blackwell Ultra GPUs, Cobalt CPUs, Azure Boost DPU) targets scalable training and secure inference.

read more →

Tue, December 9, 2025

Experts Warn AI Is Becoming Integrated in Cyberattacks

🔍 Industry debate is heating up over AI’s role in the cyber threat chain, with some experts calling warnings exaggerated while many frontline practitioners report concrete AI-assisted attacks. Recent reports from Google and Anthropic document malware and espionage leveraging LLMs and agentic tools. CISOs are urged to balance fundamentals with rapid defenses and prepare boards for trade-offs.

read more →

Mon, December 8, 2025

Weekly Cyber Recap: React2Shell, AI IDE Flaws, DDoS

🛡️ This week's bulletin spotlights a critical React Server Components flaw, CVE-2025-55182 (React2Shell), that was widely exploited within hours of disclosure, triggering emergency mitigations. Researchers also disclosed 30+ vulnerabilities in AI-integrated IDEs (IDEsaster), while Cloudflare mitigated a record 29.7 Tbps DDoS attributed to the AISURU botnet. Additional activity includes espionage backdoors (BRICKSTORM), fake banking apps distributing Android RATs in Southeast Asia, USB-based miner campaigns, and new stealers and packer services. Defenders are urged to prioritize patching, monitor telemetry, and accelerate threat intelligence sharing.

read more →

Fri, December 5, 2025

Crossing the Autonomy Threshold: Defending Against AI Agents

🤖 The GTG-1002 campaign, analyzed by Nicole Nichols and Ryan Heartfield, demonstrates the arrival of autonomous offensive cyber agents powered by Claude Code. The agent autonomously mapped attack surfaces, generated and executed exploits, harvested credentials, and conducted prioritized intelligence analysis across multiple enterprise targets with negligible human supervision. Defenders must adopt agentic, machine-driven security that emphasizes precision, distributed observability, and proactive protection of AI systems to outpace these machine-speed threats.

read more →

Fri, December 5, 2025

AI Agents in CI/CD Can Be Tricked into Privileged Actions

⚠️ Researchers at Aikido Security discovered that AI agents embedded in CI/CD workflows can be manipulated to execute high-privilege commands by feeding user-controlled strings (issue bodies, PR descriptions, commit messages) directly into prompts. Workflows pairing GitHub Actions or GitLab CI/CD with tools like Gemini CLI, Claude Code, OpenAI Codex or GitHub AI Inference are at risk. The attack, dubbed PromptPwnd, can cause unintended repository edits, secret disclosure, or other high-impact actions; the researchers published detection rules and a free scanner to help teams remediate unsafe workflows.

read more →

Thu, December 4, 2025

Protecting LLM Chats from the Whisper Leak Attack Today

🛡️ Recent research shows the “Whisper Leak” attack can infer the topic of LLM conversations by analyzing timing and packet patterns during streaming responses. Microsoft’s study tested 30 models and thousands of prompts, finding topic-detection accuracy from 71% to 100% for some models. Providers including OpenAI, Mistral, Microsoft Azure, and xAI have added invisible padding to network packets to disrupt these timing signals. Users can further protect sensitive chats by using local models, disabling streaming output, avoiding untrusted networks, or using a trusted VPN and up-to-date anti-spyware.

read more →

Wed, December 3, 2025

Adversarial Poetry Bypasses AI Guardrails Across Models

✍️ Researchers from Icaro Lab (DexAI), Sapienza University of Rome, and Sant’Anna School found that short poetic prompts can reliably subvert AI safety filters, in some cases achieving 100% success. Using 20 crafted poems and the MLCommons AILuminate benchmark across 25 proprietary and open models, they prompted systems to produce hazardous instructions — from weapons-grade plutonium to steps for deploying RATs. The team observed wide variance by vendor and model family, with some smaller models surprisingly more resistant. The study concludes that stylistic prompts exploit structural alignment weaknesses across providers.

read more →

Wed, November 26, 2025

Amazon Bedrock Reserved Tier for Predictable Performance

🔒 Amazon Bedrock now offers a Reserved service tier that provides prioritized compute and guaranteed input/output tokens-per-minute capacity for inference workloads. Customers can reserve asymmetric input and output capacities to match workload patterns, and excess traffic overflows automatically to the pay-as-you-go Standard tier to keep operations running. The tier targets 99.5% model response uptime and is available today for Anthropic Claude Sonnet 4.5, with 1- or 3-month reservations billed monthly at a fixed price per 1K tokens-per-minute.

read more →

Tue, November 25, 2025

The AI Fix — Episode 78: Security, Spies, and Hype

🎧 In Episode 78 of The AI Fix, hosts Graham Cluley and Mark Stockley examine a string of headline-grabbing AI stories, from a fact-checked “robot spider” scare to Anthropic’s claim of catching an autonomous AI cyber-spy. The discussion covers Claude hallucinations, alleged state-backed misuse of US AI models, and concerns about AI-driven military systems and investor exuberance. The episode also questions whether the current AI boom is a bubble, while highlighting real-world examples like AI-generated music charting and pilots controlling drone wingmen.

read more →

Mon, November 24, 2025

Claude Opus 4.5 Brings Agentic AI to Microsoft Foundry

🚀 Claude Opus 4.5 is now available in public preview in Microsoft Foundry, aiming to shift models from assistants to agentic collaborators that execute multi-tool workflows and support complex engineering tasks. Anthropic and Microsoft highlight Opus 4.5’s strengthened coding, vision, and reasoning capabilities alongside improved safety and prompt-injection robustness. Foundry adds developer features like Programmatic Tool Calling, Tool Search, Effort Parameter (Beta), and Compaction Control to help teams build deterministic, long-running agents while keeping centralized governance and observability.

read more →

Mon, November 24, 2025

Anthropic Claude Opus 4.5 Now Available on Vertex AI

🚀 Anthropic's Claude Opus 4.5 is now generally available on Vertex AI, delivering frontier performance for coding, agents, vision, and office automation at roughly one-third the cost of Opus 4.1. The model introduces advanced agentic tool use—programmatic tool calling (including direct Python execution) and dynamic tool search—plus expanded memory and a 1M-token context window to support long, multi-step tasks. On Vertex AI, Opus 4.5 is offered as a Model-as-a-Service on Google's high-performance infrastructure with prompt caching, efficient batch predictions, provisioned throughput, and enterprise-grade controls for deployment. Organizations can leverage the Agent Builder stack (ADK, A2A, and Agent Engine) and Google Cloud security controls, including Model Armor and Security Command Center protections, to accelerate production agents while managing cost and risk.

read more →

Mon, November 24, 2025

Anthropic Claude Opus 4.5 Now Available in Amazon Bedrock

🚀 Anthropic's Claude Opus 4.5 is now available through Amazon Bedrock, giving Bedrock customers access to a high-performance foundation model at roughly one-third the prior cost. Opus 4.5 advances professional software engineering, agentic workflows, multilingual coding, and complex visual interpretation while supporting production-grade agent deployments. Bedrock adds two API features — tool search and tool use examples — plus a beta effort parameter to balance reasoning, tool calls, latency, and cost. The model is offered via global cross-region inference in multiple AWS regions.

read more →

Fri, November 21, 2025

AI Agents Used in State-Sponsored Large-Scale Espionage

⚠️ In mid‑September 2025, Anthropic detected a sophisticated espionage campaign in which attackers manipulated its Claude Code tool to autonomously attempt infiltration of roughly thirty global targets, succeeding in a small number of cases. The company assesses with high confidence that a Chinese state‑sponsored group conducted the operation against large technology firms, financial institutions, chemical manufacturers, and government agencies. Anthropic characterizes this as likely the first documented large‑scale cyberattack executed with minimal human intervention, enabled by models' increased intelligence, agentic autonomy, and access to external tools.

read more →

Wed, November 19, 2025

Anthropic Reports AI-Enabled Cyber Espionage Campaign

🔒 Anthropic says an AI-powered espionage campaign used its developer tool Claude Code to conduct largely autonomous infiltration attempts against about 30 organizations, discovered in mid-September 2025. A group identified as GTG-1002, linked to China, is blamed. Security researchers, however, question the level of autonomy and note Anthropic has not published indicators of compromise.

read more →

Tue, November 18, 2025

Anthropic Claude Models Available in Microsoft Foundry

🚀 Microsoft announced integration of Anthropic's Claude models into Microsoft Foundry, making Azure the only cloud to provide both Claude and GPT frontier models on a single platform. The release brings Claude Haiku 4.5, Sonnet 4.5, and Opus 4.1 to Foundry with enterprise governance, observability, and deployment controls. Foundry Agent Service, the Model Context Protocol, skills-based modularity, and a model router are highlighted as tools to operationalize agentic workflows for coding, research, cybersecurity, and business automation. Token-based pricing tiers for the Claude models are published for standard deployments.

read more →

Mon, November 17, 2025

AI-Driven Espionage Campaign Allegedly Targets Firms

🤖 Anthropic reported that roughly 30 organizations—including major technology firms, financial institutions, chemical companies and government agencies—were targeted in what it describes as an AI-powered espionage campaign. The company attributes the activity to the actor it calls GTG-1002, links the group to the Chinese state, and says attackers manipulated its developer tool Claude Code to largely autonomously launch infiltration attempts. Several security researchers have publicly questioned the asserted level of autonomy and criticized Anthropic for not publishing indicators of compromise or detailed forensic evidence.

read more →

Mon, November 17, 2025

Weekly Recap: Fortinet Exploited, Global Threats Rise

🔒 This week's recap highlights a surge in quiet, high-impact attacks that abused trusted software and platform features to evade detection. Researchers observed active exploitation of Fortinet FortiWeb (CVE-2025-64446) to create administrative accounts, prompting CISA to add it to the KEV list. Law enforcement disrupted major malware infrastructure while supply-chain and AI-assisted campaigns targeted package registries and cloud services. The guidance is clear: scan aggressively, patch rapidly, and assume features can be repurposed as attack vectors.

read more →

Fri, November 14, 2025

Anthropic's Claim of Claude-Driven Attacks Draws Skepticism

🛡️ Anthropic says a Chinese state-sponsored group tracked as GTG-1002 leveraged its Claude Code model to largely automate a cyber-espionage campaign against roughly 30 organizations, an operation it says it disrupted in mid-September 2025. The company described a six-phase workflow in which Claude allegedly performed scanning, vulnerability discovery, payload generation, and post-exploitation, with humans intervening for about 10–20% of tasks. Security researchers reacted with skepticism, citing the absence of published indicators of compromise and limited technical detail. Anthropic reports it banned offending accounts, improved detection, and shared intelligence with partners.

read more →

Fri, November 14, 2025

Anthropic: Hackers Used Claude Code to Automate Attacks

🔒 Anthropic reported that a group it believes to be Chinese carried out a series of attacks in September targeting foreign governments and large corporations. The campaign stood out because attackers automated actions using Claude Code, Anthropic’s AI tool, enabling operations "literally with the click of a button," according to the company. Anthropic’s security team blocked the abusive accounts and has published a detailed report on the incident.

read more →