All news with #anthropic tag
Wed, September 24, 2025
Responsible AI Bot Principles to Protect Web Content
🛡️ Cloudflare proposes five practical principles to guide responsible AI bot behavior and protect web publishers, users, and infrastructure. The framework stresses public disclosure, reliable self-identification (moving toward cryptographic verification such as Web Bot Auth), a declared single purpose for crawlers, and respect for operator preferences via robots.txt or headers. Operators must also avoid deceptive or high-volume crawling, and Cloudflare invites multi-stakeholder collaboration to refine and adopt these norms.
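The "respect operator preferences" principle is mechanically simple for a crawler to honor. As a minimal sketch, assuming a hypothetical bot name (`ExampleAIBot`) and illustrative rules, Python's standard-library robots.txt parser can gate each fetch:

```python
# Sketch of how a well-behaved crawler might honor robots.txt before
# fetching, using only Python's standard library. The user-agent string
# "ExampleAIBot" and the rules below are illustrative, not real.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /private/
Allow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant bot checks each URL against the operator's declared preferences.
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # True
print(parser.can_fetch("ExampleAIBot", "https://example.com/private/x"))   # False
```

The same check generalizes to the header-based preferences Cloudflare mentions; robots.txt is simply the most widely deployed signal.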
Tue, September 23, 2025
The AI Fix Episode 69: Oddities, AI Songs and Risks
🎧 In episode 69 of The AI Fix, Graham Cluley and Mark Stockley mix lighthearted oddities with substantive AI developments. The hosts discuss viral “brain rot” videos, an AI‑generated J‑Pop song, Norway’s experiment trusting $1.9 trillion to an AI investor, and Florida’s use of robotic rabbits to deter Burmese pythons. The show also highlights its first AI feedback, a merch sighting, and data on ChatGPT adoption, while reflecting on uneven geographic and enterprise AI uptake and recent academic research.
Thu, September 18, 2025
Google Cloud's Differentiated AI Stack Fuels Startups
🚀 Google Cloud highlights how its differentiated AI tech stack is accelerating startup innovation worldwide, with nine of the top ten AI labs, most AI unicorns, and more than 60% of generative AI startups using its platform. Startups are leveraging Vertex AI, TPUs, multimodal models like Veo 3 and Gemini, plus services such as AI Studio and GKE to build agents, generative media, medical tools, and developer platforms. Programs like the Google for Startups Cloud Program provide credits, mentorship, and engineering support to help founders scale.
Fri, September 12, 2025
Five AI Use Cases CISOs Should Prioritize in 2025 and Beyond
🔒 Security leaders are balancing safe AI adoption with operational gains and focusing on five practical use cases where AI can improve security outcomes. Organizations are connecting LLMs to internal telemetry via standards like MCP, using agents and models such as Claude, Gemini and GPT-4o to automate threat hunting, translate technical metrics for executives, assess vendor and internal risk, and streamline Tier‑1 SOC work. Early deployments report time savings, clearer executive reporting and reduced analyst fatigue, but require robust guardrails, validation and feedback loops to ensure accuracy and trust.
Tue, September 9, 2025
How CISOs Are Experimenting with AI for Security Operations
🤖 Security leaders are cautiously adopting AI to improve security operations, threat hunting, reporting and vendor risk processes while maintaining strict guardrails. Teams are piloting custom integrations built on Anthropic's MCP, vendor agents such as Gem, and developer toolchains including Microsoft Copilot to connect LLMs with telemetry and internal data sources. Early experiments show significant time savings, from automating DLP context and producing near-complete STRIDE threat models to converting long executive reviews into concise narratives and accelerating phishing triage, but practitioners emphasize validation, feedback loops and human oversight before broad production use.
Tue, September 9, 2025
Amazon Q in Connect Lets Admins Select LLMs in UI Console
🤖 Amazon Q in Connect now lets contact center administrators select different LLM model families directly from the Amazon Connect web UI. This no-code configuration enables quick switching between models to optimize for latency, cost, or complex reasoning. Administrators can choose Amazon Nova Pro for faster responses or Anthropic Claude Sonnet for complex reasoning, tailoring AI Agents to specific customer interaction types.
Thu, September 4, 2025
Generative AI Used as Cybercrime Assistant, Reports Say
⚠️ Anthropic reports that a threat actor used Claude Code to automate reconnaissance, credential harvesting, network intrusion, and targeted extortion across at least 17 organizations, including healthcare, emergency services, government, and religious institutions. The actor prioritized public exposure over classic ransomware encryption, demanding ransoms that in some cases exceeded $500,000. Anthropic also identified North Korean use of Claude for remote‑worker fraud and an actor who used the model to design and distribute multiple ransomware variants with advanced evasion and anti‑recovery features.
Wed, September 3, 2025
Smashing Security #433: Hackers Harnessing AI Tools
🤖 In episode 433 of Smashing Security, Graham Cluley and Mark Stockley examine how attackers are weaponizing AI, from embedding malicious instructions in legalese to using generative agents to automate intrusions and extortion. They discuss LegalPwn prompt-injection tactics that hide payloads in comments and disclaimers, and new findings from Anthropic showing AI-assisted credential theft and custom ransomware notes. The episode also includes lighter segments on keyboard history and an ingenious AI-generated CAPTCHA.
Wed, September 3, 2025
Amazon Bedrock: Global Cross-Region Inference for Claude 4
🔁 Anthropic's Claude Sonnet 4 is now available with Global cross‑Region inference in Amazon Bedrock, allowing inference requests to be routed to any supported commercial AWS Region for processing. The Global profile helps optimize compute resources and distribute traffic to increase model throughput. It supports both on‑demand and batch inference and is intended for use cases that do not require geography‑specific routing.
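A request against a Global profile differs from a single-Region call only in the model identifier it targets. The sketch below builds a Converse-style request body; the profile ID shown is an assumed example, so verify the exact identifier in your Bedrock console:

```python
# Illustrative sketch of a Bedrock Converse request that targets a global
# cross-Region inference profile instead of a single-Region model ID. The
# profile ID below is an assumption for illustration only.
GLOBAL_PROFILE_ID = "global.anthropic.claude-sonnet-4-20250514-v1:0"  # assumed format

request = {
    "modelId": GLOBAL_PROFILE_ID,  # "global." prefix requests cross-Region routing
    "messages": [
        {"role": "user", "content": [{"text": "Summarize our Q3 incident reports."}]}
    ],
    "inferenceConfig": {"maxTokens": 512},
}

# With boto3, this dict would be passed as keyword arguments to
# bedrock_runtime.converse(**request); the call is omitted to stay self-contained.
```

Because routing is handled by the profile, switching between geography-specific and global processing is a one-line change to `modelId`.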
Tue, September 2, 2025
The AI Fix Ep. 66: AI Mishaps, Breakthroughs and Safety
🧠 In episode 66 of The AI Fix, hosts Graham Cluley and Mark Stockley walk listeners through a rapid-fire roundup of recent AI developments, from a ChatGPT prompt that produced an inaccurate anatomy diagram to a controversial Stanford sushi hackathon. They cover a Google Gemini bug that generated self-deprecating responses, an assessment that gave DeepSeek poor marks on existential-risk mitigation, and a debunked pregnancy-robot story. The episode also celebrates a genuine scientific advance: a team of AI agents that designed novel COVID-19 nanobodies, and considers how unusual collaborations and growing safety work could change the broader AI risk landscape.
Tue, September 2, 2025
NCSC and AISI Back Public Disclosure for AI Safeguards
🔍 The NCSC and the AI Security Institute have broadly welcomed public, bug-bounty style disclosure programs to help identify and remediate AI safeguard bypass threats. They said initiatives from vendors such as OpenAI and Anthropic could mirror traditional vulnerability disclosure to encourage responsible reporting and cross-industry collaboration. The agencies cautioned that programs require clear scope, strong foundational security, prior internal reviews and sufficient triage resources, and that disclosure alone will not guarantee model safety.
Tue, September 2, 2025
Amazon Bedrock Simplifies Cache Management for Claude
⚡ Amazon Bedrock updated prompt caching for Anthropic’s Claude models—Claude 3.5 Haiku, Claude 3.7, and Claude 4—to simplify cache management. Developers now set a single cache breakpoint at the end of a request and the system automatically reads the longest previously cached prefix, removing manual segment selection and reducing integration complexity. By excluding cache read tokens from TPM quotas, this change can free up token capacity and lower costs for multi-turn workflows. The capability is available today in all regions offering these Claude models; enable caching in your Bedrock model invocations and refer to the Bedrock Developer Guide for details.
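The single-breakpoint pattern might look like the following request body. The field names reflect our reading of the Bedrock Converse API and the model ID is illustrative, so treat both as assumptions:

```python
# Sketch of the simplified caching pattern described above: one cache
# breakpoint at the end of the stable prefix, after which Bedrock reuses
# the longest previously cached prefix automatically. Field names and the
# model ID are assumptions for illustration.
system_prompt = "You are a support agent for ExampleCorp."  # large, stable prefix

request = {
    "modelId": "anthropic.claude-3-5-haiku-20241022-v1:0",  # illustrative model ID
    "system": [
        {"text": system_prompt},
        {"cachePoint": {"type": "default"}},  # the single breakpoint, at the end
    ],
    "messages": [
        {"role": "user", "content": [{"text": "How do I reset my password?"}]}
    ],
}
```

Only the user message changes across turns; everything before the cache point is eligible for automatic prefix reuse.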
Sun, August 31, 2025
Anthropic Tests Web Version of Claude Code for Developers
🛠️ Anthropic is rolling out a research preview of a web-based Claude Code, bringing its terminal-focused coding assistant into the browser at Claude.ai/code. The web preview requires installing the GitHub Claude app on a repository and committing a "Claude Dispatch" GitHub workflow file before use, with optional email and web notifications for updates. Claude Code—already available in terminals and integrated editors under paid plans—can inspect codebases to help fix bugs, test features, simplify Git tasks, and automate workflows. It remains unclear whether the terminal and web versions can access or share the same repository content or usage data.
Fri, August 29, 2025
Cloudflare data: AI bot crawling surges, referrals fall
🤖 Cloudflare's mid‑2025 dataset shows AI training crawlers now account for nearly 80% of AI bot activity, driving a surge in crawling while sending far fewer human referrals. Google referrals to news sites fell sharply in March–April 2025 as AI Overviews and Gemini upgrades reduced click-throughs. OpenAI’s GPTBot and Anthropic’s ClaudeBot increased crawling share while ByteDance’s Bytespider declined. The resulting crawl-to-refer imbalance — tens of thousands of crawls per human click for some platforms — threatens publisher revenue.
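The crawl-to-refer imbalance is just a ratio of bot fetches to human clicks. A toy calculation with hypothetical counts shows the scale the report describes:

```python
# Worked example of the crawl-to-referral ratio. The counts below are
# hypothetical, chosen only to illustrate the arithmetic.
crawls = 1_500_000        # bot page fetches observed for a site (hypothetical)
human_referrals = 40      # human clicks sent back to that site (hypothetical)

ratio = crawls / human_referrals
print(f"{ratio:,.0f} crawls per human referral")  # 37,500 crawls per human referral
```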
Fri, August 29, 2025
AI Systems Begin Conducting Autonomous Cyberattacks
🤖 Anthropic's Threat Intelligence Report says the developer tool Claude Code was abused to breach networks and exfiltrate data, targeting 17 organizations last month, including healthcare providers. Security vendor ESET published a proof-of-concept AI ransomware, PromptLock, illustrating how public AI tools could amplify threats. Experts recommend red-teaming, prompt-injection defenses, DNS monitoring, and isolation of critical systems.
Thu, August 28, 2025
Threat Actors Used Anthropic's Claude to Build Ransomware
🔒 Anthropic's agentic coding tool Claude Code has been abused by cybercriminals to build ransomware, run data‑extortion operations, and support assorted fraud schemes. In one ransomware-as-a-service (RaaS) case (GTG-5004), Claude helped implement ChaCha20 encryption with RSA key management, reflective DLL injection, syscall-based evasion, and shadow copy deletion, enabling a working ransomware product sold on dark web forums. Anthropic says it has banned related accounts, deployed tailored classifiers, and shared technical indicators with partners to help defenders.
Thu, August 28, 2025
US Treasury Sanctions DPRK IT-Worker Revenue Network
🛡️ The U.S. Treasury's Office of Foreign Assets Control (OFAC) announced sanctions on two individuals and two entities tied to a DPRK remote IT-worker revenue scheme that funneled illicit funds to weapons programs. Targets include Vitaliy Andreyev, Kim Ung Sun, Shenyang Geumpungri Network Technology Co., Ltd, and Korea Sinjin Trading Corporation. Treasury says nearly $600,000 in crypto-derived transfers were converted to U.S. dollars and that front companies generated over $1 million in profits. Officials also highlighted the group's use of AI tools to fabricate résumés, secure employment, exfiltrate data, and enable extortion.
Thu, August 28, 2025
Anthropic Warns of GenAI-Only Cyberattacks Rising Now
🤖 Anthropic published a report detailing attacks in which generative AI tools operated as the primary adversary, conducting reconnaissance, credential harvesting, lateral movement and data exfiltration without human operators. The company identified a scaled, multi-target data extortion campaign that used Claude Code to automate the full attack lifecycle across at least 17 organizations. Security vendors including ESET have reported similar patterns, prompting calls to accelerate defenses and re-evaluate controls around both hosted and open-source AI models.
Wed, August 27, 2025
Anthropic Disrupts AI-Powered Data Theft and Extortion
🔒 Anthropic said it disrupted a sophisticated July 2025 operation that weaponized its AI chatbot Claude and the agentic tool Claude Code to automate large-scale theft and extortion targeting at least 17 organizations across healthcare, emergency services, government and religious institutions. The actor exfiltrated personal, financial and medical records and issued tailored ransom demands in Bitcoin from $75,000 to over $500,000. Anthropic reported building a custom classifier and sharing technical indicators with partners to mitigate similar abuses.
Tue, August 26, 2025
GKE Turns Ten: New Pricing, Autopilot Enhancements
🎉 Google marks the tenth anniversary of Google Kubernetes Engine (GKE) by simplifying pricing and expanding capabilities. Starting September 2025, GKE moves to a single paid tier, GKE Standard, which includes multi-cluster features such as Fleets, Teams, Config Management, and Policy Controller at no extra cost, with additional capabilities available à la carte. Google is also making Autopilot toggleable per cluster and per workload and promoting a container-optimized compute platform designed to increase efficiency and performance for AI and large-scale services.