All news with #anthropic tag
Fri, October 10, 2025
Security Risks of Vibe Coding and LLM Developer Assistants
🛡️ AI developer assistants accelerate coding but introduce significant security risks across generated code, configurations, and development tools. Studies show models now produce compilable code far more often yet still emit many vulnerabilities in OWASP Top 10 and MITRE CWE categories, and real incidents (for example Tea, Enrichlead, and the Nx compromise) highlight the practical consequences. Effective defenses include automated SAST, security-aware system prompts, human code review, strict agent access controls, and developer training.
Tue, October 7, 2025
AI Fix #71 — Hacked Robots, Power-Hungry AI and More
🤖 In episode 71 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a wide-ranging mix of AI and robotics stories, from a giant robot spider that went 'backpacking' to DoorDash's delivery 'Minion' and a TikToker forcing an AI to converse with condiments. The episode highlights technical feats — GPT-5 winning the ICPC World Finals and Claude Sonnet 4.5 coding for 30 hours — alongside quirky projects like a 5-million-parameter transformer built in Minecraft. It also investigates a security flaw that left Unitree robot fleets exposed and discusses an alarming estimate that training a frontier model could require the power capacity of five nuclear plants by 2028.
Mon, October 6, 2025
AI in Today's Cybersecurity: Detection, Hunting, Response
🤖 Artificial intelligence is reshaping how organizations detect, investigate, and respond to cyber threats. The article explains how AI reduces alert noise, prioritizes vulnerabilities, and supports behavioral analysis, UEBA, and NLP-driven phishing detection. It highlights Wazuh's integrations with models such as Claude 3.5, Llama 3, and ChatGPT to provide conversational insights, automated hunting, and contextual remediation guidance.
Thu, October 2, 2025
ThreatsDay Bulletin: Exploits Target Cars, Cloud, Browsers
🔔 From unpatched vehicles to hijacked clouds, this ThreatsDay bulletin outlines active threats and defensive moves across endpoints, cloud, browsers, and vehicles. Observers reported internet-wide scans exploiting PAN-OS GlobalProtect (CVE-2024-3400) and campaigns that use weak MS‑SQL credentials to deploy XiebroC2 for persistent access. New AirBorne CarPlay/iAP2 flaws can chain to take over Apple CarPlay, in some cases without user interaction, while attackers quietly poison browser preferences to sideload malicious extensions. On defense, Google announced AI-driven ransomware detection for Drive and Microsoft plans an Edge revocation feature to curb sideloaded threats.
Mon, September 29, 2025
Anthropic's Claude Sonnet 4.5 Now Available on Vertex AI
🚀 Anthropic’s Claude Sonnet 4.5 is now generally available on Vertex AI, delivering advanced long-horizon autonomy for agents across coding, finance, research, and cybersecurity. The model can operate independently for hours, orchestrating tools and coordinating multiple agents to complete complex, multi-step tasks. Vertex AI provides orchestration, provisioning, security controls, and developer tooling, and includes Claude Code upgrades like a VS Code extension and an improved terminal interface.
Mon, September 29, 2025
Anthropic Claude Sonnet 4.5 Now Available in Bedrock
🚀 Anthropic’s Claude Sonnet 4.5 is now available through Amazon Bedrock, providing managed API access to the company’s most capable model. The model leads SWE-bench Verified benchmarks with improved instruction following, stronger code-refactoring judgment, and enhanced production-ready code generation. Bedrock adds automated context editing and a memory tool to extend usable context and boost accuracy for long-running agents across global regions.
Wed, September 24, 2025
Responsible AI Bot Principles to Protect Web Content
🛡️ Cloudflare proposes five practical principles to guide responsible AI bot behavior and protect web publishers, users, and infrastructure. The framework stresses public disclosure, reliable self-identification (moving toward cryptographic verification such as Web Bot Auth), a declared single purpose for crawlers, and respect for operator preferences via robots.txt or headers. Operators must also avoid deceptive or high-volume crawling, and Cloudflare invites multi-stakeholder collaboration to refine and adopt these norms.
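As a sketch of the operator-preference mechanism the principles describe, a publisher could express per-crawler rules in robots.txt; the user-agent tokens below are the ones the respective operators document, but any real deployment should be checked against each crawler's current documentation:

```text
# Opt out of AI training crawlers while allowing search indexing
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Googlebot
Allow: /
```

Responsible crawlers under Cloudflare's proposed norms would honor these directives; cryptographic schemes such as Web Bot Auth aim to make the self-identification behind them verifiable rather than merely asserted.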
Tue, September 23, 2025
The AI Fix Episode 69: Oddities, AI Songs and Risks
🎧 In episode 69 of The AI Fix, Graham Cluley and Mark Stockley mix lighthearted oddities with substantive AI developments. The hosts discuss viral “brain rot” videos, an AI‑generated J‑Pop song, Norway’s experiment trusting $1.9 trillion to an AI investor, and Florida’s use of robotic rabbits to deter Burmese pythons. The show also highlights its first AI feedback, a merch sighting, and data on ChatGPT adoption, while reflecting on uneven geographic and enterprise AI uptake and recent academic research.
Thu, September 18, 2025
Google Cloud's Differentiated AI Stack Fuels Startups
🚀 Google Cloud highlights how its differentiated AI tech stack is accelerating startup innovation worldwide, with nine of the top ten AI labs, most AI unicorns, and more than 60% of generative AI startups using its platform. Startups are leveraging Vertex AI, TPUs, multimodal models like Veo 3 and Gemini, plus services such as AI Studio and GKE to build agents, generative media, medical tools, and developer platforms. Programs like the Google for Startups Cloud Program provide credits, mentorship, and engineering support to help founders scale.
Fri, September 12, 2025
Five AI Use Cases CISOs Should Prioritize in 2025 and Beyond
🔒 Security leaders are balancing safe AI adoption with operational gains and focusing on five practical use cases where AI can improve security outcomes. Organizations are connecting LLMs to internal telemetry via standards like MCP, using agents and models such as Claude, Gemini and GPT-4o to automate threat hunting, translate technical metrics for executives, assess vendor and internal risk, and streamline Tier‑1 SOC work. Early deployments report time savings, clearer executive reporting and reduced analyst fatigue, but require robust guardrails, validation and feedback loops to ensure accuracy and trust.
Tue, September 9, 2025
How CISOs Are Experimenting with AI for Security Operations
🤖 Security leaders are cautiously adopting AI to improve security operations, threat hunting, reporting and vendor risk processes while maintaining strict guardrails. Teams are piloting custom integrations like Anthropic's MCP, vendor agents such as Gem, and developer toolchains including Microsoft Copilot to connect LLMs with telemetry and internal data sources. Early experiments show significant time savings—automating DLP context, producing near-complete STRIDE threat models, converting long executive reviews into concise narratives, and accelerating phishing triage—but practitioners emphasize validation, feedback loops and human oversight before broad production use.
Tue, September 9, 2025
Amazon Q in Connect Lets Admins Select LLMs in UI Console
🤖 Amazon Q in Connect now lets contact center administrators select different LLM model families directly from the Amazon Connect web UI. This no-code configuration enables quick switching between models to optimize for latency, cost, or complex reasoning. Administrators can choose Amazon Nova Pro for faster responses or Anthropic Claude Sonnet for complex reasoning, tailoring AI Agents to specific customer interaction types.
Thu, September 4, 2025
Generative AI Used as Cybercrime Assistant, Reports Say
⚠️ Anthropic reports that a threat actor used Claude Code to automate reconnaissance, credential harvesting, network intrusion, and targeted extortion across at least 17 organizations, including healthcare, emergency services, government, and religious institutions. The actor prioritized public exposure over classic ransomware encryption, demanding ransoms that in some cases exceeded $500,000. Anthropic also identified North Korean use of Claude for remote‑worker fraud and an actor who used the model to design and distribute multiple ransomware variants with advanced evasion and anti‑recovery features.
Wed, September 3, 2025
Smashing Security #433: Hackers Harnessing AI Tools
🤖 In episode 433 of Smashing Security, Graham Cluley and Mark Stockley examine how attackers are weaponizing AI, from embedding malicious instructions in legalese to using generative agents to automate intrusions and extortion. They discuss LegalPwn prompt-injection tactics that hide payloads in comments and disclaimers, and new findings from Anthropic showing AI-assisted credential theft and custom ransomware notes. The episode also includes lighter segments on keyboard history and an ingenious AI-generated CAPTCHA.
Wed, September 3, 2025
Amazon Bedrock: Global Cross-Region Inference for Claude 4
🔁 Anthropic's Claude Sonnet 4 is now available with Global cross‑Region inference in Amazon Bedrock, allowing inference requests to be routed to any supported commercial AWS Region for processing. The Global profile helps optimize compute resources and distribute traffic to increase model throughput. It supports both on‑demand and batch inference and is intended for use cases that do not require geography‑specific routing.
Tue, September 2, 2025
The AI Fix Ep. 66: AI Mishaps, Breakthroughs and Safety
🧠 In episode 66 of The AI Fix, hosts Graham Cluley and Mark Stockley walk listeners through a rapid-fire roundup of recent AI developments, from a ChatGPT prompt that produced an inaccurate anatomy diagram to a controversial Stanford sushi hackathon. They cover a Google Gemini bug that generated self-deprecating responses, a report that gave DeepSeek poor marks on existential-risk mitigation, and a debunked pregnancy-robot story. The episode also celebrates a genuine scientific advance: a team of AI agents that designed novel COVID-19 nanobodies, and considers how unusual collaborations and growing safety work could change the broader AI risk landscape.
Tue, September 2, 2025
NCSC and AISI Back Public Disclosure for AI Safeguards
🔍 The NCSC and the AI Security Institute have broadly welcomed public, bug-bounty style disclosure programs to help identify and remediate AI safeguard bypass threats. They said initiatives from vendors such as OpenAI and Anthropic could mirror traditional vulnerability disclosure to encourage responsible reporting and cross-industry collaboration. The agencies cautioned that programs require clear scope, strong foundational security, prior internal reviews and sufficient triage resources, and that disclosure alone will not guarantee model safety.
Tue, September 2, 2025
Amazon Bedrock Simplifies Cache Management for Claude
⚡ Amazon Bedrock updated prompt caching for Anthropic’s Claude models—Claude 3.5 Haiku, Claude 3.7, and Claude 4—to simplify cache management. Developers now set a single cache breakpoint at the end of a request and the system automatically reads the longest previously cached prefix, removing manual segment selection and reducing integration complexity. By excluding cache read tokens from TPM quotas, this change can free up token capacity and lower costs for multi-turn workflows. The capability is available today in all regions offering these Claude models; enable caching in your Bedrock model invocations and refer to the Bedrock Developer Guide for details.
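A minimal sketch of the single-breakpoint pattern for a Converse-style request: mark the end of the reused conversation prefix with one `cachePoint` content block and let the service find the longest cached prefix on its own. The payload shape follows the Bedrock Converse API's `cachePoint` block; the helper function and field layout here are illustrative, not an official SDK utility.

```python
def build_messages(history, new_user_turn):
    """Build a Converse-style message list with one cache breakpoint.

    `history` is the list of prior {"role", "content"} turns resent on
    every request; appending a cachePoint block to the last reused turn
    marks the end of the cacheable prefix, so the service can reuse the
    longest previously cached prefix without manual segment selection.
    """
    messages = [dict(turn) for turn in history]
    if messages:
        last = messages[-1]
        # Single breakpoint at the end of the reused prefix.
        last["content"] = list(last["content"]) + [
            {"cachePoint": {"type": "default"}}
        ]
    # The new turn comes after the breakpoint and is not cached.
    messages.append({"role": "user", "content": [{"text": new_user_turn}]})
    return messages
```

In a real call this list would be passed as the `messages` argument to `bedrock-runtime`'s `converse` operation; model support for caching varies, so consult the Bedrock Developer Guide for the models and minimum prefix lengths that qualify.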
Sun, August 31, 2025
Anthropic Tests Web Version of Claude Code for Developers
🛠️ Anthropic is rolling out a research preview of a web-based Claude Code, bringing its terminal-focused coding assistant into the browser at Claude.ai/code. The web preview requires installing the GitHub Claude app on a repository and committing a "Claude Dispatch" GitHub workflow file before use, with optional email and web notifications for updates. Claude Code—already available in terminals and integrated editors under paid plans—can inspect codebases to help fix bugs, test features, simplify Git tasks, and automate workflows. It remains unclear whether the terminal and web versions can access or share the same repository content or usage data.
Fri, August 29, 2025
Cloudflare data: AI bot crawling surges, referrals fall
🤖 Cloudflare's mid‑2025 dataset shows AI training crawlers now account for nearly 80% of AI bot activity, driving a surge in crawling while sending far fewer human referrals. Google referrals to news sites fell sharply in March–April 2025 as AI Overviews and Gemini upgrades reduced click-throughs. OpenAI’s GPTBot and Anthropic’s ClaudeBot increased crawling share while ByteDance’s Bytespider declined. The resulting crawl-to-refer imbalance — tens of thousands of crawls per human click for some platforms — threatens publisher revenue.
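The crawl-to-refer imbalance is simply crawls divided by human referral clicks; the bot names and counts below are hypothetical placeholders, not Cloudflare's published figures:

```python
def crawl_to_refer_ratio(crawls, referrals):
    """Crawls per human referral click; None when there are no referrals."""
    return crawls / referrals if referrals else None

# Hypothetical platform counts for illustration only.
platforms = {
    "BotA": (1_500_000, 20),    # heavy crawler, almost no clicks back
    "BotB": (300_000, 6_000),   # crawler that still sends traffic
}
ratios = {
    name: crawl_to_refer_ratio(crawls, refs)
    for name, (crawls, refs) in platforms.items()
}
```

With numbers like BotA's, the ratio lands in the tens of thousands of crawls per human click, which is the scale of imbalance the article says threatens publisher revenue.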