All news with the #amazon bedrock tag
Wed, November 19, 2025
Amazon Bedrock Expands Availability to Four New Regions
🚀 Amazon Bedrock is now available in Africa (Cape Town), Canada West (Calgary), Mexico (Central), and Middle East (Bahrain). The managed service provides secure access to a variety of foundation models and tools for building and operating generative AI applications and agents. With regional endpoints, customers can reduce latency and address data residency and compliance needs. To get started, customers can consult the Bedrock documentation and regional resources.
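The new Regions use their standard Region codes: af-south-1 (Cape Town), ca-west-1 (Calgary), mx-central-1 (Mexico Central), and me-south-1 (Bahrain). A minimal boto3 sketch of calling Bedrock through one of the new regional endpoints follows; the modelId is a placeholder, since model availability varies by Region.

```python
# Minimal sketch: invoking a model through a newly added regional endpoint.
# The Region code is standard; the modelId is a placeholder -- check the Bedrock
# console for the models actually offered in each Region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="ca-west-1")  # Canada West (Calgary)

response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",  # placeholder; substitute a model available in the Region
    messages=[{"role": "user", "content": [{"text": "Summarize our launch checklist."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```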
Wed, November 19, 2025
AWS Marketplace Adds A2A Server Support for AgentCore
🛠️ AWS Marketplace now supports Agent-to-Agent (A2A) servers and streamlined deployment for third-party AI agents built for Amazon Bedrock AgentCore Runtime. The update pre-populates required environment variables in the AgentCore console and adds AWS CLI instructions within Marketplace listings so customers can procure and deploy A2A servers directly. AWS Partners can list A2A and MCP servers and containerized AgentCore Runtime products, define vendor launch configurations, and enable flexible pricing (including free API-based SaaS) to accelerate onboarding. These capabilities reduce deployment complexity and add protocol flexibility to meet diverse customer needs.
Wed, November 19, 2025
Amazon Bedrock Adds Custom Weight Import for OpenAI GPT OSS Models
🚀 Amazon Bedrock now supports importing custom weights for gpt-oss-120b and gpt-oss-20b, allowing customers to bring tuned OpenAI GPT OSS models into a fully managed, serverless environment. This capability eliminates the need to manage infrastructure or model serving while enabling deployment of text-to-text models for reasoning, agentic, and developer tasks. gpt-oss-120b is optimized for production and high-reasoning use cases; gpt-oss-20b targets lower-latency or specialized scenarios. The feature is generally available in US East (N. Virginia).
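Custom weights are brought in through Bedrock Custom Model Import. A hedged boto3 sketch, assuming the weights are already staged in S3 in the layout the Custom Model Import documentation describes; the bucket, role, and names are placeholders.

```python
# Sketch: importing tuned gpt-oss weights via Bedrock Custom Model Import.
# Job name, model name, role ARN, and S3 URI are placeholders; the expected weight and
# config file layout is defined in the Custom Model Import documentation.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")  # US East (N. Virginia)

job = bedrock.create_model_import_job(
    jobName="gpt-oss-20b-import",
    importedModelName="my-tuned-gpt-oss-20b",
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
    modelDataSource={"s3DataSource": {"s3Uri": "s3://my-bucket/gpt-oss-20b-tuned/"}},
)
print(job["jobArn"])  # poll get_model_import_job(jobIdentifier=...) until the import completes
```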
Tue, November 18, 2025
Amazon Bedrock adds Priority and Flex inference tiers
🔔 Amazon Bedrock introduces two new inference tiers—Priority and Flex—to help customers balance cost and latency for varied AI workloads. Flex targets non-time-critical jobs like model evaluations and summarization with discounted pricing and lower scheduling priority. Priority offers premium performance and preferential processing (up to 25% better output tokens per second, or OTPS, than Standard) for mission-critical, real-time applications. The existing Standard tier remains available for general-purpose use.
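The announcement above does not spell out the request syntax; the sketch below assumes the tier is selected per request with a serviceTier field on the Converse call. That field name and its accepted values are assumptions, so verify them against the Bedrock API reference before relying on this.

```python
# Sketch only: requesting the Flex tier for a non-time-critical summarization call.
# ASSUMPTION: the tier is selected with a "serviceTier" request field ("flex" / "priority" /
# "standard"); confirm the real field name and values in the Bedrock API reference.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.nova-pro-v1:0",   # placeholder model
    messages=[{"role": "user", "content": [{"text": "Summarize this evaluation report: ..."}]}],
    serviceTier="flex",               # assumed parameter; use "priority" for latency-critical traffic
)
```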
Tue, November 18, 2025
AWS Publishes Responsible AI Lens and Updated ML Lenses
🔔 AWS has published a new Responsible AI Lens and updated its Generative AI and Machine Learning lenses to guide safe, secure, and production-ready AI workloads. The guidance addresses fairness, reliability, and operational readiness while helping teams move from experimentation to production. Updates include recommendations for Amazon SageMaker HyperPod, agentic AI, and integrations with Amazon SageMaker Unified Studio, Amazon Q, and Amazon Bedrock. The lenses are aimed at business leaders, ML engineers, data scientists, and risk and compliance professionals.
Mon, November 17, 2025
Amazon Bedrock Data Automation Adds 10 Speech Languages
🎙️ Amazon Bedrock Data Automation (BDA) now supports 10 additional languages for speech analytics beyond English: Portuguese, French, Italian, Spanish, German, Chinese, Cantonese, Taiwanese, Korean, and Japanese. BDA can transcribe audio in the detected language, generate GenAI-powered insights, and produce summaries either in the detected language or in English. It also creates multilingual transcripts when recordings contain more than one supported language, simplifying analysis of customer calls, meetings, education sessions, clinical discussions, and public safety audio. Support is available in eight AWS Regions.
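For audio already stored in S3, a BDA project can be invoked asynchronously. A hedged boto3 sketch, assuming an existing Data Automation project; all ARNs and bucket names are placeholders, and the parameter shapes should be checked against the current SDK documentation.

```python
# Sketch: submitting a call recording to a Bedrock Data Automation project for speech
# analytics. Project ARN, profile ARN, and S3 locations are placeholders; the language
# of the recording is detected automatically for the supported languages.
import boto3

bda = boto3.client("bedrock-data-automation-runtime", region_name="us-east-1")

job = bda.invoke_data_automation_async(
    inputConfiguration={"s3Uri": "s3://my-bucket/calls/call-0142.wav"},
    outputConfiguration={"s3Uri": "s3://my-bucket/bda-output/"},
    dataAutomationConfiguration={
        "dataAutomationProjectArn": "arn:aws:bedrock:us-east-1:123456789012:data-automation-project/my-project"
    },
    dataAutomationProfileArn="arn:aws:bedrock:us-east-1:123456789012:data-automation-profile/us.data-automation-v1",
)
print(job["invocationArn"])  # poll with get_data_automation_status(invocationArn=...)
```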
Mon, November 10, 2025
Anthropic's Claude Sonnet 4.5 Now in AWS GovCloud (US)
🚀 Anthropic's Claude Sonnet 4.5 is now available in Amazon Bedrock in AWS GovCloud (US-West) and AWS GovCloud (US-East) via US-GOV cross-region inference. The model emphasizes advanced instruction following and superior judgment in code generation and refactoring, and it is optimized for long-horizon agents and high-volume workloads. Bedrock adds an automatic context editor and a new external memory tool so Claude can clear stale tool-call context and store information outside the context window, improving accuracy and performance for security, financial services, and enterprise automation use cases.
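A hedged boto3 sketch of calling the model through the US-GOV cross-region inference profile; the profile ID below is a placeholder, so copy the exact identifier from the Bedrock console in AWS GovCloud (US).

```python
# Sketch: invoking Claude Sonnet 4.5 via US-GOV cross-region inference.
# The inference profile ID is a placeholder -- use the exact ID shown in the
# GovCloud Bedrock console; the Converse call itself is standard.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

response = bedrock.converse(
    modelId="us-gov.anthropic.claude-sonnet-4-5-v1:0",  # placeholder inference profile ID
    messages=[{"role": "user", "content": [{"text": "Draft a change-management summary."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```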
Mon, November 3, 2025
AWS Config Adds 52 New Resource Types Across Key Services
🔔 AWS Config now supports 52 additional AWS resource types across services including Amazon EC2, Amazon Bedrock, and Amazon SageMaker. If you have enabled recording for all supported resource types, AWS Config automatically begins tracking the additions, and they are available to Config rules and aggregators. You can monitor the new types in all Regions where they are supported, expanding discovery, assessment, audit, and remediation coverage.
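Once recording is on, the new types show up in the resource inventory like any other. A small boto3 sketch; the resource type string is only an example, so substitute one of the newly supported types from the announcement's full list.

```python
# Sketch: confirming the Config recorder settings and listing discovered resources of
# one type. The resource type shown is an example, not necessarily one of the 52 new types.
import boto3

config = boto3.client("config", region_name="us-east-1")

# Confirm the recorder is configured (ideally recording all supported resource types).
print(config.describe_configuration_recorders()["ConfigurationRecorders"])

# Enumerate resources of a given type once Config has begun tracking them.
resources = config.list_discovered_resources(resourceType="AWS::Bedrock::Guardrail")
for item in resources["resourceIdentifiers"]:
    print(item["resourceType"], item.get("resourceId"))
```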
Fri, October 31, 2025
AWS Marketplace: Flexible Pricing and Deployment for Agents
🤖 AWS Marketplace now offers flexible pricing and simplified deployment for AI agents and tools, including contract-based and usage-based options for Amazon Bedrock AgentCore Runtime containers. The update also streamlines OAuth credential management via Quick Launch for API-based agents and allows supported remote MCP servers procured through Marketplace to be used as MCP targets on AgentCore Gateway. These enhancements reduce deployment complexity and give partners more pricing flexibility while improving scalability for customers.
Fri, October 31, 2025
Model Context Protocol Proxy for AWS now generally available
🔒 The Model Context Protocol (MCP) Proxy for AWS is now generally available, offering a client-side proxy that lets MCP clients connect to remote, AWS-hosted MCP servers using AWS SigV4 authentication. It supports agentic development tools such as Amazon Q Developer CLI, Kiro, Cursor, and agent frameworks like Strands Agents, and interoperates with MCP servers built on Amazon Bedrock AgentCore Gateway or Runtime. The open-source Proxy includes safety controls (read-only mode), configurable retry logic, and logging for troubleshooting, and can be installed from source, via Python package managers, or as a container to integrate with existing MCP-supported tools.
Thu, October 30, 2025
TwelveLabs Pegasus 1.2 Now in Three Additional AWS Regions
🚀 TwelveLabs Pegasus 1.2 is now available in Amazon Bedrock in US East (Ohio), US West (N. California), and Europe (Frankfurt). Pegasus 1.2 is a video-first language model optimized for long-form video understanding, video-to-text generation, and temporal reasoning across visual, audio, and textual signals. The regional rollout brings the model closer to customers' data and end users, reducing latency and simplifying deployment architectures. Developers can now build enterprise-grade video intelligence applications in these Regions.
Thu, October 30, 2025
Amazon Bedrock AgentCore Browser Adds Web Bot Auth Preview
🔐 Amazon Bedrock AgentCore Browser now previews Web Bot Auth, a draft IETF protocol that cryptographically identifies AI agents to websites. The feature automatically generates credentials, signs HTTP requests with private keys, and registers verified agent identities to reduce CAPTCHA interruptions and human intervention in automated workflows. It streamlines verification across major providers such as Akamai, Cloudflare, and HUMAN Security, and is available in nine AWS Regions on a consumption-based pricing model with no upfront costs.
Wed, October 29, 2025
TwelveLabs Marengo Embed 3.0 Now Available on Amazon Bedrock
🎥 TwelveLabs' Marengo Embed 3.0 is now available on Amazon Bedrock, providing a unified, video-native multimodal embedding model that represents video, images, audio, and text in a single vector space. The release doubles processing capacity—up to 4 hours and 6 GB per file—expands language support to 36 languages, and improves sports analysis and multimodal search precision. It supports synchronous low-latency text and image inference and asynchronous processing for video, audio, and large files.
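Video and audio inputs go through the asynchronous invoke path. A hedged boto3 sketch, assuming an S3-hosted clip; the modelId and the modelInput shape are placeholders, since both are model-specific and documented on the TwelveLabs model page in Bedrock.

```python
# Sketch: asynchronous video embedding with Marengo Embed 3.0.
# modelId and the modelInput schema are placeholders -- copy the exact values from the
# model's documentation; results land in the S3 prefix given in outputDataConfig.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

job = bedrock.start_async_invoke(
    modelId="twelvelabs.marengo-embed-3-0-v1:0",   # placeholder model ID
    modelInput={                                    # placeholder request shape; see model docs
        "inputType": "video",
        "mediaSource": {"s3Location": {"uri": "s3://my-bucket/clips/highlight.mp4"}},
    },
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-bucket/embeddings/"}},
)
print(job["invocationArn"])
```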
Wed, October 29, 2025
Stability AI Image Tools Expanded in Amazon Bedrock
🖼 Amazon Bedrock now offers four new image-editing tools in Stability AI Image Services: Outpaint, Fast Upscale, Conservative Upscale, and Creative Upscale. These additions expand the platform's Edit, Upscale, and Control capabilities, enabling creators to perform targeted edits and resolution enhancements with greater precision. The tools are accessible via the Bedrock API and are initially supported in US West (Oregon), US East (N. Virginia), and US East (Ohio).
Wed, October 29, 2025
Web Grounding for Amazon Nova Models Now Generally Available
🌐 Web Grounding is now generally available as a built-in tool for Nova models, usable today with Nova Premier via the Amazon Bedrock tool use API. It retrieves and incorporates publicly available information with citations to support responses, enabling a turnkey RAG solution that reduces hallucinations and improves accuracy. Cross-region inference makes the tool available in US East (N. Virginia), US East (Ohio), and US West (Oregon). Support for additional Nova models will follow.
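The item above says the tool is enabled through the Bedrock tool use API; the exact tool configuration is not reproduced here, so the sketch below treats the built-in tool name as an assumption to be checked against the Amazon Nova documentation.

```python
# Sketch only: enabling Web Grounding for Nova Premier via the Converse tool use API.
# ASSUMPTION: the built-in tool is declared as a system tool named "nova_grounding";
# verify the actual tool configuration in the Amazon Nova user guide.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="us.amazon.nova-premier-v1:0",  # placeholder cross-region profile for Nova Premier
    messages=[{"role": "user", "content": [{"text": "What did AWS announce for Bedrock this week?"}]}],
    toolConfig={"tools": [{"systemTool": {"name": "nova_grounding"}}]},  # assumed tool spec
)
```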
Tue, October 28, 2025
Amazon Nova Multimodal Embeddings Now Generally Available
🚀 Amazon announces general availability of Amazon Nova Multimodal Embeddings, a unified embedding model designed for agentic RAG and semantic search across text, documents, images, video, and audio. The model handles inputs up to 8K tokens and video/audio segments up to 30 seconds, with segmentation for larger files and selectable embedding dimensions. Both synchronous and asynchronous APIs are supported to balance latency and throughput, and the model is available in Amazon Bedrock in US East (N. Virginia).
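Synchronous calls go through InvokeModel. A hedged sketch for a single text input; the modelId and request body fields are placeholders, since the exact schema is defined in the model's request format documentation.

```python
# Sketch: synchronous text embedding with Nova Multimodal Embeddings.
# modelId and body fields are placeholders -- check the model's request schema in the
# Bedrock user guide; video/audio and large files use the asynchronous API instead.
import boto3, json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # US East (N. Virginia)

response = bedrock.invoke_model(
    modelId="amazon.nova-multimodal-embeddings-v1:0",             # placeholder model ID
    body=json.dumps({"inputText": "quarterly revenue summary"}),  # placeholder request shape
)
print(json.loads(response["body"].read()))
```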
Fri, October 17, 2025
Securing Amazon Bedrock API Keys: Best Practices Guidance
🔐 AWS details practical guidance for implementing and managing Amazon Bedrock API keys, the service-specific credentials that provide bearer-token access to Bedrock. The guidance recommends AWS STS temporary credentials where possible and defines two API key types: short-term (client-generated, auto-expiring) and long-term (associated with an IAM user). Protection advice includes using service control policies (SCPs), IAM and Bedrock condition keys, and storing long-term keys in secure vaults. Detection and monitoring rely on AWS CloudTrail, Amazon EventBridge rules, and an AWS Config rule, and response steps include CLI commands to deactivate and delete compromised keys.
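Because long-term keys are tied to an IAM user as service-specific credentials, the containment step can be scripted. A hedged boto3 sketch of the deactivate-then-delete flow described above; the user name is a placeholder.

```python
# Sketch: containing a compromised long-term Bedrock API key. Long-term keys are IAM
# service-specific credentials on an IAM user, so they can be deactivated immediately
# and deleted after rotation. The user name is a placeholder.
import boto3

iam = boto3.client("iam")
user = "bedrock-api-user"  # placeholder IAM user that owns the long-term key

for cred in iam.list_service_specific_credentials(UserName=user)["ServiceSpecificCredentials"]:
    if "bedrock" not in cred["ServiceName"]:
        continue  # skip credentials for other services (e.g. CodeCommit)
    cred_id = cred["ServiceSpecificCredentialId"]
    iam.update_service_specific_credential(       # deactivate first to cut off access
        UserName=user, ServiceSpecificCredentialId=cred_id, Status="Inactive"
    )
    iam.delete_service_specific_credential(       # then delete once rotation is complete
        UserName=user, ServiceSpecificCredentialId=cred_id
    )
```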
Fri, October 17, 2025
Amazon Bedrock Guardrails: Customer-Managed KMS Key Support
🔐 AWS now supports customer-managed AWS Key Management Service (KMS) keys for Amazon Bedrock Guardrails Automated Reasoning checks. Customers can encrypt policy content and test artifacts with their own keys instead of the default key, retaining control over key lifecycle and access. This capability helps regulated organizations meet compliance requirements and is available in all Regions where Bedrock Guardrails is available. Refer to the AWS documentation and the Bedrock console to get started.
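A hedged boto3 sketch of creating a guardrail encrypted with a customer-managed key; the key ARN and messaging strings are placeholders, and the Automated Reasoning policy itself is attached per the Guardrails documentation.

```python
# Sketch: creating a guardrail whose configuration (including Automated Reasoning policy
# content and test artifacts) is encrypted with a customer-managed KMS key.
# Key ARN and messaging strings are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="ar-checked-guardrail",
    blockedInputMessaging="This request cannot be processed.",
    blockedOutputsMessaging="This response was blocked by policy.",
    kmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
print(guardrail["guardrailId"], guardrail["version"])
```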
Thu, October 16, 2025
Encoding-Based Attack Protection with Bedrock Guardrails
🔒 Amazon Bedrock Guardrails offers configurable, cross-model safeguards to protect generative AI applications from encoding-based attacks that attempt to hide harmful content using encodings such as Base64, hexadecimal, ROT13, and Morse code. It implements a layered defense—output-focused filtering, prompt-attack detection, and customizable denied topics—so legitimate encoded inputs are allowed while attempts to request or generate encoded harmful outputs are blocked. The design emphasizes usability and performance by avoiding exhaustive input decoding and relying on post-generation evaluation.
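The post-generation evaluation can also be exercised directly with the ApplyGuardrail API, which checks text against a configured guardrail without invoking a model. A small sketch; the guardrail ID and version are placeholders for a guardrail you have already set up.

```python
# Sketch: running a guardrail check on model output via the ApplyGuardrail API.
# Guardrail ID and version are placeholders; the sample text is harmless Base64
# ("This is a Base64-encoded response.") used only to illustrate an encoded output.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

result = bedrock.apply_guardrail(
    guardrailIdentifier="gr-example1234",   # placeholder guardrail ID
    guardrailVersion="1",
    source="OUTPUT",                        # evaluate model output rather than user input
    content=[{"text": {"text": "VGhpcyBpcyBhIEJhc2U2NC1lbmNvZGVkIHJlc3BvbnNlLg=="}}],
)
print(result["action"])  # "GUARDRAIL_INTERVENED" when the guardrail blocks the content
```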