All news tagged #twelvelabs

Thu, October 30, 2025

TwelveLabs Pegasus 1.2 Now in Three Additional AWS Regions

🚀 Amazon expanded availability of TwelveLabs Pegasus 1.2 to US East (Ohio), US West (N. California), and Europe (Frankfurt) via Amazon Bedrock. Pegasus 1.2 is a video-first language model optimized for long-form video understanding, video-to-text generation, and temporal reasoning across visual, audio, and textual signals. The regional rollout brings the model closer to customers' data and end users, reducing latency and simplifying deployment architectures. Developers can now build enterprise-grade video intelligence applications in these regions.
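A minimal sketch of calling Pegasus 1.2 from one of the new regions via the Bedrock runtime API. The model ID and request body field names below are illustrative assumptions, not the documented schema; confirm the exact values in the Amazon Bedrock model catalog before use.

```python
import json

# Assumed model ID for illustration only; verify in the Bedrock console.
PEGASUS_MODEL_ID = "twelvelabs.pegasus-1-2-v1:0"

def build_pegasus_request(prompt: str, video_s3_uri: str) -> dict:
    """Assemble an invoke_model request for a video-to-text prompt.

    The body field names (inputPrompt, mediaSource) are illustrative
    assumptions about the Pegasus request format, not documented values.
    """
    body = {
        "inputPrompt": prompt,
        "mediaSource": {"s3Location": {"uri": video_s3_uri}},
    }
    return {
        "modelId": PEGASUS_MODEL_ID,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps(body),
    }

def invoke_pegasus(request: dict, region: str = "us-east-2"):
    """Send the request to a region where Pegasus 1.2 is live, e.g.
    us-east-2 (Ohio). Requires model access to be granted first."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(**request)
    return json.loads(response["body"].read())
```

Pointing the client at a nearby region (Ohio, N. California, or Frankfurt) is what realizes the latency reduction the rollout describes.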


Wed, October 29, 2025

TwelveLabs Marengo 3.0 Now on Amazon Bedrock Platform

🎥 TwelveLabs' Marengo Embed 3.0 is now available on Amazon Bedrock, providing a unified video-native multimodal embedding that represents video, images, audio, and text in a single vector space. The release doubles processing capacity—up to 4 hours and 6 GB per file—expands language support to 36 languages, and improves sports analysis and multimodal search precision. It supports synchronous low-latency text and image inference and asynchronous processing for video, audio, and large files.
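Because Marengo 3.0 represents video, images, audio, and text in a single vector space, cross-modal search reduces to nearest-neighbor ranking over that space. A minimal sketch, using toy stand-in vectors rather than real model output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embeddings in the shared space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_clips(query_embedding: list[float],
               clip_embeddings: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Rank stored video-clip embeddings against a text query embedding.

    In practice the query embedding would come from a synchronous
    Marengo text request and the clip embeddings from asynchronous
    video processing; here they are toy vectors.
    """
    scored = [
        (clip_id, cosine_similarity(query_embedding, emb))
        for clip_id, emb in clip_embeddings.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The unified space is what lets a text query score directly against video embeddings without a translation step between modalities.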


Tue, September 9, 2025

TwelveLabs Marengo 2.7 Embeddings Now Synchronous in Bedrock

Amazon Bedrock now supports synchronous inference for TwelveLabs Marengo Embed 2.7, delivering low-latency text and image embeddings directly in API responses. Marengo 2.7 was previously optimized for asynchronous processing of large video, audio, and image files; the new synchronous mode enables responsive search and retrieval features—such as instant natural-language video search and image similarity discovery—while retaining advanced video understanding via asynchronous workflows.
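A sketch of what the synchronous path looks like from the client side: the embedding arrives directly in the response body rather than being written to S3 as in the asynchronous workflow. The model ID and all field names (`inputType`, `inputText`, `embedding`) are assumptions for illustration, not the documented schema.

```python
import json

# Assumed model ID for Marengo Embed 2.7 on Bedrock; verify in the console.
MARENGO_MODEL_ID = "twelvelabs.marengo-embed-2-7-v1:0"

def build_text_embed_request(text: str) -> dict:
    """Build a synchronous text-embedding request.

    Field names (inputType, inputText) are illustrative assumptions
    about the Marengo request schema, not documented values.
    """
    body = {"inputType": "text", "inputText": text}
    return {
        "modelId": MARENGO_MODEL_ID,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps(body),
    }

def parse_embedding(response_body: bytes) -> list[float]:
    """Pull the embedding vector out of a synchronous response.

    With synchronous inference the vector is returned inline in the
    API response, so it can feed a search index immediately.
    The 'embedding' key is an assumption.
    """
    return json.loads(response_body)["embedding"]
```

With `bedrock-runtime`'s `invoke_model`, this request/response round trip completes in a single call, which is what makes instant query-time embedding feasible.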


Tue, August 19, 2025

TwelveLabs Pegasus 1.2 Now in AWS Virginia and Seoul

📹 TwelveLabs Pegasus 1.2 is now available in US East (N. Virginia) and Asia Pacific (Seoul) through Amazon Bedrock. The video-first language model is optimized for long-form content and combines visual, audio, and textual signals to deliver advanced video-to-text generation and temporal understanding. Regional availability reduces latency and simplifies architecture for enterprise video-intelligence applications. To begin, request model access via the Amazon Bedrock console.
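Model access can also be inspected programmatically per region once granted. A small sketch that filters a Bedrock `ListFoundationModels` response for TwelveLabs entries; the response dict in the usage example is a toy stand-in, while in practice it would come from `boto3.client("bedrock", region_name="us-east-1").list_foundation_models(byProvider="TwelveLabs")`.

```python
def twelvelabs_model_ids(list_models_response: dict) -> list[str]:
    """Extract TwelveLabs model IDs from a Bedrock ListFoundationModels
    response dict (keys: modelSummaries -> modelId, providerName)."""
    return [
        summary["modelId"]
        for summary in list_models_response.get("modelSummaries", [])
        if summary.get("providerName") == "TwelveLabs"
    ]
```

Running this against each target region (e.g. us-east-1, ap-northeast-2) shows which TwelveLabs models are invocable there before wiring up an application.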
