All news with the #mistral-ai tag
Tue, December 2, 2025
Mistral Large 3 Now Available in Microsoft Foundry
🚀 Microsoft has added Mistral Large 3 to Foundry on Azure, offering a high-capability, Apache 2.0–licensed open-weight model optimized for production workloads. The model focuses on reliable instruction following, extended-context comprehension, strong multimodal reasoning, and reduced hallucination for enterprise scenarios. Foundry packages unified governance, observability, and agent-ready tooling, and allows weight export for hybrid or on-prem deployment.
Tue, December 2, 2025
Amazon Bedrock Adds 18 Fully Managed Open Models Today
🚀 Amazon Bedrock expanded its model catalog with 18 new fully managed open-weight models, its largest single addition to date. The lineup includes Gemma 3, Mistral Large 3, NVIDIA Nemotron Nano 2, OpenAI gpt-oss variants, and models from other vendors. Through a unified API, developers can evaluate, switch, and adopt these models in production without rewriting applications or changing infrastructure. The models are available in supported AWS Regions.
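The "switch models without rewriting applications" claim rests on Bedrock's Converse API, which uses one request shape for every catalog model. A minimal sketch of that idea follows; the model IDs are illustrative placeholders, not confirmed catalog identifiers, and the actual invocation (commented out) needs AWS credentials and a supported Region.

```python
def build_converse_request(model_id: str, user_text: str, max_tokens: int = 512) -> dict:
    """Build keyword arguments for bedrock-runtime's Converse API.

    Switching vendors changes only model_id; the message payload
    keeps the same shape across all catalog models.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# Same request shape for two different vendors (IDs are illustrative):
req_mistral = build_converse_request("mistral.mistral-large-3-v1:0", "Summarize RAG.")
req_gemma = build_converse_request("google.gemma-3-27b-v1:0", "Summarize RAG.")

# To actually invoke (requires AWS credentials and a supported Region):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**req_mistral)
#   print(response["output"]["message"]["content"][0]["text"])
```

Because only `modelId` differs between the two requests, swapping or A/B-testing models becomes a configuration change rather than a code change.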
Tue, December 2, 2025
Mistral Large 3 and Ministral 3 Now on Amazon Bedrock
🚀 Amazon Bedrock now offers Mistral Large 3 and the Ministral 3 family alongside additional Mistral AI checkpoints, giving customers early access to open-weight multimodal models. Mistral Large 3 employs a granular Mixture-of-Experts architecture with 41B active and 675B total parameters and supports a 256K context window for long-form comprehension and agentic workflows. The Ministral 3 series (14B, 8B, 3B) plus Voxtral and Magistral small models let developers choose scales optimized for production assistants, RAG systems, single-GPU edge deployment, or low-resource environments.
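The Mixture-of-Experts figures above imply that only a small slice of the weights is exercised per token. A quick back-of-envelope check of the stated 41B-active / 675B-total split:

```python
# Parameter counts as stated in the announcement.
total_params = 675e9   # all experts combined
active_params = 41e9   # experts routed to per token

# Fraction of weights doing work on any given token.
active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%} of total weights")  # ~6.1%
```

So per-token compute scales roughly with the 41B active parameters, while memory footprint is set by the full 675B — the usual MoE trade-off of dense-model quality at a fraction of the inference FLOPs.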
Mon, November 10, 2025
Whisper Leak side channel exposes topics in encrypted AI chats
🔎 Microsoft researchers disclosed a new side-channel attack called Whisper Leak that can infer the topic of encrypted conversations with language models by observing network metadata such as packet sizes and timings. The technique exploits streaming LLM responses that emit tokens incrementally, leaking size and timing patterns even under TLS. Vendors including OpenAI, Microsoft Azure, and Mistral implemented mitigations such as random-length padding and obfuscation parameters to reduce the effectiveness of the attack.
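The random-length padding mitigation works by decoupling on-wire chunk sizes from token lengths, so packet-size observations stop correlating with content. The sketch below shows the idea only; the framing (a 2-byte length prefix plus random filler) is a hypothetical format for illustration, not any vendor's actual wire protocol.

```python
import secrets

PAD_MAX = 64  # illustrative cap on filler bytes per streamed chunk


def pad_chunk(token_bytes: bytes) -> bytes:
    """Append random-length filler so chunk sizes no longer track token
    lengths. A length prefix lets the receiver recover the payload."""
    pad_len = secrets.randbelow(PAD_MAX + 1)
    header = len(token_bytes).to_bytes(2, "big")
    return header + token_bytes + bytes(pad_len)


def unpad_chunk(wire: bytes) -> bytes:
    """Strip the filler using the length prefix."""
    n = int.from_bytes(wire[:2], "big")
    return wire[2 : 2 + n]


# Round-trip check: padding is transparent to the receiver.
chunk = pad_chunk(b"hello")
assert unpad_chunk(chunk) == b"hello"
```

Note this blunts only the size channel; timing patterns from incremental token emission would need separate measures, such as batching or jittering chunk release.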