All news with #vector database tag
Thu, August 28, 2025
Make Websites Conversational with NLWeb and AutoRAG
🤖 Cloudflare offers a one-click path to conversational search by combining Microsoft’s open NLWeb standard with its managed retrieval engine, AutoRAG. The integration crawls and indexes site content into R2 and a managed vector store, serves embeddings and inference via Workers AI, and exposes both a user-facing /ask endpoint and an agent-focused /mcp endpoint. Publishers get continuous re-indexing, controlled agent access, and observability through Cloudflare's AI Gateway, removing much of the infrastructure burden of building conversational experiences.
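A minimal sketch of what querying a site's user-facing endpoint could look like from Python. The base URL, the `query` parameter name, and the JSON response shape are assumptions for illustration; they are not spelled out in the announcement.

```python
# Minimal sketch: asking a site a question via its NLWeb /ask endpoint.
# Assumptions (placeholders, not confirmed by the announcement): the endpoint
# lives at https://example.com/ask, accepts a "query" parameter, and returns JSON.
import requests

def ask_site(base_url: str, question: str) -> dict:
    """Send a natural-language question to a site's /ask endpoint and return the JSON reply."""
    response = requests.get(f"{base_url}/ask", params={"query": question}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    answer = ask_site("https://example.com", "What have you published about vector databases?")
    print(answer)
```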
Mon, August 25, 2025
Amazon RDS Supports MariaDB 11.8 with Vector Engine
🚀 Amazon RDS for MariaDB now supports MariaDB 11.8 (minor version 11.8.3), the community's latest long-term maintenance release. The update introduces MariaDB Vector, letting you store vector embeddings and build retrieval-augmented generation (RAG) workloads directly in the managed database. It also adds controls to cap maximum temporary file and table sizes for better storage management. You can upgrade manually, via snapshot restore, or with Amazon RDS Managed Blue/Green deployments; 11.8 is available in all regions where RDS for MariaDB is offered.
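A minimal sketch of the embedding-storage and nearest-neighbour lookup this enables, using the `mariadb` Python connector. The host, credentials, table layout, and 4-dimensional toy embeddings are placeholders; it assumes the instance runs MariaDB 11.8+ with the VECTOR type and the VEC_FromText / VEC_DISTANCE_EUCLIDEAN functions available.

```python
# Minimal sketch of MariaDB Vector on RDS from Python (assumed schema and toy vectors).
import mariadb

conn = mariadb.connect(host="my-rds-endpoint", user="admin", password="...", database="docs")
cur = conn.cursor()

# A table with a vector column sized to the embedding model's dimension (4 here as a toy).
cur.execute("""
    CREATE TABLE IF NOT EXISTS chunks (
        id INT PRIMARY KEY AUTO_INCREMENT,
        body TEXT,
        embedding VECTOR(4) NOT NULL
    )
""")

# Insert an embedding as text; VEC_FromText converts it to MariaDB's vector format.
cur.execute(
    "INSERT INTO chunks (body, embedding) VALUES (?, VEC_FromText(?))",
    ("hello world", "[0.1, 0.2, 0.3, 0.4]"),
)

# Nearest-neighbour lookup for a query embedding -- the retrieval step of a RAG pipeline.
cur.execute(
    "SELECT id, body FROM chunks "
    "ORDER BY VEC_DISTANCE_EUCLIDEAN(embedding, VEC_FromText(?)) LIMIT 3",
    ("[0.1, 0.2, 0.25, 0.4]",),
)
print(cur.fetchall())

conn.commit()
conn.close()
```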
Mon, August 25, 2025
Amazon Neptune Adds BYOKG RAG Support via GraphRAG
🔍 Amazon Web Services announced general availability of Bring Your Own Knowledge Graph (BYOKG) support for Retrieval-Augmented Generation (RAG) using the open-source GraphRAG Toolkit. Developers can now connect domain-specific graphs stored in Amazon Neptune (Database or Analytics) directly to LLM workflows, combining graph queries with vector search. This reduces hallucinations, improves multi-hop and temporal reasoning, and makes graph-aware generative AI easier to operationalize.
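A minimal sketch of the underlying BYOKG pattern: query your own Neptune Analytics graph for context, then pass that subgraph to an LLM for grounded generation. The graph identifier, Cypher pattern, and `answer_with_llm` helper are hypothetical placeholders, and the boto3 `neptune-graph` execute_query call is written as I understand that API rather than as the toolkit's own interface.

```python
# Minimal sketch of BYOKG RAG against Amazon Neptune Analytics (placeholders throughout).
import json
import boto3

neptune = boto3.client("neptune-graph")  # Neptune Analytics data-plane client

def fetch_graph_context(graph_id: str, entity: str) -> list:
    """Pull the 1-2 hop neighbourhood of an entity from your own knowledge graph."""
    result = neptune.execute_query(
        graphIdentifier=graph_id,
        queryString=(
            "MATCH (e {name: $name})-[r*1..2]-(n) "
            "RETURN e, r, n LIMIT 50"
        ),
        parameters={"name": entity},
        language="OPEN_CYPHER",
    )
    return json.loads(result["payload"].read())["results"]

def answer_with_llm(question: str, context: list) -> str:
    # Hypothetical stand-in for a model call (e.g. via Amazon Bedrock):
    # ground the answer in the retrieved subgraph instead of the model's priors.
    prompt = f"Answer using only this graph context:\n{json.dumps(context)}\n\nQ: {question}"
    return prompt  # swap in a real model invocation here

context = fetch_graph_context("g-1234567890", "Anytown Hospital")
print(answer_with_llm("Which departments are linked to Anytown Hospital?", context))
```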
Fri, August 15, 2025
Amazon Neptune integrates with Cognee for GenAI memory
🧠 Amazon Neptune now integrates with Cognee to provide graph-native memory for agentic generative AI applications. The integration enables developers to use Amazon Neptune Analytics as the persistent graph and vector store behind Cognee’s memory layer, supporting large-scale memory graphs, long-term memory, and multi-hop reasoning. Hybrid retrieval across graph, vector, and keyword modalities helps agents deliver more personalized, cost-efficient, and context-aware experiences; documentation and a sample notebook are available to accelerate adoption.
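A minimal sketch of the add → cognify → search flow that Cognee's memory layer exposes, pointed at a Neptune Analytics graph. The provider name, the `neptune-graph://` URL scheme, and the environment-variable configuration are illustrative assumptions; consult the published documentation and sample notebook for the exact setup.

```python
# Minimal sketch: Cognee backed by Amazon Neptune Analytics as graph-native memory.
# The configuration values below are assumed placeholders, not confirmed settings.
import os
import asyncio
import cognee

os.environ["GRAPH_DATABASE_PROVIDER"] = "neptune_analytics"       # assumed provider name
os.environ["GRAPH_DATABASE_URL"] = "neptune-graph://g-1234567890"  # assumed URL scheme

async def main() -> None:
    # Ingest raw text into memory, build the knowledge graph, then query it.
    await cognee.add("Acme Corp acquired Globex in 2024 to expand into robotics.")
    await cognee.cognify()
    results = await cognee.search(query_text="What did Acme Corp acquire?")
    for result in results:
        print(result)

asyncio.run(main())
```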