
Binary quantization in Azure AI Search: optimized storage and faster search



Guest fsunavala-msft
Posted
As organizations continue to harness the power of Generative AI for building [URL='https://learn.microsoft.com/azure/search/retrieval-augmented-generation-overview']Retrieval-Augmented Generation (RAG)[/URL] applications and agents, the need for efficient, high-performance, and scalable solutions has never been greater. Today, we're excited to introduce [URL='https://learn.microsoft.com/azure/search/vector-search-how-to-configure-compression-storage#option-1-configure-quantization']Binary Quantization[/URL], a new feature that reduces vector index size by up to 96% and search latency by up to 40%.

[HEADING=1]What is Binary Quantization?[/HEADING]
Binary Quantization (BQ) is a technique that compresses high-dimensional vectors by representing each dimension as a single bit. This drastically reduces the memory footprint of a vector index and accelerates vector comparison operations, at the cost of recall. The loss of recall can be compensated for with two techniques, oversampling and reranking, giving you tools to choose what to prioritize in your application: recall, speed, or cost.

[HEADING=1]Why should I use Binary Quantization?[/HEADING]
Binary quantization is most applicable to customers who want to store a very large number of vectors at low cost. Azure AI Search keeps vector indexes in memory to offer the best possible search performance. Binary Quantization (BQ) lets you shrink the in-memory vector index, which in turn reduces the number of [URL='https://learn.microsoft.com/azure/search/search-what-is-azure-search']Azure AI Search[/URL] partitions needed to fit your data, leading to cost reductions. By converting each 32-bit floating point number into a 1-bit value, BQ can achieve up to a 28x reduction in vector index size (slightly less than the theoretical 32x because of overhead introduced by the index data structures).
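To make the compression ratio concrete, here is a minimal, framework-free sketch in plain Python with NumPy (not Azure AI Search code) of sign-based binary quantization and Hamming-distance comparison. The thresholding rule and bit-packing scheme are illustrative assumptions; the service's internal implementation may differ.

[CODE]
import numpy as np

def binary_quantize(vectors: np.ndarray) -> np.ndarray:
    """Map each float32 dimension to a single bit (1 if >= 0, else 0) and pack 8 bits per byte."""
    bits = (vectors >= 0).astype(np.uint8)   # 1 bit of information per dimension
    return np.packbits(bits, axis=-1)        # 1536 dims -> 192 bytes instead of 6144 bytes

def hamming_distance(packed_query: np.ndarray, packed_corpus: np.ndarray) -> np.ndarray:
    """Hamming distance = number of differing bits; a cheap stand-in for full-precision similarity."""
    xor = np.bitwise_xor(packed_corpus, packed_query)     # differing bits, byte by byte
    return np.unpackbits(xor, axis=-1).sum(axis=-1)       # popcount per document

rng = np.random.default_rng(0)
docs = rng.standard_normal((1000, 1536)).astype(np.float32)  # e.g. 1536-dimensional embeddings
query = rng.standard_normal(1536).astype(np.float32)

packed_docs = binary_quantize(docs)
packed_query = binary_quantize(query[None, :])[0]

print(docs.nbytes / packed_docs.nbytes)              # ~32x smaller, before index overhead
print(hamming_distance(packed_query, packed_docs)[:5])
[/CODE]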
The table below shows the impact of binary quantization on vector index size and storage use.

[B]Table 1.1: Vector Index Storage Benchmarks[/B]

Compression Configuration | Document Count | Vector Index Size (GB) | Total Storage Size (GB) | % Vector Index Savings | % Storage Savings
Uncompressed | 1M | 5.77 | 24.77 | - | -
SQ | 1M | 1.48 | 20.48 | 74% | 17%
BQ | 1M | 0.235 | 19.23 | 96% | 22%
[I]Table 1.1 compares the storage metrics of three vector compression configurations: Uncompressed, Scalar Quantization (SQ), and Binary Quantization (BQ). The data shows significant savings with Binary Quantization: up to 96% in vector index size and 22% in overall storage. MTEB/dbpedia was used with default vector search settings and OpenAI text-embedding-ada-002 at 1536 dimensions.[/I]

[HEADING=2]Increased Performance[/HEADING]
Binary Quantization (BQ) also improves performance, reducing query latency by 10-40% compared to uncompressed indexes. The exact improvement varies with oversampling rate, dataset size, vector dimensionality, and service configuration. BQ is fast for a few reasons: Hamming distance is cheaper to compute than cosine similarity, and packed bit vectors are smaller, which improves memory locality. This makes BQ a great choice when speed is critical, and it leaves room to apply moderate oversampling to balance speed with relevance.

[HEADING=2]Quality Retention[/HEADING]
The reduction in storage use and the improvement in search performance come at the cost of recall when binary quantization is used. However, the tradeoff can be managed effectively with oversampling and reranking. Oversampling retrieves a larger set of candidate documents to offset the resolution loss caused by quantization, and reranking then recalculates similarity scores using the full-resolution vectors.
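To illustrate how oversampling and reranking recover quality, here is a small continuation of the NumPy sketch above (it reuses binary_quantize, hamming_distance, docs, query, and packed_docs from that sketch, so it is illustrative rather than service code). The 2x oversampling factor and the cosine rerank are assumptions for the example; in Azure AI Search this two-stage flow happens inside the service when reranking with the original vectors is enabled.

[CODE]
import numpy as np

def search_with_rerank(query: np.ndarray, docs: np.ndarray, packed_docs: np.ndarray,
                       k: int = 10, oversampling: float = 2.0) -> np.ndarray:
    """Two-stage retrieval: cheap Hamming search over binary codes, then full-precision rerank."""
    packed_query = binary_quantize(query[None, :])[0]

    # Stage 1: oversample -- fetch k * oversampling candidates using the compressed vectors.
    n_candidates = int(k * oversampling)
    distances = hamming_distance(packed_query, packed_docs)
    candidates = np.argsort(distances)[:n_candidates]

    # Stage 2: rerank the candidates with full-precision cosine similarity.
    cand_vecs = docs[candidates]
    sims = (cand_vecs @ query) / (np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(query))
    return candidates[np.argsort(-sims)][:k]

top_k = search_with_rerank(query, docs, packed_docs, k=10, oversampling=2.0)
print(top_k)
[/CODE]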
The table below shows, for a subset of the [URL='https://huggingface.co/blog/mteb']MTEB datasets[/URL] and for [URL='https://platform.openai.com/docs/guides/embeddings']OpenAI[/URL] and [URL='https://cohere.com/embeddings']Cohere[/URL] embeddings, the mean NDCG@10 with binary quantization, with and without reranking/oversampling.

[B]Table 1.2: Impact of Binary Quantization on Mean NDCG@10 Across MTEB Subset[/B]

Model | No Rerank (Δ / %) | Rerank 2x Oversampling (Δ / %)
Cohere Embed V3 (1024d) | -4.883 (-9.5%) | -0.393 (-0.76%)
OpenAI text-embedding-3-small (1536d) | -2.312 (-4.55%) | +0.069 (+0.14%)
OpenAI text-embedding-3-large (3072d) | -1.024 (-1.86%) | +0.006 (+0.01%)
[I]Table 1.2 compares the point differences in mean NDCG@10 between a Binary Quantization index and an uncompressed index, across different embedding models on a subset of MTEB datasets.[/I]

[B]Key takeaways:[/B]
[LIST]
[*]BQ with reranking yields higher retrieval quality than BQ without reranking.
[*]The impact of reranking is more pronounced for models with lower dimensionality; for higher dimensions the effect is smaller and sometimes negligible.
[*]Strongly consider reranking with full-precision vectors to minimize, or even eliminate, the recall loss caused by quantization.
[/LIST]

[HEADING=1]When to Use Binary Quantization[/HEADING]
Binary Quantization is recommended for applications with high-dimensional vectors and large datasets, where storage efficiency and fast search performance are critical. It is particularly effective for embeddings with more than 1024 dimensions. For smaller dimensions, we recommend testing BQ's quality or considering SQ as an alternative. BQ also performs exceptionally well when embeddings are centered around zero, as is the case for popular embedding models from OpenAI and Cohere.

BQ with reranking/oversampling works by searching over a compressed vector index held in memory and reranking with the full-precision vectors stored on disk, allowing you to significantly reduce costs while maintaining strong search quality (see the configuration sketch after the feature list below). This approach works well in memory-constrained settings because it leverages both memory and SSDs to deliver high performance and scalability with large datasets.

BQ adds to the [URL='https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/announcing-cost-effective-rag-at-scale-with-azure-ai-search/ba-p/4104961']price-performance enhancements[/URL] made over the past several months, offering storage savings and performance improvements. By adopting this feature, organizations can achieve faster search results and lower operational costs, ultimately driving better outcomes and user experiences.

[HEADING=2]More Functionality Now Generally Available[/HEADING]
We're pleased to share that several vector search enhancements are now generally available in Azure AI Search. These updates give you more control over the retriever in RAG solutions and help optimize LLM performance. Here are the key highlights:
[LIST]
[*][URL='https://aka.ms/integrated-vectorization-ga']Integrated vectorization[/URL] with Azure OpenAI for Azure AI Search is now generally available.
[*][URL='https://aka.ms/azureaisearch-binary-data']Support for binary vector types:[/URL] Azure AI Search supports narrow vector types, including binary vectors. This enables storing and processing larger vector datasets at lower cost while maintaining fast search.
[*][URL='https://learn.microsoft.com/en-us/azure/search/vector-search-how-to-query?tabs=query-2024-07-01%2Cfilter-2024-07-01%2Cbuiltin-portal#vector-weighting']Vector weighting:[/URL] Assign relative importance to vector queries over term queries in hybrid search scenarios, giving you more control over the final result set by favoring vector similarity over keyword similarity.
[*][URL='https://learn.microsoft.com/en-us/azure/search/index-add-scoring-profiles']Document boosting:[/URL] Boost your search results with scoring profiles tailored to vector and hybrid search queries. Whether you prioritize freshness, geolocation, or specific keywords, targeted document boosting ensures more relevant results for your needs.
[/LIST]
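For orientation, here is a rough sketch of what a vector search configuration with binary quantization can look like in an index definition, written as a Python dict that mirrors the REST index schema. Property names such as rerankWithOriginalVectors and defaultOversampling follow the compression settings described in the documentation linked in the next section, but exact names and defaults vary by API version, so treat this as an assumption to verify against the linked docs rather than as the definitive schema.

[CODE]
# Illustrative fragment of a search index definition (Python dict mirroring the REST schema).
# Property names are assumptions based on the public docs and may differ by API version.
vector_search = {
    "algorithms": [
        {"name": "hnsw-1", "kind": "hnsw"}
    ],
    "compressions": [
        {
            "name": "bq-1",
            "kind": "binaryQuantization",        # quantize each dimension to a single bit
            "rerankWithOriginalVectors": True,   # rerank candidates with full-precision vectors
            "defaultOversampling": 2.0           # fetch 2x candidates before reranking
        }
    ],
    "profiles": [
        {
            "name": "bq-profile",
            "algorithm": "hnsw-1",
            "compression": "bq-1"                # attach the compression to the vector profile
        }
    ]
}
[/CODE]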
[HEADING=2][B]Getting Started with Azure AI Search[/B][/HEADING]
To get started with binary quantization, visit the official documentation: [URL='https://learn.microsoft.com/en-us/azure/search/vector-search-how-to-configure-compression-storage#add-compressions-to-a-search-index']Reduce vector size - Azure AI Search | Microsoft Learn[/URL]
[LIST]
[*]Learn more about [URL='https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search']Azure AI Search[/URL] and about all the [URL='https://learn.microsoft.com/en-us/azure/search/whats-new']latest features[/URL].
[*]Start creating a search service in the [URL='https://learn.microsoft.com/azure/search/search-create-service-portal']Azure Portal[/URL], [URL='https://learn.microsoft.com/en-us/azure/search/search-manage-powershell#create-or-delete-a-service']Azure CLI[/URL], the [URL='https://learn.microsoft.com/en-us/azure/search/search-manage-rest#create-or-update-a-service']Management REST API[/URL], an [URL='https://learn.microsoft.com/en-us/azure/search/search-get-started-arm']ARM template[/URL], or a [URL='https://learn.microsoft.com/en-us/azure/search/search-get-started-bicep?tabs=CLI']Bicep file[/URL].
[*]Learn about [URL='https://learn.microsoft.com/azure/search/retrieval-augmented-generation-overview']Retrieval Augmented Generation in Azure AI Search[/URL].
[*]Explore our preview client libraries in [URL='https://pypi.org/project/azure-search-documents/11.6.0b4/']Python[/URL], [URL='https://www.nuget.org/packages/Azure.Search.Documents/11.6.0-beta.4'].NET[/URL], [URL='https://central.sonatype.com/artifact/com.azure/azure-search-documents/overview']Java[/URL], and [URL='https://www.npmjs.com/package/@azure/search-documents?activeTab=readme']JavaScript[/URL], offering diverse integration methods to cater to varying user needs.
[*]Explore how to create end-to-end RAG applications with [URL='https://azure.microsoft.com/products/ai-studio/']Azure AI Studio[/URL].
[/LIST]
[url="https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/binary-quantization-in-azure-ai-search-optimized-storage-and/ba-p/4221918"]Continue reading...[/url]
