Integrating vision into RAG applications

By Pamela_Fox
Retrieval Augmented Generation (RAG) is a popular technique for getting LLMs to provide answers that are grounded in a data source. But what do you do when your knowledge base includes images, like graphs or photos? By adding multimodal models into your RAG flow, you can get answers based on image sources, too!



Our most popular RAG solution accelerator, azure-search-openai-demo, now has an optional feature for RAG on image sources. In the example question below, the app answers a question that requires correctly interpreting a bar graph:

[Screenshot: the app answering a question that requires interpreting a bar graph]



This blog post will walk through the changes we made to enable multimodal RAG, both so that developers using the solution accelerator can understand how it works, and so that developers using other RAG solutions can bring in multimodal support.

First let's talk about two essential ingredients: multimodal LLMs and multimodal embedding models.



Multimodal LLMs


Azure now offers multiple multimodal LLMs: gpt-4o and gpt-4o-mini through the Azure OpenAI service, and Phi-3.5-vision-instruct through the Azure AI Model Catalog. These models accept both images and text and return text responses. (In the future, we may have LLMs that take audio input and return non-text outputs!)



For example, an API call to the gpt-4o model can contain a question along with an image URL:



Code:
{
  "role": "user",
  "content": [
    { 
      "type": "text", 
      "text": "Whats in this image?" 
    },
    { 
      "type": "image_url", 
      "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" } 
    } 
  ]
}





Those image URLs can be full HTTP URLs, if the image happens to be available on the public web, or base-64 encoded data URIs, which is particularly helpful for privately stored images.
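As a concrete sketch in Python (assuming the openai package, an Azure OpenAI resource, and a gpt-4o deployment named "gpt-4o"; the endpoint, key, and API version below are placeholders), sending a privately stored image as a data URI might look like this:

Code:
import base64
from openai import AzureOpenAI

# Placeholders: fill in your Azure OpenAI endpoint, key, and API version
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-06-01",
)

# Encode a privately stored image as a base-64 data URI
with open("chart.png", "rb") as f:
    data_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # deployment name (assumption)
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": data_uri}},
            ],
        }
    ],
)
print(response.choices[0].message.content)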



For more examples of working with gpt-4o, check out openai-chat-vision-quickstart, a repo that deploys a simple Chat + Vision app to Azure and includes Jupyter notebooks showcasing additional scenarios.



Multimodal embedding models


Azure also offers a multimodal embedding API, as part of the Azure AI Vision APIs, that can compute embeddings in a multimodal space for both text and images. The API uses the state-of-the-art Florence model from Microsoft Research.



For example, this API call returns the embedding vector for an image:



Code:
curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2024-02-01-preview&model-version=2023-04-15" \
-H "Ocp-Apim-Subscription-Key: <subscription_key>" \
-H "Content-Type: application/json" \
--data-ascii "{ 'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png' }"





Once we can embed both images and text in the same embedding space, we can use vector search to find images that are similar to a user's query. For an example, check out this notebook that sets up a basic multimodal search of images using Azure AI Search.
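Here's a rough Python sketch of calling that API with the requests package (the endpoint, key, and "vector" response field follow the documented Image Retrieval API, but treat them as assumptions to verify), embedding a text query and an image into the shared space and comparing them with cosine similarity:

Code:
import math
import requests

# Placeholders: an Azure AI Vision resource in a region that supports multimodal embeddings
AI_VISION_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
AI_VISION_KEY = "<your-api-key>"
PARAMS = {"api-version": "2024-02-01-preview", "model-version": "2023-04-15"}
HEADERS = {"Ocp-Apim-Subscription-Key": AI_VISION_KEY, "Content-Type": "application/json"}

def vectorize_image(image_url: str) -> list[float]:
    # Compute a multimodal embedding for an image
    response = requests.post(
        f"{AI_VISION_ENDPOINT}/computervision/retrieval:vectorizeImage",
        params=PARAMS, headers=HEADERS, json={"url": image_url})
    response.raise_for_status()
    return response.json()["vector"]

def vectorize_text(text: str) -> list[float]:
    # Compute a multimodal embedding for a text query in the same space
    response = requests.post(
        f"{AI_VISION_ENDPOINT}/computervision/retrieval:vectorizeText",
        params=PARAMS, headers=HEADERS, json={"text": text})
    response.raise_for_status()
    return response.json()["vector"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

image_vector = vectorize_image("https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png")
text_vector = vectorize_text("a presentation slide about cloud services")
print(cosine_similarity(image_vector, text_vector))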


Multimodal RAG


With those two multimodal models, we were able to give our RAG solution the ability to include image sources in both the retrieval and answering process.



At a high-level, we made the following changes:

  • Search index: We added a new field to the Azure AI Search index to store the embedding returned by the multimodal Azure AI Vision API (while keeping the existing field that stores the OpenAI text embeddings).
  • Data ingestion: In addition to our usual PDF ingestion flow, we also convert each PDF document page to an image, store that image with the filename rendered on top, and add the embedding to the index.
  • Question answering: We search the index using both the text and multimodal embeddings. We send both the text and the image to gpt-4o, and ask it to answer the question based on both kinds of sources.
  • Citations: The frontend displays both image sources and text sources, to help users understand how the answer was generated.

Let's dive deeper into each of the changes above.



Search index


For our standard RAG on documents approach, we use an Azure AI search index that stores the following fields:

  • content: The extracted text content from Azure Document Intelligence, which can process a wide range of files and can even OCR images inside files.
  • sourcefile: The filename of the document.
  • sourcepage: The filename with page number, for more precise citations.
  • embedding: A vector field with 1536 dimensions, storing the embedding of the content field, computed using the text-only OpenAI ada-002 model.

For RAG on images, we add an additional field:

  • imageEmbedding: A vector field with 1024 dimensions, to store the embedding of the image version of the document page, computed using the AI Vision vectorizeImage API endpoint.
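
As a sketch with the azure-search-documents Python SDK (not the repo's exact code; the "embedding_config" profile name is a placeholder), the two vector fields could be declared like this:

Code:
from azure.search.documents.indexes.models import SearchField, SearchFieldDataType

# Existing text embedding field: 1536 dimensions (OpenAI ada-002)
text_embedding_field = SearchField(
    name="embedding",
    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
    searchable=True,
    vector_search_dimensions=1536,
    vector_search_profile_name="embedding_config",  # placeholder profile name
)

# New image embedding field: 1024 dimensions (Azure AI Vision multimodal embeddings)
image_embedding_field = SearchField(
    name="imageEmbedding",
    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
    searchable=True,
    vector_search_dimensions=1024,
    vector_search_profile_name="embedding_config",  # placeholder profile name
)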



Data ingestion


For our standard RAG approach, data ingestion involves these steps:

  1. Use Azure Document Intelligence to extract text out of a document
  2. Use a splitting strategy to chunk the text into sections. This is necessary to keep each chunk at a reasonable size, since sending too much content to an LLM at once tends to reduce answer quality.
  3. Upload the original file to Azure Blob storage.
  4. Compute ada-002 embeddings for the content field.
  5. Add each chunk to the Azure AI search index.
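
Steps 4 and 5 might look roughly like this in Python (a simplified sketch, not the repo's exact code; the index name, key field, and sourcepage format are placeholder assumptions):

Code:
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

openai_client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-openai-key>",
    api_version="2024-06-01",
)
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="gptkbindex",  # placeholder index name
    credential=AzureKeyCredential("<your-search-key>"),
)

# chunks: list of (id, text, filename, page) tuples produced by the splitting step
def index_chunks(chunks):
    documents = []
    for chunk_id, text, filename, page in chunks:
        # Step 4: compute the ada-002 embedding for the chunk text
        embedding = openai_client.embeddings.create(
            model="text-embedding-ada-002",  # deployment name (assumption)
            input=text,
        ).data[0].embedding
        documents.append({
            "id": chunk_id,  # assumed key field
            "content": text,
            "sourcefile": filename,
            "sourcepage": f"{filename}#page={page}",  # placeholder citation format
            "embedding": embedding,
        })
    # Step 5: add the chunks to the Azure AI Search index
    search_client.upload_documents(documents=documents)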

For RAG on images, we add two additional steps before indexing: uploading an image version of each document page to Blob Storage and computing multi-modal embeddings for each image.



Generating citable images


The images are not just a direct copy of the document page. Instead, they contain the original document filename written in the top left corner of the image, like so:

[Example: a document page image with the original filename written in the top left corner]





This crucial step enables the GPT vision model to provide citations in its answers later on. From a technical perspective, we achieved this by first using the PyMuPDF Python package to convert document pages to images, then using the Pillow Python package to add a top border to each image and write the filename there.
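
Here is a simplified sketch of that conversion (assuming the PyMuPDF and Pillow packages; the DPI, border height, and default font are arbitrary illustration choices):

Code:
import io
import fitz  # PyMuPDF
from PIL import Image, ImageDraw

def page_to_citable_image(pdf_path: str, page_number: int, filename: str) -> Image.Image:
    # Render the PDF page to a PNG image with PyMuPDF
    doc = fitz.open(pdf_path)
    pixmap = doc[page_number].get_pixmap(dpi=150)
    page_image = Image.open(io.BytesIO(pixmap.tobytes("png")))

    # Add a white border at the top and write the filename at (10, 10),
    # so the vision model can cite the source later
    border_height = 40  # arbitrary border size for illustration
    citable = Image.new("RGB", (page_image.width, page_image.height + border_height), "white")
    citable.paste(page_image, (0, border_height))
    draw = ImageDraw.Draw(citable)
    draw.text((10, 10), f"SourceFileName:{filename}", fill="black")
    return citable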



Question answering


Now that our Blob storage container has citable images and our AI search index has multi-modal embeddings, users can start to ask questions about images.



Our RAG app has two primary question-asking flows: one for "single-turn" questions, and one for "multi-turn" questions that incorporates as much conversation history as can fit in the context window. To simplify this explanation, we'll focus on the single-turn flow.

Our single-turn RAG on documents flow looks like:

[Diagram: single-turn RAG on documents flow]



  1. Receive a user question from the frontend.
  2. Compute an embedding for the user question using the OpenAI ada-002 model.
  3. Use the user question to fetch matching documents from the Azure AI search index, using a hybrid search that does a keyword search on the text and a vector search on the question embedding.
  4. Pass the resulting document chunks and the original user question to the gpt-3.5 model, with a system prompt that instructs it to adhere to the sources and provide citations with a certain format.
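
Steps 2 and 3 can be sketched with the azure-search-documents SDK like this (reusing the openai_client and search_client placeholders from the ingestion sketch above; the number of neighbors and result count are illustrative):

Code:
from azure.search.documents.models import VectorizedQuery

question = "What does a Product Manager do?"

# Step 2: embed the user question with ada-002
question_embedding = openai_client.embeddings.create(
    model="text-embedding-ada-002",  # deployment name (assumption)
    input=question,
).data[0].embedding

# Step 3: hybrid search = keyword search on the text + vector search on the embedding
results = search_client.search(
    search_text=question,
    vector_queries=[
        VectorizedQuery(vector=question_embedding, k_nearest_neighbors=50, fields="embedding"),
    ],
    top=3,
)
sources = [f"{doc['sourcepage']}: {doc['content']}" for doc in results]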



Our single-turn RAG on documents-plus-images flow looks like this:

[Diagram: single-turn RAG on documents-plus-images flow]



  1. Receive a user question from the frontend.
  2. Compute an embedding for the user question using the OpenAI ada-002 model AND an additional embedding using the AI Vision API multimodal model.
  3. Use the user question to fetch matching documents from the Azure AI Search index, using a hybrid multivector search that also searches the imageEmbedding field using the additional embedding. This way, the underlying vector search will find results that are semantically similar both to the text of the document and to any images in the document (e.g. "what trends are increasing?" could match a chart with a line going up and to the right).
  4. For each document chunk returned in the search results, convert the Blob image URL into a base64-encoded data URI. Pass both the text content and the image URIs to a GPT vision model (a code sketch follows this list), with this prompt that describes how to find and format citations:
    Code:
    The documents contain text, graphs, tables and images. 
    
    Each image source has the file name in the top left corner of the image with coordinates (10,10) pixels and is in the format SourceFileName:<file_name> 
    
    Each text source starts in a new line and has the file name followed by colon and the actual information. Always include the source name from the image or text for each fact you use in the response in the format: [filename]  
    
    Answer the following question using only the data provided in the sources below. 
    
    The text and image source can be the same file name, don't use the image title when citing the image source, only use the file name as mentioned.
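
Putting steps 2 through 4 together, a rough sketch of this flow (reusing placeholders from earlier sketches; vectorize_text, download_blob, and SYSTEM_PROMPT are hypothetical helpers standing in for the real implementation) could look like:

Code:
import base64

question = "What trends are increasing?"

# Step 2: compute both a text embedding and a multimodal embedding for the question
text_embedding = openai_client.embeddings.create(
    model="text-embedding-ada-002", input=question).data[0].embedding
image_query_embedding = vectorize_text(question)  # AI Vision multimodal embedding

# Step 3: hybrid multivector search across both vector fields
results = search_client.search(
    search_text=question,
    vector_queries=[
        VectorizedQuery(vector=text_embedding, k_nearest_neighbors=50, fields="embedding"),
        VectorizedQuery(vector=image_query_embedding, k_nearest_neighbors=50, fields="imageEmbedding"),
    ],
    top=3,
)

# Step 4: build a user message containing both the text sources and the image sources
content = [{"type": "text", "text": question}]
for doc in results:
    content.append({"type": "text", "text": f"{doc['sourcepage']}: {doc['content']}"})
    image_bytes = download_blob(doc["sourcepage"])  # hypothetical helper that fetches the page image
    data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode("utf-8")
    content.append({"type": "image_url", "image_url": {"url": data_uri}})

response = openai_client.chat.completions.create(
    model="gpt-4o",  # deployment name (assumption)
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # the citation prompt shown above
        {"role": "user", "content": content},
    ],
)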

Now, users can ask questions where the answers are entirely contained in the images and get correct answers! This can be a great fit for diagram-heavy domains, like finance.



Considerations


We have seen some really exciting uses of this multimodal RAG approach, but there is much to explore to improve the experience.



More file types: Our repository only implements image generation for PDFs, but developers are now ingesting many more formats, both image files like PNG and JPEG and non-image files like HTML, docx, and more. We'd love help from the community in bringing multimodal RAG support to more file formats.



More selective embeddings: Our ingestion flow uploads images for *every* PDF page, but many pages may be lacking in visual content, and that can negatively affect vector search results. For example, if your PDF contains completely blank pages, and the index stored the embeddings for those, we have found that vector searches often retrieve those blank pages. Perhaps in the multimodal space, "blankness" is considered similar to everything. We've considered approaches like using a vision model in the ingestion phase to decide whether an image is meaningful, or using that model to write a very descriptive caption for images instead of storing the image embeddings themselves.
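
For instance, a hedged sketch of the captioning idea, asking gpt-4o at ingestion time to either describe a page image or flag it as visually empty, might look like:

Code:
def describe_page_image(data_uri: str) -> str | None:
    # Ask a vision model whether the page has meaningful visual content,
    # and if so, to write a descriptive caption that could be embedded
    # (or indexed as text) instead of the raw image embedding.
    response = openai_client.chat.completions.create(
        model="gpt-4o",  # deployment name (assumption)
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "If this page has meaningful visual content (charts, diagrams, photos), "
                    "write a detailed caption describing it. Otherwise reply with NONE.")},
                {"type": "image_url", "image_url": {"url": data_uri}},
            ],
        }],
    )
    caption = response.choices[0].message.content.strip()
    return None if caption == "NONE" else caption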



Image extraction: Another approach would be to extract images from document pages, and store each image separately. That would be helpful for documents where the pages contain multiple distinct images with different purposes, since then the LLM would be able to focus more on only the most relevant image.



We would love your help in experimenting with RAG on images, sharing how it works for your domain, and suggesting what we can improve. Head over to our repo and follow the steps for deploying with the optional GPT vision feature enabled!
