rahulsaha
Searching for content on a website typically relies on keyword-based search, a method with inherent limitations. For example, searching for a ‘rain jacket for women’ on an e-commerce website returns all jackets for both men and women, as the search focuses on just the keyword ‘jacket’. The search simply isn’t contextual enough to provide the desired results.
Semantic search is a technique that allows users to search for information using natural language queries rather than specific keywords. With semantic search, the meaning and intent of users’ queries are inferred, and personalized relevant results are returned. Generative AI, on the other hand, is a type of artificial intelligence that can generate new content from existing data, such as text, images, or audio. You can use generative AI to produce summaries, captions, recommendations, or responses based on a user's input and preferences.
This blog post discusses how to build intelligent MySQL applications with semantic search and generative AI responses using Azure OpenAI and Azure Database for MySQL with Azure AI Search. As an example, we’ll use a Magento ecommerce app designed to sell jackets, and then build a “Product Recommender CoPilot” chat application that:
- Recognizes the intent of a user’s natural language queries.
- Generates custom responses to recommend suitable products using the product details and reviews data stored in Azure Database for MySQL.
Architecture
The simplest way to include the rich capabilities of semantic search and generative AI in your applications is to build a solution using the Retrieval Augmented Generation (RAG) architecture with Azure AI Search and Azure OpenAI services.
What is a RAG architecture?
Retrieval Augmented Generation, or RAG, is an architecture that augments the natural language understanding and generation capabilities of LLMs like ChatGPT by adding an information retrieval system like Azure AI Search which works with your data stored in data sources like Azure Database for MySQL. In a typical RAG pattern:
- A user submits a query or prompt in natural language.
- The query is routed to Azure AI Search to find the relevant information.
- Azure AI Search sends the top ranked semantic search results to a Large Language Model (LLM).
- The LLM then processes the natural language query and uses reasoning capabilities to generate a response to the initial prompt.
Sample product recommender Copilot architecture
A sample RAG architecture for the AI solution we’ll show you how to build in this blog post appears in the following graphic:
- Azure AI Search pulls the content (in our case, the product details and reviews data) from a backend Azure Database for MySQL database by using an indexer that runs periodically.
- The product details and reviews data are further chunked and vectorized using Azure OpenAI’s text embedding model.
- Azure AI Search then persists this vectorized data in a vector search index.
- When a user uses the “Product Recommender CoPilot” chat application, the query is sent to an Azure OpenAI Chat Completion Service.
- Azure AI Search is now used as a data source to find the most relevant response using vector-search or hybrid search (vector + semantic search).
- The Azure OpenAI Chat Completion service then uses these search results to generate a custom response back to the user query.
In this post, we’ll walk you through how to set up the backend data sources, indexers, and models required to build this solution. It is a detailed guide to the sample Python code hosted in our GitHub repository in a Jupyter Notebook: azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql.
For a short demo of the entire process using a Magento ecommerce application, watch the following video!
Prerequisites
Before getting started, you need to ensure that the following prerequisites are in place.
- An Azure account with an active subscription. If you don't have one, create one for free here.
- An Azure AI Search resource. If you don’t have one, you can create one via the Azure portal or the Azure CLI, as explained in the article here.
- An Azure OpenAI Service resource. If you don’t have one, you can create one via the Azure portal or the Azure CLI, as explained in the article here.
- A MySQL database (in Azure Database for MySQL or any database provider) populated with product and reviews data obtained from your ecommerce application like Magento.
- To create an Azure Database for MySQL server, follow the instructions in the article here.
- If you need some sample product and reviews data to try out this example, refer to “Upload data to MySQL DB” in the Jupyter Notebook azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql (a minimal sample data loader sketch follows this list).
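If you don’t yet have data handy, the following is a minimal sketch of loading a few product and review rows into MySQL with the mysql-connector-python package. The `product_reviews` table name, its columns, and the sample rows are hypothetical placeholders; the notebook’s “Upload data to MySQL DB” section defines the actual schema it expects.

```python
# Hypothetical sample data loader for Azure Database for MySQL.
# Table name, columns, and rows are illustrative only.
import mysql.connector

conn = mysql.connector.connect(
    host="<your-server>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
    database="ecommerce",
)
cursor = conn.cursor()

cursor.execute("""
    CREATE TABLE IF NOT EXISTS product_reviews (
        review_id INT PRIMARY KEY,
        product_id VARCHAR(64),
        product_name VARCHAR(255),
        product_description TEXT,
        review_text TEXT
    )
""")

rows = [
    (1, "WJ012", "Women's rain jacket", "Waterproof hooded jacket for women.",
     "Kept me completely dry in heavy rain. Highly recommended!"),
    (2, "MJ034", "Men's softshell jacket", "Lightweight softshell for men.",
     "Comfortable and warm, but not fully waterproof."),
]
cursor.executemany(
    "INSERT INTO product_reviews VALUES (%s, %s, %s, %s, %s)", rows
)
conn.commit()
cursor.close()
conn.close()
```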
Process
Supporting semantic search in an e-commerce application that leverages Azure AI Search and Azure OpenAI Services in the backend requires completing the following steps:
I. Set up data source connection in Azure AI Search.
II. Set up automatic chunking, vectorization and indexing.
III. Use vector search from a sample application.
IV. Generate a GPT response to the user.
V. Test the solution in the Azure OpenAI Studio playground.
Azure AI Search pulls the product details and reviews data from a backend MySQL flexible server by using an indexer that runs periodically. The retrieved data is then chunked and vectorized using Azure OpenAI’s text embedding model, and the vectorized data is persisted in a vector search index in Azure AI Search.
I. Set up data source connection in Azure AI Search
The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. The data source is defined as an independent resource so that it can be used by multiple indexers. In this example, we’ll use a custom table with product details and review data which is stored in a database in Azure Database for MySQL.
In Azure AI Search, before creating a search index, we’ll need to create a connection to the data source. We’ll import Azure AI Search classes like ‘SearchClient’, ‘SearchIndexerClient’, and ‘SearchIndexerDataSourceConnection’, and use functions like “create_or_update_data_source_connection()” to set up the data source connection in Azure AI Search. We’ll also import several other models; the comprehensive list is shared in the referenced code sample.
Code: “I. Set up data source connection in Azure AI Search” in the Jupyter Notebook azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql
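Here is a minimal sketch of that step, assuming the hypothetical `product_reviews` table from the prerequisites and placeholder resource names; it uses the azure-search-documents Python SDK, as the notebook does.

```python
# Hypothetical data source connection for the MySQL (preview) data source type.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

search_endpoint = "https://<your-search-service>.search.windows.net"
indexer_client = SearchIndexerClient(search_endpoint,
                                     AzureKeyCredential("<search-admin-key>"))

data_source = SearchIndexerDataSourceConnection(
    name="mysql-product-reviews-datasource",
    type="mysql",  # MySQL indexer support is in preview
    connection_string=(
        "Server=<your-server>.mysql.database.azure.com;Port=3306;"
        "Database=ecommerce;Uid=<admin-user>;Pwd=<password>;SslMode=Preferred;"
    ),
    container=SearchIndexerDataContainer(name="product_reviews"),
)
indexer_client.create_or_update_data_source_connection(data_source)
```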
II. Set up automatic chunking, vectorization and indexing
We are now ready to create an Azure Database for MySQL indexer that periodically pulls the product details and reviews data from the database in Azure Database for MySQL, chunks and vectorizes the data and persists it in a vector search index.
To do this, we’ll first create an index which takes the product details and reviews data for each product, splits the combined text into chunks and embeds each chunk as a vector. For any incoming user query, we’ll search for the most relevant chunk using vector search and semantic search.
Create an Azure AI Search index
In Azure AI Search, a search index is the searchable content made available to the search engine for indexing, full text search, vector search, hybrid search, and filtered queries. An index is defined by a schema and is saved to the search service.
- We’ll first create an index object from the ‘SearchIndexClient’ class.
- Then, we define the field mappings to map the fields in the MySQL database to the fields in the AI Search index. Because the combined text is generally long, it needs to be split into smaller chunks; to hold these, we’ll add an additional search field called “chunk”.
- Next, we’ll decide which search capabilities the index will support. In this example, we’ll use ‘vector search’ along with ‘semantic re-ranking’. Here is how the two work together in our solution:
- Vector search is first performed on all the entries in the search index.
- Semantic re-ranking uses a neural re-ranking model that can only process a limited number of documents. The top 50 results obtained using vector search are sent to this model, which re-ranks them and returns the best matches within that reduced set. Semantic re-ranking improves the relevance of the final results and also produces short-form captions and answers that are useful as LLM inputs for generation.
- We’ll then define the vector search and semantic configurations:
- Vector search configuration – You can choose different algorithms, such as HNSW or KNN, to perform the vector search. In this post, we’ll choose the most commonly used algorithm – HNSW. We’ll configure the HNSW algorithm to use the ‘COSINE’ metric. Treating each vector as a point in multi-dimensional space, the algorithm finds the cosine distance between the points; the lower the distance, the more similar the vectors.
- Semantic configuration – Here, we’ll define the index field on which semantic re-ranking is performed.
- Finally, we’ll create the search index using the above two configurations on the relevant MySQL DB table fields.
Code: “II. Set up automatic chunking, vectorization and indexing” -> “Create index” in the Jupyter Notebook azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql
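Here is a minimal sketch of such an index definition, assuming hypothetical field names (chunk_id, parent_id, product_id, chunk, vector), a 1536-dimension embedding model, and the model classes from azure-search-documents 11.6+ (class and parameter names differ slightly in older SDK versions).

```python
# Hypothetical index with an HNSW (cosine) vector profile and a semantic configuration.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    HnswAlgorithmConfiguration, HnswParameters, SearchField, SearchFieldDataType,
    SearchIndex, SemanticConfiguration, SemanticField, SemanticPrioritizedFields,
    SemanticSearch, VectorSearch, VectorSearchProfile,
)

index_client = SearchIndexClient("https://<your-search-service>.search.windows.net",
                                 AzureKeyCredential("<search-admin-key>"))

fields = [
    SearchField(name="chunk_id", type=SearchFieldDataType.String, key=True,
                sortable=True, filterable=True, analyzer_name="keyword"),
    SearchField(name="parent_id", type=SearchFieldDataType.String, filterable=True),
    SearchField(name="product_id", type=SearchFieldDataType.String, filterable=True),
    SearchField(name="chunk", type=SearchFieldDataType.String, searchable=True),
    SearchField(name="vector",
                type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
                searchable=True, vector_search_dimensions=1536,
                vector_search_profile_name="hnsw-profile"),
]

vector_search = VectorSearch(
    algorithms=[HnswAlgorithmConfiguration(
        name="hnsw-config", parameters=HnswParameters(metric="cosine"))],
    profiles=[VectorSearchProfile(name="hnsw-profile",
                                  algorithm_configuration_name="hnsw-config")],
)

semantic_search = SemanticSearch(configurations=[
    SemanticConfiguration(
        name="reviews-semantic-config",
        prioritized_fields=SemanticPrioritizedFields(
            content_fields=[SemanticField(field_name="chunk")]),
    )
])

index = SearchIndex(name="product-reviews-index", fields=fields,
                    vector_search=vector_search, semantic_search=semantic_search)
index_client.create_or_update_index(index)
```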
Chunking
We’ll create a skillset using two pre-built “skills”:
- The “Split Skill”, which takes the concatenated text and divides it into chunks.
- The “Azure OpenAI Embedding Skill”, which takes the outputs of the Split Skill and vectorizes them individually.
We’ll then apply an Index Projector to make it so that our final index has one item for every chunk of text, rather than one item for every original row in the database.
Code: “II. Set up automatic chunking, vectorization and indexing” -> “Create skillset” in the Jupyter Notebook azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql
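A minimal sketch of that skillset, under the same assumptions as the previous snippets (hypothetical names, azure-search-documents 11.6+; the AzureOpenAIEmbeddingSkill and index projection parameter names differ in older SDK versions), might look like the following. For brevity, it chunks only the review text, whereas the notebook works on the concatenated product details and review text.

```python
# Hypothetical skillset: split text into chunks, embed each chunk, and project
# one index document per chunk.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    AzureOpenAIEmbeddingSkill, IndexProjectionMode, InputFieldMappingEntry,
    OutputFieldMappingEntry, SearchIndexerIndexProjection,
    SearchIndexerIndexProjectionSelector, SearchIndexerIndexProjectionsParameters,
    SearchIndexerSkillset, SplitSkill,
)

indexer_client = SearchIndexerClient("https://<your-search-service>.search.windows.net",
                                     AzureKeyCredential("<search-admin-key>"))

# Split Skill: divide the source text into chunks ("pages").
split_skill = SplitSkill(
    context="/document",
    text_split_mode="pages", maximum_page_length=2000,
    inputs=[InputFieldMappingEntry(name="text", source="/document/review_text")],
    outputs=[OutputFieldMappingEntry(name="textItems", target_name="pages")],
)

# Azure OpenAI Embedding Skill: vectorize each chunk individually.
embedding_skill = AzureOpenAIEmbeddingSkill(
    context="/document/pages/*",
    resource_url="https://<your-openai-resource>.openai.azure.com",
    deployment_name="text-embedding-ada-002",
    model_name="text-embedding-ada-002",
    api_key="<azure-openai-key>",
    inputs=[InputFieldMappingEntry(name="text", source="/document/pages/*")],
    outputs=[OutputFieldMappingEntry(name="embedding", target_name="vector")],
)

# Index projection: one index document per chunk rather than per MySQL row.
index_projection = SearchIndexerIndexProjection(
    selectors=[SearchIndexerIndexProjectionSelector(
        target_index_name="product-reviews-index",
        parent_key_field_name="parent_id",
        source_context="/document/pages/*",
        mappings=[
            InputFieldMappingEntry(name="chunk", source="/document/pages/*"),
            InputFieldMappingEntry(name="vector", source="/document/pages/*/vector"),
            InputFieldMappingEntry(name="product_id", source="/document/product_id"),
        ],
    )],
    parameters=SearchIndexerIndexProjectionsParameters(
        projection_mode=IndexProjectionMode.SKIP_INDEXING_PARENT_DOCUMENTS),
)

skillset = SearchIndexerSkillset(
    name="product-reviews-skillset",
    skills=[split_skill, embedding_skill],
    index_projection=index_projection,
)
indexer_client.create_or_update_skillset(skillset)
```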
Create and run the MySQL Indexer
After you define the data source and create the index, you're ready to create the indexer. The indexer configuration specifies the inputs, parameters, and properties that control run-time behaviors.
To create an indexer, we’ll provide the name of the data source, index and skillset that we created in the previous steps. We’ll then run the indexer at periodic intervals.
Code: “II. Set up automatic chunking, vectorization and indexing” -> “Create indexer” in the Jupyter Notebook azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql
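A minimal sketch of creating and running the indexer, tying together the hypothetical data source, skillset, and index names from the previous snippets:

```python
# Hypothetical indexer that connects the MySQL data source, skillset, and index.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import SearchIndexer

indexer_client = SearchIndexerClient("https://<your-search-service>.search.windows.net",
                                     AzureKeyCredential("<search-admin-key>"))

indexer = SearchIndexer(
    name="product-reviews-indexer",
    data_source_name="mysql-product-reviews-datasource",
    target_index_name="product-reviews-index",
    skillset_name="product-reviews-skillset",
)
indexer_client.create_or_update_indexer(indexer)       # create (or update) the indexer
indexer_client.run_indexer("product-reviews-indexer")  # trigger an ad hoc run
```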
After the indexer is created, you can use the Azure portal to view the indexer under the Indexers blade in Azure AI Search. You can also run the indexer from the Azure portal or using Azure CLI. The runs can be configured to be ad hoc or scheduled:
- Ad hoc – An indexer can be configured to run only once or on demand. You can use APIs (as we did in the code sample), the Azure portal, or the CLI to achieve this.
- Scheduled – You can schedule an indexer to run at a certain time. Go to the “Settings” section in the Azure portal view of the indexer, and choose hourly, one time, daily, or any other custom settings (see the sketch after this list for doing the same programmatically).
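Scheduling can also be set from code. This short sketch reuses the indexer_client and the hypothetical indexer name from the previous snippet and attaches an hourly schedule:

```python
# Attach an hourly schedule to the (hypothetical) indexer instead of running it ad hoc.
from datetime import timedelta
from azure.search.documents.indexes.models import IndexingSchedule

indexer = indexer_client.get_indexer("product-reviews-indexer")
indexer.schedule = IndexingSchedule(interval=timedelta(hours=1))
indexer_client.create_or_update_indexer(indexer)
```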
III. Use vector search from a sample application
With the Azure AI Search indexer ready and running, you can now use the vector search and semantic search capabilities from your application. To call the search function, provide the user query text, the required query parameters, such as the number of nearest neighbors to return as top hits, and the columns or fields in the index to be considered. Also select the query type “Semantic” to include both vector search and semantic search, and provide the name of the semantic search configuration object that you created in section II.
Code: “III. Use vector search from a sample application” in the Jupyter Notebook azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql
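A minimal sketch of such a query, assuming the hypothetical index and field names from the earlier snippets and a client-side query embedding generated with the same text-embedding-ada-002 deployment used at indexing time:

```python
# Hypothetical vector + semantic query against the product-reviews-index.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

openai_client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
)

def embed(text: str) -> list[float]:
    # Embed the query with the same model used to vectorize the chunks.
    response = openai_client.embeddings.create(input=[text],
                                               model="text-embedding-ada-002")
    return response.data[0].embedding

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="product-reviews-index",
    credential=AzureKeyCredential("<search-query-key>"),
)

query = "suggest me some rain jackets for women"
results = search_client.search(
    search_text=query,  # text used by the semantic re-ranker
    vector_queries=[VectorizedQuery(vector=embed(query),
                                    k_nearest_neighbors=50, fields="vector")],
    query_type="semantic",
    semantic_configuration_name="reviews-semantic-config",
    select=["product_id", "chunk"],
    top=5,
)
for doc in results:
    print(doc["product_id"], doc["@search.score"],
          doc["@search.reranker_score"], doc["chunk"][:120])
```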
For a user query like “suggest me some rain jackets for women“, this single API call performs vector search and semantic re-ranking, and returns the top ranked results as shown in the following screenshot. You’ll also see the vector search score, semantic re-ranker score, and all the details of the recommended product.
IV. Generate a GPT response to the user
To generate custom responses to users, we simply need to make a call to the Azure OpenAI chat completion service. To trigger the completion, we first input some text as a “prompt”. The LLM in the service then generates the completion and attempts to match our context or pattern. An example prompt for this scenario could be: “You are an AI assistant that recommends products to people based on the product reviews data matching their query. Your answer should summarize the review text, include the product ID, include the parent id as review id, and mention the overall sentiment of the review.”
To generate the GPT response, we’ll use Azure OpenAI’s chat.completions.create() API and supply it with the user query, the OpenAI model to be used, the Azure AI Search index as the data source, and the prompt.
Code: “IV. Generate GPT Response to the user” in the Jupyter Notebook azure-mysql/Azure_MySQL_AI_Search_Sample at master Azure/azure-mysql
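A hedged sketch of that call is shown below, passing the Azure AI Search index as a data source through the chat completions extra_body; the deployment names, API version, index name, and semantic configuration are placeholders carried over from the earlier snippets.

```python
# Hypothetical chat completion grounded on the Azure AI Search index.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
)

system_prompt = (
    "You are an AI assistant that recommends products to people based on the "
    "product reviews data matching their query. Your answer should summarize the "
    "review text, include the product ID, include the parent id as review id, and "
    "mention the overall sentiment of the review."
)

completion = client.chat.completions.create(
    model="gpt-35-turbo",  # your chat model deployment name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "suggest me some rain jackets for women"},
    ],
    extra_body={
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": "https://<your-search-service>.search.windows.net",
                "index_name": "product-reviews-index",
                "semantic_configuration": "reviews-semantic-config",
                "query_type": "vector_semantic_hybrid",
                "authentication": {"type": "api_key", "key": "<search-admin-key>"},
                "embedding_dependency": {
                    "type": "deployment_name",
                    "deployment_name": "text-embedding-ada-002",
                },
            },
        }]
    },
)
print(completion.choices[0].message.content)
```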
V. Test the solution in the Azure OpenAI Studio playground
Finally, it’s important to test the solution in Azure OpenAI Studio, a user-friendly platform that allows developers to explore cutting-edge APIs and models and to build, test, deploy, and manage AI solutions for use in websites, applications, and other production environments.
For an end-to-end test of the AI solution in the Azure OpenAI Studio playground, perform the following steps:
- Go to Azure OpenAI Studio, select your subscription and the Azure OpenAI resource, and navigate to the “Chat playground”.
- In the Setup section, in the Add your data tab, select Add a data source.
- In the pop-up window, select Azure AI Search as the data source.
- Fill in subscription details, and then choose the Azure AI Search service and Azure AI Search index created in the previous sections.
- Enable the Add vector search to this search resource option.
- Select the Azure OpenAI – text-embedding-ada-002 model and select Next.
- Select Hybrid + semantic search type.
- Review all the details and add the data source.
- In the Prompt tab under the Setup section, type your prompt message in the System message text box.
You can use the sample prompt from the previous step or modify the prompt to add more details, such as product images, in the response.
- To test the solution, type a sample query in the chat box and watch your copilot generate a smart recommendation in response!
Now, you’re all set to deploy this Product Recommender CoPilot AI solution to production!
Conclusion
If you’re running applications, such as content management systems (CMS), e-commerce applications, or gaming sites, with data hosted in Azure Database for MySQL, you can enhance your user experience by building generative AI search and chat applications using LLMs available in Azure OpenAI and vector storage and indexing provided by Azure AI Search. Unleash the power of your data hosted on MySQL with the simple and seamless AI integrations on Azure!
If you have any queries or suggestions for more AI-related content, please let us know by contacting us at AskAzureDBforMySQL@service.microsoft.com. We’re also open to collaborating with you on technical collateral! Check out our Contributors initiative at aka.ms/mysql-contributors to learn more.