New AI integration for your SQL databases | RAG, Vector Search, Admin Automation


Zachary Cavanell

Check out new AI integrations for your Azure SQL databases. With Retrieval Augmented Generation, you can bridge structured data with generative AI, enhancing natural language queries across applications. With advanced vector-based semantic search, discover precise insights tailored to your data, while Copilot in Azure streamlines troubleshooting and T-SQL query authoring. Optimize workflows, personalize responses, and unlock new levels of efficiency in SQL-driven AI applications.






Accelerate performance troubleshooting and complex query authoring tasks with Copilot in Azure. Quickly diagnose database issues and receive expert recommendations for optimization, ensuring optimal performance and reliability. Seamlessly traverse hierarchies within tables and generate intricate queries with ease, saving time and resources.



Bob Ward, Azure Principal Architect, shows how to unleash the full potential of your SQL data, driving innovation and intelligence across your applications.





Transform SQL applications with AI.





See how new capabilities in Azure SQL can help you unlock innovation faster with modern AI apps.

Boost relevance and maintain dynamic indexes by generating vector embeddings.





Utilize skillsets to apply specialized services, optimizing data representation. Check out AI capabilities in Azure SQL.





Precise search results with hybrid search capabilities in SQL.





Utilize vectors, embeddings, JSON, and geospatial queries seamlessly for tailored outcomes. Check it out and get started.

Watch our video:








QUICK LINKS:


00:00 — AI and Azure SQL
01:40 — Using T-SQL for search
02:30 — Using Azure AI Search
03:17 — Vector embeddings and skillsets
04:08 — Connect your SQL data to an AI app
05:44 — Test it in Azure OpenAI Studio playground
07:22 — Combine native JSON data type in SQL
08:30 — Hybrid search
09:56 — Copilot in Azure: Performance troubleshooting
11:11 — Copilot in Azure: Query authoring
12:24 — Permissions
12:40 — Wrap up

Link References


For building AI apps, check out Intelligent applications - Azure SQL Database

Try out new copilot experiences at Microsoft Copilot skills and Azure SQL Database (preview) - Azure SQL

Unfamiliar with Microsoft Mechanics?


Microsoft Mechanics is Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

To keep getting this insider knowledge, join us on social:


Video Transcript:


- If you have SQL databases, you can now bring your data to generative AI apps as part of Retrieval Augmented Generation, apply the latest AI-powered, vector-based semantic search to light up natural language queries across your apps and use Copilot in Azure yourself for faster AI-powered troubleshooting and expert help in T-SQL query authoring. So to unpack all of this, I’m joined once again by Bob Ward from the SQL Team, no stranger to Mechanics, welcome back to the show.



- It’s great to be back, Jeremy.



- Thanks for joining us today. You know, especially now more than ever, generative AI is important and your data source is really key to grounding the accuracy of responses. That said, you don’t often see structured data being used in generative AI scenarios, but there’s no reason you can’t do that now.



- There isn’t. There’s nothing holding you back from leveraging your existing SQL data, on premises or cloud, to get started with generative AI today. Large language models are trained on a finite set of open-world knowledge, which obviously does not include what is in your SQL database. And so the approach we use here is Retrieval Augmented Generation, where you can define SQL data as a trusted source. Then, when a user submits a prompt, additional information can be retrieved here from SQL and used to augment the user prompt, which provides the large language model with additional context. And it can reason over this to generate a more informed and accurate response.
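To make that retrieve-then-augment flow concrete, here is a minimal, hypothetical sketch done entirely in T-SQL. The dbo.Products table, its columns, the endpoint URL, deployment name, and API key are placeholders, and it assumes the Azure SQL Database feature sp_invoke_external_rest_endpoint is available and allowed to reach your Azure OpenAI resource; it is an illustration of the pattern, not the exact demo code.

```sql
-- Hypothetical sketch: retrieve grounding rows from SQL, append them to the
-- user's prompt, and send the augmented prompt to an Azure OpenAI chat deployment.
DECLARE @userPrompt nvarchar(max) =
    N'I need inexpensive ideas to make my living room more appealing.';

-- 1) Retrieve: pull a handful of candidate rows from the trusted SQL source.
DECLARE @grounding nvarchar(max) =
(
    SELECT STRING_AGG(CONCAT(Name, N': ', Description, N' ($', Price, N')'), NCHAR(10))
    FROM (SELECT TOP (5) Name, Description, Price
          FROM dbo.Products                      -- placeholder table and columns
          WHERE Category = N'Home Decor'
          ORDER BY Price) AS p
);

-- 2) Augment: combine the retrieved rows with the original prompt.
DECLARE @payload nvarchar(max) = JSON_OBJECT(
    'messages': JSON_ARRAY(
        JSON_OBJECT('role': 'system',
                    'content': 'Answer using only the product data provided.'),
        JSON_OBJECT('role': 'user',
                    'content': CONCAT(@userPrompt, NCHAR(10), N'Product data:', NCHAR(10), @grounding))
    )
);

-- 3) Generate: call the chat completions endpoint from inside the engine.
--    The data is only passed as context; it is never used to train the model.
DECLARE @response nvarchar(max);
EXEC sp_invoke_external_rest_endpoint
    @url      = N'https://<your-openai-resource>.openai.azure.com/openai/deployments/<chat-deployment>/chat/completions?api-version=2024-02-01',
    @method   = 'POST',
    @headers  = N'{"api-key":"<your-api-key>"}',
    @payload  = @payload,
    @response = @response OUTPUT;

SELECT @response AS raw_llm_response;
```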



- And something we should point out here is that any data that you present to the large language model is actually kind of sitting to the side of it and it’s never absorbed into the LLM.



- Right, and it plays to one of SQL’s strengths, which is its granular access controls. As you design apps that use your SQL data for retrieval, these AI, or Copilot-styled, experiences inherit the user’s access permissions for data.



- Right, so what’s actually needed here though? I mean, I know a lot of people are probably watching right now and they’ve maybe got all the right pieces in place to do generative AI using their SQL back end with LIKE clauses and different kinds of semantic search. What else do I need?



- Well, the first thing you do here is get natural language search to work. I have a sample retail product database here, and I’ll start with how you would normally search using SQL. As you see in the first comment, I want to help my customers find some inexpensive options to remodel their living room. And I have the usual options here, like looking for a category, or even using the LIKE clause and full-text search on fields like the description. But when I run that, it’s not very specific, because this can include many different things. I could even see an air conditioner, a camera, and even a big brake kit for a car. Those aren’t relevant for a living room remodel.
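For reference, a traditional keyword-style query along these lines might look like the sketch below. The dbo.Products table and columns are hypothetical stand-ins for the sample retail database, and the FREETEXT predicate assumes a full-text index exists on the Description column.

```sql
-- Classic keyword search: category filter, LIKE, and full-text search.
-- It matches on words rather than meaning, so unrelated products slip through.
SELECT ProductID, Name, Category, Price
FROM dbo.Products                                   -- placeholder table and columns
WHERE Category LIKE N'%living%'
   OR Description LIKE N'%remodel%'
   OR FREETEXT(Description, N'inexpensive living room remodel');
```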



- So just using T-SQL by itself, then, gets us part of the way there, but it’s not enough.



- Well, things like LIKE clauses can do some basic similarity search, but we can do better with AI. For this, on the back end, we’re going to use Azure AI Search as our search index. Then the index is populated using the Azure OpenAI embeddings API, which takes the information in the database and generates what’s called a vector embedding. It’s a bit like a GPS coordinate. Then, on the query side, the user’s prompt is also vectorized with an embedding, which ensures both are effectively speaking the same language. From there, the query engine can use the vector data to find similarities between what you’ve queried in your prompt and the data in the database. In this way, you can interact in a descriptive, natural language way; you aren’t bound to keywords, and the system knows what to pull up by looking for similarity matches. In my case, I’ve already created an index with Azure AI Search. Let me show you what’s inside the Fields tab. It’s based on my product data table, and if we scroll through the fields, these are the fields that I’ve assigned to my index based on the fields in the table, and you’ll see there’s one very special one called an embedding. I was able to use the Azure OpenAI service to automatically generate these embeddings to represent our data. You can see my vector index here and its size, and here’s the index that I’m building using the product data; the index will stay in sync as the data changes. Now, let me jump into the Indexers area and show you what’s behind this indexer. Inside settings, you can see what I used to build it: something called a skillset. Skillsets are services that can be applied to your data to do specific things. This is where we use the Azure OpenAI Service to calculate the embeddings.
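The skillset handles this calculation automatically for every indexed row, but to see what a single embedding call looks like, here is a hedged sketch of requesting one directly from the database with sp_invoke_external_rest_endpoint. The endpoint, deployment name, and key are placeholders, and the JSON path used to read the result assumes the wrapper format that procedure returns.

```sql
-- Hypothetical sketch: vectorize one product description by calling an
-- Azure OpenAI embeddings deployment. A skillset in Azure AI Search automates
-- this same kind of call for every row it indexes.
DECLARE @text nvarchar(max) =
    N'Mid-century modern fabric sofa, easy assembly, neutral tones.';
DECLARE @payload nvarchar(max) = JSON_OBJECT('input': @text);
DECLARE @response nvarchar(max);

EXEC sp_invoke_external_rest_endpoint
    @url      = N'https://<your-openai-resource>.openai.azure.com/openai/deployments/<embedding-deployment>/embeddings?api-version=2024-02-01',
    @method   = 'POST',
    @headers  = N'{"api-key":"<your-api-key>"}',
    @payload  = @payload,
    @response = @response OUTPUT;

-- The embedding comes back as an array of floats inside the response JSON.
SELECT JSON_QUERY(@response, '$.result.data[0].embedding') AS embedding_vector;
```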



- Okay, so now you’ve got, kind of, the prerequisite foundations in place, your vector index, what’s the next step?



- So now that I’ve got the foundation in place for a natural language chat, I can go to Azure OpenAI Studio, which lets you pick the large language model you want and bring your own data to ground it, which in our case will be our newly-created index from the Azure AI Search service. Now I’m in Azure OpenAI Studio and I have a model deployed to get basic generative AI running, but it’s not connected to my data yet, so it only knows what’s in the base training set for the LLM. Now, I can add my own data source. First, I’ll choose the data source: Azure AI Search. Now, I’ll choose the search service we just saw, then the index for the product data. And because I have the embeddings, I can also use vector search. I’ll select that option and pick my embedding model. Here, I can make some custom field mappings if I’d like, like what the content part of the index is, bringing in the description, then the prices, my branding, package size, category, and so on. Then the title of each document, which could be the product name, and I also have a product URL field as well. I’ll go ahead and pull that in. Now I have my dataset, and there’s actually my vector field that I can use. I’ll hit Next and I’m given choices for search options. I’ll choose the hybrid model, which lets me combine both vectors and keywords during search, and that’s it. Now I’ve connected my SQL data to the AI app.



- Right, and the nice thing, going back to hybrid search, is that because it also uses keywords, you can search for, say, specific part numbers or text strings, or you can also use natural language to describe what you want.



- Right. You can do both. And now with our data source set up, I can use the Azure OpenAI Studio playground to test generative AI responses over my data. I’ve already started by establishing a system message for how the AI assistant should behave, so that it acts and sounds like a shopping agent, with a few additional instructions. And now I’m going to try a sample prompt. I’ll ask something similar to what I did in SQL: “I’m on a tight budget and I need to find things that make my living room more appealing and easy to install.” And you can see here that using the OpenAI model with our semantic search index, there are more relevant results returned. The formatting’s great, and I can easily see the prices and which items are easy to install. Some of these products may be considered appealing, like wall decals and so forth. I can even look at references, so I can click here and get more details. And I can keep going and have a conversation with it, where it uses the information from this chat session for context. So I can ask, “Are there any inexpensive furniture options?” And it has the context to know I’m still talking about a living room, and maybe I’m looking for sofas or tables, and so it shows me a variety of other inexpensive options to remodel my living room. This is a much richer search experience, because I’m using the power of vector search with embeddings against my data and prompt, combined with the large language model for rich, relevant natural language responses.



- Right, again, so just to break down kind of what we’re doing here, we’re basically retrieving additional information from SQL and we’re appending that information to the user’s prompt and then presenting the kind of aggregate of both of those things to the large language model.



- That’s right, it’s all session based. And again, your data’s never used to train the model. And something unique to SQL: search gets even better when we combine it with the new native JSON data type in SQL. This lets you use JSON, regular expression search, and geospatial queries in SQL, together with our Azure AI-enabled vector-based semantic search. So it’s a hybrid search.
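As a small illustration of what the native JSON type adds, the sketch below keeps semi-structured attributes in a json column next to relational, geospatial, and vector columns. The dbo.CustomerReviews table and its columns are hypothetical, and both the json and vector data types are assumed to be the newer native types in Azure SQL Database rather than anything specific to the demo.

```sql
-- Hypothetical sketch: store semi-structured review attributes in the native
-- json data type alongside relational, geospatial, and vector columns.
CREATE TABLE dbo.CustomerReviews
(
    ReviewID        int            NOT NULL PRIMARY KEY,
    BusinessName    nvarchar(200)  NOT NULL,
    Location        geography      NULL,   -- used for geospatial filters
    Attributes      json           NULL,   -- e.g. {"category":"bakery","rating":4.8,"review_count":120}
    ReviewEmbedding vector(1536)   NULL    -- assumed preview vector type, filled by an embeddings call
);

-- Query the JSON attributes directly, no separate document store required.
SELECT ReviewID,
       BusinessName,
       JSON_VALUE(Attributes, '$.rating') AS rating
FROM dbo.CustomerReviews
WHERE JSON_VALUE(Attributes, '$.category') = N'bakery';
```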



- Okay, so where would something like this then be useful?



- Let me show you a real example from Davide Mauri on the SQL team. He’s an Italian who loves good focaccia bread. He wanted to create a solution to find the best focaccia bread in any city across the globe. I realize that this example is very specific, but you can apply it to other scenarios. Let’s say we’re going to London, and the only data we can easily go off is the star ratings that people enter in mobile review apps. I can write a query for star ratings, along with geography for London, and finally for cuisine with Italian. I can start by using the new regular expression capabilities in SQL to find anywhere with a category of Italian or pizza. And if I do that, while I might be able to find those ratings at a bakery or restaurant level, it can’t get to specific menu items, and the query returns irrelevant things like ice cream. Remember, Davide only cares about this one type of bread, and right now we’re already using JSON and geospatial, but what we really need is hybrid search to find the exact review we need. In fact, this is the review that we’re really after: ID 307305, and I want to see if I can build a SQL query to find it. For that, I can use vectors and embeddings for the prompt to get this review to be the top one. And the good news: I can do all of this inside SQL. I have a stored procedure here, and I’m calling the Azure OpenAI service using the REST API inside the engine itself. Now I’m going to send it a prompt, “A place where to get some good fokaccia.” And even if I can’t spell the Italian word right, it knows my intent, and it will convert the prompt to an embedding. Now, we need to compare that embedding with what we have in our customer reviews, which we’ve already converted to embeddings. This is a fairly complex query, and I’m using a combination now of the vector-embedding search, using the distance function to find cosine similarity. We’ve also got the city equal to London using geospatial, a regular expression, plus the highly-rated establishments with enough ratings to be significant. And there’s the JSON for our attribute. I’ve put all of that together and finally ordered it by distance. Now, I’m going to run that query and try to get the review we want on top. Boom. And sure enough, that very first one was the one I was looking for, Michetta and review ID 307305.
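A hedged sketch of how those pieces could be combined is below. It reuses the hypothetical dbo.CustomerReviews table from earlier, assumes @promptEmbedding was populated by an embeddings call like the one shown before, and uses VECTOR_DISTANCE and REGEXP_LIKE, the preview vector and regular-expression functions in Azure SQL; the actual demo query may use different helpers and thresholds.

```sql
-- Hypothetical sketch of the hybrid query: vector similarity + geospatial
-- + regular expressions + JSON attributes, ordered by cosine distance.
DECLARE @london geography = geography::Point(51.5074, -0.1278, 4326);

-- Assume this was filled from an embeddings call using the same model that
-- embedded the customer reviews.
DECLARE @promptEmbedding vector(1536);

SELECT TOP (10)
       r.ReviewID,
       r.BusinessName,
       JSON_VALUE(r.Attributes, '$.rating') AS rating,
       VECTOR_DISTANCE('cosine', r.ReviewEmbedding, @promptEmbedding) AS distance
FROM dbo.CustomerReviews AS r
WHERE r.Location.STDistance(@london) < 25000        -- roughly within Greater London
  AND REGEXP_LIKE(LOWER(JSON_VALUE(r.Attributes, '$.category')), N'italian|pizza|bakery')
  AND CAST(JSON_VALUE(r.Attributes, '$.rating') AS float) >= 4.0
  AND CAST(JSON_VALUE(r.Attributes, '$.review_count') AS int) >= 50
ORDER BY distance;
```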



- Okay, so this time it worked, and now we’ve actually looked at all the different ways you can incorporate Gen AI into your SQL workloads as a developer. But I know that we also have Copilot in Azure, which lets you use Gen AI to help with everyday SQL tasks.



- We do. Something unique to SQL is the brand new context-aware Copilot in Azure experience. It reasons over your configuration and telemetry to personalize results. Let me show you a few examples, starting with performance troubleshooting. If you’ve spent time doing this work, you’ve probably heard someone say, “My database is slow.” And the diagnostic challenge there is that that’s not very descriptive, but let’s try that with Copilot. Here, Copilot can look at performance information to help me narrow this down fast. It’s found that there’s high CPU utilization. In fact, it maxed out at over 98% and was at 80% for 14 minutes in the past hour. It also narrowed down the single query that caused the problem and gives me more troubleshooting tips. Below, it even suggests additional prompts, like how to optimize my query. Let’s try that. So it’s going to diagnose the issue, and it finds information specific to that database and query. In this case, it discovers that a missing index is needed and gives the T-SQL CREATE INDEX command specific to my table to solve the problem.
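As an illustration, a recommendation of that shape typically boils down to a CREATE INDEX statement like the one below; the table and column names here are hypothetical, since the actual suggestion is generated from your own schema and the slow query.

```sql
-- Hypothetical example of the kind of missing-index fix Copilot can suggest:
-- index the columns the slow query filters on, and include the ones it returns.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (TotalDue, Status);
```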



- And it quickly diagnosed and gave you what’s needed effectively to fix the issue. It’s kind of cool. It’s like a little Bob AI, right, built into Azure.



- Funny you should say that, I actually helped train it. Beyond troubleshooting, it’s got a great query authoring experience. For example, it’s pretty hard in SQL to traverse hierarchies inside a table. And there’s a way in T-SQL to do this, but it’s not easy to write these queries. Here’s where Copilot can also help. This time, I’ll start in the query editor and I can launch the inline Copilot from here. So I’ll paste in my prompt to find the hierarchy of product categories at any level, and I can choose the tables I want to select. In my case, I’ll keep all 16 of them, and this is sending our schema to the model as context. Now, I’ll generate the query. I can see a nice verbose comment here explaining the logic. So I’ll go ahead and accept it. Then close the inline Copilot to give me a bit more screen real estate. And when I scroll down past the comment, you can see the full query text uses a recursive CTE, or common table expression. It selected the right IDs at the parent level from the right tables. Here’s a union, then the inner join, and the select with the final output fields we want to see, ordered by level. Let’s try it out by running it. And those are exactly the results I was looking for, traversing our hierarchy within the table.
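For reference, a recursive CTE of the kind described here generally follows the pattern below; the dbo.ProductCategories table and its columns are hypothetical stand-ins for the demo schema, not the query Copilot actually produced.

```sql
-- Hypothetical sketch: walk a parent/child category hierarchy to any depth
-- with a recursive common table expression (CTE).
WITH CategoryTree AS
(
    -- Anchor: top-level categories have no parent.
    SELECT CategoryID, CategoryName, ParentCategoryID, 0 AS Level
    FROM dbo.ProductCategories
    WHERE ParentCategoryID IS NULL

    UNION ALL

    -- Recursive step: join children to the rows found so far.
    SELECT c.CategoryID, c.CategoryName, c.ParentCategoryID, t.Level + 1
    FROM dbo.ProductCategories AS c
    INNER JOIN CategoryTree AS t
        ON c.ParentCategoryID = t.CategoryID
)
SELECT CategoryID, CategoryName, ParentCategoryID, Level
FROM CategoryTree
ORDER BY Level, CategoryName;
```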



- And I really love that it’s actually filling in all the variables for you, so you don’t have to paste them in yourself. It’s going to save a lot of people a lot of time. So now, having seen all these different Copilot commands, one question is: what permissions exactly are being used here?



- Everything you just saw uses only the permissions of the signed-in user, and all the troubleshooting queries run in their context. Copilot doesn’t have any additional access, and it can’t grant additional permissions.



- Right, these are all really great updates, especially for how you can bring your own SQL data to generative AI apps, and how you can do more intelligent search and even better querying. So what do you recommend for everyone who’s watching right now to get started?



- I’d say to learn more. First for building AI apps, you should check out aka.ms/sqlai. And to try the new Copilot experiences, go to aka.ms/sqlcopilot.



- Thanks so much for joining us today, Bob. It’s always great to have you on the show. Be sure to subscribe for more updates like this on Microsoft Mechanics and we’ll see you soon.
