Azure OpenAI Extension for Function Apps Hands-on Experience

  • Thread starter: Mengyang

This blog gives some insights into the newly released Azure OpenAI extension, which combines the Azure OpenAI service and Azure Function Apps. We will discuss the following contents:


Please note that due to the fast-moving development of AI services, some content may become outdated. This article uses the versions released as of July 2024.


Why this extension?​


Compared with a standard Azure OpenAI API call, the extension gives you:

  • The capability to work with the large variety of triggers and bindings offered by Function Apps. Function Apps provide pre-defined triggers that let developers control event-driven or schedule-based tasks. Based on our tests, the extension works well with the currently offered function types.
  • Flexibility in the development phase when multiple Azure products are involved. Different bindings allow Function Apps to listen and respond when a certain Azure product changes. With the host.json file inside Function Apps, settings are easier to adjust and test.

What are the current requirements and support scope?​


To use this extension, the following requirements must be met:


  1. Get access to the Azure OpenAI service. If you are new to Azure, remember to request Azure OpenAI access via this link. After approval, go to the deployment section and choose one of the LLMs (Large Language Models) for later use. Example:

    [Screenshot: choosing a model deployment in the Azure OpenAI resource]

  2. An Azure Function App using one of the following language versions:
    • .NET 6+
    • Java 11+
    • Node.js 18+
    • PowerShell 7.4
    • Python 3.9+

    The extension supports all of the languages and versions mentioned above.

Support Scope: Public Preview. Since this feature is still in preview, you may notice some issues. If so, please reach out via a support request, or raise issues via the GitHub Issues page for the extension.

How to use this?​


The following resources are highly recommended to start using this extension:


The following demos will use .NET 8 and GPT-4o as the language / LLM. We will dive deeper into 3 sections:


Essentially, this extension helps you make API calls to the Azure OpenAI endpoint with a smoother experience.

Chat​


Chat allows users to communicate with the Azure OpenAI service. Responses are generated based on pre-defined prompts and questions. Please keep in mind that if you want long-term memory of the chat history, 2 options are available when using the extension:

  1. Customize the code to store all asked questions in the response body, then invoke an API call to send the full contents to the Azure OpenAI service endpoint. The downside is that the number of tokens used grows rapidly as you add more history, since the whole history is resent on every call.
  2. Use Azure Cosmos DB as persistent storage, and leverage semantic search to find the desired chat history. This is the recommended solution, and we will cover it later in the 3rd section.
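Option 1 can be sketched as follows. This is a minimal illustration, not the extension's API; the helper and its shapes are hypothetical:

```csharp
// Hypothetical sketch of option 1 (not part of the extension).
// The full history is resent on every call, so the tokens sent per request
// grow with each turn, and cumulative usage grows roughly quadratically.
using System.Collections.Generic;

var history = new List<(string Role, string Content)>();

string Ask(string question)
{
    history.Add(("user", question));
    // ...send the ENTIRE `history` to the Azure OpenAI chat completions endpoint here...
    var answer = "<model response>";
    history.Add(("assistant", answer));
    return answer;
}
```

This is exactly why option 2 is preferable for long conversations: only the relevant slices of history are retrieved and sent.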

[Local Debugging]


  1. Clone this GitHub repo; you may take a look at README.md to see detailed instructions.


  2. Go to csharp-ooproc/local.settings.json. Replace the values to match the parameters of your Azure OpenAI endpoint. Also, remember to add the setting CHAT_MODEL_DEPLOYMENT_NAME = <name of your deployed LLM>.
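For reference, the file could end up looking roughly like this. Apart from CHAT_MODEL_DEPLOYMENT_NAME, the key names here are assumptions, so verify them against the repo's README:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "AZURE_OPENAI_ENDPOINT": "https://<your-resource>.openai.azure.com/",
    "AZURE_OPENAI_KEY": "<your-api-key>",
    "CHAT_MODEL_DEPLOYMENT_NAME": "<name of your deployed LLM>"
  }
}
```

Remember that local.settings.json is only read locally; after publishing, the same values must exist as app settings on the Function App.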


  3. Head to the ChatBot.cs file, where you can review and learn how to invoke REST API calls to the Azure OpenAI service. Modify the trigger type if needed.


  4. Remember to install the NuGet package Microsoft.Azure.Functions.Worker.Extensions.OpenAI == 0.16.0-alpha. You may add it by executing the .NET CLI in VS Code, or with a simple click to install the package in Visual Studio.
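Whichever route you take, the reference lands in your .csproj roughly like this (the exact prerelease suffix may differ by the time you read this, so check NuGet):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.OpenAI" Version="0.16.0-alpha" />
</ItemGroup>
```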


  5. Go to the root folder → cd samples/chat/csharp-ooproc && func start. We will not demo how to test, as this has already been covered in the GitHub repo. In all, we will leverage the 3 API requests below:

    Code:
    Functions:
    
        CreateChatBot: [PUT] http://localhost:7071/api/chats/{chatId}
    
        GetChatState: [GET] http://localhost:7071/api/chats/{chatId}
    
        PostUserResponse: [POST] http://localhost:7071/api/chats/{chatId}

If local debugging works, we can move on to publishing to Azure.

[Publish to Azure]

Use one of the following methods to publish your project:


  • Visual Studio Code: use the Azure extension to publish.


  • Visual Studio: use the built-in publish profile. Also, please note that you need to enable SCM Basic Auth Publishing Credentials in the Azure portal, as this is required by Visual Studio deployment:

    [Screenshot: enabling SCM Basic Auth Publishing Credentials]

  • Grant the user or the function app's managed identity the Cognitive Services OpenAI User role on the Azure OpenAI resource. This is important, as the platform uses it as the authentication method to allow the connection:

    [Screenshot: role assignment on the Azure OpenAI resource]
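If you prefer the CLI over the portal, the role assignment can be sketched like this; the placeholders are yours to fill in with values from your subscription:

```shell
# Assign the "Cognitive Services OpenAI User" role to the function app's
# managed identity, scoped to the Azure OpenAI resource.
az role assignment create \
  --role "Cognitive Services OpenAI User" \
  --assignee "<principal-id-of-the-managed-identity>" \
  --scope "<resource-id-of-the-azure-openai-account>"
```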


Result:

Create a new chatbot

[Screenshots: PUT request creating a new chatbot, and its response]

Make conversations

[Screenshots: POST request sending a message, and the returned chat state]

Text Completion​


Text completion allows the Azure OpenAI service to extend or answer given sentences. It is commonly used for paper writing, storytelling, and many more scenarios. The example below demonstrates how to leverage the completions API to perform text completion:

[Local Debugging]

  1. Clone the GitHub repo; you may take a look at README.md to see detailed instructions.
  2. Go to csharp-ooproc/local.settings.json. Replace the values to match the parameters of your Azure OpenAI endpoint. Also, remember to add the setting CHAT_MODEL_DEPLOYMENT_NAME = <name of your deployed LLM>.
  3. In the TextCompletions.cs file, you can see 2 kinds of functions:
    1. WhoIs: extracts the value “name” from the invocation URL format (whois/{name}), forms the question (Who is {name}?) and sends it to the Azure OpenAI service for text completion.
    2. GenericCompletion: takes the prompt directly as input, and sends it to the OpenAI completions API for text completion.
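Based on the repo's sample, the WhoIs function looks roughly like the sketch below. Attribute, namespace, and type names may differ between preview versions of the extension, so treat every name here as an assumption and defer to the sample code:

```csharp
// Sketch of the WhoIs text completion function (names assumed, verify against the repo).
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Azure.Functions.Worker.Extensions.OpenAI.TextCompletion;

public static class TextCompletionsSketch
{
    [Function(nameof(WhoIs))]
    public static string WhoIs(
        // {name} is captured from the route and substituted into the prompt template.
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "whois/{name}")] HttpRequestData req,
        [TextCompletionInput("Who is {name}?")] TextCompletionResponse response)
    {
        // The input binding has already called the completions API
        // before the function body runs; just return the generated text.
        return response.Content;
    }
}
```

The point to notice is that the function body contains no HTTP plumbing at all; the binding handles the call to Azure OpenAI.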

We will skip the details of the test, as this has been covered in README.md. If local debugging works, we can move on to publishing to Azure.

[Publish to Azure]

Similar to the chat section; please pay attention to the differences when using different IDEs.

Result:

WhoIs


[Screenshots: GET request to whois/{name}, and the generated answer]

GenericCompletion

[Screenshots: POST request with a prompt, and the completion response]

RAG with Cosmos DB​


Azure Cosmos DB provides a good option to store previous chat history or a company-level knowledge base. In this example, we will show how to leverage Cosmos DB to store the information, then use semantic search to locate and print the required contents.

[Requirements]

  • Azure Cosmos DB for MongoDB (vCore)
  • Azure OpenAI Service
  • Azure Function App

[How to work with Azure Cosmos DB]

To work with Azure Cosmos DB, first prepare the environment:

  1. Clone the GitHub repo; you may take a look at README.md to see detailed instructions.
  2. In the local.settings.json file, update the CosmosDBMongoConnectionString value to match the connection string from the Cosmos DB resource. Also, define the embedding model used in Azure OpenAI (app setting: EMBEDDING_MODEL_DEPLOYMENT_NAME, default model: text-embedding-ada-002).
  3. Please also check the README.md to fill in the remaining parameter settings.
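Put together, the relevant settings could look something like this. CosmosDBMongoConnectionString and EMBEDDING_MODEL_DEPLOYMENT_NAME come from the steps above; the other key names are assumptions, so verify them against the README:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "CosmosDBMongoConnectionString": "<connection string from the Cosmos DB resource>",
    "EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002"
  }
}
```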

Then, we will follow the steps below to leverage Cosmos DB:


  1. Insert docs into Cosmos DB. We will need a storage account or equivalent service to host the TXT/JSON file. This is the source where you can add or edit the content. Then invoke a POST request like the one below:

    [Screenshot: POST request ingesting the document]
    My example: Mengyang Chen is a Support Engineer working for Azure App Service Team.

    You can also validate that the ingestion succeeded in the terminal logs:

    [Screenshot: terminal logs confirming successful ingestion]

  2. Query by prompt. By invoking a POST request, we can receive the desired result via semantic search.

    [Screenshot: POST request querying by prompt]

[Screenshot: semantic search response with the answer and its source]
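The query side of this flow can be sketched like the function below. This is a simplification of the repo's sample; the binding's attribute, parameter, and type names vary across preview versions, so treat every name here as an assumption and defer to the sample code:

```csharp
// Sketch of a semantic-search query function (names assumed, verify against the repo).
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Azure.Functions.Worker.Extensions.OpenAI.Search;

public static class SemanticSearchSketch
{
    [Function("PromptFile")]
    public static string PromptFile(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req,
        // The binding embeds the incoming prompt, searches the Cosmos DB
        // collection, and asks the chat model to answer from the retrieved docs.
        [SemanticSearchInput("CosmosDBMongoConnectionString", "openai-index", Query = "{prompt}")] SemanticSearchContext result)
    {
        return result.Response; // the model's answer, grounded in the stored content
    }
}
```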

As you can see, it gives you the result and tells you where to find this info. This is helpful if users want to build a custom knowledge base or store long-term memory.



We hope this blog gives you a good start in leveraging this extension. If you want to ask anything related to it, please feel free to leave comments, and we will be glad to help.
