Build Powerful RAG Apps Without Code Using LangFlow and Azure OpenAI

By Abdulhamid_Onawole



Ok. So you want to supplement your LLM chat application with your own knowledge base, but you can't be bothered with the cumbersome code development that might be involved. Or perhaps you are more visually oriented and would like to make sense of your application workflow at a glance.



LangFlow is a drag-and-drop framework that helps you build fully customizable GenAI applications. You can assemble several components with a few clicks to create the exact Retrieval Augmented Generation (RAG) application you envision, powered by your own data source. This means you get a more reliable, easy-to-build GenAI application tailored to your unique business needs.



In this tutorial, we will use some of the available building components in LangFlow to make an application that provides food recommendations based on the US dietary guidelines. This project builds upon our previous work, where we integrated Azure OpenAI and Document Intelligence to scan food products and gain insights to guide our nutrition.



This project requires that you have:

  1. Access to Azure OpenAI
  2. Python 3.10



Creating a LangFlow account

To start LangFlow, ensure you are working in an environment with Python 3.10. Then go to your terminal and run the following (I promise, you will not have to write any code beyond this):



Code:
python -m pip install langflow -U

python -m langflow run



If everything runs successfully, you should get something similar to the image below. Click on the endpoint. This should direct you to your LangFlow account. Proceed to create a new project.​

[Screenshot: LangFlow startup output in the terminal with the local endpoint URL]



Creating the workflow

Deploying an Azure OpenAI model

Now, open another page, log in to the Azure portal, and create an Azure OpenAI resource (fill out this registration form if you don't already have access):

  1. Subscription: Select your active subscription
  2. Resource group: Select an existing resource group or create a new one
  3. Name: Name your resource.
  4. Region: Select any region from the available list.
  5. Pricing: Select Standard S0

After deploying the resource, click on “Go to Azure OpenAI Studio” in the top pane

  1. Scroll down on the left pane and click on the “Deployments” page​
  2. Click on “create new deployment” next​
  3. Select GPT-4o from the list of models
  4. Assign a name to the deployment (note the deployment name)​
  5. Reduce the token rate limit to 7K and then “Create”​



Once that has been successfully deployed, go back to the Azure portal. Click on the OpenAI resource you just deployed and copy the endpoint and keys.

[Screenshot: copying the endpoint and keys from the Azure OpenAI resource]
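Before wiring these credentials into LangFlow, you can optionally sanity-check them with a short Python snippet. This is a minimal sketch using the openai package (v1+); the endpoint, key, API version, and deployment name shown here are placeholder assumptions you should replace with your own values.

Code:
# Optional sanity check: confirm the endpoint, key, and deployment work.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # copied endpoint
    api_key="<your-api-key>",                                    # copied key
    api_version="2024-02-01",                                    # any recent stable version
)

response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name, not the base model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)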



Building your application pipeline

Go back to your LangFlow page. Note the panes on the right; this is where you will find the necessary building components. Click on the inputs drop-down menu and drag the 'Chat Input' pane onto the canvas. Do the same with the outputs menu and select 'Chat Output'. Go to the models pane, find and select 'Azure OpenAI'. Finally, grab a 'Prompt' component from the prompts menu.

[Screenshot: the basic chat flow with Chat Input, Prompt, Azure OpenAI, and Chat Output connected]



Once that is done, connect the components as seen in the image above:​

  1. Paste the following into the template field of the 'Prompt' pane:
    Code:
    You are an AI assistant that helps users resolve their questions

    Question: {question}
  2. Connect the output node of 'Chat Input' to the 'question' node of the 'Prompt' pane
  3. Connect the prompt message output of 'Prompt' to the input node of 'Azure OpenAI', and connect the 'Azure OpenAI' output to 'Chat Output'
  4. Fill in the copied endpoint, API key, and deployment name in the 'Azure OpenAI' pane

Now let’s test it. Click on the play button in the top-right corner of the 'Chat Output' pane. Click on 'Playground' in the bottom right of the screen and ask any question. It should return a response successfully:

[Screenshot: a successful response in the Playground]
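Under the hood, this simple flow just fills the {question} slot in the prompt template with your chat message and sends the rendered prompt to the GPT-4o deployment. A rough Python equivalent, reusing the client from the sanity check above (the deployment name is an assumption), looks like this:

Code:
# The chat input fills the {question} placeholder; the rendered prompt is sent
# to the chat deployment.
template = (
    "You are an AI assistant that helps users resolve their questions\n\n"
    "Question: {question}"
)
prompt = template.format(question="What is Retrieval Augmented Generation?")

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)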



Creating a vector database and populating it with vector embeddings

Keeping to the objective of our application, we need a database to store, retrieve, and query our dietary guidelines data. We need to be able to search this database and retrieve information closely related to our query. To do this, we will use Azure OpenAI embeddings to create the vectors that capture this relationship.
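To make the idea of vector search concrete, here is a minimal sketch of how embeddings capture that relationship: semantically related texts end up with a higher cosine similarity than unrelated ones. It assumes the openai package and an embedding deployment named text-embedding-ada-002 (replace with your own deployment name and credentials).

Code:
import math
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

def embed(text: str) -> list[float]:
    # Turn a piece of text into an embedding vector.
    return client.embeddings.create(
        model="text-embedding-ada-002", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: closer to 1 means more semantically related.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = embed("What should a healthy breakfast include?")
guideline = embed("Make half your plate fruits and vegetables at every meal.")
unrelated = embed("The new stadium seats eighty thousand spectators.")

print(cosine(query, guideline))  # expected to be higher...
print(cosine(query, unrelated))  # ...than this score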



First, sign up for Astra DB. Once that is done, you should see an interface like the image below. If not, toggle the drop-down in the red box and select Astra DB. Then select ‘Create Database’ on the right, in the yellow box. Name the database, select Azure as the provider (please note, this costs a small fee to use), select us-east-2 as the region, and proceed to create the database. Once created, note the database details on the right of the database page.


[Screenshot: Astra DB database creation page and database details]



Now go back to your LangFlow page. Save the current flow and then create a new project. On this page, populate the canvas with the following components:

  1. 'File'
  2. 'Recursive Character Text Splitter'
  3. 'Azure OpenAI Embeddings'
  4. 'Astra DB'

You should have something similar to the image below:​

[Screenshot: the ingestion flow with File, Recursive Character Text Splitter, Azure OpenAI Embeddings, and Astra DB]



Return to Azure OpenAI Studio and follow the earlier steps to create and deploy a vector embedding model (select text-embedding-ada-002 as the model type). Now go back to the LangFlow pane and do the following:

  1. File component:
    1. Upload the dietary guidelines document and connect the output to the input node of the 'Recursive Character Text Splitter' pane
  2. Recursive Character Text Splitter:
    1. Leave the defaults as they are
    2. Connect the output to the Astra DB ‘Ingest Data’ node
  3. Azure OpenAI Embeddings:
    1. Fill in the endpoint and API key as you did with the GPT-4o model
    2. Fill in the name of the deployed embedding model
    3. Connect the embeddings model output to Astra DB
  4. Astra DB:
    1. Return to the Astra DB page and copy the API endpoint and token from the database details
    2. Name the collection
    3. Fill in the remaining parameters
    4. Press the play button at the top right and give it a few minutes to complete

The process above creates a collection of vector embeddings in the database and might take a few minutes. Return to the Astra DB database page; you should see that the collection has been created, and you can inspect it.
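For the curious, the four components you just wired up roughly correspond to the Python sketch below: read the document, split it into overlapping chunks, embed each chunk, and store the chunks with their vectors in an Astra DB collection. The file name, chunk sizes, collection name, and astrapy calls are illustrative assumptions and may differ slightly between library versions.

Code:
from langchain_text_splitters import RecursiveCharacterTextSplitter
from openai import AzureOpenAI
from astrapy import DataAPIClient

# 1. Read the guidelines document and split it into overlapping chunks.
text = open("dietary_guidelines.txt", encoding="utf-8").read()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(text)

# 2. Embed each chunk with the deployed embedding model.
aoai = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)
vectors = [
    aoai.embeddings.create(
        model="text-embedding-ada-002", input=chunk
    ).data[0].embedding
    for chunk in chunks
]

# 3. Store chunks and vectors in an Astra DB collection via the Data API.
astra = DataAPIClient("<astra-application-token>")
db = astra.get_database_by_api_endpoint(
    "https://<db-id>-<region>.apps.astra.datastax.com"
)
collection = db.create_collection("dietary_guidelines", dimension=len(vectors[0]))
collection.insert_many(
    [{"text": chunk, "$vector": vector} for chunk, vector in zip(chunks, vectors)]
)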



Creating the final workflow

Next, open the previously saved flow and connect an Astra DB pane to the existing flow:

  1. Fill in the Astra DB details the same way as before
  2. Create a new connection from the 'Chat Input' output to the Astra DB search input
  3. Drag ‘Parse Data’ onto the canvas and connect the Astra DB ‘search results’ node to its ‘data’ node

Copy the template below and paste into the template field of the ‘prompt’ pane:​



Code:
You are an AI assistant that recommends healthy meals based on the {guidelines} provided

question: {question}
context: {guidelines}



Connect the text node in 'Parse Data' to the 'guidelines' node of the 'Prompt' pane. If all instructions are followed correctly, it should look something like this:

[Screenshot: the completed RAG flow]
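Conceptually, each question now goes through the same retrieve-then-generate loop that the flow encodes. Here is a minimal Python sketch of that loop, reusing the aoai client and collection from the ingestion sketch above; the find() sort syntax and deployment names are assumptions, not a definitive implementation.

Code:
question = "Suggest a healthy dinner for someone watching their sodium intake."

# 1. Embed the question and retrieve the most similar guideline chunks.
q_vec = aoai.embeddings.create(
    model="text-embedding-ada-002", input=question
).data[0].embedding
hits = collection.find({}, sort={"$vector": q_vec}, limit=4)
guidelines = "\n".join(doc["text"] for doc in hits)

# 2. Fill the prompt template and ask the chat deployment for a recommendation.
prompt = (
    "You are an AI assistant that recommends healthy meals based on the "
    "guidelines provided\n\n"
    f"question: {question}\ncontext: {guidelines}"
)
answer = aoai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)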



Now we have completed the pipeline and are ready to test.

Click on ‘playground’ and test out your app!​

[Screenshots: testing the app in the Playground]





Bonus:

You can export this flow and integrate it into your app. LangFlow provides ready-made API code snippets in several formats. Below, we embed a chat widget by simply copying the provided code:

  1. Click on 'API' in the bottom left of the screen, next to 'Playground', in LangFlow. Click on the Chat Widget HTML tab and copy the code.
  2. Create an HTML file in VS Code and paste the code. Right-click and select 'Open with Live Server'.
  3. This will direct you to a page with the chat widget.

[Screenshot: the chat widget running in the browser]
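If you prefer calling the flow from your own backend instead of embedding the widget, LangFlow also exposes each flow over HTTP. The snippet below is a sketch only; copy the exact URL, flow ID, and payload from the API pane, since the path and fields shown here are assumptions.

Code:
import requests

# Replace <your-flow-id> with the ID shown in LangFlow's API pane.
url = "http://127.0.0.1:7860/api/v1/run/<your-flow-id>"
payload = {
    "input_value": "Recommend a healthy breakfast.",
    "input_type": "chat",
    "output_type": "chat",
}

resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())  # the chat reply is nested inside the returned JSON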



References and further reading:

  1. AstraDB on Azure
  2. Vector database



