Tracing LangChain Code on Azure with OpenTelemetry and Application Insights

As AI and machine learning applications grow more complex, observability becomes essential. Tracing provides insight into the internal workings of your applications, helping you identify and resolve performance bottlenecks and errors. LangChain has become a popular framework for building applications with large language models, and when deploying LangChain apps to production, tracing and monitoring are crucial for understanding performance and troubleshooting issues. In this blog, we will explore how to trace LangChain code on Azure using OpenTelemetry and Application Insights, leveraging tools and libraries such as OpenInference and the Azure Monitor OpenTelemetry exporter.

[HEADING=1]Why Tracing Matters for LangChain Apps[/HEADING]

LangChain applications often involve complex chains of operations: prompting language models, calling external APIs, accessing vector stores, and more. Tracing helps developers visualize these operations, identify bottlenecks, and debug errors. It is especially important for AI apps, which can exhibit non-deterministic behavior.

[HEADING=1]Prerequisites[/HEADING]

Before we dive into the implementation, ensure you have the following:

  • Python 3.7+
  • An Azure account with an Application Insights resource
  • Basic knowledge of Python and LangChain
  • An Azure OpenAI resource with a chat model deployment and API key
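
The code samples in this post read their configuration from an azure.env file via python-dotenv. Here is a minimal sketch with placeholder values, covering only the environment variables used later in this post:

azure.env

APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=...;IngestionEndpoint=..."
AZURE_OPENAI_API_KEY="..."
AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
AZURE_OPENAI_GPT_DEPLOYMENT="<your-chat-deployment>"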

[HEADING=1]Step 1: Set Up OpenInference LangChain Instrumentation[/HEADING]

OpenInference provides auto-instrumentation for LangChain, making it compatible with OpenTelemetry. Let's start by installing the necessary packages:

requirements.txt

azure-monitor-opentelemetry-exporter
openinference-instrumentation-langchain
langchain
langchain-openai
opentelemetry-sdk
opentelemetry-exporter-otlp
openai
python-dotenv

Now install the required packages with pip install -r requirements.txt.

[HEADING=1]Step 2: Set Up the Azure Monitor Exporter[/HEADING]

Azure Monitor provides powerful tools for monitoring applications, including Application Insights. We'll use the Azure Monitor OpenTelemetry Exporter to send trace data to Application Insights.

import os

from dotenv import load_dotenv
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Load settings (including the Application Insights connection string) from azure.env
load_dotenv('azure.env')

# Exporter that sends OpenTelemetry spans to Application Insights
exporter = AzureMonitorTraceExporter.from_connection_string(
    os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)

[HEADING=1]Step 3: Integrate Azure Monitor with the LangChain Instrumentor[/HEADING]

The code below configures OpenTelemetry tracing for a LangChain application: it registers a tracer provider, attaches a batch span processor that exports spans to Application Insights every 60 seconds, and auto-instruments LangChain operations. This lets you collect detailed telemetry about your LangChain application's performance and behavior.

from openinference.instrumentation.langchain import LangChainInstrumentor

# Register a global tracer provider
tracer_provider = TracerProvider()
trace.set_tracer_provider(tracer_provider)
tracer = trace.get_tracer(__name__)

# Batch spans and export them to Application Insights every 60 seconds
span_processor = BatchSpanProcessor(exporter, schedule_delay_millis=60000)
tracer_provider.add_span_processor(span_processor)

# Auto-instrument LangChain so chains, LLM calls, and tool runs are traced
LangChainInstrumentor().instrument()
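
If you want to verify the instrumentation locally before checking Application Insights, you can optionally attach a console exporter alongside the Azure Monitor processor. This is a minimal optional sketch and is not required for the Azure setup:

from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Optional: also print each exported span to stdout for local debugging
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))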

[HEADING=1]Step 4: Create a LangChain LLM Chain[/HEADING]

Now let's set up a LangChain application that generates jokes using Azure OpenAI. It begins by importing the necessary classes from the langchain_openai, langchain.chains, and langchain_core.prompts modules. A PromptTemplate is created with a template that asks for a joke based on a provided adjective. The AzureChatOpenAI class is then instantiated with the API key, endpoint, API version, and model deployment name, all retrieved from environment variables. This configuration lets the LangChain application call your Azure OpenAI model deployment to generate responses from the prompt template.

from langchain_openai import AzureChatOpenAI
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

# Prompt with a single input variable, "adjective"
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)

# Chat model backed by an Azure OpenAI deployment; settings come from azure.env
llm = AzureChatOpenAI(
    api_key=os.environ['AZURE_OPENAI_API_KEY'],
    azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT'],
    api_version='2024-06-01',
    model=os.environ['AZURE_OPENAI_GPT_DEPLOYMENT'],
)

[HEADING=1]Step 5: View Traces in Azure Monitor[/HEADING]

Let's invoke the LangChain chain before viewing the traces.

# Metadata set on the chain is passed through to the tracing callbacks
chain = LLMChain(llm=llm, prompt=prompt, metadata={"category": "jokes"})
completion = chain.predict(adjective="funny", metadata={"variant": "funny"})
print(completion)
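
Two optional refinements, shown as a minimal sketch: grouping the call under a custom parent span (the joke-generation span name here is illustrative) so all LangChain spans for one request share a trace, and flushing the provider before exit so a short-lived script does not quit before the 60-second batch export fires:

# Optional: group the chain call under a custom parent span
with tracer.start_as_current_span("joke-generation"):
    print(chain.predict(adjective="funny"))

# Flush buffered spans before the script exits so none are lost
tracer_provider.force_flush()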

After integrating the Azure Monitor exporter, your LangChain application will send traces to Application Insights. You can view these traces:

  1. Navigate to the Azure portal.
  2. Select your Application Insights resource.
  3. Go to the "Transactions" section to view the traces.

[HEADING=1]Conclusion[/HEADING]

By following these steps, you can trace your LangChain applications end to end with OpenTelemetry and view the results in Azure Monitor's Application Insights. This setup not only enhances observability but also helps you identify and resolve performance issues efficiently. For more detailed information, refer to the official documentation for OpenInference, OpenTelemetry, and Azure Monitor.

Happy tracing!