How to build a social media assistant with Prompty and PromptFlow

by bethanyjep

Large Language Models are trained on data from across the internet and have billions of parameters. To generate output from an LLM, you need a prompt. For everyday tasks, for example generating a LinkedIn post from a blog post, you may need to give your LLM additional instructions, such as the word count, the tone of the message, the format, and the call to action at the end. Using Prompty, you can standardize your prompt and its execution in a single asset.


Why Prompty?

Prompty brings the focus of building LLM apps to prompts, giving you visibility and letting you integrate easily with other orchestrators and tools. It goes beyond just calling the API: it adds configuration in markdown format so you can create and manage your prompts. Prompty comprises the specification, tooling, and runtime:

  • Specification: Prompty is an asset class not tied to any language; it uses markdown with YAML to specify your metadata. It unifies the prompt and its execution in a single package so you can get started quickly.
  • Tooling: with the Prompty extension, you can quickly get started in your coding environment and set up your configuration. It lets you work with prompts at a high level, with features such as metadata autocompletion, syntax highlighting, quick run, and validation.
  • Runtime: Prompty also generates code for you in different frameworks, e.g. LangChain, PromptFlow, and Semantic Kernel. Because the asset class is language agnostic, it converts to code easily, simplifying your workflow.

Prerequisites and Authentication


To get started, you need to create a new Azure OpenAI resource.

Once you create the resource, head over to Azure OpenAI Studio and create a gpt-35-turbo model deployment.



You can authenticate your Prompty using a .env file, or using managed identity, where you sign in to your account using az login and follow the instructions to set your subscription as the active subscription if you have multiple tenants or subscriptions.

The first, and easiest, way to get started is with environment variables. In your local environment, create a .env file where you store your keys and endpoints. Your .env file should look something like this:

AZURE_OPENAI_DEPLOYMENT_NAME = ""
AZURE_OPENAI_API_KEY = ""
AZURE_OPENAI_API_VERSION = ""
AZURE_OPENAI_ENDPOINT = ""




Once done, you can then add your environment variables to your Prompty metadata as follows:

Code:
  configuration:
      type: azure_openai
      azure_deployment: ${env:AZURE_OPENAI_DEPLOYMENT_NAME}
      api_key: ${env:AZURE_OPENAI_API_KEY}
      api_version: ${env:AZURE_OPENAI_API_VERSION}
      azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
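
If you run your Prompty from Python rather than from the extension's quick run, the ${env:...} placeholders are resolved from the process environment, so the .env file needs to be loaded first. Below is a minimal sketch assuming the python-dotenv package and an illustrative file name socially_prompt.prompty (neither is prescribed by Prompty itself):

Code:
# pip install python-dotenv promptflow
from dotenv import load_dotenv
from promptflow.core import Prompty

# read the AZURE_OPENAI_* values from the local .env file into the environment
load_dotenv()

# load the Prompty asset; the ${env:...} references in its metadata
# are resolved against the environment variables loaded above
flow = Prompty.load(source="socially_prompt.prompty")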



You can also log in to your Azure account using azd auth login --use-device-code. This allows you to define your model configurations and update your settings.json file, so you can access your configurations without using environment variables.
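
If you authenticate this way, you can also supply the model configuration from code and leave out the API key entirely. Here is a minimal sketch using PromptFlow's AzureOpenAIModelConfiguration; the endpoint value and file name are placeholders:

Code:
from promptflow.core import AzureOpenAIModelConfiguration, Prompty

# keyless configuration: with no api_key set, PromptFlow falls back to your
# Azure credential (for example, the account signed in via az login)
configuration = AzureOpenAIModelConfiguration(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    azure_deployment="gpt-35-turbo",
)

# override the configuration declared in the Prompty file at load time
flow = Prompty.load(
    source="socially_prompt.prompty",
    model={"configuration": configuration},
)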



Getting Started with Prompty

  1. Install the Prompty extension


  2. Right-click in the VS Code explorer and select "New Prompty". This will create a new Prompty file in markdown format.


  3. Configure your application metadata; in this case we will update it as follows:

Code:
name: SociallyPrompt
description: An AI assistant designed to help you create engaging Twitter and LinkedIn posts
authors:
  - <your name>
model:
  configuration:
    type: azure_openai
    azure_deployment: gpt-35-turbo
  parameters:
    max_tokens: 3000
    temperature: 0.9
sample:
  blog_title: "LLM based development tools - PromptFlow vs LangChain vs Semantic Kernel"
  blog_link: "LLM based development tools: PromptFlow vs LangChain vs Semantic Kernel"
  call_to_action: "GitHub Sample Code at GitHub - BethanyJep/Swahili-Tutor"
  post_type: Twitter
  4. Update your application instructions, using examples where necessary. For example, below is a sample instruction for our social media assistant:

Code:
system:
You are a Social Media AI assistant who helps people create engaging content for Twitter. As the assistant,
you keep the tweets concise - remember that 280-character limit! Inject enthusiasm into your posts!
Use relevant hashtags to boost visibility for the posts. Include a call to action, or even add some personal flair with appropriate emojis.

# Context
I am an AI assistant designed to help you create engaging Twitter and LinkedIn posts. I can help you create posts for a variety of topics, including technology, education, and more. I can also help you create posts for blog posts, articles, and other content you want to share on social media.

user:
{{post_type}} post for the blog post titled "{{blog_title}}" found at {{blog_link}} with the call to action "{{call_to_action}}".
  5. Once done, click the run button to run your Prompty in the terminal.
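
For reference, with the sample values from the metadata (the link placeholder stands in for the blog URL), the {{...}} placeholders in the user message render to:

Code:
Twitter post for the blog post titled "LLM based development tools - PromptFlow vs LangChain vs Semantic Kernel" found at <link to the blog post> with the call to action "GitHub Sample Code at GitHub - BethanyJep/Swahili-Tutor".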


Extending functionality with PromptFlow

Adding PromptFlow or LangChain to your code is just one click away. In VS Code, right-click on your Prompty file and select "Add Prompt flow code".


This will generate PromptFlow code for you that loads and runs your Prompty.
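The generated file looks roughly like the sketch below; the exact code the extension emits may differ, and the file name socially_prompt.prompty is illustrative:

Code:
from promptflow.core import Prompty

# load the Prompty asset as an executable flow
flow = Prompty.load(source="socially_prompt.prompty")

# run the flow as a function, overriding the sample inputs as needed
result = flow(
    post_type="Twitter",
    blog_title="LLM based development tools - PromptFlow vs LangChain vs Semantic Kernel",
    blog_link="<link to the blog post>",
    call_to_action="GitHub Sample Code at GitHub - BethanyJep/Swahili-Tutor",
)
print(result)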


Tracing your Flow

Tracing allows you to understand your LLM's behaviour. To trace our application, we need to import start_trace from the promptflow tracing library using from promptflow.tracing import start_trace, and then call start_trace() in our PromptFlow code. The output will include a URL that opens a trace UI.
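A traced run might look like the following minimal sketch (same illustrative file name and placeholder inputs as before):

Code:
from promptflow.tracing import start_trace
from promptflow.core import Prompty

# start collecting traces; the terminal prints a URL to a local trace UI
start_trace()

# any Prompty executed after start_trace() is recorded in the trace
flow = Prompty.load(source="socially_prompt.prompty")
result = flow(
    post_type="Twitter",
    blog_title="LLM based development tools - PromptFlow vs LangChain vs Semantic Kernel",
    blog_link="<link to the blog post>",
    call_to_action="GitHub Sample Code at GitHub - BethanyJep/Swahili-Tutor",
)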


We can then see how long our Prompty took to run, as well as the number of tokens used, and understand our LLM's performance better.

Resources



Continue learning and building with Prompty using the following resources:

Continue reading...
 