
How to build a social media assistant with Prompty and PromptFlow

 

Large Language Models are trained on data from across the internet and have billions of parameters. To generate an output from an LLM, you need a prompt. For day-to-day tasks, for example generating a LinkedIn post from a blog post, you may need additional instructions for your LLM, such as the word count, the tone of the message, the format, and the call to action at the end. Using Prompty, you can easily standardize your prompt and its execution in a single asset.

[HEADING=1]Why Prompty?[/HEADING]

Prompty brings the focus of building LLM apps to prompts, giving you visibility and allowing you to easily integrate with other orchestrators and tools available. It goes beyond just calling the API, adding additional configuration in a markdown format to create and manage your prompts. Prompty comprises the specification, tooling, and runtime:

 

  • Specification: Prompty is an asset class not tied to any language, as it uses [iCODE]markdown[/iCODE] format with [iCODE]yaml[/iCODE] to specify your metadata. It unifies the prompt and its execution in a single package so you can get started quickly (see the sketch after this list).
  • Tooling: using the Prompty extension, you can quickly get started in your coding environment and set up your configuration. It enables you to work with prompts at a high level with additional features such as metadata autocompletion, syntax highlighting, quick run and validation.
  • Runtime: Prompty additionally generates code for you in different frameworks, e.g. LangChain, PromptFlow and Semantic Kernel. The asset class converts easily to code, simplifying your workflow, as it is language agnostic.
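To make the specification concrete, here is a minimal sketch of a Prompty asset, with YAML front matter for the metadata followed by the prompt itself in markdown. The names and sample values here are illustrative, not part of this tutorial:

---
name: MinimalExample
description: A basic prompt asset
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-35-turbo
sample:
  question: What can Prompty do?
---

system:
You are a helpful assistant.

user:
{{question}}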

[HEADING=1]Prerequisites and Authentication[/HEADING]

 

To get started, you need to create a new Azure OpenAI resource in the Azure portal.

 


Once you create the resource, head over to Azure OpenAI Studio and create a [iCODE]gpt-35-turbo[/iCODE] model deployment.

 


You can authenticate Prompty using a [iCODE].env[/iCODE] file or using managed identity, where you sign in to your account using [iCODE]az login[/iCODE] and follow the instructions to set your subscription as the active subscription if you have multiple tenants or subscriptions.

 

The first, and easiest, way to get started is by using environment variables. In your local environment, create a [iCODE].env[/iCODE] file where you store your keys and endpoints. Your [iCODE].env[/iCODE] file should look something like this:

 

AZURE_OPENAI_DEPLOYMENT_NAME=""
AZURE_OPENAI_API_KEY=""
AZURE_OPENAI_API_VERSION=""
AZURE_OPENAI_ENDPOINT=""

Once done, you can add your environment variables to your Prompty metadata as follows:

 

  configuration:
    type: azure_openai
    azure_deployment: ${env:AZURE_OPENAI_DEPLOYMENT_NAME}
    api_key: ${env:AZURE_OPENAI_API_KEY}
    api_version: ${env:AZURE_OPENAI_API_VERSION}
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}

You can also log in to your Azure account using [iCODE]azd auth login --use-device-code[/iCODE]; this allows you to define your model configurations and update your [iCODE]settings.json[/iCODE] file, letting you access your configurations without using environment variables.
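As a rough illustration of the settings-based approach, a model configuration in [iCODE]settings.json[/iCODE] could look like the sketch below. The exact setting keys are an assumption here and may vary with the extension version, so check the Prompty extension documentation:

{
  "prompty.modelConfigurations": [
    {
      "name": "default",
      "type": "azure_openai",
      "api_version": "2024-02-01",
      "azure_endpoint": "https://<your-resource>.openai.azure.com",
      "azure_deployment": "gpt-35-turbo"
    }
  ]
}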

 

 


[HEADING=1]Getting Started with Prompty[/HEADING]

  1. Install the Prompty extension


  2. Right-click on the VSCode explorer and select "New Prompty." This will create a new Prompty file in markdown format.


  3. Configure your application metadata. In this case, we will update it as follows:

name: SociallyPrompt
description: An AI assistant designed to help you create engaging Twitter and LinkedIn posts
authors:
  - <your name>
model:
  configuration:
    type: azure_openai
    azure_deployment: gpt-35-turbo
  parameters:
    max_tokens: 3000
    temperature: 0.9
sample:
  blog_title: LLM based development tools - PromptFlow vs LangChain vs Semantic Kernel
  blog_link: "LLM based development tools: PromptFlow vs LangChain vs Semantic Kernel"
  call_to_action: GitHub Sample Code at GitHub - BethanyJep/Swahili-Tutor
  post_type: Twitter

  4. Update your application instructions, using examples where necessary. For example, below is a sample instruction for our social media assistant:

       

system:
You are a Social Media AI assistant who helps people create engaging content for Twitter. As the assistant,
you keep the tweets concise - remember that 280-character limit! Inject enthusiasm into your posts!
Use relevant hashtags to boost visibility for the posts. And have a call to action or even add some personal flair with appropriate emojis.

# Context
I am an AI assistant designed to help you create engaging Twitter and LinkedIn posts. I can help you create posts for a variety of topics, including technology, education, and more. I can also help you create posts for blog posts, articles, and other content you want to share on social media.

user:
{{post_type}} post for the blog post titled "{{blog_title}}" found at {{blog_link}} with the call to action "{{call_to_action}}".

  5. Once done, click the [iCODE]run[/iCODE] button to run your Prompty in the terminal.


[HEADING=1]Extending functionality with PromptFlow[/HEADING]

Adding PromptFlow or LangChain to your code is just one click away. In VSCode, right-click on your Prompty file and select [iCODE]add Prompt flow code[/iCODE].


This will generate PromptFlow code for you that loads and runs your Prompty, as shown in the sketch below.
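Here is a minimal sketch of what the generated code looks like, assuming the Prompty file is saved as [iCODE]sociallyprompt.prompty[/iCODE] (the file name and input values here are illustrative):

from pathlib import Path

from promptflow.core import Prompty

BASE_DIR = Path(__file__).absolute().parent

if __name__ == "__main__":
    # Load the Prompty file as an executable flow
    # (file name is illustrative - use your own .prompty file)
    flow = Prompty.load(source=BASE_DIR / "sociallyprompt.prompty")

    # Execute the flow as a function, passing the template inputs
    result = flow(
        blog_title="LLM based development tools - PromptFlow vs LangChain vs Semantic Kernel",
        blog_link="<link to the blog post>",
        call_to_action="GitHub Sample Code at GitHub - BethanyJep/Swahili-Tutor",
        post_type="Twitter",
    )
    print(result)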


[HEADING=1]Tracing your Flow[/HEADING]

Tracing allows you to understand your LLM's behaviour. To trace the application, we import [iCODE]start_trace[/iCODE] from the promptflow tracing library using [iCODE]from promptflow.tracing import start_trace[/iCODE] and add a [iCODE]start_trace()[/iCODE] call to the PromptFlow code. The output will include a URL for the trace UI.
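Continuing the sketch above (same illustrative file name and inputs), adding tracing is a small change:

from pathlib import Path

from promptflow.core import Prompty
from promptflow.tracing import start_trace

BASE_DIR = Path(__file__).absolute().parent

if __name__ == "__main__":
    # Start a local trace collector; the terminal prints a URL to the trace UI
    start_trace()

    flow = Prompty.load(source=BASE_DIR / "sociallyprompt.prompty")
    result = flow(
        blog_title="LLM based development tools - PromptFlow vs LangChain vs Semantic Kernel",
        blog_link="<link to the blog post>",
        call_to_action="GitHub Sample Code at GitHub - BethanyJep/Swahili-Tutor",
        post_type="Twitter",
    )
    print(result)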


We can then see how long our Prompty took to run, as well as the number of tokens used, and better understand our LLM's performance.

[HEADING=1]Resources[/HEADING]

       

       

Continue learning and building with Prompty using the following resources:

 


