OpenAI and Microsoft Sentinel Part 3: DaVinci vs. Turbo

Welcome back to our series on OpenAI's Large Language Models (LLMs) and Microsoft Sentinel. In the first installment, we built a basic playbook using the built-in Azure Logic Apps connectors for OpenAI and Sentinel to explain the MITRE ATT&CK tactics found in an incident, and we discussed some of the parameters that can influence the OpenAI model, such as temperature and frequency penalty. Next, we extended that functionality with Sentinel's REST API to look up a scheduled analytic rule and return a summary of the rule's detection logic.

If you've been following along, you probably noticed that our first playbook looked up MITRE ATT&CK tactics from the Sentinel incident but didn't include any incident techniques in the GPT-3 prompt. Why not? Well, fire up your OpenAI API Playground and let's take a trip down the rabbit hole (apologies to Lewis Carroll).

  • Mode: Complete
  • Model: text-davinci-003
  • Temperature: 1
  • Maximum length: 500
  • Top P: 1
  • Frequency penalty: 0
  • Presence penalty: 0
  • Prompt: "Please explain the following MITRE ATT&CK tactics and techniques: ["DefenseEvasion"], ["T1564"]"
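
For reference, these Playground settings are equivalent to a request like the one below against OpenAI's completions endpoint. This is a sketch for illustration: substitute your own API key, and note that the Playground's "Maximum length" corresponds to the API's "max_tokens" parameter.

    POST https://api.openai.com/v1/completions
    Authorization: Bearer <your-OpenAI-API-key>
    Content-Type: application/json

    {
      "model": "text-davinci-003",
      "prompt": "Please explain the following MITRE ATT&CK tactics and techniques: [\"DefenseEvasion\"], [\"T1564\"]",
      "temperature": 1,
      "max_tokens": 500,
      "top_p": 1,
      "frequency_penalty": 0,
      "presence_penalty": 0
    }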

Here is our first result:

[Screenshot: the first completion from text-davinci-003]

That's a good summary of MITRE ATT&CK tactic TA0005, Defense Evasion, but what is going on with the technique description? T1564 is Hide Artifacts - Process Injection (T1055) and Rootkits (T1014) are separate, individually named techniques. Let's try again.

[Screenshot: the second completion from text-davinci-003]

Not even close! "Exploitation of Remote Services" is technique T1210 within the Lateral Movement tactic. One more time:

[Screenshot: the third completion from text-davinci-003]

So, what is going on? Isn't ChatGPT supposed to be better than this?!

Well, it is. ChatGPT is actually fantastic at summarizing MITRE ATT&CK technique codes, but we haven't asked it yet. We have been using a different one of OpenAI's top-of-the-line Generative Pre-trained Transformer 3.5 (GPT-3.5) models, "text-davinci-003", in text completion mode, while ChatGPT uses the "gpt-3.5-turbo" model in chat completion mode. The difference is significant.
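
For comparison, here is roughly what the same question looks like when posed to the chat completions endpoint instead. Again, a sketch - the system message wording here is my own invention:

    POST https://api.openai.com/v1/chat/completions
    Authorization: Bearer <your-OpenAI-API-key>
    Content-Type: application/json

    {
      "model": "gpt-3.5-turbo",
      "messages": [
        {"role": "system", "content": "You are a helpful cybersecurity assistant."},
        {"role": "user", "content": "Please explain the following MITRE ATT&CK tactics and techniques: [\"DefenseEvasion\"], [\"T1564\"]"}
      ]
    }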

Here is an example of ChatGPT's response to the same query from above:

[Screenshot: ChatGPT's response to the same query]

But the OpenAI connector in our Azure Logic App doesn't give us a chat-based action, and we can't choose a Turbo model, so how can we get ChatGPT into our Sentinel workflow? Just as in our second installment, where we swapped the Sentinel Logic App connector for an HTTP action calling the Sentinel REST API directly, we can do the same with OpenAI's API. Let's walk through building a Logic App workflow that uses the chat model instead of the text completion model.

We'll refer to two reference documents from OpenAI: the Chat Create API reference and the Chat Completion guide. To keep this blog post to a reasonable length, I'll only summarize the setup tasks that you'll need to complete to replicate this example in your own environment:

  • A Key Vault to store your OpenAI API credential
  • A way to authorize your Logic App to read a secret from the Key Vault (I recommend Managed Identities with Azure RBAC)
  • Networking access to allow your Logic App to connect to the Key Vault (I'm using defined IP addresses and CIDR ranges for the appropriate Azure region from the Connectors documentation)

Now, let's open up our Logic App designer and start building the functionality. Just like before, we are using a Microsoft Sentinel incident trigger. We will follow it with a Key Vault action, "Get secret", where we specify the name of the secret that stores our API key.
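
In the Logic App's code view, that step comes out roughly like the snippet below. Treat this as a sketch: the connection reference and the secret name ("OpenAI-API-Key") are assumptions that will vary with your configuration, and the action name is the one we'll reference later from the HTTP step.

    "Get_OpenAI_API_token_from_Key_Vault": {
      "type": "ApiConnection",
      "runAfter": {},
      "inputs": {
        "host": {
          "connection": {
            "name": "@parameters('$connections')['keyvault']['connectionId']"
          }
        },
        "method": "get",
        "path": "/secrets/@{encodeURIComponent('OpenAI-API-Key')}/value"
      }
    }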

[Screenshot: the "Get secret" Key Vault action in the Logic App designer]

Next, we need to initialize and set some variables for our API request. This isn't strictly necessary; we could simply write out the request in our HTTP action, but this will make it much easier to change the prompt and other parameters later on. The two mandatory parameters in an OpenAI Chat API call are "model" and "messages", so let's initialize a string variable to store the model name and an array variable for the messages.
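
In code view, those two "Initialize variable" actions would look something like this sketch (the action names are arbitrary; the variable names are the ones we'll use for the rest of the workflow):

    "Initialize_model_variable": {
      "type": "InitializeVariable",
      "inputs": {
        "variables": [
          { "name": "model", "type": "String", "value": "gpt-3.5-turbo" }
        ]
      }
    },
    "Initialize_messages_variable": {
      "type": "InitializeVariable",
      "runAfter": { "Initialize_model_variable": [ "Succeeded" ] },
      "inputs": {
        "variables": [
          { "name": "messages", "type": "Array", "value": [] }
        ]
      }
    }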

[Screenshot: the "Initialize variable" actions for the model and messages]

The "messages" parameter is the main input for the chat model. It is structured as an array of message objects that each have a role, such as "system" or "user", and content. Let's look at an example from the Playground:

[Screenshot: a chat session in the OpenAI Playground showing System, User, and Assistant messages]

The System object allows us to set the behavioral context of the AI model for this chat session. The User object is our question, and the model will reply with an Assistant object. We can include prior responses with User and Assistant objects to give the AI model a "conversation history" if needed.
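
As raw JSON, a short conversation history along those lines might look like this (all of the message content here is illustrative):

    [
      { "role": "system", "content": "You are a cybersecurity assistant helping SOC analysts triage Microsoft Sentinel incidents." },
      { "role": "user", "content": "Please explain MITRE ATT&CK technique T1564." },
      { "role": "assistant", "content": "T1564, Hide Artifacts, covers the methods adversaries use to conceal files, accounts, and other evidence of their presence..." },
      { "role": "user", "content": "What data sources are useful for detecting it?" }
    ]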

Back in our Logic App designer, I've used two "Append to array variable" actions and one "Initialize variable" action to build the "Messages" array:

[Screenshot: the "Append to array variable" actions that build the Messages array]

Again, this could all be done in one step, but I've chosen to break up each object separately. If I want to modify my prompt, I only need to update the Prompt variable.
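
For reference, in code view an "Append to array variable" action for the user message looks something like the sketch below. The separate "prompt" string variable is an assumption that matches the pattern just described; your action and variable names may differ.

    "Append_user_message": {
      "type": "AppendToArrayVariable",
      "inputs": {
        "name": "messages",
        "value": {
          "role": "user",
          "content": "@{variables('prompt')}"
        }
      }
    }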

Next, let's adjust our temperature parameter to a very low value to influence the AI model to be more deterministic. The "Float" variable will be perfect for storing this value.
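
For example, as a sketch (0.2 is just one reasonable choice for near-deterministic output):

    "Initialize_temperature_variable": {
      "type": "InitializeVariable",
      "inputs": {
        "variables": [
          { "name": "temperature", "type": "Float", "value": 0.2 }
        ]
      }
    }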

[Screenshot: initializing the temperature variable as a Float]

Finally, let's put it all together with an HTTP action as follows:

  • Method: POST
  • URI: https://api.openai.com/v1/chat/completions
  • Body:
    {
      "model": "@{variables('model')}",
      "messages": @{variables('messages')},
      "temperature": @{variables('temperature')}
    }
  • Authentication: Raw
    • Value:
      Bearer @{body('Get_OpenAI_API_token_from_Key_Vault')?['value']}

Note that the "model" interpolation needs surrounding quotes to produce valid JSON, while the messages array and temperature float render as JSON values on their own.

[Screenshot: the completed HTTP action]

As before, let's run this playbook without any comment action to make sure everything works before we connect it back to the Sentinel instance. If all goes well, we'll get a 200 status code and a nice summary of the MITRE ATT&CK tactics and techniques in an Assistant message.
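
The response from the chat completions endpoint follows a predictable shape, along these lines (abridged; the identifiers, timestamp, and token counts are illustrative):

    {
      "id": "chatcmpl-...",
      "object": "chat.completion",
      "created": 1677858242,
      "model": "gpt-3.5-turbo-0301",
      "usage": { "prompt_tokens": 56, "completion_tokens": 278, "total_tokens": 334 },
      "choices": [
        {
          "message": {
            "role": "assistant",
            "content": "TA0005, Defense Evasion, consists of techniques that adversaries use to avoid detection..."
          },
          "finish_reason": "stop",
          "index": 0
        }
      ]
    }

That "choices" array is what we'll extract the reply from in the next step.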

[Screenshot: a successful test run returning a 200 status code and an Assistant message]

Now for the easy part: adding an incident comment with the Sentinel connector. We will parse the response body with a Parse JSON action and then initialize a variable with the text of ChatGPT's reply. Since we know the response format, we know the reply can be extracted from the first item of the "choices" array with this expression:

@{body('Parse_JSON')?['choices']?[0]?['message']?['content']}

[Screenshot: parsing the response and adding the comment to the Sentinel incident]

Here is a bird's-eye view of the completed Logic App flow:

[Screenshot: a bird's-eye view of the completed Logic App workflow]

Let's try it out! I've kept the comment left by the previous iteration of this playbook, from the first installment of this series, so it's easy to compare the output of DaVinci's text completion with the Turbo model's chat interaction.

[Screenshot: incident comments comparing the DaVinci text completion output with the Turbo chat output]

Much better, isn't it? The full-fledged Chat Completion interaction works really well as a virtual assistant.