Debugging Playbooks

This article assumes knowledge of how to create Microsoft Sentinel Analytic Rules, Automation Rules, and playbooks.



Create the playbook

Create the playbook first, since it must exist before it can be selected in the automation rule. In the example shown below, the playbook iterates through all the alerts in the incident and generates a task based on information in each alert.
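For reference, the loop-plus-task pattern described above corresponds roughly to a workflow definition like the sketch below. The action names, task title, and connector path are placeholders (the Logic Apps designer generates the real values); the "incidentArmId", "taskTitle", and "taskDescription" parameter names come from the "Add task to incident" action.

```json
{
  "For_each": {
    "type": "Foreach",
    "foreach": "@triggerBody()?['object']?['properties']?['Alerts']",
    "actions": {
      "Add_task_to_incident": {
        "type": "ApiConnection",
        "inputs": {
          "host": {
            "connection": {
              "name": "@parameters('$connections')['azuresentinel']['connectionId']"
            }
          },
          "method": "post",
          "path": "<generated-by-the-designer>",
          "body": {
            "incidentArmId": "@triggerBody()?['object']?['id']",
            "taskTitle": "Review this alert",
            "taskDescription": "@{items('For_each')?['properties']?['description']}"
          }
        }
      }
    }
  }
}
```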


Keep in mind that you will need to grant the playbook's managed identity access to the Microsoft Sentinel instance so that it can add the task to the incident.
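One way to grant that access is with the Azure CLI. This is a sketch with placeholder IDs; it assumes the Microsoft Sentinel Responder role is sufficient for adding tasks, and it cannot run outside an authenticated Azure CLI session.

```shell
# Grant the playbook's system-assigned managed identity the
# Microsoft Sentinel Responder role on the workspace's resource group.
# <principal-id>, <subscription-id>, and <resource-group> are placeholders.
az role assignment create \
  --assignee-object-id "<principal-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Microsoft Sentinel Responder" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```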



Create sample Analytic rule

When you are creating a new playbook, you will need to test it. Rather than waiting for the perfect incident to be created, you can create a dummy Analytic rule that utilizes the KQL "datatable" operator to generate the information you need to test your playbook.



An example of this is shown below. This code sample was taken from the "datatable" operator documentation on Microsoft Learn (datatable operator - Azure Data Explorer & Real-Time Analytics | Microsoft Learn).





Code:
let BlogTable = datatable (Date:datetime, Event:string, MoreData:dynamic)[
    datetime(1910-06-11), "Born", dynamic({"key1":"value1", "key2":"value2"}),
    datetime(1930-01-01), "Enters Ecole Navale", dynamic({"key1":"value3", "key2":"value4"}),
    datetime(1953-01-01), "Published first book", dynamic({"key1":"value5", "key2":"value6"}),
    datetime(1997-06-25), "Died", dynamic({"key1":"value7", "key2":"value8"}),
];
BlogTable





To trigger the loop in the playbook, set "Event grouping" to "Trigger an alert for each event", then enable "Alert grouping" and set it to "Grouping all alerts triggered by this rule into a single incident".



Create the Automation Rule

Create a new automation rule that triggers off the analytic rule created above. This ensures no other incidents will accidentally trigger the automation rule and playbook while you are testing. Once the playbook has been debugged, the automation rule can be modified to trigger off any other analytic rules, or all of them.



Set the "Actions" to run the playbook that was created above and save it.



Debugging the playbook

Once your analytic rule has triggered, you can disable it so that it does not create an excessive number of incidents. This blog will show how to rerun a playbook with old data.



In Microsoft Sentinel's Automation section, select the "Active playbooks" tab and select your playbook. You will see a page that looks like the image below, showing how many runs have been performed and their status. Note that in the image below, this playbook was run and had errors. If a run was successful, the "Status" field will be set to "Succeeded".




Click on the identifier link to be brought into the playbook's run history, which will show the error.


The full functionality of playbooks is outside the scope of this blog post, but note that by clicking the arrows in the loop, you can progress to the next error or the next value in the loop. This is very useful if only certain entries in the loop are causing issues.


In this case, the error shown is "BadRequest". This is a fairly generic error message but usually means that one of the parameters being set is wrong. If you click on the step, a new pane will open showing all the information about the error including the inputs to the step and the output. Use the "INPUTS" section to verify what parameters are being used and the "OUTPUTS" section to see what has been returned.



Looking at the "INPUTS", shown below, doesn't provide too much useful information in this case. It appears that an ARM Id is being passed in as "incidentArmId" and the "taskTitle" and "taskDescription" both have values.




Looking at the "OUTPUTS" section, shown below, provides a bit more useful information. The "innerError" message states that "Incident Arm id is invalid" which provides a good place to start looking.




To edit the playbook, close this run history page, and then in the "Overview" page, click the "Edit" button.




Click on the "Add task to incident" step. This will open the parameters pane, showing what values are being used, as shown below.


Notice that the "Alert ARM ID" is being passed in instead of the required "Incident ARM Id". Set the parameter to the correct value and save the playbook.



Rerunning a playbook

Rather than needing to wait until the incident is triggered again, you can rerun the playbook using old data. Go back to the "Overview" page and look at the "Runs history" tab, which is selected by default. Select the latest run and click its identifier to go into its run history. At the top of the page will be a "Resubmit" button which will rerun the playbook using the same information that was used to run this instance.




When that button is clicked, the Notifications button at the top of the page will show that the playbook has been successfully resubmitted. Wait a little while and then refresh the run history shown on the left side of the page. If everything went OK, there will be a new successful entry added. Click on that to see the successful run.


The image below shows this playbook with a successful run.




Debugging Conditional steps

The debugging steps listed above have limited usefulness when debugging a "Condition" step. If you click on the step, it will only show you the result of the comparison, not what the comparison is or what the parameter values were. The image below shows the playbook modified to use a condition step to only add the task to the incident if the alert's status is "new". Most of the time, all the alerts inside an incident will have the same status; this was just used as an example.




Note that to access the alert's status, use the expression

Code:
items('For_each')['properties']['status']

There is also a Parse JSON action that could be used instead, but that is outside the scope of this blog.
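Under the hood, a condition step like this is an "If" action in the workflow definition. A rough sketch is shown below (the "Add task to incident" action, which would sit in the true branch, is elided, and the action name is a placeholder):

```json
{
  "Condition": {
    "type": "If",
    "expression": {
      "and": [
        {
          "equals": [
            "@items('For_each')['properties']['status']",
            "new"
          ]
        }
      ]
    },
    "actions": {},
    "else": {
      "actions": {}
    }
  }
}
```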



After rerunning this playbook, the run succeeds, but no new tasks are added to the incident. Looking at the last run and clicking on the "Condition" step displays only the result's value, as shown below.


While this is useful for showing what the condition evaluated to, there is no way to determine how the result was computed. By editing the workflow, you can see the formula used in the condition, but not the values of the parameters it used.



To determine those values, you could look at the JSON in the "For each" step. However, that requires knowing the structure of the JSON: in this case, knowing that the "status" field is under the "properties" field, and then determining which iteration of the loop is being viewed.



In this case, it may be easier to create a new variable and set its value to the same expression that the condition step is using. Use the same formula for finding the alert's status when setting the variable as was used in the condition step shown above.
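In workflow-definition terms, this is an "Initialize variable" action at the top level of the playbook plus a "Set variable" action inside the loop, just before the condition. A sketch is shown below; the variable name "AlertStatus" is an arbitrary choice:

```json
{
  "Initialize_variable": {
    "type": "InitializeVariable",
    "inputs": {
      "variables": [
        { "name": "AlertStatus", "type": "string", "value": "" }
      ]
    }
  },
  "Set_variable": {
    "type": "SetVariable",
    "inputs": {
      "name": "AlertStatus",
      "value": "@items('For_each')['properties']['status']"
    }
  }
}
```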


Now, when looking at the run history, clicking on the "Set variable" step will show the value that is being used in the condition step as shown below.


Looking at this, it was determined that the parameter's value is "New" (with a capital "N") but it is being compared to "new" (with a lowercase "n"). Changing the text the parameter is compared against in the condition step resolved the issue.
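An alternative that avoids the case-sensitivity problem entirely is to normalize the value with the toLower() expression function before comparing, so the condition matches "New", "new", or "NEW" alike. The comparison inside the condition would then look roughly like:

```json
{
  "equals": [
    "@toLower(items('For_each')['properties']['status'])",
    "new"
  ]
}
```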



Trigger History

Sometimes Logic Apps do not get the correct data to properly execute the workflow. This will not typically be the case with playbooks as all the data is being passed in from Microsoft Sentinel to the Logic App. However, if you want to see all the data being passed into the playbook, select the "Trigger history" tab from the "Overview" page and then select one of the entries. This will open a page like the one shown below.


Select the "Outputs link" link to see the JSON representation of all the data being passed into the playbook. At times this can be faster than looking at the trigger in the actual workflow.



Summary

This blog post walked through one way to debug a playbook. There are other techniques, such as setting variables to intermediate values, that can also help.
