Guest jtwells Posted April 7, 2023

This post follows the Seeking Dead and Dying Servers blog and introduces the Microsoft Defender for External Attack Surface Management (Defender EASM) APIs. If you are brand new to Defender EASM or haven't read the previous post, start there.

The Defender EASM APIs provide far more capability than the UI (user interface) alone, letting you work with large numbers of assets in a single action or piece of code. APIs give you an unencumbered interface between the application and the code or app interacting with it, which enables some exciting capabilities. However, leveraging an API usually involves significant coding work, even for experienced users. Luckily, I've written sample Jupyter Notebooks in Python and PowerShell that you can download and use regardless of your experience level.

Choosing how to interact with the APIs

Most initial API interaction involves a command-line interface (CLI) application such as cURL, which is fast and incredibly flexible but comes with a steep learning curve; you must be comfortable with shell scripting to do more than one-off API calls. API clients such as Postman and Insomnia make interacting with APIs much more manageable, and we have covered their usage elsewhere. If that feels like a better option, you can follow along after downloading our Postman collection. For many use cases, however, a Jupyter Notebook is a better option, as I'll explain below.

Using cURL or an API client such as Postman is great for quick interactions with APIs, depending on your comfort level at the command line. However, I've found that once I have the logistics of working with an API figured out, I want to start using it extensively right away. API clients don't always transition smoothly to production-like use (features like Postman's "Code snippet" capability are handy here). If you need complex logic that takes the output from an API and does something with the results programmatically, you will often exceed what you can get from a client, especially a freemium version.

Conversely, if you are a skilled developer or experienced with APIs and creating Azure Functions or Logic Apps (or any cloud equivalent), you probably just want to see the API docs and be left alone. The same goes for creating microservices or applications as part of a much bigger architecture. This is typically the realm of a Security Operations Engineer or similarly qualified individual; they can take the sample notebooks, ramp up, and be on their way almost instantly.

For the rest of us, Jupyter Notebooks can be incredibly helpful and shorten the time needed to go from testing an API to using it in production. Notebooks also provide a great entry point for coding. If you're like me, you've said you wanted to learn to code, or code better, more often than you care to admit. I taught myself Python and PowerShell using Jupyter Notebooks inside VS Code with just a few extensions, and you can get started today without writing a single line of code until you are ready. The advantage is that you can begin experimenting with the APIs, see the results immediately, and then easily share those snippets with others on your team to iterate on further. Jupyter Notebooks were made popular by data science and machine learning engineers, but they have since spread to many other domains.
They are even used from within Microsoft Sentinel and leveraged by the Microsoft Threat Intelligence Center (MSTIC) team and their fantastic tool MSTICPy. Jupyter Notebooks provide a web interface for executing, visualizing, and sharing code easily and in a granular fashion. For our examples, I'll provide one cell of code at a time that interacts with a single API endpoint and then display the results, if any, below it. One by one, you can see how each interaction is set up and executed, then fire it off yourself with only the required input, such as a query. This differs from a script that must be fully assembled and executed before you see any output, which can be tedious and time-consuming for big queries when you only want to make an API call and check whether you are getting the results you expected.

Note: One massive caveat here is that notebooks are not "production worthy" from an operational standpoint, nor are they secure. They are solely a testing tool, but they can produce code that can be taken and quickly implemented in a scalable and secure solution with minor modifications. The provided sample notebooks store Service Principal client secrets in plain text and should never be shared outside your organization. These samples are made to be easy to use and to follow the code execution without much extra effort. Using notebooks is, by default, an entirely local process, meaning the server behind it runs on localhost and everything you do is as secure as your local machine. The moment your notebook is shared publicly or run on an instance other than your own host is when things can go very wrong. Also be aware that notebooks have a "memory": if you run code, display an output, and then share the notebook in that state, you risk sharing those secrets with the public, because the displayed outputs are saved in the notebook file itself until you 'Clear All Outputs.' See the Jupyter docs regarding security matters if you are curious, but as long as everything stays local, you should be fine.

Our Setup

As mentioned, VS Code makes a tremendous all-in-one environment once correctly configured, so I'll quickly point out a few things that may help. There are tons of great IDEs, and many can run Jupyter and interpret Python or PowerShell, but few do both, which is one of the many reasons I love VS Code.

1. Install the language of your choice. Python and PowerShell have extensive support across many system architectures and operating systems, in addition to environment managers such as Anaconda. Choose what works for you, but if the language is not already installed, check with your systems admin first to see what is permitted or recommended.
2. Install the VS Code extensions that add the needed functionality:
   - For Python, get the Python Extension for VS Code along with the Jupyter Extension for VS Code.
   - For PowerShell, get the PowerShell Extension. You will also need the .NET Extension Pack, which includes Jupyter support.
3. Download our sample notebooks and, once the required extensions are activated, open them in VS Code.
4. You will need an MDEASM workspace with a completed Discovery run to query the APIs against.
5. Lastly, you will need API credentials, and we will be ready.

Authentication

There are several ways to obtain the necessary MDEASM API credentials, and every call to the API must include an authorization header containing a valid Azure AD Bearer Token.
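Concretely, every request the notebooks make ends up looking something like this minimal sketch, with the token carried in an Authorization header. The URL below is a placeholder for illustration, not the documented EASM route, and the token itself comes from the helper function described next:

```python
import requests

bearer_token = "<a valid Azure AD bearer token>"   # placeholder; obtained as described below
url = "https://<your-easm-endpoint>/assets"        # placeholder URL, not the real route

# The Authorization header is required on every MDEASM API call.
response = requests.get(
    url,
    headers={"Authorization": f"Bearer {bearer_token}"},
    params={"filter": "state = confirmed"},
)
print(response.status_code)
```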
For our case, I took the liberty of writing a simple function that authenticates and provides our notebook with a bearer token that will eventually expire on its own. The function also checks whether the current bearer token has expired and, if it has, requests a new valid token. This is the Client Service Principal authentication flow, and it lends itself nicely to scripts and processes that need a token for as long as the task runs and then let it expire without further interaction. Setting up a Client Service Principal requires some extended permissions; if you don't have one, you can get a token with the Azure CLI instead. Check with your admin to see whether they can help you get an Azure AD Application configured or whether using the CLI is an option, but you will need the following either way.

If obtaining a bearer token from an AAD App:
- TenantId
- ClientId
- ClientSecret

If obtaining a bearer token from the CLI, you will need:
- BearerToken (you will provide it to the notebook manually)
- BearerTokenExpires (in a standard DateTime format)
- TenantId

For everything else, you will need:
- SubscriptionId
- ResourceGroupName
- ResourceName (the name of your MDEASM workspace)
- Region your MDEASM resource is deployed in

With that, let's get started.

The Notebook

I love Jupyter Notebooks, especially when prototyping Azure Functions or working with an unfamiliar API. APIs are usually brittle and not very forgiving when given the wrong input, so testing with a notebook is great: you only run the code you need instead of compiling an entire script every time. What APIs lack in user-friendliness, they make up for in flexibility, speed, and robustness, which we will take advantage of today.

After following the steps from our previous blog, Seeking Dead and Dying Servers, we may have a large set of data from our queries looking for Microsoft IIS and Apache webservers with a CVSSv3 score of 9 or higher (hopefully not). More realistically, you changed the query to a CVSSv3 score of 7 or above, which still returns a significant number of assets and is challenging to work with in the UI. Making things more complex, the returned assets come from all over the enterprise. You want a list you can share with others, broken down by their distribution, such as FQDN, IP Block, or ASN. This is something you can do easily via the API.

Configuration

At the top of either notebook is a title and a brief explanation. Below that is the first cell, which contains the most critical variables and a helper function that obtains a bearer token and then checks whether it has expired every time it is called again. If you set up an AAD App, enter the clientId, clientSecret, and tenantId. Otherwise, if you used an alternate method of getting a token (e.g., the Azure CLI), place the token in its respective variable along with its expiry. Regardless of how you obtained the token, you still need to provide the remaining values: subscriptionId, resourceGroupName, resourceName, and region.

Note that all these values except BearerTokenExpires must be within quotes, which types them as strings; BearerTokenExpires is purely numerical and is typed as an integer. Strings without quotes and integers with quotes will create errors, so enter them as shown in the provided examples. The remaining values are mostly set for you or are used to check for errors later, so you can ignore them for now.
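For reference, the token helper in that first cell behaves roughly like the minimal Python sketch below. The variable names, the use of the requests library, and the EASM scope value are my assumptions for illustration; the sample notebooks contain the authoritative implementation.

```python
import time
import requests

# Values from the notebook's configuration cell (placeholders)
tenantId = "<your-tenant-id>"
clientId = "<your-client-id>"
clientSecret = "<your-client-secret>"   # plain text here, which is exactly why notebooks must stay local

bearerToken = None
bearerTokenExpires = 0  # epoch seconds; an integer, not a quoted string

def get_bearer_token():
    """Return a cached bearer token, refreshing it via the client-credentials flow when it has expired."""
    global bearerToken, bearerTokenExpires
    if bearerToken and time.time() < bearerTokenExpires - 60:
        return bearerToken  # still valid, with a 60-second safety margin

    token_url = f"https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token"
    resp = requests.post(token_url, data={
        "grant_type": "client_credentials",
        "client_id": clientId,
        "client_secret": clientSecret,
        # Scope for the Defender EASM data plane: an assumption, confirm against the API docs.
        "scope": "https://easm.defender.microsoft.com/.default",
    })
    resp.raise_for_status()
    payload = resp.json()
    bearerToken = payload["access_token"]
    bearerTokenExpires = time.time() + int(payload["expires_in"])
    return bearerToken
```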
Notebook cells are sequential, so the variables you enter at the top must be filled in and that cell executed before they are available further down in the notebook. The same goes for functions: I've written a function for each endpoint and added another cell directly below it that runs the function with the input you supply. If you don't execute the function cell first, there will be no function in memory to call; it's a common mistake.

If everything is set up correctly, meaning your AAD App is configured properly or your CLI-produced bearer token has its expiration time set, you can run the cell by pressing the 'Execute Cell' button beside it (it's shaped like a play button) or by pressing Shift+Enter. Do not press 'Run All', as there are missing values you still need to enter below. If successful, you should see this output:

Great! You are ready to start using the MDEASM APIs.

Assets – List

This will likely be the endpoint used most, as it takes a properly formatted query and requests the full asset details from the API for each asset returned. As you may have noticed while exploring the UI, a lot of data is associated with each asset; even so, that is nominal compared to what the API returns. The API provides all the data to you at once, and it is up to you to decide what you care about and what you don't.

Scroll down to the section in the notebook titled 'Assets – List.' There is a cell containing the code that makes the API call in the form of a function, and another cell below it that uses the function and displays the results. First, execute (Shift+Enter) the upper cell containing the function definition, def get_asset_list() or function GetAssetList, depending on your selected language. Don't worry if nothing appears to happen; this defines the function and prepares it for the next cell. In the next cell, replace the empty quotes with the query I have prepared for this example:

'state = confirmed AND kind in ("host", "ipAddress") AND webComponentType = Server AND webComponentName ^=in ("Apache", "Microsoft IIS") AND cvss3BaseScore >= 7'

You may loosely recognize this from the previous blog as the query for any webserver associated with a host or IP address whose Web Component Name starts with "Apache" or "Microsoft IIS" and that has a CVSS v3 score of 7 or higher (with some slight modifications). You may also recall me mentioning that APIs are notoriously unforgiving of incorrect input, and this looks different than it does in the UI. The API needs this standardized format to consistently return precisely what you or your script is asking for, so it must be formatted very specifically. It's easy to make out the same groupings of filters as in the UI, but the names are camel case (e.g., webComponentType) or altogether different in certain instances (cvss3BaseScore vs. 'CVSS v3 Score'). Also, take note of the spaces and the use of the word AND to combine these filters into one cohesive query statement. Lastly, note the 'starts with, in' operator (^=in) used with webComponentName: the individual values must each be enclosed in quotes, separated by commas, and wrapped in parentheses.

Once the query is entered in the cell, press Shift+Enter again and let it work through the query. This may take a while, depending on how big your result set is; in my example, it took 1 minute 42.6 seconds to return 551 results using Python. There are several factors at play here, not least of which is a ~1-second delay between subsequent API calls.
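Under the hood, the Assets – List helper works roughly like the simplified sketch below, which reuses the get_bearer_token() helper from the earlier sketch and pages through results with a short pause between calls. The endpoint URL, api-version, parameter names, and response field names are assumptions based on this walkthrough; treat the sample notebooks and the API reference as authoritative.

```python
import time
import requests

# Configuration values from the notebook's first cell (placeholders)
region = "<region>"
subscriptionId = "<subscription-id>"
resourceGroupName = "<resource-group>"
resourceName = "<workspace-name>"

def get_asset_list(query, max_page_size=25):
    """Page through all assets matching a query, pausing briefly between API calls."""
    # Assumed data-plane route built from the configuration values; confirm the real route in the docs.
    url = (f"https://{region}.easm.defender.microsoft.com/subscriptions/{subscriptionId}"
           f"/resourceGroups/{resourceGroupName}/workspaces/{resourceName}/assets")
    params = {"filter": query, "maxPageSize": max_page_size, "api-version": "<api-version>"}
    assets = []
    while url:
        resp = requests.get(url, headers={"Authorization": f"Bearer {get_bearer_token()}"}, params=params)
        resp.raise_for_status()
        page = resp.json()
        assets.extend(page.get("content", []))  # assumed field name for the page of results
        url = page.get("nextLink")              # assumed field name for the link to the next page
        params = None                           # the next-page link already carries the query parameters
        time.sleep(1)                           # give the API a little breathing room between calls
    return assets
```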
The results from the API are paginated, and each "page" has a link that points to the next page of results. The slight delay between page requests is an industry-wide practice of good internet citizenship. Still, more than just being 'nice,' it helps reduce the exception handling one must account for when APIs are under heavy utilization. Every page of results returned incurs a one-second delay, and for 551 assets in chunks of 25, that's about 22 seconds of giving the API a little breathing room. That only becomes a real problem at scale: if 7k assets were returned, that slight delay would add roughly 4.6 minutes of pure waiting. We can do better.

The number of results per page is set with every call via a URL parameter called maxPageSize. If you recall, at the top of the notebook, maxPageSize is set to 25, which is a conservative number and much safer than, say, 500. I recommend not exceeding 100, because pushing APIs hard can have unpredictable outcomes. With a larger page size, we pause only about 7 seconds overall, and my example returned the same results in 1 minute 3.7 seconds. That's 38.9 seconds faster, more improvement than the roughly 15 seconds of deliberate delay we trimmed. As with all things, use moderation.

In the response below the cell you are using, you should see either the query results or a message stating that no assets matched. If you see no results, pat your vuln management team on the back. You can adjust your query with a lower cvss3BaseScore or remove the webComponentName filter altogether for more possible results. Note that you see only the names of the assets being returned, not any of the hundreds of other data points we have for those assets. Next, we will perform a little work on the raw output to better understand how these assets relate to key centralized infrastructure like domains.

Near the bottom of the notebook, I've added a helper function I use pretty regularly. It works precisely like our get_asset_list() and GetAssetList functions but, this time, finds common pieces of infrastructure like domains, IP Blocks, and ASNs. This can be a great way to see across organizational boundaries and find pockets of technology requiring attention. Why mitigate one risk when you can resolve many all at once?

Get Common Assets

This function demonstrates programmatic functionality that is not natively part of the API or the UI. It uses the same code as Assets – List to obtain a batch of matching assets, then parses each returned asset and, based on its kind, looks for associated domains, IP Blocks, and ASNs and adds them to a dictionary. When it finds one, it first checks whether it already exists in the dictionary: if so, it increments the count by one; if not, it adds it with a count of 1. In the end, the dictionary is sorted by observation count from highest to lowest, making it easy to see where the most significant number of assets related to your query are clustered (a simplified sketch of this counting logic appears at the end of this post).

Hopefully, this sparks some ideas for ways to automate getting this information into the right hands in your organization. Suppose X asset is on Y domain, and everything there is the responsibility of Praveen's team. What's stopping you from scripting a small Azure Function that does a weekly check and sends an email to the team's distribution group? What about graphs and visualizations completely custom to your company's risk appetite? The possibilities are infinite, and now you have a place to get started and experiment freely. We will continue to update the notebooks as more endpoints and features become generally available.
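To close, here is a simplified Python sketch of the counting logic behind Get Common Assets. The asset field names (kind, domains, ipBlocks, asns) are assumptions for illustration; the sample notebooks contain the real version.

```python
from collections import Counter

def get_common_assets(assets):
    """Tally the domains, IP Blocks, and ASNs that a batch of matching assets cluster around."""
    counts = Counter()
    for asset in assets:
        kind = asset.get("kind")
        # Field names below are illustrative assumptions about the response shape.
        if kind == "host":
            for domain in asset.get("domains") or []:
                counts[("domain", domain)] += 1
        elif kind == "ipAddress":
            for block in asset.get("ipBlocks") or []:
                counts[("ipBlock", block)] += 1
            for asn in asset.get("asns") or []:
                counts[("asn", asn)] += 1
    # Sort from most- to least-observed so the biggest clusters float to the top.
    return dict(sorted(counts.items(), key=lambda item: item[1], reverse=True))

# Usage, combining it with the earlier sketch:
# common = get_common_assets(get_asset_list(query))
```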