
How to expose NGINX Ingress Controller via Azure Front Door and Azure Private Link Service



This article shows how to use Azure Front Door Premium, Azure Web Application Firewall, and Azure Private Link Service (PLS) to securely expose and protect a workload running in Azure Kubernetes Service (AKS). The sample application is exposed via the NGINX Ingress Controller, which is configured to use a private IP address as a frontend IP configuration of the kubernetes-internal internal load balancer. For more information, see Create an ingress controller using an internal IP address.

 

 

 

In addition, this sample shows how to deploy an Azure Kubernetes Service cluster with API Server VNet Integration and how to use an Azure NAT Gateway to manage outbound connections initiated by AKS-hosted workloads. AKS clusters with API Server VNet Integration provide a series of advantages; for example, public network access or private cluster mode can be enabled or disabled without redeploying the cluster. For more information, see Create an Azure Kubernetes Service cluster with API Server VNet Integration. You can find the companion code in this GitHub repository.
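For reference, the following is a minimal Azure CLI sketch (not part of the companion Bicep modules) of creating an AKS cluster with API Server VNet Integration and a user-assigned NAT Gateway for outbound traffic. It assumes the aks-preview extension is installed; all resource names and subnet IDs are placeholders.

# Minimal sketch with placeholder names: the API server subnet and the node subnet must already
# exist, and the node subnet needs a NAT Gateway attached when using the userAssignedNATGateway outbound type.
az aks create \
  --name myAksCluster \
  --resource-group myResourceGroup \
  --location westeurope \
  --network-plugin azure \
  --enable-apiserver-vnet-integration \
  --apiserver-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVirtualNetwork/subnets/ApiServerSubnet" \
  --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVirtualNetwork/subnets/SystemSubnet" \
  --outbound-type userAssignedNATGateway \
  --generate-ssh-keys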

 

 

 

Prerequisites

 

 

 

 

Architecture

 

 

This sample provides a set of Bicep modules to deploy and configure an Azure Front Door Premium with a WAF Policy as a global load balancer in front of a public or private AKS cluster with API Server VNet Integration, Azure CNI as the network plugin, and Dynamic IP Allocation. The following diagram shows the architecture and network topology deployed by the sample:

 

 

 

[Diagram: architecture and network topology deployed by the sample]

 

A Deployment Script is used to create the NGINX Ingress Controller via Helm, configured to use a private IP address as a frontend IP configuration of the kubernetes-internal internal load balancer, and a sample httpbin web application via YAML manifests. The Origin child resource of the Azure Front Door Premium global load balancer is configured to call the sample application via the Azure Private Link Service, the kubernetes-internal internal load balancer of the AKS cluster, and the NGINX Ingress Controller, as shown in the following figure:

 

[Diagram: Azure Front Door Origin reaching the sample application through the Azure Private Link Service, the kubernetes-internal internal load balancer, and the NGINX Ingress Controller]

 

Bicep modules are parametric, so you can choose any network plugin:

 

NOTE

The sample was tested only with Azure CNI and Azure CNI Overlay.

 

The Bicep modules in the companion sample also allow you to install the following extensions and add-ons for Azure Kubernetes Service (AKS):

 

 

In a production environment, we strongly recommend deploying a private AKS cluster with Uptime SLA. For more information, see private AKS cluster with a Public DNS address. Alternatively, you can deploy a public AKS cluster and secure access to the API server using authorized IP address ranges.
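As a minimal sketch with placeholder names and example CIDR ranges, authorized IP ranges can be configured on an existing public cluster with a single Azure CLI command:

# Minimal sketch (placeholder values): restrict access to the API server of a public AKS cluster
# to a set of authorized IP address ranges.
az aks update \
  --name myAksCluster \
  --resource-group myResourceGroup \
  --api-server-authorized-ip-ranges 203.0.113.0/24,198.51.100.10/32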

 

The Bicep modules deploy the following Azure resources:

 

NOTE

You can find the architecture.vsdx file used for the diagram under the visio folder.

 

What is Bicep?

 

 

Bicep is a domain-specific language (DSL) that uses a declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.

 

 

 

Deploy the Bicep modules

 

 

You can deploy the Bicep modules in the bicep folder using the deploy.sh Bash script in the same folder. Specify a value for the following parameters in the deploy.sh script and main.parameters.json parameters file before deploying the Bicep modules.

 

  • prefix: specifies a prefix for all the Azure resources.
  • authenticationType: specifies the type of authentication when accessing the Virtual Machine. sshPublicKey is the recommended value. Allowed values: sshPublicKey and password.
  • vmAdminUsername: specifies the name of the administrator account of the virtual machine.
  • vmAdminPasswordOrKey: specifies the SSH Key or password for the virtual machine.
  • aksClusterSshPublicKey: specifies the SSH Key or password for AKS cluster agent nodes.
  • aadProfileAdminGroupObjectIDs: when deploying an AKS cluster with Azure AD and Azure RBAC integration, this array parameter contains the list of Azure AD group object IDs that will have the admin role of the cluster.
  • keyVaultObjectIds: specifies the object IDs of the service principals to configure in Key Vault access policies.

 

We suggest reading sensitive configuration data such as passwords or SSH keys from a pre-existing Azure Key Vault resource. For more information, see Use Azure Key Vault to pass secure parameter value during Bicep deployment.
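If you prefer to keep everything in the Azure CLI, a minimal sketch (with placeholder names, as an alternative to the keyVault reference in the parameters file) could read the secret at deployment time and pass it as a parameter:

# Minimal sketch (placeholder names): read the VM administrator password from an existing
# Azure Key Vault and pass it to the Bicep deployment as a parameter.
vmAdminPasswordOrKey=$(az keyvault secret show \
  --vault-name myKeyVault \
  --name vmAdminPasswordOrKey \
  --query value \
  --output tsv)

az deployment group create \
  --resource-group myResourceGroup \
  --template-file main.bicep \
  --parameters main.parameters.json \
  --parameters prefix=myPrefix vmAdminPasswordOrKey="$vmAdminPasswordOrKey"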

 

 

 

#!/bin/bash

 

# Template

template="main.bicep"

parameters="main.parameters.json"

 

# AKS cluster name

prefix="<Azure-Resource-Name-Prefix>"

aksName="${prefix}Aks"

validateTemplate=1

useWhatIf=0

update=1

installExtensions=0

 

# Name and location of the resource group for the Azure Kubernetes Service (AKS) cluster

resourceGroupName="${prefix}RG"

location="westeurope"

deploymentName="main"

 

# Subscription id, subscription name, and tenant id of the current subscription

subscriptionId=$(az account show --query id --output tsv)

subscriptionName=$(az account show --query name --output tsv)

tenantId=$(az account show --query tenantId --output tsv)

 

# Install aks-preview Azure extension

if [[ $installExtensions == 1 ]]; then

echo "Checking if [aks-preview] extension is already installed..."

az extension show --name aks-preview &>/dev/null

 

if [[ $? == 0 ]]; then

echo "[aks-preview] extension is already installed"

 

# Update the extension to make sure you have the latest version installed

echo "Updating [aks-preview] extension..."

az extension update --name aks-preview &>/dev/null

else

echo "[aks-preview] extension is not installed. Installing..."

 

# Install aks-preview extension

az extension add --name aks-preview 1>/dev/null

 

if [[ $? == 0 ]]; then

echo "[aks-preview] extension successfully installed"

else

echo "Failed to install [aks-preview] extension"

exit

fi

fi

 

# Registering AKS feature extensions

aksExtensions=(

"PodSecurityPolicyPreview"

"KubeletDisk"

"AKS-KedaPreview"

"RunCommandPreview"

"EnablePodIdentityPreview "

"UserAssignedIdentityPreview"

"EnablePrivateClusterPublicFQDN"

"PodSubnetPreview"

"EnableOIDCIssuerPreview"

"EnableWorkloadIdentityPreview"

"EnableImageCleanerPreview"

"AKS-VPAPreview"

"AzureOverlayPreview"

"KubeProxyConfigurationPreview"

)

ok=0

registeringExtensions=()

for aksExtension in ${aksExtensions[@]}; do

echo "Checking if [$aksExtension] extension is already registered..."

extension=$(az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/$aksExtension') && @.properties.state == 'Registered'].{Name:name}" --output tsv)

if [[ -z $extension ]]; then

echo "[$aksExtension] extension is not registered."

echo "Registering [$aksExtension] extension..."

az feature register --name $aksExtension --namespace Microsoft.ContainerService

registeringExtensions+=("$aksExtension")

ok=1

else

echo "[$aksExtension] extension is already registered."

fi

done

echo $registeringExtensions

delay=1

for aksExtension in ${registeringExtensions[@]}; do

echo -n "Checking if [$aksExtension] extension is already registered..."

while true; do

extension=$(az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/$aksExtension') && @.properties.state == 'Registered'].{Name:name}" --output tsv)

if [[ -z $extension ]]; then

echo -n "."

sleep $delay

else

echo "."

break

fi

done

done

 

if [[ $ok == 1 ]]; then

echo "Refreshing the registration of the Microsoft.ContainerService resource provider..."

az provider register --namespace Microsoft.ContainerService

echo "Microsoft.ContainerService resource provider registration successfully refreshed"

fi

fi

 

# Get the last Kubernetes version available in the region

kubernetesVersion=$(az aks get-versions --location $location --query "orchestrators[?isPreview==false].orchestratorVersion | sort(@) | [-1]" --output tsv)

 

if [[ -n $kubernetesVersion ]]; then

echo "Successfully retrieved the last Kubernetes version [$kubernetesVersion] supported by AKS in [$location] Azure region"

else

echo "Failed to retrieve the last Kubernetes version supported by AKS in [$location] Azure region"

exit

fi

 

# Check if the resource group already exists

echo "Checking if [$resourceGroupName] resource group actually exists in the [$subscriptionName] subscription..."

 

az group show --name $resourceGroupName &>/dev/null

 

if [[ $? != 0 ]]; then

echo "No [$resourceGroupName] resource group actually exists in the [$subscriptionName] subscription"

echo "Creating [$resourceGroupName] resource group in the [$subscriptionName] subscription..."

 

# Create the resource group

az group create --name $resourceGroupName --location $location 1>/dev/null

 

if [[ $? == 0 ]]; then

echo "[$resourceGroupName] resource group successfully created in the [$subscriptionName] subscription"

else

echo "Failed to create [$resourceGroupName] resource group in the [$subscriptionName] subscription"

exit

fi

else

echo "[$resourceGroupName] resource group already exists in the [$subscriptionName] subscription"

fi

 

# Create AKS cluster if does not exist

echo "Checking if [$aksName] aks cluster actually exists in the [$resourceGroupName] resource group..."

 

az aks show --name $aksName --resource-group $resourceGroupName &>/dev/null

notExists=$?

 

if [[ $notExists != 0 || $update == 1 ]]; then

 

if [[ $notExists != 0 ]]; then

echo "No [$aksName] aks cluster actually exists in the [$resourceGroupName] resource group"

else

echo "[$aksName] aks cluster already exists in the [$resourceGroupName] resource group. Updating the cluster..."

fi

 

# Delete any existing role assignments for the user-defined managed identity of the AKS cluster

# in case you are re-deploying the solution in an existing resource group

echo "Retrieving the list of role assignments on [$resourceGroupName] resource group..."

assignmentIds=$(az role assignment list \

--scope "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}" \

--query [].id \

--output tsv \

--only-show-errors)

 

if [[ -n $assignmentIds ]]; then

echo "[${#assignmentIds[@]}] role assignments have been found on [$resourceGroupName] resource group"

for assignmentId in ${assignmentIds[@]}; do

if [[ -n $assignmentId ]]; then

az role assignment delete --ids $assignmentId

 

if [[ $? == 0 ]]; then

assignmentName=$(echo $assignmentId | awk -F '/' '{print $NF}')

echo "[$assignmentName] role assignment on [$resourceGroupName] resource group successfully deleted"

fi

fi

done

else

echo "No role assignment actually exists on [$resourceGroupName] resource group"

fi

 

# Get the kubelet managed identity used by the AKS cluster

echo "Retrieving the kubelet identity from the [$aksName] AKS cluster..."

clientId=$(az aks show \

--name $aksName \

--resource-group $resourceGroupName \

--query identityProfile.kubeletidentity.clientId \

--output tsv 2>/dev/null)

 

if [[ -n $clientId ]]; then

# Delete any role assignment to kubelet managed identity on any ACR in the resource group

echo "kubelet identity of the [$aksName] AKS cluster successfully retrieved"

echo "Retrieving the list of ACR resources in the [$resourceGroupName] resource group..."

acrIds=$(az acr list \

--resource-group $resourceGroupName \

--query [].id \

--output tsv)

 

if [[ -n $acrIds ]]; then

echo "[${#acrIds[@]}] ACR resources have been found in [$resourceGroupName] resource group"

for acrId in ${acrIds[@]}; do

if [[ -n $acrId ]]; then

acrName=$(echo $acrId | awk -F '/' '{print $NF}')

echo "Retrieving the list of role assignments on [$acrName] ACR..."

assignmentIds=$(az role assignment list \

--scope "$acrId" \

--query [].id \

--output tsv \

--only-show-errors)

 

if [[ -n $assignmentIds ]]; then

echo "[${#assignmentIds[@]}] role assignments have been found on [$acrName] ACR"

for assignmentId in ${assignmentIds[@]}; do

if [[ -n $assignmentId ]]; then

az role assignment delete --ids $assignmentId

 

if [[ $? == 0 ]]; then

assignmentName=$(echo $assignmentId | awk -F '/' '{print $NF}')

echo "[$assignmentName] role assignment on [$acrName] ACR successfully deleted"

fi

fi

done

else

echo "No role assignment actually exists on [$acrName] ACR"

fi

fi

done

else

echo "No ACR actually exists in [$resourceGroupName] resource group"

fi

else

echo "No kubelet identity exists for the [$aksName] AKS cluster"

fi

 

# Validate the Bicep template

if [[ $validateTemplate == 1 ]]; then

if [[ $useWhatIf == 1 ]]; then

# Execute a deployment What-If operation at resource group scope.

echo "Previewing changes deployed by [$template] Bicep template..."

az deployment group what-if \

--resource-group $resourceGroupName \

--template-file $template \

--parameters $parameters \

--parameters prefix=$prefix \

location=$location \

aksClusterKubernetesVersion=$kubernetesVersion

 

if [[ $? == 0 ]]; then

echo "[$template] Bicep template validation succeeded"

else

echo "Failed to validate [$template] Bicep template"

exit

fi

else

# Validate the Bicep template

echo "Validating [$template] Bicep template..."

output=$(az deployment group validate \

--resource-group $resourceGroupName \

--template-file $template \

--parameters $parameters \

--parameters prefix=$prefix \

location=$location \

aksClusterKubernetesVersion=$kubernetesVersion)

 

if [[ $? == 0 ]]; then

echo "[$template] Bicep template validation succeeded"

else

echo "Failed to validate [$template] Bicep template"

echo $output

exit

fi

fi

fi

 

# Deploy the Bicep template

echo "Deploying [$template] Bicep template..."

az deployment group create \

--name $deploymentName \

--resource-group $resourceGroupName \

--only-show-errors \

--template-file $template \

--parameters $parameters \

--parameters prefix=$prefix \

location=$location \

aksClusterKubernetesVersion=$kubernetesVersion 1>/dev/null

 

if [[ $? == 0 ]]; then

echo "[$template] Bicep template successfully provisioned"

else

echo "Failed to provision the [$template] Bicep template"

exit

fi

else

echo "[$aksName] aks cluster already exists in the [$resourceGroupName] resource group"

fi

 

# Create AKS cluster if does not exist

echo "Checking if [$aksName] aks cluster actually exists in the [$resourceGroupName] resource group..."

 

az aks show --name $aksName --resource-group $resourceGroupName &>/dev/null

 

if [[ $? != 0 ]]; then

echo "No [$aksName] aks cluster actually exists in the [$resourceGroupName] resource group"

exit

fi

 

# Get the user principal name of the current user

echo "Retrieving the user principal name of the current user from the [$tenantId] Azure AD tenant..."

userPrincipalName=$(az account show --query user.name --output tsv)

if [[ -n $userPrincipalName ]]; then

echo "[$userPrincipalName] user principal name successfully retrieved from the [$tenantId] Azure AD tenant"

else

echo "Failed to retrieve the user principal name of the current user from the [$tenantId] Azure AD tenant"

exit

fi

 

# Retrieve the objectId of the user in the Azure AD tenant used by AKS for user authentication

echo "Retrieving the objectId of the [$userPrincipalName] user principal name from the [$tenantId] Azure AD tenant..."

userObjectId=$(az ad user show --id $userPrincipalName --query id --output tsv 2>/dev/null)

 

if [[ -n $userObjectId ]]; then

echo "[$userObjectId] objectId successfully retrieved for the [$userPrincipalName] user principal name"

else

echo "Failed to retrieve the objectId of the [$userPrincipalName] user principal name"

exit

fi

 

# Retrieve the resource id of the AKS cluster

echo "Retrieving the resource id of the [$aksName] AKS cluster..."

aksClusterId=$(az aks show \

--name "$aksName" \

--resource-group "$resourceGroupName" \

--query id \

--output tsv 2>/dev/null)

 

if [[ -n $aksClusterId ]]; then

echo "Resource id of the [$aksName] AKS cluster successfully retrieved"

else

echo "Failed to retrieve the resource id of the [$aksName] AKS cluster"

exit

fi

 

# Assign Azure Kubernetes Service RBAC Cluster Admin role to the current user

role="Azure Kubernetes Service RBAC Cluster Admin"

echo "Checking if [$userPrincipalName] user has been assigned to [$role] role on the [$aksName] AKS cluster..."

current=$(az role assignment list \

--assignee $userObjectId \

--scope $aksClusterId \

--query "[?roleDefinitionName==$role].roleDefinitionName" \

--output tsv 2>/dev/null)

 

if [[ $current == "Owner" ]] || [[ $current == "Contributor" ]] || [[ $current == "$role" ]]; then

echo "[$userPrincipalName] user is already assigned to the [$current] role on the [$aksName] AKS cluster"

else

echo "[$userPrincipalName] user is not assigned to the [$role] role on the [$aksName] AKS cluster"

echo "Assigning the [$userPrincipalName] user to the [$role] role on the [$aksName] AKS cluster..."

 

az role assignment create \

--role "$role" \

--assignee $userObjectId \

--scope $aksClusterId \

--only-show-errors 1>/dev/null

 

if [[ $? == 0 ]]; then

echo "[$userPrincipalName] user successfully assigned to the [$role] role on the [$aksName] AKS cluster"

else

echo "Failed to assign the [$userPrincipalName] user to the [$role] role on the [$aksName] AKS cluster"

exit

fi

fi

 

# Assign Azure Kubernetes Service Cluster Admin Role role to the current user

role="Azure Kubernetes Service Cluster Admin Role"

echo "Checking if [$userPrincipalName] user has been assigned to [$role] role on the [$aksName] AKS cluster..."

current=$(az role assignment list \

--assignee $userObjectId \

--scope $aksClusterId \

--query "[?roleDefinitionName==$role].roleDefinitionName" \

--output tsv 2>/dev/null)

 

if [[ $current == "Owner" ]] || [[ $current == "Contributor" ]] || [[ $current == "$role" ]]; then

echo "[$userPrincipalName] user is already assigned to the [$current] role on the [$aksName] AKS cluster"

else

echo "[$userPrincipalName] user is not assigned to the [$role] role on the [$aksName] AKS cluster"

echo "Assigning the [$userPrincipalName] user to the [$role] role on the [$aksName] AKS cluster..."

 

az role assignment create \

--role "$role" \

--assignee $userObjectId \

--scope $aksClusterId \

--only-show-errors 1>/dev/null

 

if [[ $? == 0 ]]; then

echo "[$userPrincipalName] user successfully assigned to the [$role] role on the [$aksName] AKS cluster"

else

echo "Failed to assign the [$userPrincipalName] user to the [$role] role on the [$aksName] AKS cluster"

exit

fi

fi

 

# Get the FQDN of the Azure Front Door endpoint

azureFrontDoorEndpointFqdn=$(az deployment group show \

--name $deploymentName \

--resource-group $resourceGroupName \

--query properties.outputs.frontDoorEndpointFqdn.value \

--output tsv)

 

if [[ -n $azureFrontDoorEndpointFqdn ]]; then

echo "FQDN of the Azure Front Door endpoint: $azureFrontDoorEndpointFqdn"

else

echo "Failed to get the FQDN of the Azure Front Door endpoint"

exit -1

fi

 

# Get the private link service name

privateLinkServiceName=$(az deployment group show \

--name $deploymentName \

--resource-group $resourceGroupName \

--query properties.outputs.privateLinkServiceName.value \

--output tsv)

 

if [[ -z $privateLinkServiceName ]]; then

echo "Failed to get the private link service name"

exit -1

fi

 

# Get the resource id of the Private Endpoint Connection

privateEndpointConnectionId=$(az network private-endpoint-connection list \

--name $privateLinkServiceName \

--resource-group $resourceGroupName \

--type Microsoft.Network/privateLinkServices \

--query [0].id \

--output tsv)

 

if [[ -n $privateEndpointConnectionId ]]; then

echo "Resource id of the Private Endpoint Connection: $privateEndpointConnectionId"

else

echo "Failed to get the resource id of the Private Endpoint Connection"

exit -1

fi

 

# Approve the private endpoint connection

echo "Approving [$privateEndpointConnectionId] private endpoint connection ID..."

az network private-endpoint-connection approve \

--name $privateLinkServiceName \

--resource-group $resourceGroupName \

--id $privateEndpointConnectionId \

--description "Approved" 1>/dev/null

 

if [[ $? == 0 ]]; then

echo "[$privateEndpointConnectionId] private endpoint connection ID successfully approved"

else

echo "Failed to approve [$privateEndpointConnectionId] private endpoint connection ID"

exit -1

fi

 

 

 

The last steps of the Bash script perform the following actions:

 

 

If you skip these steps, Azure Front Door cannot invoke the httpbin web application via the Azure Private Link Service and the kubernetes-internal internal load balancer of the AKS cluster.

 

 

 

Bicep Modules

 

 

The companion sample contains several modules. The following module is used to create and configure Azure Front Door and its child resources.

 

 

 

// Parameters

@description('Specifies the name of the Azure Front Door.')

param frontDoorName string

 

@description('The name of the SKU to use when creating the Front Door profile.')

@allowed([

'Standard_AzureFrontDoor'

'Premium_AzureFrontDoor'

])

param frontDoorSkuName string = 'Premium_AzureFrontDoor'

 

@description('Specifies the send and receive timeout on forwarding request to the origin. When timeout is reached, the request fails and returns.')

param originResponseTimeoutSeconds int = 30

 

@description('Specifies the name of the Azure Front Door Origin Group for the web application.')

param originGroupName string

 

@description('Specifies the name of the Azure Front Door Origin for the web application.')

param originName string

 

@description('Specifies the address of the origin. Domain names, IPv4 addresses, and IPv6 addresses are supported. This should be unique across all origins in an endpoint.')

param hostName string

 

@description('Specifies the value of the HTTP port. Must be between 1 and 65535.')

param httpPort int = 80

 

@description('Specifies the value of the HTTPS port. Must be between 1 and 65535.')

param httpsPort int = 443

 

@description('Specifies the host header value sent to the origin with each request. If you leave this blank, the request hostname determines this value. Azure Front Door origins, such as Web Apps, Blob Storage, and Cloud Services require this host header value to match the origin hostname by default. This overrides the host header defined at Endpoint.')

param originHostHeader string

 

@description('Specifies the priority of the origin in a given origin group for load balancing. Higher priorities will not be used for load balancing if any lower priority origin is healthy. Must be between 1 and 5.')

@minValue(1)

@maxValue(5)

param priority int = 1

 

@description('Specifies the weight of the origin in a given origin group for load balancing. Must be between 1 and 1000.')

@minValue(1)

@maxValue(1000)

param weight int = 1000

 

@description('Specifies whether to enable health probes to be made against backends defined under backendPools. Health probes can only be disabled if there is a single enabled backend in a single enabled backend pool.')

@allowed([

'Enabled'

'Disabled'

])

param originEnabledState string = 'Enabled'

 

@description('Specifies the resource id of a private link service.')

param privateLinkResourceId string

 

@description('Specifies the number of samples to consider for load balancing decisions.')

param sampleSize int = 4

 

@description('Specifies the number of samples within the sample period that must succeed.')

param successfulSamplesRequired int = 3

 

@description('Specifies the additional latency in milliseconds for probes to fall into the lowest latency bucket.')

param additionalLatencyInMilliseconds int = 50

 

@description('Specifies path relative to the origin that is used to determine the health of the origin.')

param probePath string = '/'

 

@description('Specifies the health probe request type.')

@allowed([

'GET'

'HEAD'

'NotSet'

])

param probeRequestType string = 'GET'

 

@description('Specifies the health probe protocol.')

@allowed([

'Http'

'Https'

'NotSet'

])

param probeProtocol string = 'Http'

 

@description('Specifies the number of seconds between health probes. The default is 240 seconds.')

param probeIntervalInSeconds int = 60

 

@description('Specifies whether to allow session affinity on this host. Valid options are Enabled or Disabled.')

@allowed([

'Enabled'

'Disabled'

])

param sessionAffinityState string = 'Disabled'

 

@description('Specifies the endpoint name reuse scope. The default value is TenantReuse.')

@allowed([

'NoReuse'

'ResourceGroupReuse'

'SubscriptionReuse'

'TenantReuse'

])

param autoGeneratedDomainNameLabelScope string = 'TenantReuse'

 

@description('Specifies the name of the Azure Front Door Route for the web application.')

param routeName string

 

@description('Specifies the domains referenced by the endpoint.')

param customDomains array = []

 

@description('Specifies a directory path on the origin that Azure Front Door can use to retrieve content from, e.g. contoso.cloudapp.net/originpath.')

param originPath string = '/'

 

@description('Specifies the rule sets referenced by this endpoint.')

param ruleSets array = []

 

@description('Specifies the list of supported protocols for this route')

param supportedProtocols array = [

'Http'

'Https'

]

 

@description('Specifies the route patterns of the rule.')

param routePatternsToMatch array = [ '/*' ]

 

@description('Specifies the protocol this rule will use when forwarding traffic to backends.')

@allowed([

'HttpOnly'

'HttpsOnly'

'MatchRequest'

])

param forwardingProtocol string = 'HttpOnly'

 

@description('Specifies whether this route will be linked to the default endpoint domain.')

@allowed([

'Enabled'

'Disabled'

])

param linkToDefaultDomain string = 'Enabled'

 

@description('Specifies whether to automatically redirect HTTP traffic to HTTPS traffic. Note that this is an easy way to set up this rule, and it will be the first rule that gets executed.')

@allowed([

'Enabled'

'Disabled'

])

param httpsRedirect string = 'Enabled'

 

@description('Specifies the name of the Azure Front Door Endpoint for the web application.')

param endpointName string

 

@description('Specifies whether to enable use of this rule. Permitted values are Enabled or Disabled')

@allowed([

'Enabled'

'Disabled'

])

param endpointEnabledState string = 'Enabled'

 

@description('Specifies the name of the Azure Front Door WAF policy.')

param wafPolicyName string

 

@description('Specifies whether the WAF policy is in detection mode or prevention mode.')

@allowed([

'Detection'

'Prevention'

])

param wafPolicyMode string = 'Prevention'

 

@description('Specifies if the policy is in enabled or disabled state. Defaults to Enabled if not specified.')

param wafPolicyEnabledState string = 'Enabled'

 

@description('Specifies the list of managed rule sets to configure on the WAF.')

param wafManagedRuleSets array = []

 

@description('Specifies the list of custom rules to configure on the WAF.')

param wafCustomRules array = []

 

@description('Specifies if the WAF policy managed rules will inspect the request body content.')

@allowed([

'Enabled'

'Disabled'

])

param wafPolicyRequestBodyCheck string = 'Enabled'

 

@description('Specifies name of the security policy.')

param securityPolicyName string

 

@description('Specifies the list of patterns to match by the security policy.')

param securityPolicyPatternsToMatch array = [ '/*' ]

 

@description('Specifies the resource id of the Log Analytics workspace.')

param workspaceId string

 

@description('Specifies the workspace data retention in days.')

param retentionInDays int = 60

 

@description('Specifies the location.')

param location string = resourceGroup().location

 

@description('Specifies the resource tags.')

param tags object

 

 

// Variables

 

var diagnosticSettingsName = 'diagnosticSettings'

var logCategories = [

'FrontDoorAccessLog'

'FrontDoorHealthProbeLog'

'FrontDoorWebApplicationFirewallLog'

]

var metricCategories = [

'AllMetrics'

]

var logs = [for category in logCategories: {

category: category

enabled: true

retentionPolicy: {

enabled: true

days: retentionInDays

}

}]

var metrics = [for category in metricCategories: {

category: category

enabled: true

retentionPolicy: {

enabled: true

days: retentionInDays

}

}]

 

// Resources

resource frontDoor 'Microsoft.Cdn/profiles@2022-11-01-preview' = {

name: frontDoorName

location: 'Global'

tags: tags

sku: {

name: frontDoorSkuName

}

properties: {

originResponseTimeoutSeconds: originResponseTimeoutSeconds

extendedProperties: {

}

}

}

 

resource originGroup 'Microsoft.Cdn/profiles/origingroups@2022-11-01-preview' = {

parent: frontDoor

name: originGroupName

properties: {

loadBalancingSettings: {

sampleSize: sampleSize

successfulSamplesRequired: successfulSamplesRequired

additionalLatencyInMilliseconds: additionalLatencyInMilliseconds

}

healthProbeSettings: {

probePath: probePath

probeRequestType: probeRequestType

probeProtocol: probeProtocol

probeIntervalInSeconds: probeIntervalInSeconds

}

sessionAffinityState: sessionAffinityState

}

}

 

resource origin 'Microsoft.Cdn/profiles/origingroups/origins@2022-11-01-preview' = {

parent: originGroup

name: originName

properties: {

hostName: hostName

httpPort: httpPort

httpsPort: httpsPort

originHostHeader: originHostHeader

priority: priority

weight: weight

enabledState: originEnabledState

sharedPrivateLinkResource: empty(privateLinkResourceId) ? {} : {

privateLink: {

id: privateLinkResourceId

}

privateLinkLocation: location

status: 'Approved'

requestMessage: 'Please approve this request to allow Front Door to access the AKS-hosted workload'

}

enforceCertificateNameCheck: true

}

}

 

resource endpoint 'Microsoft.Cdn/profiles/afdEndpoints@2022-11-01-preview' = {

parent: frontDoor

name: endpointName

location: 'Global'

properties: {

autoGeneratedDomainNameLabelScope: toUpper(autoGeneratedDomainNameLabelScope)

enabledState: endpointEnabledState

}

}

 

resource route 'Microsoft.Cdn/profiles/afdEndpoints/routes@2022-11-01-preview' = {

parent: endpoint

name: routeName

properties: {

customDomains: customDomains

originGroup: {

id: originGroup.id

}

originPath: originPath

ruleSets: ruleSets

supportedProtocols: supportedProtocols

patternsToMatch: routePatternsToMatch

forwardingProtocol: forwardingProtocol

linkToDefaultDomain: linkToDefaultDomain

httpsRedirect: httpsRedirect

}

dependsOn: [

origin

]

}

 

resource wafPolicy 'Microsoft.Network/FrontDoorWebApplicationFirewallPolicies@2022-05-01' = {

name: wafPolicyName

location: 'Global'

tags: tags

sku: {

name: frontDoorSkuName

}

properties: {

policySettings: {

enabledState: wafPolicyEnabledState

mode: wafPolicyMode

requestBodyCheck: wafPolicyRequestBodyCheck

}

managedRules: {

managedRuleSets: wafManagedRuleSets

}

customRules: {

rules: wafCustomRules

}

}

}

 

resource securityPolicy 'Microsoft.Cdn/profiles/securitypolicies@2022-11-01-preview' = {

parent: frontDoor

name: securityPolicyName

properties: {

parameters: {

type: 'WebApplicationFirewall'

wafPolicy: {

id: wafPolicy.id

}

associations: [

{

domains: [

{

id: endpoint.id

}

]

patternsToMatch: securityPolicyPatternsToMatch

}

]

 

}

}

}

 

// Diagnostics Settings

resource diagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {

name: diagnosticSettingsName

scope: frontDoor

properties: {

workspaceId: workspaceId

logs: logs

metrics: metrics

}

}

 

// Outputs

output id string = frontDoor.id

output name string = frontDoor.name

output endpointFqdn string = endpoint.properties.hostName

 

 

 

Deployment Script

 

 

The sample makes use of a Deployment Script to run the install-helm-charts-and-app.sh Bash script, which installs the httpbin web application via YAML templates and the following packages on the AKS cluster via Helm. For more information on deployment scripts, see Use deployment scripts in Bicep.

 

 

 

 

# Install kubectl

az aks install-cli --only-show-errors

 

# Get AKS credentials

az aks get-credentials \

--admin \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--only-show-errors

 

# Check if the cluster is private or not

private=$(az aks show --name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--query apiServerAccessProfile.enablePrivateCluster \

--output tsv)

 

# Install Helm

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 -o get_helm.sh -s

chmod 700 get_helm.sh

./get_helm.sh &>/dev/null

 

# Add Helm repos

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

helm repo add jetstack https://charts.jetstack.io

 

# Update Helm repos

helm repo update

 

if [[ $private == 'true' ]]; then

# Log whether the cluster is public or private

echo "$clusterName AKS cluster is public"

 

# Install Prometheus

command="helm install prometheus prometheus-community/kube-prometheus-stack \

--create-namespace \

--namespace prometheus \

--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \

--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false"

 

az aks command invoke \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--command "$command"

 

# Install NGINX ingress controller using the internal load balancer

command="helm install nginx-ingress ingress-nginx/ingress-nginx \

--create-namespace \

--namespace ingress-basic \

--set controller.config.enable-modsecurity=true \

--set controller.config.enable-owasp-modsecurity-crs=true \

--set controller.config.modsecurity-snippet=\

'SecRuleEngine On

SecRequestBodyAccess On

SecAuditLog /dev/stdout

SecAuditLogFormat JSON

SecAuditEngine RelevantOnly' \

--set controller.replicaCount=3 \

--set controller.nodeSelector.\"kubernetes\.io/os\"=linux \

--set defaultBackend.nodeSelector.\"kubernetes\.io/os\"=linux \

--set controller.metrics.enabled=true \

--set controller.metrics.serviceMonitor.enabled=true \

--set controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\" \

--set controller.service.annotations.\"service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path\"=/healthz \

--set controller.service.annotations.\"service\.beta\.kubernetes\.io/azure-load-balancer-internal\"=true"

 

az aks command invoke \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--command "$command"

 

# Install certificate manager

command="helm install cert-manager jetstack/cert-manager \

--create-namespace \

--namespace cert-manager \

--set installCRDs=true \

--set nodeSelector.\"kubernetes\.io/os\"=linux"

 

az aks command invoke \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--command "$command"

 

# Create cluster issuer

command="cat <<EOF | kubectl apply -f -

apiVersion: cert-manager.io/v1

kind: ClusterIssuer

metadata:

name: letsencrypt-nginx

spec:

acme:

server: https://acme-v02.api.letsencrypt.org/directory

email: $email

privateKeySecretRef:

name: letsencrypt

solvers:

- http01:

ingress:

class: nginx

podTemplate:

spec:

nodeSelector:

"kubernetes.io/os": linux

EOF"

 

az aks command invoke \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--command "$command"

 

# Create a namespace for the application

command="kubectl create namespace $namespace"

 

az aks command invoke \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--command "$command"

 

# Create a deployment and service for the application

command="cat <<EOF | kubectl apply -n $namespace -f -

apiVersion: apps/v1

kind: Deployment

metadata:

name: httpbin

spec:

replicas: 3

selector:

matchLabels:

app: httpbin

template:

metadata:

labels:

app: httpbin

spec:

topologySpreadConstraints:

- maxSkew: 1

topologyKey: topology.kubernetes.io/zone

whenUnsatisfiable: DoNotSchedule

labelSelector:

matchLabels:

app: httpbin

- maxSkew: 1

topologyKey: kubernetes.io/hostname

whenUnsatisfiable: DoNotSchedule

labelSelector:

matchLabels:

app: httpbin

nodeSelector:

"kubernetes.io/os": linux

containers:

- image: docker.io/kennethreitz/httpbin

imagePullPolicy: IfNotPresent

name: httpbin

resources:

requests:

memory: "64Mi"

cpu: "125m"

limits:

memory: "128Mi"

cpu: "250m"

ports:

- containerPort: 80

env:

- name: PORT

value: "80"

---

apiVersion: v1

kind: Service

metadata:

name: httpbin

spec:

ports:

- port: 80

targetPort: 80

protocol: TCP

type: ClusterIP

selector:

app: httpbin

EOF"

 

az aks command invoke \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--command "$command"

 

# Create an ingress resource for the application

command="cat <<EOF | kubectl apply -n $namespace -f -

apiVersion: networking.k8s.io/v1

kind: Ingress

metadata:

name: httpbin

spec:

ingressClassName: nginx

rules:

- host: $hostName

http:

paths:

- path: /

pathType: Prefix

backend:

service:

name: httpbin

port:

number: 80

EOF"

 

az aks command invoke \

--name $clusterName \

--resource-group $resourceGroupName \

--subscription $subscriptionId \

--command "$command"

 

else

# Log whether the cluster is public or private

echo "$clusterName AKS cluster is private"

 

# Install Prometheus

helm install prometheus prometheus-community/kube-prometheus-stack \

--create-namespace \

--namespace prometheus \

--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \

--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false

 

# Install NGINX ingress controller using the internal load balancer

helm install nginx-ingress ingress-nginx/ingress-nginx \

--create-namespace \

--namespace ingress-basic \

--set controller.config.enable-modsecurity=true \

--set controller.config.enable-owasp-modsecurity-crs=true \

--set controller.config.modsecurity-snippet='SecRuleEngine On

SecRequestBodyAccess On

SecAuditLog /dev/stdout

SecAuditLogFormat JSON

SecAuditEngine RelevantOnly' \

--set controller.replicaCount=3 \

--set controller.nodeSelector."kubernetes\.io/os"=linux \

--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \

--set controller.metrics.enabled=true \

--set controller.metrics.serviceMonitor.enabled=true \

--set controller.metrics.serviceMonitor.additionalLabels.release="prometheus" \

--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \

--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true

 

helm install $nginxReleaseName $nginxRepoName/$nginxChartName \

--create-namespace \

--namespace $nginxNamespace

 

# Install certificate manager

helm install cert-manager jetstack/cert-manager \

--create-namespace \

--namespace cert-manager \

--set installCRDs=true \

--set nodeSelector."kubernetes\.io/os"=linux

 

# Create cluster issuer

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-nginx
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $email
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
EOF

 

# Create a namespace for the application

kubectl create namespace $namespace

 

# Create a deployment and service for the application

cat <<EOF | kubectl apply -n $namespace -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: httpbin
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: httpbin
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: httpbin
EOF

 

# Create an ingress resource for the application

cat <<EOF | kubectl apply -n $namespace -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
spec:
  ingressClassName: nginx
  rules:
  - host: $hostName
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin
            port:
              number: 80
EOF

 

fi

 

# Create output as JSON file

echo '{}' |

jq --arg x 'prometheus' '.prometheus=$x' |

jq --arg x 'cert-manager' '.certManager=$x' |

jq --arg x 'ingress-basic' '.nginxIngressController=$x' >$AZ_SCRIPTS_OUTPUT_PATH

 

 

 

As you can note, when deploying the NGINX Ingress Controller via Helm, the service.beta.kubernetes.io/azure-load-balancer-internal annotation is used to create the kubernetes-internal internal load balancer in the node resource group of the AKS cluster and expose the ingress controller service via a private IP address.
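To double-check that the annotation took effect, you can inspect the ingress controller service: its external IP should be a private address from the AKS subnet rather than a public one. The following is a minimal sketch; the service name depends on the Helm release name used above.

# Minimal sketch: verify that the ingress controller service got a private IP address
# from the kubernetes-internal internal load balancer (the service name reflects the
# nginx-ingress Helm release used in this sample).
kubectl get service nginx-ingress-ingress-nginx-controller \
  --namespace ingress-basic \
  --output wide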

 

In this sample, the httpbin web application is deployed via YAML templates. In particular, an ingress resource is used to expose the application via the NGINX Ingress Controller over the HTTP protocol using the httpbin.local hostname. The ingress object can be easily modified to expose the server via HTTPS and provide a certificate for TLS termination. You can use cert-manager to issue a Let's Encrypt certificate. For more information, see Securing NGINX-ingress. In particular, cert-manager can create and then delete DNS-01 records in Azure DNS, but it needs to authenticate to Azure first. The suggested authentication method is Managed Identity Using AAD Workload Identity.
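As a hedged sketch only, the ingress manifest could be extended along the following lines to request a certificate from the letsencrypt-nginx cluster issuer created by the deployment script and terminate TLS at the NGINX Ingress Controller. The httpbin.example.com hostname is a placeholder and assumes a publicly resolvable domain for the HTTP-01 challenge.

cat <<EOF | kubectl apply -n $namespace -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  annotations:
    # Placeholder issuer name: matches the ClusterIssuer created by the deployment script
    cert-manager.io/cluster-issuer: letsencrypt-nginx
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - httpbin.example.com
    secretName: httpbin-tls
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin
            port:
              number: 80
EOF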

 

 

 

Alternative Solution

 

 

Azure Private Link Service (PLS) is an infrastructure component that allows users to connect privately from an Azure Private Endpoint (PE) in a virtual network to a Frontend IP Configuration associated with an internal or public Azure Load Balancer (ALB). With Private Link, users as service providers can securely provide their services to consumers, who can connect from within Azure or on-premises without data exfiltration risks.

 

Before Private Link Service integration, users who wanted private connectivity from on-premises or other virtual networks to their services in an Azure Kubernetes Service (AKS) cluster were required to create a Private Link Service (PLS) to reference the cluster Azure Load Balancer, as in this sample. The user would then create an Azure Private Endpoint (PE) to connect to the PLS to enable private connectivity. With the Azure Private Link Service Integration feature, a managed Azure Private Link Service (PLS) to the AKS cluster load balancer can be created automatically, and the user is only required to create Private Endpoint connections to it for private connectivity. You can expose a Kubernetes service via a Private Link Service using annotations. For more information, see Azure Private Link Service Integration.
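For example, the following is a minimal sketch (not part of the companion sample) of a Kubernetes service of type LoadBalancer that asks the Azure cloud provider to create a managed Private Link Service in front of the internal load balancer frontend; the PLS name and subnet are placeholders.

cat <<EOF | kubectl apply -n $namespace -f -
apiVersion: v1
kind: Service
metadata:
  name: httpbin-pls
  annotations:
    # Expose the service via the internal load balancer and a managed Private Link Service.
    # The PLS name and the subnet used for its NAT IP configuration are placeholders.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: httpbin-pls
    service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: PrivateLinkServiceSubnet
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: httpbin
EOF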

 

 

 

CI/CD and GitOps Considerations

 

 

Azure Private Link Service Integration simplifies the creation of an Azure Private Link Service (PLS) when deploying Kubernetes services or ingress controllers via a classic CI/CD pipeline using Azure DevOps, GitHub Actions, Jenkins, or GitLab, as well as when using a GitOps approach with Argo CD or Flux v2.

 

For every workload that you expose via Azure Private Link Service (PLS) and Azure Front Door Premium, you need to create an Origin Group, an Origin, an Endpoint, a Route, and a Security Policy if you want to protect the workload with a WAF policy. You can accomplish this task using the az network front-door or az afd Azure CLI commands in the CD pipeline used to deploy your service.
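As a rough, hedged sketch, the sequence could look like the following Azure CLI commands, which use the az afd command group available for Front Door Standard/Premium profiles; all names, the origin host name, and the Private Link Service resource ID are placeholders.

# Minimal sketch (placeholder values): create an origin group, a Private Link-enabled origin,
# and a route for a new workload on an existing Azure Front Door Premium profile.
az afd origin-group create \
  --resource-group myResourceGroup \
  --profile-name myFrontDoor \
  --origin-group-name myOriginGroup \
  --probe-request-type GET \
  --probe-protocol Http \
  --probe-interval-in-seconds 60 \
  --probe-path / \
  --sample-size 4 \
  --successful-samples-required 3 \
  --additional-latency-in-milliseconds 50

az afd origin create \
  --resource-group myResourceGroup \
  --profile-name myFrontDoor \
  --origin-group-name myOriginGroup \
  --origin-name myOrigin \
  --host-name 10.240.0.100 \
  --origin-host-header httpbin.local \
  --http-port 80 \
  --priority 1 \
  --weight 1000 \
  --enable-private-link true \
  --private-link-resource <private-link-service-resource-id> \
  --private-link-location westeurope \
  --private-link-request-message "Please approve this connection"

az afd route create \
  --resource-group myResourceGroup \
  --profile-name myFrontDoor \
  --endpoint-name myEndpoint \
  --route-name myRoute \
  --origin-group myOriginGroup \
  --supported-protocols Http Https \
  --forwarding-protocol HttpOnly \
  --https-redirect Enabled \
  --link-to-default-domain Enabled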

 

 

 

Test the application

 

 

If the deployment succeeds, and the private endpoint connection from the Azure Front Door Premium instance to the Azure Private Link Service (PLS) is approved, you should be able to access the AKS-hosted httpbin web application as follows:

 

  • Navigate to the overview page of your Front Door Premium in the Azure Portal and copy the URL from the Endpoint hostname, as shown in the following picture

 

 

[Screenshot: Front Door Premium overview page showing the Endpoint hostname]

 

 

 

 

  • Paste and open the URL in your favorite internet browser. You should see the user interface of the httpbin application:

 

 

[Screenshot: httpbin web application user interface]

 

 

 

You can use the bicep/calls.sh Bash script to simulate a few attacks and see the managed rule set and custom rule of the Azure Web Application Firewall in action.

 

 

 

#!/bin/bash

 

# Variables

url="<Front Door Endpoint Hostname URL>"

 

# Call REST API

echo "Calling REST API..."

curl -I -s "$url"

 

# Simulate SQL injection

echo "Simulating SQL injection..."

curl -I -s "${url}?users=ExampleSQLInjection%27%20--"

 

# Simulate XSS

echo "Simulating XSS..."

curl -I -s "${url}?users=ExampleXSS%3Cscript%3Ealert%28%27XSS%27%29%3C%2Fscript%3E"

 

# A custom rule blocks any request with the word blockme in the querystring.

echo "Simulating query string manipulation with the 'attack' word in the query string..."

curl -I -s "${url}?task=blockme"

 

 

 

The Bash script should produce the following output, where the first call succeeds, while the remaining ones are blocked by the WAF Policy configured in prevention mode.

 

 

 

Calling REST API...

HTTP/2 200

content-length: 9593

content-type: text/html; charset=utf-8

accept-ranges: bytes

vary: Accept-Encoding

access-control-allow-origin: *

access-control-allow-credentials: true

x-azure-ref: 05mwQZAAAAADma91JbmU0TJqRqS2lyFurTUlMMzBFREdFMDYwOQA3YTk2NzZiMS0xZmRjLTQ0OWYtYmI1My1hNDUxMDVjNGZmYmM=

x-cache: CONFIG_NOCACHE

date: Tue, 14 Mar 2023 12:47:33 GMT

 

Simulating SQL injection...

HTTP/2 403

x-azure-ref: 05mwQZAAAAABaQCSGQToQT4tifYGpmsTmTUlMMzBFREdFMDYxNQA3YTk2NzZiMS0xZmRjLTQ0OWYtYmI1My1hNDUxMDVjNGZmYmM=

date: Tue, 14 Mar 2023 12:47:34 GMT

 

Simulating XSS...

HTTP/2 403

x-azure-ref: 05mwQZAAAAAAJZzCrTmN4TLY+bZOxskzOTUlMMzBFREdFMDYxMwA3YTk2NzZiMS0xZmRjLTQ0OWYtYmI1My1hNDUxMDVjNGZmYmM=

date: Tue, 14 Mar 2023 12:47:33 GMT

 

Simulating query string manipulation with the 'attack' word in the query string...

HTTP/2 403

x-azure-ref: 05mwQZAAAAADAle0hOg4FTYH6Q1LHIP50TUlMMzBFREdFMDYyMAA3YTk2NzZiMS0xZmRjLTQ0OWYtYmI1My1hNDUxMDVjNGZmYmM=

date: Tue, 14 Mar 2023 12:47:33 GMT

 

 

 

Front Door WAF Policies and Application Gateway WAF policies can be configured to run in the following two modes:

 


  • Detection mode: When run in detection mode, WAF takes no action other than monitoring and logging the request and its matched WAF rule to the WAF logs. You can turn on logging diagnostics for Front Door. When you use the portal, go to the Diagnostics section.
     
     

  • Prevention mode: In prevention mode, WAF takes the specified action if a request matches a rule. No further rules with lower priority are evaluated if a match is found. Any matched requests are also logged in the WAF logs.
     

 

For more information, see Azure Web Application Firewall on Azure Front Door.
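One way to review those logs, sketched below under the assumption that the diagnostic settings deployed by the sample send the FrontDoorWebApplicationFirewallLog category to the Log Analytics workspace, is to query them with the Azure CLI; the workspace GUID is a placeholder and the column names may differ slightly depending on the log schema.

# Minimal sketch (placeholder workspace GUID): list the most recent requests matched
# by the Front Door WAF policy.
az monitor log-analytics query \
  --workspace <log-analytics-workspace-guid> \
  --analytics-query "AzureDiagnostics | where Category == 'FrontDoorWebApplicationFirewallLog' | project TimeGenerated, requestUri_s, ruleName_s, action_s | order by TimeGenerated desc | take 20" \
  --output table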

 

 

 

Review deployed resources

 

 

Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.

 

Azure CLI

 

 

az resource list --resource-group <resource-group-name>

PowerShell

 

 

Get-AzResource -ResourceGroupName <resource-group-name>

Clean up resources

 

 

When you no longer need the resources you created, delete the resource group. This will remove all the Azure resources.

 

Azure CLI

 

 

az group delete --name <resource-group-name>

PowerShell

 

 

Remove-AzResourceGroup -Name <resource-group-name>

Next Steps

 

 

You could add a custom domain to your Front Door. If you use Azure DNS to manage your domain, you could extend the Bicep modules to automatically create a custom domain for your Front Door and create a CNAME DNS record in your public DNS zone.
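As a hedged sketch, assuming a contoso.com public DNS zone hosted in Azure DNS and a Front Door managed certificate, the custom domain and CNAME record could be created with the following Azure CLI commands; all names are placeholders.

# Minimal sketch (placeholder names and domain): add a custom domain with a managed
# certificate to the Front Door profile and create the corresponding CNAME record
# pointing at the Front Door endpoint hostname (for example, <endpoint>.z01.azurefd.net).
az afd custom-domain create \
  --resource-group myResourceGroup \
  --profile-name myFrontDoor \
  --custom-domain-name myCustomDomain \
  --host-name www.contoso.com \
  --minimum-tls-version TLS12 \
  --certificate-type ManagedCertificate

az network dns record-set cname set-record \
  --resource-group myDnsResourceGroup \
  --zone-name contoso.com \
  --record-set-name www \
  --cname <front-door-endpoint-hostname>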

 
