
Guest nadavbh
Posted

Introduction

Last week at Build, we introduced a new feature in Public Preview for Azure Kubernetes Service (AKS) called Deployment Safeguards.

 

Deployment Safeguards, part of Azure Policy for AKS, provides a fast, configurable way to make sure your Kubernetes deployments follow best practices and predefined limits. In this article, we will explore how it works in practice and how we can use it to tailor AKS to your needs.

Playground Setup

For the sake of this article, I'll create a new cluster from scratch. There are a few things we need to set up first.

For my test environment, I'm running these commands on WSL/Ubuntu with a local Azure CLI.

If you're not logged in, execute az login and then choose the right subscription. If you're using Azure CLI with Login Experience v2, you can simply choose the subscription from the drop-down and disregard the second command:

az login
az account set -s "your-subscription-id"

Deployment Safeguards is currently a preview feature, so we'll need to make sure the aks-preview extension is installed and up to date:
az extension add --name aks-preview
az extension update --name aks-preview

Next, register the Deployment Safeguards feature flag:
[iCODE]az feature register --namespace Microsoft.ContainerService --name SafeguardsPreview[/iCODE]

This will take a couple of minutes; the end result should show as Registered:
[Screenshot: the feature's registration state shows "Registered"]
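If you'd rather not re-run the check by hand, a small helper can poll the flag until it registers. This is a sketch under the assumption that az feature show is available in your Azure CLI; the function name is ours:

```shell
# Hypothetical helper (the name is ours): poll an Azure feature flag until it
# reports "Registered". Assumes `az` is installed and you are logged in.
wait_for_feature() {
  local ns="$1" name="$2" state=""
  while [ "${state}" != "Registered" ]; do
    state=$(az feature show --namespace "${ns}" --name "${name}" \
      --query 'properties.state' --output tsv)
    echo "Current state: ${state:-unknown}"
    [ "${state}" = "Registered" ] || sleep 30
  done
}

# Usage:
# wait_for_feature Microsoft.ContainerService SafeguardsPreview
```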

 

 

 

Next, refresh the Microsoft.ContainerService resource provider so the change takes effect:

[iCODE]az provider register --namespace Microsoft.ContainerService[/iCODE]

Create a new test resource group and AKS cluster:
az group create --name safeguard-test --location eastus
az aks create --name safeaks --resource-group safeguard-test --node-count 2 --location eastus --enable-addons azure-policy --safeguards-level Warning --safeguards-version v2.0.0

This creates a new Azure Kubernetes Service (AKS) cluster with two nodes, with the Azure Policy for AKS add-on enabled, the safeguards level set to Warning, and the safeguards version set to v2.0.0. The node count is set to 2 to allow for faster creation and a bit of redundancy.

For this article I created two clusters: one with the safeguards level set to Warning and one set to Enforcement.

The first cluster uses Warning because we want to experiment with the feature: Warning notifies us when a resource manifest is out of policy but does not block it. Setting the safeguards level to Enforcement will block resource manifests that do not adhere to the configured safeguards, and will mutate the ones it can fix so they comply instead of being blocked.

You can enable Deployment Safeguards on an existing cluster using az aks update:
[iCODE]az aks update --name clustername --resource-group resourcegroup --safeguards-level Warning --safeguards-version v2.0.0[/iCODE]

You can also switch a cluster's safeguards level between Warning and Enforcement, again using az aks update:
[iCODE]az aks update --name safeaks --resource-group safeguard-test --safeguards-level Enforcement[/iCODE]

If you wish to turn off Deployment Safeguards completely:
[iCODE]az aks update --name safeaks --resource-group safeguard-test --safeguards-level Off[/iCODE]

That wraps up the prerequisites.

Deployment Safeguards in Action

 

 

 

After the cluster is created, please allow at least 30 minutes for Deployment Safeguards and Azure Policy for AKS to successfully sync.

If you've followed along with the new cluster creation, point kubectl at the new cluster's context:

[iCODE]az aks get-credentials --name safeaks --resource-group safeguard-test[/iCODE]
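While waiting for the sync, you can optionally peek at the policy components themselves. This sketch assumes the Azure Policy add-on runs Gatekeeper in the gatekeeper-system namespace and its own azure-policy pods in kube-system; the helper name is ours, so adjust as needed:

```shell
# Optional sanity check (assumption: the Azure Policy add-on deploys
# Gatekeeper into gatekeeper-system and azure-policy pods into kube-system).
policy_addon_pods() {
  kubectl get pods -n gatekeeper-system
  kubectl get pods -n kube-system | grep azure-policy
}

# Usage:
# policy_addon_pods
```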

 

 

 

 

 

 

 

 

 

Let's run kubectl get nodes -o wide just to verify connectivity:

[iCODE]kubectl get nodes -o wide[/iCODE]

Output should look like this:

[Screenshot: kubectl get nodes -o wide listing the two cluster nodes as Ready]

Testing Deployment Safeguards

While the entirety of available safeguard policies is listed here, we will focus on resource limits enforcement, together with a few others which I'll explain below.

Let's create a plain pod that runs an Nginx image, without any special configuration, and save it as no-limits.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: no-limits-here
spec:
  containers:
  - name: nginx
    image: nginx

Let's apply it to our Warning-level cluster and see what happens:

[iCODE]kubectl apply -f no-limits.yaml[/iCODE]

We're immediately presented with the following output:

[Screenshot: kubectl apply succeeds but prints a warning for each violated safeguard]

Let's break it down: Deployment Safeguards expects a liveness probe, a readiness probe, resource limits, and an image pull secret. But since the cluster is set to Warning, it allows the manifest to go through.

In a cluster where the safeguards level is set to Enforcement, the pod is blocked from being scheduled:

[Screenshot: kubectl apply is denied by the Enforcement-level safeguards]

Let's "fix" our pod to adhere to some of the policies, but keep it without resource limits:

apiVersion: v1
kind: Secret
metadata:
  name: registrykey
  namespace: default
data:
  .dockerconfigjson: >-
    eyJhdXRocyI6eyJodHRwczovL215LXNlY3VyZS1yZWdpc3RyeS5jb20iOnsidXNlcm5hbWUiOiJkb2NrZXIt
    dXNlciIsInBhc3N3b3JrIjoic2VjdXJlLXBhc3N3b3JkIiwiZW1haWwiOiJ1c2VyQGV4YW1wbGUuY29tIn19fQ==
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: Pod
metadata:
  name: no-limits-here
spec:
  containers:
  - name: nginx
    image: my-awesome-registry.com/nginx:latest
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
  imagePullSecrets:
  - name: registrykey

This "fixed" pod adheres to the readiness and liveness probe safeguards and adds a pseudo pull secret, but it still does not adhere to the resource limits safeguard.

Important note: the dockerconfigjson, the registry key and, of course, the container registry are all placeholders, so the container will not run. That's intentional.

Let's save this in a new file called no-limits-updated.yaml and apply it to the Enforcement cluster:

[iCODE]kubectl apply -f no-limits-updated.yaml[/iCODE]

We're presented with the following output:

[Screenshot: kubectl apply output complaining about the dummy image pull secret]

Kubernetes is not happy with our dummy secret, and that's fine. As implied above, our pod does not run, but Deployment Safeguards has made changes to it, specifically to its limits and requests. Let's query the pod and see what happened:

[iCODE]kubectl get pod no-limits-here -o=jsonpath='{.spec.containers[*].resources}'[/iCODE]

You should see the following:

[iCODE]{"limits":{"cpu":"500m","memory":"500Mi"},"requests":{"cpu":"500m","memory":"500Mi"}}[/iCODE]

Deployment Safeguards has filled in the pod's limits and requests even though we never specified them. On the Enforcement level this is done to make sure your workload is aligned with the limits and requests safeguard; the change was possible because Limits and Requests are eligible for mutation.
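Of course, the cleanest approach is to declare limits and requests yourself so no mutation is needed. A minimal sketch of such a pod (the pod name is ours, and the values simply mirror what the safeguard injected above; pick values that fit your workload):

```yaml
# Sketch: the same Nginx pod with limits and requests declared up front, so
# the resource-limits safeguard passes without any mutation.
apiVersion: v1
kind: Pod
metadata:
  name: with-limits-here
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: 500m
        memory: 500Mi
      limits:
        cpu: 500m
        memory: 500Mi
```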

 

 

 

Other policies that are currently available with mutations are:

  • Reserved System Pool Taints
  • Pod Disruption Budget
  • ReadOnlyRootFileSystem
  • RootFilesystemInitContainers

 

 

 

Deployment Safeguards will edit your workload to align with these safeguards. For all safeguards that are not eligible for mutation, a non-compliant workload is simply rejected on an Enforcement cluster.

You can also exclude a specific namespace from Deployment Safeguards enforcement:

 

 

 

[iCODE]az aks update --name safeaks --resource-group safeguard-test --safeguards-level Warning --safeguards-version v2.0.0 --safeguards-excluded-ns myawesomenamespace[/iCODE]

When you're done, clean up the resources:

az aks delete --name safeaks --resource-group safeguard-test --yes
az group delete --name safeguard-test --yes

 

 

 

Conclusion

 

Azure Kubernetes Service's Deployment Safeguards feature is a robust tool that ensures Kubernetes deployments adhere to best practices and predefined limits. With options for both Warning and Enforcement levels, users can either be alerted of non-compliance or have their deployments automatically adjusted to meet the required standards. This feature enhances security and operational efficiency, making AKS an even more reliable and user-friendly platform for managing containerized applications.

 
