IIS Central Certificate Store and Windows containers

Vinicius Apolinario
A while ago, the .NET and Windows containers team investigated how our customers were using certificates for HTTPS connections when running web workloads on IIS in Windows containers, on AKS or elsewhere. At the time, we learned that managing SSL certificates on Windows containers, specifically for IIS, is very manual and doesn’t align well with the modern practices you would expect in a containerized environment.



We learned that most of our customers have scripts to load certificates into the Windows container environment, install them, and configure them as part of the IIS deployment alongside the application, its application pool, and its IIS bindings. The exception is when customers use an ingress controller, which handles HTTPS traffic before it reaches the containers/pods.



At the time we investigated this, we missed an important feature that IIS has had since the pre-container era: Central Certificate Store. This feature was introduced in Windows Server 2012 as part of the then-new IIS 8.0. It allows server administrators to store and access certificates centrally on a file share. Windows Servers in a server farm can then be configured to load the certificates from the file share on demand. For Windows containers, this feature comes in handy because it is exactly what we need to decouple the storage of files (certificates, in this case) from the container.



Proof of concept with Docker Desktop

To validate that Central Certificate Store can be properly used for Windows containers, I tested the feature locally on my machine. This is what the architecture looks like in its simplest form:

[Diagram: local proof-of-concept architecture]



The main thing to note in the diagram above is that the certificate is not loaded into the container. Instead, it sits in a local folder on my machine. To validate the above, here are the assets I used:

Dockerfile:

Code:
# escape=`
# Use the Windows Server Core image with IIS installed, targeting 2022 LTSC version
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

# Install the Centralized Certificates Module
RUN powershell -command `
    Add-WindowsFeature Web-CertProvider

# Copy the LogMonitor config file and download LogMonitor into the container
WORKDIR /LogMonitor
COPY LogMonitorConfig.json .
RUN powershell.exe -command Invoke-WebRequest -UseBasicParsing -Uri https://github.com/microsoft/windows-container-tools/releases/download/v1.2.1/LogMonitor.exe -OutFile LogMonitor.exe

# Copy iiscentralstore.ps1 to the container
COPY iiscentralstore.ps1 .
ENTRYPOINT ["powershell", "-File", "C:\\LogMonitor\\iiscentralstore.ps1"]



The above Dockerfile creates a new image based on the Windows Server 2022 LTSC IIS image. It installs the Central Certificate Store (Web-CertProvider) feature, then downloads and configures LogMonitor so you can see IIS logs from outside the container. Finally, it copies a PowerShell script that will be used as the entry point for the image.



The most important aspect of this PowerShell script is that it only runs when a container is started from the image, not while the image is being built. This allows us to defer specifying usernames and passwords until the container is launched, which is a security best practice, as build-time values can be recovered from the image history.

Here’s what the PowerShell script looks like:

Code:
# Create a new local user account
$Password = ConvertTo-SecureString -AsPlainText $env:LocalUserPassword -Force
New-LocalUser -Name $env:LocalUsername -Password $Password -FullName $env:LocalUsername -Description 'IIS certificate manager user'

# Configure the Central Certificate Store
$PFXPassword = ConvertTo-SecureString -AsPlainText $env:PFXCertPassword -Force
$CertStorePath = 'C:\Certificate\Store'
Enable-IISCentralCertProvider -CertStoreLocation $CertStorePath -UserName $env:LocalUsername -Password $Password -PrivateKeyPassword $PFXPassword

# Update the IIS bindings to use the Central Certificate Store
$siteName = 'Default Web Site'
Remove-WebSite -Name $siteName
$newSiteName = 'CCSTest'
$newSitePhysicalPath = 'C:\inetpub\wwwroot'
$newSiteBindingInformation = '*:443:'
New-IISSite -Name $newSiteName -PhysicalPath $newSitePhysicalPath -BindingInformation $newSiteBindingInformation -Protocol https -SslFlag CentralCertStore

# Call LogMonitor, ServiceMonitor, and IIS
C:\LogMonitor\LogMonitor.exe C:\ServiceMonitor.exe w3svc



The script starts by creating a new local user. This user will later be used to access the folder in which the certificate is stored. Note that I’m not hardcoding the username or password in the script. The point of this approach is to remove all secrets, including credentials, from the container image.


Next, we configure the Central Certificate Store. For that, we use the user account and password from the previous step, but we also need the password for the certificate (PFX) file and the location where the certificates will be stored. Traditionally, this would be an SMB file share. In our case, we’ll use a local folder, which will later be mounted into the container as a volume.


We then update the IIS binding to use the Central Certificate Store. To ensure only the website we need is present, the script deletes the Default Web Site (note: this could also be done as part of the build process) and creates a new one with the right configuration. Most importantly, the New-IISSite command includes the -SslFlag parameter indicating the certificate comes from the CentralCertStore.


Finally, we start LogMonitor, which calls ServiceMonitor, which in turn monitors the state of the w3svc (IIS) service. As long as the service is up, ServiceMonitor keeps the container running and LogMonitor sends logs to STDOUT, where they are captured by Docker or Kubernetes.
For a local deployment, that’s pretty much it. Now we need a certificate. On my local machine, I ran:

Code:
# Create the certificate locally
$cert = New-SelfSignedCertificate -DnsName "www.viniapccstest.com" -CertStoreLocation "cert:\LocalMachine\My"

# Specify the path where you want to save the certificate's public key
$certPath = "C:\Cert\CCSTest.pfx"

# Export the certificate to a file
$certPassword = ConvertTo-SecureString -String "MySecurePassword" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath $certPath -Password $certPassword



The above creates the PFX certificate file for the website I want to use. Next, we build the container image:

docker build -t iisccs:v1 .



With the image built, we can run a new container based on the image:

docker run -e LocalUsername='<Username>' -e LocalUserPassword='<LocalUserPassword>' -e PFXCertPassword='<CertificatePassword>' -d -p 8081:443 -v C:\Cert:C:\Certificate\Store iisccs:v1



The command above instantiates a new container based on the image we just built, mapping port 443 in the container to port 8081 on the host. I have provided values for the required environment variables: the local username, its password, and the PFX file password. Finally, it maps a local folder on my machine to a volume inside the container, so the container can see the certificate we just created.


Since this is a proof of concept, I manually edited the HOSTS file on my machine to point the FQDN from the certificate at my machine’s IP address. When I open a browser and go to https://www.viniapccstest.com:8081, the website comes up correctly. (Of course, I had to bypass the browser warning, since the certificate is self-signed and not trusted by my machine.)
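For reference, the HOSTS entry (C:\Windows\System32\drivers\etc\hosts on Windows) would look something like this, with the hostname matching the certificate’s DNS name and pointing at the local machine:

```
127.0.0.1    www.viniapccstest.com
```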


This proved that we can have an IIS website with HTTPS configured without loading any certificate into the container image. Now, if I need to change the certificate, I don’t have to rebuild the image. All I have to do is update the certificate in the folder mapped to the volume inside the container.
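As a sketch of what rotation could look like, reusing the names and paths from the earlier example: re-issuing the certificate and overwriting the PFX file in the mapped folder is enough, with no image rebuild or container restart. Note that IIS may cache certificates from the store for a short period before picking up the new file.

```powershell
# Hypothetical rotation sketch: re-issue the self-signed certificate
# and overwrite the PFX file in the folder mapped into the container
$cert = New-SelfSignedCertificate -DnsName "www.viniapccstest.com" -CertStoreLocation "cert:\LocalMachine\My"
$certPassword = ConvertTo-SecureString -String "MySecurePassword" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath "C:\Cert\CCSTest.pfx" -Password $certPassword
```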


Don’t get me wrong, and let me clarify something right away: the main drawback, and an absolute blocker here, is that I’m passing sensitive information when running the container. A simple docker inspect can reveal the username and passwords used as part of the docker run command:

Code:
PS C:\Users\user> docker run -e LocalUsername='username' -e LocalUserPassword='password' -e PFXCertPassword='password' -d -p 8080:80 -p 8081:443 -p 8172:8172 -v C:\Cert:C:\Certificate\Store iisccs:v2
72ee37d2088e7673d4efb58a787bbe9005fe3c30f2c2e504330bc6396d21d679
PS C:\Users\user> docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS         PORTS                                                                 NAMES
72ee37d2088e   iisccs:v2   "powershell -File C:…"   12 seconds ago   Up 5 seconds   0.0.0.0:8172->8172/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:8081->443/tcp   beautiful_ellis
PS C:\Users\user> docker inspect 72ee37d2088e
<redacted>
            "Env": [
                "LocalUsername=username",
                "LocalUserPassword=password",
                "PFXCertPassword=password"
<redacted>



Unfortunately, for Docker Desktop environments there’s not much to be done. Docker provides a great feature called Docker Secrets, but it is available only for Docker Swarm environments. Since this was just a proof of concept, this approach is fine for validation or development/testing purposes.


For Kubernetes environments, though, there are more secure options that allow us to take this validated concept to production.



IIS Central Certificate Store on Azure Kubernetes Service (AKS)

Now we have an IIS container image that can use Central Certificate Store to load the certificate into the container. What we need is to validate that we can do this securely, so it can be used in production environments. This is what our architecture in AKS will look like:

[Diagram: AKS architecture with ACR, Azure Files, and Kubernetes secrets]



The above architecture is more complex than the previous one, because we want to ensure a few things:
- Images are available in a registry so AKS cluster nodes can pull them.
- Certificates are stored in a highly available service and can be mounted into the pods inside the AKS cluster.
- Usernames and passwords are sensitive information and must be kept private.



To achieve the above, I started building this environment by creating:
- An Azure Container Registry (ACR), following the instructions here. You also need to tag your image and push it to the registry, following the documentation.
- An AKS cluster with Windows nodes, following the instructions here. You will need to attach the ACR registry to the AKS cluster, either while you build the cluster or afterwards, following the documentation here. This ensures only nodes in this AKS cluster can pull images from the registry.
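Assuming a registry named <acrname> (a placeholder for your own ACR instance), tagging and pushing the local image would look something like:

```
az acr login --name <acrname>
docker tag iisccs:v1 <acrname>.azurecr.io/iisccs:v1
docker push <acrname>.azurecr.io/iisccs:v1
```

The tag must include the registry’s login server name so Docker knows which registry to push to.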


With ACR and AKS in place, we can move on to configuring the Azure Storage where the certificate will be stored and presented to AKS nodes as a persistent volume. To do this, follow the documentation here.
Azure Files storage can be used in two ways with AKS:
- Dynamically provisioned volumes: ideal for scenarios where the application running in the pod needs a clean volume/disk to use. The volume is provisioned dynamically as deployments happen.
- Statically provisioned volumes: used when you want a volume that leverages file shares already present in Azure Files.


For our scenario we will statically provision the volume, which allows us to load the certificate into Azure Files before allocating it as a volume for the containers. Follow the documentation above and you will have a file share available in Azure Files. You can then upload the certificate from your machine to the Azure file share using the Azure portal or the command below:

az storage file upload --account-name <storageaccountname> --share-name <filesharename> --source 'C:\folder\file.pfx' --path 'file.pfx'



You should see the certificate in the share now:

[Screenshot: the PFX file listed in the Azure file share]



With ACR, AKS, and Azure Files configured, we can move on to deploying the application. Before that, however, we need to prepare the AKS cluster with the secrets the application will need: the username and passwords we saw earlier. To do that, we can run the following against the AKS cluster:

kubectl create secret generic iisccs-secrets --from-literal=LocalUsername='Username' --from-literal=LocalUserPassword='Password' --from-literal=PFXCertPassword='Password'



This creates a Kubernetes secret to store the sensitive information that will be used when the container is instantiated. Recall from the previous steps that the container image was built with a PowerShell script that runs when the container/pod is created; that script expects to find this information in environment variables.
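One caveat worth noting: by default, Kubernetes stores secret values base64-encoded, not encrypted, so anyone who can read the secret object can recover them. A minimal sketch of why base64 offers no protection on its own:

```shell
# Base64 is an encoding, not encryption: it is trivially reversible
encoded=$(printf '%s' 'Password' | base64)
echo "$encoded"                            # UGFzc3dvcmQ=
printf '%s' "$encoded" | base64 --decode   # Password
```

Access to secrets should therefore be restricted with RBAC in production clusters.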


To deploy the application, we can create the iiscentralstore.yaml:

Code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iisccs-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iisccs
  template:
    metadata:
      labels:
        app: iisccs
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: iisccs
        image: <your image from the ACR registry>
        ports:
        - containerPort: 443
        env:
        - name: LocalUsername
          valueFrom:
            secretKeyRef:
              name: iisccs-secrets
              key: LocalUsername
        - name: LocalUserPassword
          valueFrom:
            secretKeyRef:
              name: iisccs-secrets
              key: LocalUserPassword
        - name: PFXCertPassword
          valueFrom:
            secretKeyRef:
              name: iisccs-secrets
              key: PFXCertPassword
        resources:
          limits:
            cpu: "1"
            memory: "500Mi"
        volumeMounts:
          - name: azure
            mountPath: "C:\\Certificate\\Store"
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
  name: iisccs-service
spec:
  selector:
    app: iisccs
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  type: LoadBalancer



The above will create a deployment and a service. The deployment is based on the container image you pushed to ACR, and it pulls the environment variables from the Kubernetes secret we just created. Finally, the deployment mounts a volume at the folder specified (in this example, the folder IIS is configured to use as the source for the Central Certificate Store; you could change this, or even use a ConfigMap to set it as a variable). The Service is a standard LoadBalancer service exposing port 443.


The deployment also references a persistentVolumeClaim. We need the iisccs_pvc.yaml file to create it:

Code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  volumeName: azurefile
  resources:
    requests:
      storage: 1Gi



This creates a Persistent Volume Claim, which the deployment uses to reach a Persistent Volume. The file above references the volume by volumeName, which is defined in another specification, iisccs_pv.yaml:

Code:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  name: azurefile
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    volumeHandle: iisccs-volumeid
    volumeAttributes:
      shareName: iisccsshare
    nodeStageSecretRef:
      name: azure-secret
      namespace: default
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=0
    - gid=0
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl



The above creates the Persistent Volume construct in the AKS cluster. It uses the azurefile-csi driver with nodeStageSecretRef, a Kubernetes secret created as part of the deployment of the Azure Files storage (as described in the documentation referenced above).


With the PVC and PV specifications in place, we can go ahead and deploy them:

Code:
kubectl create -f iisccs_pv.yaml
kubectl apply -f iisccs_pvc.yaml



You can check if the PVC has been created and bound to the PV by using:

Code:
PS C:\Users\user> kubectl get pvc azurefile
NAME        STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS    AGE
azurefile   Bound    azurefile   1Gi        RWX            azurefile-csi   24h



This confirms the PVC and PV have been correctly configured. We can then deploy the application:

kubectl apply -f iiscentralstore.yaml



This will create the deployment and service in your AKS cluster. Once the image has been pulled, the pod should run on one of the Windows nodes in your cluster. You can check the public IP address of the service by running:

Code:
PS C:\Users\user> kubectl get service
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
iisccs-service   LoadBalancer   10.240.192.27   XXX.XXX.XXX.XXX   80:32726/TCP,443:32215/TCP   23h
kubernetes       ClusterIP      10.240.0.1      <none>          443/TCP                      7d1h



Remember that if you don’t have a real DNS record pointing to this IP address, you might need to edit the HOSTS file on your machine so you can access the website by name; that way, IIS Central Certificate Store can match the website to the certificate in the store.
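If the site doesn’t come up, one quick sanity check (assuming the deployment name from the YAML above) is to confirm the pod actually sees the certificate through the Azure Files volume:

```
kubectl exec deploy/iisccs-deployment -- powershell -command "Get-ChildItem C:\Certificate\Store"
```

The PFX file you uploaded to the share should be listed; if it isn’t, revisit the PV/PVC configuration and the azure-secret referenced by nodeStageSecretRef.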


Once you do that, you can access the website:

[Screenshot: the website loading over HTTPS]



Conclusion

In this blog post we looked at how to create a Windows container image for IIS that deploys a website over HTTPS without loading the certificate into the container image. We validated the concept locally on Docker Desktop, creating an image that successfully served a website over HTTPS with the certificate presented via a locally mounted volume. While successful, that approach doesn’t meet the security bar for production environments, given that Docker Desktop is intended for local development.


We then explored a secure way to deploy this concept in AKS, by leveraging Kubernetes secrets for sensitive data and Azure Files storage for storing the certificate and presenting it to the pods on the Windows nodes as volumes.


The main goal of this exercise was to separate certificate lifecycle management from the container image. If we need to change the certificate, we can do so without changing the container image or redeploying the pods. Furthermore, the pod lifecycle is decoupled from the certificate’s, so we can now manage them independently, which is helpful when thinking about DevOps practices and CI/CD pipelines.


We hope this helps you modernize applications with Windows containers and AKS, and offers a solution for TLS certificate lifecycle management. Let us know what you think in the comments section below.
