Deploying .dacpacs to Multiple Environments via ADO Pipelines

j_folberth
Introduction​


This is the next post in our series covering SQL databases. In the last post we walked through taking a single .dacpac and deploying it to one environment via an Azure DevOps pipeline. In this article we will expand upon that and walk through the next evolution: building a .dacpac with a configuration for a dev and a tst environment, and deploying each to its own SQL server and database.



I would venture to say this is where we crank up the difficulty. We will discuss concepts such as pre/post-deployment scripts, which can vary between environments. In this case we will have a post-deployment security script for each environment, as well as a seed script for our dev environment.



As with the other blogs in this series, the full code is available via GitHub.



PreReqs​


If you followed the last post then you should be all set: the appropriate SQL server and SQL database exist, and the ADO service account has been provisioned with the appropriate permissions.



If this is the first post you've come across, no worries. Here is the complete list of requirements:

  • An Azure SQL Server and an Azure SQL Database for each target environment
  • An Azure DevOps project with a service connection granted the appropriate permissions on those databases

Process​


For this one we should take a step back and talk through what our high-level workflow will look like. We want to leverage pre/post-deployment scripts to set up Entra users and groups in the SQL databases. The list of users and roles can differ from one environment to the next.



This matters because the documentation on pre/post-deployment scripts states: "Pre/post-deployment scripts are included in the .dacpac but they are not compiled into or validated with database object model." The consequence: since the scripts differ per environment, we will require a different .dacpac file for each environment.



How should we do this? One concept that is more prevalent when developing applications is leveraging build configurations. Build configurations effectively allow us to provide different configuration settings, i.e. pre/post-deployment scripts, for each .dacpac that is created. Since I am using the SDK-style process for SQL projects, I will be leveraging VS Code for this. Alternatively, you can use Azure Data Studio. Additionally, it was just announced that this process is in preview within Visual Studio.
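To make this concrete, here is a minimal command-line sketch; the project file name is an assumption, while the /p:NetCoreBuild=true property comes from the build tasks later in this post. Building the same project once per configuration produces a separate .dacpac per environment:

Code:
dotnet build src/sqlmoveme/sqlmoveme.sqlproj --configuration dev /p:NetCoreBuild=true
dotnet build src/sqlmoveme/sqlmoveme.sqlproj --configuration tst /p:NetCoreBuild=true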



So let's take these steps and outline what our deployment will look like. I am going to break this down at the stage and job level: jobs within a stage can run in parallel, while stages run sequentially. I have omitted the publishing task, but we will go over that in our YAML file. A minimal pipeline skeleton follows the list below.

  • Stage: build
    • Job: build dev .dacpac
    • Job: build tst .dacpac
  • Stage: deploy dev
    • Job: deploy dev .dacpac
    • Run script to seed data
    • Run script to configure security
  • Stage: deploy tst
    • Job: deploy tst .dacpac
    • Run script to configure security
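
Here is that outline expressed as an Azure Pipelines skeleton. This is a minimal sketch; the stage and job names are illustrative rather than the repo's exact ones:

Code:
stages:
- stage: build
  jobs:                            # jobs in the same stage can run in parallel
  - job: build_dev_dacpac
  - job: build_tst_dacpac
- stage: deploy_dev                # stages run one after another
  jobs:
  - deployment: deploy_dev_dacpac
- stage: deploy_tst
  jobs:
  - deployment: deploy_tst_dacpac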

Post Deployment Scripts​


The first piece we will tackle is creating the post-deployment scripts. For this exercise dev will have two scripts: one to assign the Entra group 'sqlmoveme_[Environment Name]_admins' the 'db_owner' role, and one to seed a table, in this example just some sample data in the [SalesLT].[Address] table. For security we need to make our script idempotent: it will check whether the group already exists in the database and, if not, create it before assigning permissions. I was tempted to always drop and re-add the user; however, I decided against this in case there is an issue recreating the user or assigning the permissions. If the script failed, then we'd be locking the users out of a live database!

Security Script​


Code:
USE [sqlmoveme]

-- Create the Entra group as a database user only if it does not already exist
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = 'sqlmoveme_[Environment Name]_admins')
    BEGIN
        CREATE USER [sqlmoveme_[Environment Name]_admins] FROM EXTERNAL PROVIDER
    END
-- Role assignment is idempotent: re-adding an existing member is a no-op
EXEC sp_addrolemember 'db_owner', 'sqlmoveme_[Environment Name]_admins';

GO

Seed Script:​








Code:
USE [sqlmoveme]
BEGIN
    -- Reset the table so the seed data is deterministic on every deployment
    DELETE FROM [SalesLT].[Address];
    -- Values map to AddressLine1, AddressLine2, City, StateProvince, CountryRegion, PostalCode, rowguid, ModifiedDate
    INSERT INTO [SalesLT].[Address] VALUES ('Apartment 5A', '129 W. 81st St.', 'New York', 'NY', 'USA', '10001', NEWID(), SYSDATETIME());
    INSERT INTO [SalesLT].[Address] VALUES ('2311 North Los Robles Avenue', '', 'Pasadena', 'CA', 'USA', '91001', NEWID(), SYSDATETIME());
    INSERT INTO [SalesLT].[Address] VALUES ('742 Evergreen Terrace', '', 'Springfield', 'IL', 'USA', '65619', NEWID(), SYSDATETIME());
END
GO







Lastly, there is an interesting limitation when defining a pre/post-deployment script: it can have only one command, i.e. it cannot leverage USE/GO. The workaround for this, as documented by Microsoft, is to create a generic script for the environment which leverages sqlcmd syntax (:r) to call the additional scripts.



Environment Script​








Code:
:r [Environment Name].post.sqlmoveme.security.sql
:r [Environment Name].post.sqlmoveme.seed.sql
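
For context, a SQL project can reference only one post-deployment script per build, selected in the .sqlproj via the PostDeploy build action. Here is a minimal sketch, assuming the file names used in this example, of how each build configuration could point at its own environment script:

Code:
  <ItemGroup Condition=" '$(Configuration)' == 'dev' ">
    <PostDeploy Include="scripts\dev.post.script.sql" />
  </ItemGroup>
  <ItemGroup Condition=" '$(Configuration)' == 'tst' ">
    <PostDeploy Include="scripts\tst.post.script.sql" />
  </ItemGroup>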







Script Location​


These scripts should be located in the folder containing the .sqlproj. This ensures they are attached to the project:

[Screenshot: the project tree with the environment scripts under a 'scripts' folder]

In my case I created a folder called 'scripts' to house all the environment scripts. It is important to understand that these .sql files are considered part of the project. As such we will need to make appropriate accommodations when building the project.



Excluding .sql Files from the Build​


Here is a confusing part. Since the build will attempt to compile all the .sql files in the project, we want to exclude these environment-specific ones from our build. To do this we will need to update the .sqlproj behind the scenes. First, we indicate the `<Build Remove="..." />` tag on the scripts:







Code:
  <ItemGroup>
    <Build Remove="scripts\dev.post.sqlmoveme.security.sql" />
    <Build Remove="scripts\dev.post.sqlmoveme.seed.sql" />
    <Build Remove="scripts\dev.post.script.sql" />
    <Build Remove="scripts\tst.post.sqlmoveme.security.sql" />
    <Build Remove="scripts\tst.post.sqlmoveme.seed.sql" />
    <Build Remove="scripts\tst.post.script.sql" />
  </ItemGroup>







Without this part our build will fail. For more information, please check out the Microsoft documentation.

Creating the Build Configurations​


This is the part of the blog where we move into the more advanced category. We will be walking through how to do this in VS Code/Azure Data Studio, where one is required to edit the raw .sqlproj. Currently, in VS Code/Azure Data Studio, this is how you'd go about updating the project file for multiple build configurations. This isn't meant to be overly intimidating. If you make a mistake, remember it's under source control! We can roll back to the previous working version.



Editing the .sqlproj File​


Open the .sqlproj file by double-clicking it. This should open a VS Code window with the file. There should already be two default build configurations, Debug and Release. If preferred, we can update these; however, I like to match these values to my ADO environments. The first thing we will want to do is create a <PropertyGroup> and set it to be available only for our desired build configuration. This is done with the following block:







<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'dev|AnyCPU' ">







We will then want the following properties defined in the conditional <PropertyGroup>:









Code:
    <OutputPath>bin\dev\</OutputPath>
    <BuildScriptName>$(MSBuildProjectName).sql</BuildScriptName>
    <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <DefineDebug>true</DefineDebug>
    <DefineTrace>true</DefineTrace>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>









For the purposes of our exercise the most important one is <OutputPath>. This property tells the SDK which path to put the .dacpac in, and we will want different paths for different environments. The rest of these carry over from the default Debug or Release configuration.



At the time of this writing there is a documentation gap on which properties can be set and what accepted values look like. Currently the closest is Database Project Settings - SQL Server Data Tools (SSDT) | Microsoft Learn, which discusses which settings can be set; however, it does not provide the full list of properties. For the latest information I'd encourage you to visit What are SQL database projects? - SQL Server.

Repeat for tst​


Now let's repeat these same steps for the tst environment. I am going to make a small tweak just to illustrate the different configurations your team can have. We will remove the seed script, as perhaps our tst environment has a more complete set of data for testing and we'd like to ensure we do not overwrite it. We would still, though, like to run our security script, this time with the 'sqlmoveme_tst_admins' Entra group receiving the 'db_owner' role.
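
Here is a minimal sketch of the tst configuration block, mirroring the dev <PropertyGroup> above with only the <OutputPath> changed (the exact contents in the repo may differ):

Code:
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'tst|AnyCPU' ">
    <OutputPath>bin\tst\</OutputPath>
    <BuildScriptName>$(MSBuildProjectName).sql</BuildScriptName>
    <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <DefineDebug>true</DefineDebug>
    <DefineTrace>true</DefineTrace>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>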



Just a friendly reminder that we are using Azure DevOps environments, and we should have one configured for our 'tst' environment. If this concept is new, you can review it in my previous blog: Azure DevOps Pipelines: Environments and Variables (microsoft.com).



For the complete tst code base please refer to the following GitHub repository.



Updating the YAML Build​


We will want to make two updates from our previous post on Leveraging DotNet for SQL Builds via YAML - Microsoft Community Hub. The first update is passing the build configuration into `dotnet build` via the `--configuration` parameter. For a full list of what is available, feel free to check out dotnet build command - .NET CLI | Microsoft Learn.









Code:
    - task: DotNetCoreCLI@2
      displayName: dotnet build
      inputs:
        command: build
        projects: $(Build.SourcesDirectory)/src/sqlmoveme/*.sqlproj
        arguments: --configuration tst /p:NetCoreBuild=true









The second piece to this is we will want to replicate our build for a second environment and update appropriately. Those who follow me know that whenever we need to replicate something we should look at Azure DevOps Pipelines: Practices for Scaling Templates - Microsoft Community Hub.



For this post we are taking it step by step and will eventually walk through creating templates. For now, I am going to create a second job for the tst environment. For optimal efficiency this should be a second job so we can run the builds in parallel, as opposed to creating an additional stage or series of tasks for these steps.









Code:
- job: build_publish_sql_sqlmoveme_tst
  steps:
  - task: UseDotNet@2
    displayName: Use .NET SDK v3.1.x
    inputs:
      packageType: 'sdk'
      version: 3.1.x
      includePreviewVersions: true
  - task: DotNetCoreCLI@2
    displayName: dotnet build
    inputs:
      command: build
      projects: $(Build.SourcesDirectory)/src/sqlmoveme/*.sqlproj
      arguments: --configuration tst /p:NetCoreBuild=true
  - task: PublishPipelineArtifact@1
    displayName: Publish Pipeline Artifact sqlmoveme_tst_tst
    inputs:
      targetPath: $(Build.SourcesDirectory)/src/sqlmoveme/bin/tst
      artifact: sqlmoveme_tst_tst
      properties: ''









A call out here that looks odd, and I will explain my logic, is the artifact name `sqlmoveme_tst_tst`. This is me leveraging a naming pattern of [projectName]_[environmentName]_[configurationName]. You don't need to do this; however, I follow this pattern for flexibility and to accommodate practices across tech stacks, e.g. in other languages one could have tst_android, tst_mac, tst_v1, tst_v2 type scenarios.



Creating the tst Deployment Stage​


At this point we have our build creating two artifacts, one for dev and one for tst. Courtesy of Deploying .dacpacs to Azure SQL via Azure DevOps Pipelines - Microsoft Community Hub, we have the YAML deployment stage for dev already done. So now we just need to add one for tst:







Code:
- stage: sqlmoveemecicd_tst_cus_dacpac_deploy
  jobs:
  - deployment: sqlmoveemecicd_app_tst_cus
    environment:
      name: tst
    dependsOn: []
    strategy:
      runOnce:
        deploy:
          steps:
          - task: SqlAzureDacpacDeployment@1
            displayName: Publish sqlmoveme on sql-adventureworksentra-tst-cus.database.windows.net
            inputs:
              DeploymentAction: Publish
              azureSubscription: AzureTstServiceConnection
              AuthenticationType: servicePrincipal
              ServerName: sql-adventureworksentra-tst-cus.database.windows.net
              DatabaseName: sqlmoveme
              deployType: DacpacTask
              DacpacFile: $(Agent.BuildDirectory)\sqlmoveme_tst_tst\**\*.dacpac
              AdditionalArguments: ''
              DeleteFirewallRule: True







Take note of a couple of tweaks here: the stage targets the tst ADO environment, uses the tst service connection (AzureTstServiceConnection), points at the tst server and, most importantly, deploys the .dacpac from the tst artifact (sqlmoveme_tst_tst).


Now let's run the pipeline!



End Result​


Let's go through all these steps one by one and confirm what we walked through.

Let's start by confirming the Azure DevOps changes we made. The first is verifying our build generated two .dacpac files as part of the pipeline artifacts, one for the 'tst' configuration and one for the 'dev' configuration:

[Screenshot: pipeline artifacts containing a dev .dacpac and a tst .dacpac]

Check! Next, since we are in ADO, let's make sure we have two stages: one stage with a job deploying to 'dev' and the other with a job deploying to 'tst'.

[Screenshot: the pipeline run showing separate dev and tst deployment stages]

Alright, ADO is arguably the easy one for confirmation. Let's now go into SQL and confirm that our post-deployment scripts not only ran but ran against the appropriate environments. Note that the machine you are connecting from may need to be added to the SQL Server firewall before you can log in: IP firewall rules - Azure SQL Database and Azure Synapse Analytics | Microsoft Learn.
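
If your client IP is not yet allowed, here is a minimal Azure CLI sketch for adding a rule (the resource group, server name, and IP values are placeholders, not from this series):

Code:
az sql server firewall-rule create \
  --resource-group <resource-group> \
  --server <server-name> \
  --name AllowMyClientIp \
  --start-ip-address <your-public-ip> \
  --end-ip-address <your-public-ip>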



Let's start with 'dev' and confirm that our Entra group has the right access. We can do this by running the following query:

Code:
SELECT P.Name, R.Name
FROM sys.database_principals P
LEFT OUTER JOIN sys.database_role_members RM ON P.principal_id = RM.member_principal_id
LEFT OUTER JOIN sys.database_principals R ON R.principal_id = RM.role_principal_id
WHERE P.Name = 'sqlmoveme_dev_admins'

When running the query we see the below result:

[Screenshot: query result showing sqlmoveme_dev_admins mapped to db_owner]



Alright, it ran one script. But did it also run our seed script? In this example we deleted everything from the Address table and repopulated it:

[Screenshot: [SalesLT].[Address] populated with the three seeded rows]

Looks like it's all there.



How about 'tst' now? If we remember, a key component here is not only running a different script but a different number of scripts across our environments. Let's log in to the 'tst' server and run our user access query to see what results we get:
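
This is the same membership query as before, now pointed at the tst group:

Code:
SELECT P.Name, R.Name
FROM sys.database_principals P
LEFT OUTER JOIN sys.database_role_members RM ON P.principal_id = RM.member_principal_id
LEFT OUTER JOIN sys.database_principals R ON R.principal_id = RM.role_principal_id
WHERE P.Name = 'sqlmoveme_tst_admins'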


[Screenshot: query result showing sqlmoveme_tst_admins mapped to db_owner]



It looks right to me.



Conclusion​


We successfully built a single .sqlproj into a .dacpac for each environment. Each .dacpac had its own post-deployment scripts, which configured items such as security, including adding Entra security groups as db_owner. We then successfully deployed these .dacpacs to their appropriate environments, where the corresponding post-deployment scripts ran.



Next Steps​


Now that we have covered how to build and deploy .dacpacs to multiple environments, we will move on to creating YAML templates so we don't have to copy and paste our pipeline steps. Feel free to subscribe to this series on SQL databases; alternatively, if you like my posts, feel free to follow me.
