In SQL Server environments, whether on-premises, in Azure SQL Database, or in SQL Managed Instance, an unexpected failover can leave Availability Group roles out of sync. The new primary replica takes over while the original primary is down, and any transactions that were in flight or not yet replicated may be lost. To recover or validate that data, you should compare the new primary against the old primary as soon as the old primary is back online, so that any lost transactions can be reconciled. If critical data was missed during the failover, it must be recovered and merged to maintain database consistency. This blog outlines the steps to recover or validate any data loss using the SQL Server Database Compare Utility.

 

To simulate this situation with the SQL Server box product, two SQL Server VMs were set up on the same subnet, Always On HA was enabled on both instances, and a simplified WideWorldImportersDW (WWIDW) database was restored and set to the full recovery model. After creating the certificates and logins/users required for Availability Groups (AGs) without a domain, an asynchronous-commit AG was created (it had to be asynchronous, since transaction loss cannot be simulated with a synchronous AG).
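An asynchronous-commit AG of the kind described above can be sketched as follows. The AG name, replica names, and endpoint ports are assumptions for illustration; the actual setup also requires the certificate-based endpoints mentioned earlier.

```sql
-- Sketch only: AG name, server names, and port 5022 are assumed values.
-- ASYNCHRONOUS_COMMIT is the key setting that allows transaction loss on failover.
CREATE AVAILABILITY GROUP [WWIDW_AG]
FOR DATABASE [WWIDW]
REPLICA ON
    N'sql1' WITH (
        ENDPOINT_URL      = N'TCP://sql1:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL
    ),
    N'sql2' WITH (
        ENDPOINT_URL      = N'TCP://sql2:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL
    );
```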

 

 

Using a transaction simulator, many transactions per second were performed against the primary database.
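The simulator itself is not shown in the post; a minimal stand-in that generates continuous write activity might look like this (the table and row count are assumptions, not the actual simulator):

```sql
-- Hypothetical load generator: dbo.OrderLoad and the loop size are illustrative.
SET NOCOUNT ON;
DECLARE @i int = 0;
WHILE @i < 100000
BEGIN
    -- Each iteration is an auto-committed transaction on the primary.
    INSERT INTO dbo.OrderLoad (OrderDate, Amount)
    VALUES (SYSDATETIME(), RAND() * 100);
    SET @i += 1;
END;
```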

 

 

During this activity, the primary instance was stopped, which created a group of committed transactions that had not been replicated to the secondary. The goal of this post is to recover those missing transactions.

 

After stopping sql1, you need to open the dashboard on sql2, because SSMS can no longer connect to sql1 to update the status:

 

 

Opening the Failover wizard – you see a warning about data loss:

 

 

The wizard really wants to make sure you know there may be data loss:

 

 

Click through the wizard to complete the forced failover and get the results:
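Behind the wizard, a forced failover with possible data loss can also be issued directly in T-SQL. The AG name below is the same assumed name used earlier:

```sql
-- Run on the secondary (sql2) while the primary (sql1) is down.
-- This is the T-SQL equivalent of the wizard's forced failover,
-- and it explicitly acknowledges possible data loss.
ALTER AVAILABILITY GROUP [WWIDW_AG] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```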

 

 

The dashboard for sql2 has the current cluster status:

 

 

To capture the state of the new primary, immediately create a database snapshot, ideally before opening it up to new application connections:
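Creating the snapshot is a single statement. The snapshot name, logical file name, and file path below are assumptions; a database with multiple data files needs one `(NAME, FILENAME)` entry per file:

```sql
-- Snapshot of the new primary (sql2) immediately after failover.
-- Names and path are illustrative; NAME must match the logical data file name.
CREATE DATABASE WWIDW_PostFailover_Snap
ON (NAME = WWIDW, FILENAME = 'D:\Snapshots\WWIDW_PostFailover.ss')
AS SNAPSHOT OF WWIDW;
```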

 

 

(In this case we were using sql1 in the transaction simulator and not the sqlistener, so no transactions will be written to sql2).

 

Restart the sql1 instance. On refresh, you will see that the old primary now has a state of Not Synchronizing / In Recovery.
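The same state information shown in the dashboard can be queried from the DMVs, which is useful when SSMS is not available:

```sql
-- Per-database replica state across the AG; on the restarted old primary
-- you would expect synchronization_state_desc = 'NOT SYNCHRONIZING'.
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc,
       drs.is_primary_replica
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
```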

 

 

Create a database snapshot on the old primary (sql1) before making it the secondary (otherwise, when the old primary resynchronizes as a secondary, the missing transactions are rolled back and lost):

 

 

At this point you can Resume Data Movement to make the old primary a secondary.
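Resume Data Movement maps to a single T-SQL command as well:

```sql
-- Run on the old primary (sql1), only after its snapshot has been created:
-- resuming causes the unreplicated transactions to be rolled back.
ALTER DATABASE [WWIDW] SET HADR RESUME;
```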

 

 

It takes a minute for the old primary to roll back the lost transactions and resynchronize with the new primary; the old primary should now be a synchronizing secondary:

 

 

We can now use the SQL Server Database Compare (SSDBC) application to check if we lost any transactions between the snapshot on the new primary (sql2 – the source) and the snapshot on the old primary (sql1 – the target). Refer to the SSDBC documentation for how to set it up.

 

 

In this case there are 313 hash differences (updates) and 423 missing rows (inserts). Note that there are no unreplicated deletes, due to the timing of the simulator's cascading deletes (deletes pause whenever they exceed the configured percentage, until the inserts/updates catch up):

 

Looking at the SSDBC folder in My Documents, we can see a SQL script file for each table in the database, prefixed with a number indicating the order in which to run them (based on foreign key references). If the database has DRI configured, you may need to combine scripts when a referenced table has an identity column and the identity value capture option is turned on. The same folder contains log files with more details about the comparison.

 

 

Opening one of the files, you can see update and insert statements. The update statements are written to be as safe as possible by checking (with additional conditions in the WHERE clause) that the column value has not been subsequently changed on the new primary.
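The shape of such a guarded update might look like the following. This is an illustrative reconstruction, not an actual generated script; the table, key, and column names are assumptions:

```sql
-- Hypothetical example of a generated "safe" update: the extra predicate on
-- Quantity ensures the statement only applies if the value on the new primary
-- still matches what the old primary's snapshot recorded before the failover.
UPDATE t
SET    t.Quantity = 5
FROM   dbo.OrderLine AS t
WHERE  t.OrderLineKey = 1234
  AND  t.Quantity = 3;   -- skip the row if it was changed after failover
```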

 

 

After running all the change scripts against WWIDW on sql2 and then re-running SSDBC between WWIDW on sql2 (not the snapshot) and the sql1 snapshot, we see that the databases are now identical (note that this only works against a static new primary).

 

 

Note that we could also use the tablediff utility or SSDT to generate a differences script.

 

The SSDBC download package includes a PowerShell script that makes running comparisons across many servers/databases easier by driving them from an Excel workbook. List your source and target servers and databases in the provided Servers.xlsx workbook and run the BulkDatabaseRecovery.ps1 script.

 
