Immutable Backup for SAP databases using Azure NetApp Files and BlueXP

By RalfKlahr

Immutable/Indelible Backups for SAP databases

Why immutable/indelible backups


ANF snapshots are point-in-time, read-only copies of data that are stored in an ANF volume. They are immutable by definition, but it is still possible to delete them. To protect the snapshots from deletion, we can copy the “daily” snapshot (created by azacsnap) to an immutable and indelible Azure blob space. This Azure blob space must be configured with a data protection policy that prevents deletion or modification of the snapshot copy before its retention period is over.

Immutable backups are backups that cannot be changed or deleted for a certain period of time. They offer several benefits for data protection, such as:

  • Ransomware protection: Immutable backups are safe from malicious encryption or deletion by ransomware attacks.
  • Threat prevention: Immutable backups are also resistant to internal or external threats that may try to tamper with or destroy backup data.
  • Regulatory compliance: Immutable backups can help businesses meet data regulations that require preserving data integrity and authenticity.
  • Reliable disaster recovery: Immutable backups can ensure fast and accurate recovery of data in case of any data loss event.

Overview of immutable storage for blob data - Azure Storage | Microsoft Learn

Configure immutability policies for blob versions - Azure Storage | Microsoft Learn



Scenario​


An ANF snapshot will be created on the production/primary side of your deployed SAP system(s). ANF Cross-Region Replication (CRR) will copy the volume (including its snapshots) over to the DR side. In the DR region, BlueXP will automatically copy the .snapshot directory to an immutable and indelible (WORM) Azure blob. The lifecycle period of the immutable Azure blob determines the retention time of the backup.


Preparation​


Create an Azure storage account for the blob space


Here it is very important to select “Enable version-level immutability”, since this is what allows the backups in this account to be protected as immutable.
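If you prefer to script this step, a minimal Azure CLI sketch could look as follows. The resource group (saprg), account name (sapbackupsa) and region are hypothetical placeholders, and the --enable-alw flag (version-level immutability support at account creation) should be verified against your Azure CLI version:

# Create a StorageV2 account with version-level immutability support
# (all names and the region are hypothetical placeholders)
az storage account create \
  --name sapbackupsa \
  --resource-group saprg \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2 \
  --min-tls-version TLS1_2 \
  --enable-alw true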




Configure the access network for the storage account
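As a scripted alternative for the firewall settings, the following sketch denies public access and then allows one subnet. The account, resource group, VNet and subnet names are the hypothetical placeholders used above:

# Deny public network access by default
az storage account update \
  --name sapbackupsa \
  --resource-group saprg \
  --default-action Deny

# Allow the subnet that hosts the data broker
# (requires the Microsoft.Storage service endpoint on that subnet)
az storage account network-rule add \
  --account-name sapbackupsa \
  --resource-group saprg \
  --vnet-name sap-vnet \
  --subnet databroker-subnet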






Go to the Azure storage account




Add a container




Add a directory where the backups will be stored
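Both steps can also be scripted. Note that in flat blob storage a “directory” is just a name prefix; it becomes visible once the first blob with that prefix exists. The names below are hypothetical placeholders:

# Create the backup container
az storage container create \
  --account-name sapbackupsa \
  --name sapbackup \
  --auth-mode login

# "Directories" are virtual: uploading a zero-byte blob with the
# hana-data/ prefix makes the directory visible in the portal
az storage blob upload \
  --account-name sapbackupsa \
  --container-name sapbackup \
  --name hana-data/.placeholder \
  --file /dev/null \
  --auth-mode login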





Create the BlueXP account​


NetApp BlueXP is a unified control plane that lets you build, protect, and govern your hybrid multicloud data estate across multiple clouds and on-premises. It offers storage mobility, protection, analysis and control features for any workload, any cloud, and any data type.

Some of the benefits of NetApp BlueXP are:

  • Simplified management: You can discover, deploy, and operate storage resources on different platforms with a single interface and common policies.
  • Enhanced security: You can protect your data from ransomware, data tampering, and accidental deletion with immutable backups and encryption.
  • Cost efficiency: You can optimize your data placement and consumption with intelligent insights and automation.
  • Sustainability: You can reduce your carbon footprint and improve your data efficiency.

NetApp BlueXP

Please create a user or log in with your account.




Create your “Working Environment”




Create the Credentials









Create the Azure credentials for the communication between BlueXP and the Azure storage account


The easiest way to get this information is from the azacsnap authentication file (service principal). It is also possible to use a managed identity for the connection.
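For reference, the azacsnap authentication file is typically generated with az ad sp create-for-rbac --sdk-auth and contains exactly the values BlueXP asks for here. The values below are placeholders, and the trailing endpoint fields of the file are omitted:

{
  "clientId": "<application/client ID of the service principal>",
  "clientSecret": "<client secret>",
  "subscriptionId": "<subscription GUID>",
  "tenantId": "<tenant GUID>",
  ...
}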




Return to Working Environments and create a new working environment







If you don’t have a data broker already, create one. I think it is better to create the data broker manually: this gives you the chance to select the OS vendor and also to integrate the broker into your monitoring and management framework.


Simply deploy a D4s_v5 with Ubuntu 20.04 (or similar) in your environment and run the installation procedure on this VM. This may be the better option because you can define all the “required” settings for the VM yourself.
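A minimal CLI sketch for deploying such a VM; all names are hypothetical placeholders and the image URN refers to Ubuntu 20.04 LTS (Gen2):

# Deploy a Standard_D4s_v5 VM with Ubuntu 20.04 for the data broker
az vm create \
  --resource-group saprg \
  --name databroker01 \
  --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
  --size Standard_D4s_v5 \
  --vnet-name sap-vnet \
  --subnet databroker-subnet \
  --admin-username azureuser \
  --generate-ssh-keys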




After the data broker is created, we can specify the volume and the directory we would like to back up.

For performance and high-availability reasons it is highly recommended to create a data broker group of three or more data brokers.




Now create the relationship for the backup.




Create the relationship: drag and drop the relevant storage tile to the matching place in the configuration.


This displays the configured relationship.




Configure the source.









We now need to specify the .snapshot directory of the data volume. We only want to back up the snapshots, not the active data volume itself.
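On an NFS-mounted ANF volume the snapshots are exposed in the hidden .snapshot directory. A hypothetical listing (the mount point and the snapshot names are examples only, not taken from a real system):

# The azacsnap-created snapshots appear as subdirectories of .snapshot
ls /hana/data/H00/mnt00001/.snapshot
# daily__2024-01-02T010000  daily__2024-01-03T010000  daily__2024-01-04T010000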




Now select the storage account we created at the beginning.




Select and copy the connection string.
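The connection string can also be read with the CLI (account and resource group are the hypothetical names used above):

# Show the storage account connection string
az storage account show-connection-string \
  --name sapbackupsa \
  --resource-group saprg \
  --output tsv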


Paste the connection string into the blob target config.


We would recommend creating one container for each data volume (if you have more than one) and one for the log backup.




Create the schedule for the sync. We would recommend running the daily backup once a day and the log backup every 10 minutes.




You can exclude snapshot names here in this section. Note that you must specify what you do not want, rather than what you want; this behavior might change in a future release.




This is the sync relationship we just created.




The dashboard will show the latest information. Here it is also possible to download the sync log from the data broker.




This is the data container with all the synced backups.




Set the deletion policy


To set up the deletion policy, go to the storage account and select “Data protection”.




Now select “Manage policy”.

Here you set up the indelible (retention) time frame:


For my system I protect against deletion for only 2 days. Normally we see 14, 30 or 90 days.
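For a scripted setup, a time-based retention policy can be placed on the container with the CLI. The 14-day period and the names are hypothetical placeholders, and the command should be verified against your CLI version and immutability setup:

# Create a 14-day time-based retention (immutability) policy on the backup container
az storage container immutability-policy create \
  --account-name sapbackupsa \
  --resource-group saprg \
  --container-name sapbackup \
  --period 14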



Automatic Lifecycle Management of the Blob​


Normally you would like to have automatic deletion of the backups in place. This makes the housekeeping much easier.

To set up the deletion rule, please go to “Lifecycle management” and create a new rule.
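The same rule can be expressed as a lifecycle management policy in JSON and applied with the CLI. The rule name, the 14-day window and the sapbackup/ prefix are hypothetical placeholders:

# Lifecycle policy: delete backup blobs (and old blob versions) after 14 days
cat > lifecycle-policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-sap-backups",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 14 } },
          "version": { "delete": { "daysAfterCreationGreaterThan": 14 } }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "sapbackup/" ]
        }
      }
    }
  ]
}
EOF

az storage account management-policy create \
  --account-name sapbackupsa \
  --resource-group saprg \
  --policy @lifecycle-policy.json

Note that the lifecycle rule cannot delete a blob version while its immutability retention is still active; deletion only succeeds once the retention period has expired.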




+Add Rule




Now the new lifecycle management rule is created.



Restore​


The easiest way to restore a backup is to create a new BlueXP relationship, but in the reverse order: from the blob to a new volume. Then you do not have to deal with azcopy or anything else. This is a very easy, but time-consuming, process.
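If you nevertheless want to pull individual files back manually, a hypothetical azcopy sketch could look like this (account, container, path and SAS token are placeholders; the target is an assumed staging mount):

# Copy one backed-up snapshot directory from the blob container to a staging area
azcopy copy \
  "https://sapbackupsa.blob.core.windows.net/sapbackup/hana-data/daily__2024-01-04T010000/?<SAS-token>" \
  "/mnt/restore/" \
  --recursive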



Update the Data Broker


Normally the data broker runs an automatic update once a new version is available.

In rare cases you can run the update manually, like this:

# Download the data broker installer
curl https://cf.cloudsync.netapp.com/e07d33f7-6ac5-470e-989a-2df33b463ad4_installer -o data_broker_installer.sh



sudo -s                              # the following steps require root privileges
pm2 stop all                         # stop the running data broker processes
chmod +x data_broker_installer.sh    # make the installer executable
./data_broker_installer.sh > data_broker_installer.log   # run the installer and capture its output
pm2 start all                        # start the data broker processes again



Files > 4TB​


With the default configuration, a data broker can only copy files smaller than 4 TB to the blob. With HANA it can happen that data files grow much larger. In this case we need to adjust the data broker block size: an Azure block blob consists of at most 50,000 blocks, so the per-block buffer size determines the maximum file size (with a 120 MB buffer, 50,000 × 120 MB ≈ 6 TB).

Check Data Broker status:

https://console.bluexp.netapp.com/sync/manage

  • Expand the Data Broker Group and then expand the Data Broker.
  • Check Data Broker version.



The Data Broker should automatically be updated to the latest version as features are released.

To manually update the Data Broker version:

  • From the Data Broker VM, run: pm2 update

OR

  • Run pm2 stop all, followed by pm2 start all



Data Broker config files location:

cd /opt/netapp/databroker/config



To make changes to the buffer-size setting (this file is normally empty):

vi local.json



add the following:

{
  "protocols": {
    "azure": {
      "buffer-size": "120MB"
    }
  }
}

Afterwards restart the data broker processes (for example with pm2 stop all and pm2 start all, as shown above) so that the new buffer size takes effect.
