RalfKlahr
Immutable/Indelible Backups for SAP databases
Why immutable/indelible backups?
ANF snapshots are point-in-time, read-only copies of data stored in an ANF volume. They are by definition immutable, but it is still possible to delete them. To protect the snapshots from deletion, we can copy the “daily” snapshot (created by azacsnap) to an immutable and indelible Azure blob container. This blob storage must be configured with a data protection policy that prevents deletion or modification of the snapshot copy before its retention period has expired.
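For context, the daily snapshot referenced here is typically created by a scheduled azacsnap run along the following lines (a sketch; the prefix, retention, and config file name are placeholders for your own values):

# Create a "daily" snapshot of the data volume, keeping the last 7
azacsnap -c backup --volume data --prefix daily --retention 7 --configfile azacsnap.json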
Immutable backups are backups that cannot be changed or deleted for a certain period of time. They offer several benefits for data protection, such as:
- Ransomware protection: Immutable backups are safe from malicious encryption or deletion by ransomware attacks.
- Threat prevention: Immutable backups are also resistant to internal or external threats that may try to tamper with or destroy backup data.
- Regulatory compliance: Immutable backups can help businesses meet data regulations that require preserving data integrity and authenticity.
- Reliable disaster recovery: Immutable backups can ensure fast and accurate recovery of data in case of any data loss event.
Overview of immutable storage for blob data - Azure Storage | Microsoft Learn
Configure immutability policies for blob versions - Azure Storage | Microsoft Learn
Scenario
An ANF snapshot will be created on the production/primary side of your deployed SAP system(s). ANF Cross-Region Replication (CRR) will copy the volume (including snapshots) over to the DR side. In the DR region, BlueXP will automatically copy the .snapshot directory to an immutable and indelible (WORM) Azure blob. The lifecycle period of the immutable Azure blob will determine the lifetime of the backup.
Preparation
Create an Azure storage account for the blob space
Here it is very important to select “Enable version-level immutability”, which is the foundation for the immutable backup.
Configure the access network for the storage account
Go to the Azure storage account
Add a container
Add a directory where the backups will be stored
Data container
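If you prefer to script these preparation steps rather than click through the portal, the equivalent Azure CLI calls look roughly like this (a sketch; the account name sapbackupworm, resource group, region, container name, and subnet ID are all placeholders):

# Storage account for the backup blobs
az storage account create \
  --name sapbackupworm \
  --resource-group rg-sap-backup \
  --location westeurope \
  --kind StorageV2 \
  --sku Standard_LRS \
  --allow-blob-public-access false

# Restrict network access to the data broker subnet
az storage account update \
  --name sapbackupworm \
  --resource-group rg-sap-backup \
  --default-action Deny
az storage account network-rule add \
  --account-name sapbackupworm \
  --resource-group rg-sap-backup \
  --subnet "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"

# Container with version-level immutability (WORM) enabled
az storage container-rm create \
  --storage-account sapbackupworm \
  --name hana-data \
  --enable-vlw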
Create the BlueXP account
NetApp BlueXP is a unified control plane that lets you build, protect, and govern your hybrid multicloud data estate across multiple clouds and on-premises. It offers storage mobility, protection, analysis and control features for any workload, any cloud, and any data type.
Some of the benefits of NetApp BlueXP are:
- Simplified management: You can discover, deploy, and operate storage resources on different platforms with a single interface and common policies.
- Enhanced security: You can protect your data from ransomware, data tampering, and accidental deletion with immutable backups and encryption.
- Cost efficiency: You can optimize your data placement and consumption with intelligent insights and automation.
- Sustainability: You can reduce your carbon footprint and improve your data efficiency.
NetApp BlueXP
Please create a user or log in with your account.
Create your “working environment”
Create the Credentials
Create the Azure credentials for the communication between BlueXP and the Azure storage account.
The easiest way to get this information is from the azacsnap authentication file (service principal). It is also possible to use a managed identity for the connection.
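For reference, a service principal file of the kind azacsnap uses is created with az ad sp create-for-rbac and the --sdk-auth option; it contains exactly the IDs and the secret that BlueXP asks for here (all values below are placeholders, and further endpoint fields are omitted):

{
  "clientId": "<application-id>",
  "clientSecret": "<client-secret>",
  "subscriptionId": "<subscription-id>",
  "tenantId": "<tenant-id>",
  ...
}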
Return to Working Environments and create a new working environment
If you don’t have a data broker already, create one. I think it is better to create the data broker manually: this gives you the chance to select the OS vendor and to integrate the broker into your monitoring and management framework.
Simply deploy a D4s_v5 with Ubuntu 20.04 or similar in your environment and run the installation procedure on this VM, as sketched below. This may be the better option because you can define all the “required” settings for the VM yourself.
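A minimal sketch of such a VM deployment with the Azure CLI (VM name, resource group, and network are placeholders; the image URN assumes the Ubuntu 20.04 offer is still available in your region):

# Deploy a VM to host the data broker
az vm create \
  --resource-group rg-sap-backup \
  --name vm-databroker-1 \
  --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
  --size Standard_D4s_v5 \
  --vnet-name <vnet> \
  --subnet <subnet> \
  --admin-username azureuser \
  --generate-ssh-keys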
After the data broker is created, we can specify the volume and directory we would like to back up.
For performance and high-availability reasons it is highly recommended to create a data broker group of three or more data brokers.
Now create the relationship for the backup.
Create the relationship: drag and drop the relevant storage tiles onto the matching places in the configuration.
This displays the configured relationship
Configure the source.
We now need to specify the .snapshot directory of the data volume; we only want to back up the snapshots, not the active data volume itself.
Now select the storage account we created at the beginning.
Select and copy the connection string.
Paste the connection string into the blob target configuration.
We would recommend creating one container for each data volume (if you have more than one) and one for the log backup.
Create the schedule for the sync. We would recommend syncing the data backup once a day and the log backup every 10 minutes.
You can exclude snapshot names here in this section. Note that you must specify what you do not want rather than what you want; this behavior might change in a future release.
This is the sync relationship we just created.
The dashboard will show the latest information. Here it is also possible to download the sync log from the data broker.
This is the data container with all the synced backups.
Set the Deletion Policy
To set up the deletion policy, go to the storage account and select “Data Protection”.
Now select “Manage Policy”.
Here you set up the indelible time frame:
For my system I only protect against deletion for 2 days. Typically we see 14, 30, or 90 days.
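If you prefer to script this step, a container-level time-based retention policy can also be set with the Azure CLI (a sketch using a 14-day period and the placeholder names from the preparation section):

az storage container immutability-policy create \
  --resource-group rg-sap-backup \
  --account-name sapbackupworm \
  --container-name hana-data \
  --period 14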
Automatic Lifecycle Management of the Blob
Normally you will want automatic deletion of expired backups in place; this makes the housekeeping much easier.
To set up the deletion rule, go to “Lifecycle Management” and create a new rule.
+Add Rule
Now the new lifecycle management rule is created.
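The same rule can also be expressed as a lifecycle management policy in JSON and applied with the Azure CLI (a sketch; the 14-day threshold and the prefix filter are placeholders that must match your retention requirements):

{
  "rules": [
    {
      "enabled": true,
      "name": "delete-expired-backups",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 14 } },
          "version": { "delete": { "daysAfterCreationGreaterThan": 14 } }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "hana-data/" ]
        }
      }
    }
  ]
}

Apply the policy from a file with:

az storage account management-policy create \
  --account-name sapbackupworm \
  --resource-group rg-sap-backup \
  --policy @policy.json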
Restore
The easiest way to restore a backup is to create a new BlueXP relationship in the reverse order: from the blob to a new volume. Then you do not have to deal with azcopy or anything else. This is a very easy, but time-consuming, process.
Update the Data Broker
Normally the data broker will run an automatic update once a new version is available.
In rare cases you can run the update manually like this:
# Download the data broker installer
curl https://cf.cloudsync.netapp.com/e07d33f7-6ac5-470e-989a-2df33b463ad4_installer -o data_broker_installer.sh
# Become root and stop the data broker processes
sudo -s
pm2 stop all
# Run the installer and capture its output
chmod +x data_broker_installer.sh
./data_broker_installer.sh > data_broker_installer.log
# Start the data broker processes again
pm2 start all
Files > 4TB
With the default configuration, a data broker can only copy files smaller than 4 TB to the blob. With HANA it can happen that data files grow much larger. In this case we need to increase the data broker's buffer size: Azure block blobs are limited to 50,000 blocks, so the maximum file size the broker can write is 50,000 times the configured buffer size.
Check Data Broker status:
https://console.bluexp.netapp.com/sync/manage
- Expand the Data Broker Group and then expand the Data Broker.
- Check Data Broker version.
The Data Broker should automatically be updated to the latest version as features are released.
To manually update the Data Broker version:
- From the Data Broker VM
- pm2 update
OR
- pm2 stop all, then pm2 start all
Data Broker config files location:
cd /opt/netapp/databroker/config
To make changes to the buffer-size setting (this file is normally empty):
vi local.json
add the following:
{
  "protocols": {
    "azure": {
      "buffer-size": "120MB"
    }
  }
}
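After saving local.json, restart the data broker processes so the new buffer size takes effect (assuming the pm2-managed setup shown above):

pm2 restart all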