Immutable Backup for SAP databases using Azure NetApp Files and BlueXP

[HEADING=1]Immutable/Indelible Backups for SAP databases[/HEADING]

[HEADING=1]Why immutable/indelible backups?[/HEADING]

 

ANF snapshots are point-in-time, read-only copies of the data stored in an ANF volume. They are by definition immutable, but they can still be deleted. To protect snapshots from deletion, we can copy the “daily” snapshot (created by azacsnap) to an immutable and indelible Azure blob container. This container must be configured with a data protection policy that prevents deletion or modification of the snapshot copy before its retention period has expired.

 

Immutable backups are backups that cannot be changed or deleted for a certain period of time. They offer several benefits for data protection, such as:

 

  • Ransomware protection: Immutable backups are safe from malicious encryption or deletion by ransomware attacks.
  • Threat prevention: Immutable backups are also resistant to internal or external threats that may try to tamper with or destroy backup data.
  • Regulatory compliance: Immutable backups can help businesses meet data regulations that require preserving data integrity and authenticity.
  • Reliable disaster recovery: Immutable backups can ensure fast and accurate recovery of data in case of any data loss event.

 

Overview of immutable storage for blob data - Azure Storage | Microsoft Learn

 

Configure immutability policies for blob versions - Azure Storage | Microsoft Learn

 

 

 

[HEADING=1]Scenario[/HEADING]

 

An ANF snapshot is created on the production/primary side of your deployed SAP system(s). ANF Cross-Region Replication (CRR) copies the volume (including snapshots) to the DR side. In the DR region, BlueXP automatically copies the .snapshot directory to an immutable and indelible (WORM) Azure blob container. The lifecycle period of the immutable Azure blob determines the lifetime of the backup.

 


 

[HEADING=1]Preparation[/HEADING]

 

Create an Azure storage account for the blob space

 


 

Here it is very important to select “Enable version-level immutability”; this setting is what makes the backups immutable.

 


 

 

 

Configure network access for the storage account.

 


 

 

 

 

 

Go to the Azure storage account.

 


 

 

 

Add a container

 


 

 

 

Add a directory where the backups will be stored

 


 

 

 


 

Data container

 

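For scripted deployments, the storage-account and container preparation above can be sketched with the Azure CLI. All names (sapbackupsa, sap-backup-rg, the container names) are placeholders, and the “Enable version-level immutability” checkbox still needs to be set as shown in the portal screenshots; treat this as a starting point, not a complete recipe:

```shell
# Sketch of the storage-account preparation with the Azure CLI.
# AZ defaults to "echo az" so the commands are printed, not executed;
# set AZ=az to run them against your subscription.
AZ=${AZ:-echo az}

# Storage account for the immutable backup blobs (placeholder names)
$AZ storage account create \
  --name sapbackupsa \
  --resource-group sap-backup-rg \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2

# One container per data volume plus one for the log backups
for c in hana-data hana-log; do
  $AZ storage container create --account-name sapbackupsa --name "$c"
done
```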

 

 

 

[HEADING=1]Create the BlueXP account[/HEADING]

 

NetApp BlueXP is a unified control plane that lets you build, protect, and govern your hybrid multicloud data estate across multiple clouds and on-premises. It offers storage mobility, protection, analysis and control features for any workload, any cloud, and any data type.

 

Some of the benefits of NetApp BlueXP are:

 

  • Simplified management: You can discover, deploy, and operate storage resources on different platforms with a single interface and common policies.
  • Enhanced security: You can protect your data from ransomware, data tampering, and accidental deletion with immutable backups and encryption.
  • Cost efficiency: You can optimize your data placement and consumption with intelligent insights and automation.
  • Sustainability: You can reduce your carbon footprint and improve your data efficiency.

 

NetApp BlueXP

 

Please create a user or log in with your account.

 


 

 

 

Create your “Working Environment”.

 


 

 

 

Create the Credentials

 


 

 

 

Create the Azure credentials for the communication between BlueXP and the Azure storage account

 


 

The easiest way to get this information is from the azacsnap authentication file (service principal). It is also possible to use a managed identity for the connection.

 


 

 

 

Return to Working Environments and create a new working environment

 


 

 

 


 

 

 

If you don’t have a data broker already, create one. In my view it is better to create the data broker manually: this gives you the chance to select the OS vendor and to integrate the broker into your monitoring and management framework.

 


 

 

 


 

Simply deploy a D4s_v5 with Ubuntu 20.04 (or similar) in your environment and run the installation procedure on that VM. This is often the better option because you can define all the required VM settings yourself.

 


 

 

 

After the data broker is created, we can specify the volume and directory we would like to back up.

 

For performance and high-availability reasons, it is highly recommended to create a data broker group of three or more data brokers.

 


 

 

 

Now create the relationship for the backup.

 


 

 

 

Create the relationship: drag and drop the relevant storage tile into the right configuration slot.

 


 

This displays the configured relationship

 


 

 

 

Configure the source.

 


 

 

 

 

 


 

 

 

We now need to specify the .snapshot directory of the data volume: we want to back up only the snapshots, not the active data volume itself.

 


 

 

 


 

 

 

Now select the storage account we created at the beginning.

 


 

 

 

Select and copy the connection string.

 


 

Paste the connection string into the blob target configuration.
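As an alternative to copying the connection string from the portal, it can be fetched with az storage account show-connection-string (the account and resource-group names below are placeholders):

```shell
# Print the connection string for the backup storage account.
# AZ defaults to "echo az" (dry run); set AZ=az to execute.
AZ=${AZ:-echo az}

$AZ storage account show-connection-string \
  --name sapbackupsa \
  --resource-group sap-backup-rg \
  --query connectionString \
  --output tsv
```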

 


 

We recommend creating one container for each data volume (if you have more than one) and one for the log backup.

 


 

 

 


 

 

 

Create the schedule for the sync. We recommend syncing the daily backup once a day and the log backup every 10 minutes.

 


 

 

 

You can exclude snapshot names in this section. Note that you must specify what you don’t want rather than what you do want; this behavior might change in a future release.

 


 

 

 

This is the sync relationship we just created.

 


 

 

 

The dashboard will show the latest information. Here it is also possible to download the sync log from the data broker.

 


 

 

 

This is the data container with all the synced backups.

 


 

 

 

[HEADING=1]Set the Deletion Policy[/HEADING]

 

To set up the deletion policy, go to the storage account and select “Data Protection”.

 


 

 

 

Now select “Manage Policy”.

 

Here you set up the indelible time frame:

 


 

For my system I protect against deletion for only 2 days; in practice, 14, 30 or 90 days are common.
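For reference, a time-based retention policy can also be created from the CLI. Note that az storage container immutability-policy configures the container-level variant, while the portal flow above uses version-level immutability; the names and the 2-day period are placeholders matching this example setup:

```shell
# Container-level time-based retention policy (dry run by default;
# set AZ=az to execute). Blobs in the container cannot be deleted or
# modified until the retention period has expired.
AZ=${AZ:-echo az}

$AZ storage container immutability-policy create \
  --resource-group sap-backup-rg \
  --account-name sapbackupsa \
  --container-name hana-data \
  --period 2
```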

 

 

 

[HEADING=1]Automatic Lifecycle Management of the Blob[/HEADING]

 

Normally you will want automatic deletion of expired backups in place; this makes housekeeping much easier.

 

To set up the deletion rule, go to “Lifecycle Management” and create a new rule.

 


 

 

 

+Add Rule

 


 

 

 

Now the new lifecycle management rule is created.
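For automation, an equivalent deletion rule can be expressed as a lifecycle-management policy in JSON and applied with az storage account management-policy create. The names, the hana-data/ prefix, and the 30-day threshold below are placeholders; align the threshold with your immutability period:

```shell
# Lifecycle rule: delete block blobs under hana-data/ 30 days after
# their last modification (placeholder values).
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-backups",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "hana-data/" ]
        },
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 30 } }
        }
      }
    }
  ]
}
EOF

# Apply it (dry run by default; set AZ=az to execute)
AZ=${AZ:-echo az}
$AZ storage account management-policy create \
  --account-name sapbackupsa \
  --resource-group sap-backup-rg \
  --policy @policy.json
```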

 

 

 

[HEADING=1]Restore[/HEADING]

 

The easiest way to restore a backup is to create a new BlueXP relationship in the reverse direction: from the blob container to a new volume. That way you do not have to deal with azcopy or other tools. This is a very easy but time-consuming process.

 

 

 

[HEADING=1]Update the Databroker[/HEADING]

 

Normally the data broker will run an automatic update once a new version is available.

 

In rare cases you may need to run the update manually, like this:

 

curl https://cf.cloudsync.netapp.com/e07d33f7-6ac5-470e-989a-2df33b463ad4_installer -o data_broker_installer.sh

 

 

 

sudo -s

pm2 stop all

chmod +x data_broker_installer.sh

./data_broker_installer.sh > data_broker_installer.log

pm2 start

 

 

 

[HEADING=1]Files > 4 TB[/HEADING]

 

With the default configuration, a data broker can only copy files smaller than 4 TB to the blob container. With HANA, data files can grow much larger than that. In this case we need to adjust the data broker’s buffer size.
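The size limit comes from Azure block blobs allowing at most 50,000 committed blocks per blob; assuming the data broker uploads one block per buffer, the maximum blob size is roughly buffer-size × 50,000. A quick check for the 120 MB value used in this post:

```shell
# Max blob size = buffer size x 50,000 blocks (Azure block-blob limit)
BUFFER_MB=120
MAX_BLOCKS=50000
LIMIT_GB=$(( BUFFER_MB * MAX_BLOCKS / 1024 ))
echo "A ${BUFFER_MB} MB buffer allows blobs up to ~${LIMIT_GB} GB"
```

That comes to roughly 5859 GB, comfortably above 4 TB for large HANA data files.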

 

Check Data Broker status:

 

https://console.bluexp.netapp.com/sync/manage

 

  • Expand the Data Broker Group and then expand the Data Broker.
  • Check Data Broker version.

 

 

 

The Data Broker should automatically be updated to the latest version as features are released.

 

To manually update the Data Broker version:

 

  • From the Data Broker VM, run: pm2 update

 

OR

 

  • pm2 stop all, then pm2 start all

 

 

 

Data Broker config files location:

 

cd /opt/netapp/databroker/config

 

 

 

To make changes to the buffer-size setting, edit the local.json file (it is normally empty):

 

vi local.json

 

 

 

add the following, then restart the data broker (for example with pm2 restart all) so the change takes effect:

 

{
  "protocols": {
    "azure": {
      "buffer-size": "120MB"
    }
  }
}

 
