Guest Jose_Manuel_Jurado
Posted January 21, 2023

We have had situations where a customer needs to export 2 TB of data from Azure SQL Database using SqlPackage. Exporting this amount of data can take a long time, so we would like to share some best practices for this specific scenario (hedged sketches for several of these steps follow at the end of this post):

- If you are exporting from a General Purpose Managed Instance (remote storage), you can increase the size of the remote storage database files to improve IO performance and speed up the export.
- Temporarily increase your compute size (see the scale-up sketch below).
- Limit usage of the database during the export. If you need transactional consistency, consider exporting from a dedicated copy of the database (see the database-copy sketch below).
- Run the export from an Azure Virtual Machine with Accelerated Networking enabled, located in the same region as the database.
- Use a destination folder, for both the export file and the multiple temporary files SqlPackage creates, with enough free capacity and on SSD storage to improve write performance (see the SqlPackage sketch below).
- Consider using a clustered index with non-null values on all large tables. With a clustered index, the export can be parallelized and is therefore much more efficient. Without clustered indexes, the export service needs to perform a table scan of each entire table in order to export it, which can lead to time-outs after 6-12 hours for very large tables (see the heap-detection query below).

Review the following articles:

Lesson Learned #21: There is not enough space on the disk exporting BacPac using SSMS - Microsoft Community Hub
Lesson Learned #57: Bacpac export process doesn't include the automatic statistics - Microsoft Community Hub
Lesson Learned #25: Export/Import Azure SQL Database using Azure File Service? - Microsoft Community Hub

Besides the diagnostics parameters in SqlPackage, you can see how the queries are running by enabling SQL Auditing on the database or by using the SQL Server Profiler extension in Azure Data Studio (see the monitoring query below).

Enjoy!
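Here are a few sketches of the steps above. All server, database, path, and tier names are placeholders, not values from the original post. To temporarily increase the compute size, one option is T-SQL (the portal and Azure CLI work as well):

```sql
-- Connect to the master database of the logical server.
-- 'YourDb' and 'P6' are placeholders; pick the tier that fits your workload.
ALTER DATABASE YourDb MODIFY (SERVICE_OBJECTIVE = 'P6');

-- The scale operation is asynchronous; check its progress from master.
SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;
```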
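For the dedicated-copy approach, a database copy is transactionally consistent as of its completion time, so you can export from the copy while the source database keeps serving traffic. A sketch with placeholder names:

```sql
-- Connect to the master database of the logical server.
CREATE DATABASE YourDb_ExportCopy AS COPY OF YourDb;

-- Monitor until the copy finishes, then run the export against
-- YourDb_ExportCopy and drop the copy afterwards.
SELECT partner_database, percent_complete, replication_state_desc
FROM sys.dm_database_copies;
```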
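A minimal sketch of the SqlPackage invocation itself, run from the Azure VM; the connection string, paths, and drive letters are placeholders. TempDirectoryForTableData redirects the temporary table files to a volume with enough capacity (ideally SSD), and DiagnosticsFile captures the diagnostics mentioned above:

```cmd
sqlpackage /Action:Export ^
  /SourceConnectionString:"Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=YourDb;User ID=youruser;Password=<password>;Encrypt=True;" ^
  /TargetFile:"D:\Export\YourDb.bacpac" ^
  /p:TempDirectoryForTableData="E:\SqlPackageTemp" ^
  /DiagnosticsFile:"D:\Export\YourDb_Export.log"
```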
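To check the clustered-index recommendation, this query lists the largest tables stored as heaps (no clustered index), which are the ones that force full table scans during the export:

```sql
-- index_id = 0 means the table is a heap (no clustered index).
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.rows) AS approx_row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id = 0
GROUP BY s.name, t.name
ORDER BY approx_row_count DESC;
```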
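And to watch what the export is running while it is in progress (in addition to SQL Auditing or the SQL Server Profiler extension), you can poll the active requests from another session:

```sql
-- Active requests and the statement each one is executing,
-- excluding this monitoring session itself.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.total_elapsed_time,
       SUBSTRING(st.text,
                 (r.statement_start_offset / 2) + 1,
                 ((CASE r.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE r.statement_end_offset
                   END - r.statement_start_offset) / 2) + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id <> @@SPID;
```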