
Your max log rate on SQL Managed Instance Business Critical is now doubled



Posted by NevenaNikolic

We are excited to announce that the transaction log write throughput (referred to as "log rate") limit has now doubled for Azure SQL Managed Instance Business Critical with Premium-series hardware. The new limit of 192 MiB/s significantly improves overall performance for write-intensive scenarios such as data ingestion and index maintenance, and allows your instances to handle more workload for the same price!


What is log rate?


Log rate in SQL Managed Instance refers to the rate at which transaction log records are generated and written to the transaction log file. Log rate is directly correlated with the number of transactions processed by the database because every executed transaction generates a certain amount of data that is written to the transaction log.

 

Log rate is important because it directly impacts the performance and stability of your database operations, especially for write-intensive workloads.


Log rate limit is now 192 MiB/s


In SQL Managed Instance, the log rate limit is based on the number of vCores and the service tier. For the Business Critical tier, the log rate is limited to 4.5 MiB/s per vCore, with a new maximum of 192 MiB/s per instance and no per-database cap (i.e., a single database on a Business Critical instance can consume the entire 192 MiB/s log rate quota). To benefit from the new limit, choose Business Critical Premium-series hardware for your SQL Managed Instance.
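As a back-of-the-envelope check, the per-instance limit is simply the per-vCore rate times the vCore count, capped at the maximum for the hardware series. A minimal sketch (the 4.5 MiB/s per-vCore rate and the 96/192 MiB/s caps come from this announcement; the function and dictionary names are illustrative, not an official API):

```python
# Estimate the Business Critical log rate limit (MiB/s) for an instance.
# Rates and caps are taken from the announcement; the helper itself is
# only a sketch, not part of any Azure SDK.

PER_VCORE_MIBPS = 4.5
INSTANCE_CAP_MIBPS = {
    "standard": 96,                   # Standard-series hardware
    "premium": 192,                   # Premium-series hardware
    "premium-memory-optimized": 192,  # Premium-series memory optimized
}

def log_rate_limit(vcores: int, hardware: str) -> float:
    """Return the effective instance log rate limit in MiB/s."""
    return min(vcores * PER_VCORE_MIBPS, INSTANCE_CAP_MIBPS[hardware])

# A 16-vCore Premium-series instance: 16 * 4.5 = 72 MiB/s (below the cap)
print(log_rate_limit(16, "premium"))   # 72.0
# A 64-vCore Premium-series instance hits the 192 MiB/s instance cap
print(log_rate_limit(64, "premium"))   # 192.0
# The same 64 vCores on Standard-series hardware stays capped at 96 MiB/s
print(log_rate_limit(64, "standard"))  # 96.0
```

Note how the doubling only shows up on instances with enough vCores to exceed the old 96 MiB/s cap.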


Log write throughput limits for Business Critical:

| Hardware series | Before | Now |
| --- | --- | --- |
| Standard-series | 4 MiB/s per vCore, up to 96 MiB/s per instance | 4.5 MiB/s per vCore, up to 96 MiB/s per instance |
| Premium-series | 4 MiB/s per vCore, up to 96 MiB/s per instance | 4.5 MiB/s per vCore, up to 192 MiB/s per instance |
| Premium-series memory optimized | 4 MiB/s per vCore, up to 96 MiB/s per instance | 4.5 MiB/s per vCore, up to 192 MiB/s per instance |

 

Why is the log rate limit important?


A higher log rate limit ensures that your instance can handle more transactions when needed. The max log rate therefore matters in scenarios that demand high log throughput, such as:

 

  1. High-volume transaction processing workloads (OLTP): If your application has a high volume of data modifications (INSERT, UPDATE, DELETE), a higher log rate can help accommodate the increased transaction log generation and improve performance.
  2. Bulk data operations: When performing bulk data operations, such as bulk inserts or large-scale data imports, a higher log rate can help process these operations more efficiently and reduce the time it takes to complete them.
  3. Database replication or synchronization: If you are using features like transactional replication or other synchronization mechanisms that rely on transaction log shipping, a higher log rate can ensure timely data synchronization between primary and secondary instances.
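For the bulk-operation scenario, a rough way to see the benefit of the doubled cap is to estimate how long a purely log-rate-bound operation would take at the old and new limits. This is an idealized sketch (real bulk loads are rarely bound by log rate alone, and the 100 GiB figure is just an example):

```python
def log_bound_duration_s(log_mib: float, log_rate_mibps: float) -> float:
    """Seconds to write a given amount of transaction log at a given rate,
    assuming the operation is entirely log-rate bound (an idealization)."""
    return log_mib / log_rate_mibps

# A hypothetical bulk load that generates ~100 GiB of transaction log:
log_mib = 100 * 1024
old = log_bound_duration_s(log_mib, 96)    # previous 96 MiB/s cap
new = log_bound_duration_s(log_mib, 192)   # new 192 MiB/s cap
print(f"old cap: {old/60:.1f} min, new cap: {new/60:.1f} min")
# prints "old cap: 17.8 min, new cap: 8.9 min"
```

As expected for a log-bound workload, doubling the cap halves the floor on completion time.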


How to monitor the log rate?


Monitoring and managing the log rate is essential to ensure optimal performance and avoid potential bottlenecks in your SQL Managed Instance. You can use the QPI library to monitor the log rate and take appropriate action if needed.

 

STEP 1: Take a snapshot of the current values in the sys.dm_io_virtual_file_stats DMV.


EXEC qpi.snapshot_file_stats


STEP 2: Find information about each file (size, throughput, read, write [mbps]).


SELECT * FROM qpi.file_stats


We are interested in the information about log files (size, throughput, read, write [mbps]).


SELECT * FROM qpi.file_stats

WHERE file_name = 'log'


Note: The tempdb log file, templog, is not governed, i.e., templog does not count toward the log rate of a SQL Managed Instance.

 

Note: If you renamed any database log file so that its file_name is no longer 'log', you will need to adjust these queries to sum all log files (except templog) to get the instance log rate.


STEP 3: Find the sum of values from the column write_mbps for all log files on the instance to get the log rate.


SELECT SUM(write_mbps) AS lograte

FROM qpi.file_stats

WHERE file_name = 'log'


 

This “lograte” is now limited to 192 MiB/s!
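If you sample that summed write_mbps value periodically, you can flag when the instance is approaching its cap before throttling kicks in. A hypothetical helper (the function, its parameters, and the 90% alert threshold are all my own; only the 192 MiB/s default reflects the new Premium-series cap):

```python
def log_rate_headroom(samples_mibps: list, limit_mibps: float = 192.0,
                      alert_fraction: float = 0.9) -> dict:
    """Summarize sampled instance log rates (e.g. the summed write_mbps
    from qpi.file_stats over time) against the instance limit.
    Illustrative helper, not part of the QPI library."""
    peak = max(samples_mibps)
    return {
        "peak_mibps": peak,
        "headroom_mibps": limit_mibps - peak,
        "near_limit": peak >= alert_fraction * limit_mibps,
    }

# Three samples of the summed log write rate, in MiB/s:
stats = log_rate_headroom([120.0, 150.0, 185.0])
print(stats)  # peak 185.0, headroom 7.0, near_limit True
```

A sustained near_limit condition suggests the workload is log-rate bound; on Standard-series hardware, pass limit_mibps=96.0 instead.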


More good news: this is free of charge!


These improvements are a fantastic value for our customers since there are no associated pricing changes; we have doubled your log rate limit on SQL MI Business Critical for no extra charge.

 

These changes were automatically enabled for all existing and future Business Critical Azure SQL Managed Instances that use Premium-series hardware as of the beginning of August 2023.


Conclusion


This improvement makes Azure SQL Managed Instance an excellent choice for your performance-hungry database workloads and allows you to migrate more of your larger SQL workloads to Azure SQL MI. If you're still new to Azure SQL Managed Instance, now is a great time to get started and take Azure SQL Managed Instance for a spin!

 
