Troubleshooting High Log Utilization Due to ACTIVE_TRANSACTION

Please keep in mind that how the SQL Server transaction log is written and truncated depends on the database recovery model. This post addresses only Azure SQL Database and Azure SQL Managed Instance, which always use the Full recovery model, so other recovery models are out of scope.



The transaction log in SQL Server records all changes made to a database, enabling recovery and guaranteeing consistency. When a transaction starts, it reserves space in the transaction log to record its activity. A long-running transaction can prevent the log from being truncated and, depending on the database's write activity, cause errors and disruptions in the SQL Server environment.



It is important to point out that the transaction holding the transaction log might not be writing anything itself, but subsequent transactions that write to the log will still drive its utilization up, even if they are fast. That log space cannot be released until the oldest active transaction completes.
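
A minimal sketch of the scenario (using a hypothetical dbo.Orders table): a session that opens a transaction and never commits it pins the oldest active LSN, so the database keeps the wait reason ACTIVE_TRANSACTION while other, fast sessions continue to consume log space:

-- Session 1: hypothetical transaction that is left open (no COMMIT or ROLLBACK)
BEGIN TRANSACTION;
UPDATE dbo.Orders SET status = 'processing' WHERE order_id = 1;
-- ... the session then sits idle, still holding the transaction open ...

-- Session 2: short, fast writes still add records to the log,
-- but nothing older than the oldest open transaction can be truncated
INSERT INTO dbo.Orders (order_id, status) VALUES (2, 'new');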



To check the remaining space within the database transaction log, you can run the following T-SQL command while connected to the database:



DBCC SQLPERF(logspace);
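
If you prefer a DMV, the following query (a sketch based on sys.dm_db_log_space_usage, scoped to the current database) returns similar information:

-- Log size and usage for the current database
SELECT
      total_log_size_in_bytes / 1048576.0 AS total_log_size_mb
    , used_log_space_in_bytes / 1048576.0 AS used_log_space_mb
    , used_log_space_in_percent
FROM sys.dm_db_log_space_usage;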



If you observe high transaction log utilization, the next step is to determine what is preventing log space from being reused. You can check this with the following T-SQL query:



SELECT name, log_reuse_wait_desc FROM sys.databases;



If the log reuse wait reason is ACTIVE_TRANSACTION, you can run the following command to identify the oldest active transaction:



DBCC OPENTRAN;
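
DBCC OPENTRAN reports the oldest active transaction for the current database. If you prefer a DMV-based alternative (a sketch that assumes you only want the single oldest open transaction in the current database), you can combine the transaction-related DMVs:

-- Oldest open transaction in the current database
SELECT TOP (1)
      st.session_id
    , at.transaction_id
    , at.name AS transaction_name
    , at.transaction_begin_time
FROM sys.dm_tran_database_transactions dt
JOIN sys.dm_tran_session_transactions st ON st.transaction_id = dt.transaction_id
JOIN sys.dm_tran_active_transactions at ON at.transaction_id = dt.transaction_id
WHERE dt.database_id = DB_ID()
ORDER BY at.transaction_begin_time ASC;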



For more comprehensive insight into the transaction and the specific Log Sequence Number (LSN) it is holding, you can use the following T-SQL queries:



  1. Analyzing Log Utilization: Regardless of the log reuse wait reason, the following query provides useful insight into your database's transaction log and the factors that are holding up truncation:






-- The Concat/ParseName expressions convert each colon-separated hexadecimal LSN
-- into a decimal representation that is easier to read and compare
SELECT
      log_truncation_holdup_reason
    , log_recovery_size_gb = log_recovery_size_mb / 1024.
    , log_recovery_lsn = Concat(
          Convert(bigint, Convert(varbinary, Concat('0x', ParseName(Replace(log_recovery_lsn, ':', '.'), 3)), 1)) * 1000000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_recovery_lsn, ':', '.'), 2)), 1)) * 10000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_recovery_lsn, ':', '.'), 1)), 1)))
    , log_checkpoint_lsn = Concat(
          Convert(bigint, Convert(varbinary, Concat('0x', ParseName(Replace(log_checkpoint_lsn, ':', '.'), 3)), 1)) * 1000000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_checkpoint_lsn, ':', '.'), 2)), 1)) * 10000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_checkpoint_lsn, ':', '.'), 1)), 1)))
    , log_since_last_checkpoint_gb = log_since_last_checkpoint_mb / 1024.
    , log_min_lsn = Concat(
          Convert(bigint, Convert(varbinary, Concat('0x', ParseName(Replace(log_min_lsn, ':', '.'), 3)), 1)) * 1000000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_min_lsn, ':', '.'), 2)), 1)) * 10000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_min_lsn, ':', '.'), 1)), 1)))
    , log_end_lsn = Concat(
          Convert(bigint, Convert(varbinary, Concat('0x', ParseName(Replace(log_end_lsn, ':', '.'), 3)), 1)) * 1000000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_end_lsn, ':', '.'), 2)), 1)) * 10000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_end_lsn, ':', '.'), 1)), 1)))
    , total_log_size_gb = total_log_size_mb / 1024.
    , active_log_size_gb = active_log_size_mb / 1024.
    , log_backup_time
    , log_backup_lsn = Concat(
          Convert(bigint, Convert(varbinary, Concat('0x', ParseName(Replace(log_backup_lsn, ':', '.'), 3)), 1)) * 1000000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_backup_lsn, ':', '.'), 2)), 1)) * 10000,
          Convert(int,    Convert(varbinary, Concat('0x', ParseName(Replace(log_backup_lsn, ':', '.'), 1)), 1)))
    , log_since_last_log_backup_gb = log_since_last_log_backup_mb / 1024.
FROM sys.dm_db_log_stats(DB_ID()) db;





  2. Transaction Insight: To gain a deeper understanding of the transactions currently active in your database, including the oldest Log Sequence Number (LSN) in use, session attributes, and execution times, you can run the following query:






-- Active transactions in the current database, ordered from the oldest begin LSN
SELECT st.session_id
    , st.transaction_id
    , s.status
    , at.transaction_begin_time
    , CASE dt.database_transaction_type
          WHEN 1 THEN 'Read/Write'
          WHEN 2 THEN 'Read only'
          WHEN 3 THEN 'System'
      END AS TransactionType
    , CASE dt.database_transaction_state
          WHEN 1 THEN 'Not Initialized'
          WHEN 3 THEN 'Transaction No Log'
          WHEN 4 THEN 'Transaction with Log'
          WHEN 5 THEN 'Transaction Prepared'
          WHEN 10 THEN 'Committed'
          WHEN 11 THEN 'Rolled Back'
          WHEN 12 THEN 'Committed and Log Generated'
      END AS TransactionState
    , dt.database_transaction_begin_lsn
    , dt.database_transaction_last_lsn
    , dt.database_transaction_commit_lsn
    , dt.database_transaction_last_rollback_lsn
    , dt.database_transaction_first_repl_lsn
    , s.last_request_start_time
    , s.last_request_end_time
    , s.program_name
    , s.login_name
    , s.client_interface_name
    , cpu_time_ms = s.cpu_time
    , s.reads
    , s.writes
    , last_query = sqltext.text
FROM sys.dm_tran_session_transactions st
JOIN sys.dm_tran_active_transactions at
    ON st.transaction_id = at.transaction_id
JOIN sys.dm_exec_sessions s
    ON s.session_id = st.session_id
JOIN sys.dm_tran_database_transactions dt
    ON st.transaction_id = dt.transaction_id AND dt.database_id = DB_ID()
JOIN sys.dm_exec_connections AS c
    ON s.session_id = c.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS sqltext
ORDER BY dt.database_transaction_begin_lsn ASC;





To mitigate high log utilization caused by an active transaction, the primary course of action is to conclude the transaction by either committing it or rolling it back. If you expect the transaction to finish on its own and there is enough room for the transaction log to grow, you can simply wait for it to complete and release the log.



In the context of an Azure SQL Managed Instance, and provided there's available space within the instance, you might consider increasing the maximum size of the log file temporarily until the transaction concludes.
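
As an illustration only (a sketch with hypothetical database and file names, and assuming the instance has free storage), you could look up the log file's logical name and then raise its maximum size:

-- Logical name and current limits of the transaction log file
SELECT name, size / 128 AS current_size_mb, max_size
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Hypothetical example: temporarily raise the maximum log size
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourLogFileName', MAXSIZE = 500GB);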



If you cannot wait for the transaction to finish or increase the size of the transaction log, you may need to kill the session that is running the transaction, forcing it to roll back and release the log space.
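
For example (assuming the offending session_id reported by DBCC OPENTRAN or the query above is 72), keep in mind that rolling back a large transaction can itself take a long time:

-- Terminate the session; its open transaction is rolled back
KILL 72;

-- Check rollback progress for that session
KILL 72 WITH STATUSONLY;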



To resolve the issue and avoid recurrences, review long-running transactions to identify optimization opportunities and prevent them from running for extended periods. In addition, it is a best practice to monitor the environment so that you receive timely alerts before the transaction log approaches its limit, allowing you to intervene proactively before errors impact your end users.
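
As a starting point for that kind of monitoring (a minimal sketch with an assumed 80 percent threshold), a check like the following flags the current database when log usage gets close to the limit:

-- Warn when log utilization in the current database exceeds an assumed 80% threshold
IF EXISTS (
    SELECT 1
    FROM sys.dm_db_log_space_usage
    WHERE used_log_space_in_percent > 80
)
    PRINT 'Warning: transaction log utilization is above 80%.';

In practice this check would run from a scheduled job or be replaced by the platform's built-in alerting on log space metrics.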
