Guest Jose_Manuel_Jurado
Posted December 16, 2022

Today, I worked on a case where our customer reported a performance issue running SELECT TOP 100 * FROM HeapTable. This table has around 500K rows, and the query was taking around 2 minutes. Even using SELECT TOP 100 Id FROM HeapTable we got the same performance issue. Below, I would like to share the lessons I learned about it.

Lessons Learned #1: Review whether the amount of data the table returns for these 100 rows is high. In this case the columns are normal varchar, nvarchar, etc., with no LOB data, so there is no issue with the volume of data to download and no issue at the network level (the first sketch at the end of this post shows one way to verify this).

Lessons Learned #2: Review the TSQL and the execution plan. The TSQL is: SELECT TOP 100 * FROM HeapTable. The execution takes 2 minutes to retrieve the data, while the compilation took only a few milliseconds. Running SELECT * FROM sys.dm_db_resource_stats at the moment of execution, I saw the DataIO metric at 99%-100%, so it seems that SQL Server is reading too much data, that is, too many pages. Once the data is in memory (the buffer pool), the query takes a few milliseconds, which is expected. But for this test, the idea was to clean the buffer pool before every run (if that is possible in a production environment) to get a realistic cold-cache timing (see the second sketch below).

Lessons Learned #3: Everything points to this HeapTable having heavy fragmentation that is forcing SQL Server to read a huge number of pages; in a heap, deletes and updates can leave behind empty pages and forwarded records, so a scan for just 100 rows may have to touch far more pages than expected. Since it is a heap table, we have two solutions:

1. Run ALTER TABLE HeapTable REBUILD.
2. Create a dummy table with the same structure, insert the data from HeapTable into the new dummy table, drop HeapTable, and rename the dummy table to HeapTable.

Choosing the option ALTER TABLE HeapTable REBUILD, we were able to recover the expected performance and the query took a few milliseconds (the third sketch below shows how to check the heap's physical state before rebuilding).

Enjoy!
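For Lessons Learned #1, here is a minimal sketch of how that "is it a data-volume problem?" check can be done. The table name HeapTable comes from the post; the dbo schema is an assumption.

-- Assumption: the heap is dbo.HeapTable. List any LOB / MAX columns
-- that could make even 100 rows expensive to return.
SELECT c.name AS column_name,
       t.name AS type_name,
       c.max_length    -- -1 means varchar(max)/nvarchar(max)/varbinary(max)
FROM sys.columns AS c
JOIN sys.types  AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.HeapTable')
  AND (c.max_length = -1 OR t.name IN (N'text', N'ntext', N'image', N'xml'));

-- Overall size of the table, for context.
EXEC sp_spaceused N'dbo.HeapTable';

If the first query returns no rows and sp_spaceused shows a modest data size, the slowness is not about how much data the client has to download.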
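For Lessons Learned #2, a sketch of the cold-cache test described above. DBCC DROPCLEANBUFFERS evicts clean pages from the buffer pool, so treat it as a test-environment tool; sys.dm_db_resource_stats is the Azure SQL Database DMV the post refers to, and may not exist on other SQL Server editions.

-- Flush dirty pages to disk, then drop clean pages so the next read
-- comes from storage (avoid on a busy production server).
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

-- Time the query and show how many logical/physical reads it performs.
SET STATISTICS IO, TIME ON;
SELECT TOP 100 * FROM HeapTable;
SET STATISTICS IO, TIME OFF;

-- While the query runs, the DataIO pressure shows up here:
SELECT TOP 5 end_time, avg_data_io_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

A very high physical-read count for only 100 returned rows is the symptom that points to Lessons Learned #3.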
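For Lessons Learned #3, a sketch of how to confirm the heap's physical state before rebuilding. sys.dm_db_index_physical_stats with index_id = 0 reports on the heap itself; the DETAILED mode is needed to get record counts. Again, the dbo schema is an assumption.

-- index_id 0 = the heap itself.
SELECT page_count,
       record_count,
       forwarded_record_count,          -- rows moved by updates: extra reads
       avg_page_space_used_in_percent   -- a low % means mostly-empty pages
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.HeapTable'), 0, NULL, N'DETAILED');

-- Option 1 from the post: rebuild the heap in place to reclaim empty
-- pages and remove forwarded records.
ALTER TABLE dbo.HeapTable REBUILD;

A high page_count relative to record_count, or a large forwarded_record_count, explains why SELECT TOP 100 had to read so many pages; after the rebuild, the same DMV query should show far fewer, fuller pages.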