I've experienced a particular frustration with SQL Server many times over the years: the seeming impossibility of programmatically fetching a varbinary value from a table and converting it to the string value required by a procedure call in the same batch. One example is troubleshooting replication by extracting an xact_seqno varbinary(16) value from the MSdistribution_history table and using it as the value for the nchar(22) @xact_seqno_start and/or @xact_seqno_end input parameters to the sp_browsereplcmds procedure.
For a long time, I just did this by hand because there didn't seem to be a way to get the value out. Later, I sometimes used the output parameter functionality of sp_executesql to bluff my way through this. However, I just discovered that there is a built-in (but undocumented) function that allows you to transform a varbinary into a string: master.dbo.fn_varbintohexstr(). Here's a script that accomplishes the replication troubleshooting task I cited above (I assume here you already know the publication database id and agent id - finding those is another subject altogether):
declare @last_xact_seqno varbinary(16)
        ,@last_xact_seqno_str nchar(22)

-- most recent sequence number this agent has distributed without an error
select @last_xact_seqno = max(xact_seqno)
from distribution..MSdistribution_history
where agent_id = 6
  and error_id = 0

-- convert the varbinary value to the string form sp_browsereplcmds expects
set @last_xact_seqno_str = master.dbo.fn_varbintohexstr(@last_xact_seqno)

exec sp_browsereplcmds
     @xact_seqno_start = @last_xact_seqno_str
     ,@publisher_database_id = 7
Thursday, November 19, 2009
solving frequent replication distribution latency errors
We have many millions of commands a day passing through a 2-node peer-to-peer replication topology. At some point, replication lost its ability to keep up with distribution. After much research, we took the issue to Microsoft and were told that our distribution database was too big. They recommended that we set immediate_sync on the publication to false.
To me, it made no sense why this would help. We had distribution retention set to 48 hours - wouldn't transactions be retained for that amount of time regardless of whether they'd been replicated or not?
Luckily, I found this MSDN blog post, which does a great job of explaining the interaction between minimum distribution retention, maximum distribution retention, and immediate_sync. Our plan now is to set min retention to 8 hours (we need a buffer that will allow us to initialize from backup), max retention to 36 hours, and immediate_sync to false. This will keep all transactions for 8 hours; after that buffer, transactions will be deleted as soon as they've been delivered to all existing subscribers. Transactions will only be retained for the full 36 hours if replication gets seriously backed up again.
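For reference, the changes we're planning look roughly like this in T-SQL. This is a sketch rather than the exact commands we'll run: the publication name below is a placeholder, sp_changedistributiondb runs at the distributor, and sp_changepublication runs at the publisher in the publication database.

-- at the distributor: adjust minimum and maximum distribution retention (hours)
exec sp_changedistributiondb @database = N'distribution'
     ,@property = N'min_distretention', @value = N'8'
exec sp_changedistributiondb @database = N'distribution'
     ,@property = N'max_distretention', @value = N'36'

-- at the publisher, in the publication database: turn off immediate_sync
-- (if allow_anonymous is true on the publication, it has to be turned off first)
exec sp_changepublication @publication = N'MyPublication'   -- placeholder name
     ,@property = N'allow_anonymous', @value = N'false'
exec sp_changepublication @publication = N'MyPublication'
     ,@property = N'immediate_sync', @value = N'false'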
Monday, November 02, 2009
Shrinking data files, part 1
You've probably heard admonishments to avoid shrinking data files, especially on a heavily used database. The standard usage of DBCC SHRINKDATABASE or DBCC SHRINKFILE (that is, without the TRUNCATEONLY option) moves any data at the tail of the file toward the beginning. The most frequently cited negative consequence is serious fragmentation, since pages are moved one at a time to the front of the file with no attention paid to the objects they belong to. In my case, shrinking data files also caused very heavy I/O load on my staging server, with sustained average disk queues reaching up into the hundreds. Definitely not something I wanted to attempt on a heavily loaded production server.
However, I had a very large database file (>400 GB) from which I had moved a substantial portion (>75%) of the data to a different filegroup. After several attempts to shrink the data file and witnessing the resulting abysmal performance, I found myself wanting to know exactly what data was sitting at the end of the file so I could move it myself just by rebuilding the indexes. I could then use DBCC SHRINKFILE with TRUNCATEONLY to recover the space with none of the fragmentation. Most of the indexes in my database can even be rebuilt online using ALTER INDEX . . . REBUILD WITH (ONLINE=ON).
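For reference, the two building blocks of this approach look something like the following; the table, index, and file names are placeholders.

-- rebuild an index online so its pages are reallocated elsewhere in the file
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD WITH (ONLINE = ON)

-- then release the empty space at the tail of the file without moving any pages
DBCC SHRINKFILE (N'MyDataFile', TRUNCATEONLY)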
Here's how I found out what was sitting in the back rows of my data file:
First, I already knew that the GAM pages could tell me whether a particular extent is allocated or not. If you don't know what a GAM page is, check Books Online or any one of hundreds of websites and books that can offer more detail. Basically, a GAM is a page in the data file that maps the next 64,000 (or so) extents - roughly 4 GB of data - indicating whether each one is allocated. There's a GAM page every so often in a data file to cover the status of the next set of extents. What I wanted to look at was the last GAM in the data file.
As I already mentioned, the data file I'm working with is fairly large, so I wasn't exactly sure how to find the last GAM. I found documentation indicating that the first GAM of each data file is on the 3rd page of the data file (actually page id 2 because of zero-based counting), with another one every 511,232 pages, so I could've probably figured it out by looking at the total size, figuring how many GAMs there were, yadda yadda yadda. What I did instead was look at the last page in the file, and it told me what its parent GAM was!
I found the last page in the data file by getting the total number of pages in the data file:
select size, * from sys.database_files where file_id = 1
Size in the sys.database_files catalog view actually represents the number of 8 KB pages in the data file. In this case, the number of pages was 51380223. I then ran DBCC PAGE for the last page in the data file: page id = 51380222 (remember the zero-based counting). Before running that command, though, I first needed to turn on trace flag 3604 so DBCC results would be returned to the client:
DBCC TRACEON (3604)
DBCC PAGE (10,1,51380222,3)
In the DBCC PAGE command above, the first parameter is the database id, the second is the file id, the third is the page id you want to view, and the last controls the level of detail in the output.
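If you'd rather not work out the page number by hand, the same thing can be scripted. This is just a sketch - and since DBCC PAGE is undocumented, treat it strictly as a troubleshooting tool:

-- build and run DBCC PAGE for the last page of file 1 in the current database
declare @last_page int, @cmd nvarchar(200)
select @last_page = size - 1 from sys.database_files where file_id = 1
set @cmd = N'DBCC PAGE (' + cast(db_id() as nvarchar(10)) + N', 1, '
         + cast(@last_page as nvarchar(20)) + N', 3)'
DBCC TRACEON (3604)
exec (@cmd)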
In the output of the DBCC PAGE command, on the Messages tab of the output (I ran this in grid mode in SQL Server Management Studio), I scrolled down to the Allocation Status section. There was a line that read "GAM (1:x) = ALLOCATED". That x is the page number of the last GAM in the data file! I then ran DBCC PAGE again with that value:
DBCC PAGE (10,1,x,3)
The output of this command included lists of page ranges, marked ALLOCATED or NOT ALLOCATED. I scrolled all the way to the bottom to look for the last ALLOCATED page range. My output looked something like this:
(1:51372888) - (1:51372960) = NOT ALLOCATED
(1:51372968) - = ALLOCATED
(1:51372976) - (1:51373704) = NOT ALLOCATED
(1:51373712) - = ALLOCATED
(1:51373720) - (1:51374552) = NOT ALLOCATED
Pages 51373720 - 51374552 were already not allocated. I knew that if I ran DBCC SHRINKFILE with the TRUNCATEONLY option, this range of pages would be removed from the data file, and that would be all the command would do. However, I needed to clear more than 300 GB of space in all, and a few hundred pages wasn't going to cut it. So I decided to see what was on page 51373712 and rebuild that index. Based on my reading, I assumed that when rebuilding, SQL Server would go back to the GAM and other allocation pages and find the first pages in the data file that could hold the rebuilt index. This would effectively move pages away from the end of the data file, although I was aware it might free only one extent's worth of pages at the very end. So I ran DBCC PAGE again to find out what was on that data page:
DBCC PAGE(10,1,51373712,3)
The PAGE HEADER section of the output included the following information:
Metadata: IndexId = 1
Metadata: ObjectId = 437676587
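Those IDs translate into a table and index name easily enough. Something like the following works; 437676587 is just the id from the output above, so substitute whatever your own output shows, then rebuild the index it points to online as described earlier.

-- translate the ObjectId and IndexId reported by DBCC PAGE into names
select object_name(437676587) as table_name
       ,name as index_name
from sys.indexes
where object_id = 437676587
  and index_id = 1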
Aha! I rebuilt that index and, as I expected, it cleared up a good number of extents at the end of the file. After repeating this process several times, I was able to shrink my file by a sizable margin using DBCC SHRINKFILE with the TRUNCATEONLY option. The db file was actually less - rather than more - fragmented when I was done, and I never encountered those horrible disk queue lengths.
Of course, this doesn't work so well if the last allocated page is from a large index that doesn't allow REBUILD WITH (ONLINE=ON). When I figure out how to get past that, I'll post again.