You've probably heard admonishments to avoid shrinking data files, especially on a heavily used database. The standard usage of DBCC SHRINKDATABASE or DBCC SHRINKFILE (that is, without the TRUNCATEONLY option) moves any data at the tail of the file toward the beginning. The most frequently cited negative consequence is serious fragmentation of the data that gets moved, because pages are relocated one at a time to the front of the file with, presumably, no attention paid to the objects they belong to. In my case, shrinking data files also caused very heavy I/O load on my staging server, with sustained average disk queue lengths reaching into the hundreds. Definitely not something I wanted to attempt on a heavily loaded production server.
However, I had a very large data file (>400 GB) from which I had moved a substantial portion (>75%) of the data to a different filegroup. After several attempts to shrink the data file and witnessing the resulting abysmal performance, I found myself wanting to know exactly what data was sitting at the end of the file, so I could move it myself simply by rebuilding the indexes it belonged to. I could then use DBCC SHRINKFILE with TRUNCATEONLY to recover the space with none of the fragmentation. Most of the indexes in my database can even be rebuilt online using ALTER INDEX ... REBUILD WITH (ONLINE=ON).
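For reference, those two pieces of the plan look roughly like this; the index, table, and file id below are placeholders rather than the real objects from my database:

-- Rebuild an index online; the rebuilt copy is generally allocated from free space
-- earlier in the file, which is what pulls the data away from the tail (more on that assumption below)
ALTER INDEX MyIndex ON dbo.MyTable REBUILD WITH (ONLINE=ON)

-- Release only the unused space at the end of the data file; nothing is moved,
-- so no fragmentation is introduced (1 is the file_id of the data file)
DBCC SHRINKFILE (1, TRUNCATEONLY)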
Here's how I found out what was sitting in the back rows of my data file:
First, I already knew that GAM pages could tell me whether a particular extent is allocated or not. If you don't know what a GAM page is, check Books Online or any one of hundreds of websites and books that offer more detail. Basically, a GAM is a page in a data file that holds a bitmap covering the next 64,000 (or so) extents - roughly 500,000 pages, or about 4 GB - indicating whether each extent is allocated. There's a GAM page every so often in a data file to track the status of the next set of extents. What I wanted to look at was the last GAM in the data file.
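If you just want to see what a GAM dump looks like, the very first GAM in a data file is always page 2, so (with trace flag 3604 turned on, which I'll get to in a minute) you can view it with something like this - the database name here is a placeholder:

DBCC TRACEON (3604)
DBCC PAGE ('MyDatabase', 1, 2, 3)   -- page 2 of file 1 is that file's first GAM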
As I already mentioned, the data file I'm working with is fairly large, so I wasn't exactly sure how to find the last GAM. I found documentation indicating that the first GAM of each data file is on the 3rd page of the data file (actually page id 2 because of zero-based counting), and another one every 511,230 pages, so I could've probably figured it out by looking at the total size, figuring how many GAMs there were, yadda yadda yadda. What I did instead was look at the last page in the file, and it told me what its parent GAM was!
I found the last page in the data file by getting the total number of pages in the data file:
select size, * from sys.database_files where file_id = 1

The size column in the sys.database_files catalog view is expressed in 8-KB pages, so it gives the number of pages in the data file. In this case, the number of pages was 51380223. I then ran DBCC PAGE for the last page in the data file: page id = 51380222 (remember the zero-based counting). Before running that command, though, I first needed to turn on trace flag 3604 so DBCC results would be returned to the client:
DBCC TRACEON (3604)
DBCC PAGE (10,1,51380222,3)

In the DBCC PAGE command above, the first parameter is the database id, the second is the file id, the third value is the page you want to view, and the last parameter represents the type of output you want to see.
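Incidentally, if you'd rather not do the subtraction or the database id lookup by hand, something along these lines works too (the database name is a placeholder, and trace flag 3604 still needs to be on):

-- size is in 8-KB pages, so the last page id is simply size - 1 (zero-based)
select size - 1 as last_page_id from sys.database_files where file_id = 1

-- DBCC PAGE also accepts the database name in place of the database id;
-- substitute the page id returned by the query above
DBCC PAGE ('MyDatabase', 1, 51380222, 3)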
In the output of the DBCC PAGE command, on the Messages tab (I ran this in grid mode in SQL Server Management Studio), I scrolled down to the Allocation Status section. There was a line that read "GAM (1:x) = ALLOCATED". That x is the page number of the last GAM in the data file! I then ran DBCC PAGE again with that value:
DBCC PAGE (10,1,x,3)

The output of this command included lists of page ranges, marked ALLOCATED or NOT ALLOCATED. I scrolled all the way to the bottom to look for the last ALLOCATED page range. My output looked something like this:
(1:51372888) - (1:51372960) = NOT ALLOCATED
(1:51372968) -              =     ALLOCATED
(1:51372976) - (1:51373704) = NOT ALLOCATED
(1:51373712) -              =     ALLOCATED
(1:51373720) - (1:51374552) = NOT ALLOCATED

Pages 51373720 through 51374552 were already not allocated. I knew that if I ran DBCC SHRINKFILE with the TRUNCATEONLY option, this range of pages would be removed from the data file, and that would be all the command would do. However, I needed to clear more than 300 GB of space in all, and a few hundred pages wasn't going to cut it. So I decided to see what was on page 51373712 and rebuild that index. Based on my reading, I assumed that when rebuilding an index, SQL Server would go back to the GAM and the other allocation pages and find the first pages in the data file that could hold the rebuilt index. This would effectively move pages away from the end of the data file, although I was aware that it might end up freeing only one extent's worth of pages at the very end. To that end, I again ran DBCC PAGE to find out what was on that data page:
DBCC PAGE (10,1,51373712,3)

The PAGE HEADER section of the output included the following information:
Metadata: IndexId = 1
Metadata: ObjectId = 437676587

Aha! I rebuilt that index and, as I expected, it cleared up a bunch of extents at the end of the file. After repeating this process several times, I was able to shrink my file by a sizable margin using DBCC SHRINKFILE with the TRUNCATEONLY option. The data file was actually less - rather than more - fragmented when I was done, and I never encountered those horrible disk queue lengths.
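By the way, turning those two Metadata values into names you can feed to an ALTER INDEX statement is just a catalog lookup. Something like this should do it - 437676587 is the ObjectId from my page header above, and the names it returns are whatever your catalog says:

select object_schema_name(i.object_id) as schema_name,
       object_name(i.object_id) as table_name,
       i.name as index_name
from sys.indexes i
where i.object_id = 437676587   -- ObjectId from the page header
  and i.index_id = 1            -- IndexId from the page header (1 = the clustered index)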
Of course, this doesn't work so well if the last allocated page is from a large index that doesn't allow REBUILD WITH (ONLINE=ON). When I figure out how to get past that, I'll post again.