
Tuesday, December 01, 2009

MySQL Remote Connections for Replication

Here is another interesting problem:

When trying to set up master-to-master replication between Server A and Server B, the replication user could log in from A to B, but not the other way. The error thrown by MySQL:

ERROR 1130 (HY000): Host '' is not allowed to connect to this MySQL server

Things to check:
1. Replication user password and host
2. Ability to ping the remote server
3. nslookup the remote server, then nslookup the returned IP to verify the reverse lookup

The last one solved the problem. When a grant specifies a hostname, MySQL resolves the connecting client's IP back to a name to match it against the grant tables; a missing or stale reverse DNS entry leaves the host empty (note the blank host in the error above) and the connection is refused.
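For next time, here's the quick version of those three checks (user and host names below are made up):

-- on the server refusing the connection: does the replication
-- user exist with the expected host and password?
SELECT user, host FROM mysql.user WHERE user = 'repl';

# from the OS shell on the connecting server:
ping serverB              # basic connectivity
nslookup serverB          # forward lookup
nslookup 192.168.1.20     # reverse lookup of the IP returned above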

Much Ado about a space

After upgrading SQL Server 2005 to SP3, I had difficulty bringing SQL Server back online. The error message indicated a problem opening 'master.mdf' or a problem with the startup parameters. The parameters looked OK and I did not want to reinstall SQL Server. Online research pointed to potential master database corruption; however, the problem turned out to be a simple one: there was an extra space in front of the startup parameters in Configuration Manager. SQL Server started up after the extra space was removed.

Error occurred while opening file 'master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
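For context, the startup parameters box in Configuration Manager holds a single semicolon-delimited string along these lines (paths shown here are the SQL Server 2005 defaults, not necessarily yours):

-dC:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mastlog.ldf

A stray leading space in front of the first -d is enough to make the parameter string unparseable and produce the error above.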

Thursday, November 19, 2009

Varbinary to string

I've experienced a particular frustration with SQL Server many times over the years: the seeming impossibility of programmatically fetching a varbinary value from a table and converting it to the string value required by a procedure call in the same batch. One example: when troubleshooting replication, extracting an xact_seqno varbinary(16) value from the MSdistribution_history table and using it as the value for the nchar(22) @xact_seqno_start and/or @xact_seqno_end input parameters to the sp_browsereplcmds procedure.

For a long time, I just did this by hand because there didn't seem to be a way to get the value out. Later, I sometimes used the output parameter functionality of sp_executesql to bluff my way through this. However, I just discovered that there is a built-in (but undocumented) function that allows you to transform a varbinary into a string! It's master.dbo.fn_varbintohexstr(). Here's a script that accomplishes the replication troubleshooting task I cited above (I assume here you already know the publication database id and agent id - finding those is another subject altogether):

declare @last_xact_seqno varbinary(16)
,@last_xact_seqno_str nchar(22)

-- latest successfully distributed transaction for this agent
-- (agent_id = 6 is specific to my distributor)
select @last_xact_seqno = max(xact_seqno)
from distribution..MSdistribution_history
where agent_id = 6
and error_id = 0

-- convert the varbinary to the '0x...' string the proc expects
set @last_xact_seqno_str = master.dbo.fn_varbintohexstr(@last_xact_seqno)

exec sp_browsereplcmds
@xact_seqno_start = @last_xact_seqno_str
,@publisher_database_id = 7

Solving frequent replication distribution latency errors

We have many millions of commands a day passing through a 2-node peer-to-peer replication topology. At some point, replication lost its ability to keep up with distribution. After much research, we escalated the issue to Microsoft and were told that our distribution database was too big. They recommended that we set immediate_sync on the publication to false.

To me, it made no sense why this would help. We had distribution retention set to 48 hours - wouldn't transactions be retained for that amount of time regardless of whether they'd been replicated or not?

Luckily, I found this MSDN blog post, which does a great job of explaining the interaction between minimum distribution retention, maximum distribution retention, and immediate_sync. Our plan now is to set min retention to 8 hours (we need a buffer that will allow us to initialize from backup), max retention to 36 hours, and immediate_sync to false. This will keep all transactions for 8 hours; after that buffer, transactions will be deleted as soon as they've been delivered to all existing subscribers. Transactions will only be retained for the full 36 hours if replication gets seriously backed up again.
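For the record, here's roughly what those changes look like in T-SQL (the publication name is made up; sp_changepublication runs in the publication database at the publisher, sp_changedistributiondb at the distributor):

-- allow_anonymous depends on immediate_sync, so turn it off first
exec sp_changepublication @publication = N'MyPublication'
,@property = N'allow_anonymous', @value = N'false'
exec sp_changepublication @publication = N'MyPublication'
,@property = N'immediate_sync', @value = N'false'

-- retention values are in hours
exec sp_changedistributiondb @database = N'distribution'
,@property = N'min_distretention', @value = N'8'
exec sp_changedistributiondb @database = N'distribution'
,@property = N'max_distretention', @value = N'36'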

Monday, November 02, 2009

Shrinking data files, part 1

You've probably heard admonishments to avoid shrinking data files, especially on a heavily used database. The standard usage of DBCC SHRINKDATABASE or DBCC SHRINKFILE (without the TRUNCATEONLY option) moves any data at the tail of the file to the beginning. The most frequently cited negative consequence is serious fragmentation: the data is moved page by page to the beginning of the file, with no attention paid to the objects the pages belong to. In my case, shrinking data files also caused very heavy I/O load on my staging server, with sustained average disk queues reaching up into the hundreds. Definitely not something I wanted to attempt on a heavily loaded production server.

However, I had a very large database file (>400 GB) from which I had moved a substantial portion (>75%) of the data to a different filegroup. After several attempts to shrink the data file and witnessing the resulting abysmal performance, I found myself wanting to know exactly what that data at the end of the file was, so I could move it myself just by rebuilding the indexes. I could then use DBCC SHRINKFILE with TRUNCATEONLY to recover the space with none of the fragmentation. Most of the indexes in my database can even be rebuilt online using ALTER INDEX ... REBUILD WITH (ONLINE=ON).

Here's how I found out what was sitting in the back rows of my data file:

First, I already knew that the GAM pages could tell me whether a particular extent is allocated or not. If you don't know what a GAM page is, check Books Online or any one of hundreds of websites and books that can offer more detail. Basically, the GAM is a page in a data file that is a map of the next 64,000 (or so) extents (roughly 4 GB of file), indicating whether each is allocated or not. There's a GAM page every so often in a data file to indicate the status of the next set of extents. What I wanted to look at was the last GAM in the data file.

As I already mentioned, the data file I'm working with is fairly large, so I wasn't exactly sure how to find the last GAM. I found documentation indicating that the first GAM of each data file is on the 3rd page of the data file (actually page id 2 because of zero-based counting), and another one every 511,230 pages, so I could've probably figured it out by looking at the total size, figuring how many GAMs there were, yadda yadda yadda. What I did instead was look at the last page in the file, and it told me what its parent GAM was!

I found the last page in the data file by getting the total number of pages in the data file:

select size, * from sys.database_files where file_id = 1

The size column in the sys.database_files catalog view is expressed in 8 KB pages, so it's also the page count of the data file. In this case, the number of pages was 51380223. I then ran DBCC PAGE for the last page in the data file: page id = 51380222 (remember the zero-based counting). Before running that command, though, I first needed to turn on trace flag 3604 so DBCC results would be returned to the client:

DBCC TRACEON (3604)
DBCC PAGE (10,1,51380222,3)

In the DBCC PAGE command above, the first parameter is the database id, second is the file id, the third value is the page you want to view, and the last parameter represents the type of output you want to see.

In the output of the DBCC PAGE command (on the Messages tab - I ran this in grid mode in SQL Server Management Studio), I scrolled down to the Allocation Status section. There was a line that read "GAM (1:x) = ALLOCATED". That x is the page number of the last GAM in the data file! I then ran DBCC PAGE again with that value:

DBCC PAGE (10,1,x,3)

The output of this command included lists of page ranges, marked ALLOCATED or NOT ALLOCATED. I scrolled all the way to the bottom to look for the last ALLOCATED page range. My output looked something like this:

(1:51372888) - (1:51372960) = NOT ALLOCATED
(1:51372968) - = ALLOCATED
(1:51372976) - (1:51373704) = NOT ALLOCATED
(1:51373712) - = ALLOCATED
(1:51373720) - (1:51374552) = NOT ALLOCATED

Pages 51373720 - 51374552 were already not allocated. I knew that if I ran DBCC SHRINKFILE with the TRUNCATEONLY option, this range of pages would be removed from the data file, and that would be all the command would do. However, I needed to clear more than 300 GB of space in all, and a few hundred pages wasn't going to cut it. So I decided to see what was on page 51373712 and rebuild that index. Based on my reading, I assumed that when rebuilding, SQL Server would go back to the GAM and other related file allocation pages and find the first pages in the data file that could hold the rebuilt index. This would effectively move pages from the end of the data file, although I was aware that it could end up removing only one extent's worth of pages at the very end. To that end, I again ran DBCC PAGE to find out what was on that data page:

DBCC PAGE(10,1,51373712,3)

The PAGE HEADER section of the output included the following information:

Metadata: IndexId = 1
Metadata: ObjectId = 437676587

Aha! I rebuilt that index and, as I expected, it cleared up a bunch of extents at the end of the file. After repeating this process several times, I was able to shrink my file by a sizable margin using DBCC SHRINKFILE with the TRUNCATEONLY option. The db file was actually less - rather than more - fragmented when I was done, and I never encountered those horrible disk queue lengths.
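For anyone following along, translating those metadata ids into names (and a rebuild) looks roughly like this - the index and table names in the ALTER INDEX statement are placeholders for whatever the first two queries return:

-- which table and index own that page?
select object_name(437676587)
select name from sys.indexes
where object_id = 437676587 and index_id = 1

-- rebuild it so its pages get reallocated toward the front of the file
alter index [IX_MyIndex] on [dbo].[MyBigTable]
rebuild with (online = on)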

Of course, this doesn't work so well if the last allocated page is from a large index that doesn't allow REBUILD WITH (ONLINE=ON). When I figure out how to get past that, I'll post again.

Tuesday, July 21, 2009

SQL Server: Named Instance Or Default?

I was looking for an easy way to tell if the current instance of SQL Server is a default or named instance, since we use both. I could not find anything online. After a little bit of poking around, I figured it out - it's quite simple. Go to 'My Computer' and right-click for the 'Manage' option. Then under 'Services', look for the SQL Server service entry. If it's a named instance, it will be listed as 'SQL Server (NAME_OF_INSTANCE)'; if it's a default instance, it will simply say 'SQL Server (MSSQLSERVER)'.

To be sure, you can right-click on the SQL Server service entry, select 'Properties', and look at the path to the executable. If it's a named instance, sqlservr.exe will be followed by -s{InstanceName}; for a default instance it's -sMSSQLSERVER.

You can also open SQL Server Management Studio and right-click on the server name. A named instance is usually listed as 'ServerName\InstanceName'.
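You can also just ask SQL Server itself from a query window - SERVERPROPERTY returns NULL for a default instance:

select serverproperty('InstanceName') -- NULL means default instance
select @@servername -- returns 'ServerName\InstanceName' for a named instance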

Friday, March 27, 2009

Lost and Found?

Sometimes you just have to laugh at the crazy things that can kill a good evening.

I had this brilliant idea to change our replication setup on one of our Master-Master replication server setups this week. I got sick of having to restart MySQL every time we wanted to add a new database and have it included in the list of replicated databases - we were using replicate-do-db in our configs.

So it seemed very straightforward to change to ignore-db or ignore-table (because of cross-database updates).
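In my.cnf terms, the change was roughly this shape (the database and table names are made up):

# before: every new database had to be listed here, then MySQL restarted
# replicate-do-db = app_db1
# replicate-do-db = app_db2

# after: replicate everything except what we explicitly ignore
replicate-ignore-db = scratch_db
replicate-wild-ignore-table = tmp%.%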

After a few weeks in QA and a few weeks in staging - no problems, no issues, no complaints... let's go for it!


Yeah, as soon as we deployed and restarted MySQL to pick up the configs, replication failed and stopped!

And of course, to make it lots of fun, replication on a couple of other servers failed at the same time for unrelated reasons, and a migration of another application that night had issues too...

So I looked around a while, checked the MySQL Monitor... and, voilà - 'lost+found' table-does-not-exist errors!

Yeah, ext3 rears its ugly head again. I'm sure it's probably my responsibility to make sure that lost+found directories are cleaned up, etc, etc, but it sure made for a headache this week.

The fix (knock on wood) was straightforward - we just added replicate-wild-ignore-table=lost%.% (the wildcard form of the ignore option, since plain replicate-ignore-table doesn't take patterns).

Seemed to do the trick. Maybe we should check out ReiserFS, XFS, or ZFS.

Either way, if there's a way to break replication, I'm sure I'll find it... ;)


On that note - if you love the (Linux-based) file system you are using for your MySQL servers, I'd love to hear your comments (good, bad, or ugly).

Saturday, March 14, 2009

What just happened to the database?

It's always fun when you come into work and notice that one of your database monitors/graphs has changed from showing almost no row accesses/sec to 40,000 rows/sec in a matter of minutes. And then by lunch time they are showing 90,000 rows/sec. What's up with that?

MySQL Enterprise Monitor Row Accesses: [graph]
Well, of course the first thing you do is check all your cacti monitors, because there is obviously something wrong with the monitoring system.

Cacti InnoDB Row Accesses: [graph]
Hmm... it's not the monitor. Must be a change in the app...

So you check with everyone, and no, nothing's changed in the app. In fact, nothing even appears to be different (same basic app response times, same # of requests/sec to the app, etc, etc).

Database Activity: [graph]
So it must be a query cache issue...

Query Cache: [graph]
Apparently not... :(

So what could cause that kind of dramatic increase in index rows read/sec?

In reality, looking at the data distribution, the optimizer apparently decided to 'explain' it all away....

Current explain plan:

possible_keys: UIDX_TOTAL_RATING_OWNER_ABOUT, IDX_TOTAL_RATING_ABOUT, IDX_TOTAL_RATING_OWNER
key:           UIDX_TOTAL_RATING_OWNER_ABOUT
key_len:       180
ref:           NULL
rows:          5
filtered:      80.00
Extra:         Using where; Using index; Using filesort

1 row in set, 1 warning (0.00 sec)



Previous explain plan:

possible_keys: UIDX_TOTAL_RATING_OWNER_ABOUT, IDX_TOTAL_RATING_ABOUT, IDX_TOTAL_RATING_OWNER
key:           IDX_TOTAL_RATING_OWNER
key_len:       42
ref:           const,const
rows:          1
filtered:      100.00
Extra:         Using where

1 row in set, 1 warning (0.03 sec)

Interesting what a little change in data and a different path through the optimizer can do...
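If the optimizer insists on the worse plan, there are a couple of knobs to try (the table and column names below are hypothetical - the index name comes from the explain output above):

-- refresh index statistics so the optimizer re-evaluates its choice
analyze table ratings;

-- or pin the query to the index it used to pick
select *
from ratings force index (IDX_TOTAL_RATING_OWNER)
where owner_id = 42 and about_id = 7;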


Friday, January 23, 2009

Partition Management....(Don't forget the Management)

You'd think by now I'd know better than to cut corners. But for some reason, I still somehow seem to opt for short-term gains over long-term benefits once in a while...

A while back I was overloaded with stuff, so I set up a year's worth of new monthly partitions in one of our reporting databases. I did not, however, choose to create any nice job to automatically create new partitions or anything like that, as I did not have time (or so I thought).

Well, 'A while back' is now 18 months ago.... Hmm... 6 months of data in one partition. Ouch.

Might not be a big deal with Oracle, but it's SQL Server 2005. Sadly, just splitting 1 day of data into a new partition takes 10 minutes and 9 GB of transaction log space. Very depressing. (At least I tested and timed it before I tried to split a whole month's data into a new partition.)
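For anyone unfamiliar, adding a partition boundary is a two-step affair (the function, scheme, and filegroup names here are hypothetical). The split is only instant if no rows fall on both sides of the new boundary - otherwise SQL Server physically moves the data, which is where those 10 minutes and 9 GB of log went:

-- tell the scheme which filegroup the next partition lands on
alter partition scheme psMonthly next used [FG_Reports];

-- add the new boundary value
alter partition function pfMonthly() split range ('20090201');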

Kills me sometimes. Don't put off till tomorrow what you can do today, if tomorrow might become next year... ;)

Hopefully I'll come up with a slick way to split those partitions faster... we'll see.

And yes, I'm going to create a job to add partitions in advance.... :)