Have you ever seen 'select statman(....)' running while your SQLServer is under heavy transactional load? If not, you either don't have auto update stats turned on, or your database is not really under heavy load.
By default, auto update is turned on, and SQLServer keeps track of how often indexes and tables are changed. Once roughly 20% of the rows (or 500 rows) have changed, SQLServer starts updating statistics, on the chance that the distribution of the data has shifted enough to warrant changes to execution plans.
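If you want to check or flip this option yourself, here's a minimal sketch (the database name `MyDb` is a placeholder, and the `ALTER DATABASE` syntax assumes SQL Server 2000 or later):

```sql
-- Check whether auto update statistics is currently on (1 = on, 0 = off)
SELECT DATABASEPROPERTYEX('MyDb', 'IsAutoUpdateStatistics');

-- Turn the option on or off explicitly
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS OFF;
```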
This is all fine and dandy, except that in many cases performance can degrade while this happens. The question is, should we have this option set at all? And if we do have it set, should we also be updating statistics on a regular basis?
We have found (beyond a reasonable doubt - even on a certain company's internal SQLServer databases) that SQLServer statistics seem to become corrupt, from the optimizer's point of view, over time. One theory is that auto stats may contribute to that. It's gotten so bad on some of our databases that we have to update stats every hour in order to be sure we 'fix' the situation as soon as possible after it happens. We plan to test auto statistics on vs. off on our databases to see what kind of performance differences we get, and whether there is any noticeable drop in corrupted statistics...
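An hourly stats refresh like the one described can be done with the built-in `sp_updatestats` procedure, or per table with `UPDATE STATISTICS`, scheduled as a SQL Agent job (the table name below is a placeholder):

```sql
-- Refresh out-of-date statistics for every table in the current database
EXEC sp_updatestats;

-- Or target a single table, forcing a full scan for better-quality statistics
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

`sp_updatestats` uses sampling and only touches stats it considers stale, so the per-table `WITH FULLSCAN` form is the heavier but more thorough option.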
We'll let you know what we find...
2 comments:
Ok, here's my first post/comment. I'm an MS SQL only type of guy, but I won't discriminate on what to use to get the job done... it's just that I know the most about MS SQL, so that's what I try to use.
re: auto-updates, and remember this is slightly un-scientific, I'd recommend turning auto update off (it causes perf overhead) and opening up the fill factor on the index from 80% to 50% or lower, which will leave more room for the index to grow. Then schedule statistics update runs cyclically as needed (experiment to get this number dialed in). Also, certain types of db usage will hurt the indexes: too much thrash in the db, and too many indexes doing constant update/delete work, will hurt (say, a wireless services company that provides email to cellphone users and puts all the email msgs as text into the db because it's easier to code from Java... ;) There's too much thrash with email being copied around a db, and some of those tables had 7 or more major indexes. The db structure should be looked at and changed at that point.
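SQLizer's suggestions map to something like the following sketch (table and index names are placeholders; note that fill factor is set per index, not per database, and `DBCC DBREINDEX` is the SQL Server 2000-era rebuild command):

```sql
-- Rebuild an index with a looser fill factor (50%) to leave room for growth
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerId', 50);

-- Turn off automatic statistics updates for that table's statistics
EXEC sp_autostats 'dbo.Orders', 'OFF';
```

With auto update disabled this way, the scheduled stats runs he mentions become mandatory rather than optional, or the optimizer will eventually be working from stale numbers.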
Even though I've worked *in* the SQLServer group at MS, I still don't know enough about MS SQL to be a full-blown expert (I learn new stuff all the time). So don't necessarily take my word for it; the KB should have something on tuning these properties, and that would be definitive... I'd just play around with a test server until I proved or disproved my working theory.
- SQLizer
It would be interesting to understand whether the same applies to auto create statistics. This is the option that is supposed to create any missing statistics when a query is run.
Maybe we'll add this to some of our tests as well and see what the results are - theoretically, creating the missing statistics is a good thing...
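For completeness, auto create statistics is a separate database option from auto update and can be toggled independently (again, `MyDb` is a placeholder):

```sql
-- Check the current setting (1 = on, 0 = off)
SELECT DATABASEPROPERTYEX('MyDb', 'IsAutoCreateStatistics');

-- Toggle auto create statistics independently of auto update
ALTER DATABASE MyDb SET AUTO_CREATE_STATISTICS OFF;
```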
Post a Comment