
Wednesday, November 17, 2004

SQLServer: To Lock or NOLOCK

I don't think anyone would argue that one of the biggest problems with SQLServer (besides the fact that it runs on Windows) is how terrible its locking and lock escalation process is.

Then, when inserts start to slow down, the first thing developers do is modify all the selects to use the NOLOCK hint. Yeah, in theory that makes sense, but not only can it return inconsistent (dirty) data, it also ignores the real issue. We need to focus our tuning efforts on how we lock and on whether we are forcing lock escalation (from row to page to table).
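For illustration, here's a minimal sketch (the orders table and its columns are made up, not from any real schema): the first query is the usual quick fix, the second asks the engine for fine-grained locks instead of skipping locking entirely:

-- the usual "quick fix": read without taking shared locks at all;
-- this can return uncommitted (dirty) rows
SELECT order_id, status
FROM orders WITH (NOLOCK)
WHERE status = 'PENDING'

-- a more targeted option: request row-level locks so a big scan
-- is less likely to escalate and block the inserts
SELECT order_id, status
FROM orders WITH (ROWLOCK)
WHERE status = 'PENDING'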

The other, not so well known, issue is that SQLServer has a bug that can actually crash the server (it happened to us) when a select with NOLOCK runs while a delete happens at the same time...

-----------------------------------------------------------------------

FIX: A query may fail with retail assertion when you use the NOLOCK hint or the READ UNCOMMITTED isolation level

Article ID: 827714
Last Review: August 13, 2004
Revision: 3.0

SYMPTOMS

When you run a query with the NOLOCK hint or with the transaction isolation level set to READ UNCOMMITTED, and a concurrent user is deleting rows from the table that is accessed by the query, the query may fail with a retail assertion, and you may receive an assertion error message that is similar to the following:
Msg 3624
Location: Drive:\sql\ntdbms\storeng\drs\include\record.inl:1447
Expression: m_SizeRec > 0 && m_SizeRec <= MAXDATAROWS
SPID: 63
Process ID: 1776

-or-

Msg 3624
Location: recbase.cpp:1378
Expression: m_offBeginVar <>

CAUSE

When the Microsoft SQL Server engine tries to search for a row while a query uses the NOLOCK hint or the READ UNCOMMITTED transaction isolation level, the target row may be deleted. Before you install this fix, SQL Server 2000 may incorrectly handle this situation, and the query may fail with the assertion that is mentioned in the "Symptoms" section of this article.

Note: The READ UNCOMMITTED transaction isolation level is also known as a "dirty read."

RESOLUTION

A supported hotfix is now available from Microsoft, but it is only intended to correct the problem that is described in this article. Only apply it to systems that are experiencing this specific problem. This hotfix may receive additional testing. Therefore, if you are not severely affected by this problem, Microsoft recommends that you wait for the next Microsoft SQL Server 2000 service pack that contains this hotfix.

To resolve this problem immediately, contact Microsoft Product Support Services to obtain the hotfix.

Tuesday, November 16, 2004

SQLServer: Statistics - Auto vs Manual update ?

Have you ever seen 'select statman(....)' running while your SQLServer is under heavy transactional load? If not, you either don't have auto update stats turned on, or your database is not really under heavy load.

By default, auto update is turned on, and SQLServer keeps track of how often tables and indexes are changed. Once roughly 20% of a table's rows (or 500 rows, for small tables) have changed, SQLServer starts updating statistics, on the chance that the distribution of the data has changed enough to require different execution plans.
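You can check when each index's statistics were last refreshed; here's a quick sketch (big_table is a stand-in name, not one of ours):

-- last statistics refresh time for each index (and auto-created stat) on a table
SELECT name, STATS_DATE(id, indid) AS stats_last_updated
FROM sysindexes
WHERE id = OBJECT_ID('big_table')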

This is all fine and dandy, except that in many cases performance degrades while this happens. The question is: should we have this set or not? And if we do, should we also be updating statistics on a regular schedule?

We have found (beyond a reasonable doubt - even on a certain company's internal SQLServer databases) that SQLServer statistics seem to become corrupt, from the optimizer's point of view, over time. One theory is that auto stats may contribute to that. It's gotten so bad on some of our databases that we have to update stats every hour to be sure we 'fix' the situation as soon as possible after it happens. We plan on testing auto statistics on vs off on our databases to see what kind of performance differences we get, and whether there is any noticeable drop in corrupted statistics...
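For reference, the manual side of that test looks roughly like this (testdb and big_table are hypothetical names, not our actual databases):

-- refresh stats for one heavily modified table, reading every row
UPDATE STATISTICS big_table WITH FULLSCAN

-- or refresh stats for every table in the current database
EXEC sp_updatestats

-- flip auto update off for the test run, then back on afterwards
EXEC sp_dboption 'testdb', 'auto update statistics', 'false'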

We'll let you know what we find...

Friday, November 12, 2004

ORACLE: Sending Logs/Files via mailx

This may be irrelevant depending on your choice of monitoring tools, but I tend to prefer good old fashioned shell scripts on UNIX. One issue I've always had with mailx is being able to attach error files.

Here's a sample script that seems to work on most UNIX flavors with sh, bash, or ksh:


#!/bin/sh
#
DT=`date +%d%m%y`
# append a sample ORA- error to today's log (stand-in for the real job's output)
echo "Hey - ORA- error !" >> ${ERR_DIR}/$0.$DT.err 2>&1
# count ORA- lines in the log; sed strips the whitespace padding from wc
ORA_ERRS=`grep -i ora- ${ERR_DIR}/$0.$DT.err | wc -l | sed "s/ //g"`
if [ $ORA_ERRS -gt 0 ]
then
# tilde escapes in the body: "~< ! cmd" pulls cmd's output into the mail,
# "~." ends input; uuencode turns the log into a mailable attachment
mailx -s "$0 ORA Errors! ($DT)" dba@x.com <<-EOF
WARNING! - $0 ($DT) had errors! Error log is attached.
~< ! uuencode ${ERR_DIR}/$0.$DT.err $0.$DT.err
~.
EOF
fi
#
#

Thursday, November 11, 2004

SQLServer: When will we have a real optimizer ?

Yup...

It appears that even in SQL Server 2000, the optimizer struggles to do its job without current statistics. So far it's very inconsistent, but as our databases grow in row counts and transaction volume, we find that we need to continually update statistics.

In theory, if the data distribution does not change much over time (even as the data grows in size), the optimizer should choose plans similar to the ones it chose before for traversing the same objects. In practice, however, it appears that as data grows, regardless of distribution, the SQLServer optimizer spends more and more time evaluating which plan(s) to choose. This is easy to observe by watching CPU over time on identical queries.
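One easy way to watch it (big_table and some_col are hypothetical stand-ins):

-- run the identical query periodically and compare the
-- "SQL Server parse and compile time" numbers it prints
SET STATISTICS TIME ON
SELECT COUNT(*) FROM big_table WHERE some_col = 42
SET STATISTICS TIME OFF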

We have reverted to updating stats every hour on our heavily used databases in order to keep this *feature* from occurring.

We keep hoping SQLServer will grow up sooner than later....

Wednesday, November 10, 2004

SQLServer: Access from UNIX ?

Ahh... The age old question. Yup - who really wants to use ODBC anyway?

In an effort to consolidate all perl parsing/transformation code for all database extractions we went with perl DBI on Linux. Initially, we figured, no problem.

Here are some things we found out:

  1. Default perl install on the Fedora Core distribution of Linux had issues with SybPerl. Resolution: grab the latest stable build from perl.org and re-compile without the -ldb option
  2. Install the following packages: Bundle::DBI, DBD::Sybase, freetds
  3. The SYBASE environment variable must be set to the install location (in our case /usr/local)
  4. Perl bind variables via ? are not supported. Still investigating whether there is a good workaround for this issue.

All in all, once perl was re-compiled we didn't really have any issues... knock on wood...

Tuesday, November 09, 2004

SQLServer: DataWarehouses and DATETIME

So we want to grab yesterday's significant transactions for our datawarehouse, but we're not quite sure exactly what time our DTS extraction jobs will run.

What is the easiest way to get all rows whose datetime column entry is from yesterday? With Oracle we just use TRUNC(SYSDATE) - 1 and TRUNC(SYSDATE), but I haven't been able to find a comparable trunc function in SQLServer.

Here's what we've come up with... (Ugly, but effective)



-- build 'YYYY-M-D' for today and cast it to DATETIME (i.e., today at midnight)
SELECT *
FROM audit
WHERE moddate >= CAST(CAST(DATEPART(YEAR, GETDATE()) AS VARCHAR)
                 + '-' + CAST(DATEPART(MONTH, GETDATE()) AS VARCHAR)
                 + '-' + CAST(DATEPART(DAY, GETDATE()) AS VARCHAR) AS DATETIME) - 1  -- yesterday 00:00
  AND moddate <  CAST(CAST(DATEPART(YEAR, GETDATE()) AS VARCHAR)
                 + '-' + CAST(DATEPART(MONTH, GETDATE()) AS VARCHAR)
                 + '-' + CAST(DATEPART(DAY, GETDATE()) AS VARCHAR) AS DATETIME)      -- before today 00:00



[ audit table has various fields, with moddate as datetime datatype. ]
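For what it's worth, there's also a shorter idiom that truncates the time portion with plain date arithmetic instead of string conversion; here's a sketch against the same audit table:

-- DATEDIFF(DAY, 0, GETDATE()) counts whole days since day 0 (1900-01-01);
-- adding them back onto day 0 yields today at midnight, no strings needed
SELECT *
FROM audit
WHERE moddate >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) - 1, 0)
  AND moddate <  DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)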