Editorials

SQL Server Feature Bloat

It seems like the database systems I’ve been working on are getting bigger and bigger. I remember the days when a 100-gigabyte database was considered large. Today, a terabyte-sized database is not uncommon. Are we saving that much more information, or is there something else going on?

Well, for one thing, if you are using SQL Server, the page size quadrupled from 2 KB to 8 KB back in SQL Server 7.0, so larger rows fit on a single page. Internationalized character sets have also doubled the space allocated for strings: the Unicode types NCHAR, NVARCHAR, and NTEXT store two bytes per character instead of one.
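
You can see the doubling for yourself with DATALENGTH, which reports storage size in bytes rather than characters; a quick sketch:

    -- DATALENGTH returns storage size in bytes, not characters.
    SELECT DATALENGTH('feature bloat')  AS varchar_bytes,   -- 13
           DATALENGTH(N'feature bloat') AS nvarchar_bytes;  -- 26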

The SQL engine has also been optimized for performance and for tracking statistics. The net result is additional space allocated to metadata that is not visible to the typical user.
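
If you want to see how much of a table’s footprint is actual data versus index and overhead, sp_spaceused gives a quick breakdown, and sys.stats lists the statistics the engine has quietly created on its own. A small sketch; dbo.Orders here is just a placeholder for one of your own tables:

    -- Space breakdown for one table: reserved vs. data vs. index vs. unused.
    EXEC sp_spaceused N'dbo.Orders';

    -- Statistics the engine created automatically
    -- (auto-created ones have names starting with _WA_Sys_).
    SELECT s.name, s.auto_created
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID(N'dbo.Orders');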

There is also a lot of data being tracked by the default trace alone. This data is especially useful when hunting down poorly performing queries, locking, and blocking. Data captured in this manner drives a number of the standard reports found in SQL Server Management Studio, and similar diagnostic detail is exposed through Dynamic Management Views.
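
You can poke at the default trace yourself: sys.fn_trace_getinfo returns its configuration, including the trace file path, and sys.fn_trace_gettable reads that file back as an ordinary result set. A sketch, assuming the default trace is enabled (it is out of the box):

    -- Find the default trace file (usually trace id 1; property 2 is the file path).
    DECLARE @path NVARCHAR(260);

    SELECT @path = CONVERT(NVARCHAR(260), value)
    FROM sys.fn_trace_getinfo(DEFAULT)
    WHERE traceid = 1 AND property = 2;

    -- Read the captured events back as an ordinary result set.
    SELECT TOP (50) StartTime, EventClass, TextData, DatabaseName
    FROM sys.fn_trace_gettable(@path, DEFAULT)
    ORDER BY StartTime DESC;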

Is this considered bloat? Perhaps it is, right up until your database server stops performing the way it did yesterday and you need to find out why. Do you think these SQL Server enhancements may result in other engines being considered more often? If so, does it really matter? Share your comments with us or drop a line to btaylor@sswug.org.

Cheers,

Ben