Editorials

Using In-Memory Tables in SQL Server 2014

I was reading Ben Snaidero's performance comparison of In-Memory tables in SQL Server 2014 against disk-based tables, and I was not surprised that the results were not far apart. Why would that be?

Unless you have been hiding, or are simply new to SQL Server, you have been told that SQL Server performs all of its data manipulation in cache. Background processes persist the changes to disk, so your overall performance when writing to cache is going to be virtually the same, unless your test machine does not have enough memory. That is because the actual work of the lazy writer is not charged to your specific queries, and I'm not sure you could measure it that way even if you wanted to.
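If you want to see how much of your ordinary disk-based data is already sitting in memory, here is a rough sketch of a DMV query that counts the buffer-pool pages held by each table in the current database. It assumes you have VIEW SERVER STATE permission, and the tables it returns will of course depend on your own database.

    -- Sketch: buffer-pool pages currently cached per user table in this database
    SELECT
        obj.name            AS table_name,
        COUNT(*)            AS cached_pages,
        COUNT(*) * 8 / 1024 AS cached_mb          -- 8 KB pages
    FROM sys.dm_os_buffer_descriptors AS bd
    JOIN sys.allocation_units AS au
        ON au.allocation_unit_id = bd.allocation_unit_id
       AND au.type IN (1, 3)                      -- in-row and row-overflow data
    JOIN sys.partitions AS p
        ON p.hobt_id = au.container_id
    JOIN sys.objects AS obj
        ON obj.object_id = p.object_id
    WHERE bd.database_id = DB_ID()
      AND obj.is_ms_shipped = 0
    GROUP BY obj.name
    ORDER BY cached_pages DESC;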

But what about reads? Again, the cache comes to the rescue for disk-based tables. Read-ahead threads can help, especially if your query accesses the table data sequentially and the data is stored contiguously on disk. It is the same process by which caching RAID controllers provide high performance across multiple disks.
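You can watch this happen with a couple of standard commands. The sketch below assumes a reasonably large disk-based table, called dbo.Orders here purely as a placeholder, and it should only be run on a test system because it empties the cache first.

    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;               -- start from a cold cache (test systems only)
    SET STATISTICS IO ON;

    SELECT COUNT(*) FROM dbo.Orders;     -- first run: physical reads and read-ahead reads
    SELECT COUNT(*) FROM dbo.Orders;     -- second run: logical reads only, served from cache

    SET STATISTICS IO OFF;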

Did you know you could already keep a table in memory in older versions of SQL Server? There was the ability to PIN a table in cache with DBCC PINTABLE. If anything, the problem was the opposite of eviction: a pinned table's pages were never removed from cache, even when memory was short, which is part of why the feature was deprecated and has had no effect since SQL Server 2005. The usual recommendation was not to use it anyway: buy more memory and let SQL Server manage the cache, because it balances the needs of the entire system instead of wasting resources on your one table that may not be needed right now.
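For the curious, this is roughly what the legacy syntax looked like; the database and table names are just placeholders, and on SQL Server 2005 or later the command is accepted but does nothing.

    -- Legacy buffer-pool pinning (SQL Server 2000 and earlier); no effect on later versions
    DECLARE @db int, @tbl int;
    SET @db  = DB_ID('SalesDemo');           -- placeholder database name
    SET @tbl = OBJECT_ID('dbo.Orders');      -- placeholder table name
    DBCC PINTABLE (@db, @tbl);               -- keep the table's pages in the buffer pool
    -- DBCC UNPINTABLE (@db, @tbl);          -- undo the pin when finished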

What is the takeaway? Maybe you would be better off putting your money into more RAM for your SQL Server instance than moving to in-memory tables. Or maybe in-memory databases and tables have a different application, such as caching user activity in an application, to be merged into central storage later (something like the sketch below). What's your take? Is this a useful new technology? Share your thoughts here or drop a note to btaylor@sswug.org.
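For that caching scenario, here is a minimal sketch of a SQL Server 2014 memory-optimized table with SCHEMA_ONLY durability, so rows live only in memory and are discarded on restart. It assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and all object names are purely illustrative.

    -- Hypothetical staging table for user activity; data is not persisted (SCHEMA_ONLY)
    CREATE TABLE dbo.UserActivityCache
    (
        ActivityId   BIGINT IDENTITY(1,1) NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        UserId       INT            NOT NULL,
        ActivityTime DATETIME2(3)   NOT NULL,
        Payload      NVARCHAR(500)  NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);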

Cheers,

Ben