Editorials

Why Mess With Sets?

You may be wondering why I spent the last two days writing about methods to send multiple records to a stored procedure for processing. Here’s the scoop. I have been working on a system, using Entity Framework as the data access mechanism to SQL Server, and am optimizing some areas where performance is slower than desired.

First, let me explain that I chose Entity Framework (EF) as an ORM on purpose. It is a great time saver, and it performs reasonably well in most scenarios. I have found that as your objects get more complex, old-school techniques can outperform EF dramatically. Using EF to do the typical grunt work of Insert/Update/Delete/Select for the majority of the application allows me additional time to really tune those areas that need to be more efficient.

In my application I have a screen of around 100 terms that need to be maintained. The table is really simple: ID and Term. These terms are always managed as a set. You always get all of them, and you always modify all of them. Using EF, however, I have to manage each term as an individual row; if I update them, I have to update each term one at a time.

I wanted to manage these terms as a set. Retrieving them as a set in EF is easy, and updating them in EF works, but when I perform the updates, EF still issues a separate statement for each record.

Since I have implemented EF using a Repository Pattern with a Unit of Work, I can extend my repository, allowing me to pass the entire set of data to a stored procedure through ADO.NET. I can then execute the stored procedure, again through ADO.NET, and merge all of the terms in a single SQL statement, optimized for SQL Server. Since I always have a fixed number of terms, I don't even need the MERGE command; a direct update joining on ID is enough.
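As a sketch of what that stored procedure might look like (the table, type, and procedure names here are hypothetical, not from the actual system), a table-valued parameter carries the whole set in one call, and the update joins directly on ID:

```sql
-- Hypothetical table type that carries the full set of terms in one call.
CREATE TYPE dbo.TermTableType AS TABLE
(
    ID   INT NOT NULL PRIMARY KEY,
    Term NVARCHAR(200) NOT NULL
);
GO

-- Apply the incoming set in a single statement. Because the number of
-- terms is fixed and every row already exists, a direct UPDATE joined
-- on ID is enough; no MERGE is needed.
CREATE PROCEDURE dbo.UpdateTerms
    @Terms dbo.TermTableType READONLY
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE t
    SET    t.Term = s.Term
    FROM   dbo.Terms AS t
    INNER JOIN @Terms AS s
            ON s.ID = t.ID;
END;
```

On the ADO.NET side, the set can be sent as a `SqlParameter` with `SqlDbType.Structured`, populated from a `DataTable` or an `IEnumerable<SqlDataRecord>`, so the whole screen of terms travels to the server in a single round trip.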

Get into the conversation. Share an example of using different technologies for data persistence by leaving a comment, or sending an email to btaylor@sswug.org.

Cheers,

Ben