Editorials

Change Management Using Brute Force ORM

If your database schema and all the layers above it are kept completely in sync with your business objects, it can simplify your database schema management. Generally, I don’t prefer this kind of model. In my experience, database models and object models end up looking quite different once each is tuned for its own purpose.

However, when the pragmatic mind considers the situation, what is the value of a well-normalized database if it creates a lot of dissonance between your business objects and your storage, along with a lot of work maintaining the infrastructure that translates between the two layers?

Frankly, how is that any different from just using a NoSQL data store to store serialized instances of your objects? Ouch! Did I just say that? Well, I’m in a pragmatic state of mind.
Following is a response from Jude on the whole topic of change management, describing a technique he and his colleagues have developed for generating and deploying database changes through an automated, reproducible process.

Jude writes:
We have an in-house-built OR/M that we use for change migration. We have built an enterprise-grade platform that supports multiple applications, some integrated, some not. When we need to change or add new "entities", we design the entities and then use software to generate the necessary change scripts and .Net classes that will consume them.
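
To make that a little more concrete, here is a minimal sketch of the "design the entity, generate the script" idea. This is not Jude’s actual tooling; every type and member name here is invented purely for illustration, and it only emits a simple CREATE TABLE change script from an entity definition.

using System.Collections.Generic;
using System.Linq;

// Hypothetical entity definition that a generator might consume.
public class FieldDefinition
{
    public string Name;      // e.g. "Email"
    public string SqlType;   // e.g. "nvarchar(256)"
    public bool Nullable;
}

public class EntityDefinition
{
    public string Name;                                          // e.g. "Customer"
    public List<FieldDefinition> Fields = new List<FieldDefinition>();

    // Emit a CREATE TABLE change script following one fixed pattern:
    // a surrogate identity key plus the designed fields.
    public string ToCreateTableScript()
    {
        var columns = new List<string> { "    " + Name + "Id int identity(1,1) primary key" };
        columns.AddRange(Fields.Select(f =>
            "    " + f.Name + " " + f.SqlType + (f.Nullable ? " null" : " not null")));
        return "create table dbo." + Name + "\n(\n" + string.Join(",\n", columns) + "\n);";
    }
}

A generator built on this kind of definition can emit the matching .Net class from the same metadata, which is what keeps the database layer and the object layer from drifting apart.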

All of our database objects (tables, views, procs, relationships, keys, etc.) as well as the .Net POCO classes that consume them are built by pattern and supported by a robust Data Access Layer. If you really think about it, how many different kinds of tables are there? Not that many. So if you sit down and decide what pattern you will use to create each type of table (how you will do keys, indexes, CRUD procs, and so on), automation can follow pretty quickly.
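
As a rough illustration of what "built by pattern" might produce (the specific columns and stored procedure names below are assumptions for illustration, not details from the actual platform), a generated POCO and its conventional CRUD procs could look like this:

using System;

// Hypothetical generator output for a "Customer" entity: surrogate key plus
// audit columns, following a single fixed table pattern.
public class Customer
{
    public int CustomerId { get; set; }        // surrogate primary key, by convention
    public string Email { get; set; }
    public DateTime CreatedOn { get; set; }    // audit columns stamped by the CRUD procs
    public DateTime ModifiedOn { get; set; }
}

// The Data Access Layer would call stored procedures generated from the same pattern,
// for example: dbo.Customer_Insert, dbo.Customer_Update, dbo.Customer_Delete,
// dbo.Customer_GetById, dbo.Customer_GetAll.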

Changes are made first in a local sandbox environment by the team implementing the change. Once tested locally, the auto-generated scripts, configuration data, and code are checked into TFS. From there, the build server (upon successful build) compiles all of those changes into the current release package. It is then deployed (via our "release management" software) to a Unit Test environment (we have a database per environment), where automated sanity-check integration tests run against it.
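
As one example of the kind of sanity check that can run after the Unit Test deployment, a test might simply confirm that the schema version recorded in the database matches the version the release package expects. The test framework, connection string, and dbo.SchemaVersion table below are assumptions for illustration only.

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class DeploymentSanityChecks
{
    // Hypothetical Unit Test environment connection string.
    private const string ConnectionString =
        "Server=unittest-sql;Database=AppDb;Integrated Security=true";

    [Test]
    public void DatabaseRevisionMatchesReleasePackage()
    {
        const string expectedVersion = "3.2.1457.12";   // stamped into the build

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "select top 1 VersionNumber from dbo.SchemaVersion order by AppliedOn desc",
            connection))
        {
            connection.Open();
            var actualVersion = (string)command.ExecuteScalar();
            Assert.AreEqual(expectedVersion, actualVersion);
        }
    }
}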

If that is successful, we then deploy it to the integrated development environment for consumption by application developers. Each change is marked in the configuration data with a new revision number, so our database has a version number just like an application does: MAJOR.MINOR.BUILD.REVISION. Change scripts are named by pattern using the revision number and are applied sequentially in revision order.
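
A minimal sketch of how sequential application by revision number can work is below. It assumes, purely for illustration, that scripts are named like 0042_AddCustomerTable.sql, that each script is a single batch, and that a dbo.SchemaVersion table records what has already been applied; none of those details come from Jude’s system.

using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

// Apply pending change scripts in revision order, skipping anything at or
// below the revision the database already reports.
public static class ChangeScriptRunner
{
    public static void Run(string scriptFolder, string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            int currentRevision;
            using (var query = new SqlCommand(
                "select isnull(max(Revision), 0) from dbo.SchemaVersion", connection))
            {
                currentRevision = Convert.ToInt32(query.ExecuteScalar());
            }

            var pending = Directory.GetFiles(scriptFolder, "*.sql")
                .Select(file => new
                {
                    FullPath = file,
                    Revision = int.Parse(Path.GetFileName(file).Split('_')[0])
                })
                .Where(s => s.Revision > currentRevision)
                .OrderBy(s => s.Revision);

            foreach (var script in pending)
            {
                // A real runner would also split on GO separators, wrap each script
                // in a transaction, and insert the new revision into dbo.SchemaVersion.
                using (var apply = new SqlCommand(File.ReadAllText(script.FullPath), connection))
                {
                    apply.ExecuteNonQuery();
                }
            }
        }
    }
}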

Applying those changes to the Unit Test environment and the integrated development environment is done using the same release management software that applies them to QA and Production. Thus, before a change is released to Prod, it has been "test" released at least three times: in Unit Test, Development, and QA.

This may all sound like a lot of work, and it is, which is why we use software automation to do it all for us. For a human, migrating these changes up the chain of environments is a matter of dropping a configuration file into a folder; a Windows service picks it up and runs the deployment (it also publishes web services and apps).
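
The drop-a-file-and-let-a-service-pick-it-up step might look something like the following sketch built on FileSystemWatcher. The watched folder and the RunDeployment placeholder are invented for illustration; they stand in for the actual release management software.

using System.IO;
using System.ServiceProcess;

// Minimal deployment-triggering Windows service: a release configuration file
// dropped into a watched folder kicks off the deployment.
public class ReleaseWatcherService : ServiceBase
{
    private FileSystemWatcher watcher;

    protected override void OnStart(string[] args)
    {
        watcher = new FileSystemWatcher(@"D:\Releases\Drop", "*.config");
        watcher.Created += (sender, e) => RunDeployment(e.FullPath);
        watcher.EnableRaisingEvents = true;
    }

    protected override void OnStop()
    {
        if (watcher != null)
        {
            watcher.Dispose();
        }
    }

    private void RunDeployment(string configFile)
    {
        // Placeholder: in the real system this is where the release management
        // software reads the configuration and applies scripts, services, and apps.
    }
}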

Next we will be fitting it with a scheduling component and automated rollback procedures, so at 6:00 pm we can do an off-hours release with no business downtime and get a text message at happy hour letting us know the release completed successfully. Then we can get back to designing cool things on cocktail napkins over a beer rather than performing mundane, error-prone manual tasks late at night in the office.

I don’t want to give away too many specifics because, while I’ve worked to engineer the solution, the architecture that makes it all possible is not mine, and as such it is not my design to divulge. Hope it helps anyway, though.

There’s some food for thought. Is it worth the cost of maintaining different models for different layers, or do we take the pragmatic approach and keep them in sync? There are a number of ORM tools and frameworks that support this kind of process.

Why not share your feedback on this idea? Feel free to get into the conversation by commenting below, or drop me an email at btaylor@sswug.org.

Cheers,

Ben


Featured Article(s)
Testing Oracle Database With Pre-built VM
You don’t have to prepare your system, download an Oracle database, and then install it just to test it. You can significantly simplify the database installation process by using the pre-built Database App Development VM appliance running within Oracle VM VirtualBox.

Featured White Paper(s)
Encryption Key Management Simplified
Managing encryption keys is a fundamental step to securing encrypted data, meeting compliance requirements, and preventing da… (read more)