Editorials

Happy Birthday (to SSWUG.ORG)!
We’re excited – it’s our birthday! 15 years – not too bad. We’re celebrating – click the banner above for details – and, if you have a minute and we’ve helped with your work with databases, drop us a note and let us know (swynk@sswug.org)! You can grab a severely discounted membership too – click here and take a look!

We also want to say thanks for your support all these years – and SO many answers, tips, scripts and help in general for the community of data professionals – it’s great to work with everyone!

Happy Birthday SSWUG.ORG Community!

Database Change
I have two techniques I prefer when dealing with "database code." This is not the only way to manage database change versioning … but I haven't found a better method, even in canned tools.

First, I maintain a master script for each database object such as a table, view, function, stored procedure, etc. – basically any major object that shows up in sysobjects. Keeping each master script in its own file lets me use a simple file diff tool to see what changes have been made and to compare them over time and across versions. Most ER tools also allow you to export the diagram as individual files per object, which fold easily into version control.
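
For example, a master script for a table might look like the following (the table and column names here are purely illustrative, not from an actual project):

-- Master script kept as its own file (e.g. Tables\MyTable.sql) so a plain
-- file diff shows exactly what changed between versions.
IF OBJECT_ID(N'dbo.MyTable', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.MyTable
    (
        MyTableID INT IDENTITY(1,1) NOT NULL
            CONSTRAINT PK_MyTable PRIMARY KEY,
        Name NVARCHAR(100) NOT NULL,
        CreatedDateTime DATETIME NOT NULL
            CONSTRAINT DF_MyTable_CreatedDateTime DEFAULT (GETDATE())
    )
END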

Using this technique, the way a database is supposed to look can be maintained in version control alongside the application code. A database can be constructed by executing the master scripts, keeping the schema and database code in sync with the application code.

The problem comes when you have to modify existing database objects while retaining existing data. That is when things break down. Moreover, because databases are not object oriented, you can't use OO techniques to maintain multiple versions concurrently. For example, if you add a new input parameter to a stored procedure, old code that doesn't provide that parameter may stop working.
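
One way to soften that particular break (a hedged sketch using made-up procedure and column names) is to give the new parameter a default value so existing callers keep working:

-- Hypothetical procedure: the new @ModifiedBy parameter is given a default,
-- so existing code that calls it without the parameter still runs.
ALTER PROCEDURE dbo.UpdateMyTableName
    @MyTableID INT,
    @Name NVARCHAR(100),
    @ModifiedBy NVARCHAR(50) = NULL  -- new parameter, defaulted for old callers
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.MyTable
    SET Name = @Name
    WHERE MyTableID = @MyTableID;
END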

For tables you can't simply drop and recreate them with a new schema. Data has to be manipulated if you change the schema, and often that requires some custom conversion code. Tools like Redgate's SQL Compare are nice: they compare two databases and generate scripts to modify one database to look like the other. Redgate has two products – one compares the schema and objects; the other compares the data – and both generate change scripts.

The issue with this technique is that you have to modify a database to reflect the new requirements before the comparison can be performed to create the change scripts. As a DBA proficient in SQL, I find it easier to simply create the change script myself. By writing the change script first, I don't have to spend additional time on comparisons.

Say I wish to add a new ModifiedDateTime column to a table called MyTable. I would write a change script like the following:

-- Rerunnable change script: skip the ALTER if the column is already there.
IF EXISTS
(
    SELECT 1
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE COLUMN_NAME = 'ModifiedDateTime'
      AND TABLE_NAME = 'MyTable'
)
    GOTO SkipIt

ALTER TABLE MyTable ADD ModifiedDateTime DATETIME NOT NULL
    CONSTRAINT DF_MyTable_ModifiedDateTime DEFAULT (GETDATE())

SkipIt:
PRINT 'MyTable.ModifiedDateTime is in place.'

Using templates, I can write a script like this in less time than it takes to open the table designer and generate the SQL or apply the change. Moreover, I don't run into difficulties when SQL Server Management Studio can't figure out how to modify a table.

Sometimes you have to write a more complicated script anyway, especially when changing data types, etc. So you may as well learn the DDL and DML involved and write your own.
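
As a rough sketch of what such a script can look like (the table, column, and conversion here are purely illustrative), converting a VARCHAR column to DATETIME might go through an add / convert / drop / rename sequence:

-- Illustrative only: convert MyTable.EventDate from VARCHAR to DATETIME.
-- 1. Add the new column.
ALTER TABLE MyTable ADD EventDateNew DATETIME NULL
GO

-- 2. Convert the data, skipping values that will not cast cleanly.
UPDATE MyTable
SET EventDateNew = CONVERT(DATETIME, EventDate)
WHERE ISDATE(EventDate) = 1
GO

-- 3. Drop the old column and rename the new one into its place.
ALTER TABLE MyTable DROP COLUMN EventDate
GO
EXEC sp_rename 'MyTable.EventDateNew', 'EventDate', 'COLUMN'
GO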

I then check the change script into version control along with my master scripts. It becomes part of the migration path from any previous version to the schema required by the current software bits.

The continuous build process simply retrieves the change scripts and applies them sequentially to a restored copy of the last release database. The change scripts modify structure, procedures, and data, making whatever modifications are necessary.
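
A minimal sketch of that build step, assuming an illustrative database name, backup file, and script names:

-- Restore the last released database (name and path are illustrative)...
RESTORE DATABASE MyAppDb
FROM DISK = N'C:\Builds\MyAppDb_LastRelease.bak'
WITH REPLACE
GO

-- ...then apply each change script in order. From a sqlcmd session the build
-- might simply include the scripts it pulled from version control:
-- :r C:\Builds\ChangeScripts\0001_Add_MyTable_ModifiedDateTime.sql
-- :r C:\Builds\ChangeScripts\0002_Convert_MyTable_EventDate.sql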

As you can guess, there are some tweaks that make this process even more efficient. A table tracking which changes have been applied is useful, especially if you need to know which scripts have or have not been run. A script execution tool is useful as well: it finds the scripts in version control, locates those that have not yet been applied, and executes them in the correct sequence.
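
A hedged sketch of such a tracking table (names are illustrative), along with the guard the execution tool – or each script – can use:

-- A table recording which change scripts have been applied.
IF OBJECT_ID(N'dbo.SchemaChangeLog', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaChangeLog
    (
        ScriptName NVARCHAR(255) NOT NULL
            CONSTRAINT PK_SchemaChangeLog PRIMARY KEY,
        AppliedDateTime DATETIME NOT NULL
            CONSTRAINT DF_SchemaChangeLog_AppliedDateTime DEFAULT (GETDATE())
    )
END
GO

-- The execution tool (or each script) can then skip anything already applied.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog
               WHERE ScriptName = '0001_Add_MyTable_ModifiedDateTime.sql')
BEGIN
    -- ...run the change here, then record that it was applied...
    INSERT INTO dbo.SchemaChangeLog (ScriptName)
    VALUES ('0001_Add_MyTable_ModifiedDateTime.sql')
END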

While there is a bit of manual work in this process, it is no different from the manual work done using designers. If your shop uses an ERD tool such as ERwin, ER/Studio, TOAD, ModelRight, PowerDesigner, etc., you may find you prefer to generate your change scripts from those tools. It all depends on your workflow. The point is that it is difficult to recommend a specific toolset for managing database continuous integration, because it depends on your personal skill set.

While this process sounds cumbersome, I have found that once the tools are in place and the developer becomes proficient in SQL, it is much faster than other methods of database change, and it integrates with version control immediately. Build a script execution engine and your database modifications are completely automated without any additional work on the part of the developer.

Ok, realism strikes … I don't have time to write all the scripts during a busy sprint. I had a developer use the DDL (database change) triggers available in SQL Server 2005 and later to build a utility to capture database changes, present them to the developer, and allow them to generate a change script. This worked well for everything except table and data modifications.
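
For reference, the kind of DDL trigger such a utility starts from might look like this (the trigger and log table names are illustrative):

-- Log table and trigger for capturing DDL changes in a database.
CREATE TABLE dbo.DdlChangeLog
(
    DdlChangeLogID INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_DdlChangeLog PRIMARY KEY,
    EventDateTime DATETIME NOT NULL
        CONSTRAINT DF_DdlChangeLog_EventDateTime DEFAULT (GETDATE()),
    EventData XML NOT NULL
)
GO

CREATE TRIGGER trg_CaptureDdlChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;
    -- EVENTDATA() carries the object, the statement text, and who ran it.
    INSERT INTO dbo.DdlChangeLog (EventData)
    VALUES (EVENTDATA());
END
GO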

Here are some reader responses:
Gus:
In your newsletter, you describe process steps for continuous integration. While slightly off topic, a critical point to success from my client experience is a step between 2 and 3. Getting feedback and approval on the design prior to the coding step is critical to being efficient. An agile purist might argue that feedback comes after building something, but I have found that a shared, consistent vision before coding means you are more likely to hit the mark and maintain velocity.

Editor:
Gus, anytime communication is enhanced it is valuable, even in an Agile methodology. As I have experienced it, the principle is that anything that can be documented through code has higher value than documentation of what is to be built. But before you can write a single line of code, whether test or application, you have to understand the requirements. So up-front work is always essential. Thanks for clarifying that point.

Mike:
At a previous job I was part of a team that took on this challenge, and I was very skeptical at first about how this could be done at the daily build level. The biggest hurdle an organization has to overcome is automation.

  • Automate your database standards checking as much as possible at the point of script creation. The remaining 10 or 20% that can't be caught at that point needs to be checked via T-SQL (or other code) during the build process (see the sketch after this list), so any noncompliance is caught and the DBA or developer can fix it right away. If the standards check is not automated, it becomes very difficult to get the buy-in to spend time refactoring the code later to meet the standards.
  • Automate as much as possible your creation scripts and migration script process. This step is essential, and it takes some time to plan out how to get all of the pieces in place. The following are the critical pieces to have in your process:
    • A tool that can produce the creation and migration script DDL. SQL Server Management Studio supports plugins, so this is a great place to implement it.
    • A process that gathers the scripts from source control and puts them into a deployment script.
    • Version tracking within the scripts so that the right block of code is executed against each given database for the migration. A simple table in the database can be added to track this. The deployment script must then check whether a given build's changes have already been applied before applying them.
  • The final and most important piece is migration verification. For each build, you have to be able to do a schema compare to make sure that the result of the latest creation scripts matches the prior build plus the new migration script. For example, if yesterday's build version was 1.1 and today's will be 1.2, the build process must ensure that a newly created 1.2 database matches a 1.1 database migrated to 1.2; otherwise the build needs to fail and be fixed. Tools like Redgate's SQL Compare can help with this.
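
As an example of the automated standards check mentioned in the first bullet above, a build step might run a query like this one (the "no sp_ prefix" rule is just an illustrative standard):

-- Illustrative standard: flag stored procedures that use the sp_ prefix.
SELECT s.name AS SchemaName, o.name AS ObjectName
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type = 'P'
  AND o.name LIKE 'sp[_]%'
-- The build step fails when this query returns rows, so noncompliant code
-- is caught immediately rather than refactored later.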


It is amazing how well this worked, but it did take some time and effort.

Do you have tools or techniques that work well for you? Share them with us by sending your experience to btaylor@sswug.org.

Cheers,

Ben