
Multi-Database Solutions

tl;dr There is no easy answer.  But it’s important.

It can be really challenging to work efficiently in a world where the hosting environment is typically a hybrid of on-site and off-site systems, public and private cloud configurations, and managed and self-managed services, and where, on top of that, you have multiple databases gathering different bits of information that you're expected to bring into SQL Server to do a bit of work with.

This is something we’re seeing on nearly every installation now, and we have it on our internal projects as well.  It’s very common to have MySQL, DynamoDB and SQL Server, and perhaps even SQL Server on Azure and SQL Server on Amazon, all in a single solution that needs to push and pull bits of information between the engines.

It used to be you’d approach this with ODBC drivers, or an app that talked over OLEDB, or whatever tools you had to pull information in.  Perhaps you set up SSIS to pull things together from text files and other known data sources.  But it’s just not that easy (!) now.  What we’ve seen happening more often is solutions built out of the provider’s scripting and automation environment.

This adds a whole new level of complexity to pulling the solution together, and, from what I have seen, the methods and tools for accessing the database engines are so varied in what they can and can’t do, and in how they access information, that it can frankly be maddening.

I’ve done a fair amount of automation work in Lambda, where you get to pick the scripting language and version.  That’s nice to be able to do, but I have to admit to loving the “good ol’ days” where, once you found the ODBC drivers and had them up and running, the SELECT statements (at least the more generic ones) just worked.  That just isn’t so for some of the engines.
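To make the pattern concrete, here is a minimal sketch of what that Lambda-style hop from one engine into SQL Server can look like.  Everything here is an assumption for illustration: the `source_table` name, the `DSN=TargetSqlServer` connection string, the `dbo.Staging` target, and the column list are all hypothetical, and the engine-specific imports are kept inside the handler since they only exist in the deployed package.

```python
def insert_statement(table, columns):
    """Build a parameterized INSERT for pyodbc ('?' placeholders)."""
    col_list = ", ".join(columns)
    marks = ", ".join("?" for _ in columns)
    return f"INSERT INTO {table} ({col_list}) VALUES ({marks})"

def handler(event, context):
    # boto3 and pyodbc ship with the Lambda deployment package, not the
    # local environment, so the imports live inside the handler.
    import boto3
    import pyodbc

    # Pull everything out of a (hypothetical) DynamoDB table...
    items = boto3.resource("dynamodb").Table("source_table").scan()["Items"]

    # ...shape it into uniform tuples for the SQL Server side...
    columns = ["id", "payload"]
    rows = [tuple(item.get(c) for c in columns) for item in items]

    # ...and push it into a staging table over ODBC.
    with pyodbc.connect("DSN=TargetSqlServer") as cn:
        cn.cursor().executemany(insert_statement("dbo.Staging", columns), rows)
        cn.commit()
    return {"rows_copied": len(rows)}
```

The point is less the specific calls than the shape: every engine pairing needs its own little bridge like this, and each one is another moving part to maintain.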

For key/value engines, pulling information can be crazy: either super simple, or “yeah, with this type of engine, you’re not supposed to select data SETS, you’re supposed to be pulling data pairs,” which, if you’ve been dumping a bunch of stuff into it with the expectation of pulling it all out to process somewhere else later, can get aggravating quickly.

Be aware of the solutions people are using.  Beyond being a “no duh, Steve” statement, it’s important to ask a lot of questions about the underbelly of the application solutions – are things being used as temporary holding solutions or configuration tools or … ?  What engines?  What is the access method? 

We’ve seen solutions built on scripting engines where the version they were built on is being phased out.  While that’s a great thing for security, for control and for keeping things current, when you have a complex solution you rely on for capturing data and injecting it from engine “X” into your SQL Server, being told you have to drop everything and deal with a version update on the language can be worrisome.  This is particularly true if it’s been up and running for a good while: the personnel may have changed, the version update may require code changes, and so on.

Document, document, document.  Comment the code.  Review cycles can work wonders: when new people come and go, make sure they take time to go through the systems you rely on.  With so many moving parts, getting that data collected and into (or shared with) your SQL Server is a complex, “spit and baling wire” type of situation more often than it should be.

“Hey, it works!  Don’t touch it!” is a dangerous place to be with your business and your clients’ businesses.  While that may be the reality of having to talk to all these different engines, I think we also have a responsibility to step up and make sure it’s well managed.