There are many different strategies and options for creating scalable solutions for SQL Server. Whether you’re building out solutions locally or for cloud-based environments (or both) it’s very important to think through the infrastructure requirements you have, and to understand the limitations of where you may (either now or in the future) be hosting that infrastructure.
One of these areas is fundamental database design, and in particular, deciding whether to divide and conquer the overall database you’re working with, versus keeping one big database, which may be easier now and is certainly the more traditional approach. I remember back in the days (egads, feeling old here) of MySpace, there was much work done to create a scalable architecture while still keeping performance good for people. There was much experimentation with sharding along names, along account numbers, and so on. As MySpace exploded in growth, the age of accounts and other factors came into play as well.
The overall goal was to be able to scale out using commodity hardware (a fairly new thing at the time), and breaking up this behemoth database system along arbitrary lines was a significant effort. It worked well, brought scale-out ideas into production, and provided some amazing testbed solutions to review.
Now, when it comes to instance sizing and storage allocation, whether in the cloud or with on-premises SQL Server solutions, it may be helpful to consider this approach for your own systems, though perhaps in a simpler way.
The specific problem we’re trying to solve is the (currently) stubborn storage allocation model of some cloud providers, and certainly of on-premises solutions. Typically you allocate the space you think you’ll need for a database and then hope your systems don’t outgrow it any time soon. It’s possible, though certainly not a given, that reviewing your architecture now can give you additional options in the future.
If you are able to, splitting your database (perhaps along functional lines, or by moving out tables that don’t HAVE to live together) may be an interesting way to provide for future growth. By having an application split across multiple databases, you give yourself options should things get out of hand. Specifically, it may be easier, involve less downtime, and provide a cleaner storage upgrade path to be able to move a portion of your database to different (larger) storage, a separate system, or a different SQL Server instance altogether.
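As a rough illustration of moving a table out without rewriting application queries, one approach is to relocate it to a second database and leave a synonym behind. This is only a sketch; the database, file path, and table names here are all hypothetical, and note that cross-database references like this aren’t available on every cloud offering (Azure SQL Database single databases, for example, don’t support them), so check your target platform first:

```sql
-- Hypothetical example: move a large history table out of the main
-- application database into its own database on separate storage.

-- 1. Create the new database; its files can live on different storage.
CREATE DATABASE SalesArchive
    ON (NAME = SalesArchive_data,
        FILENAME = 'E:\ArchiveStorage\SalesArchive.mdf');
GO

-- 2. Copy the table across (SELECT ... INTO creates the target table).
SELECT *
INTO SalesArchive.dbo.OrderHistory
FROM SalesDB.dbo.OrderHistory;
GO

-- 3. Drop the original and leave a synonym in its place, so existing
--    queries against SalesDB.dbo.OrderHistory keep working unchanged.
USE SalesDB;
GO
DROP TABLE dbo.OrderHistory;
CREATE SYNONYM dbo.OrderHistory FOR SalesArchive.dbo.OrderHistory;
GO
```

The synonym is what buys you flexibility later: if SalesArchive needs to move to bigger storage or a different instance, you re-point one synonym rather than touching application code.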
This might mean planning ahead for archives to be moved off, for ranges of information to be shared between servers, or whatever makes sense for your particular application. It’s one option, and with fixed-size partitions and painful storage upsizing processes currently in play, it’s an idea that could save you some blood, sweat, and tears.
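Planning for archives up front can be as simple as batching older ranges into the archive database on a schedule. Again a sketch only, with the same hypothetical names as above and an assumed `OrderDate` column; a production version would likely batch the delete to keep the transaction log manageable:

```sql
-- Hypothetical sketch: periodically move rows older than a cutoff
-- from the live database into the archive database.
DECLARE @Cutoff date = DATEADD(YEAR, -2, GETDATE());

BEGIN TRANSACTION;

-- Copy the old range into the archive...
INSERT INTO SalesArchive.dbo.OrderHistory
SELECT *
FROM SalesDB.dbo.Orders
WHERE OrderDate < @Cutoff;

-- ...then remove it from the live table, all in one transaction
-- so rows are never lost or duplicated if something fails midway.
DELETE FROM SalesDB.dbo.Orders
WHERE OrderDate < @Cutoff;

COMMIT TRANSACTION;
```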
How do you approach storage sizing in your environment? Does it change in the cloud vs. on-premises? Does it change based on your provider?