Featured Article(s)
Step By Step DB2 9.X install on AIX Manual Method – Part 1
Free Webcast:
Get Your Implementation Organized
When it comes to SharePoint environments, if you don’t plan ahead, your environment can quickly become a disorganized mess of IIS, SQL & SharePoint components. Suddenly you don’t know which web application maps to which application pool in IIS, or even worse, you don’t know which service accounts perform which functions! This session will cover strategies and provide recommendations on ways that SharePoint administrators can rein in their farm and make sure that no matter who is managing it, everyone is on the same page. Organization is key, and this session will help get you one step closer to organization nirvana.
Presented by: Christopher Regan
> Register Now
> Live date: 8/25/2010 at 12:00 Pacific
The Cloud … and Tuning
I asked last week about tuning, the cloud and whether having your systems based there might make tuning obsolete. Lots of feedback on this one!
Two Davids wrote in – the first wrote "Tuning and optimization is always needed. This reminds me of a Unix server deployment some years back. The company I was with went from some pretty small systems to a fully loaded Solaris E10K supporting a large database. The system administrator indicated that we had virtually unlimited processing power. We developed our DB application without regard for system resources and promptly flooded that server.
There is no such thing as unlimited processing power. Everything has limits."
…and the second wrote "The answer is a resounding NO, the cloud is not the ultimate “throw hardware at it” problem solver. The reason is that you’ll be hosted alongside many other applications in the cloud and, at least for SQL Azure, there are batches of new throttling errors to keep any one session from consuming too many resources. In fact, because of those errors, processes that pushed your local server to extremes might not run at all in the cloud unless you find a new, more efficient approach. Or, at least, an approach that plays to the cloud’s strengths, like sharding. In most cases, that means a considerable amount of re-engineering."
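The sharding approach David mentions can be sketched in a few lines. This is a minimal illustration, not SQL Azure's actual federation API: the shard names and routing function below are hypothetical, and the idea is simply that a stable hash of a key decides which database holds that key's rows, spreading load across several smaller servers.

```python
# Minimal sketch of hash-based shard routing (hypothetical shard names).
# A stable hash (not Python's built-in hash(), which varies between runs)
# maps each key to one shard, so load spreads across several databases.
import hashlib

SHARDS = ["shard0", "shard1", "shard2", "shard3"]  # assumed shard list

def shard_for(key: str) -> str:
    """Return the shard that owns this key, deterministically."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always routes to the same shard:
print(shard_for("customer:42") == shard_for("customer:42"))  # True
```

The catch, and the source of the re-engineering David describes, is that queries joining rows on different shards can no longer run on a single server.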
Chris also wrote in to say "A bad design is a bad design.
I have seen a number of cases where ordering bigger and better hardware has not solved issues. Consider code that deadlocks: you can add hardware so the processors churn faster, the memory caches more, and there are even more hard drives, but if the code has been developed in a fashion where it will deadlock, buying something new is not going to resolve that. Another issue we have to address is the design of these systems: many databases are simply designed poorly, or were designed to meet the requirements at that time. But requirements change, and the database may need a fresh look at its piece parts.
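Chris's deadlock point is worth making concrete: no amount of hardware removes a lock-ordering cycle, but a design change does. Below is a minimal sketch (hypothetical names, using Python threads as a stand-in for database sessions) where two concurrent "transfers" request the same pair of locks; sorting the locks into one global order before acquiring them guarantees neither thread can hold one lock while waiting forever on the other.

```python
# Minimal sketch: avoiding the classic two-lock deadlock by always
# acquiring locks in a single global order (here, by object id).
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(locks):
    # Without the sort, thread 1 taking (a, b) and thread 2 taking (b, a)
    # can each grab their first lock and wait forever on the second.
    first, second = sorted(locks, key=id)
    with first:
        with second:
            pass  # the actual row updates would happen here

t1 = threading.Thread(target=transfer, args=((lock_a, lock_b),))
t2 = threading.Thread(target=transfer, args=((lock_b, lock_a),))
t1.start(); t2.start()
t1.join(); t2.join()   # both finish; with no ordering, this could hang
```

The same discipline applies inside a database: transactions that touch rows in a consistent order don't deadlock, regardless of how fast the server is.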
That aside, the biggest issue I see with the database in the cloud is the willingness of companies to accept that their data is hovering out there somewhere in never-never land and yet is secure. The truth is not so much the key point here as the impression companies have. If the impression is that the data is not secure, at a time when security and identity theft are on everyone's mind, I think you will have fewer people willing to move in that direction. I hope that we can put these concerns to rest so we can move forward."
What do you think? Does using the cloud forgive the sins of bad design in terms of performance and usability just by virtue of the processing power available?
Drop me a note… email me here.
Featured White Paper(s)
Evaluating Deduplication Solutions: What You Should Really Consider
Understand deduplication, and how it can be implemented – single instance storage (SIS) vs deduplication, fixed vs. variable … (read more)