Editorials

DBASchool! For Accidental DBAs and Much More.

Pssst… if you will be attending the conference next week…
Register now – rates go up the final notch starting on Monday! Can’t wait to see everyone there; it’s a great set of sessions (more than 75!) and presenters (more than 20!) ready to walk you through key SQL Server, Business Intelligence, .NET and SharePoint technologies. I hope you’ll join us. Here’s the link to register.

Get more information here

Webcast: SQL Server Forensics
Have you ever received a call from one of your users asking why they got an error three days ago? Or maybe they want to know when a particular piece of data was deleted, and who did it. Troubleshooting past events is difficult in SQL Server, but not always impossible. Learn how to set up a SQL Server instance to be able to answer these questions, and how to use resources within SQL Server and other application logs to track down activity that might otherwise be lost.
Presented by: Sarah Barela

> Register Now
> Live date: 4/14/2010 at 12:00 Pacific
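
If you want a head start before the session, one resource worth knowing about is the default trace that SQL Server 2005 and 2008 run out of the box. The sketch below is just that – a minimal, hedged example of reading it with sys.fn_trace_gettable to see who altered or dropped objects recently. It isn’t a preview of Sarah’s material, and the default trace only keeps a handful of small rollover files, so older events may already be gone.

-- A minimal sketch: reading the default trace to see recent object changes.
-- Assumes SQL Server 2005/2008 with the default trace still enabled (it is
-- on by default). Only a few small rollover files are kept, so history is short.
DECLARE @tracefile NVARCHAR(260);

SELECT @tracefile = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT te.name        AS event_name,
       t.DatabaseName,
       t.ObjectName,
       t.LoginName,
       t.HostName,
       t.ApplicationName,
       t.StartTime
FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name IN ('Object:Altered', 'Object:Deleted')   -- who changed or dropped what, and when
ORDER BY t.StartTime DESC;

Anything older than what the default trace still holds – or questions like who deleted which rows – is exactly the territory the webcast description points at: server-side traces, the error logs and the application logs.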

Upcoming Events
– SSWUG.ORG Virtual Conference – April 7, 8 and 9 [Read More]
– SSWUG.ORG DBASchool – April 19, 20 and 21, Tucson, AZ with Stephen Wynkoop [Read More]

Back to Virtualization
Final thoughts and feedback on virtualization… it seems like unless you can "do it right," the key is to limit the level of activity across your virtual machines. "Do it right" may seem more than obvious, but by that I mean top-of-the-line hardware and software solutions in your configuration. There are numerous reports of success, but they seem to be centered on optimal configurations, not the more typical middle-of-the-road setups. For everything else, I think the now-standard VM best practices still apply.

1. Don’t put multiple high-octane solutions on a single host – the instances will fight it out and you likely won’t be happy with the results.

2. Optimize for I/O. Disk is still likely to be the slowest link in the mix, so be sure you’re paying attention to those spindles – there’s a quick way to keep an eye on that from inside SQL Server in the sketch just after this list.

3. There are huge benefits to be had from virtualization when it comes to consolidating and managing a series of "average" applications.
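
On that second point, here’s the sketch I mentioned – a rough, hedged look at per-file I/O latency as SQL Server itself sees it. It assumes SQL Server 2005 or later (sys.dm_io_virtual_file_stats), and the counters are cumulative since the instance last started, so treat them as a trend indicator rather than a verdict.

-- Per-file I/O latency from SQL Server's point of view.
-- Assumes SQL Server 2005 or later; numbers accumulate from instance startup.
SELECT DB_NAME(vfs.database_id)                               AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)    AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)   AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;

If average latencies on your busiest data files are creeping into the tens of milliseconds, that’s your cue to go talk to whoever owns the spindles.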

From LeAnne, "I just got done supporting a client that we started off with SQL Server 2008 on a Virtual Server 2008. We were three months out from going into production when it started taking heavy hits from testing. Over about 1 to 2 days of use, memory use would climb slowly up to 100% and SQL would choke. We would be forced to reboot the server to clean things up, and then it would start all over again. We tried everything we could think of with how SQL Server was using memory, but nothing helped. The sys admins tried everything they could think of. They finally told us that, from what they could see, for some reason the virtual machine was not recognizing the memory being freed up by processes, so it slowly choked on itself.

Two weeks before we went live we were given a real server, and we had no problems with memory use from that point on. Given my experience, I would be hesitant to recommend that anyone use SQL Server 2008 on a virtual server without extensive testing in this vein."
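
I won’t claim the following would have saved LeAnne’s project – her team clearly worked the problem hard – but for readers who see the same slow climb to 100% memory on a virtualized instance, these are the usual SQL-side starting points: confirm whether ‘max server memory’ is capped, and compare what SQL Server thinks it’s using with what Windows says is available. The sketch assumes SQL Server 2008, where sys.dm_os_process_memory and sys.dm_os_sys_memory are available; the 12 GB cap at the end is purely an example value.

-- SQL-side memory checks for a virtualized instance that climbs toward 100%.
-- Not presented as the fix for LeAnne's case -- just the usual starting point.
-- Assumes SQL Server 2008.

-- 1. Is 'max server memory' capped, or left at the default (effectively unlimited)?
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- 2. What does the instance think it is using, and is Windows signaling pressure?
SELECT physical_memory_in_use_kb / 1024 AS sql_physical_memory_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;

SELECT total_physical_memory_kb     / 1024 AS total_physical_memory_mb,
       available_physical_memory_kb / 1024 AS available_physical_memory_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- 3. If the cap turns out to be the issue, leave Windows and the hypervisor
--    some headroom, e.g. on a 16 GB VM (example value -- size it for your box):
-- EXEC sp_configure 'max server memory (MB)', 12288;
-- RECONFIGURE;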

Dan writes "Virtual database servers only work if every piece of the infrastructure has been designed properly, all admins understand database I/O needs, and there is enough overhead to handle spikes. At our company, we are struggling with lack of server resources, admin availability, and inconsistent SAN performance. We don’t have the dedicated manpower to tune or troubleshoot properly at all the possible levels where problems can occur. I’ve spent the past few months learning about SQL Server I/O behavior and how to determine bottlenecks, in an effort to troubleshoot our environment, but keep running into roadblocks where the abstraction of the virtual environment prevents me from determining a root cause.

A virtual environment is great IF AND ONLY IF all the pieces are finely tuned and regularly monitored. But that’s not the case here, and troubleshooting quickly becomes an exercise in frustration.

Let’s say a SQL Server is running slowly. From the Windows OS (VM), I can run performance monitor and see if the processor queue is high, or seconds per read/write is high, or memory usage is high. I can check wait stats in the DMVs if it’s a recent version of SQL Server (but we still have a lot of SQL 2000 here). But then what?

– Virtual Host Server problems: has another VM on the same host ramped up and started consuming an unusually high share of CPU cycles? Has memory usage spiked on another virtual machine, causing memory starvation for the other VMs on the same host?

– SAN problems: I don’t have access to see if the SAN performance has degraded because of high disk activity from a different server sharing the same spindles as the SQL Server LUNs. What about the SAN fabric switches, could they be experiencing a problem? Is the SAN rebuilding a RAID set after a disk failure and replacement? Does the SAN allow dedicated spindles for a LUN, or are all spindles part of a giant RAID set?

– Interconnects: I can’t measure whether the host bus adapter on the VMware server is saturated because a different VM has kicked into some high-activity cycle. Are the HBAs 2 gigabit or 4 gigabit? How many HBAs are on the ESX host, and are they multipathed?

Virtualization is great when each layer and all the interconnects are well-understood and well-managed. That requires clear communication and teamwork between DBAs, VM administrators, and SAN administrators. It also requires a DBA to have a deep understanding of the I/O and resource requirements of SQL Server, and an ability to translate those into terms that a VM admin and a SAN admin can understand and implement. It’s not just I/Os per second (IOPS), it’s also sequential reads/writes for some files, and random reads/writes for others. And everyone on the team has to understand the rubber-banding nature of server resource spikes and dips. Can the infrastructure handle all the spikes? If it can handle 95% of the spikes, but the 5% it can’t handle is when the chief executives are running the end-of-quarter reports, then to the executive team, performance is bad 100% of the time they are actually paying attention.

If close teamwork between the DBA, SAN, and VM admins is not feasible in an organization, then everyone is better off with standalone database servers and storage. Virtualization works great with regular servers, but database servers have I/O performance demands totally unlike your “regular server.”"
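
For the first step Dan describes – checking wait stats from inside the instance – here’s a hedged sketch of the kind of query involved. It assumes SQL Server 2005 or later; on the SQL 2000 boxes he mentions, the rough equivalents are DBCC SQLPERF(WAITSTATS) and ::fn_virtualfilestats(). The numbers are cumulative since the last restart, and the list of waits to ignore below is a starting point, not gospel.

-- Top waits since the last restart (or since the stats were cleared).
-- Assumes SQL Server 2005 or later; the excluded wait types are common
-- "background noise" waits, not an exhaustive list.
SELECT TOP (15)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR',
                        'BROKER_EVENTHANDLER', 'BROKER_RECEIVE_WAITFOR')
ORDER BY wait_time_ms DESC;

If PAGEIOLATCH_* or WRITELOG waits dominate, the storage path is the suspect – and that’s exactly where Dan’s visibility ends, because everything beyond the virtual disk (host, HBA, fabric, SAN) belongs to someone else’s tools.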

Webcast: Basics of Administering Databases for The Layman – Part 3
Okay, so you’ve gotten your data into your database. Did you do it right? Do you have too much data? Do you have the right data? Can you actually get out the data you want – and how? This session walks through examples of why keeping less redundant data in a database (normalization) makes your data more accurate, how that’s done, and why, in certain circumstances, it can be beneficial not to do it. It also covers indexes: what they are, why and how to use them, what the benefits and tradeoffs are, the different ways to maintain them and why you’d want to, and why some of Microsoft’s canned maintenance plans can be a problem – and where to look for solutions.
Presented by: Tom Roush

> Register Now
> Live date: 4/21/2010 at 12:00 Pacific
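
And ahead of Tom’s session, a small, hedged sketch of the index-maintenance piece he mentions: check fragmentation first with sys.dm_db_index_physical_stats, then reorganize or rebuild only what needs it, rather than letting a canned plan rebuild everything every night. It assumes SQL Server 2005 or later, and the 10%/30% thresholds are the commonly quoted rules of thumb, not hard limits.

-- Fragmentation check for the current database: reorganize or rebuild only
-- what needs it. Assumes SQL Server 2005 or later; 10%/30% are rules of thumb.
SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
       i.name                       AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       CASE
         WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
         WHEN ips.avg_fragmentation_in_percent >= 10 THEN 'REORGANIZE'
         ELSE 'leave it alone'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.[object_id] = ips.[object_id]
 AND i.index_id    = ips.index_id
WHERE ips.page_count > 100          -- tiny indexes aren't worth the churn
  AND ips.index_id  > 0             -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then act on individual indexes, e.g. (names are placeholders):
-- ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REORGANIZE;
-- ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REBUILD;

Tom will have much more to say about where the canned maintenance plans fall down and where to look for solutions.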