Webcast: Avoid Slow SSAS Queries: MDX Query Troubleshooting
Do you have SSAS queries that you feel should be faster? Multidimensional Expressions, or MDX, is the language used to work with Analysis Services. Much as SQL serves the relational engine, MDX is the language for queries and DDL statements against multidimensional cubes in Analysis Services. Troubleshooting slow MDX queries is often challenging because the support provided by the tools and the Analysis Services engine is not as mature as that found in the relational engine. This session covers methods for examining slow queries with SQL Server Profiler and Performance Monitor to determine the need for aggregations, caching problems, memory constraints, and so forth. Best practices for optimizing MDX will be examined, and relevant server properties will be discussed to help correct query issues. Attendees will leave armed with a practical approach to locating bottlenecks and strategies for speeding up queries.
> Register Now
> Live date: 4/28/2010 at 12:00 Pacific
Featured Article(s)
Floating Point Variables: Describe Your Entities Correctly
For those passionate programmers, strong enough to spend hours typing like a madman (or madwoman 😀), here is more stuff to keep you thinking about programming. This time, let's talk about Floating Point Variables.
Managing Multi-OS Backups?
Keeping track of cross-platform backups, ranging from the operating system to the database and combinations of the two, can be a hairy task at times. If you’re facing this type of environment, you’re not alone. There is a new toolset out that may be just the ticket: Bradmark Technologies has released an enterprise application for exactly this type of system. Get more information here.
Daily Operations with SQL Server – Reader Feedback
Greg wrote in with a detailed look at the types of things he’s doing on a daily basis with SQL Server. Take a look (and let me know via email what types of things you’re doing daily) –
[Thanks for the great note and detail, Greg!]
"This is probably more of a ‘what do I do daily with data backups’ answer… Our org went with a general backup strategy of:
1. Get things that need to be backed up to a central server with a volume on the SAN
2. Backup to tape with Commvault
But how we get things to that location is in a variety of ways. For some things, we use [a failover vendor] (may be phasing that out, as the price jumped up on us)
For some things, we use native SQL backup to get the .bak files there.
For Exchange, a couple nightly ntbackups of the info stores does it.
For a couple of MySQL servers, we use [MySQL-targeted software], and then a secondary ntbackup that gives us two weeks of backups in a .bkf file that goes to the SAN.
For some we do .bat files with xcopy. We use this where it’s a folder of files in a proprietary format, and also for SQL data folders where the application gives the workgroup users an easy way to create their own new SQL databases as needed (i.e. year end, special sets of data, etc…). We also use this for a couple of SQL 2005 Express databases where it was just easier to stop the service, copy the files, and restart the service at night when they’re not in use.
I’ve never gotten too much into the automated alerting, I imagine because if something is not going right, I’d be concerned the process handling the alerts may also not trigger. So, I have about 12-15 locations that I check about every other day – just have all the shortcuts to the locations in a single folder; also a couple shortcuts in there to log locations, but that’s a different story – web/ftp logs. And then it’s these 12-15 locations I watch to see if all the various backup types are getting the data to the SAN. It’s not an overwhelming amount, so okay to just manually check it. The files then go to tape on a regular backup schedule.
This actually wasn’t working well for a long time, as I was trying to keep track of what/when/why/where/how based on server names, paths, etc… I changed the way I looked at it to be ‘application-based’ instead, and it became much easier for me to get a bigger picture of how it all fit together. Starting to think more along the lines of ‘how are XYZ backups handled, and what are all the pieces of the XYZ app that need to be backed up?’ made a big difference for me. I was used to thinking of backups in terms of full server backups. And those weren’t happening any more.
A second piece for me is on the security side…
We’ve had two times where extra privs were granted in SQL – about 50+ roles given SUID to tables they weren’t supposed to have – basically, all the roles that didn’t already have column-level privs to some columns ended up with SUID to the table. We don’t know if it happened during a year-end rollover process, a software update process, or some other task. And we didn’t know when it happened, but it was within about a three-month time range each time.
So getting something automated to notify us if something similar happens again was a task I started on during part-time DBA time. I created a database trigger that notifies on some role/priv changes – I have it store those in a log table, and also email me and my counterpart daily with how many changes were made the previous day. I also set up a free version of the [auditing software] app – I haven’t been able to purchase the paid-for version yet, though.
But I believe these two pieces will let me see if a large number of security changes happen all at once like they did previously. At least then we have a time reference and know what would have been done around that time.
This is a piece I’d like to have better info on daily – but just a summary. Actually, now that I think of it – this is actually something I’d like to see handled at the SQL level (and maybe it is already in ways I don’t know about yet) – I’d like SQL to have a way to log all security changes into an audit table. Or maybe what could be even better… if it could log those to a remote syslog server. And maybe it can do these things already?
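The trigger approach Greg describes can be sketched in T-SQL. This is a minimal sketch only, assuming SQL Server 2005 or later; the table name `SecurityChangeLog` and trigger name `trg_LogSecurityChanges` are hypothetical, and the daily summary email step is left out.

```sql
-- Hypothetical log table for the sketch (names are made up, not Greg's).
CREATE TABLE dbo.SecurityChangeLog (
    LogId     int IDENTITY(1,1) PRIMARY KEY,
    EventTime datetime      NOT NULL DEFAULT GETDATE(),
    LoginName sysname       NOT NULL,
    EventType nvarchar(128) NOT NULL,
    EventData xml           NOT NULL
);
GO

-- DDL trigger that fires on database-level security DDL (GRANT/DENY/REVOKE,
-- role membership changes, user/role DDL) and logs the full event XML.
CREATE TRIGGER trg_LogSecurityChanges
ON DATABASE
FOR DDL_DATABASE_SECURITY_EVENTS
AS
BEGIN
    DECLARE @e xml;
    SET @e = EVENTDATA();
    INSERT INTO dbo.SecurityChangeLog (LoginName, EventType, EventData)
    VALUES (ORIGINAL_LOGIN(),
            @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
            @e);
END;
GO
```

The daily summary Greg mentions could then be a SQL Agent job that counts the previous day's rows in the log table and sends the total with Database Mail's `sp_send_dbmail`.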
I would imagine there are many additional things that could be good to check daily, and which could help on the performance side. I watch Profiler a little maybe once every three or four months, and/or if there’s a problem, just to see what it shows for different apps.
But, backups and security are the pieces that people get fired over, not performance (generally), so if I’m going to spend time learning pieces, and I have limited time, it’s in the backups and security areas most likely. But in my world, backups of data often mean varying types of data, so I’m usually trying to understand these days what needs to be backed up for an app – data, config, O/S level files, etc… so I can get them to the SAN for backup to tape.
A good document from the app developer goes a long way in helping in this respect – where are the files, what are the files, services that lock the files, etc…
I’d like to see just a one-page doc from application developers with a description of how to back up/restore/recover in various scenarios. Something like this – http://kbase.gfi.com/showarticle.asp?id=KBID002636 – was a big help to me, although it doesn’t mention stopping/restarting the two GFI services before/after. But it’s a good start, and their email support was quick, friendly, and did say to stop/start the services. Kudos to GFI for thinking about the person having to deal with backups and being so responsive on their email support."