Hopefully you read the previous blog post, which set up (no pun intended) the context for this follow-up. If not, take a few minutes and read it before continuing here.
With your matrix of setup configurations fully defined, it's time to turn your attention to executing the individual scenarios effectively and efficiently.
The first thing to do is review your set of test configurations and weed out any that are invalid. For example, maybe there are some operating systems you mistakenly assumed your product would support, but it turns out that with this version, you've dropped support for some of the older ones.
The next big task is to prioritize the setup configurations. Assuming you have neither infinite time nor infinite resources, you need to selectively pick the configurations you can run. Here, having a good sense of your customers' common configurations helps. Hopefully you have some historical data on this, or maybe it's something you can collect via a survey. Otherwise, you'll just have to rely on your best guess of the likely configurations. If that's the case, you might want to enlist a pairwise testing tool.
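To make the filtering and prioritizing concrete, here's a minimal Python sketch. The supported-OS list and usage numbers are invented for illustration; in practice they'd come from your support matrix and whatever telemetry or survey data you have:

```python
# Hypothetical support matrix and customer-usage data, for illustration only.
SUPPORTED_OS = {"Windows 7", "Windows XP"}  # e.g., Windows 2000 support dropped

configs = [
    {"OS": "Windows 7",    "Arch": "64-bit", "SKU": "Professional Edition"},
    {"OS": "Windows 2000", "Arch": "32-bit", "SKU": "Basic Edition"},
    {"OS": "Windows XP",   "Arch": "32-bit", "SKU": "Basic Edition"},
]

# Fraction of customers on each OS (from telemetry or a survey).
os_share = {"Windows 7": 0.55, "Windows XP": 0.40}

# Step 1: drop configurations that are no longer valid for this release.
valid = [c for c in configs if c["OS"] in SUPPORTED_OS]

# Step 2: run the most common configurations first.
prioritized = sorted(valid, key=lambda c: os_share.get(c["OS"], 0), reverse=True)

for c in prioritized:
    print(c)
```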
If you’re not familiar with the concept of pairwise testing (or all-pairs testing), it’s a way to reduce the full combinatorial matrix of your variables down to a smaller set that will likely provide “good” test coverage. It’s a cost-benefit tradeoff that might be worth exploring. Let me provide a quick example.
Imagine that you're dealing with three variables in your setup test plan, each with two values:
- Operating system – Windows 7 and Windows XP
- Machine architecture – 32-bit and 64-bit
- Product SKU – Basic Edition and Professional Edition
A complete combinatorial matrix would produce 2 x 2 x 2 = 8 setup configurations.
A pairwise analysis of these variables would yield only 4 setup configurations, while still ensuring that every pair of values across any two variables is covered at least once. So you might get something like:
- Windows 7, 32-bit, Basic Edition
- Windows 7, 64-bit, Professional Edition
- Windows XP, 32-bit, Professional Edition
- Windows XP, 64-bit, Basic Edition
You can see that "Windows 7, 32-bit, Professional Edition" is not in the list. Breaking this configuration down into its value pairs, you'll find coverage for "Windows 7, 32-bit" in configuration #1, "Windows 7, Professional Edition" in configuration #2, and "32-bit, Professional Edition" in configuration #3.
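If you don't have a dedicated tool handy (Microsoft's PICT is a well-known one), here's a minimal sketch of the underlying idea in Python: generate the full combinatorial matrix, then greedily keep picking whichever configuration covers the most value pairs that haven't been covered yet. Greedy selection isn't guaranteed to find the absolute smallest set, but it illustrates the technique, and for this input it happens to find the same 4 configurations listed above.

```python
from itertools import combinations, product

def pairwise_configs(variables):
    """Greedy all-pairs reduction: repeatedly pick the configuration
    that covers the most not-yet-covered pairs of values."""
    names = list(variables)
    candidates = [dict(zip(names, values))
                  for values in product(*variables.values())]

    # The value pairs a single configuration covers, e.g. the pair
    # {("OS", "Windows 7"), ("Arch", "32-bit")}.
    def pairs(config):
        return {frozenset([(a, config[a]), (b, config[b])])
                for a, b in combinations(names, 2)}

    uncovered = set().union(*(pairs(c) for c in candidates))

    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs(c) & uncovered))
        chosen.append(best)
        uncovered -= pairs(best)
    return chosen

configs = pairwise_configs({
    "OS":   ["Windows 7", "Windows XP"],
    "Arch": ["32-bit", "64-bit"],
    "SKU":  ["Basic Edition", "Professional Edition"],
})
for c in configs:
    print(c)  # 4 configurations instead of the full 8
```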
Now, with your setup matrix streamlined and prioritized, you can finally get down to testing. The question, though, is who should do it? In my opinion, this is a great task to offload to less costly resources like vendors or offshore teams. We do this in my group, and it works well, but only because we're crystal clear and detailed in our communication. Notation is standardized so there's no confusion about what each setup test definition means in terms of products, install/uninstall order, or verification steps. If there's a lot of back and forth between you and the other team, you're not going to save much time or effort.
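What might that standardized notation look like? There's no one right answer, but to make the idea concrete, here's a hypothetical machine-readable test definition (the field names and product are invented for illustration):

```python
# A hypothetical, machine-readable setup test definition. Everything the
# vendor team needs is spelled out, so nothing is left to interpretation.
test_definition = {
    "id": "SETUP-042",
    "config": {
        "OS": "Windows XP",
        "Arch": "32-bit",
        "SKU": "Professional Edition",
    },
    "steps": [
        {"action": "install",   "product": "YourProduct 2.0"},
        {"action": "verify",    "check": "sanity_scenario"},
        {"action": "uninstall", "product": "YourProduct 2.0"},
        {"action": "verify",    "check": "no_leftover_files_or_registry"},
    ],
}
```

With a format like this, the other team can execute the test mechanically, and you can review results without a round of clarifying email.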
After running a setup test, you need to verify “success”, but what is this verification process? Are you simply verifying that the setup process ran end-to-end without crashing, or is there more to check?
I hope you answered “there’s more to check”. At a high level, you should at least be evaluating the following:
- Verify that a sanity customer scenario can be completed after the product is installed.
- If there are other apps that might be affected by the setup, verify they haven't been broken. We do such checks when testing side-by-side (SxS) setups of Visual Studio; an install of the latest version shouldn't break the previous one.
- If you're testing uninstall as well, verify you haven't left any "turds" on the box: stray files, registry entries, and so on. Leaving them behind is just sloppy and might break a future install of a new version of your product. Tools that take before/after snapshots of the file system and registry for comparison can help here (see the sketch after this list).
- Assess the speed of the install. Is it reasonable? Is there a UI providing meaningful feedback so the customer knows setup is progressing successfully?
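For that leftover check, the basic recipe is: snapshot the machine before install, snapshot it again after uninstall, and diff the two. Here's a minimal sketch of the file-system half in Python; a real tool would also capture the registry, and on a dedicated test machine you'd snapshot whole drives rather than a single directory:

```python
import os

def snapshot(root):
    """Record every file path under `root`."""
    paths = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.add(os.path.join(dirpath, name))
    return paths

before = snapshot(r"C:\Program Files")   # clean machine

# ... install the product, exercise it, then uninstall it ...

after = snapshot(r"C:\Program Files")    # post-uninstall

# Anything in `after` but not in `before` was left behind by the product.
for leftover in sorted(after - before):
    print("left behind:", leftover)
```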
Okay, so those are my thoughts on setup test planning, execution, and verification. In closing, I'll paraphrase my primary message from the previous post: your product is useless if your customers can't successfully install it on their machines. Please don't screw this up. :-)