As our maps grew in size, we needed to organize them in order to set testing priorities. As with many standard written test cases, we gave each THEN node a priority.
Disclaimer: the ideas of priority and of using a Dashboard were based on http://www.satisfice.com/presentations/dashboard.pdf. This proved to be a valuable starting point for what we are doing.
Priority 1 – Acceptance:
This tests the basic functions of our product with simple data. We run these tests each time we get a build. As we may have several builds a week, we created one Acceptance mind map that pulled in all of the Priority 1 test cases.
Priority 2 – Acceptance+:
Same as Priority 1 Acceptance Tests, but now with more complex data.
Priority 3 – Common:
Tests the main features of the application.
Priority 4 – Common+:
Covers all features that are not commonly used by a customer.
Priority 5 – Validation:
Tests field validation, corner cases, stress, etc.
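The five levels above can be sketched as an ordered enum. This is purely illustrative; the name TestPriority and its members are my own, not anything XMind or MindManager exposes:

```python
from enum import IntEnum

class TestPriority(IntEnum):
    """Hypothetical names for the five levels; the tools just use numbers 1-5."""
    ACCEPTANCE = 1       # basic functions, simple data; run on every build
    ACCEPTANCE_PLUS = 2  # same tests, but with more complex data
    COMMON = 3           # main features of the application
    COMMON_PLUS = 4      # features not commonly used by a customer
    VALIDATION = 5       # field validation, corner cases, stress

# IntEnum keeps the levels comparable, so "priority 1 first" ordering works.
assert TestPriority.ACCEPTANCE < TestPriority.VALIDATION
```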
XMind and MindManager both have five levels of priority by default, which matches this scheme well. I am still considering reducing this to three levels and simply calling them High, Medium and Low, or Acceptance, Common and Corner. What do you think?
MindManager has good functionality for filtering a map to show only the priority levels you wish to see. This lets testers focus on what they are testing at a given moment.
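That kind of priority filter amounts to a one-line selection over the THEN nodes. Here is a minimal sketch, assuming a made-up node representation (name plus priority number); the node names are invented for illustration and are not from our real maps:

```python
# Hypothetical THEN-node records: (name, priority 1-5).
nodes = [
    ("login succeeds", 1),
    ("login with complex data", 2),
    ("export report", 3),
    ("bulk import", 4),
    ("field length limits", 5),
]

def visible(nodes, max_priority):
    """Show only nodes at or above the requested level (1 = highest priority)."""
    return [name for name, prio in nodes if prio <= max_priority]

# Filtering to level 2 leaves just the two acceptance-level checks.
print(visible(nodes, 2))  # ['login succeeds', 'login with complex data']
```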
Priority also comes into play when deciding what to test during a given sprint. During our sprint planning session, the maps are discussed alongside the User Story, so by the end of the meeting we know roughly which maps are affected by the new tasks. We also discuss as a team (development, testing, product and project management) what our testing focus should be for the week. That helps us decide which maps should be tested and to what levels.
During the week, we test the maps we agreed to focus on, running the test cases in order of priority. Once all of those maps are complete at the requested priorities, testers are encouraged to test other areas. The goal before product release is to complete testing on all maps at all priorities. This information is recorded on the Dashboard as well.
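Recording "which maps reached which level" needs very little structure. A minimal sketch of such a dashboard, with map names invented for illustration (this is not how our actual Dashboard is implemented):

```python
# Deepest priority level completed per map this sprint (0 = not yet tested).
dashboard = {"Login": 0, "Reports": 0, "Import": 0}

def record(map_name, completed_level):
    # Keep the deepest (highest-numbered) level completed so far.
    dashboard[map_name] = max(dashboard[map_name], completed_level)

def behind(requested_level):
    """Maps that have not yet reached the level requested in sprint planning."""
    return [m for m, level in dashboard.items() if level < requested_level]

record("Login", 2)
record("Reports", 5)
print(behind(3))  # ['Login', 'Import'] still need work to reach level 3
```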
For any given sprint, I can tell which maps were tested and to what level testing was completed. An Overall Dashboard lets the team visualize the amount of testing completed week over week. More detail about the Dashboard and the Overall Dashboard will come in future blog posts.