Sunday 6 November 2011

Priority


As our maps have grown in size, we have needed to organize them in order to set priorities for testing. Just as with standard written test cases, we gave each THEN node a priority.

Disclaimer: our use of priority and a Dashboard is based on http://www.satisfice.com/presentations/dashboard.pdf, which proved to be a valuable starting point for what we are doing.

Priority 1 – Acceptance:
This tests the basic functions of our product with simple data. We run these tests each time we get a build. As we may have several builds a week, we created one Acceptance mind map that pulled in all of the Priority 1 test cases.

Priority 2 – Acceptance+: 
Same as Priority 1 Acceptance Tests, but now with more complex data.

Priority 3 – Common:
Tests the main features of the application.

Priority 4 – Common+:
Covers all features that are not commonly used by a customer.

Priority 5 – Validation:
Tests field validation, corner cases, stress, etc.

Xmind and MindManager both have five levels of priority by default, which matches this well. I am still considering reducing this to three levels and simply calling them High, Medium and Low, or Acceptance, Common and Corner. What do you think?
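
However many levels we end up with, the idea is that every THEN node carries a priority tag. As a rough sketch (Python, with names of my own invention rather than anything from Xmind or MindManager), the scheme could be modelled like this:

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Priority(IntEnum):
    """The five levels described above; a lower number runs earlier."""
    ACCEPTANCE = 1       # basic functions with simple data
    ACCEPTANCE_PLUS = 2  # the same checks with more complex data
    COMMON = 3           # main features of the application
    COMMON_PLUS = 4      # features not commonly used by a customer
    VALIDATION = 5       # field validation, corner cases, stress


@dataclass
class ThenNode:
    """A THEN node from a map: a description plus its priority tag."""
    description: str
    priority: Priority
    children: list["ThenNode"] = field(default_factory=list)
```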

MindManager has a useful filtering feature that shows only the priority levels you wish to see, which lets testers focus on what they are testing at a given moment.
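
The same filtering idea can be sketched in code. This is not MindManager's API, only an illustration built on the ThenNode and Priority types sketched above: walk the map and keep just the branches that contain something at or above the requested level.

```python
from typing import Optional


def filter_by_priority(node: ThenNode, up_to: Priority) -> Optional[ThenNode]:
    """Return a copy of the subtree containing only nodes tagged at or above
    the requested level (numerically <= up_to), keeping any parent whose
    children survive so the map structure stays readable."""
    kept_children = [
        kept
        for kept in (filter_by_priority(child, up_to) for child in node.children)
        if kept is not None
    ]
    if node.priority <= up_to or kept_children:
        return ThenNode(node.description, node.priority, kept_children)
    return None
```

Filtering with up_to=Priority.ACCEPTANCE, for example, would leave only the Priority 1 view that we run against each new build.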

Priority also comes into play when deciding what to test during a given sprint. During our sprint planning session, the maps are discussed alongside the User Story, so by the end of the meeting we know roughly which maps are affected by the new tasks. We also discuss as a team (development, testing, product and project management) what testing to focus on during the week. That helps us decide which maps should be tested and to what level.

During the week, we test the maps we have chosen to focus on, running the test cases in order of priority. Once all of those maps are completed to the requested priorities, testers are encouraged to test other areas. The goal before product release is to complete testing on all maps at all priorities. This information is recorded on the Dashboard as well.
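
To make the record keeping a little more concrete, here is one way a per-sprint Dashboard entry could be modelled, reusing the Priority enum from the earlier sketch. The field names and sample data are invented for illustration and are not the actual Dashboard format.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MapStatus:
    """One hypothetical Dashboard row: how far a map got in a given sprint."""
    map_name: str
    requested: Priority            # the level the team asked for this sprint
    completed: Optional[Priority]  # deepest level actually finished, if any

    @property
    def done(self) -> bool:
        # Levels are run in numeric order, so finishing level N implies 1..N.
        return self.completed is not None and self.completed >= self.requested


# Illustrative sprint record (invented map names and levels)
sprint_status = [
    MapStatus("Login", requested=Priority.COMMON, completed=Priority.COMMON),
    MapStatus("Reports", requested=Priority.ACCEPTANCE_PLUS, completed=Priority.ACCEPTANCE),
]
still_to_finish = [s.map_name for s in sprint_status if not s.done]  # ["Reports"]
```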

For any given sprint, I can tell which maps were tested and what level was completed. An Overall Dashboard allows the team to visualize the amount of testing completed week over week. More detail about the Dashboard and the Overall Dashboard will be given in future blog posts.

4 comments:

  1. Good blog article. Thanks.

    When I survey anything I make sure there is no middle value. I find that if there is a middle value and the person has no opinion then they select the middle value. When I ask people to rank something, for example, I give them choices of 1, 2, 3, 4.

    I would use 4 levels of priority. That's what we do for PushToTest TestMaker. http://www.pushtotest.com

    -Frank

  2. Great, great article on testing.

  3. Good article, but I see something missing. I suggest you add "Changes" to the top of the list. Testing where the change is made is a high priority. New changes mean new risk. Systems don't like being changed. People make mistakes.

  4. Thank you James for the comment.

    The process we follow in our organization is to first do Acceptance Testing on a build. We then verify all the Changes (the bug reports in the Verification state), and then move on to the other areas of testing.

    I don't explicitly call out Changes on the Dashboard, as it is implied that we do this. The reports we create include both the Dashboard and the list of bugs verified as fixed, not fixed, and new.

    Make sense?

    Thank you,
    Nolan
