Forum Discussion

Bob_C
Contributor
2 years ago

Test Cases / Cycles / Plans Best Practices

Is there a document somewhere on best practices for putting these together?

Our very simple product has 150+ test cases, and we are grouping these into sets of test cycles for testing.

Our simple product will have 400+ test cases.

Our more complex product we have not put together any test cases for, but anticipate 1500+ test cases.


Our plan for test cases is very thorough: it verifies all functions, and confirms non-functions (things that should not happen) as appropriate for each state / mode / output.


Is there a recommended maximum number of test cases in a cycle?

Once all cases have passed (after numerous software updates), we need to re-run all of them to ensure nothing broke in the process.

How do I re-run the entire test plan - which is a collection of test cycles, each of which is a collection of test cases?


  • MisterB
    Champion Level 3

    Hi Bob,


    I'm not aware of any best practice guide beyond what's in the help files, but my approach is to think about the best structure for reporting on progress.  I build my Test Case folders and Test Cycle folders to be the same.  Those folders tend to be organised by Team or Function, and each has sub-folders to logically organise the tests.  I put no limit on how many tests are in a cycle, but they tend to have fewer than 100 tests - perhaps because of the approach I mentioned above.
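
    To illustrate (the folder names here are invented), the two libraries mirror each other:

        Test Cases/                    Test Cycles/
        ├─ Payments/                   ├─ Payments/
        │  ├─ Checkout/                │  ├─ Checkout/
        │  └─ Refunds/                 │  └─ Refunds/
        └─ Reporting/                  └─ Reporting/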


    As for re-running a Test Plan entirely, I've found that there's always something that needs changing in subsequent runs, and for that reason I tend to clone the test plan/test cycles and amend those as needed (adding/removing test cases, etc.).  There might be better ways to do that using other features...


    Cheers, Andy

    • Bob_C
      Contributor

      Andy,

      This was a tremendous help - it explains the workflow that you use, and how it works in the real world.


      Do you have any general words of advice for how detailed to go in test cases / test steps?

      Imagine a button press is supposed to do something, and we need to verify it does X in most cases and Y in some odd corner cases.  The cases involve 10 settings for option A, 3 settings for option B, and 3 settings for option C.  This is 10*3*3 = 90 test cases (steps) to evaluate.

      These 90 test steps then get compounded by the type of button event, which operating state the device is in, and other factors.  This is our 'simple' device, and I can already see thousands upon thousands of test steps/cases that seem overkill to specify - especially as manual tests.  The value of testing each of these cases (steps) diminishes quickly after a few of the X outcomes have been checked, but we would plan on making sure all of the Y outcomes are correct.
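
      For concreteness, here is a rough Python sketch (the option values are placeholders) that enumerates the 90 combinations and then applies a greedy all-pairs ("pairwise") reduction - the kind of pruning that combinatorial-testing tools automate - to show how few cases still cover every pair of settings:

      from itertools import combinations, product

      # Placeholder values for the 10 * 3 * 3 = 90 combinations above.
      options = [
          [f"A{i}" for i in range(1, 11)],  # 10 settings for option A
          ["B1", "B2", "B3"],               # 3 settings for option B
          ["C1", "C2", "C3"],               # 3 settings for option C
      ]

      print(len(list(product(*options))))  # 90 exhaustive cases

      def pairwise_suite(values):
          """Greedily pick cases until every pair of settings from any two
          options appears together in at least one chosen case."""
          cases = list(product(*values))
          index_pairs = list(combinations(range(len(values)), 2))

          def pairs_of(case):
              return {(i, case[i], j, case[j]) for i, j in index_pairs}

          uncovered = set().union(*(pairs_of(c) for c in cases))
          suite = []
          while uncovered:
              best = max(cases, key=lambda c: len(pairs_of(c) & uncovered))
              suite.append(best)
              uncovered -= pairs_of(best)
          return suite

      print(len(pairwise_suite(options)))  # ~30 cases instead of 90

      Something like this would cover every pairwise interaction of the X outcomes in roughly 30 cases, while we would still enumerate all of the Y corner cases in full.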


      Any words of advice on how you handle this?


  • MisterB
    Champion Level 3

    Hi Bob,


    I finished a project recently that shares some similarities with what you're describing. The service had scenarios with multiple test cases that needed to be executed multiple times.  Due to the nature of the service, it was important to be able to evidence the number of tests executed.


    Using your scenario as an example, my solution was to create 1 version of each test case that needed to be executed multiple times, so 1 for option A, 1 for option B, 1 for option C.  Each test case, however, had a final test step that said "execute this test a further 10 times by clicking the 'Start a new test execution' button at the top left-hand corner of this frame" (or something like that).  This resulted in us having 10 executions of option A, 10 of option B, and 10 of option C.  If any of those 30 tests failed or was blocked, it was recorded and an Issue created.  This approach was a win-win: it was quicker and easier for testers to re-execute multiple tests (vs. moving between 10 different test cases with the same test steps), and we were still able to evidence how many tests had been executed.


    Another thing I sometimes do, depending on the project and stakeholders, is demonstrate where the testing effort has been spent, functionally speaking, i.e. we tested this functionality x%, etc.  This is another benefit of logically organising test cases and test cycles by function/team, etc.  With that structure in place, I can export test case data (or test execution data) from Scale into another tool where I can create some visuals, e.g. Excel + charts (sadly we cannot produce the same report in Scale at present):

    [Image: pie chart of testing effort, one slice per function tested]

    An example: each slice is a function that was tested (I've left some generic ones un-blurred).  I can then use the same data and chart, changing the values from '%' to 'n', to show the number of tests executed per function - particularly useful if you need to demonstrate how thoroughly a specific function has been tested.
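
    If it helps, here is a minimal sketch of that export-and-chart step in Python rather than Excel (the file and column names are assumptions - a real Scale export will differ):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed export: one row per test execution, with a "Folder" column
    # holding the function/team grouping (actual column names depend on
    # your Scale export settings).
    df = pd.read_csv("test_executions.csv")
    per_function = df["Folder"].value_counts()

    fig, ax = plt.subplots()
    ax.pie(per_function, labels=per_function.index, autopct="%1.0f%%")
    ax.set_title("Testing effort by function (% of executions)")
    fig.savefig("effort_by_function.png", bbox_inches="tight")

    Swapping the '%' labels for the raw counts gives the 'n' view mentioned above.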


    Cheers, Andy