Forum Discussion
Hi,
Personal opinion, based on the projects I have participated in:
a) TestComplete provides the call stack in the Call Stack log pane. I think the provided call stack is good enough: it is bound to the test code, making it possible to navigate from the call stack to the code. At the same time, I don't see any good reason to post a full call stack to the test log itself, because the test log should be compact, readable and understandable for non-programmers (like manual testers or managers). Thus I don't see a reason to extract the call stack from an exception.
b) The only case where exceptions were useful in test code was web services testing, when the service was called with incorrect parameters in order to get a 4xx return code. In all other cases the called function either reported a problem or returned an indication of success/failure that the calling code processed as required. This was always possible because, by definition, test code always performs a defined action to verify a defined behaviour and thus always knows the expected result of a call. If the test code fails because of an unexpected situation (e.g. when attempting to write to a read-only file), then it is perfectly fine for the code to fail. This reveals rather than hides the code problem, which can be immediately addressed. If it is possible for the given file to be read-only, then the test code must be improved to make the file writable before writing to it. If the file must not be read-only, then the test code must be improved with a verification block that clearly reports the problem to the test log.
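For illustration, here is a minimal sketch of both situations from point (b), written in TestComplete's JavaScript. The service URL, file path and expected values are made up, and the aqHttp / aqFile / aqFileSystem member names are written from memory, so please verify them against the documentation before relying on them:

// Case 1: the 4xx return code is the expected result, so the status check
// itself is the verification -- no exception handling is needed.
function verifyInvalidParameterIsRejected()
{
  var request = aqHttp.CreateGetRequest("http://example.com/api/orders?id=not-a-number");
  var response = request.Send();
  if (response.StatusCode == 400)
    Log.Checkpoint("Service rejected the invalid parameter as expected (HTTP 400).");
  else
    Log.Error("Expected HTTP 400, got " + response.StatusCode);
}

// Case 2: instead of letting a write to a read-only file raise an error,
// make the file writable first (if read-only is a legal state for it),
// and clearly report a problem if the write still fails.
function writeResultFile(path, text)
{
  // Clear the read-only attribute before writing (attribute constants assumed).
  aqFileSystem.ChangeAttributes(path, aqFileSystem.faReadOnly, aqFileSystem.fattrFree);
  if (!aqFile.WriteToTextFile(path, text, aqFile.ctANSI, true))
    Log.Error("Could not write results to " + path);
}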
AlexKaras wrote:
a) TestComplete provides the call stack in the Call Stack log pane. I think the provided call stack is good enough: it is bound to the test code, making it possible to navigate from the call stack to the code. At the same time, I don't see any good reason to post a full call stack to the test log itself, because the test log should be compact, readable and understandable for non-programmers (like manual testers or managers). Thus I don't see a reason to extract the call stack from an exception.
Rightly pointed out. As far as an automation script's error call stack is concerned, TestComplete's Call Stack log pane is the best place to look. What RUDOLF_BOTHMA is trying to do looks complicated to me, and I don't think we need such complicated logic for automation scripts. In my view, most run-time errors can be identified during our test runs, which we can cover before running against the AUT. Also, most errors during execution are related to object identification/timing issues, which try...catch can't catch (see the sketch at the end of this post).
At the end of the day, automation should verify the AUT without any false positives, and if there is a failure, a good framework and standard scripting will help identify the issue. Working out where and how a script failed can be a nightmare, but it is a happy headache to take on and resolve by trying various scenarios.
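To make that last point concrete, here is a minimal sketch (TestComplete JavaScript, with hypothetical Alias names) of the usual alternative to try...catch for object identification/timing: wait for the object with a timeout, check Exists, and let TestComplete post the error and call stack if the check fails.

// Hypothetical mapped object: Aliases.MyApp.MainForm.btnSave
function clickSaveIfAvailable()
{
  // On timeout, WaitAliasChild returns a stub object whose Exists property
  // is false instead of throwing, so there is nothing for try...catch to catch.
  var saveButton = Aliases.MyApp.MainForm.WaitAliasChild("btnSave", 10000);
  if (saveButton.Exists)
    saveButton.Click();
  else
    Log.Error("Save button was not found within 10 seconds.");
}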
- AlexKaras (Champion Level 3), 6 years ago
shankar_r wrote:
Also, most errors during execution are related to object identification/timing issues, which try...catch can't catch.
Yes, absolutely valid and correct point.
- tristaanogre (Esteemed Contributor), 6 years ago
I can't add much more beyond what AlexKaras and shankar_r have said. In my opinion, exception handling for automated testing should have the primary goal of trapping errors in the test. Sure, if the underlying code is complicated enough that there are code bugs, then you need a bit more. However, the truth of the matter is that the automated test failed, indicating the possibility of a failed test. At this point, the call stack at the failure point provided by TestComplete is sufficient to tell you where to start the investigation and determine the root cause of the failure. When you make your automation code TOO complicated, then you spend all your time debugging the automation code and not enough time actually validating/verifying your AUT.
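As a minimal sketch of that plain style (object names, property and expected value are made up): a simple checkpoint with no try...catch around it. If it fails, TestComplete posts the error to the log, and the Call Stack panel already shows where the failure happened.

// Hypothetical objects and expected value -- adjust to your AUT.
function checkOrderTotal()
{
  var totalLabel = Aliases.MyApp.OrderForm.WaitAliasChild("lblTotal", 5000);
  aqObject.CheckProperty(totalLabel, "WndCaption", cmpEqual, "$100.00");
}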
- AlexKaras (Champion Level 3), 6 years ago
tristaanogre wrote:
When you make your automation code TOO complicated, then you spend all your time debugging the automation code and not enough time actually validating/verifying your AUT.
And I would like to add even more:
-- Unfortunately, it is a pretty rare case when the tested application is designed and documented well enough for test automation to really be done in parallel with development;
-- In most cases, test automation, or corrections made to match the application's changed behaviour, is done after the development task is completed;
-- The above means that the more time is required to put an automated test into production, the more time manual testers will spend verifying things that can and should be verified automatically. And this means an increased load on manual testing and decreased efficiency in manual exploratory verification of complex, corner and non-standard cases.
With the above in mind, I think that sometimes it is better to have less perfect (from the classical development point of view) test code in favor of code that is more easily understandable and modifiable.
The less time it takes the person who supports the test code to put it into production, the more time manual testing has for extended application verification.