Software quality has a unique requirement: testers need to report to development teams on what works and what does not. Why does this core requirement push testers toward contradictory approaches to software testing? The first requirement of most development teams is to confirm the features that work well. The second requirement is to then find the bugs hiding in those features.
The first approach is the one taken in most team or company environments, because it has higher priority than actually measuring the quality of the software. There are metrics like Defect Removal Efficiency (DRE), which is essentially the ratio of bugs caught in house to the total bugs found, counting both the in-house bugs and those later reported by customers. But how many teams actually use DRE for releases? Most Agile teams are more concerned with their release that week, so the entire focus becomes validating what works, and maybe fixing the few showstopper bugs found by a small testing effort.
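To make the metric concrete, here is a minimal sketch of the DRE calculation, using the common definition of defects removed before release divided by total defects found (the function name and example counts are illustrative, not from any particular team):

```python
def defect_removal_efficiency(found_in_house: int, found_by_customers: int) -> float:
    """DRE = defects caught before release / total defects found.

    A DRE of 1.0 means customers found nothing the team missed.
    """
    total = found_in_house + found_by_customers
    if total == 0:
        raise ValueError("no defects recorded; DRE is undefined")
    return found_in_house / total

# Example: 90 bugs caught in house, 10 reported by customers.
print(defect_removal_efficiency(90, 10))  # → 0.9
```

A team tracking this per release would see quality trends over time instead of only that week's pass/fail status.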
Now why is this not very helpful for software testers?
The intelligence, or the thinking, is supposed to happen while building the test plan. The plan is then converted into test cases that software testers merely execute. Only about 5% of the test effort goes to exploratory testing, where testers can use their intelligence and carry out risk-based testing. One of the major issues is that developers and testers have no shared language for communicating what is important for that release cycle. So the two needs, validating the application and finding bugs through a risk-based approach, end up contradicting each other.
For example, take the case of testing a script that reads an input date file, where the output needs to be validated for its format. With the test-case-driven approach, we first test a well-formed input file and check the output format. The format would match and the test would pass. So how would the risk-based approach differ? One can systematically introduce randomness into the input file to find out where and when the script breaks and fails to produce the expected output format.
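This idea can be sketched in a few lines. Below, `convert` stands in for the hypothetical script under test (it parses a `DD/MM/YYYY` date and emits ISO `YYYY-MM-DD`); the mutation operators and the 50-trial budget are illustrative choices, not prescribed by any tool:

```python
import random
import re
from datetime import datetime

def convert(line: str) -> str:
    """Toy stand-in for the script under test: DD/MM/YYYY -> YYYY-MM-DD."""
    return datetime.strptime(line.strip(), "%d/%m/%Y").strftime("%Y-%m-%d")

ISO_FORMAT = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def mutate(line: str, rng: random.Random) -> str:
    """Introduce one random change: drop, duplicate, or replace a character."""
    i = rng.randrange(len(line))
    op = rng.choice(["drop", "dup", "replace"])
    if op == "drop":
        return line[:i] + line[i + 1:]
    if op == "dup":
        return line[:i] + line[i] + line[i:]
    return line[:i] + rng.choice("0123456789/-x ") + line[i + 1:]

rng = random.Random(42)
well_formed = "31/12/2024"

# Happy path: the test-case-driven approach stops here.
assert ISO_FORMAT.match(convert(well_formed))

# Risk-based exploration: record exactly which variations break the output.
failures = []
for _ in range(50):
    candidate = mutate(well_formed, rng)
    try:
        out = convert(candidate)
        if not ISO_FORMAT.match(out):
            failures.append((candidate, out))
    except ValueError:
        failures.append((candidate, "raised ValueError"))

print(f"{len(failures)} of 50 mutated inputs broke the output format")
```

The value is in the `failures` list itself: each entry is a concrete input variation the tester can take back to the developer and ask how likely it is in production.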
Now testers can quantify the probability that each change to the input file disrupts generation of the output file. Most of the time testers just vary the input file to produce a bug; they are not thinking about the probability of that event happening in production. Whereas the developer knows that a completely junk file given as input is unlikely to ever occur. That information, what is important and how probable each input variation is, never passes from the developer to the testers. Testers end up just executing test cases and finding a few bugs, and might totally miss the bugs that could have been uncovered by systematically breaking the app and asking the developer about the chances of each random input occurring in production.
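One way that missing conversation could be captured is a simple risk ranking: the tester supplies the failure classes found by breaking the app, the developer supplies probability estimates, and together they score each bug. The failure classes, probabilities, and severities below are all hypothetical placeholders for that joint exercise:

```python
# Each entry: (developer-estimated production probability, severity 1-5).
# All values here are illustrative assumptions, not measured data.
risks = {
    "extra whitespace in date field": (0.20, 2),
    "two-digit year":                 (0.05, 3),
    "completely junk file":           (0.001, 5),
}

def risk_score(probability: float, severity: int) -> float:
    """Expected impact: likelihood of the input occurring times its severity."""
    return probability * severity

# Rank bugs by risk instead of treating every failure as equally urgent.
ranked = sorted(risks.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (p, s) in ranked:
    print(f"{name:32s} score={risk_score(p, s):.3f}")
```

Note how the "completely junk file" bug, which a tester might proudly report, drops to the bottom of the list once the developer's probability estimate is factored in.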