Let's say your testing process/infrastructure/pipeline is something in which you have 80% confidence. That is, when you run your tests (however you run them), you think you have a four-out-of-five chance of finding any given problem.
Sometimes, people want to do multiple test passes in order to reduce risk. For instance, if your chance of finding a problem is four out of five on the first pass and four out of five on the second pass, then it stands to reason that the chance of any given problem being found is 24 out of 25 across two passes.
The first pass grants any given problem a four-out-of-five chance of being found. That's the equivalent of a twenty-out-of-twenty-five chance. The remaining fifth of the issues gets a second chance of being found, again at four out of five.
If four out of five of those remaining issues will be caught, and five out of twenty-five issues were still unknown after the first pass, then you are left with one out of every twenty-five issues unfound after the second pass.
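As a quick check of that arithmetic, here is a minimal Python sketch using the standard-library `fractions` module; the 4/5 catch rate is just the assumed confidence from above.

```python
from fractions import Fraction

# Assumed from above: an 80% (four-out-of-five) chance of
# finding any given problem in a single test pass.
catch_rate = Fraction(4, 5)

missed_after_one = 1 - catch_rate                        # 1/5 still unknown
missed_after_two = missed_after_one * (1 - catch_rate)   # 1/25 still unknown

print(1 - missed_after_two)  # 24/25 found across two passes
```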
This can be simplified when you look at your testing in terms of how much you mistrust it. Assuming the misses are randomly distributed (that is, the passes are independent), if you mistrust each test pass 20%, then your mistrust of two test passes should be about 4% (20% of 20% = 4%).
So running your tests twice is really an attempt to multiply your mistrust in them by itself. It is an attempt to turn mistrust into mistrust squared.
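Put as code, the simplification is just exponentiation. A hypothetical helper, again assuming independent passes:

```python
from fractions import Fraction

def residual_mistrust(mistrust, passes):
    """Chance a given problem survives every pass, assuming the passes are independent."""
    return mistrust ** passes

print(residual_mistrust(Fraction(1, 5), 1))  # 1/5  -> 20% mistrust after one pass
print(residual_mistrust(Fraction(1, 5), 2))  # 1/25 -> 4% mistrust after two passes
```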
If you want to mitigate mistrust, that is fine. To me, taking my lack of trust and applying it multiple times seems like a smell.
Rather than attempting to raise my mistrust in a process or tool to some power, I like to try to eliminate the mistrust altogether.
Testing is no exception. However you are testing your code, you should only need one pass to make a decision.