Saturday, December 14, 2019

Making my tests multi-bound

To multi-bind a test is to make it so that a single test enforces the same semantic requirement against different portions of a system.

This has been going on for a long time, though not everyone has noticed. Teams that use the Gherkin syntax to define a behavior are likely to exploit those definitions in two ways:
  1. Bind it automatically with SpecFlow or Cucumber.
  2. Treat it as a manual test script.
This is an early form of multi-binding; a sketch of the idea follows.
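To make the dual use concrete, here's a minimal sketch using behave (a Python analogue of Cucumber/SpecFlow). The scenario text and the account arithmetic are invented for illustration: a tester can walk through the Gherkin by hand, while the step definitions bind the same words for automated runs.

```python
# A behave-style sketch (behave step files normally live in features/steps/).
# The Gherkin it binds is shown here as a comment; a tester can follow it
# manually, while behave executes it through the step definitions below.
#
#   Scenario: Deposits raise the balance
#     Given an account with a balance of 10 dollars
#     When I deposit 5 dollars
#     Then the balance is 15 dollars
from behave import given, when, then


@given("an account with a balance of {amount:d} dollars")
def given_an_account(context, amount):
    context.balance = amount


@when("I deposit {amount:d} dollars")
def deposit(context, amount):
    context.balance += amount


@then("the balance is {amount:d} dollars")
def check_balance(context, amount):
    assert context.balance == amount
```

One description, two bindings: the prose is the manual script, and the decorators make the same prose executable.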

I'm talking about something more than that, and it's the foundation of stability in all my pipelines, so it's worth discussing first.

In my environment, any given test can be executed in all of the following ways:
  1. As a unit test.
  2. As an integration test with no networking (everything runs in a single process).
  3. As an integration test for the backend with mocked data.
  4. As a gate between the blue and green environments of the backend.
  5. As an integration test for the Android client, with a local server and a mocked database.
  6. As an integration test for the WebGL client, with a local server and a mocked database.
  7. As a gate between the blue and green environments of one of my WebGL deployments.
There's nothing to stop me from adding more ways to run tests if I need them. This seems pretty good for what amounts to a one-man show. A sketch of how one test body can be bound to several of these modes appears below.
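Here's a minimal pytest-style sketch of the idea. The driver classes, the TEST_BINDING variable, and the account operations are invented stand-ins for a real system, not code lifted from my pipeline:

```python
# One test, multiple bindings: an environment variable (hypothetical) picks
# which portion of the system the test is enforced against.
import os

import pytest


class InProcessDriver:
    """Binds the test to the whole system running in a single process."""

    def __init__(self):
        self._accounts = {}

    def create_account(self, name):
        self._accounts[name] = {"name": name}

    def get_account(self, name):
        return self._accounts[name]


class HttpDriver:
    """Binds the same test to a deployed backend (e.g., a blue/green gate)."""

    def __init__(self, base_url):
        self.base_url = base_url

    def create_account(self, name):
        raise NotImplementedError(f"would POST to {self.base_url}/accounts")

    def get_account(self, name):
        raise NotImplementedError(f"would GET {self.base_url}/accounts/{name}")


@pytest.fixture
def system():
    # The pipeline stage chooses the binding; the test body never changes.
    if os.environ.get("TEST_BINDING", "in-process") == "in-process":
        return InProcessDriver()
    return HttpDriver(os.environ["TEST_BASE_URL"])


def test_created_account_is_retrievable(system):
    # One semantic requirement, enforced against whichever binding is active.
    system.create_account("pat")
    assert system.get_account("pat")["name"] == "pat"
```

Each new execution mode is just another branch (or parametrization) in the fixture; the requirement itself is written exactly once.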

Thursday, December 27, 2018

Test-Coverage

[Cartoon: a person addressing a group: "To improve personal hygiene, we'll be tracking water-consumption."]

Measuring test-coverage tells you almost nothing of value. Yes, a disciplined approach leads to high test-coverage, and the resulting tests add a lot of value to the development environment.

That correlation isn't something on which you can count, though.

It's like measuring the sharpness of knives in a kitchen. A good cook is going to keep their knives sharp all the time. An amateur cook will probably ignore a report that their knives are too dull.

The same is true of test-driven development. You're either a professional or you aren't. Either way, test-coverage isn't going to change it.

Most of the time, organizations that are tracking test-coverage end up with something worse than no tests: vast tracts of useless tests that increase cost while providing no real protection.

Wednesday, December 19, 2018

Which Is Better: Big Tests or Little Tests?

[Cartoon: two people arguing, both foaming at the mouth: "I only use NUTS!" "I only use BOLTS!"]

As an organization gets healthier, it starts to focus on building a meaningful test suite for its product or products. Should you write big tests that cover large swaths of your system, or should you write little tests that give you precise, reliable feedback on individual parts?

Wednesday, October 17, 2018

Mistrust Squared

[Cartoon: three charts: a line with a marker labeled "20%"; two perpendicular lines with markers forming a square labeled "4%"; three perpendicular lines with markers defining a cube labeled "0.8%".]

Let's say your testing process/infrastructure/pipeline is something in which you have 80% confidence. That is, when you run your tests (however you run them), you think you have a four-out-of-five chance of finding any given problem.

Sometimes, people want to do multiple test passes in order to reduce risk. For instance, if your chances of finding a problem are four out of five on the first pass and four out of five on the second pass, then it stands to reason that the chances of it being found by at least one pass are 24 out of 25 if you do two passes.
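Written out (with the hidden assumption that the passes are truly independent of each other), the arithmetic looks like this; the figures match the cartoon above:

```python
# An 80%-confident pass misses a given problem 20% of the time. If passes
# really were independent, n passes would miss it 0.2**n of the time.
p_miss = 1 - 0.80

for n in (1, 2, 3):
    print(f"{n} pass(es): miss {p_miss ** n:.1%}, find {1 - p_miss ** n:.1%}")

# 1 pass(es): miss 20.0%, find 80.0%
# 2 pass(es): miss 4.0%, find 96.0%   (24 out of 25)
# 3 pass(es): miss 0.8%, find 99.2%
```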