Requiring basic tests for contributions

I think a lot of benefits come from testing, even the most basic of tests.

To help combat our low test coverage and get the other benefits of testing, I think we should (even if just temporarily) require at least basic tests from contributors.

I guess the only downside is that it may lower total contributions, but if we’re having to spend time fixing bugs down the line, is that really worth it?

It’s a good question. It might make sense to be a little data-driven about this. Is there a way to measure how many of our bugs come from test-free vs. test-containing contributions? I know a controlled experiment is impossible, but the scientist in me wonders whether there’s some systematic, historical digging we could do to see whether asking for tests alongside most contributions would change things for the better…
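One rough way to do that digging: classify past commits by whether they shipped with tests, then count how many later bug-fix commits touched the same files. This is only a sketch under loud assumptions — the `Commit` record, the `is_bug_fix` flag, and the `tests/` path heuristic are all invented here, and the commit metadata would need to be extracted first (e.g. from `git log --name-only`):

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    files: list          # paths touched by the commit
    is_bug_fix: bool     # e.g. message matched "fix"/"bug" (heuristic!)

def bug_followups_by_test_presence(commits):
    """For each non-fix commit, count later bug-fix commits that touched
    the same files; average that count separately for commits that
    shipped with tests vs. those that didn't."""
    with_tests = []
    without_tests = []
    for i, c in enumerate(commits):
        if c.is_bug_fix:
            continue
        later_fixes = sum(
            1
            for later in commits[i + 1:]
            if later.is_bug_fix and set(later.files) & set(c.files)
        )
        # Crude heuristic for "came with tests".
        shipped_tests = any(f.startswith(("tests/", "test_")) for f in c.files)
        (with_tests if shipped_tests else without_tests).append(later_fixes)

    def avg(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return avg(with_tests), avg(without_tests)
```

It won’t prove causation, but a big gap between the two averages would at least be suggestive.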

I agree that’s a benefit of testing, but it isn’t the only one. One thing I keep running into while refactoring modules to pathlib is that some modules have few or no tests at all, which makes it hard to judge whether I’ve actually broken a module’s behavior without testing it by hand.

I’m sure this won’t be the last time anyone refactors any code.

Now, I do plan on writing some tests where coverage is significantly lacking, but it would have been nice to have at least a couple of basic ones in place beforehand.
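For concreteness, a “basic test” during a refactor like this can be as small as pinning the old behavior before swapping implementations. Everything below is hypothetical — the helper names and paths are invented, not from any real module:

```python
import os.path
import unittest
from pathlib import Path

# Hypothetical helper before the refactor...
def config_path_old(home, name):
    return os.path.join(home, ".config", name)

# ...and after switching to pathlib. The test checks the
# observable behavior is unchanged.
def config_path_new(home, name):
    return str(Path(home) / ".config" / name)

class TestConfigPath(unittest.TestCase):
    def test_refactor_preserves_behavior(self):
        self.assertEqual(
            config_path_new("/home/user", "app.ini"),
            config_path_old("/home/user", "app.ini"),
        )
```

Even one test like that (run with `python -m unittest`) tells the next refactorer whether they’ve changed behavior.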

To add another benefit: it will also make the PR review process faster and more efficient if the reviewer can run the submitted tests rather than testing the PR manually.


As a new contributor I support the idea of requiring tests - it’s best practice for pretty much any project and helps ensure the quality and longevity of the code being submitted. Ultimately you can’t prove a PR does what it says it does without some form of testing, and moving the onus onto the submitter makes sense - as long as reviewers trust the integrity of the submitted tests. Not only does it aid reviewing, it also inspires some confidence in the submitted code (at least hopefully anyway - don’t want to jump the gun on reviews of my submissions aha). It also fits nicely with CI, making it easy to see when new contributions are ready for review.

I regularly see percentages of test coverage. What are they, and how are they calculated? Is it just ‘how many functions/methods have a dedicated test’?

Hi, @dosoe! The Wikipedia article about code coverage metrics is actually pretty helpful in this area:

Of the metrics listed there, the most common definition is “statement coverage.”