It's 2:17 am as I'm typing this, sitting on a phone bridge during a deployment of the new “Project: Lumbergh,” and I'm glad to know that I'm not the only one with a clicky keyboard. My comment about it sparked a brief conversation about mechanical key switches, but it died down as the rollout continued. It's sounding a bit like mission control at NASA. So while I'm waiting until I need to answer a question (it's happened a few times so far), I thought I might go into some detail about my recent rants about testing.
It's not that I'm against testing, or even writing test cases. I think I'm still coming to grips with the (to me) recent hard push for testing über alles in our department. The code was never set up for unit testing, and for some of the harder tests, like database B returning results before database A, we did manual testing, because it's hard to set such a test up automatically. I mean, who programs an option to delay responses in a database?
It's especially hard because “Project: Lumbergh” maintains a heartbeat (a known query) to ensure the database is still online. Introducing a delay via the network will trip the heartbeat monitor, taking that particular database out of query rotation and thus, defeating the purpose of the test! I did end up writing my own database endpoint (the databases in question talk DNS) and added an option to delay the non-heartbeat queries. But to support automatic testing, I now have to add some way to dynamically tell the mocked database endpoint to delay this query, but not that query. And in keeping with the theme, that's yet more testing, for something that customers will never see!
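To make the delay idea concrete, here's a minimal sketch in Python of a mock endpoint that answers the heartbeat immediately but delays everything else. The heartbeat marker, the delay value, and the echo-style "answer" are all invented for illustration; the real endpoint speaks actual DNS.

```python
import socket
import time

HEARTBEAT_QNAME = b"heartbeat"   # hypothetical: substring identifying the known heartbeat query
DELAY_SECONDS = 2.0              # hypothetical delay applied to everything else

def delay_for(packet: bytes) -> float:
    """Heartbeat queries answer immediately; all other queries are delayed."""
    return 0.0 if HEARTBEAT_QNAME in packet else DELAY_SECONDS

def serve_mock(sock: socket.socket) -> None:
    """Answer each datagram, echoing it back as a stand-in for a real DNS reply."""
    while True:
        packet, peer = sock.recvfrom(4096)
        time.sleep(delay_for(packet))  # slow down non-heartbeat traffic only
        sock.sendto(packet, peer)
```

The point of splitting out `delay_for` is that the "which queries get delayed" decision is the part that would need to become dynamic for automatic testing.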
Then there's the whole “checking to ensure something that shouldn't happen, didn't happen” thing. To me, it feels like proving a negative. How long do we wait until we're sure it didn't happen? Is such activity worth the engineering effort? I suspect the answer from management is “yes” given the push to Test All The Things™, but at times it feels as if the tests themselves are more important than the product.
I'm also skeptical about TDD in general. There's this series on using TDD to write a sudoku solver:
- OK, Sudoku
- Moving On With Sudoku
- Sudoku: Learning, Thinking and Doing Something About It
- Sudoku 4: Disaster Narrowly Averted
- Sudoku 5: Objects Begin to Emerge
Reading through it, it does appear to be a rather weak attempt at satire of TDD that just ends after five entries. But NO!—this is from Ron Jeffries, one of the founders of Extreme Programming and an original signer of the Manifesto for Agile Software Development. If he gave up on TDD for this example, why is TDD still a thing? In fact, in looking over the Manifesto for Agile Software Development, the first tenet is: Individuals and interactions over processes and tools. But this “testing über alles” appears to be nothing but processes and tools. Am I missing something?
And the deployment goes on …
“99 failing tests in the queue! 99 failing tests! Check one out, grind it out, 98 failing tests in the queue!”
So I'm facing this nearly twenty-hour-long regression test and I had this idea—instead of querying the mocked endpoint to see if it did its thing or not, have the mocked endpoint check whether it did its thing or not.
I now send the mocked endpoint the testcase itself, along with a query flag indicating whether the endpoint should be queried for that testcase. The mocked endpoint saves this information and sets a queried flag for the testcase to false. If it is queried, it updates the queried flag for the given request. At the end of the regression test (and I pause for a few extra seconds to let any pending requests hopefully finish), the mocked endpoint then goes through the list of all the testcases it was told about and checks whether the query flag and the queried flag match—if not, it logs an error.
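The bookkeeping described above can be sketched as follows. This is a Python illustration of the technique, not the actual implementation; the function names and the dictionary-based state are my own invention.

```python
# State kept by the mocked endpoint across the whole regression run.
errors: list[str] = []
expected: dict[str, bool] = {}   # testcase id -> query flag: "will this endpoint be queried?"
queried: dict[str, bool] = {}    # testcase id -> queried flag: "was it actually queried?"

def register(testcase: str, will_query: bool) -> None:
    """Called when a testcase starts: record the expectation, clear the queried flag."""
    expected[testcase] = will_query
    queried[testcase] = False

def on_query(testcase: str) -> None:
    """Called whenever the endpoint actually receives a query for a testcase."""
    queried[testcase] = True

def final_check() -> None:
    """At the end of the run, log every mismatch between the two flags."""
    for testcase, should in expected.items():
        if queried[testcase] != should:
            errors.append(f"{testcase}: expected queried={should}, got {queried[testcase]}")
```

A run that registers three testcases, queries only one of the two that expect a query, and then calls `final_check()` would log exactly one mismatch.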
Sure, now we have two error logs to check, but I think that's better than waiting nearly twenty hours for results.
I got a baseline time for the subset of the regression test without the mock checks—35 seconds. I'm trying to beat a time of 4 hours, 30 minutes with the mock checks.
The new method ran the subset in 40 seconds. The entirety of the regression test, 15,852 tests, took just a few minutes—about the same as before.
I can live with that.
Now all that's left is to write the validation logic—I don't have it down yet.
The idea of the scrum framework is to organize a development process to move through the different project cycles faster. But does it always incentivize the right behaviours in doing so? Many of the users who joined the debate around the question on Stack Overflow have similar stories of how developers take shortcuts, get distracted by their ticket high score, or even feign productivity for managers. How can one avoid these pitfalls?
That the question was migrated from our workplace exchange to the software engineering one shows that developers consider concerns about scrum and its effectiveness to reach beyond the standard software development lifecycle; they feel its effect on their workplace as a whole. User Qiulang makes a bold claim in their question: Scrum is turning good developers into average ones.
Could that be true?