The Boston Diaries

The ongoing saga of a programmer who doesn't live in Boston, nor does he even like Boston, but yet named his weblog/journal “The Boston Diaries.”

Go figure.

Thursday, June 10, 2021

Just more grumblings about testing while sitting on a deployment

It's 2:17 am as I'm typing this, sitting on a phone bridge during a deployment of the new “Project: Lumbergh,” and I'm glad to know that I'm not the only one with a clicky keyboard. My comment about it brought forth a brief conversation about mechanical key switches, but it was short-lived as the production rollout kept going. It's sounding a bit like mission control at NASA. So while I wait until I'm needed to answer a question (it's happened a few times so far), I thought I might go into some detail about my recent rants about testing.

It's not that I'm against testing, or even writing test cases. I think I'm still coming to grips with the (to me) recent hard push for testing über alles in our department. The code was never set up for unit testing, and for some of the harder tests, like database B returning results before database A, we did manual testing, because it's hard to set such a test up automatically. I mean, who programs an option to delay responses in a database?

It's especially hard because “Project: Lumbergh” maintains a heartbeat (a known query) to ensure the database is still online. Introducing a delay via the network will trip the heartbeat monitor, taking that particular database out of query rotation and thus defeating the purpose of the test! I did end up writing my own database endpoint (the databases in question talk DNS) and added an option to delay the non-heartbeat queries. But to support automatic testing, I now have to add some way to dynamically tell the mocked database endpoint to delay this query, but not that query. And in keeping with the theme, that's yet more testing, for something that customers will never see!
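
To give a rough flavor of what I mean, here's a Python sketch using the third-party dnslib package—this is not the actual endpoint, and the names, port, and delay values are all made up. The idea is a mock DNS server that always answers the heartbeat immediately but can be told, per query name, to sit on the answer for a bit:

  # Sketch of a mock DNS endpoint that delays selected non-heartbeat queries.
  # Assumes the third-party dnslib package; all names and numbers are made up.
  import time
  from dnslib import RR, QTYPE, A
  from dnslib.server import DNSServer, BaseResolver

  HEARTBEAT = "heartbeat.example.com."   # the known liveness query--never delayed
  DELAYS    = {}                         # query name -> seconds to delay the answer

  class DelayingResolver(BaseResolver):
      def resolve(self, request, handler):
          qname = str(request.q.qname)
          # Delaying the heartbeat would pull this endpoint out of query
          # rotation, so only non-heartbeat queries ever get delayed.
          if qname != HEARTBEAT:
              time.sleep(DELAYS.get(qname, 0))
          reply = request.reply()
          reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A("127.0.0.1"), ttl=60))
          return reply

  if __name__ == "__main__":
      # A test driver would update DELAYS dynamically (say, over a small
      # control socket) to delay this query but not that query.
      DELAYS["slow.example.com."] = 2.0
      DNSServer(DelayingResolver(), port=8053, address="127.0.0.1").start()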

Then there's the whole “checking to ensure something that shouldn't happen, didn't happen” thing. To me, it feels like proving a negative. How long do we wait until we're sure it didn't happen? Is such activity worth the engineering effort? I suspect the answer from management is “yes” given the push to Test All The Things™, but at times it feels as if the tests themselves are more important than the product.
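
The way such a check usually shakes out is to pick an arbitrary quiet period and then assert that nothing showed up during it. Here's a Python sketch of the shape of that (the names and the five-second window are entirely made up):

  # Sketch of "prove the negative": wait out a bounded quiet period, then
  # assert nothing arrived.  The five-second window is an arbitrary choice.
  import queue

  def assert_nothing_happened(events: queue.Queue, quiet_period: float = 5.0) -> None:
      try:
          event = events.get(timeout=quiet_period)   # block for up to quiet_period seconds
      except queue.Empty:
          return                                     # nothing happened--at least, not while we watched
      raise AssertionError(f"something that shouldn't happen, happened: {event!r}")

And even then, all it proves is that nothing happened within the window we were willing to wait.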

I'm also skeptical about TDD in general. There's this series of posts about using TDD to write a sudoku solver:

  1. OK, Sudoku
  2. Moving On With Sudoku
  3. Sudoku: Learning, Thinking and Doing Something About It
  4. Sudoku 4: Disaster Narrowly Averted
  5. Sudoku 5: Objects Begin to Emerge

Reading through it, it appears to be a rather weak attempt at satirizing TDD that just ends after five entries. But NO!—this is from Ron Jeffries, one of the founders of Extreme Programming and an original signer of the Manifesto for Agile Software Development. If even he gave up on TDD for this example, why is TDD still a thing? In fact, looking over the Manifesto for Agile Software Development, the first tenet is: Individuals and interactions over processes and tools. But this “testing über alles” appears to be nothing but processes and tools. Am I missing something?

And the deployment goes on …


“99 failing tests in the queue! 99 failing tests! Check one out, grind it out, 98 failing tests in the queue!”

So I'm facing this nearly twenty-hour-long regression test, and I had an idea—instead of the test driver asking the mocked endpoint whether it did its thing or not, have the mocked endpoint itself check whether it did its thing or not.

I now send the mocked endpoint the testcase itself (along with a query flag of true or false). The mocked endpoint saves this information and sets a queried flag for that testcase to false. If the endpoint is then queried for that testcase, it sets the queried flag to true. At the end of the regression test (and I pause for a few extra seconds to let any pending requests hopefully finish), the mocked endpoint goes through the list of all the testcases it was told about and checks that the query flag and the queried flag match—if not, it logs an error.
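
In rough outline, the bookkeeping on the mocked endpoint looks something like this Python sketch (not the actual code—the names are made up, and the real endpoint does this around handling actual queries):

  # Sketch of the mocked endpoint's bookkeeping: track, per testcase, whether a
  # query was expected and whether one actually arrived, then compare at the end.
  class MockEndpoint:
      def __init__(self):
          self.cases = {}   # testcase id -> {"query": expected?, "queried": seen?}

      def register(self, case_id, query_expected):
          # The test driver announces each testcase along with its query flag.
          self.cases[case_id] = {"query": query_expected, "queried": False}

      def handle_query(self, case_id):
          # Called whenever a query tied to a testcase actually arrives.
          if case_id in self.cases:
              self.cases[case_id]["queried"] = True

      def report(self, log=print):
          # End of the run: any testcase where the two flags disagree is an error.
          errors = 0
          for case_id, flags in self.cases.items():
              if flags["query"] != flags["queried"]:
                  errors += 1
                  log(f"{case_id}: query={flags['query']} queried={flags['queried']}")
          return errors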

Sure, now we have two error logs to check, but I think that's better than waiting nearly twenty hours for results.

I got a baseline time for a subset of the regression test without the mock checks—35 seconds. The old way of checking the mocks took 4 hours, 30 minutes on that same subset, so that's the time I'm trying to beat.

The new method ran the subset in 40 seconds. The entire regression test, all 15,852 tests, took just a few minutes—about the same as running it without the mock checks at all.

I can live with that.

Now all that's left is to write the validation logic—I still don't have it down yet.


Just adding a bit more fuel to the dumpster fire

The idea of the scrum framework is to organize a development process to move through the different project cycles faster. But does it always incentivize the right behaviours doing so? Many of the users who joined the debate around the question on Stack Overflow have similar stories of how developers take shortcuts, get distracted by their ticket high score, or even feign productivity for managers. How can one avoid these pitfalls?

That the question has been migrated from our workplace exchange to the software engineering one shows that developers consider concerns about scrum and its effectiveness larger than the standard software development lifecycle; they feel its effect on their workplace as a whole. User Qiulang makes a bold claim in their question: Scrum is turning good developers into average ones.

Could that be true?

Via Comment at Hacker News, Does scrum ruin great engineers or are you doing it wrong? - Stack Overflow Blog

Despite Betteridge's Law of Headlines, I'm inclined to answer “yes,” especially since it appears to me to involve processes and tools over individuals and interactions.

