There is a strong cultural difference between how we
(and by “we” I mean the team I work on)
used to handle testing and how we handle testing now.
The conversation du jour involved checking SIP headers: couldn't we just compare the returned SIP message against a “golden copy?”
I countered that there are some fields,
like the Call-ID: header, that change per call.
I was then asked about the custom headers we produce.
The one header we specifically talked about looks something like:
P-Foo-Custom: e=0; foo=this; bar=that; andthis=nothing
“The subfields,” I said, “don't have a set order. They can appear in any order.”
“Because of an implementation detail of Lua—that particular field is populated from a Lua hash table, and Lua doesn't guarantee any ordering on the hash table.”
“But … but … couldn't you add an option to keep an order?”
“The order shouldn't matter! Any client should be able to handle those subfields in any order.”
“But … but … couldn't you add code to maintain an order?”
“That would be yet more code to lock down what should be an implementation detail! And I do parse those headers when checking.”
“But … but … you could just compare against a ‘golden copy!’”
“I already parse and check the subfields!”
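For what it's worth, order-independent checking of such a header isn't much code at all. Here's a minimal sketch in Python (not our actual test code, and `parse_subfields` is a name I'm making up for illustration), using the P-Foo-Custom example above:

```python
def parse_subfields(value):
    """Parse a ';'-separated list of key=value subfields into a dict.

    Order is deliberately discarded, since the subfields may appear
    in any order (e.g. when generated from a Lua hash table).
    """
    fields = {}
    for part in value.split(';'):
        key, _, val = part.strip().partition('=')
        fields[key] = val
    return fields

# Two headers that differ only in subfield order compare equal:
a = parse_subfields("e=0; foo=this; bar=that; andthis=nothing")
b = parse_subfields("foo=this; andthis=nothing; e=0; bar=that")
assert a == b
assert a["foo"] == "this"
```

Compare the parsed dictionaries and the ordering problem simply goes away, with no extra code in the production path to freeze an implementation detail.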
I also get the feeling that the tests are assumed to be 100% functionally correct, and that any deviation MUST be a problem in the code being tested. The notion that the test COULD BE WRONG just doesn't come up. We went down a deep rabbit hole today where the issue turned out to be a misconfiguration in “Project: Sippy-Cup,” only it took over an hour to resolve. Again, the test was incorrect, not the code. (And the original regression test, which had the misconfiguration, passed every run. From what I understand reading up on this, you aren't supposed to test the tests, right?)
Then there was the “That test failed!” exchange:
“Oh, that's just me playing with the new regression test framework—it's not a valid test.”
“But it failed! We must investigate.”
I have to adjust to the fact that I have a new job.