Ah, “Project: Bradenburg”—the gift that keeps on giving.
The build server can now find the proper build node, so that's progress. The build fails because of linking errors, which at least validates the issues I'm having on my local machine.
I make one small change to the Makefile that I know will fix the issue, but the build server doesn't pick up the change, and it appears the build server is pulling the repository from git and not from Subversion.
It appears I did not receive the memo about this (a year and a half ago, I started to migrate our stuff to git, but stopped because some of our Solaris build servers couldn't check the repos out from git. And that's still an issue. But “Project: Bradenburg” doesn't run on Solaris, so for that and some other reasons, Ops “moved” the repository to git a few months ago).
So now the official build for “Project: Bradenburg” comes from git, while the rest of our department's official builds still come from Subversion. I can deal. I actually prefer git over Subversion anyway.
I get the credentials worked out for the git repository, make my small change to fix the linking issue, only to hit another linking issue. Because of course.
I swear, how did this ever compile in the first place?
I received an email from new reader Sylvain asking about Aleksandr, the Russian sendmail spambot that was plaguing me for months. It seems that Sylvain is having to deal with Aleksandr and asked me for some help, having read one of my posts detailing the problem (and if you are having déjà vu, it's not a glitch in the Matrix).
I told Sylvain about my solution—removing the email accounts Aleksandr was spamming. I also decided to update each of the posts to point to my solution as it seems others are having similar issues and not finding answers.
Back in October, I removed a whole section of my Gemini site because I got fed up with badly written bots. The best practices guide has a section about redirection limits (something that should be in the specification but for some reason isn't). In the section of my site I removed, I had such a test—a link that would redirect to a link that would itself redirect, ad nauseam. So I would see a new client being tested, perhaps a dozen or two attempts to resolve the link, and then it would stop.
But then there were the clients written for automated crawling of Gemini space that never bothered with redirection limits.
It got so bad that there was one client stuck for a month, endlessly making requests for a resource that was forever out of reach.
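The limit those crawlers skipped isn't hard to implement. Here's a minimal sketch of a redirect cap in a Gemini client, assuming a limit of five (the number is my own choice, and `fetch` is a stand-in for a real Gemini request—here it simulates exactly the kind of endless redirect loop my test served):

```python
MAX_REDIRECTS = 5  # assumed cap; the best practices guide recommends a small fixed limit

def fetch(url):
    """Stand-in for a real Gemini request; returns (status, meta).

    This fake always answers with status 30 (temporary redirect)
    pointing somewhere new, simulating a redirect loop.
    """
    return 30, url + "/again"

def resolve(url):
    """Follow redirects up to MAX_REDIRECTS, then give up."""
    for _ in range(MAX_REDIRECTS):
        status, meta = fetch(url)
        if status in (30, 31):  # Gemini temporary/permanent redirect
            url = meta
            continue
        return status, url
    raise RuntimeError(f"redirect limit exceeded at {url}")
```

A client built this way makes a handful of requests against a redirect loop and stops, instead of hammering the server for a month.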
I had even listed the redirection test as a place for bots NOT to go in the site's robots.txt file (robots.txt itself being covered by a companion specification).
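For reference, excluding such a test in robots.txt takes only a couple of lines (the path here is hypothetical, not the actual one from my site):

```
User-agent: *
Disallow: /redirect-test/
```

Any crawler that honors the companion specification never touches the loop in the first place.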
So because it can be needlessly complex to locate a bot contact address, I was like, XXXX it. I removed the redirection test (along with the rest of the section, which included a general Gemini client test because, well, XXXX it). Why waste my bandwidth for people who don't care?
I only bring this up now because I'm noticing two bots attempting to crawl a now non-existent redirection test, probably with a backlog of thousands of links because hey, who tests their bots anyway?