I received a work email today, notifying everybody at the Ft. Lauderdale Office of the Corporation that the office won't reopen until Monday, October 5th. We were supposed to reopen in late June, maybe early July, but given that it's now late July, the Powers That Be decided it might be better to wait several more months than to continuously plan to open the office only to have to push the date back.
I can't say I'm upset at the decision.
I suppose it was only a matter of time, but the bad web robot behavior has finally reached Gemini. There's a bot out there that made 42,766 requests in the past 27 hours (roughly one every two seconds) until I got fed up with it and blocked it at the firewall. And according to my firewall, it's still trying to make requests. That tells me that whatever it is, it's running unattended. Several other people running Gemini servers have reported the same client hammering their systems as well.
Now, while the requests average out to about one every two seconds, they actually come in bursts—a metric buttload pops in, a bunch fail to connect (probably because of some kernel limit) and all goes quiet for maybe half a minute before it starts up again. Had it actually limited the requests to one every two seconds (or even one per second) I probably wouldn't mind as much.
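If I ever wanted the server to enforce that pacing itself instead of blocking at the firewall, a token bucket is the usual trick: it allows short bursts but caps the long-run average. A minimal sketch in Python (the class and its parameters are mine, not anything from an actual Gemini server):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second on average, with bursts up to `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate                 # tokens added per second
        self.burst = burst               # maximum bucket size
        self.tokens = burst              # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One request every two seconds on average, bursts of up to five.
bucket = TokenBucket(rate=0.5, burst=5)
```

A client that paced itself would sail through; this bot's pattern (a burst, then silence) would drain the bucket immediately and get refused until it backed off.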
As it was, though, quite a large number of the requests were malformed—the bot wasn't handling relative links properly, so I can only conclude it was written by the same set of geniuses who wrote the MJ12Bot.
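And it's not as if resolving relative links is hard—it's specified in RFC 3986, and most standard libraries do it for you. In Python, for instance, `urllib.parse.urljoin` handles it (amusingly, it knows about `gopher://` URLs but not `gemini://` ones, whose unrecognized scheme it passes through unresolved, so the sketch below uses `https`):

```python
from urllib.parse import urljoin

# Resolving relative references against a base URL, per RFC 3986 --
# the thing the bot apparently couldn't manage.
base = "https://example.com/blog/2020/07/28.1"

print(urljoin(base, "pic.jpg"))       # https://example.com/blog/2020/07/pic.jpg
print(urljoin(base, "../index.gmi"))  # https://example.com/blog/2020/index.gmi
print(urljoin(base, "/top.gmi"))      # https://example.com/top.gmi
```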
On the plus side, it did reveal a small bug in the codebase, allowing some of the malformed requests to be successful when they shouldn't have been.
No sooner do I find one bug than I find another one.
In an unrelated program.
In this case, the bug was in my gopher server, or rather, in the custom module for the gopher server that serves up my blog entries. Earlier this month, I rewrote parts of that module to convert HTML to text, and in the process I munged the part that serves up ancillary files like images. I found the bug as I was pulling links for the previous entry, when I came across this entry from last year about the horrible job Lynx was doing converting HTML to text. In that post, I wrote what I would like to see, and I decided to check how good a job I did.
It's pretty much spot on, but for some reason I decided to view the image on that entry (via gopher), and that's when I found the bug.
The image never downloaded because the coroutine handling the request crashed, which triggered a call that caused the server to stop running.
The root cause was that I forgot to prepend the storage path to the ancillary files. And with that out of the way …
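The fix itself is the boring kind: join the configured storage path onto whatever filename the request asks for, and while you're in there, refuse anything that escapes that directory. A sketch in Python with entirely made-up names (my server doesn't look like this; the storage path and function are hypothetical):

```python
import os

STORAGE = "/var/db/blog"   # hypothetical storage path for ancillary files

def ancillary_path(requested):
    # Prepend the storage path -- the step the original code forgot --
    # and normalize the result so "../" tricks can't escape STORAGE.
    full = os.path.normpath(os.path.join(STORAGE, requested))
    if not full.startswith(STORAGE + os.sep):
        raise ValueError("request escapes storage path: " + requested)
    return full
```

The traversal check is worth the two extra lines: a server that just got done fielding 42,766 requests from a badly written bot shouldn't assume requested filenames are polite.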
One of the links on this entry won't work on gopher or Gemini.
That's because I haven't implemented a very important feature from the web version of my blog—linking to an arbitrary period of time!
I don't even think that link will work on gopher or Gemini either, because of the way I “translate” links when generating the text from HTML.
HTML-to-text translation is hard, let's go shopping—oh, wait … I can't because of COVID-19!
Update a few moments later …
Not all links break on Gemini.