Thursday, January 02, 2020
Net of Cards
I just spent the past three days sans email due to an issue with my hosting company. The title of this post isn't meant to denigrate the hosting company, because the situation wasn't entirely their fault. I used to do that type of work, so I know how crazy it can get. It also doesn't hurt that I know the owner, and I was able to provide some key information so they could fix the problem. The initial issue was fixed, but a few virtual servers (and mine was one of them) were still cut off from the Internet. It was while describing the situation in detail to Bunny that I had that “… and then this over here … wait a second … that's it!” moment that solved the issue.
I then told the owner of the company, “all you need to do is reroute the tachyon emitter through the shield generator and transport Wesley 25 meters outside the ship to get this working.” Said technobabble was applied and my server was back online.
No, the title of this post is just a bit of commentary on the utter fragility of modern software (link via Lobsters) that this situation reminds me of. The fact that it took several engineers several days to get a virtual server back online is telling. There are so many layers of abstraction that it's often hard to determine the root cause of an issue. And I suffered for nearly three days sans email!
Anyway … Happy New Year everybody! I'm just sad that this didn't happen (link via Kirk Israel). It would have been awesome!
Friday, January 03, 2020
The New Age East Asian meditation neurohacking golf club store
The last items of note in Dad's storage unit are golf clubs—everything else has been cleaned out. Today is pretty much the last day we have to clean out the unit before we're stuck with a full month's rent. Bunny had done the work of organizing the clubs; now we just had to get them out of there.
Several trips later, and we don't have to worry about the storage unit any more.
Now, anybody want some golf clubs?
Please?
Sunday, January 05, 2020
Star Wars: The Rise of Skywalker
When we last left our heroes, Luke was dead, Kylo didn't shoot first, and Finn was in a pointless B-plot.
And now on with the show …
I know I'm late to the party on this, but as far as remakes of “Star Wars: Return of the Jedi” go, it's not bad. I certainly liked it way more than I did the previous installment, and it was clear that J. J. Abrams ran from Rian Johnson's direction of “Star Wars: The Last Jedi,” for good or ill.
And make no mistake, this is a remake of “Return of the Jedi,” down to a showdown on the forest moon Endor and good ol' Emperor Palpatine pulling his “give in to your hate, strike me down and rule the galaxy” shtick he pulled on Luke. But hey, J. J. Abrams also directed “Star Wars: The Force Awakens” which was a remake of “Star Wars: A New Hope,” so I'm not terribly surprised by it either.
The movie, like all Star Wars movies, is visually beautiful, but … I think I don't care for modern movie techniques like quick editing and over-reliance on garish special effects (I was surprised by the epileptic warning of flashing lights shown at the beginning of the movie—yes, it's that bad) and this movie is filled with them, to the point I found it distracting during the climax of the movie. Another aspect of the movie I found a bit annoying was the whole “fetch quest” vibe I got from it. The whole “we need to go here to get this MacGuffin that will show us how to get to the next MacGuffin.” I thought I was watching a Star Wars movie based on a role playing game. And due to the MacGuffin hunt, we went from location to location. In the original trilogy, “A New Hope” took place in three primary locations (Tatooine, the Death Star, Yavin 4), “The Empire Strikes Back” takes place in three primary locations (Hoth, Dagobah, Bespin), and “Return of the Jedi” takes place in, you guessed it, three primary locations (Tatooine again, the second Death Star, and the forest moon Endor). I lost track of the number of locations in this movie—I think at least six planets and numerous ships.
And can we get away from the XXXXXXX desert planets already? Sheesh.
Afterwards at dinner, refrigerator logic started to kick in as little details started not making sense. One example: one MacGuffin the characters obtained that, story-wise, must have been made after “Return of the Jedi” but before “The Force Awakens.” But as I started thinking about that particular MacGuffin, I asked myself: who made it? Why was it made? Who was it made for? It didn't make sense. And that's just one MacGuffin—there are others.
If this wasn't a Star Wars movie, it would be a fine popcorn type movie. Decent, but nothing terribly special about it. And that's what's sad about this movie. It's … okay. It wasn't bad, but it's not great. I don't hate it.
So … yeah.
Monday, January 06, 2020
Adding CGI support to my gopher server
Back when I released my gopher server, the only way to generate dynamic output was to add a custom handler to the program. I noticed that other gopher servers all claimed CGI support, but when I was rewriting the gopher server, I felt that CGI support as defined didn't make much sense for gopher, but an email conversation changed my mind on the subject. I thought I would go through how I support CGI for my gopher server.
On a Unix system, the “meta-variables” defined in the specification are passed in as environment variables. So going through them all, we have:
AUTH_TYPE
Only required if the request requires authorization. Since gopher doesn't have that concept, this meta-variable doesn't have to be set. Good. Next.
CONTENT_LENGTH
This is only defined if data is being passed into the CGI script. The gopher protocol doesn't have this concept, so this meta-variable doesn't have to be set.
CONTENT_TYPE
If CONTENT_LENGTH isn't set, then this one doesn't need to be set either.
GATEWAY_INTERFACE
The specification I'm following defines version 1.1 of CGI, so this one is easy—it's just set to “1.1” and we're done.
PATH_INFO
This one is tough, and I had to run a bunch of experiments on my webserver to see how this meta-variable works. As the specification states:
It identifies the resource or sub-resource to be returned by the CGI script, and is derived from the portion of the URI path hierarchy following the part that identifies the script itself.
Basically, if I reference “/script” then PATH_INFO isn't set, but if I reference “/script/data” then PATH_INFO should be “/data”. Because of this meta-variable (and a few others) I had to drastically change how requests are passed around internally, but I got this working.
One issue I had with this was leading slashes. Gopher doesn't have a concept of a “path”—it has the concept of a “selector,” an opaque sequence of characters that makes up a reference. That, in turn, makes gopher URLs different enough from web URLs. This also means that a gopher “selector” does not have to start with a leading slash, something I had to mention up front on my gopher space (none of the selectors on my gopher site start with a slash). But there are gopher sites out there with selectors that do start with a slash, and I wanted to take both types into account. That was harder than it should have been.
But it also needs the leading portion of the selector up to the script name prepended. For example, if the selector is “Users:spc/script/foobar” then PATH_INFO should be “Users:spc/foobar”. And this meta-variable is only set if there's a “sub-resource” defined on the selector.
PATH_TRANSLATED
And the beat goes on.
Whereas PATH_INFO is the selector with the script name removed (for the most part), PATH_TRANSLATED is the underlying filesystem location with the script name removed. So, using the example of “Users:spc/script/foobar”, the resulting PATH_TRANSLATED would be “/home/spc/public_gopher/foobar”. Also, if PATH_INFO is not set, then I don't have to deal with this meta-variable. Both were a bit tough to get right.
QUERY_STRING
Easy enough—gopher does have the concept of search queries, so if a search query is supplied, it's passed in this variable; otherwise, it's set to the empty string.
The one kicker here is that the specification states that QUERY_STRING is URL-encoded, which is not the case in gopher. I decided against URL-encoding the non-URL-encoded search query, which goes against the standard, but there are other parts of the standard that don't fit gopher (which I'll get to in a bit).
REMOTE_ADDR
The address of the remote side. Easy enough to provide. Enough said here.
REMOTE_HOST
The standard states:
The server SHOULD set this variable. If the hostname is not available for performance reasons or otherwise, the server MAY substitute the REMOTE_ADDR value.
I'm setting this to the REMOTE_ADDR value. Done! Next!
REMOTE_IDENT
Nobody these days supports ident, and the specification states one may use this, so I'm not. Next.
REMOTE_USER
Since the meta-variable AUTH_TYPE doesn't apply, this one doesn't apply either, so it's not set.
REQUEST_METHOD
This one was tough, and not because I had to go through contortions to generate the value. No, I had to go through mental contortions to come up with what to set this to. The specification is written for the web, and it's expected to be set to some HTTP method like GET or POST or HEAD. But none of those (or really, any of the HTTP methods) apply here. I suppose one could say the GET method applies, since that's semantically what one is doing, “getting” a resource. But the gopher protocol doesn't use any methods—you just specify the selector and it's served up. So after much deliberation, I decided to set this to the empty string.
I suppose the more technically correct response would be something like “-” (since the specification defines it must be at least one character long), but that's the problem with trying to adapt standards—sometimes they don't quite match.
SCRIPT_NAME
This will typically be the selector echoed back, but the meta-variables PATH_INFO and PATH_TRANSLATED complicate this somewhat. But given that I've calculated those, this one wasn't that much of a problem.
SERVER_NAME
Easy enough to pass through.
SERVER_PORT
Again, easy enough to pass through.
SERVER_PROTOCOL
Unlike the meta-variable REQUEST_METHOD, this one was easy: “GOPHER”.
SERVER_SOFTWARE
Again, easy to set.
The specification also allows protocol-specific meta-variables to be defined, and so I defined a few:
GOPHER_DOCUMENT_ROOT
This is the top-level directory where the script resides, and it can change from request to request. My gopher server can support requests to multiple different directories, so the GOPHER_DOCUMENT_ROOT may change depending upon where the script is served from.
GOPHER_SCRIPT_FILENAME
This differs from the meta-variable SCRIPT_NAME as this is the actual location of the script on the filesystem. SCRIPT_NAME is the “name” of the script as a gopher selector.
GOPHER_SELECTOR
The actual selector requested from the network.
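Pulling the input side together, here's a minimal sketch of exporting these meta-variables before launching a script. The function and its arguments are my own invention for illustration—this is not the actual server code, just the shape of the idea:

```c
#include <stdlib.h>

/* Sketch: export the gopher CGI meta-variables before exec()ing the
   script.  AUTH_TYPE, CONTENT_LENGTH, CONTENT_TYPE, REMOTE_IDENT and
   REMOTE_USER stay unset, per the discussion above.  Argument names
   are hypothetical. */
static void set_meta_variables(const char *selector, const char *query,
                               const char *raddr,    const char *sname,
                               const char *sport)
{
  setenv("GATEWAY_INTERFACE", "CGI/1.1", 1); /* version 1.1 of the spec */
  setenv("QUERY_STRING",      query,     1); /* empty string if no search */
  setenv("REMOTE_ADDR",       raddr,     1);
  setenv("REMOTE_HOST",       raddr,     1); /* substituting REMOTE_ADDR */
  setenv("REQUEST_METHOD",    "",        1); /* gopher has no methods */
  setenv("SERVER_NAME",       sname,     1);
  setenv("SERVER_PORT",       sport,     1);
  setenv("SERVER_PROTOCOL",   "GOPHER",  1);
  setenv("GOPHER_SELECTOR",   selector,  1);
}
```

The PATH_INFO, PATH_TRANSLATED, SCRIPT_NAME and document-root variables would be computed per-request and set the same way.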
And that pretty much covers the input side of things. The output, again, was a bit difficult to handle, semantics-wise. The standard expects the script to serve up a few headers, like “Status”, “Content-Type” and “Content-Length”, but again, gopher doesn't have those concepts. After a bit of thought, I decided that anyone writing a CGI script for a gopher site knows they're writing a CGI script for a gopher site, and such things won't need to be generated. And while in theory one could use a CGI script meant for the web on a gopher server, I don't think that will be a common occurrence (HTML isn't common on most gopher sites). So at the places where I broke with the standard, that's why I did it. It doesn't make sense for gopher, and strict adherence to the standard would just mean work done only to be undone.
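Of all the meta-variables, PATH_INFO was the fiddliest, so here's a sketch of the derivation described above. This is hypothetical code, not the real implementation, and it assumes the script name appears exactly once in the selector:

```c
#include <stdlib.h>
#include <string.h>

/* Given the full selector and the portion naming the script, return
   the PATH_INFO value: the "sub-resource" after the script name, with
   the leading portion of the selector prepended.  Returns NULL (no
   PATH_INFO) when there is no sub-resource.  Hypothetical sketch. */
static char *derive_path_info(const char *selector, const char *script)
{
  const char *p = strstr(selector, script);
  if (p == NULL) return NULL;
  const char *rest = p + strlen(script);
  if (*rest == '\0') return NULL;          /* no sub-resource at all */
  size_t leadlen = (size_t)(p - selector); /* e.g. "Users:spc/" */
  if ((leadlen > 0) && (*rest == '/'))
    rest++;                                /* avoid a doubled slash */
  char *info = malloc(leadlen + strlen(rest) + 1);
  if (info == NULL) return NULL;
  memcpy(info, selector, leadlen);         /* keep the leading portion */
  strcpy(&info[leadlen], rest);
  return info;
}
```

So “/script/data” yields “/data”, while “Users:spc/script/foobar” yields “Users:spc/foobar”, matching the examples above.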
By this point, I was curious as to how other gopher servers dealt with the CGI interface, so I looked at the implementations of three popular gopher servers, Gophernicus, Motsognir and Bucktooth. Like mine, they don't specify output headers, just the content. But unlike mine, they vary wildly with the meta-variables they defined:
- Bucktooth
Defines the fewest:
SERVER_HOST
SERVER_PORT
And the following nonstandard meta-variable:
SELECTOR
- Motsognir
Defines a few more:
GATEWAY_INTERFACE, which is set to “CGI/1.0” and, as far as I can tell, isn't described anywhere
QUERY_STRING
REMOTE_ADDR
REMOTE_HOST
SCRIPT_NAME
SERVER_PORT
SERVER_SOFTWARE
And the following nonstandard meta-variables:
QUERY_STRING_SEARCH
QUERY_STRING_URL, which appears to be the same as QUERY_STRING_SEARCH
- Gophernicus
Which defines the most (even more than I do):
GATEWAY_INTERFACE, which is set to “CGI/1.1”
QUERY_STRING
REMOTE_ADDR
REQUEST_METHOD, which is set to “GET”
SCRIPT_NAME
SERVER_NAME
SERVER_PORT
SERVER_PROTOCOL, which is set to either “HTTP/0.9” or “RFC1436”
SERVER_SOFTWARE
And the nonstandard meta-variables:
COLUMNS
CONTENT_LENGTH, which is set to 0
DOCUMENT_ROOT
GOPHER_CHARSET
GOPHER_FILETYPE
GOPHER_REFERER
HTTPS
HTTP_ACCEPT_CHARSET
HTTP_REFERER
LOCAL_ADDR
PATH
REQUEST
SCRIPT_FILENAME
SEARCHREQUEST
SERVER_ARCH
SERVER_CODENAME
SERVER_DESCRIPTION
SERVER_TLS_PORT
SERVER_VERSION
SESSION_ID
TLS
Gophernicus seems the most interesting. It appears they support running gopher over TLS, even though that doesn't make much sense (in my opinion), and they try to make their CGI implementation appear most like a webserver's.
What this says to me is that not many CGI scripts for gopher even look at the meta-variables all that much. But at least I can say I (mostly) support the CGI standard (somewhat—if you squint).
Tuesday, January 07, 2020
The Heisenberg Notification Center of Windows 10
So it's been a few months since I received the Corporate Overlords' mandated managed Windows 10 laptop, and among the annoying aspects of it are these little notifications that briefly pop up and then disappear. I'll have it on so it can do its thing (if I don't have it “phone home” at least once every three weeks, the Corporate Overlords will assume it's been stolen and have it remotely wiped the next time it's turned on) and I'm doing my thing when this little chime rings out. I then have to stop what I'm doing as this small box slides into the lower right corner of the screen, and by the time I get my gaze to the box, it slides quickly out of view, only leaving me with a glimpse of something or other updating or needing updating or I need to update or who knows what. I can't read it in time before it slides away into the bit bucket.
And as far as I can tell, there's no real way to recall them, nor any way to configure the notification to remain on the screen longer than it takes me to glance at the lower right corner of the screen (as if I could modify anything on this “managed” laptop anyway).
Sometimes, I question whether I'm seeing things or not. I'm finding the whole experience a bit unsettling.
It could be an omen, or it could be that someone just forgot to water the poinsettia
Just outside my office at the Ft. Lauderdale Office of the Corporation is this rather sad looking poinsettia:
I'm just hoping that someone simply forgot to water the poor thing.
Wednesday, January 08, 2020
Chiabacca
I turn my back for one second and BAM! a Chiabacca just appeared out of nowhere on my desk!
This means something …
Monday, January 13, 2020
“Security may indeed be a large sailboat with Krugerrands for ballast”
Going through the last box of Dad's papers, Bunny found an extensive collection of notes and drafts of a book Dad was trying to write back in 1982, titled Running Away To Sea. It seems the election of one Ronald Reagan to the Presidency spooked Dad a bit and he started researching how to survive “the badness” (as he called it) not by living off-grid in a shack in the middle of Montana, but by living off-grid in a boat in the middle of the Pacific.
From the time one slips his mooring lines, he begins to put an insurmountable distance between himself and those who might try to take that which he has set aside for his family's survival. Once the ocean is reached, safety from the problems on land ceases to be an immediate consideration.
Until the pirates show up.
Okay, to be fair, I did find references and draft material covering the problem of pirates, but I found his stance that a 12-gauge shotgun is “more accurate” than a handgun to be questionable at best. “Accuracy” on a rolling, pitching boat in open water is going to be questionable, regardless of choice of firearm.
There is correspondence with yacht manufacturers, blueprints, price breakdowns (nearly $300,000 in 1982 dollars, making it nearly $800,000 in today's dollars—ouch!) and scores of articles on everything related to sailing. It also appears that Dad was trying to invent a new type of sail, as there were drawings he did and correspondence with an engineering firm. I'm not sure what I'll do with it all, but the blueprints are cool.
Tuesday, January 14, 2020
Notes on an Overheard Conversation About a Bank Account
“Hello, this is XXXXXX of [a bank in California]. How may I help you?”
“Yes, I'm calling about the closing out of an account. I received the affidavit form and I've filled it out except for one line—I don't know how much is in the account.”
“I'm sorry, but I can neither confirm nor deny he has an account with us.”
“What?”
“I can neither confirm nor deny he has an account with us.”
“You sent me the affidavit!”
“Yes, but I can neither confirm nor deny his account with us.”
“I have his account number in front of me!”
“I can neither—oh … um … please hold … <mumble mumble mumble>”
“I'm sorry, I didn't hear that.”
“Sorry, I can't speak any louder than this—what is the account number?”
“XXXXXXXXX”
“Okay, the amount left in the account is $45.71”
“Seriously?”
“Seriously.”
“You're only telling me because it's a rounding error to you guys, right?”
“Shhhhhhhhh!”
“Okay! Anyway, thank you.”
“Anything else I can do for you?”
“Yes, your direct phone number in your email? When I called it, I got a health clinic in Encino.”
“Seriously?”
“Seriously.”
Friday, January 17, 2020
Mysterious packets in the night
For about a decade, I've been monitoring syslog traffic in real time. It makes for an interesting background screen. For instance, I've noticed over the years Mac OS-X getting more and more paranoid about running executables, that one UPS needs a new battery, and just how many DNS requests Firefox makes. Just stuff I notice out of the corner of my eye.
So it was rather alarming when I just saw the following pop out:
iTunes tcp_connection_destination_perform_socket_connect 140 connectx to 2600:1403:b:19a::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 140 connectx to 2600:1403:b:1a8::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 140 connectx to 2600:1403:b:1ab::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 140 connectx to 2600:1403:b:1ac::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 140 connectx to 2600:1403:b:185::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 141 connectx to 2600:1403:b:1ac::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 141 connectx to 2600:1403:b:185::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 141 connectx to 2600:1403:b:19a::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 141 connectx to 2600:1403:b:1a8::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 141 connectx to 2600:1403:b:1ab::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 143 connectx to 2600:1403:b:188::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 143 connectx to 2600:1403:b:18c::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 143 connectx to 2600:1403:b:18d::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 143 connectx to 2600:1403:b:1a1::2a1.443@0 failed: [65] No route to host
iTunes tcp_connection_destination_perform_socket_connect 143 connectx to 2600:1403:b:1ad::2a1.443@0 failed: [65] No route to host
Now, I have IPv6 enabled on my Mac to play around with the technology, but my main connection out to the Intarwebs is still plain IPv4. So that explains the error. But the question is why is iTunes trying to connect to some machine on the Intarwebs? I have iTunes running, but doing nothing at the moment.
So then I look into that IPv6 address. First, it's assigned to Europe, which is odd, because I'm not in Europe. Second, it seems it belongs to Akamai Technologies. So the bigger question now is, what is iTunes trying to get from Europe? Is my computer trying to snitch on me? Checking for updates? Is iTunes feeling neglected?
I don't know … and that is bothering me.
Tuesday, January 28, 2020
I feel the earth move under my feet
I'm sitting at my desk at the Ft. Lauderdale Office of the Corporation when I get this weird feeling the building is moving. Now, the Tri-Rail runs alongside the Ft. Lauderdale Office of the Corporation, and the rail itself is shared with freight trains. When a freight train goes by, you can feel it in the building. But this movement doesn't feel the same. First, I can't hear the train (which I can from my desk, since it's on that side of the building). Second, it's a longer, slower, swaying motion rather than the short back-and-forth type movement typical of freight trains blowing past the building.
“Hey, TS1,” I said, “do you feel the building moving?”
“No.”
“Oh.” So I'm left with the thought that 1) I'm going crazy or 2) I'm experiencing some major medical event. The sensation subsides after a minute or so, I'm not on the floor writhing in pain or unconscious, so I stop thinking about it and go back to work.
I'm at home when Bunny says, “Did you hear about the 7.7 earthquake between Cuba and Jamaica today?”
“Really? When did it happen?”
“Um … looks like this afternoon. It was felt as far north as Miami.”
“No, I didn't hear about it, but I think I felt it.”
It's one thing to worry about earthquakes in Brevard, North Carolina but now I have to worry about them here? La-la-la-la-la-la-la-la-la-la!
“Sean, why are you running around with your fingers in your ears?”
Wednesday, January 29, 2020
“This is an invalid protocol because I can't open a file”
TS2 comes to my desk at the Ft. Lauderdale Office of the Corporation. “I'm running a load test of ‘Project: Cleese’ and it's not functioning. If I run a normal test, it runs fine.”
“Hmm … let me take a look.” I head back to TS2's desk. Sure enough, “Project: Cleese” is crashing under load. Well, not a hard crash—it is written in Lua and what's crashing are individual coroutines due to an uncaught error (the Lua equivalent of exceptions) where the only information being reported is “invalid protocol.” I have TS2 send me copies of the data files and script he's using to load test, and I'm able to reproduce the issue. It's an odd problem, because it appears to be crashing on this line of code:
local sock,err = net.socket(addr.family,'tcp')
I dive in and isolate the issue to this bit of C code that's part of the net.socket() function:
if (getprotobyname_r(proto,&result,tmp,sizeof(tmp),&presult) != 0)
  return luaL_error(L,"invalid protocol");
Odd—“tcp” is a valid protocol, so I shouldn't be getting ENOENT, and the buffer used to store data is large enough (because normally it works fine), so I don't think I'm getting ERANGE. And that covers the errors that getprotobyname_r() is documented to return.
I add some logging to see what error I'm actually getting. I'm getting “Too many open files” and it suddenly all makes sense. getprotobyname_r() is using some data file (probably /etc/protocols) to translate “tcp” to the actual protocol value, but it can't open the file because the program is out of available file descriptors. “Project: Cleese” is out of file descriptors because each network connection counts as a file descriptor, and the test systems (Linux in this case) only allow 1,024 descriptors per process. It's easy enough to up that to some higher value (I did 65,536), and sure enough, the “Too many open files” error starts showing up where I expect it to.
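In hindsight, the logging boils down to reporting getprotobyname_r()'s return value instead of the generic message. A sketch of the idea (my own function, not the actual “Project: Cleese” code):

```c
#define _GNU_SOURCE  /* getprotobyname_r() is a GNU/BSD extension */
#include <stdio.h>
#include <string.h>
#include <netdb.h>

/* Look up a protocol by name, but report the *real* error---under fd
   exhaustion this prints "Too many open files" instead of a useless
   "invalid protocol".  Returns the protocol number, or -1 on failure. */
static int proto_number(const char *name)
{
  struct protoent  result;
  struct protoent *presult;
  char             tmp[1024];

  int rc = getprotobyname_r(name, &result, tmp, sizeof(tmp), &presult);
  if (rc != 0)
  {
    /* the call itself failed---rc is the errno-style error code */
    fprintf(stderr, "getprotobyname_r() = %s\n", strerror(rc));
    return -1;
  }
  if (presult == NULL)
    return -1; /* the name simply isn't listed in /etc/protocols */
  return result.p_proto;
}
```

With that in place, the EMFILE case announces itself immediately instead of masquerading as a bad protocol name.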
On the plus side, it's not my code. On the minus side, you have to love those leaky abstractions (and perhaps relying upon documentation a bit too much).
Tuesday, February 04, 2020
It might be worth the trouble if there were another zero in the number
So it turns out my Dad had two bank accounts—the one in California, and another one in Nevada. There were no issues getting the Nevada bank to cut a check for the whopping $100 remaining in the account (I'm rich! I'm rich I tell ya!), unlike the California bank. There was filling out the affidavit and the shenanigans required to get it notarized (I did not realize there were multiple levels of notarization—go figure) and sending it back along with the death certificate.
The California bank then sent the check via certified mail—we missed the initial delivery. Bunny can deal with the bureaucracy better than I, so when she went to the post office to pick it up, they couldn't find it. That was yesterday.
Today the post office called her to say they found the letter, so after work we went to pick it up and find out if all this trouble was worth the $45.71. We're in the car and I open up the letter.
“Wait! They didn't send the check! There's no check here!” I said, looking through a thick packet of papers.
“It's right here,” said Bunny, holding up a check. “How could you have missed it? It was right here in the envelope.”
“Oh cool!”
“But … it's made out to your Dad,” she said. “And they included a page from a loan made to somebody else …”
Passersby were amused to see me honking the car horn with my head.
Wednesday, February 12, 2020
Notes on an overheard conversation upon mail call
“Here you go. I've removed the real estate guide because I know you aren't interested.”
“But … it says ‘Real Estate Guide.’”
“No, it says the real estate guide is inside!”
“… Oh, it does.”
“I swear, I don't know how you dress yourself.”
Thursday, February 13, 2020
Notes on an overheard phone conversation with a computer about a product that was just delivered
“We can send you updates and confirmations about our product and future products to your email address. Would you like to do this?”
“No.”
“Okay then. We can send you a follow-up email about our product you just received.”
“What? I just said ‘no’ to your previous question about email!”
“Great! Expect to see a follow-up email real soon now.”
“No! No! N—”
“Do you have any more questions?”
“—o!”
“Okay then. Thank you for using our product. Good-bye.”
“Wait!”
Thursday, February 20, 2020
A vote for Velma is a vote for anarchy
I came across this letter to the editor of the Transylvania Times that is just too good not to share.
A Vote For Anarchy
My name is Velma Owen. I'm an 84-year-old grandmother of seven. I've lived in Transylvania County my whole life, and I'm thinking about running a write-in campaign for president of the United States. I read letters in this paper all the time of people complaining about this politician or that politician, but I got a plan we could all get behind.
As president, I will disband the entire government. We all hate it anyway. Let's just get rid of it. My grandson thinks we need the government for the post office and roads, but he's voting for the socialist. Besides, the post office is useless since UPS came around, and I haven't cared much about the roads since I quit driving.
And don't get me started on taxes. Taxes are a load of bull. I could give the government my first born child and they'd still want the second and third. (They can take the fourth.) All that money out of my pocket, and for what? So some politician can write a law that gets me in trouble for driving my mower on Highway 64.
Anyway, if you're as sick of it as I am, then write-in Velma Owen in the general election in November. We'll close this government like it's the K-Mart.
Remember: A vote for Velma is a vote for anarchy.
Velma Owen
Lake Toxaway
There's not much else to add to this but, Vote Velma!
Friday, February 21, 2020
A random encounter with a 35 year old file format
I ended up at the art section of the Interactive Fiction Archive and I was curious as to the format of the .blb files. I downloaded one and, much to my surprise, it was an IFF file. I haven't encountered an IFF file in the wild in over twenty years.
So it seems that a .blb is a blorb file, used to save resources for an interactive fiction game.
Going further down the rabbit hole, it seems that compiled Erlang code is also stored as an IFF file, although it's a slightly modified version (Erlang uses 32-bit alignment while the IFF standard only mandates 16-bit alignment, which makes sense given IFF was defined in the mid-80s by Electronic Arts).
It's a bit of a shame that it wasn't used more often as it's not a bad file format, nor is it that complicated—the standard is less than 20 pages long, and a “parser” is about a page of code in any modern language today. But alas, the format never really caught on outside the Amiga community, and it's hard to say why. Jerry Morrison, one of the creators of the format, lists maybe three reasons in a retrospective, but it's hard to say if those were the sole reasons, or if there were more. About the only modern format today that is somewhat based on IFF is the PNG format (it was probably more inspired by the IFF format, but it's not compatible with it).
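To back up the “page of code” claim, here's roughly what such a parser looks like—a sketch that walks the chunks of a FORM held in memory, with error handling kept minimal:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* big-endian 32-bit read---IFF sizes are stored big-endian */
static uint32_t be32(const unsigned char *p)
{
  return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
       | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}

/* Walk the chunks inside a FORM, printing each chunk's 4-byte ID and
   size.  Returns the number of chunks seen, or -1 if the data doesn't
   start with a FORM header. */
static int walk_iff(const unsigned char *data, size_t len)
{
  if ((len < 12) || (memcmp(data, "FORM", 4) != 0))
    return -1;
  size_t end = 8 + be32(&data[4]); /* FORM size covers type + chunks */
  if (end > len) end = len;
  size_t off   = 12;               /* skip "FORM", size, and form type */
  int    count = 0;
  while (off + 8 <= end)
  {
    uint32_t size = be32(&data[off + 4]);
    printf("%.4s %lu\n", (const char *)&data[off], (unsigned long)size);
    off += 8 + size + (size & 1);  /* chunk data is padded to even size */
    count++;
  }
  return count;
}
```

Point it at the bytes of a .blb file and it lists the resource chunks inside. Blorb layers its own meaning on top of the chunk IDs, but the container walk really is this simple.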
Anyway, what a pleasant surprise.
Wednesday, February 26, 2020
This date on the blog, or stealing a feature I found on another blog
Kirk Israel has an interesting feature on his blog, the “thisday” link, which displays all the entries for a given day (and here is February 26TH in case you are reading this sometime in the future). It's a neat concept, and one I could certainly use. There have been times (usually on holidays) where I'd like to see what I've written so as not to repeat myself. So that usually involves opening up a bunch of tabs of, say, July 4TH (and as of writing this, that would be 14 tabs) but no longer! Now I (and you) can see what I've written for every Fourth of July holiday. All that's left is to generate links to the next and previous day, as well as maybe a link in the sidebar to all the entries for a given day.
While the feature was easy to add to the website, I have yet to do so for my gopher mirror. I'm afraid my blog on gopher is still a second-class citizen. I still don't support linking to arbitrary portions of time with the gopher mirror, and I'm not sure if I will ever get around to it.
A journal doesn't really help if you never finish the entries
It's been fun going through some of my own entries with the new “this day” feature. One thing I've noticed is that I used to write way more entries in the past than I do now. Part of that may be the newness of online journaling. Another part might be that there was less risk of repeating myself. Perhaps I hadn't run out of things to say? Who knows?
I did find a stash of unfinished entries from the time I started working at Negiyo when I checked out this day of October 13TH. At first, I was afraid the partial text was due to a bug, but no, the 13TH of October, 2000 really is a few sentence fragments, along with a whole bunch of other non-entries that month—I guess I never finished those. And that entry for October 30TH, 2000? It's not cut off—apparently I came across the source code for ITS. But I can't even remember what I wanted to say, much less where I stuffed said source code. And what was up with the security guard and the Bible? I want to know! I need closure!
Curse my younger self for not finishing these entries!
How I ended up with a month of non-entries
So how did I end up with a month of non-entries? Therein lies a tale …
While the first entry of my blog is dated Debtember 4TH, 1999, the software running the blog, mod_blog, wasn't even started yet—maybe. The previous October and November I spent writing mod_litbook (the software behind The Electric King James Bible) which was the inspiration for how links work around here. I'm not sure if I started the software that Debtember or not, since I spent the rest of the month visiting Dad out in Palm Springs, California. My first post about mod_blog appears to be on March 13TH, so sometime between Debtember of 1999 and March of 2000 is when I started coding mod_blog.
But until mod_blog was ready, I was basically maintaining a bunch of static pages by hand. I then spent over a year and a half writing the software. Most of the time I spent trying to figure out how to generate the appropriate hyperlinks—I was trying to generate anchor links (<A HREF="#2000/10/15.1">) if the entry was on screen, otherwise a hyperlink (<A HREF="/2000/10/15.1">) if the entry wasn't on the screen, while at the same time trying to generate an on-page directory of entries currently being displayed—it was a real mess.
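The on-screen/off-screen decision can be sketched in a few lines of C. This is my reconstruction for illustration only—the function and structure are mine, not mod_blog's actual (and, as noted, much messier) internals:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Write either an anchor reference ("#2000/10/15.1") when the target
   entry is already on the page, or a full link ("/2000/10/15.1") when
   it isn't.  onpage[] holds the IDs of the entries currently being
   displayed.  Hypothetical names, for illustration only. */
int entry_link(char *buf, size_t buflen, const char *id,
               const char *onpage[], size_t n)
{
  bool onscreen = false;

  for (size_t i = 0; i < n; i++)
    if (strcmp(onpage[i], id) == 0)
    {
      onscreen = true;
      break;
    }

  return snprintf(buf, buflen, "<A HREF=\"%s%s\">", onscreen ? "#" : "/", id);
}
```

Given the list of on-page entries, the same entry ID then renders as either an in-page anchor or a site-absolute link.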
By early October of 2000, I had finalized the storage format for each entry. But that was also the month I started working at Negiyo and the first few weeks were pretty tough. I think I just forgot to go back and flesh out those entries. Besides, at that point, the blog wasn't on a public server and only a small select set of friends had the actual link to it.
It wasn't until October 23RD, 2001 that I finally had enough of the development and decided to go public with what I had. I didn't have the anchor links like I wanted (but that turned out to be a bad idea in the long term anyway), nor the directory of entries (and I still don't have an automatic list of past entries—the archive section I add to every month). Besides, I really wanted to make that synopsis of Atlas Shrugged public (yes, that's what finally prompted me to get mod_blog shipped), so I copied everything on the private server (including the month of non-entries) to the public server and the rest has been online ever since.
And that's how I ended up with a month of non-entries, and curiosity as to what “a wired Jamison” is all about.
Get off the lawn, my younger self!
Thursday, March 05, 2020
Notes on an overheard conversation outside a used book and compact disc store
“Are all the CDs like this?”
“Yup. All ten boxes.”
“Really?”
“Really.”
“I've seen a lot of CD collections in my life … ”
“Yes?”
“And I've seen some really bad collections … ”
“And?”
“This collection has to be the single worst collection of CDs I've ever come across. It's like these were given out for free. I'm not sure who would buy this crap.”
“That bad, huh?”
“Music for Aromatherapy?”
“Erm … um … ”
“I might have to charge you just on principle.”
“You wouldn't happen to have a dumpster, would you?”
“Oh Hell no!”
“Sigh.”
Friday, March 06, 2020
The 2020 Surprise Birthday Party Road Trip
[Note: This is being written after the fact so as not to spoil the surprise of a surprise birthday party. —Editor]
COVID-19 is close to being a pandemic, and Florida has declared a state of emergency. What better way to celebrate than with a road trip! My friend Joe just turned XX so Bunny, my friend Kurt and I are headed north to Marianna, Florida for a surprise birthday party. The last time we visited Joe was … wow … eleven years ago to attend Joe's father's funeral. At least this time it's under better circumstances.
But first, Bunny and I had to drive from Chez Boca down to Davie to pick up Kurt. Unfortunately, Kurt could not get the day off so we had to pick him up at 5:00 pm. On a Friday. Joy. What should have been a 40 minute drive ended up being over an hour as traffic was insane. I-95 was a parking lot. A segment of the Florida Turnpike was closed. That left the Sawgrass Expressway, which while clear of heavy traffic, left us six miles west of Kurt's house.
Sigh.
And in the short time we spent picking him up, the northbound Sawgrass Expressway clogged up with traffic, leading to even more delays.
And delays.
It was about 2½ hours after Bunny and I started this trip that we finally got out of South Florida. Then we could start the seven hour drive to Marianna.
Let's see … Bunny and I left the house at 4:15 pm. We arrived at the hotel in Marianna around 2:00 am, but Marianna is in Central, Chez Boca is in Eastern, so we arrived at the hotel at 3:00 am our time, making for an eleven hour drive. For what should have been a 7½ hour trip.
Bleech.
Seriously, these were the cleanest gas station bathrooms I've ever seen
The gas gauge was reading “E” by the time we hit I-10 westbound on our roadtrip. I pulled off at the first available exit that had a gas station and that's how we found ourselves at The Busy Bee. How shall I describe it? Hmm … how about The Four Seasons of gas stations?
Seriously, it's impressive as a gas station. Did I say “gas station?” It's also a recharging station. And a food court. And a candy store. And a confection store. It pretty much has everything, including the cleanest public bathrooms I've ever seen. They were even cleaner than our hotel bathroom!
So if you are ever in North Florida and come across a Busy Bee, do yourself a favor and stop. You won't be sorry.
Orange you glad it wasn't banana slices?
It's 3:00 am. Or rather, 2:00 am (this time zone change is really throwing me off here). We've unloaded the car, and as I'm plugging in my iPad to recharge, I find orange segments on the floor. Dusty orange segments.
Lovely.
Personally, I think I would have preferred staying at The Busy Bee than our hotel room. It's not that the hotel was that bad (dusty orange segments aside), but that the Busy Bee was that good.
Saturday, March 07, 2020
Surprise!
There's not much to report on Joe's surprise birthday party. Joe was expecting only family at this gathering and he was genuinely surprised at our being there. We then spent the rest of the day (and quite a bit of the night) just hanging out, talking and reminiscing, and eating a ton of good food.
Sunday, March 08, 2020
Traffic jams at night
If I thought the drive up to North Florida was bad, the drive back was even worse! There was heavy traffic southbound on the Florida Turnpike. And bumper-to-bumper traffic on I-95 south at 11:00 pm. On a Sunday! (“What are you all doing on the road at this time of night? Why aren't you at home sleeping?”)
I think it took us … twelve? … hours to get home for what should normally be a 7½ hour trip (not only did we change time zones, but there was also that whole “spring forward” thing to contend with).
We also learned one should not attempt to eat at The Cracker Barrel on a Sunday afternoon in North Florida (there was a waiting line just to park!).
Thursday, March 12, 2020
Notes on an overheard conversation at the Ft. Lauderdale Office of The Corporation
BANG!
BANG!
BANG!
BANG!
“Is there someone trapped in your desk desperately trying to escape?”
“No. It's a construction crew ripping up the tiles on the patio below our office.”
“Oh, so it is.”
Monday, March 16, 2020
Is their network infected with COVID-19?
XXXXXXXXXXXX: You've used about 50% data limit and may be restricted for the rest of the bill cycle.
Gee, thanks Oligarchist Cell Phone Company. I'm only using up my current data plan because the Monopolistic Phone Company can't seem to route around a broken router on their network, rendering my DSL useless for large chunks of time (it'll route around the trouble spot for maybe an hour or so, then BAM! right back to the broken router for a few hours).
I admit, I'm lucky. I can easily work from home (and yes, we at the Ft. Lauderdale Office of the Corporation just got the COVID-19 memo to work from home) but now I have to bounce back and forth between the DSL yoyo and the possibly throttled smart phone personal hotspot.
Grrrrrrrr …
Tuesday, March 17, 2020
Priorities in the time of COVID-19
When I awoke, I noticed that the DSL was back up. I immediately logged into my web server and did a traceroute back to Chez Boca. Over the past few days when I've run traceroute, I noticed that when we were down the traffic was always hitting the same IP address, and the short time we were up, traffic was being routed around that “bad” IP address. But this time I saw the packets go through the “bad” IP address, and I realized that the Monopolistic Phone Company had finally gotten its act together and fixed the broken link. I then informed Bunny that there was no need to upgrade our data plan with the Oligarchist Cell Phone Company.
Now I can watch cat videos get back to work without trouble.
I just finished reading a polarizing book
I just finished reading the book A Confederacy of Dunces. I started reading it Christmas day as it was a gift from Bunny. She got me the book because she read it was uproariously funny. The fact that it won the Pulitzer Prize for Fiction (1981) and is considered a canonical work of “Southern Literature” is just icing on the cake.
I did not find it uproariously funny.
I did not find it somewhat funny.
I did not find it funny.
I did not find it amusing.
I did not find it slightly amusing.
I downright hated the book.
The only reason I stuck it out and finished the book was because it was a gift from Bunny. None of the characters were likable; all of them were downright loathsome, horrible people. The main character had no arc to his story. In fact, there was only one character to have any form of arc at all.
Back when I was halfway through the book (around mid-January), I wrote to my best friend Sean Hoade about this book. I asked him if he had read the novel, and not only did he, but “[t]hat is the funniest novel I've ever read, and one of only three or four that I've read more than once.” As the main character was a graduate student and the novel takes place in New Orleans, I then asked him if one needed to be a graduate student who lived in the Deep South to appreciate the book, because I sure didn't (I'm neither—and no, South Florida does not belong to the Deep South; South Florida is more like a New York City borough than the Deep South—at least we have decent delis down here).
He wrote back:
I've heard that same reaction from some people, and not just non-academic folk. To me, it's like Hitchhiker's Guide—some people find it tear-inducingly funny and others “just don't get it.” It's true that nerds are more likely to find HGTTG funny, but some don't. Some of them even think it's stupid and obvious.
I find both of them a scream. Maybe I'm just an easy laugh. (Well, I am, but still.)
There are no likeable characters, but all are (IMHO) fascinating, especially Ignatius, of course. But everything with Levy Pants slays me.
Honestly, although the saying is usually used to indicate one person's taste is shit, but there really is no accounting for taste. We have pretty similar senses of humor, but we obviously don't agree about this being funny.
It's kind of like the movie Napoleon Dynamite: Back when Netflix mailed out physical DVDs and getting one you didn't like meant a 2-to-4-day wait for another, user ratings were very important as predictors of what subscribers would get as suggestions. You rate Movie A some number of stars, so Netflix recommends Movie B because it has a high rating among other subscribers who gave Movie A the rating you did.
But Napoleon Dynamite destroyed the system, because 100% of people who rented it gave it either one star or five stars. No one gave it two or three or four stars. People either completely loved it or they utterly hated it.
And get this: Netflix algorithms could find no correlation between the rating viewers gave Napoleon Dynamite and any other films. Couple this with the fact that around 2005, ND was one of the top Netflix rentals of the year, and you have a recipe for recommendation disaster.
I think Dunces is like that. I loved both Dunces and Dynamite, but, as mentioned above, I'm an easy laugh.
As Jones would say, “Whoa!”
So this appears to be one of those things where you either love it or loathe it, and I'm in the second camp. So buyer beware. You might love this book. You might hate this book. I don't think you'll be “meh” about this book.
Wednesday, March 18, 2020
Can one make COVID-19 jokes? I hope so, because this is a COVID-19 joke
Bunny received the following text from one of her friends:
I posted a warning at the hardware store to not pop the bubble wrap because the air in the bubble wrap is from China. We had to sedate one guy because he is addicted to bubble wrap popping.
We both found it funny, even if it might not be true. There is some concern about COVID-19 survivability on surfaces though, so be careful.
I am so getting struck by lightning.
Thursday, March 19, 2020
Still another issue with DoH, yet this time it isn't my fault
So I'm reading this comment on Hacker News and none of the links are working. Odd, because I have had no problems since Debtember with my current implementation of DoH. The broken links in question all have the hostname ending with a period. While unusual, the trailing dot on a hostname makes it a “fully qualified domain name.” I won't go into the full details of a “fully qualified domain name” (that's beyond the scope of this post) but suffice to say, it should be supported.
Okay, fine. I start looking at my script and … well … there's no reason for it to fail. I mean, I did find two bugs (one typo and one logic bug in handling an error) but they were unrelated to not resolving a fully qualified domain name. Down the rabbit hole I go.
What do I find once I hit bottom? Not Alice, but I do think I found a bug in Firefox. And I think it's a similar cause as before—a different codepath.
When I force Firefox to use DNS, both boston.conman.org and boston.conman.org. (note the trailing dot) produce the following DNS request:
00000000: 00 00 01 00 00 01 00 00 00 00 00 01 06 62 6F 73 .............bos
00000010: 74 6F 6E 06 63 6F 6E 6D 61 6E 03 6F 72 67 00 00 ton.conman.org..
00000020: 01 00 01 00 00 29 10 00 00 00 00 00 00 08 00 08 .....)..........
00000030: 00 04 00 01 00 00                               ......
When I switch back to DoH however, boston.conman.org. (note the fully qualified domain name) generates this:
00000000: 00 00 01 00 00 01 00 00 00 00 00 01 06 62 6F 73 .............bos
00000010: 74 6F 6E 06 63 6F 6E 6D 61 6E 03 6F 72 67 00 00 ton.conman.org..
00000020: 00 01 00 01 00 00 29 10 00 00 00 00 00 00 08 00 ......).........
00000030: 08 00 04 00 01 00 00                            .......
There's an extra NUL byte after the domain name, and I suspect what's happening is that the extra “.” at the end is being encoded instead of being ignored.
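For the curious: DNS encodes a name on the wire as a series of length-prefixed labels ending in a single zero byte (RFC 1035 §3.1), so a trailing dot marks the root and should be dropped, not encoded as an empty label. A minimal sketch of a correct encoder (the function name and structure are mine, not Firefox's actual code):

```c
#include <stddef.h>
#include <string.h>

/* Encode a dotted name into DNS wire format: each label preceded by
   its length, terminated by a single zero byte.  A trailing dot must
   NOT become an empty label---that would emit the root twice, which is
   exactly the extra NUL byte in the broken query above.  Returns the
   encoded length, or 0 on error.  Hypothetical sketch, not real code. */
size_t dns_encode_name(const char *name, unsigned char *out, size_t outlen)
{
  size_t o = 0;

  while (*name != '\0')
  {
    const char *dot = strchr(name, '.');
    size_t      len = dot ? (size_t)(dot - name) : strlen(name);

    if (len == 0)                   /* trailing dot: stop, don't emit an empty label */
      break;
    if (len > 63 || o + len + 2 > outlen)
      return 0;                     /* label too long, or buffer too small */

    out[o++] = (unsigned char)len;
    memcpy(&out[o], name, len);
    o    += len;
    name += len;
    if (*name == '.') name++;
  }

  out[o++] = 0;                     /* single terminating root label */
  return o;
}
```

With this, “boston.conman.org” and “boston.conman.org.” encode to the same 19 bytes, matching the correct hexdump above.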
I've created a bug report so we'll see how this goes.
Update on Friday, March 27TH, 2020
The bug has been fixed.
Monday, March 23, 2020
Doing my thing to maintain social distancing
About fifteen years ago [Has it really been that long? –Sean] [Yes, it has been that long. –Editor], I was playing in a D&D game that had transitioned from being all in-person to partially on-line. I was not a fan of the on-line component but I stuck it out for perhaps a year before leaving the game entirely. For me, the reasons include:
- I was going to hang out with friends, not hang out with friends all staring at a computer screen.
- The remote players were second class citizens at the game—the DM had to continuously remind us to “type what we were saying” and not just talk among ourselves at the table.
- We tried multiple technologies at the time, and the best we could do was a glorified chat room.
I hated it so much that I have since refused to even consider playing in an on-line D&D game. My stance has caused one casualty—one friend accused me of “poisoning the minds” of our friends against running an on-line D&D game, but as I tried pointing out, I was the one who refused to participate in such a game; our other friends were more than welcome to run an on-line game, but that argument went nowhere, and I think that friend still holds a grudge (well, for that and another slight that's beyond the scope of this post).
Unfortunately, due to circumstances apparently beyond the world's control, and the fact that I'm currently running a D&D game, I have been forced to reconsider my stance.
Yes, I ran our twice-monthly (actually every other week) game yesterday, entirely on-line! (sorry, XXXXXXX)
We settled upon using Roll20, a web-based on-line gaming system. The free version is barely good enough for our use. On the plus side, there's no software to install, but on the minus side, it does seem to be quite heavy in bandwidth. It took about an hour to get all six people (myself and five friends) all online and talking (video chat!). I had to inform Bunny not to stream video while we were playing, and even checking email was a slow and painful process. And during the game, one or two players would suddenly disappear; usually reloading the page would fix the problem.
But we managed to get through the session and well … I hope this doesn't go on for much longer is all I have to say.
One funny story—during the hour we spent trying to get Roll20 working, I tried several different laptops here at Chez Boca. One of the laptops was the managed Windows 10 laptop from The Corporation's Corporate Overlords. The website was blocked by the laptop because of a password breach from 2018! Lovely.
This is the type of town that gives small town politics a bad name
A meeting for a South Florida city government exploded into a shouting match over the city's handling of the coronavirus pandemic, leading the mayor to storm out of the room as one commissioner accused her and the city manager of failing to close the city's beaches and shutting off peoples' utilities in the midst of the outbreak.
Why does it not surprise me that this took place in Lake Worthless? And why does it not surprise me that it involves Lake Worthless Utilities?
Stay classy, Lake Worthless. Stay classy.
Tuesday, March 24, 2020
You still have the flexible schedule but the games of nude laser tag with lingerie models is not an option
I'm on the second week of working from home, and I'm totally not watching cat videos. Nope.
Not at all.
Not one bit.
And while working from home can be awesome, it's not all less time in the car and fewer spam calls.
Wednesday, March 25, 2020
The Return of the Alien White Squirrels
Almost three years ago the Total Transylvania Eclipse Banner arrived at our house. I managed to buy one of the banners that adorned downtown Brevard during the 2017 total eclipse. Bunny had apparently saved the shipping tube it came in, because just today she found a few postcards and stickers that were shipped along with the banner (she's reusing the tube to ship some stuff out to her brother in Seattle).
What a cool find!
Thursday, March 26, 2020
There just aren't enough clue-by-fours
In this paper I present an analysis of 1,976 unsolicited answers received from the targets of a malicious email campaign, who were mostly unaware that they were not contacting the real sender of the malicious messages. I received the messages because the spammers, whom I had described previously on my blog, decided to take revenge by putting my email address in the ‘reply-to’ field of a malicious email campaign. Many of the victims were unaware that the message they had received was fake and contained malware. Some even asked me to resend the malware as it had been blocked by their anti-virus product. I have read those 1,976 messages, analysed and classified victims’ answers, and present them here.
…
5. The fifth group is actually the most worrying. I call this group ‘MY ANTI-VIRUS WORKED, PLEASE SEND AGAIN’, as these are recipients who mention that their security product (mostly anti-virus) warned them against an infected file, but they wanted the file to be resent because they could not open it. The group consisted of 44 individuals (2.35%).
Via inks, Virus Bulletin :: VB2019 paper: 2,000 reactions to a malware attack — accidental study
Over a year ago, the Corporate Overlords of The Ft. Lauderdale Office of The Corporation started sending us phishing emails in order to “train us” to recognize scams. Pretty much all it did for me was to treat all emails from our Corporate Overlords asking for information as a phishing attempt (it's also made easier as each phishing email has a specific header designating it as such to ensure they get through their own spam firewall—I am not making this up). And I was upset over the practice as I felt our Corporate Overlords did not trust their employees and felt they had to treat us as children (the managed laptops don't help either).
But reading this report is eye-opening. Over 2% requested the malware be sent again! Over 11% complained that the “attachment” did not work (they were infected) and another 14% asked where the “attachment” was—what?
I … this … um … what?
I should not be surprised. I mean, someone has to fall for the scams else the scammers wouldn't waste their time. The scary bit is that this validates what our Corporate Overlords are doing.
Sigh.
But Bunny will find the following response group amusing:
10. One of the biggest surprises were 31 members of group number 10 (1.66%) who spent time pointing out all the spelling errors and typos made in the original message. I call this group “I'M A GRAMMAR NAZI”.
Via inks, Virus Bulletin :: VB2019 paper: 2,000 reactions to a malware attack — accidental study
Heh.
Friday, March 27, 2020
Looks like that DoH issue I had last week has been fixed
Oh cool! The Firefox bug I reported last week has been fixed. One week, I don't think I can complain, and it's nice to know that I apparently gave enough information for them to reproduce the bug and fix it. It looks like it'll be out in release 76 (the current version of Firefox is 74).
Update on Saturday, May 9TH, 2020
Thursday, April 02, 2020
To block the bad guys, it helps to correctly specify all the addresses
Back when I had some server issues I took the time to have the hosting company modify the main firewall to allow all ssh traffic to my server instead of from a fixed set of IP addresses. There had been some times in the recent past (like when the DSL connection goes down and I can't log into the server) where that would have been a Good Thing™. The change went through, and as long as I have an ssh key (no passwords allowed) I can log in from anywhere.
Now, I run my own syslog daemon and one of its features is the ability to scan logs in real time and do things based on what it sees, like blocking IP addresses on failed ssh attempts. I do this on my home system and have currently blocked over 2,300 IP addresses (over the past 30 days—after said time the blocks are removed to keep the firewall from “filling up,” so to speak).
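The scanning part boils down to recognizing sshd's failure lines and pulling out the offending address to hand to the firewall. A hypothetical sketch of that step (the function name and structure are mine, not the syslog daemon's actual code; the log format is standard sshd output):

```c
#include <stdio.h>
#include <string.h>

/* Extract the source address from an sshd "Failed password" log line,
   e.g. "sshd[123]: Failed password for invalid user admin from
   203.0.113.7 port 41234 ssh2".  Returns 1 and fills ip on a match,
   0 otherwise.  Illustrative sketch only. */
int extract_failed_ssh_ip(const char *logline, char *ip, size_t iplen)
{
  if (strstr(logline, "Failed password") == NULL)
    return 0;

  const char *from = strstr(logline, " from ");
  if (from == NULL)
    return 0;
  from += 6;                          /* skip past " from " */

  size_t len = strcspn(from, " ");    /* address ends at the next space */
  if (len == 0 || len >= iplen)
    return 0;

  memcpy(ip, from, len);
  ip[len] = '\0';
  return 1;
}
```

The extracted address would then be fed to a firewall rule (and remembered, so it can be removed 30 days later).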
I enabled this feature on my server about a week ago and … it didn't work. I could see entries being added to the firewall, but the attempts from some “blocked” IP addresses kept happening. It took me some time, but I spotted the problem—I was blocking 0.0.0.0 instead of 0.0.0.0/0. The former says “match the exact IP address of 0.0.0.0” (which is not a valid IP address on the Internet) while the latter says “match all IP addresses.”
Sigh.
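To make the distinction concrete, here is a tiny sketch of CIDR matching (my own illustration, not the actual firewall code). A /32 prefix matches one exact address; a /0 prefix masks away every bit and so matches everything. Note the special case: shifting a 32-bit value by 32 is undefined behavior in C, so /0 must be handled explicitly.

```c
#include <stdint.h>
#include <stdbool.h>

/* Does addr fall inside net/prefix?  IPv4 addresses in host byte
   order.  Hypothetical helper for illustration only. */
bool ip_in_cidr(uint32_t addr, uint32_t net, unsigned prefix)
{
  /* prefix == 0 means "match all"; also avoids the undefined
     32-bit shift by 32 */
  uint32_t mask = (prefix == 0) ? 0 : 0xFFFFFFFFu << (32 - prefix);
  return (addr & mask) == (net & mask);
}
```

With net = 0.0.0.0, prefix 0 matches any address at all, while prefix 32 matches only the (unroutable) address 0.0.0.0 itself—which is exactly why the original rule blocked nothing.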
Once spotted, it was an easy fix. Then I noticed that the failed log message differed a bit between my home system and the server, so I had to fix the parser a bit to account for the differences. Hopefully, that should be it.
Saturday, April 04, 2020
I don't quite understand this attack
Blocking ssh login attempts is working, but I have noticed another odd thing—the large number of TCP connections in the SYN_RECV state. This is indicative of a SYN flood, but what's weird is that it's not from any one source, but scores of sources. And it's not enough to actually bring down my server.
I spent a few hours playing “whack-a-mole” with the attacks, blocking large address spaces from connecting to my server, only to have the attack die down for about five minutes then kick back up from a score of different blocks. The only thing in common is that all the blocks seem to be from Europe.
And this is what I don't understand about this attack. It's not large enough to bring down my server (although I have SYN cookies enabled and that might be keeping this at bay) and it's from all over European IP space. I don't get who's getting attacked here. It could easily be spoofed packets being sent, but what's the goal here?
It's all very weird.
I'd put this off, but I'm trying to procrastinate my procrastination
I tend towards procrastination. I hate it, yet I do it. Or don't do it … depending on how you want to look at things. I don't think I can verbalize why I do it, but this video on doing that one thing pretty much sums it up, I think. Only my one thing isn't the one thing in the video.
Anyway, to stop this habit, I might have to try The 10 Minute Rule, where you give a task 10 minutes a day. Over time, it'll get done.
Perhaps I'll start tomorrow.
Sunday, April 05, 2020
Is this attack a case of “why not?”
My friend Mark wrote back about the SYN attack to mention that he's also seeing the same attack on his servers. It's not enough to bring anything down, but it's enough to be an annoyance. He's also concerned that it might be a bit of a “dry run” for something larger.
A bit later he sent along a link to the paper “TCP SYN Cookie Vulnerability” which describes a possible motive for the attack:

TCP SYN Cookies were implemented to mitigate against DoS attacks. It ensured that the server did not have to store any information for half-open connections. A SYN cookie contains all information required by the server to know the request is valid. However, the usage of these cookies introduces a vulnerability that allows an attacker to guess the initial sequence number and use that to spoof a connection or plant false logs.
The “spoofing of a connection” is amusing, as I don't have any private files worth downloading and spoofing a connection to an email server just nets me what? More spam? I already deal with spam as it is. And the same for the logs—I just don't have anything that requires legally auditable logs. I guess it's similar for most spam—it pretty much costs the same if you attempt 10 servers or 10,000,000 servers, so why not? And like Mark says, I hope this isn't a precursor of something larger.
And chasing down the references in the paper is quite the rabbit hole.
Tuesday, April 07, 2020
Some musings about some spooky actions from Google and Facebook
Periodically, I will go through my Gmail account just to see what has accumulated since the last time I checked. Usually it's just responding to emails saying stuff like “no, this is not the Sean Conner that lives in Indiana” or “I regret to inform you that I will not be attending your wedding” or even “I've changed my mind about buying the Jaguar and no, I'm not sorry about wasting your time.” But today I received an email from Google titled “Your March Search performance for boston.conman.org” and I'm like What?
I check, and yes indeed, it's a search performance report for my blog from Google. I don't use Google Analytics so I was left wondering just how Google associated my gmail account to my website. I know Google does a lot of tracking even sans Google Analytics, but they must have really stepped up their game in recent times to get around that lack.
But no, it appears that some time ago I must have set up my Google Search Console and I forgot about it. Fortunately, that moves the whole issue from “pants staining scary” to just “very spooky.” Poking around the site, I was amused to find that the three most popular pages of my blog are:
- a rant about math textbooks;
- a review of a long obsolete BASIC IDE for a long obsolete gaming console;
- and some musing about spam
Even more amusing is the search query that leads to the top result—“⅚ cup equals how many cups”. What? … I … the answer is right there! I can't even fathom the thought process that even thought of that question.
Wow.
And speaking of “spooky web-based spying,” I just realized that Facebook is adding a fbclid parameter to outgoing links. I noticed this the other day, and yes, it even shows up in my logs. I would have written about that, but it seems Facebook started doing this over a year and a half ago, so I'm very late to the game. But it still leaves one question unanswered—would such an action drag otherwise innocent web sites into GDPR non-compliance? It does appear to be a unique identifier and Facebook is spamming it all across webservers. Or does Facebook somehow know a European website from a non-European website and avoid sending the fbclid to European websites?
I'm just wondering …
Wednesday, April 22, 2020
The trouble of finding a small memory leak
The last time I mentioned GLV-1.12556 it was in reference to a bug that prevented large files from being transferred. I neglected to mention that I fixed the bug back in November where I was improperly checking a return code. Code fixed, issue no more.
But a problem I am seeing now is the ever growing memory usage of the server.
I've written other servers that don't exhibit this issue so it's not Lua per se.
I use valgrind to check and it does appear to be LibreSSL, but the output from valgrind isn't entirely helpful, as you can see from this snippet:
==27306== 96 bytes in 8 blocks are indirectly lost in loss record 8 of 21
==27306==    at 0x4004405: malloc (vg_replace_malloc.c:149)
==27306==    by 0x429E7FD: ???
==27306==    by 0x429E918: ???
==27306==    by 0x429F00A: ???
==27306==    by 0x435BF54: ???
==27306==    by 0x422D548: ???
==27306==    by 0x4236B14: ???
==27306==    by 0x420FD9C: ???
==27306==    by 0x421021B: ???
==27306==    by 0x420D3D0: ???
==27306==    by 0xD0808A: pthread_once (in /lib/tls/libpthread-2.3.4.so)
==27306==    by 0x420671D: ???
Some functions are decoded by their address, but not all. It doesn't help that LibreSSL is loaded dynamically so the addresses change from run to run. I want a stacktrace of each call to malloc() (and related functions) but I'd rather not have to modify the code just to get this information.
Fortunately, I run Linux, and on Linux, I can take advantage of LD_PRELOAD and insert my own hacked versions of malloc() (and co.) to record the backtraces without having to relink everything. The simplest thing that could work is just to print a message with the backtrace, and so that's what I did.
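Here is roughly what such a shim looks like—a sketch of the general technique assuming glibc, not the actual code I used. Compiled with `gcc -shared -fPIC shim.c -o shim.so -ldl` and run as `LD_PRELOAD=./shim.so ./target`, the interposed malloc() is found before libc's, while dlsym(RTLD_NEXT, …) still reaches the real one. Logging uses write() and backtrace_symbols_fd() instead of printf()/backtrace_symbols() so the logger itself never calls malloc() and recurses:

```c
#ifndef _GNU_SOURCE
#define _GNU_SOURCE        /* for RTLD_NEXT */
#endif
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <execinfo.h>
#include <dlfcn.h>

#ifndef RTLD_NEXT
#define RTLD_NEXT ((void *)-1L)   /* glibc's value, if dlfcn.h didn't expose it */
#endif

static __thread int in_hook;      /* guard against re-entry from backtrace() */

/* Hypothetical sketch: log each allocation with its callstack. */
void *malloc(size_t size)
{
  static void *(*real_malloc)(size_t);

  if (real_malloc == NULL)
    real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

  void *p = real_malloc(size);

  if (!in_hook)
  {
    in_hook = 1;

    char line[64];
    int  len = snprintf(line, sizeof line, "+ %p %zu\n", p, size);
    write(STDERR_FILENO, line, (size_t)len);

    void *frames[32];
    backtrace_symbols_fd(frames, backtrace(frames, 32), STDERR_FILENO);

    in_hook = 0;
  }
  return p;
}
```

A full version would wrap calloc(), realloc() and free() the same way (with the “!” and “-” markers described below for the real tool's output).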
Given this simple program:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  void *p = realloc(NULL,50);
  void *q = calloc(1,100);
  void *r = malloc(150);
  void *s = realloc(p,200);
  free(q);
  free(r);
  exit(0);
}
I can now get the following output:
! (nil) 0x96dd008 50
	./y [0x8048464]
	/lib/tls/libc.so.6(__libc_start_main+0xd3) [0xba4e93]
	./y [0x80483b5]
+ 0x96dd3c8 100
	./y [0x8048476]
	/lib/tls/libc.so.6(__libc_start_main+0xd3) [0xba4e93]
	./y [0x80483b5]
+ 0x96dd430 150
	./y [0x8048489]
	/lib/tls/libc.so.6(__libc_start_main+0xd3) [0xba4e93]
	./y [0x80483b5]
! 0x96dd008 0x96dd4d0 200
	./y [0x804849f]
	/lib/tls/libc.so.6(__libc_start_main+0xd3) [0xba4e93]
	./y [0x80483b5]
- 0x96dd3c8
- 0x96dd430
Allocations from malloc() and calloc() are signified by a “+” sign (followed by the address, size and callstack); allocations from realloc() are signified by a “!” sign (followed by the previous and new address, new size and callstack); calls to free() are signified by a “-” sign (which just contains the address—I don't care about the callstack for this call).
Some post processing of this output can flag allocations that don't have a corresponding free call:
0x96dd4d0 200
	./y [0x804849f]
	/lib/tls/libc.so.6(__libc_start_main+0xd3) [0xba4e93]
	./y [0x80483b5]

Total memory	200
Total records	1
It's not perfect, but it gives a bit more information than valgrind does, as we can see from what I think is the same call as the above valgrind example showed:
0x98861f0 12
	/home/spc/JAIL/lib/libcrypto.so.45(lh_insert+0xea) [0x380156]
	/home/spc/JAIL/lib/libcrypto.so.45(OBJ_NAME_add+0x70) [0x3854c0]
	/home/spc/JAIL/lib/libcrypto.so.45(EVP_add_cipher+0x2d) [0x371889]
	/home/spc/JAIL/lib/libcrypto.so.45 [0x366d3f]
	/lib/tls/libpthread.so.0(__pthread_once+0x8b) [0xd0808b]
	/home/spc/JAIL/lib/libcrypto.so.45 [0x2ff307]
	/lib/tls/libpthread.so.0(__pthread_once+0x8b) [0xd0808b]
	/home/spc/JAIL/lib/libssl.so.47(OPENSSL_init_ssl+0x4b) [0x148ebb]
	/home/spc/JAIL/lib/libtls.so.19 [0xfa63ba]
	/lib/tls/libpthread.so.0(__pthread_once+0x8b) [0xd0808b]
	/usr/local/lib/lua/5.3/org/conman/tls.so(luaopen_org_conman_tls+0x18) [0x21871e]
	lua [0x804ef6a]
	lua [0x804f264]
	lua [0x804f2be]
	lua(lua_callk+0x37) [0x804d0eb]
	lua [0x8068deb]
	lua [0x804ef6a]
	lua [0x8058ab5]
	lua [0x804f27d]
	lua [0x804f2be]
	lua(lua_callk+0x37) [0x804d0eb]
	lua [0x8068deb]
	lua [0x804ef6a]
	lua [0x8058ab5]
	lua [0x804f27d]
	lua [0x804f2be]
	lua [0x804d146]
	lua [0x804e8ac]
	lua [0x804f6ec]
	lua(lua_pcallk+0x60) [0x804d1a8]
	lua [0x804b0e4]
	lua [0x804baba]
	lua [0x804ef6a]
	lua [0x804f264]
	lua [0x804f2be]
	lua [0x804d146]
	lua [0x804e8ac]
	lua [0x804f6ec]
	lua(lua_pcallk+0x60) [0x804d1a8]
	lua(main+0x55) [0x804bb91]
	/lib/tls/libc.so.6(__libc_start_main+0xd3) [0xba4e93]
	lua [0x804ae99]
I can see that this particular piece of leaked memory was allocated by tls_init()
(by tracking down what the call at address luaopen_org_conman_tls+0x18
corresponds to).
But this leads to another issue with tracking down these leaks—I don't care about allocations during initialization of the program.
Yes,
it's technically a memory leak,
but it happens once during program initialization.
It's the memory loss that happens as the program runs that is a larger concern to me.
So yes,
there's some 40K or so lost at program startup.
What's worse is that it's 40K over some 2,188 allocations!
I did see a further leak when I made several requests back to back—about 120 bytes over 8 more allocations,
and it's those that have me worried—a slow leak.
And given that the addresses of the heap and dynamically loaded functions change from run to run,
it makes it very difficult to filter out those 2,188 allocations from initialization to find the 8 other allocations that are leaking.
It would be easier to track down if I could LD_PRELOAD
the modified malloc()
et al. into the process
after initialization,
but alas,
that is way more work
(let's see—I need to write a program to stop the running process,
inject the modified malloc()
et al. into mapped but otherwise unused executable memory,
then patch the malloc()
et al. vectors to point to the new code,
and resume the program; then possibly reverse said changes when you no longer want to record the calls—doable but a lot of work)
just to track down a bug in code that isn't even mine.
Sigh.
Update on Thursday, April 23rd, 2020
Thursday, April 23, 2020
Of course talking about a bug means it's easier to find and fix the bug. Odd how that happens
Of course, after I point the finger at LibreSSL for the memory leak, I find the leak … in my own code.
Sigh.
Not knowing what else to do,
I thought I would go through my TLS Lua module to make sure I didn't miss anything.
That's when I noticed that I was keeping a reference to a connection so that I can deal with the callbacks from libtls
.
I was expecting the __gc()
method to clean things up,
but with a (non-weak) reference,
that was never going to happen.
Yes, just because you are using a garbage collected language doesn't mean you can't still have memory leaks.
I verified that, yes indeed, the references were being kept around after the request was finished. It was then straightforward to fix the issue.
That's not to say that libtls
still isn't leaking memory—it is,
but (it seems) only when you initialize it
(which means it's not as bad).
But I'll know in a day or two if I fixed the leak.
I hope that was it.
Gopher selectors are OPAQUE people! OPAQUE!
Despite my warning that gopher selectors are opaque identifiers, and even setting up a type of redirect for gopher, there are still gopher clients making requests with a leading “/”. This is most annoying with the automatic clients collecting phlog feeds. I expected that after six months people would notice, but nooooooooooooooo!
Sigh.
So I decided to make the selector /phlog.gopher
valid,
but to serve up a modified feed with a note about the opaque nature of gopher selectors.
Yes,
it's passive aggressive,
but there's not much I can do about people not getting the memo.
Maybe this will work …
Thursday, April 30, 2020
That was the leak—now it crashes!
A small recap of the memory leak from last week. Yes, that was the leak, and the server in question has been steady in memory usage since. In addition to that, I fixed an issue with a custom module in my gopher server where I neglected to close a file. It wasn't a leak per se, as the files would be closed eventually, but why keep them open due to some sloppy programming?
But now a new problem has cropped up—the formerly leaking program is now crashing. Hard! It's getting an invalid pointer, so off to track that issue down …
Sigh.
Friday, May 01, 2020
It seems that C's bit-fields are more of a pessimization than an optimization
A few days ago, maybe a few weeks ago, I don't know, the days are all just merging together into one long undifferentiated timey wimey blob, but I digress—I had the odd thought that maybe, perhaps, I could make my Motorola 6809 emulator faster by using a bit-field for the condition codes instead of the individual booleans I'm using now. The thought was to get rid of the somewhat expensive routines to convert the flags to a byte value and back. I haven't used bit-fields all that much in 30 years of C programming as they tend to be implementation dependent:
- Whether a “plain” int bit-field is treated as a signed int bit-field or as an unsigned int bit-field (6.7.2, 6.7.2.1).
- Allowable bit-field types other than _Bool, signed int, and unsigned int (6.7.2.1).
- Whether a bit-field can straddle a storage-unit boundary (6.7.2.1).
- The order of allocation of bit-fields within a unit (6.7.2.1).
- The alignment of non-bit-field members of structures (6.7.2.1). This should present no problem unless binary data written by one implementation is read by another.
- The integer type compatible with each enumerated type (6.7.2.2).
C99 standard, annex J.3.9
But I could at least see how gcc
deals with them and see if there is indeed a performance increase.
I converted the definition of the condition codes from:
struct
{
  bool e;
  bool f;
  bool h;
  bool i;
  bool n;
  bool z;
  bool v;
  bool c;
} cc;
to
union
{
  /*---------------------------------------------------
  ; I determined this ordering of the bits empirically.
  ;----------------------------------------------------*/
  struct
  {
    bool c : 1;
    bool v : 1;
    bool z : 1;
    bool n : 1;
    bool i : 1;
    bool h : 1;
    bool f : 1;
    bool e : 1;
  } f;
  mc6809byte__t b;
}
(Yes,
by using a union I'm inviting “unspecified behavior”—from the C99 standard: “[t]he value of a union member other than the last one stored into (6.2.6.1)”—but at least gcc
does the sane thing in this case.)
The code thus modified,
I ran some tests to see the speed up and the results were rather disappointing—it was slower using bit-fields than with 8 separate boolean values.
My guess is that the code used to set and check bits, especially in an expression like (cpu->cc.f.n == cpu->cc.f.v) && !cpu->cc.f.z
was larger
(and thus slower)
than just using plain bool
for each field.
So the upshot—by changing the code to use an implementation-defined detail and invoking unspecified behavior, thus making the resulting program less portable, I was able to slow the program down enough to see it wasn't worth the effort.
Perfect.
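For reference, the “somewhat expensive routines” being replaced look roughly like this—a sketch based on the 6809's condition-code register layout (E in bit 7 down to C in bit 0), not the emulator's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint8_t mc6809byte__t;

struct cc
{
  bool e , f , h , i , n , z , v , c;
};

/* pack the eight flags into one byte---E is bit 7 down to C as bit 0,
   matching the 6809's condition-code register */
static mc6809byte__t cc_tobyte(struct cc const *cc)
{
  return (mc6809byte__t)(
           (cc->e << 7) | (cc->f << 6) | (cc->h << 5) | (cc->i << 4)
         | (cc->n << 3) | (cc->z << 2) | (cc->v << 1) | (cc->c << 0)
         );
}

/* and unpack a byte back into the separate flags */
static void cc_frombyte(struct cc *cc,mc6809byte__t b)
{
  cc->e = (b & 0x80) != 0;
  cc->f = (b & 0x40) != 0;
  cc->h = (b & 0x20) != 0;
  cc->i = (b & 0x10) != 0;
  cc->n = (b & 0x08) != 0;
  cc->z = (b & 0x04) != 0;
  cc->v = (b & 0x02) != 0;
  cc->c = (b & 0x01) != 0;
}
```

These only run when an instruction actually needs the whole byte (PSHS, TFR, interrupts), which is presumably why paying a shift-and-mask cost on every flag access via bit-fields turned out to be the worse trade.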
Monday, May 04, 2020
On the plus side, it happens the same way each time; on the down side, it doesn't happen often
I'm still trying to find the cause of the crash.
I made a small change
(a switch from using lua_touserdata()
,
which can return NULL
,
to using luaL_checkudata()
,
which if it returns,
will never return NULL
; else it throws an error)
which didn't help resolve the issue—the program still dumped core.
Both last week and today,
it crashed in the same place in Ltls_write()
:
static int Ltls_write(lua_State *L)
{
  struct tls **tls  = luaL_checkudata(L,1,TYPE_TLS);
  size_t       len;
  char const  *data = luaL_checklstring(L,2,&len);

  lua_pushinteger(L,tls_write(*tls,data,len));
  return 1;
}
The variable tls
was itself not NULL
,
but the value it pointed to was! There's only one place in the code where it's set to NULL
—in the __gc
metamethod
(the bits with the Lua registry is to deal with callbacks from libtls
which knows nothing about Lua):
static int Ltls___gc(lua_State *L)
{
  struct tls **tls = luaL_checkudata(L,1,TYPE_TLS);

  if (*tls != NULL)
  {
    lua_pushlightuserdata(L,*tls);
    lua_pushnil(L);
    lua_settable(L,LUA_REGISTRYINDEX);
    tls_free(*tls);
    *tls = NULL;
  }
  return 0;
}
This is only called when the object no longer has any references to it. I'm not aware of any place where this value gets overwritten (but I'm not discounting it outright quite yet). The crashes started after I fixed the leak, which turned out to be this function (this is the pre-fix version):
static int Ltls_close(lua_State *L)
{
  lua_pushinteger(L,tls_close(*(struct tls **)lua_touserdata(L,1)));
  return 1;
}
The leak happened because I left the reference to the object in the Lua registry, so the garbage collector would never reclaim the resource. The fix itself was easy:
static int Ltls_close(lua_State *L)
{
  int rc = tls_close(*(struct tls **)luaL_checkudata(L,1,TYPE_TLS));

  if ((rc != TLS_WANT_POLLIN) && (rc != TLS_WANT_POLLOUT))
    Ltls___gc(L);

  lua_pushinteger(L,rc);
  return 1;
}
Yes, it's slightly complicated by the fact that closing a TLS connection might require a few more packets of data between the two endpoints to close out the protocol, but that's taken into account by the framework I'm using. At least, I hope it is.
So my theory right now is that the __gc
metamethod is being called before the connection is fully closed,
but to confirm that,
I somehow need to capture that scenario.
It's consistent in that it seems to be the same client making the same requests causing the same crash.
On the down side,
I have no idea who runs the client,
or where it is
(other than an IP address).
And if I make the same requests with my client,
it doesn't crash,
so I can't reproduce it.
And there's no telling when it will happen again.
I know it's just a small detail I'm missing on my side that's causing the issue,
but I just can't seem to locate the detail.
To that end,
I modified Ltls___gc()
and Ltls_write()
to attach some traceback information to see if my theory that Ltls___gc()
is being called too early.
Lua makes this pretty easy to do.
I modified Ltls___gc()
:
static int Ltls___gc(lua_State *L)
{
  struct tls **tls = luaL_checkudata(L,1,TYPE_TLS);

  if (*tls != NULL)
  {
    luaL_traceback(L,L,"__gc",0);
    lua_setuservalue(L,1);
    lua_pushlightuserdata(L,*tls);
    lua_pushnil(L);
    lua_settable(L,LUA_REGISTRYINDEX);
    tls_free(*tls);
    *tls = NULL;
  }
  return 0;
}
I get a traceback message and assign it to the object about to be cleaned.
In Lua,
a userdata goes through the garbage collection twice—once to clean the resources that Lua isn't aware of
(that's what this function does)
and the second time
(in the next garbage collection cycle)
to reclaim the memory Lua uses to store the userdata.
Then in Ltls_write()
I have:
static int Ltls_write(lua_State *L)
{
  struct tls **tls  = luaL_checkudata(L,1,TYPE_TLS);
  size_t       len;
  char const  *data = luaL_checklstring(L,2,&len);

  if (*tls == NULL)
  {
    lua_getuservalue(L,1);
    if (!lua_isnil(L,-1))
      syslog(LOG_NOTICE,"%s",lua_tostring(L,-1));
    luaL_traceback(L,L,"write",0);
    syslog(LOG_NOTICE,"%s",lua_tostring(L,-1));
    lua_pop(L,2);
    lua_pushinteger(L,-1);
  }
  else
    lua_pushinteger(L,tls_write(*tls,data,len));
  return 1;
}
This will (hopefully,
knock on formica)
give me two tracebacks—one when the garbage collection is called
(or Ltls_close()
is called) and then a write is attempted,
and that will point me to the detail I'm missing.
The other theory is that I'm overwriting memory somewhere and that's a harder issue to track down.
Wednesday, May 06, 2020
The root cause of a crash
It happened yet again and this time, I was able to get the stack traces! Woot! But as is the case with most bugs, the fix was laughably trivial once found. The trouble is always in finding it, and all one can hope for is to learn something for next time.
So the normal pattern of requests came in and the program crashed (only this time with the stack traces I was hoping to capture):
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/" bytes=2505 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0001" bytes=283 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0002" bytes=249 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0003" bytes=129 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0004" bytes=205 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0005" bytes=112 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0006" bytes=248 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0007" bytes=89 subject="" issuer=""
remote=XXXXXXXXXXXX status=20 request="gemini://gemini.conman.org/test/torture/0008" bytes=360 subject="" issuer=""
remote=XXXXXXXXXXXX status=59 request="" bytes=16 subject="" issuer=""
__gc stack traceback:
	[C]: in method 'close'
	/usr/local/share/lua/5.3/org/conman/nfl/tls.lua:126: in method 'close'
	GLV-1.12556.lua:288: in function <GLV-1.12556.lua:232>
write stack traceback:
	[C]: in method 'write'
	/usr/local/share/lua/5.3/org/conman/nfl/tls.lua:92: in function </usr/local/share/lua/5.3/org/conman/nfl/tls.lua:91>
	(...tail calls...)
	GLV-1.12556.lua:206: in upvalue 'reply'
	GLV-1.12556.lua:399: in function <GLV-1.12556.lua:232>
This series of requests exists to test Gemini client programs, and since I patched the memory leak, the program has always crashed after test #8 (first clue). This test sees if the client can properly handle relative links:
Title: Link-full path, parent directory, no text
Status: 20
Content-Type: text/gemini

At this point, if you are writing a client that manipulates URLs directly, you will need to look at RFC-3896, section 5.2.4 to handle these relative paths correctly. Things are about to get really messy. If you want to skip ahead to the MIME portion, select the second link instead of the first.

=> /test/../test/torture/0009
=> 0011
Heck, I used two different clients (one I wrote, and the Gemini Gateway), but I was never able to reproduce the error (second clue), so I never did bother to look deeper at this. It was only when I had the stack traces that I was able to track the issue down to this bit of code (there's a bit more here to show the context, but the actual bug is in the last block of code below):
if not loc.host then
  log(ios,59,"",reply(ios,"59\t",MSG[59],"\r\n"))
  ios:close()
  return
end

if loc.scheme ~= 'gemini'
or loc.host   ~= CONF.network.host
or loc.port   ~= CONF.network.port then
  log(ios,59,"",reply(ios,"59\t",MSG[59],"\r\n"))
  ios:close()
  return
end

-- ---------------------------------------------------------------
-- Relative path resolution is the domain of the client, not the
-- server.  So reject any requests with relative path elements.
-- ---------------------------------------------------------------

if loc.path:match "/%.%./" or loc.path:match "/%./" then
  log(ios,59,"",reply(ios,"59\t",MSG[59],"\r\n"))
  ios:close()
end
These are error paths, and for each one, I close the connection and return from the handler, except for the case of a relative path!
Head, meet desk.
The clues I had were pointing right at the problem,
but I just couldn't see it.
The reason I couldn't reproduce the issue is that the two clients I used were implemented properly,
so of course the check for a relative path wouldn't trigger.
And the reason why I never found this before was because the memory leak prevented the __gc()
method from running,
which meant the TLS context pointer would never be overwritten with NULL
,
thus,
no crash.
Fixing that bug revealed this bug—one that has existed since I originally wrote the code.
A test by hand
(by running ./openssl s_client -connect gemini.conman.org:19651
and typing in a request with a relative path)
quickly revealed I had found the root cause of the crash.
Now,
it might occur to some to ask why I didn't write the Ltls_close()
function like this:
static int Ltls_close(lua_State *L)
{
  struct tls **tls = luaL_checkudata(L,1,TYPE_TLS);

  if (*tls == NULL)
  {
    lua_pushinteger(L,TLS_ERROR);
    return 1;
  }

  int rc = tls_close(*tls);

  if ((rc != TLS_WANT_POLLIN) && (rc != TLS_WANT_POLLOUT))
    Ltls___gc(L);

  lua_pushinteger(L,rc);
  return 1;
}
That is—to have the function check for a NULL
pointer?
Or all the routines in the TLS module to check for a NULL
pointer?
That's a conscious decision on my part because defensive code hides real bugs
(and in this case,
it most definitely would have hidden the bug).
I came to this conclusion having read Writing Solid Code years ago
(and it was one of two books that radically changed how I approach programming—the other one being
Thinking Forth; it's well worth reading even if you don't program in Forth).
The book presents a solid case why defensive code can hide bugs much better than I can.
And even if it took a week to track down the bug,
it was worth it in the end.
Saturday, May 09, 2020
It's out!
About 50 months ago [It's only been 51 days. —Editor] [What? Really? Only 51 days? It feels longer. —Sean] [Yes, only 51 days. I know the COVID-19 thing has warped your sense of time but get a grip on yourself. —Editor] [Sigh. —Sean] 51 days ago, I reported a bug with Firefox, which took about a week to fix but wasn't slated to be released until version 76 of Firefox. Well, I just downloaded version 76 and yes, the bug is truly fixed.
Huzzah!
Now if only the Oligarchist Cell Phone Company we work with at The Corporation could work as fast.
Thursday, May 14, 2020
The shaving of yaks
A day ago,
a month ago
(time no longer has real meaning anymore),
I was asked to look into moving our code away from SVN and into
git
.
Ever since I've been pretty much busy with the shaving of yaks—lots and lots of yaks.
And it's more than just converting a repository of code from SVN to git
—it's also breaking up what is basically
a monorepo into lots of separate repos for reasons
(operations hates having to checkout the 800 pound gorilla for the 4 oz. banana).
And without losing history if at all possible.
Lots of yaks to shave in this project.
So I've been learning about submodules in git
and while I like git
,
I'm not a fan of the submodule.
First,
when you clone a repository with submodules
(git clone https://git.example.com/foo.git
)
you don't get the submodules.
That's another two steps to get them
(git submodule init; git submodule update
).
I solved that issue with a new Makefile
target:
getmod:
	git submodule init
	git submodule update
And on the build system, I made sure that make getmod
was done prior to make
.
That issue solved.
Another issue—we use Lua in our department, but we ended up using two different versions. The stuff I built is still using 5.1.5, while a new project a fellow cow-orker wrote uses 5.3.2. This is a chance to consolidate some dependencies, and to that end, I have a new single Lua repo with every version of Lua from 5.1.4 to 5.3.5 (with all the patches applied). But as I found out (and many yaks were shaved to bring me this information), you can't just check out a particular tag or branch for a submodule.
Grrrrrr.
So I solved that issue in the Makefile
as well:
VLUA := $(shell cd deps/lua ; git describe --tag)

ifneq ($(VLUA),"5.1.5.p2")
DUMMY := $(shell cd deps/lua ; git checkout 5.1.5.p2)
endif
This checks to see which version of Lua is checked out,
and if it's not the one I want,
check out the version I do want.
I also had to make sure the .gitmodule
file had ignore = all
set:
[submodule "deps/lua"]
	path = deps/lua
	url = https://git.example.com/lua.git
	ignore = all
to prevent git status
from freaking people out.
I figure this will keep me busy for the next few months at least. Or maybe days. Time just flows strangely these days.
Saturday, May 16, 2020
A busy day
It's a busy day today.
First off,
the power shut off long enough for the computers to shut down, but short enough that maybe the UPSs would not have drained completely.
Sigh.
And 10 minutes after the power is restored,
I've already blocked half a dozen IP addresses trying to log in via ssh
.
I also received three bug reports for my Motorola 6809 emulator. This is the first time in eight years I've received a bug report for that particular project. It's nice to know that someone found the project useful.
I've also become aware of a potential email issue with my server that probably started when I last had issues with email. Briefly, when my server (a virtual server, which I think is part of the cause) makes an outgoing connection, the other end sees the IP address of the physical host my server is on, not the IP address assigned to my server. This has some serious implications for email and I need to get it resolved. Sigh.
I'm also planning tomorrow's D&D session. We're also going to try using Zoom for hosting the session, as Roll20 is neither terribly stable nor fast to use. Roll20 also seems to suck up all the bandwidth here at Chez Boca, which I find annoying.
And I finally checked—it's only been 61 days since the quarantine has started. Just two months. Man, it feels like it's been forever.
Update on Wednesday, June 10th, 2020
I solved the issue with email. Finally!
Friday, June 05, 2020
Timing LPEG expressions
One pattern that is seemingly missing from LPEG is a case-insensitive match—a pattern that will match “foo”, “FOO”, “Foo” or “fOo”. I have to think this was intentional, as case-insensitive matching on non-English words is … a complex topic (for a long time, the upper case form of the German letter “ß” was “SS”, but not every “SS” was an upper case “ß”). So it doesn't surprise me there's no pattern for it in LPEG. But I wish there were, as a lot of Internet text-based protocols require case-insensitive matching.
There are two ways around the issue. One way is this:
local lpeg = require "lpeg"
local P    = lpeg.P

local patt = (P"S" + P"s")
           * (P"A" + P"a")
           * (P"M" + P"m")
           * P"-"
           * (P"I" + P"i")
           * P"-"
           * (P"A" + P"a")
           * (P"M" + P"m")
But this would seem to produce a lot of branching code that would be slow (LPEG has its own parsing-specific VM). Of course, there's this solution:
local lpeg = require "lpeg"
local P    = lpeg.P
local S    = lpeg.S

local impS = S"Ss" * S"Aa" * S"Mm" * P"-" * S"Ii" * P"-" * S"Aa" * S"Mm"
But each lpeg.S()
uses 32 bytes to store the set of characters it matches,
and that seems like a large waste of memory for just two characters.
A third way occurred to me:
local lpeg = require "lpeg"
local Cmt  = lpeg.Cmt
local S    = lpeg.S

local impC = Cmt(
        S"SsAaMmIi-"^1,
        function(subject,position,capture)
          if capture:lower() == 'sam-i-am' then
            return position
          end
        end
)
This just looks for all the characters in “Sam-I-am” and then calls a function to do an actual case-insensitive comparison,
but at the cost of doing it at the actual match time,
instead of potentially doing it lazily
(as the manual puts it, “this one is evaluated immediately when a match occurs (even if it is part of a larger pattern that fails later)”).
And it might be a bit faster than the one that just uses lpeg.P()
,
and with less memory than the one using lpeg.S()
.
So before going to work on a custom case-insensitive pattern for LPEG
(where lpeg.P()
is pretty much the case-sensitive pattern),
I thought I might want to profile the existing approaches first just to get a feeling for how long each approach takes.
local lpeg  = require "lpeg"
local rdtsc = require "rdtsc" -- this is another post

local Cmt = lpeg.Cmt
local Cf  = lpeg.Cf
local P   = lpeg.P
local R   = lpeg.R
local S   = lpeg.S

local test = "Sam-I-Am"
local base = P"Sam-I-Am"

local impP = (P"S" + P"s")
           * (P"A" + P"a")
           * (P"M" + P"m")
           * P"-"
           * (P"I" + P"i")
           * P"-"
           * (P"A" + P"a")
           * (P"M" + P"m")

local impS = S"Ss" * S"Aa" * S"Mm" * P"-" * S"Ii" * P"-" * S"Aa" * S"Mm"

local impC = Cmt(
        S"SsAaMmIi-"^1,
        function(subject,position,capture)
          if capture:lower() == "sam-i-am" then
            return position
          end
        end
)

local function testf(patt)
  local res = {}
  for i = 1 , 10000 do
    local zen = rdtsc()
    patt:match(test)
    local tao = rdtsc()
    table.insert(res,tao - zen)
  end
  table.sort(res)
  return res[1]
end

print('base',testf(base))
print('impP',testf(impP))
print('impS',testf(impS))
print('impC',testf(impC))
I'm testing the normal case-sensitive pattern,
and the three case-insensitive patterns.
I run each test 10,000 times and return the lowest value (“lowest” means “fastest”).
The rdtsc()
function … that's another post
(but a pre-summary—it represents the number of clock cycles the CPU has been running and on the test system there are 2,660,000,000 cycles per second).
Anyway, on to the results:
base	2800
impP	3020
impS	3020
impC	5190
I'm honestly surprised.
First,
I thought the last method would do better than it did, but it's nearly twice as slow.
The other two are pretty much the same,
time wise
(which leads me to wonder if the pattern lpeg.P(single_letter) + lpeg.P(single_letter)
is internally converted to lpeg.S(letters)
—it could very well be).
And they aren't that much slower than the case-sensitive pattern.
Well,
not enough for me to worry about it.
Even a much longer string,
like “Access-Control-Allow-Credentials” gave similar results.
And no, I did not write out by hand the expression to match “Access-Control-Allow-Credentials” case-insensitively, but wrote an LPEG expression to generate the LPEG expression to do the match:
local lpeg = require "lpeg"

local Cf = lpeg.Cf -- a folding capture
local P  = lpeg.P
local R  = lpeg.R

local char = R("AZ","az") / function(c) return P(c:lower()) + P(c:upper()) end
           + P(1)         / function(c) return P(c)                        end

Hp = Cf(char^1,function(a,b) return a * b end)
It's a powerful technique, but one that can take a while to wrap your brain around. It's just one of the reasons why I like using LPEG.
A Lua module in assembly, why not?
I've been known to dabble in assembly language from time to time,
and there's been this one instruction on the Intel Pentium that I've wanted to play around with—RDTSC
.
It's used to read the internal time-stamp counter which is incremented every clock cycle.
On my system,
this counter is incremented 2,660,000,000 times per second
(the computer is running at 2.66GHz)
and this makes for a nice way to time code,
as the instruction is available in userspace
(at least on Linux).
I wanted to use this to time some Lua code, which means I need to wrap this instruction into a function that can be called. I could have used some inline assembly in C to do this, but
- the code is non-portable anyway;
- I wanted to avoid as much C overhead as possible;
- and I thought it would be amusing to write an entire Lua module in assembly.
It wasn't hard:
;***************************************************************************
;
; Copyright 2020 by Sean Conner.
;
; This library is free software; you can redistribute it and/or modify it
; under the terms of the GNU Lesser General Public License as published by
; the Free Software Foundation; either version 3 of the License, or (at your
; option) any later version.
;
; This library is distributed in the hope that it will be useful, but
; WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
; or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public
; License for more details.
;
; You should have received a copy of the GNU Lesser General Public License
; along with this library; if not, see <http://www.gnu.org/licenses/>.
;
; Comments, questions and criticisms can be sent to: sean@conman.org
;
;***************************************************************************

	bits	32
	global	luaopen_rdtsc
	extern	lua_pushinteger
	extern	lua_pushcclosure

;***************************************************************************

	section	.text

ldl_rdtsc:
	rdtsc
	push	edx
	push	eax
	push	dword [esp + 12]
	call	lua_pushinteger		; lua_pushinteger(L,rdtsc);
	xor	eax,eax			; return 1
	inc	eax
	lea	esp,[esp + 12]
	ret

;---------------------------------------------------------------------------

luaopen_rdtsc:
	xor	eax,eax
	push	eax
	push	ldl_rdtsc
	push	dword [esp + 12]
	call	lua_pushcclosure	; lua_pushcclosure(L,ldl_rdtsc,0);
	xor	eax,eax			; return 1
	inc	eax
	lea	esp,[esp + 12]
	ret
I'll leave it up to the reader to convert this to 64-bit code.
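For comparison—and this is my sketch of the C route he decided against, not his code—gcc's x86 intrinsic gets at the same counter from C without writing inline assembly (non-portable: it only compiles for x86 targets):

```c
#include <stdint.h>
#include <x86intrin.h>

/* read the time-stamp counter; it increments once per clock cycle */
static uint64_t rdtsc_now(void)
{
  return (uint64_t)__rdtsc();
}

/* convert a cycle delta to nanoseconds, given the clock rate in GHz---
   his machine ran at 2.66GHz, so 2.66 cycles per nanosecond */
static double cycles_to_ns(uint64_t cycles,double ghz)
{
  return (double)cycles / ghz;
}
```

The assembly version still has less call overhead, which matters when the thing being timed is only a few thousand cycles long.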
Monday, June 08, 2020
I can code that FizzBuzz function with only two tests
My favorite implementation of FizzBuzz is the Ruby version based upon nothing more than lambda calculus and Church numbers, i.e. nothing but functions all the way down.
Then I saw an interesting version of it in the presentation “Clean Coders Hate What Happens to Your Code When You Use These Enterprise Programming Tricks.” The normal implementation usually involves three tests: one for divisibility by 3, one for divisibility by 5, and an annoyingly extra one for divisibility by both, i.e. by 15. But the version given in the presentation only had two tests. Here's a version of that in Lua:
function fizzbuzz(n)
  local test = function(d,s,x)
    return n % d == 0
       and function(_) return s .. x('') end
       or  x
  end

  local fizz = function(x) return test(3,'Fizz',x) end
  local buzz = function(x) return test(5,'Buzz',x) end

  return fizz(buzz(function(x) return x end))(tostring(n))
end

for i = 1 , 100 do
  print(i,fizzbuzz(i))
end
The function test()
will return a new function that returns a string,
or whatever was passed in x
.
fizz()
and buzz()
are functions that return what test()
returns.
The last line can be broken down into the following statements:
_0 = function(x) return x end
_1 = buzz(_0)
_2 = fizz(_1)
_3 = _2(tostring(n))
return _3
And we can see how this works by following a few runs through the code. Let's first start with a number that is not divisible by 3 or 5, say, 1.
_0 = function(x) return x end
_1 = buzz(_0) = function(x) return x end
_2 = fizz(_1) = function(x) return x end
_3 = _2("1") == "1"
return _3
_0
is the identity function—just return whatever it is given.
buzz()
is called,
which calls test()
,
but since the number is not divisible by 5,
test()
returns the passed in identity function,
which in turn is returned by buzz()
.
fizz()
is then called,
and again,
since the number is not divisible by 3,
test()
returns the identity function,
which is returned by fizz()
.
_3
is the result of the identity function called with “1”,
which results in the string “1”.
Next, the number 3 is passed in.
_0 = function(x) return x end
_1 = buzz(_0) = function(x) return x end
_2 = fizz(_1) = function(_) return 'Fizz' .. _1('') end
_3 = _2("3") == 'Fizz'
return _3
_0
is the identity function.
buzz()
just returns it since 3 isn't divisible by 5.
fizz()
,
however,
returns a new function,
one that returns the string “Fizz” concatenated with the empty string as returned by the identity function.
This function is then called,
which results in the string “Fizz”.
The number 5 works similarly:
_0 = function(x) return x end
_1 = buzz(_0) = function(_) return 'Buzz' .. _0('') end
_2 = fizz(_1) = function(_) return 'Buzz' .. _0('') end
_3 = _2("5") == 'Buzz'
return _3
Here,
buzz()
returns a new function which returns the string “Buzz” concatenated with the empty string returned by the identity function.
fizz()
returns the function generated by buzz()
since 5 isn't divisible by 3,
and thus we end up with the string “Buzz”.
It's the case with 15 where all this mucking about with the identify function is clarified.
_0 = function(x) return x end
_1 = buzz(_0) = function(_) return 'Buzz' .. _0('') end
_2 = fizz(_1) = function(_) return 'Fizz' .. _1('') end
_3 = _2("15") == 'FizzBuzz'
return _3
Again, _0 is the identity function. buzz() returns a new function because 15 is divisible by 5, and this function returns the string “Buzz” concatenated with the empty string returned by the identity function. fizz() also returns a new function, because 15 is also divisible by 3. This function returns the string “Fizz” concatenated with the result of the passed-in function, which in this case is not the identity function, but the function returned from buzz(). This final function returns “Fizz” concatenated with the string of the function that returns “Buzz” concatenated with the empty string returned by the identity function, resulting in the string “FizzBuzz”.
Yes, it's quite a bit of work just to avoid an extra test, but fear not! Here's a way to do FizzBuzz with just two tests that is vastly simpler:
function fizzbuzz(n)
  local list = { 'Fizz' , 'Buzz' , 'FizzBuzz' }
  list[0] = tostring(n)
  local m = (n % 3 == 0 and 1 or 0) + (n % 5 == 0 and 2 or 0)
  return list[m]
end

for i = 1 , 100 do
  print(i,fizzbuzz(i))
end
Here we generate an index into an array of four possible responses: the original number as a string, “Fizz”, “Buzz” and “FizzBuzz”. If the number isn't divisible by either 3 or 5, the index is 0; if it's divisible by 3, the index is 1; if divisible by 5, the index is 2; and if both, the index ends up as 3. It's not quite as mind-blowing as the first version, but it is easier to translate to a language that lacks closures, like C.
Wednesday, June 10, 2020
The email situation has been solved
I finally solved the email issue on my server—the physical host was NATing connections from my virtual server. There's no need for that. Once the bypass for the NAT was added, outgoing packets were finally coming from my server's IP address.
Monday, June 15, 2020
Dear Apple
You must really not want my money. I know, I know, you aren't discriminating against my race, or sex, or religion, or the fact that I'm voting for Velma Owen for President. But I find it puzzling that this is the third time you rejected my attempt to drunkenly spend money to fix a problem. The first time I was ready to buy the K2 of Mac computers, only to learn I had to buy one online—I wasted my time in coming to the store. The second time I was going to buy an iPad (long stupid story there, not worth going into) only to be shoved towards Best Buy, where it was cheaper.
And then today. I walked into your Apple Store looking to buy a new monitor. Earlier, when I went to turn on the monitor on my Mac, it briefly lit up, decided it had had enough of life, and immediately shut off, never to turn back on again. Granted, the monitor was from 2005, and I bought it used around ten years ago, so I'm not terribly upset over it dying an inglorious dimming death.
But alas, Apple no longer supports that monitor, so fixing it was out of the question. The interface on the Mac mini it's attached to is, itself, obsolete, and Apple no longer carries the appropriate adaptors. And the other interface on the Mac mini, the HDMI port, is not supported by any new Apple monitor. So I was told my best bet was to head off to … Best Buy.
Why did I even bother coming to your store, Apple? Why do you even bother to have a store in the first place?
Saturday, June 27, 2020
Yet another repair job, this time involving solder!
“Are you ever going to finish repairing the clock?”
“Oh! I was waiting to see if you had any solder.”
“I thought I already told you I didn't have any, and that you were going to check your tool box.”
“Ah. Okay, let me do that.”
And so I found myself repairing an old-style alarm clock with the bells on top. It's not actually that old, having been manufactured in China some time in the last decade. The battery had died and leaked, causing some corrosion around the battery terminals. The corrosion wasn't coming off that easily, so last week (month? Day? Year? I HAVE NO CONCEPT OF TIME ANYMORE!) I soaked the back of the clock, which houses the battery compartment, in vinegar overnight to clean up the corrosion. That worked beautifully, but some wires had come loose and needed to be resoldered.
The hard part of doing the repair? Finding a pair of wire strippers that wouldn't just cut the wire. The second hardest part? The actual soldering job. It had been way too long since I last used a soldering iron [Nothing worse than a programmer with a soldering iron. –Editor.] [Shut up, you! –Sean] but I managed to only burn one finger and resolder everything only twice. Not bad [If you say so yourself. –Editor] if I say so—[Shut up! –Sean].
Anyway, the clock has been repaired, it works, and we saved ourselves having to buy a new one.
Saturday, July 04, 2020
Adventures in Formatting
If you are reading this via Gopher and it looks a bit different, that's because I spent the past few hours (months?) working on a new method to render HTML into plain text. When I first set this up I used Lynx because it was easy and I didn't feel like writing the code to do so at the time. But I've never been fully satisfied with the results [Yeah, I was never a fan of that either. –Editor]. So I finally took the time to tackle the issue (and it's one of the reasons I was timing LPEG expressions the other day [Nope. –Editor] … um … the other week [Still nope. –Editor] … um … a few years ago? [Last month. –Editor] [Last month? –Sean] [Last month. –Editor] [XXXX this timeless time of COVID-19 –Sean] last month).
The first attempt sank in the swamp.
I wrote some code to parse the next bit of HTML (it would return either a string, or a Lua table containing the tag information). And that was fine for recent posts, where I bother to close all the tags (taking into account only the tags that can appear in the body of the document, <P>, <DT>, <DD>, <LI>, <THEAD>, <TFOOT>, <TBODY>, <TR>, <TH>, and <TD> do not require a closing tag), but earlier posts, say, 1999 through 2002, don't follow that convention. So I was faced with two choices—fix the code to recognize when an optional closing tag was missing, or fix over a thousand posts. It says something about the code that I started fixing the posts first …
I then decided to change my approach and rewrite the HTML parser from scratch. Starting from the DTD for HTML 4.01 strict, I used the re module to write the parser, but I hit some form of internal limit, I'm guessing, because that one burned down, fell over, and then sank into the swamp. I decided to go back to straight LPEG, again following the DTD to write the parser, and this time, it stayed up. It ended up being a bit under 500 lines of LPEG code, but it does a wonderful job of being correct (for the most part—there are three posts I've made that aren't HTML 4.01 strict, so I made some allowances for those). It not only handles optional ending tags, but the one optional opening tag I have to deal with—<TBODY> (yup—both the opening and closing tags are optional). It knows that <PRE> tags cannot contain <IMG> tags, and it preserves whitespace inside <PRE> (whitespace is not preserved in other tags). And it checks for the proper attributes for each tag.
Great! I can now parse something like this:
<p>This is my <a href="http://boston.conman.org/">blog</a>. Is this not <em>nifty?</em> <p>Yeah, I thought so.
into this:
tag =
{
  [1] =
  {
    tag        = "p",
    attributes = { },
    block      = true,
    [1] = "This is my ",
    [2] =
    {
      tag        = "a",
      attributes = { href = "http://boston.conman.org/" },
      inline     = true,
      [1] = "blog",
    },
    [3] = ". Is this not ",
    [4] =
    {
      tag        = "em",
      attributes = { },
      inline     = true,
      [1] = "nifty?",
    },
  },
  [2] =
  {
    tag        = "p",
    attributes = { },
    block      = true,
    [1] = "Yeah, I thought so.",
  },
}
I then began the process of writing the code to render the resulting data into plain text.
I took the classifications that the HTML 4.01 strict DTD uses for each tag (you can see the <P> tag above is of type block, and the <EM> and <A> tags are of type inline) and used those to write functions to handle the appropriate type of content—<P> can only have inline tags, <BLOCKQUOTE> only allows block type tags, and <LI> can have both; the rendering for inline and block types are a bit different, and handling both types is a bit more complex yet. The hard part here is ensuring that the leading characters of <BLOCKQUOTE> (wherein each line of the rendered text starts with a “| ”) and of the various types of lists (dictionary, unordered and ordered lists) are handled correctly—I think there are still a few spots where it isn't quite correct.
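The leading-character idea can be sketched like this (an illustration of the technique, not the actual rendering code—the function name is mine):

```lua
-- Illustrative sketch only: once a <BLOCKQUOTE>'s contents have been
-- rendered to plain text, prefix every line with the lead string.
-- Nested block quotes compose naturally, yielding "| | " and so on.
local function prefix_lines(text,lead)
  return lead .. text:gsub("\n","\n" .. lead)
end

print(prefix_lines("The first line.\nThe second line.","| "))
```

The tricky part the entry alludes to is interactions: a list inside a block quote has to get both its marker and the “| ” prefix, in the right order.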
But overall, I'm happy with the text rendering I did, but I was left with one big surprise …
Spending cache like it's going out of style
I wrote an HTML parser. It works (for me—I tested it on all 5,082 posts I've made so far). But it came with one large surprise—it bloated up my gopher server something fierce—something like eight times larger than it used to be.
Yikes!
At first I thought it might be due to the huge list of HTML entities (required to convert them to UTF-8).
A quick test revealed that not to be the case.
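For context, the entity conversion amounts to one big lookup table; a tiny slice (the coverage and names here are my own, the real table spans every HTML 4.01 entity) might look like:

```lua
-- A tiny illustrative slice of an entity-to-UTF-8 lookup table.
local entity =
{
  ["&amp;"]   = "&",
  ["&lt;"]    = "<",
  ["&gt;"]    = ">",
  ["&mdash;"] = "\226\128\148", -- U+2014 EM DASH as raw UTF-8 bytes
}

local function decode(s)
  -- unknown entities pass through untouched (gsub keeps the match
  -- when the table lookup returns nil)
  return (s:gsub("&#?%w+;",entity))
end

print(decode("Fish &amp; Chips"))
```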
The rest of the code didn't seem all that outrageous, so fearing the worst, I commented out the HTML parser.
It was the HTML parser that bloated the code.
Sigh.
Now, there is a lot of code there to do case-insensitive matching of tags and attributes, so thinking that was the culprit, I converted the code to not do that (instead of looking for <P> and <p>, just check for <p>). And while that did show a measurable decrease, it wasn't enough to be worth losing the case-insensitive nature of the parser.
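The general idea can be illustrated with stock Lua patterns (the actual parser uses LPEG; this sketch and its function name are mine): each letter of a tag name becomes a two-character class accepting either case, so no tag has to be written out twice.

```lua
-- Sketch of case-insensitive matching without duplicating patterns:
-- turn each letter into a character class like [eE].
local function ipattern(name)
  return (name:gsub("%a",function(c)
    return "[" .. c:lower() .. c:upper() .. "]"
  end))
end

print(ipattern("em"))                               --> [eE][mM]
print(("<EM>"):match("<" .. ipattern("em") .. ">")) --> <EM>
```

The LPEG equivalent builds an analogous pattern per letter, which is where the extra code (and presumably some of the bloat) comes from.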
I didn't check to see if doing a generic parse of the attributes (accept anything) would help, because, again, it did find some typos in some older posts (mostly TILTE instead of TITLE).
I did try loading the parsing module only when required, instead of upfront, but:
- it caused a massive spike in memory utilization when a post was requested;
- it also caused a noticeable delay in generating the output, as the HTML parser had to be compiled per request.
So, the question came down to—increase latency to lower overall memory usage, or consume memory to decrease a noticeable latency?
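The load-on-demand variant looks something like this minimal sketch (the string library stands in here for the heavyweight HTML parser module, whose name I'm not assuming):

```lua
local parser -- deliberately nil until the first post is requested

local function render(html)
  -- require() compiles and caches the module on first use, so the
  -- startup memory cost becomes a one-time first-request latency cost
  parser = parser or require "string"
  return parser.upper(html)
end

print(render("<p>hello</p>"))
```

Since require() caches in package.loaded, only the first request pays the compile price—but that price is exactly the "noticeable delay" described above.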
Wait! I could just pre-render all entries as text and skip the rendering phase entirely, at the cost of yet more disk space …
So, increase time, increase memory usage, or increase disk usage?
As much as it pains me, I think I'll take the memory hit. I'm not even close to hitting swap on the server and what I have now works. If I fix the rendering (and there are still some corner cases I want to look into) I would have to remember to re-render all the entries if I do the pre-render strategy.
A relatively quiet Fourth of July
This has been one of the quietest Fourth of July I've experienced. It's been completely overcast with the roll of thunder off in the distance, the city of Boca Raton cancelled their fireworks show, and our neighbor across the street decided to celebrate with his backyard neighbor on the next street over.
Yes, there's the occasional burst of fireworks here and there, but it sounds nowhere near the levels of a war zone as it has in the past.
Happy Fourth of July! And keep safe out there!
Tuesday, July 07, 2020
The magic tricks may be simple, but that's not to say they're easy to perform
Bunny and I watch “Fool Us,” a show where Penn & Teller showcase a number of magicians and try to figure out how the trick was performed. It's a cool show and we will often rewind bits (and again, and again) to see if we can spot the trick.
On the recent show we saw Wes Iseli whose trick with a single 50¢ piece fooled Penn & Teller. He walked out on stage, handed Alyson Hannigan (the hostess) a “prediction” for her to hold on to. He then had the entire audience stand up and had them call heads or tails as he flipped the 50¢ piece. After about ten flips, there was a single audience member left standing. Wes then asked Alyson to read the “prediction” he made, and it described the audience member left standing.
I think I know how it was done, and the only reason it fooled Penn & Teller was due to a bad guess on Penn's part (if they know of several ways a trick can be done, and they guess the wrong one, they're fooled). The thing about magic is that often times, the “trick” is so simple that once you know how it's done, it's like “that's it? That's how it was done?”
For instance, way back in the second season, Wes Barker did a trick where he speared a page from a phone book with a sword which fooled Penn & Teller. He recently revealed how he did the trick (because, as he stated, phone books don't exist anymore). The trick was stupidly simple and by overthinking the trick, Penn was fooled. Oh, and it was interesting to learn yet another method of tearing up a phone book (as if we'll ever have a phone book to rip up).
Another instance of a very simple method tricking Penn & Teller is a recent trick by Eric Leclerc. He did the “needle in a haystack” trick, but this time finding a marked packing peanut by Teller in a huge box of packing peanuts. And again, it was a very simple trick. It's amazing how simple these tricks really are. It's almost like they're cheating. And in a way, I guess they are.
Monday, July 13, 2020
A twisty maze of little redirects, all alike
I have The Electric King James Bible. I then ported it to gopher. Sometime after that, I ported it again, this time to Gemini. I then received an email from Natalie Pendragon, who runs GUS, about the infinite redirection that happens when you try to read the Book of Job via Gemini.
Sure enough, when I visited The Book of Job on Gemini, I ended up in a maze of twisty little redirects, all alike.
So there's this file I have that lists not only the books of the Bible, but the abbreviations for each book, so instead of having to type http://bible.conman.org/kj/Genesis.1:1 you can type http://bible.conman.org/kj/Ge.1:1 and it'll do The Right Thing™. Only for Job, there is no abbreviation—instead, I have “Job” listed as the abbreviation (and the same issue goes for Joel). I guess I handled that case in the web version (don't let the timestamps fool you—I imported it into git ten years ago, but wrote the code over twenty years ago), and the gopher version doesn't do redirections, so it doesn't matter there, but the Gemini version does do redirections, and I didn't check that condition.
Oops.
The issue has now been fixed.
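A hedged reconstruction of the bug (the table contents and function names here are my own illustration, not the actual code): when the “abbreviation” is the book's full name, a naive abbreviation-to-canonical-name redirect points right back at itself.

```lua
local books =
{
  Ge   = "Genesis",
  Job  = "Job",  -- no shorter abbreviation exists
  Joel = "Joel", -- same problem
}

local function resolve(book)
  local full = books[book]
  if full and full ~= book then
    return "redirect",full -- e.g. Ge -> Genesis
  end
  -- already canonical; redirecting here would loop forever
  return "serve",book
end

print(resolve("Ge"))
print(resolve("Job"))
```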
Wednesday, July 15, 2020
There's a disconnect somewhere
I received the following email from The Corporate Overlords:
All,
Microsoft is currently reporting service issue with the Outlook email client globally. They are rolling out fixes, however it may take several hours to complete.
Outlook on the web and mobile clients are unaffected. If you are currently experiencing Outlook crashes or problems accessing email via Outlook on your computer, please login with your [Corporate Overlord assigned] credentials to
https://outlook.office.com
or use the Outlook mail app on your mobile device.
There's an issue with using email, and they notify the users of this issue, by using email. I mean, I got it because I use the web version of Lookout. I have to wonder how many other people at the Corporation know of the issue …
Thursday, July 16, 2020
Adventures in Formatting II: Gemini Boogaloo
If you are reading this via Gemini, then welcome to my blog! Emboldened by converting HTML to text for gopher, I decided to try my hand at converting HTML to the native Gemini text format (section 5 of the specification). I'm less than thrilled with the results, but I don't think, given the constraints, that I could do a better job than I have.
The format has similarities to Markdown but simpler, and you can't embed HTML for when Markdown has no syntax to do what you want. I mean, given that Gemini supports serving up any type of content, I could have just served up HTML, but well, I have the webserver for that. And I could have just used the plain text format I use for gopher, but the Gemini text format does allow for links, and I like my links (even if external links have a half life of about a year).
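For reference, the Gemini text format is line-oriented; a hypothetical entry (the contents and the gopher URL below are invented for illustration) renders to something like:

```text
# Adventures in Formatting II: Gemini Boogaloo

If you are reading this via Gemini, then welcome to my blog!

=> https://boston.conman.org/ the web version of this blog
=> gopher://example.org/ a gopher link
```

Headings, text, and links each occupy whole lines of their own—which is why deeply nested HTML is awkward to translate.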
Most of the entries will look okay; it's only the occasional entry with deeply nested HTML that might look weird.
And yes, the size of the server bloated quite a bit since I reused the HTML parser, but it's something I'll just have to live with for now.
Monday, July 20, 2020
Adventures in Atom
I offer up several feeds for my blog, one of the formats being Atom, a much nicer (and better specified) than RSS (which I also offer by the way). There exists an Atom feed aggregator for Gemini. The Atom specification allows zero or more links to a blog entry. I discussed this with the author of CAPCOM (the Atom feed aggregator for Gemini) and it can deal with multiple links. I don't know about any other Atom aggregators out there, but I'm going to find out, because each entry now has three links per entry—one to the web version, one to the Gemini version and one to the gopher version.
I have a feeling this is going to be fun, and by “fun,” I mean “it will probably break things somewhere because who is stupid enough to use more than one link?”
Update a few moments later …
Of course, because of the way I translate links for the gopher version, those using gopher won't see the web version of the link. Sigh.
We're a long way from the halcyon days of ASCII-only text
Rendering text, how hard could it be? As it turns out, incredibly hard! To my knowledge, literally no system renders text "perfectly". It's all best-effort, although some efforts are more important than others.
I don't know if I'm glad or sad I didn't read this article before rendering HTML into different formats. On the one hand, it might have completely discouraged me from even starting. On the other hand, I don't use the full span of Unicode characters in my blog posts (which I'm very thankful for, after doing all this work). But on the gripping hand, Lynx was doing a horrible job at formatting HTML into text, and what I have now is much better looking.
So let's see … we now have three hard things in Computer Science:
- cache invalidation;
- naming things;
- text rendering;
- and “off-by-one” errors.
Yup, that seems about right.
Friday, July 24, 2020
Why not open the office back up on Halloween? It'd be thematically cool to do so
I received a work email today, notifying everybody at the Ft. Lauderdale Office of the Corporation, that the office won't reopen until Monday, October 5TH. We were supposed to open in late June, maybe early July, but given that it is now late July, the Powers That Be decided it might be better to just wait several more months than to continuously plan to open the office only to have to push the date back.
I can't say I'm upset at the decision.
And here I thought web bots were bad
I suppose it was only a matter of time, but the bad web robot behavior has finally reached Gemini. There's a bot out there that made 42,766 requests in the past 27 hours (so not quite one-per-second) until I got fed up with it and blocked it at the firewall. And according to my firewall, it's still trying to make requests. That tells me that whatever it is, it's running unattended. And several other people running Gemini servers have reported seeing the same client hammering their systems as well.
Now, while the requests average out to about one every two seconds, they actually come in bursts—a metric buttload pops in, a bunch fail to connect (probably because of some kernel limit) and all goes quiet for maybe half a minute before it starts up again. Had it actually limited the requests to one every two seconds (or even one per second) I probably wouldn't mind as much.
As it was though, quite a large number of the requests were malformed—it wasn't handling relative links properly, so I can only conclude it was written by the same set of geniuses that wrote the MJ12Bot.
Sigh.
On the plus side, it did reveal a small bug in the codebase, allowing some of the malformed requests to be successful when they shouldn't have been.
Well, the bugs start coming and they don't stop coming
No sooner do I find one bug than I find another one.
In an unrelated program.
Sigh.
In this case, the bug was in my gopher server, or rather, the custom module for the gopher server that serves up my blog entries. Earlier this month, I rewrote parts of that module to convert HTML to text and I munged the part that serves up ancillary files like images. I found the bug as I was pulling links for the previous entry when I came across this entry from last year about the horrible job Lynx was doing converting HTML to text. In that post, I wrote what I would like to see and I decided to check how good of a job I did.
It's pretty much spot on, but I for some reason decided to view the image on that entry (via gopher), and that's when I found the bug.
The image never downloaded because the coroutine handling the request crashed, which triggered a call to assert(), causing the server to stop running.
Oops.
The root cause was that I forgot to prepend the storage path to the ancillary files. And with that out of the way …
No, seriously, the bugs start coming and they don't stop coming
One of the links on this entry won't work on gopher or Gemini. That's because I haven't implemented a very important feature from the web version of my blog—linking to an arbitrary portion of time! I don't even think that link will work on gopher or Gemini either, because of the way I “translate” links when generating the text from HTML. HTML-to-text translation is hard, let's go shopping—oh, wait … I can't because of COVID-19!
Sigh.
Update a few moments later …
Not all links break on Gemini.
Saturday, July 25, 2020
Please make the bugs stop
Let's see … there was the bug in my Gemini server, two bugs in a custom module for my gopher server, so it's little surprise to me to find a bug in a third server I wrote—this time my “Quote of the Day” server (no public repository for that one, but the service is easy to implement).
This bug was a simple “off-by-one” error—when I got to the last quote, the program should have wrapped around to restart with the first quote, only that code was “off-by-one” (thus the name). It took a while to hit because it requires serving up 4,187 quotes before restarting, and it's not like it gets hit all that often.
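The fix amounts to one line; here's a minimal reconstruction (the quote data and names are invented) of the wraparound logic:

```lua
-- With Lua's 1-based arrays, using `index % #quotes` alone yields 0
-- at the last quote; taking the modulus first and then adding one
-- wraps cleanly back to the start.
local quotes = { "quote one","quote two","quote three" }
local index  = 0

local function next_quote()
  index = index % #quotes + 1 -- 3 wraps to 1, never 0 or 4
  return quotes[index]
end

for _ = 1,4 do print(next_quote()) end
```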
I'd say “I think that's it for now,” but I'm worried that would jinx me for a slew of new bugs to show up, so I won't say it.
Bad bots, bad bots, whatcha gonna do? Whatcha gonna do when they contact you?
I finally found a way to contact the party responsible for slamming my Gemini server with requests. Hopefully I'll hear a response back, or at the very least, the behavior of the bad bot will be silently fixed.
I can only hope.
Tuesday, July 28, 2020
All is silent on the bad Gemini bots
On Saturday, I sent a message to the party responsible for slamming my Gemini server (one among several) and I've yet to receive any response. I removed the block from the firewall, and I haven't seen any requests from said bot. It looks to have been a one-off thing at this time.
Weird.
But then again, this is the Intarwebs, where weird things happen all the time.
At this point, I'm hoping it was fixed silently and it won't be an issue again.
Wednesday, July 29, 2020
I can't believe I didn't think of that sooner
Last week I was tasked with running the regression test for “Project: Sippy-Cup” and figuring out any issues. Trying to figure out the issues was a bit harder than expected. I had my suspicions, but the output wasn't quite conducive to seeing the overall picture. The output was almost, but not quite, valid Lua. If it were valid Lua, I could load the data and write some code to verify my hypothesis, but alas, I had to write code to massage the output into a form that could be loaded.
What a drag.
I was able to prove my hypothesis (some contradictory features were enabled, but it's a “that can't happen in production” type scenario, and if it did happen in production it's of no real consequence). I then adjusted the regression test accordingly.
But afterwards, I adjusted the output slightly to make it valid Lua code. That way, it can be loaded via the Lua parser so further investigations of errors can be checked. I'm just a bit surprised that I didn't think of that sooner.
Update on Thursday, July 30TH, 2020
Interfacing with the blackhole of the Intarwebs
Smirk called last night to ask me how I publish my blog to MyFaceMeLinkedBookWeInSpace. I told him I do it by hand. When I post to my blog, I then go to FaceMeLinkedMyBookWeInSpace and manually post the link. It used to be an automatic feature, but several years ago MeLinkedMyFaceWeInSpaceBook changed the API. He was curious because he was getting fed up with FaceMeLinkedMySpaceBookWeIn and their censorious ways, and wanted a way to post to both FaceMeLinkedMySpaceBookWeIn and another website. Here's what I found.
First:
The publish_actions permission will be deprecated. This permission granted apps access to publish posts to Facebook as the logged in user. Apps created from today onwards will not have access to this permission. Apps created before today that have been previously approved to request publish_actions can continue to do so until August 1, 2018. No further apps will be approved to use publish_actions via app review. Developers currently utilizing publish_actions are encouraged to switch to Facebook's Share dialogs for web, iOS and Android.
New Facebook Platform Product Changes and Policy Updates
For my use case, I'm still screwed, unless I become an “approved partner:”
On August 1st, 2018, the Live API publish_actions permission, which allows an app to publish on behalf of its Users, will be reserved for approved partners. A new permission model that allows apps to publish Videos to their User's Groups and Timeline will be created instead.
New Facebook Platform Product Changes and Policy Updates
So, I'm still screwed.
You can do a “share dialog” which looks like it may work, but … it looks like it may require the use of JavaScript (a non-starter for me, so I'm still screwed) and the user has to be logged into FaceMeLinkedMyBookWeInSpace for it to work (another non-starter for me). This may work for Smirk, and more importantly, it doesn't require a FaceMeLinkedMyBookWeInSpace app to be written.
Then there's this Pages API thing that looks like it could work (not for me, because I don't think my “timeline” counts as a “page”—man, this MyFaceMeLinkedInSpaceBookWe stuff is confusing), but it requires building an app. If what Smirk wants to publish to on FaceMeLinkedMyBookWeInSpace is a “page,” then this is probably what he wants. It may look like “Instant Articles” is the way to go, especially since there appear to be plugins for popular web publishing platforms, but the kicker there is: “[t]he final step before going live is to submit 10 complete articles for review by our team.” That may work for The National Enquirer or the Weekly World News, but it won't work for me or Smirk.
And that's for getting stuff into MeLinkedMyFaceBookWeInSpace. As far as I can tell, there's no way to get stuff out! And that's probably by design—LinkedMyFaceMeSpaceBookWeIn is the Intarwebs, as far as it's concerned.
Thursday, July 30, 2020
I can't believe I didn't think of that—clarification
Over on MeLinkedInstaMyFaceInGramSpaceBookWe, my friend Brian commented “Cool! (Whatever it was exactly you did!!!)” about my work issue. Rereading that post, I think I can clarify a bit what I did. But first, a disclaimer: I'm not revealing any personal information here as all the data for the regression test is randomly generated. Names, numbers, it's all generated data.
So I ran the test and the output from that is a file that looks like (feature names changed to protect me):
ERR CNAM feature8 failed: wanted "VINCENZA GALJOUR" got ""
testcase =
{
  id = "3.0037",
  orig =
  {
    number = "2012013877",
    person =
    {
      business = "-",
      first    = "VINCENZA",
      name     = "VINCENZA GALJOUR",
      last     = "GALJOUR",
    },
    feature9  = false,
    cnam      = true,
    extcnam   = false,
    feature4  = true,
    feature10 = false,
    feature7  = false,
    feature8  = true,
  },
  term =
  {
    feature10 = true,
    feature1  = false,
    feature2  = false,
    feature3  = false,
    feature4  = true,
    feature5  = false,
    feature6  = false,
    number    = "6012013877",
    feature7  = false,
    feature8  = false,
  },
}
ERR CNAM feature8 failed: wanted "TERINA SCHUPP" got ""
testcase =
{
  id = "3.0039",
  orig =
  {
    number = "2012013879",
    person =
    {
      business = "-",
      first    = "TERINA",
      name     = "TERINA SCHUPP",
      last     = "SCHUPP",
    },
    feature9  = false,
    cnam      = true,
    extcnam   = false,
    feature4  = true,
    feature10 = false,
    feature7  = false,
    feature8  = true,
  },
  term =
  {
    feature10 = true,
    feature1  = false,
    feature2  = false,
    feature3  = false,
    feature4  = true,
    feature5  = false,
    feature6  = false,
    number    = "6012013879",
    feature7  = false,
    feature8  = false,
  },
}
Since the regression test is written in Lua, I found it easy to just dump the structure holding the test data to the file, given I already have a function to do so. I also print out what failed just before the data for that particular test case. The code that prints the structure outputs valid Lua code. All I changed was adding an array declaration around the output, turning the error message into a comment, and changing testcase to a valid array index:
testcase =
{
  -- ERR CNAM feature8 failed: wanted "VINCENZA GALJOUR" got ""
  [1] =
  {
    id = "3.0037",
    orig =
    {
      number = "2012013877",
      person =
      {
        business = "-",
        first    = "VINCENZA",
        name     = "VINCENZA GALJOUR",
        last     = "GALJOUR",
      },
      feature9  = false,
      cnam      = true,
      extcnam   = false,
      feature4  = true,
      feature10 = false,
      feature7  = false,
      feature8  = true,
    },
    term =
    {
      feature10 = true,
      feature1  = false,
      feature2  = false,
      feature3  = false,
      feature4  = true,
      feature5  = false,
      feature6  = false,
      number    = "6012013877",
      feature7  = false,
      feature8  = false,
    },
  },
  -- ERR CNAM feature8 failed: wanted "TERINA SCHUPP" got ""
  [2] =
  {
    id = "3.0039",
    orig =
    {
      number = "2012013879",
      person =
      {
        business = "-",
        first    = "TERINA",
        name     = "TERINA SCHUPP",
        last     = "SCHUPP",
      },
      feature9  = false,
      cnam      = true,
      extcnam   = false,
      feature4  = true,
      feature10 = false,
      feature7  = false,
      feature8  = true,
    },
    term =
    {
      feature10 = true,
      feature1  = false,
      feature2  = false,
      feature3  = false,
      feature4  = true,
      feature5  = false,
      feature6  = false,
      number    = "6012013879",
      feature7  = false,
      feature8  = false,
    },
  },
}
That way, I can verify my hypothesis with some simple Lua code:
dofile "errorlog.txt"

for _,result in ipairs(testcase) do
  if not (result.feature10 and (result.feature8 or result.feature4)) then
    print("hypothesis failed")
  end
end
Tuesday, August 18, 2020
It's not storming the beaches at Normandy, but in-person voting in this time of COVID-19 is nothing to sneeze at
So today was the Florida Primary Election Day. Given that my polling station is just a few blocks away from Chez Boca, I decided to walk. Less to save fuel and more to just get out of the house for something other than food. Well, that, and to see what the protocol might be for the general election in November in these pandemic times. The walk there wasn't bad.
It was not crowded at all. There were a total of seven people at the polling station—five people staffing the station, and two voting (and I'm including myself in that count of two). It was pretty much the same as every other time I voted, with the exception of mask wearing, and keeping the pen used to mark the ballot.
I'm registered as an independent voter, so the ballot wasn't that large for me. Seven races: one Supervisor of Elections, three judges, one school board district, and two parks and recreation elections! I didn't even know those were a thing!
You might not want to read this post. If you read beyond the title, it's on you, not me. You have been warned. Have a nice day.
On my walk to the polling station, I saw a sign posted in someone's front yard that I found amusing enough to photograph for this blog. As I resumed my walk, I began to have second thoughts about the sign—it raises questions about the 1st and 2nd Amendments and could be considered inflammatory in a country deeply divided along partisan lines, so I decided against posting the image.
Then I thought Why am I censoring myself? This is my blog! I can post what I want! I'm not forcing anyone to read this, and there are plenty of other pages out there one can read.
So, without further ado, the sign:
Don't say I didn't warn you.
A clown singing Gilligan's Island to the tune of Stairway to Heaven, I did not know I needed this
There's not much else to say but, enjoy.
Oh wait, there is … he's going off the rails on a crazy train.
Sunday, August 23, 2020
Yeah, there was a reason why we couldn't sell these
The power at Chez Boca went out on Friday afternoon and didn't come back up until around noon today.
It was a painful reminder of just how addicted, er, dependent I am on the Internet and cat videos.
So to pass the time, I decided to go through what remains of my Dad's CD collection (that is, the CDs we kept because they might be good enough to keep, and not thrown away because we couldn't sell them).
On one of the CDs I listened to, I found the following text:
Tai Chi Way Regimen Music
Return To Simplicity (Warning: Due to the relaxing effect of the music, do not listen while driving, working, or participating in any kind of mechanical operation)
The Internet couldn't come back soon enough.
Wednesday, August 26, 2020
Welcome to the machine
Ah, the Corporate Overlords' Managed Laptop … how would I ever waste time without you?
Last week I received an email saying the Corporate Overlords' Managed Laptop is “out of compliance” (read: I waited longer than twenty minutes to update the laptop). I followed the instructions in the email but they failed to appease the Corporate Overlords as it was still “out of compliance.” A meeting was scheduled to let the “Desktop Compliance Analyst” log into the laptop and make it “into compliance.” That failed due to the Corporate Overlords' VPN being flaky, so the meeting was rescheduled for today.
This time, the DCA had me download a 2G patch file. As I was waiting for it to download, the XXXXXXXXXXXXX Protection program updated itself and was asking me to restart the computer to finish its installation. I asked the DCA about this, and was told to just let the computer run and finish the downloading.
Twenty minutes later (and I wish I was making this up), the XXXXXXXXXXXXX Protection program interrupted the popup asking me to restart the computer to inform me that it had updated itself again and to restart the computer to let it finish.
Seriously.
While this was going on, I asked the DCA about the Heisenberg Notification Center. The DCA said it was easy to reconfigure and went ahead to do it, since I don't have access to do said configuration.
“Oh,” said the DCA, “you're running a version of Windows 10 that's older than twenty minutes. No wonder I couldn't find the option I was looking for.”
“You guys set it up,” I said.
“No, I belong to the Desktop Compliance Unit. I'm not responsible for configuring machines.”
“No, ‘you guys’ being The Corporate Overlords.”
That got a slight chuckle out of the DCA.
I then asked the DCA about another issue. “I keep getting this popup—‘Program FunUtility has stopped working. A problem caused the program to stop working. Please close the program’ and it has a button on it labeled ‘Close the program.’ I do that, but it keeps coming back up.”
“Can you send me a screen shot?”
“Can you not take over my laptop and see it for yourself?”
“Oh, it's probably nothing. Just keep dismissing the dialog box until it stops appearing.”
“Seriously?”
“Seriously.”
How does anyone use Windows?
Tuesday, September 15, 2020
As this is all being done over email, how do I know it's not an elaborate phishing scheme?
I think I dodged a bullet.
Last Friday, I received an email from the Corporate Overlords about their impending “multi-factor authentication implementation”. They included a FAQ about the project:
Q: I don't have a company mobile. Can I install XXX on my personal mobile?
A: Yes, you can install the XXX Mobile App on your personal phone to use it as a token.
Q: I don't have a company mobile and I don't want to install XXX on my personal mobile.
A: We strongly suggest using the XXX mobile app as the most convenient features, like XXX push (one touch authorization), are not available on the hardware token. You also can use the DUO Mobile App to secure your personal accounts (Google, Facebook, LinkedIn, Amazon, etc…) with multi-factor authentication.
If you really don't want to install the app, please let XXX know, hardware token also available and will be distributed upon request.
I don't have a company provided smart phone, and I really don't want to install this software on my personal smart phone, given the silliness of their managed laptop. But I also don't want to come across as too obstinate in dealing with them—they do, after all, sign the paychecks.
So I received the email about downloading the app today and after some internal back-and-forth, I decided “Why the heck not? Let the Corporate Overlords onto my iPhone! What's the worst that can happen?”
Please don't answer that.
Much to my relief (and surprise, but in retrospect it shouldn't really have been) my version of iOS is too old to be supported! I can't use the mobile app!
Whew!
So now I'll see how long it takes for them to send me the hardware token, and where they deliver it (given that the Ft. Lauderdale Office of the Corporation is still closed due to COVID-19).
Tuesday, September 29, 2020
I knew my dad was into New Age meditation neurohacking music, but not this
The DSL has been down for a week. I'm saving the iPhone hot spot for work as just watching a YouTube video blew 50% of my current “data plan” for the month, and I finished reading the entire Harry Potter series. So now I'm going through yet another bit of my Dad's CD collection and … yeah, it's a horrible collection.
I'm now listening to “Somber Wurlitzer” by Greater California (I originally wrote “Greater California” by Somber Wurlitzer, having mixed up the group and album names), and I can only describe it as “Gothic Bubblegum Pop,” which is as horrible as it sounds.
Update on Tuesday, October 14th, 2020: Yes, I made a bit of a mistake here …
Thursday, October 01, 2020
Skidoosh
The following is one of the most amusing spam emails I've received in a long time:
From: "Alisha Roberts" <selfdefense@XXXXXXXXXXXXXXXX>
To: <sean@conman.org>
Subject: "I Was In Hell" Serial Robber Terrified After Entering Detroit Family's House
Date: Thu, 1 Oct 2020 04:10:15 -0500
What a serial robber thought was just another break in, turned out to be the most frightening thing he had ever experienced.
Police found him crippled on the living room floor, seven feet away from his gun, screaming in excruciating pain "He's not human!".
The scene was incredibly awkward, as the person he was pointing at was a little boy in Teddy Bear pajamas, no older than 10 years, surrounded by his worried parents and grandparents.
When an officer interrogated the family they said that Jimmy -the little boy- had learned a self-defense move [2]after watching this video with his grandfather.
The two-finger trick, called the "Death Touch", was invented by a Chinese Kung Fu Master and it allows anyone, no matter their physical strength or condition, to bring down an attacker just by poking him in a vulnerable spot.
"We saw the video a couple of times and practiced a little bit. I'm shocked Jimmy almost killed an 180 pound man just by touching him…and he did it in the dark!"
It reminds me of the Count Dante ads one would find in comic books back in the 70s, promising to teach 點脈. I guess with the death of the comic book industry they have to advertise elsewhere …
Tuesday, October 06, 2020
Is it really a first world problem when DSL is required for work and it's down?
Two weeks ago our DSL line went down, and the ETA kept being extended and extended until the connection finally came back up last Thursday, a few days ahead of the last ETA we received from the Monopolistic Phone Company, which was Saturday evening.
There was much rejoicing (woot).
For about four hours, when it promptly went down again.
When it initially went down two weeks ago, there was no indication of a problem on our DSL modem as all the lights were green—the issue was purely somewhere in the Monopolistic Phone Company's network and from what I understand, it's getting harder to find replacement DSL equipment (and sadly, DSL from the Monopolistic Phone Company is the only game here in the boondocks that is Boca Raton). This time, however, the DSL connection light was flashing red, meaning the DSL couldn't connect to the DSLAM, but try telling the Monopolistic Phone Company that. Their response was “there's an outstanding ticket for your area that is currently being worked on and is expected to be resolved by Saturday evening,” even though our neighbor across the street, who has the same plan as us, is up and running.
It took multiple levels of escalation to even get the Monopolistic Phone Company to admit that, “oh yes, one system shows the work has been done for your area, but the system our front line techs use hasn't been updated yet.” The earliest service appointment was today.
The service tech came out, and upon looking into the issue, found that our phone line had been cut at the pole!
No one has any idea why.
Very odd.
Wednesday, October 07, 2020
A decade of working for The Man
I've been working at The Corporation now for ten years. My, how the time has flown.
At the five year mark, I received a hefty slab of glass with the anniversary etched inside. It's quite nice and quite hefty—enough that you wouldn't want to be on the receiving end of it if thrown. For the ten year mark, I received the 8″ Pinnacle Globe from Glassical Designs. It's not as heavy as the five year piece of glass, but it is much nicer in design.
Also included were gift cards worth $250, which is a very nice touch.
Friday, October 09, 2020
I don't care if it's a first world problem—it's annoying and I want it fixed
This is silly. The DSL is down. Again!
And the first available appointment is the 15th.
On top of this, we've already burned through our initial cell phone data plan with the Oligarchist Cell Phone Companies, and the updated cell phone plan with the Oligarchist Cell Phone Companies. We don't want the “unlimited data plan” because we (normally) don't need the “unlimited data plan” (and man, I found out the hard way that YouTube burns through data plans like you wouldn't believe).
So now we're adding data to our cell phone data plan 1GB at a time … Grrrrrrrr …
Saturday, October 10, 2020
It sucks that Boca Raton, Florida is way out in the sticks
Appointment updated to be on Monday, between 8:00 am and 8:00 pm. Love that pinpoint accuracy there. But at least it's earlier than the 15th.
Later, we got a call because the Monopolistic Phone Company thought the issue was resolved (from my perspective, they really don't want to send out a technician—or the left hand doesn't know what the left pinky is doing), but no, it's still down.
Monday, October 12, 2020
We should be good to go now … I hope
The technician arrived early this morning (wow!) and fixed the issue with the DSL (again!), and the cause explains the Monopolistic Phone Company's confused response about it being fine. The DSLAM was fine. The line wasn't cut. This time, the wiring behind the faceplate in the wall was shorting out.
Good lord—we've had the trifecta of DSL issues this past month: the interior wiring, the exterior wiring, and the Monopolistic Phone Company's network!
Sheesh!
Hopefully this means everything is fine and we won't have any further issues with th@ D ASDqwe2893 lkas f
NO CARRIER
OK
Tuesday, October 13, 2020
I wonder what the difference between Boomer Darkwave and Gothic Bubblegum pop really is?
Two weeks ago I mixed up a music group name with its album name. The only text on the front of the CD is “GREATER CALIFORNIA” with nary a mention of the group name. Along the spine is “GREATER CALIFORNIA ★ SOMBER WURLITZER” but since I am unwise in the ways of CD labelling, I mixed up the names.
Oops.
It's a pity though, because I think “Somber Wurlitzer” is a much better group name than “Greater California.”
But it does appear I've introduced Greater California to a couple of people. One person described it as “what if nostalgic boomers had invented Darkwave” and the second person said “if you think the theme for Trailer Park Boys is good, you'll probably like this. I can't give a more glowing commendation than that!” And that's great! Everybody has different tastes. It's just that I didn't like Greater California all that much. My initial reaction calling it “Gothic Bubblegum pop” was my honest reaction to it.
Wednesday, October 14, 2020
If the definition of insanity is doing the same thing over and over, getting the same result, but expecting a different one, then does that make me insane for expecting the popup window to stop popping up, or Microsoft Windows insane for popping up the popup window again and again?
I turned on the Corporate Overlords' Windows 10 managed laptop to enter my time for today when a notice popped up about Adobe® Flash® being out of date and that I should uninstall it. At least, that's what I think it said—the text was a bit too small to read easily (and if I'm complaining about the text being too small, it's small). So I click okay as a reflex, I'm asked to input my password (as most systems require today), and then a second later, I get this small popup window:
You must have administrative privileges to install Adobe® Flash® Player. Please log on with administrative privileges and try again.
Oh, that's right—managed laptop. Ah well. I click the only button on the popup window, only to have another small popup window:
You must have administrative privileges to install Adobe® Flash® Player. Please log on with administrative privileges and try again.
Yeah, I don't have administrative rights. And even if I did, I didn't even have a chance to enter my administrative rights. I click the only button on the popup window only to have another small popup window:
You must have administrative privileges to install Adobe® Flash® Player. Please log on with administrative privileges and try again.
Sigh.
I can see where this isn't going.
Time to reboot the computer.
I just love this managed laptop.
Wednesday, October 21, 2020
Umopapisdn
I neglected to mention I received the security hardware token from the Corporate Overlords. It's the size of a USB thumb drive but it's not USB. It's just a simple device with an LCD screen and a green button. You press the green button and a number shows up on the screen and that's your “password” for the moment.
Pretty simple and I've used it once so far to log in.
But when I went to submit my timecard for today, I had to reauthenticate. No big deal, just hit the button—“12h522h”.
Odd, I thought. I thought this was numeric only. Perhaps a few segments are burned out? So I type “1265226” and no go. I try again with the same value and again, no go. By the time I want to type it in a third time, the token has gone blank. I hit the button and get … a backwards “J”? A “6” maybe? What is going on?
A few moments later I get a sequence of numbers with a backwards “J” and an “h”.
It takes me a few more minutes to realize I'm holding the security token upside down.
Sigh.
There are three graphic symbols on the token along one edge. With the token oriented one way, the three graphic symbols spell out a word. Flip the token upside down and the three graphic symbols spell out a different word. And what's amazing is that one of the symbols reads as an “E” one way, and as a “D” the other.
Yeah, the symbols are that abstracted.
Boy do I feel stupid!
I thought it stood for “Girl In File” so of course it's pronounced …
The three letters “G,” “I” and “F.” Together, they represent an image format, but there is some controversy over how it's pronounced.
Tom Scott presents the arguments for the two ways GIF is pronounced, but I like Mike Rugnetta's take, which is a completely different pronunciation based on “ghoti” (the alternate spelling for “fish”).
Wednesday, October 28, 2020
Funny how enterprise software never seemed to be installed on the Enterprise itself
The Corporate Overlords have recently changed the time tracking system. The previous one was, I think, developed in house and all things considered, wasn't that bad. The new one, not developed in house (perhaps because “code is a liability” or tax differences between opex and capex—somebody got a bonus I'm sure) can be best described as not being that good.
The new system, a third party, “enterprise class” time tracking application via the web (of course), doesn't respond well if you enter the data too fast. And one of the fields that's a “pull-down” list of items takes forever to load, no matter how many entries you make—it never gets faster.
But today I learned about another limitation. Due to COVID-19, I haven't taken any vacation time, and because of the quarantine, I haven't been sick either. So I have all this time accrued that I now have to use (yeah, I know, first world problems). But there is no way to request a single block of time off with the new time tracking application. No, you have to enter … each … vacation … day … one … day … at … a … time.
So it's select the day, wait a bit for the popup window to appear, select category, wait a bit while all the categories load, select “absent,” wait a bit for the sub-categories to load, select “Paid Time Off,” select out of that field to make sure it takes, select the “reason” field, wait a bit, select “other” (because the only other selection is “sick”), select out of that field to make sure that takes, be glad that the “time” field is pre-filled out with 8 hours, but wait until the popup stops resizing itself, then hit “Okay,” and wait until that takes. And then do it again for the next day. Then once the week is filled out, hit “Submit,” wait a bit until the next page comes up, and hit “Done” to do the actual submission. And then do all that again for the next week. Repeat until done.
I just wish the executives responsible for selecting this “enterprise solution” were themselves subject to it, but alas, they have secretaries, so we mere employees are subject to this craziness.
But as my manager said, “at least you get paid to do that.” Yeah, there is that.
Some questions about “enterprise software”
The talk of “enterprise software” in the previous entry reminds me of the time I worked at Negiyo, twenty years ago. Back then Negiyo had an issue tracking system that was built in-house. It was very nice, because the people who were building it were also using it so they were subject to the same pain points as everybody else in the company. But towards the end of my time there, it was decided by “the executives” that it had to go and be replaced by some third party “enterprise class” ticketing system. I wasn't there for the switch-over, but I had friends who were, and from what I remember, everyone hated it, and it took a few years for it to come close to the capabilities of the in-house ticketing system.
I have to wonder what the incentives are for this type of idiocy to happen. Strippers and steak? Hookers and blow? Executives trying to justify their jobs? Or is it some form of capex/opex tax thing?
Great, now I have to train my next manager
Today my manager at the Ft. Lauderdale Office of the Corporation announced he's retiring. Not immediately, but by the first quarter of next year, maybe the second quarter. I'm sad to see him go, but alas, these things happen. He's the sixth manager I've had since I started here at The Corporation.
I was sad when managers #1 & #3 left and #4 & #5 were let go (#2 wasn't a good fit for what I was doing at the company, even though technically he headed the QA department when I technically was working as a QA engineer—I was the only QA engineer for call processing while the rest of the department dealt with Android phones), but things will work out.
I don't know if I should be amused or alarmed that I've outlasted six managers.
Thursday, October 29, 2020
Great, now I have to train a new QA engineer
Today the QA engineer for our department (call processing, although officially it goes by a different name) just announced he's moving on to another job in two weeks.
Hmm.
I hope him well at his new job, but at the same time, I wish he were staying. We get along great, and he's good at the job. The other reason I wish he were staying is the rest of the QA team is in the Seattle Office of the Corporation. It's easier to answer questions when one can share a whiteboard and computer.
Now?
Erm … I guess things will work out?
Saturday, October 31, 2020
“Twice-times-a-thousand glares and winks and blinks and leerings of fresh-cut eyes.”
[ASCII art: a collection of grinning Halloween faces and jack-o'-lanterns, signed “jgs”]
And a reminder: check your kids' candy bags for Reese's Peanut Butter Cups so you can eat them before they do.
Sunday, November 01, 2020
Words words words words words and more words
Ah, November—a few things come to mind:
- All Hallows Eve is over (did you remember to confiscate your kids' Reese's Peanut Butter Cups?) and it's now All Hallows Day;
- the end of the 2020 Presidential Election season (which means the 2024 Presidential Election season will start Any Day Now™);
- my favorite holiday of the year is this month—Gobble Gobble Day (woot! can't wait);
- it's the start of National Novel Writing Month, where you have to write a book of 50,000 fictional words (or is it a fictional book of 50,000 words?—I get confused);
- it's also the start of National Novel Generation Month, where you have a computer program write a book of 50,000 fictional words, or real words—it's up to you.
I stopped doing NaNoWriMo years ago, instead switching to NaNoGenMo as it's more my style to have a computer do the hard work for me. I was worried that I had no idea what to do for NaNoGenMo, but much to my surprise, I found I had left notes to myself for this year's NaNoGenMo during last year's NaNoGenMo. I don't think anyone has done this before, so I don't want to give too much away at this time since the idea is (at least to me) very obvious (and no, it's not a novel of 50,000 meows).
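For the curious: the degenerate entry disclaimed above, a novel of 50,000 meows, really does fit in a few lines of Lua. This is just a sketch of that joke, not my actual NaNoGenMo idea:

```lua
-- A degenerate NaNoGenMo "novel": 50,000 words, every one of them "meow".
local words = {}
for i = 1, 50000 do
  words[i] = "meow"
end
local novel = table.concat(words, " ")

-- count the words to prove we hit the NaNoGenMo target
local _, count = novel:gsub("%S+", "")
print(count)  -- 50000
```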
What ping times and the speed of light have in common
Back when I was wearing a network admin hat, about ten to fifteen years ago, one question I tried answering was how to determine the time it took a network packet to travel one way. The ping command measures the round trip time:
[spc]lucy:~>ping -c 5 brevard.conman.org
PING brevard.conman.org (66.252.224.242) 56(84) bytes of data.
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=0 ttl=49 time=26.7 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=1 ttl=49 time=26.1 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=2 ttl=49 time=26.9 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=3 ttl=49 time=27.0 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=4 ttl=49 time=27.1 ms

--- brevard.conman.org ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 26.163/26.832/27.161/0.362 ms, pipe 2
[spc]lucy:~>
So, can one assume that the one way time is a bit over 13 ms? Not really, because asymmetric routing—where the path to a remote destination does not match the path from said remote destination—does exist. I've seen it happen several times while wearing my network admin hat. Yes, you can attempt to synchronize the clocks on the two systems—it's not easy, but it is possible if you work hard to achieve it. Over time, I found it easier to just use the round trip time as reported by ping and assume the path is symmetrical.
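Under that symmetry assumption, estimating the one-way time is just a matter of halving the average round trip. A minimal Lua sketch (the parsed sample lines are taken from the ping run above; real one-way delay can differ from RTT/2):

```lua
-- Estimate one-way latency from ping output, assuming symmetric routing.
-- The sample RTT values come from the ping run shown earlier.
local output = [[
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=0 ttl=49 time=26.7 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=1 ttl=49 time=26.1 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=2 ttl=49 time=26.9 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=3 ttl=49 time=27.0 ms
64 bytes from brevard.conman.org (66.252.224.242): icmp_seq=4 ttl=49 time=27.1 ms
]]

-- pull each "time=NN.N ms" value out of the output and average them
local sum,n = 0,0
for rtt in output:gmatch("time=([%d%.]+) ms") do
  sum = sum + tonumber(rtt)
  n   = n + 1
end

local avg = sum / n
print(string.format("average RTT: %.2f ms", avg))
print(string.format("estimated one-way time (RTT/2): %.2f ms", avg / 2))
```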
I was reminded of that when I watched “Why no one has measured the speed of light.” It never occurred to me that scientists haven't actually measured the one-way speed of light at 299,792,458 meters per second; instead, they measure the round trip time and assume the timing is symmetrical, because it's impossible to synchronize clocks between two sites (for the purpose of measuring the speed of light). Kind of mind blowing if you ask me.
Tuesday, November 03, 2020
It's the end of the world as we know it, and I feel fine
So.
The 2020 Presidential Election is over, at least for me. What ever happens, happens, and I'm going to avoid all news for the next day or so just to preserve what's left of my sanity.
And like last time, I (along with Bunny) walked to the polling station, and despite it being the most crowded I've seen, we didn't have to wait to vote. We were in and out in under ten minutes.
I just hope that whoever wins, wins by a landslide.
Just a few amusing signs I saw on the way to vote
On the walk to the polling station, I saw a few signs that I found amusing.
And in the yard of the house with the warning sign:
And in some other election news, I wonder how Velma Owen is doing in the elections …
Can this year get … no … wait … I best not ask
Of course it is.
Thursday, November 05, 2020
The Computing Horror
I use the Corporate Overlords' managed Microsoft Windows laptop to log into the Corporate Overlords' VPN at least once a week to let it update itself and reboot as many times as it wants (if I go too long, the Corporate Overlords will assume the laptop has been stolen, remotely nuke it from orbit, and swamp me with paperwork). I also use it to punch my timecard. I don't use it for programming because the Ft. Lauderdale Office of the Corporation uses Apple Macs for that.
Oh, and I use the Corporate Overlords' managed Microsoft Windows laptop to attend meetings via Microsoft Teams. I use a web interface for work email—yes, it's Microsoft Lookout for the web, and given that I use Apple Macs (and before that, Linux) at work, it was easier to use that than to try to interface a non-Windows email client with Microsoft Exchange. The meeting notices are sent via email and contain the link to the Microsoft Teams meeting.
Now the laptop, being managed, was “set up” so I could use it from the start. And for reasons I don't understand, there are two different versions of Microsoft Internet Exploder installed on the system, and they both show up on the program bar across the bottom of the screen. I found out the hard way that one of the two Microsoft Internet Exploders refuses to work with Microsoft Teams. I click the link and … no meeting, even when Microsoft Teams is already running! I nearly missed a meeting until my manager directly invited me via Microsoft Teams.
I learned that the second Microsoft Internet Exploder would happily connect to the meeting. All I have to do now is recognize which blue “E” to use to attend Microsoft Teams based meetings.
Another weird thing happened with the laptop—it was warning me that I wasn't using the proper power brick for charging and to please use the power brick that came with the unit. The only thing was, I was using the power brick that came with the unit.
Sigh.
The fix?
Use a different electrical outlet, I kid you not.
But by then, both batteries were less than 1% charged, and the computer was trying in vain to charge one battery for a few seconds, then the other battery for a few seconds, back and forth, back and forth.
Until it shut off.
At that point, I felt like I was computer illiterate, not knowing how to deal with this … this … thing in front of me. It's stressful that I'm working for the computer, and not with the computer.
It's a horrible feeling.
In the end, I let it just sit there and eventually, the batteries charged up. But I just can't shake this feeling that I'm computer illiterate when it comes to this computer.
Saturday, November 07, 2020
Uncle Ed
I found out today that my Uncle Ed died.
On the one hand, the news is sad, but I am also relieved since he was suffering with a neurological disease for years.
Growing up, my Dad's parents (after my own parents divorced) would take me in for the summer in Royal Oak, Michigan (for the geographically impaired, Royal Oak is just outside Detroit). Ed married my Dad's sister Jan, and they lived just a block and a half away from my grandparents' house. What really stood out about their house was the six-car garage Ed built (I recall him building it) to store the cars he spent his time restoring. There was also the two-story play house he built for his kids.
Not only handy, he also had an incredible sense of humor and at times probably could have used some adult supervision himself, especially around fireworks.
Ah, the fireworks. He would trade glasses (he was an optician and was part owner of a glasses shop) for a ridiculous amount of fireworks every year. There was the time when a Roman candle fell over and started shooting at the garage, Ed kicked it and it started shooting at the neighbor's house. Then there was the time we went into the park behind his house and several fireworks ended up landing near the school on the other side of the park, and the ones that didn't hit the school nearly hit 12 Mile Rd.
Good times.
He was also my first real introduction to computers. One year when I visited, he would occasionally take me into his office and let me play games on the Apple ][ he used for the business (oh, and sometimes I would have to clean the glasses before they were delivered to customers). I recall us both being affected by the ending of the classic text adventure game “Planetfall.” And at his house, he had an ever increasing number of gaming systems, starting with the Bally Professional Arcade system back in 1979.
It's no wonder that his own kids are involved with carpentry and information technology.
So here's to Uncle Ed. And much love to Aunt Jan, and my cousins Seth, Levi and Mallory.
The death of Kirk Cameron was greatly overexaggerated
Bunny is in the other room watching TV. I'm trying hard not to pay attention to the noise when I hear the unmistakable voice of Raymond Burr mention the murder of Kirk Cameron, and I'm like What?
No, I heard correctly—Kirk Cameron was murdered.
Only it was Kirk Cameron, fictional character for the Perry Mason episode “The Case of the Illicit Illusion” and not Kirk Cameron, former child actor of “Growing Pains” and promoter of the crocoduck.
For just a split second, I thought 2020 just got a bit weirder …
Tuesday, November 10, 2020
Let's see what Parler is giving to the FaceTwitBookTer refugees
So it seems there's this mass exodus away from FaceTwitBookTer with a large portion heading towards MeWe and Parler. I know a few people who have moved towards Parler, and I thought I would take a look at what's up over there.
Of course, if you aren't a member, you can't see much by default—sadly, that doesn't surprise me at all. All these social media type sites want users and what better way than by locking everything up into a walled garden.
So let's see what their User Agreement says:
5. You grant to Parler a license to any content posted by you to the Services, including a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute your content. You agree that Parler or its service providers or partners may display advertising in connection with your content and otherwise monetize your content without compensation to you. You warrant that you have all rights necessary to grant these rights to Parler and Parler users. You also grant a limited non exclusive, royalty-free license to any user of the Services to use, copy, reproduce, process, adapt, modify, publish, transmit, display, and distribute any content posted by you to the Services solely in connection with that users use of the Services. The licenses granted by you hereunder do not include any moral rights or right of attribution.
Scary, but probably required to let people repost posts. Of course, there is Fair Use, but Fair Use gets tricky when the entire post might be 40 words long. Pay close attention to the “service providers or partners” bit—that will become important in a bit.
Continuing on then …
10. You agree to receive communications from Parler, including communications sent by phone, email, text message, or other means of communication. If you provided a phone number to Parler, you are required to notify Parler when you cease to own or control that number to help prevent Parler from sending communications to others who may acquire that number.
If you provided a phone number to Parler? If? There's no “if” here—a phone number is required to sign up. That's the one thing that stopped me cold from creating an account. I get plenty of spam to my phone already, what with the two to three robocalls per day and the dozen spam text messages over the past month (mostly trying to get me to vote for or against <insert bogey man here>). Email spam I already deal with, but I don't need more crap like that coming to my phone.
So then there is this, right on the web site:
Parler believes in transparent relationships. We will always provide updates and notifications to keep you informed about changes to the platform.
That's nice. But it's when you dig into the User Agreement that things maybe aren't what they seem …
15. Parler may modify the Terms of this User Agreement in any way and at any time without notice to you, and you agree to be responsible for making yourself aware of any modification of the Terms and to be bound by any modification of the Terms when you continue to access or use the Services after any such modification. As a matter of courtesy, Parler endeavors to inform its users of any such changes …
They'll try their best, but don't hold them to “always provide updates.” And even if they do always inform users of changes, who's to say that if they're acquired, the acquiring company will maintain such a policy?
I'm just saying …
Also on their website, they say:
Any personal data shared with Parler is encrypted for your protection, and never sold to outside entities.
Which appears to be true—they won't sell your data, but they sure will give it away without much thought. They give away
- location information
- device information (including IP address, device type, browser type, operating system, phone carrier and installed applications)
- usage information
- your contacts (if you allow them to)
- web cookies (along with their third-party partners)
to
- vendors and select providers
- marketing
- analytics partners
It's all spelled out in their privacy policy. And it's pretty typical of all the social media sites.
Their community guidelines are fine—don't do illegal things or spam, and you'll be fine. Even their elaboration on said guidelines are fine—I don't see any real issues there.
It appears that right now, they aren't quite as bad as FaceTwitBookTer, since they aren't as big. They're still bad though, collecting and disseminating user information just like other social websites. And if it weren't for the mandatory phone number, I might have signed up just to see what the fuss is all about. I did sign up for MeWe early last year (and while MeWe asks for a phone number, it isn't mandatory). I never used MeWe that much because it was glacially slow (and still is—I just checked).
Ah well, I'm just glad to have my own little corner of the Intarwebs that I control.
Thursday, November 19, 2020
“Start me up! Then start me up! Now start me up! Start me up!”
I start up (for the third time this week) the Corporate Overlords' managed Microwoft Windows laptop. Upon logging in, I see a popup message saying “Your organization requires you to restart this laptop within the next four days.”
At this point, I just have to laugh at the insanity of it all.
Sunday, November 22, 2020
All I wanted was a surreal email conversation with a confused recipient
I've had a Google email account now for sixteen years. I don't use it for anything, preferring to run my own email server, thank you very much. And even after all this time, I still get email for other Sean Conners (that is, a number of other people called “Sean Conner,” not one person called “Sean Conners”).
I recently received an email for Sean Conner that consisted of nothing but pictures of a pickup truck. And another one trying to remind me that my mom is turning 90 this year. Or my Aunt Marge. It was kind of hard to make out how I was supposed to be related to Marge, given that I have no Aunt Marge, nor is my mom named Marge.
To most of these I reply in a straightforward manner, informing the sender that they have mistaken my email address for that of their Sean Conner. But then there was this one message …
From:    Jen <XXXXXXXXXXXXXX@gmail.com>
To:      <seanconner@gmail.com>
Subject: RE: Sean
Date:    Sat, 31 Oct 2020 23:56:00 -0500
Sean, are you near Tuxedopark?
It had been three weeks since that came in. And the word “Tuxedopark” just screamed costume shop, or maybe formal wear. The fact that it was sent on All Hallows' Eve just cemented the connection in my head. Anyway, for this one, I just didn't feel like a straightforward reply was enough, and I decided that a surrealistic reply was called for.
From:    Sean Conner <sean.conner@gmail.com>
To:      Jen <XXXXXXXXXXXXXX@gmail.com>
Subject: RE: Sean
Date:    Fri, 20 Nov 2020 04:53:00 -0500
Wow! That was weird! I was near Tuxedopark to rent my costume when this portal just popped into existence and swallowed me up. I can't go into details about what happened because I would be here all night, but I finally got out. Who won the election? Who's the President? Do you know?
I was expecting one of two things here: either I would start engaging in a very surreal conversation about my adventures across the 8th dimension with a very confused Jen, or I would hear nothing back. Instead I got:
From:    Jen <XXXXXXXXXXXXXX@gmail.com>
To:      <seanconner@gmail.com>
Subject: RE: Sean
Date:    Fri, 20 Nov 2020 06:39:00 -0500
Sean, I live near you!
My name is Jen, I am 31, and still very pretty
My pics are in my profile on this site:http:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Sign up there, it is free!
and find me, my nickname is Jenn_C!
A crummy commercial? Son of a XXXXX! Come on Google! I thought you were supposed to be good at catching spam!
Monday, November 23, 2020
Great, now I have to train my next coworker
Today I learned that my fellow cow-orker JC is leaving The Company at the end of next week. So, with JC leaving, my manager leaving Real Soon Now™ and with the QA engineer having already left, that leaves … um … me!
Wow.
I have a lot of training ahead of me …
My misconception of aluminum Christmas trees
Bunny walked into the Computer Room bearing a copy of The Transylvania Times. “Do you know what's open in Brevard?”
“No,” I said.
“The Aluminum Christmas Tree Exhibit!” she said, showing me the lead article in section B of the paper.
Yes, it's that time of year again, when the Aluminum Tree & Æsthetically Challenged Seasonal Ornament Museum and Research Center (aka ATOM) is open to the public.
I first came across this wonderful exhibit fourteen years ago, and at the time, I mentioned that my only exposure to aluminum Christmas trees was “A Charlie Brown Christmas.” What I didn't mention in that post was that I thought aluminum Christmas trees were tall aluminum cones—the reality was a bit different:
(Note: Bunny and I actually saw the exhibit back in December of 2012, but I did not blog about it then—perhaps I was being lazy or something)
I was mildly disappointed they were not the large cones of painted aluminum, but they're cool nonetheless.
Thursday, November 26, 2020
Gobble gobble gobble
[ASCII art of a turkey]
And for your viewing enjoyment this day of turkey excess, WKRP's promotional turkey spot.
A rabbit hole of turkey proportions
I was searching YouTube for the “WKRP in Cincinnati” turkey clip when I fell into a rabbit hole I was never expecting. The bit from “WKRP in Cincinnati” (one of the funniest bits on television) was inspired by an embellished story of a real event decades prior. But I was completely gobsmacked by this 2016 news report of an actual, real life, “live turkey drop,” in Arkansas.
…
WHAT?
My mind is blown. Comedy and satire just can't compete with real life anymore.
So I start down this particular rabbit hole, and while the Yellville, Arkansas Chamber of Commerce no longer sponsors the festival, it's still unclear whether the turkey drop has actually stopped.
…
WHAT?
The FAA apparently can't stop it:
“FAA regulations do not specifically prohibit dropping live animals from aircraft, possibly because the authors of the regulation never anticipated that an explicit prohibition would be necessary,” an FAA spokesman told HuffPost in an email. “This does not mean we endorse the practice.”
The FAA Can't Stop People From Throwing Live Turkeys Out Of Planes | HuffPost
…
WHAT?
Well, yeah, that makes a weird type of sense, but still …
… “The Federal Air Administration has deemed it legal for this act to occur at our festival as long as it is performed within the parameters that they have set forth.”
“The phantom pilot is named as such for a reason. There is no airstrip in Yellville and therefore we do not have any authority in terms of flight control,” it added. “Furthermore, Chamber board members, Turkey Trot sponsors, and Chamber members have absolutely no affiliation, jurisdiction, or control over what any individual does in his or her private plane in the air.”
…
The FAA said Monday it was aware of Saturday’s drop. The agency hasn’t intervened in past years because the birds aren’t considered projectiles.
FAA looking into Arkansas festival's turkey drop
…
WHAT?
All the articles I can find about this date from 2015 to 2018. I can't find anything that says definitively that the turkey drop has stopped for good, but it appears it may have. I don't know.
But this is not something I was expecting to find—an actual turkey drop.
Sheesh.
Saturday, November 28, 2020
Notes from a Christmas tree shopping trip
Bunny and I went to a Christmas tree lot a few blocks from Chez Boca. There was a sign outside the tent saying that masks were mandatory, yet half the people there weren't wearing masks. And of the other half that were, half had them below their noses.
Of course.
Overheard, “It shows how effective the masks are when you can smell the trees.” Yes, even wearing a mask, it still smelled good standing among the trees.
Also overheard, “These trees were delivered just this morning fresh from North Carolina. They still had snow on them.”
Thursday, Debtember 03, 2020
Putting the “pro” in “procrastination”
Wait!
It's Debtember!
How did that happen?
Oh … yeah … that COVID-19 long undifferentiated timey wimey blob thing going on.
Sigh.
And I never did get around to coding my 2020 NaNoGenMo entry.
Another sigh.
Well, here's to doing it next year.
Sunday, Debtember 06, 2020
An early Christmas surprise
Bunny walked into the Computer room. “Want to see the lights?” she asked as she was opening up a package.
“Is that a remote-controlled outlet?”
“Yes it is.”
“Cool. Let's see the lights.” We walked out to look at the lights splayed across the front of Chez Boca. “Nice.”
“I just need to plug this in,” said Bunny. She stepped around the bougainvillea bush and plugged the device into the outside outlet. It made a loud buzzing noise, sparked, popped, and then burst into flames.
“It's not supposed to do that, is it?”
“No.”
“I think you need to get your money back.”
“Most definitely.”
Saturday, Debtember 19, 2020
Details, details! It always comes down to the details
Back in July, I wrote an HTML parser using LPEG. I was a bit surprised to find the memory consumption higher than expected, but decided to let it slide for the moment. Then in October (which I did not blog about—sigh) I decided to try using a C version of PEG. It was a rather straightforward port of the code and an almost drop-in replacement for the LPEG version (only one line of code had to change to use it). And much to my delight, not only did it use less memory (about ⅛ the memory) but it was also way faster (it ran in about 1/10th the time).
It's not small though. The PEG grammar itself is 50K in size, the resulting C code is 764K in size (yes, that's nearly ¾ of a megabyte of source code), and the compiled code is 607K in size. And with all that, it still runs with less memory than the LPEG version.
And all was fine.
Until today.
I've upgraded from Lua 5.3 (5.3.6 to be precise) to Lua 5.4 (5.4.2 to be precise). Lua 5.4 was released earlier this year, and I held off for a few months to let things settle before upgrading (and potentially updating all my code). Earlier this week I did the upgrade and proceeded to check that my code compiled and ran under the new version. All of it did, except for my new HTML parser, which caused Lua 5.4 to segfault.
With some help from the mailing list, I found the issue—I basically ignored this bit from the Lua manual:
So, while using a buffer, you cannot assume that you know where the top of the stack is. You can use the stack between successive calls to buffer operations as long as that use is balanced; that is, when you call a buffer operation, the stack is at the same level it was immediately after the previous buffer operation. (The only exception to this rule is luaL_addvalue.)
Oops. The original code was:
lua_getfield(yy->L,lua_upvalueindex(UPV_ENTITY),label);
entity = lua_tolstring(yy->L,-1,&len);
luaL_addlstring(&yy->buf,entity,len);
lua_pop(yy->L,1);
Even though it violated the manual, it worked fine through Lua 5.3. To fix it:
lua_getfield(yy->L,lua_upvalueindex(UPV_ENTITY),label);
luaL_addvalue(&yy->buf);
That works.
(The code itself converts a string like “CounterClockwiseContourIntegral” to the UTF-8 character “∳” using an existing conversion table.)
What I find funny is that I participated in a very similar thread three years ago!
Anyway, the code now works, and I'm continuing on the conversion process.
LPEG vs. PEG—they both have their strengths and weaknesses
While the C PEG library is faster and uses less memory than LPEG, I still prefer using LPEG, because it's so much easier to use than the C PEG library. Yes, there's a learning curve to using LPEG, but its re module uses a similar syntax to the C PEG library, and it's easier to read and write when starting out. Another difference is that LPEG practically requires all the input to parse as a single string; the C PEG library can do that, but it can also read data from a file (you can stream data to LPEG, but it involves more work—check out the difference between a JSON parser that takes the entire input as a string and a JSON parser that can stream data; the latter is nearly twice the size of the former).
The code isn't that much different. Here's a simple LPEG parser that will parse text like “34.12.1.444” (a silly but simple example):
local re = require "re"

return re.compile(
  [[
    tumbler <- number ('.' number)*
    number  <- [0-9]+ -> pnum
  ]],
  {
    pnum = function(c) print(">>> " .. c) end,
  }
)
Not bad. And here's the C PEG version:
tumbler <- number ('.' number)*
number  <- < [0-9]+ > { printf(">>> %*s\n",yyleng,yytext); }
Again, not terrible and similar to the LPEG version.
The major difference between the two, however, is in their use. In the LPEG version, tumbler can be used in other LPEG expressions. If I needed to parse something like “34.12.1.444:text/plain; charset=utf-8”, I can do that:
local re = require "re"

return re.compile(
  [[
    example <- %tumbler SP* ':' SP* %mimetype
    SP      <- ' ' / '\t'
  ]],
  {
    tumbler  = require "tumbler",
    mimetype = require "org.conman.parsers.mimetype",
  }
)
The same cannot be said for the C PEG version—it's just not written to support such use. If I need to parse text like “34.12.1.444” and mimetypes, then I have to modify the parser to support it all; there's no easy way to combine different parsers.
That said, I would still use the C PEG library, but only when memory or performance is an issue. It certainly won't be because of convenience.
Wednesday, Debtember 23, 2020
I solved the issue, but I'm not sure what the issue was
It's a bug whose solution I can't explain.
So I have a Lua module that enables event-driven programming. I also have a few modules that drive TCP and TLS connections. To make it even easier, I have a module that presents a Lua file-like interface to network connections—functions like obj:read() and obj:write(). The previous version of this interface module, org.conman.net.ios, used LPEG to handle line-based I/O requests, as well as an extension to read headers from an Internet message. Given the overhead of LPEG, I thought I might try using the built-in pattern matching of Lua. I reworked the code, and a benchmark did show a decent and measurable improvement in speed and memory usage.
But the new code failed when transferring a sizable amount of data (about 6.7M) over TLS. It took about two days to track down the problem, and I still don't have a root cause. The code works, but I don't know why it works. And that bugs me.
To further complicate matters, the code did work when I downloaded the data from a server I wrote (using the same Lua code as the client), but it would fail when I tried downloading the data from another server (different TLS implementation, different language, etc.).
I was eventually able to isolate the issue down to one function in org.conman.net.ios. Here was the original code:
local function write(ios,...)
  if ios._eof then
    return false,"stream closed",-2
  end
  
  local output = ""
  
  for i = 1 , select('#',...) do
    local data = select(i,...)
    if type(data) ~= 'string' and type(data) ~= 'number' then
      error("string or number expected, got " .. type(data))
    end
    output = output .. data
  end
  
  return ios:_drain(output)
end
It works, but I didn't like that it accumulated all the output before writing any of it. So when I rewrote org.conman.net.ios, I modified the function thusly:
local function write(ios,...)
  if ios._eof then
    return false,"stream closed",-2
  end
  
  for i = 1 , select('#',...) do
    local data = select(i,...)
    if type(data) ~= 'string' and type(data) ~= 'number' then
      error(string.format("bad argument #%d to 'write' (string expected, got %s)",i,type(data)))
    end
    
    data = tostring(data)
    local okay,err,ev = ios:_drain(data)
    if not okay then
      syslog('error',"ios:_drain() = %s",err)
      return okay,err,ev
    end
    
    ios._wbytes = ios._wbytes + #data
  end
  
  return true
end
Instead of accumulating the data into one large buffer, it outputs it piecemeal. To further confound things, this doesn't appear to have anything to do with reading, which is what I was having issues with.
The client only did one call to this function:
local okay,err = ios:write(location,"\r\n")
The request went out, I would start receiving data, but for some odd reason, the connection would just drop about 200K short of the full file (it was never a consistent amount either).
While the reading side was a different implementation, the writing side didn't have to be different; I just felt the second version was a bit better, and it shouldn't make a difference, right?
[There's that word! –Editor]
[What word? –Sean]
[“Should.” –Editor]
But regardless of my feelings about how that certainly can't be at fault, I put the previous version of write() back and lo! It worked!
…
I'm flummoxed!
I don't understand why the new version of write() would cause the TLS connection to eventually fail, but it did, for whatever reason.
Weird.
Monday, Debtember 28, 2020
Yet more adventures in profiling
While I'm now satisfied with the memory usage, I've started watching the CPU utilization, and noticed with some dismay that it's quite high (even with the new HTML parser being faster overall). I started an instance a day ago (the 27th), and it has already accumulated 27 minutes, 35 seconds of CPU time. As a contrast, the web server has only accumulated 37 seconds of CPU time since the 25th.
That's a large difference.
The server in question is written in Lua. I have another server written in Lua, and it has only accumulated 1 minute, 26 seconds since the 25th.
There are two differences that might account for the discrepancy:
- one gets more requests than the other;
- one uses TLS, the other doesn't.
But to be sure, I needed a controlled experiment. Since both servers basically do the same thing (mainly, serve up this blog via gopher and Gemini, and convert the HTML to text formats, thus the need for an HTML parser), it was easy enough to generate a list of comparable requests for both and profile the execution.
Unfortunately, profiling across shared objects doesn't necessarily work all that well (at least on Linux). I recompiled both Lua and all the Lua modules I use (at least the ones written in C), but the profile only showed the time spent in the main Lua VM and nowhere else.
I then spent the time constructing a self-contained executable (containing Lua, plus all the modules comprising the application) of port70 (the gopher server) and another one for GLV-1.12556 (the Gemini server). Pretty easy to do, if a bit tedious in tracking down all the modules to include in the executables. I didn't bother with any optimizations for these runs, as I'm trying to get a feel for where the time is spent.
I profiled each executable, making the same 1,171 requests (well, “same” meaning “requesting the same content”) to each program.
First, port70, the gopher server, straight TCP connection. It accumulated 14 seconds of CPU time with the profile run, and the results:
% time | cumulative seconds | self seconds | calls | self ms/call | total ms/call | name |
---|---|---|---|---|---|---|
18.06 | 0.56 | 0.56 | 33881 | 0.00 | 0.00 | luaV_execute |
17.74 | 1.11 | 0.55 | 986744 | 0.00 | 0.00 | match |
4.03 | 1.24 | 0.13 | 28481743 | 0.00 | 0.00 | lua_gettop |
3.55 | 1.35 | 0.11 | 22087107 | 0.00 | 0.00 | index2value |
2.58 | 1.43 | 0.08 | 11321831 | 0.00 | 0.00 | yymatchChar |
2.26 | 1.50 | 0.07 | 6478653 | 0.00 | 0.00 | touserdata |
2.26 | 1.57 | 0.07 | 2063343 | 0.00 | 0.00 | pushcapture |
1.94 | 1.63 | 0.06 | 2074113 | 0.00 | 0.00 | lua_getmetatable |
1.94 | 1.69 | 0.06 | 2068487 | 0.00 | 0.00 | auxgetstr |
1.61 | 1.74 | 0.05 | 2222138 | 0.00 | 0.00 | luaS_new |
1.29 | 1.78 | 0.04 | 5469355 | 0.00 | 0.00 | luaV_equalobj |
1.29 | 1.82 | 0.04 | 5239401 | 0.00 | 0.00 | luaH_getshortstr |
1.29 | 1.86 | 0.04 | 2042852 | 0.00 | 0.00 | luaL_checkudata |
1.29 | 1.90 | 0.04 | 1207086 | 0.00 | 0.00 | lua_tolstring |
1.29 | 1.94 | 0.04 | 1070855 | 0.00 | 0.00 | luaT_gettmbyobj |
1.29 | 1.98 | 0.04 | 175585 | 0.00 | 0.00 | internshrstr |
Nothing terribly surprising there. The function luaV_execute() is not surprising, as that's the main driver for the Lua VM. match() is from LPEG, which is used for all parsing aside from HTML. The function yymatchChar() is from the HTML parser I wrote, so again, no terrible surprise there.
Now, GLV-1.12556, the Gemini server, using TLS. This accumulated 1 minute, 24 seconds of CPU time with the profile run. The results:
% time | cumulative seconds | self seconds | calls | self ms/call | total ms/call | name |
---|---|---|---|---|---|---|
8.06 | 0.10 | 0.10 | 30070 | 0.00 | 0.01 | luaV_execute |
7.26 | 0.19 | 0.09 | 1494750 | 0.00 | 0.00 | luaH_getshortstr |
5.65 | 0.26 | 0.07 | 11943 | 0.01 | 0.01 | match |
4.03 | 0.31 | 0.05 | 535091 | 0.00 | 0.00 | luaD_precall |
4.03 | 0.36 | 0.05 | 502074 | 0.00 | 0.00 | moveresults |
3.23 | 0.40 | 0.04 | 129596 | 0.00 | 0.00 | luaS_hash |
2.42 | 0.43 | 0.03 | 11321831 | 0.00 | 0.00 | yymatchChar |
2.42 | 0.46 | 0.03 | 4218262 | 0.00 | 0.00 | yyText |
2.42 | 0.49 | 0.03 | 3293376 | 0.00 | 0.00 | yymatchString |
2.42 | 0.52 | 0.03 | 1508070 | 0.00 | 0.00 | yyrefill |
2.42 | 0.55 | 0.03 | 377362 | 0.00 | 0.00 | luaH_newkey |
1.61 | 0.57 | 0.02 | 2898350 | 0.00 | 0.00 | index2value |
1.61 | 0.59 | 0.02 | 2531258 | 0.00 | 0.00 | lua_gettop |
1.61 | 0.61 | 0.02 | 1081241 | 0.00 | 0.00 | yy_CHAR |
1.61 | 0.63 | 0.02 | 230982 | 0.00 | 0.00 | luaV_equalobj |
1.61 | 0.65 | 0.02 | 174295 | 0.00 | 0.00 | luaV_finishget |
1.61 | 0.67 | 0.02 | 136553 | 0.00 | 0.00 | luaT_gettmbyobj |
1.61 | 0.69 | 0.02 | 129534 | 0.00 | 0.00 | internshrstr |
1.61 | 0.71 | 0.02 | 10363 | 0.00 | 0.00 | traversestrongtable |
1.61 | 0.73 | 0.02 | 4684 | 0.00 | 0.00 | lua_resume |
It's satisfying seeing the same number of calls to yymatchChar(), but the times are smaller overall and there's less of a spike with luaV_execute(), leading me to believe the time is actually being spent in TLS. That isn't showing up because I haven't compiled the TLS library to be profiled, and it's dynamically linked in anyway. I'm fairly confident that TLS is sucking up the CPU time and it's not necessarily my code. I'm apprehensive about attempting to recompile the TLS library with profiling in mind, but it is the next logical step if I want to know for sure.
Sigh.
Tuesday, Debtember 29, 2020
The OpenSSL/LibreSSL shuffle
Two and a half years ago, someone tried using my UUID library with a modern version of OpenSSL. At the time I rejected the patch because I couldn't use it (I was, and still am, using an older version of OpenSSL). Then today, I was notified that someone else tried to do the same, and I figured it was time to actually address the issue.
It used to be that you could do:
#include <openssl/evp.h>

unsigned char hash[EVP_MAX_MD_SIZE];
EVP_MD_CTX    ctx;

EVP_DigestInit(&ctx,EVP_md5());
EVP_DigestUpdate(&ctx,data,len);
EVP_DigestFinal(&ctx,hash,&hashsize);
The context variable declaration changed and you can no longer do that. Instead, you now have to:
#include <openssl/evp.h>

unsigned char  hash[EVP_MAX_MD_SIZE];
EVP_MD_CTX    *ctx;

ctx = EVP_MD_CTX_new();
if (ctx != NULL)
{
  EVP_DigestInit(ctx,EVP_md5());
  EVP_DigestUpdate(ctx,data,len);
  EVP_DigestFinal(ctx,hash,&hashsize);
  EVP_MD_CTX_free(ctx);
}
It's an annoying change, and yet I can understand why it was made—future updates to hash functions could require more space than what you statically allocated, which could lead to a buffer overrun. It also changed what used to be an error-free path (well, buffer overruns aside) into a path that can fail. The reason I put off making the change was trying to find the version of OpenSSL where the change was made. After downloading over a dozen versions of OpenSSL and checking each one, I found the change was made in version 1.1.0.
This also prompted me to spend the time to update my TLS Lua module to the latest version, which likewise involved downloading over a dozen versions of LibreSSL and checking each one. There was only one minor change involved, and that was adding a new call to the module.
I have yet to profile LibreSSL though.
Wednesday, Debtember 30, 2020
A sane and easy to use TLS library for OpenSSL! Will wonders never cease!
I saw the following on the Gemini mailing list:
Perhaps take a look at "[gentoo-dev] [RFC] Discontinuing LibreSSL support?". Fascinating dynamic.
I only use LibreSSL because it comes with libtls, an easier-to-use API than the base LibreSSL (which itself was forked years ago from OpenSSL for various reasons). It seems that over the years the APIs of LibreSSL and OpenSSL have drifted apart, and now the Linux distribution Gentoo is thinking of dropping support for LibreSSL. It doesn't affect me, since I'm not using Gentoo (and the last time I used Gentoo was a rather stressful time). I just install it from source (and it's a pain to do too, because I don't want to destroy my existing OpenSSL installation). I was, however, happy to see a port of libtls to OpenSSL, as that would make it easier to keep using libtls.
Thursday, Debtember 31, 2020
Yet another wrinkle in the TLS library woes
I downloaded and compiled libretls, a libtls for OpenSSL. I also recompiled my Lua wrapper for libtls and reran the profiling tests from the other day.
I can now say for certain that there is no need to profile LibreSSL, because the results speak for themselves—an accumulation of 24 CPU seconds using OpenSSL vs. 1 minute, 24 seconds using LibreSSL. Looking at the results, it makes some sense. The LibreSSL library I'm using is the “portable version” for non-OpenBSD systems, and there's probably been little work done on making it fast.
So now I have to rethink my use of LIBRESSL_VERSION_NUMBER when compiling the Lua module. I will no longer be using the LibreSSL version, and I can't exactly rely upon the TLS_API value either … sigh.