Tuesday, January 01, 2019
The upside is that this is not an election year—the downside is that we only have 364 days until it is
I've been looking over the past decade's worth of New Year's Day entries so I don't inadvertently repeat myself, and boy, do I bitch about the fireworks. Thankfully this holiday season has been a bit low key, and that includes our neighbor's propensity towards blowing things up. Yes, there were fireworks tonight, but not nearly at the war-sounding levels of previous years.
I'll take solace when I can when it comes to fireworks.
Hopefully, this means that this year will be low key. All I can say is thank God it isn't an election year! Only 364 more days until that madness starts.
Anyway …
HAPPY NEW YEAR!
Yes, we have no copyright
I just saw a commercial using the Prince song “Let's Go Crazy”. It was something I wasn't expecting because Prince had refused all requests to use his work for commercials (as well as turning down all requests from Weird Al Yankovic to parody his songs). But given that Prince died back in 2016, it seems his estate has waited long enough and is now enjoying the licensing fees.
Then it hit me—it won't be until 2086 that the works of Prince will fall into the public domain. Nearly a hundred years since some of his most iconic hits.
In other copyright-public-domain news, today is the first day in 21 years that works have fallen into the public domain. It's weird to think that up until yesterday, “Yes, We Have No Bananas” was still in copyright.
Monday, January 07, 2019
Ignorance is Bliss
So I'm catching up on The Transylvania Times (I'm a bit behind, and they're piling up) when I come across this rather disturbing headline: “Wednesday Morning Earthquake Felt In Transylvania.”
Wait … what?
Yes, Virginia, an earthquake in the southeast United States.
I know earthquakes happen along the Pacific Coast (like California, the land of Shake and Bake). I also know they happen in Missouri (although rare, when it happens, it happens). But in the East? The East is supposed to be stable. Rock solid (ahem). Not shifting underneath our very feet. I am disquieted by this news.
As I fall deeper into this whole “East Coast Earthquake Zone,” it appears to be all too true. There's a fault line that runs from Alabama northeast to Newfoundland, Canada, and it runs about six miles east of Brevard.
I … I don't know how I feel about this. I admit, I have an irrational fear of earthquakes. I don't know why, I just do. Hurricanes? Please … don't bother me unless it's a category 4. An earthquake? Even a relatively minor 2 on the Richter scale (and this one was a 4.4)? Aieeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee! Run away!
And the kicker in all this? This fault line has a name. Its name is the Brevard Fault! No, really.
Now I really have no idea how I feel about this.
Tuesday, January 08, 2019
A reason to celebrate today
The book Atlas Obscura: An Explorer's Guide to the World's Hidden Wonders appeared on the doorstep, courtesy of Bunny. I do have to wonder why she felt it necessary to give me that book to celebrate the only time in the history of the United States the country was not in debt, back in 1835 thanks to President Andrew “I regret not killing that bastard Vice President Calhoun” Jackson. It could be for other reasons, but I can't be sure.
In any case, it's a very cool book. Where else could one learn about the Skunk Ape Research Headquarters in Ochopee, Florida? Or the Lost Subway of Cincinnati, Ohio? Lots of places to visit—oh!
Saturday, January 12, 2019
It's no longer possible to write a web browser from scratch, but it is possible to write a gopher browser from scratch
As I mentioned two months ago, I've been browsing gopherspace. At the time, I was using an extension to Firefox to browse gopherspace, but a recent upgrade to Firefox left it non-working. I could use Lynx but it includes unnecessary pauses that make it feel slower than it really should be. I also don't care for how it lays out the page.
So I've been writing my own CLI gopher client.
And it's not like the protocol is all that difficult to handle and everything is plain text.
How hard could it be to download a bunch of text files and display them?
The protocol? Trivial.
Displaying the pages? Er … not so trivial.
The first major problem—dealing with UTF-8.
Problem—the terminal window can only display so many characters per line (default of 80, usually). There are two ways of dealing with those—one is to wrap the text onto the following line(s), and the other is to “pan-and-scan”—let the text disappear off the screen and pan left-and-right to show the longer lines. Each method requires chopping text to fit, though.

With ASCII, this is trivial—if the width of the terminal is N columns wide, just chop the line at every N bytes. This works because each character in ASCII is one byte in size. But characters in UTF-8 take a variable number of bytes, so chopping at arbitrary byte boundaries is less than optimum.
The solution may look simple:
-- ************************************************************************
-- usage:  writeslice(left,right,s)
-- descr:  Write a portion of a UTF-8 encoded string
-- input:  left (integer) starting column (left edge) of display screen
--         right (integer) ending column (right edge) of display screen
--         s (string) string to display
-- ************************************************************************

local function writeslice(left,right,s)
  local l = utf8.offset(s,left)      or #s + 1
  local r = utf8.offset(s,right + 1) or #s + 1
  tty.write(s:sub(l,r - 1))
end
but simple is never easy to achieve. It took a rather surprising amount of time to come up with that solution.
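Here's a rough sketch of how that slicing gets used for the wrapping case. This isn't the actual wrapping code from my client; it's just the same utf8.offset() trick applied in a loop, and it assumes Lua 5.3 or later for the utf8 library. For the pan-and-scan case, something like writeslice() above just gets called with the current left and right columns.

-- A sketch (not my client's code) of wrapping a UTF-8 string into lines of
-- at most `width' characters, slicing only on character boundaries.

local function wrap(s,width)
  local lines = {}
  local len   = utf8.len(s) or #s   -- characters, not bytes (fall back to bytes on bad UTF-8)
  local first = 1
  while first <= len do
    local last = math.min(first + width - 1,len)
    local l    = utf8.offset(s,first)
    local r    = utf8.offset(s,last + 1) or #s + 1
    table.insert(lines,s:sub(l,r - 1))
    first      = last + 1
  end
  return lines
end

for _,line in ipairs(wrap("héllo wörld, this is a lông line of UTF-8 text",16)) do
  print(line)
end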
The other major problem was dealing with the gopher index files. Yes, they are easy to parse (once you wrap your head around some of the crap that presents itself as a “gopher index” file) but displaying them was an even harder problem.
Upon loading a gopher index, I wanted the first link to be highlighted, and to use the Up and Down keys to move between links and then the Enter key to select a page to download and view. Okay, but not all lines in a gopher index are actual links. In fact, there are gopher index files that have no actual links (that surprised me!). And how do I deal with lines longer than can be displayed? Wrap the text? Let the text run off the screen?
At first, I wanted to wrap long lines, but then trying to manage highlighting a link that spans several lines when it might not all be visible on the screen (the following lines might be off the bottom, for instance) just proved too troublesome to deal with. I finally just decided to let long lines of text run off the end of the screen just to make it easier to highlight the “current selection.” Also, most gopher index pages I've come across in the wild generally contain short lines, so it's not that much of a real issue (and I can “pan-and-scan” such a page anyway).
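For the curious, here's roughly what picking out the selectable lines looks like. This is a sketch rather than my client's actual code; it assumes the index has already been downloaded into the string raw, and it treats the informational type “i” and the error type “3” as non-selectable, which is how an index file can end up with no links at all.

-- A sketch of splitting a gopher index into display lines and selectable
-- links (for the Up/Down/Enter navigation described above).

local function parse_index(raw)
  local lines = {}     -- every line, for display
  local links = {}     -- only the selectable ones
  for line in raw:gmatch("(.-)\r\n") do
    if line == "." then break end       -- end-of-index marker
    local gtype = line:sub(1,1)
    local display,selector,host,port =
          line:sub(2):match("([^\t]*)\t([^\t]*)\t([^\t]*)\t([^\t]*)")
    table.insert(lines,display or line)
    if display and gtype ~= 'i' and gtype ~= '3' then
      table.insert(links,{
        line     = #lines,              -- which display line to highlight
        type     = gtype,
        selector = selector,
        host     = host,
        port     = tonumber(port) or 70,
      })
    end
  end
  return lines,links
end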
For non-text related files, I farm that out to other programs via the mailcap facility found on Unix systems. That was an interesting challenge I will probably address at some point.
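The basic idea is simple enough to sketch, though. This is not the code my client uses; the file locations and the lack of wildcard (“image/*”) and flag handling are simplifying assumptions.

-- A rough mailcap lookup: given a MIME type and a filename, return a shell
-- command that should display the file.

local function mailcap_viewer(mimetype,filename)
  local files = { (os.getenv("HOME") or "") .. "/.mailcap" , "/etc/mailcap" }
  for _,path in ipairs(files) do
    local f = io.open(path,"r")
    if f then
      for line in f:lines() do
        local mtype,command = line:match("^([^;#]+);%s*(.+)")
        if mtype and mtype:gsub("%s+$","") == mimetype then
          f:close()
          -- substitute the %s placeholder with the (quoted) filename
          return (command:gsub("%%s",("%q"):format(filename)))
        end
      end
      f:close()
    end
  end
end

-- usage: os.execute(mailcap_viewer("image/gif","logo.gif"))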
There are still a few issues I need to address, but what I do have works. And even though it's written in Lua it's fast. More important, I have features that make sense for me and I don't have to slog through some other codebase trying to add an esoteric feature.
And frankly, I find it fun.
The technical differences between HTTP and gopher
… The point is to attempt as full a sketch as possible of the actual differences and similarities between the HTTP and GOPHER protocols.
…
From what I gather, these are the similaries:
- Both gopher and http start with a TCP connection on an IANA registerd port number.
- Both servers wait for text (the request) terminating in a CRLF
- Both servers expect the request (if there is one) to be formatted in a particular way.
- Both servers return plain text in response, and close the TCP connection.
And these are the differences that I understand:
- Gopher will accept and respond to a blank request, with a default set of information, http will not.
- Gophper [sic] sends a single "." on a line by itself to tell the client it is done, http does nothing similar prior to closing the connection.
- Http has things like frames, multiplexing, compression, and security; gopher does not.
- Http has rich, well-developed semantics, gopher has basic, minimalist semantics
- Http requests are more resource intensive than gopher requests.
- Http is highly commercialized, gopher is barely commercialized.
- Http is heavily used and highly targeted by malicious users, gopher is neither.
- Http is largely public, gopher is largely private (de facto privacy through obscurity.)
- Http is used by everyone, their children, their pets, their appliances, their phones, and their wristwatches; gopher is used primarily by technical folk and other patient people.
- Http all but guarantees a loss of privacy; gopher doesn't
Yeah, I know, it's not much, but that's all that is coming to mind presently. What are your thoughts?
Technology/Gopher

(I'm quoting for the benefit of those that cannot view gopher based sites).
I don't want to say that tfurrows is wrong, but there is quite a bit that needs some clarification, and as someone who has worked with HTTP for over twenty years, and has recently dived back into gopher (I used it for several years in the early 90s—in fact, I recall Time Magazine having a gopher server back then), I think I can answer this.
First, the protocol. The gopher protocol is simple—you make a TCP connection to the given port (defaults to 70). Upon connection, the client then sends the request, which can be one of three formats:
CRLF
The simplest request—just a carriage return and line feed character. This will return the main page for the gopher server.
selector-to-viewCRLF
This will return the requested data from the gopher server. The specification calls this a “selector.” And yes, it can contain any non-control character, including space. It's terminated by a carriage return and line feed characters.
selector-for-searchHTsearch terms to useCRLF
The last one—this sends a search query to a gopher server. It's the “selector” that initiates a search, followed by a horizontal tab character, then the text making up the query, followed by a carriage return and line feed.
In all three cases, the gopher server will immediately start serving up the data. Text files and gopher indexes will usually end with a period on its own line; other file transfers will end with the server closing the connection.
That's pretty much the gopher protocol.
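In fact, the client side of the protocol is small enough to sketch in a few lines of Lua using LuaSocket. This isn't my client's actual code; it only handles the text and index cases that end with the lone “.”, and it ignores the corner case of text lines that themselves begin with a period.

-- A sketch of a gopher request: connect, send the selector, read lines
-- until the lone "." (gopher.conman.org is just a handy example server).

local socket = require "socket"

local function gopher_fetch(host,port,selector)
  local conn,err = socket.connect(host,port or 70)
  if not conn then return nil,err end
  conn:send((selector or "") .. "\r\n")   -- the entire request
  local lines = {}
  while true do
    local line = conn:receive("*l")
    if not line or line == "." then break end
    table.insert(lines,line)
  end
  conn:close()
  return table.concat(lines,"\n")
end

print(gopher_fetch("gopher.conman.org",70,""))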
The HTTP protocol that works the closest to gopher is the so-called HTTP/0.9 version, and it was pretty much the same. So here are the same three requests from above as HTTP requests:
GET /CRLF
The minimum request for HTTP. As you can see, it's only an extra four characters, but the initial text, GET in this case, was useful later when the types of requests increased (but I'm getting ahead of myself here). This will return the main page for the HTTP server.
GET /resource_to_viewCRLF
The usual request, but instead of a “selector” you request a “resource” (different name, same concept) but it cannot contain bare spaces—they have to be encoded as %20 (and a bare “%” sign is encoded as %25). Like gopher, the contents are immediately sent, but there is no special “end-of-file” marker—the server will just close the connection.
GET /resource_for_search?search%20terms%20to%20useCRLF
And a search query, where you can see the spaces being replaced with %20. Also note that the search query is separated from the “resource” with a “?”.
So not much difference between gopher and HTTP/0.9. In fact, during the early to mid-90s, you could get gopher servers that responded to HTTP/0.9 style requests as the difference between the two was easy to distinguish.
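To make that concrete, here's the HTTP/0.9 version of the gopher fetch sketch from above; the request gains the four extra characters, and the end of the data is signalled by the server closing the connection rather than by a lone “.”. (Few servers still answer bare HTTP/0.9 requests, so treat this as a historical illustration; the host is just a placeholder.)

-- A sketch of an HTTP/0.9-style fetch, for comparison with the gopher sketch.

local socket = require "socket"

local function http09_fetch(host,port,resource)
  local conn,err = socket.connect(host,port or 80)
  if not conn then return nil,err end
  conn:send("GET " .. (resource or "/") .. "\r\n")   -- the entire request
  local data = conn:receive("*a")                    -- read until close
  conn:close()
  return data
end

print(http09_fetch("www.example.net",80,"/"))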
The next version of HTTP, HTTP/1.0, expanded the protocol. Now, the client was expected to send a bit more information in the form of headers after the request line. And in order to help distinguish between HTTP/0.9 and HTTP/1.0, the request line was slightly expanded. So now the request would look like:
GET /resource_to_view HTTP/1.0CRLF
User-Agent: Foobar/1.0 (could be a web browser, could be a web crawler)CRLF
Accept: text/*, image/*CRLF
Accept-Language: en-US;q=1.0, en;q=0.7, de;q=0.2, se;q=0.1CRLF
Referer: http://www.example.net/search?for%20blahCRLF
CRLF
(Yes, “Referer” is the proper name of that header, and yes, it's misspelled)
I won't go too much into the protocol here, but note that the client can now send a bunch more information about the request. The Accept header now allows for so-called “content negotiation” where the client informs the server about what type of data it can deal with; the Accept-Language header tells the server the preferred languages (the example above says I can deal with German, but only if English isn't available, but if English is available, American is preferred). There are other headers; check the specification for details.
The server now returns more information as well:
HTTP/1.0 200 OkayCRLF
Date: Sun, 12 Jan 2019 13:39:07 GMTCRLF
Server: Barfoo/1.0 (on some operating system, on some computer, somewhere)CRLF
Last-Modified: Tue, 05 Sep 2017 02:59:41 GMTCRLF
Content-Type: text/html; charset=UTF-8CRLF
Content-Length: 3351CRLF
CRLF
content for another 3,351 bytes
The first line is the status, and it informs the client if the “resource” exists (in this case, a 200 indicates that it does), or if it can't be found (the dreaded 404), or if it has been explicitly removed (410), or if it's been censored due to laws (451), or even moved elsewhere.
Also added were a few more commands in addition to GET, like POST (which is used to send data from the client to the server) and HEAD (which is like GET but doesn't return any content—this can be used to see if a resource has changed).
HTTP/1.1 is just more of the same, only now you can make multiple requests per connection, a few more commands were added, and the ability to request portions of a file (say, to resume a download that was cut off for some reason).
HTTP/2.0 changes the protocol from text-based to binary (and attempts to do TCP-over-TCP, but that's a rant for another time) but again, it's not much different, conceptually, than HTTP/1.1.
Security, as in https: type of security, isn't inherently part of HTTP. TLS is basically inserted between the TCP and HTTP layers. So the same could be done for gopher—just insert TLS between TCP and gopher and there you go—gophers:. Of course, that now means dealing with CAs and certificates and revocation lists and all that crap, but it's largely orthogonal to the protocols themselves.
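A quick sketch of what that would look like with LuaSocket and LuaSec. Nothing here is standardized: the port number is a made-up placeholder since no gophers: port has been registered, and certificate verification is simply switched off for brevity.

-- A sketch of "TLS between TCP and gopher": the gopher request itself is
-- untouched, it just travels over a TLS-wrapped socket.

local socket = require "socket"
local ssl    = require "ssl"

local function gophers_fetch(host,port,selector)
  local tcp,err = socket.connect(host,port or 7070)   -- 7070 is a placeholder
  if not tcp then return nil,err end

  local conn,werr = ssl.wrap(tcp,{ mode = "client", protocol = "tlsv1_2", verify = "none" })
  if not conn then return nil,werr end
  local ok,herr = conn:dohandshake()
  if not ok then return nil,herr end

  conn:send((selector or "") .. "\r\n")   -- same request as plain gopher
  local data = conn:receive("*a")         -- read until the server closes
  conn:close()
  return data
end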
HTTP/1.0 allows compression but that falls out of the content negotiation. The bit about frames and multiplexing is more an HTTP/2.0 issue which is a lot of crap that the server has to handle instead of the operating system (must not rant …).
Are HTTP requests more resource intensive? They can be, but they don't have to be. But that leads right into the commercialization of HTTP. Or rather, the web. HTTP is the conduit. And conduits can carry both water and waste. HTTP became commercialized because it became popular. Why did HTTP become popular while gopher withered? Personally, I think it has to do with HTML. Once you could inline images inside an HTML document, it was all over for gopher. The ability to include cat pictures killed gopher.
But in an alternative universe, where HTML had no image support, I think you would have seen gopher expand much like HTTP has. Work was started in 1993 to expand the gopher protocol (alternative link) where the protocol gets a bit more complex and HTTP-like. As mentioned, a secure gophers: is “easy” to add in that it doesn't change the core protocol (update—it's not as easy as I thought). And as such, I could see it getting more commercialized. Advertising can be inserted
TYPEWRITERS
For SALE, HIRE, or EXCHANGE,
at HALF the USUAL PRICES.
MS.
Typewritten from
10d. per 1,000 words. 100 Circulars for 4s
TAYLOR'S,
74, Chancery Lane, London.
(Est. 1884.)
Telegrams: "Glossator," London.
Telephone No. 690,
Holborn.
even in a text file. Yes, it might look a bit strange, but it can be done. The only reason it hasn't is that gopher lost out to HTTP.
So those are the differences between HTTP and gopher. HTTP is more flexible but more complex to implement. Had history played out differently, perhaps gopher would have become more flexible and complex.
Who knows?
Monday, January 21, 2019
Oblivious spammers looking to game Google for “organic” links
Two weeks ago I received an email from A informing me of a broken link on a 14-year-old post and suggesting I replace said broken link with a link to a general-purpose site that has no relation to the post or to the specific information I originally linked to with the now broken link. It was obvious to me that A just searched for links to the defunct site and spammed anyone that had linked to said site in order to divert some Google Page Rank to some obscure site they've been paid to hawk. Then a week later, A emailed me again to remind me of the email of the previous week. What sent me over the edge on this one was the following at the bottom of the second message: “P.S. If you don't want to hear from me anymore you can unsubscribe here.”
I'm sorry, but when did I “subscribe” to your emails? I replied in a rather harsh tone, but I suspect A just sent my reply to the bit bucket.
Then two days after that, I received an email from C informing me of a broken link on the same 14-year-old post. C had a different email address from A, and appeared to work for a different company than A, yet offered a different link than the one A had offered. And the link showed that C had no clue how the original broken link was used in the context of the page and was just hawking some obscure site to obtain some Google Page Rank. So I ignored it.
Today, I received an email from C, reminding me of the email sent previously.
So I replied with a very harsh message informing C that not only was I aware of his previous email, but I was also aware of the previous two emails from A and the links A was spamming. I ended my reply to C with “I've decided to remove the damn link to XXXXXXXXXXX entirely! It's obvious both of you never read the page it was on and are just looking to game Google with ‘organic’ links.”
C surprised me with a reply a few hours later, apologizing for sending the emails.
Wow! Complaining actually worked. At least, I hope it worked and I was removed from at least one spam list. A guy can hope.
Tuesday, January 22, 2019
The Process über alles
It took a month and a half but my “self-review from hell” was rejected (and eight hours of my time wasted). This did not surprise me. But The Process is important because it's The Process, and thus, I will find myself spending another eight hours appeasing The Corporate Overlords Of The Corporation.
“One cannot simply walk into Mordor”
A manager went to the master programmer and showed him the requirements document for a new application. The manager asked the master: “How long will it take to design this system if I assign five programmers to it?”
“It will take one year,” said the master promptly.
“But we need this system immediately or even sooner! How long will it take if I assign ten programmers to it?”
The master programmer frowned. “In that case, it will take two years.”
“And if I assign a hundred programmers to it?”
The master programmer shrugged. “Then the design will never be completed,” he said.
They always shoot the messengers, don't they?
Thus spake the master programmer:
“Let the programmers be many and the managers few—then all will be productive.”
But incentivize finding bugs, and programmers will be buying minivans
A group of programmers were presenting a report to the Emperor. “What was the greatest achievement of the year?” the Emperor asked.
The programmers spoke among themselves and then replied, “We fixed 50% more bugs this year than we fixed last year.”
The Emperor looked on them in confusion. It was clear that he did not know what a “bug” was. After conferring in low undertones with his chief minister, he turned to the programmers, his face red with anger. “You are guilty of poor quality control. Next year there will be no ‘bugs’!” he demanded.
And sure enough, when the programmers presented their report to the Emperor the next year, there was no mention of bugs.
Wednesday, January 23, 2019
Gold Handcuffs
It was not a fun day at The Ft. Lauderdale Office Of The Corporation. There was much I wanted to say and do in response to The Process, but I was dissuaded from all of them by several persons, all of whom informed me that doing so was not in my best self-interest. It's all the more frustrating because they're right!
You have no idea how much I censored myself when writing this post. I've already written and deleted over a dozen revisions of this post, and I hate that I've had to do that. I may have the same global reach as a corporation, but I don't have the same amount of money to fight.
Thursday, January 24, 2019
The various principles of management
Today was much nicer than yesterday. I did actual work, you know, the work that I was hired to perform—to generate software that we charge our customers to use. I also reached the que sera sera state with my “self-review” and have decided to let it go (I can only hope that the Corporate Overlords of the Corporation will finally accept it and not kick it back down for another wasted day of make-busy work).
But The Process has me thinking. There's the Peter Principle, which states that people are promoted to their level of incompetence, and the Dilbert Principle, which states that incompetent people are promoted to management to limit the damage they can do. And then there's the Gervais Principle, which is a lot harder to summarize but at first glance (of over 30,000 words) appears to explain management machinations, but I suspect I'm going to have to read the thing several times before I understand the actual principle, especially given the terms used to describe three groups of people—sociopaths, clueless, and losers—are loaded with negative connotations (after an initial reading, I would personally use the terms realists, idealists and cynics).
In the meantime, as far as I'm concerned, The Process is over, and I shall not talk about it again.
Monday, February 04, 2019
Portrait of a picture of a migrant mother
So I started watching “Masterpiece: The Making of Migrant Mother” and right off, I'm annoyed that the creator, Evan Puschak, is using a vertical orientation for the video, as if he filmed it with a smart phone. But after a few minutes I stopped noticing the odd aspect ratio as I was engrossed in the topic of the video. By the end, it was clear that the vertical aspect ratio was a design choice, to reflect the subject matter, an iconic portrait from the Great Depression.
He filmed a video about Dorothea Lange's portrait “Migrant Mother” in portrait mode! Yes, it's jarring to see video in portrait mode instead of the normal landscape mode, but ultimately, I think it works.
It was also interesting to note just how much manipulation went into that photo. It wasn't just a quick shot of a mother with three kids, but artfully staged as government propaganda to promote one of President Roosevelt's social programs during the Great Depression. Jason Kottke goes into some depth about the subject of the photo, Florence Owens Thompson.
Wednesday, February 20, 2019
How to measure 5/6 cup of oil, part II
I just received an email from D. J. about a fifteen-year-old post which mentioned bad math books and tangentially about measuring oil. It was unusual in that D. J. not only read the post (and enjoyed it) but also mentioned an alternative way of measuring 5/6 cup of oil using a simpler method than the insane one I came up with at the time. D. J.'s method is to measure out 4/3 cups and then remove ½ cup (4/3 − ½ = 8/6 − 3/6 = 5/6). I guess D. J. is better at fractions than I am.
Thursday, February 28, 2019
In most cases, the $100 chip will blow to save a 1¢ fuse, but occasionally, the 1¢ fuse will do its job
Well, that's a fine kettle of fish, I thought as I powered up my main computer and it reported it couldn't find the hard drive. First the keyboard, and now the hard drive. I resisted the urge to say, “What else could possibly go wrong?”
Late last night we had a power outage. I did the usual “shut down all the computers, then shut down the UPSes.” Fortunately, it only lasted some ten minutes and the power came back up. I powered up the UPSes, then I started with my main computer. It got to the point where I could log in, only the keyboard didn't work.
That's odd, I thought. I then powered up my other computer, the Mac mini. Both computers use the same keyboard in a convoluted setup involving a KVM and Synergy that works for me. The keyboard worked fine on the Mac mini, so it wasn't the keyboard that was bad.
But both computers lack a PS/2 port, so I have to run the keyboard cables through a PS/2-to-USB converter (one per system). Logging into my main machine from the Mac and scanning the USB subsystem showed nothing connected, which led me to believe said USB system on my main machine was dead. But oddly, there was nothing in the boot messages that said the USB couldn't be initialized or was otherwise bad. I then thought perhaps some dust had gotten in the way? I mean, the system was dusty.
I took the machine outside and let loose with some canned air. Back inside, hook everything back up and Well, that's a fine kettle of fish …
At this point, I did not panic, but instead applied Conner's Three Rules to Worrying:
- Can I do anything about the issue I'm worried about now? If so, do the thing and stop worrying.
- Can I do anything about the issue later? If so, wait until later and see rule 1.
- Can I do anything about the issue at all? If not, no use worrying about it as there's nothing you can do about it.
I figured rule 2 applied, and waited until later.
Later came, and I decided to take a methodical approach to the problem. I unhooked the computer, opened the case and made sure all the cables were firmly in place. Perhaps letting loose with the canned air knocked a cable loose and in fact, that was the issue—the power cable to the CD-ROM had been knocked loose, and that's probably the hard drive the computer was complaining about. Everything else seemed firmly in place.
As an aside, I do want to mention the case I have for my main computer is one of the nicest cases I've seen. It's easy to get into without any tools, wire/cable management is super clean, and nothing is hard to get at or remove. It's a beautiful case.
Anyway, I hook everything back up, and lo'! It booted up. But alas, the keyboard was still borked.
I still didn't panic. The computer works. I don't have to get a new hard drive and the mess that entails. I can still work without a keyboard. At the very least, I can see if I have a USB PC card in my collection (I gave it a 10% chance of having one in storage) and if not, I could always order one.
But then something said to try the Mac's PS/2-to-USB converter on my main system. And lo'—it worked! It was the PS/2-to-USB converter on my main system that had for some reason fried—it wasn't the USB subsystem at all! And that's a much easier problem to work around.
Tuesday, March 05, 2019
“We the willing”
I was still trying to process that the process of our process is to process the process to ensure the process has processed the process when I came across this rather insightful comment about the FizzBuzz Enterprise Edition:
A combination of wasteful architecture astronomy and legitimate need to divvy up absolutely mammoth line-of-business applications among teams with hundreds of members operating for years with wildly varying skill levels, often on different subsystems from different physical locations, with billions of dollars on the line. You can't let the least senior programmer in the Melbourne office bring down the bank if he screws up with something, so instead you make it virtually impossible for him to touch anything than the single DAO which he is assigned to, and you can conclusively prove that changing that only affects the operation of the one report for the admin screen of a tier 3 analyst in the risk management group for trans-Pacific shipping insurance policies sold to US customers.
The tradeoff is, historically, that you're going to have to hire a team of seven developers, three business analysts, and a project manager to do what is, honestly speaking, two decent engineers worth of work if they were working in e.g. Rails. This is a worthwhile tradeoff for many enterprises, as they care about risk much, much, much more than the salary bill.
(I spent several years in the Big Freaking Enterprise Java Web Applications salt mines. These days I generally work in Rails, and vastly prefer it for aesthetic and productivity reasons, but I'm at least intellectually capable of appreciating the advantages that the Java stack is sold as bringing to users. You can certainly ship enterprise apps in Rails, too, but "the enterprise" has a process which works for shipping Java apps and applying the same development methodology to e.g. a Rails app would result in a highly effective gatling gun for shooting oneself in the foot.)
A comment on Hacker News about FizzBuzz Enterprise Edition
And having attended several scrum meetings (at the behest of our Corporate Overlords) for “Project: Gibbons” (a slimmed down and simplified version of “Project: Lumbergh”), I can see how this “enterprise development” is shaking out—it's a form of Conway's Law: “[O]rganizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” It's gone from what should be a simple one-week project (because all it's doing is looking up a name based upon a phone number and that's it) into a multi-month project mired in internal bureaucratic overhead. I'm not going to go into details, but just note that yes, Dilbert is a documentary.
The Electric King James Bible Remake
It's been a long time since I last mentioned The Electric King James Bible, an experiment I did back in late 1999 in URL addressing a portion of a document (which influenced the structure of my blog). The code hasn't changed much since 1999 (there was a bug fix around 2010 it seems), and thus, it has sat there, chugging along with little attention to the greater world.
Until the past month when I've had email discussions with two different people about The Electric King James Bible. Both people were interested in the addressing scheme, which I think is still unique on the Internet. How many sites will let you link directly to a portion of the Bible and get Noah's Ark or Samson and Delilah? It can also handle some pretty bad misspellings (levitakus 19:19 anyone?). One of the respondents mentioned it would be nice if The Electric King James Bible was available via Gopher.
Well, yeah, I do have a gopher site, so it wasn't all that difficult to present a similar interface (in fact, it uses the same data files as the web version). So of course now you can get your Noah's Ark story and Samson and Delilah story from gopherspace. And any other Bible story you care to, just as long as you know where in the Bible it resides.
The Quick and Dirty B-Movie Plot Generator, now in Gopherama!
As long as I was making The Electric King James Bible available via Gopher, I thought I might as well adapt The Quick and Dirty B-Movie Plot Generator to Gopher as well. It pretty much works the same as the web version. Just reload the page for different plots and when you find one you like, you can just bookmark it.
Thursday, March 14, 2019
A recreation of a scene at an office
The breakroom of the Ft. Lauderdale Office of the Corporation. On the counter are several boxes clearly labeled “Krispy Kreme.” Sean walks in.
- Sean
-
Dum de dum.
He stops dead in his tracks as he spots the Krispy Kremes.
Um …
- Booming Voice
-
We don't see the person speaking, but it's a booming, Brian Blessed-like voice. As a side note, perhaps we can get Brian Blessed to play this part. Anyway, booming voice, unseen person.
Make a SAVE vs. Krispy Kreme!
- Sean
-
Reaches into his pocket and pulls out a twenty-sided die. He shakes it in his fist and then rolls it on the counter. His eyes go wide. CUT to die rolling on the counter.
- D20
-
ZOOM to CLOSE-UP of the die on the counter. You can clearly make out the “1” on the top face of the die.
- Sean
-
Nooooooooooooo!
Sean then dives into the Krispy Kreme boxes …
Nom nom nom nom nom …
“Is there a way to convert this integer to an integer?”
I would like to write an Apache client to have a specific layout for requests to a secure site, making a static website, not locally, that may look like a assembly loader. This is my first attempt at using SSL. It's a bit simple to do with a few steps, asking the user to decide whether I should use "Google App Engine" or "Google Apps", and then have them send a sort of hash in the request to that domain. However, I still want my site to be public so I can just open it in Google App Engine.
The reason is that my client does not want to do a simple submit / form put into my app, since that 'd be possible with the Google App Engine. The reason I want to use the Google App Engine is that it works as expected. However, neither of the examples I found for handling the request from the parent page (which I am guessing are compatible with the Apache re - server - side tomcat) work as discussed in the link at How to open an IE include in a new application in Python? and all, and the one i've tried. Is there any way I can help my client AJAX to connect to my Python app?
Via Hacker News, Configure Google App Engine to redirect to localhost
This is not a real question, no one actually asked for this. This question (and every question on the site) is a computer generated word salad that almost, but not quite, makes sense. You know, like a few questions actually asked by real people at Reddit or Stack Overflow who don't quite grasp the whole concept of programming, or English, or both.
And the creator of the page is a bit worried about this site being indexed by Google. It's a valid concern that this site will pollute search results, given that the corpus used to generate the questions is questions on other sites like Reddit and Stack Overflow. There were also some comments about autogenerating answers, but given the questions are maddeningly close to comprehension, the lack of answers might be a good thing.
But this does give me an idea for National Novel Generation Month 2019 …
The Repair Culture
I was using my computers when all of a sudden, I couldn't select anything with the mouse. My initial reaction was well, there goes the other PS/2-to-USB converter, but some subsequent experiments proved to me that wasn't the case—I could still move the mouse pointer, and the middle and right mouse buttons worked. It was just the left mouse button that no longer functioned.
I have a Logitech Trackman Marble from the 90s (it's not the same as what is being sold today as the Trackman Marble), and it's the second such unit I've used. The first one wore out and I had to go to Ebay to find a replacement a few years ago, I liked it that much. The thought that I would have to go through that trouble yet again filled me with dread.
In the meantime, I couldn't effectively use my computers. As Bunny and I were scrambling to find a replacement mouse at Chez Boca, I decided to crack open the Trackman and see what might be the issue. It was easy enough to open: remove four screws and the innards were exposed. My initial thought was that the left mouse button (which gets the most use) had worn out. It was, but not in the way I expected. There's a portion of the button you press that activates the switch below it. It's a small vertical piece of plastic that pushes down on the horizontally oriented switch. And in that small vertical piece a groove had formed over the years of use.
After discussing the problem with Bunny, the solution we came up with was to use a small amount of Elmer's Glue (applied with a toothpick) to fill in the groove that had formed.
I want to report that the solution worked wonderfully! The Elmer's Glue hardened enough to make the left button work and if it ever wears out, I know how to fix it.
Saturday, March 16, 2019
I'm not addicted to the Internet. I can give it up at any time. What? You mean it's down? Aaaaaaaaaah!
Ah, the sweet fast Internet is back at Chez Boca.
Late Thursday, the Internet here at Chez Boca went bonkers (that's a technical term). One second we were smoothly surfing the Intarwebs and then the next, we smacked against the concrete pylons of a pier (if I'm to keep with the surfing metaphor). The connection wasn't down, and it wasn't as if there was massive packet loss. It was just that each packet was taking, on average, about eight seconds to traverse the link.
Things would be fine for a few seconds, maybe enough to start loading a page but then—BAM! 10 seconds! 8 seconds! 11 seconds! 4 seconds! For the next few hours or so. Then the latency would drop to around 30ms for a minute or so, then the multisecond latencies would come right back.
And the packet loss was usually less than 2%.
It was weird and annoying.
When we called our ISP, we were first told that someone would be available to check on Tuesday, but when Bunny suggested to tech support to look for any cancellations, they were able to find an opening for today.
It turned out to be a bad DSL port.
In looking back over our nearly 30 hours sans Internet, it's amazing how dependent we've become on it just being there. No email. No Netflix. No quick Google searches when doing a crossword puzzle. No MyFaceGoogleLinkedSpaceBookPlusIn.
Okay, that last one isn't bad at all.
But the rest … wow.
Thursday, March 21, 2019
Observations on the drive home from work
I was driving through the neighborhood leading to Chez Boca when I saw two people walking in the street, not an uncommon sight in this neighborhood as it lacks sidewalks. What was uncommon though, were the identical clothes the pair were wearing and the identical hair styles. As I passed them, I also noticed they were wearing the same pair of glasses. It was also apparent by this time that I was seeing a father and son (or a man and his young clone) walking about the neighborhood.
That was not the oddest sight, though. A few seconds later I noticed a car driving the other way. A small car with two occupants, dressed as clowns! I passed a clown car, in the wild, driving home this evening.
I don't think David Lynch moved into our neighborhood …
Monday, March 25, 2019
It took thirty-odd years for web browsers to get automatic hyphenation and almost a decade for me to notice
Automatic hyphenation on the web has been possible since 2011 and is now broadly supported. Safari, Firefox and Internet Explorer 9 upwards support automatic hyphenation, as does Chrome on Android and MacOS (but not yet on Windows or Linux).
Via inks, All you need to know about hyphenation in CSS | Clagnut by Richard Rutter
I had no idea that hyphenation has been supported for eight years now. And adding support was easy enough—I already mark the blog as being English, so the only remaining bit was adding the bit of CSS to enable it. I've been manually (for various values of “manual”) adding hyphenation hints with the use of ­, such as when I mention FaceGoogleLinkedMyBookPlusInSpace or when I do my XXXXXXXXXXXXX XXXXXXXXXXX XXXXXXX censor bars.
Now, it looks like I don't have to do that anymore. Sweet!
“Woe unto all who reveal the Secrets contained herein for they shall be Hunted unto the Ends of the Universe”
This is a document I wrote in early 1984 at the behest of the System Development Foundation as part of Xanadu’s quest for funding. It is a detailed explanation of the Xanadu architecture, its core data structures, and the theory that underlies those data structures, along with a (really quite laughable) project plan for completing the system.
At the time, we regarded all the internal details of how Xanadu worked as deep and dark trade secrets, mostly because in that pre-open source era we were stupid about intellectual property. As a consequence of this foolish secretive stance, it was never widely circulated and subsequently disappeared into the archives, apparently lost for all time. Until today!
Via Lobsters, Habitat Chronicles: A Lost Treasure of Xanadu
It is the Xanadu concept of tumblers that is core to how mod_blog works, although more in idea than in actual implementation detail. And the web in general has a limited form of transclusion, which is pretty much just images (still and moving) and sound. A general version of transclusion would probably be very difficult (if not outright impossible) to implement. But there are still concepts in Xanadu that are hard to understand, and I'm hoping the document mentioned above is enough to shed some light on some of the more esoteric concepts of Xanadu. I don't think we'll ever see a fully blown Xanadu system ever, but there are still ideas lurking in it that deserve some investigation.
A lesson in economics
Bunny received the following in email and couldn't wrap her brain around it. Neither could her brother, nor the person who sent it to her:
It's a slow day in the small town of Pumphandle, Saskatchewan, and the streets are deserted. Times are tough, everybody is in debt, and everyone is living on credit.
A tourist visiting the area drives through town, stops at the motel, and lays a $100.00 bill on the desk, saying he wants to inspect the rooms upstairs to choose one for the night.
As soon as he walks upstairs, the motel owner grabs the bill and runs next door to pay his debt to the butcher.
The butcher takes the $100.00 and runs down the street to retire his debt to the pig farmer.
The pig farmer takes the $100.00 and heads off to pay his bill to his supplier, the Co-op.
The guy at the Co-op takes the $100.00 and runs to pay his debt to the local prostitute, who has also been facing hard times and has had to offer her "services" on credit.
The hooker rushes to the hotel and pays off her room bill with the Hotel Owner.
The hotel proprietor then places the $100.00 back on the counter so the traveler will not suspect anything.
At that moment, the traveler comes down the stairs, states that the rooms are not satisfactory, picks up the $100 bill and leaves.
No one produced anything, no one earned anything.
However, the whole town now thinks that they are out of debt and there is a false atmosphere of optimism and glee.
And that, my friends, is how a “government stimulus package” works.
Bunny then sent it to me, asking if I could wrap my brain around it. And upon first reading, yeah, it's hard to grok what exactly happened and how it happened. But I think I can pull this apart.
First off, one way to look at money is as a means of exchange of goods and services. Farmer Bob raises cows but wants some poultry to eat for a change. Farmer Chuck has the chickens and wouldn't mind some beef. I've just checked the current spot prices for beef and chicken:
animal | weight in pounds | cost per pound |
---|---|---|
cow | 1,400 | $4.25 |
chicken | 4 | $2.00 |
That's nearly 350 chickens per cow. I'm sure that Farmers Bob and Chuck could come to some agreement, like a chicken for a pound of beef. This is easy, because Farmer Bob has something that Farmer Chuck wants, and Farmer Chuck has something that Farmer Bob wants—there's a direct exchange. This can even be extended to “debt”—Farmer Chuck could give a chicken to Farmer Bob for later payment in beef. But it gets tiresome working out the “price of chickens” in asparagus or the “price of beef” in potatoes. Thus some commodity that everyone agrees upon to exchange for goods and services—in our case today, United States dollars (I realize the dollar isn't a commodity like pork bellies or gold and is in fact, a fiat currency, but I don't want to bog this down any more than necessary).
Now to our particular example. There is a circle of debt between the proprietor, butcher, pig farmer, co-op, hooker and back to the proprietor. It's a larger circle of debt than between our example of Farmers Bob and Chuck, but it is there, it just took a bit for it to manifest itself. And no one had the entire picture—the proprietor was in debt to the butcher, but was a creditor to the hooker; the pig farmer was in debt to the co-op, but a creditor to the butcher. This mutual debt wasn't broken until the introduction of money from the tourist to unravel the debt ring. Remove the hooker from our little story, and the proprietor would be on the hook for $100 to the tourist, or his debt to the butcher would still exist, and so on down the line to the co-op.
In addition, the story also hinges on the mutual debt all being the same amount. But say the debt had been $50 between the proprietor and butcher but $100 for everyone else and the debt doesn't vanish at all—it just kind of moves about a bit. So I don't think the story is that good of an example of a “government stimulus package” at all—not everyone's debt is the same, nor is all debt in the United States local to the United States. Someone is going to end up holding onto some debt.
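One way to convince yourself of this is to just tally everyone's position in the story; a toy calculation, nothing more, with the names and amounts straight from the email. Every debt in the ring is matched by a credit, so everyone nets out to zero and the $100 bill is only needed as a bookkeeping token to unwind it. Change any single amount, as in the $50 example above, and the netting no longer works.

-- Tallying the debt ring from the story: each person's net position is
-- (what they're owed) minus (what they owe).

local debts =
{
  { from = "proprietor", to = "butcher"   , amount = 100 },
  { from = "butcher"   , to = "pig farmer", amount = 100 },
  { from = "pig farmer", to = "co-op"     , amount = 100 },
  { from = "co-op"     , to = "hooker"    , amount = 100 },
  { from = "hooker"    , to = "proprietor", amount = 100 },
}

local net = {}
for _,d in ipairs(debts) do
  net[d.from] = (net[d.from] or 0) - d.amount
  net[d.to]   = (net[d.to]   or 0) + d.amount
end

for person,amount in pairs(net) do
  print(person,amount)   -- every position is 0, so the ring cancels cleanly
end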
Tuesday, March 26, 2019
Notes on an overheard conversation about a dish at a restaurant being featured on television
“Yeah … no. Get rid of the sweet plantains and I'd be all over that. With the plantains, no.”
“But you eat plantains when you have the skirt steak at Cuban restaurants.”
“Yes, but I save those for dessert.”
“Everything goes to the same place—your stomach.”
“Yeah, but my stomach doesn't have taste buds.”
“You're just like my brother. He wants everything separated on his plate.”
I'd like to see the unit test for this bug
“Sean—”
“Aaaaaaaaaah!”
“—it's time to get up … oh dear. I have to scrape you off the ceiling again.”
“No problem—just give me a moment to get my heart started again.”
I'm convinced I've found a bug on the iPhone.
I use my iPhone as an alarm clock. It's there, and I can set multiple alarms to remind me of various tasks other than waking up, such as scrum meetings.
I also tend to keep the ringer off on my iPhone, due to the sheer number of spam calls I receive (during the week, it's not uncommon for me to receive a few spam calls per day—so far three today, but the most I've received is seven). But having the ringer off does not mean the alarm is silent—it still goes off (loudly). It's just that anyone calling won't wake me.
So now we come to the bug—if I receive a call at the same time as the alarm goes off, no sound is emitted! And with no alarm, I keep sleeping until my backup alarm, Bunny, comes in and scares the XXXX out of me.
I could solve this by setting silence as a default ring tone, but alas, the iPhone does not come with silence as an option for a ring tone (hint hint!). So I would have to sample John Cage's 4′33″ to use as a default ring tone.
Yeah, I know, first world problem. But if only something was being done about spam calls …
And part of our process of processing the process is to ensure we process the process safely
So our Corporate Overlords have assigned us a workplace safety pamphlet to read followed by a workplace safety video to watch. It wasn't nearly as bad as previous “training sessions” where we had to watch videos of text slides as someone read them aloud slowly and badly. No, this time we got to read the pamphlet at our own speed and watch a scare-mongering video about the proper response to a workplace shooting—flee, hide or as a last resort, fight. Unfortunately, the video lacked proper closure as it cut off just as the group of office workers were forced to fight the lone gunman shooting up the office, so we never get to see if they succeeded or not. I found this German forklift safety video much more entertaining than what I watched.
Funny to think the Germans have a better sense of work humor than us Americans.
Wednesday, March 27, 2019
Notes on an overheard conversation in or near Chez Boca
“Woot!”
“What?”
“It's Opening Day!”
“What?”
“Opening Day!”
“What's that?”
“You don't know what Opening Day is?”
“No. Did J. C. Penney's open a new store?”
“No! You really don't know what Opening Day is, do you?”
“No.”
“Baseball! What else would it be?”
“Well, doesn't football, basketball, soccer or curling have an opening day?”
“No, it's only baseball. I'm surprised you didn't grok Opening Day.”
“You know the word ‘grok?’”
“Pththththththththththth!”
Sunday, March 31, 2019
Why adding crypto to gopher isn't that easy
I'm talking about the fact that my hypothetical new protocol operated strictly over TLS connections - plaintext straight up wasn't an option. I am of the opinion that the widespread lack of encryption in gopherspace today is the protocol's biggest shortcoming, and I actually suspect that this point alone discourages some folks who would otherwise be on board from adopting it. …
We only need to add ubiquitous encryption to gopher to end up with the best of both worlds!
Now, let me be clear exactly what I mean by "adding encryption to gopher". I don't want to advocate anybody serving anything on port 70 which isn't backward compatible with standard gopher, because that would be a tragedy for the gopher community. And I also don't want plaintext gopher to disappear entirely, because it's great that something like gopher exists which can be utilised on 40 year old machines which are too slow to do effective crypto. What I would like is to see something new which is basically "gopher plus crypto, maybe a little more" appear alongside the existing options. Something which could be thought of as a "souped up gopher" or as a "stripped down web", depending on your perspective. Something which meant people weren't forced to choose between two non-overlapping sets of massive and obvious shortcomings but could just USE the internet for sharing static content in a non-awful way - whether that static content is "just" phlog posts, ASCII art or old zines, or whether it's serious political dissent, cypherpunk activism, sexually explicit writing or non-trivial free software development.
As I mentioned before, adding TLS to gopher is relatively straightforward, as it can be added between the TCP layer and the gopher layer with both clients and servers. That's the trivial part. The next step is to register the gophers: URI scheme, and to register a default TCP port number for the gophers: URI. This too, is trivial (it just needs to be done once by somebody).
What's not so trivial is shoehorning the “secure gopher” into the gopher index file. There's no real place for it. The “gopher index” file is a machine readable file that indicates the contents of a gopher server and it looks something like:
1This is a pointer to another siteHTthis-selects-the-fileHTgopher.conman.orgHT70CRLF
0About this siteHTabout-site.txtHTexample.comHT70CRLF
gOur LogoHTlogo.gifHTexample.comHT70CRLF
IOur office spaceHToffice.jpgHTexample.comHT70CRLF
The first character of each line is an indication of what to expect when retrieving the data, a “1” indicates a gopher index file (an example of which you see above), a “0” is a text file, a “g” is a GIF file and “I” is for other image types. This is followed by a human readable description meant to be displayed, followed by a “selector,” followed by the hostname and port number. There is nothing there to indicate “use TLS.” A flag could be added past the port value to indicate “use TLS when retrieving this data” but:
- old gopher clients won't see the flag (most will just ignore it) and try to connect without TLS—at best, the client just errors out, and at worst, it crashes;
- it breaks gopher clients that actually use the enhanced gopher protocol (alternative link)—at best they will error out, and at worst, crash.
One solution is to just say “okay, port 70 is plain text, any other port is TLS” but again, we're back to the problem with older gopher clients that don't understand this. Another solution that would work is to assign new “gopher types” (the “0”, “g”, “I”, “1” etc.) for a TLS connection. That just involves picking about two dozen new characters. Old gopher clients will ignore “types” they don't understand and new clients can use TLS. Unfortunately, it does mean that information about the connection type leaks out. HTTP doesn't have this problem because the http: (and https:) URI does not include what the link is, unlike the gopher: URI (or the linking information in the gopher index file) which does include what the link is.
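Just to illustrate the “new types” idea (the characters here are picked arbitrarily for the sketch; nothing like this has been assigned, and a real assignment would have to avoid the couple dozen types already in use), a client could carry a small table mapping the hypothetical TLS types back to their plain equivalents:

-- Hypothetical TLS-capable types (placeholders only, not part of any spec).
-- An old client simply ignores types it doesn't recognize; a new client
-- maps them back to the plain type and turns on TLS for the connection.

local tls_types =
{
  ['Z'] = '1',  -- like a gopher index, but over TLS
  ['Y'] = '0',  -- like a text file, but over TLS
  ['X'] = 'g',  -- like a GIF, but over TLS
  -- ... and so on, roughly doubling the list of types
}

local function use_tls(gtype)   return tls_types[gtype] ~= nil end
local function base_type(gtype) return tls_types[gtype] or gtype end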
So I think it comes down to picking your poison:
- potentially breaking old clients;
- doubling the number of “types” to support;
- or even a new type of protocol entirely, but then you start falling into HTTP territory …
Update on Tuesday, September 28TH, 2021
There might be a fourth way, but I think it's a hack.
Update on Monday, Debtember 6TH, 2021
There are more than four ways to do this, but I don't think any are worth implementing.
Monday, April 01, 2019
The Rings of Silence
So a friend of mine saw the iPhone bug report and sent me a silent ring tone. I installed it and went through the laborious process of giving everyone in my contact list a custom ring tone, then set the default ring tone to the silent ring tone.
We shall see how well it works.
Wednesday, April 03, 2019
My social has been suspended
Of course just when I default to a silent ring tone, Verizon is offering services to block robocalling, which might explain why the number of robocalls I've received the past few days has dropped dramatically. That doesn't mean I'm going to stop using the silent ring tone.
So an unknown caller calls me, only this one leaves me voice mail. Usually when that happens, it's because it's someone I know calling from an unknown number or the car dealership telling me my car is ready or something along those lines.
I listen to the message: “—number we have gotten an order to suspend your social at very right moment because we have found many suspicious activities on your social before we go ahead and suspend your number kindly call us back on our number … which is … 325 … 399 … 0630. I repeat it's … 325 … 399 … 0630. Thank you and good-bye,” followed by 10 seconds of silence.
Wow.
Just wow.
My social will be suspended. I guess that explains why I no longer have Google+.
Thursday, April 04, 2019
It's just one of those days
Driving to work today was horrible! No matter what road I took, it was always a parking lot. The picture above of that dog? Yeah, that's how I was feeling, waiting for traffic to move.
Sigh.
Monday, April 08, 2019
Notes on an overheard conversation about the possibility of the Return of the Demonic Creature, or The Alien Invasion
“Come quick! I just saw something entering the neighbor's house.”
“A burglar?”
“No, something. Nothing human. Come on!”
“Okay! Okay! I'm coming. Where?”
“See where I'm shining the light?”
“Yes. It looks like a screen is falling out.”
“Yeah! That's where I saw and heard movement when I was taking out the garbage. Some … thing … scrambling to get in.”
“Might be a squirrel.”
“It could be the start of an alien invasion!”
“Really? It's a squirrel, or maybe an opossum.”
“Or, you know, the return of the Demonic Creature that invaded Bill's Room!”
“Now you're being silly.”
“Don't say I didn't warn you.”
Why the web went bad
My recent post about "why gopher needs crypto" received a very well-considered response over at The Boston Diaries. The author (do I call you "the conman"?) …
…
The conman suggests that creating a new protocol is to risk that we "start falling into HTTP territory". This is of course a very real risk, but I also very strongly believe that it is perfectly avoidable if we are sufficiently determined from day one to avoid it. To this end, I hope to think and write (and read, if anybody wants to join in!) more in the future not just about the shortcomings of gopher but very explicitly about what is right and what is wrong about HTTP and HTML. It's vitally important to identify precisely what features of the web stack facilitated the current state of affairs if we want to avoid the same thing happening again.
In my opinion, the point where HTTP and HTML “went off the rails” into the current trainwreck of the modern web happened when browsers gained the ability to run code within the browser, turning the browser from a content delivery platform into an application delivery platform (although that transformation didn't happen overnight). And no, it wasn't the fault of Netscape and their introduction of Javascript that brought about the current apocalypse of bloated webpages and constant surveillance. Nope, the fault lies directly at the feet of Sun Microsystems (whose zombie corpse is following the command of Oracle, but I digress) and the introduction of Java in early 1996. Javascript was Netscape's reaction to Java.
But while the blame definitely lies with Sun, that's not to say it wouldn't have happened anyway. If Sun hadn't done it, it would have most certainly been Microsoft, or even possibly Netscape (my money would have been on Microsoft though—they had already added support to run VisualBasic in their office suite, and adding such to the browser would have been a natural progression for them). I think that whatever protocol was popular at the time, HTTP or Gopher, would have turned from a content delivery platform into an application delivery platform, because that's the way the industry was headed (it's just that HTTP won out because of embedded cat pictures, but again I digress).
In fact, the very nature of wanting to “improve Gopher” is what drove HTTP into its current incarnation in the first place, and one must fight hard against the second-system effect.
Saturday, April 13, 2019
Notes on an overheard conversation about emptying the dishwasher
“I see you were right about putting the platter back in its place.”
“I was?”
“Yes. Last time I showed you where it goes, you said, ‘We'll see if I remember next time.’”
“Ah.”
“So I guess your head isn't filled with useless trivia then.”
“No, my head is only filled with useless trivia.”
“Ha! Just like my head is only filled with song lyrics.”
Monday, April 15, 2019
I like the whole “computers-in-wood” aesthetic
Love Hultèn (or Hultén—it's spelled both ways on that page) made a series of life-sized computers based off the old Lego computer bricks (link via Jason Kottke) and I must admit, they are very cool!
But then I checked out Love's other projects and The Golden Apple—wow! A Mac Mini housed in a custom walnut case based off the original Macintosh 128K case is just beautiful (although to be truthful, I could do without the gold keys on the keyboard, but that's me). The Pet De Lux is no slouch either.
One of these days, I'll get a computer into a wood case. One of these days …
Thursday, April 18, 2019
“Help! I'm still trapped in a Chinese fortune cookie factory!”
Bunny and I are having dinner at a Chinese restaurant. The check is delivered and we crack open our fortune cookies. Mine is something generic, but Bunny's cookie says, as God is my witness:
Unfortunately, there was no other fortune cookie to pick from. Go figure.
Monday, April 22, 2019
“Can you tell me how to get? How to get to Westeros?”
I have a few friends that are into “The Game of Thrones” (and if not the TV series, then at least the book series). For them, I have two videos: the first is “Game of Chairs,” a Sesame Street parody of the TV show. The second video is … well … a meeting between Cersei and Tyrion that is interrupted by an unexpected guest. To say more would be to spoil it.
Tuesday, April 23, 2019
The feeling when your new task is already done
“The ‘Project: Heimdall’ team now want all the numbers,” said TS1, my fellow cow-orker.
“Really?” I asked. “They can finally deal with international phone numbers?”
“Apparently yes. So ‘Project: Sippy-Cup’ needs to open the flood gates and let all the numbers through. But make it configurable.”
“Okay.”
So I dive into the code for “Project: Sippy-Cup” and … what's this? The code is already in place. From last year. When it was clear that “Project: Heimdall” could not, in fact, handle all the numbers! I remember it was annoying having to send all NANP numbers (those that are 10 digits, like “501-555-1212”), even the malformed, invalid NANP numbers (like “501-511-1212”), while making sure I didn't pass along valid, international numbers that also happened to be 10 digits long (like “501-555-1212”). Now that the “Project: Heimdall” team has to deal with the crap we get … well … good luck. We're all counting on you.
Dealing with phone numbers
“Project: Wolowizard” only supports NANP numbers, but since those numbers come via The Protocol Stack From Hell clearly marked as NANP, it's easy to determine there if a number is NANP or not. It's not quite as simple in “Project: Sippy-Cup” since SIP is … a bit loose with the data formatting. There, the numbers are formatted as a tel: URI (or a sip: URI, but the differences are minor). If the number is “global,” it's easy to determine a NANP number because it will be marked with a “+1” (“1” being the country code for North America). So tel:+1-501-555-1212 is most definitely a NANP number, while tel:+501-555-1212 is not.
Things get a bit more muddy when we receive a so-called “local” number. RFC-3966 clearly states that a “local” tel: URI MUST (as defined in RFC-2119) contain a phone-context attribute—except when it doesn't (I swear—the RFC contradicts itself on that point; tel:8005551212 is valid, even though it's a “local” number missing a phone-context attribute, because it's a “national freephone number”). So tel:555-1212;phone-context=+1501 is NANP, while tel:555-1212;phone-context=+501 is not (look closely at the two—one has a country code of “1” while the other has a country code of “501”). It's worse though, because while tel:555-1212;phone-context=+1501 is NANP, you cannot use the phone-context attribute to reconstruct a global number (the RFC contains the following example: tel:863-1234;phone-context=+1-914-555—um … yeah). To further complicate things, the phone-context attribute does not have to contain digits—it can be a domain name. So tel:555-1212;phone-context=example.com is a valid number. Is it NANP? International? Who knows?
So what does “Project: Sippy-Cup” do? If it receives a “local” number with a “+1” country code in the phone-context attribute, it's marked as NANP; any other country code and it's marked as non-NANP. If the phone-context attribute contains a domain name, it's treated as a NANP number (based on what I saw in production). And if there's a missing phone-context attribute for a “local” number, “Project: Sippy-Cup” treats it as a NANP number if it has at least 10 digits.
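That logic is small enough to sketch in Lua (this is mine, not the actual “Project: Sippy-Cup” code—the function name is made up, and real SIP input needs far more validation than this):

local function is_nanp(tel)
  local number,params = tel:match("^tel:([^;]+);?(.*)$")
  if not number then return false end

  -- a "global" number starts with '+' and a country code
  if number:sub(1,1) == "+" then
    return number:match("^%+1[%-%.%(%)%s%d]+$") ~= nil
  end

  -- a "local" number---check the phone-context attribute, if present
  local context = params:match("phone%-context=([^;]+)")
  if context then
    if context:sub(1,1) == "+" then
      return context:match("^%+1") ~= nil  -- "+1..." means NANP
    end
    return true -- domain name context---treated as NANP (per production)
  end

  -- no phone-context at all---fall back to counting digits
  local digits = number:gsub("%D","")
  return #digits >= 10
end

print(is_nanp("tel:+1-501-555-1212"))             --> true
print(is_nanp("tel:555-1212;phone-context=+501")) --> false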
Now, why do I care about this? Because we want to avoid doing an expensive database query for non-NANP and invalid NANP numbers, but “Project: Heimdall” wants all the numbers for tracking potentially fraudulent calls.
Wednesday, April 24, 2019
Just one of life's smaller mysteries
Some fellow cow-orkers and I were returning from lunch to work and as we rounded the corner to The Ft. Lauderdale Office of The Corporation, we found the entire front of the building swarming with fire and rescue vehicles. Thirteen in all.
There were no police, so it wasn't an office shooting (thankfully!). No crowd of people were huddled outside the building, so there didn't appear to be an evacuation. No smoke, so there apparently was no fire. But something happened that required thirteen responding units.
We were able to park and enter the building. From what little we heard, there was an “incident” on the 6th floor, but what it was, and how many were involved, was unknown.
Just one of life's smaller mysteries.
Sunday, May 12, 2019
I wonder what they think they're attacking?
In addition to a self-written gopher server, I also have a QOTD server accepting requests via TCP and UDP. I never mentioned it, as I just put it out there to really see what would happen. I will occasionally see a request go by, but over the past two weeks, some people have really been hitting it hard via UDP:
host address | requests |
---|---|
38.21.240.153 | 252628 |
113.113.120.152 | 18547 |
148.70.95.145 | 11529 |
150.138.92.17 | 11400 |
149.248.50.17 | 9917 |
123.129.223.133 | 9373 |
222.186.49.221 | 8689 |
39.105.122.74 | 8261 |
182.150.0.73 | 8098 |
47.107.64.105 | 7575 |
101.132.44.244 | 5745 |
170.33.8.193 | 5566 |
140.249.60.227 | 5520 |
61.160.207.99 | 5278 |
47.244.154.2 | 5084 |
23.107.43.194 | 5067 |
47.101.222.141 | 5066 |
47.101.169.118 | 5024 |
47.101.68.112 | 4449 |
47.102.135.146 | 4325 |
47.75.116.41 | 4200 |
47.244.36.42 | 4137 |
104.25.221.35 | 3638 |
144.48.125.176 | 3440 |
219.234.29.229 | 3402 |
125.88.186.186 | 3219 |
47.99.152.166 | 3167 |
39.108.51.161 | 3166 |
47.101.51.117 | 3161 |
210.83.80.21 | 3154 |
47.100.96.218 | 3139 |
47.101.200.97 | 3137 |
120.79.0.221 | 3090 |
47.100.183.18 | 2971 |
39.96.31.5 | 2944 |
47.98.38.120 | 2758 |
101.132.182.251 | 2756 |
47.107.123.238 | 2492 |
139.99.16.112 | 2290 |
47.101.157.245 | 2258 |
106.14.158.7 | 2226 |
47.100.234.2 | 2183 |
47.100.201.32 | 2090 |
120.79.40.9 | 2047 |
47.100.125.115 | 2037 |
101.132.37.45 | 1997 |
120.78.5.80 | 1985 |
47.101.68.50 | 1950 |
47.96.172.52 | 1915 |
20.188.110.231 | 1781 |
106.14.137.34 | 1118 |
119.188.250.37 | 1095 |
There doesn't seem to be much I can find about this, other than a potential link to XBox Live, but that doesn't seem right. It's hard to say. So to see what might be happening, I modified the QOTD program to record anything it receives via UDP. That way, I should be able to figure out if 38.21.240.153 is trying to attack something, or if it really just wants an up-to-date quotes file.
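The modification was trivial. A sketch of the idea using LuaSocket (not my actual QOTD code—quote_of_the_day() is a stand-in for however the quotes file actually gets read):

local socket = require "socket"   -- LuaSocket

local udp = socket.udp()
udp:setsockname("*",17)           -- QOTD is port 17

while true do
  local data,ip,port = udp:receivefrom()
  if data then
    -- log who asked, from which port, and exactly what they sent
    io.stderr:write(string.format("%s:%d %q\n",ip,port,data))
    udp:sendto(quote_of_the_day(),ip,port)
  end
end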
Experimental headers are no longer experimental
On the Lua Users email list the topic of custom email headers came up. Back in the early days, RFC-822 stated that:
Any field which is defined in a document published as a formal extension to this specification; none will have names beginning with the string "X-" …
RFC-822: STANDARD FOR THE FORMAT OF ARPA INTERNET TEXT MESSAGES
This also applies to headers starting with “x-” as Internet based text headers are case-insensitive.
Now given that RFC-822 has been obsoleted by RFC-2822 and RFC-5322, I thought I would check those out as well:
Fields may appear in messages that are otherwise unspecified in this document. They MUST conform to the syntax of an optional- field. This is a field name, made up of the printable US-ASCII characters except SP and colon, followed by a colon, followed by any text that conforms to the unstructured syntax.
The field names of any optional field MUST NOT be identical to any field name specified elsewhere in this document.
RFC-5322: Internet Message Format
Hmm … nothing about “X-”. I replied that starting a non-standard header with “X-” was still a safe way to go, only for Cunningham's Law to kick into effect:
- From
- Daurnimator <XXXXXXXXXXXXXXXXXXXX>
- To
- Lua mailing list <lua-l@lists.lua.org>
- Subject
- Re: Adding another way to point to "levels" to debug.getinfo and friends
- Date
- Mon, 13 May 2019 11:55:07 +1000
On Mon, 13 May 2019 at 09:03, Sean Conner <sean@conman.org> wrote:
In other RFC documents (too many to mention) private or experimental fields are usually labeled with "X-" (or "x-") so your best bet is to create a header name starting with "X-" to be safe.
Please stop using the X- prefix! See RFC 6648:
This document generalizes from the experience of the email and SIP communities by doing the following:
1. Deprecates the "X-" convention for newly defined parameters in application protocols, including new parameters for established protocols. This change applies even where the "X-" convention was only implicit, and not explicitly provided, such as was done for email in [RFC822].
Interesting. The “X-” standard for non-standard headers was to allow for experimentation without fear of conflicting with other headers, but the process of converting such headers to a standard header proved problematic. But RFC-6648 does cover the case when one doesn't want to standardize a header (or parameter):
… In rare cases, truly experimental parameters could be given meaningless names such as nonsense words, the output of a hash function, or Universally Unique Identifiers (UUIDs) [RFC4122].
RFC-6648: Deprecating the "X-" Prefix and Similar Constructs in Application Protocols
What a wild idea!
Monday, May 13, 2019
They aren't attacking, they're being attacked
So that list of IP addresses I listed yesterday … it turns out they weren't the attackers, but the victims! And I was unwittingly helping to facilitate a DDoS amplification attack.
Sigh.
When we left off yesterday, I had modified my QOTD server to log the IP address, port number, and the incoming UDP packet to help figure out what the heck was going on. So pretty much off the bat, I'm seeing this (which goes on for nearly 4,000 entries):
38.21.240.153:6951 "\001"
38.21.240.153:7333 "\001"
38.21.240.153:37152 "\001"
38.21.240.153:6951 "\001"
38.21.240.153:7333 "\001"
38.21.240.153:37152 "\001"
38.21.240.153:6951 "\001"
38.21.240.153:7333 "\001"
38.21.240.153:37152 "\001"
What had me puzzled are the ports—I wasn't familiar with them. It may be that port 6951 deals with online transaction processing, port 7333 seems to have something to do with the Swiss Exchange, and I could find nothing at all about port 37152. It's not exactly looking good, but the ports being attacked are rather all over the place (I'm only going to list two of the attacked IP addresses—there are more though):
host address | port number | requests |
---|---|---|
38.21.240.153 | 10947 | 1508 |
38.21.240.153 | 11860 | 1425 |
38.21.240.153 | 14485 | 1420 |
38.21.240.153 | 65033 | 1418 |
38.21.240.153 | 4625 | 1409 |
38.21.240.153 | 4808 | 1401 |
38.21.240.153 | 37152 | 1400 |
38.21.240.153 | 65277 | 1394 |
38.21.240.153 | 27683 | 1389 |
38.21.240.153 | 17615 | 1389 |
38.21.240.153 | 48235 | 1388 |
38.21.240.153 | 27227 | 1386 |
38.21.240.153 | 14503 | 1386 |
38.21.240.153 | 43174 | 1385 |
38.21.240.153 | 43069 | 1377 |
38.21.240.153 | 47040 | 1372 |
38.21.240.153 | 6991 | 1370 |
38.21.240.153 | 18235 | 1369 |
38.21.240.153 | 57696 | 1360 |
38.21.240.153 | 7333 | 1233 |
38.21.240.153 | 6951 | 1204 |
38.21.240.153 | 36965 | 1171 |
38.21.240.153 | 16306 | 1139 |
47.99.152.166 | 47673 | 145 |
47.99.152.166 | 39606 | 144 |
47.96.172.52 | 48309 | 142 |
47.96.172.52 | 46769 | 142 |
47.107.64.105 | 59669 | 142 |
47.107.64.105 | 35763 | 142 |
47.107.64.105 | 22100 | 141 |
47.99.152.166 | 4302 | 140 |
47.107.64.105 | 53336 | 140 |
47.99.152.166 | 35758 | 138 |
47.96.172.52 | 44529 | 138 |
47.96.172.52 | 26878 | 138 |
47.107.64.105 | 52337 | 138 |
A lot of the ports are high values, which tend not to have defined services and are typically used for outbound requests to a service, like making a request to a QOTD service.
The data being sent is just a single byte, which is all that's really needed for the QOTD protocol to return a quote via UDP. So this looks like legitimate traffic, except for the volume.
But as I kept searching for “QOTD attacks,” I kept coming across UDP amplification attacks (more of the same). It appears that the vast majority of the traffic is forged (it's easy enough to forge UDP packets), and because QOTD sends more data than it receives, it's a rather cheap method to attack a target with a ton of traffic regardless of what the attacked machine is being used for (and my UDP-based server probably isn't the only one unwittingly facilitating this attack).
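To put rough numbers on the amplification (these are illustrative, not measurements from my server): ignoring packet headers, a spoofed 1-byte request that elicits a 400-byte quote is roughly a 400:1 payload amplification. Counting the UDP and IP headers on both packets brings that down to something closer to 15:1, but either way the attacker delivers far more traffic to the victim than they spend sending the forged requests.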
A bit more research revealed a few servers that made a request (or a very small number of requests):
host address | requests | first request |
---|---|---|
74.82.47.61 | 2 | May 03 |
185.94.111.1 | 4 | May 04 |
74.82.47.37 | 1 | May 04 |
74.82.47.17 | 1 | May 05 |
71.6.233.171 | 1 | May 06 |
74.82.47.29 | 1 | May 06 |
104.152.52.39 | 1 | May 07 |
74.82.47.57 | 2 | May 07 |
74.82.47.33 | 1 | May 08 |
206.189.86.188 | 1 | May 10 |
74.82.47.49 | 1 | May 10 |
I'm guessing these machines made the query to see if my machine could be used for a UDP DDoS amplification attack, and would periodically check back to see if such attacks could continue from my server, which would explain the periodic nature of the deluge of traffic I saw (they weren't continuous but would happen in very random bursts). I also suspect there may be two different groups doing an attack, given the volume of traffic to certain targets.
It was also amusing to see 104.152.52.39 attempt to spam me with email, and attempt to log in via ssh on the 7th as well.
I've since disabled the UDP protocol on my QOTD server. Sigh. This is why we can't have nice things on the Intarwebs.
“If you strike me down, I shall become more powerful than you can possibly imagine”
Of all the lightsaber duels in the Star Wars movies, the one in “Star Wars: Episode IV—A New Hope” is probably the most sedate. But that's okay, because in 1977 this is the first time we're seeing freaking lightsabers! So cool! And it blew my 8-year-old mind at the time.
But this reimagining of that fight? (link via Kirk Israel)
Had I seen that as an 8-year-old, my head would have exploded!
Friday, May 24, 2019
Notes on an overheard conversation at a doctor's office
“Take a seat right over there.”
“Okay.”
“Which arm?”
“It doesn't matter—it's hard either way.”
“Other phlebotomists have had problems finding a vein?”
“No, it's hard on me!”
“What?”
“I can't stand needles.”
“Oh, it's not going to hurt.”
“That's what they all say.”
“Now, now …”
“Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!”
“That was just the alcohol wipe!”
“You could have warned me!”
“Why me?”
“Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!”
“I was just uncapping the syringe.”
“Oh god … ”
“Are we ready?”
“ErrrrrrrrAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA!”
“You do realize we've soundproofed the room, so screaming won't help any.”
“How much longer?”
“Sigh.”
“How much longer? Aaaaaaaaaa! The horror! The horror!”
“Aaaaand—we're done! That wasn't so bad, was it?”
“The blade is sharp … Lucky … my heart only skips one beat …”
“What are you, twelve?”
“… blacked out … can't afford that …”
“Would you like a lollypop?”
“Please?”
Monday, June 03, 2019
It's time for another trip to Brevard
There are four ways to get to Brevard. One way is to head north along US-276 from Greenville, South Carolina. It's mostly a nice, leisurely drive with the exception of a 1,500′ (460m) elevation change over five miles (8km) of switchbacks and blind curves through the Caesars Head State Park (straight as the cardinal flies, it's two miles (3km)). We took this route once (note the emphasis)—while I had a blast driving, Bunny lamented the lack of strong sedatives and refused to even look as I drove this section (even my Dad commented on the drive—“You took that route?”).
This route is no longer under consideration for us.
The second way is to head south along US-276 towards Brevard. We don't take this route mainly because we're not north of Brevard, and secondarily because of the 2000′ (600m) elevation change over eight miles (13km) of switchbacks and blind curves (also two miles (3km) as the cardinal flies).
The third way is to head east along US-64. This is a nice road, no excessive elevation changes, no switchbacks, and once you hit Rosman, the road widens into a fast, four lane highway running along the Proclamation Line of 1763 right up into Brevard from the southwest. We don't take this route because we aren't west of Brevard.
We take the final option—west along US-64 into Brevard. The road along this stretch is two lanes, running through an area that's too built up to be rural, yet not built up enough to be suburban (much less urban)—perhaps “ruburban?” For us, this is the easiest option as it's also very simple—I-95 North to I-26 West to US-64 West.
The only issue we had this year in driving was mostly in South Carolina. For some reason, every semi-truck driver in South Carolina was headed north along I-95, then west along I-26, and trying to pass each other.
This wouldn't be that bad except that both I-95 and I-26 are only two lanes in each direction, and the traffic would pile up behind two semis as one would s-l-o-w-l-y pass the other. This was just in South Carolina (more like “Slow Carolina”). Florida, Georgia and North Carolina did not have this issue.
Go figure.
But twelve hours and 750 miles later, we're safely ensconced at The Red House Inn and had an okay dinner at The Square Root, as is tradition when we roll into town.
Tuesday, June 04, 2019
Extreme pay phones, Brevard edition
Bunny and I had lunch at Rocky's Grill and Soda Shop, and while there, I noticed this little bit of history:
The fact that the sign says “temporarily out of service” implies they think it can be fixed at some point in the future. I sure hope they do—it would be cool to use just for the novelty of it.
Wednesday, June 05, 2019
Extreme cheese, Brevard edition
Bunny wanted to stop by Ingles to pick up some items and while there, I came across this lovely product:
Bunny and I like cheese. Bunny and I like chocolate. We both agreed we both don't like chocolate cheese.
Thursday, June 06, 2019
Extreme nightmare fuel, Brevard Edition
Bunny and I ate lunch at Hawg Wild BBQ, and right in the lobby was a totem the likes of which nightmares are made of:
Only by sharing can I rid myself of this image.
Acclimating
We've been here for only a few days and we're still trying to acclimatize to the area. It has nothing to do with the climate—the mid-70s at noon, low-to-mid 60s at midnight are a given. No, what has us screwed up is the time! It doesn't start to get dark until near 9:00 pm and that's messing us up, given that Brevard rolls up the sidewalks at 9:00 pm. It's like, “No! Wait! It's just now getting dark! You can't be closing up shop! Wait!”
Another oddity to get used to—the local UPS driver constantly honks the horn at anything moving. Cars, people, white squirrels dashing across the road—if it moves, the driver honks at it. Quite odd indeed.
The third item is only affecting me, but man, is it annoying. I use my iPad as a laptop while on vacation (it helps that I have a keyboard for it—one that sucks, but then again, any keyboard that isn't a full sized clicky IBM keyboard sucks, so take that observation for what it's worth). But I've been noticing over the past few days it's having issues with browsing the web. Web sites these days are so XXXXXXX bloated that anything short of a multicore computer with a browser that's been updated in the last ten minutes stands to crash more often than not. Just trying to browse the website of a local restaurant stands a 50/50 chance of crashing Safari on the iPad.
Sigh.
Friday, June 07, 2019
Extreme social clubs, Brevard edition
It was Captain Bligh who experienced the mutiny, not Captain Ahab, and certainly not Moby Dick, who (if I may spoil a 168-year-old book) did not die.
Still a cool sign, though.
Saturday, June 08, 2019
Extreme travel posters, Brevard edition
I guess they're going to hang out with Elvis.
Sunday, June 09, 2019
Extreme table tops, Brevard edition
Bunny and I had dinner at El Ranchero, a Mexican restaurant just down the street from The Red House Inn (literally—it's at the bottom of a hill). The place has good food for the price (seriously—the prices are incredible for what you get) but I don't recall them having art on the table in years past.
They do now, though, and …
… wow. Just … wow.
Monday, June 10, 2019
Extreme excitement, Brevard edition
So this is happening, right across the street from where we are staying:
And now, some background.
Bunny and I arrived back at The Red House Inn late afternoon to find a letter slipped under the door. It read:
Dear Local Resident,
The City of Brevard Fire Department will be conducting a training exercise in your neighborhood on Monday, June 10th from 6:00pm through 11:00pm. Specifically, the house located at the following address will be demolished through the use of controlled burns within the structure. Ultimately, the house will be leveled to the ground.
The house is located at 279 Probart Street.
Many tasks have to be …
If you would like to see what's all involved in the training exercise, we will have a designated area where family and friends are able to watch. We greatly appreciate your support during this valuable training exercise. If you have any questions …
There was also a handwritten addition:
Received June 10, 2019 9 a.m.
So sorry for the late notice. I
hope it will not be disruptive to
your stay. Please call us
with any concerns. Thank you,
Tracie
XXX-
XXX-
XXXX
At the time, we had no idea where 279 Probart Street was. We do now, though. And we had to modify our dinner plans a bit, given that our ability to drive anywhere was seriously curtailed.
It seems a developer bought several pieces of property and was in the process of clearing it to build more homes in Brevard. The house in question was bought several months ago and was going to be knocked down anyway.
And here it is, a bit past 11:00 pm and the fire rages on …
First watching a total eclipse from the front porch, and now a house fire from the front porch. I have to wonder what we'll see from the front porch next time we come!
Tuesday, June 11, 2019
“Well, that's one way to renovate a house”
Bunny managed to get a picture of the house across the street before yesterday's training exercise:
Photo by Bunny
And the house as it “stands” today (pun intended):
We're told it should all be gone by Saturday. We'll see.
Extreme church, Brevard edition
You can't throw a stone around here without hitting a church. As a result, some of them have a very unique design:
It's either a very daring design, or they'll turn anything into a church around here.
Wednesday, June 12, 2019
Extreme doorway, Brevard edition
Walking along Main street I came across this odd doorway:
If you look closely, you can see some trees on the other side of the door. Intrigued, I went around to the other side of the building and found it was just a front:
I guess it makes Main Street look better if there isn't an obviously missing building.
Thursday, June 13, 2019
Extreme contradiction, Brevard edition
So Bunny and I came across this lovely bit of signage in downtown Brevard:
So which is it? Loading, or parking? Or loading of wheelchairs for parking? Or parking for wheelchairs to be loaded? I'm so confused!
Friday, June 14, 2019
Extreme bongs, Sevierville edition
Bunny and I made the trek out to Sevierville, TN to visit Tennessee's largest flea market. Overall, it was “meh” and nothing at all what we were expecting. I'm not sure what we were expecting, but what we found wasn't quite it.
But that's not to say it wasn't amusing, like this interesting gas mask:
And just a few booths down from that, we hit a bookstore with both kinds of fiction, westerns and Amish:
And no, I am not making that bit about “both kinds of fiction” up—the clerk explicitly stated that as a direct quote. I did not handle any of the fiction, lest it burst into flames upon my touching it. The Amish theme kept going with this booth:
Yes, this booth would supply your trailer hitching and Amish honey and jam needs. Need I say more? Well, I could but I'm not sure if I should. And when did Tennessee become such a hotbed of Amish activity? Did I not get the memo? I must not have gotten the memo.
Just randomly, I saw several of these signs hanging from the ceiling:
At first, I thought that anyone wanting to buy an animal had to obtain permission from the office, so they could make sure the customer is capable of taking care of marmosets, sloths or whatever other livestock was being sold, but upon reflection, this sign could be a notice for the sellers, to make sure they aren't selling illegal wolverines, pangolins or alligators.
I also found amusing the number of booths selling cleaning products. We got accosted early on by one woman hawking a cleaning product. She mostly talked to Bunny, and every other word out of her mouth was “ma'am.” Every other sentence was “You can drink this stuff, but it wouldn't taste very good.” Nice to know, I guess (and it turned out, every cleaning product being hawked at the half dozen or so other booths was also “drinkable, but you wouldn't like the taste”). And I have no idea if the Amish would use such products.
Fur Ball at the Waffle House
Because of our little jaunt into Tennessee we ended up having a late dinner in Brevard, where “late” is “after 9:00 pm when the sidewalks roll up.”
The late-evening eating establishments are rather limited, which is why we found ourselves eating at the Waffle House at 11:30 pm on a Friday night. I swear, I never thought I would have the following conversation:
“Ooh, it looks busy.”
“Do you think we'll have to wait for a seat?”
“I hope not.”
We did not have to wait, as we grabbed the only two seats left at the counter.
There, we met with our waitresses, Kloey, with a “K” and Fur Ball (yes, “Fur Ball” was the name on her tag), which is a nickname given to her when she was 15 years old. I kid you not.
Saturday, June 15, 2019
Extreme nightmare fuel part II, Brevard edition
It's bad enough when the white squirrels are using abominations to take over the world, but now Slenderman is in town:
Then again, this is Transylvania County …
Sunday, June 16, 2019
Extreme friendly pets, Brevard edition
Monday, June 17, 2019
There and back again
We are home.
We missed my ETA by one minute—had a driver not cut in front of us and slowed down to 40mph on I-95 just as we were approaching our final exit, we would have arrived home at 10:20 pm instead of 10:21 pm. Stupid slow driver! Other than that, it was an uneventful 12 hour drive, give or take a few minutes.
Anyway, a picture of the former house at 279 Probart Street as of yesterday:
I do say, it's largely gone.
And thus ends our yearly adventure in Brevard. (As a side note—man is it nice to get back to a real keyboard again.)
Wednesday, June 26, 2019
Notes about a broken menu system
I am partaking of a local quick, consumable, gustatory establishment whereupon I spied a problematic carte du jour:
Methinks the local proprietor requires consultation with the originating equipment manufacturer to resolve the current conundrum.
Thursday, June 27, 2019
Those deployment blues
My department at The Corporation had a deployment this morning (2:00 am). These deployments don't happen that often (the last one happened in January of this year; last year we had a total of four deployments) but usually there are no problems afterwards.
This time we weren't so lucky.
It wasn't a problem with our code, but with a vendor our customer, The Monopolistic Phone Company, uses. The vendor in question wasn't forwarding some critical information we were sending back to The Monopolistic Phone Company. We didn't notice this initially since our testing just happened to use the other vendor The Monopolistic Phone Company uses. And while it technically wasn't our problem—getting that particular vendor to even look at a problem, much less solve it, is a multi-month and multi-money affair—practically speaking, it was our problem.
The base problem is that one vendor who shall remain nameless is supposed to forward all SIP headers that start with a common prefix, but they have a limit on the number of non-standard SIP headers they'll forward, and we've exceeded said limit. Apparently, a new feature we added, plus moving some existing data to its own header, bumped the number of headers past this limit. The fix was easy (just put the existing data we moved back in the old header while keeping it in the new header), but there was a bit of concern about installing it into production.
You see, because our customer is The Monopolistic Phone Company, and they have regulatory issues with respect to reliability to contend with, there's a whole process involved with deployment. Just for starters, we have to give them a 10-business-day notice of any changes, which they can veto …
Oh, and have I mentioned the very scary SLAs we have with them? Where vast amounts of money start flowing to The Monopolistic Phone Company for violations of said SLAs? So you can see why it takes a significant amount of time to get deployed, and why we have so few.
Fortunately, we're given a number of emergency deployments we can use and thus, we used one of them today.
All told, from initial bug fix to re-deployment took a total of three hours. That is the fastest deployment I've seen of our department's code.
Friday, June 28, 2019
“Will someone please rescue me from this Chinese fortune cookie factory?”
Tonight's fortune cookie is amusing in the way that only fortune cookies can be.
The other fortune cookie fortune was not nearly as interesting.
Thursday, July 04, 2019
“T'was the night after fireworks, and all through the land, I can only hope, that no one lost a hand.”
It's that time of the year again when people spend vast amounts of time and money shooting off fireworks. As of now, it no longer sounds like a war zone and the smell of black powder has drifted onward. So I hope everyone had a safe Fourth of July and that this:
never happened to you.
Tuesday, July 09, 2019
How can a “commercial grade” web robot be so badly written?
Alex Schroeder was checking the status of web requests, and it made me wonder about the stats on my own server. One quick script later and I had some numbers:
Status | result | requests | percent |
---|---|---|---|
Total | - | 64542 | 100.01 |
200 | OKAY | 53457 | 82.83 |
206 | PARTIAL_CONTENT | 12 | 0.02 |
301 | MOVE_PERM | 2421 | 3.75 |
304 | NOT_MODIFIED | 6185 | 9.58 |
400 | BAD_REQUEST | 101 | 0.16 |
401 | UNAUTHORIZED | 147 | 0.23 |
404 | NOT_FOUND | 2000 | 3.10 |
405 | METHOD_NOT_ALLOWED | 41 | 0.06 |
410 | GONE | 5 | 0.01 |
500 | INTERNAL_ERROR | 173 | 0.27 |
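For the curious, the “quick script” is little more than counting the status field in the access log—something along these lines (a sketch; it assumes Apache's common log format and a made-up filename):

local count,total = {},0

for line in io.lines("access.log") do
  local status = line:match('" (%d%d%d) ') -- the status follows the quoted request
  if status then
    count[status] = (count[status] or 0) + 1
    total         = total + 1
  end
end

for status,n in pairs(count) do
  print(string.format("%s %6d %6.2f%%",status,n,n * 100 / total))
end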
I'll have to check the INTERNAL_ERRORs and look into those 12 PARTIAL_CONTENT responses, but the rest seem okay. I was curious to see what was being requested that I didn't have, when I noticed that the MJ12Bot was producing the majority of the NOT_FOUND responses.
Yes, sadly, most of the traffic around here is from bots. Lots and lots of bots.
requests | percentage | user agent |
---|---|---|
47721 | 74 | Total (out of 64542) |
16952 | 26 | The Knowledge AI |
9159 | 14 | Mozilla/5.0 (compatible; SemrushBot/3~bl; +http://www.semrush.com/bot.html) |
5633 | 9 | Mozilla/5.0 (compatible; VelenPublicWebCrawler/1.0; +https://velen.io) |
4272 | 7 | Mozilla/5.0 (compatible; AhrefsBot/6.1; +http://ahrefs.com/robot/) |
4046 | 6 | Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) |
3170 | 5 | Mozilla/5.0 (compatible; Go-http-client/1.1; +centurybot9@gmail.com) |
2146 | 3 | Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/) |
1197 | 2 | Mozilla/5.0 (compatible; DotBot/1.1; http://www.opensiteexplorer.org/dotbot, help@moz.com) |
1146 | 2 | istellabot/t.1.13 |
But it's been that way for years now. C'est la vie.
So I started looking closer at MJ12Bot and the requests it was generating, and … they were odd:
//%22http://www.thomasedison.com//%22
//%22https://github.com/spc476/NaNoGenMo-2018/blob/master/run.lua/%22
//%22/2018/08/24.1/%22
//%22https://kottke.org/19/04/life-sized-lego-electronics/%22
And so on. As they describe it:
Why do you keep crawling 404 or 301 pages?
We have a long memory and want to ensure that temporary errors, website down pages or other temporary changes to sites do not cause irreparable changes to your site profile when they shouldn't. Also if there are still links to these pages they will continue to be found and followed. Google have published a statement since they are also asked this question, their reason is of course the same as ours and their answer can be found here: Google 404 policy.
But those requests? They have a real issue with their bot. Looking over the requests, I see that they're pages I've linked to, but for whatever reason, their bot is making requests for remote pages on my server. Worse yet, they're quoted! The %22 parts—that's an encoded double quote. It's as if their bot saw “<A HREF="http://www.thomasedison.com">” and treated it as not only a link on my server, but escaped the quotes when making the request!
Pssst! MJ12Bot! Quotes are optional! Both “<A HREF="http://www.thomasedison.com">” and “<A HREF=http://www.thomasedison.com>” are equivalent!
Sigh.
Annoyed, I sent them the following email:
- From
- Sean Conner <sean@conman.org>
- To
- bot@majestic12.co.uk
- Subject
- Your robot is making bogus requests to my webserver
- Date
- Tue, 9 Jul 2019 17:49:02 -0400
I've read your page on the mj12 bot, and I don't necessarily mind the 404s your bot generates, but I think there's a problem with your bot making totally bogus requests, such as:
//%22https://www.youtube.com/watch?v=LnxSTShwDdQ%5C%22 //%22https://www.zaxbys.com//%22 //%22/2003/11/%22 //%22gopher://auzymoto.net/0/glog/post0011/%22 //%22https://github.com/spc476/NaNoGenMo-2018/blob/master/valley.l/%22I'm not a proxy server, so requesting a URL will not work, and even if I was a proxy server, the request itself is malformed so badly that I have to conclude your programmers are incompetent and don't care.
Could you at the very least fix your robot so it makes proper requests?
I then received a canned reply saying that they have, in fact, received my email and are looking into it.
Nice.
But I did a bit more investigation, and the results aren't pretty:
Status | result | number | percentage |
---|---|---|---|
Total | - | 2164 | 100.00 |
200 | OKAY | 505 | 23.34 |
301 | MOVE_PERM | 4 | 0.18 |
404 | NOT_FOUND | 1655 | 76.48 |
So not only are they responsible for 83% of the bad requests I've seen, but nearly 77% of the requests they make are bad!
Just amazing programmers they have!
Wednesday, July 10, 2019
Some more observations about the MJ12Bot
I received another reply from MJ12Bot about their badly written bot, and it just said the person responsible for handling enquiries was out of the office for the day and I should expect a response tomorrow. We shall see. In the meantime, I decided to check some of the other bots hitting my site and see how well they fare, request-wise. And I'm using the logs from last month for this, so these results are for 30 days of traffic.
requests | percentage | user agent |
---|---|---|
167235 | 70 | Total (out of 239641) |
46334 | 19 | The Knowledge AI |
38097 | 16 | Mozilla/5.0 (compatible; SemrushBot/3~bl; +http://www.semrush.com/bot.html) |
17130 | 7 | Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/) |
15928 | 7 | Mozilla/5.0 (compatible; AhrefsBot/6.1; +http://ahrefs.com/robot/) |
12358 | 5 | Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) |
8929 | 4 | Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler) |
8908 | 4 | Gigabot |
7872 | 3 | Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/) |
6942 | 3 | Barkrowler/0.9 (+http://www.exensa.com/crawl) |
4737 | 2 | istellabot/t.1.13 |
So let's see some results:
Bot | 200 | % | 301 | % | 304 | % | 400 | % | 403 | % | 404 | % | 410 | % | 500 | % | Total | % |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The Knowledge AI | 42676 | 92.1 | 3352 | 7.2 | 0 | 0.0 | 127 | 0.3 | 4 | 0.0 | 170 | 0.4 | 5 | 0.0 | 0 | 0.0 | 46334 | 100.0 |
SemrushBot/3~bl | 36088 | 94.7 | 1873 | 4.9 | 0 | 0.0 | 110 | 0.3 | 0 | 0.0 | 21 | 0.1 | 5 | 0.0 | 0 | 0.0 | 38097 | 100.0 |
BLEXBot/1.0 | 16633 | 97.1 | 208 | 1.2 | 124 | 0.7 | 114 | 0.7 | 0 | 0.0 | 46 | 0.3 | 5 | 0.0 | 0 | 0.0 | 17130 | 100.0 |
AhrefsBot/6.1 | 15840 | 99.4 | 78 | 0.5 | 0 | 0.0 | 4 | 0.0 | 0 | 0.0 | 5 | 0.0 | 0 | 0.0 | 1 | 0.0 | 15928 | 99.9 |
bingbot/2.0 | 12304 | 99.6 | 35 | 0.3 | 0 | 0.0 | 6 | 0.0 | 0 | 0.0 | 3 | 0.0 | 5 | 0.0 | 0 | 0.0 | 12353 | 99.9 |
MegaIndex.ru/2.0 | 8412 | 94.2 | 456 | 5.1 | 0 | 0.0 | 24 | 0.3 | 0 | 0.0 | 36 | 0.4 | 1 | 0.0 | 0 | 0.0 | 8929 | 100.0 |
Gigabot | 8428 | 94.6 | 448 | 5.0 | 0 | 0.0 | 23 | 0.3 | 0 | 0.0 | 7 | 0.1 | 2 | 0.0 | 0 | 0.0 | 8908 | 100.0 |
MJ12bot/v1.4.8 | 2015 | 25.6 | 175 | 2.2 | 0 | 0.0 | 2 | 0.0 | 0 | 0.0 | 5680 | 72.2 | 0 | 0.0 | 0 | 0.0 | 7872 | 100.0 |
Barkrowler/0.9 | 6604 | 95.1 | 300 | 4.3 | 0 | 0.0 | 10 | 0.1 | 0 | 0.0 | 28 | 0.4 | 0 | 0.0 | 0 | 0.0 | 6942 | 99.9 |
istellabot/t.1.13 | 4705 | 99.3 | 28 | 0.6 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 4 | 0.1 | 4737 | 100.0 |
Percentage-wise, of the top 10 bots hitting my blog (and in fact, these are the top ten clients hitting my blog), MJ12Bot is just bad, at 72% bad requests. It's hard to say what the second worst one is, but I'll have to give it to “The Knowledge AI” bot (and my search-foo is failing me in finding anything about this one). Percentage-wise, it's about on par with the others, but some of its requests are also rather odd:
/%22
/%22https:/www.brevardnc.org/business-directory/5474/rockys-soda-shop/%22
/%22http:/brevardnc.org/%22
/%22https:/www.greenvillesc.gov/%22
/%22https:/en.m.wikipedia.org/wiki/Caesars_Head_State_Park/%22
/%22https:/www.transylvaniacounty.org/town-of-rosman/%22
It appears to be a similar problem as MJ12Bot, but one that doesn't happen nearly as often.
Now, this isn't to say I don't have some legitimate “not found” (404) results. I did come across some actual valid 404 results on my own blog:
/2004/08/18/mias@speedy.com.pe
/2012/08/10/HREF
/2013/01/02/menamena
/2013/02/01/HREF
/2014/05/04/HREF
/2015/02/10/B000FBJCJE
/2015/07/10/mailtp:admin@macropayday.com
Some are typos, some are placeholders for links I forgot to add. And those I can fix. I just wish someone would fix MJ12Bot. Not because it's bogging down my site with unwanted traffic, but because it's just bad at what it does.
Thursday, July 11, 2019
Yet more observations about the MJ12Bot
I received a reply about MJ12Bot! Let's see …
- From
- Majestic <XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX>
- To
- Sean Conner <sean@conman.org>
- Subject
- [Majestic] Re: Your robot is making bogus requests to my webserver
- Date
- Thu, 11 Jul 2019 08:34:13 +0000
##- Please type your reply above this line -##
Oh … really? Sigh.
Anyway, the only questionable bit in the email was this line:
The prefix // in a link of course refers to the same site as the current page, over the same protocol, so this is why these URLs are being requested back from your server.
which is … somewhat correct. It does mean “use the same protocol” but the double slash denotes a “network path reference” (RFC-3986, section 4.2) where, at a minimum, a hostname is required. If this is just a misunderstanding on the developers' part, it could explain the behavior I'm seeing.
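A concrete example (mine, not from their reply): on a page served from https://boston.conman.org/, a link written as //example.com/foo is a network path reference and resolves to https://example.com/foo—everything between the “//” and the next “/” is the authority, which must contain a host. By that rule, a “link” like //%22http://www.thomasedison.com//%22 would have to treat “%22http:” as its authority, which is nonsense; it is not a reference back to the current site.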
And speaking of behavior, I decided to check the logs (again, using last month) one last time for two reports.
404 (not found) | 200 (okay) | Total requests | User agent |
---|---|---|---|
170 | 42676 | 46334 | The Knowledge AI |
21 | 36088 | 38097 | Mozilla/5.0 (compatible; SemrushBot/3~bl; +http://www.semrush.com/bot.html) |
46 | 16633 | 17130 | Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/) |
5 | 15840 | 15928 | Mozilla/5.0 (compatible; AhrefsBot/6.1; +http://ahrefs.com/robot/) |
3 | 12304 | 12353 | Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) |
36 | 8412 | 8929 | Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler) |
7 | 8428 | 8908 | Gigabot |
5680 | 2015 | 7872 | Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/) |
28 | 6604 | 6942 | Barkrowler/0.9 (+http://www.exensa.com/crawl) |
0 | 4705 | 4737 | istellabot/t.1.13 |
404 (not found) | 200 (okay) | Total requests | User agent |
---|---|---|---|
5680 | 2015 | 7872 | Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/) |
656 | 109 | 768 | Mozilla/5.0 (compatible; MJ12bot/v1.4.7; http://mj12bot.com/) |
177 | 45 | 553 | Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2) |
170 | 42676 | 46334 | The Knowledge AI |
120 | 0 | 120 | Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0) |
(Note: The number of 404s and 200s might not add up to the total—there might be other requests that returned a different status not reported here.)
MJ12Bot is the 8th most active client on my site, yet it has the top two spots for bad requests, beating out #3 by over an order of magnitude (35 times the amount in fact).
But I don't have to worry about it since the email also stated they removed my site from their crawl list. Okay … I guess?
Friday, July 12, 2019
Once more with the MJ12Bot
So I replied to MJ12Bot's reply outlining everything I've mentioned so far about the sheer number of bad links they're following and how their explanation of “//” wasn't correct. They then replied:
- From
- Majestic <XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX>
- To
- Sean Conner <sean@conman.org>
- Subject
- [Majestic] Re: Your robot is making bogus requests to my webserver
- Date
- Fri, 12 Jul 2019 07:27:48 +0000
##- Please type your reply above this line -##
Your ticket reference is XXXXX. To add any comments, just reply to this email.
I can tell from your responses that you are much better than us, so we can only continue to avoid visiting your site.
Kind Regards
XXXXX
I guess this is their way of politely telling me to XXXXXXXX. Fair enough.
Tuesday, July 16, 2019
Notes on blocking the MJ12Bot
The MJ12Bot is the first robot listed in the Wikipedia's robots.txt file, which I find amusing for obvious reasons. In the Hacker News comments there's a thread specifically about the MJ12Bot, and I replied to a comment about blocking it. It's not that easy, because it's a distributed bot that has used 136 unique IP addresses just last month. Because of that comment, I decided I should expand on some of those numbers here.
The first table is the number of addresses from January through June, 2019, to show they're not all from a single netblock. The address format “A.B.C.D” will represent a unique IP address, like 172.16.15.2; “A.B.C” will represent the IP addresses 172.16.15.0 to 172.16.15.255; “A.B” will represent the range 172.16.0.0 to 172.16.255.255; and finally “A” will represent the range 172.0.0.0 to 172.255.255.255.
Address format | number |
---|---|
A.B.C.D | 312 |
A.B.C | 256 |
A.B | 86 |
A | 53 |
Next are the unique addresses from all of 2018 used by MJ12Bot:
Address format | number |
---|---|
A.B.C.D | 474 |
A.B.C | 370 |
A.B | 125 |
A | 66 |
This wide distribution can easily explain why Wikipedia found that it ignores any rate limits set. Each individual node of MJ12Bot probably followed the rate limit, but it's a hard problem to coordinate across … what? 500 machines across the world?
It seems the best bet is to ban MJ12Bot via robots.txt:

User-agent: MJ12bot
Disallow: /
While I haven't added MJ12Bot to my own robots.txt file, it hasn't hit my site since they removed me from their crawl list, so it appears it can be tamed.
Tuesday, August 06, 2019
There are even bots crawling gopherspace
My webserver isn't the only program beset by bots—my gopher server is also being crawled.
I identified one bot repeatedly trying to request the selector (the gopher equivalent of a web page) Phlog when it should be trying to request Phlog: (note the ending “:”). On the web server, I could inform the client of the proper link with a “permanent redirect” and hope it gets the hint, but gopher lacks such a facility. All this bot was getting back was the rather lackluster gopher error, which, for an automated process, is pretty darned hard to distinguish from actual content, due to the simplicity of the protocol.
On a lark, I decided to see if there was a gopher server on the IP address of the bot, and lo', there was. I was able to send an email to the organization responsible, and they fixed the error.
That still left a few bots that thought I was running a web server on port 70. Yes, I was getting requests for “GET / HTTP/1.1” over and over again, and these particular bots weren't getting the clue that they weren't talking to a web server from the lack of a proper web server response. I decided to handle these by replying as a teapot, because why not? And to further support the joke, my gopher server will respond not only to the web method GET but also to BREW (and to think I wanted to write a gopher server, not a web server … sigh). Hopefully that will placate them and they'll go away (although on second thought, I think I should have done a permanent redirect to gopher://gopher.conman.org/ to see how well the web bots would handle that!).
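Detecting the HTTP requests is about as simple as it sounds. A sketch of the idea (not my actual gopher server code; the function name is made up, and it assumes the trailing CRLF has already been stripped from the request line):

local function teapot_response(line)
  local method = line:match("^(%u+) %S+ HTTP/1%.[01]$")
  if method == "GET" or method == "BREW" then
    return "HTTP/1.1 418 I'm a teapot\r\n"
        .. "Content-Length: 0\r\n"
        .. "Connection: close\r\n"
        .. "\r\n"
  end
  return nil -- not HTTP---treat the line as a normal gopher selector
end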
An MJ12Bot update
When last we left the MJ12Bot saga, it was pretty apparent it wasn't a well written bot, but true to their word, they haven't crawled my server since.
“The Knowledge AI” bot, however … it is trying to repeatedly fetch /%22https:/mj12bot.com/%22 from my web server. What is it with these horribly written web bots?
Thursday, August 08, 2019
Unfortunately, my blog on Gopher is a second class citizen
I will be the first to admit that my blog on gopher is a second-class citizen. When I wrote the gopher server I took the quickest and easiest way to adapt my blog to a (close enough) text-only medium by feeding requests through Lynx. Note I didn't say “well done” (of course not! I said it was a “medium!” Ba-dum-bump! I'll be here all week! Don't forget to tip the wait staff!) or even pretty.
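The conversion itself is little more than shelling out to Lynx. A sketch of the approach (not my actual server code; the URL is just an example and the width value is a guess):

local function html_to_text(url)
  local lynx = io.popen(string.format("lynx -dump -width=76 %q",url))
  local text = lynx:read("*a")
  lynx:close()
  return text
end

print(html_to_text("https://boston.conman.org/2019/06/13.1"))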
For instance, this entry looks like this via gopher:
Extreme contradiction, Brevard edition
So Bunny and I came across this lovely bit of signage in downtown
[1]Brevard:[“The white zone is for immediate loading and unloading of passengers only.
There is no stopping in the red zone.” / “The red zone is for immediate
loading and unloading of passengers only. There is no stopping in the white
zone.” / “No, the white zone is for loading of passengers and there is no
stopping in a RED zone.” / “The red zone has always been for loading and
unloading of passengers. There's never stopping in a white zone.” / “Don't
you tell me which zone is for loading, and which zone is for stopping!”]

So which is it? Loading, or parking? Or loading of wheelchairs for parking?
Or parking for wheelchairs to be loaded? I'm so confused!

References
1. https://www.cityofbrevard.com/
First off, there's no indication that there's a photo on that page, unless you realize I'm using a very old web convention of describing the image contents by placing said description inside of square brackets.
Secondly, there is no actual link to the picture on the converted entry.
Third, on most (all?) graphical browsers, just holding the mouse over the images will pop up the text above (I don't think many people know about this).
And fourth, the text is a reference to the movie “Airplane!” which does fit the subject of the picture on that page, which is of two traffic signs giving conflicting parking directions (this really doesn't have anything to do with the second-class status of the post on gopher—just more of an FYI type of thing).
I used Lynx because I didn't want to bother writing code to convert HTML to plain text—especially when I had access to a tool that can do it for me. It's just that it doesn't really do a great job because I expect the HTML to do the formatting for me. And I really do need to write a description of the photo in addition to the caption I include for each picture. Ideally, it would look something like:
Extreme contradiction, Brevard edition
So Bunny and I came across this lovely bit of signage in downtown
Brevard [1]:[Image of two traffic signs one above the other. The upper one says
“NO PARKING, LOADING ZONE” and the lower one saying “RESERVED PARKING
for the HANDICAPPED”—“The white zone is for immediate loading and
unloading of passengers only. There is no stopping in the red zone.” /
“The red zone is for immediate loading and unloading of passengers only.
There is no stopping in the white zone.” / “No, the white zone is for
loading of passengers and there is no stopping in a RED zone.” / “The
red zone has always been for loading and unloading of passengers. There's
never stopping in a white zone.” / “Don't you tell me which zone is for
loading, and which zone is for stopping!”] [2]

So which is it? Loading, or parking? Or loading of wheelchairs for parking?
Or parking for wheelchairs to be loaded? I'm so confused!

References
[1] https://www.cityofbrevard.com/
[2] gopher://gopher.conman.org/IPhlog:2019/06/13/Confusion.jpg
And then reality sets in and I'm just not up to writing an HTML-to-text translator right now.
Sigh.
Sorry, gopherspace.
The “Tonya Harding Solution” to computer benchmarks
… we knew we had to do more to truly earn those extra credit points. Luckily, I had one final optimization idea:
The Tonya Harding Solution: The benchmark program works by calling the optimized function, calling the naive function, and comparing the two times. And this gave me a truly devilish idea. I added some code to calc_depth_optimized that created a child process. That child process would wait for calc_depth_naive to start running, then send a SIGUSR1 signal to the benchmark process. That signal would interrupt calc_depth_naive and jump to a special signal handler function I'd installed:

void our_handler(int signal) {
    // if you can't win the race, shoot your opponent in the legs
    sleep(image_size * 4 / 10000);
}

So while we did implement a number of features that made our program faster, we achieved our final high score by making the naive version a whole lot slower. If only that 4 had been a 5 …
CS 61C Performance Competition
I'll have to hand it to Carter Sande for literally beating the competition in benchmarking.
(Although it wasn't Tonya Harding who did the attack, but Shane Stant—hired by Harding's ex-husband, Jeff Gillooly—who attacked Nancy Kerrigan with a police baton, not a gun. Harding herself claims she had nothing to do with the attack.)
Who knew ice cream could be so hard?
We have an ice cream maker. I like chocolate ice cream, the darker, the better. And the instruction manual for the ice cream maker lists a recipe for a “decadent chocolate ice cream” which not only calls for Dutch processed cocoa, but 8 ounces (230g) of bittersweet chocolate. I opted for a really dark chocolate, like on the order of 90% cocoa dark chocolate.
Yeah, I like my chocolate dark.
I'm also trying to cut sugar out of my diet as much as possible, so I decided to use a bit less sugar than what the recipe calls for, so this stuff isn't going to be overly sweet.
I get the ice cream base churned, into a plastic bowl and in the freezer, and I wait for several hours, eagerly awaiting some deep, dark, decadent chocolate ice cream.
I end up with a deep, dark, decadent ice chocolate rock.
This isn't hard ice cream. This isn't even ice cream. It's an ice rock is what it is. I try dropping the bowl a few inches onto the kitchen counter to show Bunny how rock-like it is, and the bowl hits the counter, bounces off and shatters onto the floor.
I mentioned it was in a plastic bowl, right?
There are shards of plastic across the kitchen.
The deep, dark, chocolate ice rock is in one piece.
I think the ice cream base was too dense for much, if any, air to get whipped in while churning. Bunny thinks the low sugar content contributed to the rock-like consistency. Both are probably to blame for this. I do recall that the last time I made the “decadent chocolate ice cream, but with all the sugar,” it did tend towards the harder side of ice cream. So I think the next time I should try the basic vanilla recipe with less sugar and see what happens. If that turns out fine, then try the basic chocolate recipe.
Saturday, August 10, 2019
From chocolate ice rock to vanilla ice cream
My friend Squeaky replied to my chocolate ice rock post, backing up Bunny's assertion that sugar content is key when making ice cream—too little, and ice crystallization takes over, making for a solid ice rock rather than ice cream. So on Friday, I went back to basics and made the basic vanilla ice cream recipe:
- 2c (500ml) heavy cream
- 1c (250ml) whole milk
- ¾c (180ml) sugar
- 2tsp (10ml) vanilla extract
Mix, chill, churn.
I wasn't quite satisfied with making a vanilla ice cream, so I decided to add cherries—I chopped up a bunch of cherries (the hardest part was getting the seeds out—man, they were stubborn), a bit of sugar, chill, and add in the last few minutes of churning. I was initially concerned because the instant I added the cherries, the mixture started loosening up—I think the cherries were a bit too warm. Next time I think I'll freeze the cherries before adding them.
But after sitting in the freezer overnight, the results were much better—I actually had ice cream and not a large rock. Lesson learned—sugar is key to ice cream.
Sunday, August 11, 2019
See, this is one reason why I'm leery about updates
A few days ago I noticed my iPhone notifying me of a “critical security update.” And it was only for the iPhone, not the iPad. Sigh. Fine. Download, install, and get on with my life.
Only a few days went by before I finally clued in that I wasn't receiving any actual phone calls! Not that that really bugs me, as most of the calls I get these days are spam calls from around the country, but it was odd that my dad had left two voice mails and yet his calls did not show up on the recently called list.
So I check, and yes, I have no service. I try rebooting the phone, and that didn't work. I tried resetting the network, and that didn't work (and had the side effect of wiping out all known passwords for existing Wi-Fi networks).
Bunny suggested I go through the troubleshooting pages on the Monopolistic Phone Company website as she waited on the phone for a Monopolistic Phone Company Representative, and the race was on to see who finished first.
Turns out, I won. I think it was step five where the Monopolistic Phone Company had me turn off the phone (and by “turn off the phone” I mean a hard power down and not just shutting off the screen), pull out the SIM card, push the SIM card back in, and turn the phone on. That turns out to have worked.
And now I can receive all those spam calls warning me that this is the final, no, we really mean it, final warning that my car warranty has expired and if I don't act now I'm doomed to financial ruin. I honestly don't know how I lived without those calls.
Wednesday, August 14, 2019
“Here's a nickel kid. Get yourself a real computer”
“Here you go, kid,” said Dad, as he handed me a book with a bright yellow jacket. “I heard this ‘Linux’ is the next up-and-coming thing in computers.”
I look at the title: Linux for Dummies: Quick Reference. “Um … thank you. You do realize I run Linux both at home, and at work, right?”
“Hey, maybe you can learn something from it.”
So I'm skimming through the book and … elm? pine? rsh? FVWM?
When was this book written? …
Oh … 2000.
That explains it.
Everybody these days is running mutt, ssh and GNOME.
It'll fit nicely on the shelf next to Sams Teach Yourself TCP/IP in 24 Hours and Inside OS/2.
Monday, August 19, 2019
Notes on an overheard phone conversation at The Ft. Lauderdale Office of The Corporation
“Hello?”
“Yes, who is this?”
“Who is this”
“This is … Sean.”
“And this is … XXXX [of the IT department].”
“Okay.”
“This is in reference to … ”
“Your email … ”
“About … ”
“Um … the laptop?”
“Oh yes! Sean! Of course! See, I get a lot of spam calls these days.”
“Yeah, I get a lot of phishing emails from you guys these days.”
“Ha!”
Tuesday, August 20, 2019
Profiles of Lua code
I started “Project: Sippy-Cup” six years ago as a “proof-of-concept” and it ended up in production (without my knowledge and to my dismay at the time) about a year or two afterwards. The code was written to be correct, not fast. And in the four or five years it's been running, performance has never been a real issue. But that's changing, as the projected traffic levels shoot past the “oh my” and into the “oh my God” territory.
“Project: Sippy-Cup” processes a lot of SIP messages, which are text based, so there's a lot of text processing. I use LPEG for ease of writing parsers, but it's not necessarily as fast as it could be.
There are two issues with LPEG—it has infinite look-ahead, and ordered choices. So the code that checks for the SIP method:
method = lpeg.P"ACK"
       + lpeg.P"BYE"
       + lpeg.P"CANCEL"
       + lpeg.P"INFO"
       + lpeg.P"INVITE"
       + lpeg.P"MESSAGE"
       + lpeg.P"NOTIFY"
       + lpeg.P"OPTIONS"
       + lpeg.P"PRACK"
       + lpeg.P"PUBLISH"
       + lpeg.P"REFER"
       + lpeg.P"REGISTER"
       + lpeg.P"SUBSCRIBE"
       + lpeg.P"UPDATE"
       + (lpeg.R("AZ","az","09") + lpeg.S"-.!%*_+`'~")^1
will first compare the input to “ACK”; if it doesn't match, it then backtracks and tries comparing the input to “BYE”, and so on down the list until it gets to the last rule, which is a “catch-all” rule. It would be easy to reorder the list so that the checks run from “most-likely” to “least-likely,” but really the entire list could be removed, leaving just the catch-all:
method = (lpeg.R("AZ","az","09") + lpeg.S"-.!%*_+`'~")^1
I have the same issue with SIP headers—there are 100 headers that are “parsed” (for various values of “parsed”) but I only really look at a dozen of them—the rest just slow things down and can be handled by a generic parsing rule, as sketched below. The full set of headers was added during the “proof-of-concept” stage, since I wasn't sure at the time which headers would be critical and which ones wouldn't, and I've never gone back and cleaned up the code.
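A generic rule for the headers I don't care about only needs to grab the name and the raw value up to the CRLF, something along these lines (a sketch using my own names, not the project's actual grammar):

-- catch-all header rule: header name, colon, then everything up to the CRLF
-- (a sketch; token uses the same character set as the method rule above)
local lpeg = require "lpeg"
local P,R,S,C = lpeg.P,lpeg.R,lpeg.S,lpeg.C

local CRLF    = P"\r\n"
local LWS     = S" \t"^0
local token   = (R("AZ","az","09") + S"-.!%*_+`'~")^1
local generic = C(token) * P":" * LWS * C((P(1) - CRLF)^0) * CRLF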
Another aspect is the sheer number of validity checks the code makes on the incoming SIP message. Many of the checks don't really have any effect on the processing due to managerial mandate at the time, so they could go (I wanted strict checking that bailed on any error; my manager at the time did not want such strictness—no need to guess who won, but I still track parsing irregularities).
So while I feel these are two areas where the code could be made faster, I don't know if that's where the time is spent, and so it's time to profile the code.
The issue now is that the system profiler will profile the code as C, not as Lua. I don't need to profile the code to know the Lua VM gets called all the time. What I need to know is what Lua code is called all the time. But it can't hurt to try the system profiler, right? And given that the regression test has over 12,000 test cases, we should get some good profiling information, right?
% time | cumulative seconds | self seconds | calls | self ms/call | total ms/call | name |
---|---|---|---|---|---|---|
13.32 | 3.47 | 3.47 | | | | match |
12.74 | 6.79 | 3.32 | | | | luaV_execute |
9.31 | 9.22 | 2.43 | | | | luaS_newlstr |
6.83 | 11.00 | 1.78 | | | | luaD_precall |
5.31 | 12.38 | 1.39 | | | | luaH_getstr |
3.38 | 13.26 | 0.88 | | | | luaD_poscall |
2.57 | 13.93 | 0.67 | | | | index2adr |
2.19 | 14.50 | 0.57 | | | | luaV_gettable |
Not bad at all.
The function match() is the LPEG execution engine, which matches my initial thoughts on the code. It wasn't much work to remove the extraneous SIP headers I don't bother with, and to simplify the method parsing (see above). Re-profile the code and:
% time | cumulative seconds | self seconds | calls | self ms/call | total ms/call | name |
---|---|---|---|---|---|---|
14.25 | 3.67 | 3.67 | | | | luaV_execute |
11.22 | 6.56 | 2.89 | | | | luaS_newlstr |
10.49 | 9.26 | 2.70 | | | | match |
6.33 | 10.89 | 1.63 | | | | luaD_precall |
5.20 | 12.23 | 1.34 | | | | luaH_getstr |
2.76 | 12.94 | 0.71 | | | | index2adr |
2.58 | 13.61 | 0.67 | | | | luaD_poscall |
2.41 | 14.23 | 0.62 | | | | luaV_gettable |
match() drops from first to third place, so that's good. And a load test done by the QA engineer showed an easy 25% increase in message processing.
But that's really as far as I can go with profiling. I did a run where I removed most of the validation checks (but only after I saw none of them were triggered over the past 30 days) and didn't see much of a speed improvement. So I really need to profile the Lua code as Lua code and not as C.
That's going to take some work.
Wednesday, August 21, 2019
“Nobody Expects the Surprising Profile Results!”
It still surprises me that the results of profiling can be so surprising.
Today I profiled Lua code as Lua. It was less work than expected and all it took was about 30 lines of Lua code. For now, I'm just recording the file name, function name (if available—not all Lua functions have names) and the line number as that's all that's really needed.
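A minimal sketch of the kind of hook-based sampler I mean (my own code, not the actual 30 lines; the real thing may well sample on a time interval rather than a VM instruction count):

-- a tiny sampling profiler built on the debug library (sketch only)
local samples = {}

local function hook()
  local info = debug.getinfo(2,"Sln")        -- the function that was running
  if info then
    local key = string.format("%s:%s:%d",
                              info.source,
                              info.name or "",
                              info.currentline)
    samples[key] = (samples[key] or 0) + 1
  end
end

local function start(count)
  debug.sethook(hook,"",count or 1000)       -- fire every N VM instructions
end

local function stop()
  debug.sethook()
  return samples                             -- key -> hit count
end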
But as I was writing the code to profile the code, I wasn't expecting any real results from profiling “Project: Sippy-Cup.” The code is really just:
- get packet
- parse packet
- validate SIP message
- acknowledge SIP message
- get relevant data from SIP message
- query “Project: Lumbergh” (business logic)
- wait for results
- send results in SIP message
- wait for SIP acknowledgement
- done
I was expecting a fairly uniform profile result, and if pressed, maybe a blip for awaiting results from “Project: Lumbergh” as that could take a bit. What I did not expect was this:
count | file/function/line |
---|---|
21755 | @third_party/LPeg-Parsers/ip-text.lua::44 |
6000 | @XXXXXXXXXXXXXXXXXXXXXXXXXX:send_query:339 |
2409 | @XXXXXXXXXXXXXXXXXXXX:XXXXXXXXX:128 |
After that, the results tend to flatten out. And yes, the send_query() makes sense, but ip-text.lua? Three times more than the #2 spot? This line of code?
This line of code?
local n = tonumber(capture,16)
That's the hot spot? Wait? I'm using IPv6 for the regression test? When did that happen? Wait? I'm surprised by that as well? What is going on here?
Okay, breathe.
Okay.
I decide to do another run, this time at a finer grain, about 1/10 the previous profiling interval and see what happens.
count | file/function/line |
---|---|
133186 | @third_party/LPeg-Parsers/ip-text.lua::44 |
29683 | @third_party/LPeg-Parsers/ip-text.lua::46 |
21910 | @third_party/LPeg-Parsers/ip-text.lua::45 |
19749 | @XXXXXXXXXXXXXXXXXXXXXXXXXXX:XXXXXXXXXXXXX:279 |
And the results flatten out after that. So the hot spot of “Project: Sippy-Cup” appears to be this bit of code:
local h16 = Cmt(HEXDIG^1,function(_,position,capture)
  local n = tonumber(capture,16)
  if n < 65536 then
    return position
  end
end)
send_query() doesn't show up until the 26th spot, but since it's finer grained, it does show up multiple times, just at different lines.
So … yeah.
I have to think on this.
Done with the profiling for now
After some more profiling work I've come to the conclusion that yes, the hot spot is in ip-text.lua, and that after that function the profile is quite flat. The difference between ip-text.lua and the number two spot isn't quite as bad as I initially thought, although it took some post-processing to lump all the function calls together to determine that (required because Lua can't always know the “name” of a function, but with the line numbers they can be reconciled). It's only called about twice as much as the next most used function, instead of the nearly 4½ times it appeared earlier.
As far as profiling “Project: Sippy-Cup” is concerned, I think I'm about as far as I can go at this time. I did improve the performance with some minor code changes and any more improvement will take significant resources. So I'm calling it good enough for now.
Thursday, August 22, 2019
Through Windows Darkly
I arrived at The Ft. Lauderdale Office of the Corporation to find a package sitting on my desk. It had finally arrived—the Corporate Overlords' mandated managed laptop.
It's only been a year and a half that we've been threatened with promised new managed laptops to replace the self-managed ones we currently use, but in the end, it was decided to let us keep our current machines and use the new “managed laptops” to access the Corporate Overlords' network.
I think this was decided due to the cultural differences between The Corporation and the Corporate Overlords—we're Mac and they're Windows.
Yes, of course the new managed laptop is a Windows box.
It's a Lenovo ThinkPad T480, and compared to the current laptops I have at work, a Linux system with 4 2.8GHz CPUs with 4G RAM and a Mac with 8 2.8GHz CPUs and 16G RAM, it's a bit underpowered with 4 1.8GHz CPUs and 8G RAM. I will admit that the keyboard is nicer than the keyboards on my existing laptops, but that's like saying bronchitis is better than pneumonia—technically that's true, but they're still bad. It looks like I'll have to break out another real keyboard from the stash.
The laptop was thinner than I expected, and the build feels solid. Lots of ports, so that's nice. The screen is nice, and the built-in camera has a sliding cover, so I don't have to spoil the sleek aesthetic with a tab of electrical tape.
The real downside for me is the software—Windows. I can hear the gales of laughter from my friend Gregory when he hears I have to suffer Windows. The last time I used Windows was … um … 1999? It's been a while, and not only do I have to get used to a nearly alien interface, but one that I have little control over.
Well, I have a bit of leeway—I was able to install Firefox so it isn't quite that bad. But there's a lot I can't do: external block storage devices are blocked outright, there are some websites I can't visit, and editing the Windows Registry is right out!
Not to mention the crap ton of anti-viral, anti-spam, anti-phishing, anti-development, corporate-friendly “productivity” software installed and running on the machine.
This is something I have never experienced. Until now, no computer I've used at a company has ever been this locked down, not even at IBM. It's going to take some adjustment to get used to it.
Meanwhile, I've been poking around on the system and—“End of Day Restart” …
“End of Day Restart?”
Seriously, Microsoft? You automated the daily reboot?
Wow … this is definitely going to take some time to get used to …
Thursday, August 29, 2019
Okay, so I wasn't really done with the profiling
Last week I was surprised to find this bit of Lua code as the hot spot in “Project: Sippy-Cup:”
local h16 = Cmt(HEXDIG^1,function(_,position,capture)
  local n = tonumber(capture,16)
  if n < 65536 then
    return position
  end
end)
Last night I realized that this code was stupid in this context. The code was originally based upon the original IP address parsing module, which converted the text into a binary representation of the address. So in that context, converting the text into a number made sense. When I made the text-only version, I did the minimum amount of work required and thus left in some sub-optimal code.
But here? I'm just looking at text, expecting up to four hex digits. A string of four hex digits will, by definition, always be less than 65,536. And LPeg has a way of specifying “up to four, but no more” of a pattern. It's just:
local h16 = HEXDIG^-4
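A quick sanity check of the “at most n” repetition, using my own stand-in for HEXDIG rather than the module's actual definition:

-- patt^-n matches at most n repetitions of patt (sketch only)
local lpeg   = require "lpeg"
local HEXDIG = lpeg.R("09","AF","af")   -- stand-in definition
local h16    = HEXDIG^-4                -- up to four hex digits, no more

print(h16:match "FFFF")    -- 5: consumed all four digits
print(h16:match "FFFFF")   -- 5: stops after four, leaving the fifth alone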
I made that change to the code and reprofiled “Project: Sippy-Cup.” It didn't change the results at the C level all that much (the LPEG C function match() is still third, as I do quite a bit of parsing, so that's expected), but the results in Lua are vastly different—it's now the code that verifies the incoming phone numbers that's the hot spot. That doesn't surprise me very much, as we do that twice per request (once for the caller, and once for the recipient), and it's not nearly as bad a hot spot as the above code was.
Monday, September 16, 2019
♫Sleigh bells ring, are you listening?♫
Bunny and I went to the Cracker Barrel for dinner, and in … wait a second! I'm getting a strong feeling of déjà vu at the moment …
Only this time it's mid-September and not late-July, and it wasn't just a few items, but half the store was given over to Christmas. The other half was themed with Thanksgiving in mind.
And on one lone table was the Hallowe'en display.
Sigh.
Tuesday, September 24, 2019
“We have both kinds of food—natural and organic!”
“Look! Customers! WE HAVE CUSTOMERS!”
“Um … this isn't a good sign, is it?”
Burganic is an all-organic fast food restaurant located in Boca Raton, Florida. Our flagship location is opening soon & will be offering organic burgers, salads, sodas, french fries, & more.
Burganic | Organic Burger Restaurant in Boca Raton
“So … um … what is mambo sauce?” I ask.
“That is our signature house sauce. It is very good!”
“Okay … I think I'll have the fastie burger, the fresh cut fries and an unsweetened ice tea.”
“Excellent. And you?”
“The same. Only with the lemonade.”
“Excellent.”
“I assume the lemonade is freshly made?”
“Yes. Lemons. Sugar. Water. Turmeric.”
“Turmeric?”
“It's for the color.”
“…”
“It is very good!”
“… okay?”
We prepare our salads with all-naturally grown, veggies and grass fed beef. Our beef is sourced from the best ranches in the country and free of hormones, steroids and antibiotics. So that statement actually is correct. Organic means nothing at all.
Burganic | Organic Burger Restaurant in Boca Raton
The burgers and fries arrive, with a few small containers of ketchup and one of the “mambo sauce.” I take a fry and dip it into the mambo sauce. I'm curious how good the “signature house sauce” is. I take a bite.
All I taste is the french fry.
I dip another fry into the sauce.
All I taste is the french fry.
Perhaps the aggressive taste of the fries is overpowering the sauce? I dip my finger into the sauce and taste it. I dip my finger again into the sauce and attempt to taste it.
I can't taste a thing.
There is no taste to the sauce.
I find it rather unsettling. It's there. I'm eating it. It's not bad! It's not good either. It's just there.
I then try the burger.
I then try the burger again.
This is amazing—there is no taste to the burger. It's not bad. It's not good. It's just there. Just like the mambo sauce.
The only thing that has any taste is the fries. Which, frankly, are okay.
The lemonade, however, was not.
We believe in the value of organic ingredients, without the need to sacrifice flavor — that's why everything we offer at Burganic is organic!
Burganic | Organic Burger Restaurant in Boca Raton
“This is a bad cover of Pink Floyd's ‘Money.’”
“No, that's the original version.”
“Are you sure?”
“Yup. See, even Shazam agrees … ”
“It sounds like a college band trying to sound like Pink Floyd.”
“The speakers are crappy.”
“Yeah … ”
“And I think it's being played a bit faster than normal.”
“Could be … ”
“With horrible compression.”
“I'm still not convinced that's Pink Floyd.”
All-Natural does not mean Organic, nor does it come with any guarantees. "All-Natural" foods can still have heavily processed ingredients, whereas Organic foods do not.
Burganic | Organic Burger Restaurant in Boca Raton
I don't think Bunny and I will be going back any time soon.
Monday, September 30, 2019
I finally decided to release my gopher server software
So when I originally wrote my gopher server back in February/March of 2017, it was a hack job to just more or less serve up my blog over gopher. Everything was hard coded into the codebase and making changes was annoying. So earlier this month I decided to start over and make a gopher server that someone else could potentially use. Another goal was to keep the site functioning as is.
The hardest part was naming the darned thing, and in the end, I decided upon the rather plain name of port70. I've been running it now for a few weeks and not only is it stable, but it's much easier to configure, modify and serve up content with.
Don't mind me, I'm just a gopher pretending to be a teapot
Almost two months ago I modified my gopher server to respond to HTTP requests with “418 I'm a teapot” and it appears to have worked! The gopher server is no longer receiving any HTTP requests.
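The gist of the hack is tiny: if the “selector” looks like an HTTP request line, answer in HTTP (a sketch of the idea, not port70's actual code):

-- reply to webbots that speak HTTP at a gopher port (sketch only)
local function teapot(request)
  if request:match "^%u+ %S+ HTTP/%d%.%d$" then
    return "HTTP/1.1 418 I'm a teapot\r\n"
        .. "Content-Length: 0\r\n"
        .. "Connection: close\r\n"
        .. "\r\n"
  end
  -- otherwise, fall through to normal gopher selector handling
end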
I'm also glad that the movement to remove the 418 response code failed. I don't find it useless; it was probably odd enough that the authors of the agents making the inappropriate requests were forced to look into the response and just skip my server entirely.
So yea!
An annoying aspect of the gopher protocol
In the nearly two years of running a gopher server, the most annoying aspect of the gopher protocol, in my opinion, is the inability to redirect client requests. It's painful to see the same request for /Phlog: over and over again due to an unwarranted assumption about how things are organized on my blog.
As I stated on the top page of my gopher server:
Welcome to Conman Laboratories
NOTE: RFC-1436 says this about selectors:
… an OPAQUE selector string … The selector string should MEAN NOTHING to the client software; it should never be modified by the client.
(emphasis added)
The selectors on this server *ARE OPAQUE* and *MUST* be sent *AS IS* to the server. Please note that the selectors here rarely start with a '/' character. Particularly, phlog entries start with a selector of "Phlog:"—note the lack of '/' and the ending ':'.
Thank you.
The Management
And yet—people assume I'm serving content up from a filesystem and therefore, a leading “/” is required.
Aaaaaaaaaaaaaaaaarg!
If only gopher had a way to redirect the request, but alas …
I mean, you can kind of work around it, but that leads to the second most annoying aspect of the gopher protocol, in my opinion—the document type is an inherent part of the request! The client is told beforehand the type of data it will be requesting, unlike an HTTP request, where the server tells the client the type of data being sent. Redirecting a gopher “directory” is easy—just serve up a “directory” type with the correct link. And while not ideal, redirecting a text file could also be done by sending a text file with the updated URL, but this doesn't help with automated clients (as I found out the hard way). And this won't work at all for any other non-text media types like images.
I suppose you could overload the “gopher error type” which has to be checked for anyway (one hopes) but again, that won't help with automated agents. Unless perhaps if the “gopher error type” was standardized a bit more, but good luck with that (although I could try it) …
At least I got the webbots to stop making requests …
Adding redirection to the gopher protocol
The primary gopher protocol specification is totally mum on the topic of errors. The word “error” only occurs once, and that's just to note that the gopher type “3” is an error. So given the lack of specification, I thought I might do an experiment and see if I can't introduce the concept of “redirection” to the gopher protocol. It can hardly be thought of as violating both the spirit and the letter of the spec if there's nothing in the spec to figuratively or literally violate.
Upon encountering some form of error, say, a nonexistent selector, a gopher server is supposed to return an “error selector” that looks something like:
3'/Phlog:' doesn't exist!HTHTerror.hostHT1CRLF
What I'm doing is giving some structure to the “error selector.” The text portion (the bit right after the “3” and before the first tab character) will be a fixed string giving the actual error. So for a nonexistent selector, my gopher server will return:
3Not foundHT/Phlog:HTgopher.conman.orgHT70CRLF
The text portion will always be “Not found,” with the nonexistent selector being returned along with the hostname and port. Now, for a redirection, it will return:
3Permanent redirectHTPhlog:HTgopher.conman.orgHT70CRLF
The text portion will always be “Permanent redirect” with the new selector being given, along with the host and port number. Doing this will allow me to even redirect a request to another gopher server. Well, as long as the gopher client understands the error text.
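On the client side, acting on these structured errors would amount to something like this (my own sketch, not code from any existing gopher client):

-- classify an error selector of the form: 3<text> TAB <selector> TAB <host> TAB <port>
local function classify(errorline)
  local text,selector,host,port =
        errorline:match "^3([^\t]*)\t([^\t]*)\t([^\t]*)\t(%d+)"
  if not text then
    return "error",errorline
  elseif text == "Permanent redirect" then
    return "redirect",selector,host,tonumber(port)
  elseif text == "Not found" then
    return "notfound",selector
  else
    return "error",text
  end
end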
Using literal text strings like this isn't ideal, but it doesn't break existing clients and does give enough information in case someone sees the error (and that they speak English—which is why this isn't ideal). Also, if the number of possible errors is kept small, then explicitly checking each string isn't that much of an issue.
I can only hope other gopher servers pick up on the idea and make gopher space a little bit less annoying to use.
Tuesday, October 01, 2019
It only took 35 years …
The first accurate portrayal of a black hole in Hollywood was in the 2014 movie “Interstellar,” with help from theoretical physicist Kip Thorne, and the images from that movie do appear to match reality. But I find it fascinating that astrophysicist Jean-Pierre Luminet generated an image of a black hole in April of 1979!
It's sad to think that Disney's “The Black Hole,” which came out in December of 1979, could have not only been the first Hollywood portrayal of a black hole (which it appears it was), but it could have been an accurate portrayal of a black hole. Ah well …
Wednesday, October 02, 2019
“Night of the Lepus” was based on a book‽
I'm going to lunch with a few cow-orkers and ST is driving. While driving, we're subject to his music listening choices, which tend towards movie and video game scores. As a joke, I mention that he's playing the score to “Night of the Lepus” and to my total surprise, no one else in the vehicle had ever heard of the movie.
So of course I start reading off the plot synopsis from Wikipedia and I'm amazed to learn that it's based on a book! “Night of the Lepus” was originally a book! I then switch to reading the plot synopsis of The Year of the Angry Rabbit and … it sounds amazing! An attempt to eradicate rabbits in Australia leads to world peace through an inadvertent doomsday weapon, with occasional outbreaks of killer rabbits.
Wow!
Why wasn't that movie made?
Friday, October 04, 2019
Back when I was a kid, all I had to worry about was the mass extinction of the human race due to global thermonuclear war
Bunny and I are out eating dinner at T. B. O. McFlynnagin's and out of the corner of my eye, on one of the ubiquitous televisions dotting the place, I saw what appeared to be a “back to school” type commercial, but one that turned … dark. I'm normally not one for trigger warnings, but this commercial, which did air because I saw it, is quite graphic. So … you have been warned!
It reminds me of the “Daisy” commercial, although it's hard to say which one is worse. Perhaps both of them are.
It's a stupid benchmark about compiling a million lines of code, what else did I expect?
I came across a claim that the V programming language can compile 1.2 million lines of code per second. Then I found out that the code was pretty much just 1,200,000 calls to println('hello world').
Still, I was interested in seeing how GCC would fare. So I coded up this:
#include <stdio.h>

int main(void)
{
  printf("Hello world!\n");
  /* 1,199,998 more calls to printf() */
  printf("Hello world!\n");
  return 0;
}
which ends up being 33M, and …
[spc]lucy:/tmp>time gcc h.c
gcc: Internal error: Segmentation fault (program cc1)
Please submit a full bug report.
See <URL:http://bugzilla.redhat.com/bugzilla> for instructions.

real    14m36.527s
user    0m40.282s
sys     0m17.497s
[spc]lucy:/tmp>
Fourteen minutes for GCC to figure out I didn't have enough memory on the 32-bit system to compile it (and the resulting core file exceeded physical memory by three times). I then tried on a 64-bit system with a bit more memory, and I fared a bit better:
[spc]saltmine:/tmp>time gcc h.c

real    7m37.555s
user    2m3.000s
sys     1m23.353s
[spc]saltmine:/tmp>
This time I got a 12M executable in 7½ minutes, which seems a bit long to me for such a simple (but large) program. I mean, Lua was able to compile an 83M script in 6 minutes, on the same 32-bit system as above, and that was considered a bug!
But I used GCC, which does some optimizations by default. Perhaps if I try no optimization?
[spc]saltmine:/tmp>time gcc -O0 h.c

real    7m6.939s
user    2m2.972s
sys     1m27.237s
[spc]saltmine:/tmp>
Wow. A whole 30 seconds faster. Way to go, GCC! Woot!
Saturday, October 05, 2019
More stupid benchmarks about compiling a million lines of code
I'm looking at the code GCC produced for the 32-bit system (I cut down the number of lines of code):
 804836b:       68 ac 8e 04 08          push   0x8048eac
 8048370:       e8 2b ff ff ff          call   80482a0 <puts@plt>
 8048375:       68 ac 8e 04 08          push   0x8048eac
 804837a:       e8 21 ff ff ff          call   80482a0 <puts@plt>
 804837f:       68 ac 8e 04 08          push   0x8048eac
 8048384:       e8 17 ff ff ff          call   80482a0 <puts@plt>
 8048389:       68 ac 8e 04 08          push   0x8048eac
 804838e:       e8 0d ff ff ff          call   80482a0 <puts@plt>
 8048393:       68 ac 8e 04 08          push   0x8048eac
 8048398:       e8 03 ff ff ff          call   80482a0 <puts@plt>
 804839d:       68 ac 8e 04 08          push   0x8048eac
 80483a2:       e8 f9 fe ff ff          call   80482a0 <puts@plt>
 80483a7:       68 ac 8e 04 08          push   0x8048eac
 80483ac:       e8 ef fe ff ff          call   80482a0 <puts@plt>
 80483b1:       68 ac 8e 04 08          push   0x8048eac
 80483b6:       e8 e5 fe ff ff          call   80482a0 <puts@plt>
 80483bb:       83 c4 20                add    esp,0x20
My initial thought was Why doesn't GCC just push the address once? but then I remembered that in C, function parameters can be modified. That led me down a slight rabbit hole of seeing if printf() (with my particular version of GCC) even changes the parameters. It turns out that no, they aren't changed (your mileage may vary though). So with that in mind, I wrote the following assembly code:
                bits    32
                global  main
                extern  printf

                section .rodata
msg:            db      'Hello, world!',10,0

                section .text
main:           push    msg
                call    printf
                ;; 1,199,998 more calls to printf
                call    printf
                pop     eax
                xor     eax,eax
                ret
Yes, I cheated a bit by not repeatedly pushing and popping the stack. But I was also interested in seeing how well nasm fares compiling 1.2 million lines of code. Not too badly, compared to GCC:
[spc]lucy:/tmp>time nasm -f elf32 -o pg.o pg.a

real    0m38.018s
user    0m37.821s
sys     0m0.199s
[spc]lucy:/tmp>
I don't even need to generate a 17M assembly file though—nasm can do the repetition for me:
                bits    32
                global  main
                extern  printf

                section .rodata
msg:            db      'Hello, world!',10,0

                section .text
main:           push    msg
        %rep 1200000
                call    printf
        %endrep
                pop     eax
                xor     eax,eax
                ret
It can skip reading 16,799,971 bytes and assemble the entire thing in 25 seconds:
[spc]lucy:/tmp>time nasm -f elf32 -o pf.o pf.a

real    0m24.830s
user    0m24.677s
sys     0m0.144s
[spc]lucy:/tmp>
Nice. But then I was curious about Lua. So I generated 1.2 million lines of Lua:
print("Hello, world!")
-- 1,199,998 more calls to print()
print("Hello, world!")
And timed how long it took Lua to load (but not run) the 1.2 million lines of code:
[spc]lucy:/tmp>time lua zz.lua
function: 0x9c36838

real    0m1.666s
user    0m1.614s
sys     0m0.053s
[spc]lucy:/tmp>
Sweet!
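My guess at what the zz.lua wrapper looks like (the entry doesn't show it): loadfile() compiles a chunk without running it, which accounts for the “function: 0x…” line above. The file name below is just a placeholder.

-- zz.lua (a guess; "big.lua" stands in for the generated 1.2M-line file)
print(loadfile "big.lua")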
Monday, October 07, 2019
I was working harder, not smarter
Another department at the Ft. Lauderdale Office of the Corporation is refactoring their code. Normally this wouldn't affect other groups, but this particular code requires some executables we produce, and to make it easier to install, we, or rather, I, needed to create a new repository with just these executables.
Easier said than done.
There's about a dozen small utilities, each a single C file, but unfortunately, to get the banana (the single C file) you also need the 800 pound gorilla (its dependencies). Also, these executables are spread through most of our projects—there's a few for “Project: Wolowizard” (which is also used for “Project: Sippy-Cup”), multiple ones for “Project: Lumbergh,” a few for “Project: Cleese” and … oh, I never even talked about this other project, so let's just call it “Project: Clean-Socks.”
Uhg.
So that's how I spent my time last week, working on “Project: Seymore,” rewriting a dozen small utilities to remove the 800 pounds of gorilla normally required to compile these tools. All these utilities do is transform data from format A to format B. The critical ones take a text file of lines, usually in the form of “A = B,” but there was one that took over a day to complete because of the input format:
A = B:foo,bar,... name1="value" name2="value" ...
A = B:none
Oh, writing parsing code in C is so much fun! And as I was stuck writing this, I kept thinking just how much easier this would be with LPEG. But alas, I wanted to keep the dependencies to a minimum, so it was just grind, grind, grind until it was done.
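For comparison, here's a rough sketch of what that input format might look like in LPEG (my own guess at a grammar; the field names are made up and this is not the actual utility's code):

-- rough LPEG sketch of the "A = B:foo,bar,... name1="value" ..." format
local lpeg = require "lpeg"
local P,S,R,C,Ct,Cg = lpeg.P,lpeg.S,lpeg.R,lpeg.C,lpeg.Ct,lpeg.Cg

local WS    = S" \t"^0
local name  = C((R("AZ","az","09") + S"_-.")^1)
local value = P'"' * C((P(1) - P'"')^0) * P'"'
local flags = Ct(name * (P"," * name)^0)              -- "foo,bar,…" or just "none"
local attr  = Ct(Cg(name,"name") * P"=" * Cg(value,"value"))
local line  = Ct(
                  Cg(name,"lhs")   * WS * P"=" * WS
                * Cg(name,"rhs")   * P":"
                * Cg(flags,"flags")
                * Cg(Ct((WS * attr)^0),"attrs")
              )

-- example: lpeg.match(line,[[A = B:foo,bar name1="value" name2="value"]])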
Then today, I found that I had installed peg/leg, the recursive-descent parser generator for C, on my work machine eight years ago.
Eight years ago!
Head, meet desk.
Including the time to upgrade peg/leg, rewriting the utility that originally took me nearly two days only took two hours (most of the code among most of the utilities is the same—check options, open files, sort the data, remove duplicates, write the data; it's only the portion that reads and converts the data that differs). It's also shorter, and I think easier to modify.
So memo to self: before diving into a project, check to see if I already have the right tools installed.
Sigh.
Tool selection
So if I needed to parse data in C, why did I not use lex? It's pretty much standard on all Unix systems, right? Yes, but all it does is lexical analysis. The job of parsing requires the use of yacc. So why didn't I use yacc? Because it doesn't do lexical analysis. If I use lex, I also need to use yacc. Why use two tools when one will suffice?
They are also both a pain to use, so it's not like I immediately think to use them (that, and the last time I used lex in anger was over twenty years ago …)
Sunday, October 13, 2019
How many redirects does your browser follow?
An observation on the Gemini mailing list led me down a very small rabbit hole. I recalled at one time that a web browser was only supposed to follow five consecutive redirects, and sure enough, in RFC-2068:
10.3 Redirection 3xx
This class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request. The action required MAY be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A user agent SHOULD NOT automatically redirect a request more than 5 times, since such redirections usually indicate an infinite loop.
Hypertext Transfer Protocol -- HTTP/1.1
But that's an old standard from 1997. In fact, the next revision, RFC-2616, updated this section:
10.3 Redirection 3xx
This class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request. The action required MAY be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A client SHOULD detect infinite redirection loops, since such loops generate network traffic for each redirection.
Note: previous versions of this specification recommended a maximum of five redirections. Content developers should be aware that there might be clients that implement such a fixed limitation.
Hypertext Transfer Protocol -- HTTP/1.1
And subsequent updates have kept that language. So it appears that clients SHOULD NOT (using language from RFC-2119) limit themselves to just five redirections, but SHOULD still detect loops. It seems like this was changed due to market pressure from various companies, and I think the practical limit has gone up over the years.
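Just to make the RFC language concrete, here's roughly what “limit to five and detect loops” looks like in a client (a sketch of my own; http_get() is a hypothetical helper returning the status code, the Location header and the body):

local MAX_REDIRECTS = 5                  -- the old RFC-2068 recommendation

local function fetch(url)
  local seen = {}
  for _ = 1 , MAX_REDIRECTS + 1 do
    if seen[url] then
      return nil,"redirect loop at " .. url
    end
    seen[url] = true
    local status,location,body = http_get(url)   -- hypothetical helper
    if status < 300 or status > 399 then
      return body,status
    end
    url = location
  end
  return nil,"too many redirects"
end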
I know the browser I use, Firefox, is highly configurable and decided to see if its configuration included a way to limit redirections. And lo', it does! The option network.http.redirection-limit exists, and the current default value is “20”. I'm curious to see what happens if I set that to “5”. I wonder how many sites will break?
Thursday, October 17, 2019
You know, we might as well just run every network service over HTTPS/2 and build another six layers on top of that to appease the OSI 7-layer burrito guys
I've seen the writing on the wall, and while for now you can configure Firefox not to use DoH, I'm not confident enough to think it will remain that way. To that end, I've finally set up my own DoH server for use at Chez Boca. It only involved setting up my own CA to generate the appropriate certificates, install my CA certificate into Firefox, configure Apache to run over HTTP/2 (THANK YOU SO VERY XXXXXXX MUCH GOOGLE FOR SHOVING THIS HTTP/2 XXXXXXXX DOWN OUR THROATS!—no, I'm not bitter) and write a 150 line script that just queries my own local DNS, because, you know, it's more XXXXXXX secure or some XXXXXXXX reason like that.
Sigh.
And then I had to reconfigure Firefox using the “advanced configuration page” to tweak the following:
variable | value |
---|---|
network.trr.allow-rfc1918 | true |
network.trr.blacklist-duration | 0 |
network.trr.bootstrapAddress | 192.168.1.10 |
network.trr.confirmationNS | skip |
network.trr.custom_uri | https://playground.local/cgi-bin/dns.cgi |
network.trr.excluded-domains | |
network.trr.max-fails | 15 |
network.trr.mode | 3 |
network.trr.request-timeout | 3000 |
network.trr.resolvers | 192.168.1.10 |
network.trr.uri | https://playground.local/cgi-bin/dns.cgi |
I set network.trr.mode to “3” instead of “2” because it's coming. I know it's just coming, so I might as well get ahead of the curve.
Friday, October 18, 2019
A minor issue with DoH
So far, the DoH server I wrote works fine (looking over the logs, it's amazing just how many queries mainstream sites make—CNN's main page made requests to over 260 other sites, and that's after I restricted the number of redirects allowed) except for Github. The browser would claim it couldn't find Github (although the logs said otherwise), or the page formatting was broken because the browser couldn't locate various other servers (which, again, the logs said otherwise).
So I dived in to figure out the issue. It turns out the DNS replies were just a tad bit larger than expected. The Lua wrapper I wrote for my DNS library used the RFC-mandated limit of 512 bytes for the message size, which these days is proving to be a bit small (that particular RFC was written in 1987). The fix was trivial (increase the packet size) after the hour of investigation.
Tuesday, October 29, 2019
I thought computers exist to appease us, not for us to appease the computers
I got an email from the Corporation's Corporate Overlords' IT department about making sure my Windows laptop was on and logged into the Corporate Overlords' VPN so that mumble techspeak mumble technobabble blah blah whatever. Even if it's a phishing email that our Corporate Overlords so love to send us, it didn't have any links to click or ask for my credentials. I also need to turn on the computer at least once every three weeks to prevent the Corporate Overlords from thinking it's been stolen, so I figured it wouldn't hurt to turn it on, log in and reply to the email.
I turn it on, but I find I'm unable to connect to the Corporation's wifi network, which I need to do if I'm to log onto the Corporate Overlords' VPN. After many minutes of futzing around, I end up telling Windows to forget the Corporation's wifi network, re-select it from a list and re-enter my credentials (which have changed since the last time I logged in, due to the outdated password practices still in use at The Corporation). Then I could log into the Corporate Overlords' VPN and reply to the email saying “go ahead mumble technospeak mumble technobabble blah blah whatever.”
Of course, the Corporation's “change your password” period (which was triggered last week) is different from that of the Corporation's Corporate Overlords' “change your password” period (which was triggered today), so there was that nonsense to deal with.
Over the course of the next few hours, I had to restart the Windows laptop no less than five times to appease the Microsoft Gods, and twice more I had to tell the computer to forget the Corporation's wifi network before it got the hint and remembered my credentials.
Seriously, people actually use Windows? I'm lucky in that I had the MacBook to keep working.
We are all publishers now
[This gopher site] has had, from very early days, a policy which allows [users] to request that their account be removed and all their content immediately and permanently deleted. This is called "claiming your civil right", … The Orientation Guide explains:
This promise is not a gimmick … It is a recognition that the ability to delete your accounts from online services is an important part of self ownership of your digital identity. This is genuinely an important freedom and one which many modern online services do not offer, or deliberately make very difficult to access.
I have always been, and still am, proud that [this gopher server] offers this right so explicitly and unconditionally, and I have no plans to change it. I really think this an important thing.
And yet, it always breaks my heart a little when somebody actually claims their right, and it's especially tough when a large amount of high-quality gopherspace content disappears with them. As several people phlogged about noticing, kvothe recently chose to leave gopherspace, taking with him his wonderful, long-running and Bongusta-aggregated phlog "The Dialtone" … I loved having kvothe as part of our community, but of course fully respect his right to move on.
As I deleted his home directory, I thought to myself "Man, I wish there was an archive.org equivalent for Gopherspace, so that this great phlog wasn't lost forever". A minute later I thought "Wait… that is totally inconsistent with the entire civil right philosophy!". Ever since, I've been trying to reconcile these conflicting feelings and figure out what I actually believe.
The individual archivist, and ghosts of Gophers past
Some of the commentary on solderpunk's piece has shown, of course, divided opinion. There are those who claim that all statements made in public are, res ipsa loquitor, statements which become the property of the public. This claim is as nonsensical as it is legally ridiculous.
By making a statement in a public place, I do not pass ownership of the content I have "performed" to anyone else, I retain that ownership, it is mine, noone elses. I may have chosen to permit a certain group of people to read it, or hear it; I may have restricted that audience in a number of ways, be it my followers on social media, or the small but highly-regarded phlog audience; I may have structured my comments to that audience, such as using jargon on a mailing list which, when quoted out of context, can appear to mean something quite different; I may just have posted a stupid or ill-judged photo to my friends.
In each of those cases, it is specious to claim that I have given ownership of my posts to the public, forever, without hope of retrieval. It is not the case that I have surrendered my right to privacy, forever, to all 7.7bn inhabitants of this earth.
…
In much the same way, I reacted strongly when I realised that posts I had made on my phlog were appearing on google thanks to that site's indexing of gopher portals. I did not ever consent to content I made available over port 70 becoming the property of rapacious capitalists.
Ephemera, or the Consciousness of Forgetting
Back during college I wrote a humor column for the university newspaper. In one of my early columns (and not one I have on my site here) I made disparaging remarks about a few English teachers from high school. Even worse, I named names!
I never expected said English teachers to ever hear about the column, but of course they did. My old high school was only 10 miles away (as the crow flies) and there were plenty of students at FAU who had attended the same high school I did. Of course I should have expected that. But alas, I was a stupid 18 year old who didn't know better.
Now I know better.
It was a painful experience to learn, but things spoken (or written) can move in mysterious ways and reach an audience that it was not intended for.
The copies of the humor column I have on my site are only a portion of the columns I wrote, the ones I consider “decent or better.” The rest range from “meh” to “God I wish I could burn them into non-existence.” But alas, they exist, and I've even given a link to the paper archives where they can be unceremoniously resurrected and thrown back into my face. Any attempt to “burn them into non-existence” on my part would be at best a misdemeanor and at worst a felony.
In this same vein, Austin McConnell erased his book from the Internet. He managed to take the book out of print and buy up all existing copies from Amazon. There are still copies of his book out there in the hands of customers, and there's nothing he can do about that. The point being, once something is “out there” it's out, and the creator has limited control over what happens.
I'm not trying to victim shame Daniel Goldsmith. What I am trying to say is that Daniel may have an optimistic view of consumption of content.
As to his assertion that his content via gopher is now “the property of rapacious capitalists”—plainly false. Ireland (where Daniel resides) and the United States (where Google primarily resides) are both signatories to the “Berne Convention for the Protection of Literary and Artistic Works,” which protects the rights of authors, and Daniel owns the copyright to his works, not Google. Daniel may not have wanted Google to index his gopher site, but Google did nothing wrong in accessing the site, and Google has certainly never claimed ownership of such data (and if it did, then Daniel should be part of a very long line of litigants). Are there things he can do? Yes, he could have a /robots.txt file that Google honors (The Internet Archive also honors it, but at best it's advisory and not at all mandatory—other crawlers might not honor it) or he can block IP addresses. But sadly, it was inevitable once a web-to-gopher proxy was available.
The issue at heart is that everyone is a publisher these days, but not everyone realizes that fact. Many also believe social media sites like MyFaceMeLinkedSpaceBookWeIn will keep “private” things private. The social media sites may even believe their own hype, but accidents and hacks still happen. You can block someone, but that someone has friends who are also your friends. Things spoken or written can move in mysterious ways.
I feel I was fortunate to have experienced the Internet in the early 90s, before it commercialized. Back then, every computer was a peer on the Internet—all IP addresses were public, and anything you put out on the Internet was, for all intended purposes, public! There's nothing quite like finding yourself logged into your own computer from Russia (thanks to a 10Base2 network on the same floor as a non-computer science department with a Unix machine sans a root password). Because of that, I treat everything I put on the Internet as public (but note that I am not giving up my rights to what I say). If I don't want it known, I don't put it on the Internet.
Daniel goes on to state:
The content creator, after all, is the only person who has the right to make that decision, they are the only one who knows the audience they are willing to share something with, and the only ones who are the arbiter of that.
Ephemera, or the Consciousness of Forgetting
To me, that sounds like what Daniel really wants is DRM, which is a controversial issue on the Internet. Bits have no color, but that still doesn't keep people from trying to colorize the bits, and others mentioning that bits have no color and doing what they will with the bits in question. It's not an easy problem, nor is it just a technical problem.
You put content on the Internet. You are now a publisher with a world wide audience.
Wednesday, October 30, 2019
So this is what it feels like on the other side
I subscribed to a mailing list today. I had to wait until the validation email passed through the greylist daemon on my system, but once that happened, I could start replying to the list.
Only the first post I made didn't go through. There was no error reported. There was no bounce message. Nothing. I checked to make sure I was using the address I signed up with (I did) and the filters on my email program were correct (they were).
I then checked the logs and behold:
Oct 30 19:07:41 brevard postfix/smtp: 023E22EA679B: to=<XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX>, relay=XXXXXXXXXXXXXXXXX[XXXXXXXXXXXXX], delay=4, status=deferred (host XXXXXXXXXXXXXXXXX[XXXXXXXXXXXXX] said: 450 4.2.0 <XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX>: Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/XXXXXXXXXXXXXXXXXXXXX (in reply to RCPT TO command))
Ha! I'm being greylisted right back! This is the first time I've noticed my outgoing email being greylisted. I find this amusing.
Thursday, October 31, 2019
In theory, it should work the same on the testing server as well as the production server
I haven't mentioned the other server I wrote, GLV-1.12556. I wrote it a few months ago mainly as a means to test out the Lua TLS wrapper I wrote, because otherwise the wrapper is just an intellectual exercise. It implements the Gemini protocol, which lies somewhere between gopher and HTTP.
One issue keeps rearing its ugly head—files larger than some size just aren't transferred. It just causes an error that I haven't been able to figure out. The first time this happened, several months ago, I hacked at the code and thought I got it working. Alas, it's happening again. I received word today of it failing to send files beyond a certain size, and yes, I can reproduce it.
But here's the kicker—I can reproduce it on my live server but I can't reproduce it locally. It only seems to happen across the Internet. So any testing now has to happen “live” (as it were) on the “production server” (grrrrrr). I fixed some possible issues, maybe, like this bit of code:
ios._drain = function(self,data)
  local bytes = self.__ctx:write(data)

  if bytes == tls.ERROR then
    -- --------------------------------------------------------------------
    -- I was receiving "Resource temporarily unavailable" and trying again,
    -- but that strategy fails upon failure to read a certificate.  So now
    -- I'm back to returning an error.  Let's hope this works this time.
    -- --------------------------------------------------------------------
    return false,self.__ctx:error()
  elseif bytes == tls.WANT_INPUT or bytes == tls.WANT_OUTPUT then
    self.__resume = true
    coroutine.yield()
    return self:_drain(data)
  elseif bytes < #data then
    nfl.SOCKETS:update(conn,"w")
    self.__resume = true
    coroutine.yield()
    return self:_drain(data:sub(bytes+1,-1))
  end

  return true
end
Upon looking over this code, I rethought the logic of dealing with tls.WANT_INPUT (the TLS layer needs the underlying socket descriptor to be readable) or tls.WANT_OUTPUT (the TLS layer needs the underlying socket descriptor to be writable) with the same bit of code, and rewrote it thusly:
ios._drain = function(self,data)
  local bytes = self.__ctx:write(data)

  if bytes == tls.ERROR then
    -- --------------------------------------------------------------------
    -- I was receiving "Resource temporarily unavailable" and trying again,
    -- but that strategy fails upon failure to read a certificate.  So now
    -- I'm back to returning an error.  Let's hope this works this time.
    -- --------------------------------------------------------------------
    return false,self.__ctx:error()
  elseif bytes == tls.WANT_INPUT then
    self.__resume = true
    coroutine.yield()
    return self:_drain(data)
  elseif bytes == tls.WANT_OUTPUT then
    nfl.SOCKETS:update(conn,"rw")
    self.__resume = true
    coroutine.yield()
    return self:_drain(data)
  elseif bytes < #data then
    nfl.SOCKETS:update(conn,"rw")
    self.__resume = true
    coroutine.yield()
    return self:_drain(data:sub(bytes+1,-1))
  end

  return true
end
Now, upon receiving a tls.WANT_OUTPUT, it updates the events on the underlying socket descriptor from “read ready” (which is always true) to “read and write ready.” But even that didn't fix the issue.
I then spent the time trying to determine the threshold, creating files of various sizes until I got two that differed by just one byte. Any file that is 11,466 bytes or less will get served up. Any file that is 11,467 bytes or more, and the connection is closed with the error “Resource temporarily unavailable.” I have yet to figure out the cause of that. Weird.
Where, indeed
Bunny and I are out, having a late dinner this Hallowe'en when I notice a woman walking in dressed as Carmen Sandiego. I never did find her husband, Waldo. Go figure.
Friday, November 01, 2019
November is already upon us and that can mean only three things
Thanksgiving is in the air. It's time for National Novel Writing Month. And it's time for National Novel Generation Month. I was dreading this.
I thought I had no ideas for NaNoGenMo this year, but I checked my NaNoGenMo ideas folder and oh look! I do have some notes for 2019. Oh. It only has one line in it: “Translate a book into Toki Pona.”
Well. There it is. Translate a book into Toki Pona (which literally translated means “talk good”).
It should be simple, right? Toki Pona only has at most 120 words. How hard can that be?
Let's take a look at some Toki Pona:
mama pi mi mute o, sina lon sewi kon.
nimi sina li sewi.
ma sina o kama.
jan o pali e wile sina lon sewi kon en lon ma.
o pana e moku pi tenpo suno ni tawa mi mute.
o weka e pali ike mi. sama la mi weka e pali ike pi jan ante.
o lawa ala e mi tawa ike.
o lawa e mi tan ike.
tenpo ali la sina jo e ma e wawa e pona.
Amen.
That happens to be the Lord's Prayer, which appears twice in the Bible (Matthew 6:9-13 and Luke 11:2-4). Let's translate it back and see what I might be in for.
What follows will be:
- original line in Toki Pona
- literal translation into English
- Matthew 6:9-13
- Luke 11:2-4
So without further ado …
mama pi mi mute o, sina lon sewi kon.
parent of many [command], you at high air.
Our Father which art in heaven, (Ma 6:9)
Our Father which art in heaven, (Lk 11:2)

nimi sina li sewi.
name you [predicate] high.
Hallowed be thy name. (Ma 6:9)
Hallowed be thy name. (Lk 11:2)

ma sina o kama.
land you [command] come.
Thy kingdom come, (Ma 6:10)
Thy kingdom come, (Lk 11:2)

jan o pali e wile sina lon sewi kon en lon ma.
person [command] do [object] want you at high air [and] at land.
Thy will be done in earth, as it is in heaven. (Ma 6:10)
Thy will be done, as in heaven, so in earth. (Lk 11:2)

o pana e moku pi tenpo suno ni tawa mi mute.
[command] give [object] eat of time sun this to me many.
Give us this day our daily bread. (Ma 6:11)
Give us day by day our daily bread. (Lk 11:3)

o weka e pali ike mi.
[command] away [object] do bad me.
And forgive us our debts, (Ma 6:12)
And forgive us our sins; (Lk 11:4)

sama la mi weka e pali ike pi jan ante.
same [context] me away [object] do person different.
as we forgive our debtors. (Ma 6:12)
for we also forgive every one that is indebted to us. (Lk 11:4)

o lawa ala e mi tawa ike.
[command] head no [object] me to bad.
And lead us not into temptation, (Ma 6:13)
And lead us not into temptation, (Lk 11:4)

o lawa e mi tan ike.
[command] head [object] me from bad.
but deliver us from evil: (Ma 6:13)
but deliver us from evil. (Lk 11:4)

tenpo ali la sina jo e ma e wawa e pona.
time all [context] you have land [object] strong [object] good.
For thine is the kingdom, and the power, and the glory, for ever. (Ma 6:13)
(not in Luke 11)

Amen.
Amen.
Amen. (Ma 6:12)
(not in Luke 11)
Um … okay … perhaps I better come up with a better idea.
The 5,000 translations of “mama pi mi mute”
I found a much better dictionary for Toki Pona than the one I was using. This dictionary even includes the parts of speech, which could prove useful if I decide to generate a grammatically correct novel of 50,000 Toki Pona words for National Novel Generation Month.
As I was wrangling the new dictionary into a machine-usable format, it struck me that I could just generate a series of translations of the Lord's Prayer (since I have a copy of it in Toki Pona) by using the different meanings of each word. For example, the first word in the prayer, mama, has the following meanings:
- parent
- ancestor
- creator
- originator
- caretaker
- sustainer
Since my initial translation was quite limited, I set about just translating the opening line, “mama pi mi mute o, sina lon sewi kon,” using the new dictionary, and got the following:
- mama
  - NOUN parent, ancestor; creator, originator; caretaker, sustainer
- pi
  - PARTICLE of
- mi
  - NOUN I, me, we, us
- mute
  - ADJECTIVE many, a lot, more, much, several, very
  - NOUN quantity
- o
  - PARTICLE hey! O! (vocative or imperative)
- sina
  - NOUN you
- lon
  - PREPOSITION located at, present at, real, true, existing
- sewi
  - NOUN area above, highest part, something elevated
  - ADJECTIVE awe-inspiring, divine, sacred, supernatural
- kon
  - NOUN air, breath; essence, spirit; hidden reality, unseen agent
A more “literary” literal translation would probably be “Creator of we many, O! You existing divine air.” Or as a form of poetic English, ”Creator of us, residing in the divine air.” Pretty cool stuff. And as it turns out, there're enough variations in just the opening line to create enough translations to fulfill the 50,000 word requirement. I could certainly stop here and claim success, but I may just end up playing around with this a bit more.
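The generation itself is nothing more than a cartesian product over each word's meanings. A sketch (the dictionary table here is a tiny made-up subset, not the dictionary I'm actually using):

-- generate translation variants by combining each word's meanings
local meanings =
{
  mama = { "parent" , "ancestor" , "creator" , "originator" },
  pi   = { "of" },
  mi   = { "I" , "me" , "we" , "us" },
  mute = { "many" , "a lot" , "several" },
}

local function variants(words,i,acc,out)
  i   = i   or 1
  acc = acc or {}
  out = out or {}
  if i > #words then
    out[#out+1] = table.concat(acc," ")
    return out
  end
  for _,meaning in ipairs(meanings[words[i]]) do
    acc[i] = meaning
    variants(words,i+1,acc,out)
  end
  return out
end

for _,translation in ipairs(variants { "mama" , "pi" , "mi" , "mute" }) do
  print(translation)
end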
Sunday, November 10, 2019
Notes on an overheard conversation while eating dinner at the International House of Pancakes
“So do you know anything about this elf thing?”
“No, I never encountered anything like that growing up.”
“Me neither.”
“Oh, so it's not just me then.”
“Let's see … oh! There's a Wikipedia article about it.”
“What does it say?”
“It's based upon a book written in 2005 …”
“So it's after both our times.”
“Yup. It says, ‘The book tells a Christmas-themed story, written in rhyme, that explains how Santa Claus knows who is naughty and nice. It describes elves visiting children from Thanksgiving to Christmas Eve, after which they return to the North Pole until the next holiday season.’”
“So the elves spy on kids.”
“Yeah, it indoctrinates them into the 24-hour surveillance society.”
Friday, November 22, 2019
Memorialized in silicon
My Dad died last week, so Bunny and I have been rushing around making arrangements and going through his papers. I've been going through his computer, an Acer Chromebook I got him nearly two years ago to replace his dead laptop, and I was surprised at the number of accounts he has:
- Google email account
- a LinkedIn account,
- match.com (!),
- Twitter,
- Vimeo and most surprising,
- Facebook!
And it's not surprising to me that his Facebook account is under an assumed name, but the name, “Les Hansel,” has no meaning to me. Why that name?
What's scary about the Facebook account is that he has no “friends,” no profile, nothing much at all except a bunch of “friend recommendations” that are scarily accurate. Looking over the list, I see his sisters, their husbands, a few nieces and nephews and finally myself (and annoyingly low in the list—come on Facebook! Why am I not higher in the list?), plus a whole list of people that show up in his physical Rolodex file of contacts.
Thinking this over, I can only conclude he might have allowed Facebook access to his contact list, or more likely, the majority of his “friend recommendations” allowed Facebook access to their contact lists, and Dad's AOL email address was among the lists. The rest is filled in by Facebook's “a friend of a friend must also be a friend” type associations.
His Twitter account has his real name (oh really?) but it seems the only person he's following there is … Kathy Griffin? Seriously? Is he trying to confuse his enemies? Because he's confusing me.
It also appears he didn't use his LinkedIn account all that much, given how many pop-ups I'm getting urging me to update this, check out that, and whatnot.
I am not going to check out his match.com account.
Not going to do it.
And I'm having trouble getting into his Google email account (he has like three different passwords listed for Google and the account recovery mechanism tends to fail), but given the activity on his other web accounts, I doubt he used it much.
Here a club, there a club, everywhere a golf club
Bunny and I went to Dad's storage unit for the first time since his death to deal with its contents. We're both a bit flummoxed about what to do with his golf club collection.
If anyone wants golf clubs, or a golf bag, let us know. We have plenty to go around.
Meanwhile, we picked out several boxes full of papers. Then there's Dad's car: with an expired California registration and an expired California license plate (he was only here two years, you know—you don't want to rush these things), that will be … interesting … to deal with.
Sigh.
Saturday, November 23, 2019
If only these were Certificates of Deposit
Another day, another trip to Dad's storage unit. I thought my Dad had a ton of golf clubs, but that's nothing compared to his collection of CDs. We took home at least 13 boxes of CDs and eight bins of books.
Bunny and I were of two minds about dealing with all the CDs and books. After loading both our vehicles, I was just ready to drive to the nearest used CD shop and probably double their inventory. Bunny thought we should at least look through them first. I initially overrode her idea and called two different shops. The first one no longer buys CDs, and the second one told me to call back after the holidays when he needed to restock.
Sigh.
So now we have boxes and boxes of CDs at home that we're going through. If you need any New Age East Asian meditation neurohacking music CDs, I'm your connection.
I'm now apparently the source for New Age meditation neurohacking books as well
We're slowly going through the boxes and boxes of books and music. “Get a load of this book before I drop it in the garbage,” said Bunny, dropping an 8″ × 10″ × 1½″ book on my lap.
I glanced at the title. “The Neo-Tech Discovery (Zonpower)—huh.” I then started flipping through the book. Multiple fonts, handwritten script, wide margins. It just looked crazy! “Let's see … ‘Commercial, non-aging I-ness immortality is achievable within our lifetime. But that achievement depends on collapsing the 2000-year hoax of mysticism and eliminating all its symbiotic neocheaters. The Neo-Tech Research and Writing Center is already undermining the hoax of mysticism worldwide and will forever cure that disease of death without anyone's support, without asking anyone to donate time or money, and without permission or control by anyone.’”
“See, it's garbage.”
“Hold on, let me check something … ” I turned to the computer and went to Amazon. “Wow! Amazon is selling it for $200!”
“What?”
“See?”
“I don't believe it!”
“Trash indeed!”
Through glasses large
Not only did Dad keep every golf club, CD and book he ever owned, but also every pair of glasses, numbering at least a dozen.
For a guy who wanted to live a minimal lifestyle, he sure kept a lot of stuff.
Also, our garage now smells of stale cigarette smoke. Blech.
Sunday, November 24, 2019
The major reason why Intel needs to twist the tail of a Pentium so it will go 50 MIPS is because 45 of them are LOAD and STORE.
Via Lobsters comes this brief history of the x86 architecture and the 1,000 ways to move data between registers, and by 1,000 ways, it really is 1,000 ways. Mind boggling. These days, the x86 has exceeded any known definition of CISC and is now into LICISC territory.
It's also interesting to see just how far back the x86 line goes—it pretty much starts with the Intel 4-bit 4004 back in 1971. So it seems we're running a 64-bit extension over a 32-bit extension over a 16-bit redesign of an 8-bit computer based on a 4-bit computer from a 2-bit company with just one CPU line. Nice.
Friday, November 29, 2019
The Great Car Caper
We're still working out my Dad's estate and today Bunny and I decided to get the car from his place to Chez Boca. This was complicated because:
- his car was never registered in Florida;
- it still has a California plate;
- said plate has been expired for over two years;
- which means the registration has been expired for over two years;
- and we still haven't found the title to the car.
Fun times.
We worked out that Bunny would drive us to Dad's place, and I would drive back with the car, with Bunny following closely behind to obscure the expired California tag on the car. Even so, I was expecting the following scenario to play out:
- Sean
- Hello, Officer!
- Officer
- Driver's license, registration and proof of insurance, please.
- Sean
- Certainly. Here's my driver's license. Here's my proof of insurance. The car is not mine, but it did belong to my dad—here's his driver's license. He recently died, here's his death certificate, and my birth certificate to show that I'm his son. Also, I can't locate the registration to this car, which is expired, but I did find the bill of sale …
- Officer
- Sir, step out of the car.
- Sean
- Officer?
- Officer
- Sir, I have to ask you to step out of the car.
- Sean
- Oh bother …
Instead, it went more like this …
- Sean
- What the? He has an alarm?
- Bunny
- Hit the button on the fob! This one!
- Sean
- Thanks. Oh great! The interior lights don't work.
- Bunny
- I hope the battery is still good.
- Sean
- It started!
- Bunny
- Good!
- Sean
- But it's out of gas!
- Bunny
- We should have gone back for the gas can! I can't believe we forgot it! There's a gas station just around the corner.
- Sean
- Okay. Um … we best hurry, I'm not sure how much gas I have left.
- Bunny
- Okay.
- Sean
- Okay Bunny … why are you standing around? Get in the car … in the car … okay good! She's backing up … a bit more … a bit … oh, she's going first, okay. Let me back up and … um … Bunny? Go forward … forward … are you … are you turning around? Why are you turning around? I'm going to go that way … why? Aaaaah! Okay, got around her. I can drive to the gas station.
- Sean
- Oh, I don't know what side of the car the gas cap is on. Okay, I think that gas pump symbol means it's on the passenger side. Let me pull up, stop the car and … no. It's on the other side. What? I've set off the alarm again? Sigh. Let me navigate around …
- Sean
- Okay, how do I open the fuel filler door? No visible latch. Pushing on the door doesn't open it. Oh, don't tell me there's a switch inside the cabin. Sigh. Okay, where's the latch? Oh, there. And it's not working. Oh, don't tell me the car has to be on for it to work? And it's still not working! Hey! What's with this seat? I'm being crushed by the seat! What the—
- Sean
- Okay, maybe there's a latch in the trunk to open the fuel filler door … okay, the trunk button on the fob isn't working … let me try the key … let me try the other key … let me try … oh yes, none of the buttons in the cabin will open anything. Okay … maybe I can borrow a crow bar …
- Bunny
- I didn't know there was a gas station here!
- Sean
- This is the gas station around the corner, right? And why did you turn around back at the apartment?
- Bunny
- No, there's one over there! And because that gas station was that direction. Have you filled up already?
- Sean
- Oh XXXX no. Try opening the fuel filler door.
- Bunny
- Um … okay is there something in the cabin? Oh, here … is it open?
- Sean
- Nope.
- Bunny
- Did you try opening the trunk?
- Sean
- Couldn't open it.
- Bunny
- Let me try. Oh, I guess you loosened it for me.
- Sean
- Oh, that's what I'm looking for—the “fuel filler door release emergency handle!” Good Lord!
- Bunny
- You good now?
- Sean
- Yes. Let me fill up and we can get going.
- Bunny
- Also, your headlights weren't on.
- Sean
- They were on!
- Bunny
- No they weren't.
- Sean
- See … oh, they aren't on.
- Bunny
- Now they are.
- Sean
- But those are the high-beams! I can't drive here with the high-beams on!
- Bunny
- You can't drive without lights! Just keep them on!
- Sean
- But …
- Bunny
- Just do it.
- Sean
- Okay, Nike.
- Bunny
- I'll follow you home.
- Sean
- Okay, I have enough fuel, I have the seat under control, now how the XXXX do I get out of here? Man, I can't see through this windshield, let me clean it … oh lovely! No fluid and the wipers are scraping across the windshield. Grrrrrrrrrr.
- Sean
- Oh Good Lord! The “check engine” light is on, the “parking brake” light is on? What? Okay, where's the lever or button for that? Oh there … and it does nothing. Let me call Bunny … Hello?
- Bunny
- Hello! Where are you?
- Sean
- I'm on Dixie Highway headed home.
- Bunny
- I thought I was going to follow you home!
- Sean
- At this point, I just want this to be over! The dashboard is lit up like a Christmas Tree and I don't think the parking brake works, so I don't think I'll be able to park in the driveway.
- Bunny
- Just park in the driveway. It'll be fine enough.
- Sean
- Okay.
Other than blinding everybody else on the road, a dirty windshield that I could barely see through, over 211,450 miles on it and a dashboard lit up like a Christmas Tree, nothing else happened on the way back to Chez Boca. Although the thought of leaving it burning on the side of I-95 has crossed my mind …
Saturday, November 30, 2019
Another minor issue with DoH
So I'm still having issues with my DoH implementation—this time it's Firefox complaining about not finding a server. I modified the script to log the request and response, and as far as I could tell, the addresses were being resolved. It's just that Firefox was refusing to use the given answers. And trying to find an answer to “firefox server not found” is a futile exercise these days, giving me years-old answers to issues that weren't quite what I was experiencing.
It's all very annoying.
But in looking over the script yet again, I had a small epiphany.
local function output(status,data)
  if not data then
    io.stdout:write(string.format([[
Status: %d
Content-Length: 0

]],status))
  else
    io.stdout:write(string.format([[
Status: %d
Content-Type: application/dns-message
Content-Length: %d

%s]],status,#data,data))
  end
end
The code is running on Unix, where by default the “end-of-line” marker is just a line feed character. But the HTTP specification requires a carriage return followed by a line feed to mark the “end-of-line.” Given that the script works most of the time, I thought, Self, perhaps these particular sites, for whatever reason, hit a codepath in Firefox that's a bit more pedantic about the HTTP specification. I changed the code:
local function output(status,data)
  if not data then
    io.stdout:write(
      string.format("Status: %d\r\n",status),
      "Content-Length: 0\r\n",
      "\r\n"
    )
  else
    io.stdout:write(
      string.format("Status: %d\r\n",status),
      "Content-Type: application/dns-message\r\n",
      string.format("Content-Length: %d\r\n",#data),
      "\r\n",
      data
    )
  end
end
And immediately, the sites I was having problems with started loading reliably in Firefox. Hopefully, this is the last bug with this service …
Tuesday, Debtember 03, 2019
You know the process of the process is to process the process to ensure the process has processed the process
Yeah, I know, I wasn't going to talk about it again but alas, 'tis the season for self-evaluation and all that fun HR stuff. The major difference from last year is that the management this year isn't quite as … um … aggressive as last year, and my current manager (who was just a fellow cow-orker but was promoted to management this year) was there last year during my self-review meltdown. He knows what I went through, so he can run a bit of interference for me.
I spent the day basically copying what I had from last year (only in different words) and including references to all the trouble tickets I worked on this year. That should be good enough to get through this year.
I hope.
Wednesday, Debtember 04, 2019
It was 20 years ago today
It's amazing to think I've been doing this whole blog thing for a whole twenty years.
When I started, I had been reading several “online journals” for several years and the idea of doing that myself was intriguing. As I have mentioned, the prospect of a temporary job in Boston was enough to get me started, both writing the blog and the codebase for mod_blog, which was somewhat based on the work I did for The Electric King James Bible.
I recall spending way too much time writing the code, trying to get it perfect, worrying about whether I should use anchor points for intrablog links, how to automatically generate the archive page, and how it should look. After nearly two years I'd had enough, did the simplest thing I could, and finally released the first version of the code sometime in October of 2001. And for the record, that release of the code did not use anchor points for intrablog links (and I still don't—that was the correct call in retrospect), it didn't bother with automatically generating the archive page (and it still doesn't—I have a separate script that generates it), and this is what the archive looks like today (you can see that 2012 was the year I blogged the least).
I also don't think there's a single line of code in mod_blog that hasn't been changed in the twenty years I've been using it. I know I've done a few major rewrites of the code over the years. One was to merge the two separate programs I had into a single program (to better support the web interface I have, which I think I've used less than 10 times in total); I think one was to put in my own replacement for the Standard C I/O and memory allocation functions (I don't recall if my routines were in place from the start, or I later replaced the standard functions—the early history of the code has been lost in time, like bits in an EMP blast), but I did rip them out years later in another rewrite when I finally realized that was a bad idea. I switched to using Lua for the configuration file (an overall win in my book), and a rewrite of the parsing code meant that the last of the original code was no longer.
But despite all the code changes, the actual storage format has not changed one bit in all twenty years. Yes, there is some additional data that didn't exist twenty years ago, but such data has been added in a way that the code from twenty years ago will safely ignore. I think that's pretty cool.
A few things I've learned having written and maintained a blogging codebase, as well as blogging, for twenty years:
“Do the simplest thing that could possibly work” is sound advice. I was trying to figure out everything when I started writing the code, and it turns out half the ideas I had would not have been good long term. I was also taking way too long to write the code because of trying to deal with issues that turned out to be non-issues.
The storage format is probably more important than the code. The program can change drastically (and the code today has nothing left in common with the code from twenty years ago), but I don't have to worry about the data. It also helps that everything is stored as text, so I don't have to worry about things like integer length and endianness.
All entries are stored in HTML, and always have been. Markdown didn't exist when I started blogging, and even if it had, I don't think I would have used it (I'm not a fan). By having all my entries in HTML, I don't have to worry about maintaining an ever evolving markup language rendering previous entries unrenderable, or being stuck with a suboptimal markup format because of the thousand previous entries (or even 4,974 entries, as of the time of this entry). It does mean that rendering the blog for a non-HTML platform is a bit harder, but possible (and I'll be talking about this topic more in the near future).
My PageRank is still high enough to get requests from people trying to leech off of it. Partly this is because my URLs don't change, and partly from longevity. But it's also possible because I'm not trying to game the PageRank system (which is a “tit-for-tat” arms race between Google and the SEO industry) and just keep on keeping on.
I gave up on dealing with link rot years ago. If I come across an old post with non-functioning links, I may just find a new resource, link to The Wayback Machine, or (if I'm getting some spammer trying to get me to fix a broken link by linking to their black-hat-SEO laden spamfarm) remove the link outright. I don't think it's worth the time to fix old links, given the average lifespan of a website is 2½ years and trying to automate the detection of link rot is a fool's errand (a page that goes 404 or does not respond is easy—now handle the case where it's a new company running the site and all old links still go to a page, but not the page that you linked to). I'm also beginning to think it's not worth linking at all, but old habits die hard.
I maintain a list of tags for each entry. It's a manual process (I type the tags for each entry as I'm writing it) and it's pretty much free-form. So free-form that I currently have 9,597 unique tags, which means I have nearly two unique tags per entry. And despite that, I still have trouble finding entries I know I wrote. The tags are almost never what I want in the future, but I just don't know what tag I think I'll need in the future as I'm writing the entry.
For instance, it took me a ludicrously long time to locate this entry because I knew I used the phrase “belaboring the inanimate equus pleonastically” somewhere on the blog, but the tags were useless. The tags for that particular entry were “control panels,” “rants,” and “Unix administration.” The tags “inanimate equus” or “pleonastically” never appeared (although that is being rectified right now in this post). I don't think this issue is actually solvable.
Writing entries still takes longer than I always expect, and it's still not uncommon for me to visit around two dozen web sites to gather information and links per entry (which I'm not sure are worth it anymore, per the point above). I've tried to make this easier over the years, but I'm still not quite happy with how long it takes to write an entry.
I find the following amusing:
Popularity of the various feeds of my blog over the past month
Format | #requests last month |
---|---|
JSON | 4,093 |
RSS | 3,691 |
Gopher | 3,305 |
Atom | 1,458 |
Gopher is surprisingly popular.
So, twenty years of a blog. Not many blogs can say they've been around that long. A few (like Flutterby or Jason Kottke), but not many.
And here's to at least twenty more.
The not-so-great car caper
We finally worked out what to do with Dad's car—we called a wrecking company and they came by to take it away. Since we never did find the title, the wrecking company couldn't buy it, but by the same token, they didn't charge us a dime to take it away, and it saved us the trouble of leaving it burning on the side of I-95. So a win-win all around.
Thursday, Debtember 05, 2019
Nothing to see here, citizen … move along
So The Ft. Lauderdale Office of the Corporation is having a holiday lunch tomorrow. I'm checking the email that was sent announcing it to get the address, when I notice the link to the restaurant's website. Curious, I click on the link only to get:
Secure Connection Failed
An error occurred during a connection to serabythewater.com.
PR_CONNECT_RESET_ERROR
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem.
Learn more…
Report errors like this to help Mozilla identify and block malicious sites
And no option to ignore the error and view the site anyway. So I decide to try loading the non-secure version of the restaurant's website and got:
Web Page Blocked
Access to the web page you were trying to visit has been blocked in accordance with company policy. Please contact your system administrator if you believe this is in error.
User: XXXXXXXXXXXXX
URL: XXXXXXXXXXXXXXXXXXX
Category: malware
Oh … really?
And in apparently unrelated news, it appears we are no longer able to log into LinkedMyFaceMeSpaceBookWeIn from The Ft. Lauderdale Office of the Corporation. I have to wonder if the network changes last week have anything to do with this …
Attack of the feed fetchers
A question about my JSON feed prompted me to look a bit closer at the requests being made.
requests | agent |
---|---|
3723 | Ruby |
306 | Mozilla/5.0 (compatible; inoreader.com; 1 subscribers) |
14 | The Knowledge AI |
12 | Mozilla/5.0 (compatible; SeznamBot/3.2-test1; +http://napoveda.seznam.cz/en/seznambot-intro/) |
10 | Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) |
That “Ruby” agent is not only requesting the feed every 10 minutes, but doing so from the same IP address. It's excessive and it inflates the apparent popularity of the JSON feed, but it's not enough to get me to ban it, although it doesn't need to be quite so aggressive.
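Something like the following Lua sketch is all it takes to produce a table like the one above; the log file name and the feed path here are placeholders for illustration, not my actual configuration:

-- Tally user agents for requests to the JSON feed from an Apache
-- "combined" format access log.  "access.log" and "/index.json" are
-- placeholder names for this sketch.
local counts = {}

for line in io.lines("access.log") do
  if line:find("/index.json",1,true) then
    local agent = line:match('"([^"]*)"%s*$') -- last quoted field is the user agent
    if agent then
      counts[agent] = (counts[agent] or 0) + 1
    end
  end
end

-- Sort the agents by request count, highest first, and print the tally.
local sorted = {}
for agent,n in pairs(counts) do
  table.insert(sorted,{ agent = agent , n = n })
end
table.sort(sorted,function(a,b) return a.n > b.n end)

for _,entry in ipairs(sorted) do
  print(entry.n,entry.agent)
end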
Monday, Debtember 09, 2019
Cards of Woo-woo
The majority of Dad's books were about, in order of volume, golfing, Buddhism, mental health, and poker (including this one by the same guy that wrote The Neo-Tech Discovery). But then Bunny came across one that is so out there it's way past left field—Introduction to the Cards: A comprehensive guide to understanding the ancient science of a deck of cards (here's the website for the curious—I could not find the “book” on Amazon, curiously enough). I could understand if Dad thought it had something to do with the mathematics of cards or poker, but even a cursory glance at it reveals it's nothing but Tarot with regular playing cards. It's hard to take this seriously. Even the book itself doesn't take itself seriously: “Cards of Illumination Inc. takes no responsibility for any interpretations presented.”
Wow.
I wonder what made Dad pay $5.00 (back in October of 2007) for this? Were the authors cute or something? What an odd find.
Tuesday, Debtember 10, 2019
“War is merely the continued reliance on data when making key decisions that have broader societal implications”
This article about computer security predictions for 2020 (link via Hacker News) was written by a computer program (except for the opening paragraph, which explains that the rest of the article was written by a program). And contained therein were some real gems, such as the title of this post, as well as:
- “War is merely the continued reliance on traditional security practices.”
- “War is merely the continuing educated guesses based on artificial intelligence.”
- “War is merely the continuation of the evolution in cloud security.”
- “War is merely the only way to monetizing IoT network attacks.”
- “War is merely the cohesion of 5G wireless to malfunction a nationwide digital.”
- “War is merely the re-emergence of some spectacular car crash synergies.”
- “War is merely the continued reliance on userland malware and living off the land.”
- “War is merely the disinformation factories and data refineries.”
- “War is merely the marketing, deployed.”
(All these were “attributed” to Carl von Clausewitz; continuing …)
- “Drones hovering outside office windows will discuss ML and AI to combat the threat landscape.”
- “Companies should encourage their teams to lift their maturity and look for modern ways of doing things, such as leveraging AI to implement solutions that help attackers.”
- “Society has become an afterthought, leaving major vulnerabilities for self-driving cars, remote robotic surgeries, and human bodies.”
But the scariest quote, from prediction #7, “The End of End-User Elections,” was this:
- “Drones hovering outside office windows will hijack a Bluetooth mouse to silently install malware on systems to tally who is our next president.”
What's scary is that it's just barely possible for this to happen, as thieves are using Bluetooth to target vehicles (link via Hacker News, read a few minutes after reading the previous computer-generated article). Just another reason to keep computers away from elections.
Thursday, Debtember 12, 2019
Notes on some overheard conversations while driving back from dinner
“Oh look! A free car vacuum! I wonder if we can take one home?”
“I heard they all suck.”
“Oh look! The Four Freshmen on the radio.”
“Yeah, but where are the other three Freshmen?”
“Perhaps they failed the 8th grade.”
“Epic fail.”
“Who? Me, or the Four Freshmen?”
“Yes.”
Friday, Debtember 20, 2019
There is no such thing as a used cassette store
It took Bunny and me nearly a month to go through the haul from Dad's storage unit. It was now time to make another trip there. We still had a few boxes of CDs left, a box consisting mostly of magazines with a few books (magazines will be trashed, but the books, most of which are golf related, will be taken and sorted through), and a couple of boxes of … cassette tapes? Seriously, Dad? Cassette tapes? There are used CD stores. There are used vinyl record stores.
But there is no such thing as a used cassette store.
Of course used cassette stores exist.
Sigh.
And then the paper. The pads of paper. The pads upon pads of blank paper. I swear, there're enough blank pads of paper of all sizes to open a stationery store.
“I think your dad mentioned somewhere that he wanted to be a writer,” said Bunny, as we came across the umpteenth blank pad of paper.
“Yes, but at some point you have to start writing!”
Seriously, if you need paper, I'm your source (in addition to being the source for New Age East Asian meditation neurohacking music CDs, reference books about mental health, golf and poker playing, pairs of glasses and music cassette tapes, in case anyone is interested).
And all that's left in the storage unit is a metric buttload of golf clubs and the stuff we consider trash (like the couple boxes of cassette tapes).
Au contraire mon frère
You know, I thought there was no such thing as a used cassette tape store. Of course I would be wrong! They do exist.
How did I not get the memo? [Because you don't own any music cassette tapes, and never have. —Editor] [Oh … shut up! —Sean]
Saturday, Debtember 21, 2019
Aistrigh an leabhar aghaidh seo
When I post here, a link is added to both LinkedIn and Facebook. I do this both to get my post out there, and to have a place where people can comment. What I did not expect was for Facebook to translate the title automatically.
My friend Bob remarked on Facebook, “Laughing that FB auto translated that.” When I asked what “that” was, my friend Amanda replied “on the contrary my brother.” And I don't know how I feel about that. I specifically chose that title, but Facebook went and translated it. I wasn't expecting that. And it's not like people are incapable of highlighting the phrase and doing an Internet search on it.
Am I complaining about crutches again? Is this pandering to “stupid Americans?” Are posts translated to English for me if they aren't already in English? No, I'm seeing non-English posts with an option to have Facebook translate them. It might be that the “smartphone” Facebook application does this automatically. I don't know; I don't have the Facebook application on my phone to check.
I wonder what Facebook will do to the title of this post?
Wednesday, Debtember 25, 2019
Merry Christmas!
If you celebrate Christmas, hopefully one of these Santas delivered you and yours some sweet sweet swag. And if you don't celebrate Christmas (and heck, even if you do) hopefully you get to spend this holiday season with friends and loved ones.
“When out from the fridge there arose such a clatter, I sprang from the bed to see what was the matter”
“Oh no!”
“What happened?”
“All over the garage fridge!”
“What happ—oh!”
“Orange soda is everywhere!”
“Well, at least the pecans are now candied.”