The Boston Diaries

The ongoing saga of a programmer who doesn't live in Boston, nor does he even like Boston, but yet named his weblog/journal “The Boston Diaries.”

Go figure.

Friday, May 01, 2015

Living with being dead

Most of the patients here have been diagnosed with garden-variety neurological disorders: schizophrenia, dementia, psychosis, severe depression, or bipolarism. But the ones I am searching for are different. They suffer from an affliction even more puzzling: They believe that they are dead.

It’s a rare disorder called Cotard’s syndrome, which few understand. For patients who have it, their hearts beat and lungs pump, yet they deny their existence or functionality of their bodies, organs or brains. They think their self is detached.

Via Hacker News, Living With Being Dead — Matter — Medium

I would like to congratulate Sean Hoade on his Zombiepalooza interview, and what better way to do that than to link to an article about people who think they're dead. Does that mean they think they're zombies? Or ghosts? Or just dead and their body has yet to notice?

Saturday, May 02, 2015



I hate web based applications, because as soon as you get used to the interface—BAM! Some attention-deficit programmers change how everything works, just because. Google Maps is a good example of this. It's still perhaps the best mapping application out there and I always use it, but every few months they change how the entire interface works, destroying existing patterns of use and wasting days, nay weeks, of time as I attempt to learn how to use the features I use, only to find out half of them have been removed, because.


But today I'm not here to bury Google Maps, but Facebook. They broke my posting application. The application I use when I post to this blog and send notification to Facebook that is posted on my … whatever that thing is called at Facebook. My wall? Timestream? Spam channel? Whatever it's called.

Facebook changed how things work on the backend, and now I'm getting the dreaded 803 error (and of course there are no real answers there).

Thank you Facebook.

Thank you a lot.


It appears that Facebook wants to be the internet (much like Google in fact) or at the very least, force the impression that the web is Facebook. Why else make such a drastic change in API that disallows small blogging sites from updating Facebook remotely?

After spending several hours poring over the Facebook API documentation, my eyes are glazing over and from what I can see, it appears Facebook only supports three use cases (aside from using the Facebook website itself):

  1. an application running on Android;
  2. an application running on iOS;
  3. an application running on a website (preferably using Facebook to authenticate the user).

And that last one—it's someone actively using the website. My now-broken application? That was kicked off when I posted to my blog (most of the time that's via email, where I can use an editor of my choice to compose the entry instead of whatever hideous crap editing you get in a TEXTAREA on a webpage) where I may or may not be logged into Facebook at the time (usually not, not that it matters at all for tracking purposes, which is for another post).

And my application wasn't the only one Facebook broke. And that application looks like it won't be fixed any time soon ever! (sorry Dan)

So it looks like I'm stuck manually posting to Facebook when I update here. I was already updating GooglePlus manually because they have yet to provide an API to update remotely (I don't expect one any time soon). I suppose I could automatically update Twitter, which can update Facebook (now that I worked around the broken Twitter API—are you seeing a pattern here?) but there's no telling how long that will last; I'd be stuck with a 140 character limit including the link and well … no.

There's more to the web than just GoogleMyFacePlusSpaceBook, and long term, I think it'll be easier to just manually update FaceGoogleMyBookPlusSpace. XXXX the GoogleMyFaceSpaceBookTwitterPlus APIs. XXXX them all!

Update on Sunday, May 3rd, 2015 at 02:28 am

Petty, I know, but it made me feel better.

Sunday, May 03, 2015

When Las Vegas is not Las Vegas

Ah, Lost Wages. It's been a few years since my last visit, but each time I've been there, I technically wasn't in Las Vegas, but in Paradise. Paradise, Nevada to be precise.

And yes, it was a tax dodge.

Monday, May 04, 2015

Today my friends are talking with a lithp.

[Getting hit in the nuts hurts a lot more than getting hit in the head. Conclusion: Evolution thinks that your nuts are more important than your brain.  I agree with evolution.]

So … does this mean The Empire should have installed a cup on the Death Star?

May the Fourth be with you!

Tuesday, May 05, 2015

Ch-ch-ch-changes for the mobile web

It's been brought to my attention by a few parties that my blog was unviewable on some smartphones; which smartphones, I don't know (but I suspect Android-based devices). I finally got around to fixing it, and the changes were minimal. This:

<meta name="HandHeldFriendly" content="True">

(the Google Mobile-Friendly Test fell on the floor laughing when it encountered that line) changed to:

<meta name="viewport" content="width=device-width, initial-scale=1">

And that's it for the HTML changes. The CSS changes weren't that bad, once I figured out what was needed. I asked a fellow cow-orker, D, what I needed to do in order to serve up a “mobile-friendly CSS file” and his advice was: “Do whatever CNN does.”


It appears there is no real reliable way to detect a smartphone through CSS alone. Sure, I could try to detect a smartphone by sniffing the user agent, but I wanted something easy, not something error prone that still requires a ton of ongoing configuration and testing. So that was out. And the obvious media query:

<link media="handheld" rel="stylesheet" href="/CSS/smartphone.css" type="text/css">

was right out because “smartphones” are “smart” and not at all a “handheld.” Sheesh.

I ended up doing what CNN did—base the style upon the width of the browser. It seems that a “safe” width for smartphones is around 736 pixels. Larger than that, and I can assume a real desktop (or laptop) display; that or less and hey, treat it as a smartphone. And if your browser window on the desktop (or laptop) is less than 737 pixels, you'll get the “mobile” version of my site.

Anyway, the changes weren't all that bad. The “not-main-content” is positioned via CSS and that's all I really wanted to change. For instance, I had this style for the main content:

/* Yes, the DIV is redundant.  I left it in because I want to be explicit */
DIV#main  /* stand-in selector name */
{
  margin-top:           0;
  margin-bottom:        0;
  margin-left:          220px;
  margin-right:         180px;
  border:               0;
  padding:              0;
}

To fix this for the smartphone:

DIV#main  /* stand-in selector name */
{
  margin-top:           0;
  margin-bottom:        0;
  margin-left:          220px;
  margin-right:         180px;
  border:               0;
  padding:              0;
}

/* override some previous settings for "smartphones" */
@media screen and (max-device-width: 736px),
       screen and (max-width: 736px)
{
  DIV#main
  {
    margin-left:        0;
    margin-right:       0;
  }
}

The rest of the changes were along those lines for the major portions of the page—override the placement settings for the various bits and pieces.

So now the blog should be readable on small devices.

I hope.

Driskell v. Homosexuals

It seems that one Sylvia Ann Driskell is suing homosexuals (link via Flutterby). The handwritten lawsuit is a riot to read, but ultimately, it does seem that Ms. Driskell might be in need of some mental care (if not a proofreader).

Also, I think that Ms. Driskell needs to read (or listen to) Matthew Vines' talk on the Bible and homosexuality (it's long, but I think it's worth the time, if only for some Christians to gain some perspective, and for gays to get some counter-arguments for the Westboro Baptist Churches out there).

Wednesday, May 06, 2015

Dogs and cats living together

I'm watching an animated interview of Buckminster Fuller when I see this sequence of equations:

a = b
a² = ab
a² - b² = ab - b²
(a + b)(a - b) = b(a - b)
a + b = b
2b = b
2 = 1

And I'm thinking, that looks right, as far as I can remember algebra, but two can't be equal to one, can it?

I had to work through it by hand to find the problem, and now, gentle reader, you get to work through it yourself.

You're welcome.

Thursday, May 07, 2015

Small websites need not apply

Hoade and I were exchanging emails about AdWords today. He's trying to get his writing career into overdrive, seeing how every other job he's tried never worked out. I advised him against AdWords unless someone else was footing the bill, as it can get expensive.

[I'm amused that the GoogleAI has realized that lots of people don't think too highly of AdWords.]

I've also heard first hand that AdWords wasn't worth the money spent on it.

On the flip side, I found that AdSense wasn't worth it for me. Sure, if my blog had a bit more focus, like I was writing about gendered bodies in Japanese pornographic anime and horror through a Foucauldian framework in order to analyze the West's gaze upon the world, Google would have had a better time generating related advertising for the blog, but alas, my blog isn't that focused and the ads I did get were bizarre. And it didn't help me that not many people were seeing the ads.

Things might have changed since my last experiment with AdSense (or what I heard about AdWords), but I still cautioned Hoade about wasting money on AdWords. I feel he would do better by getting his name out there on podcasts and web-based forums. At least that way, he won't spend any money.

Last words from Hoade:

Just to add a little coda to our AdWords / Facebook ads convo, the for-real serious consensus seems to be "Maybe it could work. Do some A/b testing with your massive pile of ad money. One way might could work gooder than another." Not even XXXXXXX kidding.

Actual quote: "She ended up with a 1.3 percent click-through rate, which is actually very good." I believe this is true, but lord, it doesn't make you run for your checkbook.

Yeah, I'm thinking small sites need not apply.

Friday, May 08, 2015

I, for one, welcome our new self-driving robotic overlords

As I thought about Google's self-driving car, I realized that more than just taxicab drivers should be worried—there would be tremendous pressure from the trucking industry (and I'm not talking about the drivers here) to allow driverless trucks on the road. I'm not terribly surprised.

License plates are rarely an object of attention, but this one’s special: the funky number is the giveaway. That’s why Daimler bigwig Wolfgang Bernhard and Nevada governor Brian Sandoval are sharing a stage, mugging for the phalanx of cameras, together holding the metal rectangle that will, in just a minute, be slapped onto the world’s first officially recognized self-driving truck.

The truck in question is the Freightliner Inspiration, a teched-up version of the Daimler 18-wheeler sold around the world. And according to Daimler, which owns Mercedes-Benz, it will make long-haul road transportation safer, cheaper, and better for the planet.

“There’s a clear need for this generation of trucks, and we’re the pioneers who are willing to tackle it,” says Bernhard.

Via InstaPundit, The World's First Self-Driving Semi-Truck Hits the Road | WIRED

I am also reminded of Humans Need Not Apply, but it might not be all that dire.

Saturday, May 09, 2015

The Dymaxion Car

The Dymaxion Car, designed by Buckminster Fuller. It wasn't pretty. It could seat eleven. It was difficult to drive. It killed one of its first drivers. But it got 30mpg and could travel at 90mph. Kind of a mixed bag for a car built in 1933.

Still, it might be somewhat fun to ride in it, as long as there's a trained driver.

Sunday, May 10, 2015

Eight years of greylisting

I've been hacking on my greylist daemon over the past few days. I'm not sure what, exactly, prompted me to start hacking away at it though. The last code change was in December of 2011—all code changes since then have been tweaks to the Makefile (the file that describes how to build the program). As I'm hacking on it, I've come to hate the code handling the protocol the components use for communications (there's the main component that manages the data and logic; there's the component that interfaces with sendmail and another one that interfaces with postfix).

And over the past few days, I've reflected over what I would do differently if I were to write the greylist daemon now and how well my decisions eight years ago held up.

One decision I made eight years ago was to write my own “key/value” store instead of using a pre-existing one. I rejected outright the use of an SQL database engine (like MySQL or PostgreSQL) and I don't think I would change my mind now. The data stored is short lived (six hours for most entries, otherwise thirty-six days) and I don't think such churn is good for database engines.

In addition, the only NoSQL based solution (as they're now called) at the time was memcached (written in 2003; redis wasn't released until 2009, two years after I released the greylist daemon). memcached (and redis) can expire entries automatically, and either one could handle five out of the six lookups the greylist daemon makes. The one lookup neither one can handle (as far as I can tell) is the IP address lookup.

This lookup compares the IP address of the sending SMTP server against a list. The list describes an address range and what to do if the given IP address “matches.” For example:

If, say, the IP address is, then the email is accepted and further processing is skipped because of the matching rule: ACCEPT

An IP address of is rejected, because of the matching rule: REJECT

An address like will match the rule

and because the result is GREYLIST, further checks are made.

There does not appear to be a way of handling this type of query using memcached or redis. I would have to write code to store the IP addresses anyway. Also, memcached is a pure memory cache—if it crashes, all the data goes away (and remember—at the time I wrote this, this was really the only key/value store that existed that wasn't an SQL database engine) which is something I didn't want to happen. So my decision at the time to write my own key/value store wasn't a bad one.


Today I might consider using redis to store what I could, but it's another component, and if it isn't available, I can't greylist incoming email (I have to allow the email in—fail safe and all that). Also, the code I wrote to store the non-IP address data was easy to write. I dunno. It's hard to say how I would store the data today.

The protocol between my components is something I would handle completely differently today. I can't say what the actual protocol would be though.

There are basically two methods of sending data—a series of values in a fixed order (which is how the protocol works today) or as a series of tagged values, which can appear in any order. The former doesn't really deal well with optional data (you end up tagging such values anyway) while the latter is harder to parse (since the values aren't in a fixed order, you have to deal with missing values in addition to duplicate values).

The biggest issue I have with the protocol now is what I said above, the code that handles the protocol is a mess—it's all over the place instead of in a few isolated routines. That makes updating the protocol (say, adding new fields, fixed or optional) very difficult.

What I would do now is make the protocol handling portion more of a module—a module for version 1.0 of the protocol, a module for version 1.1 of the protocol, etc., load them all up, and based upon the version embedded in the packet (something I do have, by the way), farm out the processing to the proper protocol version module. It would make updating the protocol easier to deal with in the codebase. The lack of this approach to the protocol is, I think, the biggest problem with the codebase today.

One last aspect I would change is the logging of various statistics, or “key performance indicators” as they are called in the industry. Instead of incrementing a bunch of variables in the codebase and every so often dumping them out to syslog (messy code requiring the use of signals and all the problems that entails, and several lines of code modified for every new KPI added) I would use the method they use at Etsy: statsd, or at least, my own take on it. I don't need the full blown “all-singing, all-dancing, all-graphing” statsd that Etsy developed, but one that just logs to syslog(). And given the whole concept is easy, a small version that just logs to syslog() is pretty trivial to write (I wrote a version in Lua with 225 lines of code, and a full quarter of that is just parsing the command line). The nice thing about a statsd-like concept is that it is trivial to add new KPIs to the codebase, and they're logged automatically without any other changes. The logging and potential resetting of values is all isolated in statsd, in the same way that logging messages to files or forwarding them to another server is isolated in syslog.

There's not anything else I would really modify in the greylist daemon. Really, the only bad decision I made eight years ago was not fully isolating the protocol. Everything else was an okay decision.

And frankly, I'm not even sure if the greylist daemon needs any more work done on it.

Monday, May 11, 2015

SPF might not be worth handling, but what about RBL?

A month ago, I re-evaluated the use of SPF as an anti-spam measure and found it wanting. Today, I decided to re-evaluate my stance on the various real-time blackhole lists that exist. I was reluctant to use an RBL because over-aggressive classification for even the smallest of infractions could lead to false positives (wanted email being rejected as spam). It has been over a decade since I first rejected the idea, and I was curious to see just how it would all shake out.

I used the Wikipedia list of RBLs as a starting point, figuring it would be pretty up-to-date. I then dumped information from my greylist daemon. The idea is to see how much additional spam would be caught if, after getting a “GO!” from the greylist daemon, I do an RBL check.

Out of the current 2,830 entries, only 145 had not been whitelisted. I didn't filter these out before running the test, but I don't think it would throw off the results too much. Half an hour of coding later, and I had a simple script to query the various RBLs for each unique IP address (1,446). I let it run for a few hours, as it had quite a few queries to make (1,446 IP addresses, each one requiring one query to see if the IP address is a known spammer, and a possible second one for the reason, across 45 RBL servers—it took a while).

First up, how many “spam” results did I get from each RBL:

Results from each RBL
RBL | hits | reasons given

As you can see, some of them were not worth querying. Also, about list.quorum.to: it's not straightforward to use that server, as it always sent back a result even when the others did not. I ultimately decided to treat any result whose only “hit” came from list.quorum.to as “non-spam” because of this.

I then proceeded to pore through all 2,830 results.

Email classification from RBLs
Marked as SPAM       2,739   (97%)
Not marked as SPAM      91    (3%)

And out of the 91 that were not marked as spam, only 7 were spam not marked by any of the RBLs. Not bad. But the real test is false positives—email marked as spam that isn't. And unfortunately, there were a few:

False positives

Now, I realize that some of my readers might very well consider email from Twitter or Facebook as spam, but hey, don't judge me!


Anyway, that's a problem for me. I will occasionally have issues with the greylisting in some cases (rare, but it does happen, and I have to explicitly authorize the email when I become aware of the issue) but it's even worse with this.

It's hit-or-miss within the IP range Facebook uses to send email. This would make troubleshooting quite difficult. I could whitelist the problematic domains but for any new site I might want to receive email from, I would have to watch the logs very closely for issues like this. But it's not as bad as I thought it would be, and it would cut out a lot of the spam I do get. It's tempting.

I shall have to think about this.

Tuesday, May 12, 2015

He is the electric messiah! The AC/DC god!

Today we received an email from the Marketing Department of the Corporation Overlord Corporation touting their latest press release to the public. They linked to the press release at the various GoogleMyFacePlusTwitterSpaceBook sites using some outrageous graphics. The one for Facebook was pretty scary looking, being based on this:

[Oh my God!  He's part of the Matrix!  I knew this was a bad week to start sniffing bits!]

Image by Charis Tsevis

I'm not sure why the Marketing Department of the Corporation Overlord Corporation felt the standard linking images for FaceTwitterGoogleMyPlusSpaceBook weren't good enough and needed to “kick it up a notch,” but there you go.

But as a counter point to that (or maybe even a counter-counter point, or a point, or something), here's an interesting video (warning: it's an hour) on the digital tracking that MyFaceGoogleSpaceBookPlusTwitter can do (and most likely, is doing), if you can stomach the whole “viva la revolución my democratic comrades” vibe the speaker gives off (and you can pretty much skip the last fifteen to twenty minutes where it gets really thick).

Wednesday, May 13, 2015

Notes on an overheard conversation at The Ft. Lauderdale Office of The Corporation

“Can you help me?”

“Sure. What's the problem?”

“When I'm logged into my laptop as me, I can run ispell. But if I switch to root, it's not there.”

“Hmm … where does ispell live?”

“It's in /usr/local/bin. And before you ask—I checked, /usr/local/bin is in root's $PATH.”

“Hmmm … I'm running … um … that version of the operating system.”

“Rabid Wombat? I am too.”

“Let me drive the laptop for a second.”

“Okay. These two terminal windows.”

“Can you switch to root for me?”

“There you go.”

“Now, let's see … as you, I can see ispell in /usr/local/bin and the permissions seem okay. Now in the root terminal window … wow! There's a completely different set of files in /usr/local/bin. Hmmm … ”

“Any ideas?”

“Wait a second … that root window is on another system!”

“Oh … that would explain my problem … ”

“Yes it would.”

Thursday, May 14, 2015

The Flying Camera

I have a soft spot for cameras. I've got a few 35mm cameras, a couple of 8mm cameras and floating around here somewhere is a Super-8 camera. I also have … um … I think three, maybe four, digital cameras (which includes the one in my iPhone).

So yeah, I like cameras.

And even with my current crop of seldom-used cameras, I want the Lily (link via MyFaceTwitterGoogleSpaceBookPlus). The camera you toss in the air and it follows you around.

It's probably for the best that you can only pre-order the thing for now. And I live less than a mile from an airport.


Friday, May 15, 2015

Countrly Road

For no good reason, here's a video of some Japanese musicians doing a cover of John Denver's “Take Me Home, Country Road” (link via Instapundit).

Saturday, May 16, 2015

The more things change


HTTP2 is finally here (link via Hacker News). I'm not happy about it, but what can I (or you) do? It's a done deal.

Part of the reason I don't like it is that it seems as if Google pushed this for their own needs.

You have a completely warped perspective here.

This is something Google pushed, so that Google can have as many tracking cookies as they like when you browse the internet, without the cookies causing a noticeable performance degradation because a http request might exceed the American DSLs MTU size.

This was one of the primary engineering criterias. No really.

There's no features in it for the user.

You have a completely warped perspective here. This is something Google pushed,… | Hacker News

Google is now in a position to dictate the architecture of the web. Sure, one could ignore Google and blithely go about their web business, but really, if you want to even have a chance of being found on the web, you follow the dictates of Google! Heck, even I kept mucking with my blog until I got the “okay” from Google (and while there were other reasons I made the change besides Google, notice I didn't stop until Google said I was okay). And don't think Google will stop there (which is another rant for another time).

Another reason I don't like HTTP2 is that, as written, it's TCP over TCP. I can understand why they did it that way, but it's sad that for as much power as Google has, even they couldn't force a more sensible change.


Plus ça change, plus ils deviennent énervants. (The more things change, the more annoying they become.)

Sunday, May 17, 2015

The blind men and the Molochian elephant

Bostrom makes an offhanded reference of the possibility of a dictatorless dystopia, one that every single citizen including the leadership hates but which nevertheless endures unconquered. It’s easy enough to imagine such a state. Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.

So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if you don’t, everyone else will kill them, and so on. Every single citizen hates the system, but for lack of a good coordination mechanism it endures. From a god’s-eye-view, we can optimize the system to “everyone agrees to stop doing this at once”, but no one within the system is able to effect the transition without great risk to themselves.

And okay, this example is kind of contrived. So let’s run through – let’s say ten – real world examples of similar multipolar traps to really hammer in how important this is.

Meditations On Moloch | Slate Star Codex

I don't agree with everything said in this long article (and warning—it is long. I mean, long. Did I mention just how long it was?) but I do feel that certain someones would benefit greatly if they read it and thought long and hard about it. While I'm tempted to give my summary of the article, I'd rather not, lest my intended target audience disregard the article entirely.

Monday, May 18, 2015

Gaming the system

By December 2011, lajello’s profile had become one of the most popular on the entire social network. It had received more than 66,000 visits as well as 2435 messages from more than 1200 different people. In terms of the number of different message received, a well-known writer was the most popular on this network but lajello was second.

How a Simple Spambot Became the Second Most Powerful Member of an Italian Social Network | MIT Technology Review

Only lajello isn't a human but a spambot. Using the information in the article to boost your own ranking on MyFaceGoogleSpaceBookPlusTwitter is left as an exercise for the reader.

Tuesday, May 19, 2015

Wishful thinking

There are times when I wish the RFCs had more examples that covered various corner cases, such as handling SMTP or even, you know, SIP!


Anyway, while I'm here, let me also ask for a way to log everything but only when something is going to fail and skip the logging for stuff that won't fail (that just wastes space). How hard can that be?

Wednesday, May 20, 2015

If you think signal handling in C sucks, try Lua

So I have this Lua module to handle signals I wrote …

Originally, I just set a flag that had to be manually checked, as that was the safest thing to do (make that “practically the only thing you can do” if you want to be pedantic about it).

But after a while, I thought it would be nice to write a handler in Lua and not have to manually check a flag. Unfortunately, signal handlers have to be thought of as asynchronous threads that run at the worst possible time, which means calling into Lua from a signal handler is … not a good idea. The way to handle this is to hook into the Lua VM in the signal handler. lua_sethook() is the only safe Lua function to call from an asynchronous thread (or signal handler) as it just sets a flag in the Lua VM. Then when the Lua VM is at a safe spot, the hook is called, which can then run the Lua function.

So we write our Lua function:

function Lua_signal_handler()
  print("Hi!  I'm handling a signal!")
end

and here I'm making it a global function just for illustrative purposes. At some point, we install the signal handler, which has to be done in C:

/* In order to hook the Lua VM, we need to statically store the Lua state.
** That's what this variable is here for ...
*/

static lua_State *gL;

	/* ... code code blah blah ... */

	/* we're in some function and we have access to a Lua state.  Store
	** it so the signal handler can reference it.
	*/

	gL = L;

	/* sigaction() should be used, but that's quite a bit of overhead
	** just to make a point.  Anyway, we are installing our signal
	** handler.
	*/

	signal(SIGINT,C_signal_handler); /* SIGINT is just an example */
The signal handler installs a hook:

static void C_signal_handler(int sig)
{
  lua_sethook(gL,luasignalhook,LUA_MASKCALL | LUA_MASKRET | LUA_MASKCOUNT,1);
}

and when it's safe for the VM to call the hook, it does, which then calls our signal handler written in Lua:

static void luasignalhook(lua_State *L,lua_Debug *ar)
{
  /* remove the hook as we don't want to be called over and over again */

  lua_sethook(L,NULL,0,0);

  /* get our function (which is a global, just for illustrative purposes)
  ** and call it.
  */

  lua_getglobal(L,"Lua_signal_handler");
  lua_call(L,0,0);
}
Yes, it's a rather round-about way to handle a signal, but that's what is required to run Lua code as a signal handler. And it works except for two cases (that I have so far identified—there might be more).

The first case—coroutines. Lua coroutines can be thought of as threads, but unlike system threads, they have to be scheduled manually. And like system threads, signals and coroutines don't mix. Each coroutine creates a new Lua state, which means that if a signal happens, the Lua state that is hooked may not be the one that is currently running and thus, the Lua-written signal handler may never be called!

The second issue involves a feature of POSIX signals—the ability to restart system calls. Normally, a signal will interrupt a system call and it's up to the program to restart it. There is an option to restart a system call automatically when a signal happens so the program doesn't have to deal with it. The funny thing is—under the right conditions, the Lua signal handler is never called! Say the program is making a long system call, such as waiting for a network packet to arrive. A signal is raised, our signal handler is called, which hooks the Lua VM. Then the system call is resumed. Until a packet arrives (and thus we return from the system call) the Lua VM never gets control, and thus, our function to handle the signal is never called (or called way past when it should have been called).

Fortunately, I found these issues out in testing, not in production code. But it has me thinking that I should probably work to avoid using signals if at all possible.

Thursday, May 21, 2015

Notes about an overheard conversation at The Ft. Lauderdale Office of The Corporation

“We should definitely do that at Black Hat!”

“I didn't know you were into haberdashery.”


“I think you mean millinery. Haberdashers generally sell buttons and thread and stuff.”



“What are you guys talking about?”

“Your fascination with hats.”

“You mean Black Hat?”

“Yeah. Haberdashery.”


“Oh, sorry. Millinery.”

“You guys are crazy.”

“We're just pushing it to eleven.”

The check that was in the mail

I'm checking my snail mail and … what's this? A check?

[Cheap Tickets is certainly not cheap when it comes to checks!]

You mean the check was in the mail?

Oh wait … it's one of those “promotional checks” and not a “real check,” even though it has a check number, it's made out to me, has the value as both numbers and words, it's signed, and has what look to be a routing number (to an Australian bank‽) and account number. Also, across the bottom it has: “THIS DOCUMENT CONTAINS A BLUE BACKGROUND, [Check! –Sean] MICROPRINTING [Oh yes, it does. It doesn't make much sense, but I can make out the letters. —Sean] AND AN ARTIFICIAL WATERMARK ON THE BACK [Yup, “IPS.” So, check! —Sean] — VOID IF NOT PRESENT” [Nope, it's all there, so it's not void. —Sean]

And there is the story of a man who deposited a “promotional check” for $95,000.

So maybe it's a real check?

Perhaps I could try depositing it? [No! —Bunny] [Awwww! —Sean] [Okay, it's your bank account to lose … go right ahead! —Bunny] [Woot? —Sean]

Friday, May 22, 2015

“Consistent mediocrity, delivered on a large scale, is much more profitable than anything on a small scale, no matter how efficient it might be.”

Fundamentally, there’s a theme in Olia’s speech (and the speech of others in that space, like Dragan Espenschied, Ben Fino-Radin, and so on) bemoaning the move away from a space on a website being the province of the users, and being turned into a homogenized, commodified breeder farm of similar-looking websites with only surface implementations, like WordPress, Facebook Pages, and so on.

There was a time when a person who was not particularly technical, or whose technical acumen was sufficient to get applications running on a machine and not much more, could code a webpage. The tags were pretty straightforward, the uses of them clear, and the behavior pretty dependable. Much how one could, in a weekend, learn sufficiently how to pilot a sailboat… such was that a few weekends of study could allow a person to craft a fun little webpage, with their voice, their stamp, and the idiosyncrasies of their personality shining through.

Those days are gone. Long gone.

Instead, we have this (as my buddy Ted Nelson calls it) nightmare honky-tonk of interloping, shifting standards soirees that ensure, step by step, bylaw by beta, that anybody who isn’t willing to go full native will be shut out forever. The Web’s underpinnings, at least on the basic HTML level, have been given over to the wonks and the engineers, making it an impenetrable layer of abstraction, not worth your time to learn unless you were looking to buff up your resume, or if some programmer pride resided in this whole mess being in your job description.

That Whole Thing With Sound in In-Browser Emulation « ASCII by Jason Scott

There's no need to read the full article (unless you are interested in the state of audio in webpages and how it's not serving web based emulators of old home computers letting people play thousands of games from the 80s and 90s); I'm just quoting the part that spoke to me.

I'm also reminded of this:

For a very long time, taste and artistic training have been things that only a small number of people have been able to develop. Only a few people could afford to participate in the production of many types of media. Raw materials like pigments were expensive; same with tools like printing presses; even as late as 1963 it cost Charles Peignot over $600,000 to create and cut a single font family.

The small number of people who had access to these tools and resources created rules about what was good taste or bad taste. These designers started giving each other awards and the rules they followed became even more specific. All sorts of stuff about grids and sizes and color combinations — lots of stuff that the consumers of this media never consciously noticed. Over the last 20 years, however, the cost of tools related to the authorship of media has plummeted. For very little money, anyone can create and distribute things like newsletters, or videos, or bad ass tunes about "ugly."

Suddenly consumers are learning the language of these authorship tools. The fact that tons of people know names of fonts like Helvetica is weird! And when people start learning something new, they perceive the world around them differently. If you start learning how to play the guitar, suddenly the guitar stands out in all the music you listen to. For example, throughout most of the history of movies, the audience didn't really understand what a craft editing was. Now, as more and more people have access to things like iMovie, they begin to understand the manipulative power of editing. Watching reality TV almost becomes like a game as you try to second-guess how the editor is trying to manipulate you.

As people start learning and experimenting with these languages of authorship, they don't necessarily follow the rules of good taste. This scares the shit out of designers.

In Myspace, millions of people have opted out of pre-made templates that "work" in exchange for ugly. Ugly when compared to pre-existing notions of taste is a bummer. But ugly as a representation of mass experimentation and learning is pretty damn cool.

Regardless of what you might think, the actions you take to make your Myspace page ugly are pretty sophisticated. Over time as consumer-created media engulfs the other kind, it's possible that completely new norms develop around the notions of talent and artistic ability.

Happy Ugly.

the show: 07-14-06 - zefrank

But sadly, no one cares.

Saturday, May 23, 2015

Sometimes, you just gotta go back to the 8-bit era

In the pages that will follow, I will be documenting the various stages in the design of a new arcade game that I hope to create for my classic Tandy Color Computer 3 sold internationally by the Radio Shack Corporation during the 80's and early 90's. This game will largely be created the old school way utilizing as much as possible the same setup that I used to develop games back then.

[PopStar Pilot]

As a teenager with a computer during the 80s I always had the idea of writing a computer game in the back of my mind, but I never did know how to write one. It perhaps didn't help that I had a Tandy Color Computer at the time. I know, it's a bad craftsman that blames his tools, but in this case, I think there's something to it. The Color Computer had no hardware graphics to speak of (the Color Computer 3 did, but I had moved on to the PC world by the time it came out) so it was up to the programmer to do all the bit shifting, masking and drawing, which isn't as easy as it sounds (or rather, making it fast isn't that easy).

I never did write a game.

But it is a simple computer. Unlike modern systems, the entire computer is documented in a 70-page book and games were written for it. So feeling a bit nostalgic, I fired up an emulator (I'm nostalgic, not masochistic) and spent a few hours getting a simple graphic program going.

[The UFOs are coming to take me away! Aaaaaah!]

Yeah, not that easy. That running man? (bonus points if you recognize where he comes from) There're eight images in the animation, and each image is repeated four times, each one shifted right one pixel to avoid having to do a massive amount of shifting at runtime (it's a classic “memory vs. time” tradeoff here). Then I had to align the images so it looks smooth (image one, then image two shifted right one pixel, then image three shifted right two pixels, then image four shifted right three pixels and that takes us through a full byte of pixels) which complicated the animation loop since it ends with image one shifted one pixel to the right, which has to carry over to the next loop (image one shifted right one pixel, then image two shifted right two pixels, then image three shifted right three pixels, then image four not shifted but starting one byte over, etc).
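Generating those pre-shifted copies is the kind of thing you do once, ahead of time, on the development machine. Here's a rough C sketch of the idea—the names are mine, and I'm assuming 2 bits per pixel (four one-pixel shifts walking through a full byte implies four pixels per byte; the details depend on the actual graphics mode):

```c
#include <stddef.h>

#define BPP    2                /* bits per pixel (4 pixels/byte)     */
#define WIDTH  2                /* sprite width in bytes              */
#define HEIGHT 8                /* sprite height in rows              */
#define SHIFTS 4                /* four copies walk through one byte  */

/* Each shifted copy is one byte wider than the source, to catch the
** pixels that spill off the right edge. */
void preshift(
        const unsigned char src[HEIGHT][WIDTH],
        unsigned char       out[SHIFTS][HEIGHT][WIDTH + 1]
)
{
  for (size_t s = 0 ; s < SHIFTS ; s++)
  {
    size_t shift = s * BPP;     /* one pixel = BPP bits */
    for (size_t y = 0 ; y < HEIGHT ; y++)
    {
      unsigned carry = 0;       /* bits pushed out of the previous byte */
      for (size_t x = 0 ; x < WIDTH ; x++)
      {
        unsigned v = src[y][x];
        out[s][y][x] = (unsigned char)((v >> shift) | carry);
        carry = (unsigned char)(shift ? v << (8 - shift) : 0);
      }
      out[s][y][WIDTH] = (unsigned char)carry;  /* the spill-over byte */
    }
  }
}
```

Copy s has every row shifted right by s pixels; at runtime the animation code just picks the right copy and does byte-aligned stores, no shifting at all.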

That's not to mention that I had to draw the running man over the background image which requires merging the image data of the man with the background image. And to avoid really weird drawing artifacts, I used a double-buffer method (show one frame while drawing into a non-visible frame, then show the updated frame and use the previous frame to draw and repeat).
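The double-buffer logic itself is tiny. In C-flavored sketch form (hypothetical names—the real thing is 6809 assembly, and on real hardware the “swap” means repointing the video start address):

```c
#include <string.h>

#define FRAMESIZE 1024          /* one graphics frame */

static unsigned char frame[2][FRAMESIZE];  /* two frames of video RAM */
static int           visible = 0;          /* frame currently shown   */

/* return the frame we're allowed to scribble on */
unsigned char *hidden_frame(void)
{
  return frame[visible ^ 1];
}

int visible_frame_index(void)
{
  return visible;
}

/* one animation step: restore the background into the hidden frame,
** draw the sprite over it, then swap which frame is displayed */
void draw_frame(const unsigned char *background)
{
  unsigned char *work = hidden_frame();
  memcpy(work,background,FRAMESIZE);   /* restore the background      */
  /* ... merge the running man into 'work' here ... */
  visible ^= 1;                        /* hardware: repoint video RAM */
}
```

The viewer only ever sees completed frames; all the messy intermediate drawing happens in the frame that isn't being displayed.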

It was fun though. I don't think this will end up as a game any time soon, but it was nice to work on a computer that is so easily comprehensible by one person and where hitting the hardware is very easy to do (I think the last time I programmed to the hardware was in the mid-90s). I think my nostalgia has been sated for now.

Sunday, May 24, 2015

The crazy things that were done to make games run fast in the 8-bit era

Most of the time spent in my simple graphics program is in this loop:

clrloop		ldd	,u++	; load 2 bytes from background, increment pointer to next 2 bytes
		std	,x++	; store them in current frame, increment pointer to next 2 bytes
		cmpu	#end	; are we done yet?
		bne	clrloop	; nope, keep going

Despite its name, it's not clearing memory. It's actually copying the background image to the current frame being drawn. I'm not showing the code before or after this as this post is really about this loop.

As written, each iteration of this loop takes 24 clock cycles (or just “cycles”) to run, meaning this code effectively copies one byte every 12 cycles. I recalled reading several years ago a crazy scheme to copy memory on the Motorola 6809 (the CPU used in the Color Computer) that involved using the stack register.

But before I get crazy, just how fast can I get the code to run?

Unrolling the loop a bit:

clrloop        ldd     ,u++	; load 2 bytes from background
               std     ,x++	; store 2 bytes to current frame
               ldd     ,u++	; repeat this seven more times
               std     ,x++
               ldd     ,u++
               std     ,x++
               ldd     ,u++
               std     ,x++
               ldd     ,u++
               std     ,x++
               ldd     ,u++
               std     ,x++
               ldd     ,u++ 
               std     ,x++ 
               ldd     ,u++ 
               std     ,x++ 
               cmpu    #end 	; are we there yet?
               bne     clrloop	; don't make me turn this CPU around!

and we get 8.5 cycles per byte. Unrolling it more isn't worth it, as the fastest we'll get is 8 cycles per byte (assuming we unroll the entire loop to copy all 1,024 bytes; but in doing so we'd use 4K of code (ldd ,u++ and std ,x++ are both two-byte instructions) just to copy 1K of data). Can we do better?
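As a sanity check on those figures, here's the arithmetic, using one set of per-instruction timings that reproduces them (8 cycles each for ldd ,u++ and std ,x++, 5 for cmpu #, 3 for bne—a 6809 data sheet may slice them slightly differently):

```c
/* assumed per-instruction cycle counts, chosen to be consistent with
** the figures in the text */
enum
{
  LDD_UPP  = 8,  /* ldd ,u++        */
  STD_XPP  = 8,  /* std ,x++        */
  CMPU_IMM = 5,  /* cmpu #immediate */
  BNE      = 3   /* bne             */
};

/* the original four-instruction loop moves 2 bytes per iteration */
int loop_cycles(void)
{
  return LDD_UPP + STD_XPP + CMPU_IMM + BNE;        /* 24 cycles */
}

/* unrolled eight times: eight ldd/std pairs move 16 bytes, but the
** cmpu/bne overhead is only paid once */
int unrolled_cycles(void)
{
  return 8 * (LDD_UPP + STD_XPP) + CMPU_IMM + BNE;  /* 136 cycles */
}
```

That gives 24 / 2 = 12 cycles per byte for the loop, 136 / 16 = 8.5 for the unrolled version, and a floor of (8 + 8) / 2 = 8 once the overhead amortizes to nothing.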

In checking the timings of various index operations, amazingly enough, adding an offset instead of just incrementing the pointers is faster. This:

clrloop         ldd     ,u	; load 2 bytes from background
                std     ,x	; store 2 bytes to current frame
                ldd     2,u	; load 2 more from background past the previous data
                std     2,x	; store them past the previous data
                ldd     4,u	; and keep this up
                std     4,x
                ldd     6,u
                std     6,x
                ldd     8,u
                std     8,x
                ldd     10,u
                std     10,x
                ldd     12,u
                std     12,x
                ldd     14,u
                std     14,x
                leau    16,u	; adjust pointers by 16 bytes
                leax    16,x	; as that's how much we copied
                cmpu    #end	; are we there yet?
                bne     clrloop ; enough!

gets us down to 6.5 cycles per byte! But unrolling this any further won't buy us a thing, as once the index passes 15, the instruction takes longer to execute because of additional instruction decoding. So this routine is pretty much it as far as a straightforward approach will take us. Not so bad though—almost twice as fast as the original four-instruction loop. But to go even faster, we have to get crazy and bring in the stack pointer.

Why the stack pointer?

Because of four instructions: PSHS, PSHU, PULS and PULU. The first instruction can save a number of registers onto the stack. But looking at it another way: it's an instruction that can write up to 12 bytes into memory. The second instruction is similar, but instead of using the stack register, it uses the U register (it's the “user stack pointer”). The data written goes from higher addresses to lower addresses (because traditionally, stacks grow downward in memory). The last two instructions do the reverse, restoring a number of registers from memory, or, reading up to 12 bytes into registers.

But we can't use all the possible registers these instructions support. We can't use the program counter, as that's rather important to executing the program (it holds the address of the next instruction to execute—overwriting that will cause the program to start running who knows what). We'll be using both stack registers, so those are out. We could use the CC register, but part of its use is to control the CPU—setting it to random values could be interesting. Too interesting for me, so that's out (and it's only 8 bits—not a great loss).

That still leaves us with four other registers we can use: X, Y, D (16-bit registers) and DP (an 8-bit register). So realistically, we can transfer up to seven bytes at a time, taking 12 cycles to read and 12 cycles to write, for a theoretical maximum of 3.4 cycles per byte!


The problem with this method is that the stack pointer is used by the CPU to keep track of where it is in the program. Not only that, but when the CPU receives an interrupt (a signal that something has happened and needs to be handled now!) it saves what it is doing on the stack and handles the interrupt. And while on the 6809 the stack register can be used as a general-purpose index register (like we've been using the X and U registers), we're hampered by the fact that interrupts happen.

In this case though, it can be done. The stack grows downward in memory—that is, as items are pushed onto the stack, the stack pointer is decremented lower and lower into memory. Taking this into account means we will be filling in the frame backwards, from the bottom of the frame towards the top. So we set the stack pointer to the bottom of the frame. If an interrupt happens, sure, there's some odd stuff written to the frame, but once the interrupt has been handled, the stack pointer is restored to where it was before the interrupt and we can continue copying the background, overwriting the garbage data added by the interrupt.

But pulling data off a stack goes from lower addresses to higher. And the bytes are pulled off in the reverse order they were pushed (as expected—it's a stack, after all—last in, first out). So the background data needs to be rearranged to take into account that we're reading the data from low to high, storing the data high to low, every seven bytes are reversed, and the memory in front of a frame can be expected to be trashed by an interrupt. Then there's the issue that 7 does not evenly divide 1,024—we'll have to handle the last two bytes as a special case.
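To make that concrete, here's a C model of the rearrangement. This is a sketch of the scheme as described, not a 6809 simulation—the exact within-group byte order depends on the register list, so treat the reversal detail as illustrative. The copy reads 7-byte groups from the rearranged data low-to-high and “pushes” each group high-to-low into the frame; prepare() is the inverse transformation, so the frame comes out identical to the original image:

```c
#define FRAME 1024
#define GROUP 7
#define FULL  ((FRAME / GROUP) * GROUP)  /* 1022; last 2 bytes special-cased */

/* Model of the pulu/pshs copy: read each 7-byte group from the
** rearranged background (low to high) and push it into the frame
** (high to low), which reverses the bytes within the group. */
void stack_blast(const unsigned char *src,unsigned char *frame)
{
  unsigned char *sp = frame + FULL;  /* "stack pointer" starts at the end */
  for (int g = 0 ; g < FULL / GROUP ; g++)
    for (int i = 0 ; i < GROUP ; i++)
      *--sp = src[g * GROUP + i];    /* a push: write, moving downward */

  /* post-loop cleanup for the two leftover bytes (the pulu d / pshs d) */
  frame[FULL]     = src[FULL];
  frame[FULL + 1] = src[FULL + 1];
}

/* Rearrange the background image ahead of time so that stack_blast()
** reconstructs it exactly: groups are handed out in reverse order and
** byte-reversed within each group. */
void prepare(const unsigned char *image,unsigned char *rearranged)
{
  for (int g = 0 ; g < FULL / GROUP ; g++)
  {
    int base = FULL - (g + 1) * GROUP;   /* where this group will land */
    for (int i = 0 ; i < GROUP ; i++)
      rearranged[g * GROUP + i] = image[base + (GROUP - 1) - i];
  }
  rearranged[FULL]     = image[FULL];
  rearranged[FULL + 1] = image[FULL + 1];
}
```

The model ignores interrupts; on the real machine, anything below the stack pointer (in front of the frame) can be trashed at any moment, which is why the copy proceeds bottom-up and overwrites any interrupt debris as it goes.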

But assuming the data is stored correctly and we have some memory in front of each frame that can be safely trashed, then the copying code would be:

clrloop		pulu	dp,d,y,x ; transfer seven bytes
		pshs	x,y,d,dp
		cmpu	#end-2   ; are we there yet?
		bne	clrloop  ; shut up!
		pulu	d	 ; some post loop cleanup
		pshs	d

and get 4.5 cycles per byte.

But this level of optimization should only be done if absolutely required. And in my case, it's not required (yet—if it ever will be). I'll be happy to stick with the 6.5 cycle version for now.

Monday, May 25, 2015

Oh that Florida!

Ironically, one of the things that may be contributing to Florida being shamed so often in the national media is something all Floridians should be proud of.

The terms "progressive" and "model for the rest of the nation" don't often appear in sentences with "Florida," but that's exactly how people view the state's open-records laws, AKA the Government in the Sunshine Act.

Since 1909, Florida has had a proud tradition that all government business is public business and therefore should be available to the public. That means all records, including photos and videos, produced by a public agency are easily accessible with a few narrow and obvious exceptions. Public officials are also required to open all of their meetings — even unofficial ones — to the public.

However, those same laws are also the reason your mugshot appears online days after your arrest, and those laws make it incredibly easy for journalists to write about weird Florida news stories.

You'll notice something when you read so many "Weird Florida" news stories. They almost always include the phrase "according to the arrest report."

As journalists, all we have to do in most cases is call the police department and ask for an arrest report, and the cops are required to give it to us. Nowadays a lot of cops simply email the reports, and some departments even post arrest records online. Some of the more dedicated weird-Florida-news reporters go through batches of arrest reports at a time.

How Florida's Proud Open Government Laws Lead to the Shame of "Florida Man" News Stories | Miami New Times

You know, that explains a lot. It's not that Florida is crazier than the rest of the nation, it's just that the rest of the nation has decided not to air its dirty laundry.

Or in other words, Florida is the most transparent state when it comes to governance. Go figure.

Tuesday, May 26, 2015

Notes on an overheard conversation at The Ft. Lauderdale Office of The Corporation

“Man, the new hires seem to be getting younger and younger every day. And unruly.”

“You do realize today is ‘Bring Your Kid To Work Day,’ don't you?”



“Is it too late to call in si—”



Notes on another overheard conversation at The Ft. Lauderdale Office of The Corporation

“It's too quiet in here. The kids are up to something.”

“Now, now, be nice.”

“I'm tellin you, they're planning something … ”

“All the kids have gone home.”

“Yeah, right! That's what they want us to think.”

Wednesday, May 27, 2015

Der Ring des Star Wars

So, by now you’re probably wondering what any of this has to do with Star Wars?

Well, as this essay will show, the six Star Wars films together form a highly structured ring composition. The scheme is so carefully worked out by Lucas, so intricately organized, that it unifies the films with a common universal structure (or what film scholar David Bordwell might call a “new formal strategy”), creating a sense of overall balance and symmetry.

Via Sean Tevis on GoogleMyTwitterFaceSpaceBookPlus, star wars ring theory | Mike Klimo

It's long, but it's an interesting new theory about Star Wars. Sure, we've all heard about George Lucas borrowing heavily from The Hero With A Thousand Faces, but a ring composition? That's certainly a new take on things.

But if the Star Wars films comprise a ring composition, it's only coincidental, as The Secret History of Star Wars made clear: George Lucas was making it up as he went along.

Thursday, May 28, 2015

Kung Fury

It's set in the 80s. It's set in Miami. It's a Kung Fu cop. It has Adolf Hitler (aka Kung Führer). And it has a T-Rex. It's the ultimate in 80s action films not made in the 80s. It's Kung Fury!

Oh lord is this thing over the top (so over the top it has its own music video starring David “The Hoff” Hasselhoff). It's more a series of quick scenes that ape common 80s action film tropes turned up to 11 than it is a film with a compelling story and character development. And all the more glorious because of it.

Friday, May 29, 2015

Does the removal of the audio from this video make it ironic?

This video previously contained a copyrighted audio track. Due to a claim by a copyright holder, the audio track has been muted.

Yeah, that happens on YouTube. But for some reason, the removal of the audio track on Anna's performance of “XXXX You” just makes it that much better, because she's “performing” it using sign language.

How appropriate.

Saturday, May 30, 2015

Magic is supposed to be … well, magical! Not scientific!

RPG magic systems can roughly be divided up into "fixed spell" and "freeform" mechanics. Fixed spell systems are often highly mechanistic, where the operation of each spell is exactly calculable. Freeform mechanics, on the other hand, call for the GM to judge the difficulty of a spell based on little information as well as a large degree of randomness.

Neither of these, however, is "mysterious". A mystery means that no pattern is obviously visible – but there is a hidden pattern. For a magic system to be mysterious, there must be hidden patterns which the magician character does not know at first, but which can with effort be discovered. In a game, this means that there must be either hidden variables or even hidden rules. An extreme of this would be that the GM secretly designs the magic system and only lets the player learn it a bit at a time (i.e. completely hidden rules). However, mystery can be injected by having hidden variables. i.e. How a PC's magic works depends on factors which are defined by GM, but which the player must deduce from other clues.

Via Hacker News, Breaking Out of Scientific Magic Systems

This is more a set of observations about magic in role-playing systems than a replacement for an existing magic system. The author is right that we modern players tend to be reductionist about magic systems because of modern science (I know I'm a reductionist about magic systems as a player—I never did get a good grip on the magic system in Mage, probably the closest role-playing system where “magic” is still mysterious and needs to be discovered, because the system was so vague and contradictory), and maybe we need to loosen up a bit. I don't know … it sounds like it would be a lot of work for the GM, and it might alienate the players.

Sunday, May 31, 2015

We wouldn't want anything to happen to the page rank on your nice website, now would we?

For these reasons, over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We've seen positive results, so we're starting to use HTTPS as a ranking signal. For now it's only a very lightweight signal — affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content — while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.

Via Rob Landley's Blog Thing for 2015, Official Google Webmaster Central Blog: HTTPS as a ranking signal

And that was nine months ago. Is your website served over HTTPS?

This just appears to be yet more proof that Google is calling the shots on the web now.

Oh, by the way, your web server is HTTP/2 compliant, right? Wouldn't want anything bad to happen to your page rank, now would you?

Copyright © 1999-2024 by Sean Conner. All Rights Reserved.