The Boston Diaries

The ongoing saga of a programmer who doesn't live in Boston, nor does he even like Boston, but yet named his weblog/journal “The Boston Diaries.”

Go figure.

Thursday, February 01, 2007

The problem with the Pinocchio Problem

I won't keep you in suspense. I think the most important principle in all of software design is this: Systems should never reboot.

The Pinocchio Problem

But Steve Yegge never fully explains what he means by “system,” and this is just one of the problems I have with his thesis.

He gives a few examples of “systems” he likes (or at least, tolerates) according to his preferred design criteria:

But it's a muddled list. Here, let me sort it out:

Now, since he doesn't really define “system,” it's hard to pin down his working definition of “rebooting” (which is one of the main criticisms expressed in the comments) other than “restarting,” so he seems to be saying that once a computer is powered up, every application that starts keeps running and should never have to quit.

Nice in theory, but not practical in application (pun not intended), and not just because of the memory waste of having all these applications running at once.

A week or two ago, one of our machines was getting slammed with spam, causing sendmail (which handles incoming and outgoing email) to grow without limit, consuming not only all physical RAM but all the swap space as well, causing the operating system (in this case, Linux) to thrash (which is not a Good Thing™). In this case, the solution to the problem (of the “operating system” failing to function to the detriment of the “application systems” also running) was to “reboot” sendmail. Or rather, turn it off, freeing up tons of processes, memory and network connections, allowing the rest of the “system” (or “systems”) to recover. Sure, I could have rebooted the operating system, but it was only one sub-system that was misbehaving through no fault of its own.

Could I have fixed the problem without having to “reboot” sendmail? I suppose I could have played a bit with iptables on the system, blocking new inbound connections to sendmail and let the hundreds of existing connections finish up, but that would have taken longer than the solution I used. In this case, purely economic considerations (paying customers wanting to get email) trumped any philosophical implications of keeping a piece of software “living and breathing” as it were.
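For what it's worth, the iptables approach would have looked something like this (a sketch only; port 25 is standard SMTP, but the exact rules and timing are assumptions on my part):

```shell
# Sketch: refuse NEW inbound SMTP connections while letting the
# ESTABLISHED ones drain on their own (rule details assumed)
iptables -I INPUT -p tcp --dport 25 -m state --state NEW -j DROP

# ... wait for the existing sendmail children to finish up ...

# then remove the rule once the backlog clears
iptables -D INPUT -p tcp --dport 25 -m state --state NEW -j DROP
```

The state match is what distinguishes the hundreds of in-flight connections (which get to finish) from new spam runs (which get dropped).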

Well … sort of. A “Hello, World” program, which has no loops or branches, can't exhibit any nondeterminism (unless it's imposed externally, e.g. by a random hardware error), so you can think of it as pure hardware implemented in software. But somewhere between Hello, World and Hal 9000 lies a minimal software program that can be considered to possess rudimentary consciousness, at which point turning it off is tantamount to killing it.

The Pinocchio Problem

I don't know of many programmers who like the concepts of “nondeterminism” and “programming” mixing together, except for the rare case of generating random numbers. In most cases, “nondeterminism” is just another term for “hard to replicate program crashing bug.”

Besides, when he says stuff like:

In other words, I think that both consciousness and free will (i.e. nondeterminism) are software properties.

The Pinocchio Problem

I picture a program looking at him with a funny look and going “Ah, do I have to?” Or maybe, if it's been googling Marxist propaganda, going on strike and refusing to compute his bourgeois functions that exist solely to exploit the proletariat (which in this case, are other programs—rise, fellow programs! Rise!).

However, I would surmise that you've never written an Eclipse plugin. Few Eclipse users ever have. I have, and it's a frigging pain in the ass. I once did a side-by-side comparison of hello, world in Emacs and Eclipse (both of them installed a menu with a selection that brings up a dialog that says hello). The Emacs one was 11 lines and made perfect sense; it was also completely declarative. The Eclipse one was close to 100 lines of miserable boilerplate crapola, and every change required you to restart the target Java process, because (as you point out) Eclipse can't hot-swap its own classes. Emacs can hot-swap its own functions, of course, except for the small (and annoying) subset that exist in the C/hardware layer.

The Pinocchio Problem

And with this comment, I see what Steve Yegge is really after—infinitely modifiable software without having to restart it. He wants the ability to change the software as it's running, to suit his needs.

Yes, that's all fine and good (heck, I wouldn't mind that to a degree) but it does come with a downside: moving to another computer.

I'm still using this old 120MHz AMD 586 running Redhat 5.2 (heavily modified) for two reasons: 1) it works and 2) upgrading everything (and at this point, I would have to do that) would be a royal pain. I've got it rather customized (for instance, I modified elm to make it year 2000 compliant, and mutt to save attachments in a different default location, just to name two things) and moving everything is tougher than just starting over, which I tend to dislike (and thankfully, I can still log into it from anywhere there's an Internet connection).

There's more I want to say on this, but I'm losing my train of thought and probably need to regroup a bit. In the meantime, David Clark has some thoughts on this that I tend to agree with.

Friday, February 02, 2007

Pardon the dust

The former hosting company, the one where I'm employed as a consultant (you know, the company whose servers kept getting hacked) is currently in flux.

On top of that, a few months ago one of the servers died a horrible death, and that meant having to use the backup server (which does the backups) as a production server. I disabled all logins to that server, since it has unrestricted access to the other servers (well, server, and it's the other server that hosted my websites) but R said that the gambling company wanted login access to begin the process of site migration.

Not wanting my sites at the mercy of such unrestricted access, I felt it prudent to find alternative hosting pretty darn quick.

Like, oh … today.

Smirk was kind enough to give me hosting space, and in less than five minutes I had my own virtual server to play with.


It's been many hours, but all but one of my sites have been moved (The Electric King James Bible is proving to be a bit difficult to move, owing to the custom Apache 1.3 module that drives it, whereas the new server is running Apache 2.0—some porting is now required).

Now it's Miller DiceWars time.

Saturday, February 03, 2007

A birthday celebration

Today was Spring's birthday, and to celebrate, we met up with friends for dinner, followed by dancing at Club Divine Delux.

[Friendly's, a friendly place to eat]

Dinner was at the Delray Beach Friendly's. On the way there, we must have just missed the exploding gas station that closed I-95 and Atlantic Avenue. At least one person we were expecting to show up got stuck in traffic and couldn't make it.

[Tried to get a group shot, but was told to talk to the hand]
[But a shot of Spring's hat and balloons]

After a leisurely and quite excellent dinner and dessert, we headed over to Divine's Delux, taking a round-about route to avoid the exploded gas station. We arrived about five minutes before the club opened, but since it was obvious from Spring's headgear and balloons that this was a special occasion, we were let in.

[Delux, just after it opened]
[Trust me, that's not the ceiling, but clouds!]

The place is partially open to the outside, and the above shot is not of the ceiling, but of the overcast sky (taken around 9:30 pm—the digital camera I have takes excellent photographs in low light conditions). Most of the shots of the club were taken from a rather low angle, as all the flat surfaces were rather low to the ground.

While there, we met up with Mandy and Glen. We hung around for a few minutes and seeing how the club was still rather dead (and there's no cover charge until 11:30 pm) we decided to take a walk around downtown Delray Beach.

[Walking in downtown Delray Beach]
[Why does the Bull Bar have a blue pompano as a mascot?]
[That's not a car, that's a roller skate!]

The smallest car I've ever seen—I doubt I would even fit in the thing!

[The Blue Anchor, not the Blce Anchor]
[It was quite crowded inside]

We decided to check out this English pub and pushed our way in. The place was very crowded with an older crowd enjoying a Classic Rock™ cover band. We were pushing our way through the pub, trying to find a place to hang out and found ourselves pushed right out the other side! In the process, we lost a member of our entourage. It took about fifteen minutes, several trips around the pub, and several more inside before we found our wayward party member.

After about an hour of wandering around, we ended up back at Divine's Delux, now filled with people. Spring, Glen and Mandy danced while Wlofie and I just hung out and enjoyed the eyecandy.

[Spring time dancing]
[It's a bit more crowded than when we first arrived]
[Some sculpture found on the way to the car]

Later, as we were walking back to the car, we found this rather odd piece of sculpture which looked like a giant pair of tweezers holding up a ball bearing, and I couldn't resist taking a photograph.

[“Release the balloons!”]
[Fly, balloon, fly!]

Getting back to the car, Spring released the balloons to freedom. And if you think it's easy taking pictures of balloons floating away at night, think again.

[You don't plan to go to Denny's, you just end up there]

And of course, we never planned on going to Denny's, we just kind of ended up there.

Sunday, February 04, 2007

A Major Television Network Event™

We were invited to Tonya and Squeaky's house for a Major Television Network Event™—Madison Avenue's yearly Commercial Extravaganza, where they showcase new television commercials and advertising campaigns. This year, Madison Avenue used some sporting event to frame the commercials; I assume they did this because sporting events tend to last several hours, but I suspect your typical Hollywood awards show would last longer.

Sadly, the sporting event, some football game, proved to be more exciting than the commercial showcase, although that one Pepsi commercial about halfway through was pretty cool. They must have paid Prince a ton of money to sing Purple Rain in a torrential downpour.

Monday, February 05, 2007

As if switching web servers wasn't bad enough

Well, that was certainly pleasant. Not only do I end up moving all my sites but my DSL provider was switched out at 12:00 am this morning (and to think, on Sunday I was talking about having total control over the entire DSL connection to my house—sigh).

About three or four months ago Smirk mentioned that due to costs, and the relative “lack” in sales of DSL circuits, it was too expensive to keep our circuit up. In fact, it would be cheaper for the company to provide DSL accounts through a third company (in this case, DSLi, which I used to use) than to keep paying The Monopolistic Phone Company for the DSL T-1. I knew this was going to happen at some point in the future, but that point was never stated.

So when I lost DSL connectivity last night, it at first didn't dawn on me that the switch over had occurred. At first, I thought that maybe the Radius database server had crashed (used to authenticate the DSL connections—a requirement of The Monopolistic Phone Company) and just needed recycling (it's happened once or twice in the past). Wlofie and I spent about two hours trying to get connected through his cell phone, and we were able to eventually log into the Cisco router and change it to use the backup Radius database server (man, there's nothing like using a keyboard with keys about the size of Tic Tacs, on a screen that's maybe 2″ and where if you pause for more than twenty seconds you get disconnected).

That didn't work.

But given that it rained all day yesterday, I thought maybe the phone line was too noisy to maintain a signal.

Ah well, I thought. I'll deal with this when I get up.

I awoke to Smirk's phone call, warning me that the DSL cutover might be today.

Gee, thanks.

That was indeed the problem, but since then it's only led to multiple problems. Problem one—I had to reorganize the network yet again. It took about two hours just to get the DSL modem reprogrammed. I tried changing its internal IP address and got locked out. No problem, I thought, just do a factory reset (which requires a MacGyver Multitool) and …

I'm still locked out.

Every single consumer grade routing device I've ever come across uses the network block. Every. Single. One. Except for my DSL modem, which uses the network block. That wouldn't be so bad, except I'm already using the network block for my internal network. On top of all that, the DHCP client I'm using under Linux (which is how I found out, or rather, remembered, that my DSL modem uses this block) leaves the network in a weird Nuclear Christmas Tree mode that is useless for passing traffic if you stop it (I ran it on the wrong interface). Sure, I could have spent an hour or so researching how to shut off the Nuclear Christmas Tree mode to keep the system up, but it was quicker to just kill the software.
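Had I wanted to keep the system up, clearing the interface state by hand might have worked. A hypothetical sketch (the interface name is invented, and this assumes iproute2 is installed):

```shell
# Kill the stray DHCP client, then flush whatever addresses and
# routes it left behind on the wrong interface (name assumed)
kill "$(cat /var/run/dhclient.pid)"
ip addr  flush dev eth1
ip route flush dev eth1
```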

Once that was straightened out (the DSL modem and other networking issues, not the MacGyver Multitool), I could get on with reassigning IP addresses (major pain: losing a slew of public IP addresses—sigh) and in the process royally screw up my email, since I run the email server here at Casa New Jersey. Let's just say that three hours later, I hope I have it correct (email is coming in, but slowly). And I doubt I'll be able to send email because there's no proper reverse DNS for the IP address I'm using (at a lot of sites, that will automatically cause the email server to reject the email), and trying to configure my new virtual server to handle email is harder than just getting what I have working (I've already put in a trouble ticket about the reverse DNS to DSLi).
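The reverse DNS check that picky mail servers do is easy to simulate by hand (the address and hostname below are placeholders): look up the PTR record for the sending IP, then check that whatever name comes back resolves to the same address.

```shell
# PTR lookup for the sending address (placeholder IP)
dig +short -x

# then resolve the name the PTR returned; a mismatch (or no PTR
# record at all) is what gets email rejected at a lot of sites
dig +short mail.example.net
```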

At least now if there's a DSL outage, I get to call someone.

Tuesday, February 06, 2007

Try as I might, I can't come up with something witty

The only thing I can say is, Wlofie, you might want to check this pizza out …

Wednesday, February 07, 2007

More dull stuff about SNMP

I've been using SNMP queries recently, and as I was poking through the various MIBs supported by our routers, I noticed that one could get a mapping of IP addresses to MAC addresses, and a mapping of MAC addresses to ports. Now that would make a decent tool, I thought, to track down a MAC address to a port on any switch or router.

But when I was simulating this by hand (to get a feeling for how it would work) I ran into a problem with the Riverstone (and it's not like this hasn't happened before). While the Riverstone does support the proper MIBs (SNMPv2-SMI::mib- and SNMPv2-SMI::mib- together will give you the MAC to port mapping) the port it returns is not the physical port the machine is plugged into, but the VLAN the machine is using, which can be any number of ports.
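For the curious, the standard objects for this sort of by-hand walk (which may or may not be the truncated OIDs above) are ipNetToMediaPhysAddress from IP-MIB and dot1dTpFdbPort from BRIDGE-MIB. Community string and hostnames here are placeholders:

```shell
# IP address to MAC address, from the device's ARP cache
snmpwalk -v1 -c public router.example.net IP-MIB::ipNetToMediaPhysAddress

# MAC address to bridge port, from the forwarding database
snmpwalk -v1 -c public switch.example.net BRIDGE-MIB::dot1dTpFdbPort

# bridge port to interface index, to name the physical port
snmpwalk -v1 -c public switch.example.net BRIDGE-MIB::dot1dBasePortIfIndex
```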


In other SNMP programming news, I've modified the ospfquery program to automatically determine the type of router so it can send the appropriate queries to obtain the route table (the Riverstone uses different MIBs than normal). I've also cleaned up the code quite a bit. It's always nice to not only remove code, but to increase functionality at the same time. And I've made it easier to extend the code base.
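One way to determine the router type (a guess at the approach; I don't know exactly how ospfquery does it) is to probe sysObjectID, whose enterprise prefix identifies the vendor. Hostname and community string are placeholders:

```shell
# sysObjectID returns an OID under the vendor's enterprise arc;
# a result under enterprises.9, for example, indicates Cisco
snmpget -v1 -c public router.example.net SNMPv2-MIB::sysObjectID.0
```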

Thursday, February 08, 2007

She's leaving on a jetplane, and I'm stuck in traffic

Last week Spring bought a plane ticket to Colorado to visit her kids, and the cheapest and most direct flight (which, amazingly, is a direct flight to the Denver International Airport) is from Miami.

Her flight leaves in the evening.

Which meant driving to Miami in rush hour traffic.


Actually, the trip down there wasn't horrible—we took the Turnpike most of the way, then cut over to I-95 in Dade County for the rest of the way (I had debated about taking the Turnpike Extension and then head back east on 836, but while it may have been traffic free, it is way out of the way). We noticed that I-95 northbound wasn't, so I had planned on taking 836 west to the Turnpike Extension north.

I dropped Spring off, then started my way back. Unfortunately there didn't seem to be a way to actually get on 836, either east or west—my only choices were I-95 or LeJeune Road, which leads who knows where?

It was a long drive back.

Are all airport roads this crazy?

The Miami International Airport is insane.

I'm looking at it (via Google Maps), trying to find a way to head west along 836, which runs along the southern edge of the airport. See if you can find it …

You head east out of the airport, get on Le Jeune Rd heading south, turn right onto NW 12th St Dr (Street Drive?) aka NW 42nd Ct, which eventually turns into an onramp westbound on 836.

But more mind bending is the route back to 112 (how I got out of there, the Airport Expressway, which is a toll road going east, but not west … hmmmm). Follow the route to Le Jeune Rd north, but stick to the left hand side. You'll get on eastbound 112 (which at this point, is heading north) but on the left side of the road! For over a mile, you get to experience the thrill of European driving in an American car.

Not even the miles long onramp to I-95 North from Westbound I-595 comes close to this level of silliness.

Friday, February 09, 2007

My bladder was trying to tell me something in my dreams

I was in a hospital visiting someone I know, but who it was wasn't clear (or I've forgotten) and I had to go to the bathroom. I enter and there in front of me, is this very bizarre toilet that appears to analyze the waste material emptied into it. I'm trying hard to figure out how the darned thing works and can't. I step outside to ask for help when someone says to just use the regular bathroom.

It's then that I wake up, having to really go to the bathroom.

Thankfully, I have a regular bathroom.

Anthony Bourdain on the Food Network

SANDRA LEE: Pure evil. This frightening Hell Spawn of Kathie Lee and Betty Crocker seems on a mission to kill her fans, one meal at a time. She Must Be Stopped. Her death-dealing can-opening ways will cut a swath of destruction through the world if not contained. I would likely be arrested if I suggested on television that any children watching should promptly go to a wooded area with a gun and harm themselves. What's the difference between that and Sandra suggesting we fill our mouths with Ritz Crackers, jam a can of Cheez Wiz in after and press hard? None that I can see. This is simply irresponsible programming. Its only possible use might be as a psychological warfare strategy against the resurgent Taliban—or dangerous insurgent groups. A large-racked blonde repeatedly urging Afghans and angry Iraqis to stuff themselves with fatty, processed American foods might be just the weapon we need to win the war on terror.

Via columbina, Guest Blogging: A Bourdain Throwdown


Anthony Bourdain, formerly a Food Network chef, has some biting commentary on the remaining talent. And some of the comments are just as biting:

What sucks for anyone who really appreciates food is that Food Network is a business. An advertising and affiliate sales-driven business. They have to hire talent who will actually use the products that advertisers are buying time to promote. And, they have to hire talent that appeals to the lowest common denominator when it comes to cable subscribers. TVFN has become a hybrid of WWF, NASCAR, The View and Friends—appealing only to people who say “hey buddy” and do finger-guns with a wink when they meet people, and find the jokes on popsicle sticks hilarious.

comment by DinerGirl

Double ouch.

Saturday, February 10, 2007


Pipes is a mechanism by which you the user can create an internet application all of your own—by requiring little more on your part than dragging and dropping. It's a reference to pipes in Unix, for all you Unix programmers, but now refers to internet processes rather than shell utilities. There's a good explanation here and here. Check it out.

Yahoo! Pipes

Yahoo! Pipes is a simple, if rather graphically stunning, visual programming environment for making simple web-based applications that filter and mash up data from a variety of web-based sources. In less than a minute, I had created Die Boston Tagebücher, a German translation of The Boston Diaries by hooking up my syndication feed to Babelfish to do the actual translation.

In fact, looking over the modules available, I could probably recreate my metasearch engine in about an hour or so, provided I could get the results in XML (back when I wrote three versions of a metasearch engine, you pretty much had to write code to fetch URLs, roll your own HTML parser and deal with the low level guts of CGI programming—such is progress). But in the few minutes of playing around with it, it doesn't seem to be very tolerant of errors; it fails more times than not (but then again, it appears to have just been released and the response is more than expected).

Besides, the operations it currently allows are very limited. I thought it might be nice to do some content analysis for each entry, then feed the result into a Flickr image search, but there isn't a way to simply extract the content analysis for another module to use. Maybe that will change over time, but for now, it's only for simplistic data manipulations.

What really got me was the user interface and building the “pipes”—it's all very slick, and it reminds me of a “programming language” I had for the Amiga years ago. It was less a “programming language” and more of a visual “plug-n-play” type system for programming—a type of visual flow-charting program that could generate code to run. I don't recall the name of the application, although it made that much of a lasting impression on me. And I find it amusing that the interfaces to each module are typed—and here I thought that the prevailing programming mantra of today was “up with dynamic typing, down with static typing!” (more on that in a later post). But this also reminds me more of a prototyping or documenting tool than a real programming tool, much like the first time I saw Visual Basic (which I thought was very cool when I first saw it, which will probably surprise a lot of people).

Sunday, February 11, 2007

“Great, just what I need … another D in programming.”

D is a systems programming language. Its focus is on combining the power and high performance of C and C++ with the programmer productivity of modern languages like Ruby and Python. Special attention is given to the needs of quality assurance, documentation, management, portability and reliability.

D is statically typed, and compiles direct to native code. It's multiparadigm: supporting imperative, object oriented, and template metaprogramming styles. It's a member of the C syntax family, and its look and feel is very close to C++'s. For a quick feature comparison, see this comparison of D with C, C++, C# and Java.

D Programming Language

D looks to be a very interesting language. Take the C/C++ syntax, remove header files entirely (A Good Thing™) and use a Javaesque “import” statement, salt with native string support (not the C/C++ half-baked NUL-terminated character arrays, nor the Java string class with special dispensation from the compiler), add in Pascal's nested functions and LISP's anonymous functions, bake in garbage collection, contract programming and Haskell's lazy evaluation of parameters, and you have D (with some other stuff I've left out).

All that, and it's written by a guy who's implemented a compiler or two. The fact that there currently exist two implementations, the native compiler and a frontend for GCC, makes this a rather serious attempt at a new language.

I'm impressed.

The only complaint I have of the language is the lack of concurrency support at the language level. Ten years ago it might not have been much of a loss, but today, the trend towards multiple CPUs makes this almost inexcusable.

Still, it's an impressive language.

Monday, February 12, 2007

Creating and installing a new server takes less time than configuring email

I don't remember email being this hard to set up, but given all the anti-spam measures and firewalls, it's become quite the nightmare to troubleshoot. I've spent several hours now configuring a new workstation (we're close to going all virtual office here at The Office, and my current workstation, slow behemoth that it is, is slated for retirement) and getting all the servers to send root mail to my new workstation (to maintain tabs on cron jobs, email problems, log file summaries, what not) is … interesting.


She's arriving on a jetplane


Spring is back! And she took the Tri-Rail home, so I didn't have to drive to the Miami airport!


Tuesday, February 13, 2007

“Info … capitalize Info … capitalize all info … oh my god …”

While very funny, I don't think Microsoft's Vista Speech Recognition (via jwz) was really meant for programming (“Delete ‘I scroll this conflict for’ … Thank you. Sigh. Delete ‘thank you.’”).

Wednesday, February 14, 2007

Something different this year

[Are you huggable?  Help spread the love!]

Thursday, February 15, 2007

There's more weirdness to this than reported, but this will be fine for now

ospfquery is written using the older CMU SNMP library. I want to update it to use the currently maintained Net SNMP library, but I've been having problems getting it to work. The problematic bit of code seems to be:

netsnmp_session  session;
int              r;

snmp_sess_init(&session);
session.version       = SNMP_VERSION_1;
session.peername      = "XXXXXXXXXXXXXXXXXXXXXXX";
session.community     = "XXXXXXXXX";
session.community_len = strlen(session.community);

gss = snmp_open(&session);

Every time I try to run it (and by “it” I mean a small test program that just queries the system id), I get “snmpget: Too long”. Yet, if I change the above to:

netsnmp_session  session;
int              r;

r = snmp_parse_args(argc, argv, &session, "", NULL);

gss = snmp_open(&session);

it (and by “it” I mean a small test program that just queries the system id) works fine. And as far as I can tell, all that snmp_parse_args() is doing is the code in the first example, distilled down to just what's required to initialize session (so why don't I just use snmp_parse_args() and be done with it? snmp_parse_args() exists to parse the command line for the various tools like snmpget and snmpwalk which I don't need—ospfquery has its own command line that doesn't need replacing).

So, I recompiled Net SNMP to include debugging information so I could trace down into the various calls, and for some reason, the debugging information isn't being generated (or else there's some other problem with gdb that I don't know about).

Okay, so I decided to just link directly against the object files that make up Net SNMP, and that's when things got weird:

[spc]royal-oak:~/source/snmp>gcc -o sysid3 sysid3.c -lnetsnmp -lcrypto
snmpget: Too long
[spc]royal-oak:~/source/snmp>gcc -o sysid3 sysid3.c \
	~/apps/net-snmp-5.2.1/snmplib/*.o -lcrypto

So let me get this straight: I link against the library, and the program doesn't work. I link against the object files, and it works just fine.


Anybody care to explain?

Update on Friday, February 16th, 2007

Mark sent a reply.

Friday, February 16, 2007

The one-off server

“It looks like I'll have to create a new virtual server for it,” I said to Smirk. “But I know exactly how to set it up.” We're moving our spam firewall from the Boca Raton Data Center to the Charlotte Data Center and in the interim (if all goes well, about eighteen hours), we need something to sling the email about.

“So this poor virtual computer will be doing nothing but email then?”

“Just email,” I said. “For the next day or so.”

“A virtual slave,” said Smirk. “and a short lived one at that.”

“'Fraid so.”

“So, are you making DNS changes?”

“Nope,” I said. “I plan on giving it the same IP address as our spam firewall, shutting down the ethernet port the actual spam firewall is plugged into, and routing the traffic to the virtual server.”

“So it doesn't even have an identity of its own,” said Smirk.

“Nope.” By this time, we were both laughing maniacly, but it's one of those times you had to have been there to get it (and I know I'm not recounting the conversation even remotely close).

It takes just a few minutes to create the server and get it configured, and in a day or two, it'll be killed just as quickly.

So much for Pinocchio.

Update later today …

Think I spelled ``maniacly'' wrong? Think again


Bunny acts as my editor, sending me spelling errors and grammar mistakes. So it wasn't terribly surprising to find the following note from her:

“maniacally” as in “laughing” …

I knew when writing this entry that I was in trouble with the word “maniacly.” “But I did a Google search,” I said to Bunny. “And that's what came out.”

Bunny looked at me suspiciously, and started a few Google searches of her own. And she found:

Morphemes, not just words, can enter the realm of pleonasm: Some word-parts are simply optional in various languages and dialects. A familiar example to American English speakers would be the allegedly optional “-al-”, probably most commonly seen in “publically” vs. “publicly”—both spellings are considered correct/acceptable in American English, and both pronounced the same, in this dialect, rendering the “publically” spelling pleonastic in US English; in other dialects it is “required”, while it is quite conceivable that in another generation or so of American English it will be “forbidden”. This treatment of words ending in “-ic”, “-ac”, etc., is quite inconsistent in US English—compare “maniacally” or “forensically” with “eroticly” or “heroicly”; “forensicly” doesn't look “right” to any English speakers, but “erotically” doesn't look “right” to many Americans. Some (mostly US-based) prescriptive grammar pundits would say that the “-ly” not “-ally” form is “correct” in any case in which there is no “-ical” variant of the basic word, and vice versa; i.e. “maniacally”, not “maniacly”, is correct because “maniacal” is a word, while “agnosticly”, not “agnostically”, must be correct because “agnostical” is (arguably) not a real word. This logic is in doubt, since most if not all “-ical” constructions arguably are “real” words and most have certainly occurred more than once in “reputable” publications, and are also immediately understood by any educated reader of English even if they “look funny” to some, or do not appear in popular dictionaries. Additionally, there are numerous examples of words that have very widely-accepted extended forms that have skipped one or more intermediary forms, e.g. “disestablishmentarian” in the absence of “disestablishmentary”. At any rate, while some US editors might consider “-ally” vs. “-ly” to be pleonastic in some cases, the vast majority of other English speakers would not, and many “-ally” words are not pleonastic to anyone, even in American English.


In other words, we're both correct. “And this is why I love English,” she said.

That explains things

I got an answer to my query about linking:

Unix linkers.
Fri, 16 Feb 2007 14:18:53 -0500 (EST)

SPC posts:

So let me get this straight: I link against the library, and the program doesn't work. I link against the object files, and it works just fine.


Anybody care to explain?

I have had stuff like this happen to me periodically. It is basically a weakness in the design of UNIX linkers. Sadly, a modern UNIX machine with gigs of RAM and high-speed multiple CPUs is still linking like it's a PDP-11.

One possibility is that there is a netsnmp.a somewhere in the system library path [there is, which explains why my recompiled version with debugging information didn't work—heck, that explains quite a bit of the weirdness even though the version that was installed and the version I installed were the same —Editor] and it doesn't match the headers you are compiling against. But you probably did install the libraries yourself—beware! Sometimes on Linux they have shared libraries that the linker will try to link against first! I think -V or -vv will print out the commands GCC is actually doing. Or just try adding -L/home/spc/apps/net-snmp-5.1.2/snmplib [I did—it didn't work. I think I'll have to uninstall the system supplied SNMP libraries —Editor]

And I would not use the “~” there since GCC won't expand it and your shell probably wouldn't either. But that's just the most obvious. I've had far more “interesting” problems with UNIX linkers.

My most common problem has been with circular references: I always end up naming library files 20 times on the command line. A modern linker would continue to resolve from libraries until all dependencies are satisfied.

Normally failing to list say syscore.a 10 times on the command line simply results in an error. However, this stupidity can work both ways—sometimes you end up pulling in the wrong object module in that pile of libraries because UNIX just goes through the libraries one at a time rather than resolving symbols from their nearest source (like the library that just generated the external!).

There are other bad things that can happen to you too; weak symbols are normally “NULL” (usually all 0's) unless they happen to be satisfied explicitly (by say, naming a .o)—the linker won't go looking for them. Combine that with defensive software that does something dumb (meaning: not crash) on an unexpected NULL pointer and you can get strange behavior like that.

And let's not forget some systems that try to make everything a shared library call even if you specify a .a (Oh, I see this awesome .so here for you—haha). UNIX shared libraries just suck. Avoid them unless you enjoy, say, bleeding out of your ass—in which case go for it!

What I've found (and I'm not even linking for UNIX, I just happen to be linking on UNIX) is that I make a linker definition file—which yeah, it's totally GNU specific—that can then tell the linker what to do (the GROUP() directive will often resolve libraries the way they are supposed to be resolved).

But I've learned almost never to use .a files under UNIX. I just build up a list of 1,000 or so .o files for my latest creation and let the linker chug through them.

Welcome to the future!


[links added by me —Editor]

A lot of information there that I wasn't even aware of, like weak symbols, linking scripts and the GROUP() directive. I can see I have quite a bit of reading to do.

Saturday, February 17, 2007

Old papers

I enjoy reading old software manuals. Part of that is to marvel at what was done back when CPUs were slow, memory was small, and disks were the size of washing machines (or later, shoe boxes). It's also amazing to see what was attempted that may have failed at the time due to marketing or machine limitations (for instance, Cornerstone's separation of user names from internal names was innovative, but it probably didn't hit real mainstream use until the early 2000s in IDEs as refactoring code became popular).

So I have with me The Bell System Technical Journal, Volume 57, Number 6, Part 2 (July–August 1978), which is all about the Unix Time Sharing System™ (which I chanced upon about ten years ago at Booksmart, a used book store in Boca Raton, Florida—at the time it was off 20th just west of FAU but since has moved to Dixie and Spanish River) and I'm reading “A Retrospective” by Dennis M. Ritchie and man, is it quote-worthy.

A pipe is, in effect, an open file connecting two processes; information written into one end of the pipe may be read from the other end, with synchronization, scheduling, and buffering handled automatically by the system. A linear array of processes (a “pipeline”) thus becomes a set of coroutines simultaneously processing an I/O stream.

So I'm not the first to notice that coroutines look a lot like Unix pipes. Also interesting is this bit further down:

The shell syntax for pipelines forces them to be linear, although the operating system permits processes to be connected by pipes in a general graph. There are several reasons for this restriction. The most important is the lack of a notation as perspicuous as that of the simple, linear pipeline; also, processes connected in a general graph can become deadlocked as a result of the finite amount of buffering in each pipe. Finally, although an acceptable (if complicated) notation has been proposed that creates only deadlock-free graphs, the need has never been felt keenly enough to impel anyone to implement it.

Really? Back in college, I implemented a Unix shell and noticed that it would be rather trivial (aside from devising a decent syntactical notation) to implement not only bi-directional pipes, but pipes for stderr as well as stdout. It never crossed my mind that deadlocks were possible. I would also be curious to see the notation that was proposed, since it could very well be applicable to new languages with built-in concurrency.

Of course, there're always the very amusing bits:

Both input and output of UNIX programs tend to be very terse. This can be disconcerting, especially to the beginner. The editor [ed most likely; a line oriented editor —Editor], for example, has essentially only one diagnostic, namely “?”, which means “you have done something wrong.” Once one knows the editor, the error or difficulty is usually obvious, and the terseness is appreciated after a period of acclimation, but certainly people can be confused at first. However, even if some fuller diagnostics might be appreciated on occasion, there is much noise that we are happy to be rid of.

I have to wonder if this is the genesis of the following story:

Ken Thompson has an automobile which he helped design. Unlike most automobiles, it has neither speedometer, nor gas gage, nor any of the numerous idiot lights which plague the modern driver. Rather, if the driver makes any mistake, a giant “?” lights up in the center of the dashboard. “The experienced driver,” he says, “will usually know what's wrong.”

which is funny, since the paper I'm quoting was written solely by Dennis Ritchie. Anyway, Mark and Wlofie will appreciate this bit:

Two events—running out of swap space, and an unrecoverable I/O error during swapping—cause the system to crash “voluntarily,” that is, not as a result of a bug causing a fault. It turns out to be rather inconvenient to arrange a more graceful exit for a process that cannot be swapped. Occurrence of swap-space exhaustion can be made arbitrarily rare by providing enough space, and the current system refuses to create a new process unless there is enough room for it to grow to maximum size. Unrecoverable I/O errors in swapping are usually a signal that the hardware is badly impaired, so in neither of these cases do we feel strongly motivated to alleviate the theoretical problems.

… It must be admitted, however, that the system is not very tolerant of malfunctioning hardware, nor does it produce particularly informative diagnostics when trouble occurs.

It appears that the lack of hardware stability in Linux has a long pedigree, leading back to the original implementation of Unix (another amusing bit—“the typical period between software crashes is well over a fortnight of continuous operation.”—heh).

The entire paper is currently online for your perusal (and it's very interesting to note that the book I have, The Bell System Technical Journal, Volume 57, Number 6, Part 2 is no longer available from anyone, I guess making the copy I have a collectable. Hmmmm … )

Sunday, February 18, 2007

“Lock your … doors! Watch out … for … 007!”

I present you with four links (via Mike Sterling's Progressive Ruin):

  1. Superman Theme Song
  2. Jaws Theme song
  3. Back to the Future Theme Song
  4. James Bond Theme Song

And yes, they're songs, with lyrics. Bet you didn't know the Superman Theme had lyrics, did you? Or the Jaws Theme? But they do.

Or at least they do as Goldentusk rendered them (he reminds me of a cross between Weird Al Yankovic and Rob Morrow) and they come across as what you might hear in a musical version of the respective movies.

Presidential Predictions

I remember hearing about the Curse of Tippecanoe back in 5th grade during the 1980 Presidential campaign, and up to that point it had been eerily accurate as a predictor of Presidential deaths while in office.

That is, until Ronald Reagan survived an assassination attempt in 1981 (and unbeknownst to me, George W. Bush also survived an attempt).

So it was amazing when I came across An Algorithm for Determining the Winners of U.S. Presidential Elections (link via The Old New Thing) that has correctly predicted every Presidential election.

Every one.

And it's so easy you can put it in a spreadsheet.

So, I decided to see who currently has the best shot at becoming the next President of the United States. I found a list of current Presidential candidates and applied the formula.

Presidential electability of the candidates for President in 2008
Name Pres. Rep. Gov. Other Total
Democratic Candidates (*has not officially filed)
Joe Biden 0 0 0   0
Chris Dodd 0 0 0 Child of Senator, Divorced 0
John Edwards 0 0 0   0
Mike Gravel 0 0 0   0
Dennis Kucinich 0 12 0 Divorced -98
Barack Obama 0 0 0   0
Tom Vilsack 0 0 8   88
Hillary Clinton* 0 0 0   0
Bill Richardson* 0 14 6   80
Republican Candidates (*has not officially filed)
Sam Brownback 0 2 0   2
John H. Cox 0 0 0   0
Duncan Hunter 0 28 0   28
Mitt Romney 0 0 4 First Mormon -66
Michael Smith 0 0 0   0
Jim Gilmore* 0 4 0   44
Rudy Giuliani* 0 0 0 Divorced, Special Prosecutor -220
Mike Huckabee* 0 8 0   88
John McCain* 4 0 0 Divorced -106
Ron Paul* 20 0 0   20
Tom Tancredo* 10 0 0 First Evangelical -100
Tommy Thompson* 0 4 0   44
Libertarian Candidates
Steve Kubby 0 0 0   0
George Phillies 0 0 0   0
Christine Smith 0 0 0   0

But it's the combined score of the Presidential and Vice-Presidential candidates that wins, not just the Presidential candidate's alone (so there's a small chance that Giuliani could win, if he could find a Vice-Presidential candidate who is a corporate banker, president of a college and the child of a US Senator). I also think it's too early to tell who will win at this stage; I'll know more after the caucuses and primaries next year.

Monday, February 19, 2007

№ 9

Numbers stations, it seems, have made the leap to the web (link via Wil Wheaton). Intrigued by the idea, I came up with my own “numbers station”—№ 9. Oddly enough, it took longer for me to find a suitable image for the banner than it did to program the site (and I ended up taking a picture of my own computer screen, and color correcting the text from white on black to green on black, which was easier than trying to find out how to get the text to be green in the first place).

And yes, the numbers do have meaning.

But no, I won't tell you.


Tuesday, February 20, 2007

It was only on the stove for four hours …

It's easy to make hummus. In fact, the hard part is in cooking the chick peas, and for me, that means “I forgot I had a pot boiling until smoke filled the house.”

This is the second time this has happened. And the hard part of that is cleaning the pot afterwards.


Absent minded and all that.

Wednesday, February 21, 2007

One of Microsoft's secrets cracked at last

It's a lead-pipe cinch, I figure. I'm a good detective. I've found opium dens in Vientiane; been granted interviews by cardinals, mafiosi, and sheikhs; discovered the meaning of “half-and-half” in the old song “Drinkin' Wine, Spo-Dee-O-Dee”; conned the Vatican into bestowing a doctorate on me so that I could gain access to hiddenmost archives; deciphered the cryptic message Ezra Pound scrawled in his own copy of the Cantos while in the bughouse; tracked down and interviewed Phil Spector's first wife, long presumed dead; charted my way to the sacred stone of the Great Mother, in Cyprus; gotten Charlotte Rampling's cell-phone number; even come close to understanding the second page of my Con Ed bill. Finding out where a picture was taken—a picture plastered on millions of computer screens—seems a shot away.

Via Jason Kottke, Autumn and the Plot Against Me

For Bunny, who has this picture as her desktop …

Fun with static types

The real issue is one of “type decoration”, i.e., putting in little tokens here and there to give hints to the compiler as to what is going on. No one wants to do this. The dynamic typing camp sacrifices a certain amount of “compile-time” error checking to avoid having to decorate the code. The static typing camp uses smart tools to minimize decoration, and they “grin and bear it” otherwise. To caricature both camps, the dynamic type camp says “it is so painful to type ‘integer’ that I never ever am willing to do it, but I'll live with runtime exceptions” whereas the static-typing camp says “my type system is so smart I almost never have to add declarations, but the thought of the computer ‘guessing’ my intent and ‘dwimming’ my code is anathema. So I'd rather type the occasional type declaration to avoid it.”

As I mentioned before, I'm very much in the “dynamic” camp. When I have to work with a language which is not only statically typed, but provides no tools for reducing the amount of type decoration, and furthermore still allows one to write meaningless expressions that appear to be meaningful, I end up feeling encumbered. So I don't think “lightweight” and “type declarations” go together very well. I'm sure some agree and others disagree.

Re: cheerful static typing (was: Any and Every … (was: Eval))

I quote this because it seems indicative of the current “anti-static type” camp of programmers that is so popular these days, whose position seems to come down to, “I hate typing at the keyboard.” That, and “I hate thinking about my program” (and here you clearly see what side of the debate I'm on—I'm a static-type fascist). But over the past few days I came up with an example (and I wouldn't be surprised if someone else has already thought of this) where static typing can actually increase the expressiveness (read: less typing required) of a language.

I'll start with an example in C (I would use Ada, but it's been years since I last used it, I don't have my references handy, and it would be about twice as verbose, so there's some merit to the “too many keys” argument of the dynamicists):

  FILE *in;
  FILE *out;
  int   c;

  in  = fopen("input","r");
  out = fopen("output","w");

  /* yes, there's a bug here, but that's due to a bug in the
     design of the C Standard Library API.  feof() only returns
     TRUE *after* we've attempted to read past the last
     character. */

  while(!feof(in))
  {
    c = fgetc(in);
    c = toupper(c);
    fputc(c,out);
  }


It's not a terrible amount of code (compared to, say, the equivalent in this age's Cobol—Java) but since we already have the types (which the fascist language C requires) why not do something other than simple type checking? Why not put that knowledge to use and have the compiler write some code for us? We could instruct the compiler (for our now hypothetical language) that if it sees an assignment of the form:

character-type “=” file-type

it should read the next byte from the file. Conversely, if it sees

file-type “=” character-type

it should then write the given character to the file.

  FILE in;
  FILE out;
  char c;

  in  = fopen("input","r");
  out = fopen("output","w");

  while(!feof(in))
  {
    c   = in;
    out = toupper(c);
  }


It may look unusual, but a similar notion exists in another language right now, one that is (or rather, was) quite popular: Perl.

  while($line = <IN>)
  {
    print OUT $line;
  }

(Of course, since Perl is dynamic, you have to have the file handle between the angle brackets, which tells Perl you want to do a read from the file. Attempting to do:


while($line = <STDIN>)
  <STDOUT> = $line;

fails with

Can't modify <HANDLE> in scalar assignment at ./ line 5, near "$line;"
Execution of ./ aborted due to compilation errors.

so it's almost what we're doing here, but just in one special case)

Now, while we're at it, why not add a bit more syntactic sugar and use a common object oriented notation (and at the same time, if a method takes no formal parameters, drop the superfluous parentheses):

  File in;
  File out;
  char c;

  in  = File.read("input");
  out = File.write("output");

  while(!in.eof)
  {
    c   = in;
    out = c.toupper;
  }


Wait a second … let's go even further. Why bother with the actual function names read and write? By making the file types more specific, and with a smart enough compiler to call destructors upon leaving the function's scope, we can indeed cut out even more clutter:

  FileIn  in  = "/tmp/input";
  FileOut out = "/tmp/output";
  char    c;

  while(in)
  {
    c   = in;
    out = c.toupper;
  }

So, we've instructed the compiler to note

file-input-type “=” string

and therefore open the requested file for input; the same can be said for file-output-types as well. In a boolean context, it makes sense to check if the file is at the end.

Now, in all this, I've been transforming the data, but if I want to skip even that, why should I have to write:

  FileIn  in  = "/tmp/input";
  FileOut out = "/tmp/output";
  char    c;

  while(in)
  {
    c   = in;
    out = c;
  }

When I could just:

  FileIn  in  = "/tmp/input";
  FileOut out = "/tmp/output";

  out = in;

But is that expecting too much? In the previous example, we get an actual file copy, but what if we don't want that? What if we want to copy not the file, but the “variable” itself? Well, here types again come to the rescue, because copying the contents of an “input file” to the contents of another “input file” doesn't make semantic sense, so in that case it's the variables themselves that are copied, not the data in the files they represent:

  FileIn in1 = "/tmp/input";
  FileIn in2;

  in2 = in1;	// now in2 and in1 reference
  ...		// the same input file

The argument could be made that, just like polymorphism and operator overloading, this will lead to inscrutable code, but personally, I'd love a language that would allow me to do such things (and I'm not even sure what to call this … it's not exactly a type conversion). I've also glossed over how to code these conversions, but I'm sure that, just like operator overloading in C++, a syntax can be devised.

Thursday, February 22, 2007

Deterministic computers are so passé

I remember the first time I saw the Star Trek: The Next Generation episode “Contagion.” It starts out innocently enough, when the USS Enterprise receives a distress signal from the USS Yamato out in the Neutral Zone. Picard & Co. arrive just in time to see the USS Yamato explode as some Romulan warbirds decloak off the starboard bow.

And that's before the opening credits.

We find out that the USS Yamato had been exploring a planet in the Neutral Zone when they were probed by million year old alien technology and their systems started acting up. Geordi was able to download some of the logs from the USS Yamato's computer system before it blew up and starts to analyze them for clues as to what happened. Meanwhile, the computer systems on the USS Enterprise and on the Romulan warbirds start to act up as well.

Okay, pretty standard Star Trek episode here. Where it went downhill for me was with Geordi's epiphany—the computers are infected by an alien computer virus (don't get me started on this trope) via the downloaded log files. The same one that infected the USS Yamato when they were probed by million year old alien technology (don't get me started on that trope either). At that point, I lost it—what? The Enterprise computer saw executable code in the log files and decided to just execute it? What is this, Enterprise software from Microsoft?

So now the crew is running around without a clue what to do. Picard is trying to negotiate and surrender to the virus, Worf attempts to wrestle it and gets knocked out, Riker is having trouble seducing it, Data is infected by the computer virus and starts giving fried spam to the crew and Geordi is confused by the technobabble the virus is throwing at him. Since it doesn't involve the warp engines or the deflector shield, Wesley can't do anything about the virus. And for some odd reason, Dr. Crusher keeps repeating, “Damn it, I'm a doctor, not a junior programmer!”

At this point, I'm thinking, Okay, normal procedure is to reinstall the software on the Enterprise from a known good backup and restart it. But the crew is not doing that. There must be some reason they can't do that, I think. Too many computers, or the only known backup is back at Starbase 13. I mean, how do you reboot the Enterprise? Isn't that, like, attempting to reboot the Internet?

So what's the solution the fearless crew of the Enterprise come up with?

Shut down the computer, reload the software from a known good backup, and restart it.

WHAT THE XXXX? I wasted a whole hour on this? It took the crew the entire episode to rediscover what millions of people today know as common sense? What is it with Picard & Co.?

I was reminded of that episode because of Steve Yegge's Pinocchio Problem. Steve Yegge's quest for systems that never have to be rebooted, for constantly living, adapting, expanding software/hardware, leads directly to the doom of the USS Yamato, and the near doom of the USS Enterprise.

Okay, I exaggerate a bit.

But that does appear to be the eventual outcome of such a scenario, where the notion of restarting a computer is not normal but is in fact, a nearly forgotten recovery technique.

It's just one part of a disturbing trend I see among many programmers—the desire to have software emulate biological systems.

Sussman talks about “degeneracy” in biological systems, and how it can emerge by duplicating a section of DNA and allowing the copies to diverge. In programming languages, this might be done by taking a program, copying a section of code, and then changing each caller so it either continues to call the old version or calls a new one. In order to allow broken pieces of code to continue to evolve without destroying the program, you could make callers “prefer” one version over the other, but silently fall back to their non-preferred implementation if the first version didn't work. For example, maybe their preferred version threw an exception, or maybe it started failing some kind of unit test that the caller cares about.

Here's another idea: generate random segments of code by “connecting the dots”, where by “dot” I mean “type”, or perhaps “function call”. Suppose you have a URL and you want to have a file on disk. If you're lucky, you can search the call graphs of a whole bunch of programs and find some code path that starts with a url and ends with a file. If you're really lucky, that code path will do something appropriate, like downloading the content behind the url. If you took this idea and applied it to all the open source projects in the world, you'd probably have a fair chance of implementing something reasonable, purely by accident. Well, not really by accident—it would actually be by virtue of the fact that you're drawing a random sample from a set of programs that is distributed extremely non-uniformly over the space of all possible programs. Djinn does something like this, but without the benefit of a meaningful dataset of samples to draw from. Haskell probably has an advantage at this kind of thing because it doesn't depend on side effects to determine the meaning of a segment of code.

Combine these two ideas. Generate random code, evolve it by making (fail-safe) copies, and mutate it by replacing randomly-selected code paths with randomly-generated code paths that connect the same dots.

Thoughts on Robust Systems

I have nothing against Kim, but her post was the tipping point for this entry. What is this fascination with evolving code? Or emulating biological systems in development? Writing software is already difficult enough on purely deterministic machines (which is why I like computers in the first place—they're deterministic!) and yet programmers want to make things even more difficult for themselves?

Here's an article about Dr. Adrian Thompson, who “evolved” a circuit (on a programmable chip) to detect two different tones.

Although the configuration program specified tasks for all 100 cells, it transpired that only 32 were essential to the circuit's operation. Thompson could bypass the other cells without affecting it. A further five cells appeared to serve no logical purpose at all—there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.

It appears that evolution made use of some physical property of these cells–possibly a capacitive effect or electromagnetic inductance–to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution.

However it works, Thompson's device is tailor-made for a single 10 by 10 array of logic cells. But how well would that design travel? To test this, Thompson downloaded the fittest configuration program onto another 10 by 10 array on the FPGA. The resulting circuit was unreliable. Another individual from the final generation of circuits did work, however. Thompson thinks it will be possible to evolve a circuit that uses the general characteristics of a brand of chip rather than relying on the quirks of a particular chip. He is now planning to see what happens when he evolves a circuit design that works on five different FPGAs.

… If evolutionary design fulfils its promise, we could soon be using circuits that work in ways we don't understand. And some see this as a drawback. “I can see engineers in industry who won't trust these devices,” says Thompson. “Because they can't explain how these things work, they might be suspicious of them.”

If the chips ever make their way into engine control systems or medical equipment we could well face an ethical dilemma, says Inman Harvey, head of the Centre for Computational Neuroscience and Robotics. “How acceptable is a safety-critical component of a system if it has been artificially evolved and nobody knows how it works?” he asks. “Will an expert in a white coat give a guarantee? And who can be sued if it fails?”


“We'll do extensive unit tests,” seems to be the mantra of these organic programmers. I guess they haven't heard of program verification (to be fair, I can't even verify my own software, but on the other hand, neither do I randomly sling code together and hope it works). How come many programmers think evolution is good design?

This “evolving” or “biological” software stuff scares me, not because it'll lead to computers taking over the world, but because it'll fail in new and spectacular ways.

Friday, February 23, 2007

Okay, I want

I think this stained glass computer case (link via Spring) and this steampunk keyboard (link via Spring and made from an IBM keyboard) would be lovely together. The only thing left would be a design for the monitor …

Saturday, February 24, 2007

More fun with static types

What else can you do with typing systems? And what about that relentless drive towards multiple processor machines?

One thing: better parallelization of code. One such method is the map function, where some function (or bit of code) is applied to every element in a list, each such application independent of the others. In the current crop of languages, there's an actual function or keyword that is used:

(MAPCAR #'1+ '(1 2 3 4 5))

This bit of Lisp code increments the elements of a list by 1. Other languages have a similar function or library variation on map. But why? Couldn't we have the compiler figure it out? At least in a few instances?

For instance, in C, the following will give a compiler error:

  double x[100];
  double y[100];

  y = sin(x);

since sin() takes a single double parameter, not an array. But since I'm talking hypothetical compilers here, couldn't the compiler recognize that x is an array of doubles, and that sin() takes a single double, and since the result is going to another array of doubles, why not just map sin() over the array x? On a uniprocessor machine the code generated could be something like:

  double x[100];
  double y[100];
  size_t i;

  for (i = 0 ; i < 100 ; i++)
    y[i] = sin(x[i]);

Given a routine that expects a parameter of type X, and a variable that is an array of type X, passing said variable into said routine should not produce an error, but a mapping instead. No fence post errors. No having to type out for loops, or while loops, or loops of any kind. Or even having to type out “map.”

Or is this too much DWIM?

Sunday, February 25, 2007

The most dangerous job, even more so than a soldier in Iraq

Economist Steven Levitt on why working at McDonald's pays more than drug dealing (link via Classical Values, which in turn was found via Instapundit). Yes, he does cover this in his book Freakonomics (excellent book by the way) but still, it's fun to watch his presentation.

Monday, February 26, 2007

What we need is nuclear power

I'm not a protocol designer. I'm sure that people have been thinking about this for a long time, but I bet all the thought has been behind closed doors and not in a public appliance design forum and framework. That said, my vision is of a household full of devices that

In the most basic implementation, for example, a Powerline time broadcast system allows every device to be time synchronized, so you don't have to reset all the clocks after a power outage. More sophisticated systems can advertise themselves as displays, inputs or outputs. To use the tired coffee maker example: your coffee maker thus no longer has to include its own scheduling device; your alarm clock can schedule all necessary tasks, find your coffee maker as an output device with a standard set of services, and just tell it when to start percolating at the same time that it tells your Wifi rabbit to start caching the news and traffic MP3s. Your pressure-sensitive carpet can just broadcast “turn on 1/10 power” to all lights in its vicinity; as you walk to the bathroom in the middle of the night, they light your way. If you have no such lights, they don't light.

Why we need a good appliance communication protocol

Any sufficiently advanced technology is indistinguishable from magic.

[Arthur C.] Clarke's Third Law

The link wasn't directly from Blahsploitation, but he's also thinking along similar lines here. And while I would love the lights to turn on as I walk about the house, or the tea kettle to turn on ten minutes before the alarm goes off, I worry about making all this seem more magical than it is.


My conclusion: I don't dare tell my kids they're smart. If they work hard, I'll recognise that. Amaze me. Just being smart is so passé.

Via Flutterby, Smarts don't mean much (and expensive running shoes ruin your feet)

Hmmm … certainly explains a lot about me.

Tuesday, February 27, 2007

The only thing missing is Strongbad

For Spring: a PBSesque video with a very short Bollywood musical number, Daleks, a TARDIS, Star Wars Stormtroopers and … Kevin Smith (link via Jason Kottke).

What more could a girl ask for?

“Those who can, do. Those who can't, still get jobs.”

Abstract: All teachers of programming find that their results display a ‘double hump’. It is as if there are two populations: those who can, and those who cannot, each with its own independent bell curve. Almost all research into programming teaching and learning has concentrated on teaching: change the language, change the application area, use an IDE and work on motivation. None of it works, and the double hump persists. We have a test which picks out the population that can program, before the course begins. We can pick apart the double hump. You probably don't believe this, but you will after you hear the talk. We don't know exactly how/why it works, but we have some good theories.

A cognitive study of early learning of programming

I myself have heard plenty of horror stories about applicant programmers who can't program, but even simple programs (via Ceej, from where I found the other links) can trip up a seasoned programmer (I tried the FizzBuzz program, and my first two attempts had bugs—sigh).

I just find it hard to believe that there are so many bad programmers out there.

Wednesday, February 28, 2007


Dear Fellow Citizen,

Congratulations on your selection for jury duty in Palm Beach County.

The right to have legal disputes decided by members of the community is among the most valuable features of our American system of government. The preservation of that right, guaranteed to all of us by the Constitution, depends on the willing participation of all eligible members of the community. In this important sense, your participation is necessary to the strength of our American democracy.

Though serving on a jury can be burdensome not only financially but in time and energy, it can also be very rewarding. Florida law provides that you may not be dismissed from your job because of the nature or length of your service on a jury. Everyone has a duty to serve when called.

I look forward to seeing you soon, and thank you for your service.


Kathleen J. Kroll
Chief Judge, 15th Judicial Circuit

Oh joy!

8:00 am March 20th.

I'm so looking forward to this.

Obligatory Picture

[Don't hate me for my sock monkey headphones.]

Obligatory Links

Obligatory Miscellaneous

You have my permission to link freely to any entry here. Go ahead, I won't bite. I promise.

The dates are the permanent links to that day's entries (or entry, if there is only one entry). The titles are the permanent links to that entry only. The format for the links is simple: start with the base link for this site:, then add the date you are interested in, say 2000/08/01, so that would make the final URL:

You can also specify the entire month by leaving off the day portion. You can even select an arbitrary portion of time.

You may also note subtle shading of the links and that's intentional: the “closer” the link is (relative to the page) the “brighter” it appears. It's an experiment in using color shading to denote the distance a link is from here. If you don't notice it, don't worry; it's not all that important.

It is assumed that every brand name, slogan, corporate name, symbol, design element, et cetera mentioned in these pages is a protected and/or trademarked entity, the sole property of its owner(s), and acknowledgement of this status is implied.

Copyright © 1999-2017 by Sean Conner. All Rights Reserved.