Friday, January 01, 2010
It sounds like a war zone out there …
Happy New Year!
And the neighbors around here are definitely in a celebratory mood as it sounds like they're shooting .45s right outside the window, although I think they're fireworks that are a bit more powerful than an M-80. It actually sounds like there are more fireworks being shot off now than on the Fourth, which always reminds me of this:
It's something to keep in mind when working with fireworks …
I resolve …
So, being the New Year and all, I thought I would take this time to make a New Year's Resolution: “This year, I resolve not to make any New Year's Resolutions.”
D'oh!
Well, there go the resolutions, right out the window …
Saturday, January 02, 2010
No flying cars, but then again it's forgivable, as airplanes haven't even been invented yet …
Predictions are funny things. Some are spot on, others turn out correct but not for the reasons stated, and others are just plain weird. I came across Predictions of the Year 2000 from The Ladies Home Journal of December 1900 (link via Hacker News) and it makes for fascinating reading. Some of the predictions are dead on:
Prediction #6: Automobiles will be cheaper than horses are today. Farmers will own automobile hay-wagons, automobile truck-wagons, plows, harrows and hay-rakes. A one-pound motor in one of these vehicles will do the work of a pair of horses or more. Children will ride in automobile sleighs in winter. Automobiles will have been substituted for every horse vehicle now known. There will be, as already exist today, automobile hearses, automobile police patrols, automobile ambulances, automobile street sweepers. The horse in harness will be as scarce, if, indeed, not even scarcer, then as the yoked ox is today.
…
Prediction #18: Telephones Around the World. Wireless telephone and telegraph circuits will span the world. A husband in the middle of the Atlantic will be able to converse with his wife sitting in her boudoir in Chicago. We will be able to telephone to China quite as readily as we now talk from New York to Brooklyn. By an automatic signal they will connect with any circuit in their locality without the intervention of a “hello girl”.
Not much more to say than that, other than an apocryphal story I heard: AT&T around the turn of the previous century was concerned about the growth of the phone system, worried that at that rate, “they would need to hire everyone to become an operator”—and oddly enough, they did, only they don't pay us, we pay them.
Now, some of the predictions are right, but for the wrong reasons:
Prediction #1: There will probably be from 350,000,000 to 500,000,000 people in America and its possessions by the lapse of another century. Nicaragua will ask for admission to our Union after the completion of the great canal. Mexico will be next. Europe, seeking more territory to the south of us, will cause many of the South and Central American republics to be voted into the Union by their own people.
…
Prediction #21: Hot and Cold Air from Spigots. Hot or cold air will be turned on from spigots to regulate the temperature of a house as we now turn on hot or cold water from spigots to regulate the temperature of the bath. Central plants will supply this cool air and heat to city houses in the same way as now our gas or electricity is furnished. Rising early to build the furnace fire will be a task of the olden times. Homes will have no chimneys, because no smoke will be created within their walls.
We have around 350,000,000 people and yes, it's partly because of land and the passage of time, but it has nothing to do with Nicaragua or Mexico (although Mexico seems to be doing a good job of taking over the southwest). I have no idea what the bit about Europe means, though.
And we don't exactly get hot and cold air from spigots, but we do have it, although it's produced locally, in the house, rather than at a central hot/cold air plant.
Then there are the predictions that are just plain wrong:
Prediction #4: There Will Be No Street Cars in Our Large Cities. All hurry traffic will be below or high above ground when brought within city limits. In most cities it will be confined to broad subways or tunnels, well lighted and well ventilated, or to high trestles with “moving sidewalk” stairways leading to the top. These underground or overhead streets will teem with capacious automobile passenger coaches and freight with cushioned wheels. Subways or trestles will be reserved for express trains. Cities, therefore, will be free from all noises.
…
Prediction #11: No Mosquitoes nor Flies. Insect screens will be unnecessary. Mosquitoes, house-flies and roaches will have been practically exterminated. Boards of health will have destroyed all mosquito haunts and breeding-grounds, drained all stagnant pools, filled in all swamp-lands, and chemically treated all still-water streams. The extermination of the horse and its stable will reduce the house-fly.
Prediction #12: Peas as Large as Beets. Peas and beans will be as large as beets are to-day. Sugar cane will produce twice as much sugar as the sugar beet now does. Cane will once more be the chief source of our sugar supply. The milkweed will have been developed into a rubber plant. Cheap native rubber will be harvested by machinery all over this country. Plants will be made proof against disease microbes just as readily as man is to-day against smallpox. The soil will be kept enriched by plants which take their nutrition from the air and give fertility to the earth.
Prediction #13: Strawberries as Large as Apples will be eaten by our great-great-grandchildren for their Christmas dinners a hundred years hence. Raspberries and blackberries will be as large. One will suffice for the fruit course of each person. Strawberries and cranberries will be grown upon tall bushes. Cranberries, gooseberries and currants will be as large as oranges. One cantaloupe will supply an entire family. Melons, cherries, grapes, plums, apples, pears, peaches and all berries will be seedless. Figs will be cultivated over the entire United States.
Prediction #4 is a riot, and I like the optimism of it. Predictions #11, 12 and 13 are not only wrong, but weird, though I guess they made sense at the previous turn of the century.
And then you get the ones that are so right, yet so wrong at the same time, such as Prediction #19 (which I love, because what it got wrong was just so out there, and yet, it's still so right):
Prediction #19: Grand Opera will be telephoned to private homes, and will sound as harmonious as though enjoyed from a theatre box. Automatic instruments reproducing original airs exactly will bring the best music to the families of the untalented. Great musicians gathered in one enclosure in New York will, by manipulating electric keys, produce at the same time music from instruments arranged in theatres or halls in San Francisco or New Orleans, for instance. Thus will great bands and orchestras give long-distance concerts. In great cities there will be public opera-houses whose singers and musicians are paid from funds endowed by philanthropists and by the government. The piano will be capable of changing its tone from cheerful to sad. Many devises will add to the emotional effect of music.
But in 1900, airplanes didn't exist (although the time had come for them), radio had just been invented, television was still a couple of decades away, and no one could have foreseen the rise of computers (fifty years away), modern container shipping (some seventy years away) or a global information network (some ninety years away). I have to wonder what marvels we'll have one hundred years hence …
Monday, January 04, 2010
This is what I'm looking for
A few weeks ago Jeff mentioned an e-reader he was interested in using. It's nice, but what I'm really hoping for is the Mag+ (watch the video, it's worth it), which I think would also make a killer tablet computer in general.
Name one word that describes “Han Solo.” Okay, now one word that describes “Queen Amidala.” Yeah, I thought so …
This 70 minute review (link via Jason Kottke) of “The Phantom Menace” is incredible (even though the attempts at humor really fall flat and the reviewer's voice is severely annoying)—while I knew the movie was bad, I never knew it was that bad. I never knew that even George “I don't need no steeenking editors” Lucas knew “The Phantom Menace” was bad. Jeeze!
Tuesday, January 05, 2010
Unintended consequences of outlawing common sense
The rapid introduction of full body scanners at British airports threatens to breach child protection laws which ban the creation of indecent images of children, the Guardian has learned.
New scanners break child porn laws | Politics | The Guardian
Ooooh, I just love unintended consequences, or as jwz said when he posted this: “What happens when the immovable object of terrorism meets the unstoppable force of kiddie porn?” I can see this playing out thusly: those under 18 are exempt, so terrorists now use kids to smuggle the bomb materials aboard. Once that is discovered, the next step is to force parents and kids into separate sections of the plane, so now the terrorist kids are trained to trigger the explosions themselves. Kids are then banned from flying (not that I would argue with such an outcome).
The other scenario—pedophiles attempting en masse to become security screeners.
Speaking of unintended consequences and pedophiles, comes this story of a grandmother charged with kiddie porn because she took the obligatory grandkids-in-the-bathtub photo. I swear, given the level of hysteria here (and it's not even limited to the United States), one would think pedophiles by now would know to develop their own film, but I guess common sense has been legislated out of existence.
“Oh, did you pee in your pants again, Firefox?”
Thank you so much Firefox for deciding right now was the perfect time to force a download and upgrade of yourself. Never mind that I was in the middle of using you, but no, it's more important to you that I not run a version of Firefox older than 20 minutes.
Sigh.
Worse still, I have to rehack Firefox 3.5 so it even runs on my system. It's actually amusing (and rather pathetic) that I not only installed a program I don't even use just to get one pre-compiled shared library on the system, but I have to reinstall Flash every time I launch Firefox (basically, I copy two files just prior to starting Firefox) or it doesn't work.
Wednesday, January 06, 2010
Oddly enough, these three things do have something in common …
Just in case you haven't noticed, I've been clearing out some links I've accumulated over the holiday season, and this is no exception. My Dad sent me a link to The Webcycle; not only is it a web browser, but an exercise machine, a dessert topping and a floor wax! I wonder if he's trying to imply something …
Also (Dad didn't send this; I found it somewhere)—a collection of control panels <shudder> that I wouldn't mind using. Very cool stuff here.
And a final link for today—10/GUI, an interesting concept GUI that uses a multi-touch pad.
Saturday, January 09, 2010
It's all Greek to me
I knew APL was terse and incomprehensible, but seeing Conway's Game of Life in APL was mind-blowing (link via Hacker News):
life ← {⊃1 ⍵ ∨.∧ 3 4 = +/ +⌿ ¯1 0 1 ∘.⊖ ¯1 0 1 ∘.⌽ ⊂ ⍵}
(I think that's close—it was hard to find the right symbols.) Yes, that really does calculate a single generation. Running multiple generations takes one additional line of code:
{} {pic∘ ← '⋅☻'[⍵] ⋄ _←⎕DL ÷8 ⋄ life ⍵} ⍣≡ R R
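For those of us without an APL keyboard, here is the same single-generation step spelled out long-hand in Lua. This is only a sketch; the grid representation (a 2D array of 0s and 1s, with wrap-around edges to match the rotations in the APL version) is my own assumption:

```lua
-- One generation of Conway's Game of Life, long-hand.
-- grid is a 2D array of 0s and 1s; edges wrap around (a torus),
-- matching the rotations in the APL one-liner.
function life(grid)
  local rows, cols = #grid, #grid[1]
  local new = {}
  for r = 1, rows do
    new[r] = {}
    for c = 1, cols do
      local n = 0
      for dr = -1, 1 do
        for dc = -1, 1 do
          if not (dr == 0 and dc == 0) then
            n = n + grid[(r + dr - 1) % rows + 1][(c + dc - 1) % cols + 1]
          end
        end
      end
      -- a cell is alive next generation with exactly three live
      -- neighbors, or two if it is already alive
      new[r][c] = (n == 3 or (n == 2 and grid[r][c] == 1)) and 1 or 0
    end
  end
  return new
end
```

Same algorithm, twenty-odd lines instead of one.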
I remember years ago at a ham fest I was on the lookout for IBM keyboards (the only true keyboard, mind you) and I missed picking up an honest-to-god IBM APL keyboard by just seconds—a friend I was with got to it before I did (grrrrrrrr, and at $5, it was certainly a steal).
Friday, January 15, 2010
“Life should NOT be a journey to the grave with the intention of arriving safely in an attractive and well preserved body, but rather to skid in sideways—Chardonnay in one hand—Chocolate in the other—body thoroughly used up, totally worn out and screaming, ‘WOO HOO, What a Ride!’”
My friend Kurt is finally tying the knot tomorrow and of course that can't pass without a bachelor party!
His brother Erik planned a full day of activities, and those of us that could take off work did so. The first stop on today's debauchery was the Bass Pro Shop for a little archery.
The place is huge. The store itself must be several acres in size and it includes both an indoor shooting range and indoor archery range. I was running a bit late and once I arrived in the store, I had to call Kurt for directions to the archery range (on the second floor, no less!). There I met Kurt, the groom-to-be, Erik, Rich, Kurt's brother-in-law-to-be, Keith and Mike.
We had fun. The various animal targets (an alligator, bear, a wild boar, a few bucks) are on hydraulic lifts and can be raised or lowered by an operator. Given that I haven't shot an arrow since 8th grade, the bows were remarkably easy to use and we were all mostly able to hit (or just graze) the various targets.
Gregory arrived just as we were leaving the Bass Pro Shop (he could only get a half-day off from work) for lunch. After a brief discussion, we decided to head to Ernie's Bar B Ques and Lounge (formerly known as “Dirty Ernie's”).
The story, as told by Kurt, is that Ernie opened up the place and covered the walls with his sayings on how to live life and whatnot (thus the former name—“Dirty Ernie's”). After a few years of running the place, he apparently got bored, up and left for parts unknown, without even selling the place.
Interesting character.
And good food. And drinks.
Mike warned Kurt that if he passed out tonight, Mike would make sure Kurt ended up with a permanent reminder of the night. It was my idea to make said reminder a tramp stamp. Mike wanted to make it a butterfly, I was leaning more towards My Little Pony, but we still had time to decide.
A few hours later, we arrived at Old Heidelberg for dinner. As this was later in the evening, this meant more friends, and we had the entire back room to ourselves, and two waiters (two friendly fellows by the name of Jeff and Leo). Joining us were Kurt's two other brothers, Neal and Kyle, Russ (who spent the day driving from Tampa), Keener (who spent the day driving from Blountstown, Florida, an even longer drive), Jeff and two other friends of Kurt whose names I didn't catch.
For the most part, the food was excellent and the desserts—oh—to die for (the Black Forest Cake I had was indescribably good). After a few hours of dinner and conversation the group headed out to downtown Ft. Lauderdale for a round of bar hopping, but on the way, the evening was having its effect on Kurt and we ended up making several stops on the Bathroom Tour of Ft. Lauderdale™.
We did hit a couple of bars before Kurt felt the need to visit a “gentleman's club.” And while Ft. Lauderdale isn't Lost Wages, what happened there shall remain there. I will, however, make two comments about the “gentleman's club” we visited:
- there were TV screens mounted everywhere and occasionally they would flash “Feel Free To Use Your Credit Card” and “We Have An ATM” (and as Dave Barry says, “I am not making this up”);
- the music was loud. No, I ba-da boom boom boom boom mean Ba-Da Boom Boom Boom Boom really BA-DA BOOM BOOM BOOM BOOM loud BA-DA BOOM BOOM BOOM BOOM and BA-DA BOOM BOOM BOOM BOOM ob … BA-DA BOOM … nox … BOOM … ious … BOOM I MEAN SO LOUD THERE WAS A STIFF WIND BLOWING THROUGH THE PLACE! BOOM WHAT?
Fortunately for Kurt, he never did pass out (then again, how could anyone pass out with music that loud?). Unfortunately for us, we couldn't give him his tramp stamp.
Saturday, January 16, 2010
“Mawage. Mawage is wot bwings us togeder tooday. Mawage, that bwessed awangment, that dweam wifin a dweam … And wuv, tru wuv, will fowow you foweva …”
Ah, mawage, uh, I mean, marriage. Today my friend Kurt said goodbye to bachelorhood and married his true love.
What else can I say? The bride was beautiful. The groom dashing. The venue had a beautiful view of the Miami skyline, and somewhere, David Caruso is wearing his shades.
Congratulations, Kurt and Amanda!
Monday, January 18, 2010
“Help! I'm trapped in a Chinese fortune cookie factory!”
I crack open the fortune cookie, and, as God is my witness, I read:
And admit it—you're intrigued too!
Monday, February 01, 2010
Yet another Bill Watterson interview found!
Ah, the life of a newspaper cartoonist—how I miss the groupies, drugs and trashed hotel rooms!
But since my “rock star” days, the public attention has faded a lot. In Pop Culture Time, the 1990s were eons ago. There are occasional flare-ups of weirdness, but mostly I just go about my quiet life and do my best to ignore the rest. I'm proud of the strip, enormously grateful for its success, and truly flattered that people still read it, but I wrote “Calvin and Hobbes” in my 30s, and I'm many miles from there.
Via Hacker News, Bill Watterson, creator of beloved 'Calvin and Hobbes' comic strip looks back with no regrets | Living - cleveland.com
It's not the best Bill Watterson interview (a much better interview was done by Andrews McMeel Publishing) but that answer is just great.
The reason for even linking to this is that any interview with Bill Watterson is a very rare event. Heck, even pictures of the artist are very rare and so far, I think this is the only picture I've seen of him:
Quite the recluse, that Mr. Watterson.
He's also famous for not allowing any Calvin and Hobbes merchandising, although it does appear he reluctantly gave his endorsement for a new product coming out in July:
Wednesday, February 03, 2010
Perhaps an 80M script isn't that excessive …
Around three months ago, I found a bug in Lua (and yes, it's silly to run an 80M script, but then, I tend to do silly things with programs). I reported the error to the Lua mailing list, and a few days later it was posted as a known bug with a one line patch to fix it.
And yes, I just got around to retesting Lua (with a version that has every patch applied, including the patch for the bug I found) with my 80M script:
[spc]lucy:/tmp/lua>time lua -i show.lua
Lua 5.1.4  Copyright (C) 1994-2008 Lua.org, PUC-Rio
> dofile("default.lua")
> os.exit()

real    0m10.964s
user    0m5.880s
sys     0m0.376s
[spc]lucy:/tmp/lua>
Much better.
Lua just in time
Playing around with Lua is fun, but I've been hearing some good things about LuaJIT, a “just in time” compiler for Lua for the x86 platform (written by a single guy, no less!). Even more amazing, it's literally a drop-in replacement for Lua (both the command line interpreter and library).
Okay, I'm willing to give this a try. I download, compile and install it. I then decide to test it using the jumble program I wrote in Lua. All I need to do is change one line:
#!/usr/local/bin/lua
to read:
#!/usr/local/bin/luajit
and rerun the program.
| version | time in seconds |
|---|---|
| pure Lua | 7.74 |
| pure LuaJIT | 3.57 |
| Lua + C | 2.06 |
| LuaJIT + C | 1.70 |
LuaJIT easily trounces the Lua interpreter without any code changes (other than specifying a different “interpreter”). The versions with C use a C function to sort the letters in the word, and while the LuaJIT + C version was faster than the Lua + C version, the very fact that I didn't have to modify any code is fantastic! LuaJIT used the very same C code as the Lua version—no changes or recompilations required!
Very neat!
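For reference, the hot spot that got pushed into C is the letter sort. The post doesn't show the jumble code, so this pure-Lua version of it is my own reconstruction of what the C helper would replace:

```lua
-- Sort the letters of a word to get a canonical key for anagram
-- matching; a guess at the hot spot the C helper replaced.
function sortword(word)
  local letters = {}
  for ch in word:gmatch(".") do
    letters[#letters + 1] = ch
  end
  table.sort(letters)
  return table.concat(letters)
end
```

Two words are anagrams of each other exactly when their sorted keys match, which is why this little function ends up dominating a jumble solver's runtime.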
I just relinked my Lua daemon against LuaJIT, just to test it out, and yes, it worked without any changes. I could even reload the scripts on the fly. And incredibly, it's only about 50% bigger than Lua itself.
LuaJIT is one sweet piece of technology.
I can't remember the last time I used a printer …
I hate printers.
All I wanted to do was print out a 13 page file using both sides of the paper. It was easy enough to select the odd pages, then the even pages in the printer software. The hard part? Knowing which way to orientate the papers for printing on the second side.
It took me four attempts to get it correct.
Sigh.
So much for saving paper …
Saturday, February 06, 2010
Thinking outside the dusty box
My computer has a rather odd cutout in the front that tends to accumulate dust:
I've noticed recently that it's accumulated a metric buttload of dust (the shot above is after I've removed quite a bit of it, as shown below). Even Mark remarked how dusty it was in there.
So, the problem: how to go about dusting the thing. It was a thick blanket of dust, and I headed into the garage of Chez Boca thinking I might find a vacuum with a nozzle attachment that would fit in there when I came across the perfect solution:
The Lint Roller!
It's really just a very wide roll of masking tape with the sticky side out. The method is easy: just remove the outer sheet (it's about 6″×4″) and apply it to the dust.
A few lint sheets later and the computer is now dust free.
Sunday, February 07, 2010
Heaven forbid he ever try to climb a tree (not like it's easy to climb palm trees … )
I'm beginning to think child safety is getting way out of hand. I was driving home from resolving a customer issue (nothing major—just needed to reset a port on a switch—thank you ever so much, Monopolistic Phone Company), driving down the street towards Chez Boca when I saw a kid, maybe four or five years old, wandering about, on foot, close to home (easily within 50′ of the front door), wearing a bicycle helmet! There was no sign of any type of pedal-powered vehicle in the vicinity, although his mother was nearby, sitting on the front lawn watching out for the little kid.
He was wearing a bicycle helmet while walking!
Around his age, I was flying headfirst into ditches on my bicycle sans helmet and the only lasting effect is an inability to spell sertain words correctly. Well, that, and a tendency to say “that” instead of “who.”
He was wearing a bicycle helmet while walking, people!
Then again, I should be amazed he was let outside at all …
Insanity
From: Mark Grosberg <XXXXXXXXXXXXXXXXX>
To: Sean Conner <sean@conman.org>
Subject: Re: Password updated
Date: Tue, 5 Jan 2010 11:24:10 -0500 (EST)
On Tue, 5 Jan 2010, Sean Conner wrote:
What's the cookie2 header for?
I'm so glad you asked. This is almost so good it may cause you to blog about it (actually I figured by the time we were done discussing the insanity of cookies you may have had an insightful blog post anyhow).
I guess after the third cookie spec they figured they kinda sucked at this so they built in an escape. So after much re-re-re-reading of the RFC I think what happens is if you have received a cookie with a $Version that you don't understand, you are supposed to just send back a Set-Cookie2: header with $Version="version_this_thing_understands".
It's for future expandability so when we have 10 cookies specs clients and servers will “just work” (at this point I think we both know that statement is about as truthful as “the check is in the mail.”).
Well Mark (and yes, I know, it's been a month), the cookie specs are a paragon of clarity compared to the laughable mess that is the syslog protocol specification. Had I been aware of the “informational nature” of RFC-3164, I might not have even started my own homebrew syslogd replacement (network stuff in C, high level logic in Lua).
How loose is the spec?
A program that wishes to use syslog() may select a “facility” the message will be logged under—think of “facility” as a subsystem, like “mail” or “cron” (under Unix, cron runs scheduled tasks on a periodic basis) or “auth” (authorization, or login credentials). Also, each message has a priority (kind of), one of “debug”, “info”, “notice”, “warn”, “err”, “crit” (for critical errors), “alert” (even more critical errors) and “emerg” (basically, the machine is on fire, abandon all hope, etc.). The program using syslog() can also tag each message, usually with its name, and the message itself has no real structure, originally being meant for human consumption.
Now, the syslog protocol, which is used to send the messages to a program that handles these messages, usually named syslogd under Unix, is a text based protocol, and a full RFC-3164 message would look something like:
<87>Feb 07 04:30:00 brevard crond: (root) CMD (run-parts /etc/cron.hourly)
You have the facility and priority (as a single number) in angle brackets, immediately followed by the timestamp, a space and then the name of the machine sending the message, a space and the tag (usually the name of the program on the machine sending the message), a colon, then the message.
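That single number in the angle brackets packs the two values together as facility × 8 + priority, so pulling them apart is just arithmetic. A quick sketch in Lua, decoding the <87> from the example above:

```lua
-- Decode the <PRI> value from the example message above:
-- PRI = facility * 8 + priority.
local pri      = 87
local facility = math.floor(pri / 8)   -- 10
local priority = pri % 8               -- 7, which is "debug"
```

The reverse direction is the same arithmetic run backwards: multiply the facility by eight and add the priority.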
And technically, every field is optional! Which makes parsing this a technical challenge. Not only that, but since there never really was a spec, it's easy to find ambiguous messages, such as:
<14>Jan 14 05:53:37 gconfd (spc-25469): Received signal 15, shutting down cleanly
which (per the spec) was sent from the program “(spc-25469)” on machine “gconfd”. Funny thing is, I have no machine called “gconfd” but there does exist a program called gconfd that runs on my machine, running as me, with a process ID of 25469 (fancy that).
I don't even want to talk about /Applications/Windows Media Player/Windows Media Player.app/Contents/MacOS/WindowsMediaPlayer.
It gets even worse. RFC-3164 makes a point in saying that the following is a legal syslog message that has to be processed:
Use the BFG!
Just writing the code to parse this mess took the majority of time, as I kept coming across syslog messages that really weren't.
To work my way out of this mess, if I don't find a proper facility/priority field, I log the raw message (using facility “user” and level “notice”, which is what RFC 3164 says to use in the absence of such information). If there's no timestamp, okay, but if there is one and it's malformed, I log the raw message. I then check for an IPv4 or IPv6 address, as I feel that's really the only sane value to use, then everything else up to a ':' is accepted as the tag value (more or less).
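The shape of that fallback might look something like this. This is a simplified sketch of just the facility/priority step, not the actual code; the real parser goes on to handle timestamps, hosts and tags:

```lua
-- Simplified sketch of the PRI fallback: a missing or malformed
-- <PRI> field means the whole line is the message, logged as
-- user.notice per RFC 3164 (facility 1, priority 5).
function parsepri(raw)
  local pri, rest = raw:match("^<(%d+)>(.*)$")
  if pri == nil then
    return { facility = 1, level = 5, msg = raw, _RAW = raw }
  end
  pri = tonumber(pri)
  return {
    facility = math.floor(pri / 8),
    level    = pri % 8,
    msg      = rest,
    _RAW     = raw,
  }
end
```

Garbage like “Use the BFG!” falls straight through to the user.notice branch instead of blowing up the parser.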
Is it perfect?
No.
But so far, it covers everything I've personally encountered. It will misparse, but not crash, on seeing the following (a test case I pulled from rsyslogd):
<130> [ERROR] host.example.net 2008-09-23 11-40-22 PST iapp_socket_task.c 399: iappSocketTask: iappRecvPkt returned error
Garbage in, garbage out (also, stuff like this can be checked in the Lua code, as the raw message is available in addition to the parsed message).
Cookies? Insane? Not really. Not when compared to the syslog protocol.
More than you care to know about syslog
So I've been learning more than I ever wanted to about the syslog protocol. There's the non-spec that is RFC-3164 that is optimistic in terms of the protocol. Then there's the cleaned-up spec that no one is using that is RFC-5424 (which is quite nice, if a bit over-engineered).
RFC-3164 documents the use of UDP as the transport protocol for the syslog protocol; reading that RFC, one gets the impression that one should never actually use UDP as the transport mechanism, lest some cracker intercept or change the messages, or worse yet—you lose some packets and get nailed in a Sarbanes-Oxley audit (or even worse still—an ISO-9000 audit—the horror! The horror!).
Well, you could try running the syslog protocol over TCP, but even that isn't good enough for some people, claiming that you can still lose logging information under certain circumstances. No, for reliability you need to add a layer of framing over TCP and wrap the syslog protocol in XML and call it a day.
So far, the only syslog program I've found that even pays RFC-3195 lip service is rsyslogd, and even then, it's receive only and uses its own framing layer over TCP for sending.
I personally haven't seen an issue with using UDP for the syslog protocol. Not only do I relay syslog messages to a centralized server (my desktop box at Chez Boca, so I can watch the stuff in real time) but copies are kept locally (just in case). Also, there have been times when a TCP version (yes, even if I was using RFC 3195 or the lighter RELP) would have failed (at one point, our upstream provider upgraded a firewall that filtered out TCP traffic routed asymmetrically and guess what? Our traffic was routed asymmetrically; UDP traffic was unaffected and thus in that case, we were able to isolate the issue faster). Even the design of SNMP centered around UDP simply because it was “fire and forget” and thus on a congested network, there was a greater chance of UDP traffic making it out and being accepted than TCP traffic (which requires an acknowledgment that might never make it back).
But in looking over these, I'm struck that a reliable syslog protocol doesn't use SCTP, which has the reliability, ordering and (most importantly) congestion control of TCP with the message-based semantics of UDP. Heck, for “reliability” SCTP has one feature that neither TCP nor UDP has: either peer can change the IP address used for the session.
For now, I'll just stick with UDP.
Tuesday, February 09, 2010
Syslogintr—a syslogd replacement in C and Lua
So, why did I decide to write my own syslogd? Well, it started as something to do while at Bunny's mom's house. I was curious about the actual protocol used by syslogd because it's always bugged me that the files written by syslogd bear no information about the facility or priority a message originally had. Sure, you can filter by facility and priority, but the resulting files lose that information.
It was after reading RFC-3164 that I realized it might be possible to filter on more than just the facility and priority. Have Lua handle the logic of log analysis and I had what I thought was a fun little project.
Heh. Little did I know.
Anyway, it's been in use for a little over three months (and 140 commits to the source repository) and I must say, I'm finding it very useful. Sure, I could have written a Lua script to parse the log files that the traditional syslogd writes, but I'm just cutting out the middleman filesystem.
So the C code accepts, parses and hands a well-structured block of information to the Lua code, and it's the Lua code where all the features are implemented.
The information to Lua is passed in a table (which is Lua for an associative array, or hash table) with the following fields:
name | type | comments
---|---|---
version | number | The version of the syslog protocol, of which there are two: the documented version 1, which I haven't seen in the wild yet, and the conventional version, which I've internally labeled “version 0”. I currently only support version 0, so this field (as of this writing) is always 0.
_RAW | string | The actual raw message as received. It was intended as a debugging aid, but I keep it enabled as it is useful when you get the odd message.
remote | boolean | true if the message came in over the network, false if it was generated locally on the machine. I use this as a quick test.
host | string | This is really the source of the message. For a remote syslog message, this will contain the IPv4 or IPv6 address. For messages generated locally, this will be the “local socket” (formerly known as a “Unix socket”) the message was written to (which on most systems is /dev/log).
relay | string | Usually this will be the same as the host field, but when syslog messages are being relayed, this is the intermediate system that relayed the message on. If system A generated a message to B, and B relayed it to C, then the host field will contain “A” and the relay field will contain “B”. If, however, it goes A → B → C → D, then host will contain “A” and relay will contain “C” (in other words, the relay field only contains the last relaying machine).
port | number | The UDP port number the syslog message was sent from. If the message was sent locally, this will be -1.
timestamp | number | The timestamp of the message as it was received on the accepting host. This value is suitable for use in the Lua functions os.date() and os.difftime().
logtimestamp | number | The timestamp as found in the syslog message itself (if there was one). Otherwise, it will be the same as timestamp.
program | string | This is really the tag portion, but since it's mostly the name of the program that generated the syslog message, that's what I called this field. If it wasn't given, the value of this field will be “” (an empty string, not nil).
pid | number | The process ID of the program that sent the message. Most Unix syslogd implementations have this as part of the tag, and if found, it is set here. Otherwise, it will be 0.
facility | string | The facility, which will be one of the following:
level | string | This is the priority of the message, but the term “priority” never made much sense, so I call it “level.” This will be one of eight values:
msg | string | The actual message, or basically, whatever is left after parsing everything else. This is pretty much a free-format string.
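To make the shape concrete, here is a purely invented example of what one of these tables might look like for a locally generated message. The field names follow the list above; every value is made up for illustration:

```lua
-- Invented sample of the table the C code hands to the Lua code;
-- field names follow the list above, all values are made up.
local sample =
{
  version      = 0,
  _RAW         = "<86>sshd[1234]: Failed password for root ...",
  remote       = false,        -- generated locally
  host         = "/dev/log",   -- the local socket for local messages
  relay        = "/dev/log",
  port         = -1,           -- -1 for locally generated messages
  timestamp    = os.time(),    -- time of receipt
  logtimestamp = os.time(),    -- timestamp found in the message itself
  program      = "sshd",       -- the "tag" portion
  pid          = 1234,
  facility     = "auth2",
  level        = "info",
  msg          = "Failed password for root from ::ffff:192.0.2.1 port 4711 ssh2",
}
```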
This table is passed to a function called log() to handle however it sees fit. On my development system, this function is simply:
function log(msg) writelog(msg) sshd(msg) end
writelog() just logs the message to a single huge logfile (recording the host, program, facility, level and msg fields). It's the sshd() function that's interesting:
if blocked == nil then blocked = {} end

function sshd(msg)
  if msg.remote == true      then return end
  if msg.program ~= "sshd"   then return end
  if msg.facility ~= "auth2" then return end
  if msg.level ~= "info"     then return end

  local ip = string.match(msg.msg,"^Failed password for .* from ::ffff:([%d%.]+) .*");
  if ip == nil then return end

  I_log("debug","Found IP:" .. ip)

  if blocked[ip] == nil then
    blocked[ip] = 1
  else
    blocked[ip] = blocked[ip] + 1
  end

  if blocked[ip] == 5 then
    local cmd = "iptables --table filter --append INPUT --source " .. ip
             .. " --proto tcp --dport 22 --jump REJECT"
    I_log("debug","Command to block: " .. cmd)
    os.execute(cmd)
    I_log("info","Blocked " .. ip .. " from SSH")
    table.insert(blocked,{ ip = ip , when = msg.timestamp} )
  end
end
This checks for local messages from sshd, and if it finds five consecutive failed attempts, it adds the IP address of the offending party to the firewall and logs the action (via I_log()). Nothing that a lot of intrusion scripts don't already do, but this is in the syslog daemon itself, not another process reading the logs secondhand through a file (and without the issues that come up with log rotation).
Now, in order to keep the firewall from filling up, I added yet another feature—the ability to periodically run a Lua function. So, elsewhere in the script I have:
alarm("60m")

function alarm_handler()
  I_log("debug","Alarm clock");

  if #blocked == 0 then
    I_log("debug","Alarm clock---snooze button!")
    return
  end

  local now = os.time()

  I_log("debug",string.format("About to remove blocks (%d left)",#blocked))

  while #blocked > 0 do
    if now - blocked[1].when < 3600 then return end
    local ip = blocked[1].ip
    I_log("info","Removing IP block: " .. ip)
    blocked[ip] = nil
    table.remove(blocked,1)
    os.execute("iptables --table filter -D INPUT 1")
  end
end
alarm() informs the C code to call the function alarm_handler() every 60 minutes. If you give alarm() a numeric value, it takes that as the number of seconds between calls; if a string, it expects a numeric value followed immediately by “m” for minutes, “h” for hours (so I could have specified “1h” in this case) or “d” for days. The function alarm_handler() then removes the blocks one at a time (once per hour), which is long enough for the scanner to move on.
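The duration-string parsing actually lives in the C code, but the rules just described are easy to sketch in Lua. This is a hypothetical illustration (parse_duration() is my name for it, not part of the daemon):

```lua
-- Hypothetical sketch of the duration rules described above: a plain
-- number means seconds; a string is digits followed by "m", "h" or "d".
local units = { m = 60 , h = 3600 , d = 86400 }

local function parse_duration(spec)
  if type(spec) == "number" then
    return spec                              -- already seconds
  end
  local value,unit = string.match(spec,"^(%d+)([mhd])$")
  assert(value,"bad duration: " .. tostring(spec))
  return tonumber(value) * units[unit]
end

print(parse_duration("60m")) -- 3600, same as parse_duration("1h")
```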
Now, you may have noticed this bit of odd code:
if blocked == nil then blocked = {} end
One feature of the C code is that if it receives a SIGUSR1, the Lua script will be reloaded. This check ensures that if there are any IP blocks defined, I don't lose them when the script is reloaded (and yes, that is a handy feature: change the Lua code a bit, and have the daemon reload the script without having to restart the entire daemon).
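The idiom generalizes: any state that must survive a reload gets a guarded initialization, since reloading re-executes the script from the top. A small self-contained simulation of the idea (the chunk here stands in for the real script; this assumes Lua 5.2 or later, where load() accepts a string):

```lua
-- Simulating a script reload: the same chunk runs twice, but the
-- guarded initialization only creates the table the first time, so
-- entries added before the "reload" survive it.
local script = [[
  if blocked == nil then blocked = {} end
  table.insert(blocked,os.time())  -- pretend we just blocked another IP
]]

load(script)()  -- initial load of the script
load(script)()  -- simulated SIGUSR1 reload

print(#blocked) -- 2: the first entry survived the reload
```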
Along those lines, if the C code receives a SIGHUP, it calls the Lua function reload_signal(), which allows the script to close and reopen any logfiles it might be using (thus staying compatible with the current syslogd behavior); this is useful for rotating the logs and whatnot.
I'm also running this on my own server (brevard.conman.org, which runs this blog) and on one of our monitoring servers at The Company (this one not only runs Nagios, Cacti and snmptrapd, but all our routers send their syslog information to this server). The Lua code I have running on the monitoring server not only keeps logging to the same files as the old syslogd (in the same format as well), but also runs this little bit of code:
function check_ospf(msg)
  if msg.facility ~= 'local1' then return end

  if string.match(msg.msg,".*(OSPF%-5%-ADJCHG.*Neighbor Down).*") then
    send_emergency_email("sean@conman.org",msg)
    send_emergency_email("XXXXXXXXXXXXXXXXX",msg)
  elseif string.match(msg.msg,".*(OSPF%-5%-ADJCHG.*LOADING to FULL).*") then
    send_okay_email("sean@conman.org",msg)
    send_okay_email("XXXXXXXXXXXXXXXXX",msg)
  end
end
If the routing on our network changes, I get email notification of the event. I also have some code that processes the logs from Postfix. Postfix generates “thin entries”: five entries per email (from various subsystems) when what I really want logged are “fat entries” (which summarize the status of the email in one log line). So I wrote some code to catch all five lines per email and then log a one-line summary, thus turning a bunch of “thin entries” into one “fat entry” (what I really want to do is send all mail-related logs on all our servers to one central location, so we no longer have to check two or three servers every time one of our customers complains about email being “too slow,” but that's for another entry).
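My actual thin-to-fat code is part of the unreleased script, but the core idea can be sketched like this. The field names and the collect() function are invented for illustration; real Postfix log lines need more parsing than this:

```lua
-- Sketch of "thin to fat": accumulate the per-subsystem Postfix log
-- fragments by queue ID, and emit a single summary line once all the
-- pieces have shown up.  Names and fields here are invented.
local pending = {}

local function collect(queueid,field,value)
  pending[queueid] = pending[queueid] or {}
  pending[queueid][field] = value

  local p = pending[queueid]
  if p.from and p.to and p.status then
    pending[queueid] = nil              -- done with this message
    return string.format("%s: from=<%s> to=<%s> status=%s",
                         queueid,p.from,p.to,p.status)
  end
end

collect("4F9A21","from","alice@example.net")   -- thin entry, no output
collect("4F9A21","to","bob@example.com")       -- thin entry, no output
print(collect("4F9A21","status","sent"))       -- the one fat entry
```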
The Lua code on my personal server also does the Postfix “thin-to-fat” conversion, but it also logs the webserver status every hour (just because I can).
And if you're wondering why I've blathered on and on about a piece of code I've yet to release, it's because I've yet to write any documentation on the darned thing, and I figure this would be a decent first pass at some documentation. And maybe to gauge interest in the project.
Wednesday, February 10, 2010
Notes on logging
This is interesting: Facebook wrote their own logging system instead of using syslog. Their system only has two pieces of information: a category and the message. No facilities, no priorities or levels. I think in Facebook's case, they log everything, so there's no need for individual priorities or levels (the argument here is: you're going to log everything eventually anyway, so simplify the process).
Another note: when your configuration file is too complex (in other words, an ad-hoc declarative language), perhaps it's time to give up and just use a scripting language for configuration (I skipped straight to using a scripting language for configuration/logic).
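Since my configuration file is a Lua script, “configuration” and “logic” blur together; the config file can compute its own values. A contrived sketch of what that buys you (the option names here are made up, not the daemon's actual options):

```lua
-- Contrived example of scriptable configuration: the "config file" is
-- just Lua, so it can contain logic.  Option names are made up.
logfile   = "/var/log/everything"
min_level = os.getenv("SYSLOG_DEBUG") and "debug" or "info"

-- even the handler itself is "configuration"
function log(msg)
  -- ... filter, block, email, whatever ...
end
```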
More notes on logging
I mentioned yesterday about logging all mail-related logs to a central server. While we don't have a complicated email setup (unlike, say, Negiyo), we still have several email servers, and we get enough tickets about slow or lost email that it's a pain having to slog through one or two servers piecing everything together. What I would like is, given a Message-ID (which is (supposed to be) a globally unique identifier for an email) or an email address, to make a query in one location and get something like:
message-id = <YzNCeWFXNW5RSE53Y21sdVoyUmxkeTVqYjIwPQo=@mx3oc.com>
from       = gandalf@example.net
to         = sean@example.com

[rhohan-isp.example.org]   [gondor.example.net]       Feb 10 22:46:56
[gondor.example.net]       [spamfirewall.example.com] Feb 10 22:46:57
[spamfirewall.com]         [compmailserv.example.com] Feb 10 22:47:02
[compmailserv.example.com] [workstation.example.com]  Feb 10 22:47:06
[workstation.example.com]  mbox of sean               Feb 10 22:47:06
As an example, you see the Message-ID, who sent the email, who received it, and the five other lines can be read as “machine X sent email to machine Y at such-and-such a time,” with the last one showing local delivery of the email to a mailbox.
Anyway, that's what I would like to build. And I can almost do it. Sendmail (which at The Company we use on our legacy systems), Postfix (which we use for new servers) and Exim (which we use on one server because it has a feature that's needed by a program that runs on that one server) all log a bunch of messages as email works through their respective systems. Each one uses an internal unique ID, but they at least log the Message-ID at some point, so I can map the respective MTAs' internal IDs to a globally unique ID.
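The correlation step could look something like this in Lua. This is a hedged sketch; the pattern, function name and log lines are mine, and real MTA log formats differ in detail:

```lua
-- Sketch: each MTA logs its internal queue ID together with the
-- Message-ID at some point; remember that mapping so every later log
-- line for that queue ID can be tied to the globally unique ID.
local id_map = {}

local function correlate(queueid,logline)
  local mid = string.match(logline,"message%-id=(%b<>)")
  if mid then
    id_map[queueid] = mid
  end
  return id_map[queueid]
end

correlate("4F9A21","4F9A21: message-id=<abc123@example.net>")
print(correlate("4F9A21","4F9A21: to=<sean@example.com>, status=sent"))
```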
The odd man out, though, is our spam firewall, which is used by a significant portion of our customers. But given that our spam firewall is OpenSource™, I suppose I can modify the source code to emit a Message-ID; the problem there is that if (or when) we upgrade, I would have to patch the code again (or convince the Powers That Be to accept the patch).
I would also like to convert as many software packages as possible to log via syslog, and while most, like PostgreSQL and even Apache, can be configured to do so, there are a few holdouts (I'm looking at you, MySQL) that can't.
Wednesday, February 17, 2010
Dragons and Thunderstorms
Four months ago I attended a Drupal users group and I was underwhelmed since the group was mainly web developers, not programmers. So of course I found myself at a Ruby users group (so close that Wlofie and I walked there).
I went not because I'm terribly interested in Ruby (although getting a feel for the language certainly won't hurt) but because an old friend from college, Steve Smith, had organized the meeting, and I thought it might be nice to drop in and say hello.
It turned out that this group was more aligned with my interests than the Drupal users group. The first presentation was on distributed Ruby, which had a unique feature: if the local side could not marshal an object to send to the remote side, it basically signaled the other side to make a remote call back to do the actual processing, so neither side could really claim to be the client or the server; both sides could act either role. It's a bit baroque for my tastes, but still, an interesting solution to a problem with remote procedure calls.
The second presentation wasn't so much a presentation as a discussion on design patterns in Ruby, which led to a digression through the source code for Rails 3.0 and a few coding techniques that several of the more knowledgeable Ruby programmers in the room didn't realize were possible in Ruby (and a whole new way to write ravioli code … wow).
And oddly enough, they were interested in some of the projects I've done in Lua. So it's definitely a users group I can see going to again.
Friday, February 19, 2010
Glutton for punishment
Four months ago I attended a Drupal users group and I was underwhelmed, so I'm finding myself in a rather odd position: I'm giving a technical talk at the 2010 Florida Drupal Camp tomorrow (Saturday—at <shudder> 10:00 am! Oh bother!) about the work I've done on “Project: DoogieHowser.”
I'm not sure what prompted me to volunteer for the presentation, other than misery loving company and a chance to rant about the sheer silliness of the project (or maybe just the guilty feeling from writing about that Drupal users group a few months back), but hey, I get a free trip to Orlando and other than the unholy earliness of the presentation, it shouldn't be that bad.
And besides, what could possibly go wrong with Smirk and me being out of town at the same time?
I should know better than that
“Oh, Sean,” said Smirk. “You know better than that!”
“I know, I know.”
Sigh.
Bunny warned me about asking what could possibly go wrong. She and I encountered heavy traffic leaving the Boca Raton area, and I-4 was almost, but not quite, a parking lot at 7:30 pm.
But even that wasn't the issue. Nope. At 11:00 pm Smirk knocks on the hotel door. Apparently, one of our customers decided that tonight was the perfect night to rework their entire network and needed our help.
Head, meet desk.
Saturday, February 20, 2010
Notes from a hotel room from the middle of the night while listening to a heated discussion in Spanish
The Quality Inn room is very nice, but there were several men outside in a very heated discussion as we were trying to sleep. Bunny called the front desk to complain, and was informed by the sole employee onsite at the time that the police were being called in to mediate the heated discussion.
Sigh.
An hour later, the very heated discussion was resolved, but that still left a bunch of people entering and leaving various rooms and making sure their doors were firmly shut.
I think things quieted down by 3:00 am, giving me a full three hours before the wakeup call.
Le sigh.
Just a quick note from a conference
Of course I'm blogging from the conference. Why do you ask?
My perspective on presentations
I managed to get through my presentation about “Project: DoogieHowser,” although it was a bit shorter than I expected it to be. But it appeared to go over well with the audience.
Smirk has been recording all the sessions with the goal of putting up the videos at Drupal Maestro (where you'll be able to see my presentation, but not from my perspective).
And if that wasn't bad enough, I volunteered to do yet another presentation, this time about Git, since there was an open session and one of the suggestions for a topic was git. Since I use git, I figured I could do a quick rundown on how to use it.
A realization
While I was underwhelmed by the Drupal users group, I am not underwhelmed here at the 2010 Florida Drupal Camp.
But then again, I'm sitting through the Special Topics track, which has the more technical talks at the conference. Everybody has been very nice (obviously, they haven't read that post) and the catered lunch was quite good. I'm actually glad I came to this thing.
Now, to prepare for my git presentation …
It is over
The git presentation was well received. I briefly covered some basic git commands like git init, git add filenames… and git commit, along with a few different workflows one can use with git.
After that, a two-plus-hour dinner for 60 at the Macaroni Grill.
Now, I crash.
Monday, February 22, 2010
Souvenir from a conference
I crashed hard, mainly because I caught yet another cold (the second this year so far, or perhaps a continuation of the first; it's hard to say). Snotty nose, tired, sneezing, the whole nine yards.
I really hate being sick.
Tuesday, February 23, 2010
Crawler Town
For my friends who are into Lego, I give you:
Crawler Town (link via kisrael.com)
Now, back to my regularly being sick …
Wednesday, February 24, 2010
Noooooooooooooooooooooooooooo!
or
Hollywood is creatively bankrupt
Monday, March 08, 2010
Not the Messiah
Like Handel, only funnier (link via news from me). Ah, so that's what Monty Python has been up to …
Adventures in profiling
Last month, Mark hired me on as a consultant to help him profile a program he's been writing. And while I can't describe the program (or the project the program is for), I can, however, describe the issues I've had with profiling this program.
Normally, to profile a program, you compile the program using special compiler options (with gcc this is the -pg option) that instrument the program with special code to record not only how many times each function is called, but how much time is spent running each function. This information is saved when the program terminates, and you run another special program to decode this output into an easy-to-read format. Then you use that information to boost the performance of your program.
It's pretty straightforward.
This project, on the other hand, wasn't so straightforward. The primary issue: it's a multiprocess program; that is, it calls fork() to create child processes that do the actual work. It's the child processes that need to be profiled, but it's difficult to actually get the profile information from the child processes due to the way the profiler works (and both the GNU profiler (which I'm not allowed to use due to licensing fears) and the Sun Studio 12 profiler (which is the development platform for the project) work similarly, so this issue affects both).
Problem one: the output. The program runs, and when it exits, the accumulated data is written to a file named mon.out (gmon.out if I were using GNU). In this case, the main program starts and creates several child processes. The output file is only generated when a process ends, and the only time a process ends in this project is when you explicitly stop the main program. This results in mon.out being overwritten by each child as it ends, then overwritten again when the main process ends. So all I end up with is profile information for the main process, which tells me that the main processing loop took 99% of the time with only 0.01 seconds of CPU time (in other words, the parent process did nothing noteworthy). And there's no option, either in the compiler or at runtime, to change the output file.
Or is there?
The file mon.out is generated in the current working directory of the running process. Change the working directory of the process, and the file ends up in the new working directory. So I modify the program such that each child process creates a new working directory based on the child's process ID, and try again.
I ended up with one mon.out file in the working directory of the main process, and a bunch of empty directories. This leads to the second problem with profiling: you only get the output when the process calls exit() (or returns from main()), not _exit() (there is a difference between the two).
And replacing the calls to _exit() with exit() caused the program to hang (Mark even had a comment in the code about the C runtime handling fork() badly in the case of exit()).
So that pretty much killed using the compiler profiler options for this project.
Or did it?
The code is on a Sun server, which means it comes with DTrace, an incredible tracing facility that can be used to profile an application. Without compiling a special version of the program! Heck, you can profile any running process at any time!
It's a neat facility.
Just by using some sample scripts from the DTraceToolkit and a few examples from the Solaris Dynamic Tracing Guide, I was able to provide Mark with enough information to nearly double the performance of the program (the major culprit: a ton of pointless strcpy() calls in a third-party library being used, but that's about all I can say about that).
I was fortunate in that DTrace existed; between sampling the program counter, recording the number of standard library calls made, and selectively checking the call stacks of a few questionable calls (for instance, sprintf() calls ferror(), if you can believe it, and tracking down the few hundred thousand calls to strcpy()) of selected child processes, I was able to profile this multiprocess program (and each process is multithreaded, a fact I didn't realize until later).
And if DTrace didn't exist? Well … there's the profiling equivalent of printf() debugging I could have tried …
Cache flow problems
“I keep getting these notifications that you've updated your site,” said Dad, “but when I check, I keep seeing the same page over and over again.” This was about the fourth or fifth time this topic has come up over the past few months. And each time I tell Dad to shift-reload, but that doesn't seem to work for him, although the page eventually does change, after a while.
Sigh.
I suspect I know the problem. Sometime in the past year I changed the configuration of my webserver to allow browsers and web proxies to better cache the content here:
ExpiresActive On
ExpiresDefault "access plus 1 day"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/gif  "access plus 1 month"
ExpiresByType image/png  "access plus 1 month"
ExpiresByType text/css   "access plus 1 month"
ExpiresByType text/plain "access plus 1 month"
Images, style sheets, plain text files: all can be cached for a month (heck, they could probably be cached indefinitely for my blog; the content has never really changed all that much), but the web pages (in addition to the various feed files) can only be cached for 24 hours.
And I think that's the issue.
Dad's ISP is known to aggressively cache the web (I won't name names, but its initials are 'A', 'O' and 'L'), so I may need to adjust my cache settings.
Tuesday, March 09, 2010
Now *this* is a gaming room …
For my D&D playing friends … (link via Instapundit)
Wednesday, March 10, 2010
More on that “tool vs. crutch” debate
An empirical test of ideas proposed by Martin Heidegger shows the great German philosopher to be correct: Everyday tools really do become part of ourselves.
The findings come from a deceptively simple study of people using a computer mouse rigged to malfunction. The resulting disruption in attention wasn’t superficial. It seemingly extended to the very roots of cognition.
“The person and the various parts of their brain and the mouse and the monitor are so tightly intertwined that they're just one thing,” said Anthony Chemero, a cognitive scientist at Franklin & Marshall College. “The tool isn't separate from you. It's part of you.”
Your Computer Really Is a Part of You | Wired Science | Wired.com
More food for thought in the tool vs. crutch debate …
Friday, March 12, 2010
Brief notes about a surprise birthday party
The original plan was to drive to Blountstown to surprise Joe for his birthday, but due to existing plans his family had, we couldn't do that. Joe then expressed interest in meeting at MegaCon the following week, but we all told him that no one from South Florida would be able to make it (while in truth, with the help of his wife, we were planning on meeting him for a surprise birthday party in Orlando).
So at the ungodly hour of 11:00 am, Bunny, Gregory, Kurt, Keith and I started our drive northward. We needed to start out early to ensure we would arrive at the hotel first and set up the decorations and get the cake.
Joe (left) still giddy at our little surprise, even stuffed into the back of a sport utility vehicle with Kurt.
Joe was totally surprised by us, and was rather giddy the entire evening. Just being there was the best gift we could have given him.
Saturday, March 13, 2010
Brief notes about a convention
If I thought 11:00 am was ungodly, then there's no description I can give of getting up at 7:00 am (I think the last time I had to do that was … um … over twenty years ago?) in order to get to the convention. After a quick breakfast at McDonald's (where I saw no less than five (5) superheroes eating breakfast; this particular McDonald's was within walking distance of the Orlando Convention Center, so it's not surprising that I saw no less than five (5) superheroes at that particular establishment), Joe, Kurt, Gregory, Keith, Bunny, and I headed over to the convention center. There in the parking lot, we met up with Jeff and Tom, who had driven up that day from South Florida to help celebrate Joe's birthday.
We then hiked about a mile through the south portion of the convention center to the north hall (yes, the Orlando Convention Center is at least two huge buildings), only to wait in line for about an hour to get tickets to go inside.
But once inside … oh my …
The costumes.
The displays.
The artists.
Over six hours, maybe even seven, and I doubt we saw it all (although I did meet Don Rosa and got a few autographed Uncle Scrooge prints, but I missed seeing the Mach 5—sigh).
Dinner afterwards (after some fun attempting to navigate through the Orlando traffic), then back to the hotel for some swimming and hanging out.
Sunday, March 14, 2010
Even more brief notes about a store
Again, I got up at some ungodly hour of the morning where we all proceeded to Perkins for breakfast, followed by a trip to the Lego Store.
Unfortunately, I did not get any shots inside the store, as I was too busy drooling over all the sets for sale (I was particularly drooling over the Emerald Night Train set) and marvelling at a camera/video display they had set up. You could hold up a box of a kit to the camera, and on the video display, you could see yourself holding up the box, but the system would then add a 3D rendering of the kit on the box you were holding. And as you rotated the box around, the 3D rendering would rotate with the box (and a few kits were even animated, like the Emerald Night Train).
Wow.
A few hours there, then we all started the drive back from Orlando.
Thursday, April 01, 2010
Notes about the past few weeks
So.
What happened?
Well …
After the trip to Orlando for a surprise birthday party, there was another Ruby users group meeting, immediately after which I came down with my third cold of this year (seriously, since January; either that, or it's been the same cold for three months, hard to say).
A week later, I (along with Bunny and Wlofie) attended a special viewing of the works of M. C. Escher at the Boca Raton Museum of Art: a guided tour by the owner of the collection. Being a fan of M. C. Escher, I got a lot out of the guided tour, saw a bunch of works I'd never seen before, and got to see some of the actual wood cuts and stones used to make his prints (and as the owner kept saying, any one of a number of single works would have been enough to get Escher's name into history, so incredible were his wood-engraving skills). I do want to go back and take my time viewing the exhibit (since the tour was rather quick: less than two hours, with over 60 people following the owner around the gallery).
A week after that and I'm just now getting over the cold (clinic, antibiotics, blah blah).
And … well … I just haven't felt like writing all that much [You don't say? —Editor] [Shut up. —Sean], but I have been doing this for ten years now … perhaps it's time to close up shop.
Saturday, April 03, 2010
I'll only upgrade software if there's a compelling reason to, and for me, mod_lua is a compelling reason to upgrade Apache
Nah, it's not quite time to close up shop … (so much for my April Fools' joke this year—most people missed the style changes I did for several years running, but a) most people read the entries here via the newsfeed, so the visual change in layout was always lost on them, and b) I never did find that round toit I needed to change the style. Anyway, I digress.)
I've been looking a bit deeper into Drupal these past few days (seeing how I'm scheduled to give a repeat of my talk at the new West Palm Beach Drupal users group this month; I'm giving a lot of presentations this year, it seems) and trying to get into the whole PHP framework thing, and well … as a diversion, I thought it might be interesting to see what type of web-based framework one could do in Lua, and why not attempt it using mod_lua?
Well, the fact that I linked to the svn repository should say something about the stability of mod_lua: it ain't. It's only currently available for the latest development version of Apache, there's no documentation (except for the source code) and only a smattering of example code to guide the intrepid. It's also not terribly reassuring that it hasn't been worked on for a few months.
That didn't stop me from trying it though.
I spent a few hours debugging the module, enough for it to pass the few tests available, and hopefully the Apache team will accept the patch (a call to memset() to initialize a structure to a known value before use).
Now that it doesn't crash, it does appear to be quite nice, allowing the same access that any Apache module written in C would have, and it looks like one could effectively replace a few of the murkier modules (like mod_rewrite) with a more straightforward Lua implementation. My initial thought is to reimplement mod_litbook (which currently only works for Apache 1.3x) using mod_lua as a test case (and heck, maybe even upgrade the existing mod_litbook to Apache 2.x so I won't have to keep running an Apache 1.3 instance just for one portion of my website).
Sunday, April 04, 2010
I can haz Easter Bunny. I eated it.
Tuesday, April 06, 2010
Client certificates in Apache
I've been spending an inordinate amount of time playing around with Apache, starting with mod_lua, which led me to reconfigure both Apache 2.0.52 (which came installed by default) and Apache 2.3.5 (compiled from source, because mod_lua is only available for Apache 2.3) so they could run at the same time. This led to using IPv6, because I have almost two dozen “sites” running locally (and as I've found, it's just as easy to use IPv6 addresses as IPv4 addresses, although the DNS PTR records get a little silly).
This in turn lead to installing more secure sites locally, because I can (using TinyCA makes it trivial actually), and this lead to a revamp of my secure site (note: the link takes you to an unsecure page—the actual secure site uses a certificate signed by my “certificate authority” which means you'll get a warning which can be avoided by installing the certificate from the unsecure site). And from there, I learned a bit more about authenticating with client certificates. Specifically, isolating certain pages to just individual users.
So, to configure client side certificates, you need to create a client certificate (easy with TinyCA as it's an option when signing a request) and install it in the browser. You then need to install the certificate authority certificate so that Apache can use it to authenticate against the client certificate (um … yeah). In the Apache configuration file, just add:
SSLCACertificateFile /path/to/ca.crt
Then add the appropriate mod_ssl
options to the secure site (client-side authentication only works with
secure connections). For example, here's my configuration:
<VirtualHost 66.252.224.242:443>
  ServerName   secure.conman.org
  DocumentRoot /home/spc/web/sites/secure.conman.org/s-htdocs

  # ...

  <Directory /home/spc/web/sites/secure.conman.org/s-htdocs/library>
    SSLRequireSSL
    SSLRequire %{SSL_CLIENT_S_DN_O}  eq "Conman Laboratories" \
           and %{SSL_CLIENT_S_DN_OU} eq "Clients"
    SSLVerifyClient require
    SSLVerifyDepth  5
  </Directory>
</VirtualHost>
And in order to protect a single file with more stringent controls (and here for example, is my bookmarks file):
<VirtualHost 66.252.224.242:443>
  # ...

  <Location /library/bookmarks.html>
    SSLRequireSSL
    SSLRequire %{SSL_CLIENT_S_DN_O}  eq "Conman Laboratories" \
           and %{SSL_CLIENT_S_DN_CN} eq "Sean Conner"
    SSLVerifyClient require
    SSLVerifyDepth  5
  </Location>
</VirtualHost>
The <Files> directive in Apache didn't work. I suspect it's because the <Directory> directive is processed first and grants access to anybody from the unit “Clients,” so my <Files> directive was effectively ignored, whereas <Location> directives are processed after <Directory> directives (and thus override them), which is why anyone not me is denied access to my bookmarks.
Now, I just need to figure out what to do about some recent updates to Apache, since I have some “old/existing clients” to support (namely, Firefox 2 on my Mac, which I can't upgrade because I'm stuck at 10.3.9 on the system, because the DVD player is borked … )
IF IT AIN'T BROKE DON'T FIX IT!!!!!!!!!
Sigh.
I can fix the client certificate issue if I install the latest Apache 2.2, which has the SSLInsecureRenegotiation option, but that requires OpenSSL 0.9.8m or higher (and all this crap because of a small bug in OpenSSL). So, before mucking with my primary server, I decide to test this all out on my home computer (running the same distribution of Linux as my server).
Well, I notice that OpenSSL just came out with version 1.0.0, so I decide to snag that version. Download, config (what? No configure still?), make and make install, watch it go into the wrong location (XXXXXX I wanted it in /usr/local/lib/, not /usr/local/openssl/lib!), rerun config with other options and get it where I want it.
Okay.
And hey, while I'm here, might as well download the latest OpenSSH and get that working. I nuke the existing OpenSSH installation (yum remove openssh) since I won't need it, and start the configure, make and make install dance, but the configure script bitches about the version of zlib installed (XXXX! I know RedHat is conservative about using the latest and greatest, but come on! It's been five years since version 1.2.3 came out! Sheesh!) so before I can continue, I must do the download, configure, make and make install dance for zlib. Once that is out of the way …
checking OpenSSL header version... 1000000f (OpenSSL 1.0.0 29 Mar 2010)
checking OpenSSL library version... 90701f (OpenSSL 0.9.7a Feb 19 2003)
checking whether OpenSSL's headers match the library... no
configure: error: Your OpenSSL headers do not match your library.
Check config.log for details.
If you are sure your installation is consistent, you can disable the check
by running "./configure --without-openssl-header-check".
Also see contrib/findssl.sh for help identifying header/library mismatches.
Oh XXXXXX XXXX …
IT'S IN /usr/local/lib
YOU USELESS
SCRIPT!
But alas, no amount of options or environment variables work. And no,
while I might be willing to debug mod_lua
, I am not about to debug a 31,000
line shell script. Might as well reinstall the OpenSSH package …
[root]lucy:~>yum install openssh
Setting up Install Process
Setting up repositories
Segmentation fault (core dumped)
Um … what?
[root]lucy:~>yum install openssh
Setting up Install Process
Setting up repositories
Segmentation fault (core dumped)
What the XXXX?
Oh please oh please oh please don't tell me that yum
just
assumes you have OpenSSH installed …
Okay, where is this program dying?
[root]lucy:/tmp>gdb /usr/bin/yum core.3783
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"..."/usr/bin/yum": not in executable format: File format not recognized
Core was generated by `/usr/bin/python /usr/bin/yum search zlib'.
Program terminated with signal 11, Segmentation fault.
#0  0x007ff3a3 in ?? ()
(gdb)
Oh … it's Python.
Um … wait a second …
It's … Python! It's a script!
WHAT THE XXXX?
What did I do to cause the Python interpreter to crash?
Aaaaaaaaaaaaaaaaaaaaaaaaaah!
Okay, I managed to find some RPMs of OpenSSH to install. That didn't fix
yum
.
Okay, don't panic.
Obviously, it's something I've done that caused this.
The only thing I've done is install new libraries in /usr/local/lib.
Okay, keep any programs from loading up anything from /usr/local/lib. That's easy enough—I just edited /etc/ld.so.conf to remove that directory, and ran ldconfig. Try it again.
Okay, yum
works!
And through a process of elimination, I found the
culprit—zlib
! Apparently, the version of Python I have
doesn't like zlib 1.2.4
.
Sheesh!
Okay, yes, I bring this upon myself for not running the latest and greatest. I don't update continuously because that way lies madness—things just breaking (in fact, the last thing I did upgrade, which was OpenSSL on my webserver the other day, broke functionality I was using, which prompted this whole mess in the first place!). At least I was able to back out the changes I made, but I have to keep this in mind:
IF IT AIN'T BROKE DON'T FIX IT!!!!!
Write Apache modules quickly in Lua
I really like mod_lua
, even in its alpha state. In less
than five minutes I had a webpage that would display a different quote each
time it was referenced. I was able to modify the Lua based qotd, changing:
QUOTESFILE = "/home/spc/quotes/quotes.txt"

quotes = {}

do
  local eoln = "\r\n"
  local f    = io.open(QUOTESFILE,"r")
  local s    = ""

  for line in f:lines() do
    if line == "" then -- each quote is separated by a blank line
      if #s < 512 then
        table.insert(quotes,s)
      end
      s = ""
    else
      s = s .. line .. eoln
    end
  end

  f:close()
end

math.randomseed(os.time())

function main(socket)
  socket:write(quotes[math.random(#quotes)])
end
to
QUOTESFILE = "/home/spc/quotes/quotes.txt"

quotes = {}

do
  local eoln = "\r\n"
  local f    = io.open(QUOTESFILE,"r")
  local s    = ""

  for line in f:lines() do
    if line == "" then -- each quote is separated by a blank line
      if #s < 512 then
        table.insert(quotes,s)
      end
      s = ""
    else
      s = s .. line .. eoln
    end
  end

  f:close()
end

math.randomseed(os.time())

function handler(r)
  r.content_type = "text/plain"
  r:puts(quotes[math.random(#quotes)])
end
(you can see, it didn't take much), and adding
LuaMapHandler /quote.html /home/spc/web/lua/lib/quote.lua
to the site configuration (what you don't see is the only other line you need, LuaRoot), reload Apache, and I now have a webpage backed by Lua.
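Putting both directives together, the relevant bit of site configuration is just this (the LuaRoot path here is illustrative; directive names are from the mod_lua alpha documentation):

```apache
# mod_lua needs a base directory for resolving relative script paths,
# plus a mapping from the URL to the script
LuaRoot       /home/spc/web/lua/lib
LuaMapHandler /quote.html /home/spc/web/lua/lib/quote.lua
```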
And from there, it isn't much to add some HTML to the output, but it should be clear that adding Apache modules in Lua isn't that hard.
What did take me by surprise is that there's no real way to do the heavy
initialization just once. That bit of reading in the quotes file? It's
actually done for every request—mod_lua
just compiles the
code and keeps the compiled version cached and for each request, runs the
compiled code. It'd be nice if there was a way to do some persistent
initialization once (a feature I use in the current mod_litbook
),
but as written, mod_lua
doesn't have support for that.
I also haven't seen any action on my bug report—not a good sign.
I'm wondering if I might have to pick up the mod_lua ball and run with it …
Wednesday, April 07, 2010
Dependencies and side effects
“Well, that's your problem,” I said, looking at my computer sitting there, powered off. Bunny had been unable to check her bank account and thinking our network connection was bad, powercycled the router and DSL unit, but was still unable to connect. The real issue was my computer being off—all the computers here at Chez Boca use my computer for DNS resolution.
“Yeah, something happened with the power, but I'm not sure what,” said
Bunny. And yes, it was odd; none of the clocks are blinking “12:00” and
from what it sounds like, she didn't hear any UPS alarms (I think she would
have mentioned those if she did hear them) so something odd did happen. And
later (to get ahead of the story a bit) when I did check the UPS logs, I found (output
from syslogintr
):
/dev/log | apcupsd | daemon info | Apr 04 18:23:58 | 000.0,000.0,000.0,27.10,00.00,40.0,00.0,000.0,000.0,122.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 18:29:00 | 000.0,000.0,000.0,26.60,00.00,40.0,00.0,000.0,000.0,120.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 18:34:02 | 000.0,000.0,000.0,26.43,00.00,40.0,00.0,000.0,000.0,120.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 18:39:05 | 000.0,000.0,000.0,26.27,00.00,40.0,00.0,000.0,000.0,120.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 18:44:07 | 000.0,000.0,000.0,26.27,00.00,40.0,00.0,000.0,000.0,122.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 18:49:10 | 000.0,000.0,000.0,26.27,00.00,40.0,00.0,000.0,000.0,121.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 18:54:12 | 000.0,000.0,000.0,26.27,00.00,46.0,00.0,000.0,000.0,120.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 18:59:15 | 000.0,000.0,000.0,26.27,00.00,45.0,00.0,000.0,000.0,120.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 19:04:17 | 000.0,000.0,000.0,26.27,00.00,46.0,00.0,000.0,000.0,120.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 19:09:20 | 000.0,000.0,000.0,26.10,00.00,46.0,00.0,000.0,000.0,122.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 19:14:22 | 000.0,000.0,000.0,26.10,00.00,46.0,00.0,000.0,000.0,122.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 19:19:24 | 000.0,000.0,000.0,26.10,00.00,46.0,00.0,000.0,000.0,121.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 19:24:27 | 000.0,000.0,000.0,26.10,00.00,46.0,00.0,000.0,000.0,121.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 19:29:29 | 000.0,000.0,000.0,26.10,00.00,46.0,00.0,000.0,000.0,122.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 19:34:32 | 000.0,000.0,000.0,26.10,00.00,40.0,00.0,000.0,000.0,122.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 19:39:34 | 000.0,000.0,000.0,26.10,00.00,40.0,00.0,000.0,000.0,121.0,100.0,1
/dev/log | apcupsd | daemon info | Apr 04 19:44:37 | 000.0,000.0,000.0,26.10,00.00,40.0,00.0,000.0,000.0,122.0,100.0,0
/dev/log | apcupsd | daemon info | Apr 04 19:50:36 | 000.0,000.0,000.0,26.10,00.00,40.0,00.0,000.0,000.0,121.0,099.0,1
/dev/log | apcupsd | daemon info | Apr 04 19:55:53 | 000.0,000.0,000.0,25.94,00.00,40.0,00.0,000.0,000.0,121.0,098.0,0
/dev/log | apcupsd | daemon info | Apr 04 20:00:56 | 000.0,000.0,000.0,25.94,00.00,40.0,00.0,000.0,000.0,122.0,098.0,1
So, starting just past 18:23:58 on April 4th something odd was happening to my UPS (which all the computers in The Home Office are hooked up to), causing the battery voltage (normally 27.1v) and battery charge (100.0 percent) to drop. And those values kept dropping until:
/dev/log | apcupsd | daemon info | Apr 07 12:31:05 | 000.0,000.0,000.0,25.60,00.00,36.0,00.0,000.0,000.0,119.0,084.0,0
/dev/log | apcupsd | daemon info | Apr 07 12:36:08 | 000.0,000.0,000.0,25.60,00.00,36.0,00.0,000.0,000.0,121.0,084.0,1
/dev/log | apcupsd | daemon info | Apr 07 12:41:10 | 000.0,000.0,000.0,25.60,00.00,36.0,00.0,000.0,000.0,119.0,084.0,0
The battery voltage fell to 25.6V and the battery charge to 84% and past that … well, nothing because the computers lost power at that point (or anywhere up til 12:46:12 when the next log should have appeared). So no real clues as to what happened with the power, but I digress—back to the story.
I hit the power button on my computer; it bitches about the disk being corrupt (which I ignore, as I'm running a journaled filesystem), and it hangs when it gets to starting up syslogd (which is in reality my syslogintr) and klogd (which logs messages from the kernel to syslog()).
Hmm, I thought. Perhaps I better let fsck
run,
just to be safe. Powercycle, hit 'Y' to let fsck
run for
about fifteen minutes, then watch as it hangs yet again on
syslogd/klogd
.
Now, I won't bore you with the next few hours (which basically involved continuously booting into single user mode and trying to puzzle out what was going wrong) but in the end, the problem ended up being
syslogintr
.
Or rather, not with the actual C code in syslogintr
, but
with the Lua script it was running. It actually had to do with blocking
ssh
attempts via iptables
. See,
syslogd/klogd
start up before the network is initialized (and by
extension, iptables
), and apparently, running
iptables
before the network and klogd
are
running really messes up the boot process in some odd way that locks the
system up (not that this is the first time I've seen such weird interactions
before—back in college I learned the hard way that under the right
circumstances (which happened all too often) screen
and IRC
under Irix 4.0.5 would cause a kernel
panic, thus stopping the computer cold).
Once I figured that out, it was a rather simple process to remove the ssh blocking feature from the script. So now I'm stuck with either a weird dependency issue, or just removing the ssh blocking code from syslogintr (or at least the Lua script it runs) entirely.
Sigh.
Notes on a conversation during an impromptu UPS test
“So what should the UPS do when the power goes out?” asked Bunny.
“It sounds an alarm, and I should have,” I said, turning to the keyboard and typing a command, “nine minutes of power.”
“Oh really?”
“Yes. Well, let's test it,” I said, getting up, and pulling the plug on the UPS. About five seconds later, it started beeping. “See?”
“Hmm … I see,” said Bunny. “And then what?”
“Well, it's enough time for either the power to come back up, or to shutdown the computers. You don't really need—”
Just then all our computers suddenly lost power.
“Oh, well that was interesting,” I said.
“I thought you said you had nine minutes.”
“Apparently, so did the UPS.”
Friday, April 09, 2010
Cache flow problems, II
Google just announced that website speed is part of their ranking criteria (link via Hacker News), and seeing how Dad is still reporting issues with viewing this blog, I figured I might as well play around with Page Speed (which requires Firebug, an incredible website debugging tool that runs in Firefox) and see if I can't fix the issue (and maybe speed up the site).
Now, I realize there isn't any real need to speed up my site, but the suggestions by Page Speed weren't horrible, and actually, not terribly hard to implement (seeing how the main website here consists of nothing but static pages, with the only dynamic content here on the blog) and mainly consisted of tuning various caching options on the pages, and some minor CSS tweaks to keep Page Speed happy.
The caching tweaks for the main site I made were:
FileETag MTime Size
AddType "image/x-icon" .ico

ExpiresActive On
ExpiresDefault        "access plus 1 year"
ExpiresByType text/html    "access plus 1 week"
ExpiresByType image/x-icon "access plus 1 month"

<LocationMatch "\.(ico|gif|png|jpg|jpeg)$">
  Header append Cache-Control "public"
</LocationMatch>
HTML pages can be
cached for a week, favicon.ico
can be
cached for a month, and everything else for a year. Yes, I could have made
favicon.ico
cache for a year, but Page Speed suggested at least
a month, so I went with that. I can always change that later. I may
revisit the caching for HTML pages later; make non-index pages cacheable for a
year, and index pages a week, but for now, this is fine.
And it does make the pages load up faster, at least for subsequent visits. Also, Page Speed and YSlow both give me high marks (YSlow dings me for not using a CDN, but my site isn't big enough to require one; that's the only thing YSlow doesn't like about my site).
And as an attempt to fix Dad's issue, I added the following to the configuration for The Boston Diaries:
<Files index.html>
  Header set Cache-Control "no-cache"
</Files>
Basically, no one is allowed to cache the main page for this blog. I'll see how well that works, although it may take a day or two.
Saturday, April 17, 2010
The monitoring of uninterruptable power supplies
I've been dealing with UPS problems for a week and a half now, and it's finally calmed down a bit. Bunny's UPS has been replaced, and I'm waiting for Smirk to order battery replacements for my UPS, so in the meantime, I'm using a spare UPS from The Company.
Bunny suspects the power situation here at Chez Boca is due to some overgrown trees interfering with the power lines, causing momentary fluctuations in the power and basically playing hell with not only the UPSes but the DVRs as well. This past Wednesday was particularly bad—the UPS would take a hit and drop power to my computers, and by the time I got up and running, I would take another hit (three times, all within half an hour). It got so bad I ended up climbing around underneath the desks rerunning power cables with the hope of keeping my computers powered for more than ten minutes.
It wasn't helping matters that I was fighting my syslogd
replacement during each reboot
(but that's another post).
So Smirk dropped off a replacement UPS, and had I just used the thing, yesterday might have
been better. But nooooooooooooooooo! I want to monitor the device
(because, hey, I can), but since it's not an APC, I can't use apcupsd
to
monitor it (Bunny's new UPS is an APC, and the one I have with the dead battery is
an APC). In searching for some software to monitor the Cyber
Power 1000AVR LCD UPS, I came across NUT,
which supports a whole
host of UPSes,
and it looks like it can support monitoring multiple UPSes on a single computer
(functionality that apcupsd
lacks).
It's nice, but it does have its quirks (and caused me to have nuclear meltdowns yesterday). I did question the need for five configuration files and its own user accounting system, but upon reflection, the user accounting system is probably warranted (maybe), given that you can remotely command the UPSes to shut down. And the configuration files aren't that complex; I just found them annoying. I also found the one process per UPS, plus two processes for monitoring, a bit excessive, but the authors of the program were following the Unix philosophy of small tools collectively working together. Okay, I can deal.
The one quirk that drove me towards nuclear meltdown was the inability of the USB “driver” (the program that actually queries the UPS over the USB bus) to work properly when a particular directive was present in the configuration file and running in “explore” mode (used to query the UPS for all its information). So I have the following in the UPS configuration file:
[apc1000]
  driver   = usbhid-ups
  port     = auto
  desc     = "APC Back UPS XS 1000"
  vendorid = 051D
I try to run usbhid-ups
in explore mode, and it fails.
Comment out the vendorid
, but add it to the command
line, and it works. But without the vendorid
, the
usbhid-ups
program wouldn't function normally (it's the
interface between the monitoring processes and the UPS).
It's bad enough that you can only use the explore mode when the rest of the UPS monitoring software isn't running, but this? It took me about three hours to figure out what was (or wasn't) going on.
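For reference, the invocation that finally worked looked something like this (the install path is where I put NUT; the -x flag syntax is from the usbhid-ups documentation, so treat this as a sketch rather than gospel):

```shell
# Explore mode only works with the rest of NUT shut down; vendorid is
# given on the command line here because having it in ups.conf made the
# driver fail (the quirk described above).
/usr/local/ups/bin/usbhid-ups -DD -a apc1000 -x explore -x vendorid=051D
```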
Then there was the patch I made to keep NUT
from logging
every second to syslogd
(I changed one line from “if result >
0 return else log error” to “if result >= 0 return else log error” since
0 isn't an error code), then I found this
bug report on the mailing list archive, and yes, that bug was affecting
me as well; after I applied the patch, I was able to get more information from the Cyber Power
UPS (and it didn't
affect the monitoring of the APC).
And their logging program, upslog
, doesn't log to
syslogd
. It's not even an option. I could however, have it
output to stdout
and pipe that into logger
, but
that's an additional four processes (two per UPS) just to log some stats into
syslogd
. Fortunately, the protocol used to communicate with
the UPS monitoring
software is well documented and easy to implement, so it was an easy thing
to write a script (Lua, of course) to query the information I wanted to log
to syslogd
and run that every five minutes via
cron
.
Now, the information you get is impressive. apcupsd
gives
out rather terse information like (from Bunny's system, which is still
running apcupsd
):
APC      : 001,038,0997
DATE     : Sat Apr 17 22:23:25 EDT 2010
HOSTNAME : bunny-desktop
VERSION  : 3.14.6 (16 May 2009) debian
UPSNAME  : apc-xs900
CABLE    : USB Cable
MODEL    : Back-UPS XS 900
UPSMODE  : Stand Alone
STARTTIME: Thu Apr 08 23:20:10 EDT 2010
STATUS   : ONLINE
LINEV    : 118.0 Volts
LOADPCT  : 16.0 Percent Load Capacity
BCHARGE  : 084.0 Percent
TIMELEFT : 48.4 Minutes
MBATTCHG : 5 Percent
MINTIMEL : 3 Minutes
MAXTIME  : 0 Seconds
SENSE    : Low
LOTRANS  : 078.0 Volts
HITRANS  : 142.0 Volts
ALARMDEL : Always
BATTV    : 25.9 Volts
LASTXFER : Unacceptable line voltage changes
NUMXFERS : 6
XONBATT  : Fri Apr 16 00:40:37 EDT 2010
TONBATT  : 0 seconds
CUMONBATT: 11 seconds
XOFFBATT : Fri Apr 16 00:40:39 EDT 2010
SELFTEST : NO
STATFLAG : 0x07000008 Status Flag
MANDATE  : 2007-07-03
SERIALNO : JB0727006727
BATTDATE : 2143-00-36
NOMINV   : 120 Volts
NOMBATTV : 24.0 Volts
NOMPOWER : 540 Watts
FIRMWARE : 830.E6 .D USB FW:E6
APCMODEL : Back-UPS XS 900
END APC  : Sat Apr 17 22:24:00 EDT 2010
NUT
will give back:
battery.charge: 42
battery.charge.low: 10
battery.charge.warning: 50
battery.date: 2001/09/25
battery.mfr.date: 2003/02/18
battery.runtime: 3330
battery.runtime.low: 120
battery.type: PbAc
battery.voltage: 24.8
battery.voltage.nominal: 24.0
device.mfr: American Power Conversion
device.model: Back-UPS RS 1000
device.serial: JB0307050741
device.type: ups
driver.name: usbhid-ups
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.vendorid: 051D
driver.version: 2.4.3
driver.version.data: APC HID 0.95
driver.version.internal: 0.34
input.sensitivity: high
input.transfer.high: 138
input.transfer.low: 97
input.transfer.reason: input voltage out of range
input.voltage: 121.0
input.voltage.nominal: 120
ups.beeper.status: disabled
ups.delay.shutdown: 20
ups.firmware: 7.g3 .D
ups.firmware.aux: g3
ups.load: 2
ups.mfr: American Power Conversion
ups.mfr.date: 2003/02/18
ups.model: Back-UPS RS 1000
ups.productid: 0002
ups.serial: JB0307050741
ups.status: OL CHRG
ups.test.result: No test initiated
ups.timer.reboot: 0
ups.timer.shutdown: -1
ups.vendorid: 051d
Same information, but better variable names, plus you can query for any number of variables. Not all UPSes support all variables, though (and there are plenty more variables that my UPSes don't support, like temperature). You can also send commands to the UPS (for instance, I was able to shut off the beeper on the failing APC) using this software.
So yes, it's nice, but its quirky nature was something I wasn't expecting after a week of electric musical chairs.
Sunday, April 18, 2010
Off to the races
I mentioned briefly yesterday about the issue I was having with syslogintr
while booting the
computer. On my system, the boot would hang just after loading
syslogintr
. I tracked it down to initlog
hanging. Further investigation revealed that both
syslogintr
and initlog
were hanging, but the
significance of that escaped me until an epiphany I had while sleeping: I
was experiencing yet another race
condition!
A quick test today proved that yes, it was a race condition. A particularly nasty race condition too, since once again, I wasn't explicitly writing multi-threaded code.
syslogintr
creates a local socket (/dev/log
for
those who are curious) and then waits to receive logging
messages sent to said socket, something like:
local = socket(...)

while(!interrupted)
{
  read(socket,buffer,sizeof(buffer));
  process(buffer);
}
But in the process of processing the incoming message,
syslogintr
may itself call syslog()
:
while(!interrupted)
{
  read(socket,buffer,sizeof(buffer));
  process(buffer);
  syslog(LOG_DEBUG,"processed message");
}
syslog()
(which is part of the standard library under Unix)
sends the message to the local socket (/dev/log
). The data is
queued up in the socket, but it's okay because it'll cycle around quickly
enough to pick up the new data. Unless there's too much data already queued
in the local socket, at which point whoever calls syslog()
will
block until the backlogged data in the local socket is dealt with.
The startup script (/etc/init.d/syslog
for those of you
following along at home) starts both syslogintr
and
klogd
. klogd
is a program that pulls the data
from the kernel logging queue (logging messages the kernel itself generates,
but the kernel can't use /dev/log
as that's just a Unix convention, not
something enforced by the kernel itself) and logs that data via
syslog()
. And by the time klogd
starts up,
there's quite a bit of logging data generated by the kernel. So that data
gets blasted at syslogintr
(and in the process, so much data is
being sent that klogd
is blocked from running). But
syslogintr
is still coming up to speed and generating a bunch
of internal messages and suddenly, its calls to syslog()
are
blocking, thus causing a deadlock:
while(!interrupted)
{
  read(socket,buffer,sizeof(buffer));
  process(buffer); /* this takes some time */

  /*--------------------------------------------------
  ; meanwhile, something like klogd could be blasting
  ; data to the local socket, filling it up, thus when
  ; we get to:
  ;---------------------------------------------------*/

  syslog(LOG_DEBUG,"processed message");

  /*-------------------------------------------------
  ; the call to syslog() blocks, thus blocking the
  ; program until something else (in this case, *us*)
  ; drains the data waiting in the socket.  But we
  ; can't drain the data because we're waiting (via
  ; syslog()) for the data to be drained!
  ;
  ; Can you say, *deadlock* boys and girls?  I knew
  ; you could.
  ;--------------------------------------------------*/
}
This also explains why it only happened when booting—because that's
about the only time so much data is pushed to syslogintr
that
socket operations (reading, writing) are blocked. It also explains why I
haven't seen it on any other system I'm running it on, since those systems
don't run klogd
(being virtual hosts, they don't have
klogd
).
If you've ever wondered why software tends to crash all the time, it's odd interactions like this that are the cause (and this was an easy problem to diagnose, all things considered).
So now I internally queue any logging messages and handle them in the main loop, something along the lines of:
while(!interrupted)
{
  foreach msg in queued_messages
    process(msg);

  read(socket,buffer,sizeof(buffer));
  process(buffer);
  queuelog(LOG_DEBUG,"processed message");
}
Monday, April 19, 2010
Geek Power: Steven Levy Revisits Tech Titans, Hackers, Idealists
In the last chapters of Hackers, I focused on the threat of commercialism, which I feared would corrupt the hacker ethic. I didn't anticipate that those ideals would remake the very nature of commerce. Yet the fact that the hacker ethic spread so widely—and mingled with mammon in so many ways—guaranteed that the movement, like any subculture that breaks into the mainstream, would change dramatically. So as Hackers was about to appear in a new edition (this spring, O’Reilly Media is releasing a reprint, including the first digital version), I set out to revisit both the individuals and the culture. Like the movie Broken Flowers, in which Bill Murray embarks on a road trip to search out his former girlfriends, I wanted to extract some meaning from seeing what had happened to my subjects over the years, hoping their experiences would provide new insights as to how hacking has changed the world—and vice versa.
I could visit only a small sample, but in their examples I found a reflection of how the tech world has developed over the past 25 years. While the hacker movement may have triumphed, not all of the people who created it enjoyed the same fate. Like Gates, some of my original subjects are now rich, famous, and powerful. They thrived in the movement's transition from insular subculture to multibillion-dollar industry, even if it meant rejecting some of the core hacker tenets. Others, unwilling or unable to adapt to a world that had discovered and exploited their passion— or else just unlucky—toiled in obscurity and fought to stave off bitterness. I also found a third group: the present-day heirs to the hacker legacy, who grew up in a world where commerce and hacking were never seen as opposing values. They are bringing their worldview into fertile new territories and, in doing so, are molding the future of the movement.
Geek Power: Steven Levy Revisits Tech Titans, Hackers, Idealists | Magazine
My own copy of Hackers: Heroes of the Computer Revolution is worn out from so many readings and re-readings that it's falling apart (and when I first got it, back in 1986 or so, I read the entire book in one sitting, which lasted all night—not something I should have done on a school night).
So now here is Steven Levy, revisiting his own book from a twenty-five year perspective, and following up on the changes to the industry, and the people he interviews, since the early 80s.
Tuesday, April 20, 2010
When “No error” is actually an error
My patch to NUT was rejected:
No.
The above is an error condition (despite the 'No error' message), most likely due to buggy UPS firmware. Normally, we should not expect that when asking for a report, the UPS returns nothing. After all it is 'advertising' the report in the report descriptor, so silently ignoring this would be a grievous mistake. At the very least, if someone is debugging the we should provide some indication why this fails.
Hmm … okay. I thought they just mistyped a conditional, since 0 is used to indicate success throughout a mess of Standard C library (and Unix library) calls (silly me!).
The cause of the “No error” message is this bit of code:
/*
 * Error handler for usb_get/set_* functions. Return value > 0 success,
 * 0 unknown or temporary failure (ignored), < 0 permanent failure (reconnect)
 */
static int libusb_strerror(const int ret, const char *desc)
{
	if (ret > 0) {
		return ret;
	}

	switch(ret)
	{
	case -EBUSY:	/* Device or resource busy */
	case -EPERM:	/* Operation not permitted */
	case -ENODEV:	/* No such device */
	case -EACCES:	/* Permission denied */
	case -EIO:	/* I/O error */
	case -ENXIO:	/* No such device or address */
	case -ENOENT:	/* No such file or directory */
	case -EPIPE:	/* Broken pipe */
	case -ENOSYS:	/* Function not implemented */
		upslogx(LOG_DEBUG, "%s: %s", desc, usb_strerror());
		return ret;

	case -ETIMEDOUT:	/* Connection timed out */
		upsdebugx(2, "%s: Connection timed out", desc);
		return 0;

	case -EOVERFLOW:	/* Value too large for defined data type */
	case -EPROTO:	/* Protocol error */
		upsdebugx(2, "%s: %s", desc, usb_strerror());
		return 0;

	default:	/* Undetermined, log only */
		upslogx(LOG_DEBUG, "%s: %s", desc, usb_strerror());
		return 0;
	}
}
While I have yet to find the code for usb_strerror()
(and
I've searched every file; I have no clue as to where the
definition of usb_strerror()
is located), it acts as if it's
just a wrapper around strerror()
(a Standard C library call),
and when given a value of 0, it returns “No error” (since 0 isn't
considered an error value). I submitted back a patch to print “Expected
result not received”, since that seems to be what a 0 result means.
Also notice that the comment describing the results is somewhat lost at the top there—in the actual code it's even more invisible since there isn't much to visually set it off from the rest of the code.
Hopefully, the new patch I submitted will be accepted.
Thursday, April 22, 2010
An army of Sean
MyFaceSpaceBook
apparently makes profile pages—a short link to your page on
MyFaceSpaceBook. So I tried http://www.facebook.com/sean.conner
and … oh … unless I have a really deep tan, that isn't me. I then tried
http://www.facebook.com/sean.patrick.conner
and … um … closer, but still not quite me.
I'm not even on the first page of results.
Online in one form or another since 1987, and I'm failing at MyFaceSpaceBook.
Sigh. [Hey you kids! Get off my lawn!]
I ended up with http://www.facebook.com/spc476
,
which at least matches my ID across
several other websites.
Only 25 days to Vegas? Sign me up!
On the advice of his attorney, my friend Hoade hocked his wife's three cats and the silverware to buy a cherry red Chevy Impala convertible and is threatening to kidnap me on a wild road trip to Viva Lost Wages. I was curious as to the route we might take when I noticed that Google Maps offered walking directions.
How very amusing.
But the 358 steps in walking to Vegas pale in comparison to the 1,008 steps in biking to Viva Lost Wages.
Wednesday, May 05, 2010
Millions of moving parts
In a system of a million parts, if each part malfunctions only one time out of a million, a breakdown is certain.
—Stanislaw Lem
In between paying work, I'm getting syslogintr
ready for release—cleaning up
the Lua scripts, adding
licensing information, making sure everything I have actually works, that
type of thing. I have quite a few scripts that isolate some aspect of a working script—for instance, checking for ssh attempts and blocking the offending IP, but they weren't fully tested. A few were tested (as I'm using them at home), but not all.
I update the code on my private server, rewrite its script to use the new modules (as I'm calling them) only to watch the server seize up tight. After a few hours of debugging, I fixed the issue.
Only it wasn't with my code.
But first, the scenario I'm working with. Every hour,
syslogintr
will check to see if the webserver and nameserver
are still running (why here? Because I can, that's why) and log some stats
gathered from those processes. The checks are fairly easy—for the
webserver I query mod_status
and log the results; for the nameserver, I pull the PID from /var/run/named.pid
and from that, check
to see if the process exists. If they're both running, everything is fine.
It was when both were not running that syslogintr
froze.
Now, when the appropriate check determines that the process isn't running, it not only logs the situation but sends me an email alert. If only one of the two processes was down, syslogintr would work fine. It was only when both were down that it froze up solid.
I thought it was another type of syslog deadlock—Postfix spews forth multiple log entries for each email going through the system and it could be that too much data is logged before syslogintr can read it, and thus Postfix blocks, causing syslogintr to block, and thus, deadlock.
Sure, I could maybe increase the socket buffer size, but that only pushes
the problem out a bit, it doesn't fix the issue once and for all. But any
real fix would probably have to deal with threads, one to just read data
continuously from the sockets and queue them up, and another one to pull the
queued results and process them, and that would require a major restructure
of the whole program (and I can't stand the pthreads
API). Faced with that,
I decide to see what Stevens
has to say about socket buffers:
With UDP, however, when a datagram arrives that will not fit in the socket receive buffer, that datagram is discarded. Recall that UDP has no flow control: It is easy for a fast sender to overwhelm a slower receiver, causing datagrams to be discarded by the receiver's UDP …
Hmm … okay, according to this, I shouldn't get deadlocks because nothing should block. And when I checked the socket receive buffer size, it was way larger than I expected it to be (around 99K if you can believe it), so even if a process could be blocked sending a UDP packet, Postfix (and certainly syslogintr) wasn't sending that much data.
And on my side, there wasn't much code to check (around 2300 lines of
code for everything). And when a process list showed that
sendmail
was hanging, I decided to start looking there.
Now, I use Postfix, but Postfix comes with a “sendmail” executable that's compatible (command-line-wise) with the venerable sendmail. Imagine my surprise then:
[spc]brevard:~>ls -l /usr/sbin/sendmail
lrwxrwxrwx 1 root root 21 Feb  2  2007 /usr/sbin/sendmail -> /etc/alternatives/mta
[spc]brevard:~>ls -l /etc/alternatives/mta
lrwxrwxrwx 1 root root 26 May  5 16:30 /etc/alternatives/mta -> /usr/sbin/sendmail.sendmail
Um … what the … ?
[spc]brevard:~>ls -l /usr/sbin/sendmail*
lrwxrwxrwx 1 root root      21 Feb  2  2007 /usr/sbin/sendmail -> /etc/alternatives/mta
-rwxr-xr-x 1 root root  157424 Aug 12  2006 /usr/sbin/sendmail.postfix
-rwxr-sr-x 1 root smmsp 733912 Jun 14  2006 /usr/sbin/sendmail.sendmail
Oh.
I was using sendmail's sendmail
instead of
Postfix's sendmail
all this time.
Yikes!
When I used Postfix's sendmail
everything worked
perfectly.
Sigh.
mod_lua patched
And speaking of bugs, the bug I submitted to Apache was fixed!
Woot!
Monday, May 10, 2010
An update on the updated Greylist Daemon
Internet access at Chez Boca was non-existent today (scuttlebutt: a fibre cut, the third one in about two months' time) and without the Intarweb pipes, I can't work (good news! I get a day off! Bad news! I can't surf the web!), so I figured I would take the time to get a few personal projects out the door.
First one up—a new version of the greylist daemon has been released. A few bugs are fixed—the first being an error condition that wasn't properly sent back to the gld-mcp. The second was a segmentation fault (not fatal actually—the greylist daemon restarts itself on a segfault) if it received too many requests (by “too many” I mean “the tuple storage is filled to capacity”—when that happens, it just dumps everything it has and starts over, but I forgot to take that into account elsewhere in the code). The last prevented the Postfix interface from logging any syslog messages (I think I misunderstood how setlogmask() worked).
The other changes are small (a more pedantic value for the number of seconds per year, adding sequence numbers to all logged messages (that's another post) and setting the version number directly from source control), but the reason I'm pushing out version 1.0.12 now (well, aside from there being nothing else to do yesterday) is related to the one outstanding bug (that I know of) in the program. That bug deals with bulk data transfers (Mark has been bitten by it; I haven't) and I suspect I know the problem, but the solution to that problem requires an incompatible change to the program.
Well, okay—the easy solution requires an incompatible change to the program. The problem is the protocol used between the greylist daemon and the master control program. The easy solution is to just change the protocol and be done with it; the harder solution would be to work the change into the existing protocol, and that could be done, but I'm not sure if it's worth the work. The number of people (that I know of) using the greylist daemon I can count on the fingers of one hand, so an incompatible change wouldn't affect that many people.
Then again, I might not hear the end of it from Mark.
In any case, there are more metrics I want to track (more on that below) and those would require a change to the protocol as well (or more technically, an extension to the protocol). The additional metrics may possibly help with some long term plans that involve making the greylist daemon multithreaded (yes, the main page states I can get over 6,000 requests a second; that's a very conservative value—drop the amount of logging and I can handle over 32,000/second, all on a single core).
Some of the new metrics involve tracking the lowest and highest number of tuples during any given period of time. This should help with
fine-tuning the --max-tuples
parameter (currently defaults to
65,536). I've noticed over the years that I don't seem to get much past
3,000 tuples at any one time, but I would like to make sure before I tweak
the value on my mail server.
The other metrics I want to track are the number of tuple searches (or “reads”) and the number of tuple updates (additions or deletions—in other words, “writes”). These metrics should help if I decide to go multithreaded with the greylist daemon, and with how best to store the tuples depending on whether the application is read heavy or write heavy (my guess is that reads and writes are nearly equal, which presents a whole set of challenges). I would be interested to see if I can improve on the single-core figures with additional cores.
Since quite a few of the changes require protocol changes, I decided to just make this the last release of the 1x line and start work on version 2 of the greylist daemon—or maybe 1.5, leaving 2x for the multithreaded version.
A new project released!
I've made mention of my syslogd
replacement, but since I had nothing else to do today,
I buckled down, wrote (some) documentation and have now officially released
syslogintr
under a modified
GNU GPL license.
It currently supports only Unix and UDP sockets (both unicast and multicast) under IPv4 and IPv6; it's compatible with RFC-3164 (which documents current best practice and isn't a “standard” per se) and is fully scriptable with Lua. Because of that, it's probably easier to configure than rsyslogd and syslog-ng, both of which bend over backwards in trying to support, via their ever increasingly complex configuration files, all sorts of weird logging scena—Oh! Intarwebs back up! Shiny!
Wednesday, May 12, 2010
Just gotta love stupid benchmarks
Because I like running stupid benchmarks, I thought it might be fun to
see just how fast syslogintr
is—to see if it even gets close to handling thousands of requests per
second.
So, benchmarks.
All tests were run on my development system at Chez Boca: 1G of RAM, 2.6GHz dual core Pentium D (at least, that's what my system is reporting). I tested syslogintr linked against Lua and against LuaJIT. All runs relayed messages (after being processed) to a multicast address (one set of runs without a corresponding listener, another set with one). The script being tested was using.lua from the current version; the executable was compiled without any optimizations.
And the results:
                         Lua      LuaJIT
no multicast listener    10,250   12,000
multicast listener        8,400    8,800
Not terribly shabby, given that the main logic is in a dynamic scripting language. It would probably be faster if it skipped the relaying entirely and compiled with heavy optimizations, but that's a test for another day.
Update a few minutes later …
I forgot to mention—those figures are for a non-threaded (that is, it only runs on a single CPU) program. Going multithreaded should improve those figures quite a bit.
Saturday, May 15, 2010
Yet another new (actually, old) project released!
Two years ago I wrote a program to wrap Perl in order to catch script kiddie Perl scripts on the server.
Today, I'm deleting a huge backlog of information recorded by said program. Normally under Linux, a directory (the list of files in said directory) occupies 4,096 bytes—the directory I'm trying to delete is 110,104,576 bytes in size, which as a rough calculation means there are over 3,000,000 files in that directory.
And this isn't the first time I've deleted that directory.
But what I didn't realize is that I never got around to releasing the program.
Woooooooooooot! Another project released!
Update on Monday, April 18th, 2022
I've since unpublished the code. I'm not sure I even have the code anymore.
Saturday, May 29, 2010
Death by a thousand SQL queries
The Company just got hired to take over the maintenance and development of a mid-sized Web 2.0 social website that's the next Big Thing™ on the Internet. For the past week we've been given access to the source code, set up a new development server and have been basically poking around both the code and the site.
The major problem with the site was performance—loads exceeding 50 were common on both the webserver and database server. The site apparently went live in January and has since grown quickly, straining the existing infrastructure. That's where we come in, to help with “Project: SocialSpace2.0” (running on the ubiquitous LAMP stack).
The site is written in PHP (of course), and one of the cardinal rules of addressing performance issues is “profile, profile, profile”—the bottleneck is almost never where you think it is. Now, I've profiled code before, but that was C, not PHP. I'm not even sure where one would begin to profile PHP code. And even if we had access to a PHP profiler, profiling the program on the development server may not be good enough (the development server has maybe half the data of the production server, and may not exhibit the pathological cases the production server might encounter).
So what to do as the load increases on the webserver?
Well, this answer to profiling C++ code gave me an idea. In one window I ran top. In another window, a command line. When a particular instance of Apache hit the CPU hard as seen in top, I quickly got a listing of open files in said process (by listing the contents of /proc/pid/fd) to find the offending PHP file causing the load spike.
Laugh if you will, but it worked. About half a dozen checks led to one particular script causing the issue—basically a “people who viewed this profile also viewed these profiles” script.
I checked the code in question and found the following bit of code (in pseudocode, to basically protect me):
for viewers in SELECT userID FROM people_who_viewed
               WHERE profileID = {userid} ORDER BY RAND()
  for viewees in SELECT profileID FROM people_who_viewed
                 WHERE userID = {viewers['userID']} ORDER BY RAND()
    ...
  end
end
Lovely!
An O(n²) algorithm—in SQL no less!
No wonder the site was dying.
Worse, the site only displayed about 10 results anyway!
A simple patch:
for viewers in SELECT userID FROM people_who_viewed
               WHERE profileID = {userid} ORDER BY RAND() LIMIT 10
  for viewees in SELECT profileID FROM people_who_viewed
                 WHERE userID = {viewers['userID']} ORDER BY RAND() LIMIT 10
    ...
  end
end
And what do you know? The site is actually usable now.
Alas, poor Clusty! I knew it, Horatio: a search engine of infinite results …
YIPPY is foremost the world's first fully-functioning virtual computer. A cloud-based worldwide LAN, YIPPY has turned every computer into a terminal for itself. On the surface, YIPPY is one-stop shopping for the web surfing needs of the average consumer. YIPPY is an all-inclusive media giant; incorporating television, gaming, news, movies, social networking, streaming radio, office applications, shopping, and much more—all on the fastest internet browser available today.
I wish I could say the above was a joke (but you should read the rest of the above page purely for its entertainment value—the buzzword quotient on that page is pure comedy gold), but alas, it is not. Some company called Yippy has bought Clusty and turned what used to be my favorite (if occasionally used) search engine into some LSD-induced happy land of conservative values:
Yippy.com, its sub-domains and other web based products (such as but not limited to the Yippy Browser) may censor search results, web domains and IP addresses. That is, Yippy may remove from its output, in an ad-hoc manner, all but not limited to the following:
- Politically-oriented propaganda or agendas
- Pornographic Material
- Gambling content
- Sexual products or sites that sell same
- Anti-Semitic views or opinions
- Anti-Christian views or opinions
- Anti-Conservative views or opinions
- Anti-Sovereign USA views or opinions
- Sites deemed inappropriate for children
I cannot in good conscience (even if I may agree with some of the above) endorse such censorship from a search engine, nor, if I refuse to use it, force others to use it. Even my own site has gambling content on it, so I too could be censored (or at least my site removed from search results). Not to mention it encourages people to report “questionable content:”
Yippy users are our greatest defense against objectionable material. Should a keyword or website by found that returns this kind of material it may be reported in the CONTACT US tab located on the landing page of the Yippy search engine. Our staff will quickly evaluate all responses and reply back within 24 hours for a resolution notice. We thank you in advance for helping keep Yippy the greatest family friendly destination online today.
Thus, I'm going back to Google, and have removed Clusty (sigh) from my site.
Friday, June 25, 2010
The sucking vortex of ill-marked, multiply-named, non-Euclidean roadways designed by a disgruntled parking lot architect that is Orlando
Today we left for Orlando for a short weekend getaway. The plan is to arrive in Orlando (technically, Kissimmee) just before dinner, check into the resort (Bunny has a timeshare in the Orlando area), then head over to Emeril's at Universal City Walk for dinner. Tomorrow is a trip to St. Petersburg to visit the Salvador Dali Museum, then back to Orlando for Blue Man Group. Then drive home on Sunday.
The trip to Orlando was uneventful. The trip in Orlando was horrible. Orlando is this huge sucking vortex of ill-marked, multiply-named, non-Euclidean roadways designed by disgruntled parking lot architects (for instance, W Irlo Bronson Memorial Highway is also SR-500, SR-530, US-192, US-17 and US-441; then there's S Apopka Vineland Road, aka CR-435, Vineland Road, and SR-535, which is not to be confused with Apopka Vineland Road, which isn't connected to S Apopka Vineland Road, and is also known as CR-435 and Clarcona Road, which turns into S Park Ave, then N Park Ave before turning into Rock Springs Road where it ends). Worse yet, the map Bunny purchased prior to the trip turned out to be largely useless.
My finger is at the approximate location of our resort on the map Bunny bought. We have this huge 4′×4′ map … and our resort isn't on the map.
And here's the major road we were on for most of the trip, US-192:
Yup, not on the map either.
We almost didn't make it to Emeril's because I read the map wrong (I had to guess where we were) and thought we were on N Orange Blossom Trail (aka US-441, US-92, US-17, SR-500) when in reality we were on S Orange Avenue (an easy mistake when it's also known as Old Dixie Highway and CR-527—I'm telling you, the roads around Orlando are eeeeeeeeeeviiiiiiilllllllllll) and nothing was quite matching up.
Driving home from Emeril's, we got horribly lost and I was sure we were headed west (next stop—Tampa) on Poinciana Blvd, but it turned out we were headed south on Poinciana Blvd before doing a U-turn and stopping at a 7-11 on the corner of US-192 and Poinciana Blvd, where I purchased our second useless map of the trip—this one at least had US-192, but our resort was still off the map.
(Later, I learned that had we stayed going south on Poinciana Blvd, we would have hit S Orange Blossom Trail (CR-532, US-17, US-92), which, unlike N Orange Blossom Trail that runs north/south, runs east/west. Going east, it turns into S John Young Parkway (aka N Bermuda Avenue) which runs north/south; N Orange Blossom Trail runs parallel to S John Young Parkway, oddly enough—I'm telling you, eeeeeeeeeeeeeeeeeviiiiiiiilllllllll.)
The eeeeeeeeeviiiiilllllll roads around the Orlando area had their effect on us—I don't recall a time when Bunny and I argued more than when we were driving around the area.
Truffles and crème brûlée
Ah, Emeril's. We arrived half an hour late for our reservation and had to wait maybe ten minutes to get seated. Bunny and I were escorted upstairs and led to this small room (perhaps 15′×20′) with windows on three sides (man, the geometry of the Orlando area is very odd) and beneath each window, a table.
I swear, there were more waitstaff working the room than patrons (there was only one other couple in the room with us). So the service was excellent. It was also much quieter than in the main dining areas, which was a bonus. At least we could hear each other in conversation.
I enjoyed the crab-stuffed artichoke; Bunny loved the White Truffle Flatbread. It's hard to go wrong with Filet Mignon (what I had) but the carrots were a bit undercooked for my liking. Bunny enjoyed the Pan Roasted Redfish. And of course, dessert—Double Chocolate Fudge Cake for me (mmmmmmmmmmmmmmmmmmmm) and Vanilla Bean Crème Brûlée for Bunny. For a “once-in-a-year” experience, it was quite nice.
Saturday, June 26, 2010
Escaping the surreal area around Disney
Bunny and I managed to escape the sucking vortex of ill-marked, multiply-named, non-Euclidean roadways designed by a disgruntled parking lot architect that is Orlando for the coastal environs of St. Petersburg to view the Salvador Dali Museum.
We arrived in the mid-afternoon, decided against the guided tour, and spent a few hours reviewing the works of Salvador Dalí. I enjoyed the experience and was glad I went. I didn't realize that Dalí painted more than just melting clocks.
Bunny, however, didn't care for his work that much. It turns out that while she's heard of him, she wasn't familiar with his work; she enjoyed Escher more than Dalí.
The best thing to see when you're blue
The roads around Orlando destroy your soul. By the time we got back to the resort, the bickering between Bunny and me over driving had driven me close to a breakdown. I came very close to skipping out on Blue Man Group but a few minutes of meditative silence gave me the resolve to brave the non-Euclidean Orlando roads once more (odd, since I wasn't the one driving).
In fact, until we were sitting in our seats waiting for the show to start, I was very close to losing it (I'm telling you, the roads around Orlando are eeeeeeeeeeeeeeeeeviiiiiilllll).
And I'm glad I went.
It's very hard to describe the Blue Man Group experience. It's part concert (bring ear plugs, or ask one of the ushers for ear plugs—there's a ton of drumming), part performance art, part comedy, part social commentary and part audience participation. The energy level is high and it's just this … incredible experience that you have to … um … experience live in order to “get” it.
The entire trip, the sucking vortex of ill-marked, multiply-named, non-Euclidean roadways designed by a disgruntled parking lot architect that is Orlando, the “eh” experience of the Salvador Dali Museum, the loudness of The Hard Rock Cafe (where Bunny and I ate dinner after The Blue Man Group—it was next door and still open), everything, was worth it just to see The Blue Man Group.
Buyer beware
In our room was a small booklet with advertisements and coupons for the various attractions in the area. Included in this free booklet were maps of the area. Not to scale and crudely drawn, but way more useful than the large “we paid good money for these useless wastes of paper” maps.
That last map was more than accurate enough to get us back on the right road after driving out the wrong exit from Universal City Walk.
Those Orlando roads are eeeeeeeeeviiiiiillllllllll! So are the maps. Buyer beware!
Sunday, June 27, 2010
Home again
We're home safe and sound, far away from the eeeeeeeeeviiiiiiiilllllllll streets of Orlando.
Sunday, July 04, 2010
Just an FYI for those of you who might think of doing something silly like setting off illegal fireworks you got by driving to the border of South Carolina and totally said you would use them legally, wink wink nudge nudge say no more say no more …
Scaring 'da birds
It's been seven years since last I attended a Fourth of July party with my friend C, and once again, I'm at his house to help scare the local bird population. Fortunately this time, there was no big boom, but as a precaution, C moved the festivities to the back yard, next to the pool.
Thursday, July 22, 2010
An update to a quick note on embedding languages within languages
In making this comment I came across this old post of mine from 2007 where I lament the amount of code required to query SNMP values in C. I took one look at the code I wanted:
OID sys = SNMPv2-MIB::sysObjectID.0;

if (sys == SNMPv2-SMI::enterprises.5567.1.1) /* riverstone */
{
  IpAddress destination[] = IP-MIB::ip.24.4.1.1;
  IpAddress mask[]        = IP-MIB::ip.24.4.1.2;
  IpAddress nexthop[]     = IP-MIB::ip.24.4.1.4;
  int       protocol[]    = IP-MIB::ip.24.4.1.7;
  int       age[]         = IP-MIB::ip.24.4.1.8;
  int       metric[]      = IP-MIB::ip.24.4.1.11;
  int       type[]        = IP-MIB::ip.24.4.1.6;
}
else if (sys == SNMPv2-SMI::enterprises.9.1) /* cisco */
{
  IpAddress destination[] = RFC1213-MIB::ipRouteDest;
  IpAddress mask[]        = RFC1213-MIB::ipRouteMask;
  IpAddress nexthop[]     = RFC1213-MIB::ipRouteNextHop;
  int       protocol[]    = RFC1213-MIB::ipRouteProto;
  int       age[]         = RFC1213-MIB::ipRouteAge;
  int       metric[]      = RFC1213-MIB::ipRouteMetric1;
  int       type[]        = RFC1213-MIB::ipRouteType;
}

for (i = 0 ; i < destination.length; i++)
{
  print(
         destination[i],
         mask[i],
         nexthop[i],
         snmp.protocol(protocol[i]),
         metric[i],
         age[i]
       );
}
and remembered—I did that!
Yup. Back in September of 2009 when I first started playing around with Lua. I installed the SNMP bindings for Lua and wrote the following:
#!/usr/local/bin/lua
-- http://luasnmp.luaforge.net/snmp.html

snmp = require "snmp"
OID  = snmp.mib.oid

routeprotos =
{
  "other    ",
  "local    ",
  "netmgmt  ",
  "redirect ",
  "egp      ",
  "ggp      ",
  "hello    ",
  "rip      ",
  "is-is    ",
  "es-is    ",
  "igrp     ",
  "bbnspf   ",
  "ospf     ",
  "bgp      "
}

print("     Dest            Mask            NextHop    Proto Metric Age")
print("-------------------------------------------------------------------------------")

router = assert(snmp.open{
	peer      = arg[1] or "XXXXXXXXXXXXXXXXXXXXXXXXX" ,
	community = arg[2] or "XXXXXXXXX"
})

cisco      = OID "SNMPv2-SMI::enterprises.9.1"
riverstone = OID "SNMPv2-SMI::enterprises.5567.1.1"
sysid      = router["SNMPv2-MIB::sysObjectID.0"]

if string.find(sysid,cisco,1,true) then
  shouldbe = OID "RFC1213-MIB::ipRouteDest"
  result =
  {
    { oid = "RFC1213-MIB::ipRouteDest"    } ,
    { oid = "RFC1213-MIB::ipRouteMask"    } ,
    { oid = "RFC1213-MIB::ipRouteNextHop" } ,
    { oid = "RFC1213-MIB::ipRouteProto"   } ,
    { oid = "RFC1213-MIB::ipRouteMetric1" } ,
    { oid = "RFC1213-MIB::ipRouteAge"     } ,
  }
elseif string.find(sysid,riverstone,1,true) then
  shouldbe = OID "IP-MIB::ip.24.4.1.1"
  result =
  {
    { oid = "IP-MIB::ip.24.4.1.1"  } ,
    { oid = "IP-MIB::ip.24.4.1.2"  } ,
    { oid = "IP-MIB::ip.24.4.1.4"  } ,
    { oid = "IP-MIB::ip.24.4.1.7"  } ,
    { oid = "IP-MIB::ip.24.4.1.11" } ,
    { oid = "IP-MIB::ip.24.4.1.8"  } ,
  }
end

repeat
  result,err = snmp.getnext(router,result)
  if result ~= nil then
    if string.find(result[1].oid,shouldbe,1,true) == nil then break end
    print(string.format("%-16s",result[1].value) ..
          string.format("%-16s",result[2].value) ..
          string.format("%-16s",result[3].value) ..
          routeprotos[result[4].value]          ..
          string.format("%-6d",result[5].value) ..
          string.format("%-6d",result[6].value))
  end
until false

os.exit(0)
Those email server blues
I'm concerned that eventually it will no longer be possible to run a private email server and that everyone will end up using Gmail, Yahoo or MyFaceSpaceBook because that's the only way we will be able to get email.
Occasionally Dad will call asking why his email to me is bouncing, and every time I check, it's because AOL is taking the forced transitory failure (as generated by my greylist daemon) as “I can't deliver this in one shot, so of course that email address is bogus.” So I've had to whitelist all of AOL.
I had a similar problem with MyFaceSpaceBook. One or two transitory failures and my email address is considered bogus. Another whole swath of IP addresses whitelisted.
Then Corsair writes in about his emails to me being bounced.
Sigh.
Corsair's case I can't really figure out. From the logs:
Jul 18 04:28:36 brevard gld: [98587] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 18 08:28:36 brevard gld: [98799] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 18 12:28:37 brevard gld: [99052] tuple: [XXXXXXXX.194 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 18 16:28:37 brevard gld: [99309] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 18 20:28:38 brevard gld: [99491] tuple: [XXXXXXXX.194 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 00:28:38 brevard gld: [99675] tuple: [XXXXXXXX.194 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 04:28:39 brevard gld: [99944] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 08:28:39 brevard gld: [100234] tuple: [XXXXXXXX.194 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 12:28:40 brevard gld: [100509] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:00:38 brevard gld: [100595] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:01:09 brevard gld: [100596] tuple: [XXXXXXXX.194 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:05:38 brevard gld: [100604] tuple: [XXXXXXXX.194 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:06:09 brevard gld: [100605] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:13:09 brevard gld: [100610] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:13:40 brevard gld: [100613] tuple: [XXXXXXXX.194 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:24:24 brevard gld: [100629] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 14:41:18 brevard gld: [100641] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] GRAYLIST GREYLIST
Jul 19 15:06:38 brevard gld: [100678] tuple: [XXXXXXXX.195 , XXXXXXXXXXXXXXXXXXXXXXX , sean@conman.org] ACCEPT WHITELIST
His email should have gone through on the second attempt, as it came only four hours later, which is less than the six hours it takes to purge unreferenced tuples. Others I can explain; the third because it's from a different IP address, and the fourth because it's definitely past the six hour lifetime of an unreferenced tuple. Same for the fifth, but again, I can't explain why it wasn't accepted by the sixth attempt.
My initial thought was that it had something to do with searching the tuple list. I recently rewrote the binary search code so it was not only half the size, but much clearer, but maybe it doesn't work. Maybe I missed some subtle boundary condition.
50,000,000 tests later (no, really!), and no, both the old binary search routine and the new binary search routine return identical results. If there is a corner case, 50,000,000 random tests was not enough to reveal it. So I doubt it's that code.
About the only thing I can think of (and I haven't tested this) is that the timeout for old tuples is not what I think it is, but when I query my greylist daemon it returns a value of six hours for the lifetime of a greylisted tuple.
In any case, I whitelisted Corsair's email address. And I'm pondering why I even run my own email server any more …
How not to design a PHP web site
I don't even know where to begin.
One of the tasks our team is currently working on for “Project: SocialSpace2.0” is to separate the user uploaded content (images, videos, etc) to its own webserver (what we're calling “the static content”) to make it easier to scale out the site.
My task is to figure out how the content is uploaded to the site. This
is not an easy task. The proprietary PHP framework (where have I heard that tale before?) consists of
over 400,000 lines of undocumented code (but of course!) spread across 3,000
files (really!) that is a prime example of why object oriented programming
can be a Bad Thing™ (early on in the project, I was curious as to how
far down the rabbit hole this code went, so I spent some time starting with
the topmost PHP file and replacing each require_once()
function
call with the code it referenced; I stopped after including over 15,000
lines of code and that's before anything is executed).
The automagical code is amazing. Even though I changed
$EO->set_setting('FS_IMAGE_UPLOAD_PATH','uploads/images/');
which is relative to the DOCUMENT_ROOT
and thus hard to move to its own domain, to
$EO->set_setting('FS_IMAGE_UPLOAD_PATH','/home/spc/tmp/images/');
the content is still saved to the old location, thus making it harder to move the content to its own domain.
Going further—the actual upload control on the webpage is a Flash program (lovely), which apparently is controlled by some Javascript (oh, it's getting better) that is generated on the fly (can you feel the love) and yet still manages to hardcode the destination directory.
Wow! I'm seriously impressed by the previous team's work.
This is going to be painful, I can tell.
Tuesday, August 03, 2010
The Dream of a Lifetime
A tip from Jeff led me to a very cool Uncle Scrooge story that may (or may not) be the inspiration for the movie “Inception” (actual links from boingboing).
I'm more familiar with the work of Carl Barks than with Don Rosa, so I found the artwork a bit jarring, but the story was quite imaginative (and it was odd to see so many Beagle Boys in one story).
Tuesday, August 24, 2010
A lesson learned
My webserver has been down for two days, and I only now noticed, even though I've been receiving notifications, on the hour, every hour, since it went down.
Sigh.
Why didn't I notice?
Because I also receive the same notification (same subject line in the email) from another webserver that tends to go down more often than not, and it too, went down around the same time as my webserver.
I restarted the other webserver and kept wondering why I was constantly receiving notifications of it being down. I figured the webserver might have been updated, thus breaking the status page I check, so I adjusted the monitoring code appropriately (yes, one of the fields I was looking for didn't exist, which led me to that conclusion).
It wasn't until I tried hitting my own webserver that I actually noticed it was down.
That's when I realized that not having the website name in the subject line of the email notifications was not a good idea.
I also realized I should blog more often.
Wednesday, August 25, 2010
Perhaps RACTER ghostwrites this guy's blog?
- From
- "Angela Smith" <webmaster@bussinesdata.com>
- To
- sean@conman.org
- Subject
- Link Request
- Date
- Wed, 25 Aug 2010 12:46:55 +0500
Greetings!
My name is Angela Smith, SEO Consultant. I've greatly enjoyed looking through your site and I was wondering if you'd be interested in providing a link to one of our client www.flowerbud.com. In lieu of this link, we will provide a link back or more from our human edited directories that have built good credibility with search engines.
Our linking details:
Please add following details to your website and we will give you as many link backs as you provide. (In case you have more than one website)
- URL
- http://www.flowerbud.com/shop-by-occasion-cat/birthday-cat
- Title
- birthday flowers delivery
- Description
- Flower bud presents you an ample, daily fresh and multi hued bunch from the farm in Baja Sur California and you'll get beautiful, fresh flower arrangements. Flower delivery direct from the grower, most of our flowers will arrive on your doorstep before they even bloom.
If you are interested, please add the above link details to your website and please send me the URL and TITLE in order to list your website. I'll add your link as soon as possible.
I hope you have a nice day and thank you for your time.
Regards
Angela SmithP.S. Please note that this is not a spam or automated email, it's only a request for a link exchange. Your email address has not been added to any lists, and you will not be contacted again. if you'd like to make sure we don't contact you again, please fill in the following form:
http://bit.ly/unsubscribe-list
please accept our apologies for contacting you.Address: A 26, Sec-59, Noida, India
I get these “link exchange” emails from time to time and for the most part I ignore them. But this one I just had to share; not the email per se nor the website in question but for the owner's blog:
Seriously now, I think the executive along with the rank and file of the UAW should rise up and be reliably counted upon to use Flowerbud.com as their one and only source of fresh cut flowers in all seasons and for any, all and no particular reason at all. After all I am about to weave yet another fine product from General Motors into a hectic day and a story of approval that at one time would have ranked as quite out of character. In ‘Vase Runner’ I lauded their behemoth Suburban on a marathon run and now I find myself waxing on about a plain vanilla sedan that seems to have entered the market place to little fanfare, perhaps because it coincided with “The General” slipping into “Government Motors” and the over reaching overdrive gear.
Whatever, It's comfortable seat and serenely quiet interior still lie a thousand miles south of a warm bed and another 4 am PDX trek. My car parks ( it knows its own way ) at blue R7, the Rosetta Stone experiment that is the bus from economy parking to the terminal is as punctual as ever and of course my considerable monetary infusion to Alaska Airlines goes a long way in aiding all the free flying enjoyed by airline staff and families, not to mention allowing just the odd day of work for those with inordinate seniority. Numerous days away from the job being essential to the storing up of sufficient surliness and bile that qualifies one to be cabin crew, and whoo hoo, even the beer in hand emergency slide operator on more than a few of today's airlines. Just this week I was chatting with a Delta Platinum passenger who in three months of assiduous looking over 60,ooo miles had yet to find a cabin crew member who had earned one of the accolades that Delta HQ wants their premium passengers to award to staff for simply doing their job with pleasant and helpful demeanor. You would not accept that of your pizza delivery person yet after dropping $500, $2000 or even more for a plane fare you to take it from any number of personnel attached to the aircraft … and keep mute.
Ms.Malibu « Mark’s Bloomin’ Journal
Seriously, from a florist.
It just goes on and on with this rambling, stream-of-consciousness, keyword-laden verbal diarrhea. Even better, it appears that all the blog entries from Mark's Bloomin' Journal are like that.
Amazing.
Monday, October 11, 2010
Let me 'splain … no, there is too much. Let me sum up
Back in April M (and until he states otherwise, he shall remain M) brought me on as a consultant at The Corporation (not to be confused with The Company, where I work with Smirk and P) to help profile “Project Wolowizard” (M's name for the project (which deals with telephony communications protocols) and way better than any name I came up with for use on my blog), which later led to writing a test harness (in C, C++ and Lua) and in late August led to full-time employment with The Corporation (they made an offer I couldn't refuse).
Which is good, because we (“we” being The Company) were let go from “Project: SocialSpace 2.0” due to an internal disagreement high up in the hiring company over the direction of the website. And shortly after that, Smirk was also hired by The Corporation to help with documentation and server ops. P declined to work with The Corporation, the Ft. Lauderdale Office being too far for him, so P still works for The Company (along with Smirk and me, which The Corporation has no problems with).
So between “Project: SocialSpace 2.0” (until it blew up in late August/early September) and “Project: Wolowizard” (still ongoing for the foreseeable future) I've been too busy (okay, laziness has something to do with it as well) to update the ol' blog here. But now that I've settled into a new work groove, I hope things will improve around here.
Unclear on the concept of public and private community strings
Okay, what prompted me to update the blog was an incident that happened at The Corporation.
Like I mentioned, we're working on “Project: Wolowizard,” which involves dealing with SS7, a telephony-based communications protocol. To handle SS7, The Corporation licensed The Protocol Stack From Hell™. Now, given the amount of money that exchanged hands, one would think we would get decent documentation, source code and even technical support, but no, no source code, documentation that is technically correct but completely useless and what can only be described as “obstinate technical support.”
So for the past two weeks Smirk has been tasked with setting up The Protocol Stack From Hell™ to respond to SNMP queries. The technically correct but completely useless documentation indicated that it was a simple matter to set up SNMP support; just configure the IP address, UDP port and community string (really a password) in a few files and it should “just work.”
I wouldn't be mentioning it here if it “just worked.” Not only did The Protocol Stack From Hell™ fail to reply to SNMP queries, but it wasn't even opening up the UDP port. For two weeks, Smirk, R (the office manager in the Ft. Lauderdale Office of The Corporation) and a few others from the Main Office of The Corporation (which is in Seattle, Washington—nice because they're three hours behind us) were going back and forth with the obstinate technical support. Things were going nowhere slowly.
Then R suggested that Smirk set the SNMP community string to “public”, which is a typical default setting for SNMP. Of course that worked. As an off-handed remark, I suggested that Smirk should try “private”—the other, “more secure” community string that is commonly used.
That worked too.
But not the community string we wanted to use.
Head. Meet desk.
(For the record, The Protocol Stack From Hell™ was selected way before R, who runs the Ft. Lauderdale Office of The Corporation, even worked for The Corporation.)
I mean, yes, there certainly was some braindeath in “Project: SocialSpace 2.0,” but at least it worked, even if you could measure its speed in bits per eon. This wondrous feature of The Protocol Stack From Hell™ is almost criminal in its action.
Sheesh.
Wednesday, October 13, 2010
Feeding your inner carnivore
All I can say is … expense accounts rock!
A few members of upper management from The Main Office of The Corporation arrived at The Ft. Lauderdale Office of The Corporation to check things out. And because they're upper management, they pretty much get to expense anything they want, so they took the Ft. Lauderdale Office (meaning: us) out to dinner at a Brazilian steakhouse called Chima.
There is no menu. Instead, you just flip a small disk to the side indicating you want food; then you are inundated with men carrying large hunks of roast beast offering you slices of various cuts of beef, pork or lamb. It's a never ending river of meat; vegetarians need not apply. To stem the rising tide, just flip the disk over to the other side. You can keep doing this as long as you want. As much as you want.
It's insane.
And insanely good.
It also appears to be one of those “if you have to ask, you can't afford it” type places.
Thursday, October 14, 2010
Beware of feeding your inner carnivore
Uhg.
I ate way too much. I should have skipped lunch yesterday had I known about Chima.
I should have also kept the small disk on the “no” side more often.
The food was great going down. It wasn't so much fun coming back up.
Twice.
I emailed in sick, slept for nearly twelve hours (that is, after … um … cleansing my system) and spent the rest of the day recovering.
Saturday, October 16, 2010
“Red”
Bunny and I went to see “Red” tonight and yes, it was worth it. It's an action comedy film about a retired CIA agent (played by Bruce Willis, who is “Retired but Extremely Dangerous”—thus the name of the film) trying to lead a normal life who suddenly finds himself running for his life while trying to protect an innocent love interest (played by Mary-Louise Parker).
The cast is outstanding: John Malkovich playing a paranoid conspiracy theorist who's fun to watch on the screen, the always good Morgan Freeman, the still-sexy Helen Mirren, and Richard Dreyfuss in a small but crucial role.
The film (thankfully) keeps its humor going throughout and manages to keep the action sequences thrilling to watch. It's not high art, nor does it pretend to be anything other than a comedy action film. And for what it is, it's quite good.
Tuesday, October 19, 2010
We were experiencing technical difficulties …
I finally get back into blogging mode, and then the server my site (and email) is on goes offline when its power supply flames out.
Sigh.
It was weird today—I felt lost without access to my server, mainly because of the loss of my personal email (yes, I still run my own email server; yes, I'm masochistic like that) but yes, the website being down was also weighing on me—until yesterday (Monday) I don't think any server my site has been on has been down for more than three or four hours (oh, there have been some times recently when the site has been down for over 24 hours, but that's because various processes on the box were inadvertently killed and I didn't realize it, which is different from the actual server being offline).
But things seem to be back to normal, and I would like to thank the crew hosting my website for their hard work in getting it back up quicker than I expected.
Your complete boating resource
I received this last week:
- From
- MXXXXX XXXXX <XXXXXXXXXXXXXXXXXXXXXXX>
- To
- sean@conman.org
- Subject
- Website Inquiry
- Date
- Tue, 12 Oct 2010 15:32:04 +0800
Dear Webmaster,
I recently discovered your "Uhg … sushi for breakfast" page here:
http://boston.conman.org/2002/05/03.4
I wanted to let you know that the “PublixDirect” link on your site points to a website (
http://www.publixdirect.com/
) that is no longer working. Would you please consider replacing it with a link to my website called Boating.com? It is a resource that provides hundreds of used and new boats for sale, as well as reviews, tips and buying guides for anyone interested in boating.If you think it would be of use to your visitors, would you please consider adding a link to my website on your page here:
http://boston.conman.org/2002/05/03.4
Or in any of the pages of your site that will be most appropriate.
Here is the HTML link you could add: <a href="http://www.boating.com">Boating.com</a> - the complete boating resource.
Will look forward to hearing from you.
Thanks!
MXXXXX
Boating.com
www.boating.com
My original thought was to post it immediately, but I thought better of it at the time. I mean, it's nice that M pointed out a dead link on my site, but given that that particular entry is eight years old, that is to be expected (also, PublixDirect was discontinued in August of 2003).
But I found it really odd to suggest replacing the PublixDirect link with a site about boat sales. It doesn't make sense in the context of the entry and it tells me that M probably has some piece of software that looks for sites with broken links that have a certain page rank and spams any address found on the page asking to replace the dead link.
Nice, except I'm not a link farm. The links in each entry provide (or at
least, I hope they provide) a context for what I'm writing about and
frankly, having the text “PublixDirect” go to a site about boat sales
doesn't serve me, or my readers, any good purpose (hmm … but what to
do about links that have changed since I wrote an entry
… but that's another entry). It's also obvious that M
never bothered to read the entry in question or else M wouldn't have
bothered to suggest replacing the PublixDirect link with some spam site.
I replied to M (but without quite the snarkiness here) but have yet to receive a reply, so a week later, it's fair game. And heck, if I'm lucky, perhaps this entry will become the complete boating resource; that'll show M.
“NPA” seems to be a popular party these days …
I received the sample ballot in the mail today (okay,
technically yesterday but I'm still in “today” mode) and I'm
excited to see all the non-Republican/non-Democratic candidates in the eight
races I can vote in on November 2nd. There are eight Republicans, eight Democrats (figures the
Republicans would have a .com
and the Democrats a
.org
), one Libertarian (Alex Snitker), one Constitutional Party of Florida (Bernie DeCastro),
one for the Tea
Party (Ira Chester), two
Independence
Party candidates (Peter Allen and his running
mate John Zanni) and fifteen (count them, fifteen)
non-party-affiliated candidates.
I find that wonderful for some reason.
Do I really expect the non-Republicans/non-Democrats to win?
Not really, but a guy can hope, right?
Thursday, October 21, 2010
Maybe this year I'll get a novel out—wouldn't that be novel?
Hey, there! Thanks for your interest in my 8-Week Online Novel-Writing Course. It's designed to coincide with November's National Novel-Writing Month (NaNoWriMo), but includes both pre-drafting and revision instruction as well.
I've been a teacher of creative writing at a major university for more than 8 years, and all through that time I've done independent novel-writing coaching as well as written these dang things myself! When you finish my course, you will have a revised draft of a novel ready to post, sell, share, or revise further. (I offer one final read with comments after your revision!) The only way to be a novelist is to WRITE THAT NOVEL! I will be your cheerleader, drill sergeant, and best friend during this whole process.
My friend Hoade, college English instructor and zombie expert, is offering this novel-writing course.
And I think I'm going to try it.
The limits of sleeping
Testing “Project: Wolowizard” isn't easy, what with dealing with The Protocol Stack From Hell™ and clarifying which optional fields of certain messages are mandatory, further complicated by the fact that some optional mandatory fields are optional if other optional mandatory fields are included, but there's also a feature that if used, means you may not include an optional mandatory field, but if you do, then you need to set another optional optional field.
It's all very confusing.
But while I'm waiting for the first extended test run to finish (24 hours of continuous tests), I decided to investigate a particular phenomenon I've noticed while developing the testing program.
We need a way to test various message rates, and one of the easiest is something like:
for count = 1 , 1000 do
  endpoint:send(msg)
  sim.sleep(thepausethatrefreshes)
end
If I want to send 10 messages per second, then
thepausethatrefreshes
will be 0.1
(sleep for a
tenth of a second); if I want 1000 messages per second, then
thepausethatrefreshes
will be 0.001
. It's an easy
way to dial up or down the number of messages per second sent.
The sim.sleep()
function (we're using Lua to script our test scenarios) is really a
wrapper around the C function nanosleep()
, which, in theory,
allows a very fine resolution (to the nanosecond) but in reality is limited
to the hardware on the system.
And in my testing, I've found that at best, I can get only about 100
messages per second. I wrote a program to test the limits of
nanosleep()
and found that on our particular testing platform,
the lowest I can go is 0.01 seconds (my Linux system, on the other hand, can
go as low as 0.002).
As a second test, I just blasted out messages as fast as possible and was able to get a sustained (that is, actually sent and acknowledged) 1,500 per second (or one every 0.0006 seconds). I have to rethink my approach here. I think what may work is to send groups of messages before pausing a bit.
So, if I wanted to send 1,000 messages per second, I would need to write something like:
function thousand_per_second()
  for pauses = 1 , 50 do
    for msgs = 1 , 20 do
      endpoint:send(msg)
    end
    sim.sleep(0.02)
  end
end
This sends twenty messages per batch, then pauses 0.02 seconds between batches, to give us approximately 1,000 per second (actually, a bit less due to processing overhead).
Tuesday, October 26, 2010
“Multithreaded programming is hard, let's play Minecraft.”
All I wanted to do was pass some data from one thread to another. I had code to manage a queue (based off the AmigaOS list code), so that wasn't an issue. And while a pthread mutex would ensure exclusive use of the queue, I still had an issue with one thread signaling the other that data was ready.
I took one look at pthread condition variables, didn't understand a word of it, and decided upon a different approach.
The primary issue really wasn't the transferring of data; it was the processing to be done with the data. I'm writing code to drive, let's say, a widget, for “Project: Wolowizard” and part of that test code is simulating, say, a sprocket. When the sprocket gets a request, the original code could return a result or just drop the request. R wanted a way to delay the results, not just drop them. To avoid a redesign of the test program, the easiest solution was to just spawn a separate thread to handle the delayed replies.
So, not only did I have to transfer the request, but signal the other thread that it had a new request to queue up for an X second delay.
My first test approach (a “proof-of-concept”) used a local Unix socket
for the transfer. This approach had the benefits of avoiding the use of
mutexes and condition variables, and the code was fairly easy to write.
Just poll()
the local Unix socket for data with a calculated
timeout; if you get a timeout, then it's time to send the next delayed
result, otherwise it's a new request to queue (ordered by time of delivery)
and recalculate the timeout.
But I found it annoying that the data was copied into and out of kernel space (not that performance is an issue in this particular case; it's more of an annoyance at the extra work involved).
Fortunately, I found sem_timedwait()
and
sem_post()
. I could use sem_timedwait()
to wait
for requests, much like I used poll()
. Some mutexes around the
queue (I originally used some other semaphores, but M suggested I stick with
mutexes for this) and the code was good to go.
Only it didn't work in the actual program. My “proof-of-concept” worked, but in actual production, the second thread was never notified.
I asked M for help, and through his asking the right questions I suddenly had an idea of what was wrong—one quick change to a test script (the testing program I wrote is driven by Lua) proved my hunch right. Which is when I planted my face in the palm of my hands.
The upshot—my sprocket simulation works in one of two different ways (one method uses The Protocol Stack From Hell™ as a test) and the way I picked (which doesn't use The Protocol Stack From Hell™ and is thus, more “safe,” as both M and I are of the opinion that The Protocol Stack From Hell™ is also subtly buggy) failed to trigger the appropriate semaphore. Worse, had I used the local Unix socket, the whole thing would have worked as intended.
One of these days I'll get this whole multithreaded programming thing down pat.
Hoade won't let me get away with 50,000 fictional words
Hoade's online novel writing class (disclaimer: we're “best-friends-forevah”) is really helpful. I think it's the structured nature of the course and the (so far) daily assignments (no writing yet!—that's another week or so away) that are keeping me focused.
And the constant emails yelling for completed assignments aren't hurting either (seekrit message to Hoade: Bunny's birds left extensive comments on my homework and trust me, you don't want to see it).
Maybe this time I'll have more than a pile of incoherent notes.
Saturday, October 30, 2010
There are technical difficulties, and then there are technical difficulties
Nearly two weeks ago my server went down because of a faulty power supply. At the time, I made a joke about the power supply going out in flames.
Last night (technically this morning) the new server went down for a few hours (oh great!) but then I see this in email (forwarded by Smirk):
At approximately 1:49 a.m. EDT, UPS #2 experienced a catastrophic failure resulting in a fire. As a result, a portion of the XXXXXXXXX X datacenter experienced a power interruption that may have impacted some customers.
The Fire department responded to the fire in UPS #2 immediately, arriving on site at 1:57 a.m. EDT. For safety reasons, access to the XXXX XX area of the facility was temporarily restricted.
Oh.
Well then … (memo to self: no more jokes about servers catching fire)
Tuesday, November 09, 2010
Stupid Twitter Tricks ][
Three years ago I did a Stupid Twitter Trick and it ran fine up until Twitter changed how third-party applications can access Twitter. Then it went, well, not dark, but it wasn't exactly updating either.
Until this weekend. In fact, it wasn't that hard to update; perhaps an hour or so of work.
Three years, though … perhaps I should come up with some other Stupid Twitter Trick to do; my quote file is running out of quotes.
Your ironic bug of the day
I'm working on some code to manipulate quote files (a simple text file filled with various quotes from people). It's not working—it either crashes or corrupts the output.
And it's the same quote each time—“There are only two hard problems in Computer Science: cache invalidation, naming things and off-by-one errors.”
The problem—an “off-by-one” error.
Tuesday, November 30, 2010
Lame excuses
Bunny and I were talking about the lack of posts around here, and I gave two rather pathetic reasons for slacking off. The second reason deals with The Corporation—I'm not sure how free I can be with work details. Smirk isn't my boss at The Corporation (he's a fellow cow-orker) and I haven't brought the issue up with R, who is The Ft. Lauderdale Office Manager (and the one who hired me initially) nor with E, who I do report to but works out of the Seattle Office.
Sure, I could mention that The Protocol Stack From Hell™ says it's thread safe, but still requires a mutex around a few calls. And I could mention that in the portion of the source code to The Protocol Stack From Hell™ we do have access to, I found a global variable being used when creating a packet of data, thus causing all sorts of issues with the multithreaded testing tool I'm writing. I could also mention the fact that The Protocol Stack From Hell™ failed to properly use C++ inheritance by declaring multiple copies of object member variables in subclasses. And I could mention that we paid quite a lot of money to use this wonderful Protocol Stack From Hell™.
[Related to that—I could write about the wonderful Protocol Stack From Hell™ every day, but even I got tired of my rants against control panels. The Protocol Stack From Hell™ is way worse in that we don't have access to the source (well, we have access to some of the source; just not the stuff that's real puzzling) and it's only after browbeating the tech support that we learn just how bad the stuff really is, such as obtaining a textual representation of an error that's stored past a certain structure our code receives that's so totally not documented. And this is a daily occurrence! Don't even get me started on the silliness of their SNMP support!]
So yeah, I could mention all that, but I'm not sure if I can. Or should. You know?
The first reason I'm slacking off had Bunny rolling on the floor laughing, and yes, I could see her reasoning. It has to do with manually updating MyFaceSpaceBook every time I posted here; it just feels like too much work.
Okay, there's nothing compelling me to link to my post at MyFaceSpaceBook, except that I personally view MyFaceSpaceBook as another feed (much like my Atom feed or RSS feed), and thus, I feel compelled to update MyFaceSpaceBook when I update here.
Okay, I'll wait until you stop laughing.
So, given that The Protocol Stack From Hell™ was acting up yesterday, I decided to see what it would take to automatically update MyFaceSpaceBook when I post here.
My God MyFaceSpaceBook doesn't make it easy.
All I want to do is update my status. After several hours of reading,
registering as a developer (and man, I would link to that page, but now I
can't even find the page—sheesh!), manually running API web calls (I didn't know
about openssl s_client -connect host:443
until
tonight—wow!) to tweak settings in my account and learning curl (even though I've
written code to retrieve web pages, it doesn't support HTTPS and
that's a whole mess I'd rather skip for now), I have a
proof-of-concept program that can update my status at MyFaceSpaceBook.
The next step is to integrate the proof-of-concept into the blog engine and have it update MyFaceSpaceBook automatically.
And then, maybe, I'll start posting more often.
I guess I can add “Facebook App Developer” to my résumé now …
I didn't expect my “proof-of-concept” to take as long to integrate into mod_blog as it did, and I do apologize to my friends on MyFaceSpaceBook who got spammed earlier today with, frankly, bizarre posts as I attempted to debug the new code. It's working, but there's still a worrying memory overwrite buried deep in the code that I've yet to fully squash (“fixing” a bug by reordering function calls is not a fix, but a temporary workaround).
But I have accomplished my goal of updating MyFaceSpaceBook automatically. It involved:
- Registering the application at MyFaceSpaceBook (you need to have a MyFaceSpaceBook account to see the page, otherwise I'd link to the page).
- Manually entering the appropriate URL (from the “User-Agent Flow” section) to give my “application” access to my MyFaceSpaceBook account.
- Manually entering another URL to give my “application” access to update the MyFaceSpaceBook status.
- Adding the code to authenticate as an application, which gives me the token required for the last step.
- And finally, adding the code to update my status.
The first three steps took several hours on the development site (which is horribly organized by the way—it's taken about an hour just to find the pages again for this post) and the last two took a couple of hours today to properly add the code (to add configuration options, pass in the actual status text, the final 10% of the code that tends to take 90% to finish).
And with luck, this will post properly.
I can only hope.
Sunday, Debtember 05, 2010
Emergency
I'm waiting in the emergency room as Bunny's thumb is being looked at. Tip, people: be careful when using a mandolin (the slicer, not the musical instrument)—they're sharp!
More details as they happen.
Emergency Part Deux
Bunny just came out to get her coat, and informed me that she may have to get a tetanus shot (ouch). It just depends on if she got one when she broke her arm a few years ago; the hospital is looking up the records now.
Emergency, Part Drei
Bunny got the tetanus shot, and the doctor just arrived, looked at her thumb, and ran off to get the Superglue (no, really—that's what it was designed for initially).
Emergency Part Quatro
Well, the doctor came back with a tube of Superglue, applied a few layers over Bunny's thumb, and now we wait to be discharged.
The crisis is over.
Emergency Part Fem
We're back. We're fine. Bunny is in the garage with a mallet making sure the mandolin won't hurt anyone in the future.
Monday, Debtember 06, 2010
The sane library for decoding DNS packets
Every so often over the past twenty years or so, I've had the need to make DNS queries other than the standard A type; you know, like MX records, or PTR records, but the standard calls available under Unix are lacking. And each time I've investigated other DNS resolving libraries, they've been horrendously complex while at the same time, only resolving a few record types.
So when my attention turned once again towards DNS, I decided that the best course of action was to buckle down and write my own library, which I did over this past Thanksgiving holiday weekend.
My approach to the problem I think is unique. At least, compared to all the other DNS resolving libraries I glanced over, it's unique. First off, I pretty much ignored the actual network aspect. Sure, I have a simple routine to send a query to a DNS server, but the code is dumb and frankly, not terribly efficient, seeing how it opens up a new socket for each request. Also, it only handles UDP based requests, and pretty much assumes there will be no network errors that drop the request (which as far as I can tell, hasn't actually happened).
Instead, I concentrated on the actual protocol aspect of the problem: decoding raw packets into something useful, and taking something useful and encoding it into a raw packet. I tackled encoding first, and it's an elegant solution—you fill in a structure with appropriate data and call an encoding routine which does all the dirty work.
dns_question_t domain;                  /* what we're asking about */
dns_query_t    query;                   /* the "form" we fill out */
dns_packet_t   request[DNS_BUFFER_UDP]; /* the encoded query */
size_t         reqsize;
dns_rcode_t    rc;

domain.name  = "conman.org.";   /* only fully-qualified domain names */
domain.type  = RR_LOC;          /* let's see where this is located */
domain.class = CLASS_IN;

query.id        = random();     /* randomize the ID for security */
query.query     = true;         /* yes, this is a query */
query.rd        = true;         /* we're asking for a recursive lookup */
query.opcode    = OP_QUERY;     /* obviously */
query.qdcount   = 1;            /* we're asking one question */
query.questions = &domain;      /* about this domain */

reqsize = sizeof(request);
rc = dns_encode(request,&reqsize,&query);
if (rc != RCODE_OKAY)
{
  /* Houston, we have a problem! */
}
Once encoded, we can send it off via the network to a DNS server. Now, you may think that's quite a bit of code to make a single query, and yes, it is. And yes, it may seem silly to mark the query as a query, and have to specify the actual operation code as a query, but there are a few other types of operations one can specify and besides, this beats the pants off setting up queries in all the other DNS resolving libraries.
And this approach to filling out a structure, then calling an explicit encoding routine is not something I've seen elsewhere. At best, you might get a library that lets you fill out a structure (but most likely, it's a particular call for a particular record type) but then you call this “all-dancing, all-singing” function that does the encoding, sending, retransmissions on lost packets, decoding and returns a single answer. My way, sure, there's a step for encoding, but it allows you to handle the networking portion as it fits into your application.
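Since the library leaves the transport to the application, even a dirt-simple one will do. Here's a minimal sketch of what the "dumb" one-socket-per-query UDP transport described above might look like; the function name and details here are my own invention for illustration, not the actual library code:

```c
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/time.h>

/* A hypothetical one-shot UDP exchange: open a socket, send the encoded
   query, wait (with a timeout) for a single reply.  No retries, no TCP
   fallback--matching the simple transport described in the text. */
static ssize_t udp_query(const char *server, int port,
                         const void *req, size_t reqlen,
                         void *reply, size_t replymax)
{
  struct sockaddr_in addr;
  struct timeval     tv = { .tv_sec = 1, .tv_usec = 0 };
  ssize_t            got;
  int                fd;

  fd = socket(AF_INET, SOCK_DGRAM, 0);
  if (fd < 0) return -1;

  /* don't wait forever on a lost packet */
  setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port   = htons(port);
  inet_pton(AF_INET, server, &addr.sin_addr);

  if (sendto(fd, req, reqlen, 0, (struct sockaddr *)&addr, sizeof(addr)) < 0)
  {
    close(fd);
    return -1;
  }

  got = recvfrom(fd, reply, replymax, 0, NULL, NULL);
  close(fd);
  return got;  /* -1 on timeout or error */
}
```

In practice you'd hand it the buffer filled by dns_encode() and pass whatever comes back to dns_decode(); a real application could just as easily swap this out for an event loop, retransmission logic, or TCP.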
Anyway, on the backend, once you've received the binary blob back from the DNS server, you simply call one function to decode the whole thing:
dns_packet_t  reply[DNS_BUFFER_UDP];
size_t        replysize;
dns_decoded_t decoded[DNS_DECODEBUF_4K];
size_t        decodesize;
dns_query_t  *result;

/* reply contains the packet; replysize is the amount of data */

decodesize = sizeof(decoded);
rc = dns_decode(decoded,&decodesize,reply,replysize);

if (rc != RCODE_OKAY)
{
  /* Houston, we have another problem! */
}

/* Using the above query for the LOC resource record */

result = (dns_query_t *)decoded;

if (result->ancount == 0)
{
  /* no answers */
}

printf(
  "Latitude:  %3d %2d %2d %s\n"
  "Longitude: %3d %2d %2d %s\n"
  "Altitude:  %ld\n",
  result->answers[0].loc.latitude.deg,
  result->answers[0].loc.latitude.min,
  result->answers[0].loc.latitude.sec,
  result->answers[0].loc.latitude.nw ? "N" : "S",
  result->answers[0].loc.longitude.deg,
  result->answers[0].loc.longitude.min,
  result->answers[0].loc.longitude.sec,
  result->answers[0].loc.longitude.nw ? "W" : "E",
  result->answers[0].loc.altitude
);
Yes, it really is that simple, and yes, I support LOC records, along with 29 other DNS record types (out of 59 defined DNS record types). So I'm fully decoding half the record types. Which sounds horrible (“only 50%?”) until you compare it to the other DNS resolving libraries, which typically only handle around half a dozen records, if that. Even more remarkable is the amount of code it takes to do all this:
Library | Lines of code | Records decoded
--------|---------------|----------------
spcdns  | 1321          | 30
c-ares  | 1452          | 7
udns    | 872           | 6
adns    | 1558          | 13
djbdns  | 1276          | 5
(I should also mention that the 1,321 lines for spcdns include the encoding routine; line counts for all the other libraries exclude such code. I'm too lazy to separate out the encoding routine in my code since it's all in one file.)
How was I able to get such code density? Well, aside from good code reuse (really! Nine records are decoded by one routine; another twelve by just three routines), perhaps it was just the different approach I took to the whole problem.
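That kind of reuse falls out of the wire format itself: several record types carry identical RDATA layouts (NS, CNAME and PTR, for instance, are each just a single domain name), so one decoder can serve them all. A hypothetical sketch of the idea; the names and table here are mine, not spcdns's actual internals:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A few RR type codes from RFC 1035 */
typedef enum { RR_A = 1, RR_NS = 2, RR_CNAME = 5, RR_PTR = 12 } rrtype;

typedef int (*decoder)(const uint8_t *rdata, size_t len, void *out);

/* NS, CNAME and PTR records are all just one domain name, so a single
   routine can decode all three (name decompression elided here). */
static int decode_domain(const uint8_t *rdata, size_t len, void *out)
{
  (void)rdata; (void)len; (void)out;  /* sketch only */
  return 0;
}

/* An A record is exactly four octets of IPv4 address. */
static int decode_a(const uint8_t *rdata, size_t len, void *out)
{
  if (len != 4) return -1;
  memcpy(out, rdata, 4);
  return 0;
}

/* One dispatch maps many record types onto few routines. */
static decoder decoder_for(rrtype t)
{
  switch(t)
  {
    case RR_NS:
    case RR_CNAME:
    case RR_PTR:   return decode_domain;  /* shared decoder */
    case RR_A:     return decode_a;
    default:       return NULL;
  }
}
```

Extending coverage to a new record type with a familiar layout then costs one `case` label, not a whole new routine, which is presumably how thirty record types fit in 1,321 lines.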
Another feature of the code is its lack of memory allocation. Yup, the decoding routine does not once call malloc(); nope, all the memory it uses is passed to it (in the example above, it's the dns_decoded_t decoded[DNS_DECODEBUF_4K] line). I've found through testing that a 4K buffer was enough to decode any UDP-based result. And by giving the decode routine a block of memory to work with, not only can it avoid a potentially expensive malloc() call, it also avoids fragmenting memory and keeps the memory cache more coherent. (When Mark saw an earlier version, he expressed concern about alignment issues, as at the time I was passing in a character buffer; I reworked the code so that the dns_decoded_t type forced the memory alignment. But because an application may be making queries via TCP, which means the results can be bigger, I didn't want to hardcode a size into the type; it would either be too big, thus wasting memory, or too small, which makes it useless. The way I have it now, with the array, you can adjust the memory to fit the situation.)
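The alignment trick is worth a sketch. I don't know exactly how spcdns defines dns_decoded_t, but the general technique is a union of the most strictly aligned scalar types, with the buffer sized by the caller in units of that union:

```c
#include <stddef.h>

/* A union inherits the strictest alignment of its members, so an array
   of these is safe to carve up into arbitrary decoded structures.
   (This is a sketch of the general technique, not necessarily the
   actual definition in spcdns.) */
typedef union dns_decoded_t
{
  long long  ll;
  double     d;
  void      *p;
  size_t     s;
} dns_decoded_t;

/* Size the decode buffer in units of the aligned type: enough units
   to cover 4KiB, rounding up. */
#define DNS_DECODEBUF_4K \
        ((4096 + sizeof(dns_decoded_t) - 1) / sizeof(dns_decoded_t))
```

Because the caller declares `dns_decoded_t decoded[DNS_DECODEBUF_4K]` (or a larger array for TCP-sized replies), the buffer is properly aligned for anything the decoder stores in it, and the library never has to guess how much memory the application can spare.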
On top of that, the code itself is thread safe; there are no globals used, unlike some other protocol stacks I've been forced to use, so there should be absolutely no issues with incorporating the library into an application.
Oh, and I almost forgot the Lua bindings I made for the library. After all, a protocol stack should make things easy, right?
Update on Saturday, October 26th, 2013
It was brought to my attention (thank you, Richard E. Brown) that I should link to the source code so that it's easier to find.
So linked.
Tuesday, Debtember 07, 2010
The Bureaucratic Sinkhole known as the DMV
I find it odd that Florida law requires you to update your driver's license within ten (10) days of moving, yet to get a new driver's license you must provide two (2) proofs of residential address which can include bills, but it's doubtful anyone can get a bill to a new address in less than ten (10) days since moving in. Hmmmm …
It's silliness like this that makes me avoid bureaucratic institutions as much as possible (well, that, and the tedious long waits and my own tendencies toward procrastination). But seeing how my driver's license is set to expire in January of 2011, and I was unable to renew it online, I had little choice but to head on down to the local DMV office.
Last month, I set up an appointment for today at 10:50 am (the earliest date available in all of Lower Sheol, and the only timeslot for today), so at a most ungodly hour Bunny and I headed out to the Pompano Beach office. To ensure I appeased the bureaucratic gods I brought along my Social Security card (original), birth certificate (old enough to be an original notarized copy), car insurance bill (with the current address), bank statement (with the current address), voter's registration card (with the current address) and current car registration (with the current address on it). I almost went to the bank to pull out some cash, but given the ungodly hour, decided against it (and frankly, I forgot to do so yesterday).
We arrived about fifteen minutes before the appointment to a packed DMV office. Ten minutes in line, we were given a number (A1051/b) and told to wait.
Thank God I set the appointment, because it only took fifty minutes (and two blue screens of death on the Windows-run “now serving ticket #X” display) to get called to the back for my picture.
Five minutes later, I have my picture taken (only thing changed from the previous picture—a thicker beard) and pull out my debit card only to have the clerk tsk-tsk-tsk at me.
“We don't accept Visa,” said the clerk, “but we do accept MasterCard, Discover and American Express.”
Sigh. I knew I should have gone to the bank.
Update on Monday, November 10th, 2014
It has come to my attention that some of the links provided above do not go to the “Official Florida DMV,” which may cause some confusion. I am sorry about that, but the “Official Site” does not appear to have the same information in an easily locatable form.
You have been warned.
Some musings on decentralized DNS
Less than twenty-four hours since I released my DNS resolver library and I'm getting requests for Ruby bindings.
Heh.
While I suspect I can write Ruby bindings, it might help if I actually, you know, knew the language first. It might be something to do over this holiday season.
Also, as a joke, I was asked if I could work on that distributed DNS replacement thing, but I think I'll pass on that.
It's not as if DNS isn't already distributed—it is, but there is a central authority overseeing domain names that also helps manage the servers running the top level of DNS—the so-called root servers. So it's probably more accurate to say the project is a “decentralized DNS replacement thing,” and therein lies the problem—how are naming conflicts resolved?
With the current system, it's “first come, first served” when it comes to names (way back in 1998, I tried registering spc.com, but found out that Time Magazine got there first—and they still own it; I then tried conner.com, but there was a hard drive manufacturer that owned that name; now it looks like a bookseller owns that domain) and if there are conflicts, ICANN will step in to mediate the issue.
But in a decentralized DNS system, how is this handled? I want the .area51 TLD, but (hypothetically speaking here) so does Bob Lazar. Who wins? How is this resolved? The government? That's the cause of this attempt to decentralize DNS. The courts? Golden rule there (“he who has the gold, makes the rules,” although it could be said that's the case now).
It's more of a political (or maybe social) issue than a technical issue, and thus, I don't see it going anywhere any time soon.
Thursday, Debtember 09, 2010
Rosarios
Being that Bunny's brother is in town (from Bremerton, Washington) and that it's also Bunny's mom's birthday, we figured we would try a restaurant we saw on “The Best Thing I Ever Ate,” seeing how it was in Boca Raton: Rosarios Ristorante.
It's not quite “if you have to ask, you can't afford it,” but it's close, and the service wasn't quite what we were expecting, but the food … oh the food. The garlic toast (featured on “The Best Thing I Ever Ate”) was good with an assertive garlic taste to it. It pretty much disappeared immediately. The Eggplant Rollatini appetizer I had was the best eggplant dish I've had (not that I've had many, mind you); tender without being mushy, and the eggplant taste was mild and not bitter at all.
While I enjoyed the Veal Contadina (I found the tomato, garlic and basil salad a nice cool counterpoint to the warm veal cutlets), Bunny's Veal Marsala was easily the best dinner served to us, and in retrospect, I think we all would have ordered it had we known. But that's not to say the other dishes were bad: they were all very good; it's just that the Veal Marsala was out of this world.
I was the only one to order dessert, and the Chocolate Mousse Cake was incredible—it wasn't overly sweet and it had a very mild chocolate taste to it, but even so, it was so good we ordered an extra piece to take home. Apparently the coffee (an Italian coffee) was excellent, but since I don't care for coffee, I'll have to take Bunny and her brother's testimony on this.
One other interesting note—I think Bunny, her brother and I were easily the youngest diners there tonight.
Lego—is there nothing it can't do?
Wow!
Very cool!
I want one!
Saturday, Debtember 18, 2010
“Tron Legacy”
Jeff, Bunny and I went to see “Tron Legacy,” the sequel to Disney's 1982 film “Tron.”
I'll say this right now—the film is just as cheesy as the original, but I'm still glad I saw it in the theater. The plot is predictable—Kevin Flynn (Jeff Bridges) makes an incredible discovery in the computer he helped to liberate (in “Tron”) and rebuilt, but one of his helper programs—the very one he wrote, CLU—staged a coup, killed TRON (the program, written by Kevin's friend Alan Bradley (Bruce Boxleitner), that helped Flynn in the original movie) and trapped Flynn in the computer. Twenty years later, Flynn's son Sam (Garrett Hedlund) gets a message from his father and, when he goes to investigate, gets sucked into the computer; with the help of his father and another program, Quorra (Olivia Wilde), he leads yet another revolution to free the computer for the users.
The real draw for this movie (much like the original) is the computer imagery and music. The visuals are absolutely stunning, and the re-imagining of the iconic … um … icons from Tron is beautiful, and that may prove to be its ultimate downfall in time. You see, in “Tron,” it was obvious that the animations were done via computer—they're pretty much the iconic look for “computer graphics” and, strangely enough, for being almost thirty years old, stand up incredibly well. The imagery in “Tron Legacy,” though, is too realistic (exhaust vents on the Recognizers? Really?) and solid looking. Gorgeous, yes, but this is meant to exist inside a computer; does it really need to look … well … realistic?
But on this quibble, time will tell.
Disney also had a tough task in making Jeff Bridges look young (for the opening scenes of the film, and a few flashbacks), and here, I think they failed overall, as the younger Kevin Flynn tended to fall right inside the Uncanny Valley. As CLU it would make sense that his likeness would fall into the Uncanny Valley, but given that none of the other “computer programs” fell there, it just made the “young” Jeff Bridges really disturbing to look at, unlike, say, Olivia Wilde or Beau Garrett, who, I must say, looked absolutely stunning in their Tron costumes.
For a fan of Tron, it's worth seeing. To see stunning visuals, it's worth seeing. For an incredible story and acting … um … not so much. Still a fun film, though.
Sunday, Debtember 19, 2010
Them ol' Lake Lumina Blues
Last night I said I'd drive to the movies.
Only I didn't end up driving. Jeff did.
And therein lies a tale.
As I was pulling out of the driveway, turning the steering wheel suddenly felt a lot like pulling on a brick wall. Obviously the power steering was no longer power steering and there was no way I was driving to the theater (even if it was about two miles away). I was barely able to shove the brick wall steering wheel the other way to get the car back up into the driveway.
Anyway, today being The Game Day™, my friends and I huddled around Lake Lumina, looking into the engine block for reasons why the power steering wasn't.
“That doesn't look good,” said Tom. I looked to where he was pointing.
[Actually, in the graphic above, where it says “flywheel” it should say “pulley,” as that's what the mechanic called it—Sean]
One of the many flywheels driven by the one large serpentine belt winding its way along the side of the engine had shifted a few inches away from the belt. “No, that doesn't look good,” I said.
I pushed on the flywheel pulley, seeing if it would push back into position, but it wasn't budging, much like the steering wheel wasn't budging. After looking up the Blue Book value for my car, the consensus was I might want to drag Lake Lumina behind the shed (or in this case, it might be easier to drag the shed in front of Lake Lumina), put it out of its misery and start visiting some automobile dealerships.
Sigh.
“I Want A Dog For Christmas, Charlie Brown”
Bunny and I caught “I Want a Dog for Christmas, Charlie Brown” on television and I must say, this is one Peanuts holiday special I've never seen before.
It was … odd. Developed after Charles Schulz died, it's a collection of comic strips that have been animated and strung into a somewhat coherent story of Rerun (the younger brother of Lucy and Linus) and his desire to get a dog for Christmas. And yes, it's just barely coherent with much of the film (it was an hour long, which was about 30 minutes too long in my opinion) feeling like a collection of gags, which, yes, it is a collection of gags.
And unlike “A Charlie Brown Christmas,” I don't think this one needs to be seen more than once.
Tuesday, Debtember 21, 2010
An end to the Eternal September?
Okay, now this is odd.
About half a dozen people get notified via email when I post here (it's a feature I dropped three years ago) and I noticed just now that notifications to AOL are bouncing, not because AOL is rejecting my email (a typical occurrence) but … well …
<XXXXXXXXXX@aol.com>: Host or domain name not found. Name service error for
name=aol.com type=A: Host found but no data record of requested type
A few checks, and yes, AOL has no way of receiving email from the outside world at this time.
Could this be the actual end of the Eternal September? (sorry Dad) Or is this just a momentary glitch where heads might end up rolling?
Update a few hours later …
It seems that AOL got its act together and fixed the MX problem. Alas, the Eternal September continues …
Now I understand how I must sound to non-technical people
Good news! There's no need for a new car! Bunny had the car towed to her preferred mechanic, who took one look at the problem and said it was simply “the harmonic balancing stabilizer disintegrated due to prolonged exposure to entropic forces, thus causing the drive belt pulley to slide laterally away from the internal combustion containment chambers, setting up wild oscillations in the drive belt which stressed the recharging subsystem, causing it to improperly function. Shouldn't take more than two, three hours to fix” (or something like that).
“Make it so,” said Bunny (or something like that).
And thus, three hours later, I was there, picking up my car. It also cost less than I expected, given the seriousness of the situation my friends thought the car experienced.
Woot!
Wednesday, Debtember 22, 2010
How a Japanese cookie led to 110 Powerball winners and, by the way, we learn who General Tso was and why we love his chicken so much
Game Day Dinner on Sunday was Chinese take-out food (you know, a tradition for the holiday season). While ordering, the topic of General Tso's Chicken came up, because the menu for the local Chinese restaurant was written in Engrish and came out as General Tao's Chicken.
I normally wouldn't even bother mentioning this, but I just now came across this fantastic talk on General Tso's Chicken (the speaker is also making a documentary about General Tso's Chicken) that goes into not only the history of this classic dish, but makes the point that it is a thoroughly American dish (created by a Taiwanese chef in 1970s New York City).
Oh, about the title? That's also covered in the talk, which reminded me a bit of Connections, that wonderful television series by James Burke.
“True Grit” or “The Dude vs. The Duke”
Bunny and I went to see “True Grit” tonight. I've never seen the original “True Grit,” so I have nothing against which to compare it. All I know is that the original starred John “The Duke” Wayne, and there's a scene where he rides a horse, reins in his mouth and a gun in each hand.
In fact, the only reason I went to see this film was as a Christmas gift to Bunny, who really wanted to see it. Normally, westerns aren't my thing, and remakes less so, as Hollywood is creatively bankrupt, as I'm wont to say.
Only thing was, if Hollywood is creatively bankrupt and is solely going to do remakes, more of this please! This was wonderful! No romanticism here of the Old West. No beautiful people either. It took a few seconds for me to recognize Matt Damon, Barry Pepper is completely unrecognizable (and ugly to boot), Hailee Steinfeld is so plain she's just a shade from being homely, and Jeff Bridges looks about 70 in the role of Rooster Cogburn.
In fact, Jeff Bridges was great in the role of the old, cantankerous, drunk and mean marshal. While he wasn't The Duke, neither was he The Dude (thankfully). The acting overall was great, and it would be a shame if Hailee Steinfeld doesn't get a nomination for leading actress. In fact, the entire theater broke out cheering in one of her scenes (although to divulge more would be a spoiler). I also haven't seen a movie this intense since “Reservoir Dogs.”
Another striking aspect of this film is the dialog—the very formal, Victorian (with a Texas twang) pattern of speech that apparently was in the original book the movie was based upon. And as Bunny remarked, no swear words at all (the PG-13 rating comes from the violence, not the language).
The story is straight-forward though; Mattie Ross (Hailee Steinfeld) hires Rooster Cogburn (Jeff Bridges) to track down Tom Cheney (Josh Brolin) who killed her father and fled town. Mattie insists on accompanying Cogburn on the trip to Indian Territory to make sure she gets her money's worth. Cogburn is intent on not letting her come along. But she does anyway.
Now, not having seen the original, I can't say if this version is better, but it was very good, enough to watch it again.
Saturday, Debtember 25, 2010
Merry Christmas and all that jazz
It was a quiet, laid back day today. Many presents were exchanged (and the “Most Amusing Present” goes to my friend Hoade, who sent me I Could Tell You But Then You Would Have To Be Destroyed By Me: Emblems From The Pentagon's Black World, which is a collection of military patches for top secret projects and yes, Area 51 is referenced quite often, although the most amusing patch (to me) is a blank black patch, with the text in black-on-black letters along the top, “IF I TELL YOU,” and in black-on-black letters across the bottom, “I HAVE TO KILL YOU”—I tell you, it's comedy gold) and much food was eaten.
This evening, Bunny, her mom and I then drove around to look at the various souls spending what's left of their money in electrical bills.
Very pretty, but very expensive no doubt. Not even the cities are doing anything this extravagant this year.
Other than that, I wish all my readers a Merry Christmas and a Happy New Year!
Wednesday, Debtember 29, 2010
Fifty inches of impressionistic inflatable ink
It didn't make “Most Amusing Present,” but it's still darned amusing—my dad got me “The Scream.”
Okay, he didn't give me the actual painting, nor a reprint, but an inflatable Scream:
Fifty inches of impressionistic inflatable ink. And given where the valve was, his expression makes quite a bit of sense (same for Otto the autopilot now that I think about it).
So now he lives at The Ft. Lauderdale Office of The Corporation where we
could definitely use some non-corporate propaganda art to liven
up the space.