The Boston Diaries

The ongoing saga of a programmer who doesn't live in Boston, nor does he even like Boston, but yet named his weblog/journal “The Boston Diaries.”

Go figure.

Thursday, February 22, 2007

Deterministic computers are so passé

I remember the first time I saw the Star Trek: The Next Generation episode “Contagion.” It starts out innocently enough, when the USS Enterprise receives a distress signal from the USS Yamato out in the Neutral Zone. Picard & Co. arrive just in time to see the USS Yamato explode as some Romulan warbirds decloak off the starboard bow.

And that's before the opening credits.

We find out that the USS Yamato had been exploring a planet in the Neutral Zone when they were probed by million-year-old alien technology and their systems started acting up. Geordi managed to download some of the logs from the USS Yamato's computer system before it blew up, and starts analyzing them for clues as to what happened. Meanwhile, the computer system on the USS Enterprise starts to act up, as do the computer systems on the Romulan warbirds.

Okay, pretty standard Star Trek episode here. Where it went downhill for me was with Geordi's epiphany: the computers are infected by an alien computer virus (don't get me started on this trope) via the downloaded log files, the same virus that infected the USS Yamato when it was probed by the million-year-old alien technology (don't get me started on that trope either). At that point, I lost it. What? The Enterprise computer saw executable code in the log files and decided to just execute it? What is this, Enterprise software from Microsoft?

So now the crew is running around without a clue what to do. Picard is trying to negotiate with and surrender to the virus, Worf attempts to wrestle it and gets knocked out, Riker is having trouble seducing it, Data is infected by the computer virus and starts giving fried spam to the crew, and Geordi is confused by the technobabble the virus is throwing at him. Since it doesn't involve the warp engines or the deflector shield, Wesley can't do anything about the virus. And for some odd reason, Dr. Crusher keeps repeating, “Damn it, I'm a doctor, not a junior programmer!”

At this point, I'm thinking, Okay, normal procedure is to reinstall the software on the Enterprise from a known good backup and restart it. But the crew is not doing that. There must be some reason they can't do that, I think. Too many computers, maybe, or the only known good backup is back at Starbase 13. I mean, how do you reboot the Enterprise? Isn't that, like, attempting to reboot the Internet?

So what's the solution the fearless crew of the Enterprise come up with?

Shut down the computer, reload the software from a known good backup, and restart it.

WHAT THE XXXX? I wasted a whole hour on this? It took the crew the entire episode to rediscover what millions of people today know as common sense? What is it with Picard & Co.?

I was reminded of that episode because of Steve Yegge's Pinocchio Problem. His quest for systems that never have to be rebooted, for constantly living, adapting, expanding software/hardware, leads directly to the doom of the USS Yamato, and the near doom of the USS Enterprise.

Okay, I exaggerate a bit.

But that does appear to be the eventual outcome of such a scenario, where the notion of restarting a computer is not normal but is, in fact, a nearly forgotten recovery technique.

It's just one part of a disturbing trend I see among many programmers—the desire to have software emulate biological systems.

Sussman talks about “degeneracy” in biological systems, and how it can emerge by duplicating a section of DNA and allowing the copies to diverge. In programming languages, this might be done by taking a program, copying a section of code, and then changing each caller so it either continues to call the old version or calls a new one. In order to allow broken pieces of code to continue to evolve without destroying the program, you could make callers “prefer” one version over the other, but silently fall back to their non-preferred implementation if the first version didn't work. For example, maybe their preferred version threw an exception, or maybe it started failing some kind of unit test that the caller cares about.
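To make the quoted idea concrete, here is a minimal sketch in Python of that “prefer one version, silently fall back” scheme. The two parse routines and the caller's check are invented purely for illustration:

    def parse_v1(text):
        # the original, battle-tested copy
        return text.split(",")

    def parse_v2(text):
        # the diverged copy, still "evolving": strips whitespace, rejects empties
        fields = [f.strip() for f in text.split(",")]
        if not all(fields):
            raise ValueError("empty field")
        return fields

    def degenerate_call(preferred, fallback, check, *args):
        # call the preferred version; silently fall back if it raises
        # an exception or fails the sanity check the caller cares about
        try:
            result = preferred(*args)
            if check(result):
                return result
        except Exception:
            pass
        return fallback(*args)

    fields = degenerate_call(parse_v2, parse_v1, lambda r: len(r) == 3, "a, b, c")
    print(fields)  # ['a', 'b', 'c'] via parse_v2; parse_v1 is used only if v2 fails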

Here's another idea: generate random segments of code by “connecting the dots”, where by “dot” I mean “type”, or perhaps “function call”. Suppose you have a URL and you want to have a file on disk. If you're lucky, you can search the call graphs of a whole bunch of programs and find some code path that starts with a url and ends with a file. If you're really lucky, that code path will do something appropriate, like downloading the content behind the url. If you took this idea and applied it to all the open source projects in the world, you'd probably have a fair chance of implementing something reasonable, purely by accident. Well, not really by accident—it would actually be by virtue of the fact that you're drawing a random sample from a set of programs that is distributed extremely non-uniformly over the space of all possible programs. Djinn does something like this, but without the benefit of a meaningful dataset of samples to draw from. Haskell probably has an advantage at this kind of thing because it doesn't depend on side effects to determine the meaning of a segment of code.
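And here is a toy, untyped version of that dot-connecting search, treating each known function as an edge from its input type to its output type and searching for a path from URL to File. The registry is made up; Djinn proper works over Haskell types rather than a mined call graph:

    from collections import deque

    REGISTRY = [
        # (function name, input type, output type) -- all hypothetical
        ("http_get",   "URL",   "Bytes"),
        ("decompress", "Bytes", "Bytes"),
        ("write_temp", "Bytes", "File"),
        ("stat",       "File",  "Metadata"),
    ]

    def find_path(src, dst):
        # breadth-first search over the type graph
        queue = deque([(src, [])])
        seen = {src}
        while queue:
            type_, path = queue.popleft()
            if type_ == dst:
                return path
            for name, in_t, out_t in REGISTRY:
                if in_t == type_ and out_t not in seen:
                    seen.add(out_t)
                    queue.append((out_t, path + [name]))
        return None

    print(find_path("URL", "File"))  # ['http_get', 'write_temp']

Whether the chain it finds actually downloads the URL, as opposed to doing something merely type-correct, is exactly the “if you're really lucky” caveat in the quote.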

Combine these two ideas. Generate random code, evolve it by making (fail safe) copies, and mutate it by replacing randomly-selected code paths with randomly-generated code paths that connect the same dots.

Thoughts on Robust Systems
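Putting the two quoted ideas together, a toy mutation step might look like the following: a program is a chain of typed steps, and a mutation splices in another chain, ostensibly mined from other programs, that connects the same two types. Every name below is invented for illustration:

    import random

    # candidate chains between pairs of types, as if mined from other programs
    ALTERNATIVES = {
        ("URL", "File"):    [["http_get", "write_temp"],
                             ["http_get", "decompress", "write_temp"]],
        ("File", "Report"): [["parse", "summarize"],
                             ["parse", "render_html"]],
    }

    def mutate(program, typed_spans):
        # pick a typed sub-chain at random and splice in an alternative
        # that connects the same input/output types
        (start, end), types = random.choice(typed_spans)
        replacement = random.choice(ALTERNATIVES[types])
        return program[:start] + replacement + program[end:]

    program = ["http_get", "write_temp", "parse", "summarize"]
    spans   = [((0, 2), ("URL", "File")), ((2, 4), ("File", "Report"))]
    print(mutate(program, spans))
    # e.g. ['http_get', 'decompress', 'write_temp', 'parse', 'summarize']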

I have nothing against Kim, but her post was the tipping point for this entry. What is this fascination with evolving code, or with emulating biological systems in software development? Writing software is already difficult enough on purely deterministic machines (which is why I like computers in the first place: they're deterministic!), and yet programmers want to make things even more difficult for themselves?

Here's an article about Dr. Adrian Thompson, who “evolved” a circuit (on a programmable chip) to detect two different tones.

Although the configuration program specified tasks for all 100 cells, it transpired that only 32 were essential to the circuit's operation. Thompson could bypass the other cells without affecting it. A further five cells appeared to serve no logical purpose at all—there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.

It appears that evolution made use of some physical property of these cells (possibly a capacitive effect or electromagnetic inductance) to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution.

However it works, Thompson's device is tailor-made for a single 10 by 10 array of logic cells. But how well would that design travel? To test this, Thompson downloaded the fittest configuration program onto another 10 by 10 array on the FPGA. The resulting circuit was unreliable. Another individual from the final generation of circuits did work, however. Thompson thinks it will be possible to evolve a circuit that uses the general characteristics of a brand of chip rather than relying on the quirks of a particular chip. He is now planning to see what happens when he evolves a circuit design that works on five different FPGAs.

… If evolutionary design fulfils its promise, we could soon be using circuits that work in ways we don't understand. And some see this as a drawback. “I can see engineers in industry who won't trust these devices,” says Thompson. “Because they can't explain how these things work, they might be suspicious of them.”

If the chips ever make their way into engine control systems or medical equipment we could well face an ethical dilemma, says Inman Harvey, head of the Centre for Computational Neuroscience and Robotics. “How acceptable is a safety-critical component of a system if it has been artificially evolved and nobody knows how it works?” he asks. “Will an expert in a white coat give a guarantee? And who can be sued if it fails?”

Creatures from Primordial Silicon
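For what it's worth, the “evolution” in the article is a fairly ordinary genetic algorithm. Here is a bare-bones sketch; the fitness function below is a placeholder, since in Thompson's experiment fitness came from loading each genome onto the physical chip and measuring how well it separated the two tones, which is precisely how the analog quirks crept in:

    import random

    GENOME_BITS = 100   # one bit per logic cell, loosely following the article
    POPULATION  = 50
    GENERATIONS = 200

    def fitness(genome):
        # placeholder: reward matching an arbitrary target pattern; the real
        # measure was tone discrimination on actual hardware
        target = [i % 2 for i in range(GENOME_BITS)]
        return sum(1 for g, t in zip(genome, target) if g == t)

    def mutate(genome, rate=0.02):
        # flip each bit with a small probability
        return [1 - bit if random.random() < rate else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POPULATION // 2]   # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]

    print("best fitness:", max(map(fitness, population)), "out of", GENOME_BITS)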

“We'll do extensive unit tests,” seems to be the mantra of these organic programmers. I guess they haven't heard of program verification (to be fair, I can't even verify my own software, but on the other hand, neither do I randomly sling code together and hope it works). Why do so many programmers think evolution makes for good design?

This “evolving” or “biological” software stuff scares me, not because it'll lead to computers taking over the world, but because such systems will fail in new and spectacular ways.
