Wednesday, February 04, 2015
A silly little file redirection trick under Unix
I'm in the process of writing a regression test for “Project: Sippy-Cup,” and right now I'm concentrating more on writing what I call a “smoke-test”—something that can be run on my development machine after fixing bugs or adding features, so that any obvious problems are “smoked out” before the code hits the version control system.
Like “Project: Wolowizard,” this involves running multiple components. That isn't much of an issue; I have plenty of Lua code to launch a program, and it typically looks like this:
errno   = require "org.conman.errno"
syslog  = require "org.conman.syslog"
process = require "org.conman.process"
fsys    = require "org.conman.fsys"    -- needed for fsys.dup() below

pid,err = process.fork()

if not pid then
  syslog('error',"fork() = %s",errno[err])
  os.exit(process.EXIT.SOFTWARE) -- who knew about /usr/include/sysexits.h?

elseif pid == 0 then
  -- child process

  local stdin  = io.open("/dev/null","r")
  local stdout = io.open("foobar.stdout.txt","w")
  local stderr = io.open("foobar.stderr.txt","w")

  -- --------------------------------------------------------------------
  -- redirect stdin, stdout and stderr to these files.  Once we've done
  -- the redirection, we can close the files---they're still "open" as
  -- stdin, stdout and stderr.  Then we attempt to start the program.  If
  -- that fails, there's not much we can do, so just exit the child
  -- process at that point.
  -- --------------------------------------------------------------------

  fsys.dup(stdin,fsys.STDIN)
  fsys.dup(stdout,fsys.STDOUT)
  fsys.dup(stderr,fsys.STDERR)

  stderr:close()
  stdout:close()
  stdin:close()

  process.exec(EXE,{ "--config" , "config.xml" })
  process.exit(process.EXIT.SOFTWARE)
end
Each program is launched in a similar manner, and if any of them crash,
the testing harness gets notified. Also, once the tests are done, I can
shut down each process cleanly, all under program control. I want this to be
a simple run-me
type command that does everything.
During the testing of the testing program, it is nice to be able
to see the output of the programs being tested. Sure, I have any output
from the programs going to a file, but the problem with that is that it's
hard to watch the output in real time. Upon startup (at least under Unix)
if stdout
(the normal output stream) is a terminal, the output
appears a line at a time; otherwise, the output is “fully
buffered”—meaning it's only actually written when there's around 4K or 8K
worth of output, and if the programs aren't that chatty, you could be
waiting a while if you're constantly checking the output files.
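If you can modify the program under test, Lua's standard setvbuf() method will force line buffering no matter where the output goes; here's a minimal sketch (the loop is just filler standing in for a not-very-chatty program):

io.stdout:setvbuf("line") -- flush on every newline, even when stdout is a file

for i = 1 , 10 do
  io.stdout:write(string.format("tick %d\n",i)) -- appears immediately in the output file
  os.execute("sleep 1")                         -- pretend to do some slow work
end

Of course, that only helps for programs whose source I control.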
But there is a trick—this being Unix, you can redirect the
output to another terminal (or in this modern age, a terminal window). I
open up a spare terminal window (it's easy enough), and run the
w
command to find its device entry:
[spc]lucy:~>w
 20:31:32 up 15 days, 6 min,  3 users,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
spc      pts/0    marvin.roswell.a  20:06   17.00s  0.28s  0.27s joe 2
spc      pts/1    marvin.roswell.a  20:15   15:50   0.03s  0.02s vi process.c
spc      pts/2    marvin.roswell.a  20:31    0.00s  0.01s  0.00s w
[spc]lucy:~>
Here, I can see that the w
command is being run on terminal
device /dev/pts/2
(under Linux, the “/dev/” portion isn't
listed). So all I need to do is redirect stdout
and
stderr
to /dev/pts/2
and the output will appear in
that window, in real time.
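In the Lua harness above, that just means opening the terminal device in place of the output files, so the child-process branch of the earlier fork() code becomes something like this (same org.conman calls as before; the device name, of course, depends on which window you opened):

elseif pid == 0 then
  -- child process, with its output pointed at the spare terminal window

  local stdin = io.open("/dev/null","r")
  local tty   = io.open("/dev/pts/2","w") -- the device the w command reported

  fsys.dup(stdin,fsys.STDIN)
  fsys.dup(tty,fsys.STDOUT) -- both stdout and stderr land in that window
  fsys.dup(tty,fsys.STDERR)

  tty:close()
  stdin:close()

  process.exec(EXE,{ "--config" , "config.xml" })
  process.exit(process.EXIT.SOFTWARE)
end

And because /dev/pts/2 is a terminal, the output is line buffered and shows up as it's written. (Running tty in the spare window prints its device name directly, if you'd rather not fish it out of the w listing.)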
So why do it in this roundabout way? Well, remember, I have several programs running. By opening up multiple terminal windows and directing the output of each program to these windows, the output from each program is kept separated and I can see what's going on. Then, when the testing program is working, I can then go back to writing the output to a file.
Oh, and under Mac OS-X:
[spc]marvin:~>w
20:40 up 21 days, 1:20, 8 users, load averages: 0.01 0.05 0.08
USER TTY FROM LOGIN@ IDLE WHAT
spc console - 14Jan15 21days -
spc s000 - 18:48 32 -ssh XXXXXXXXXXXXXXXXXX
spc s001 - 20:07 23 -bash
spc s002 - 14Jan15 21days syslogintr
spc s003 - 20:07 - w
spc s004 - 20:07 - -ssh lucy
spc s005 - 20:16 7 -ssh lucy
spc s006 - 20:32 - -ssh lucy
[spc]marvin:~>
The “s003” now becomes /dev/ttys003.
Of course, statistics has little to say about Murphy's Law
Everyone knew it was coming. Second-and-1 on the 1-yard line. Marshawn Lynch was waiting in the backfield, poised to do what he was put on this Earth to do: Get a touchdown—this touchdown. The football gods had telegraphed how they wanted the game to end, directing a floating ball straight into Jermaine Kearse's hands. Beast Mode was going to drag the New England team kicking and screaming into the end zone if he had to. But the play call came in, Russell Wilson attempted a doomed pass that Malcolm Butler intercepted, and it was Seattle that punched and screamed its way off the field.
…
That's right. On the 1-yard line, QBs threw 66 touchdowns with no interceptions prior to Wilson's errant toss. Not mentioned: They also scored four touchdowns on scrambles (which Wilson is pretty good at last I checked). That's a 60.9 percent success rate.
Just for comparison's sake, here's how more than 200 runs fared this year in the same situation:
- 125 led to touchdowns.
- 94 failed to score.
- Of those, 23 were for loss of yardage.
- Two resulted in lost fumbles.
So overall, runs do a bit worse than passes (57.1 percent vs. 60.9 percent).
Via Robert Anstett on MyFaceGooglePlusSpaceBook, A Head Coach Botched The End Of The Super Bowl, And It Wasn’t Pete Carroll | FiveThirtyEight
I don't watch much football (if at all), but even I knew that last Seahawks play was not the right call. But actually, it may not have been the most idiotic thing for the Seahawks to do. The article goes deep into the math behind Pete Carroll's call.