Sunday, January 05, 2025
Security Theater
Also, Linux is getting a landlock thing, which sounds maybe a bit like unveil. Are they likewise deluded, or maybe there's something useful about this class of security thingymabobber, especially with “defense in depth” in mind?
An aspect I think you are discounting is the effort required to implement the mitigations.
While pledge() and unveil() are simple to use, their implementation is anything but. Just from reading the man pages, it appears there are exceptions, and then exceptions to the exceptions, that must be supported.
What makes Linux or OpenBSD different than other pieces of software, like openssl?
Sure, such things help overall but as you state, there are tradeoffs—and a big one I see is adding complexity to an already complex system. And in my experience, security makes it harder to diagnose issues (one example from work—a piece of network equipment was “helpfully” filtering network traffic for exploits, making it difficult to test how our software behaves properly, you know, in the absence of such technology).
A different take is that pledge and unveil, along with the various other security mitigations, hackathons, and so forth, are a good part of a healthy diet. Sure, you can still catch a cold, but it may be less bad, or have fewer complications.
I also think you are discounting the risk compensation that this may cause. With all these mitigations, what incentives are there for a programmer to be careful in writing code? One area I think we differ in is just how much of a crutch such technology becomes.
If you don't want that defense in depth, eh, you do you.
It's less that I don't want defense in depth (and it's sad to live in a world where that needs to be the default stance) but that you can do everything “by the book” and still get blindsided. I recall the time in the early 90s when I found myself logged into the university computer I used and saw myself also logged in from Russia, all because a Unix workstation in a different department down the hall had no root password and was running a program sniffing the network (for more perspective—at the time the building was wired with 10-Base-2, also known as “cheap-net,” in which all traffic is transmitted to all stations, and the main campus IT department was more concerned with its precious VAX machine than supporting departments running Unix).
My first encounter with the clown show that is “computer security” came in the late 90s. At the time, I was working at a small web-hosting company when a 500+ page report was dumped on my desk (or rather, a large PDF file in my email) with the results of a “PCI compliance scan” on our network. It was page after page of “Oh My God! This computer has an IP address! This computer responds to ping requests! Oh My God! This computer has a web site on it! And DNS entries! Oh My XXXXXXX God! You handle email!”
For. Every. Single. Web. Site. And. Computer. On. Our. Network.
It was such an obviously low-effort report with so much garbage that it was difficult to pull out the actual issues with our network. You know what would have been nice? Recognition that we were a web-hosting company, in addition to handling email and DNS for our customers. Maybe a report broken down by computer, maybe in a table format like:
| IP address | protocol/port | port name | notes |
|---|---|---|---|
| 192.0.2.10 | ICMP echo | ping | see Appendix A |
| | TCP port 22 | SSH | UNEXPECTED—see Appendix D |
| | TCP port 25 | SMTP | Maybe consolidate email to a single server—see Appendix B |
| | TCP port 53 | DNS | DNS queries resolve—see Appendix C |
| | UDP port 53 | DNS | DNS queries resolve—see Appendix C |
| | TCP port 80 | HTTP | |
| | TCP port 443 | HTTPS | |
| 192.0.2.11 | ICMP echo | ping | see Appendix A |
| | TCP port 22 | SSH | UNEXPECTED—see Appendix D |
| | TCP port 25 | SMTP | Maybe consolidate email to a single server—see Appendix B |
| | TCP port 53 | DNS | DNS queries resolve—see Appendix C |
| | UDP port 53 | DNS | DNS queries resolve—see Appendix C |
| | UDP port 69 | TFTP | UNEXPECTED—see Appendix D |
| | TCP port 80 | HTTP | |
| | TCP port 443 | HTTPS | |
Where Appendix A could explain why supporting ping is questionable, but allowable, Appendix B could explain the benefits of consolidating email on a machine that doesn't serve email, and Appendix C could explain the potential data leaks of a DNS server that resolves non-authoritative domains, which in our case was the real issue with our scan but was buried in just a ton of nonsense results with the assumption that we have no clue what we're doing (at least, that's how I read the 500+ page report).
The hypothetical report above shows SSH being open on the boxes—fair enough. A common security measure is to have an “SSH jump server” that is specifically hardened, exposing SSH on only one host, with the rest accepting SSH connections only on a (preferably) separate “management” interface with private IP addresses. And oh, we're running TFTP on a box—again, we should probably have a separate system on a “management” interface running TFTP to back up our router configs.
But such a measured, actionable report takes real work to generate. It's much, much easier to just dump a raw network scan with scary jargon.
And since then, most talk of “computer security” has, in my experience, been mostly of the breathless “Oh My God You're Pwned!” scare tactic variety.
My latest encounter with “computer security” came a few years ago at The Ft. Lauderdale Office of the Corporation, when our new Overlords wanted to change how we did things. The CSO visited and informed us that they were going to change how we did security, and in the process make our jobs much more difficult. It turns out it wasn't because our network or computers were insecure—no! Our network had a higher score (according to some networking scoring company—think of the various credit scoring companies but for corporate networks) than our new parent company (almost a perfect score). No, it came down to “that's not how we do things. We're doing it, our way!” And “their way” was just checking off a list of boxes on some list as cheaply as possible.
I think another way we differ is in how much we think “computer security” has become a cargo cult.
Update on Monday, January 6th, 2025
This thread on Lobsters is a perfect example of the type of discussion I would like to see around security. Especially on-point is this comment: “… the [question] I was actually asking: ‘Why is it dangerous, so I can have a better mental model of danger in the future?’”
Saturday, January 04, 2025
It's still cargo cult computer security
My first question to you, as someone who is, shall we say, “sensitive” to security issues, why are you exposing a network based program to the Internet without an update in the past 14 years?
Granted, measures such as ASLR and W^X can make life more difficult for an attacker, and you might notice w3m crashing as the attackers try to get the stars to line up for their ROP gadget to work as you (or some automation) try to download a malicious page over and over. Or, you could get unlucky and they are now running whatever code they want, or reading all your files.
I have my own issues with ASLR (I think it's the wrong thing to do—much better would have been to separate the stack into two, a return stack and a parameter (or data) stack, but I suspect we won't ever see such an approach because of the entrenchment of the C ABI) so I won't get into this.
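To make that aside concrete: on the conventional C stack, local buffers and the saved return address live in the same frame, which is what both ASLR and the split-stack idea are reacting to. A minimal, deliberately unsafe sketch (hypothetical code, not from w3m or any real program):

```c
/* Deliberately unsafe sketch: buf and the saved return address share one
 * stack frame, so writing past the end of buf overwrites the address this
 * function returns to.  ASLR only makes useful overwrite values harder to
 * guess; a separate return stack would move the target out of reach of the
 * data entirely. */
#include <string.h>

void handle_input(const char *untrusted)
{
    char buf[64];
    strcpy(buf, untrusted);   /* no bounds check: anything past 64 bytes
                                 spills into the saved frame pointer and
                                 the return address */
}
```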
What I would like to see is how opening a text editor with the contents of an HTML <TEXTAREA> could be attacked. What are the actual attack surfaces? And no, I won't accept “just … bad things, man!” as an answer. What, exactly?

Where is your formal verification for the lack of errors?
I did not assert the code was free of error. I was asking for examples of actual attacks.
Otherwise, there is some amount of code executed to make that textarea work, all of which is the “actual attack surface”. If you look at the CVE for w3m (nevermind the code w3m uses from SSL, curses, iconv, intl, libc, etc.) one may find:
- Format string vulnerability in the inputAnswer function in file.c in w3m before 0.5.2, when run with the dump or backend option, allows remote attackers to execute arbitrary code via format string specifiers in the Common Name (CN) field of an SSL certificate associated with an https URL.
- w3m before 0.3.2.2 does not properly escape HTML tags in the ALT attribute of an IMG tag, which could allow remote attackers to access files or cookies.
- Buffer overflow in w3m 0.2.1 and earlier allows a remote attacker to execute arbitrary code via a long base64 encoded MIME header.
Was that so hard?
The first bug you mention, the “format string vulnerability” seems to be related to this one-line fix (and yes, I did download the source code for this):
```diff
@@ -1,4 +1,4 @@
-/* $Id: file.c,v 1.249 2006/12/10 11:06:12 inu Exp $ */
+/* $Id: file.c,v 1.250 2006/12/27 02:15:24 ukai Exp $ */
 #include "fm.h"
 #include <sys/types.h>
 #include "myctype.h"
@@ -8021,7 +8021,7 @@ inputAnswer(char *prompt)
         ans = inputChar(prompt);
     }
     else {
-        printf(prompt);
+        printf("%s", prompt);
         fflush(stdout);
         ans = Strfgets(stdin)->ptr;
     }
```
It would be easy to dismiss this as a rookie mistake, but I admit, it can be hard to use C safely, which is why I keep asking for examples and, in some cases, even a proof-of-concept, so others can understand how such attacks work, and how to mitigate them.
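Since I keep asking for proofs-of-concept, here is roughly what one looks like for this class of bug. This is a minimal sketch with hypothetical input, not w3m code; the dangerous call is left commented out:

```c
#include <stdio.h>

int main(void)
{
    /* Pretend this string arrived over the network, e.g. the CN field of
     * an SSL certificate. */
    const char *attacker_controlled = "CN=%x.%x.%x.%x.%n";

    /* Wrong: the attacker's %x specifiers walk up the stack and leak
     * whatever values are there, and %n writes the count of bytes printed
     * so far through a pointer pulled off the stack, corrupting memory.
     * Uncommenting this is how the crash (or worse) happens. */
    /* printf(attacker_controlled); */

    /* Right: the input is treated purely as data, which is exactly what
     * the one-line w3m fix above does. */
    printf("%s\n", attacker_controlled);
    return 0;
}
```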
But just keep crying pledge() and see how things improve.
The second bug you mentioned seems to be CVE-2002-1335, which is 23 years old by now, and none of the links on that page show any details about this bug. I also fail to see how this could lead to an “arbitrary file access” back to the attacker unless there's some additional JavaScript required.
The constant banging on the pledge() drum does nothing to show how such an attack works so as to educate programmers on what to look for and how to think about mitigations.
When I asked “What are the actual attack surfaces?” I actually meant that. How does this lead to an “arbitrary file access?” It always appears to be “just assume the nukes have been launched” type of rhetoric. It doesn't help educate us “dumb” programmers. Please, tell me, how is this exploitable? Or is that forbidden knowledge, not to be given out for fear it will be used by the less well-intentioned?
This is the crux of my frustration here—all I see is “programs bad, mmmmmmkay?” and magic pixie dust to solve the issues.
I've had to explain to programmers in a well regarded CSE department recently why their code was … sub-optimal. Less polite words could be used. They were running remote, user-supplied strings through a system(3) call, and it took a few emails to convince them that this was kind of bad.
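For what it's worth, the fix for that class of bug is mechanical: never hand a user-supplied string to a shell. A minimal sketch with hypothetical names (“convert” standing in for whatever program was being invoked):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Bad: the shell interprets the string, so a "filename" such as
 * "foo; rm -rf ~" runs commands.
 *
 *     char cmd[1024];
 *     snprintf(cmd, sizeof(cmd), "convert %s out.png", user_supplied);
 *     system(cmd);
 */

/* Better: the string is passed as a single argv element and no shell is
 * involved at all. */
static int convert_file(const char *user_supplied)
{
    pid_t pid = fork();
    if (pid == -1)
        return -1;
    if (pid == 0) {
        execlp("convert", "convert", user_supplied, "out.png", (char *)NULL);
        _exit(127);               /* exec failed */
    }

    int status;
    if (waitpid(pid, &status, 0) == -1)
        return -1;
    return status;
}
```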
And I can bitch about having to teach operations how to configure syslog and “no, we can't have a single configuration file for two different geographical sites and besides, we maintain the configuration files, not you!” so this cuts both ways.
Moreover, it's fairly simple to pledge and unveil a process to remove classes of system calls (such as executing other programs) or remove access to swathes of the filesystem (so an attacker will have a harder time to run off with your SSH keys).
…
And how, exactly, is adding pledge and unveil onerous? …
Easy, huh? The man page doesn't say anything about limiting calls to open(). It appears that is handled by unveil(), which doesn't seem all that easy to me:
… Directories are remembered at the time of a call to unveil(). This means that a directory that is removed and recreated after a call to unveil() will appear to not exist.

…

unveil() use can be tricky because programs misbehave badly when their files unexpectedly disappear. In many cases it is easier to unveil the directories in which an application makes use of files.

unveil(2) - OpenBSD manual pages
To me, I read “in some cases, code may be difficult to debug.”
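For reference, this is roughly what a typical unveil() setup looks like. A minimal sketch (OpenBSD-only, untested here; the paths are hypothetical):

```c
#include <unistd.h>
#include <err.h>

int main(void)
{
    /* Only these parts of the filesystem remain visible to the process. */
    if (unveil("/tmp/downloads", "rwc") == -1)   /* read/write/create */
        err(1, "unveil");
    if (unveil("/etc/resolv.conf", "r") == -1)   /* read-only */
        err(1, "unveil");

    /* Lock the view: no further unveil() calls are allowed. */
    if (unveil(NULL, NULL) == -1)
        err(1, "unveil");

    /* From here on, open(2) outside the unveiled paths fails (ENOENT or
     * EACCES), e.g. an attempt to read ~/.ssh/id_ed25519, and the man page
     * caveat above applies: remove and recreate /tmp/downloads and it
     * appears not to exist. */
    return 0;
}
```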
And while it may be easy for you to add a call to unveil() or pledge(), I assure you that it's not at all easy for the kernel to support such calls.
Now, in addition to all the normal Unix checks that need to happen (and that have, in the past, gone wrong on occasion), a whole slew of new checks need to be added, which complicates the kernel.
Just as an example, pass the “dns” promise to pledge() and the calls to socket(), connect(), sendto() and recvfrom() are disabled until the file /etc/resolv.conf is opened. Then they're enabled, but probably only to allow UDP port 53 through. Unless the “inet” promise is given, in which case socket(), connect(), etc. are allowed. That's … a lot of logic to puzzle through.
And as someone who doesn't trust programmers (as you stated), this isn't a problem for you?
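To see that logic from the application side, here is roughly what using the “dns” promise looks like. A minimal sketch (OpenBSD-only, untested here):

```c
#include <sys/socket.h>
#include <err.h>
#include <netdb.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "stdio" for printf and friends, "dns" for name resolution only;
     * no general "inet" promise. */
    if (pledge("stdio dns", NULL) == -1)
        err(1, "pledge");

    struct addrinfo hints = { .ai_family = AF_UNSPEC,
                              .ai_socktype = SOCK_STREAM };
    struct addrinfo *res;

    /* Allowed: the kernel treats this socket activity as DNS resolution
     * (the /etc/resolv.conf dance described above). */
    if (getaddrinfo("www.openbsd.org", NULL, &hints, &res) == 0) {
        printf("resolved\n");
        freeaddrinfo(res);
    }

    /* Not allowed: a general-purpose socket outside the resolver would
     * kill the process, because "inet" was never promised. */
    /* socket(AF_INET, SOCK_STREAM, 0); */

    return 0;
}
```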
As a programmer, it can also make it hard to reason about some scenarios—like, if I use the “stdio” promise, but not the “inet” promise, can I open files served up by NFS? I mean, probably, but “probably” isn't “yes,” and there are a lot of programming sins committed because “it worked for me.”
I did say that using pledge() helps, but it doesn't solve all attacks. For instance, there's no special promise I can give to pledge() that states “I will not send escape codes to the terminal,” even though that's an attack vector, especially if the terminal in question supports remapping the keyboard! Any special recommendations for that attack? Do I really need to embed \e[13;"rm -rf ~/*"p to drive the point home?
Also (because I do not use OpenBSD) do I still have access to every system call after this?
pledge( " stdio rpath wpath cpath dpath tmppath inet mcast" " fattr chown flock unix dns getpw sendfd recvfd" " tape tty proc exec prot_exec settime ps vminfo" " id pf route wroute audio video bpf unveil" " error");
If not, why not? That's a potential area to look for bugs.
How, exactly, is adding pledge and unveil to w3m “helplessness”, and then iterating on that design as one gains more experience?
As you said yourself: “I do not trust programmers (nor myself) to not write errors, so look to pledge and unveil by default, especially for ‘runs anything, accesses remote content’ browser code.” What am I to make of this, except for “Oh, all I have to do is add pledge() and unveil() to my program, and then it'll be safe to execute!”
In my opinion, banging on the pledge() drum doesn't help educate programmers on potential problems. It doesn't help programmers write code that is anal about dealing with input. It doesn't help programmers think about potential exploits. It just punts the problem to magic pixie dust that will supposedly solve all the problems.
… It took much less time to add to w3m than writing this post did; most of the time for w3m was spent figuring out how to disable color support, kill off images, and to get the CFLAGS aright. It is almost zero maintenance once done and documented.
What, exactly, is your threat model? Because that's … I don't know what to say. You remove features just because they might be insecure. I guess that's one way to approach security. Another approach might be to cut the network cable.
I only ask as I was hacked once. Bad. Lost two servers (file system wiped clean), almost lost a third. And you know what? Not only did it not change my stance around computer security, there wasn't a XXXXXXXXXX thing I could do about it either! It was an inside job. Is that part of your threat model?
By the way, /usr/bin/vi -S is used to edit the temporary file. This does a pledge so that vi cannot run random programs.
But what's stopping an attacker from adding commands to your ~/.bashrc file to do all the nasty things it wants to do the next time you start a shell?
That's the thing—pledge() by itself won't stop all attacks, but dismissing the question of “what are the attack surfaces?” can lead one to believe that all that's needed is pledge(). It leads (in my opinion) to a false sense of security.
It is rather easy to find CVE for errors in HTML parsing code, besides the “did not properly escape HTML tags in the ALT attribute” thing w3m was doing that lead to arbitrary file access.

CVE-2021-23346, CVE-2024-52595, CVE-2022-0801, CVE-2021-40444, CVE-2024-45338, CVE-2022-24839, CVE-2022-36033, CVE-2023-33733, …
You might want to be more careful in the future, as one of those CVEs you listed has nothing to do with parsing HTML. I'll leave it as an exercise for you to find which one it is.
I also get the feeling that we don't see eye-to-eye on this issue, which is normal for me. I have some opinions that are not mainstream, are quite nuanced, and thus, aren't easy to get across (ask me about defensive programming sometime).
My point with all this—talk about computer security is all cargo cultish and is not helping with actual computer security. And what is being done is making other things way more difficult than they need to be.
Friday, January 03, 2025
It's more like computer security theater than actual security
In w3m, to edit a form textarea,

```c
    ...
    f = fopen(tmpf, "w");
    if (f == NULL) {
        /* FIXME: gettextize? */
        disp_err_message("Can't open temporary file", FALSE);
        return;
    }
    if (fi->value)
        form_fputs_decode(fi->value, f);
    fclose(f);
    if (exec_cmd(myEditor(Editor, tmpf, 1)->ptr))
        goto input_end;
    ...
```
exec_cmd is some setup and teardown around a system(3) call with the user's editor and the temporary file. This is not good for security, as it allows w3m to execute by default anything. One tentative improvement would be to only allow w3m to execute a wrapper script, something like

```sh
#!/bin/sh
exec /usr/bin/vi -S "$@"
```

or some other restricted editor that cannot run arbitrary commands nor read from ~/.ssh and send those files off via internet connections. This is better, but why not disallow w3m from running anything at all?

```c
if (pledge("cpath dns fattr flock inet proc rpath stdio tty unveil wpath",
           NULL) == -1)
    err(1, "pledge");
```

Here we need the “proc” (fork) allow so downloads still work, but “exec” is not allowed. This makes it a bit harder for attackers to run arbitrary programs. An attacker can still read various files, but there are also unveil restrictions that very much reduce the access of w3m to the filesystem. An attacker could make DNS and internet connections, though fixing that would require a different browser design that better isolates the “get stuff from the internet” parts from the “try to parse the hairball that is HTML” code, probably via imsg_init(3) on OpenBSD, or differently complicated to download to a directory with one process and to parse it with another. That way, a HTML security issue would have a more difficult time in getting out to the interwebs.
What I find annoying is the lack of any type of attack as an example. It's always “data from da Intarwebs bad!” without regard to how it's bad. The author just assumes that hackers out there have some magical way of executing code on their computer just by the very act of downloading a file. The assumption that some special sequence of HTML can open a network connection to some control server in Moscow or Beijing or Washington, DC and siphon off critical data is just … I don't know, insane to me. JavaScript, yes, I can see that happening. But HTML?
And then I recall the time that Microsoft added code to their programs to scan JPEG images for code and automatically execute it, and okay, I can see why maybe the cargo cult security mumbo-jumbo exists.
What I would like to see is how opening a text editor with the contents of an HTML <TEXTAREA> could be attacked. What are the actual attack surfaces? And no, I won't accept “just … bad things, man!” as an answer. What, exactly?
One possible route would be ECMA-35 escape sequences, specifically the DCS and OSC sequences (which could be used to control devices or the operating system respectively), although I don't know of any terminal emulator today that supports them. Microsoft did add an escape sequence to reprogram the keyboard (ESC “[” key-code “;” string “p”) but that's in the “private use” area set aside for vendors.
This particular attack vector might work if the editor is running under a terminal or terminal emulator that supports it, and the editor in question doesn't remove or escape the raw escape sequence codes. I tried a few text editors on the following text (presented as a hexadecimal dump to show the raw escape sequence):
```
00000000: 54 68 69 73 20 69 73 20 1B 5B 34 31 6D 72 65 64  This is .[41mred
00000010: 1B 5B 30 6D 20 74 65 78 74 2E 0A 0A              .[0m text...
```
None of the editors I tried (which are all based on the command line and thus, use escape sequences themselves to display text on a terminal) displayed red text. The escape sequence wasn't run as an escape sequence.
Another attack might be embedding editor-specific commands within the text. This is a common aspect of some editors, like vi. And I can see this being concerning, especially if the commands one can set in a text file include accessing arbitrary files or running commands.
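As a concrete illustration of that second class, vim-style modelines are exactly this sort of in-band command channel: settings embedded in the text that the editor applies when the file is opened (a hypothetical example, shown here as a C comment):

```c
/* If this file were downloaded and then opened in vim with 'modeline'
 * enabled, the line below would be parsed and its settings applied
 * automatically.  Modern vim restricts modelines to a small, sandboxed set
 * of options precisely because this is attacker-influenced input. */

/* vim: set ts=4 sw=4 tw=72: */
```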
A third attack could be an attempt to buffer overflow the editor, either by sneaking in a huge download (like, say, a file with a single one-gigabyte line) or erroneous input (for example, if the editor expects a line to end with a CR and LF, send an LF then CR). Huge input is a bit harder to hide, but subtle erroneous input could cause issues.
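A minimal sketch of that third class (hypothetical code, not taken from any real editor): a fixed-size line buffer plus an unchecked copy is all it takes for an absurdly long downloaded line to write past the end of a buffer.

```c
#include <stdio.h>
#include <string.h>

struct line {
    char text[256];
};

/* Deliberately buggy: fine for "normal" files, broken the moment a line is
 * longer than 256 bytes.  Using fgets() directly into the destination with
 * its real size, or a growable buffer via getline(3), avoids the overflow. */
int read_line(FILE *fp, struct line *out)
{
    char tmp[4096];

    if (fgets(tmp, sizeof(tmp), fp) == NULL)
        return -1;
    strcpy(out->text, tmp);   /* overflow: tmp can hold up to 4095 bytes */
    return 0;
}
```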
This is why I feel such articles are bad—by not talking about actual threats, they enforce a form of “learned helplessness.” Everything is dangerous and we must submit to onerous measures to keep ourselves safe. Sprinkling calls to pledge() isn't the answer. Yes, it helps, but not thinking critically about security leads to a worse experience overall, such as having to manually edit a file, which would still be subject to all three of the above attacks anyway. By identifying the attacks, a much better way to mitigate them could be found (in this case, an editor that strips out escape sequences and does not support embedded commands; and yes, I know I have a minority opinion here—sigh).
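A sketch of the mitigation suggested above, assuming a plain ASCII text stream (a real editor would need to be UTF-8 aware): strip the escape character and every other control byte before the text ever reaches the terminal.

```c
#include <ctype.h>
#include <stdio.h>

/* Copy input to output, dropping ESC (0x1B) and all other control bytes
 * except newline and tab.  Without the leading ESC byte, a sequence such as
 * ESC [ 13 ; "rm -rf ~/*" p arrives as harmless literal text instead of a
 * keyboard-remapping command. */
static void sanitize(FILE *in, FILE *out)
{
    int c;

    while ((c = fgetc(in)) != EOF) {
        if (c == '\n' || c == '\t' || isprint((unsigned char)c))
            fputc(c, out);
    }
}

int main(void)
{
    sanitize(stdin, stdout);
    return 0;
}
```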
And to address the bit about parsing HTML—is parsing really that fraught with danger? All you need to do to parse HTML is follow the explicit (and excruciatingly detailed) HTML5 specification. How hard can that be?
Wednesday, January 01, 2025
Guess who made predictions for 2025? Can you say “Nostradamus?” I knew you could
Of course Nostradamus has predictions for 2025! When hasn't he had predictions for any given year?
Sigh.
So far, checking a few of the articles, not many have bothered to print the quatrains in question, and the one article (which I hesitate to link to) I found that displays a translation of the quatrain never bothered to list which quatrain it is.
And because the quatrains listed are translated, it's hard to locate the original in Nostradamus' writings.
For instance, this quatrain:
When the coin of leather rules,
The markets shall tremble,
The crescent and brass unite,
Gold and silver lose their value.
Doesn't seem to exist at all. Checking the version of Nostradamus at Project Gutenberg:
XXV.
French.
Par guerre longue tout l’exercite espuiser,
Que pour Soldats ne trouveront pecune,
Lieu d’Or, d’Argent cair on viendra cuser,
Gaulois Ærain, signe croissant de Lune.

English.
By a long War, all the Army drained dry,
So that to raise Souldiers they shall find no Money,
Instead of Gold and Silver, they shall stamp Leather,
The French Copper, the mark of the stamp the new Moon.

ANNOT.
This maketh me remember the miserable condition of many Kingdoms, before the west-Indies were discovered; for in Spain Lead was stamped for Money, and so in France in the time of King Dagobert, and it seemeth by this Stanza, that the like is to come again, by reason of a long and tedious War.
This is the only quatrain where “leather” appears. And there's nothing in that quatrain about gold and silver losing their value. Moving on, another quatrain from the article I was able to locate:
4. The Surge of Natural Disasters
Nostradamus warned of a year marked by hurricanes, tsunamis, and earthquakes, driven by geological instability, solar activity, and climate change. His depiction of “hollow mountains” and poisoned waters paints a grim picture of devastation, particularly in vulnerable regions like the Amazon rainforest.
“Garden of the world near the new city,
In the path of the hollow mountains:
It will be seized and plunged into the Tub,
Forced to drink waters poisoned by sulfur.”

The confluence of these natural calamities could accelerate global efforts to combat climate change and reimagine disaster resilience. Yet, the cost in lives, resources, and environmental destruction underscores the urgent need for collective action before catastrophe becomes routine.
And let's see what the commentary from the 1600s said about this quatrain:
XLIX.
French.
Jardin du Monde aupres de Cité neufve,
Dans le chemin des Montagnes cavées,
Sera saisi & plongé dans la Cuve,
Beuvant par force eaux Soulphre envenimées.

English.
Garden of the World, near the new City,
In the way of the digged Mountains,
Shall be seized on, and thrown into the Tub,
Being forced to drink Sulphurous poisoned waters.

ANNOT.
This word Garden of the World, doth signifie a particular person, seeing that this Garden of the World was seized on and poisoned in a Tub of Sulphurous water, in which he was thrown.
The History may be this, that Nostradamus passing for a Prophet and a great Astrologer in his time, abundance of people came to him to know their Fortunes, and chiefly the Fathers to know that of their Children, as did Mr. Lafnier, and Mr. Cotton, Father of that renowned Jesuit of the same name, very like then that Mr. du Jardin having a son did ask Nostradamus what should become of him, and because his son was named Cosmus, which in Greek signifieth the World, he answered him with these four Verses.
Garden of the World, for Cosmus of the Garden, In his travels shall be taken hard by the New City, in a way that hath been digged between the Mountains, and there shall be thrown in to a Tub of poisoned Sulphurous water to cause him to die, being forced to drink that water which those rogues had prepared for him.
Those that have learned the truth of this History, may observe it here. This ought to have come to pass in the last Age, seeing that the party mentioned was then born when this Stanza was written, and this unhappy man being dead of a violent death, there is great likelyhood, that he was not above forty years old.
There is another difficulty, to know which is that new City, there being many of that name in Europe, nevertheless the more probable is, that there being many Knights of Maltha born in Provence (the native Countrey of our Author) it may be believed that by the new City he meaneth the new City of Maltha called la Valete, hard by which there is paths and ways digged in the Mountains, which Mountains are as if it were a Fence and a Barricado against the Sea, or else this Cosmus might have been taken by Pyrats of Algiers, and there in the new City of the Goulette be put to death in the manner aforesaid.
Nothing about it being 2025 when this comes to pass. Nothing about hurricanes, tsunamis or earthquakes. It's almost as if Nostradamus was being intentionally vague about his prophecies. It could very well be about Naples, Italy, seeing how it's on the coast nestled in between volcanoes.
Or maybe Los Angeles. Yes, it's Los Angeles, land of Shake and Bake.
Of the other five “Nostradamus prophecies” mentioned in the article, none were written by the man. It's almost as if one could just make up Nostradamus prophecies. Why not?
HAPPY NEW YEAR!
Tuesday, Debtember 31, 2024
A preference for deterministic tools over probabilistic tools
Last month, I added code to my assembler to output BASIC code instead of binary to make it easier to use assembly subroutines from BASIC. But I've been working on a rather large program that assembles to nearly 2K of object code, and it takes a bit of time to POKE all that data into memory. So I took a bit of time (maybe an hour total) to add a variation—instead of generating a bunch of DATA statements and using POKE to insert the code into memory, generate a binary file, and output BASIC code to load said file into memory. No changes to the assembly code are required.
So the sample code from last month:
```
                .opt    basic defusr0 swapbyte
                .opt    basic defusr1 peekw

INTCVT          equ     $B3ED           ; put argument into D
GIVABF          equ     $B4F4           ; return D to BASIC

                org     $7F00

swapbyte        jsr     INTCVT          ; get argument
                exg     a,b             ; swap bytes
                jmp     GIVABF          ; return to BASIC

peekw           jsr     INTCVT          ; get address
                tfr     d,x             ; transfer to X
                ldd     ,x              ; load word from given address
                jmp     GIVABF          ; return to BASIC

                end
```
I can now generate the previous BASIC code:
```
10 DATA189,179,237,30,137,126,180,244,189,179,237,31,1,236,132,126,180,244
20 CLEAR200,32511:FORA=32512TO32529:READB:POKEA,B:NEXT:DEFUSR0=32512:DEFUSR1=32520
```
or now a binary version and the BASIC code to load it into memory:
```
10 CLEAR200,32511:LOADM"EXAMPLE/BIN":DEFUSR0=32512:DEFUSR1=32520
```
For this small of a program, it's probably a wash either way, but when the assembly code gets large, it not only takes a noticeable amount of time, but it also takes a considerable amount of space, as the DATA statements still exist in memory.
But as I was finishing up on this code, I had an epiphany on why I'm not so keen on AI. The features I added to my assembler are there to facilitate easier development. They do save time and effort, and sans any bugs, they just work. With AI like ChatGPT or Copilot, the output is not deterministic but probabilistic—it may be correct, it may be mostly correct, it may be complete and utter garbage, but you can't tell without going over the output. They just don't work one hundred percent of the time, and that just doesn't work for me. I prefer my tools to be reliable, not “mostly” reliable.
That it may write boilerplate code faster? Why are programmers writing boilerplate code in the first place? I recall IDEs of the past that would generate all the boilerplate code for a GUI-based application for the programmer, no AI required at the time. Automatic refactorings have been a thing in Java IDEs for a decade, maybe two now? No AI required there, and it's more reliable than AI too.
I don't even buy the “but it makes it faster to write software” excuse. I'm not sure why being the “first to market” is even a thing. Microsoft was not first to the market with the GUI—that was Apple. And no, the Macintosh computer wasn't the first system with a GUI, nor even the first system with a GUI from Apple (that was the Lisa). In fact, Microsoft Windows 1.0 wasn't even good (seriously—it's not pretty). Google wasn't the first web search engine (there were easily a dozen engines, maybe more, before Google even showed up). Facebook wasn't the first “social media” type site (MySpace and Friendster come to mind). Amazon wasn't the first on-line retailer.
And so on.
But hey, there are plenty of programmers who find them useful. I'm just not one of them. The use of AI for programming is totally alien to my way of thinking.
Discussions about this entry
- A preference for deterministic tools over probabilistic tools | Lobsters
- A preference for deterministic tools over probabilistic tools | Hacker News
- A preference for deterministic tools over probabilistic tools - Lemmy: Bestiverse
Thursday, Debtember 26, 2024
Life imitating art
Bunny and I went out for dinner and at the restaurant there were TVs tuned to a sports channel. It was rather surprising to me to see that it was ESPN 8—the Ocho! And here I thought it was just a fake TV channel from the movie “Dodgeball: A True Underdog Story.” It's odd to think that a Cornhole tournament beat out baseball and the Tour de France!
The sports being shown on TV were axe throwing and “fling golf,” which looks silly, but then again, isn't hitting a small ball with a stick silly anyway? Nice, but I would have loved to have seen trampoline dodgeball, or maybe even chess boxing, which is exactly what it says it is.
Friday, Debtember 20, 2024
Notes on an overheard conversation late at night
“You know, you could turn on a light instead of using your phone as a flash light.”
“No, then I would have to get up to turn on a light.”
“I could turn one on for you.”
“No, then I would just have to get up to turn it off.”