Friday, March 06, 2026
Notes on blocking requests based on the HTTP protocol used
I'm still clearing out some links from last month, just so you know.
“Selectively Disabling HTTP/1.0 and HTTP/1.1”
(via Lobsters)
describes an experiment with disabling (or redirecting) requests made via HTTP/1.1,
since the author classified most of the HTTP/1.1 traffic they saw as “bad.”
I decided to check that against my own server.
In fact, I'm checking it against my blog specifically,
since it's the only dynamic site I'm serving up (the rest are all static sites).
So, how do requests to my blog stack up?
| protocol | count |
|---|---|
| HTTP/1.0 | 396 |
| HTTP/1.1 | 377647 |
| HTTP/2.0 | 180093 |
| Total | 558136 |
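For the curious, a tally like the one above can be pulled straight out of a Common Log Format access log. A minimal sketch in Python (not the actual query I ran; my logs and tooling differ, and the sample lines are made up):

```python
# Tally requests per HTTP protocol version from Common Log Format lines.
# A sketch, assuming CLF; adjust the regex for other log formats.
import re
from collections import Counter

# The CLF request field looks like: "GET /path HTTP/1.1"
REQUEST_RE = re.compile(r'"[A-Z]+ \S+ (HTTP/[0-9.]+)"')

def tally_protocols(lines):
    counts = Counter()
    for line in lines:
        m = REQUEST_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Hypothetical sample lines, for illustration only.
sample = [
    '203.0.113.7 - - [06/Mar/2026:00:00:01 -0500] "GET /feed.xml HTTP/1.0" 200 1234',
    '203.0.113.8 - - [06/Mar/2026:00:00:02 -0500] "GET /index.html HTTP/1.1" 200 5678',
    '203.0.113.9 - - [06/Mar/2026:00:00:03 -0500] "GET /index.html HTTP/2.0" 304 0',
]
print(tally_protocols(sample))
```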
HTTP/1.0 is negligible,
and a breakdown of response codes shows that these requests aren't even bad:
| response | count |
|---|---|
| SUCCESS.OKAY | 371 |
| REDIRECT.MOVEPERM | 13 |
| REDIRECT.NOTMODIFIED | 8 |
| CLIENT.UNAUTHORIZED | 4 |
The majority of requests are to my RSS feed.
There are a vanishingly small number of agents using HTTP/1.0,
at least from where I can see.
Around ⅔ of my traffic is still HTTP/1.1:
| response | count |
|---|---|
| SUCCESS.OKAY | 289181 |
| SUCCESS.ACCEPTED | 2 |
| SUCCESS.PARTIALCONTENT | 7 |
| REDIRECT.MOVEPERM | 886 |
| REDIRECT.NOTMODIFIED | 69299 |
| CLIENT.BADREQ | 3 |
| CLIENT.UNAUTHORIZED | 441 |
| CLIENT.FORBIDDEN | 5 |
| CLIENT.NOTFOUND | 13249 |
| CLIENT.METHODNOTALLOWED | 19 |
| CLIENT.GONE | 82 |
| CLIENT.TOOMANYREQUESTS | 4211 |
| SERVER.INTERNALERR | 261 |
| SERVER.NOSERVICE | 1 |
And the results for HTTP/2.0:
| response | count |
|---|---|
| SUCCESS.OKAY | 103472 |
| SUCCESS.PARTIALCONTENT | 1496 |
| REDIRECT.MOVEPERM | 5089 |
| REDIRECT.NOTMODIFIED | 68966 |
| CLIENT.BADREQ | 3 |
| CLIENT.UNAUTHORIZED | 47 |
| CLIENT.NOTFOUND | 902 |
| CLIENT.METHODNOTALLOWED | 6 |
| CLIENT.GONE | 36 |
| CLIENT.TOOMANYREQUESTS | 25 |
| SERVER.INTERNALERR | 51 |
About 4% of the HTTP/1.1 traffic is “bad” in the “client made an error” sense,
whereas only about ½% of the HTTP/2.0 traffic is.
Feed readers are pretty much split 50/50 between the two protocols,
and the rest?
I would have to do a deeper dive into it,
but I do note that there are significantly more bad clients making too many requests (CLIENT.TOOMANYREQUESTS) with HTTP/1.1 than with HTTP/2.0.
The article concludes that blocking solely on HTTP/1.x is probably not worth it,
as there are other ways to block bad traffic.
In that light,
and with the results I have,
I don't think blocking HTTP/1.1 will work for me.
In contrast,
there's “HTTP/1.1 must die: the desync endgame,”
an article that explicitly calls for the immediate removal of HTTP/1.1.
Left unstated in that article is that the desync problem is more a problem for enterprise websites,
with lots of middleware boxes mucking with the request chain of a web-based application.
Based on that article,
I would think that if you are running an application-centric website,
then yes,
maybe blocking HTTP/1.x is a thing to do,
but if you are running a more document-centric website
(you know, the “old, fun and funky web” from before 2005 or so)
then maybe blocking HTTP/2.0 is in order.
In fact,
I think that might be a decent idea—leave HTTP/1.x for those who want the old web
(or the “smolweb”),
and HTTP/2.0 for the application web.
If you only want to browse the docweb and you get a 426 (Upgrade Required) response,
then you know you can close the website and avoid downloading 50MB of JavaScript just to read a few hundred words of text.
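The mechanics of that split are simple enough on most servers. A sketch, assuming an nginx front end (not what I actually run, but the idea translates): nginx exposes the protocol the client spoke as `$server_protocol`, so an application site can answer anything that isn't HTTP/2 with a 426.

```nginx
# Sketch only: an application-centric site that refuses HTTP/1.x.
# Assumes nginx 1.25.1 or later; server_name is hypothetical.
server {
    listen 443 ssl;
    http2  on;
    server_name app.example.com;

    # $server_protocol is e.g. "HTTP/1.0", "HTTP/1.1", or "HTTP/2.0".
    if ($server_protocol !~ "HTTP/2") {
        return 426;
    }

    # ... normal application configuration here ...
}
```

The document-centric side is even simpler: a browser only speaks HTTP/2 when the server advertises it during the TLS handshake, so “blocking HTTP/2.0” amounts to just never enabling it.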
![Oh Christmas Tree! My Christmas Tree! Rise up and hear the bells! [Self-portrait with a Christmas Tree]](https://www.conman.org/people/spc/about/2025/1203.t.jpg)