Monitoring several servers isn't that bad of a job, except when I get a call at 8:00 am (only three hours after going to bed) because the servers seem to be down.
Long story short: I can't get to the servers. I call the Miami NAP, and it turns out we've pegged the circuit with so much traffic that nothing is getting through. Eventually the machine being attacked is located (there were several candidates to choose from) and it's shut off from the network; the traffic clears and access to the other servers is restored.
Since there is a private network between the machines, I'm still able to get to the affected machine (by going through the one machine still connected, then going through the private network—the affected machine was removed from the public network) and check the logs:
Feb 28 09:27:37 nap1 kernel: NET: 2263 messages suppressed.
Feb 28 09:27:37 nap1 kernel: TCP: drop open request from 188.8.131.52/3755
Feb 28 09:27:42 nap1 kernel: NET: 1114 messages suppressed.
Feb 28 09:27:42 nap1 kernel: TCP: drop open request from 184.108.40.206/3921
Feb 28 09:27:47 nap1 kernel: NET: 1022 messages suppressed.
Feb 28 09:27:47 nap1 kernel: TCP: drop open request from 220.127.116.11/3751
Feb 28 09:27:52 nap1 kernel: NET: 1090 messages suppressed.
Feb 28 09:27:52 nap1 kernel: TCP: drop open request from 18.104.22.168/4371
Feb 28 09:27:57 nap1 kernel: NET: 1071 messages suppressed.
Feb 28 09:27:57 nap1 kernel: TCP: drop open request from 22.214.171.124/3244
And so on and so on …
This was new to me; it looked like some form of DDoS attack other than the typical flood. Some research later in the day revealed that it was probably a SYN flood; I had just never seen the logs produced during one. (These are servers I set up; the other servers I'd seen SYN flooded were configured differently than I would have done it, which would explain why I didn't initially recognize this as a SYN flood.) The “X messages suppressed” line means the previous message was repeated X times but not logged. Going through the log file, I found 572 unique IP addresses making over 1,750,000 fake connection requests over the span of one hour, 53 minutes and 47 seconds (6,827 seconds), or over 250 connection requests per second.
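For the curious, a quick script along these lines reproduces the tally; since each “messages suppressed” line means the previous drop was repeated that many times without being logged, those repeats get credited to the previously seen address (the log path here is hypothetical; use whatever your syslog writes to):

    #!/usr/bin/env python
    # Tally the SYN flood from the kernel log.  A "NET: X messages
    # suppressed" line means the previous "drop open request" message
    # was repeated X more times without being logged, so those repeats
    # are credited to the previously seen address.

    import re
    from collections import defaultdict

    drop_re = re.compile(r'TCP: drop open request from ([\d.]+)/\d+')
    supp_re = re.compile(r'NET: (\d+) messages suppressed')

    per_ip  = defaultdict(int)  # source address -> connection requests
    last_ip = None              # address on the most recent drop line
    total   = 0

    with open('/var/log/messages') as log:   # hypothetical log path
        for line in log:
            m = drop_re.search(line)
            if m:
                last_ip = m.group(1)
                per_ip[last_ip] += 1
                total += 1
                continue
            m = supp_re.search(line)
            if m and last_ip:
                per_ip[last_ip] += int(m.group(1))
                total += int(m.group(1))

    print("%d unique addresses, %d connection requests" % (len(per_ip), total))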
It got me thinking about the problem. Supposedly SYN cookies help, but in this case, I would think the kernel could check each incoming SYN request, see whether it is already in a SYN-receive state for that IP address/port pair, and if so, simply drop the connection and optionally ban the IP address. I mean, come on, 6,886 requests from 126.96.36.199:3588 and something weird isn't going on? Sure, it's a bit of extra processing, but such a scheme would help with SYN floods of this severity (the five lowest connection-request rates were 256/sec, 238/sec, 201/sec, 180/sec and 78/sec; a threshold of 10 SYN requests per second from a single IP/port pair would be generous enough).
Hmmm … on second thought, that would only help in the short run, until the script kiddies change their tactics and start picking random port numbers, so you would end up with 5,000 connection requests from 188.8.131.52 on 5,000 different port numbers. Limiting the number of connection requests per IP address per second (regardless of port number) means more code and more processing, but it would be a better long-term solution. Something like this might already exist in the Linux kernel; I know it can rate-shape network traffic.
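Here's a sketch of the check I have in mind, as user-space Python rather than kernel code (the 10/sec threshold is the one from above; everything else, names included, is hypothetical). Keyed on the source address alone it handles the random-port trick; keyed on the (address, port) pair it becomes the cheaper variant from the previous paragraph:

    # Sliding-window SYN rate check, keyed on source address.
    import time
    from collections import defaultdict, deque

    THRESHOLD = 10      # max SYNs allowed per source per window
    WINDOW    = 1.0     # window length in seconds

    recent = defaultdict(deque)   # source address -> timestamps of recent SYNs

    def allow_syn(src_addr, now=None):
        """Return True if a SYN from src_addr should be accepted."""
        now = time.time() if now is None else now
        stamps = recent[src_addr]
        # discard timestamps that have aged out of the window
        while stamps and now - stamps[0] > WINDOW:
            stamps.popleft()
        if len(stamps) >= THRESHOLD:
            return False          # over threshold: drop (or ban) it
        stamps.append(now)
        return True

    # A burst of 50 SYNs from one address in the same instant:
    # the first 10 get through, the other 40 are dropped.
    print(sum(allow_syn('192.0.2.1', now=0.0) for _ in range(50)))  # -> 10

A real implementation would also have to evict idle entries, or the tracking table itself becomes the memory-exhaustion target; and for what it's worth, I believe netfilter already has a hashlimit match that can rate-limit per source address in roughly this fashion.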