The occasional ECONNRESET
69 points
by zdw
4 hours ago
| 4 comments
| movq.de
| HN
smarks
2 hours ago
[-]
Part 2 shows this comment from the Linux TCP code:

    /* As outlined in RFC 2525, section 2.17, we send a RST here because
     * data was lost. To witness the awful effects of the old behavior of
     * always doing a FIN, run an older 2.1.x kernel or 2.0.x, start a bulk
     * GET in an FTP client, suspend the process, wait for the client to
     * advertise a zero window, then kill -9 the FTP client, wheee...
     * Note: timeout is always zero in such a case.
     */
Ok, so the RST is explained and well justified by the literature. But what are the “awful effects” of sending FIN instead? Can someone explain?
reply
jmalicki
1 minute ago
[-]
The difference between RST and FIN shows up in read() as an ECONNRESET vs end-of-file reading 0.

In some protocols, end-of-file has semantic meaning that all data has been transferred, and TCP is set up such that you should be able to rely on that - if you can't rely on that difference, it is a bug in a TCP stack along the way.

FIN also has a sequence number, so you can wait to ACK it until you get the corresponding data if it is dropped or out of order.

TCP RST says the other side won't be resending anything that wasn't ACKed; the connection is simply reset. Further, once an RST has been received, the downloading client usually cannot even read the packets already sitting in the receive window - that might be hundreds of KB of missed data.

RST and FIN are semantically very different.

Reading the post, if gunicorn is e.g. sending a 404 after seeing a POST to a path it doesn't know about before reading the body, the client will never get the 404 because gunicorn hasn't read the message body.

This case is partly why "Expect: 100-continue" exists, so it will be properly handled, even if it does introduce an extra round-trip lag in the POST.

It might be dangerous to have your protocol rely on a piece of TCP that is often incorrectly implemented.
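A minimal loopback sketch of that difference (hypothetical Python, not from the article; the RST-on-unread-data behavior is the one described in the kernel comment quoted upthread):

```python
import errno
import socket

# The server closes while client data sits unread in its receive
# buffer: the kernel then sends RST, so the client sees ECONNRESET
# instead of a clean end-of-file.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b"request body the server never reads")
conn.close()  # unread data pending -> kernel sends RST, not FIN

err = None
try:
    while cli.recv(1024):  # a clean FIN would return b"" here
        pass
except OSError as e:
    err = e.errno  # ECONNRESET on Linux/BSD

print(err == errno.ECONNRESET)
```

Have the server read the request before closing and the loop instead ends with a zero-length read, i.e. end-of-file.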

reply
buckle8017
2 hours ago
[-]
Client sees a clean disconnect and I guess assumes that's the entire file?
reply
amluto
1 hour ago
[-]
The client has been SIGKILLed, so it’s not assuming anything. I wonder whether the comment is a typo and they meant to kill -9 the server instead.
reply
jmalicki
34 minutes ago
[-]
Hypothetically if this was HTTP without a Content-length (like it used to be in the olden days), you could have a proxy server assume this is the entire file.
reply
MrBuddyCasino
1 hour ago
[-]
Perhaps a permanently hung connection because timeout is zero (=disabled?)?
reply
muststopmyths
34 minutes ago
[-]
Seems plausible, since FIN only means “I’m done sending”, also called a “half close”.

FTP has different data and command connections so the server may not have an outstanding read to detect the data connection break.

But... it should still clean up both when the command connection dies.

reply
toast0
3 hours ago
[-]
Might want to read the section on Lingering Close from here:

https://httpd.apache.org/docs/2.4/misc/perf-tuning.html

reply
zokier
2 hours ago
[-]
That seems very outdated? Doesn't `shutdown` resolve the problem here?
reply
toast0
41 minutes ago
[-]
Shutdown only helps if it's used, but TFA didn't mention it. So they're going to have to relearn the lessons of the 90s?

Also, I think the state of the art hasn't really changed? If you don't want a reset, you need to read everything from the socket before you close. If you don't really care about a reset as long as it doesn't interrupt the reader, you can shutdown in your direction, and drop the socket off to something that will wait "long enough" before it closes. In an event-loop architecture, you can just put it in as a deferred task; in process-per-connection, you should probably send the socket to a dedicated lingering-closer process that doesn't interrupt your flow.
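A hedged sketch of that lingering-close pattern (hypothetical Python; `lingering_close` is a made-up name, and a real server would hand this off to a deferred task rather than block inline):

```python
import socket

def lingering_close(sock, timeout=2.0):
    # Half-close: send our FIN but keep the read side open.
    sock.shutdown(socket.SHUT_WR)
    sock.settimeout(timeout)  # wait only "long enough", not forever
    try:
        while sock.recv(4096):  # drain until the peer's FIN (b"")
            pass
    except OSError:
        pass  # timeout or reset: stop lingering either way
    sock.close()  # receive queue is drained, so no RST

# Loopback demo: the client's unread bytes get drained, so it sees
# a clean end-of-file instead of ECONNRESET.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b"bytes the server no longer wants")
cli.shutdown(socket.SHUT_WR)  # client is done sending
lingering_close(conn)
eof = cli.recv(1024)
print(eof == b"")
```

The timeout is the "long enough" judgment call: too short and a slow peer still gets reset, too long and a misbehaving peer ties up the socket.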

reply
gunsch
1 hour ago
[-]
A few months ago I was debugging a similar issue in a Go-based service layer, where frequent HTTP requests to the same domain kept making fresh TCP connections when I was expecting TCP conn reuse.

In this situation we were discarding the HTTP response without reading it before closing, which kept Go from reusing the connection. I didn't dig quite as deep as this post's author, but I imagine the same RST behavior was happening under the hood.

reply
Joker_vD
3 hours ago
[-]
> Send off the data and close the socket. If there's data still pending to be read, this will cause a RST, I think.

Um, yes? That's how TCP has been universally implemented for more than 30 years. See [0], 2.17 for discussion.

[0] https://www.rfc-editor.org/rfc/rfc2525#page-50

reply
eggnet
2 hours ago
[-]
That’s for rx, not tx. Closing a socket with data still in the send buffer does not trigger a RST, if you just close the socket normally.
reply
pdonis
2 hours ago
[-]
> When you close a socket with data in the send buffer

That's not what's happening here. The server is closing the socket when there's data from the client that it hasn't read.

reply
Joker_vD
1 hour ago
[-]
Yep, and that makes adding "Connection: close" to a reply on the HTTP/1.1 server's side somewhat tricky: ideally you need to read all of the pipelined requests from the client before closing the connection, which is usually something you'd rather not do. But if you just close it, you risk your client getting a partial reply, so you'd better add "Content-Length" or "Transfer-Encoding: chunked" to your reply as well... but one common reason for a connection-close reply is that you don't know the content length beforehand, so I hope you implemented chunking correctly :)
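For illustration, a minimal sketch of that chunked framing (hypothetical Python; `encode_chunked` is a made-up helper, not a full Transfer-Encoding implementation):

```python
def encode_chunked(parts):
    # Each chunk: hex length, CRLF, data, CRLF; a zero-length
    # chunk followed by a blank line terminates the body.
    out = bytearray()
    for part in parts:
        out += b"%x\r\n" % len(part)
        out += part + b"\r\n"
    out += b"0\r\n\r\n"
    return bytes(out)

body = encode_chunked([b"hello, ", b"world"])
# b"7\r\nhello, \r\n5\r\nworld\r\n0\r\n\r\n"
```

The terminating zero-length chunk is what lets the client tell a complete body apart from one truncated by the connection closing.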
reply
toast0
28 minutes ago
[-]
More explicit connection closing indications are one of the nice things of http/2. Of course, it's bundled with the silly multiplexing :(
reply