/* As outlined in RFC 2525, section 2.17, we send a RST here because
* data was lost. To witness the awful effects of the old behavior of
* always doing a FIN, run an older 2.1.x kernel or 2.0.x, start a bulk
* GET in an FTP client, suspend the process, wait for the client to
* advertise a zero window, then kill -9 the FTP client, wheee...
* Note: timeout is always zero in such a case.
*/
Ok, so the RST is explained and well justified by the literature. But what are the “awful effects” of sending FIN instead? Can someone explain?

In some protocols, end-of-file has the semantic meaning that all data has been transferred, and TCP is set up such that you should be able to rely on that - if you can't rely on that distinction, it's a bug in a TCP stack along the way.
A FIN also has a sequence number, so the receiver can wait to ACK it until all the preceding data has arrived, in case any of it was dropped or came out of order.
A TCP RST, by contrast, says the other side won't be retransmitting anything that wasn't ACKed - the connection is simply reset. Worse, once an RST has been received, the downloading client usually can't even read the data already delivered and sitting in its receive buffer - that might be hundreds of KB of missed data.
RST and FIN are semantically very different.
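To make the difference concrete, here's a minimal sketch in Go (Go only because it comes up later in the thread; the loopback address and the sleeps are just demo scaffolding) of the close-with-unread-data case the kernel comment is about:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        go func() {
            conn, _ := ln.Accept()
            // Give the client's bytes time to land in our receive queue,
            // then close without ever reading them. Per RFC 2525, 2.17,
            // the kernel answers with an RST, not a FIN.
            time.Sleep(50 * time.Millisecond)
            conn.Close()
        }()

        conn, _ := net.Dial("tcp", ln.Addr().String())
        conn.Write([]byte("data the server never reads\n"))
        time.Sleep(200 * time.Millisecond) // let the RST arrive
        _, err = conn.Read(make([]byte, 1))
        // Typically prints "connection reset by peer", not io.EOF.
        fmt.Println("read error:", err)
    }

Had the server read the request before closing, the same Read would have returned io.EOF - the clean FIN teardown.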
Reading the post: if gunicorn sends, say, a 404 for a POST to a path it doesn't know about before reading the request body, the client will never get the 404 - closing with the unread body still in the receive queue triggers exactly this RST, which can wipe out the buffered response before the client reads it.
This case is partly why "Expect: 100-continue" exists: the client waits for the server's verdict before sending the body, so this situation gets handled properly, even though it introduces an extra round trip of lag on the POST.
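For reference, the 100-continue exchange looks roughly like this on the wire (path and length made up):

    C: POST /upload HTTP/1.1
    C: Host: example.com
    C: Content-Length: 10485760
    C: Expect: 100-continue
    C:
       (client pauses here instead of sending the 10 MB body)
    S: HTTP/1.1 100 Continue    -> client goes ahead and sends the body
       -- or --
    S: HTTP/1.1 404 Not Found   -> verdict arrives before any body bytes hit
                                   the socket, so there's no unread data to
                                   turn the close into an RST

Either way the client actually gets to see the status line.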
It might be dangerous to have your protocol rely on a piece of TCP that is often incorrectly implemented.
FTP has separate data and command connections, so the server may not have an outstanding read with which to detect the data connection breaking.
But... it should still clean up both when the command connection dies.
Also, I don't think the state of the art has really changed? If you don't want a reset, you need to read everything from the socket before you close. If you don't really care about a reset as long as it doesn't interrupt the reader, you can shut down your direction and drop the socket off to something that will wait "long enough" before it closes. In an event-loop architecture, you can just put that in as a deferred task; in a process-per-connection design, you should probably send the socket to a dedicated lingering-closer process so it doesn't interrupt your flow.
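Something like this sketch of the shutdown-then-drain pattern (in Go; the package and function names, the 5-second bound, and the use of *net.TCPConn are my choices, nothing canonical):

    package tcputil

    import (
        "io"
        "net"
        "time"
    )

    // closeGracefully half-closes our direction, drains whatever the peer
    // still has in flight so the receive queue is empty, then closes - at
    // which point the teardown can be a normal FIN rather than an RST.
    func closeGracefully(conn *net.TCPConn) error {
        if err := conn.CloseWrite(); err != nil { // sends our FIN; reads still work
            conn.Close()
            return err
        }
        // "Long enough": bound the drain so a stuck peer can't pin us forever.
        conn.SetReadDeadline(time.Now().Add(5 * time.Second))
        io.Copy(io.Discard, conn) // read and discard until EOF (the peer's FIN)
        return conn.Close()
    }

If the deadline fires with data still unread, you're back to an RST on Close - but at least you waited.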
In this situation we were closing without ever reading the HTTP response body, which kept Go from reusing the connection. I didn't dig quite as deep as this post's author, but I imagine the same RST behavior was happening under the hood.
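In net/http terms the fix is the usual drain-before-close idiom - a sketch (URL hypothetical):

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    func main() {
        resp, err := http.Get("http://example.com/health")
        if err != nil {
            log.Fatal(err)
        }
        // Even when the payload is uninteresting, read it to EOF before
        // closing: an unread body keeps the Transport from reusing the
        // connection for keep-alive, and the close can go out as an RST.
        io.Copy(io.Discard, resp.Body)
        resp.Body.Close()
    }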
Um, yes? That's how TCP has been universally implemented for more than 30 years. See RFC 2525, section 2.17 for discussion.
That's not what's happening here. The server is closing the socket when there's data from the client that it hasn't read.