* maint:
unicorn 0.8.2
always set FD_CLOEXEC on sockets post-accept()
Minor cleanups to core
Re-add support for non-portable socket options
Retry listen() on EADDRINUSE 5 times every 500ms
Unbind listeners before stopping workers
Conflicts:
CHANGELOG
lib/unicorn.rb
lib/unicorn/configurator.rb
lib/unicorn/const.rb
|
|
|
|
FD_CLOEXEC is not guaranteed to be inherited by the accept()-ed
descriptors even if the listener socket has this set. This can
be a problem with applications that fork+exec long running
background processes.
Thanks to Paul Sponagl for helping me find this.
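A minimal sketch of the fix described above, using plain Ruby sockets rather than unicorn's internals:

```ruby
require 'socket'
require 'fcntl'

# Even if the listener has FD_CLOEXEC set, the accepted socket may not
# inherit it, so set it explicitly on every socket post-accept().
server = TCPServer.new('127.0.0.1', 0)
server.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC)

client = TCPSocket.new('127.0.0.1', server.addr[1]) # dummy client
sock = server.accept
sock.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC)       # always set post-accept()

flags = sock.fcntl(Fcntl::F_GETFD)
puts((flags & Fcntl::FD_CLOEXEC) != 0)  # => true
```

Now a fork+exec'ed background process cannot hold the client connection open after the worker is done with it.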
|
|
(cherry picked from commit ec70433f84664af0dff1336845ddd51f50a714a3)
|
|
Now that we support tunnelling arbitrary protocols over HTTP as
well as "100 Continue" responses, TCP_NODELAY actually becomes
useful to us. TCP_NODELAY is reasonably portable nowadays, too.
While we're adding non-portable options, TCP_CORK/TCP_NOPUSH can
be enabled as well. Unlike some other servers, these can't be
disabled explicitly/intelligently to force a flush, but they may
still improve performance with "normal" HTTP applications
(Mongrel has always had TCP_CORK enabled on Linux).
While we're adding OS-specific features, we might as well
support TCP_DEFER_ACCEPT on Linux and the "httpready" accept
filter on FreeBSD to prevent abuse.
These options can all be enabled on a per-listener basis.
(cherry picked from commit 563d03f649ef31d2aec3505cbbed1e015493b8fc)
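A sketch of what enabling these options looks like at the socket level (plain Ruby, not unicorn's configurator; the `defined?` guards reflect that constant availability varies by platform):

```ruby
require 'socket'

server = TCPServer.new('127.0.0.1', 0)
# TCP_DEFER_ACCEPT is Linux-only and applies to the listener:
if defined?(Socket::TCP_DEFER_ACCEPT)
  server.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_DEFER_ACCEPT, 1)
end

client = TCPSocket.new('127.0.0.1', server.addr[1])
sock = server.accept

# TCP_NODELAY is reasonably portable:
sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
# TCP_CORK (Linux) / TCP_NOPUSH (FreeBSD) are mutually exclusive spellings:
if defined?(Socket::TCP_CORK)
  sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_CORK, 1)
elsif defined?(Socket::TCP_NOPUSH)
  sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NOPUSH, 1)
end

opt = sock.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY)
puts(opt.int != 0)  # => true
```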
|
|
The number of retries and the delay are taken directly from nginx
(cherry picked from commit d247b5d95a3ad2de65cc909db21fdfbc6194b4c9)
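The retry loop can be sketched like this (the method name `bind_listen` is illustrative, not unicorn's exact API; the defaults mirror nginx's 5 tries spaced 500ms apart):

```ruby
require 'socket'

# Retry binding when the previous owner of the address hasn't
# released it yet (e.g. during a rolling restart).
def bind_listen(host, port, tries = 5, delay = 0.5)
  TCPServer.new(host, port)
rescue Errno::EADDRINUSE
  raise if (tries -= 1) <= 0
  sleep(delay)
  retry
end

listener = bind_listen('127.0.0.1', 0)  # port 0: pick any free port
puts listener.is_a?(TCPServer)          # => true
```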
|
|
This allows another process to take our listeners
sooner rather than later.
(cherry picked from commit 8c2040127770e40e344a927ddc187bf801073e33)
|
|
|
|
There's a small memory reduction to be had when forking
oodles of processes and the Perl hacker in me still
gets confused into thinking those are arrays...
|
|
Array#+= creates a new array before assigning, Array#concat just
appends one array to another without an intermediate one.
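The difference in a nutshell (object identity shows whether an intermediate Array was created):

```ruby
a = [1, 2]
b = a
a += [3]                 # builds a new Array, then reassigns a
same_after_plus = b.equal?(a)

a = [1, 2]
b = a
a.concat([3])            # mutates in place, no intermediate Array
same_after_concat = b.equal?(a)

puts same_after_plus     # => false
puts same_after_concat   # => true
```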
|
|
|
|
Now that upstream curl supports this functionality, there's
no reason to duplicate it here as an example.
|
|
This change gives applications full control to deny clients
from uploading unwanted message bodies. This also paves the
way for doing things like upload progress notification within
applications in a Rack::Lint-compatible manner.
Since we don't support HTTP keepalive, we have more freedom
here: we can close TCP connections and deny clients the
ability to write to us (and thus waste our bandwidth).
While I could've left this feature off by default indefinitely
for maximum backwards compatibility (for arguably broken
applications), Unicorn is not and has never been about
supporting the lowest common denominator.
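A hypothetical Rack app showing the control this gives: the app, not the server, decides whether a body is worth reading (the names and limit here are illustrative):

```ruby
require 'stringio'

MAX_BODY = 1024 * 1024  # illustrative 1 MiB cutoff

app = lambda do |env|
  if env['CONTENT_LENGTH'].to_i > MAX_BODY
    # never touch rack.input; the server can drop the connection
    [413, { 'Content-Type' => 'text/plain' }, ["Request Entity Too Large\n"]]
  else
    body = env['rack.input'].read
    [200, { 'Content-Type' => 'text/plain' }, ["read #{body.size} bytes\n"]]
  end
end

status, = app.call('CONTENT_LENGTH' => '2097152', 'rack.input' => StringIO.new)
puts status  # => 413
```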
|
|
This was causing the first part of the body to be missing when
an HTTP client failed to delay between sending the header and
body in the request.
|
|
This gives the app the ability to deny clients with 417 instead of
blindly making the decision for the underlying application. Of
course, apps must be made aware of this.
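A sketch of an expectation-aware app rejecting an oversized `Expect: 100-continue` request with 417 (the limit and names are illustrative):

```ruby
LIMIT = 1024  # illustrative per-app body limit

app = lambda do |env|
  if env['HTTP_EXPECT'].to_s.casecmp('100-continue').zero? &&
     env['CONTENT_LENGTH'].to_i > LIMIT
    # deny before the client sends the body at all
    [417, { 'Content-Type' => 'text/plain' }, ["Expectation Failed\n"]]
  else
    [200, { 'Content-Type' => 'text/plain' }, ["OK\n"]]
  end
end

status, = app.call('HTTP_EXPECT' => '100-continue', 'CONTENT_LENGTH' => '4096')
puts status  # => 417
```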
|
|
Now that we support tunnelling arbitrary protocols over HTTP as
well as "100 Continue" responses, TCP_NODELAY actually becomes
useful to us. TCP_NODELAY is reasonably portable nowadays, too.
While we're adding non-portable options, TCP_CORK/TCP_NOPUSH can
be enabled as well. Unlike some other servers, these can't be
disabled explicitly/intelligently to force a flush, but they may
still improve performance with "normal" HTTP applications
(Mongrel has always had TCP_CORK enabled on Linux).
While we're adding OS-specific features, we might as well
support TCP_DEFER_ACCEPT on Linux and the "httpready" accept
filter on FreeBSD to prevent abuse.
These options can all be enabled on a per-listener basis.
|
|
The number of retries and the delay are taken directly from nginx
|
|
This allows another process to take our listeners
sooner rather than later.
|
|
The "Trailer:" header and associated trailer lines
should be reasonably well supported now
|
|
|
|
We can actually just use one IO and file descriptor here and
simplify the code while we're at it.
|
|
I'd honestly be more comfortable doing this in C (and possibly
adapting the code from the libcurl internals since that code has
been very well-tested).
|
|
Eventually this (and ChunkedReader) may be done in C/Ragel
along with the existing HttpParser.
|
|
Don't allow misbehaving clients to misspell "chunked"
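A strict, anchored match like the one described: only the exact token "chunked" (case-insensitive) enables chunked decoding, so near-misses are rejected.

```ruby
CHUNKED = /\Achunked\z/i

exact     = !!('Chunked'  =~ CHUNKED)  # case-insensitive match
truncated = !!('chunky'   =~ CHUNKED)  # rejected: not the full token
prefixed  = !!('xchunked' =~ CHUNKED)  # rejected: anchored at start

puts exact      # => true
puts truncated  # => false
puts prefixed   # => false
```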
|
|
Under slow/inconsistent network conditions or overly aggressive
clients, there is a possibility we could've already started
reading the body. In those cases, don't bother responding
to the expectation to continue since the client has already
started sending a message body.
|
|
Respond with an "HTTP/1.1 100 Continue" to encourage a client
to send the rest of the body.
This is part of the HTTP/1.1 standard but not often implemented
by servers:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3
This will speed up curl uploads since curl sleeps up to 1 second if
no response is received:
http://curl.haxx.se/docs/faq.html#My_HTTP_POST_or_PUT_requests_are
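What the interim response looks like on the wire, sketched over a local socketpair (this is not unicorn's actual code path):

```ruby
require 'socket'

a, b = UNIXSocket.pair
# server side: the client sent "Expect: 100-continue", so emit the
# interim response before reading the request body
a.write("HTTP/1.1 100 Continue\r\n\r\n")

line = b.readpartial(1024).split("\r\n").first
puts line  # => "HTTP/1.1 100 Continue"
```

The final response (200, 417, etc.) still follows once the body has been handled.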
|
|
Not sure why this hasn't been an issue yet, but better
safe than sorry with data integrity...
|
|
This won't be used heavily enough to make preallocation worth
the effort. While we're at it, don't enforce policy by forcing
the readpartial buffer to be Encoding::BINARY (even though it
/should/ be :); it's up to the user of the interface to decide.
|
|
The default is false because some applications were not
written to handle partial reads (even though IO#read allows
it, not just IO#readpartial).
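The difference between the two read styles, on a pipe:

```ruby
# readpartial returns as soon as any data is available, while
# read(len) keeps reading until len bytes arrive or EOF -- apps
# must be written to tolerate short reads either way.
r, w = IO.pipe
w.write('abc')
first = r.readpartial(10)
puts first.inspect  # => "abc"

w.write('def')
w.close
rest = r.read(10)   # EOF cuts the read short
puts rest.inspect   # => "def"
```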
|
|
This has been totally broken since
commit b0013b043a15d77d810d5965157766c1af364db2
"Avoid duplicating the "Z" constant"
|
|
|
|
Oops!
|
|
Now that I've beefed up my Makefile to detect errors,
I've noticed this test has been failing under 1.9 for
a while now. Currently no released version of Rack (1.0)
or Rails (2.3.2.1) supports this.
|
|
This can allow you to run make with:
TRACER='strace -f -o $(t).strace -s 100000'
to debug a test failure (it should be usable with truss,
ltrace, and other similar tools).
|
|
This has been broken since
6945342a1f0a4caaa918f2b0b1efef88824439e0
"Transfer-Encoding: chunked streaming input support" but
somehow never caught by me or anyone else.
|
|
|
|
Additionally, provide verifications for sizes after-the-fact
to avoid slamming all of our input into the server.
|
|
The complexity of making the object persistent isn't worth the
potential performance gain here.
|
|
We don't ever expose the @rd object to the public, so
Rack applications won't ever call size() on it.
|
|
Older Sinatra would blindly try to run Mongrel or Thin at_exit.
This caused strange behavior when Unicorn workers exited.
|
|
* avoid '' strings for GC-friendliness
* Ensure the '' we do need is binary for 1.9
* Disable passing the raw rack.input object to the child process
This is never possible with our new TeeInput wrapper.
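The 1.9 encoding concern in the second bullet can be sketched like this: a reused empty-string buffer must be binary so that concatenating raw socket data never raises `Encoding::CompatibilityError`.

```ruby
buf = String.new('')
# no-op on 1.8, which has no String encodings:
buf.force_encoding(Encoding::BINARY) if buf.respond_to?(:force_encoding)

# appending arbitrary bytes is now always safe:
buf << "\xff\xfe".force_encoding(Encoding::BINARY)
puts buf.encoding == Encoding::BINARY  # => true
```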
|
|
Pay a performance penalty and always proxy reads through our
TeeInput object to ensure nobody closes our internal reader.
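A minimal sketch of the tee idea (not unicorn's actual TeeInput class): every read from the client is also written to a temp file so the body can be replayed, and the app never touches the raw reader directly.

```ruby
require 'stringio'
require 'tempfile'

class Tee
  def initialize(src)
    @src = src
    @tmp = Tempfile.new('body')
  end

  # proxy reads; a copy of everything read lands in the temp file
  def read(len = nil)
    chunk = @src.read(len)
    @tmp.write(chunk) if chunk
    chunk
  end

  # re-read the body from the start without touching the source
  def replay
    @tmp.rewind
    @tmp.read
  end
end

tee = Tee.new(StringIO.new('hello world'))
tee.read(5)  # partial read
tee.read     # drain the rest
puts tee.replay  # => "hello world"
```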
|
|
No point in making syscalls to deal with empty bodies.
Reinstate usage of the NULL_IO object which allows us
to avoid allocating new objects.
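NULL_IO is the constant named above; a sketch of the idea (not unicorn's exact definition) is just one shared, rewindable object handed to every body-less request:

```ruby
require 'stringio'

NULL_IO = StringIO.new('')

input = NULL_IO            # reused for every request with no body
puts input.read.inspect    # => ""
input.rewind               # leave it ready for the next request
```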
|
|
Trying not to repeat ourselves. Unfortunately, Ruby 1.9 forces
us to actually care about encodings of arbitrary byte sequences.
|
|
|
|
Then hopefully soon we'll be able to get rid of this script...
|
|
Just clarifying the license terms of the new code. Other files
should really have this notice in there as well.
|
|
While we're at it, use the rsyncable flag with gzip here
to reduce bandwidth usage on my end.
|
|
|
|
This includes an example of tunneling the git protocol inside a
TE:chunked HTTP request. The example is unfortunately contrived
in that it relies on the custom examples/cat-chunk-proxy.rb
script in the client. My initial wish was to have a generic
tool like curl(1) operate like this:
cat > ~/bin/cat-chunk-proxy.sh <<EOF
#!/bin/sh
exec curl -sfNT- http://$1:$2/
EOF
chmod +x ~/bin/cat-chunk-proxy.sh
GIT_PROXY_COMMAND=cat-chunk-proxy.sh git clone git://0:8080/foo
Unfortunately, curl will attempt a blocking read on stdin before
reading the TCP socket, causing the git-clone consumer to
starve. This does not appear to be a problem with the new
server code for handling chunked requests.
|