|
* maint:
unicorn 0.8.2
always set FD_CLOEXEC on sockets post-accept()
Minor cleanups to core
Re-add support for non-portable socket options
Retry listen() on EADDRINUSE 5 times every 500ms
Unbind listeners before stopping workers
Conflicts:
CHANGELOG
lib/unicorn.rb
lib/unicorn/configurator.rb
lib/unicorn/const.rb
|
|
|
|
FD_CLOEXEC is not guaranteed to be inherited by the accept()-ed
descriptors even if the listener socket has this set. This can
be a problem with applications that fork+exec long-running
background processes.
Thanks to Paul Sponagl for helping me find this.
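For illustration, a minimal sketch (not unicorn's actual code) of
setting the flag on an accepted socket in Ruby:
require 'socket'
require 'fcntl'

listener = TCPServer.new('0.0.0.0', 8080)
client = listener.accept
# the flag is not reliably inherited from the listener, so set it
# explicitly on the accepted descriptor
flags = client.fcntl(Fcntl::F_GETFD)
client.fcntl(Fcntl::F_SETFD, flags | Fcntl::FD_CLOEXEC)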
|
|
(cherry picked from commit ec70433f84664af0dff1336845ddd51f50a714a3)
|
|
Now that we support tunnelling arbitrary protocols over HTTP as
well as "100 Continue" responses, TCP_NODELAY actually becomes
useful to us. TCP_NODELAY is reasonably portable nowadays, too.
While we're adding non-portable options, TCP_CORK/TCP_NOPUSH can
be enabled as well. Unlike some other servers, these can't be
disabled explicitly/intelligently to force a flush; however,
they may still improve performance with "normal" HTTP
applications (Mongrel has always had TCP_CORK enabled on Linux).
While we're adding OS-specific features, we might as well
support TCP_DEFER_ACCEPT on Linux and the "httpready" accept
filter on FreeBSD to prevent abuse.
These options can all be enabled on a per-listener basis.
(cherry picked from commit 563d03f649ef31d2aec3505cbbed1e015493b8fc)
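As a rough sketch (assuming Ruby's Socket constants; not the
actual unicorn code), enabling these looks like:
require 'socket'

srv = TCPServer.new('0.0.0.0', 8080)
# TCP_DEFER_ACCEPT (Linux): only wake us once data has arrived
if Socket.const_defined?(:TCP_DEFER_ACCEPT)
  srv.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_DEFER_ACCEPT, 1)
end
client = srv.accept
# TCP_NODELAY helps "100 Continue" and tunnelled protocols
client.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
# TCP_CORK (Linux): coalesce header+body into fewer packets
if Socket.const_defined?(:TCP_CORK)
  client.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_CORK, 1)
end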
|
|
The number of retries and the delay are taken directly from nginx.
(cherry picked from commit d247b5d95a3ad2de65cc909db21fdfbc6194b4c9)
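A minimal sketch of the retry loop (addr and port are
placeholders; unicorn's real code wraps its own bind helper):
require 'socket'
addr, port = '0.0.0.0', 8080 # placeholders

begin
  tries ||= 5
  srv = TCPServer.new(addr, port)
rescue Errno::EADDRINUSE => err
  raise err if (tries -= 1) < 0
  warn "#{addr}:#{port} in use, #{tries} retries left"
  sleep 0.5 # 500ms between retries, as in nginx
  retry
end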
|
|
This allows another process to take our listeners
sooner rather than later.
(cherry picked from commit 8c2040127770e40e344a927ddc187bf801073e33)
|
|
|
|
There's a small memory reduction to be had when forking
oodles of processes, and the Perl hacker in me still
gets confused into thinking those are arrays...
|
|
Array#+= creates a new array before assigning; Array#concat
just appends one array to another without an intermediate one.
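A quick illustration (not unicorn code):
a = [1, 2]
before = a.object_id
a += [3, 4]            # allocates [1, 2, 3, 4], then rebinds a
a.object_id == before  # => false; a new array was created

a = [1, 2]
before = a.object_id
a.concat([3, 4])       # appends in place
a.object_id == before  # => true; no new array allocated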
|
|
|
|
This change gives applications full control to deny clients
from uploading unwanted message bodies. This also paves the
way for doing things like upload progress notification within
applications in a Rack::Lint-compatible manner.
Since we don't support HTTP keepalive, we have more freedom
here: we can close TCP connections and deny clients the ability
to write to us (and thus waste our bandwidth).
While I could've left this feature off by default indefinitely
for maximum backwards compatibility (for arguably broken
applications), Unicorn is not and has never been about
supporting the lowest common denominator.
|
|
This was causing the first part of the body to be missing when
an HTTP client failed to delay between sending the header and
body in the request.
|
|
This gives the app the ability to deny clients with a 417
response instead of blindly making the decision for the
underlying application. Of course, apps must be made aware of
this.
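A hypothetical config.ru taking advantage of this (the size
limit and header handling here are illustrative, not unicorn's):
run(lambda do |env|
  if env["HTTP_EXPECT"].to_s =~ /\A100-continue\z/i &&
     env["CONTENT_LENGTH"].to_i > (10 * 1024 * 1024)
    # deny the upload before the client sends the body
    [ 417, { "Content-Type" => "text/plain" }, [ "Expectation Failed\n" ] ]
  else
    [ 200, { "Content-Type" => "text/plain" }, [ "OK\n" ] ]
  end
end)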
|
|
Now that we support tunnelling arbitrary protocols over HTTP as
well as "100 Continue" responses, TCP_NODELAY actually becomes
useful to us. TCP_NODELAY is reasonably portable nowadays, too.
While we're adding non-portable options, TCP_CORK/TCP_NOPUSH can
be enabled as well. Unlike some other servers, these can't be
disabled explicitly/intelligently to force a flush; however,
they may still improve performance with "normal" HTTP
applications (Mongrel has always had TCP_CORK enabled on Linux).
While we're adding OS-specific features, we might as well
support TCP_DEFER_ACCEPT on Linux and the "httpready" accept
filter on FreeBSD to prevent abuse.
These options can all be enabled on a per-listener basis.
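In a unicorn config file, that might look like the following
(option names per Unicorn::Configurator; availability depends on
the OS and unicorn version):
listen "0.0.0.0:8080", :tcp_nodelay => true, :tcp_nopush => true
listen "0.0.0.0:8443", :accept_filter => "httpready" # FreeBSD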
|
|
The number of retries and the delay are taken directly from nginx.
|
|
This allows another process to take our listeners
sooner rather than later.
|
|
The "Trailer:" header and associated trailer lines should be
reasonably well supported now.
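For testing, a hand-rolled chunked request with a trailer line
might look like this (host, port, and header name are
placeholders):
require 'socket'
s = TCPSocket.new('127.0.0.1', 8080)
s.write("PUT /upload HTTP/1.1\r\n" \
        "Host: example.com\r\n" \
        "Transfer-Encoding: chunked\r\n" \
        "Trailer: X-Checksum\r\n\r\n")
s.write("5\r\nhello\r\n0\r\nX-Checksum: abc123\r\n\r\n")
puts s.read # server closes the connection after responding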
|
|
|
|
We can actually just use one IO and file descriptor here and
simplify the code while we're at it.
|
|
I'd honestly be more comfortable doing this in C (and possibly
adapting the code from the libcurl internals since that code has
been very well-tested).
|
|
Eventually this (and ChunkedReader) may be done in C/Ragel
along with the existing HttpParser.
|
|
Don't allow misbehaving clients to misspell "chunked"
|
|
Under slow/inconsistent network conditions or overly aggressive
clients, there is a possibility we could've already started
reading the body. In those cases, don't bother responding
to the expectation to continue since the client has already
started sending a message body.
|
|
Respond with an "HTTP/1.1 100 Continue" response to encourage
a client to send the rest of the body.
This is part of the HTTP/1.1 standard but not often implemented
by servers:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3
This will speed up curl uploads since curl sleeps up to 1 second if
no response is received:
http://curl.haxx.se/docs/faq.html#My_HTTP_POST_or_PUT_requests_are
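The interim response itself is tiny; a sketch of emitting it
before reading the body (names here are illustrative):
EXPECT_100_RESPONSE = "HTTP/1.1 100 Continue\r\n\r\n"

# socket: the client connection, env: the parsed request headers
def handle_expect(socket, env)
  return unless /\A100-continue\z/i =~ env["HTTP_EXPECT"].to_s
  socket.write(EXPECT_100_RESPONSE)
end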
|
|
Not sure why this hasn't been an issue yet, but better
safe than sorry with data integrity...
|
|
This won't be used heavily enough to make preallocation worth
the effort. While we're at it, don't enforce policy by forcing
the readpartial buffer to be Encoding::BINARY (even though it
/should/ be :); it's up to the user of the interface to decide.
|
|
The default is false because some applications were not
written to handle partial reads (even though IO#read allows
it, not just IO#readpartial).
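A partial-read-aware app consumes the body incrementally; a
sketch (the chunk size and handler are arbitrary):
# env: the Rack environment; handle: a hypothetical per-chunk handler
def consume_body(env)
  input = env["rack.input"]
  while chunk = input.read(16384) # IO#read semantics: nil at EOF
    handle(chunk)
  end
end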
|
|
This has been totally broken since
commit b0013b043a15d77d810d5965157766c1af364db2
"Avoid duplicating the "Z" constant"
|
|
Oops!
|
|
The complexity of making the object persistent isn't worth the
potential performance gain here.
|
|
We don't ever expose the @rd object to the public, so Rack
applications won't ever call size() on it.
|
|
* avoid '' strings for GC-friendliness
* Ensure the '' we do need is binary for 1.9
* Disable passing the raw rack.input object to the child process
This is never possible with our new TeeInput wrapper.
|
|
Pay a performance penalty and always proxy reads through our
TeeInput object to ensure nobody closes our internal reader.
|
|
No point in making syscalls to deal with empty bodies.
Reinstate usage of the NULL_IO object which allows us
to avoid allocating new objects.
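In spirit (not the literal unicorn source), this is just one
shared, reusable empty stream:
require 'stringio'
NULL_IO = StringIO.new("")

# hypothetical request setup: reuse one object for bodyless
# requests instead of allocating a new rack.input each time
def input_for(env, tee_input)
  return tee_input if env["CONTENT_LENGTH"].to_i > 0
  NULL_IO.rewind
  NULL_IO
end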
|
|
Trying not to repeat ourselves. Unfortunately, Ruby 1.9 forces
us to actually care about encodings of arbitrary byte sequences.
|
|
Just clarifying the license terms of the new code. Other files
should really have this notice in them as well.
|
|
This includes an example of tunneling the git protocol inside a
TE:chunked HTTP request. The example is unfortunately contrived
in that it relies on the custom examples/cat-chunk-proxy.rb
script on the client. My initial wish was to have a generic
tool like curl(1) operate like this:
cat > ~/bin/cat-chunk-proxy.sh <<EOF
#!/bin/sh
exec curl -sfNT- http://$1:$2/
EOF
chmod +x ~/bin/cat-chunk-proxy.sh
GIT_PROXY_COMMAND=cat-chunk-proxy.sh git clone git://0:8080/foo
Unfortunately, curl will attempt a blocking read on stdin before
reading the TCP socket, causing the git-clone consumer to
starve. This does not appear to be a problem with the new
server code for handling chunked requests.
|
|
This adds support for handling POST/PUT request bodies sent
with chunked transfer encodings ("Transfer-Encoding: chunked")
[1].
Attention has been paid to ensure that a client cannot OOM us by
sending an extremely large chunk.
This implementation is pure Ruby as the Ragel-based
implementation in rfuzz didn't offer a streaming interface. It
should be reasonably close to RFC-compliant but please test it
in an attempt to break it.
The more interesting part is the ability to stream data to the
hosted Rack application as it is being transferred to the
server. This can be done whether or not the input is chunked;
enabling streaming of POST/PUT bodies allows the hosted Rack
application to process input as it receives it. See
examples/echo.ru for an example echo server over HTTP.
Enabling streaming also allows Rack applications to support
upload progress monitoring previously supported by Mongrel
handlers.
Since Rack specifies that the input needs to be rewindable, this
input is written to a temporary file (a la tee(1)) as it is
streamed to the application the first time. Subsequent rewound
reads will read from the temporary file instead of the socket.
Streaming input to the application is disabled by default since
applications may not necessarily read the entire input body
before returning. Since this is a completely new feature we've
never seen in any Ruby HTTP application server before, we're
taking the safe route by leaving it disabled by default.
Enabling this can only be done globally by changing the
Unicorn::HttpRequest::DEFAULTS hash:
Unicorn::HttpRequest::DEFAULTS["unicorn.stream_input"] = true
Similarly, a Rack application can check if streaming input
is enabled by checking the value of the "unicorn.stream_input"
key in the environment hash passed to it.
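In the spirit of the examples/echo.ru mentioned above (this
sketch is not the actual file), an echo app that benefits from
streaming input:
# config.ru sketch: echoes the request body back as it is read
class EchoBody
  def initialize(input)
    @input = input # env["rack.input"]
  end

  def each
    while buf = @input.read(16384) # nil at EOF
      yield buf
    end
  end
end

run(lambda do |env|
  [ 200, { "Content-Type" => "application/octet-stream" },
    EchoBody.new(env["rack.input"]) ]
end)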
All of this code has only been lightly tested and test coverage
is lacking at the moment.
[1] - http://tools.ietf.org/html/rfc2616#section-3.6.1
|
|
|
|
|
|
That method no longer exists, but Ruby would never know until it
tried to run it. Yes, I miss my compiled languages.
|
|
|
|
Bah, it's so much busy work to deal with this as a configuration
option. Maybe I should say we allow any logger the user wants,
as long as it's $stderr :P
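Joking aside, a config-file sketch, assuming the configurator
exposes a logger directive accepting any Logger-compatible
object:
require 'logger'
# unicorn config file: hand the server a logger of your choosing
logger Logger.new($stderr)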
|
|
Make us look even better in "Hello World" benchmarks!
Passing a third parameter to avoid the constant lookup
for the HttpRequest object doesn't seem to have a
measurable effect.
|
|
This should be faster/cheaper than using an instance variable
since it's accessed in a critical code path. Unicorn was never
designed to be reentrant or thread-safe at all, either.
|
|
This makes SIGHUP handling more consistent across different
configurations, and allows toggling preload_app to take effect
when SIGHUP is issued.
|
|
There is a potential race condition in closing the tempfile
immediately after SIGKILL-ing the child. So instead just
close it when we reap the dead child.
Some versions of Ruby may also not like having the
WORKERS hash being changed while it is iterating through
each_pair, so dup the hash to be on the safe side (should
be cheap, since it's not a deep copy) since kill_worker()
can modify the WORKERS hash.
This is somewhat of a shotgun fix as I can't reproduce the
problem consistently, but oh well.
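A sketch of the pattern described (names taken from the message
above; the real code differs in detail):
WORKERS.dup.each_pair do |pid, worker|
  next unless reaped?(pid) # hypothetical liveness check
  WORKERS.delete(pid)      # safe: we iterate over a shallow copy
  worker.tempfile.close rescue nil # close only after reaping
end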
|
|
|
|
This should prevent Rack from being required too early,
so "-I" passed on the unicorn command-line can modify
$LOAD_PATH before Rack is loaded.
|