Date | Commit message (Collapse) |
|
* avoid '' strings for GC-friendliness
* Ensure the '' we do need is binary for 1.9
* Disable passing the raw rack.input object to the child process
This is never possible with our new TeeInput wrapper.
|
|
Pay a performance penalty and always proxy reads through our
TeeInput object to ensure nobody closes our internal reader.
|
|
No point in making syscalls to deal with empty bodies.
Reinstate usage of the NULL_IO object which allows us
to avoid allocating new objects.
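The allocation-avoidance idea above can be sketched as follows. The constant name mirrors the NULL_IO mentioned in the message; the helper around it is hypothetical, not Unicorn's actual code:

```ruby
require 'stringio'

# A single shared StringIO stands in for every empty request body, so
# no new object is allocated per body-less request.
NULL_IO = StringIO.new("")

# Hypothetical helper: choose the rack.input object for a request.
def rack_input_for(content_length)
  if content_length.to_i == 0
    NULL_IO.rewind  # reset read position left over from a previous request
    NULL_IO
  else
    StringIO.new("placeholder body")  # stand-in for a socket-backed input
  end
end
```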
|
|
Trying not to repeat ourselves. Unfortunately, Ruby 1.9 forces
us to actually care about encodings of arbitrary byte sequences.
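The usual 1.9 workaround looks roughly like this generic sketch (not Unicorn's exact code): force buffers that hold arbitrary bytes to binary so concatenation cannot raise on invalid byte sequences.

```ruby
# On Ruby 1.9+, string literals inherit the source encoding (often
# UTF-8), while socket data is arbitrary bytes.  Forcing buffers to
# binary (ASCII-8BIT) keeps concatenation from raising
# Encoding::CompatibilityError on invalid byte sequences.
buf = "".dup
buf.force_encoding(Encoding::BINARY)

raw = "\xff\xfe".dup.force_encoding(Encoding::BINARY)  # arbitrary bytes
buf << raw
```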
|
|
|
|
Then hopefully soon we'll be able to get rid of this script...
|
|
Just clarifying the license terms of the new code. Other files
should really have this notice in there as well.
|
|
While we're at it, use the rsyncable flag with gzip here
to reduce bandwidth usage on my end.
|
|
|
|
This includes an example of tunneling the git protocol inside a
TE:chunked HTTP request. The example is unfortunately contrived
in that it relies on the custom examples/cat-chunk-proxy.rb
script in the client. My initial wish was to have a generic
tool like curl(1) operate like this:
cat > ~/bin/cat-chunk-proxy.sh <<\EOF
#!/bin/sh
exec curl -sfNT- http://$1:$2/
EOF
chmod +x ~/bin/cat-chunk-proxy.sh
GIT_PROXY_COMMAND=cat-chunk-proxy.sh git clone git://0:8080/foo
Unfortunately, curl will attempt a blocking read on stdin before
reading the TCP socket, causing the git-clone consumer to
starve. This does not appear to be a problem with the new
server code for handling chunked requests.
|
|
This adds support for handling POST/PUT request bodies sent with
chunked transfer encodings ("Transfer-Encoding: chunked").
Attention has been paid to ensure that a client cannot OOM us by
sending an extremely large chunk.
This implementation is pure Ruby as the Ragel-based
implementation in rfuzz didn't offer a streaming interface. It
should be reasonably close to RFC-compliant but please test it
in an attempt to break it.
The more interesting part is the ability to stream data to the
hosted Rack application as it is being transferred to the
server. This works whether or not the input is chunked;
enabling streaming of POST/PUT bodies lets the hosted Rack
application process input as it receives it. See
examples/echo.ru for an example echo server over HTTP.
Enabling streaming also allows Rack applications to support
upload progress monitoring previously supported by Mongrel
handlers.
Since Rack specifies that the input needs to be rewindable, this
input is written to a temporary file (a la tee(1)) as it is
streamed to the application the first time. Subsequent rewound
reads are served from the temporary file instead of the socket.
Streaming input to the application is disabled by default since
applications may not necessarily read the entire input body
before returning. Since this is a completely new feature we've
never seen in any Ruby HTTP application server before, we're
taking the safe route by leaving it disabled by default.
Enabling this can only be done globally by changing the
Unicorn HttpRequest::DEFAULTS hash:
Unicorn::HttpRequest::DEFAULTS["unicorn.stream_input"] = true
Similarly, a Rack application can check whether streaming input
is enabled by checking the value of the "unicorn.stream_input"
key in the environment hash passed to it.
All of this code has only been lightly tested and test coverage
is lacking at the moment.
[1] - http://tools.ietf.org/html/rfc2616#section-3.6.1
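A rough sketch of the tee(1)-style rewindable input described above. Class and method names here are illustrative, not Unicorn's actual TeeInput API, and a real implementation must also keep draining the socket after a rewind (omitted for brevity):

```ruby
require 'tempfile'
require 'stringio'

# First pass: every chunk read from the upstream IO is copied into a
# tempfile.  After a rewind, reads replay from the tempfile instead of
# the socket, which is what makes the input rewindable for Rack.
class TeeSketch
  def initialize(upstream)
    @upstream = upstream          # e.g. the client socket
    @tmp = Tempfile.new('tee-sketch')
    @tmp.binmode
    @rewound = false
  end

  def read(len = 16384)
    if @rewound
      @tmp.read(len)              # replay from the backing file
    else
      chunk = @upstream.read(len)
      @tmp.write(chunk) if chunk  # tee each chunk into the tempfile
      chunk
    end
  end

  def rewind
    @tmp.rewind
    @rewound = true
    0                             # 0, matching StringIO#rewind
  end
end
```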
|
|
|
|
|
|
That method no longer exists, but Ruby would never know until it
tried to run it. Yes, I miss my compiled languages.
|
|
|
|
Bah, it's so much busy work to deal with this as a configuration
option. Maybe I should say we allow any logger the user wants,
as long as it's $stderr :P
|
|
Make us look even better in "Hello World" benchmarks!
Passing a third parameter to avoid the constant lookup
for the HttpRequest object doesn't seem to have a
measurable effect.
|
|
This should be faster/cheaper than using an instance variable
since it's accessed in a critical code path. Unicorn was never
designed to be reentrant or thread-safe at all, either.
|
|
This makes SIGHUP handling more consistent across different
configurations, and allows toggling preload_app to take effect
when SIGHUP is issued.
|
|
There is a potential race condition in closing the tempfile
immediately after SIGKILL-ing the child. So instead just
close it when we reap the dead child.
Some versions of Ruby may also not like the WORKERS hash
being changed while iterating through each_pair, so dup
the hash to be on the safe side (this should be cheap,
since it is not a deep copy) given that kill_worker()
can modify the WORKERS hash.
This is somewhat of a shotgun fix as I can't reproduce the
problem consistently, but oh well.
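The dup-before-iterate fix can be sketched like this; the names mirror the message above, but the bodies are stand-ins for the real signal and reaping logic:

```ruby
# kill_worker mutates WORKERS, so iterate over a shallow copy rather
# than the live hash.
WORKERS = { 101 => :worker_a, 102 => :worker_b }

def kill_worker(signal, pid)
  # the real version sends the signal; deleting the entry is the
  # mutation that could upset an in-flight each_pair
  WORKERS.delete(pid)
end

WORKERS.dup.each_pair do |pid, _worker|
  kill_worker(:KILL, pid)
end
```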
|
|
|
|
|
|
|
|
This should prevent Rack from being required too early, so
"-I" passed on the unicorn command-line can modify
$LOAD_PATH before Rack is loaded.
|
|
No point in refreshing the list of gems unless the app
can actually be reloaded.
|
|
On application reloads, Gems installed by the Ruby instance may
be different from when Unicorn started. So when preload_app is
false, HUP-ing Unicorn will always refresh the list of Gems
before loading the application code.
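A hedged sketch of that reload path: Gem.refresh re-scans installed gem specs so gems added since startup become requirable. preload_app and build_app! here are local stand-ins for Unicorn's configuration flag and application loader, not its real API:

```ruby
require 'rubygems'

preload_app = false

def build_app!
  # placeholder: load the Rack application here
end

# inside the SIGHUP handling:
unless preload_app
  # pick up gems installed since the master process started
  Gem.refresh if defined?(Gem) && Gem.respond_to?(:refresh)
  build_app!
end
```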
|
|
* commit 'v0.7.1':
unicorn 0.7.1
Conflicts:
lib/unicorn/const.rb
|
|
|
|
* benchmark:
Define HttpRequest#reset if missing
|
|
Newer versions of Unicorn do not include a #reset method
|
|
* 0.7.x-stable:
GNUmakefile: glob all files in bin/*
Disable formatting for command-line switches
test_response: correct OFS test
http_response: allow string status codes
Enforce minimum timeout at 3 seconds
configurator: fix rdoc formatting
Preserve 1.9 IO encodings in reopen_logs
Fix a warning about @pid being uninitialized
TUNING: add a note about somaxconn with UNIX sockets
Ignore unhandled master signals in the workers
Safer timeout handling and test case
app/old_rails: correctly log errors in output
Add TUNING document
app/exec_cgi: GC prevention
Add example init script
test_upload: still uncomfortable with 1.9 IO encoding...
test_request: enable with Ruby 1.9 now Rack 1.0.0 is out
|
|
Easier to maintain and add new executables this way
|
|
Copy and pasting from the RDoc web page and passing
"\342\200\224config-file" to the command-line does not work.
|
|
Must have multiple headers to test this effectively
|
|
Rack::Lint says they just have to work when to_i is
called on the status, so that's what we'll do.
|
|
2 seconds is still prone to race conditions under high load.
We're intentionally less accurate than we could be in order to
reduce syscall and method dispatch overhead.
|
|
|
|
Ensure we preserve both internal and external encodings
when reopening logs.
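A sketch of the encoding-preserving reopen: IO#reopen resets a stream's encodings, so capture the external and internal encodings first and restore them afterwards. The log path here is a throwaway tempfile, not Unicorn's log handling:

```ruby
require 'tempfile'

tmp = Tempfile.new('log-sketch')
log = File.open(tmp.path, 'a:UTF-8')
ext_enc = log.external_encoding
int_enc = log.internal_encoding

log.reopen(tmp.path, 'a')  # e.g. after rotation moved the old file away
if ext_enc
  # restore whatever was configured before the reopen
  int_enc ? log.set_encoding(ext_enc, int_enc) : log.set_encoding(ext_enc)
end
```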
|
|
|
|
|
|
This makes it easier to use "killall -$SIGNAL unicorn"
without having to look up the correct PID.
|
|
Timeouts of less than 2 seconds are unsafe due to the lack of
subsecond resolution in most POSIX filesystems. This is the
trade-off for using a low-complexity solution for timeouts.
Since this type of timeout is a last resort, 2 seconds is not
entirely unreasonable IMNSHO. Additionally, timing out too
aggressively can put us in a fork loop and slow down the system.
Of course, the default is 60 seconds and most people do not
bother to change it.
|
|
"out" was an invalid variable in that context...
|
|
Most of this should be applicable to Mongrel and other
web servers, too.
|
|
Don't allow newly created IO objects to get GC'ed and
subsequently close(2)-ed. We're not reopening the
{$std,STD}{in,out,err} variables since those can't be
trusted to have fileno 0, 1 and 2 respectively.
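The GC-prevention idea can be sketched as follows; the constant name is illustrative, not Unicorn's:

```ruby
# Wrap each standard descriptor in an IO object and pin the wrappers in
# a globally reachable constant so they are never collected (a finalized
# IO close(2)s its underlying descriptor).
PINNED_IOS = [0, 1, 2].map do |fd|
  io = IO.for_fd(fd)
  io.autoclose = false if io.respond_to?(:autoclose=)  # extra safety on 1.9+
  io
end
```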
|
|
This was done in Bourne shell because it's easier for UNIX
sysadmins who don't know Ruby to understand and modify.
Additionally, it can be used for nginx or anything else
that shares compatible signal handling.
|
|
It seems most applications use buffered IO#read instead of
IO#sysread. So make sure our encoding is set correctly for
buffered IO#read applications, too.
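A small sketch of the distinction, shown on a pipe for portability: IO#set_encoding covers the buffered IO#read path, while IO#sysread returns binary strings regardless.

```ruby
r, w = IO.pipe
w.write("\xc3\xa9".dup.force_encoding(Encoding::BINARY))  # two raw bytes
w.close

r.set_encoding(Encoding::BINARY)
data = r.read   # buffered read; honors the encoding set above
r.close
```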
|
|
|
|
Easier to maintain and add new executables this way
|
|
Copy and pasting from the RDoc web page and passing
"\342\200\224config-file" to the command-line does not work.
|