2009-07-01  unicorn 0.9.0 v0.9.0
2009-07-01  Force streaming input onto apps by default
This change gives applications full control to deny clients from uploading unwanted message bodies. It also paves the way for doing things like upload progress notification within applications in a Rack::Lint-compatible manner. Since we don't support HTTP keepalive, we have more freedom here: we can close TCP connections and deny clients the ability to write to us (and thus waste our bandwidth). While I could've left this feature off by default indefinitely for maximum backwards compatibility (for arguably broken applications), Unicorn is not and has never been about supporting the lowest common denominator.
2009-07-01  tee_input: avoid ignoring initial body blob
This was causing the first part of the body to go missing when an HTTP client did not delay between sending the header and the body of the request.
2009-07-01  Move "Expect: 100-continue" handling to the app
This gives the app the ability to deny clients with a 417 response instead of the server blindly making that decision for the underlying application. Of course, apps must be made aware of this.
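A minimal sketch of what such an app-level check might look like, assuming a plain Rack app and the usual CGI-style env key for the header (illustrative only, not a shipped example):

    # config.ru (sketch): refuse "Expect: 100-continue" uploads outright
    run(lambda do |env|
      if env["HTTP_EXPECT"].to_s =~ /\b100-continue\b/i
        # deny the expectation before the client sends the body
        [ 417, { "Content-Type" => "text/plain" }, [ "Expectation Failed\n" ] ]
      else
        [ 200, { "Content-Type" => "text/plain" }, [ "OK\n" ] ]
      end
    end)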
2009-07-01  Re-add support for non-portable socket options
Now that we support tunnelling arbitrary protocols over HTTP as well as "100 Continue" responses, TCP_NODELAY actually becomes useful to us; it is also reasonably portable nowadays. While we're adding non-portable options, TCP_CORK/TCP_NOPUSH can be enabled, too. Unlike in some other servers, these can't be disabled explicitly/intelligently to force a flush, but they may still improve performance with "normal" HTTP applications (Mongrel has always had TCP_CORK enabled on Linux). While we're adding OS-specific features, we might as well support TCP_DEFER_ACCEPT on Linux and the "httpready" accept filter on FreeBSD to prevent abuse. These options can all be enabled on a per-listener basis.
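As an illustration, a per-listener setup in a config file could look something like the following; the option names here are assumptions about the Configurator's listen options, so check them against the documentation:

    # unicorn config file (sketch; option names assumed)
    listen "127.0.0.1:8080",
           :tcp_nodelay => true,    # disable Nagle's algorithm
           :tcp_nopush => true      # TCP_CORK (Linux) / TCP_NOPUSH (*BSD)

    # Linux: don't wake us up until the client actually sends data
    listen "0.0.0.0:8081", :tcp_defer_accept => true

    # FreeBSD: the "httpready" accept filter serves the same purpose
    # listen "0.0.0.0:8082", :accept_filter => "httpready"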
2009-06-30  Retry listen() on EADDRINUSE 5 times every 500ms
The number of retries and the delay are taken directly from nginx.
2009-06-30  Unbind listeners before stopping workers
This allows another process to take our listeners sooner rather than later.
2009-06-30  TrailerParser integration into ChunkedReader
The "Trailer:" header and its associated trailer lines should be reasonably well supported now.
2009-06-30  trailer_parser: set keys with "HTTP_" prefix
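In other words, a trailer shows up in the Rack env the same way a regular header would; a sketch, assuming a "Content-MD5" trailer and the usual CGI-style key mapping:

    # inside a Rack app, after the chunked body (and its trailers) have been read:
    md5 = env["HTTP_CONTENT_MD5"]  # value of the "Content-MD5" trailer, if the client sent one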
2009-06-30  TeeInput: use only one IO for tempfile
We can actually just use one IO and file descriptor here and simplify the code while we're at it.
2009-06-30  chunked_reader: Add test for chunk parse failure
I'd honestly be more comfortable doing this in C (and possibly adapting the code from the libcurl internals since that code has been very well-tested).
2009-06-30  Add trailer_parser for parsing trailers
Eventually this (and ChunkedReader) may be done in C/Ragel along with the existing HttpParser.
2009-06-29  http_request: tighter Transfer-Encoding: "chunked" check
Don't allow misbehaving clients to misspell "chunked"
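Conceptually the check becomes an exact, case-insensitive match instead of a loose substring test; a rough sketch (not the actual parser code):

    te = env["HTTP_TRANSFER_ENCODING"].to_s
    chunked = (te =~ /\Achunked\z/i) ? true : false  # "chunky", "chunked-ish" etc. no longer pass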
2009-06-29  Only send "100 Continue" when no body has been sent
Under slow or inconsistent network conditions, or with overly aggressive clients, there is a possibility that we have already started reading the body. In that case, don't bother responding to the expectation to continue, since the client has already started sending the message body.
2009-06-29  ACK clients on "Expect: 100-continue" header
Respond with an "HTTP/1.1 100 Continue" line to encourage the client to send the rest of the body. This is part of the HTTP/1.1 standard, though not often implemented by servers:

http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3

This will speed up curl uploads, since curl sleeps for up to 1 second if no response is received:

http://curl.haxx.se/docs/faq.html#My_HTTP_POST_or_PUT_requests_are
2009-06-29  http_request: force BUFFER to be Encoding::BINARY
Not sure why this hasn't been an issue yet, but better safe than sorry with data integrity...
2009-06-29  chunked_reader: simpler interface
This won't be used heavily enough to make preallocation worth the effort. While we're at it, don't enforce policy by forcing the readpartial buffer to be Encoding::BINARY (even though it /should/ be :); it's up to the user of the interface to decide.
2009-06-29  configurator: provide stream_input (true|false) option
The default is false because some applications were not written to handle partial reads (even though IO#read allows it, not just IO#readpartial).
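In a config file this is a one-liner; note that stream_input belongs to this era's configurator and may not exist in later releases:

    # unicorn config file
    stream_input true  # hand partial reads to the app as data arrives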
2009-06-29  inetd: fix broken constant references
This has been totally broken since commit b0013b043a15d77d810d5965157766c1af364db2 "Avoid duplicating the "Z" constant"
2009-06-29  tee_input: avoid rereading fresh data
Oops!
2009-06-29  Make TeeInput easier to use
The complexity of making the object persistent isn't worth the potential performance gain here.
2009-06-29  tee_input: avoid defining a @rd.size method
We don't ever expose the @rd object to the public, so Rack applications won't ever call size() on it.
2009-06-25  exec_cgi: small cleanups
* avoid '' strings for GC-friendliness
* Ensure the '' we do need is binary for 1.9
* Disable passing the raw rack.input object to the child process.
  This is never possible with our new TeeInput wrapper.
2009-06-25  tee_input: Don't expose the @rd object as a return value
Pay a performance penalty and always proxy reads through our TeeInput object to ensure nobody closes our internal reader.
2009-06-10  Optimize body-less GET/HEAD requests (again)
No point in making syscalls to deal with empty bodies. Reinstate usage of the NULL_IO object which allows us to avoid allocating new objects.
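The idea is to share one empty, rewindable object as rack.input for body-less requests instead of building a fresh StringIO per request; a sketch, with the constant name taken from the subject line and the surrounding details assumed:

    require 'stringio'
    NULL_IO = StringIO.new("")   # shared across all body-less requests

    # when handling a GET/HEAD request without a body:
    NULL_IO.rewind
    env["rack.input"] = NULL_IO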
2009-06-09  Avoid duplicating the "Z" constant
Trying not to repeat ourselves. Unfortunately, Ruby 1.9 forces us to actually care about encodings of arbitrary byte sequences.
2009-06-06  Put copyright text in new files, include GPL2 text
Just clarifying the license terms of the new code. Other files should really have this notice in them as well.
2009-06-06  Unicorn::App::Inetd: reinventing Unix, poorly :)
This includes an example of tunneling the git protocol inside a TE:chunked HTTP request. The example is unfortunately contrived in that it relies on the custom examples/cat-chunk-proxy.rb script in the client. My initial wish was to have a generic tool like curl(1) operate like this:

    cat > ~/bin/cat-chunk-proxy.sh <<EOF
    #!/bin/sh
    exec curl -sfNT- http://$1:$2/
    EOF
    chmod +x ~/bin/cat-chunk-proxy.sh
    GIT_PROXY_COMMAND=cat-chunk-proxy.sh git clone git://0:8080/foo

Unfortunately, curl will attempt a blocking read on stdin before reading the TCP socket, causing the git-clone consumer to starve. This does not appear to be a problem with the new server code for handling chunked requests.
2009-06-05  Transfer-Encoding: chunked streaming input support
This adds support for handling POST/PUT request bodies sent with chunked transfer encoding ("Transfer-Encoding: chunked") [1]. Attention has been paid to ensure that a client cannot OOM us by sending an extremely large chunk. This implementation is pure Ruby, as the Ragel-based implementation in rfuzz didn't offer a streaming interface. It should be reasonably close to RFC-compliant, but please test it in an attempt to break it.

The more interesting part is the ability to stream data to the hosted Rack application as it is being transferred to the server. This can be done regardless of whether the input is chunked or not; enabling the streaming of POST/PUT bodies allows the hosted Rack application to process input as it receives it. See examples/echo.ru for an example echo server over HTTP. Enabling streaming also allows Rack applications to support the upload progress monitoring previously supported by Mongrel handlers.

Since Rack specifies that the input needs to be rewindable, this input is written to a temporary file (a la tee(1)) as it is streamed to the application the first time. Subsequent rewinded reads will read from the temporary file instead of the socket.

Streaming input to the application is disabled by default since applications may not necessarily read the entire input body before returning. Since this is a completely new feature we've never seen in any Ruby HTTP application server before, we're taking the safe route by leaving it disabled by default. Enabling this can only be done globally by changing the Unicorn HttpRequest::DEFAULTS hash:

    Unicorn::HttpRequest::DEFAULTS["unicorn.stream_input"] = true

Similarly, a Rack application can check if streaming input is enabled by checking the value of the "unicorn.stream_input" key in the environment hash passed to it.

All of this code has only been lightly tested and test coverage is lacking at the moment.

[1] - http://tools.ietf.org/html/rfc2616#section-3.6.1
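A minimal echo application in the spirit of the examples/echo.ru mentioned above (this is a sketch, not the shipped example; it simply streams rack.input back out):

    # echo.ru -- stream the request body back to the client
    class EchoBody
      def initialize(input)
        @input = input
      end

      def each
        while (buf = @input.read(4096)) && !buf.empty?
          yield buf
        end
      end
    end

    run(lambda do |env|
      [ 200, { "Content-Type" => "application/octet-stream" },
        EchoBody.new(env["rack.input"]) ]
    end)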
2009-06-05  http_request: fix typo for 1.9
2009-05-31  http_request: StringIO is binary for empty bodies (1.9)
2009-05-30  http_request: no need to reset the request
That method no longer exists, but Ruby would never know until it tried to run it. Yes, I miss my compiled languages.
2009-05-28  unicorn 0.8.1 v0.8.1
2009-05-28  Consistent logger assignment for multiple objects
Bah, it's so much busy work to deal with this as a configuration option. Maybe I should say we allow any logger the user wants, as long as it's $stderr :P
2009-05-28  Avoid instance variable lookups in a critical path
Make us look even better in "Hello World" benchmarks! Passing a third parameter to avoid the constant lookup for the HttpRequest object doesn't seem to have a measurable effect.
2009-05-28  Make our HttpRequest object a global constant
This should be faster/cheaper than using an instance variable since it's accessed in a critical code path. Unicorn was never designed to be reentrant or thread-safe at all, either.
2009-05-28  SIGHUP reloads app even if preload_app is true
This makes SIGHUP handling more consistent across different configurations and allows toggling preload_app to take effect when SIGHUP is issued.
2009-05-28  Fix potential race condition in timeout handling
There is a potential race condition in closing the tempfile immediately after SIGKILL-ing the child, so instead just close it when we reap the dead child. Some versions of Ruby may also not like having the WORKERS hash changed while it is iterating through each_pair, so dup the hash to be on the safe side (this should be cheap, since it's not a deep copy) because kill_worker() can modify the WORKERS hash. This is somewhat of a shotgun fix as I can't reproduce the problem consistently, but oh well.
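The iteration-safety part of the fix boils down to walking a snapshot of the hash; a sketch of the pattern (worker_timed_out? is a hypothetical stand-in for the actual timeout check):

    # kill_worker() may delete entries from WORKERS, so iterate over a copy
    WORKERS.dup.each_pair do |pid, worker|
      kill_worker(:KILL, pid) if worker_timed_out?(worker)
    end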
2009-05-26  unicorn 0.8.0 v0.8.0
2009-05-25  Switch to autoload to defer requires
This should prevent Rack from being required too early, so that "-I" passed on the unicorn command line can modify $LOAD_PATH before Rack is loaded.
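The pattern is roughly the following; which constants unicorn actually autoloads is not listed here, so treat this as a sketch:

    module Unicorn
      # the file is only required when the constant is first referenced,
      # i.e. after "-I" has already adjusted $LOAD_PATH
      autoload :HttpRequest, 'unicorn/http_request'
    end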
2009-05-25  Only refresh the gem list when building the app
No point in refreshing the list of gems unless the app can actually be reloaded.
2009-05-25  Refresh Gem list when building the app
On application reloads, the Gems available to the Ruby instance may be different from when Unicorn started. So when preload_app is false, HUP-ing Unicorn will always refresh the list of Gems before loading the application code.
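The refresh itself is just a RubyGems call made before the app is rebuilt; a sketch (build_app! is a hypothetical name for the app-building step):

    Gem.refresh if defined?(Gem) && Gem.respond_to?(:refresh)
    app = build_app!  # re-evaluate the config.ru / app code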
2009-05-22  Merge commit 'v0.7.1'
* commit 'v0.7.1':
  unicorn 0.7.1

Conflicts:
  lib/unicorn/const.rb
2009-05-22  unicorn 0.7.1 v0.7.1
2009-05-22  http_response: allow string status codes
Rack::Lint says they just have to work when to_i is called on the status, so that's what we'll do.
2009-05-22  Enforce minimum timeout at 3 seconds
2 seconds is still prone to race conditions under high load. We're intentionally less accurate than we could be in order to reduce syscall and method dispatch overhead.
2009-05-22  configurator: fix rdoc formatting
2009-05-22  Preserve 1.9 IO encodings in reopen_logs
Ensure we preserve both internal and external encodings when reopening logs.
2009-05-22  Fix a warning about @pid being uninitialized
2009-05-22  Ignore unhandled master signals in the workers
This makes it easier to use "killall -$SIGNAL unicorn" without having to look up the correct PID.