Age | Commit message | Author | Files | Lines
2009-07-01unicorn 0.9.0 v0.9.0Eric Wong2-1/+2
2009-07-01Remove cat-chunk-proxy, curl CVS supports non-blocking stdinEric Wong4-69/+9
Now that upstream curl supports this functionality, there's no reason to duplicate it here as an example.
2009-07-01Force streaming input onto apps by defaultEric Wong9-59/+18
This change gives applications full control to deny clients from uploading unwanted message bodies. It also paves the way for things like upload progress notification within applications in a Rack::Lint-compatible manner. Since we don't support HTTP keepalive, we have more freedom here: we can close TCP connections and deny clients the ability to write to us (and thus waste our bandwidth). While I could've left this feature off by default indefinitely for maximum backwards compatibility (with arguably broken applications), Unicorn is not and has never been about supporting the lowest common denominator.
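A minimal Rack sketch of the kind of control described above. The app name and the 1 MiB limit are illustrative, not unicorn API; the point is that the app can refuse before ever reading the body:

```ruby
# Hypothetical Rack app: reject oversized request bodies up front,
# without reading them, now that streamed input reaches the app.
MAX_BODY = 1024 * 1024 # 1 MiB; arbitrary limit for illustration

deny_big_uploads = lambda do |env|
  if env['CONTENT_LENGTH'].to_i > MAX_BODY
    # Deny without touching rack.input; the server is then free to
    # close the TCP connection and stop accepting the client's bytes.
    [413, { 'Content-Type' => 'text/plain' }, ["Request Entity Too Large\n"]]
  else
    [200, { 'Content-Type' => 'text/plain' }, ["ok\n"]]
  end
end
```

Because nothing is read from `rack.input` on the 413 path, the unwanted body never costs application time.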
2009-07-01tee_input: avoid ignoring initial body blobEric Wong1-1/+4
This was causing the first part of the body to be missing when an HTTP client failed to delay between sending the header and body in the request.
2009-07-01Move "Expect: 100-continue" handling to the appEric Wong3-4/+13
This gives the app the ability to deny clients with 417 instead of the server blindly making the decision for the underlying application. Of course, apps must be made aware of this.
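A hypothetical middleware sketch of app-level expectation handling (the class name is made up; only the `HTTP_EXPECT` env key is taken from the commit's premise):

```ruby
# Hypothetical Rack middleware: answer "Expect: 100-continue" requests
# with 417 Expectation Failed instead of letting the upload proceed.
class DenyExpectations
  def initialize(app)
    @app = app
  end

  def call(env)
    if env['HTTP_EXPECT'].to_s.downcase == '100-continue'
      [417, { 'Content-Type' => 'text/plain' }, ["Expectation Failed\n"]]
    else
      @app.call(env)
    end
  end
end
```

An app that wants the body would instead pass the request through, and the server's 100 Continue handling takes over.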
2009-07-01Re-add support for non-portable socket optionsEric Wong2-2/+72
Now that we support tunnelling arbitrary protocols over HTTP as well as "100 Continue" responses, TCP_NODELAY actually becomes useful to us. TCP_NODELAY is reasonably portable nowadays. While we're adding non-portable options, TCP_CORK/TCP_NOPUSH can be enabled, too. Unlike some other servers, these can't be disabled explicitly/intelligently to force a flush, but they may still improve performance with "normal" HTTP applications (Mongrel has always had TCP_CORK enabled on Linux). While we're adding OS-specific features, we might as well support TCP_DEFER_ACCEPT on Linux and the "httpready" accept filter on FreeBSD to prevent abuse. These options can all be enabled on a per-listener basis.
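A sketch of what a per-listener configuration might look like in a unicorn config file. The option names here are illustrative guesses at the Configurator interface; check the Configurator documentation for the exact names supported by your version:

```ruby
# Hypothetical unicorn config fragment: per-listener socket options.
listen 8080, :tcp_nodelay => true, :tcp_nopush => true
listen 8081, :tcp_defer_accept => true        # Linux only
# listen 8082, :accept_filter => "httpready"  # FreeBSD only
```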
2009-06-30Retry listen() on EADDRINUSE 5 times every 500msEric Wong1-3/+10
The number of retries and the delay were taken directly from nginx.
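The retry loop described above can be sketched like this (a generic Ruby illustration, not unicorn's actual binder code; the method name is made up):

```ruby
require 'socket'

# Sketch: retry binding a listener on EADDRINUSE up to 5 times,
# sleeping 500ms between attempts, as nginx does.
def retry_listen(port, tries = 5, delay = 0.5)
  begin
    TCPServer.new('127.0.0.1', port)
  rescue Errno::EADDRINUSE => err
    raise err if (tries -= 1) < 0 # give up after the last attempt
    sleep(delay)
    retry
  end
end
```

This matters during restarts: the old process may hold the port briefly, and a short retry window papers over the race.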
2009-06-30Unbind listeners before stopping workersEric Wong1-2/+1
This allows another process to take our listeners sooner rather than later.
2009-06-30TrailerParser integration into ChunkedReaderEric Wong7-17/+63
The "Trailer:" header and associated trailer lines should be reasonably well supported now.
2009-06-30trailer_parser: set keys with "HTTP_" prefixEric Wong2-5/+5
2009-06-30TeeInput: use only one IO for tempfileEric Wong1-25/+21
We can actually just use one IO and file descriptor here and simplify the code while we're at it.
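An illustrative sketch of the single-IO tee idea (this is not unicorn's actual TeeInput, just a minimal model of "record what we read into one tempfile so it can be rewound"):

```ruby
require 'tempfile'

# Sketch: tee reads from a source IO into a single Tempfile; the same
# IO/file descriptor serves both the recording writes and later rereads.
class MiniTee
  def initialize(src)
    @src = src
    @tmp = Tempfile.new('mini_tee') # one IO for writing and rewinding
  end

  def read(len)
    if buf = @src.read(len)
      @tmp.write(buf) # record consumed bytes for later rewinds
    end
    buf
  end

  def rewind
    @tmp.rewind
    @tmp # subsequent reads come from the tempfile, not the socket
  end
end
```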
2009-06-30chunked_reader: Add test for chunk parse failureEric Wong2-1/+18
I'd honestly be more comfortable doing this in C (and possibly adapting the code from the libcurl internals since that code has been very well-tested).
2009-06-30Add trailer_parser for parsing trailersEric Wong3-0/+105
Eventually this (and ChunkedReader) may be done in C/Ragel along with the existing HttpParser.
2009-06-29http_request: tighter Transfer-Encoding: "chunked" checkEric Wong1-1/+1
Don't allow misbehaving clients to misspell "chunked".
2009-06-29Only send "100 Continue" when no body has been sentEric Wong1-3/+3
Under slow/inconsistent network conditions or overly aggressive clients, there is a possibility we could've already started reading the body. In those cases, don't bother responding to the expectation to continue since the client has already started sending a message body.
2009-06-29ACK clients on "Expect: 100-continue" headerEric Wong3-2/+5
Respond with "HTTP/1.1 100 Continue" to encourage a client to send the rest of the body. This is part of the HTTP/1.1 standard but not often implemented by servers. It will speed up curl uploads, since curl sleeps up to 1 second if no response is received.
2009-06-29http_request: force BUFFER to be Encoding::BINARYEric Wong1-0/+1
Not sure why this hasn't been an issue yet, but better safe than sorry with data integrity...
2009-06-29chunked_reader: simpler interfaceEric Wong3-42/+38
This won't be used heavily enough to make preallocation worth the effort. While we're at it, don't enforce policy by forcing the readpartial buffer to be Encoding::BINARY (even though it /should/ be :); it's up to the user of the interface to decide.
2009-06-29configurator: provide stream_input (true|false) optionEric Wong3-7/+31
The default is false because some applications were not written to handle partial reads (even though IO#read allows it, not just IO#readpartial).
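An app written for streamed input has to tolerate short reads. A sketch of the safe consumption pattern (helper name is made up; the semantics are just those of `IO#read` with a length argument):

```ruby
# Sketch: consume a streamed rack.input safely.  With stream_input,
# input.read(len) may return fewer than len bytes (as IO#read permits),
# so read in a loop until nil signals EOF instead of one big read.
def slurp_body(input, chunk_size = 16 * 1024)
  body = ''.dup
  while chunk = input.read(chunk_size)
    body << chunk
  end
  body
end
```

Apps that assume one `read` call returns the whole body are exactly the ones this default protects.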
2009-06-29inetd: fix broken constant referencesEric Wong1-2/+2
This has been totally broken since commit b0013b043a15d77d810d5965157766c1af364db2 "Avoid duplicating the "Z" constant"
2009-06-29"Fix" tests that break with stream_input=falseEric Wong4-4/+18
2009-06-29tee_input: avoid rereading fresh dataEric Wong1-1/+10
2009-06-29test_rails: workaround long-standing 1.9 bugEric Wong1-12/+18
Now that I've beefed up my Makefile to detect errors, I've noticed this test has been failing under 1.9 for a while. Currently no released version of Rack (1.0) or Rails supports this.
2009-06-29GNUmakefile: allow TRACER= to be specified for testsEric Wong1-1/+4
This can allow you to run make with: TRACER='strace -f -o $(t).strace -s 100000' to debug a test failure (it should be usable with truss, ltrace, and other similar tools).
2009-06-29test_upload: fix ECONNRESET with 1.9Eric Wong1-1/+5
This has been broken since 6945342a1f0a4caaa918f2b0b1efef88824439e0 "Transfer-Encoding: chunked streaming input support" but somehow never caught by me or anyone else.
2009-06-29GNUmakefile: more stringent error checking in testsEric Wong1-3/+4
2009-06-29test_upload: add tests for chunked encodingEric Wong1-1/+86
Additionally, provide verifications for sizes after-the-fact to avoid slamming all of our input into the server.
2009-06-29Make TeeInput easier to useEric Wong3-22/+7
The complexity of making the object persistent isn't worth the potential performance gain here.
2009-06-29tee_input: avoid defining a @rd.size methodEric Wong1-3/+0
We don't ever expose the @rd object to the public, so Rack applications won't ever call size() on it.
2009-06-29README: another note about older SinatraEric Wong1-6/+11
Older Sinatra would blindly try to run Mongrel or Thin at_exit. This causes strange behavior when Unicorn workers exit.
2009-06-25exec_cgi: small cleanupsEric Wong1-9/+7
* Avoid '' strings for GC-friendliness
* Ensure the '' we do need is binary for 1.9
* Disable passing the raw rack.input object to the child process; this is never possible with our new TeeInput wrapper
2009-06-25tee_input: Don't expose the @rd object as a return valueEric Wong1-1/+1
Pay a performance penalty and always proxy reads through our TeeInput object to ensure nobody closes our internal reader.
2009-06-10Optimize body-less GET/HEAD requests (again)Eric Wong4-13/+62
No point in making syscalls to deal with empty bodies. Reinstate usage of the NULL_IO object which allows us to avoid allocating new objects.
2009-06-09Avoid duplicating the "Z" constantEric Wong6-12/+5
Trying not to repeat ourselves. Unfortunately, Ruby 1.9 forces us to actually care about encodings of arbitrary byte sequences.
2009-06-07Update TODOEric Wong1-10/+10
2009-06-07examples/cat-chunk-proxy: link to proposed curl(1) patchEric Wong1-4/+9
Then hopefully soon we'll be able to get rid of this script...
2009-06-06Put copyright text in new files, include GPL2 textEric Wong7-3/+353
Just clarifying the license terms of the new code. Other files should really have this notice in there as well.
2009-06-06publish_doc gzips all html, js, cssEric Wong1-3/+3
While we're at it, use the rsyncable flag with gzip here to reduce bandwidth usage on my end.
2009-06-06README: update with mailing list infoEric Wong1-2/+9
2009-06-06Unicorn::App::Inetd: reinventing Unix, poorly :)Eric Wong4-0/+178
This includes an example of tunneling the git protocol inside a TE:chunked HTTP request. The example is unfortunately contrived in that it relies on the custom examples/cat-chunk-proxy.rb script in the client. My initial wish was to have a generic tool like curl(1) operate like this:

    cat > ~/bin/ <<EOF
    #!/bin/sh
    exec curl -sfNT- http://$1:$2/
    EOF
    chmod +x ~/bin/
    git clone git://0:8080/foo

Unfortunately, curl will attempt a blocking read on stdin before reading the TCP socket, causing the git-clone consumer to starve. This does not appear to be a problem with the new server code for handling chunked requests.
2009-06-05Transfer-Encoding: chunked streaming input supportEric Wong11-159/+478
This adds support for handling POST/PUT request bodies sent with chunked transfer encodings ("Transfer-Encoding: chunked"). Attention has been paid to ensure that a client cannot OOM us by sending an extremely large chunk. This implementation is pure Ruby, as the Ragel-based implementation in rfuzz didn't offer a streaming interface. It should be reasonably close to RFC-compliant, but please test it in an attempt to break it.

The more interesting part is the ability to stream data to the hosted Rack application as it is being transferred to the server. This can be done regardless of whether the input is chunked or not; enabling the streaming of POST/PUT bodies allows the hosted Rack application to process input as it receives it. See examples/ for an example echo server over HTTP. Enabling streaming also allows Rack applications to support upload progress monitoring previously supported by Mongrel handlers.

Since Rack specifies that the input needs to be rewindable, this input is written to a temporary file (a la tee(1)) as it is streamed to the application the first time. Subsequent rewound reads will read from the temporary file instead of the socket.

Streaming input to the application is disabled by default, since applications may not necessarily read the entire input body before returning. Since this is a completely new feature we've never seen in any Ruby HTTP application server before, we're taking the safe route by leaving it disabled by default. Enabling this can only be done globally by changing the Unicorn HttpRequest::DEFAULTS hash:

    Unicorn::HttpRequest::DEFAULTS["unicorn.stream_input"] = true

Similarly, a Rack application can check if streaming input is enabled by checking the value of the "unicorn.stream_input" key in the environment hash passed to it.

All of this code has only been lightly tested and test coverage is lacking at the moment.
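A sketch of the upload-progress idea this enables. The app below is hypothetical; it only assumes the `rack.input` object described above and reads it incrementally as the bytes arrive:

```ruby
# Hypothetical Rack app: with "unicorn.stream_input" enabled, the body
# is read as it arrives, so the app can track upload progress itself.
progress_app = lambda do |env|
  input = env['rack.input']
  received = 0
  while chunk = input.read(8192)
    received += chunk.size
    # a real app might publish `received` somewhere for a progress bar
  end
  [200, { 'Content-Type' => 'text/plain' }, ["read #{received} bytes\n"]]
end
```

Without streaming, the loop above would only start after the server had buffered the entire body, defeating progress reporting.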
2009-06-05http_request: fix typo for 1.9Eric Wong1-1/+1
2009-05-31http_request: StringIO is binary for empty bodies (1.9)Eric Wong1-2/+5
2009-05-30http_request: no need to reset the requestEric Wong1-1/+0
That method no longer exists, but Ruby would never know until it tried to run it. Yes, I miss my compiled languages.
2009-05-28unicorn 0.8.1 v0.8.1Eric Wong2-1/+2
2009-05-28Consistent logger assignment for multiple objectsEric Wong1-1/+4
Bah, it's so much busy work to deal with this as a configuration option. Maybe I should say we allow any logger the user wants, as long as it's $stderr :P
2009-05-28Avoid instance variables lookups in a critical pathEric Wong1-3/+4
Make us look even better in "Hello World" benchmarks! Passing a third parameter to avoid the constant lookup for the HttpRequest object doesn't seem to have a measurable effect.
2009-05-28Make our HttpRequest object a global constantEric Wong2-3/+8
This should be faster/cheaper than using an instance variable since it's accessed in a critical code path. Unicorn was never designed to be reentrant or thread-safe at all, either.
2009-05-28SIGHUP reloads app even if preload_app is trueEric Wong2-4/+5
This makes SIGHUP handling more consistent across different configurations, and allows toggling preload_app to take effect when SIGHUP is issued.
2009-05-28Fix potential race condition in timeout handlingEric Wong1-8/+14
There is a potential race condition in closing the tempfile immediately after SIGKILL-ing the child, so instead just close it when we reap the dead child. Some versions of Ruby may also not like having the WORKERS hash changed while iterating through each_pair, so dup the hash to be on the safe side (which should be cheap, since it's not a deep copy), because kill_worker() can modify the WORKERS hash. This is somewhat of a shotgun fix, as I can't reproduce the problem consistently, but oh well.