We now depend on Unicorn 1.1.3 to avoid race conditions during
log cycling. This bug mainly affected folks using Rainbows! as
a multithreaded static file server.
"keepalive_timeout 0" now works as documented for all backends
to completely disable keepalive. This was previously broken
under EventMachine, Rev, and Revactor.
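For reference, disabling keepalive entirely is done in the Rainbows! block
of the Unicorn config file; a minimal sketch (the concurrency model and
connection count shown here are arbitrary examples):

  # inside the config file passed via "rainbows -c"; :ThreadSpawn is
  # just an example concurrency model
  Rainbows! do
    use :ThreadSpawn
    worker_connections 100
    keepalive_timeout 0 # zero completely disables keepalive
  end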
There is a new Rainbows::ThreadTimeout Rack middleware which
gives soft timeouts to apps running on multithreaded backends.
There are several bugfixes for proxying IO objects and the usual
round of small code cleanups and documentation updates.
See the commits in git for all the details.
|
|
Although this behavior is mentioned in the documentation,
it was broken under EventMachine, Rev*, and Revactor.
Furthermore, we set the "Connection: close" header to allow the
client to optimize its handling of non-keepalive connections.
|
|
Proxying IO objects with threaded Rev concurrency models
occasionally failed with pipelined requests (t0034). By
deferring the on_write_complete callback until the next
"tick" (similar to what we do in Rev::Client#write),
we prevent clobbering responses during pipelining.
|
|
Remove an unused constant.
|
|
There are no constant resolution changes; we just avoid
redefining modules needlessly since this is not meant to be
used standalone.
|
|
We are trying to avoid adding singleton methods since they are
too easily accessible and not needed by the general public.
This also allows us (or just Zbatery) to more easily add support
for systems without FD_CLOEXEC or fcntl, and also to optimize
away an fcntl call on systems that inherit FD_CLOEXEC.
|
|
This allows for per-dispatch timeouts similar to (but not exactly
like) the way Mongrel (1.1.x) implemented them with threads.
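A minimal sketch of the idea follows; this is not the actual
Rainbows::ThreadTimeout code, just the general shape (the class name,
option, and error class below are illustrative):

  # Illustrative thread-based soft-timeout middleware, not the real
  # Rainbows::ThreadTimeout implementation.
  class SoftTimeout
    ExecutionExpired = Class.new(StandardError)

    def initialize(app, timeout = 30)
      @app, @timeout = app, timeout
    end

    def call(env)
      worker = Thread.current
      watchdog = Thread.new do
        sleep(@timeout)
        # abort a long-running dispatch by raising in the worker thread
        worker.raise(ExecutionExpired, "dispatch exceeded #{@timeout}s")
      end
      @app.call(env)
    ensure
      watchdog.kill
      watchdog.join
    end
  end

  # config.ru:  use SoftTimeout, 10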
|
|
First off we use an FD_MAP to avoid creating redundant IO
objects which map to the same FD. When that doesn't work, we'll
fall back to trapping Errno::EBADF and IOError where
appropriate.
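A rough sketch of the approach (the shape of FD_MAP and the helper
names are assumptions, not the actual code):

  # Map file descriptor numbers to IO objects so the same FD is never
  # wrapped twice; helper names here are only illustrative.
  FD_MAP = {}

  def io_for(fd)
    FD_MAP[fd] ||= IO.new(fd)
  end

  def forget(fd)
    io = FD_MAP.delete(fd) and io.close
  rescue Errno::EBADF, IOError
    # descriptor was already closed elsewhere, nothing left to do
  end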
|
|
Our keep-alive timeout mechanism does not need to kick in and
redundantly close a connection when a client has already
disconnected. Fortunately there is no danger of redundantly
closing the same numeric file descriptors (and perhaps causing
difficult-to-track-down errors).
|
|
/dev/fd/0 may not be stat()-able on some systems after dropping
permissions from root to a regular user. So just check for
"/dev/fd", which seems to work on RHEL 2.6.18 kernels. This also
allows us to be used independently of Unicorn in case somebody
ever feels the compelling need to /close/ stdin.
|
|
That is the official name of the project and we will not lead
people to believe differently.
|
|
For concurrency models that use sendfile or IO.copy_stream, HTTP
Range requests are honored when serving static files. Due to
the lack of known use cases, multipart range responses are not
supported.
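A simplified sketch of how a single-part byte range can be served with
IO.copy_stream under Ruby 1.9; the header parsing below is deliberately
naive and not the server's actual code:

  # Naive single-part Range handling with IO.copy_stream (Ruby 1.9+);
  # suffix ranges and validation are omitted for brevity.
  def serve_range(client, path, range)
    size = File.size(path)
    if range =~ /\Abytes=(\d+)-(\d*)\z/
      first = $1.to_i
      last = $2.empty? ? size - 1 : $2.to_i
      len = last - first + 1
      client.write("HTTP/1.1 206 Partial Content\r\n" \
                   "Content-Range: bytes #{first}-#{last}/#{size}\r\n" \
                   "Content-Length: #{len}\r\n\r\n")
      IO.copy_stream(path, client, len, first) # copy length, then offset
    else
      client.write("HTTP/1.1 200 OK\r\nContent-Length: #{size}\r\n\r\n")
      IO.copy_stream(path, client)
    end
  end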
When serving static files with sendfile and proxying
pipe/socket bodies, response bodies are always properly closed
and we have more test cases for dealing with prematurely
disconnecting clients.
Concurrency model specific changes:
EventMachine, NeverBlock -
* keepalive is now supported when proxying pipes/sockets
* pipelining works properly when using EM::FileStreamer
* these remain the only concurrency models _without_
Range support (EM::FileStreamer doesn't support ranges)
Rev, RevThreadSpawn, RevThreadPool -
* keepalive is now supported when proxying pipes/sockets
* pipelining works properly when using sendfile
RevThreadPool -
* no longer supported under 1.8, as it pegs the CPU at 100%.
Use RevThreadSpawn (or any other concurrency model) if
you're on 1.8, or better yet, switch to 1.9.
Revactor -
* proxying pipes/sockets with DevFdResponse is much faster
thanks to a new Actor-aware IO wrapper (used transparently
with DevFdResponse)
* sendfile support added, along with Range responses
FiberSpawn, FiberPool, RevFiberSpawn -
* Range responses supported when using sendfile
ThreadPool, ThreadSpawn, WriterThreadPool, WriterThreadSpawn -
* Range responses supported when using sendfile or
IO.copy_stream.
See the full git logs for a list of all changes.
|
|
It's an internal implementation detail and not for
user consumption.
|
|
EventMachine may close the underlying file descriptor on us if
there are unrecoverable errors during write, so IO#closed? is
a pointless check: EM does not invalidate the Ruby IO object
when it closes the descriptor out from under us.
|
|
Due to the synchronous nature of Revactor, we can
be certain sendfile won't overstep the userspace
output buffering done by Rev.
|
|
This makes life easier for the lazy GC when proxying
large responses (and also improves memory locality).
|
|
Proxying regular Ruby IO objects while Revactor is in use is
highly suboptimal, so we proxy them with an Actor-aware wrapper
for better scheduling.
|
|
Since TCP sockets stream, HTTP requests do not come in at
well-defined boundaries and it's possible for pipelined requests
to arrive in staggered form. We need to ensure our
receive_data callback doesn't fire any actions at all while
we're responding with a deferrable @body.
We still need to be careful about buffering, since EM does not
appear to allow temporarily disabling read events (without
pausing writes), so we shut down the read end of the socket
if the buffer reaches a maximum header size limit.
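Roughly, the shape of that logic looks like this; names are
illustrative, and the real code shuts down only the read end of the
socket rather than closing the connection as done here:

  require 'eventmachine'

  # Illustrative only: buffer pipelined input while a deferrable body
  # is still being written, and resume parsing once it finishes.
  class Client < EM::Connection
    MAX_HEADER = 16 * 1024 # assumed limit, not the real constant

    def post_init
      @buf, @deferred_body = "", nil
    end

    def receive_data(data)
      @buf << data
      if @deferred_body
        # don't fire any actions yet; just guard against abusive clients
        close_connection if @buf.size > MAX_HEADER
      else
        parse_and_respond
      end
    end

    def body_done # invoked from the deferrable body's callback
      @deferred_body = nil
      parse_and_respond unless @buf.empty?
    end

    def parse_and_respond
      # parse @buf, write a response, possibly set @deferred_body ...
    end
  end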
|
|
Not sure where this is happening, but this can trigger
Errno::EBADF under heavy load.
|
|
When proxying pipes/sockets, it's possible for the Rev::IO#write
to fail and close our connection. In that case we do not want
our client to continue with the on_write_complete callback.
|
|
It hits 100% CPU usage, and Rev's 1.8 support when mixed
with threads is currently suboptimal. Unfortunately
our tests cannot check for 100% CPU usage, so I had to
*gasp* confirm it by actually starting an app :x
This appears to be a fixable bug in Rev, however, and
we'll try to fix it as soon as we have time.
|
|
EM::FileStreamer writes may be intermingled with the headers
in the subsequent response if we enable processing of the
second pipelined response right away, so wait until the
first response is complete before hitting the second one.
This also avoids potential deep stack recursion in the unlikely
case where too many requests are pipelined.
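Since EM::FileStreamer is a Deferrable, the fix amounts to chaining the
next request off its completion callback; a sketch (next_request is an
illustrative method name, not the actual one):

  require 'eventmachine'

  class Client < EM::Connection
    def stream_file(path)
      streamer = EM::FileStreamer.new(self, path)
      streamer.callback do
        # the file is fully written, so the next response's headers
        # can no longer intermingle with its data
        next_request
      end
    end

    def next_request
      # parse and dispatch the next buffered pipelined request ...
    end
  end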
|
|
With sendfile enabled, we must avoid writing headers (or normal,
non-file responses) while a file is deferred for sending. This
means we must disable processing of new requests while a file
is deferred for sending and use the on_write_complete callback
less aggressively.
|
|
It's a destructive method, and it does more than just parsing.
|
|
We don't send headers with HTTP/0.9 connections, so the IO write
watchers in Rev are never enabled if we're proxying IO objects
as the response body.
|
|
This was always an issue, but not noticed until
0cd65fa1e01be369b270c72053cf21a3d6bcb45f ...
|
|
The FileStreamer class of EventMachine (and by extension
NeverBlock) unfortunately doesn't handle this. It's possible
to do with Revactor (since it uses Rev under the covers),
but we'll support what we can easily for now.
|
|
This is cheaper for serving static files and only
slightly more expensive for pipes and sockets (extra
path lookup for File.stat).
|
|
It's conceivable that we can avoid loading TeeInput for
EventMachine and Rev concurrency models in the future
since it's unused there.
|
|
We need to remember to close response bodies even if
a client aborts the connection, since body.close can
trigger interesting things like logging and such...
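The pattern, roughly (identifiers here are illustrative):

  # Always close the body, even when the client disconnects mid-write.
  def write_body(client, body)
    body.each { |chunk| client.write(chunk) }
  rescue Errno::EPIPE, Errno::ECONNRESET
    # client aborted; swallow the error, but still close the body below
  ensure
    body.close if body.respond_to?(:close) # may trigger logging, etc.
  end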
|
|
EM::FileStreamer must be passed a path, so we should release
our newly opened descriptor first :<
|
|
Middlewares like Clogger may wrap Rack::File responses
with another body that responds to to_path and still
rely on #close to trigger an action (writing out the log
file).
|
|
Some middlewares such as Clogger wrap the body and rely on
having the close method called on it for logging.
|
|
Similar to what we do in EM, this avoids unnecessary
conditional logic inside more frequently used code paths.
|
|
Remove unnecessary include and also remove unnecessary
nesting.
|
|
Some apps never serve static files nor proxy pipes/sockets,
so they'll never need to deal with deferred responses.
|
|
It's slightly faster as there's no string to parse and also
no garbage format string to be discarded.
|
|
We also properly fail on EM::FileStreamer responses now.
|
|
Extraneous returns are harder to follow.
|
|
Some applications may not use Response*Pipe and TryDefer at all,
so there's no reason to pollute the runtime with extra nodes to
mark during GC.
|
|
This makes it easier to write proxies for slow clients that
benefit from keep-alive. We also need to be careful about
non-HTTP/1.1 connections that can't do keepalive now.
|
|
If a response proxying a pipe (or socket) includes a
Content-Length, do not attempt to outsmart the application
and just use the given Content-Length.
This helps avoid exposing applications to weird internals such
as env["rainbows.autochunk"] and X-Rainbows-* response headers.
|
|
No need to double up on begin blocks since we know @client.write
won't raise the exceptions @io.read_nonblock does. Also, prefer
@client.write to @client.send_data since it looks more in
line with other IO interfaces.
|
|
Since the EM loop runs entirely in one thread, we can get away
with using a single buffer across all pipe/socket responses.
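A sketch of the idea (buffer size and method names are illustrative);
this is only safe because all of these reads happen on the single
reactor thread:

  # One reusable buffer for every proxied pipe/socket body.
  BUF = ""

  def proxy_chunk(io, client)
    client.write(io.read_nonblock(16384, BUF)) # refills BUF in place
  rescue Errno::EAGAIN, EOFError
    # nothing to read right now, or the pipe/socket is finished
  end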
|
|
Using EM.enable_proxy with EM.attach seems to cause
EM::Connection#receive_data callbacks to be fired before the
proxy has a chance to act, leading to the first few chunks of
data being lost in the default receive_data handler. Instead,
just rely on EM.watch like the chunked pipe.
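With EM.watch we own the reads ourselves via notify_readable, so
nothing can land in the default receive_data handler first; a sketch
(the handler module and argument names are illustrative):

  require 'eventmachine'

  module PipeReader # illustrative handler module
    def initialize(io, client)
      @io, @client = io, client
    end

    def notify_readable
      @client.send_data(@io.read_nonblock(16384))
    rescue Errno::EAGAIN
    rescue EOFError
      detach
    end
  end

  # inside the reactor:
  #   conn = EM.watch(pipe_io, PipeReader, pipe_io, client_connection)
  #   conn.notify_readable = true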
|
|
Rack::File already sets the Content-Length header for us,
so there's no reason to ever set this ourselves.
|
|
IO#read always returns a binary string buffer if passed an
explicit length to read, and we always do that. This is
a small garbage reduction.
|
|
Favor constants over literal strings for a small garbage
reduction.
|
|
This will give each concurrency model more control over
particular code paths and serving static files.
|