Date | Commit message (Collapse) |
|
We now depend on Unicorn 1.1.3 to avoid race conditions during
log cycling. This bug mainly affected folks using Rainbows! as
a multithreaded static file server.
"keepalive_timeout 0" now works as documented for all backends
to completely disable keepalive. This was previously broken
under EventMachine, Rev, and Revactor.
There is a new Rainbows::ThreadTimeout Rack middleware which
gives soft timeouts to apps running on multithreaded backends.
There are several bugfixes for proxying IO objects and the usual
round of small code cleanups and documentation updates.
See the commits in git for all the details.
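The new middleware is enabled from a rackup file like any other Rack middleware; a minimal sketch (the :timeout option name and value are assumptions, check the Rainbows::ThreadTimeout docs for the exact signature):

```ruby
# config.ru -- sketch only; option name/value are illustrative
require 'rainbows'
use Rainbows::ThreadTimeout, :timeout => 10.0 # soft timeout in seconds
run lambda { |env| [200, { 'Content-Type' => 'text/plain' }, ["hi\n"]] }
```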
|
|
Unicorn 1.1.3 fixes potential race conditions during
SIGUSR1 log reopening.
|
|
Although this behavior is mentioned in the documentation,
this was broken under EventMachine, Rev*, and Revactor.
Furthermore, we set the "Connection: close" header to allow the
client to optimize its handling of non-keepalive connections.
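For reference, disabling keepalive entirely looks like this in a Rainbows! configuration file (a sketch; ThreadPool stands in for any concurrency model):

```ruby
# rainbows.conf.rb -- sketch
Rainbows! do
  use :ThreadPool
  keepalive_timeout 0 # completely disable keepalive, all backends
end
```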
|
|
Rack::Lint uses String#inspect to generate assertion messages
regardless of whether the assertions are triggered at all.
Unfortunately String#inspect is hilariously slow under 1.9.2
when dealing with odd characters and large strings.
The performance difference is huge:
before: 1m4.386s
after: 0m3.877s
We already have Rack::Lint enabled everywhere else, so removing
this where performance matters most shouldn't hurt us.
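The cost is easy to picture: String#inspect must escape every byte of a binary string before the message is even needed. A minimal sketch (the string size is illustrative; the dramatic slowdown was specific to 1.9.2, so timings will vary by Ruby version):

```ruby
require 'benchmark'

# String#inspect escapes every byte of a binary string, which was
# hilariously slow under 1.9.2 (size here is illustrative)
big = "\xff".force_encoding(Encoding::BINARY) * 100_000

elapsed = Benchmark.realtime { big.inspect }
puts format('inspect of a %d-byte binary string: %.3fs',
            big.bytesize, elapsed)
```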
|
|
Proxying IO objects with threaded Rev concurrency models
occasionally failed with pipelined requests (t0034). By
deferring the on_write_complete callback until the next
"tick" (similar to what we do in Rev::Client#write),
we prevent clobbering responses during pipelining.
|
|
Remove an unused constant.
|
|
No constant resolution changes; avoid redefining
modules needlessly since this is not meant to be
used standalone.
|
|
Avoid adding singleton methods since they are too easily
accessible by the public and not needed by the general public.
This also allows us (or just Zbatery) to more easily add support
for systems without FD_CLOEXEC or fcntl, and also to optimize
away an fcntl call on systems that inherit FD_CLOEXEC.
|
|
This allows for per-dispatch timeouts similar to (but not exactly)
the way Mongrel (1.1.x) implemented them with threads.
|
|
First off we use an FD_MAP to avoid creating redundant IO
objects which map to the same FD. When that doesn't work, we'll
fall back to trapping Errno::EBADF and IOError where
appropriate.
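The FD_MAP idea reduces to memoizing one Ruby IO object per numeric descriptor; a hypothetical sketch (names are illustrative, not the Rainbows! code):

```ruby
# map a numeric file descriptor to a single Ruby IO object instead
# of wrapping the same FD repeatedly (illustrative sketch)
FD_MAP = {} # fd (Integer) => IO

def io_for(fd)
  # autoclose: false so GC of one wrapper cannot close a shared FD
  FD_MAP[fd] ||= IO.for_fd(fd, autoclose: false)
end

a = io_for(2) # fd 2 (stderr) used here just as a safe example
b = io_for(2)
a.equal?(b) # => true, no redundant IO object for the same FD
```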
|
|
Our keep-alive timeout mechanism does not need to kick in and
redundantly close a client that is already being closed.
Fortunately there is no danger of redundantly closing the same
numeric file descriptors (and perhaps causing
difficult-to-track-down errors).
|
|
Pound appears to work well in my limited testing with
t/sha1.ru and "curl -T-"
|
|
/dev/fd/0 may not be stat()-able on some systems after dropping
permissions from root to a regular user. So just check for
"/dev/fd", which seems to work on RHEL 2.6.18 kernels. This also
allows us to be used independently of Unicorn in case somebody
ever feels the compelling need to /close/ stdin.
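The check itself is tiny; a sketch of the distinction:

```ruby
# stat()-ing /dev/fd/0 can fail after dropping root privileges,
# but checking the /dev/fd directory itself still works
# (observed on RHEL 2.6.18 kernels per the commit above)
have_dev_fd = File.directory?('/dev/fd')
puts(have_dev_fd ? '/dev/fd usable' : 'no /dev/fd support')
```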
|
|
That is the official name of the project and we will not lead
people to believe differently.
|
|
Ruby 1.9.2 no longer includes '.' inside $LOAD_PATH by default,
so those requires won't work unless we specify the full path.
We prefer File.expand_path to prefixing './' since we want to be
consistent with what Rails itself uses to prevent
double-requires.
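The fix amounts to building an absolute path before requiring; a sketch ('test_helper' is an illustrative name):

```ruby
# File.expand_path always yields an absolute path, so the require
# keeps working even though 1.9.2 dropped '.' from $LOAD_PATH
path = File.expand_path('test_helper', File.dirname(__FILE__))
# require path  # absolute, so $LOAD_PATH contents don't matter
puts path
```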
|
|
For concurrency models that use sendfile or IO.copy_stream, HTTP
Range requests are honored when serving static files. Due to
the lack of known use cases, multipart range responses are not
supported.
When serving static files with sendfile and proxying
pipe/socket bodies, response bodies are always properly closed,
and we have more test cases for dealing with prematurely
disconnecting clients.
Concurrency model specific changes:
EventMachine, NeverBlock -
* keepalive is now supported when proxying pipes/sockets
* pipelining works properly when using EM::FileStreamer
* these remain the only concurrency models _without_
Range support (EM::FileStreamer doesn't support ranges)
Rev, RevThreadSpawn, RevThreadPool -
* keepalive is now supported when proxying pipes/sockets
* pipelining works properly when using sendfile
RevThreadPool -
* no longer supported under 1.8, it pegs the CPU at 100%.
Use RevThreadSpawn (or any other concurrency model) if
you're on 1.8, or better yet, switch to 1.9.
Revactor -
* proxying pipes/sockets with DevFdResponse is much faster
thanks to a new Actor-aware IO wrapper (used transparently
with DevFdResponse)
* sendfile support added, along with Range responses
FiberSpawn, FiberPool, RevFiberSpawn -
* Range responses supported when using sendfile
ThreadPool, ThreadSpawn, WriterThreadPool, WriterThreadSpawn -
* Range responses supported when using sendfile or
IO.copy_stream.
See the full git logs for a list of all changes.
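A single-byte-range header reduces to an (offset, length) pair suitable for sendfile; a minimal sketch, not the Rainbows! implementation (multipart ranges are deliberately ignored, matching the release notes above):

```ruby
# parse "bytes=START-END" / "bytes=START-" / "bytes=-SUFFIX" into
# an [offset, length] pair; nil means unsatisfiable or unsupported
# (multipart and malformed ranges are ignored on purpose)
def single_range(header, size)
  return nil unless header =~ /\Abytes=(\d*)-(\d*)\z/
  first, last = $1, $2
  if first.empty? # suffix range: the last N bytes
    len = [last.to_i, size].min
    return nil if len.zero?
    [size - len, len]
  else
    a = first.to_i
    b = last.empty? ? size - 1 : [last.to_i, size - 1].min
    return nil if a > b
    [a, b - a + 1]
  end
end

single_range('bytes=0-499', 1000) # => [0, 500]
single_range('bytes=-500', 1000)  # => [500, 500]
single_range('bytes=500-', 1000)  # => [500, 500]
```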
|
|
Not sure what possessed me to clobber the original
variable set in the parent, but the initial time
for the subshell could be set too late in relation
to the actual server time. So we need to stash the
original time before any HTTP requests were made.
|
|
I keep forgetting, so I'll force it on myself.
|
|
It's an internal implementation detail and not for
user consumption.
|
|
Things have changed slightly since 0.95.1, which is
ancient.
|
|
We've knocked out a few of these, so we're closer to being
"done" :)
|
|
EventMachine may close the underlying file descriptor on us if
there are unrecoverable errors during a write, but it does not
invalidate the Ruby IO object when doing so, making IO#closed?
a pointless check.
|
|
Due to the synchronous nature of Revactor, we can
be certain sendfile won't overstep the userspace
output buffering done by Rev.
|
|
Our test suite doesn't include facilities for dealing
with temporary directories, yet.
|
|
We may want to try some external libraries for some tests
via RUBYLIB/RUBYOPT while doing development.
|
|
This makes life easier for the lazy GC when proxying
large responses (and also improves memory locality).
|
|
Proxying regular Ruby IO objects while Revactor is in use is
highly suboptimal, so proxy them with an Actor-aware wrapper for
better scheduling.
|
|
Since TCP sockets stream, HTTP requests do not arrive at
well-defined boundaries and pipelined requests may come in
staggered. We need to ensure our receive_data callback doesn't
fire any actions at all while responding with a deferrable
@body.
We still need to be careful about buffering, since EM does not
appear to allow temporarily disabling read events (without
pausing writes), so we shutdown the read end of the socket
if it reaches a maximum header size limit.
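The staggering is easy to picture with a toy buffer: TCP may split two pipelined GETs anywhere, so a parser must buffer until a full request is present (a sketch, not the EventMachine code):

```ruby
# two pipelined requests, split mid-request-line by TCP framing
chunks = [
  "GET / HTTP/1.1\r\nHost: example\r\n\r\nGET /two HT",
  "TP/1.1\r\nHost: example\r\n\r\n"
]

buf = ''.dup
requests = []
chunks.each do |chunk| # stands in for successive receive_data calls
  buf << chunk
  # only act on complete requests; partial data stays buffered
  while (i = buf.index("\r\n\r\n"))
    requests << buf.slice!(0, i + 4)
  end
end
requests.size # => 2
```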
|
|
Not sure where this is happening, but this can trigger
Errno::EBADF under heavy load.
|
|
When proxying pipes/sockets, it's possible for Rev::IO#write
to fail and close our connection. In that case we do not want
our client to continue with the on_write_complete callback.
|
|
It pegs CPU usage at 100%, and Rev's 1.8 support is currently
suboptimal when mixed with threads. Unfortunately our tests
cannot check for 100% CPU usage, so I had to *gasp* confirm it
by actually starting an app :x
This appears to be a fixable bug in Rev, however, and
we'll try to fix it as soon as we have time.
|
|
They were taking long enough to be annoying :<
|
|
HTTP/1.1 and HTTP/1.0 code paths may vary significantly from the
(highly uncommon) HTTP/0.9 ones in our concurrency models,
so add extra tests for those.
|
|
EM::FileStreamer writes may be intermingled with the headers
in the subsequent response if we enable processing of the
second pipelined response right away, so wait until the
first response is complete before hitting the second one.
This also avoids potential deep stack recursion in the unlikely
case where too many requests are pipelined.
|
|
With sendfile enabled, we must avoid writing headers (or normal,
non-file responses) while a file is deferred for sending. This
means we must disable processing of new requests while a file
is deferred for sending and use the on_write_complete callback
less aggressively.
|
|
It's a destructive method, and it does more than just parsing.
|
|
Those are entirely single-threaded during the application
dispatch phase.
|
|
While gawk can handle binary data, other awks cannot, so
use tr(1) to filter out non-printable characters from the
WebSocket message. We need to send a bigger message, too,
since tr(1) output is buffered and there's no portable way
to unbuffer it :<
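A Ruby analogue of the tr(1) filtering, for comparison (the message contents are illustrative):

```ruby
# strip non-printable bytes from a WebSocket message the way the
# test script does with tr(1), keeping newlines intact
msg = "binary\x00\x01payload\n"
clean = msg.gsub(/[^[:print:]\n]/, '')
clean # => "binarypayload\n"
```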
|
|
We don't send headers with HTTP/0.9 connections, so the IO write
watchers in Rev are never enabled if we're proxying IO objects
as the response body.
|
|
This was always an issue, but not noticed until
0cd65fa1e01be369b270c72053cf21a3d6bcb45f ...
|
|
The FileStreamer class of EventMachine (and by extension
NeverBlock) unfortunately doesn't handle this. It's possible
to do with Revactor (since it uses Rev under the covers),
but we'll support what we can easily for now.
|
|
This is cheaper for serving static files and only
slightly more expensive for pipes and sockets (extra
path lookup for File.stat).
|
|
It's conceivable that we can avoid loading TeeInput for the
EventMachine and Rev concurrency models in the future
since it's unused there.
|
|
We need to remember to close response bodies even if
a client aborts the connection, since body.close can
trigger interesting things like logging and such...
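The pattern boils down to an ensure around body iteration; a sketch (method and argument names are illustrative, not the Rainbows! internals):

```ruby
# always close the Rack response body, even if the client aborts
# mid-write; body.close may trigger logging middleware and such
def write_body(client, body)
  body.each { |chunk| client.write(chunk) }
ensure
  body.close if body.respond_to?(:close)
end
```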
|
|
EM::FileStreamer must be passed a path, so we should release
our newly opened descriptor first :<
|
|
Middlewares like Clogger may wrap Rack::File responses
with another body that responds to to_path and still
rely on #close to trigger an action (writing out the log
file).
|
|
Some middlewares such as Clogger rely on the body they wrap
having the close method called on it for logging.
|
|
Similar to what we do in EM, this avoids unnecessary
conditional logic inside more frequently used code paths.
|
|
Remove unnecessary include and also remove unnecessary
nesting.
|