|
We can't wait for longer than 68 years.
|
|
Rainbows! now scales to more than 1024 worker processes without
special privileges. To enable this, Rainbows! now depends on
Unicorn 4.x and thus raindrops[1].
A client_max_header_size directive has been added to limit
per-client memory usage in headers.
An experimental StreamResponseEpoll concurrency option now
exists to buffer outgoing responses without any thread-safe
dependencies. Unlike the rest of Rainbows! which works fine
without nginx, this concurrency option is /only/ supported
behind nginx, even more strongly so than Unicorn itself.
Non-nginx LAN clients are NOT supported for this. This relies
on the sleepy_penguin[2] RubyGem (and Linux).
There are some minor bug fixes and cleanups all around. See
"git log v3.4.0.." for details.
[1] http://raindrops.bogomips.org/
[2] http://bogomips.org/sleepy_penguin/
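A minimal config sketch for trying StreamResponseEpoll behind nginx
(the worker_connections value here is illustrative, not a recommendation):

```ruby
# Rainbows! config file sketch; StreamResponseEpoll is only meaningful
# behind nginx on Linux and needs the sleepy_penguin RubyGem installed.
Rainbows! do
  use :StreamResponseEpoll
  worker_connections 100 # illustrative value
end
```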
|
|
It hasn't been used in a while, but we kept it for
Zbatery version compatibility.
|
|
Some pipe responses can trigger the on_deferred_write_complete
method without ever re-running the event loop.
This appears to be the cause of the occasional t0050 failures.
|
|
Untested, but it should work nowadays...
|
|
This removes the extra per-process file descriptor and
replaces it with Raindrops.
|
|
We no longer need to put all listeners away since
Unicorn uses kgio.
|
|
Lowering this reduces worst-case memory usage and mitigates some
denial-of-service attacks. This should be larger than
client_header_buffer_size.
The default value is carried over from Mongrel and Unicorn.
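A hedged config sketch showing the relationship between the two
directives (the 112K figure is the historical Mongrel-era header limit;
treat both values as assumptions to adjust, not recommendations):

```ruby
# Rainbows! config file sketch: client_max_header_size caps total header
# bytes per client and should exceed client_header_buffer_size (the
# initial per-client read buffer).
Rainbows! do
  client_header_buffer_size 1024      # 1K default from the 4.0 notes
  client_max_header_size 112 * 1024   # assumed Mongrel/Unicorn-era limit
end
```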
|
|
Do not encourage their use, really.
|
|
Yes, this concurrency model is our strangest yet.
|
|
We can get away with a single stack frame reduction. Unicorn
itself has more stack reductions, but Rainbows! is further
behind in this area.
|
|
Do not assume middlewares/applications are stupid and blindly
add chunking to responses (we have precedent set by
Rack::Chunked).
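That precedent can be sketched as a guard similar to the one
Rack::Chunked applies before chunking a response (a simplified
approximation, not the exact Rack code):

```ruby
# Simplified approximation of the Rack::Chunked check: only add
# Transfer-Encoding: chunked when the response is not already framed
# and the protocol/status allow a body.
def chunkable?(env, status, headers)
  env["HTTP_VERSION"] == "HTTP/1.1" &&
    !headers.key?("Content-Length") &&
    !headers.key?("Transfer-Encoding") &&
    ![100, 101, 204, 304].include?(status.to_i)
end
```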
|
|
HttpParser#trailers and #headers are actually the same
method, so we'll just continue on.
|
|
It's easier to use in some cases.
|
|
Rack::File already sets Content-Range, so don't repeat work
and reparse Content-Length.
|
|
This doesn't use Rainbows::Base so we have no keepalive support
at all. This could eventually be an option for streaming
applications.
|
|
We may not always use Rainbows::Base since we don't want
keepalive/immediate log reopening in some cases.
|
|
Linux 3.0.0 is just around the corner and of course newer
than 2.6.
|
|
It's better under 1.9.3 (sleepy_penguin 3.0.1 was bogus)
|
|
It's better under 1.9.3
|
|
It should hopefully give this more visibility even though it's
an internal feature.
|
|
We need to trigger a recv() to uncork the response.
This won't affect fairness (much) since all recv()s
are non-blocking and a successful header parse will
put us in the back of the queue.
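The pattern can be illustrated with stdlib non-blocking reads (Rainbows!
itself uses kgio; this sketch only shows the non-blocking recv() idea):

```ruby
require "socket"

# Non-blocking recv() sketch: a read attempt either returns already
# buffered bytes immediately or :wait_readable, so no single client can
# block the event loop while we uncork its response.
parent, child = UNIXSocket.pair
child.write("GET / HTTP/1.1\r\n\r\n")
buf = parent.recv_nonblock(16_384, exception: false)
# buf is a String here because data was already queued; an idle socket
# would have returned :wait_readable instead of blocking.
```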
|
|
Since it's cheap to maintain keepalive clients with EM, we need
a way of disconnecting them in a timely fashion on rare SIGQUIT
events.
|
|
This should enable Kgio "autopush" support for ThreadSpawn,
ThreadPool, XEpollThreadSpawn, and XEpollThreadPool.
(still needs tests)
|
|
In concurrency models where long keepalive times are cheap (and thus
more likely to be used), this allows Rainbows! to shut down
gracefully more quickly.
|
|
There's less logic in the server this way, and it's easier
to share code, too.
|
|
io_splice 4.1.1 works around issues with socket
buffers filling up pipe buffers on blocking splice.
See http://lkml.org/lkml/2009/1/13/478 for a better
explanation.
|
|
* improved documentation all around; suggestions/comments to further
improve documentation are greatly welcome at: rainbows-talk@rubyforge.org
* added GPLv3 option to the license (now (Ruby|GPLv2|GPLv3), though
Unicorn is still (Ruby|GPLv2) for now)
* added client_header_buffer_size config directive (default 1K)
* small default header buffer size (16K => 1K) to reduce memory usage,
Rails apps with cookie sessions may want to increase this (~2K)
* all concurrency models default to 50 connections per process
* all concurrency models with a secondary :pool_size parameter also
default to 50 (threads/fibers/whatever)
* RLIMIT_NOFILE and RLIMIT_NPROC are automatically increased if needed
* Rainbows::ThreadTimeout middleware rewritten, still not recommended,
lazy people should be using Unicorn anyways :)
* Several experimental Linux-only edge-triggered epoll options:
XEpollThreadSpawn, XEpollThreadPool, XEpoll, and Epoll.
The latter two were in previous releases but never announced.
These require the "sleepy_penguin", "raindrops", and "sendfile" RubyGems.
=== Deprecations
* Rainbows::Fiber::IO* APIs all deprecated, Rainbows! will avoid
having any concurrency model-specific APIs in the future and
also avoid introducing new APIs for applications.
* Fiber-based concurrency models are no longer recommended, they're
too fragile for most apps, use at your own risk (they'll continue to
be supported, however). Linux NPTL + Ruby 1.9 is pretty lightweight
and will be even lighter in Ruby 1.9.3 if you're careful with stack
usage in your C extensions.
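Under these defaults, a config need only override what differs; a
hedged sketch (all values illustrative, not recommendations):

```ruby
# Rainbows! config file sketch: worker_connections and the secondary
# :pool_size parameter both default to 50 now, so only set them when
# you need something different.
Rainbows! do
  use :ThreadPool, :pool_size => 30   # secondary parameter, default 50
  worker_connections 100              # default 50
  client_header_buffer_size 2 * 1024  # ~2K for Rails cookie sessions
end
```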
|
|
I can't wait until I stop supporting Ruby 1.8
|
|
Hopefully makes things easier to try out.
|
|
The only supported method in here is Rainbows.sleep.
|
|
Only needed for Ruby 1.9
|
|
Just close the epoll descriptor, since the sleepy_penguin
epoll_wait wrapper may not return EINTR in the future.
|
|
This allows using IO::Splice.copy_stream from the "io_splice"
RubyGem on recent Linux systems. This also allows users to
disable copy_stream usage entirely and use traditional
response_body.each calls which are compatible with all Rack
servers (to work around bugs in IO.copy_stream under 1.9.2-p180).
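A hedged sketch of the choices described (the directive name comes from
this entry; the exact accepted values are assumptions):

```ruby
# Rainbows! config file sketch for the copy_stream directive
# (pick one; the commented lines show the assumed alternatives):
Rainbows! do
  copy_stream IO::Splice  # use the io_splice RubyGem on recent Linux
  # copy_stream IO        # the Ruby 1.9 IO.copy_stream default
  # copy_stream nil       # disable; fall back to response_body.each
end
```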
|
|
Finally, we have all methods in configurator and it's
much easier to document!
|
|
It can't be used as middleware for fully-buffering concurrency
models.
|
|
There's actually no reason we can't have these methods
in Rainbows::Configurator where it's easier to document
nowadays.
|
|
CoolioThreadPool has supported it forever, but
only NeverBlock had it documented.
|
|
It's good to describe what they're useful for.
|
|
It's an internal implementation detail.
|
|
speling ficks and less confusing #initialize documentation
|
|
We're now able to configure the number of threads independently
of worker_connections.
|
|
coolio_thread_pool and neverblock both use it, and
xepoll_thread_pool will support it next, too.
|
|
This is probably friendlier on server resources in the worst
case than XEpollThreadSpawn, but may perform worse in
client-visible ways, too.
|
|
Fixed in kgio 2.4.0 now
This reverts commit a1168e7d2bfe182896f139d051ef099616fd1646.
|
|
No need for a string comparison
|
|
We only poll for one event (EPOLLIN/EPOLLOUT) at a time,
so there's no need to check which event actually fired,
since the others are too rare to matter.
|
|
worker_yield is safer than setting a threshold with multiple
acceptors when thread limits are hit. Also, avoid sleep +
Thread#run since it's potentially racy if threads are extremely
unfairly scheduled.
The same changes apply to xepoll_thread_spawn.
|
|
Infinite sleep is too dangerous due to possible race conditions,
so use worker_yield which is safer and cheaper in the general
case. We can also avoid sleeping on new threads by only
spawning when the client module is included.
|
|
Otherwise pipeline_ready can return a false positive.
|