This still needs work and lots of cleanup, but the basics are
there. The sendfile 1.0.0 RubyGem is now safe to use under MRI
1.8, and is superior to current (1.9.2-preview3) versions of
IO.copy_stream for static files in that it supports more
platforms and doesn't truncate large files on 32-bit platforms.
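The static-file path described above can be sketched roughly like this (a hedged sketch, not the actual Rainbows! code; the helper name serve_static is illustrative, and the IO#sendfile call assumes the sendfile gem's destination.sendfile(src, offset, count) API):

```ruby
require "socket"

# Illustrative helper: prefer the sendfile gem's zero-copy IO#sendfile
# when it is loaded, otherwise fall back to IO.copy_stream (Ruby 1.9+),
# which still avoids creating Ruby strings for the file data.
def serve_static(sock, path)
  File.open(path, "rb") do |src|
    if sock.respond_to?(:sendfile)
      sock.sendfile(src, 0, src.stat.size) # sendfile(2) in the kernel
    else
      IO.copy_stream(src, sock)
    end
  end
end
```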
|
|
This fleshes out Rainbows::Fiber::IO with a few
more methods for people using it.
|
|
No point in using a class here; there's no object.
|
|
No point in documenting our internals and overwhelming
users.
|
|
Rack::Contrib::Sendfile moved into Rack in December 2009.
|
|
Make it easier to link to the Rainbows! configuration
documentation without anchors. This also reduces the
amount of code we spew into Unicorn::Configurator.
|
|
We can't use it effectively in Rubinius yet, and it's
broken due to the issue described in:
http://github.com/evanphx/rubinius/issues/379
|
|
There's no need to #dup the middleware object, just use
a custom Rainbows::DevFdResponse::Body object.
|
|
I was originally experimenting with setsockopt to increase the
kernel buffer sizes in a loop, but the benefits were negligible
at best.
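The abandoned experiment looked roughly like this (a sketch, not the committed code; the helper name grow_sndbuf is illustrative). One reason the benefit was negligible: the kernel may clamp the requested size (Linux caps it at wmem_max).

```ruby
require "socket"

# Sketch: enlarge the kernel send buffer for a socket and return the
# size the kernel actually granted (Linux doubles the requested value
# for bookkeeping and clamps it at wmem_max).
def grow_sndbuf(sock, bytes)
  sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF, bytes)
  sock.getsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF).int
end
```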
|
|
For small responses that can fit inside a kernel socket
buffer, copying that data into an IO::Buffer object is
a waste of precious memory bandwidth.
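The idea can be sketched like this (illustrative, not the Rainbows! internals): attempt a direct non-blocking write first, and only buffer whatever the kernel refuses to accept.

```ruby
require "socket"

# Hand bytes straight to the kernel socket buffer; return how many
# were accepted so the caller only buffers the remainder.  The
# exception: false form is the modern (Ruby 2.3+) spelling.
def direct_write(sock, data)
  case rv = sock.write_nonblock(data, exception: false)
  when :wait_writable then 0 # kernel buffer full: caller must buffer
  else rv                    # bytes accepted with no userspace buffer
  end
end
```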
|
|
This gives a tiny performance improvement to the FiberSpawn and
FiberPool concurrency models.
|
|
Not that many people will actually call Rainbows.sleep
outside of tests...
|
|
HeaderHash objects can only be used as headers without
violating Rack::Lint in Rack 1.1.0 or later.
|
|
Array#[] lookups are slightly faster under both rbx and 1.9,
and easier to read.
|
|
Rubinius still has a few issues that prevent 100% support,
but it basically works if log rotation or USR2 upgrades aren't
required. Tickets for all known issues for Rubinius have
been filed on the project's issue tracker.
* rbx does not support -i/-p yet, so rely on MRI for that
* "io/nonblock" is missing
* avoiding any optional Gems for now (EM, Rev, etc.)
|
|
Some folks can now show off their Rainbows! installation
|
|
duh!
|
|
EventMachine and Rev shared the same logic for optimizing and
avoiding extra file opens for IO/File-ish response bodies, so
centralize that.
For Ruby 1.9 users, we've also enabled this logic so ThreadPool,
ThreadSpawn, WriterThreadPool, and WriterThreadSpawn can take
advantage of Rainbows::DevFdResponse-generated bodies while
proxying sockets.
|
|
This release fixes corrupted large response bodies for Ruby 1.8
users with the WriterThreadSpawn and WriterThreadPool models
introduced in 0.93.0. This bug did not affect Ruby 1.9 users
nor the users of any older concurrency models.
There is also a strange new Rainbows::Sendfile middleware. It
is used to negate the effect of Rack::Contrib::Sendfile, if that
makes sense. See the RDoc or
http://rainbows.rubyforge.org/Rainbows/Sendfile.html for all the
gory details.
Finally, the RDoc for our test suite is on the website:
http://rainbows.rubyforge.org/Test_Suite.html
I wrote this document back when the project started but
completely forgot to tell RDoc about it.  This test suite is
one of my favorite parts of the project.
|
|
This lets most concurrency models understand and process
X-Sendfile efficiently with IO.copy_stream under Ruby 1.9.
EventMachine can take advantage of this middleware under
both Ruby 1.8 and Ruby 1.9.
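A hedged sketch of how an X-Sendfile-aware middleware works (the class name and header handling here are illustrative, not the actual Rainbows::Sendfile code): swap the response body for the opened file so the server can run IO.copy_stream on it.

```ruby
# When the app sets an X-Sendfile header, replace the body with the
# opened File object; servers that understand IO bodies can then use
# IO.copy_stream/sendfile(2) instead of reading the file in Ruby.
class XSendfileSketch
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if path = headers.delete("X-Sendfile")
      file = File.open(path, "rb")
      headers["Content-Length"] = file.stat.size.to_s
      body = file
    end
    [status, headers, body]
  end
end
```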
|
|
While these models are designed to work with IO.copy_stream
under Ruby 1.9, it should be possible to run them under Ruby
1.8 without returning corrupt responses. The large file
response test is beefed up to compare SHA1 checksums of
the served file, not just sizes.
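The beefed-up assertion boils down to something like this (helper name is illustrative):

```ruby
require "digest/sha1"

# Compare content, not just byte counts: two files of identical size
# can still hide corruption; identical SHA1 digests (good enough for
# a test suite) cannot.
def same_content?(expected_path, served_path)
  Digest::SHA1.file(expected_path).hexdigest ==
    Digest::SHA1.file(served_path).hexdigest
end
```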
|
|
In our race to have more concurrency options than real sites
using this server, we've added two new and fully supported
concurrency models: WriterThreadSpawn and WriterThreadPool.
They're both designed for serving large static files and work
best with IO.copy_stream (sendfile!) under Ruby 1.9.  They may
also be used to dynamically generate long running, streaming
responses after headers are sent (use "proxy_buffering off" with
nginx).
Unlike most concurrency options in Rainbows!, these are designed
to run behind nginx (or haproxy if you don't support POST/PUT
requests) and are vulnerable to slow client denial of service
attacks.
I floated the idea of doing something along these lines back in
the early days of Unicorn, but deemed it too dangerous for some
applications. But nothing is too dangerous for Rainbows! So
here they are now for your experimentation.
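A toy sketch of the writer-thread idea (the structure and names are illustrative, not the WriterThreadSpawn/WriterThreadPool implementation): the worker enqueues finished responses and a dedicated thread performs the potentially slow socket writes.

```ruby
# Queue is core in modern Ruby (require "thread" on 1.8/1.9).
# One background thread drains [socket, data] pairs so a slow client
# write never blocks request dispatch in the worker.
class WriterSketch
  def initialize
    @queue = Queue.new
    @thread = Thread.new do
      while job = @queue.pop
        sock, data = job
        sock.write(data)
        sock.close
      end
    end
  end

  def defer_write(sock, data)
    @queue << [sock, data]
  end

  def shutdown
    @queue << nil # sentinel: drain remaining jobs, then exit
    @thread.join
  end
end
```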
|
|
This should be logical, since we keep the connection alive
when writing in our writer threads.
|
|
It's not worth the trouble or the testability cost, since a
single thread tends to bottleneck if there's a bad
client.
|
|
Idle threads are cheap enough and having responses
queued up with a single slow client on a large response
is bad.
|
|
This is based on an idea I originally had for Unicorn but never
implemented in Unicorn since the benefits were unproven and the
risks were too high.
|
|
It'll be useful later on for a variety of things!
|
|
Mostly internal cleanups and small improvements.
The only backwards incompatible change was the addition of the
"client_max_body_size" parameter to limit upload sizes to
prevent DoS. This defaults to one megabyte (same as nginx), so
any apps relying on the limit-less behavior of previous releases
have to configure this in the Unicorn/Rainbows! config file:
  Rainbows! do
    # nil for unlimited, or any number in bytes
    client_max_body_size nil
  end
The ThreadSpawn and ThreadPool models are now optimized for serving
large static files under Ruby 1.9 using IO.copy_stream[1].
The EventMachine model has always had optimized static file
serving (using EM::Connection#stream_file_data[2]).
The EventMachine model (finally) gets conditionally deferred app
dispatch in a separate thread, as described by Ezra Zygmuntowicz
for Merb, Ebb and Thin[3].
[1] - http://euruko2008.csrug.cz/system/assets/documents/0000/0007/tanaka-IOcopy_stream-euruko2008.pdf
[2] - http://eventmachine.rubyforge.org/EventMachine/Connection.html#M000312
[3] - http://brainspl.at/articles/2008/04/18/deferred-requests-with-merb-ebb-and-thin
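The IO.copy_stream API described in [1] also takes optional length and offset arguments, which is what makes efficient ranged static-file serving possible (the helper name copy_range is illustrative):

```ruby
# IO.copy_stream(src, dst, copy_length = nil, src_offset = nil)
# copies in C, using sendfile(2)/pread where the platform allows, so
# no Ruby String objects are created for the file data.
def copy_range(path, dst, offset, length)
  IO.copy_stream(path, dst, length, offset)
end
```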
|
|
IO#readpartial on zero bytes will always return an empty
string, so ensure the emulator for Revactor does that as
well.
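The MRI behavior being matched:

```ruby
# IO#readpartial with maxlen == 0 returns "" immediately: it never
# blocks and never raises EOFError, and the Revactor emulation must
# behave the same way.
r, w = IO.pipe
w.write("abc")
r.readpartial(0) # => "" — returns without touching the stream
r.readpartial(2) # => "ab"
```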
|
|
Even if it's just an empty file for now, it's critical in
case we ever add any code that returns user-visible strings
since Rack::Lint (and mere sanity) require binary encoding
for "rack.input".
|
|
Since deferred requests run in a separate thread, this affects
the root (non-deferred) thread as well since it may share
data with other threads.
|
|
Since we have conditional deferred execution in the regular
EventMachine concurrency model, we can drop this one.
This concurrency model never fully worked due to lack of
graceful shutdowns, and was never promoted nor supported, either.
|
|
Merb (and possibly other) frameworks that support conditionally
deferred app dispatch can now use it just like Ebb and Thin.
http://brainspl.at/articles/2008/04/18/deferred-requests-with-merb-ebb-and-thin
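The convention (as used by Ebb and Thin) is an optional #deferred?(env) method on the app object; a hedged sketch of the dispatch side (the dispatch helper and the example path check are illustrative, not the Rainbows! code):

```ruby
# If the app responds to #deferred? and returns true for this request,
# run #call in a background thread instead of on the event loop.
def dispatch(app, env)
  if app.respond_to?(:deferred?) && app.deferred?(env)
    Thread.new { app.call(env) }
  else
    app.call(env)
  end
end

# Illustrative app: defer only slow paths (the path choice here is
# an assumption, not part of the convention).
class MaybeDeferred
  def deferred?(env)
    env["PATH_INFO"].to_s.start_with?("/slow")
  end

  def call(env)
    [200, { "Content-Type" => "text/plain" }, ["ok"]]
  end
end
```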
|
|
* avoid needless links to /Rainbows.html
* keepalive_timeout has been 5 seconds by default for a while
* update "Gemcutter" references to "RubyGems.org"
|
|
Avoid mucking with Unicorn::TeeInput, since other apps may
depend on that class; instead, subclass it as Rainbows::TeeInput
and modify as necessary in worker processes.
For Revactor, remove the special-cased
Rainbows::Revactor::TeeInput class and instead emulate
readpartial for Revactor sockets.
|
|
Since Rainbows! is supported when exposed directly to the
Internet, administrators may want to limit the amount of data a
user may upload in a single request body to prevent a
denial-of-service via disk space exhaustion.
This amount may be specified in bytes, the default limit being
1024*1024 bytes (1 megabyte). To override this default, a user
may specify `client_max_body_size' in the Rainbows! block
of their server config file:
  Rainbows! do
    client_max_body_size 10 * 1024 * 1024
  end
Clients that exceed the limit will get a "413 Request Entity Too
Large" response and the connection will be closed.
For chunked requests, we have no choice but to interrupt during
the client upload since we have no prior knowledge of the
request body size.
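A hedged sketch of the two enforcement paths (helper names are illustrative, not the Rainbows::TeeInput internals): requests with a Content-Length can be rejected before any reading happens, while chunked requests must be counted as they stream and aborted once the limit is crossed.

```ruby
MAX_BODY = 1024 * 1024 # the one megabyte default

# Content-Length path: the size is declared up front, so reject early.
def check_content_length!(env)
  len = env["CONTENT_LENGTH"]
  if len && len.to_i > MAX_BODY
    raise "413 Request Entity Too Large"
  end
end

# Chunked path: no size is known in advance, so count bytes as chunks
# arrive and interrupt the upload the moment the limit is exceeded.
def count_chunk!(bytes_seen, chunk)
  bytes_seen += chunk.bytesize
  raise "413 Request Entity Too Large" if bytes_seen > MAX_BODY
  bytes_seen
end
```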
|