Date | Commit message |
|
We cannot trigger on_read events, invoke the HTTP parser, or
modify @env while we're waiting for an application to run
async.callback. We also need to clear (and *maybe* re-set)
@deferred if we're writing from async.callback.
|
|
No point in having too many modules to search around
(for both hackers and the runtime).
|
|
We want to put all constants in one place.
|
|
Rack::Utils::HeaderHash is still expensive, so avoid
forcing it on users since we can assume app/library
authors use normally-cased HTTP headers.
|
|
Hash#[] is slightly slower on the miss case due to calling
Hash#default (but faster for the hit case, probably because it
is inlined in 1.9).
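That fallback can be observed directly; a minimal sketch (the subclass here is hypothetical, purely to count calls) showing that Hash#[] only consults #default on a miss:

```ruby
# Hypothetical subclass to observe Hash#[] calling #default on a miss.
class CountingHash < Hash
  attr_reader :default_calls

  def initialize
    super
    @default_calls = 0
  end

  # Hash#[] invokes #default(key) only when the key is absent.
  def default(key)
    @default_calls += 1
    nil
  end
end

h = CountingHash.new
h["hit"] = 1
h["hit"]        # present: #default is never consulted
h["miss"]       # absent: Hash#[] falls back to #default
h.default_calls # => 1
```

This is why a plain Hash#[] hit is cheap while a miss pays for the extra #default dispatch.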
|
|
Reading headers is common and we don't want to create new String
objects (even if they're tiny or copy-on-write) for the GC to
munch on.
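One way this plays out in Ruby (the constant name below is illustrative): a pre-frozen String constant is shared by every reader, while a bare literal hands the GC a fresh object on each evaluation:

```ruby
# Illustrative constant; freezing lets every caller share one object.
CONTENT_TYPE = "Content-Type".freeze

def shared_header_name
  CONTENT_TYPE # the same frozen String every call, nothing new for the GC
end

def fresh_header_name
  "Content-Type".dup # a brand-new String object per call
end

shared_header_name.equal?(shared_header_name) # => true  (one object)
fresh_header_name.equal?(fresh_header_name)   # => false (two objects)
```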
|
|
async.callback will be useful with Coolio (and more!) soon, so
ensure it works as well as the rest of Rainbows!
|
|
This will allow Coolio to use it, too.
|
|
Code organization is hard :<
|
|
Rack::Utils::HeaderHash is still very expensive in Rack 1.2,
especially for simple things that we want to run as fast as
possible with minimal interference. HeaderHash is unnecessary
for most requests that do not send Content-Range in responses.
|
|
This lets us simplify repetitive checks and worry less about
properly maintaining/closing client connections for each
concurrency model we support.
|
|
This is only needed for concurrency options that
do not use TeeInput, since TeeInput automatically
handles this for us.
|
|
Large uploads behave differently with regard to buffering,
and there were bugs in the way the Rev and Revactor backends
handled uploads.
|
|
All synchronous models have this fixed in unicorn 3.0.1,
so only Rev and EventMachine-based concurrency models
require code changes.
|
|
Hopefully it makes more sense now and is easier to
digest for new hackers.
|
|
We may have other uses for this in the future...
|
|
These allow for small reductions in the number of variables
we have to manage; more changes are coming with later Unicorns.
|
|
This simplifies and disambiguates most constant resolution
issues as well as lowering our indentation level. Hopefully
this makes code easier to understand.
|
|
We get basic internal API changes from Unicorn;
code simplifications are coming next.
|
|
It removes the burden of byte slicing and setting file
descriptor flags. In some cases, we can remove unnecessary
peeraddr calls, too.
|
|
This makes it easier to write proxies for slow clients that
benefit from keep-alive. We also need to be careful about
non-HTTP/1.1 connections that can't do keepalive, now.
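That care can be sketched roughly like this (method and hash keys follow Rack conventions; this is not the actual Rainbows! code): HTTP/1.1 connections are persistent unless the client says otherwise, while HTTP/1.0 clients must opt in explicitly:

```ruby
# Sketch: may the connection be kept alive after this request?
# HTTP/1.1 is persistent unless the client sends "Connection: close";
# HTTP/1.0 is persistent only with an explicit "Connection: keep-alive".
def keepalive?(env)
  connection = env["HTTP_CONNECTION"].to_s.downcase
  case env["HTTP_VERSION"]
  when "HTTP/1.1" then connection != "close"
  when "HTTP/1.0" then connection == "keep-alive"
  else false # HTTP/0.9 and unknown versions cannot do keep-alive
  end
end

keepalive?("HTTP_VERSION" => "HTTP/1.1") # => true
keepalive?("HTTP_VERSION" => "HTTP/1.0") # => false
```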
|
|
If a response proxying a pipe (or socket) includes a
Content-Length, do not attempt to outsmart the application
and just use the given Content-Length.
This helps avoid exposing applications to weird internals such
as env["rainbows.autochunk"] and X-Rainbows-* response headers.
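In other words, the decision reduces to a simple header check (the helper name here is hypothetical):

```ruby
# Sketch: when proxying a pipe or socket body, trust an app-supplied
# Content-Length and send the body verbatim; only fall back to
# Transfer-Encoding: chunked when the length is unknown.
def transfer_mode(response_headers)
  response_headers.key?("Content-Length") ? :identity : :chunked
end

transfer_mode("Content-Length" => "1234")     # => :identity
transfer_mode("Content-Type" => "text/plain") # => :chunked
```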
|
|
Since we suck at building websites, we just rely on RDoc as a
website builder. And since Rainbows! is an application server
(and not a programming library), our internal API should be of
little interest to end users.
Anybody interested in Rainbows! (or any other project) internals
should be reading the source.
|
|
|
|
Since Rainbows! is supported when exposed directly to the
Internet, administrators may want to limit the amount of data a
user may upload in a single request body to prevent a
denial-of-service via disk space exhaustion.
This amount may be specified in bytes, the default limit being
1024*1024 bytes (1 megabyte). To override this default, a user
may specify `client_max_body_size' in the Rainbows! block
of their server config file:
  Rainbows! do
    client_max_body_size 10 * 1024 * 1024
  end
Clients that exceed the limit will get a "413 Request Entity Too
Large" response and the connection will be closed.
For chunked requests, we have no choice but to interrupt during
the client upload since we have no prior knowledge of the
request body size.
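The streaming check can be sketched like this (names and the exception class are illustrative, not the actual Rainbows! internals):

```ruby
# Sketch: enforce client_max_body_size while reading a chunked body,
# where the total size is unknown until the terminating chunk arrives.
class RequestTooLarge < StandardError; end

MAX_BODY = 1024 * 1024 # the documented 1 megabyte default

def account_chunk(total, chunk, limit = MAX_BODY)
  total += chunk.bytesize
  # 413 Request Entity Too Large: abort mid-upload, close connection
  raise RequestTooLarge if total > limit
  total
end

total = 0
total = account_chunk(total, "a" * 512 * 1024)
total = account_chunk(total, "a" * 512 * 1024) # exactly at the limit: ok
# one more byte would raise RequestTooLarge
```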
|
|
Every concurrency model does this the same way.
This removes the Rainbows::Const::LOCALHOST constant and
may break some existing apps that rely on it.
|
|
Just create an empty string instead and let Unicorn::HttpParser
allocate it internally to whatever size is needed.
|
|
Some async apps rely on more than just "async.callback" and
make full use of Deferrables provided by the EM::Deferrable
module. Thanks to James Tucker for bringing this to our
attention.
|
|
It gets in the way of Rev/EM-based models that won't use EvCore.
It doesn't actually do anything useful except add an extra
layer of indirection to follow.
|
|
Just let the GC deal with it
|
|
Rev/Packet-based models may support it in the future
|
|
Make sure app errors get logged correctly, and we no longer
return a 500 response when a client EOFs the write end (but not
the read end) of a connection.
|
|
No point in rewinding NULL_IO, especially when most requests
use it instead of bodies that actually contain something.
|
|
It'll make development of future ev_core-derived things
easier, hopefully.
|
|
It turns out neither the EventMachine nor the Rev classes
checked for master death in their heartbeat mechanisms.
Since we managed to forget the same thing twice, we
now have a test case for it and have centralized the
code to remove duplication.
|
|
We're simply too uncomfortable with the weird GC issues
associated with Tempfile and having linked temporary files at
all. Instead just depend on the #size-aware TmpIO class that
Unicorn 0.94.0 provides for us.
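The idea behind such a size-aware temporary can be sketched with the stdlib Tempfile (illustrative only; Unicorn's actual TmpIO implementation differs): unlink the file immediately so no pathname lingers on disk, and let #size report how much was buffered:

```ruby
require "tempfile"

# Sketch of an unlinked, size-aware temporary file (not Unicorn's TmpIO).
def tmpio
  fp = Tempfile.new("rainbows-sketch")
  fp.unlink      # data now exists only behind the open descriptor
  fp.binmode
  fp.sync = true # write straight through, no userspace buffer to flush
  fp
end

io = tmpio
io.write("hello world")
io.size # => 11, no manual byte counting needed
```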
|
|
Just in case something goes wrong with the write
or the logger, make sure we've triggered a quit.
|
|
Since we're geared towards slower clients, we may be able to
make gains from using userspace IO buffering. This allows us to
avoid metadef-ing a #size method for every File we allocate
and save memory.
|
|
We don't use it in EventMachine since EM has its own
built-in ways to handle deferred bodies.
|
|
Graceful quit means we finish sending everything we have before
exiting. Additionally, only signal quits after we've queued
the error response up.
|
|
EventMachine and Rev models seem to be able to share a lot of
common code, so let's share. We may support Packet in the
future, too, and end up with a similar programming model there
as well.
|