|
|
|
|
|
Eventually we hope to be able to accept arguments like
the way Rack handlers do it:
use :Foo, :bool1, :bool2, :option => value
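A hedged sketch of how such a directive might parse its arguments, in the Rack-handler style described above. `MiniRainbows`, its `MODELS` registry, and the option names are illustrative stand-ins, not the real Rainbows! API: bare symbols become boolean flags, and a trailing hash becomes keyword options.

```ruby
# Illustrative sketch only: a "use" directive that accepts a model
# name, bare symbols as boolean flags, and a trailing options hash,
# in the style of Rack handlers. MiniRainbows and MODELS are stand-ins.
module MiniRainbows
  MODELS = [:ThreadSpawn, :ThreadPool, :Revactor] # stand-in registry

  def self.use(model, *args)
    opts = args.last.is_a?(Hash) ? args.pop : {} # trailing option hash
    raise ArgumentError, "unknown model #{model}" unless MODELS.include?(model)
    { :model => model, :flags => args, :options => opts }
  end
end
```

Under this scheme, `MiniRainbows.use(:ThreadSpawn, :bool1, :timeout => 2)` would yield the model name, the flag list, and the option hash separately.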
|
|
It's a tad faster for non-keepalive connections and should do
better on large SMP machines with many workers AND threads.
That means the ActorSpawn model in Rubinius is nothing more than
ThreadSpawn underneath (for now).
|
|
I so wish it used Fibers/green-threads underneath instead.
|
|
Some people fork processes, so this avoids leaving a connection
hanging open because of that...
|
|
|
|
Broken in 145185b76dafebe5574e6a3eefd3276555c72016
|
|
Rubinius Actor specs seem a bit lacking at the moment.
If we find time, we'll fix them, otherwise we'll let
somebody else do it.
|
|
It seems to basically work; this is based heavily on the
Revactor one...
|
|
Not sure what drugs the person that wrote it was on at the
time.
|
|
It can noticeably improve performance if available.
ref: http://rubyforge.org/pipermail/rev-talk/2009-November/000116.html
|
|
|
|
Patches submitted to rev-talk, awaiting feedback and
hopefully a new release.
|
|
While we're at it, ensure our encoding is sane
|
|
While Revactor uses Fiber::Queue in AppPool, we don't want/need
to expose the rest of our Fiber stuff to it since it can lead to
lost Fibers if misused. This includes the Rainbows::Fiber.sleep
method which only works inside Fiber{Spawn,Pool} models and
the Rainbows::Fiber::IO wrapper class.
|
|
Make sure app errors get logged correctly, and we no longer
return a 500 response when a client EOFs the write end (but not
the read end) of a connection.
|
|
Both FiberSpawn and FiberPool share similar main loops, the
only difference being the handling of connection acceptance.
So move the scheduler into its own function for consistency.
We'll also correctly implement keepalive timeout so clients
get disconnected at the right time.
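A minimal sketch of the shape this takes, assuming names of my own invention (`TinyScheduler`, `spawn`, `run_once`) rather than Rainbows! internals: connection fibers share one loop, and a deadline map lets that loop enforce the keepalive timeout.

```ruby
require 'fiber'

# Minimal sketch, not the Rainbows! scheduler: each connection runs
# in a Fiber, and one shared loop both resumes runnable fibers and
# expires clients whose keepalive deadline has passed.
class TinyScheduler
  def initialize
    @fibers = []    # runnable connection fibers
    @deadlines = {} # client id => Time the keepalive expires
  end

  def spawn(client_id, keepalive_timeout, &blk)
    @deadlines[client_id] = Time.now + keepalive_timeout
    @fibers << Fiber.new(&blk)
  end

  # One pass of the shared main loop: resume live fibers, drop dead
  # ones, then report which idle clients should be disconnected.
  def run_once(now = Time.now)
    @fibers.each { |f| f.resume if f.alive? }
    @fibers.delete_if { |f| !f.alive? }
    expired = @deadlines.select { |_, t| now >= t }.keys
    expired.each { |id| @deadlines.delete(id) }
    expired # ids whose keepalive timed out
  end
end
```

The point of the shared function is that FiberSpawn and FiberPool only differ in how `spawn` hands out fibers; the expiry pass is identical.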
|
|
This enables the safe use of Rainbows::AppPool with all
concurrency models, not just threaded ones. AppPool is now
effective with *all* Fiber-based concurrency models including
Revactor (and of course the new Fiber{Pool,Spawn} ones).
|
|
It works exactly like Actor.sleep and similar to Kernel.sleep
(no way to sleep indefinitely), but is compatible with the
IO.select-based Fiber scheduler we run. This method only works
within the context of a Rainbows! application dispatch.
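The underlying idea can be sketched as follows, with hypothetical names (`FiberSleep`, `WAKEUPS`, `run_ready`) standing in for the real implementation: the sleeping fiber records its wakeup time and yields, and the select-style loop resumes it once the deadline passes.

```ruby
require 'fiber'

# Hedged sketch of a Fiber-compatible sleep, not the real
# Rainbows::Fiber.sleep: the fiber registers a wakeup time and
# yields; the scheduler loop resumes it after the deadline.
module FiberSleep
  WAKEUPS = {} # Fiber => Time it should be resumed

  def self.sleep(seconds) # like Actor.sleep: no indefinite sleep
    WAKEUPS[Fiber.current] = Time.now + seconds
    Fiber.yield
  end

  # Called by the scheduler loop, e.g. after each IO.select timeout:
  # resume every fiber whose wakeup time has arrived.
  def self.run_ready(now = Time.now)
    ready = WAKEUPS.select { |_, at| now >= at }.keys
    ready.each { |fib| WAKEUPS.delete(fib); fib.resume }
  end
end
```

Because the wakeup is driven by the scheduler's own loop, this composes with IO.select timeouts instead of blocking the whole worker the way Kernel.sleep would.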
|
|
|
|
This is another Fiber-based concurrency model that can exploit
a streaming "rack.input" for clients. Spawning Fibers seems
pretty fast, but maybe there are apps that will benefit from
this.
|
|
This one seems easy to get working and supports everything we
need to support from the server perspective. Apps will need
modified drivers, but it doesn't seem too hard to add
more/better support for wrapping IO objects with Fiber::IO.
|
|
Due to the addition of keepalive_timeouts, it's safer to
pay a performance penalty and use a hash here instead.
|
|
|
|
Exposing a synchronous interface is too complicated for too
little gain. Given the following factors:
* basic ThreadSpawn performs admirably under REE 1.8
* both ThreadSpawn and Revactor work well under 1.9
* few applications/requests actually need a streaming "rack.input"
We've decided it's not worth the effort to attempt to support
streaming rack.input at the moment. Instead, the new
RevThreadSpawn model performs much better for most applications
under Ruby 1.9.
|
|
No point in rewinding the NULL_IO, especially when most requests
use it instead of bodies that actually contain something.
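The check amounts to something like the sketch below, where `NULL_IO` and `reset_input` are illustrative names: bodiless requests share one empty input object, so the rewind is skipped for it.

```ruby
require 'stringio'

# Sketch of the optimization above (names illustrative): requests
# without bodies all share one empty input, so rewinding it is
# pointless work that can be skipped with an identity check.
NULL_IO = StringIO.new("")

def reset_input(input)
  input.rewind unless input.equal?(NULL_IO) # NULL_IO has nothing to rewind
  input
end
```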
|
|
And change the default to 2 seconds; most clients can
render the page and load all URLs within 2 seconds.
|
|
Fortunately it's easy here.
|
|
This is a bit trickier than the rest since we have to ensure
deferred (proxied) responses aren't nuked.
|
|
If the Revactor implementation using lightweight Actors/Fibers
needs it, then thread implementations do, too.
|
|
We'll be getting a keepalive_timeout setting soon;
clients with 300-second idle keepalives are ridiculous,
and Ruby objects are still not that cheap in 1.9.
|
|
Client shutdowns/errors when streaming "rack.input" into the
Rack application are quieter now. Rev and EventMachine workers
now shutdown correctly when the master dies. Worker processes
now fail gracefully if log reopening fails. ThreadSpawn and
ThreadPool models now load Unicorn classes in a thread-safe way.
There's also an experimental RevThreadSpawn concurrency
model which may be heavily reworked in the future...
Eric Wong (30):
Threaded models have trouble with late loading under 1.9
cleanup worker heartbeat and master deathwatch
tests: allow use of alternative sha1 implementations
rev/event_machine: simplify keepalive checking a bit
tests: sha1.ru now handles empty bodies
rev: split out further into separate files for reuse
rev: DeferredResponse is independent of parser state
remove unnecessary class variable
ev_core: cleanup handling of APP constant
rev: DeferredResponse: always attach to main loop
initial cut of the RevThreadSpawn model
rev_thread_spawn/revactor: fix TeeInput for short reads
rev_thread_spawn: make 1.9 TeeInput performance tolerable
tests: add executable permissions to t0102
tests: extra check to avoid race in reopen logs test
rev_thread_spawn: 16K chunked reads work better
tests: ensure proper accounting of worker_connections
tests: heartbeat-timeout: simplify and avoid possible race
tests: ensure we process "START" from FIFO when starting
http_response: don't "rescue nil" for body.close
cleanup error handling pieces
tests: more stringent tests for error handling
revactor/tee_input: unnecessary error handling
gracefully exit workers if reopening logs fails
revactor/tee_input: raise ClientDisconnect on EOFError
bump versions since we depend on Unicorn::ClientShutdown
revactor/tee_input: share error handling with superclass
RevThreadSpawn is still experimental
Revert "Threaded models have trouble with late loading under 1.9"
Rakefile: add raa_update task
|
|
This reverts commit e1dcadef6ca242e36e99aab19e3e040bf01070f9.
This is fixed separately in Unicorn 0.95.0 (commit
560216c2fecfc5cf3489f749dc7a0221fd78eb26)
|
|
|
|
Less stuff to maintain is good.
|
|
|
|
Based on unicorn.git commit e4256da292f9626d7dfca60e08f65651a0a9139a
raise Unicorn::ClientShutdown if client aborts in TeeInput
Leaving the EOFError exception as-is is bad because most
applications/frameworks run an application-wide exception
handler to pretty-print and/or log the exception with a huge
backtrace.
Since there's absolutely nothing we can do in the server-side
app to deal with clients prematurely shutting down, having a
backtrace does not make sense. Having a backtrace can even be
harmful since it creates unnecessary noise for application
engineers monitoring or tracking down real bugs.
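The pattern reduces to something like this sketch, where `ClientShutdown` and `read_chunk` stand in for the real Unicorn::ClientShutdown and TeeInput code: the low-level EOFError from an aborting client is translated into a distinct exception class the server can handle quietly.

```ruby
require 'stringio'

# Sketch only: translate EOFError from a disconnecting client into
# a distinct exception (stand-in for Unicorn::ClientShutdown) that
# the server rescues quietly, sparing apps a useless backtrace.
class ClientShutdown < EOFError; end

def read_chunk(io)
  io.readpartial(16_384)
rescue EOFError
  raise ClientShutdown, "client aborted the request body"
end
```

Since ClientShutdown subclasses EOFError, any existing `rescue EOFError` in middleware still catches it, while the server itself can match the narrower class and skip logging.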
|
|
Permissions for the logs could've been badly set by the master.
So we'll let the master reopen them and refork children to
get around this problem. We have to be more careful when
reopening logs because we can reopen them in the middle of
client requests (we have to) whereas Unicorn has the luxury
of _knowing_ it has no active clients when it does the reopen.
|
|
We're doomed if the client socket EOFs on us while we're reading
it. So don't hide it and let the exception bubble all the way
up the stack.
|
|
Unicorn 0.94.0 got a more generic handle_error function
that's useful in the Thread* models. The Revactor one
is a little different, but similar enough to be worth refactoring
to match our standard pieces.
|
|
This can hide bugs in Rack applications/middleware. Most other
Rack handlers/servers seem to follow this route as well, so
this helps ensure broken things will break loudly and more
consistently across all Rack-enabled servers.
|
|
When reading 4K chunks, performance is dismal under 1.8
|
|
Somehow 1.8 performance blows with shorter reads in the Rack
application. This may be because the Rev framework uses
a default 16K IO size and our test applications may request
less.
|
|
Explicitly requested short reads may cause too much data to be
returned, which would be bad and potentially break the
application. We need to ensure proper IO#readpartial-like
semantics in both of these models.
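The required semantics can be illustrated with a toy buffer (the `bounded_read` name and the bare-String buffer are mine, not the TeeInput code): an explicit short read must return at most the requested number of bytes, never the whole buffered chunk.

```ruby
# Illustrative sketch of readpartial-like semantics over an internal
# buffer, not the actual TeeInput implementation: an explicit short
# read returns at most +length+ bytes, never the full buffered chunk.
def bounded_read(buffer, length = nil)
  return buffer.slice!(0, buffer.size) if length.nil? # drain everything
  buffer.slice!(0, length) # at most length bytes, like IO#readpartial
end
```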
|
|
Seems to pass all tests, but that may only mean our
test cases are lacking...
|
|
It's too complicated to deal with multiple Rev loops
so only use the main one for now under 1.9.
|
|
It'll make development of future ev_core-derived things
easier, hopefully.
|
|
It's already global...
|
|
In the upcoming RevThread* models, the parser may be parsing
other requests already by the time DeferredResponse is called.
|