Date | Commit message |
|
Allow the --rainbows/-R command-line option to be passed to
activate Rainbows!!!1
|
|
This should be more robust, faster and easier to deal
with than the ugly proof-of-concept regexp-based ones.
|
|
With the 1.9.2preview1 release (and presumably 1.9.1 p243), the
Ruby core team has decided that bending over backwards to
support crippled operating/file systems was necessary and that
files must be closed before unlinking.
Regardless, this is more efficient than using Tempfile because:
1) no delegation is necessary, this is a real File object
2) no mkdir is necessary for locking, we can trust O_EXCL
to work properly without unnecessary FS activity
3) no finalizer is needed to unlink the file, we unlink
it as soon as possible after creation.
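A minimal sketch of the approach described above (the helper name `tmpio` is hypothetical, not unicorn's actual API): open a real File with O_EXCL so creation itself acts as the lock, then unlink the path immediately; the descriptor keeps the data alive.

```ruby
require 'tmpdir'

# Open a brand-new file; O_EXCL makes creation fail if the path
# already exists, so no mkdir-based locking is needed.
def tmpio(dir = Dir.tmpdir)
  fp = File.open("#{dir}/#{$$}.#{rand}",
                 File::RDWR | File::CREAT | File::EXCL, 0600)
  File.unlink(fp.path) # unlink right away; the open fd stays usable
  fp
end

io = tmpio
io.write("hello")
io.rewind
io.read # => "hello"
```

On crippled filesystems that refuse to keep unlinked-but-open files alive, this trick is exactly what breaks; hence the complaint above.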
|
|
* maint:
unicorn 0.8.2
always set FD_CLOEXEC on sockets post-accept()
Minor cleanups to core
Re-add support for non-portable socket options
Retry listen() on EADDRINUSE 5 times every 500ms
Unbind listeners before stopping workers
Conflicts:
CHANGELOG
lib/unicorn.rb
lib/unicorn/configurator.rb
lib/unicorn/const.rb
|
|
FD_CLOEXEC is not guaranteed to be inherited by the accept()-ed
descriptors even if the listener socket has this set. This can
be a problem with applications that fork+exec long running
background processes.
Thanks to Paul Sponagl for helping me find this.
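A sketch of the fix described above. Modern Rubies (2.0+) set close-on-exec on new descriptors by default, so this mainly mattered on the 1.8/1.9 Rubies of the time; the explicit fcntl is still harmless.

```ruby
require 'fcntl'
require 'socket'

srv = TCPServer.new('127.0.0.1', 0)              # ephemeral port
client = TCPSocket.new('127.0.0.1', srv.addr[1]) # so accept returns
io = srv.accept

# The listener carrying FD_CLOEXEC does not guarantee the
# accept()-ed socket carries it too, so set it explicitly on
# every accepted descriptor:
io.fcntl(Fcntl::F_SETFD, io.fcntl(Fcntl::F_GETFD) | Fcntl::FD_CLOEXEC)
```

Without this, a fork+exec'ed background process inherits the client socket and can keep the connection open long after the worker is done with it.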
|
|
(cherry picked from commit ec70433f84664af0dff1336845ddd51f50a714a3)
|
|
The number of retries and the delay are taken directly from nginx
(cherry picked from commit d247b5d95a3ad2de65cc909db21fdfbc6194b4c9)
|
|
This allows another process to take our listeners
sooner rather than later.
(cherry picked from commit 8c2040127770e40e344a927ddc187bf801073e33)
|
|
|
|
There's a small memory reduction to be had when forking
oodles of processes and the Perl hacker in me still
gets confused into thinking those are arrays...
|
|
Array#+ (as used by +=) creates a new array before assigning;
Array#concat just appends in place with no intermediate array.
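The difference is easy to see from object identity:

```ruby
a = [1, 2]
id_before = a.object_id
a += [3]                 # Array#+ builds a fresh Array, then reassigns
a.object_id == id_before # => false

b = [1, 2]
id_before = b.object_id
b.concat([3])            # appends in place, no intermediate Array
b.object_id == id_before # => true
```

When forking oodles of processes, the in-place version also keeps copy-on-write pages shared a little longer.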
|
|
This gives the app ability to deny clients with 417 instead of
blindly making the decision for the underlying application. Of
course, apps must be made aware of this.
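A hypothetical sketch of an app exercising this: the server passes the Expect header through, and the app itself decides whether to deny the client with 417.

```ruby
# Deny "Expect: 100-continue" clients with 417 Expectation Failed
# instead of the server deciding blindly for the application:
app = lambda do |env|
  if env["HTTP_EXPECT"].to_s =~ /\A100-continue\z/i
    [417, { "Content-Type" => "text/plain" }, ["Expectation Failed\n"]]
  else
    [200, { "Content-Type" => "text/plain" }, ["OK\n"]]
  end
end

app.call("HTTP_EXPECT" => "100-continue").first # => 417
```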
|
|
The number of retries and the delay are taken directly from nginx
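A sketch of that retry policy (5 tries, 500ms apart, per the message above); `bind_listen` is a hypothetical name, not unicorn's actual implementation.

```ruby
require 'socket'

# Retry the bind a few times before giving up, since the old
# process may still be letting go of the address.
def bind_listen(addr, port, tries = 5, delay = 0.5)
  begin
    TCPServer.new(addr, port)
  rescue Errno::EADDRINUSE => err
    raise err if (tries -= 1) <= 0
    warn "#{addr}:#{port} in use, retrying in #{delay}s"
    sleep delay
    retry
  end
end
```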
|
|
This allows another process to take our listeners
sooner rather than later.
|
|
Support for the "Trailer:" header and associated Trailer
lines should be reasonably well supported now
|
|
The complexity of making the object persistent isn't worth the
potential performance gain here.
|
|
Trying not to repeat ourselves. Unfortunately, Ruby 1.9 forces
us to actually care about encodings of arbitrary byte sequences.
|
|
This adds support for handling POST/PUT request bodies sent with
chunked transfer encodings ("Transfer-Encoding: chunked").
Attention has been paid to ensure that a client cannot OOM us by
sending an extremely large chunk.
This implementation is pure Ruby as the Ragel-based
implementation in rfuzz didn't offer a streaming interface. It
should be reasonably close to RFC-compliant but please test it
in an attempt to break it.
The more interesting part is the ability to stream data to the
hosted Rack application as it is being transferred to the
server. This works regardless of whether the input is chunked;
enabling the streaming of POST/PUT bodies allows the
hosted Rack application to process input as it receives it. See
examples/echo.ru for an example echo server over HTTP.
Enabling streaming also allows Rack applications to support
upload progress monitoring previously supported by Mongrel
handlers.
Since Rack specifies that the input needs to be rewindable, this
input is written to a temporary file (a la tee(1)) as it is
streamed to the application the first time. Subsequent rewound
reads will read from the temporary file instead of the socket.
Streaming input to the application is disabled by default since
applications may not necessarily read the entire input body
before returning. Since this is a completely new feature we've
never seen in any Ruby HTTP application server before, we're
taking the safe route by leaving it disabled by default.
Enabling this can only be done globally by changing the
Unicorn::HttpRequest::DEFAULTS hash:
Unicorn::HttpRequest::DEFAULTS["unicorn.stream_input"] = true
Similarly, a Rack application can check if streaming input
is enabled by checking the value of the "unicorn.stream_input"
key in the environment hash passed to it.
All of this code has only been lightly tested and test coverage
is lacking at the moment.
[1] - http://tools.ietf.org/html/rfc2616#section-3.6.1
|
|
Bah, it's so much busy work to deal with this as configuration
option. Maybe I should say we allow any logger the user wants,
as long as it's $stderr :P
|
|
Make us look even better in "Hello World" benchmarks!
Passing a third parameter to avoid the constant lookup
for the HttpRequest object doesn't seem to have a
measurable effect.
|
|
This should be faster/cheaper than using an instance variable
since it's accessed in a critical code path. Unicorn was never
designed to be reentrant or thread-safe at all, either.
|
|
This makes SIGHUP handling more consistent across different
configurations, and allows toggling preload_app to take effect
when SIGHUP is issued.
|
|
There is a potential race condition in closing the tempfile
immediately after SIGKILL-ing the child. So instead just
close it when we reap the dead child.
Some versions of Ruby may also not like having the
WORKERS hash changed while iterating through
each_pair, so dup the hash to be on the safe side (should
be cheap, since it's not a deep copy) since kill_worker()
can modify the WORKERS hash.
This is somewhat of a shotgun fix as I can't reproduce the
problem consistently, but oh well.
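The dup-before-iterate pattern described above, with illustrative names: iterate a shallow copy so the real hash can be mutated mid-loop.

```ruby
workers = { 0 => :worker_a, 1 => :worker_b }

# Iterate over a shallow copy; deleting from the original while
# the loop runs is now safe.
workers.dup.each_pair do |nr, _worker|
  workers.delete(nr)
end

workers # => {}
```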
|
|
This should prevent Rack from being required too early,
so "-I" passed on the unicorn command-line
can modify $LOAD_PATH before Rack is loaded.
|
|
No point in refreshing the list of gems unless the app
can actually be reloaded.
|
|
On application reloads, Gems installed by the Ruby instance may
be different than when Unicorn started. So when preload_app is
false, HUP-ing Unicorn will always refresh the list of Gems
before loading the application code.
|
|
There may be other logs opened if preload_app is true
besides stderr/stdout paths. So err on the safe side
and reopen everything.
|
|
This used to happen on machines that were coming out of
suspend/hibernation.
|
|
This potentially leaves an open file handle around until the
next request hits the process, but this makes the common case
faster.
|
|
|
|
First, reduce no-op fchmod syscalls under heavy traffic.
gettimeofday(2) is a cheaper syscall than fchmod(2). Since
ctime resolution is only in seconds on most filesystems (and
Ruby can only get to seconds AFAIK), we can avoid fchmod(2)
happening within the same second. This allows us to cheat on
synthetic benchmarks where performance is measured in
requests-per-second and not seconds-per-request :)
Secondly, cleanup the acceptor loop and avoid nested
begins/loops as much as possible. If we got ECONNABORTED, then
there's no way the client variable would've been set correctly,
either. If there was something there, then it is at the mercy
of the garbage collector because a method can't both return a
value and raise an exception.
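A sketch of the once-per-second heartbeat described above (variable names are hypothetical): Time.now is a cheap gettimeofday(2), and since ctime resolution is one second, redundant fchmod(2) calls within the same second can be skipped.

```ruby
require 'tmpdir'

alive = File.open("#{Dir.tmpdir}/heartbeat.#{$$}", "w")
File.unlink(alive.path) # keep only the descriptor around
last_beat = 0

heartbeat = lambda do
  now = Time.now.to_i       # cheaper syscall than fchmod(2)
  if now > last_beat        # at most one fchmod per second
    alive.chmod(now & 1)    # alternate the mode so ctime changes
    last_beat = now
  end
end

heartbeat.call
```

The master then watches the tempfile's ctime to decide whether the worker is still alive.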
|
|
Use SIGQUIT if you're going to be nice and do graceful
shutdowns. Sometimes people run real applications on this
server and SIGINT/SIGTERM get lost/trapped when Object is
rescued and that is not good. Also make sure we break out of
the loop properly when the master is dead.
Testcases added for both SIGINT and dead master handling.
|
|
If our response succeeds, we've already closed the socket.
Otherwise, we would've raised an exception at some point and hit one
of the rescue clauses.
|
|
This makes it easier to use "killall -$SIGNAL unicorn"
without having to lookup the correct PID.
|
|
Timeouts of less than 2 seconds are unsafe due to the lack of
subsecond resolution in most POSIX filesystems. This is the
trade-off for using a low-complexity solution for timeouts.
Since this type of timeout is a last resort, 2 seconds is not
entirely unreasonable IMNSHO. Additionally, timing out too
aggressively can put us in a fork loop and slow down the system.
Of course, the default is 60 seconds and most people do not
bother to change it.
|
|
Since we've switched to readpartial, we'll already be protected
from any unpleasant errors that might get thrown at us. There's
no easy way to prevent MRI from calling a select() internally to
check for readiness, so speculative+blocking read() calls are
out already.
Additionally, most requests come in the form of GETs which are
fully-buffered in the kernel before we even accept() the socket;
so a single readpartial call will be enough to fully consume it.
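A small demonstration of the point above: a typical GET is already buffered in the kernel by the time we accept(), so one readpartial consumes the whole request, and readpartial raises (e.g. EOFError) on failure instead of returning nil.

```ruby
require 'socket'

srv = TCPServer.new('127.0.0.1', 0)
client = TCPSocket.new('127.0.0.1', srv.addr[1])
client.write("GET / HTTP/1.0\r\n\r\n")
io = srv.accept

# One readpartial is normally enough to grab the whole request;
# it blocks until data is available rather than returning nil.
req = io.readpartial(16 * 1024)
req.start_with?("GET") # => true
```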
|
|
Only do speculative accept on the previous ready set of
listeners. This makes it less CPU-intensive to have per-process
debug listeners configured.
Unfortunately, this makes non-primary listeners unable to accept
connections if the server is under extremely heavy load and
speculative accept() on the previous listener is always
succeeding and hogging the process. Fortunately, this is an
uncommon case.
|
|
No need to use ensure since process_client will handle errors
regardless. And if not, there's a bug on our side that needs to
be fixed.
|
|
We were closing a no-longer-existent I/O object to break out of
IO.select. This was broken in 0.6.0 but did not affect the
worker when it was busy.
|
|
We do this in both the worker and master processes, so avoid
repeating ourselves.
|
|
This allows dynamic tuning of the worker_processes count without
having to restart existing ones. This also allows
worker_processes to be set to a low initial amount in the config
file for low-traffic deployments/upgrades and then scaled up as
the old processes are killed off.
Remove the proposed reexec_worker_processes from TODO since this
is far more flexible and powerful.
This will allow not-yet-existent third-party monitoring tools to
dynamically change and scale worker processes according to site
load without increasing the complexity of Unicorn itself.
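An in-process sketch of the signal-driven scaling described above: unicorn's master traps SIGTTIN to add a worker and SIGTTOU to remove one; here the handlers just adjust a counter so the mechanism is visible.

```ruby
worker_processes = 2
trap(:TTIN) { worker_processes += 1 }
trap(:TTOU) { worker_processes -= 1 if worker_processes > 0 }

# e.g. what a monitoring tool would do to the master's pid:
Process.kill(:TTIN, Process.pid)
sleep 0.1 # let the handler run
worker_processes # => 3
```

Against a real master this is simply `kill -TTIN $(cat unicorn.pid)` from the shell.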
|
|
Seems like a good idea to be able to relocate log
files on a config reload.
|
|
Saying to the world that I may have OCD...
|
|
As long as our speculative accept()s are succeeding, then avoid
checking for master process death and keep processing requests.
This allows us to save some syscalls under extremely heavy
traffic spikes.
|
|
Oops, this was broken in another yak-shaving commit:
9206bb5e54a0837e394e8b1c1a96e27ebaf44e77
|
|
Avoid scaring the thread-safety-first crowd (as much :)
|
|
Since it has to work inside signal handlers, there's
no point in making it a per-object instance variable
given the price of an instance variable in MRI...
|
|
Instance variables are expensive and we'd be encouraging
something like a thread-safe mentality for using ivars
when dealing with things that are global to the entire
process.
|
|
Since file descriptors are only private to a process,
do not treat them as Object-specific.
|
|
|