unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
From: Tony Arcieri <tony.arcieri@gmail.com>
To: unicorn list <mongrel-unicorn@rubyforge.org>
Subject: Fwd: Maintaining capacity during deploys
Date: Thu, 29 Nov 2012 15:05:44 -0800
Message-ID: <CAHOTMVLYDbcdU4Q_jssiXF0AeEnui07U=4gUS9=T=wvLU9db7w@mail.gmail.com> (raw)
In-Reply-To: <CAHOTMV++otgxdru_oZLXuVuqHF7F4uMwd04O0QZBjxeqFR-=XQ@mail.gmail.com>

We're using unicornctl restart with the default before/after hook
behavior, which is to reap the old Unicorn workers via SIGQUIT after
the new master has finished booting.

Unfortunately, while the new workers are forking and beginning to
process requests, we're still seeing significant spikes in our haproxy
request queue. It seems that after we restart, the unwarmed workers
get swamped by the incoming requests. As far as I can tell, the
momentary loss of capacity translates fairly quickly into a
thundering herd.

We've experimented with rolling restarts at the server level but these
do not resolve the problem.

I'm curious if we could do a more granular application-level rolling
restart, perhaps using TTOU instead of QUIT to progressively dial down
the old workers one-at-a-time, and forking new ones to replace them
incrementally. Anyone tried anything like that before?

Or are there any other suggestions? (short of "add more capacity")
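Roughly, the kind of before_fork hook I have in mind (untested sketch;
it assumes the stock USR2 re-exec behavior, where the old master's pid
file gets renamed to "<pid>.oldbin" once the new master is up):

```ruby
# unicorn.rb -- sketch of an incremental handoff during a USR2 restart.
# As each new worker finishes booting, TTOU the old master so it drops
# one old worker; once the last new worker is up, QUIT the old master.
before_fork do |server, worker|
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # old master already gone; nothing to do
    end
  end
end
```

The open question for me is the pacing: worker boot time alone may not
be enough warm-up, so maybe a sleep between steps is also needed.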

--
Tony Arcieri
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying


Thread overview: 18+ messages
     [not found] <CAHOTMV++otgxdru_oZLXuVuqHF7F4uMwd04O0QZBjxeqFR-=XQ@mail.gmail.com>
2012-11-29 23:05 ` Tony Arcieri [this message]
2012-11-29 23:09   ` Maintaining capacity during deploys Alex Sharp
2012-11-29 23:32   ` Fwd: " Eric Wong
2012-11-29 23:52     ` Tony Arcieri
2012-11-30 21:28     ` Tony Arcieri
2012-11-30 22:27       ` Eric Wong
2012-12-03 23:53         ` Tony Arcieri
2012-12-04  0:34           ` Eric Wong
2012-11-29 23:34   ` Lawrence Pit
2012-11-30  1:10     ` Tony Arcieri
2012-11-30  1:24       ` Eric Wong
2012-11-30  4:48         ` seth.cousins
2012-11-30  1:28     ` Devin Ben-Hur
2012-11-30  1:40       ` Tony Arcieri
2012-12-07 23:42 ` Tony Arcieri
2012-12-07 23:54   ` Eric Wong
2012-12-28 21:38   ` Dan Melnick
2012-12-28 22:07     ` Tony Arcieri

Code repositories for project(s) associated with this public inbox

	https://yhbt.net/unicorn.git/
