From: Michael Fischer <mfischer@zendesk.com>
To: Sarkis Varozian <svarozian@gmail.com>
Cc: unicorn-public <unicorn-public@bogomips.org>
Subject: Re: Request Queueing after deploy + USR2 restart
Date: Wed, 4 Mar 2015 12:17:07 -0800
Message-ID: <CABHxtY79JHQ-Ba4tFNY5ZUDRzG-Y5NvCowXK6XCsi6rH_eScZQ@mail.gmail.com>
In-Reply-To: <CAGchx-JsOJD2m1ubTJEOeuK2EgP+NuyT=rD+--LPg6+vPnit8Q@mail.gmail.com>

I'm not exactly sure how preload_app works, but I suspect your app is
lazy-loading a number of Ruby libraries that weren't loaded during the
preload process while it handles its first few requests.
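
If that's what's happening, one blunt workaround is to have each worker
exercise the app once before it starts taking real traffic.  An untested
sketch - the '/healthcheck' path is a placeholder for any cheap endpoint
in your app, and I'm assuming the server object passed to after_fork
exposes the Rack app:

after_fork do |server, worker|
  require 'rack/mock'
  begin
    # One throwaway request per worker, so lazily-required libraries get
    # loaded before the first real request hits this process.
    Rack::MockRequest.new(server.app).get('/healthcheck')
  rescue => e
    $stderr.puts "warm-up request failed: #{e.message}"
  end
end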

Eric, your thoughts?

--Michael

On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> Yes, preload_app is set to true; I have not made any changes to the
> unicorn.rb from the OP: http://goo.gl/qZ5NLn
>
> Hmmmm, you may be onto something - here are the I/O metrics from the
> server with the highest response times: http://goo.gl/0HyUYt (in this
> graph: http://goo.gl/x7KcKq)
>
> Looks like it may be I/O-related, as you suspect - is there much I can
> do to alleviate that?
>
> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> What does your I/O latency look like during this interval?  (Run
>> iostat -xk 10 and look at the %util column.)  I'm willing to bet the
>> request queueing is strongly correlated with I/O load.
>>
>> Also is preload_app set to true?  This should help.
>>
>> --Michael
>>
>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Michael,
>>>
>>> Thanks for this - I have since changed the way we restart the unicorn
>>> servers after a deploy by changing the Capistrano task to use:
>>>
>>> in: :sequence, wait: 30
>>>
>>> We have 4 backends, and the above restarts them sequentially, waiting
>>> 30s between each (which I think should be more than enough time).
>>> However, I still see the following latency spikes after a deploy:
>>> http://goo.gl/tYnLUJ
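>>>
>>> For reference, the whole task now looks roughly like this (the role
>>> name and pid file path are illustrative, not pasted from our actual
>>> deploy scripts):
>>>
>>> namespace :unicorn do
>>>   desc 'Restart unicorn masters one backend at a time'
>>>   task :restart do
>>>     on roles(:app), in: :sequence, wait: 30 do
>>>       # USR2 makes the running master re-exec itself; the old workers
>>>       # keep serving until the new master's before_fork winds them down.
>>>       execute :kill, '-USR2', "$(cat #{shared_path}/tmp/pids/unicorn.pid)"
>>>     end
>>>   end
>>> end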
>>>
>>> This is what the individual servers look like for the same time
>>> interval: http://goo.gl/x7KcKq
>>>
>>>
>>>
>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>> wrote:
>>>
>>>> If the response times are falling a minute or so after the reload, I'd
>>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>>> reloads across backends to minimize the impact.
>>>>
>>>> --Michael
>>>>
>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>> wrote:
>>>>
>>>>> We have a Rails application with the following unicorn.rb:
>>>>> http://goo.gl/qZ5NLn
>>>>>
>>>>> When we deploy the application, a USR2 signal is sent to the unicorn
>>>>> master, which spins up a new master; we use the before_fork hook in
>>>>> the unicorn.rb config above to send signals to the old master as the
>>>>> new workers come online.
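>>>>>
>>>>> (For anyone reading along without clicking through: the before_fork
>>>>> hook is essentially the stock example from the unicorn docs - sketched
>>>>> from memory here rather than pasted from our config.)
>>>>>
>>>>> before_fork do |server, worker|
>>>>>   # Wind the old master down one worker at a time as each new worker
>>>>>   # boots; send QUIT once the last new worker is up.
>>>>>   old_pid = "#{server.config[:pid]}.oldbin"
>>>>>   if old_pid != server.pid
>>>>>     begin
>>>>>       sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
>>>>>       Process.kill(sig, File.read(old_pid).to_i)
>>>>>     rescue Errno::ENOENT, Errno::ESRCH
>>>>>       # the old master is already gone
>>>>>     end
>>>>>   end
>>>>> end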
>>>>>
>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>> Queueing" in our New Relic APM. The graph shows what happens after a
>>>>> deployment (represented by the vertical lines): http://goo.gl/iFZPMv .
>>>>> As you can see, it is inconsistent - there is always a latency spike,
>>>>> but at times the request queueing is higher than on previous deploys.
>>>>>
>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>>> this to happen on each deploy?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> --
>>>>> *Sarkis Varozian*
>>>>> svarozian@gmail.com
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>



