unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
From: Eric Wong <e@80x24.org>
To: Sarkis Varozian <svarozian@gmail.com>
Cc: "Bráulio Bhavamitra" <braulio@eita.org.br>,
	"Michael Fischer" <mfischer@zendesk.com>,
	unicorn-public <unicorn-public@bogomips.org>
Subject: Re: Request Queueing after deploy + USR2 restart
Date: Thu, 5 Mar 2015 21:12:13 +0000
Message-ID: <20150305211213.GA21611@dcvr.yhbt.net>
In-Reply-To: <CAGchx-JeYkVjKh2_B7a7oGSb_PDf2i=6KjwrhUXG_ckcrJEDeA@mail.gmail.com>

Sarkis Varozian <svarozian@gmail.com> wrote:
> Braulio,
> 
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/

I'm not about to open images/graphs, but I managed to read that post.

I'm still not sure whether they're actually using raindrops to measure
your stats, but at least they mention it there.

Setting the timestamp header in nginx is a good idea, but for accuracy
you need to be completely certain the clocks are synchronized between
machines (you can't use a monotonic clock across multiple hosts, either;
it has to be real/wall-clock time).

Have you tried using raindrops standalone to confirm queueing in the
kernel?

raindrops inspects the listen queue in the kernel directly, so it's as
accurate as possible as far as the local machine is concerned (it will
not measure internal network latency).

I recommend using raindrops (or inspecting /proc/net/{unix,tcp}, or
running "ss -lx" / "ss -lt") to check the listen queues.

You can also simulate TCP socket queueing in a standalone Ruby
script by doing something like:

    -----------------------------8<---------------------------
    require 'socket'
    host = '127.0.0.1'
    port = 1234
    re = Regexp.escape("#{host}:#{port}")
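    # for a listening socket, the 2nd column (Recv-Q) of "ss" output is
    # the number of connections sitting in the kernel listen queue,
    # waiting for the server to call accept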
    check = lambda do |desc|
      puts desc
      # use "ss -lx" instead for UNIXServer/UNIXSocket
      puts `ss -lt`.split(/\n/).grep(/LISTEN\s.*\b#{re}\b/io)
      puts
    end

    puts "Creating new server"
    s = TCPServer.new(host, port)

    check.call "2nd column should initially be zero:"

    puts "Queueing up one client:"
    c1 = TCPSocket.new(host, port)
    check.call "2nd column should be one, since accept is not yet called:"

    puts "Accepting one client to clear the queue"
    a1 = s.accept
    check.call "2nd column should be back to zero after calling accept:"

    puts "Queueing up two clients:"
    c2 = TCPSocket.new(host, port)
    c3 = TCPSocket.new(host, port)
    check.call "2nd column should show two queued clients"

    a2 = s.accept
    check.call "2nd column should be down to one after calling accept:"
    -----------------------------8<---------------------------

Disclaimer: I'm a Free Software extremist and would not touch
New Relic with a ten-foot pole...

> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.

I assume there's nginx somewhere?  Where is it?

If not, you're not protected from slow uploads with giant request
bodies.  I'm not up-to-date about current haproxy versions, but AFAIK
only nginx buffers request bodies in full.

With nginx, I'm not sure what the point of haproxy is if you're just
going to do round-robin; nginx already does round-robin.  I'd only
use haproxy for a "smarter" load balancing scheme.
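
If you do drop haproxy, a plain nginx upstream block already gives you
round-robin across your hosts.  Rough sketch, hostnames made up:

    upstream unicorn_app {
      # round-robin is nginx's default balancing method
      server app1.internal:8080 fail_timeout=0;
      server app2.internal:8080 fail_timeout=0;
      server app3.internal:8080 fail_timeout=0;
      server app4.internal:8080 fail_timeout=0;
    }

    server {
      listen 80;
      location / {
        # nginx buffers the full request body before passing it on,
        # which is what protects unicorn workers from slow uploads
        proxy_pass http://unicorn_app;
      }
    }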

Thread overview: 20+ messages
2015-03-03 22:24 Request Queueing after deploy + USR2 restart Sarkis Varozian
2015-03-03 22:32 ` Michael Fischer
2015-03-04 19:48   ` Sarkis Varozian
2015-03-04 19:51     ` Michael Fischer
2015-03-04 19:58       ` Sarkis Varozian
2015-03-04 20:17         ` Michael Fischer
2015-03-04 20:24           ` Sarkis Varozian
2015-03-04 20:27             ` Michael Fischer
2015-03-04 20:35             ` Eric Wong
2015-03-04 20:40               ` Sarkis Varozian
2015-03-05 17:07                 ` Sarkis Varozian
2015-03-05 17:13                   ` Bráulio Bhavamitra
2015-03-05 17:28                     ` Sarkis Varozian
2015-03-05 17:31                       ` Bráulio Bhavamitra
2015-03-05 17:32                       ` Bráulio Bhavamitra
2015-03-05 21:12                       ` Eric Wong [this message]
2015-03-03 22:47 ` Bráulio Bhavamitra
2015-03-04 19:50   ` Sarkis Varozian
     [not found] ` <CAJri6_vidE15Xor4THzQB3uxyqPdApxHoyWp47NAG8m8TQuw0Q@mail.gmail.com>
2015-09-13 15:12   ` Bráulio Bhavamitra
2015-09-14  2:14     ` Eric Wong
