unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
From: Dusty Doris <unicorn@dusty.name>
To: Eric Wong <normalperson@yhbt.net>
Cc: mongrel-unicorn@rubyforge.org
Subject: Re: Our Unicorn Setup
Date: Fri, 9 Oct 2009 19:17:36 -0400	[thread overview]
Message-ID: <e9e3ad7b0910091617j3caceba3rd1f55f250f1cb5de@mail.gmail.com> (raw)
In-Reply-To: <20091009231133.GC14137@dcvr.yhbt.net>

On Fri, Oct 9, 2009 at 7:11 PM, Eric Wong <normalperson@yhbt.net> wrote:
> Dusty Doris <unicorn@dusty.name> wrote:
>> On Fri, Oct 9, 2009 at 6:01 PM, Eric Wong <normalperson@yhbt.net> wrote:
>> > Dusty Doris <unicorn@dusty.name> wrote:
>> >> 1.  Simply reuse our existing nginx upstream block (as we did with
>> >> mongrel) and let it round-robin between all the unicorn instances
>> >> on the different servers?  Or perhaps use the fair-upstream plugin?
>> >>
>> >> nginx -> [unicorns]
>> >
>> > Based on your description of your current setup, this would be the best
>> > way to go.  I would configure a lowish listen() :backlog for the
>> > Unicorns, and fail_timeout=0 in nginx for every server.  This setup means
>> > round-robin by default, but if one machine gets a :backlog overflow,
>> > then nginx will automatically retry on a different backend.
>>
>> Thanks for the recommendation.  I was going to give that a shot first
>> to see how it went, as it would also be the easiest to manage.
>>
>> When you say a lowish backlog, what kind of numbers are you talking
>> about?  Say we had 8 workers running that stayed pretty active.  They
>> are usually quick to respond, with an occasional 2 second response
>> (say 1/100) due to a bad sql query that we need to fix.  Would lowish
>> be 16, 32, 64, 128, 1024?
>
> 1024 is the default in Mongrel and Unicorn, which is very generous.  5
> is the default value Ruby uses when initializing sockets, so picking
> something in between is recommended.  It really depends on your app and
> comfort level.  You can also tune and refine it over time safely
> without worrying too much about dropping connections by configuring
> multiple listeners per-instance (see below).
>
> Keep in mind the backlog is rarely an exact setting; it's more of a
> recommendation to the kernel (and the actual value is often higher
> than specified).
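
To make that concrete, a minimal unicorn config sketch (the numbers are
illustrative, not a recommendation from this thread):

```ruby
# config/unicorn.rb -- illustrative sketch
worker_processes 8

# A "lowish" backlog somewhere between Ruby's socket default of 5 and
# unicorn's default of 1024.  The kernel treats the value as a hint:
# it may round it up, and it is capped by net.core.somaxconn on Linux.
listen 8080, :backlog => 64
```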
>
>> Oh and thanks for the tip on the fail_timeout.
>
> No problem, I somehow thought it was widely-known by now...
>
>> > You can also try the following, which is similar to what I describe in:
>> >
>> >  http://article.gmane.org/gmane.comp.lang.ruby.unicorn.general/31
>> >
>>
>> That's an interesting idea, thanks for sharing it.  I like how the
>> individual server also acts as a load balancer, but only when it's
>> having trouble itself.  Otherwise, it just handles the requests through the
>> socket connection.
>>
>> I appreciate your reply, and thank you especially for Unicorn.
>
> You can also try a combination of (1) above and my proposed idea in
> $gmane/31 by configuring two listeners per-Unicorn instance:
>
>   # primary
>   listen 8080, :backlog => 10, :tcp_nopush => true
>
>   # only when all servers overflow the backlog=10 above
>   listen 8081, :backlog => 1024, :tcp_nopush => true
>
> And then putting the 8081s as a backup in nginx like this:
>
>   upstream unicorn_failover {
>     # round-robin between unicorns with small backlogs
>     # as the primary option
>     server 192.168.0.1:8080 fail_timeout=0;
>     server 192.168.0.2:8080 fail_timeout=0;
>     server 192.168.0.3:8080 fail_timeout=0;
>
>     # the "backup" parameter means nginx won't ever try these
>     # unless the set of listeners above fail.
>     server 192.168.0.1:8081 fail_timeout=0 backup;
>     server 192.168.0.2:8081 fail_timeout=0 backup;
>     server 192.168.0.3:8081 fail_timeout=0 backup;
>   }
>
> You can monitor the nginx error logs and see how often it fails on the
> low backlog listener, and then increment/decrement the backlog of
> the primary listeners as needed to get better load-balancing.
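
That log-watching step can be sketched in Ruby; the log format, path, and
the failover_counts helper name are my assumptions, not from this thread:

```ruby
# Hypothetical helper: tally how often each upstream address appears in
# nginx error-log lines.  Hits on the :8080 primaries roughly track
# backlog overflows that forced nginx to retry another backend.
def failover_counts(lines)
  counts = Hash.new(0)
  lines.each do |line|
    # Failed-upstream entries in nginx's error log typically contain
    # something like: upstream: "http://192.168.0.1:8080/some/path"
    counts[$1] += 1 if line =~ %r{upstream: "https?://([\d.]+:\d+)}
  end
  counts
end

# e.g. failover_counts(File.foreach("/var/log/nginx/error.log"))
```

If the :8081 backups never show up in the tally, the primary :backlog can
likely go lower still; if they show up constantly, raise it.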
>
> --
> Eric Wong
>

Awesome!

I am going to give that a shot.

      reply	other threads:[~2009-10-09 23:17 UTC|newest]

Thread overview: 8+ messages
2009-10-09 19:42 Our Unicorn Setup Chris Wanstrath
2009-10-09 20:30 ` Eric Wong
2009-10-09 21:25   ` Chris Wanstrath
2009-10-09 21:03 ` Dusty Doris
2009-10-09 22:01   ` Eric Wong
2009-10-09 22:44     ` Dusty Doris
2009-10-09 23:11       ` Eric Wong
2009-10-09 23:17         ` Dusty Doris [this message]

Code repositories for project(s) associated with this public inbox

	https://yhbt.net/unicorn.git/
