unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
From: Eric Wong <normalperson@yhbt.net>
To: Dusty Doris <unicorn@dusty.name>
Cc: Chris Wanstrath <chris@ozmm.org>, mongrel-unicorn@rubyforge.org
Subject: Re: Our Unicorn Setup
Date: Fri, 9 Oct 2009 15:01:10 -0700	[thread overview]
Message-ID: <20091009220110.GB14137@dcvr.yhbt.net> (raw)
In-Reply-To: <e9e3ad7b0910091403wdbb6971q8ddbd4473f32a2c5@mail.gmail.com>

Dusty Doris <unicorn@dusty.name> wrote:
> Thanks for this post Chris, it was very informative and has answered a
> few questions that I've had in my head over the last couple of days.
> I've been testing unicorn with a few apps for a couple days and
> actually already moved one over to it.
> 
> I have a question for list.

First off, please don't top post, thanks :)

> We are currently setup with a load balancer that runs nginx and
> haproxy.  Nginx, simply proxies to haproxy, which then balances that
> across multiple mongrel or thin instances that span several servers.
> We simply include the public directory on our load balancer so nginx
> can serve static files right there.  We don't have nginx running on
> the app servers, they are just mongrel or thin.
> 
> So, my question.  How would you do a Unicorn deployment when you have
> multiple app servers?

For me, it depends on the amount of static files you serve with nginx
and also the traffic you hit.

Can I assume you're running Linux 2.6 (with epoll + awesome VFS layer)?

May I also assume your load balancer box is not very stressed right now?

> 1.  Simply use mongrels upstream and let it round-robin between all
> the unicorn instances on the different servers?  Or, perhaps use the
> fair-upstream plugin?
> 
> nginx -> [unicorns]

Based on your description of your current setup, this would be the best
way to go.  I would configure a lowish listen() :backlog for the
Unicorns and fail_timeout=0 in nginx for every server.  This setup means
round-robin by default, but if one machine gets a :backlog overflow,
then nginx will automatically retry on a different backend.
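For example, in the unicorn config file (a sketch; the numbers are just
examples, tune them for your own load):

  # config/unicorn.rb
  worker_processes 4

  # a low :backlog makes listen() queue overflows happen quickly,
  # so nginx can retry the request on another backend instead of
  # letting it sit in a deep queue on one busy machine
  listen 8080, :backlog => 16

On the nginx side, fail_timeout=0 on each server line means a failed
connection won't take that backend out of the rotation.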

> 2.  Keep haproxy in the middle?
> 
> nginx -> haproxy -> [unicorns]

This is probably not necessary, but it can't hurt a whole lot either.

Keeping haproxy in the mix is also an option for balancing.  If you're
uncomfortable with the first approach, you can configure haproxy as a
backup server:

  upstream unicorn_failover {
    # round-robin between unicorn app servers on the LAN:
    server 192.168.0.1:8080 fail_timeout=0;
    server 192.168.0.2:8080 fail_timeout=0;
    server 192.168.0.3:8080 fail_timeout=0;

    # haproxy, configured the same way as you do now
    # the "backup" parameter means nginx won't hit haproxy unless
    # all the direct unicorn connections have backlog overflows
    # or other issues
    server 127.0.0.1:8080 fail_timeout=0 backup; # haproxy backup
  }

So your traffic flow would look like the first option in the common
case, but you get a slightly more balanced/queueing solution in case
you're completely overloaded.

> 3.  Stick haproxy in front and have it balance between the app servers
> that run their own nginx?
> 
> haproxy -> [nginxs] -> unicorn # could use socket instead of tcp in this case

This is probably only necessary if:

  1) you have a lot of static files that don't all fit in the VFS caches

  2) you handle a lot of large uploads/responses and nginx buffering will
     thrash one box

I know some sites that run this (or similar) config, but it's mainly
because this is what they've had for 5-10 years and don't have
time/resources to test new setups.
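If you do run nginx on each app server, a Unix domain socket avoids the
TCP overhead for the local hop.  A sketch of the unicorn side (the
socket path here is just an example):

  # config/unicorn.rb on each app server:
  # listen on a Unix socket for the local nginx...
  listen "/tmp/unicorn.sock", :backlog => 64

  # ...and optionally keep a TCP listener around for debugging
  listen "127.0.0.1:8080", :tcp_nopush => true

The local nginx would then point at it with something like
"server unix:/tmp/unicorn.sock fail_timeout=0;" in its upstream block.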

> I would love to hear any opinions.

You can also try a setup similar to the one I describe in:

  http://article.gmane.org/gmane.comp.lang.ruby.unicorn.general/31

Pretty much all the above setups are valid.  The important part is that
nginx must sit *somewhere* in between Unicorn and the rest of the world.

-- 
Eric Wong


Thread overview: 8+ messages
2009-10-09 19:42 Our Unicorn Setup Chris Wanstrath
2009-10-09 20:30 ` Eric Wong
2009-10-09 21:25   ` Chris Wanstrath
2009-10-09 21:03 ` Dusty Doris
2009-10-09 22:01   ` Eric Wong [this message]
2009-10-09 22:44     ` Dusty Doris
2009-10-09 23:11       ` Eric Wong
2009-10-09 23:17         ` Dusty Doris

Code repositories for project(s) associated with this public inbox

	https://yhbt.net/unicorn.git/
