From: Dusty Doris
Newsgroups: gmane.comp.lang.ruby.unicorn.general
Subject: Re: Our Unicorn Setup
Date: Fri, 9 Oct 2009 18:44:17 -0400
To: Eric Wong
Cc: mongrel-unicorn@rubyforge.org
In-Reply-To: <20091009220110.GB14137@dcvr.yhbt.net>

On Fri, Oct 9, 2009 at 6:01 PM, Eric Wong wrote:
> Dusty Doris wrote:
>> Thanks for this post, Chris, it was very informative and has answered a
>> few questions that I've had in my head over the last couple of days.
>> I've been testing unicorn with a few apps for a couple of days and
>> actually already moved one over to it.
>>
>> I have a question for the list.
>
> First off, please don't top post, thanks :)

Sorry about that.

>
>> We are currently set up with a load balancer that runs nginx and
>> haproxy.  Nginx simply proxies to haproxy, which then balances that
>> across multiple mongrel or thin instances that span several servers.
>> We simply include the public directory on our load balancer so nginx
>> can serve static files right there.  We don't have nginx running on
>> the app servers; they just run mongrel or thin.
>>
>> So, my question: how would you do a Unicorn deployment when you have
>> multiple app servers?
>
> For me, it depends on the amount of static files you serve with nginx
> and also the traffic you hit.
>
> Can I assume you're running Linux 2.6 (with epoll + awesome VFS layer)?
>
> May I also assume your load balancer box is not very stressed right now?
>

Yep.  We serve our CSS, JavaScript, and some images from the load
balancer.  But the majority of our images, and all the dynamically
created ones, are served from dedicated image servers, each running its
own nginx instance.

>> 1.  Simply use mongrels upstream and let it round-robin between all
>> the unicorn instances on the different servers?  Or, perhaps use the
>> fair-upstream plugin?
>>
>> nginx -> [unicorns]
>
> Based on your description of your current setup, this would be the best
> way to go.  I would configure a lowish listen() :backlog for the
> Unicorns, fail_timeout=0 in nginx for every server.  This setup means
> round-robin by default, but if one machine gets a :backlog overflow,
> then nginx will automatically retry on a different backend.
>

Thanks for the recommendation.  I was going to give that a shot first
to see how it went, as it would also be the easiest to manage.

When you say a lowish backlog, what kind of numbers are you talking
about?  Say we had 8 workers running that stayed pretty active.  They
are usually quick to respond, with an occasional 2-second response
(say 1 in 100) due to a bad SQL query that we need to fix.  Would
lowish be 16, 32, 64, 128, or 1024?

Oh, and thanks for the tip on fail_timeout.
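Just so I'm sure I'm following, I'm guessing the Unicorn side of that
would be something like the snippet below in the config file (the 16
here is only my own guess at what "lowish" means, not a number you
suggested):

  # unicorn.conf.rb: sketch of a "lowish" listen backlog (my guess)
  worker_processes 8

  # keep the listen() backlog small so a busy machine overflows quickly
  # and nginx (with fail_timeout=0) retries the request on another
  # backend instead of letting it queue up here
  listen 8080, :backlog => 16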
>> 2.  Keep haproxy in the middle?
>>
>> nginx -> haproxy -> [unicorns]
>
> This is probably not necessary, but it can't hurt a whole lot either.
>
> Also an option for balancing.  If you're uncomfortable with the first
> approach, you can also configure haproxy as a backup server:
>
>   upstream unicorn_failover {
>     # round-robin between unicorn app servers on the LAN:
>     server 192.168.0.1:8080 fail_timeout=0;
>     server 192.168.0.2:8080 fail_timeout=0;
>     server 192.168.0.3:8080 fail_timeout=0;
>
>     # haproxy, configured the same way as you do now
>     # the "backup" parameter means nginx won't hit haproxy unless
>     # all the direct unicorn connections have backlog overflows
>     # or other issues
>     server 127.0.0.1:8080 fail_timeout=0 backup; # haproxy backup
>   }
>
> So your traffic flow may look like the first for the common case, but
> you may have a slightly more balanced/queueing solution in case you're
> completely overloaded.
>
>> 3.  Stick haproxy in front and have it balance between the app servers
>> that run their own nginx?
>>
>> haproxy -> [nginxs] -> unicorn  # could use a socket instead of TCP
>> in this case
>
> This is probably only necessary if:
>
>   1) you have a lot of static files that don't all fit in the VFS caches
>
>   2) you handle a lot of large uploads/responses and nginx buffering will
>      thrash one box
>
> I know some sites that run this (or similar) config, but it's mainly
> because this is what they've had for 5-10 years and don't have
> time/resources to test new setups.
>
>> I would love to hear any opinions.
>
> You can also try the following, which is similar to what I describe in:
>
>   http://article.gmane.org/gmane.comp.lang.ruby.unicorn.general/31
>

That's an interesting idea, thanks for sharing it.  I like how the
individual server also acts as a load balancer, but only if it's having
trouble itself.  Otherwise, it just handles requests through the socket
connection.

> Pretty much all the above setups are valid.  The important part is that
> nginx must sit *somewhere* in between Unicorn and the rest of the world.
>
> --
> Eric Wong
>

I appreciate your reply, and especially your work on Unicorn.  Thanks!
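P.S.  For my own reference, here is a rough guess at what the
per-server Unicorn config for the setup in that article might look
like; the socket path, port, and backlog numbers below are all made up
on my end, so treat it as a sketch rather than anything you recommended:

  # unicorn.conf.rb on each app server (paths and numbers are my guesses)
  worker_processes 8

  # the local nginx proxies to this Unix socket for the common case
  listen "/tmp/unicorn.sock", :backlog => 64

  # also listen on TCP so nginx on the *other* machines can fall back
  # to this box when their local workers are overloaded
  listen 8080, :backlog => 64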