From: Ryan Tomayko <email@example.com>
To: unicorn list <firstname.lastname@example.org>
Cc: Luke Melia <email@example.com>
Subject: Re: Fwd: Support for Soft Timeout in Unicorn
Date: Fri, 18 Jun 2010 13:13:33 -0700
Message-ID: <AANLkTimT-pFr0GW8fhmhQ-9a8eI98NrzXsGBsqSn5UoJ@mail.gmail.com>
On Fri, Jun 4, 2010 at 1:59 PM, Eric Wong <firstname.lastname@example.org> wrote:
> Chris Wanstrath <email@example.com> wrote:
>> That's what we do at GitHub. We're running Rails 2.2.2 and have a
>> custom config.ru, thanks to Unicorn:
> By the way, how's the OobGC middleware working for you guys?
We rolled out the OobGC middleware along with a basic RailsBench GC
config (RUBY_HEAP_MIN_SLOTS, etc.). Combined, they knocked about 20ms
(~15%) off the average response time across the site (real traffic).
The impact was significantly greater for requests that allocate a lot of
objects -- response time decreases of as much as 50% for the worst
offenders. We saw no noticeable increase in CPU with OobGC set to run
every 10 requests, and a fair increase in CPU with OobGC set to run
every 5 requests.
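For anyone unfamiliar with the middleware being discussed: Unicorn ships an
OobGC (out-of-band GC) Rack middleware that forces a GC run after every Nth
request, once the response is out of the way. The sketch below illustrates
the concept only -- it is not Unicorn's actual implementation, which hooks in
after the client socket is closed so the client never waits on the GC pause:

```ruby
# Conceptual sketch of out-of-band GC: count requests and run GC
# manually after every Nth one, instead of letting it fire mid-request.
class OobGCSketch
  def initialize(app, interval = 10)
    @app = app
    @interval = interval
    @count = 0
  end

  def call(env)
    status, headers, body = @app.call(env)
    @count += 1
    if @count >= @interval
      @count = 0
      GC.start # deferred GC work happens between requests
    end
    [status, headers, body]
  end
end
```

In a config.ru this would sit ahead of the application, e.g.
`use OobGCSketch, 10`. The real middleware additionally delays the GC.start
until after the response has been written to the client, which is the whole
point.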
Because I rolled this stuff out somewhat non-scientifically, I've
always wondered how much OobGC contributed to the overall savings vs.
the RailsBench GC config. Disabling the OobGC middleware but leaving
the RailsBench GC config in place, I get the following graph:
So we're spending ~1ms request time in GC with OobGC, and ~10ms
request time in GC without it.
Here are some system load graphs for the same time period, just to show
that OobGC has no adverse effect when set to GC every 10 requests:
I assume the RailsBench GC patches improve the effect of OobGC
considerably by increasing the number of objects that can be allocated
between GC runs, allowing more of the GC work to be deferred to
in-between-requests time. Here's the RailsBench config we're using
today, for the record:
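(The config block itself didn't survive the archive. RailsBench-style tuning
is done through environment variables that the patched Ruby 1.8 reads at
startup; the values below are illustrative placeholders in the spirit of the
published Twitter/37signals examples, not necessarily what GitHub ran:)

```shell
# RailsBench GC tuning knobs, read from the environment by the patched
# Ruby interpreter at startup. Values here are illustrative only.
export RUBY_HEAP_MIN_SLOTS=800000        # start with a large heap
export RUBY_HEAP_SLOTS_INCREMENT=100000  # grow in big fixed steps
export RUBY_HEAP_SLOTS_GROWTH_FACTOR=1   # linear rather than exponential growth
export RUBY_GC_MALLOC_LIMIT=60000000     # allow more C allocation before forcing GC
export RUBY_HEAP_FREE_MIN=100000         # free slots required after GC before growing
```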
This is only barely tuned for us. I stole most of the numbers from the
Twitter and 37signals examples.
I've also experimented with tuning the GC specifically to take
advantage of OobGC:
# The following GC settings have been tuned for GitHub application web requests.
# Most settings are significantly higher than the example configs published by
# Twitter and 37signals. There are a couple of reasons for this. First, the GitHub
# app has a memory footprint that's 3x-4x larger than the standard Rails app
# (roughly 200MB after first request compared to ~40MB-50MB). Second, because
# Unicorn is such an exceptional piece of software, we're able to schedule GC
# to run outside the context of requests so as not to affect response times.
# As such, we try to allocate enough memory to service 5 requests without
# triggering GC and then run GC manually immediately after each fifth request
# has been served but before the process starts accepting the next connection.
# The result
# is higher memory use (~300MB per Unicorn worker process on average) and a
# slight increase in CPU due to forced manual GC, but better response times.
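Wiring that schedule up is a one-liner in the config.ru; roughly the
following, assuming the OobGC middleware bundled with Unicorn (the interval
of 5 matches the comment above; `YourApp::Application` is a hypothetical
application constant):

```ruby
# config.ru -- sketch only. Unicorn's bundled OobGC middleware forces a
# GC run after every Nth request, after the response has been sent.
require 'unicorn/oob_gc'

use Unicorn::OobGC, 5      # GC after every 5th request, between connections
run YourApp::Application   # hypothetical app constant
```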
Unfortunately, the bigger heap seems to cause a largish increase in
the time needed for each GC, so the unicorn workers were spending too
much time between requests. CPU and RES increases were even more than
I'd expected. It also didn't eliminate in-request GC entirely as I'd hoped.
I eventually abandoned the idea -- even if I could get it to behave,
it's hardly worth the 1ms it would save. I mention it here because the
general approach might work in situations where the base heap size is
a bit smaller (say < 80MB), or perhaps I'm mistuning one of the GC parameters.
> Luke: did you also get a chance to try this in place of my original
> monkey patch?
> Thanks in advance for any info you can share.
> Eric Wong
Unicorn mailing list - email@example.com
Do not quote signatures (like this one) or top post when replying
Thread overview: 13+ messages
2010-06-03 17:37 Fwd: Support for Soft Timeout in Unicorn Eric Wong
2010-06-03 18:06 ` Pierre Baillet
2010-06-03 18:22 ` Fwd: " Eric Wong
2010-06-03 18:32 ` Pierre Baillet
2010-06-03 18:47 ` Eric Wong
2010-06-03 19:38 ` Chris Wanstrath
2010-06-03 19:40 ` Pierre Baillet
2010-06-09 13:17 ` Pierre Baillet
2010-06-11 1:56 ` Eric Wong
2010-06-04 20:59 ` Eric Wong
2010-06-18 20:13 ` Ryan Tomayko [this message]
2010-06-18 21:48 ` Eric Wong
2010-06-21 19:03 ` Ryan Tomayko