unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
From: Jordan Ritter <jpr5@darkridge.com>
To: unicorn list <mongrel-unicorn@rubyforge.org>
Subject: Re: Thread.current
Date: Tue, 11 Jan 2011 15:12:51 -0800
Message-ID: <EB96E69A-F9E3-4A29-994B-A8D35273DCDB@darkridge.com>
In-Reply-To: <AANLkTim=ymsUHz8KN5niEV6kwJaMtOQyuMny-CqDSqgb@mail.gmail.com>

Unicorn employs a purely multi-process model, not a multi-threaded one; it specifically avoids spawning threads to handle inbound requests.  In fact, I'll bet that inside each request, Thread.current == Thread.main.
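
You can check that bet yourself with a trivial app; a sketch like this (untested) in a config.ru would show it:

  # config.ru -- minimal sketch: report whether the request is
  # being serviced on the worker's main thread.
  run lambda { |env|
    body = "main thread? #{Thread.current == Thread.main}\n"
    [200, { 'Content-Type' => 'text/plain' }, [body]]
  }

Under unicorn, every response should come back true.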

Separate from Unicorn, when running a rack-compatible app in multithreaded mode (the default when the app is invoked directly via rackup + config.ru), there's no guarantee about which thread will service a given request.  This fact may or may not matter to you, depending on what you're trying to do.
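
If it does matter, it's easy to observe: log the servicing thread per request, and under a threaded server the ids will vary from request to request (again, an untested sketch):

  # config.ru -- sketch: print which thread serviced each request.
  run lambda { |env|
    $stderr.puts "request served by thread #{Thread.current.object_id}"
    [200, { 'Content-Type' => 'text/plain' }, ["ok\n"]]
  }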

That said, you *could* use thread-local storage for per-request state under either Unicorn or a multithreaded server, so long as you wiped that storage at the beginning and end of each request (see the middleware sketch below) -- but that's a crappy idiom, even if it is "common" (I don't know what you're referring to offhand).  I can't suggest a more appropriate pattern without knowing more about what you're actually trying to do.
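
If you insist on the idiom, the usual shape is a piece of Rack middleware that clears the thread-locals your app uses around each request.  A rough sketch (untested; :my_app_state is a made-up key standing in for whatever your app actually stashes):

  # Rough sketch: clear a known thread-local key before and after
  # each request so state can't leak from one request to the next.
  class ThreadLocalCleaner
    def initialize(app)
      @app = app
    end

    def call(env)
      Thread.current[:my_app_state] = nil  # start the request clean
      @app.call(env)
    ensure
      Thread.current[:my_app_state] = nil  # don't leak into the next request
    end
  end

  # in config.ru:
  #   use ThreadLocalCleaner
  #   run MyApp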

cheers,
--jordan

On Jan 11, 2011, at 2:52 PM, Jimmy Soho wrote:

> Hi,
> 
> Some more questions still:
> 
> It seems a worker uses the exact same thread to handle each request.
> 
> Is that guaranteed to happen for the lifetime of a worker? Or are
> there cases where a unicorn worker might spin up a new thread to
> handle subsequent requests?
> 
> If the same thread is always used, isn't that a potential issue when
> programmers use thread-local variables, which are not reset between
> requests?  (I know, the usage of thread-local variables is not
> recommended, but take a random rails project, go into its $GEM_HOME
> and do grep -r Thread.current . , see what I mean..)
> 


Thread overview: 10 messages
2011-01-08  2:00 Thread.current Jimmy Soho
2011-01-08  2:57 ` Thread.current Eric Wong
2011-01-08  3:09 ` Thread.current Curtis j Schofield
2011-01-08  5:54   ` Thread.current Jimmy Soho
2011-01-11 22:52     ` Thread.current Jimmy Soho
2011-01-11 23:12       ` Thread.current Eric Wong
2011-01-11 23:12       ` Jordan Ritter [this message]
2011-01-12  3:07         ` Thread.current Jimmy Soho
2011-01-13  4:26           ` Thread.current Eric Wong
2011-01-13 16:46             ` Thread.current Jordan Ritter
