From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eric Wong
Newsgroups: gmane.comp.lang.ruby.rainbows.general
Subject: Re: negative timeout in Rainbows::Fiber::Base
Date: Fri, 31 Aug 2012 01:37:31 +0000
Message-ID: <20120831013731.GA16613@dcvr.yhbt.net>
References: <20120825024556.GA25977@dcvr.yhbt.net> <20120829211707.GA22726@dcvr.yhbt.net>
To: Rainbows! list
"Lin Jen-Shin (godfat)" wrote:
> On Thu, Aug 30, 2012 at 5:17 AM, Eric Wong wrote:
> > Will probably release v4.4.1 with that tonight unless there's something
> > else...
>
> Yeah, I saw it, thank you.

Will release today before I forget again :>

> > I haven't followed celluloid* closely since it was originally announced.
> > Maybe it's worth it to offer celluloid-io as an option and I will accept
> > patches for it.
>
> Cool. Then I might want to try it some other time. I don't have
> confidence though.

No problem, take your time :)

> >> https://github.com/godfat/ruby-server-exp/blob/9d44f8387739f5395bf97aff1689f17615cc4c7e/config/rainbows-em-thread-pool.rb#L21-25
> >> class REMTPoolClient < Rainbows::EventMachine::Client
> >>   def app_call input
> >>     EM.defer{ super }
> >>   end
> >> end
> >
> > I seem to recall this didn't work well with some corner cases
> > (large static files/socket-proxying in responses).
>
> Sorry that I have no idea about those corner cases. (not sure
> if they are corner enough?) I am not surprised if this approach
> won't work for all cases. I would be interested to try out though.
> Not sure what's socket-proxying, but I'll try large static files,
> and see why it's not working properly. Hope it would be easy
> to reproduce.

I seem to recall problems with some of the more esoteric test cases
in Rainbows! a few years ago.  Now that I think more about it, it
might've been related to client pipelining.
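The pipelining hazard can be simulated without EventMachine or Rainbows! at all. Below is a toy model (none of the names come from either project): two pipelined requests are handed to worker threads, the second one finishes first, and nothing re-sequences the responses.

```ruby
require 'thread'

# Toy model of EM.defer with pipelined requests: request 1 is slow,
# request 2 is fast, and responses go out in completion order.
completed = Queue.new
requests  = [[1, 0.2], [2, 0.0]] # [request id, simulated app time]

requests.map { |id, cost|
  Thread.new do
    sleep cost       # request 2 completes before request 1
    completed << id
  end
}.each(&:join)

order = []
order << completed.pop until completed.empty?
order # => [2, 1], but a pipelined client expects [1, 2]
```

A real server would have to buffer and re-order completed responses per connection before writing them out, which is exactly the bookkeeping EM.defer {} does not do for you.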
If a client pipelines requests, I don't think using EM.defer {} makes
it easy to guarantee the server's responses are returned in the correct
order.  This is made worse since (AFAIK) EM provides no easy way to
temporarily disable firing read callbacks for a socket, so a client
which pipelines aggressively becomes bad news.

> >> > > https://github.com/cardinalblue/rest-more/blob/master/example/rainbows.rb#L15-19
> >> > > class REMFClient < Rainbows::EventMachine::Client
> >> > >   def app_call input
> >> > >     Fiber.new{ super }.resume
> >> > >   end
> >> > > end
> >> > Does it actually show benefits compared to regular EM?
> >> > I suppose the Rack application still needs to be made Fiber-aware
> >> > to reap benefits of Fibers
> >>
> >> The Rack application doesn't need to be fiber-aware, but if
> >> the libraries are fiber-aware, then it would be beneficial.
> >> The rest of program doesn't have to know the existence of
> >> fibers, because we don't throw :async.
> >
> > OK.  But are clients served concurrently by Rainbows! in this case?
> >
> > I'm not sure if I'm following this correctly (haven't thought about
> > Fibers in a long time), but control never goes back to Rainbows! itself
> > in your above case, does it?
>
> It does as far as I know. I am not good at explaining in natural languages,
> and English is not my first language. Let me show this concept in codes.
> It's something like this:
> ( Also in gist: https://gist.github.com/3540749 )
>
> # Wrap a fiber around app_call
> Fiber.new{
>   # [...]
>   # Below is modified from event_machine/client.rb, in def app_call input,
>   # but let's forget about :async for now.
>   # In APP.call, it would call Fiber.yield whenever we're waiting for data.
>   status, headers, body = APP.call(@env.merge!(RACK_DEFAULTS))
>
>   # The response is fully ready at this point.
>   ev_write_response(status, headers, body, @hp.next?)
> }.resume
>
> # * * * * *
>
> # Here's the app
> APP = lambda{ |env|
>   # First we remember where we need to jump back
>   f = Fiber.current
>   # Wait for 5 seconds to emulate waiting for data
>   EM.add_timer(5){
>     # Here then we have data available, let's resume back.
>     # But we also wrap resuming in a next_tick block,
>     # giving EM some chances to cleanup it's internal states.
>     EM.next_tick{
>       f.resume('OK')
>     }
>   }
>   # Here we immediately yield and give control back to Rainbows!
>   # and Rainbows! would then go back to EventMachine's regular loop,
>   # buffering requests, making requests, etc.
>   body = Fiber.yield
>
>   # So now body is referring the data resumed from above.
>   # Rainbows! should handle the response at this point.
>   [200, {}, [body]]
> }
>
> Not sure if this is clear enough, please let me know if I didn't
> make it clear, or there's anything wrong in my assumption or
> the over simplified codes. Thanks!

Thank you, your code makes it clear.  I think your approach will work
with most HTTP clients.

However, I think pipelined requests will hit the same problems as
EM.defer, too.  Can you try with pipelining?

Maybe disabling keepalive/persistent connections will make this work
correctly (but you obviously lose latency benefits, too).  I also
don't think it's possible to say "no pipelining" to a client if we
support persistent connections at all.

> > Interesting.  Are you on 32 or 64-bit and are you constrained by VMSize
> > or RSS?
>
> I think it's an x86_64 VM. Not sure what's VMSize, but we're observing
> memory usage in RSS. They claim we have about 500M for one process,
> but sometimes the app cannot allocate more memory even below 500M.
> (Well, I don't quite understand how the app would grow to 500M? It's
> about 160M on average)

It's likely some corner case in your code.  Do you generate potentially
large responses or read in large amounts of data?  (e.g. SELECT
statements without a LIMIT, large files (uploads?)).
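For reference, the yield/resume control flow in the quoted gist can be exercised with plain Ruby, using an array of pending callbacks as a stand-in for EventMachine's timer queue (all names here are invented for the sketch; this is not Rainbows! code):

```ruby
timers = [] # stand-in for EM's timer queue
result = nil

# The "app", parameterized on its own fiber.  Instead of blocking for
# data, it registers a callback and parks itself with Fiber.yield.
app = lambda do |f|
  timers << lambda { f.resume('OK') } # EM.add_timer analogue
  body = Fiber.yield                  # control returns to the "reactor"
  [200, {}, [body]]
end

fiber = Fiber.new { |me| result = app.call(me) }
fiber.resume(fiber) # app runs until it yields
# The fiber is parked here; a real reactor would be free to serve
# other clients at this point.
timers.shift.call   # the "timer" fires and resumes the app
result # => [200, {}, ["OK"]]
```

The key property is that `fiber.resume(fiber)` returns as soon as the app yields, so the event loop regains control without the app ever registering an :async callback.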
A slow client which triggers large server responses (which EM may
buffer even if the Rack app streams it out) can hit this, too.
I don't think EM can be configured to buffer writes to the file
system (nginx will automatically do this, though).

> > If it's VMSize and you're not dealing with deep data structures _and_
> > your code doesn't use recursion much; lowering pthreads stack size to
> > 64K in thread_pthread.c should help with memory usage.  Not sure if you
> > can supply your own Ruby builds for that hosting service, though.
>
> There's a chance to do that on Heroku, but I guess it's too much effort
> to do so. I don't feel there's a standard way to compile stuffs on it.
>
> How do I tell the current stack size for a thread in pthread? It seems I
> cannot run ulimit on Heroku. On the other hand, I am not sure if it's
> significant enough to reduce thread stack size.

Ruby 1.9 sets stack sizes to 512K regardless of ulimit -s.  At least
on Linux, memory defaults to being overcommitted and is lazily
allocated in increments of PAGE_SIZE (4K on x86*).  It's likely the
actual RSS overhead of a native thread stack is <64K.  VMSize overhead
becomes important on 32-bit with many native threads, though.

In comparison, Fibers use only a 4K stack and have no extra overhead
in the kernel.

> I guess we can also reduce the concurrent level (worker_connections?)
> to reduce memory usage, too?
>
> > If you have time, you could also try making something like GNU pth or
> > npth work with Ruby 1.9.  I suspect Fibers will become unusable
> > with *pth, though...
>
> Cool, might be interesting.
> Could then threads be as lightweight as fibers? :P

Yes.  Fwiw, MRI 1.8 green threads and Neverblock are basically
equivalent, too.

> Though I really doubt if threads are really that heavy comparing to fibers.
> At least in some simple tests, threads are fine and efficient enough.

I agree native threads are light enough for most cases (especially
since you're already running Ruby :).
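A quick way to sanity-check the thread-vs-fiber cost on a given VM is to time creating and running a batch of each. This measures only creation and scheduling cost, not stack memory, and results vary a lot by Ruby version and platform, so treat it as a rough sketch rather than a benchmark:

```ruby
require 'benchmark'

N = 1_000

# Spawn-and-finish N native threads vs. N fibers.
t_threads = Benchmark.realtime { N.times { Thread.new {}.join } }
t_fibers  = Benchmark.realtime { N.times { Fiber.new {}.resume } }

printf "threads: %.4fs  fibers: %.4fs\n", t_threads, t_fibers
```

On most setups the fiber loop comes out faster, but "fast enough" is the relevant question for a server, not the raw ratio.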
> EventMachine is still a lot faster than regular sockets (net/http) though,
> so I'll still keep EventMachine for a while even if I switched to threads.

I think part of that is the HTTP parser and I/O buffering being
implemented in C/C++ vs Ruby.  Things like the net-http-persistent
gem should help with pure-Ruby performance, though (and performance
is likely to be better with upcoming Ruby releases).

> > I'm actually surprised unicorn got the attention it has.
> >
> > As project leader, my preference to maintain a low public profile and my
> > refusal to endorse/associate with for-profit interests certainly hurts
> > adoption.
> >
> > If somebody else (like you) wants to market these projects, then more
> > power to them :)  I'm certainly never going to speak at any conference.
> >
> > Rainbows! just overwhelms potential users with too many choices,
> > so it's unlikely to ever be widely-adopted.
>
> Umm... I don't know the reason why you want to stay away from them,
> but please let me know if you don't want someone to market them.
> I'll then try to be as natural as possible when talking about Unicorns.

I enjoy my near-anonymity and want as little reputation/recognition
as possible.  I'm always happy if people talk about software, but I
prefer software stand on its own and not on the reputation of its
authors.  (The only reason I use my name is for potential GPL
enforcement)

_______________________________________________
Rainbows! mailing list - rainbows-talk-GrnCvJ7WPxnNLxjTenLetw@public.gmane.org
http://rubyforge.org/mailman/listinfo/rainbows-talk
Do not quote signatures (like this one) or top post when replying