From: Eric Wong
Newsgroups: gmane.comp.lang.ruby.rainbows.general
Subject: Re: negative timeout in Rainbows::Fiber::Base
Date: Wed, 5 Sep 2012 23:27:39 +0000
Message-ID: <20120905232739.GA25153@dcvr.yhbt.net>
References: <20120825024556.GA25977@dcvr.yhbt.net>
 <20120829211707.GA22726@dcvr.yhbt.net>
 <20120831013731.GA16613@dcvr.yhbt.net>
To: Rainbows! list

"Lin Jen-Shin (godfat)" wrote:
> On Fri, Aug 31, 2012 at 9:37 AM, Eric Wong wrote:
> > I seem to recall problems with some of the more esoteric test cases
> > in Rainbows! a few years ago.
> >
> > Now that I think more about it, it might've been related to client
> > pipelining.  If a client pipelines requests, I don't think using
> > EM.defer {} makes it easy to guarantee the server's responses are
> > returned in the correct order.
> >
> > This is made worse since (AFAIK) EM provides no easy way to
> > temporarily disable firing read callbacks for a socket, so a client
> > which pipelines aggressively becomes bad news.
>
> After some experiments, I now understand why it is hard.  But I
> couldn't figure out from a few quick glimpses at the code how you
> solved this problem for the other concurrency models?

Simple: we don't read from the socket at all while processing a
request.  Extra data the client sends gets buffered in the kernel;
eventually TCP backoff kicks in and the Internet stays usable :>

Since there's no easy way to stop read callbacks with EM, the socket
buffers get drained constantly and clients are able to keep sending
data.  The only option I've found for Rainbows! + EM was to issue
shutdown(SHUT_RD) if a client attempts to pipeline too much (which
never happens with legitimate clients AFAIK).
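Roughly, the idea looks like this with a plain blocking socket.  This
is only a sketch to illustrate the flow, not actual Rainbows! code;
parse_request, handle and MAX_PIPELINED are made-up placeholders:

  require 'socket'

  MAX_PIPELINED = 16 * 1024 # arbitrary cap on buffered pipelined data

  def serve(client)
    buf = ''
    loop do
      # Read only until one full request is parsed; anything else the
      # client pipelines just sits in the kernel socket buffer.
      until (req = parse_request(buf)) # hypothetical HTTP parser
        buf << client.readpartial(16384)
      end

      # A client that has already pipelined an unreasonable amount
      # gets its read side shut down; writes still go through.
      client.shutdown(Socket::SHUT_RD) if buf.bytesize > MAX_PIPELINED

      # No reads happen while the app runs, so TCP flow control
      # eventually pushes back on an aggressive client.
      client.write(handle(req)) # hypothetical app dispatch
    end
  rescue EOFError, Errno::ECONNRESET
    client.close
  end

With a blocking socket the "stop reading" part comes for free; with EM
the read callbacks keep firing, which is why shutdown(SHUT_RD) ends up
being the only real lever.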
> One possible and simple way would be... just making pipelined
> requests sequential, but this would greatly reduce concurrency,
> right?  At least the Puma server runs quite poorly whenever I test
> pipelined requests.

Yes, pipelining is handled sequentially.  It's the easiest and safest
way to implement it, since HTTP/1.1 requires responses to be returned
in the same order the requests were sent (as you mention below).

> My test script is:
>
> httperf --hog --server localhost --port 8080 --uri /cpu --num-calls 4
> --burst-length 2 --num-conn 2 --rate 8 --print-reply
>
> But Zbatery runs quite smoothly with ThreadPool and ThreadSpawn.
> I assume that's because Zbatery handles pipelined requests
> concurrently, collects the responses, and writes them back in the
> correct order, though I cannot tell from the code, at least from a
> few quick glimpses.

Rainbows!/Zbatery handles pipelined requests sequentially.  They only
have concurrency at the per-socket level.

> At this point I am more confident in saying that the Unicorn family
> makes the best Ruby application servers. :)

Good to hear :)

> To address the ordering issue, I guess we could remember the index of
> each request, and if a request with a lower index is still being
> processed, the response shouldn't be written back before the lower
> one has been written.
>
> Not sure if this is worth the effort though...  This would have to
> touch Rainbows!' internals, and it cannot be easily handled by simply
> extending the client class.

I don't think it is worth the effort.  I'm not even sure how often
pipelining is used in the real world; all I know is Rainbows! can
handle it without falling over.

> > Maybe disabling keepalive/persistent connections will make this
> > work correctly (but you obviously lose latency benefits, too).
> >
> > I also don't think it's possible to say "no pipelining" to a client
> > if we support persistent connections at all.
>
> I wonder, if we always run nginx or something similar in front of
> Rainbows!, does it still matter?

It shouldn't matter for nginx; I don't think nginx will (ever)
pipeline to a backend.  Nowadays nginx can do persistent connections
to backends, though I'm not sure how much of a benefit that is for
local sockets.

> I see.  I never thought of it, that EM might be buffering a lot of
> large responses in memory.  As for loading large amounts of data into
> memory, I guess I can't tell.  As far as I know, no, but who knows :P
>
> This must be accidental if there's one...

I think your comment is unfortunately representative of a lot of
software development nowadays.  Embrace pessimism and let it be your
guide :)

> > Ruby 1.9 sets thread stack sizes to 512K regardless of ulimit -s.
> > At least on Linux, memory defaults to being overcommitted and is
> > lazily allocated in increments of PAGE_SIZE (4K on x86*).  It's
> > likely the actual RSS overhead of a native thread stack is <64K.
> >
> > VMSize overhead becomes important on 32-bit with many native
> > threads, though.  In comparison, Fibers use only a 4K stack and
> > have no extra overhead in the kernel.
>
> I see, thanks for the explanation.  I guess that does matter a bit,
> but only if we're using thousands of threads/fibers, and that should
> be quite rare in a web app, I guess.
>
> Using fibers also risks a system stack overflow, especially in a
> Rails app with a lot of plugins, I guess...  Umm, but I also heard
> that fiber stacks were increased a bit in newer Ruby?

You're right, fiber stacks got bigger.  They're 64K everywhere on
1.9.3 and 128K on 64-bit for 2.0.0dev.  So there's even less benefit
in using Fibers nowadays for memory concerns.
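A quick (and admittedly unscientific) way to see the difference on
your own build is to recurse until the stack blows and compare how
deep a thread and a fiber can go; the exact depths depend entirely on
the Ruby version and platform, this is just an illustration:

  def depth(n = 1)
    depth(n + 1)
  rescue SystemStackError
    n # deepest call reached before running out of stack
  end

  puts "main thread: #{depth}"
  puts "new thread:  #{Thread.new { depth }.value}"
  puts "fiber:       #{Fiber.new { depth }.resume}"

With the old 4K fiber stacks the fiber number was tiny compared to the
thread one; with the 64K/128K stacks above the gap narrows, which is
the point.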
> >> Though I really doubt threads are that heavy compared to fibers.
> >> At least in some simple tests, threads are fine and efficient
> >> enough.
> >
> > I agree native threads are light enough for most cases (especially
> > since you're already running Ruby :).
>
> Speaking of this and green threads, I wonder if it's worth the effort
> to implement m:n threading for Ruby?  Or could we just compile and
> link against a threading library which supports m:n threading?
> Goroutines? :P

*shrug*  Not worth _my_ effort for m:n threads.

_______________________________________________
Rainbows! mailing list - rainbows-talk-GrnCvJ7WPxnNLxjTenLetw@public.gmane.org
http://rubyforge.org/mailman/listinfo/rainbows-talk
Do not quote signatures (like this one) or top post when replying