unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
* Unicorn and streaming in Rails 3.1
@ 2011-06-25 16:08 Xavier Noria
  2011-06-25 20:16 ` Eric Wong
  2011-06-25 20:33 ` Eric Wong
  0 siblings, 2 replies; 4+ messages in thread
From: Xavier Noria @ 2011-06-25 16:08 UTC (permalink / raw)
  To: mongrel-unicorn

Streaming works with Unicorn + Apache. Both with and without deflating.

My understanding is that Unicorn + Apache is not a good combination
though because Apache does not buffer, and thus Unicorn has no fast
client in front. (I don't know which is the ultimate technical reason
Unicorn puts such an emphasis on fast clients, but will do some
research about it.)

I have seen in

    http://unicorn.bogomips.org/examples/nginx.conf

the comment

    "You normally want nginx to buffer responses to slow
    clients, even with Rails 3.1 streaming because otherwise a slow
    client can become a bottleneck of Unicorn."

If I understand how this works correctly, nginx buffers the entire
response from Unicorn. First filling what's configured in
proxy_buffer_size and proxy_buffers, and then going to disk if needed
as a last resort. Thus, even if the application streams, I believe the
client will receive the chunked response, but only after it has been
generated by the application and fully buffered by nginx. Which
defeats the purpose of streaming in the use case we have in mind in
Rails 3.1, which is to serve HEAD as soon as possible.

Is that comment in the example configuration file actually saying that
Unicorn with nginx buffering is not broken? I mean, that if your
application has some actions with stream enabled and you put it behind
this setup, the content will be delivered albeit not streamed?

If that is correct, is it reasonable to send nginx the
X-Accel-Buffering header to disable buffering only for streamed
responses? Or is that a very bad idea? If it is a really bad idea, is
the recommendation to Unicorn users that they should just ignore this
new feature?
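
For concreteness, something like this per-action opt-out is what I have
in mind (controller and action names are made up, and I have not
verified this against 3.1):

   class ReportsController < ApplicationController
     def show
       # ask nginx not to buffer this particular response; every other
       # action keeps the default (buffered) behavior
       response.headers["X-Accel-Buffering"] = "no"
       render :stream => true
     end
   end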

* Re: Unicorn and streaming in Rails 3.1
  2011-06-25 16:08 Unicorn and streaming in Rails 3.1 Xavier Noria
@ 2011-06-25 20:16 ` Eric Wong
  2011-06-25 22:23   ` Xavier Noria
  2011-06-25 20:33 ` Eric Wong
  1 sibling, 1 reply; 4+ messages in thread
From: Eric Wong @ 2011-06-25 20:16 UTC (permalink / raw)
  To: unicorn list

Xavier Noria <fxn@hashref.com> wrote:
> Streaming works with Unicorn + Apache. Both with and without deflating.
> 
> My understanding is that Unicorn + Apache is not a good combination
> though because Apache does not buffer, and thus Unicorn has no fast
> client in front. (I don't know which is the ultimate technical reason
> Unicorn puts such an emphasis on fast clients, but will do some
> research about it.)

Basically the per-connection overhead of Unicorn is huge, an entire Ruby
process (tens to several hundreds of megabytes).  The per-connection
overhead of nginx is tiny: maybe a few KB in userspace (including
buffers), and a few KB in the kernel.  You don't want to maintain
connections to Unicorn for a long time because of that cost.

OK. If you have any specific questions that aren't answered on the
website, please ask.

> I have seen in
> 
>     http://unicorn.bogomips.org/examples/nginx.conf
> 
> the comment
> 
>     "You normally want nginx to buffer responses to slow
>     clients, even with Rails 3.1 streaming because otherwise a slow
>     client can become a bottleneck of Unicorn."
> 
> If I understand how this works correctly, nginx buffers the entire
> response from Unicorn. First filling what's configured in
> proxy_buffer_size and proxy_buffers, and then going to disk if needed
> as a last resort. Thus, even if the application streams, I believe the
> client will receive the chunked response, but only after it has been
> generated by the application and fully buffered by nginx. Which
> defeats the purpose of streaming

Yes.

> in the use case we have in mind in
> Rails 3.1, which is to serve HEAD as soon as possible.

Small nit: s/HEAD/the response header/   "HEAD" is a /request/ that only
expects to receive the response header.

nginx only sends HTTP/1.0 requests to unicorn, so Rack::Chunked won't
actually send a chunked/streamed response.  Rails 3.1 /could/ enable
streaming without chunking for HTTP/1.0, but only if the client
didn't set a non-standard HTTP/1.0 header to enable keepalive.  This
is because HTTP/1.0 (w/o keepalive) relies on the server to close
the connection to signal the end of a response.
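
If you want to see the difference on the wire, a quick check like this
against a local app will show it (host, port and path here are made up):

   require "socket"

   # compare response framing over HTTP/1.1 vs HTTP/1.0 for the same action
   %w(HTTP/1.1 HTTP/1.0).each do |version|
     s = TCPSocket.new("127.0.0.1", 8080)
     s.write("GET /streamed #{version}\r\nHost: localhost\r\nConnection: close\r\n\r\n")

     # the HTTP/1.1 response to a streamed action should carry
     # "Transfer-Encoding: chunked"; the HTTP/1.0 response is not chunked
     # and simply ends when the server closes the connection
     puts s.read
     s.close
   end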

> Is that comment in the example configuration file actually saying that
> Unicorn with nginx buffering is not broken? I mean, that if your
> application has some actions with stream enabled and you put it behind
> this setup, the content will be delivered albeit not streamed?

Correct.

> If that is correct, is it reasonable to send nginx the
> X-Accel-Buffering header to disable buffering only for streamed
> responses? Or is that a very bad idea? If it is a really bad idea, is
> the recommendation to Unicorn users that they should just ignore this
> new feature?

You can use "X-Accel-Buffering: no" if you know your responses are small
enough to fit into the kernel socket buffers.  There are two kernel
buffers (Unicorn + nginx), so you can get a little more space there.  nginx
shouldn't make another request to Unicorn if it's blocked writing a
response to the client already, so an evil pipelining client should not
hurt unicorn in this case:

   require "socket"
   host = "example.com"
   s = TCPSocket.new(host, 80)
   req = "GET /something/big HTTP/1.1\r\nHost: #{host}\r\n\r\n"

   # pipeline a large number of requests, nginx won't send another
   # request to an upstream if it's still writing one
   30.times { s.write(req) }

   # don't read the response, or read it slowly, just keep the socket
   # open here...
   sleep
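
If you want a rough idea of how much kernel buffer space you actually
get, you can check the default send-buffer size of a TCP socket
(Ruby 1.9; the defaults are tunable via sysctl):

   require "socket"

   # default kernel send-buffer size for a TCP socket; this, plus the
   # receive buffer on the nginx side, is the space a small response can
   # fit into when proxy buffering is turned off
   sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
   p sock.getsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF).int
   sock.close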

-- 
Eric Wong

* Re: Unicorn and streaming in Rails 3.1
  2011-06-25 16:08 Unicorn and streaming in Rails 3.1 Xavier Noria
  2011-06-25 20:16 ` Eric Wong
@ 2011-06-25 20:33 ` Eric Wong
  1 sibling, 0 replies; 4+ messages in thread
From: Eric Wong @ 2011-06-25 20:33 UTC (permalink / raw)
  To: unicorn list

Xavier Noria <fxn@hashref.com> wrote:
> If it is a really bad idea, is the recommendation
> to Unicorn users that they should just ignore this new feature?

Another thing, Rainbows! + ThreadSpawn/ThreadPool concurrency may do the
trick (without needing nginx at all).  The per-client overhead of
Rainbows! + threads (several hundred KB) is higher than nginx, but still
much lower than Unicorn.

All your Rails code must be thread-safe, though.

If you use Linux, XEpollThreadPool/XEpollThreadSpawn can be worth a
try, too.  The cost of completely idle keepalive clients should be
roughly in line with nginx's.
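
A minimal Rainbows! config for the threaded models would look something
like this (the numbers are only examples, tune them for your app):

   # rainbows.conf.rb
   Rainbows! do
     use :ThreadPool           # or :ThreadSpawn; :XEpollThreadPool on Linux
     worker_connections 100    # concurrent clients per worker process
   end

   # the usual unicorn config directives still apply alongside it
   worker_processes 4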

If you want to forgo thread-safety, Rainbows! + StreamResponseEpoll[1]
+ ForceStreaming middleware[2] may also be an option (it needs nginx).


Keep in mind that I don't know of anybody using Rainbows! for any
serious sites, so there could still be major bugs :)

Rainbows! http://rainbows.rubyforge.org/


[1] currently in rainbows.git, will probably be released this weekend...

[2] I'll probably move this to Rainbows! instead of a Unicorn branch:
    http://bogomips.org/unicorn.git/tree/lib/unicorn/force_streaming.rb?h=force_streaming
    This is 100% untested, I've never run it.

-- 
Eric Wong

* Re: Unicorn and streaming in Rails 3.1
  2011-06-25 20:16 ` Eric Wong
@ 2011-06-25 22:23   ` Xavier Noria
  0 siblings, 0 replies; 4+ messages in thread
From: Xavier Noria @ 2011-06-25 22:23 UTC (permalink / raw)
  To: unicorn list

On Sat, Jun 25, 2011 at 10:16 PM, Eric Wong <normalperson@yhbt.net> wrote:

> Basically the per-connection overhead of Unicorn is huge, an entire Ruby
> process (tens to several hundreds of megabytes).  The per-connection
> overhead of nginx is tiny: maybe a few KB in userspace (including
> buffers), and a few KB in the kernel.  You don't want to maintain
> connections to Unicorn for a long time because of that cost.

I see. I've also read the docs about design and philosophy on the website.

So if I understand it correctly, as far as memory consumption is
concerned, the situation seems to be similar to the old days when
mongrel cluster was the standard for production, except perhaps for
setups with copy-on-write friendly interpreters, which weren't
available then.

So you configure only a few worker processes because of memory
consumption, and since there aren't many, you want them ready to serve
a new request as soon as possible in order to handle a normal level of
concurrency. Hence the convenience of buffering in nginx.
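
For example, I picture a typical config along these lines, with the
worker count sized to available RAM rather than to the number of
concurrent clients (the numbers are only illustrative):

   # config/unicorn.rb
   worker_processes 4
   listen "/tmp/unicorn.sock", :backlog => 64
   timeout 30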

>> in the use case we have in mind in
>> Rails 3.1, which is to serve HEAD as soon as possible.
>
> Small nit: s/HEAD/the response header/   "HEAD" is a /request/ that only
> expects to receive the response header.

Oh yes, that was ambiguous. I actually meant the HEAD element of HTML
documents. The main use case in mind for adding streaming to Rails is
to be able to send the top of your layout (typically everything before
yielding to the view) so that the browser may issue requests for CSS
and JavaScript assets while the application builds a hypothetically
costly dynamic response.
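
In code, as far as I understand the feature, an action opts in roughly
like this (untested sketch based on the ActionController::Streaming
docs; the names are made up):

   class PostsController < ApplicationController
     def index
       @posts = Post.scoped    # lazy relation, evaluated while the view renders
       render :stream => true  # flush the layout's head before the slow part
     end
   end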

> nginx only sends HTTP/1.0 requests to unicorn, so Rack::Chunked won't
> actually send a chunked/streamed response.  Rails 3.1 /could/ enable
> streaming without chunking for HTTP/1.0, but only if the client
> didn't set a non-standard HTTP/1.0 header to enable keepalive.  This
> is because HTTP/1.0 (w/o keepalive) relies on the server to close
> the connection to signal the end of a response.

It's clear then. Also, Rails has code that prevents streaming from
being triggered if the request is HTTP/1.0:

https://github.com/rails/rails/blob/master/actionpack/lib/action_controller/metal/streaming.rb#L243-244
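
Paraphrasing those lines (this is not a verbatim copy of the Rails
source), the guard amounts to dropping the :stream option when the
request came in as HTTP/1.0:

   # illustrative paraphrase, not the actual Rails implementation
   def drop_stream_for_http10(options, env)
     options.delete(:stream) if env["HTTP_VERSION"] == "HTTP/1.0"
     options
   end

   drop_stream_for_http10({ :stream => true }, "HTTP_VERSION" => "HTTP/1.0")
   # => {} -- streaming silently disabled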

> You can use "X-Accel-Buffering: no" if you know your responses are small
> enough to fit into the kernel socket buffers.  There are two kernel
> buffers (Unicorn + nginx), so you can get a little more space there.  nginx
> shouldn't make another request to Unicorn if it's blocked writing a
> response to the client already, so an evil pipelining client should not
> hurt unicorn in this case:

Excellent.

Thanks Eric!
