# -*- encoding: binary -*-
require "thread"
require "sleepy_penguin"
require "raindrops"
# This is an edge-triggered epoll concurrency model with blocking
# accept() in a (hopefully) native thread. This is comparable to
# ThreadSpawn and CoolioThreadSpawn, but is Linux-only and able to exploit
# "wake one" accept() behavior of a blocking accept() call when used
# with native threads.
#
# This supports streaming "rack.input" and allows +:pool_size+ tuning
# independently of +worker_connections+.
#
# === Disadvantages
#
# This is only supported under Linux 2.6 and later kernels.
#
# === Compared to CoolioThreadSpawn
#
# This does not buffer outgoing responses in userspace at all, meaning
# it can lower response latency to fast clients and also prevent
# starvation of other clients when reading slow disks for responses
# (when combined with native threads).
#
# CoolioThreadSpawn is likely better for trickling large static files or
# proxying responses to slow clients, but this is likely better for fast
# clients.
#
# Unlike CoolioThreadSpawn, this supports streaming "rack.input" which
# is useful for reading large uploads from fast clients.
#
# === Compared to ThreadSpawn
#
# This can maintain idle connections without the memory overhead of an
# idle Thread. The cost of handling/dispatching active connections is
# exactly the same for an equivalent number of active connections.
#
# === RubyGem Requirements
#
# * raindrops 0.6.0 or later
# * sleepy_penguin 3.0.1 or later
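#
# === Usage
#
# A minimal sketch of enabling this model from a Rainbows! config file.
# The +worker_connections+ value below is illustrative only, not a
# tuning recommendation:
#
#   Rainbows! do
#     use :XEpollThreadSpawn
#     worker_connections 100
#   end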
module Rainbows::XEpollThreadSpawn
  # :stopdoc:
  include Rainbows::Base

  def init_worker_process(worker)
    super
    require "rainbows/xepoll_thread_spawn/client"
    Rainbows::Client.__send__ :include, Client
  end

  def worker_loop(worker) # :nodoc:
    init_worker_process(worker)
    Client.loop
  end
  # :startdoc:
end