* [PATCH 1/5] test_proxy_pass_no_buffering: fix racy test
2016-06-07 7:39 [PATCH 0/5] another round of proxy-related bugfixes! Eric Wong
From: Eric Wong @ 2016-06-07 7:39 UTC (permalink / raw)
To: yahns-public
We can force output buffer files into a directory of our
choosing so we are not confused by temporary files from
other tests showing up in the process we care about.
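The filtering idea can be sketched outside the test suite: with buffer files forced under a tmpdir we control, an lsof-style scan can match only deleted regular files under that path, ignoring leftovers from other tests. The line format below is illustrative sample data, not real lsof output.

```ruby
require 'tmpdir'
require 'fileutils'

# Force our buffer files under a directory we control, then match only
# deleted regular files under that directory, as the fixed test does.
tmpdir = Dir.mktmpdir('yahns-test')
lines = [
  "ruby  123  u  REG  8,1  4096  #{tmpdir}/buf0 (deleted)",    # ours
  "ruby  123  u  REG  8,1  4096  /tmp/other-test/x (deleted)"  # another test's
]
qtmpdir = Regexp.quote("#{tmpdir}/")
ours = lines.grep(/\bREG\b.*#{qtmpdir}.* \(deleted\)/)
FileUtils.rm_rf(tmpdir)
```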
---
test/test_proxy_pass_no_buffering.rb | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/test/test_proxy_pass_no_buffering.rb b/test/test_proxy_pass_no_buffering.rb
index b2c8b48..c60ccad 100644
--- a/test/test_proxy_pass_no_buffering.rb
+++ b/test/test_proxy_pass_no_buffering.rb
@@ -35,11 +35,13 @@ def setup
server_helper_setup
skip "kcar missing yahns/proxy_pass" unless defined?(Kcar)
require 'yahns/proxy_pass'
+ @tmpdir = yahns_mktmpdir
end
def teardown
@srv2.close if defined?(@srv2) && !@srv2.closed?
server_helper_teardown
+ FileUtils.rm_rf(@tmpdir) if defined?(@tmpdir)
end
def check_headers(io)
@@ -55,11 +57,15 @@ def test_proxy_pass_no_buffering
host2, port2 = @srv2.addr[3], @srv2.addr[1]
pxp = Yahns::ProxyPass.new("http://#{host2}:#{port2}",
proxy_buffering: false)
+ tmpdir = @tmpdir
pid = mkserver(cfg) do
ObjectSpace.each_object(Yahns::TmpIO) { |io| io.close unless io.closed? }
@srv2.close
cfg.instance_eval do
- app(:rack, pxp) { listen "#{host}:#{port}" }
+ app(:rack, pxp) do
+ listen "#{host}:#{port}"
+ output_buffering true, tmpdir: tmpdir
+ end
stderr_path err.path
end
end
@@ -85,10 +91,11 @@ def test_proxy_pass_no_buffering
sleep 0.1
# ensure no files get created
if RUBY_PLATFORM =~ /\blinux\b/ && `which lsof 2>/dev/null`.size >= 4
+ qtmpdir = Regexp.quote("#@tmpdir/")
deleted1 = `lsof -p #{pid}`.split("\n")
- deleted1 = deleted1.grep(/\bREG\b.* \(deleted\)/)
+ deleted1 = deleted1.grep(/\bREG\b.*#{qtmpdir}.* \(deleted\)/)
deleted2 = `lsof -p #{pid2}`.split("\n")
- deleted2 = deleted2.grep(/\bREG\b.* \(deleted\)/)
+ deleted2 = deleted2.grep(/\bREG\b.*#{qtmpdir}.* \(deleted\)/)
[ deleted1, deleted2 ].each do |ary|
ary.delete_if { |x| x =~ /\.(?:err|out) \(deleted\)/ }
end
* [PATCH 2/5] queue_*: check for closed IO objects
From: Eric Wong @ 2016-06-07 7:39 UTC (permalink / raw)
To: yahns-public
Using a high max_events may mean some IO objects are closed
after they're retrieved from the kernel but before our Ruby
process has had a chance to get to them.
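A minimal sketch of the race and the guard, using a plain array in place of the batch returned by epoll_wait/kevent:

```ruby
# A batch of IOs reported ready by the kernel; another thread closes one
# of them before the worker loop reaches it.
r, w = IO.pipe
ready = [r, w]   # stand-in for the max_events batch
w.close          # simulates a close racing with event dispatch

processed = []
ready.each do |io|
  next if io.closed?   # the fix: skip IOs closed since the kernel saw them
  processed << io
end
r.close
```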
---
lib/yahns/queue_epoll.rb | 1 +
lib/yahns/queue_kqueue.rb | 1 +
2 files changed, 2 insertions(+)
diff --git a/lib/yahns/queue_epoll.rb b/lib/yahns/queue_epoll.rb
index 6d8a6ca..7f5c038 100644
--- a/lib/yahns/queue_epoll.rb
+++ b/lib/yahns/queue_epoll.rb
@@ -44,6 +44,7 @@ def worker_thread(logger, max_events)
thr_init
begin
epoll_wait(max_events) do |_, io| # don't care for flags for now
+ next if io.closed?
# Note: we absolutely must not do anything with io after
# we've called epoll_ctl on it, io is exclusive to this
diff --git a/lib/yahns/queue_kqueue.rb b/lib/yahns/queue_kqueue.rb
index 531912b..229475c 100644
--- a/lib/yahns/queue_kqueue.rb
+++ b/lib/yahns/queue_kqueue.rb
@@ -53,6 +53,7 @@ def worker_thread(logger, max_events)
thr_init
begin
kevent(nil, max_events) do |_,_,_,_,_,io| # don't care for flags for now
+ next if io.closed?
# Note: we absolutely must not do anything with io after
# we've called kevent(...,EV_ADD) on it, io is exclusive to this
# thread only until kevent(...,EV_ADD) is called on it.
* [PATCH 3/5] cleanup graceful shutdown handling
From: Eric Wong @ 2016-06-07 7:39 UTC (permalink / raw)
To: yahns-public
Using a 10ms tick was too little; use 100ms instead to avoid
burning CPU. Ideally, we would not tick at all during shutdown
(we're normally tickless), but that would make the common case
slightly more expensive, and shutdowns are rare (I hope).
Then, change our process title to indicate we're shutting down,
and finally, cut down on repeated log spew during shutdown by
only logging when clients are actually dropped.
This means we could potentially take up to 90ms longer to notice
when we can complete a graceful shutdown, but oh well...
While we're at it, add a test to ensure graceful shutdowns
work as intended with multiple processes.
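The timeout change can be sketched in isolation; `shutdown_tick` below is a hypothetical helper, not the actual yahns method name:

```ruby
# While alive we block indefinitely (tickless); during shutdown we poll
# on a 100ms tick (was 10ms) so signal handling doesn't burn CPU.
def shutdown_tick(watch, alive)
  IO.select(watch, nil, nil, alive ? nil : 0.1)
end

r, w = IO.pipe
w.write('x')
ready = shutdown_tick([r], false)  # returns promptly: r is readable
r.close
w.close
```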
---
lib/yahns/fdmap.rb | 11 +++++++----
lib/yahns/server.rb | 3 ++-
lib/yahns/server_mp.rb | 2 +-
test/test_server.rb | 35 +++++++++++++++++++++++++++++++++++
4 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/lib/yahns/fdmap.rb b/lib/yahns/fdmap.rb
index fab5d36..0aaf360 100644
--- a/lib/yahns/fdmap.rb
+++ b/lib/yahns/fdmap.rb
@@ -89,10 +89,10 @@ def forget(io)
# We should not be calling this too frequently, it is expensive
# This is called while @fdmap_mtx is held
def __expire(timeout)
- return if @count == 0
+ return 0 if @count == 0
nr = 0
now = Yahns.now
- (now - @last_expire) >= 1.0 or return # don't expire too frequently
+ (now - @last_expire) >= 1.0 or return @count # don't expire too frequently
# @fdmap_ary may be huge, so always expire a bunch at once to
# avoid getting to this method too frequently
@@ -104,8 +104,11 @@ def __expire(timeout)
end
@last_expire = Yahns.now
- msg = timeout ? "timeout=#{timeout}" : "client_timeout"
- @logger.info("dropping #{nr} of #@count clients for #{msg}")
+ if nr != 0
+ msg = timeout ? "timeout=#{timeout}" : "client_timeout"
+ @logger.info("dropping #{nr} of #@count clients for #{msg}")
+ end
+ @count
end
# used for graceful shutdown
diff --git a/lib/yahns/server.rb b/lib/yahns/server.rb
index ba2066b..00e5f15 100644
--- a/lib/yahns/server.rb
+++ b/lib/yahns/server.rb
@@ -496,7 +496,8 @@ def sp_sig_handle(alive)
def dropping(fdmap)
if drop_acceptors[0] || fdmap.size > 0
timeout = @shutdown_expire < Yahns.now ? -1 : @shutdown_timeout
- fdmap.desperate_expire(timeout)
+ n = fdmap.desperate_expire(timeout)
+ $0 = "yahns quitting, #{n} FD(s) remain"
true
else
false
diff --git a/lib/yahns/server_mp.rb b/lib/yahns/server_mp.rb
index fa12a0c..c9cd207 100644
--- a/lib/yahns/server_mp.rb
+++ b/lib/yahns/server_mp.rb
@@ -159,7 +159,7 @@ def run_mp_worker(worker)
def mp_sig_handle(watch, alive)
# not performance critical
watch.delete_if { |io| io.to_io.closed? }
- if r = IO.select(watch, nil, nil, alive ? nil : 0.01)
+ if r = IO.select(watch, nil, nil, alive ? nil : 0.1)
r[0].each(&:yahns_step)
end
case @sig_queue.shift
diff --git a/test/test_server.rb b/test/test_server.rb
index 9f33b42..c6d70cb 100644
--- a/test/test_server.rb
+++ b/test/test_server.rb
@@ -725,6 +725,41 @@ def _persistent_shutdown(nr_workers)
assert_nil c.read(666)
end
+ def test_slow_shutdown_timeout; _slow_shutdown(nil); end
+ def test_slow_shutdown_timeout_mp; _slow_shutdown(1); end
+
+ def _slow_shutdown(nr_workers)
+ err, cfg, host, port = @err, Yahns::Config.new, @srv.addr[3], @srv.addr[1]
+ pid = mkserver(cfg) do
+ ru = lambda { |e| [ 200, {'Content-Length'=>'2'}, %w(OK) ] }
+ cfg.instance_eval do
+ app(:rack, ru) { listen "#{host}:#{port}" }
+ stderr_path err.path
+ worker_processes(nr_workers) if nr_workers
+ end
+ end
+ c = get_tcp_client(host, port)
+ c.write 'G'
+ 100000.times { Thread.pass }
+ Process.kill(:QUIT, pid)
+ "ET / HTTP/1.1\r\nHost: example.com\r\n\r\n".each_byte do |x|
+ Thread.pass
+ c.write(x.chr)
+ Thread.pass
+ end
+ assert_equal c, c.wait(30)
+ buf = ''.dup
+ re = /\r\n\r\nOK\z/
+ Timeout.timeout(30) do
+ begin
+ buf << c.readpartial(666)
+ end until re =~ buf
+ end
+ c.close
+ _, status = Timeout.timeout(5) { Process.waitpid2(pid) }
+ assert status.success?, status.inspect
+ end
+
def test_before_exec
err, cfg, host, port = @err, Yahns::Config.new, @srv.addr[3], @srv.addr[1]
ru = lambda { |e| [ 200, {'Content-Length'=>'2' }, %w(OK) ] }
* [PATCH 4/5] proxy_pass: more descriptive error messages
From: Eric Wong @ 2016-06-07 7:39 UTC (permalink / raw)
To: yahns-public
This should make it easier to figure out where certain
errors are coming from and perhaps fix problems with
upstreams, too.
This helped me track down the problem causing the public-inbox
WWW component, running under Perl v5.20.2 on my Debian jessie
system, to break and drop connections going through
Plack::Middleware::Deflater with gzip:
https://public-inbox.org/meta/20160607071401.29325-1-e@80x24.org/
Perl 5.14.2 on Debian wheezy did not detect this problem :x
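The dispatch added here can be sketched as a small case expression; `upstream_error_message` is a hypothetical stand-in for the logging branch in `proxy_err_response`:

```ruby
# nil means premature EOF, a String is a context-specific message from
# the caller, anything else is treated as an exception object.
def upstream_error_message(exc)
  case exc
  when nil    then 'premature upstream EOF'
  when String then exc
  else "upstream error: #{exc.class}: #{exc.message}"
  end
end
```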
---
lib/yahns/proxy_http_response.rb | 8 ++++++--
lib/yahns/req_res.rb | 6 ++++--
test/test_proxy_pass.rb | 4 ++--
3 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/lib/yahns/proxy_http_response.rb b/lib/yahns/proxy_http_response.rb
index 8de5b4f..9867da2 100644
--- a/lib/yahns/proxy_http_response.rb
+++ b/lib/yahns/proxy_http_response.rb
@@ -53,6 +53,8 @@ def proxy_err_response(code, req_res, exc)
logger.error('premature upstream EOF')
when Kcar::ParserError
logger.error("upstream response error: #{exc.message}")
+ when String
+ logger.error(exc)
else
Yahns::Log.exception(logger, 'upstream error', exc)
end
@@ -167,7 +169,9 @@ def proxy_read_body(tip, kcar, req_res)
return proxy_unbuffer(wbuf) if Yahns::WbufLite === wbuf
when nil # EOF
# HTTP/1.1 upstream, unexpected premature EOF:
- return proxy_err_response(nil, req_res, nil) if len || chunk
+ msg = "upstream EOF (#{len} bytes left)" if len
+ msg = 'upstream EOF (chunk)' if chunk
+ return proxy_err_response(nil, req_res, msg) if msg
# HTTP/1.0 upstream:
wbuf = proxy_write(wbuf, "0\r\n\r\n".freeze, req_res) if alive
@@ -198,7 +202,7 @@ def proxy_read_trailers(kcar, req_res)
when :wait_readable
return wait_on_upstream(req_res)
when nil # premature EOF
- return proxy_err_response(nil, req_res, nil)
+ return proxy_err_response(nil, req_res, 'upstream EOF (trailers)')
end # no loop here
end
wbuf = proxy_write(wbuf, trailer_out(tlr), req_res)
diff --git a/lib/yahns/req_res.rb b/lib/yahns/req_res.rb
index 9bb8f35..041b908 100644
--- a/lib/yahns/req_res.rb
+++ b/lib/yahns/req_res.rb
@@ -42,7 +42,8 @@ def yahns_step # yahns event loop entry point
# continue looping in middle "case @resbuf" loop
when :wait_readable
return rv # spurious wakeup
- when nil then return c.proxy_err_response(502, self, nil)
+ when nil
+ return c.proxy_err_response(502, self, 'upstream EOF (headers)')
end # NOT looping here
when String # continue reading trickled response headers from upstream
@@ -50,7 +51,8 @@ def yahns_step # yahns event loop entry point
case rv = kgio_tryread(0x2000, buf)
when String then res = req.headers(@hdr, resbuf << rv) and break
when :wait_readable then return rv
- when nil then return c.proxy_err_response(502, self, nil)
+ when nil
+ return c.proxy_err_response(502, self, 'upstream EOF (big headers)')
end while true
@resbuf = false
diff --git a/test/test_proxy_pass.rb b/test/test_proxy_pass.rb
index 5dd8058..4c4b53a 100644
--- a/test/test_proxy_pass.rb
+++ b/test/test_proxy_pass.rb
@@ -485,7 +485,7 @@ def check_truncated_upstream(host, port)
assert_equal exp, res
errs = File.readlines(@err.path).grep(/\bERROR\b/)
assert_equal 1, errs.size
- assert_match(/premature upstream EOF/, errs[0])
+ assert_match(/upstream EOF/, errs[0])
@err.truncate(0)
# truncated headers or no response at all...
@@ -501,7 +501,7 @@ def check_truncated_upstream(host, port)
s.close
errs = File.readlines(@err.path).grep(/\bERROR\b/)
assert_equal 1, errs.size
- assert_match(/premature upstream EOF/, errs[0])
+ assert_match(/upstream EOF/, errs[0])
@err.truncate(0)
end
end
* [PATCH 5/5] proxy_pass: fix HTTP/1.0 backends on EOF w/o buffering
From: Eric Wong @ 2016-06-07 7:39 UTC (permalink / raw)
To: yahns-public
We must ensure we properly close connections to HTTP/1.0
backends even if we blocked writing on outgoing data.
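The HTTP/1.0 semantics at issue: without a Content-Length, the backend marks end-of-body by closing the connection, so the reading side must treat EOF as completion, not an error. A self-contained sketch with a toy backend:

```ruby
require 'socket'

# A toy HTTP/1.0 backend: no Content-Length, EOF delimits the body.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]
backend = Thread.new do
  c = server.accept
  c.readpartial(4096)                # consume the request
  c.write "HTTP/1.0 200 OK\r\n\r\nhello"
  c.close                            # close == end of body
end

s = TCPSocket.new('127.0.0.1', port)
s.write "GET / HTTP/1.0\r\n\r\n"
resp = s.read                        # read until EOF, the HTTP/1.0 terminator
s.close
backend.join
server.close
body = resp.split("\r\n\r\n", 2)[1]
```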
---
lib/yahns/proxy_http_response.rb | 9 ++-
lib/yahns/wbuf_lite.rb | 7 +-
test/test_proxy_pass_no_buffering.rb | 138 +++++++++++++++++++----------------
3 files changed, 84 insertions(+), 70 deletions(-)
diff --git a/lib/yahns/proxy_http_response.rb b/lib/yahns/proxy_http_response.rb
index 9867da2..316c310 100644
--- a/lib/yahns/proxy_http_response.rb
+++ b/lib/yahns/proxy_http_response.rb
@@ -10,13 +10,14 @@
module Yahns::HttpResponse # :nodoc:
# switch and yield
- def proxy_unbuffer(wbuf)
+ def proxy_unbuffer(wbuf, nxt = :ignore)
@state = wbuf
+ wbuf.req_res = nil if nxt.nil? && wbuf.respond_to?(:req_res=)
tc = Thread.current
tc[:yahns_fdmap].remember(self) # Yahns::HttpClient
tc[:yahns_queue].queue_mod(self, wbuf.busy == :wait_readable ?
Yahns::Queue::QEV_RD : Yahns::Queue::QEV_WR)
- :ignore
+ nxt
end
def wbuf_alloc(req_res)
@@ -175,9 +176,9 @@ def proxy_read_body(tip, kcar, req_res)
# HTTP/1.0 upstream:
wbuf = proxy_write(wbuf, "0\r\n\r\n".freeze, req_res) if alive
- return proxy_unbuffer(wbuf) if Yahns::WbufLite === wbuf
req_res.shutdown
- break
+ return proxy_unbuffer(wbuf, nil) if Yahns::WbufLite === wbuf
+ return proxy_busy_mod(wbuf, req_res)
when :wait_readable
return wait_on_upstream(req_res)
end until kcar.body_eof?
diff --git a/lib/yahns/wbuf_lite.rb b/lib/yahns/wbuf_lite.rb
index afee1e9..fa52f54 100644
--- a/lib/yahns/wbuf_lite.rb
+++ b/lib/yahns/wbuf_lite.rb
@@ -7,9 +7,11 @@
# This is only used for "proxy_buffering: false"
class Yahns::WbufLite < Yahns::Wbuf # :nodoc:
attr_reader :busy
+ attr_writer :req_res
def initialize(req_res)
- super(nil, :ignore)
+ alive = req_res.alive
+ super(nil, alive ? :ignore : false)
@req_res = req_res
end
@@ -35,8 +37,9 @@ def wbuf_close(client)
if @req_res
client.hijack_cleanup
Thread.current[:yahns_queue].queue_mod(@req_res, Yahns::Queue::QEV_RD)
+ return :ignore
end
- :ignore
+ @wbuf_persist
rescue
@req_res = @req_res.close if @req_res
raise
diff --git a/test/test_proxy_pass_no_buffering.rb b/test/test_proxy_pass_no_buffering.rb
index c60ccad..0afa4e1 100644
--- a/test/test_proxy_pass_no_buffering.rb
+++ b/test/test_proxy_pass_no_buffering.rb
@@ -18,8 +18,13 @@ def call(env)
when 'GET'
case env['PATH_INFO']
when '/giant-body'
- h = [ %W(content-type text/pain),
- %W(content-length #{NCHUNK * STR4.size}) ]
+ h = [ %W(content-type text/pain) ]
+
+ # HTTP/1.0 is not Rack-compliant, so no Rack::Lint for us :)
+ if env['HTTP_VERSION'] == 'HTTP/1.1'
+ h << %W(content-length #{NCHUNK * STR4.size})
+ end
+
body = Object.new
def body.each
NCHUNK.times { yield STR4 }
@@ -53,6 +58,7 @@ def check_headers(io)
end
def test_proxy_pass_no_buffering
+ to_close = []
err, cfg, host, port = @err, Yahns::Config.new, @srv.addr[3], @srv.addr[1]
host2, port2 = @srv2.addr[3], @srv2.addr[1]
pxp = Yahns::ProxyPass.new("http://#{host2}:#{port2}",
@@ -81,79 +87,83 @@ def test_proxy_pass_no_buffering
stderr_path err.path
end
end
- s = TCPSocket.new(host, port)
- req = "GET /giant-body HTTP/1.1\r\nHost: example.com\r\n" \
- "Connection: close\r\n\r\n"
- s.write(req)
- bufs = []
- sleep 1
- 10.times do
- sleep 0.1
- # ensure no files get created
- if RUBY_PLATFORM =~ /\blinux\b/ && `which lsof 2>/dev/null`.size >= 4
- qtmpdir = Regexp.quote("#@tmpdir/")
- deleted1 = `lsof -p #{pid}`.split("\n")
- deleted1 = deleted1.grep(/\bREG\b.*#{qtmpdir}.* \(deleted\)/)
- deleted2 = `lsof -p #{pid2}`.split("\n")
- deleted2 = deleted2.grep(/\bREG\b.*#{qtmpdir}.* \(deleted\)/)
- [ deleted1, deleted2 ].each do |ary|
- ary.delete_if { |x| x =~ /\.(?:err|out) \(deleted\)/ }
+ %w(1.0 1.1).each do |ver|
+ s = TCPSocket.new(host, port)
+ to_close << s
+ req = "GET /giant-body HTTP/#{ver}\r\nHost: example.com\r\n".dup
+ req << "Connection: close\r\n" if ver == '1.1'
+ req << "\r\n"
+ s.write(req)
+ bufs = []
+ sleep 1
+ 10.times do
+ sleep 0.1
+ # ensure no files get created
+ if RUBY_PLATFORM =~ /\blinux\b/ && `which lsof 2>/dev/null`.size >= 4
+ qtmpdir = Regexp.quote("#@tmpdir/")
+ deleted1 = `lsof -p #{pid}`.split("\n")
+ deleted1 = deleted1.grep(/\bREG\b.*#{qtmpdir}.* \(deleted\)/)
+ deleted2 = `lsof -p #{pid2}`.split("\n")
+ deleted2 = deleted2.grep(/\bREG\b.*#{qtmpdir}.* \(deleted\)/)
+ [ deleted1, deleted2 ].each do |ary|
+ ary.delete_if { |x| x =~ /\.(?:err|out) \(deleted\)/ }
+ end
+ assert_equal 1, deleted1.size, "pid1=#{deleted1.inspect}"
+ assert_equal 0, deleted2.size, "pid2=#{deleted2.inspect}"
+ bufs.push(deleted1[0])
end
- assert_equal 1, deleted1.size, "pid1=#{deleted1.inspect}"
- assert_equal 0, deleted2.size, "pid2=#{deleted2.inspect}"
- bufs.push(deleted1[0])
end
- end
- before = bufs.size
- bufs.uniq!
- assert bufs.size < before, 'unlinked buffer should not grow'
- buf = ''.dup
- slow = Digest::MD5.new
- ft = Thread.new do
+ before = bufs.size
+ bufs.uniq!
+ assert bufs.size < before, 'unlinked buffer should not grow'
+ buf = ''.dup
+ slow = Digest::MD5.new
+ ft = Thread.new do
+ fast = Digest::MD5.new
+ f = TCPSocket.new(host2, port2)
+ f.write(req)
+ b2 = ''.dup
+ check_headers(f)
+ nf = 0
+ begin
+ f.readpartial(1024 * 1024, b2)
+ nf += b2.bytesize
+ fast.update(b2)
+ rescue EOFError
+ f = f.close
+ end while f
+ b2.clear
+ [ nf, fast.hexdigest ]
+ end
+ Thread.abort_on_exception = true
+ check_headers(s)
+ n = 0
+ begin
+ s.readpartial(1024 * 1024, buf)
+ slow.update(buf)
+ n += buf.bytesize
+ sleep 0.01
+ rescue EOFError
+ s = s.close
+ end while s
+ ft.join(5)
+ assert_equal [n, slow.hexdigest ], ft.value
+
fast = Digest::MD5.new
- f = TCPSocket.new(host2, port2)
+ f = TCPSocket.new(host, port)
f.write(req)
- b2 = ''.dup
check_headers(f)
- nf = 0
begin
- f.readpartial(1024 * 1024, b2)
- nf += b2.bytesize
- fast.update(b2)
+ f.readpartial(1024 * 1024, buf)
+ fast.update(buf)
rescue EOFError
f = f.close
end while f
- b2.clear
- [ nf, fast.hexdigest ]
+ buf.clear
+ assert_equal slow.hexdigest, fast.hexdigest
end
- Thread.abort_on_exception = true
- check_headers(s)
- n = 0
- begin
- s.readpartial(1024 * 1024, buf)
- slow.update(buf)
- n += buf.bytesize
- sleep 0.01
- rescue EOFError
- s = s.close
- end while s
- ft.join(5)
- assert_equal [n, slow.hexdigest ], ft.value
-
- fast = Digest::MD5.new
- f = TCPSocket.new(host, port)
- f.write(req)
- check_headers(f)
- begin
- f.readpartial(1024 * 1024, buf)
- fast.update(buf)
- rescue EOFError
- f = f.close
- end while f
- buf.clear
- assert_equal slow.hexdigest, fast.hexdigest
ensure
- s.close if s
+ to_close.each { |io| io.close unless io.closed? }
quit_wait(pid)
quit_wait(pid2)
end