Date | Commit message |
|
It is causing _blocked_zombie to fail on rtype=11 due to the
addition of Content-Length making the client persistent when
we didn't actually want it to be (for the test).
|
|
Needed for Ruby 3.1, and likely 3.2, as well...
|
|
When tweaking buffer sizes, another IN_CREATE event can
happen soon after the delete.
|
|
IO.read may invoke subprocesses, which can set off
security warnings.
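The subtle part is that IO.read treats a leading "|" in its
argument as a command to run, while File.read never does.
An illustration only, not the actual change:

  IO.read("|id")    # runs id(1) in a subprocess and returns its output
  File.read("|id")  # Errno::ENOENT; never spawns a process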
|
|
These options can be useful for limiting CGI process runtime and
memory usage.
|
|
Make this test less multi-core/scheduler-dependent.
|
|
Since we've required Ruby 2.0+ for a while, we can assume
descriptors are created with IO#close_on_exec=true and
avoid bloating our code with calls to it.
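A quick way to see the 2.0+ default this relies on (illustration
only, not part of this change):

  r, w = IO.pipe
  [ r.close_on_exec?, w.close_on_exec? ]  # => [true, true] on Ruby 2.0+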
|
|
After this fix, all tests except test_client_expire are
passing on my system (macOS 10.12.6). Honestly, I don't
really care whether it works perfectly; it's just
nice to be able to run the same server on my development
machine. Production is of course Linux.
|
|
This is apparently needed to pass tests with a newer version of
OpenSSL found in Debian 9
|
|
A test failure was causing SIGQUIT to be delivered before
the forked process had a chance to hit trap(:QUIT).
|
|
We can't require 'proxy_pass' in both a parent and forked child,
so require it up front (as kcar will become a hard dependency
in place of unicorn).
Then, rely on GTL (global test lock) to synchronize around fork
since the VM may not always be able to protect that.
However, there's no need to synchronize around
spawn/system/`backtick`, as the VM should always be using those
in a thread-safe way (via vfork).
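Roughly the shape of the fork wrapper; "GTL" and "xfork" are
stand-in names, not necessarily what the test suite uses:

  GTL = Mutex.new  # stand-in for the global test lock

  def xfork(&blk)
    GTL.synchronize { fork(&blk) }
  end

  # no wrapper needed for these; the VM uses vfork safely:
  system('true')
  Process.waitpid(spawn('true'))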
|
|
Current (tested r60757) ruby trunk warns in a few more places than
2.4.x did, so clean them up.
|
|
Setting TAIL=1 will automatically use the portable "tail -f".
This can be helpful in diagnosing failures during development.
GNU tail users may set TAIL="tail -F" (or "gtail -F")
to use the "-F" ("--follow=name") option to track changes
across the SIGUSR1 log reopening tests.
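A sketch of how the knob might be consumed by a test helper
(variable names here are hypothetical):

  logfile = 'test_output.log'  # hypothetical path of the log under test
  if tail = ENV['TAIL']
    cmd = tail == '1' ? %w(tail -f) : tail.split(/\s+/)
    tail_pid = spawn(*cmd, logfile, out: $stderr)
    at_exit { Process.kill(:TERM, tail_pid) }
  end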
|
|
Since there'll be some changes to accommodate the new parser,
ensure we prepare the Rack environment correctly.
|
|
Since the common case is still to run a single app inside yahns,
we can simplify setup a bit for systemd (and similar) users by
allowing them to omit the "listen" directive in that case.
|
|
It's possible to have "ruby" executables by other names
(e.g. "ruby24"), so use a supported API for finding our
executable.
This feature was added in Ruby 1.9.2, so it's safe to use
as we've always been 1.9.3+ (and nowadays 2.0+)
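The supported API here is presumably RbConfig.ruby (my
assumption), which returns the path of the running interpreter
regardless of what the executable is named:

  require 'rbconfig'
  RbConfig.ruby  # => e.g. "/usr/local/bin/ruby24"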
|
|
On FreeBSD 10.3 (and presumably other *BSD TCP stacks), the value
of SO_KEEPALIVE returned by getsockopt is 8, even when set to
'1' via setsockopt. Relax the test to only ensure the boolean
value is interpreted as "true".
Verified independently of Ruby using the following:
--------8<---------
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>
static int err(const char *msg)
{
	perror(msg);
	return 1;
}

int main(void)
{
	int sv[2];
	int set = 1;
	int got;
	socklen_t len = (socklen_t) sizeof(int);
	int rc;

	rc = socketpair(PF_LOCAL, SOCK_STREAM, 0, sv);
	if (rc)
		return err("socketpair failed");
	rc = setsockopt(sv[0], SOL_SOCKET, SO_KEEPALIVE, &set, len);
	if (rc)
		return err("setsockopt failed");
	rc = getsockopt(sv[0], SOL_SOCKET, SO_KEEPALIVE, &got, &len);
	if (rc)
		return err("getsockopt failed");
	printf("got: %d\n", got);
	return 0;
}
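On the Ruby side, the relaxed check presumably boils down to
Socket::Option#bool, which treats any non-zero value as true;
a quick sketch:

  require 'socket'
  a, b = UNIXSocket.pair
  a.setsockopt(:SOCKET, :KEEPALIVE, true)
  a.getsockopt(:SOCKET, :KEEPALIVE).bool  # => true even if the int is 8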
|
|
As with the previous commit
("response: do not set chunked header on bodyless responses"),
blindly setting "Transfer-Encoding: chunked" is wrong and
confuses "curl -T" on 204 responses, at least.
|
|
Setting "Transfer-Encoding: chunked" on responses will confuse
clients which see a 204 response and do not expect a body.
This follows Rack::Chunked behavior, as yahns should function
without Rack::Chunked middleware.
This regression appeared in yahns v1.13.0 (2016-08-05)
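A minimal sketch of the kind of guard involved (the method name
is mine, not the yahns diff):

  def response_has_body?(status)
    # 1xx, 204 and 304 responses never carry a body
    !((100..199).cover?(status) || status == 204 || status == 304)
  end

  response_has_body?(204)  # => false, so no "Transfer-Encoding: chunked"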
|
|
We still need to iterate through all response headers to support
response-only Rack hijacking. Previously, we only supported
full hijacking on so-called "HTTP/0.9" clients.
n.b. This diff will be easier to read with the
-b/--ignore-space-change option of git-diff(1) or GNU diff(1)
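A sketch of the shape of that loop; method and variable names
are mine, not the yahns diff:

  def emit_response(client, status, headers)
    hijack = nil
    client.write("HTTP/1.1 #{status}\r\n")
    headers.each do |key, value|
      if key == 'rack.hijack'
        hijack = value  # response-only hijack per rack SPEC
      else
        client.write("#{key}: #{value}\r\n")  # simplified header emission
      end
    end
    client.write("\r\n")
    hijack.call(client) if hijack  # hand the socket to the application
  end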
|
|
We might as well do it since puma and thin both do(*),
and we can still do writev for now to get some speedups
by avoiding Rack::Chunked overhead.
Timing runs of "curl --no-buffer http://127.0.0.1:9292/ >/dev/null"
result in a best-case drop from ~260ms to ~205ms on one VM
by disabling Rack::Chunked in the below config.ru
$ ruby -I lib bin/yahns-rackup -E none config.ru
==> config.ru <==
class Body
  STR = ' ' * 1024 * 16
  def each
    10000.times { yield STR }
  end
end
use Rack::Chunked if ENV['RACK_CHUNKED']
run(lambda do |env|
  [ 200, [ %w(Content-Type text/plain) ], Body.new ]
end)
(*) they can do Content-Length, but I don't think it's
worth the effort at the server level.
|
|
Clients are not able to handle persistent connections unless
they know the length of the response.
|
|
It's too hard to reliably test output buffering behavior
with non-default values users sometimes set; so just skip
and warn about it for now.
ref: commit dad99b5ecd93cdf0a514ff9fb51d198f8aebb188
("test/test_proxy_pass: remove buffer size tuning")
|
|
rack 2.x has some incompatible changes and deprecations; support
it but remain compatible with rack 1.x for the next few years.
|
|
Rack::Lint-compliant applications wouldn't have this problem;
but apparently public-facing Rack servers (webrick/puma/thin)
all implement this; so there is precedent for implementing
this in yahns itself.
|
|
This allows us to work transparently with our OpenSSL
workaround[*] while allowing us to reuse our non-sendfile
compatibility code. Unfortunately, this means we duplicate a
lot of code from the normal wbuf code for now; but that should
be fairly stable at this point.
[*] https://bugs.ruby-lang.org/issues/12085
|
|
We may have temporary files lingering from concurrent
multi-threaded tests in our forked child since FD_CLOFORK
does not exist :P
|
|
This is mainly to benefit curl(1) users who forget to use '-f'
to show failures. Not sure if I want to keep this change, it
seems like bloat; but Rack::ShowStatus pages are totally
overkill...
|
|
It seems unnecessary to set it at all and it's deprecated
in current Ruby trunk.
|
|
Static gzip files may not exist for symlinks, but they
could resolve to a file for which a pre-gzipped file
exists.
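The gist of the fix, sketched (not the literal yahns code):

  def gzip_path(path)
    path = File.realpath(path) if File.symlink?(path)  # resolve the link first
    gz = "#{path}.gz"
    File.file?(gz) ? gz : nil
  end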
|
|
We must ensure we properly close connections to HTTP/1.0
backends even if we blocked writing on outgoing data.
|
|
This should make it easier to figure out where certain
errors are coming from and perhaps fix problems with
upstreams, too.
This helped me track down the problem causing public-inbox WWW
component running under Perl v5.20.2 on my Debian jessie system
to break and drop connections going through
Plack::Middleware::Deflater with gzip:
https://public-inbox.org/meta/20160607071401.29325-1-e@80x24.org/
Perl 5.14.2 on Debian wheezy did not detect this problem :x
|
|
A 10ms tick was too frequent; use 100ms instead to avoid
burning CPU. Ideally, we would not tick at all during shutdown
(we're normally tickless); but the common case could be
slightly more expensive; and shutdowns are rare (I hope).
Then, change our process title to indicate we're shutting down,
and finally, cut down on repeated log spew during shutdown and
only log dropping changes.
This means we could potentially take 90ms longer to notice when
we can do a graceful shutdown, but oh well...
While we're at it, add a test to ensure graceful shutdowns
work as intended with multiple processes.
|
|
We can force output buffer files to a directory of our
choosing to avoid being confused by temporary files
from other tests polluting the process we care about.
|
|
OpenSSL can't handle write retries if we append to an existing
string. Thus we must preserve the same string object upon
retrying.
Do that by utilizing the underlying Wbuf class, which already
handles this transparently using trysendfile.
However, we still avoid the subtle reliance on wbuf_close_common
that we depended on previously.
ref: commit 551e670281bea77e727a732ba94275265ccae5f6
("fix output buffering with SSL_write")
|
|
We can retrieve it when we actually need to create the
temporary file. This saves an ivar slot and method dispatch
parameters.
This patch is nice, unfortunately the patch which follows is
not :P
|
|
On ENAMETOOLONG and perhaps other system errors which we can do
nothing about, we should not spew a giant backtrace which could
be used as an easy DoS vector.
|
|
@busy will be reset on wbuf_write anyways, since there is no
initial data and we will always attempt to write to the socket
aggressively.
|
|
Relying on @body.close in Yahns::WbufCommon#wbuf_close_common to
resume reading the upstream response was too subtle and
potentially racy.
Instead use a new Yahns::WbufLite class which does exactly what
we want for implementing this feature, and nothing more.
|
|
This may be useful to avoid wasting resources when proxying for
an upstream which can already handle slow clients itself.
It is impossible to completely disable buffering; this merely
prevents gigantic amounts of buffering.
This may be useful when an upstream can generate a gigantic
response which would cause excessive disk I/O traffic if
buffered by yahns. An example of this would be an upstream
dynamically-generating a pack for a giant git (clone|fetch)
operation.
In other words, this option allows the upstream to react to
backpressure from slow clients. It is not recommended to
enable this unless your upstream server is capable of
supporting slow clients.
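A hypothetical config.ru sketch; the exact knob name and where
it is passed are assumptions on my part, not taken from the
yahns documentation:

  require 'yahns/proxy_pass'
  run Yahns::ProxyPass.new('http://127.0.0.1:8080', proxy_buffering: false)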
|
|
Instead, we must drop non-terminated responses since
HTTP/1.0 clients do not understand chunked encoding.
This is necessary for "ab -k" which still uses HTTP/1.0.
|
|
The ab(1) command we use for testing is limited to 20000 open
connections under Debian jessie; a perfectly reasonable limit
to avoid port exhaustion. I never noticed this limit before,
but systemd under Jessie seems to have upped the default
RLIMIT_NOFILE to 65536(!), causing ab to error out.
We don't even need 10K connections for testing,
we just need to hit *some* limit before we start expiring.
So lower the RLIMIT_NOFILE back to 1024 in the forked server
process so we can test more quickly without running out of
ports or memory, since exhausting the 65536 RLIMIT_NOFILE
limit is not going to happen with a single TCP address.
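On the Ruby side, the tweak is a one-liner in the forked server
process (a sketch, not necessarily the exact test code):

  Process.setrlimit(:NOFILE, 1024)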
|
|
These are followups to the following two commits:
* commit d16326723d
("proxy_http_response: fix non-terminated fast responses, too")
* commit 8c9f33a539
("proxy_http_response: workaround non-terminated backends")
|
|
We should not infinite loop, oops :x
Also, ensure 'yahns' is in the directory in case tests are
SIGKILL-ed and directories are left over.
|
|
We need to ensure SERVER_PORT is still parsed from the Host:
header when it is given there.
|
|
This helps Rack::Request#url and similar methods generate proper
URLs instead of the obviously wrong: "https://example.com:80/"
Note: we don't track the actual port the listener is bound to,
and it may not be worth it since the use of the Host: header
is long-established and Host: headers include the port number
if non-standard.
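A rough sketch of the fallback logic (not the literal yahns code):

  def server_port(env)
    host = env['HTTP_HOST']
    return $1 if host && /:(\d+)\z/ =~ host
    env['rack.url_scheme'] == 'https' ? '443' : '80'
  end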
|
|
The "threads:" option for the "listen" directive is worthless.
Having a dedicated thread per process is already more than enough
(and ideal) for a multi-process setup. Multiple acceptor threads
are still wrong for a single-process setup (even if we did not
have a GVL) as it still incurs contention with the worker
pool within the kernel.
So remove the documentation regarding "listen ... threads: ",
for now; at least until somebody can prove it's useful and not
taking up space.
Additionally, "atfork_parent" may be useful for restarting
background threads/connections if somebody wants to run
background jobs in the master process, so stop saying
it's completely useless.
|
|
env['HTTPS'] is not documented in rack SPEC, but appears to be
used by Rack::Request since 2010[*]. Also, set rack.url_scheme
as documented by rack SPEC.
[*] - commit 4defbe5d7c07b3ba721ff34a8ff59fde480a4a9f
("Improves performance by lazy loading the session.")
|
|
We cannot use the sendfile(2) syscall when serving static files
to TLS clients without breaking them. We currently rely on
OpenSSL to encrypt the data before it hits the socket, so it
must be read into userspace buffers before being written to the
socket.
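A simplified sketch of the resulting dispatch; not the actual
yahns code path (trysendfile comes from the sendfile gem yahns
already uses, and partial writes are ignored here):

  require 'openssl'
  require 'sendfile'

  def send_body(client, body_io)
    if client.is_a?(OpenSSL::SSL::SSLSocket)
      # copy through userspace so OpenSSL can encrypt before the socket
      while buf = body_io.read(16384)
        client.write(buf)
      end
    else
      client.trysendfile(body_io, 0, body_io.size)  # zero-copy for plaintext
    end
  end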
|
|
Using the 'update-copyright' script from gnulib[1]:
git ls-files | UPDATE_COPYRIGHT_HOLDER='all contributors' \
UPDATE_COPYRIGHT_USE_INTERVALS=2 \
xargs /path/to/gnulib/build-aux/update-copyright
We're also switching to 'GPL-3.0+' as recommended by SPDX
to be consistent with our gemspec and other metadata
(as opposed to the longer but equivalent "GPLv3 or later").
[1] git://git.savannah.gnu.org/gnulib.git
|