|
Actually, I guess I misread: Rack (starting at 1.0) stopped
requiring Content-Length/Chunked headers, but I never noticed.
Oh well.
This reverts commit 4968041a7e1ff90b920704f50fccb9e7968d0d99.
|
|
We might as well do it since puma and thin both do(*),
and we can still do writev for now to get some speedups
by avoiding Rack::Chunked overhead.
Timing runs of "curl --no-buffer http://127.0.0.1:9292/ >/dev/null"
result in a best-case drop from ~260ms to ~205ms on one VM
by disabling Rack::Chunked in the below config.ru
$ ruby -I lib bin/yahns-rackup -E none config.ru
==> config.ru <==
class Body
  STR = ' ' * 1024 * 16
  def each
    10000.times { yield STR }
  end
end
use Rack::Chunked if ENV['RACK_CHUNKED']
run(lambda do |env|
  [ 200, [ %w(Content-Type text/plain) ], Body.new ]
end)
(*) they can do Content-Length, but I don't think it's
worth the effort at the server level.
|
|
This makes it easier to add more parameters to
http_response_write and simplifies current callers.
|
|
Clients cannot handle persistent connections unless they
know the length of the response.
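That rule can be sketched in plain Ruby (a hypothetical helper for illustration, not yahns's actual code): a connection may only stay open when the response carries a Content-Length or is chunked; otherwise EOF is the only length delimiter, so the server must close.

```ruby
# Hypothetical helper illustrating the rule above; not actual yahns
# code.  A response with no length information can only be delimited
# by EOF, so the connection cannot be reused for another request.
def may_keep_alive?(headers)
  headers.key?('Content-Length') ||
    headers['Transfer-Encoding'] == 'chunked'
end
```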
|
|
It's too hard to reliably test output buffering behavior
with non-default values users sometimes set; so just skip
and warn about it for now.
ref: commit dad99b5ecd93cdf0a514ff9fb51d198f8aebb188
("test/test_proxy_pass: remove buffer size tuning")
|
|
Sometimes, one process is all you need :>
Fwiw, I am also experimenting with the following in my
yahns.conf.rb file:
scache_stats = lambda do |env|
  s = ctx.session_cache_stats.inspect << "\n"
  [200, [%W(Content-Length #{s.size}), %w(Content-Type text/plain)], [s]]
end
app(:rack, scache_stats, preload: true) do
  listen "unix:/tmp/yahns-scache.#$$.sock"
end
Which allows me to get stats based on the master PID (not worker):
printf 'GET / HTTP/1.0\r\n\r\n' | socat - UNIX:/tmp/yahns-scache.$PID.sock
|
|
rack 2.x has some incompatible changes and deprecations; support
it but remain compatible with rack 1.x for the next few years.
|
|
Rack::Lint-compliant applications wouldn't have this problem;
but apparently public-facing Rack servers (webrick/puma/thin)
all implement this; so there is precedent for implementing
this in yahns itself.
|
|
This allows us to speed up subsequent calls to wbuf_write when
the client socket buffers are cleared. This doesn't affect
functionality outside of performance.
|
|
Since we use wbuf_lite for long, streaming requests, we need to
reset the offset and counts when aborting the existing wbuf and
not assume the wbuf goes out-of-scope and expires when we
are done using it. Fix stupid bugs in BUG notices while
we're at it :x
|
|
StringIO can never be truncated outside our control, so it is
a bug if we see EOF in trysendio here.
|
|
All of our wbuf code assumes we append to existing buffers
(files) since sendfile cannot deal otherwise. We also
follow this pattern for StringIO to avoid extra data copies.
|
|
And explain why this is doable for StringIO and not TmpIO,
which is file-backed.
|
|
This allows us to work transparently with our OpenSSL
workaround[*] while allowing us to reuse our non-sendfile
compatibility code. Unfortunately, this means we duplicate a
lot of code from the normal wbuf code for now; but that should
be fairly stable at this point.
[*] https://bugs.ruby-lang.org/issues/12085
|
|
We may have temporary files lingering from concurrent
multi-threaded tests in our forked child since FD_CLOFORK
does not exist :P
|
|
Reduce raciness in the init script and add LSB tags.
However, the systemd examples should be race-free and
safer (if one feels safe using systemd :P)
|
|
This is mainly to benefit curl(1) users who forget to use '-f'
to show failures. Not sure if I want to keep this change, it
seems like bloat; but Rack::ShowStatus pages are totally
overkill...
|
|
Another critical bugfix for this yet-to-be-released feature.
By the time we call proxy_unbuffer in proxy_read_body,
the req_res socket may be completely drained of readable data
and a persistent-connection-capable backend will be waiting
for the next request (not knowing we do not yet support
persistent connections).
This affects both chunked and identity responses from the
upstream.
|
|
The HTTP state (@hs) object could be replaced in proxy_wait_next
causing @hs.env['rack.logger'] to become inaccessible.
|
|
proxy_unbuffer is vulnerable to the same race condition
we avoided in commit 5328992829b2
("proxy_pass: fix race condition due to flawed hijack check")
|
|
When breaking early out of the upstream response body loop, we
must ensure we know we've hit the parser body EOF and preserve
the buffer for trailer parsing. Otherwise, reentering
proxy_read_body will put us in a bad state and corrupt
responses.
This is a critical bugfix which only affects users of
the soon-to-be-released "proxy_buffering: false" feature
of proxy_pass.
|
|
When we call shutdown on bad upstream responses, FD limits,
or server termination, we need to ensure the TLS connection
is terminated properly by calling SSL_shutdown and avoiding
confusion on the client side (or violating TLS specs!).
|
|
It seems unnecessary to set it at all and it's deprecated
in current Ruby trunk.
|
|
Static gzip files may not exist for symlinks, but they
could resolve to a file for which a pre-gzipped file
exists.
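The idea of the fix can be sketched with a hypothetical helper (not the actual yahns code): resolve the symlink first, then probe for the pre-gzipped sibling of the real target.

```ruby
# Hypothetical illustration, not the actual yahns code: resolve
# symlinks before probing for a pre-gzipped variant, since
# "path.gz" may not exist even when "realpath(path).gz" does.
def gzipped_variant(path)
  real = File.realpath(path) # resolves symlinks, raises if missing
  gz = "#{real}.gz"
  File.file?(gz) ? gz : nil
end
```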
|
|
This fixes a case where "proxy_buffering: false" users may
encounter an "upstream error: BUG: EOF on tmpio sf_offset="
as a wbuf may be reused.
Oddly, it took over one week of running the latest proxy_pass
as of commit 616e42c8d609905d9355bb5db726a5348303ffae
("proxy_pass: fix HTTP/1.0 backends on EOF w/o buffering")
for this bug to show up.
|
|
We must ensure we properly close connections to HTTP/1.0
backends even if we blocked writing on outgoing data.
|
|
This should make it easier to figure out where certain
errors are coming from and perhaps fix problems with
upstreams, too.
This helped me track down the problem causing public-inbox WWW
component running under Perl v5.20.2 on my Debian jessie system
to break and drop connections going through
Plack::Middleware::Deflater with gzip:
https://public-inbox.org/meta/20160607071401.29325-1-e@80x24.org/
Perl 5.14.2 on Debian wheezy did not detect this problem :x
|
|
A 10ms tick was too frequent; use 100ms instead to avoid
burning CPU. Ideally, we would not tick at all during shutdown
(we're normally tickless); but the common case could be
slightly more expensive; and shutdowns are rare (I hope).
Then, change our process title to indicate we're shutting down,
and finally, cut down on repeated log spew during shutdown and
only log dropping changes.
This means we could potentially take 90ms longer to notice when
we can do a graceful shutdown, but oh well...
While we're at it, add a test to ensure graceful shutdowns
work as intended with multiple processes.
|
|
Using a high max_events may mean some IO objects are closed
after they're retrieved from the kernel but before our Ruby
process has had a chance to get to them.
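The hazard can be sketched in plain Ruby (an assumed shape of the problem, not yahns internals): an IO pulled back in a large epoll_wait-style batch may already be closed by the time we reach it, so each object must be checked before use.

```ruby
require 'socket'

# Assumed shape of the problem, not yahns internals: a batch of IOs
# returned by the kernel may contain objects closed (e.g. by another
# thread) before we process them; guard each one before use.
a, b = UNIXSocket.pair
batch = [a, b]   # imagine these came back from one epoll_wait call
a.close          # closed behind our back, mid-batch
safe = batch.reject(&:closed?)
```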
|
|
We can force output buffer files to a directory of our
choosing to avoid being confused by temporary files
from other tests polluting the process we care about.
|
|
OpenSSL can't handle write retries if we append to an existing
string. Thus we must preserve the same string object upon
retrying.
Do that by utilizing the underlying Wbuf class, which
already handles it transparently using trysendfile.
However, we still avoid the subtlety of the wbuf_close_common
reliance we previously used.
ref: commit 551e670281bea77e727a732ba94275265ccae5f6
("fix output buffering with SSL_write")
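The retry discipline can be sketched as follows (a hypothetical helper; with OpenSSL::SSL::SSLSocket the identical string object must be passed on every retry, which is the property the Wbuf path preserves — a plain socket is used in the test only to keep the example self-contained):

```ruby
require 'io/wait'

# Hypothetical sketch: retry a nonblocking write with the SAME buffer
# object.  With OpenSSL, replacing or appending to the buffer between
# retries breaks the partial-write state, so +buf+ is never rebuilt.
def write_retrying(io, buf)
  begin
    io.write_nonblock(buf) # always the identical +buf+ object
  rescue IO::WaitWritable
    io.wait_writable
    retry
  end
end
```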
|
|
We can retrieve it when we actually need to create the
temporary file. This saves an ivar slot and method dispatch
parameters.
This patch is nice, unfortunately the patch which follows is
not :P
|
|
On ENAMETOOLONG and perhaps other system errors which we can do
nothing about, we should not spew a giant backtrace which could
be used as an easy DoS vector.
|
|
* maint:
yahns 1.12.5 - proxy_pass + rack.hijack fixes
proxy_pass: X-Forwarded-For appends to existing list
|
|
Hopefully the last of the 1.12.x series, this release
fixes a few minor bugs mainly needed for testing.
No upgrade should be necessary for non-proxy_pass users.
4 changes since v1.12.4 from the "maint" branch at
git://yhbt.net/yahns.git
http_client: set state to :ignore before hijack callback
test/test_client_expire: fix for high RLIMIT_NOFILE
proxy_pass: do not chunk HTTP/1.0 with keep-alive
proxy_pass: X-Forwarded-For appends to existing list
lib/yahns/http_client.rb | 6 +++---
lib/yahns/proxy_http_response.rb | 8 ++++++--
lib/yahns/proxy_pass.rb | 5 ++++-
test/test_client_expire.rb | 13 +++++++++++--
test/test_proxy_pass.rb | 10 ++++++++++
5 files changed, 34 insertions(+), 8 deletions(-)
Note: the current "master" branch (at commit 5e211ea003d2)
includes refactorings and new features not included in this
release.
|
|
Ugh, this is a little slower, but some people will want to
forward through multiple proxies.
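The append semantics can be sketched with a hypothetical helper mirroring what the log describes (not the actual proxy_pass code):

```ruby
# Hypothetical sketch of the behavior this log describes: append the
# client address to an existing X-Forwarded-For list instead of
# overwriting it, so multiple proxy hops are preserved.
def forwarded_for(env)
  addr = env['REMOTE_ADDR']
  prev = env['HTTP_X_FORWARDED_FOR']
  prev && !prev.empty? ? "#{prev},#{addr}" : addr
end
```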
|
|
@busy will be reset on wbuf_write anyways, since there is no
initial data and we will always attempt to write to the socket
aggressively.
|
|
Relying on @body.close in Yahns::WbufCommon#wbuf_close_common to
resume reading the upstream response was too subtle and
potentially racy.
Instead use a new Yahns::WbufLite class which does exactly what
we want for implementing this feature, and nothing more.
|
|
We cannot rely on env being available after proxy_wait_next
|
|
"proxy_response_start" caller is ReqRes#yahns_step which
already does a global rescue. Furthermore, it clobbers
backtraces on network errors which programmers can do
nothing to fix.
|
|
This may be useful to avoid wasting resources when proxying for
an upstream which can already handle slow clients itself.
It is impossible to completely disable buffering, this merely
prevents gigantic amounts of buffering.
This may be useful when an upstream can generate a gigantic
response which would cause excessive disk I/O traffic if
buffered by yahns. An example of this would be an upstream
dynamically-generating a pack for a giant git (clone|fetch)
operation.
In other words, this option allows the upstream to react to
backpressure from slow clients. It is not recommended to
enable this unless your upstream server is capable of
supporting slow clients.
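Going by this log, enabling the feature presumably looks like the config.ru fragment below; the ProxyPass constructor signature and upstream URL are assumptions drawn from these messages, not verified against the yahns source.

```ruby
# Assumed usage sketched from this log; the exact option syntax in
# yahns may differ.  Disabling buffering lets the upstream see
# backpressure from slow clients directly.
require 'yahns/proxy_pass'
run Yahns::ProxyPass.new('http://127.0.0.1:8080', proxy_buffering: false)
```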
|
|
This will allow us to add extra options at the response
layer without taking up extra env hash keys.
|
|
Ugh, this is a little slower, but some people will want to
forward through multiple proxies.
|
|
This fixes a bug where a cleared wbuf would kill the
process after the response got flushed out to the client
as `wbuf.busy' becomes `false'.
This fixes a regression introduced in
"proxy_pass: trim down proxy_response_finish, too",
which was never in a stable release.
|
|
This makes the ReqRes class easier-to-find and hopefully
maintain when using with other parts of yahns, although there
may be no reason to use this class outside of ProxyPass.
|
|
After the previous commits, we've added more branches
and made the existing response handling more generic;
so we can remove some duplicated logic and increase
review-ability.
|
|
Hopefully this increases readability as well and allows
us to make easier adjustments to support new features in
the future.
|
|
Again, the cost of having extra branches within a loop is
probably negligible compared to having bigger bytecode resulting
in worse CPU cache performance and increased maintenance
overhead for extra code.
|
|
proxy_response_start is gigantic and hard-to-read; we can now
see the lifetimes of some objects more clearly, and
hopefully shorten some of them.
|
|
The cost of extra branches inside a loop is negligible compared
to the cost of all the other method calls we make. Favor
smaller code instead and inline some (now) single-use methods.
Furthermore, this allows us to reuse the request header buffer
instead of relying on thread-local storage and potentially
having to swap buffers.
|