unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
From: Eric Wong <bofh@yhbt.net>
To: Bertrand Paquet <bertrand.paquet@doctolib.com>
Cc: unicorn-public@bogomips.org
Subject: Re: Traffic priority with Unicorn
Date: Tue, 17 Dec 2019 05:12:11 +0000
Message-ID: <20191217051211.GA85715@dcvr>
In-Reply-To: <CANCGK7nU=AuO--z-Q9YLSiHttZSCJTJAW4XeNyOByXivfGmihg@mail.gmail.com>

Bertrand Paquet <bertrand.paquet@doctolib.com> wrote:
> Hello,
> 
> I would like to introduce some traffic priority in Unicorn. The goal
> is to keep critical endpoints online even if the application is
> slowing down a lot.
> 
> The idea is to classify requests at the nginx level (by vhost, http
> path, header or whatever), and send them to two different unicorn
> sockets (opened by the same unicorn instance): one for high priority
> requests, one for low priority requests.
> I needed to make some small modifications [1] in the unicorn worker
> loop to process high priority requests first. It seems to work:
> - I launch a first apache bench against the low priority port
> - I launch a second apache bench against the high priority port
> - Unicorn then handles only the high priority queries and stops
>   answering the low priority traffic

> [1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#

Easier to view locally w/o JS/CSS using "git show -W" for context:

$ git remote add bpaquet https://github.com/bpaquet/unicorn
$ git fetch bpaquet
$ git show -W 58d6ba28
  <snip>
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 5334fa0c..976b728e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -676,53 +676,56 @@ def reopen_worker_logs(worker_nr)
   # runs inside each forked worker, this sits around and waits
   # for connections and doesn't die until the parent dies (or is
   # given a INT, QUIT, or TERM signal)
   def worker_loop(worker)
     ppid = @master_pid
     readers = init_worker_process(worker)
     nr = 0 # this becomes negative if we need to reopen logs
 
     # this only works immediately if the master sent us the signal
     # (which is the normal case)
     trap(:USR1) { nr = -65536 }
 
     ready = readers.dup
+    high_priority_reader = readers.first
+    last_processed_is_high_priority = false
     @after_worker_ready.call(self, worker)
 
     begin
       nr < 0 and reopen_worker_logs(worker.nr)
       nr = 0
       worker.tick = time_now.to_i
       tmp = ready.dup
       while sock = tmp.shift
         # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
         # but that will return false
         if client = sock.kgio_tryaccept
+          last_processed_is_high_priority = sock == high_priority_reader
           process_client(client)
           nr += 1
           worker.tick = time_now.to_i
         end
-        break if nr < 0
+        break if nr < 0 || last_processed_is_high_priority
       end
 
       # make the following bet: if we accepted clients this round,
       # we're probably reasonably busy, so avoid calling select()
       # and do a speculative non-blocking accept() on ready listeners
       # before we sleep again in select().
-      unless nr == 0
+      unless nr == 0 || !last_processed_is_high_priority
         tmp = ready.dup
         redo
       end
 
       ppid == Process.ppid or return
 
       # timeout used so we can detect parent death:
       worker.tick = time_now.to_i
       ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
     rescue => e
       redo if nr < 0 && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
     end while readers[0]
   end
 
   # delivers a signal to a worker and fails gracefully if the worker
   # is no longer running.

> The tradeoffs are:
> - No more "bet"[2] on low priority traffic. This probably slows
>   down the low priority traffic a little bit.

Yeah, but low priority is low priority, so it's fine to slow
them down, right? :>

> - This approach only has low / high. I'm not sure I can extend it to
>   3 (or more) levels of priority without a non-negligible performance
>   impact (because of the "bet" above).

I don't think it makes sense to have more than two levels of
priority (zero-one-infinity rule?)
<https://en.wikipedia.org/wiki/Zero_One_Infinity>

> Do you think this approach is correct?

readers order isn't guaranteed, especially when inheriting
sockets from systemd or similar launchers.

I think some sort order could be defined via a listen option...
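
Something like this is what I mean; purely hypothetical, since there
is no :priority listen option today:

  # not real API: a hypothetical :priority listen option that would
  # let unicorn sort listeners so high-priority sockets always come
  # first in the `readers` array seen by worker_loop
  listen "/run/unicorn.high.sock", priority: 0
  listen "/run/unicorn.low.sock",  priority: 1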

I'm not sure if inheriting multiple sockets from systemd or
similar launchers using LISTEN_FDS env can guarantee ordering
(or IO.select in Ruby, for that matter).

It seems OK otherwise, I think...  Have you tested it in the real world?

> Do you have any better idea for traffic prioritization?
> (Another idea is to have dedicated workers for each priority class,
> but this approach has other downsides and I would like to avoid it.)
> Is this something we could introduce in Unicorn (not as the default
> behaviour, but as a configuration option)?

If you're willing to drop some low-priority requests, using a
small listen :backlog value for a low-priority listen may work.
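
For example, something along these lines in the unicorn config
(:backlog is a real listen option; the socket paths and numbers
are only illustrative):

  # two listeners in one unicorn instance; the tiny :backlog on the
  # low-priority socket means the kernel drops or refuses excess
  # low-priority connections instead of queueing them indefinitely
  listen "/run/unicorn.high.sock", backlog: 1024
  listen "/run/unicorn.low.sock",  backlog: 16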

I'm hesitant to put extra code in the worker_loop method since
it can slow down current users who don't need the feature.

Instead, perhaps try replacing the worker_loop method entirely
(similar to how oob_gc.rb wraps process_client) so users who
don't enable the feature won't be penalized with extra code.
Users who opt into the feature can get an entirely different
method.
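
Roughly, a module that extends the HttpServer instance (the same
trick oob_gc.rb uses for process_client) could carry the override;
this is only a sketch and every name in it is made up:

  # hypothetical lib/unicorn/prioritized_loop.rb, enabled explicitly
  # by users who want the behaviour; everybody else keeps the stock
  # Unicorn::HttpServer#worker_loop
  module Unicorn::PrioritizedLoop
    def self.setup!
      # extend the live server object so this module's worker_loop
      # sits in front of Unicorn::HttpServer#worker_loop
      ObjectSpace.each_object(Unicorn::HttpServer) { |s| s.extend(self) }
    end

    def worker_loop(worker)
      # a priority-aware accept loop would go here; calling super
      # simply falls back to the unmodified implementation
      super
    end
  end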

> Thx for any opinion.

The best option would be to never get yourself into a situation
where you're overloaded in the first place, by making everything
fast :>  Anything else seems pretty ugly...

Thread overview: 3+ messages
2019-12-16 22:34 Traffic priority with Unicorn Bertrand Paquet
2019-12-17  5:12 ` Eric Wong [this message]
2019-12-18 22:06   ` Bertrand Paquet
