unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
* Traffic priority with Unicorn
@ 2019-12-16 22:34 Bertrand Paquet
  2019-12-17  5:12 ` Eric Wong
  0 siblings, 1 reply; 3+ messages in thread
From: Bertrand Paquet @ 2019-12-16 22:34 UTC (permalink / raw)
  To: unicorn-public

Hello,

I would like to introduce some traffic priority in Unicorn. The goal
is to keep critical endpoints online even if the application is
slowing down a lot.

The idea is to classify requests at the nginx level (by vhost, http
path, header or whatever), and send them to two different unicorn
sockets (opened by the same unicorn instance): one for high priority
requests, one for low priority requests.
I made some small modifications [1] to the unicorn worker loop
to process high priority requests first. It seems to work:
- I launch a first apache bench run against the low priority port
- I launch a second apache bench run against the high priority port
- Unicorn then handles only the high priority queries and stops
answering the low priority traffic
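
To make the setup concrete, here is a minimal sketch of the unicorn
side (socket paths are only examples; nginx would define one upstream
per socket and pick the upstream per location, header or vhost):

  # unicorn config: one instance, two listeners (paths are examples)
  listen "/run/unicorn/high.sock"  # nginx sends critical endpoints here
  listen "/run/unicorn/low.sock"   # everything else lands here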

The tradeoffs are:
- No more "bet"[2] on low priority traffic. This probably slows down
the low priority traffic a little.
- This approach only supports low / high priority. I am not sure I can
extend it to 3 (or more) priority levels without a non-negligible
performance impact (because of the "bet" above).

Do you think this approach is correct?
Do you have a better idea for traffic prioritization?
(Another idea is to have dedicated workers for each priority class.
That approach has other downsides, so I would like to avoid it.)
Is this something we could introduce in Unicorn (not as the default
behaviour, but as a configuration option)?

Thx for any opinion.

Bertrand

[1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#
[2] https://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb#n707


* Re: Traffic priority with Unicorn
  2019-12-16 22:34 Traffic priority with Unicorn Bertrand Paquet
@ 2019-12-17  5:12 ` Eric Wong
  2019-12-18 22:06   ` Bertrand Paquet
  0 siblings, 1 reply; 3+ messages in thread
From: Eric Wong @ 2019-12-17  5:12 UTC (permalink / raw)
  To: Bertrand Paquet; +Cc: unicorn-public

Bertrand Paquet <bertrand.paquet@doctolib.com> wrote:
> Hello,
> 
> I would like to introduce some traffic priority in Unicorn. The goal
> is to keep critical endpoints online even if the application is
> slowing down a lot.
> 
> The idea is to classify requests at the nginx level (by vhost, http
> path, header or whatever), and send them to two different unicorn
> sockets (opened by the same unicorn instance): one for high priority
> requests, one for low priority requests.
> I made some small modifications [1] to the unicorn worker loop
> to process high priority requests first. It seems to work:
> - I launch a first apache bench run against the low priority port
> - I launch a second apache bench run against the high priority port
> - Unicorn then handles only the high priority queries and stops
> answering the low priority traffic

> [1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#

Easier to view locally w/o JS/CSS using "git show -W" for context:

$ git remote add bpaquet https://github.com/bpaquet/unicorn
$ git fetch bpaquet
$ git show -W 58d6ba28
  <snip>
diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
index 5334fa0c..976b728e 100644
--- a/lib/unicorn/http_server.rb
+++ b/lib/unicorn/http_server.rb
@@ -676,53 +676,56 @@ def reopen_worker_logs(worker_nr)
   # runs inside each forked worker, this sits around and waits
   # for connections and doesn't die until the parent dies (or is
   # given a INT, QUIT, or TERM signal)
   def worker_loop(worker)
     ppid = @master_pid
     readers = init_worker_process(worker)
     nr = 0 # this becomes negative if we need to reopen logs
 
     # this only works immediately if the master sent us the signal
     # (which is the normal case)
     trap(:USR1) { nr = -65536 }
 
     ready = readers.dup
+    high_priority_reader = readers.first
+    last_processed_is_high_priority = false
     @after_worker_ready.call(self, worker)
 
     begin
       nr < 0 and reopen_worker_logs(worker.nr)
       nr = 0
       worker.tick = time_now.to_i
       tmp = ready.dup
       while sock = tmp.shift
         # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
         # but that will return false
         if client = sock.kgio_tryaccept
+          last_processed_is_high_priority = sock == high_priority_reader
           process_client(client)
           nr += 1
           worker.tick = time_now.to_i
         end
-        break if nr < 0
+        break if nr < 0 || last_processed_is_high_priority
       end
 
       # make the following bet: if we accepted clients this round,
       # we're probably reasonably busy, so avoid calling select()
       # and do a speculative non-blocking accept() on ready listeners
       # before we sleep again in select().
-      unless nr == 0
+      unless nr == 0 || !last_processed_is_high_priority
         tmp = ready.dup
         redo
       end
 
       ppid == Process.ppid or return
 
       # timeout used so we can detect parent death:
       worker.tick = time_now.to_i
       ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
     rescue => e
       redo if nr < 0 && readers[0]
       Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
     end while readers[0]
   end
 
   # delivers a signal to a worker and fails gracefully if the worker
   # is no longer running.

> The tradeoffs are:
> - No more "bet"[2] on low priority traffic. This probably slows down
> the low priority traffic a little.

Yeah, but low priority is low priority, so it's fine to slow
them down, right? :>

> - This approach only supports low / high priority. I am not sure I can
> extend it to 3 (or more) priority levels without a non-negligible
> performance impact (because of the "bet" above).

I don't think it makes sense to have more than two levels of
priority (zero, one, two, infinity rule?)
<https://en.wikipedia.org/wiki/Zero_One_Infinity>

> Do you think this approach is correct?

readers order isn't guaranteed, especially when inheriting
sockets from systemd or similar launchers.

I think some sort order could be defined via listen option...
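
Something like this, purely hypothetical (:priority is not an
existing listen option, just to illustrate how an ordering hint
could look in the config):

  # hypothetical option, not in unicorn today:
  listen "/run/unicorn/high.sock", :priority => 0
  listen "/run/unicorn/low.sock", :priority => 1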

I'm not sure if inheriting multiple sockets from systemd or
similar launchers using LISTEN_FDS env can guarantee ordering
(or IO.select in Ruby, for that matter).

It seems OK otherwise, I think... Have you tested it in the real world?

> Do you have a better idea for traffic prioritization?
> (Another idea is to have dedicated workers for each priority class.
> That approach has other downsides, so I would like to avoid it.)
> Is this something we could introduce in Unicorn (not as the default
> behaviour, but as a configuration option)?

If you're willing to drop some low-priority requests, using a
small listen :backlog value for a low-priority listen may work.
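
For example (paths and backlog sizes made up; once the small backlog
fills up, the kernel refuses or drops further connections, so nginx
can fail fast instead of queueing them):

  listen "/run/unicorn/high.sock", :backlog => 1024
  listen "/run/unicorn/low.sock", :backlog => 16  # illustrative size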

I'm hesitant to put extra code in the worker_loop method since
it can slow down current users who don't need the feature.

Instead, perhaps try replacing the worker_loop method entirely
(similar to how oob_gc.rb wraps process_client) so users who
don't enable the feature won't be penalized with extra code.
Users who opt into the feature can get an entirely different
method.
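
Something in this direction, as a rough sketch (the module name and
the prepend-from-config opt-in are made up, not an existing unicorn
API; the loop body is just your patched loop from [1] moved out of
the stock method):

  # priority_accept.rb -- hypothetical opt-in extension
  module PriorityAccept
    def worker_loop(worker)
      ppid = @master_pid
      readers = init_worker_process(worker)
      nr = 0 # this becomes negative if we need to reopen logs

      # this only works immediately if the master sent us the signal
      # (which is the normal case)
      trap(:USR1) { nr = -65536 }

      ready = readers.dup
      high_priority_reader = readers.first
      last_processed_is_high_priority = false
      @after_worker_ready.call(self, worker)

      begin
        nr < 0 and reopen_worker_logs(worker.nr)
        nr = 0
        worker.tick = time_now.to_i
        tmp = ready.dup
        while sock = tmp.shift
          if client = sock.kgio_tryaccept
            last_processed_is_high_priority = sock == high_priority_reader
            process_client(client)
            nr += 1
            worker.tick = time_now.to_i
          end
          # go back to select() unless the last client was high priority
          break if nr < 0 || last_processed_is_high_priority
        end

        # keep the speculative accept "bet" only for high priority
        unless nr == 0 || !last_processed_is_high_priority
          tmp = ready.dup
          redo
        end

        ppid == Process.ppid or return

        # timeout used so we can detect parent death:
        worker.tick = time_now.to_i
        ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
      rescue => e
        redo if nr < 0 && readers[0]
        Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
      end while readers[0]
    end
  end

  # opted into from the unicorn config file (evaluated in the master
  # before workers are forked), so nobody else pays for it:
  #
  #   require_relative "priority_accept"
  #   Unicorn::HttpServer.prepend(PriorityAccept)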

> Thx for any opinion.

The best option would be to never get yourself into a situation
where you're overloaded, by making everything fast :>
Anything else seems pretty ugly...


* Re: Traffic priority with Unicorn
  2019-12-17  5:12 ` Eric Wong
@ 2019-12-18 22:06   ` Bertrand Paquet
  0 siblings, 0 replies; 3+ messages in thread
From: Bertrand Paquet @ 2019-12-18 22:06 UTC (permalink / raw)
  To: Eric Wong; +Cc: unicorn-public

On Tue, 17 Dec 2019 at 06:12, Eric Wong <bofh@yhbt.net> wrote:
>
> Bertrand Paquet <bertrand.paquet@doctolib.com> wrote:
> > Hello,
> >
> > I would like to introduce some traffic priority in Unicorn. The goal
> > is to keep critical endpoints online even if the application is
> > slowing down a lot.
> >
> > The idea is to classify requests at the nginx level (by vhost, http
> > path, header or whatever), and send them to two different unicorn
> > sockets (opened by the same unicorn instance): one for high priority
> > requests, one for low priority requests.
> > I made some small modifications [1] to the unicorn worker loop
> > to process high priority requests first. It seems to work:
> > - I launch a first apache bench run against the low priority port
> > - I launch a second apache bench run against the high priority port
> > - Unicorn then handles only the high priority queries and stops
> > answering the low priority traffic
>
> > [1] https://github.com/bpaquet/unicorn/commit/58d6ba2805d4399f680f97eefff82c407e0ed30f#
>
> Easier to view locally w/o JS/CSS using "git show -W" for context:
>
> $ git remote add bpaquet https://github.com/bpaquet/unicorn
> $ git fetch bpaquet
> $ git show -W 58d6ba28
>   <snip>
> diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb
> index 5334fa0c..976b728e 100644
> --- a/lib/unicorn/http_server.rb
> +++ b/lib/unicorn/http_server.rb
> @@ -676,53 +676,56 @@ def reopen_worker_logs(worker_nr)
>    # runs inside each forked worker, this sits around and waits
>    # for connections and doesn't die until the parent dies (or is
>    # given a INT, QUIT, or TERM signal)
>    def worker_loop(worker)
>      ppid = @master_pid
>      readers = init_worker_process(worker)
>      nr = 0 # this becomes negative if we need to reopen logs
>
>      # this only works immediately if the master sent us the signal
>      # (which is the normal case)
>      trap(:USR1) { nr = -65536 }
>
>      ready = readers.dup
> +    high_priority_reader = readers.first
> +    last_processed_is_high_priority = false
>      @after_worker_ready.call(self, worker)
>
>      begin
>        nr < 0 and reopen_worker_logs(worker.nr)
>        nr = 0
>        worker.tick = time_now.to_i
>        tmp = ready.dup
>        while sock = tmp.shift
>          # Unicorn::Worker#kgio_tryaccept is not like accept(2) at all,
>          # but that will return false
>          if client = sock.kgio_tryaccept
> +          last_processed_is_high_priority = sock == high_priority_reader
>            process_client(client)
>            nr += 1
>            worker.tick = time_now.to_i
>          end
> -        break if nr < 0
> +        break if nr < 0 || last_processed_is_high_priority
>        end
>
>        # make the following bet: if we accepted clients this round,
>        # we're probably reasonably busy, so avoid calling select()
>        # and do a speculative non-blocking accept() on ready listeners
>        # before we sleep again in select().
> -      unless nr == 0
> +      unless nr == 0 || !last_processed_is_high_priority
>          tmp = ready.dup
>          redo
>        end
>
>        ppid == Process.ppid or return
>
>        # timeout used so we can detect parent death:
>        worker.tick = time_now.to_i
>        ret = IO.select(readers, nil, nil, @timeout) and ready = ret[0]
>      rescue => e
>        redo if nr < 0 && readers[0]
>        Unicorn.log_error(@logger, "listen loop error", e) if readers[0]
>      end while readers[0]
>    end
>
>    # delivers a signal to a worker and fails gracefully if the worker
>    # is no longer running.
>
> > The tradeoffs are:
> > - No more "bet"[2] on low priority traffic. This probably slows down
> > the low priority traffic a little.
>
> Yeah, but low priority is low priority, so it's fine to slow
> them down, right? :>
>
> > - This approach only supports low / high priority. I am not sure I can
> > extend it to 3 (or more) priority levels without a non-negligible
> > performance impact (because of the "bet" above).
>
> I don't think it makes sense to have more than two levels of
> priority (zero, one, two, infinity rule?)
> <https://en.wikipedia.org/wiki/Zero_One_Infinity>
>
> > Do you think this approach is correct?
>
> readers order isn't guaranteed, especially when inheriting
> sockets from systemd or similar launchers.

Interesting point.

>
> I think some sort order could be defined via listen option...
>
> I'm not sure if inheriting multiple sockets from systemd or
> similar launchers using LISTEN_FDS env can guarantee ordering
> (or IO.select in Ruby, for that matter).
>
> It seems OK otherwise, I think... Have you tested it in the real world?

Not yet. But I will probably test it soon.

>
> > Do you have a better idea for traffic prioritization?
> > (Another idea is to have dedicated workers for each priority class.
> > That approach has other downsides, so I would like to avoid it.)
> > Is this something we could introduce in Unicorn (not as the default
> > behaviour, but as a configuration option)?
>
> If you're willing to drop some low-priority requests, using a
> small listen :backlog value for a low-priority listen may work.
>
> I'm hesitant to put extra code in the worker_loop method since
> it can slow down current users who don't need the feature.
>
> Instead, perhaps try replacing the worker_loop method entirely
> (similar to how oob_gc.rb wraps process_client) so users who
> don't enable the feature won't be penalized with extra code.
> Users who opt into the feature can get an entirely different
> method.

OK, I will try this approach, though I'm a little bit annoyed by the
code duplication.

>
> > Thx for any opinion.
>
> The best option would be to never get yourself into a situation
> where you're overloaded, by making everything fast :>
> Anything else seems pretty ugly...

On a system which handles 10k QPS, it's really difficult to never have
an issue somewhere :)

Thx

Bertrand

