All the mail mirrored from lore.kernel.org
* ovf scheduler
@ 2015-07-13 19:45 rhadoo.io88
  2015-07-13 20:35 ` Julian Anastasov
  0 siblings, 1 reply; 15+ messages in thread
From: rhadoo.io88 @ 2015-07-13 19:45 UTC (permalink / raw
  To: lvs-devel

Hi,
My name is Raducu Deaconu, I am a Romanian sysadmin/solution manager
and I have been working with LVS for some years now - great software!
I mainly use ldirectord on top of LVS, and every now and then I run
into customer tasks that need new features.
One such feature is a failover scheduler that lets a server handle up
to a certain number of active connections and only sends jobs to
another server (or servers) when that one is overloaded.
That is needed, for example, when you have a Galera cluster and want
to make sure all writes go to one node, and only one node, or when an
application implements its own caching and you want the virtual
service to always go to that server unless there is a problem, in
which case another server can take over the job, although without the
caching.
This is not possible today in ldirectord/LVS, and I think it would
benefit many use cases like my own.
Let me describe two cases:

1) Galera cluster:
192.168.0.100:3306  -> 192.168.0.1:3306 weight 500
                    -> 192.168.0.2:3306 weight 499
                    -> 192.168.0.3:3306 weight 498

This setup will send all writes to the same node (weight 500) and thus
maintain the lowest latency, while allowing failover to the next node
(weight 499) if the first one fails.
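
With the scheduler proposed below, this could be configured roughly as
follows with ipvsadm (only a sketch, not taken from a real setup;
direct routing is assumed):

ipvsadm -A -t 192.168.0.100:3306 -s ovf
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.1:3306 -g -w 500
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.2:3306 -g -w 499
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.3:3306 -g -w 498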

2) Application specific:

192.168.0.100:8080  -> 192.168.0.1:8080 weight 500
                    -> 192.168.0.2:8080 weight 499

192.168.0.101:8080  -> 192.168.0.2:8080 weight 500
                    -> 192.168.0.1:8080 weight 499

In this setup all connections go to the preferred node and only spill
over to the other one if that node is overloaded.

This also makes it possible to have a normal server plus a fallback
server that is health-checked by ldirectord (instead of using the
fallback directive, which is not health-checked), as sketched below.
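
With ipvsadm, the second case would look roughly like this (again only
a sketch, with NAT forwarding assumed):

ipvsadm -A -t 192.168.0.100:8080 -s ovf
ipvsadm -a -t 192.168.0.100:8080 -r 192.168.0.1:8080 -m -w 500
ipvsadm -a -t 192.168.0.100:8080 -r 192.168.0.2:8080 -m -w 499

ipvsadm -A -t 192.168.0.101:8080 -s ovf
ipvsadm -a -t 192.168.0.101:8080 -r 192.168.0.2:8080 -m -w 500
ipvsadm -a -t 192.168.0.101:8080 -r 192.168.0.1:8080 -m -w 499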


I made a small scheduler for this:


 cat ip_vs_ovf.c


/*
 * IPVS:        Overflow-Connection Scheduling module
 *
 * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
 *
 *              This program is free software; you can redistribute it and/or
 *              modify it under the terms of the GNU General Public License
 *              as published by the Free Software Foundation; either version
 *              2 of the License, or (at your option) any later version.
 *
 * Scheduler implements "overflow" loadbalancing according to the number of
 * active connections: it keeps all connections on the node with the highest
 * weight and overflows to the next node if the number of connections exceeds
 * the node's weight.
 */

#define KMSG_COMPONENT "IPVS"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt

#include <linux/module.h>
#include <linux/kernel.h>

#include <net/ip_vs.h>

/*
 *      OVF Connection scheduling
 */
static struct ip_vs_dest *
ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb)
{
        struct ip_vs_dest *dest, *hw = NULL;
        unsigned int highestw = 0, curentw;

        IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);

        /*
         * Select the node with the highest weight; go to the next in
         * line if its active connections exceed its weight.
         */

        list_for_each_entry(dest, &svc->destinations, n_list) {
                if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
                    atomic_read(&dest->activeconns) > atomic_read(&dest->weight) ||
                    atomic_read(&dest->weight) == 0)
                        continue;
                curentw = atomic_read(&dest->weight);
                if (!hw || curentw > highestw) {
                        hw = dest;
                        highestw = curentw;
                }
        }

        if (!hw)
                ip_vs_scheduler_err(svc, "no destination available");
        else
                IP_VS_DBG_BUF(6, "OVF: server %s:%u activeconns %d "
                              "inactconns %d\n",
                              IP_VS_DBG_ADDR(svc->af, &hw->addr),
                              ntohs(hw->port),
                              atomic_read(&hw->activeconns),
                              atomic_read(&hw->inactconns));

        return hw;
}


static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
        .name =                 "ovf",
        .refcnt =               ATOMIC_INIT(0),
        .module =               THIS_MODULE,
        .n_list =               LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
        .schedule =             ip_vs_ovf_schedule,
};


static int __init ip_vs_ovf_init(void)
{
        return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
}

static void __exit ip_vs_ovf_cleanup(void)
{
        unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
}

module_init(ip_vs_ovf_init);
module_exit(ip_vs_ovf_cleanup);
MODULE_LICENSE("GPL");




I hope this makes sense, and perhaps it can make it into the IPVS modules.

Thank you


* Re: ovf scheduler
  2015-07-13 19:45 ovf scheduler rhadoo.io88
@ 2015-07-13 20:35 ` Julian Anastasov
  2015-07-13 22:47   ` rhadoo.io88
  0 siblings, 1 reply; 15+ messages in thread
From: Julian Anastasov @ 2015-07-13 20:35 UTC (permalink / raw
  To: rhadoo.io88; +Cc: lvs-devel


	Hello,

On Mon, 13 Jul 2015, rhadoo.io88 wrote:

> Hi,
> My name is Raducu Deaconu, i am a romanian syadmin/solution manager
> and i have been working with lvs for some years now, great software!
> I mainly use ldirectord on top of lvs and every now and then i do run
> into customer tasks that would need new features.
> One such feature is the need of a failover scheduler that would allow
> a certain number of active connections to be served by a server and
> only in case that is overloaded send some jobs to another/other
> servers.
> That would be needed say in the case you have let's say a galera
> cluster and you want to make sure all writes go to one node, and only
> one node,or in the case where you have some caching implemented in an
> application and you want the virtual service to always go to that
> server, unless there is a problem, case when another server can handle
> the job, although without the caching.
> These features are not possible now in ldirectord/lvs and i think they
> would bring some benefits to many use cases like my own.

	Can the same be achieved by setting --u-threshold
and using the FO scheduler? ip_vs_bind_dest() sets the
IP_VS_DEST_F_OVERLOAD flag when the number of connections
exceeds the upper threshold, and then the FO scheduler can
select another real server with a lower weight.
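
	For illustration only (a rough sketch, not a tested setup;
"-x" is ipvsadm's short option for --u-threshold):

ipvsadm -A -t 192.168.0.100:3306 -s fo
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.1:3306 -g -w 500 -x 500
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.2:3306 -g -w 499 -x 500
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.3:3306 -g -w 498 -x 500

	A real server that reaches its upper threshold is flagged as
overloaded, so new connections overflow to the non-overloaded server
with the next highest weight.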

Regards

--
Julian Anastasov <ja@ssi.bg>


* Re: ovf scheduler
  2015-07-13 20:35 ` Julian Anastasov
@ 2015-07-13 22:47   ` rhadoo.io88
  2015-07-14  1:36     ` Simon Horman
  2015-07-14 19:11     ` Julian Anastasov
  0 siblings, 2 replies; 15+ messages in thread
From: rhadoo.io88 @ 2015-07-13 22:47 UTC (permalink / raw
  To: Julian Anastasov; +Cc: lvs-devel

Unfortunately I was not aware of the FO scheduler; I looked it up just
now, and indeed you could get similar behaviour with the FO scheduler
and proper thresholds.
The only trouble with that is that I can't set the thresholds from
within ldirectord, so I won't be able to keep all the config in one
place, and the upper threshold seems to take into account all
connections (active and inactive), which makes the upper limit a bit
vague.
I think this approach has the advantage of keeping all the config in
one place, with the weight set to the actual number of active
connections the node can handle, while still allowing thresholds on
the total number of connections (see the sketch below).
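
As a rough sketch of what I mean (addresses and numbers are made up),
the weight would cap the active connections via the proposed
scheduler, while -x/--u-threshold would still cap the total
connections:

ipvsadm -A -t 192.168.0.100:3306 -s ovf
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.1:3306 -g -w 500 -x 2000
ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.2:3306 -g -w 499 -x 2000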

On Mon, Jul 13, 2015 at 11:35 PM, Julian Anastasov <ja@ssi.bg> wrote:
>
>         Hello,
>
> On Mon, 13 Jul 2015, rhadoo.io88 wrote:
>
>> Hi,
>> My name is Raducu Deaconu, i am a romanian syadmin/solution manager
>> and i have been working with lvs for some years now, great software!
>> I mainly use ldirectord on top of lvs and every now and then i do run
>> into customer tasks that would need new features.
>> One such feature is the need of a failover scheduler that would allow
>> a certain number of active connections to be served by a server and
>> only in case that is overloaded send some jobs to another/other
>> servers.
>> That would be needed say in the case you have let's say a galera
>> cluster and you want to make sure all writes go to one node, and only
>> one node,or in the case where you have some caching implemented in an
>> application and you want the virtual service to always go to that
>> server, unless there is a problem, case when another server can handle
>> the job, although without the caching.
>> These features are not possible now in ldirectord/lvs and i think they
>> would bring some benefits to many use cases like my own.
>
>         Can the same be achieved by setting --u-threshold
> and using the FO scheduler? ip_vs_bind_dest() sets the
> IP_VS_DEST_F_OVERLOAD flag if number of connections
> exceed upper threshold and then the FO scheduler can select
> another real server with lower weight.
>
> Regards
>
> --
> Julian Anastasov <ja@ssi.bg>


* Re: ovf scheduler
  2015-07-13 22:47   ` rhadoo.io88
@ 2015-07-14  1:36     ` Simon Horman
  2015-07-14 14:29       ` rhadoo.io88
  2015-07-14 19:11     ` Julian Anastasov
  1 sibling, 1 reply; 15+ messages in thread
From: Simon Horman @ 2015-07-14  1:36 UTC (permalink / raw
  To: rhadoo.io88; +Cc: Julian Anastasov, lvs-devel

Hi,

On Tue, Jul 14, 2015 at 01:47:16AM +0300, rhadoo.io88 wrote:
> Unfortunately i was not aware of the FO scheduler, i looked it up just
> now...indeed you could get similar behavior with FO scheduler and
> proper thresholds.
> The only trouble with that is that i can't set the thresholds from
> within ldirectord , so i won't be able to keep all the config in one
> place,  and that upper threshold seems to take into account all
> connections ( active and inactive) , that making the upper limit a bit
> vague.

Perhaps ldirectord could be enhanced in this regard?

> i think this approach has the advantage of keeping all the config in
> one place, having the weight set at the actual number of active
> connections the node can handle whilst still allowing to have
> thresholds on the total connections.
> 
> On Mon, Jul 13, 2015 at 11:35 PM, Julian Anastasov <ja@ssi.bg> wrote:
> >
> >         Hello,
> >
> > On Mon, 13 Jul 2015, rhadoo.io88 wrote:
> >
> >> Hi,
> >> My name is Raducu Deaconu, i am a romanian syadmin/solution manager
> >> and i have been working with lvs for some years now, great software!
> >> I mainly use ldirectord on top of lvs and every now and then i do run
> >> into customer tasks that would need new features.
> >> One such feature is the need of a failover scheduler that would allow
> >> a certain number of active connections to be served by a server and
> >> only in case that is overloaded send some jobs to another/other
> >> servers.
> >> That would be needed say in the case you have let's say a galera
> >> cluster and you want to make sure all writes go to one node, and only
> >> one node,or in the case where you have some caching implemented in an
> >> application and you want the virtual service to always go to that
> >> server, unless there is a problem, case when another server can handle
> >> the job, although without the caching.
> >> These features are not possible now in ldirectord/lvs and i think they
> >> would bring some benefits to many use cases like my own.
> >
> >         Can the same be achieved by setting --u-threshold
> > and using the FO scheduler? ip_vs_bind_dest() sets the
> > IP_VS_DEST_F_OVERLOAD flag if number of connections
> > exceed upper threshold and then the FO scheduler can select
> > another real server with lower weight.
> >
> > Regards
> >
> > --
> > Julian Anastasov <ja@ssi.bg>
> --
> To unsubscribe from this list: send the line "unsubscribe lvs-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


* Re: ovf scheduler
  2015-07-14  1:36     ` Simon Horman
@ 2015-07-14 14:29       ` rhadoo.io88
  0 siblings, 0 replies; 15+ messages in thread
From: rhadoo.io88 @ 2015-07-14 14:29 UTC (permalink / raw
  To: Simon Horman; +Cc: Julian Anastasov, lvs-devel

Hi,
Yes, adding threshold support to ldirectord seems like a good idea anyway.
But perhaps there would be a benefit in merging my proposed OVF
scheduler with the existing FO; in essence they do the same thing, but
matching the number of active connections a node can handle to the
node's weight, while still keeping thresholds for the total number of
connections, might give a bit more flexibility.




On Tue, Jul 14, 2015 at 4:36 AM, Simon Horman <horms@verge.net.au> wrote:
> Hi,
>
> On Tue, Jul 14, 2015 at 01:47:16AM +0300, rhadoo.io88 wrote:
>> Unfortunately i was not aware of the FO scheduler, i looked it up just
>> now...indeed you could get similar behavior with FO scheduler and
>> proper thresholds.
>> The only trouble with that is that i can't set the thresholds from
>> within ldirectord , so i won't be able to keep all the config in one
>> place,  and that upper threshold seems to take into account all
>> connections ( active and inactive) , that making the upper limit a bit
>> vague.
>
> Perhaps ldirectord could be enhanced in this regard?
>
>> i think this approach has the advantage of keeping all the config in
>> one place, having the weight set at the actual number of active
>> connections the node can handle whilst still allowing to have
>> thresholds on the total connections.
>>
>> On Mon, Jul 13, 2015 at 11:35 PM, Julian Anastasov <ja@ssi.bg> wrote:
>> >
>> >         Hello,
>> >
>> > On Mon, 13 Jul 2015, rhadoo.io88 wrote:
>> >
>> >> Hi,
>> >> My name is Raducu Deaconu, i am a romanian syadmin/solution manager
>> >> and i have been working with lvs for some years now, great software!
>> >> I mainly use ldirectord on top of lvs and every now and then i do run
>> >> into customer tasks that would need new features.
>> >> One such feature is the need of a failover scheduler that would allow
>> >> a certain number of active connections to be served by a server and
>> >> only in case that is overloaded send some jobs to another/other
>> >> servers.
>> >> That would be needed say in the case you have let's say a galera
>> >> cluster and you want to make sure all writes go to one node, and only
>> >> one node,or in the case where you have some caching implemented in an
>> >> application and you want the virtual service to always go to that
>> >> server, unless there is a problem, case when another server can handle
>> >> the job, although without the caching.
>> >> These features are not possible now in ldirectord/lvs and i think they
>> >> would bring some benefits to many use cases like my own.
>> >
>> >         Can the same be achieved by setting --u-threshold
>> > and using the FO scheduler? ip_vs_bind_dest() sets the
>> > IP_VS_DEST_F_OVERLOAD flag if number of connections
>> > exceed upper threshold and then the FO scheduler can select
>> > another real server with lower weight.
>> >
>> > Regards
>> >
>> > --
>> > Julian Anastasov <ja@ssi.bg>
>> --
>> To unsubscribe from this list: send the line "unsubscribe lvs-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>


* Re: ovf scheduler
  2015-07-13 22:47   ` rhadoo.io88
  2015-07-14  1:36     ` Simon Horman
@ 2015-07-14 19:11     ` Julian Anastasov
  2015-07-16  7:03       ` Raducu Deaconu
  1 sibling, 1 reply; 15+ messages in thread
From: Julian Anastasov @ 2015-07-14 19:11 UTC (permalink / raw
  To: rhadoo.io88; +Cc: lvs-devel


	Hello,

On Tue, 14 Jul 2015, rhadoo.io88 wrote:

> Unfortunately i was not aware of the FO scheduler, i looked it up just
> now...indeed you could get similar behavior with FO scheduler and
> proper thresholds.
> The only trouble with that is that i can't set the thresholds from
> within ldirectord , so i won't be able to keep all the config in one
> place,  and that upper threshold seems to take into account all
> connections ( active and inactive) , that making the upper limit a bit
> vague.
> i think this approach has the advantage of keeping all the config in
> one place, having the weight set at the actual number of active
> connections the node can handle whilst still allowing to have
> thresholds on the total connections.

	OK. But you have to update this scheduler because
it is based on old code. You can use ip_vs_fo.c for
reference. You have to fix any coding style warnings by using
scripts/checkpatch.pl --strict /tmp/your.patch
You can check Documentation/CodingStyle for reference.
You can add a comment that this scheduler can be used
for TCP/SCTP but not for UDP because it uses only the
activeconns counter.

	Your patch should also include changes for Kconfig
and Makefile.

	Also, the 'curentw = atomic_read(&dest->weight);'
assignment can be done early so that the weight is read only once.

Regards

--
Julian Anastasov <ja@ssi.bg>


* Re: ovf scheduler
  2015-07-14 19:11     ` Julian Anastasov
@ 2015-07-16  7:03       ` Raducu Deaconu
  2015-07-16  7:56         ` Julian Anastasov
  0 siblings, 1 reply; 15+ messages in thread
From: Raducu Deaconu @ 2015-07-16  7:03 UTC (permalink / raw
  To: Julian Anastasov; +Cc: lvs-devel

[-- Attachment #1: Type: text/plain, Size: 1575 bytes --]

Hi,
I have updated the code and generated the patch according to your suggestions.


On Tue, Jul 14, 2015 at 10:11 PM, Julian Anastasov <ja@ssi.bg> wrote:
>
>         Hello,
>
> On Tue, 14 Jul 2015, rhadoo.io88 wrote:
>
>> Unfortunately i was not aware of the FO scheduler, i looked it up just
>> now...indeed you could get similar behavior with FO scheduler and
>> proper thresholds.
>> The only trouble with that is that i can't set the thresholds from
>> within ldirectord , so i won't be able to keep all the config in one
>> place,  and that upper threshold seems to take into account all
>> connections ( active and inactive) , that making the upper limit a bit
>> vague.
>> i think this approach has the advantage of keeping all the config in
>> one place, having the weight set at the actual number of active
>> connections the node can handle whilst still allowing to have
>> thresholds on the total connections.
>
>         OK. But you have to update this scheduler because
> it is based on old code. You can use ip_vs_fo.c for
> reference. You have to fix any coding style warnings by using
> scripts/checkpatch.pl --strict /tmp/your.patch
> You can check Documentation/CodingStyle for reference.
> You can add comment that this scheduler can be used
> for TCP/SCTP but not for UDP because it uses only the
> activeconns.
>
>         Your patch should include also changes for Kconfig
> and Makefile.
>
>         Also, the 'curentw = atomic_read(&dest->weight);'
> can be used early to read the weight only once.
>
> Regards
>
> --
> Julian Anastasov <ja@ssi.bg>

[-- Attachment #2: addovf.patch --]
[-- Type: text/x-patch, Size: 4598 bytes --]

From 69eab2c6e5add4e565b882fd91959de9cb8526fb Mon Sep 17 00:00:00 2001
From: Raducu Deaconu <rhadoo.io88@gmail.com>
Date: Thu, 16 Jul 2015 08:54:16 +0300
Subject: [PATCH] Add ovf scheduler

Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
---
 net/netfilter/ipvs/Kconfig     |   11 +++++
 net/netfilter/ipvs/Makefile    |    1 +
 net/netfilter/ipvs/ip_vs_ovf.c |   87 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 99 insertions(+)
 create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c

diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
index 3b6929d..2563c18 100644
--- a/net/netfilter/ipvs/Kconfig
+++ b/net/netfilter/ipvs/Kconfig
@@ -162,6 +162,17 @@ config  IP_VS_FO
 	  If you want to compile it in kernel, say Y. To compile it as a
 	  module, choose M here. If unsure, say N.
 
+config  IP_VS_OVF
+		tristate "weighted overflow scheduling"
+	---help---
+	  The weighted overflow scheduling algorithm directs network
+	  connections to the server with the highest weight that is
+	  currently available and overflows to the next when active
+	  connections exceed the node's weight.
+
+	  If you want to compile it in kernel, say Y. To compile it as a
+	  module, choose M here. If unsure, say N.
+
 config	IP_VS_LBLC
 	tristate "locality-based least-connection scheduling"
 	---help---
diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
index 38b2723..67f3f43 100644
--- a/net/netfilter/ipvs/Makefile
+++ b/net/netfilter/ipvs/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
 obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
 obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
 obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
+obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
 obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
 obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
 obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
new file mode 100644
index 0000000..4e9458d
--- /dev/null
+++ b/net/netfilter/ipvs/ip_vs_ovf.c
@@ -0,0 +1,87 @@
+/*
+ * IPVS:        Overflow-Connection Scheduling module
+ *
+ * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
+ *
+ *              This program is free software; you can redistribute it and/or
+ *              modify it under the terms of the GNU General Public License
+ *              as published by the Free Software Foundation; either version
+ *              2 of the License, or (at your option) any later version.
+ *
+ * Scheduler implements "overflow" loadbalancing according to number of active
+ * connections , will keep all conections to the node with the highest weight
+ * and overflow to the next node if the number of connections exceeds the node's
+ * weight.
+ * Note that this scheduler might not be suitable for UDP because it only uses
+ * active connections
+ *
+ */
+
+#define KMSG_COMPONENT "IPVS"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <net/ip_vs.h>
+
+/* OVF Connection scheduling  */
+static struct ip_vs_dest *
+ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		   struct ip_vs_iphdr *iph)
+{
+	struct ip_vs_dest *dest, *hw = NULL;
+	int highestw = 0, curentw;
+
+	IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
+	/* select the node with highest weight, go to next in line if active
+	* connections exceed weight
+	*
+	*/
+	list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
+		curentw = atomic_read(&dest->weight);
+	if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
+	    atomic_read(&dest->activeconns) > curentw ||
+	    curentw == 0)
+			continue;
+	if (!hw || curentw > highestw) {
+			hw = dest;
+			highestw = curentw;
+		}
+	}
+
+	if (hw) {
+		IP_VS_DBG_BUF(6, "OVF: server %s:%u activeconns %d weight %d\n",
+			      IP_VS_DBG_ADDR(svc->af, &hw->addr),
+			      ntohs(hweight->port),
+			      atomic_read(&hw->activeconns),
+			      atomic_read(&hw->weight));
+		return hw;
+	}
+
+	ip_vs_scheduler_err(svc, "no destination available");
+	return NULL;
+}
+
+static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
+	.name =			"ovf",
+	.refcnt =		ATOMIC_INIT(0),
+	.module =		THIS_MODULE,
+	.n_list =		LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
+	.schedule =		ip_vs_ovf_schedule,
+};
+
+static int __init ip_vs_ovf_init(void)
+{
+	return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+}
+
+static void __exit ip_vs_ovf_cleanup(void)
+{
+	unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+	synchronize_rcu();
+}
+
+module_init(ip_vs_ovf_init);
+module_exit(ip_vs_ovf_cleanup);
+MODULE_LICENSE("GPL");
-- 
1.7.10.4



* Re: ovf scheduler
  2015-07-16  7:03       ` Raducu Deaconu
@ 2015-07-16  7:56         ` Julian Anastasov
  2015-07-16 11:06           ` Raducu Deaconu
  0 siblings, 1 reply; 15+ messages in thread
From: Julian Anastasov @ 2015-07-16  7:56 UTC (permalink / raw
  To: Raducu Deaconu; +Cc: lvs-devel


	Hello,

On Thu, 16 Jul 2015, Raducu Deaconu wrote:

> Hi,
> I have updated the code and generated the patch, according to your indications.

	When the patch is attached it is difficult to
comment on it. Anyway, I'll try.

- The Subject should include "ipvs: ",
eg. Subject: [PATCH] ipvs: add ovf scheduler

- In Kconfig the "tristate..." line has wrong indentation,
one tab should be removed

- When debugging is enabled (CONFIG_IP_VS_DEBUG=y) the module
does not compile, you can also use short var names, eg:

	struct ip_vs_dest *dest, *h = NULL;
	int hw = 0, w;

- the comment block "select the node..." has one empty line
at end, should be removed

- the 'if ...' blocks in the loop are not properly indented

Regards

--
Julian Anastasov <ja@ssi.bg>


* Re: ovf scheduler
  2015-07-16  7:56         ` Julian Anastasov
@ 2015-07-16 11:06           ` Raducu Deaconu
  2015-07-16 18:27             ` Julian Anastasov
  0 siblings, 1 reply; 15+ messages in thread
From: Raducu Deaconu @ 2015-07-16 11:06 UTC (permalink / raw
  To: Julian Anastasov; +Cc: lvs-devel

[-- Attachment #1: Type: text/plain, Size: 5956 bytes --]

Hello,

I have readjusted the patch; sorry for the coding style issues, this
is my first attempt at a contribution.




Subject: [PATCH] ipvs: Add ovf scheduler

Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
---
 net/netfilter/ipvs/Kconfig     |   11 +++++
 net/netfilter/ipvs/Makefile    |    1 +
 net/netfilter/ipvs/ip_vs_ovf.c |   86 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 98 insertions(+)
 create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c

diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
index 3b6929d..b32fb0d 100644
--- a/net/netfilter/ipvs/Kconfig
+++ b/net/netfilter/ipvs/Kconfig
@@ -162,6 +162,17 @@ config  IP_VS_FO
          If you want to compile it in kernel, say Y. To compile it as a
          module, choose M here. If unsure, say N.

+config  IP_VS_OVF
+       tristate "weighted overflow scheduling"
+       ---help---
+         The weighted overflow scheduling algorithm directs network
+         connections to the server with the highest weight that is
+         currently available and overflows to the next when active
+         connections exceed the node's weight.
+
+         If you want to compile it in kernel, say Y. To compile it as a
+         module, choose M here. If unsure, say N.
+
 config IP_VS_LBLC
        tristate "locality-based least-connection scheduling"
        ---help---
diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
index 38b2723..67f3f43 100644
--- a/net/netfilter/ipvs/Makefile
+++ b/net/netfilter/ipvs/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
 obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
 obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
 obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
+obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
 obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
 obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
 obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
new file mode 100644
index 0000000..3478b0d
--- /dev/null
+++ b/net/netfilter/ipvs/ip_vs_ovf.c
@@ -0,0 +1,86 @@
+/*
+ * IPVS:        Overflow-Connection Scheduling module
+ *
+ * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
+ *
+ *              This program is free software; you can redistribute it and/or
+ *              modify it under the terms of the GNU General Public License
+ *              as published by the Free Software Foundation; either version
+ *              2 of the License, or (at your option) any later version.
+ *
+ * Scheduler implements "overflow" loadbalancing according to number of active
+ * connections , will keep all conections to the node with the highest weight
+ * and overflow to the next node if the number of connections exceeds the node's
+ * weight.
+ * Note that this scheduler might not be suitable for UDP because it only uses
+ * active connections
+ *
+ */
+
+#define KMSG_COMPONENT "IPVS"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <net/ip_vs.h>
+
+/* OVF Connection scheduling  */
+static struct ip_vs_dest *
+ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+                  struct ip_vs_iphdr *iph)
+{
+       struct ip_vs_dest *dest, *h = NULL;
+       int hw = 0, w;
+
+       IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
+       /* select the node with highest weight, go to next in line if active
+       * connections exceed weight
+       */
+       list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
+               w = atomic_read(&dest->weight);
+               if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
+                   atomic_read(&dest->activeconns) > w ||
+                   w == 0)
+                       continue;
+               if (!h || w > hw) {
+                       h = dest;
+                       hw = w;
+               }
+       }
+
+               if (h) {
+                       IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
+                                     IP_VS_DBG_ADDR(svc->af, &h->addr),
+                                     ntohs(h->port),
+                                     atomic_read(&h->activeconns),
+                                     atomic_read(&h->weight));
+               return h;
+               }
+
+       ip_vs_scheduler_err(svc, "no destination available");
+       return NULL;
+}
+
+static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
+       .name =                 "ovf",
+       .refcnt =               ATOMIC_INIT(0),
+       .module =               THIS_MODULE,
+       .n_list =               LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
+       .schedule =             ip_vs_ovf_schedule,
+};
+
+static int __init ip_vs_ovf_init(void)
+{
+       return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+}
+
+static void __exit ip_vs_ovf_cleanup(void)
+{
+       unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+       synchronize_rcu();
+}
+
+module_init(ip_vs_ovf_init);
+module_exit(ip_vs_ovf_cleanup);
+MODULE_LICENSE("GPL");
-- 
1.7.10.4

On Thu, Jul 16, 2015 at 10:56 AM, Julian Anastasov <ja@ssi.bg> wrote:
>
>         Hello,
>
> On Thu, 16 Jul 2015, Raducu Deaconu wrote:
>
>> Hi,
>> I have updated the code and generated the patch, according to your indications.
>
>         When patch is attached it is difficult to
> comment it. Anyways, I'll try.
>
> - The Subject should include "ipvs: ",
> eg. Subject: [PATCH] ipvs: add ovf scheduler
>
> - In Kconfig the "tristate..." line has wrong indentation,
> one tab should be removed
>
> - When debugging is enabled (CONFIG_IP_VS_DEBUG=y) the module
> does not compile, you can also use short var names, eg:
>
>         struct ip_vs_dest *dest, *h = NULL;
>         int hw = 0, w;
>
> - the comment block "select the node..." has one empty line
> at end, should be removed
>
> - the 'if ...' blocks in the loop are not properly indented
>
> Regards
>
> --
> Julian Anastasov <ja@ssi.bg>

[-- Attachment #2: addovf.patch --]
[-- Type: text/x-patch, Size: 4532 bytes --]

From 5eecae75dd98d16c30c2ab983e816ea94c307fa8 Mon Sep 17 00:00:00 2001
From: Raducu Deaconu <rhadoo.io88@gmail.com>
Date: Thu, 16 Jul 2015 13:56:23 +0300
Subject: [PATCH] ipvs: Add ovf scheduler

Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
---
 net/netfilter/ipvs/Kconfig     |   11 +++++
 net/netfilter/ipvs/Makefile    |    1 +
 net/netfilter/ipvs/ip_vs_ovf.c |   86 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 98 insertions(+)
 create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c

diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
index 3b6929d..b32fb0d 100644
--- a/net/netfilter/ipvs/Kconfig
+++ b/net/netfilter/ipvs/Kconfig
@@ -162,6 +162,17 @@ config  IP_VS_FO
 	  If you want to compile it in kernel, say Y. To compile it as a
 	  module, choose M here. If unsure, say N.
 
+config  IP_VS_OVF
+	tristate "weighted overflow scheduling"
+	---help---
+	  The weighted overflow scheduling algorithm directs network
+	  connections to the server with the highest weight that is
+	  currently available and overflows to the next when active
+	  connections exceed the node's weight.
+
+	  If you want to compile it in kernel, say Y. To compile it as a
+	  module, choose M here. If unsure, say N.
+
 config	IP_VS_LBLC
 	tristate "locality-based least-connection scheduling"
 	---help---
diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
index 38b2723..67f3f43 100644
--- a/net/netfilter/ipvs/Makefile
+++ b/net/netfilter/ipvs/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
 obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
 obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
 obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
+obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
 obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
 obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
 obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
new file mode 100644
index 0000000..3478b0d
--- /dev/null
+++ b/net/netfilter/ipvs/ip_vs_ovf.c
@@ -0,0 +1,86 @@
+/*
+ * IPVS:        Overflow-Connection Scheduling module
+ *
+ * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
+ *
+ *              This program is free software; you can redistribute it and/or
+ *              modify it under the terms of the GNU General Public License
+ *              as published by the Free Software Foundation; either version
+ *              2 of the License, or (at your option) any later version.
+ *
+ * Scheduler implements "overflow" loadbalancing according to number of active
+ * connections , will keep all conections to the node with the highest weight
+ * and overflow to the next node if the number of connections exceeds the node's
+ * weight.
+ * Note that this scheduler might not be suitable for UDP because it only uses
+ * active connections
+ *
+ */
+
+#define KMSG_COMPONENT "IPVS"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <net/ip_vs.h>
+
+/* OVF Connection scheduling  */
+static struct ip_vs_dest *
+ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		   struct ip_vs_iphdr *iph)
+{
+	struct ip_vs_dest *dest, *h = NULL;
+	int hw = 0, w;
+
+	IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
+	/* select the node with highest weight, go to next in line if active
+	* connections exceed weight
+	*/
+	list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
+		w = atomic_read(&dest->weight);
+		if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
+		    atomic_read(&dest->activeconns) > w ||
+		    w == 0)
+			continue;
+		if (!h || w > hw) {
+			h = dest;
+			hw = w;
+		}
+	}
+
+		if (h) {
+			IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
+				      IP_VS_DBG_ADDR(svc->af, &h->addr),
+				      ntohs(h->port),
+				      atomic_read(&h->activeconns),
+				      atomic_read(&h->weight));
+		return h;
+		}
+
+	ip_vs_scheduler_err(svc, "no destination available");
+	return NULL;
+}
+
+static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
+	.name =			"ovf",
+	.refcnt =		ATOMIC_INIT(0),
+	.module =		THIS_MODULE,
+	.n_list =		LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
+	.schedule =		ip_vs_ovf_schedule,
+};
+
+static int __init ip_vs_ovf_init(void)
+{
+	return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+}
+
+static void __exit ip_vs_ovf_cleanup(void)
+{
+	unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+	synchronize_rcu();
+}
+
+module_init(ip_vs_ovf_init);
+module_exit(ip_vs_ovf_cleanup);
+MODULE_LICENSE("GPL");
-- 
1.7.10.4



* Re: ovf scheduler
  2015-07-16 11:06           ` Raducu Deaconu
@ 2015-07-16 18:27             ` Julian Anastasov
  2015-07-17  5:53               ` Raducu Deaconu
  0 siblings, 1 reply; 15+ messages in thread
From: Julian Anastasov @ 2015-07-16 18:27 UTC (permalink / raw
  To: Raducu Deaconu; +Cc: lvs-devel


	Hello,

On Thu, 16 Jul 2015, Raducu Deaconu wrote:

> Hello,
> 
> I have readjusted the patch, sorry for the code style issues, this is
> my first attempt at a contribution.

	No worries. But one new problem...

> +               if (h) {
> +                       IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
> +                                     IP_VS_DBG_ADDR(svc->af, &h->addr),

	Indentation of this 'if' block should be fixed.
It was correct in first patch. Also, change svc->af to
h->af in above line.

> +                                     ntohs(h->port),
> +                                     atomic_read(&h->activeconns),
> +                                     atomic_read(&h->weight));
> +               return h;
> +               }

Regards

--
Julian Anastasov <ja@ssi.bg>


* Re: ovf scheduler
  2015-07-16 18:27             ` Julian Anastasov
@ 2015-07-17  5:53               ` Raducu Deaconu
  2015-07-17  6:16                 ` Julian Anastasov
  0 siblings, 1 reply; 15+ messages in thread
From: Raducu Deaconu @ 2015-07-17  5:53 UTC (permalink / raw
  To: Julian Anastasov; +Cc: lvs-devel

[-- Attachment #1: Type: text/plain, Size: 5848 bytes --]

Hello,

I have done the corrections.


Subject: [PATCH] Add ovf scheduler

Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
---
 net/netfilter/ipvs/Kconfig     |   11 +++++
 net/netfilter/ipvs/Makefile    |    1 +
 net/netfilter/ipvs/ip_vs_ovf.c |   86 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 98 insertions(+)
 create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c

diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
index 3b6929d..b32fb0d 100644
--- a/net/netfilter/ipvs/Kconfig
+++ b/net/netfilter/ipvs/Kconfig
@@ -162,6 +162,17 @@ config  IP_VS_FO
          If you want to compile it in kernel, say Y. To compile it as a
          module, choose M here. If unsure, say N.

+config  IP_VS_OVF
+       tristate "weighted overflow scheduling"
+       ---help---
+         The weighted overflow scheduling algorithm directs network
+         connections to the server with the highest weight that is
+         currently available and overflows to the next when active
+         connections exceed the node's weight.
+
+         If you want to compile it in kernel, say Y. To compile it as a
+         module, choose M here. If unsure, say N.
+
 config IP_VS_LBLC
        tristate "locality-based least-connection scheduling"
        ---help---
diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
index 38b2723..67f3f43 100644
--- a/net/netfilter/ipvs/Makefile
+++ b/net/netfilter/ipvs/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
 obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
 obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
 obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
+obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
 obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
 obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
 obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
new file mode 100644
index 0000000..f7d62c3
--- /dev/null
+++ b/net/netfilter/ipvs/ip_vs_ovf.c
@@ -0,0 +1,86 @@
+/*
+ * IPVS:        Overflow-Connection Scheduling module
+ *
+ * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
+ *
+ *              This program is free software; you can redistribute it and/or
+ *              modify it under the terms of the GNU General Public License
+ *              as published by the Free Software Foundation; either version
+ *              2 of the License, or (at your option) any later version.
+ *
+ * Scheduler implements "overflow" loadbalancing according to number of active
+ * connections , will keep all conections to the node with the highest weight
+ * and overflow to the next node if the number of connections exceeds the node's
+ * weight.
+ * Note that this scheduler might not be suitable for UDP because it only uses
+ * active connections
+ *
+ */
+
+#define KMSG_COMPONENT "IPVS"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <net/ip_vs.h>
+
+/* OVF Connection scheduling  */
+static struct ip_vs_dest *
+ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+                  struct ip_vs_iphdr *iph)
+{
+       struct ip_vs_dest *dest, *h = NULL;
+       int hw = 0, w;
+
+       IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
+       /* select the node with highest weight, go to next in line if active
+       * connections exceed weight
+       */
+       list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
+               w = atomic_read(&dest->weight);
+               if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
+                   atomic_read(&dest->activeconns) > w ||
+                   w == 0)
+                       continue;
+               if (!h || w > hw) {
+                       h = dest;
+                       hw = w;
+               }
+       }
+
+       if (h) {
+               IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
+                             IP_VS_DBG_ADDR(h->af, &h->addr),
+                             ntohs(h->port),
+                             atomic_read(&h->activeconns),
+                             atomic_read(&h->weight));
+               return h;
+       }
+
+       ip_vs_scheduler_err(svc, "no destination available");
+       return NULL;
+}
+
+static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
+       .name =                 "ovf",
+       .refcnt =               ATOMIC_INIT(0),
+       .module =               THIS_MODULE,
+       .n_list =               LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
+       .schedule =             ip_vs_ovf_schedule,
+};
+
+static int __init ip_vs_ovf_init(void)
+{
+       return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+}
+
+static void __exit ip_vs_ovf_cleanup(void)
+{
+       unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+       synchronize_rcu();
+}
+
+module_init(ip_vs_ovf_init);
+module_exit(ip_vs_ovf_cleanup);
+MODULE_LICENSE("GPL");
-- 
1.7.10.4

On Thu, Jul 16, 2015 at 9:27 PM, Julian Anastasov <ja@ssi.bg> wrote:
>
>         Hello,
>
> On Thu, 16 Jul 2015, Raducu Deaconu wrote:
>
>> Hello,
>>
>> I have readjusted the patch, sorry for the code style issues, this is
>> my first attempt at a contribution.
>
>         No worries. But one new problem...
>
>> +               if (h) {
>> +                       IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
>> +                                     IP_VS_DBG_ADDR(svc->af, &h->addr),
>
>         Indentation of this 'if' block should be fixed.
> It was correct in first patch. Also, change svc->af to
> h->af in above line.
>
>> +                                     ntohs(h->port),
>> +                                     atomic_read(&h->activeconns),
>> +                                     atomic_read(&h->weight));
>> +               return h;
>> +               }
>
> Regards
>
> --
> Julian Anastasov <ja@ssi.bg>

[-- Attachment #2: addovf.patch --]
[-- Type: text/x-patch, Size: 4517 bytes --]

From 2bcbfa0261d5e1837b4dcb8060e4901ce7992fa9 Mon Sep 17 00:00:00 2001
From: Raducu Deaconu <rhadoo.io88@gmail.com>
Date: Fri, 17 Jul 2015 08:45:40 +0300
Subject: [PATCH] Add ovf scheduler

Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
---
 net/netfilter/ipvs/Kconfig     |   11 +++++
 net/netfilter/ipvs/Makefile    |    1 +
 net/netfilter/ipvs/ip_vs_ovf.c |   86 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 98 insertions(+)
 create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c

diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
index 3b6929d..b32fb0d 100644
--- a/net/netfilter/ipvs/Kconfig
+++ b/net/netfilter/ipvs/Kconfig
@@ -162,6 +162,17 @@ config  IP_VS_FO
 	  If you want to compile it in kernel, say Y. To compile it as a
 	  module, choose M here. If unsure, say N.
 
+config  IP_VS_OVF
+	tristate "weighted overflow scheduling"
+	---help---
+	  The weighted overflow scheduling algorithm directs network
+	  connections to the server with the highest weight that is
+	  currently available and overflows to the next when active
+	  connections exceed the node's weight.
+
+	  If you want to compile it in kernel, say Y. To compile it as a
+	  module, choose M here. If unsure, say N.
+
 config	IP_VS_LBLC
 	tristate "locality-based least-connection scheduling"
 	---help---
diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
index 38b2723..67f3f43 100644
--- a/net/netfilter/ipvs/Makefile
+++ b/net/netfilter/ipvs/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
 obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
 obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
 obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
+obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
 obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
 obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
 obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
new file mode 100644
index 0000000..f7d62c3
--- /dev/null
+++ b/net/netfilter/ipvs/ip_vs_ovf.c
@@ -0,0 +1,86 @@
+/*
+ * IPVS:        Overflow-Connection Scheduling module
+ *
+ * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
+ *
+ *              This program is free software; you can redistribute it and/or
+ *              modify it under the terms of the GNU General Public License
+ *              as published by the Free Software Foundation; either version
+ *              2 of the License, or (at your option) any later version.
+ *
+ * Scheduler implements "overflow" loadbalancing according to number of active
+ * connections , will keep all conections to the node with the highest weight
+ * and overflow to the next node if the number of connections exceeds the node's
+ * weight.
+ * Note that this scheduler might not be suitable for UDP because it only uses
+ * active connections
+ *
+ */
+
+#define KMSG_COMPONENT "IPVS"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <net/ip_vs.h>
+
+/* OVF Connection scheduling  */
+static struct ip_vs_dest *
+ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		   struct ip_vs_iphdr *iph)
+{
+	struct ip_vs_dest *dest, *h = NULL;
+	int hw = 0, w;
+
+	IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
+	/* select the node with highest weight, go to next in line if active
+	* connections exceed weight
+	*/
+	list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
+		w = atomic_read(&dest->weight);
+		if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
+		    atomic_read(&dest->activeconns) > w ||
+		    w == 0)
+			continue;
+		if (!h || w > hw) {
+			h = dest;
+			hw = w;
+		}
+	}
+
+	if (h) {
+		IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
+			      IP_VS_DBG_ADDR(h->af, &h->addr),
+			      ntohs(h->port),
+			      atomic_read(&h->activeconns),
+			      atomic_read(&h->weight));
+		return h;
+	}
+
+	ip_vs_scheduler_err(svc, "no destination available");
+	return NULL;
+}
+
+static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
+	.name =			"ovf",
+	.refcnt =		ATOMIC_INIT(0),
+	.module =		THIS_MODULE,
+	.n_list =		LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
+	.schedule =		ip_vs_ovf_schedule,
+};
+
+static int __init ip_vs_ovf_init(void)
+{
+	return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+}
+
+static void __exit ip_vs_ovf_cleanup(void)
+{
+	unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+	synchronize_rcu();
+}
+
+module_init(ip_vs_ovf_init);
+module_exit(ip_vs_ovf_cleanup);
+MODULE_LICENSE("GPL");
-- 
1.7.10.4



* Re: ovf scheduler
  2015-07-17  5:53               ` Raducu Deaconu
@ 2015-07-17  6:16                 ` Julian Anastasov
  2015-07-17  6:43                   ` Simon Horman
  0 siblings, 1 reply; 15+ messages in thread
From: Julian Anastasov @ 2015-07-17  6:16 UTC (permalink / raw
  To: Raducu Deaconu; +Cc: lvs-devel, Simon Horman


	Hello,

On Fri, 17 Jul 2015, Raducu Deaconu wrote:

> Hello,
> 
> I have done the corrections.
> 
> 
> Subject: [PATCH] Add ovf scheduler
> 
> Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>

	Thanks!

Acked-by: Julian Anastasov <ja@ssi.bg>

	Simon, please apply the attached patch to -next
tree after adding the "ipvs: " prefix.

> ---
>  net/netfilter/ipvs/Kconfig     |   11 +++++
>  net/netfilter/ipvs/Makefile    |    1 +
>  net/netfilter/ipvs/ip_vs_ovf.c |   86
> ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 98 insertions(+)
>  create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c
> 
> diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
> index 3b6929d..b32fb0d 100644
> --- a/net/netfilter/ipvs/Kconfig
> +++ b/net/netfilter/ipvs/Kconfig
> @@ -162,6 +162,17 @@ config  IP_VS_FO
>           If you want to compile it in kernel, say Y. To compile it as
> a
>           module, choose M here. If unsure, say N.
> 
> +config  IP_VS_OVF
> +       tristate "weighted overflow scheduling"
> +       ---help---
> +         The weighted overflow scheduling algorithm directs network
> +         connections to the server with the highest weight that is
> +         currently available and overflows to the next when active
> +         connections exceed the node's weight.
> +
> +         If you want to compile it in kernel, say Y. To compile it as a
> +         module, choose M here. If unsure, say N.
> +
>  config IP_VS_LBLC
>         tristate "locality-based least-connection scheduling"
>         ---help---
> diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
> index 38b2723..67f3f43 100644
> --- a/net/netfilter/ipvs/Makefile
> +++ b/net/netfilter/ipvs/Makefile
> @@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
>  obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
>  obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
>  obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
> +obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
>  obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
>  obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
>  obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
> diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
> new file mode 100644
> index 0000000..f7d62c3
> --- /dev/null
> +++ b/net/netfilter/ipvs/ip_vs_ovf.c
> @@ -0,0 +1,86 @@
> +/*
> + * IPVS:        Overflow-Connection Scheduling module
> + *
> + * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
> + *
> + *              This program is free software; you can redistribute it and/or
> + *              modify it under the terms of the GNU General Public License
> + *              as published by the Free Software Foundation; either version
> + *              2 of the License, or (at your option) any later version.
> + *
> + * Scheduler implements "overflow" loadbalancing according to number of active
> + * connections , will keep all conections to the node with the highest weight
> + * and overflow to the next node if the number of connections exceeds
> the node's
> + * weight.
> + * Note that this scheduler might not be suitable for UDP because it only uses
> + * active connections
> + *
> + */
> +
> +#define KMSG_COMPONENT "IPVS"
> +#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +
> +#include <net/ip_vs.h>
> +
> +/* OVF Connection scheduling  */
> +static struct ip_vs_dest *
> +ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
> +                  struct ip_vs_iphdr *iph)
> +{
> +       struct ip_vs_dest *dest, *h = NULL;
> +       int hw = 0, w;
> +
> +       IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
> +       /* select the node with highest weight, go to next in line if active
> +       * connections exceed weight
> +       */
> +       list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
> +               w = atomic_read(&dest->weight);
> +               if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
> +                   atomic_read(&dest->activeconns) > w ||
> +                   w == 0)
> +                       continue;
> +               if (!h || w > hw) {
> +                       h = dest;
> +                       hw = w;
> +               }
> +       }
> +
> +       if (h) {
> +               IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
> +                             IP_VS_DBG_ADDR(h->af, &h->addr),
> +                             ntohs(h->port),
> +                             atomic_read(&h->activeconns),
> +                             atomic_read(&h->weight));
> +               return h;
> +       }
> +
> +       ip_vs_scheduler_err(svc, "no destination available");
> +       return NULL;
> +}
> +
> +static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
> +       .name =                 "ovf",
> +       .refcnt =               ATOMIC_INIT(0),
> +       .module =               THIS_MODULE,
> +       .n_list =               LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
> +       .schedule =             ip_vs_ovf_schedule,
> +};
> +
> +static int __init ip_vs_ovf_init(void)
> +{
> +       return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
> +}
> +
> +static void __exit ip_vs_ovf_cleanup(void)
> +{
> +       unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
> +       synchronize_rcu();
> +}
> +
> +module_init(ip_vs_ovf_init);
> +module_exit(ip_vs_ovf_cleanup);
> +MODULE_LICENSE("GPL");
> -- 
> 1.7.10.4

Regards

--
Julian Anastasov <ja@ssi.bg>


* Re: ovf scheduler
  2015-07-17  6:16                 ` Julian Anastasov
@ 2015-07-17  6:43                   ` Simon Horman
  2015-07-17 11:15                     ` Raducu Deaconu
  0 siblings, 1 reply; 15+ messages in thread
From: Simon Horman @ 2015-07-17  6:43 UTC (permalink / raw
  To: Julian Anastasov; +Cc: Raducu Deaconu, lvs-devel

On Fri, Jul 17, 2015 at 09:16:40AM +0300, Julian Anastasov wrote:
> 
> 	Hello,
> 
> On Fri, 17 Jul 2015, Raducu Deaconu wrote:
> 
> > Hello,
> > 
> > I have done the corrections.
> > 
> > 
> > Subject: [PATCH] Add ovf scheduler
> > 
> > Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
> 
> 	Thanks!
> 
> Acked-by: Julian Anastasov <ja@ssi.bg>
> 
> 	Simon, please apply the attached patch to -next
> tree after adding the "ipvs: " prefix.

I am happy to do so, but Raducu, could you flesh out the changelog to
describe what the new scheduler does? The Kconfig text (below) seems to
provide the necessary information, but I'd really like it in the changelog
too.

Thanks.

> 
> > ---
> >  net/netfilter/ipvs/Kconfig     |   11 +++++
> >  net/netfilter/ipvs/Makefile    |    1 +
> >  net/netfilter/ipvs/ip_vs_ovf.c |   86
> > ++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 98 insertions(+)
> >  create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c
> > 
> > diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
> > index 3b6929d..b32fb0d 100644
> > --- a/net/netfilter/ipvs/Kconfig
> > +++ b/net/netfilter/ipvs/Kconfig
> > @@ -162,6 +162,17 @@ config  IP_VS_FO
> >           If you want to compile it in kernel, say Y. To compile it as
> > a
> >           module, choose M here. If unsure, say N.
> > 
> > +config  IP_VS_OVF
> > +       tristate "weighted overflow scheduling"
> > +       ---help---
> > +         The weighted overflow scheduling algorithm directs network
> > +         connections to the server with the highest weight that is
> > +         currently available and overflows to the next when active
> > +         connections exceed the node's weight.
> > +
> > +         If you want to compile it in kernel, say Y. To compile it as a
> > +         module, choose M here. If unsure, say N.
> > +
> >  config IP_VS_LBLC
> >         tristate "locality-based least-connection scheduling"
> >         ---help---

[snip]


* Re: ovf scheduler
  2015-07-17  6:43                   ` Simon Horman
@ 2015-07-17 11:15                     ` Raducu Deaconu
  2015-07-18  1:01                       ` Simon Horman
  0 siblings, 1 reply; 15+ messages in thread
From: Raducu Deaconu @ 2015-07-17 11:15 UTC (permalink / raw)
  To: Simon Horman; +Cc: Julian Anastasov, lvs-devel

[-- Attachment #1: Type: text/plain, Size: 7409 bytes --]

Hi,
I have added an entry to the changelog.
Thank you


From 2bcbfa0261d5e1837b4dcb8060e4901ce7992fa9 Mon Sep 17 00:00:00 2001
From: Raducu Deaconu <rhadoo.io88@gmail.com>
Date: Fri, 17 Jul 2015 08:45:40 +0300
Subject: [PATCH] ipvs: Add ovf scheduler

The weighted overflow scheduling algorithm directs network connections
to the server with the highest weight that is currently available
and overflows to the next when active connections exceed the node's weight.
Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
---
 net/netfilter/ipvs/Kconfig     |   11 +++++
 net/netfilter/ipvs/Makefile    |    1 +
 net/netfilter/ipvs/ip_vs_ovf.c |   86 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 98 insertions(+)
 create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c

diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
index 3b6929d..b32fb0d 100644
--- a/net/netfilter/ipvs/Kconfig
+++ b/net/netfilter/ipvs/Kconfig
@@ -162,6 +162,17 @@ config  IP_VS_FO
          If you want to compile it in kernel, say Y. To compile it as a
          module, choose M here. If unsure, say N.

+config  IP_VS_OVF
+       tristate "weighted overflow scheduling"
+       ---help---
+         The weighted overflow scheduling algorithm directs network
+         connections to the server with the highest weight that is
+         currently available and overflows to the next when active
+         connections exceed the node's weight.
+
+         If you want to compile it in kernel, say Y. To compile it as a
+         module, choose M here. If unsure, say N.
+
 config IP_VS_LBLC
        tristate "locality-based least-connection scheduling"
        ---help---
diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
index 38b2723..67f3f43 100644
--- a/net/netfilter/ipvs/Makefile
+++ b/net/netfilter/ipvs/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
 obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
 obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
 obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
+obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
 obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
 obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
 obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
new file mode 100644
index 0000000..f7d62c3
--- /dev/null
+++ b/net/netfilter/ipvs/ip_vs_ovf.c
@@ -0,0 +1,86 @@
+/*
+ * IPVS:        Overflow-Connection Scheduling module
+ *
+ * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
+ *
+ *              This program is free software; you can redistribute it and/or
+ *              modify it under the terms of the GNU General Public License
+ *              as published by the Free Software Foundation; either version
+ *              2 of the License, or (at your option) any later version.
+ *
+ * Scheduler implements "overflow" load balancing according to the number of
+ * active connections: it keeps all connections on the node with the highest
+ * weight and overflows to the next node if the number of connections exceeds
+ * the node's weight.
+ * Note that this scheduler might not be suitable for UDP because it only uses
+ * active connections.
+ *
+ */
+
+#define KMSG_COMPONENT "IPVS"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <net/ip_vs.h>
+
+/* OVF Connection scheduling  */
+static struct ip_vs_dest *
+ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+                  struct ip_vs_iphdr *iph)
+{
+       struct ip_vs_dest *dest, *h = NULL;
+       int hw = 0, w;
+
+       IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
+       /* select the node with highest weight, go to next in line if active
+        * connections exceed weight
+        */
+       list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
+               w = atomic_read(&dest->weight);
+               if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
+                   atomic_read(&dest->activeconns) > w ||
+                   w == 0)
+                       continue;
+               if (!h || w > hw) {
+                       h = dest;
+                       hw = w;
+               }
+       }
+
+       if (h) {
+               IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
+                             IP_VS_DBG_ADDR(h->af, &h->addr),
+                             ntohs(h->port),
+                             atomic_read(&h->activeconns),
+                             atomic_read(&h->weight));
+               return h;
+       }
+
+       ip_vs_scheduler_err(svc, "no destination available");
+       return NULL;
+}
+
+static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
+       .name =                 "ovf",
+       .refcnt =               ATOMIC_INIT(0),
+       .module =               THIS_MODULE,
+       .n_list =               LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
+       .schedule =             ip_vs_ovf_schedule,
+};
+
+static int __init ip_vs_ovf_init(void)
+{
+       return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+}
+
+static void __exit ip_vs_ovf_cleanup(void)
+{
+       unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+       synchronize_rcu();
+}
+
+module_init(ip_vs_ovf_init);
+module_exit(ip_vs_ovf_cleanup);
+MODULE_LICENSE("GPL");
-- 
1.7.10.4
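
As an illustration of the behaviour (not part of the patch itself): below is a
minimal userspace C sketch that models the same selection rule, using made-up
server names and weights. New connections stay on the highest-weight
destination until its active-connection count exceeds its weight, and only
then overflow to the next one. Once the module is loaded, the scheduler can be
selected per virtual service by its name, "ovf" (for example via ipvsadm's -s
option).

#include <stdio.h>

/* Userspace model of one destination: weight, active connections, overload flag */
struct dest {
	const char *name;
	int weight;
	int activeconns;
	int overloaded;		/* stands in for IP_VS_DEST_F_OVERLOAD */
};

/* Same rule as ip_vs_ovf_schedule(): pick the highest-weight destination
 * that is not overloaded and whose active connections do not exceed its
 * weight; return NULL if none qualifies.
 */
static struct dest *ovf_pick(struct dest *tab, int n)
{
	struct dest *h = NULL;
	int hw = 0;
	int i;

	for (i = 0; i < n; i++) {
		int w = tab[i].weight;

		if (tab[i].overloaded || tab[i].activeconns > w || w == 0)
			continue;
		if (!h || w > hw) {
			h = &tab[i];
			hw = w;
		}
	}
	return h;
}

int main(void)
{
	/* Placeholder primary/backup pair with nearly equal weights */
	struct dest tab[] = {
		{ "rs1", 500, 0, 0 },
		{ "rs2", 499, 0, 0 },
	};
	int c;

	/* Offer 1200 connections: the first ~501 stay on rs1, the next ~500
	 * overflow to rs2, then no destination is available.
	 */
	for (c = 0; c < 1200; c++) {
		struct dest *d = ovf_pick(tab, 2);

		if (!d) {
			printf("connection %d: no destination available\n", c);
			break;
		}
		d->activeconns++;
	}
	printf("rs1=%d rs2=%d\n", tab[0].activeconns, tab[1].activeconns);
	return 0;
}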

On Fri, Jul 17, 2015 at 9:43 AM, Simon Horman <horms@verge.net.au> wrote:
> On Fri, Jul 17, 2015 at 09:16:40AM +0300, Julian Anastasov wrote:
>>
>>       Hello,
>>
>> On Fri, 17 Jul 2015, Raducu Deaconu wrote:
>>
>> > Hello,
>> >
>> > I have done the corrections.
>> >
>> >
>> > Subject: [PATCH] Add ovf scheduler
>> >
>> > Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
>>
>>       Thanks!
>>
>> Acked-by: Julian Anastasov <ja@ssi.bg>
>>
>>       Simon, please apply the attached patch to -next
>> tree after adding the "ipvs: " prefix.
>
> I am happy to do so, but Raducu, could you flesh out the changelog to
> describe what the new scheduler does? The Kconfig text (below) seems to
> provide the necessary information, but I'd really like it in the changelog
> too.
>
> Thanks.
>
>>
>> > ---
>> >  net/netfilter/ipvs/Kconfig     |   11 +++++
>> >  net/netfilter/ipvs/Makefile    |    1 +
>> >  net/netfilter/ipvs/ip_vs_ovf.c |   86 ++++++++++++++++++++++++++++++++++++++++
>> >  3 files changed, 98 insertions(+)
>> >  create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c
>> >
>> > diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
>> > index 3b6929d..b32fb0d 100644
>> > --- a/net/netfilter/ipvs/Kconfig
>> > +++ b/net/netfilter/ipvs/Kconfig
>> > @@ -162,6 +162,17 @@ config  IP_VS_FO
>> >           If you want to compile it in kernel, say Y. To compile it as a
>> >           module, choose M here. If unsure, say N.
>> >
>> > +config  IP_VS_OVF
>> > +       tristate "weighted overflow scheduling"
>> > +       ---help---
>> > +         The weighted overflow scheduling algorithm directs network
>> > +         connections to the server with the highest weight that is
>> > +         currently available and overflows to the next when active
>> > +         connections exceed the node's weight.
>> > +
>> > +         If you want to compile it in kernel, say Y. To compile it as a
>> > +         module, choose M here. If unsure, say N.
>> > +
>> >  config IP_VS_LBLC
>> >         tristate "locality-based least-connection scheduling"
>> >         ---help---
>
> [snip]

[-- Attachment #2: addovf.patch --]
[-- Type: text/x-patch, Size: 4737 bytes --]

From 2bcbfa0261d5e1837b4dcb8060e4901ce7992fa9 Mon Sep 17 00:00:00 2001
From: Raducu Deaconu <rhadoo.io88@gmail.com>
Date: Fri, 17 Jul 2015 08:45:40 +0300
Subject: [PATCH] ipvs: Add ovf scheduler

The weighted overflow scheduling algorithm directs network connections
to the server with the highest weight that is currently available 
and overflows to the next when active connections exceed the node's weight.
Signed-off-by: Raducu Deaconu <rhadoo.io88@gmail.com>
---
 net/netfilter/ipvs/Kconfig     |   11 +++++
 net/netfilter/ipvs/Makefile    |    1 +
 net/netfilter/ipvs/ip_vs_ovf.c |   86 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 98 insertions(+)
 create mode 100644 net/netfilter/ipvs/ip_vs_ovf.c

diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
index 3b6929d..b32fb0d 100644
--- a/net/netfilter/ipvs/Kconfig
+++ b/net/netfilter/ipvs/Kconfig
@@ -162,6 +162,17 @@ config  IP_VS_FO
 	  If you want to compile it in kernel, say Y. To compile it as a
 	  module, choose M here. If unsure, say N.
 
+config  IP_VS_OVF
+	tristate "weighted overflow scheduling"
+	---help---
+	  The weighted overflow scheduling algorithm directs network
+	  connections to the server with the highest weight that is
+	  currently available and overflows to the next when active
+	  connections exceed the node's weight.
+
+	  If you want to compile it in kernel, say Y. To compile it as a
+	  module, choose M here. If unsure, say N.
+
 config	IP_VS_LBLC
 	tristate "locality-based least-connection scheduling"
 	---help---
diff --git a/net/netfilter/ipvs/Makefile b/net/netfilter/ipvs/Makefile
index 38b2723..67f3f43 100644
--- a/net/netfilter/ipvs/Makefile
+++ b/net/netfilter/ipvs/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_IP_VS_WRR) += ip_vs_wrr.o
 obj-$(CONFIG_IP_VS_LC) += ip_vs_lc.o
 obj-$(CONFIG_IP_VS_WLC) += ip_vs_wlc.o
 obj-$(CONFIG_IP_VS_FO) += ip_vs_fo.o
+obj-$(CONFIG_IP_VS_OVF) += ip_vs_ovf.o
 obj-$(CONFIG_IP_VS_LBLC) += ip_vs_lblc.o
 obj-$(CONFIG_IP_VS_LBLCR) += ip_vs_lblcr.o
 obj-$(CONFIG_IP_VS_DH) += ip_vs_dh.o
diff --git a/net/netfilter/ipvs/ip_vs_ovf.c b/net/netfilter/ipvs/ip_vs_ovf.c
new file mode 100644
index 0000000..f7d62c3
--- /dev/null
+++ b/net/netfilter/ipvs/ip_vs_ovf.c
@@ -0,0 +1,86 @@
+/*
+ * IPVS:        Overflow-Connection Scheduling module
+ *
+ * Authors:     Raducu Deaconu <rhadoo_io@yahoo.com>
+ *
+ *              This program is free software; you can redistribute it and/or
+ *              modify it under the terms of the GNU General Public License
+ *              as published by the Free Software Foundation; either version
+ *              2 of the License, or (at your option) any later version.
+ *
+ * Scheduler implements "overflow" load balancing according to the number of
+ * active connections: it keeps all connections on the node with the highest
+ * weight and overflows to the next node if the number of connections exceeds
+ * the node's weight.
+ * Note that this scheduler might not be suitable for UDP because it only uses
+ * active connections.
+ *
+ */
+
+#define KMSG_COMPONENT "IPVS"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+#include <net/ip_vs.h>
+
+/* OVF Connection scheduling  */
+static struct ip_vs_dest *
+ip_vs_ovf_schedule(struct ip_vs_service *svc, const struct sk_buff *skb,
+		   struct ip_vs_iphdr *iph)
+{
+	struct ip_vs_dest *dest, *h = NULL;
+	int hw = 0, w;
+
+	IP_VS_DBG(6, "ip_vs_ovf_schedule(): Scheduling...\n");
+	/* select the node with highest weight, go to next in line if active
+	 * connections exceed weight
+	 */
+	list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
+		w = atomic_read(&dest->weight);
+		if ((dest->flags & IP_VS_DEST_F_OVERLOAD) ||
+		    atomic_read(&dest->activeconns) > w ||
+		    w == 0)
+			continue;
+		if (!h || w > hw) {
+			h = dest;
+			hw = w;
+		}
+	}
+
+	if (h) {
+		IP_VS_DBG_BUF(6, "OVF: server %s:%u active %d w %d\n",
+			      IP_VS_DBG_ADDR(h->af, &h->addr),
+			      ntohs(h->port),
+			      atomic_read(&h->activeconns),
+			      atomic_read(&h->weight));
+		return h;
+	}
+
+	ip_vs_scheduler_err(svc, "no destination available");
+	return NULL;
+}
+
+static struct ip_vs_scheduler ip_vs_ovf_scheduler = {
+	.name =			"ovf",
+	.refcnt =		ATOMIC_INIT(0),
+	.module =		THIS_MODULE,
+	.n_list =		LIST_HEAD_INIT(ip_vs_ovf_scheduler.n_list),
+	.schedule =		ip_vs_ovf_schedule,
+};
+
+static int __init ip_vs_ovf_init(void)
+{
+	return register_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+}
+
+static void __exit ip_vs_ovf_cleanup(void)
+{
+	unregister_ip_vs_scheduler(&ip_vs_ovf_scheduler);
+	synchronize_rcu();
+}
+
+module_init(ip_vs_ovf_init);
+module_exit(ip_vs_ovf_cleanup);
+MODULE_LICENSE("GPL");
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: ovf scheduler
  2015-07-17 11:15                     ` Raducu Deaconu
@ 2015-07-18  1:01                       ` Simon Horman
  0 siblings, 0 replies; 15+ messages in thread
From: Simon Horman @ 2015-07-18  1:01 UTC (permalink / raw)
  To: Raducu Deaconu; +Cc: Julian Anastasov, lvs-devel

On Fri, Jul 17, 2015 at 02:15:27PM +0300, Raducu Deaconu wrote:
> Hi,
> I have added an entry to the changelog.
> Thank you

Thanks.

I have applied this to the ipvs-next tree which
is targeted at inclusion in Linux v4.3.

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2015-07-18  1:01 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-07-13 19:45 ovf scheduler rhadoo.io88
2015-07-13 20:35 ` Julian Anastasov
2015-07-13 22:47   ` rhadoo.io88
2015-07-14  1:36     ` Simon Horman
2015-07-14 14:29       ` rhadoo.io88
2015-07-14 19:11     ` Julian Anastasov
2015-07-16  7:03       ` Raducu Deaconu
2015-07-16  7:56         ` Julian Anastasov
2015-07-16 11:06           ` Raducu Deaconu
2015-07-16 18:27             ` Julian Anastasov
2015-07-17  5:53               ` Raducu Deaconu
2015-07-17  6:16                 ` Julian Anastasov
2015-07-17  6:43                   ` Simon Horman
2015-07-17 11:15                     ` Raducu Deaconu
2015-07-18  1:01                       ` Simon Horman
