From: Ariel Rodriguez
Subject: Re: 4 Traffic classes per Pipe limitation
Date: Fri, 29 Nov 2013 20:33:05 -0200
To: Stephen Hemminger
Cc: dev@dpdk.org

Ok, that gives me the reason I needed. Yes, I could change the number of
bits of, for example, the pipe field, which is 20 bits, but we need around
a million pipes (the telecom operator has a million concurrent
subscribers).

Thank you so much. I have to think about this; for the moment I believe we
will use the 4 traffic classes and group the different protocols into
traffic classes. Maybe later I will ask some questions about traffic
metering.
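By the way, to make the bit budget concrete: as far as I can tell, the
scheduler packs the whole hierarchy into that 32-bit field, roughly like
this (a sketch from reading the code; the exact field order and widths may
differ between DPDK versions):

    struct rte_sched_port_hierarchy {
            uint32_t queue:2;          /* queue within the traffic class (0..3) */
            uint32_t traffic_class:2;  /* traffic class within the pipe (0..3) */
            uint32_t color:2;          /* packet color from metering */
            uint32_t subport:6;        /* subport ID */
            uint32_t pipe:20;          /* pipe ID (0..1048575) */
    };

Since 2^20 = 1048576, the pipe field is already only just large enough for
our million concurrent subscribers, so any extra traffic-class bits would
have to come out of some other field.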
Thank you again, best regards.

Ariel Horacio Rodriguez, Callis Technologies.


On Fri, Nov 29, 2013 at 6:26 PM, Stephen Hemminger wrote:

> On Fri, 29 Nov 2013 17:50:34 -0200
> Ariel Rodriguez wrote:
>
> > Thanks for the answer, your explanation was perfect. Unfortunately,
> > the client requirements are what they are: at the traffic control
> > level we need around 64 traffic metering controllers (traffic
> > classes) per subscriber.
>
> I think you may be confused by the Intel QoS naming. It is better to
> think of it as three different classification levels and not get too
> hung up on the naming.
>
> The way to do what you want is with 64 different 'pipes'.
> In our usage:
>     subport => VLAN
>     pipe => subscriber matched by tuple
>     traffic class => mapping from DSCP to TC
>
> > Each subscriber has a global plan rate (each pipe has the same token
> > bucket configuration), and inside that plan there are different rules
> > for the traffic (traffic classes). For example, Facebook, Twitter and
> > WhatsApp traffic each have plan rates lower than the global plan rate
> > and different from the other protocols. We could group those into one
> > traffic class, but the 4 traffic classes are still a strong
> > limitation for us, because each protocol mapped to a traffic class
> > shares the same configuration (Facebook and Twitter traffic would
> > have the same rate and, worse, compete for the same traffic class
> > rate).
> >
> > We have to compete against Cisco's bandwidth control solution, and at
> > least we need to offer the same features. The Cisco solution is a DPI
> > but also a traffic control solution: it allows prioritization of
> > traffic and manages congestion inside the network per subscriber and
> > per application service. So apparently the DPDK QoS scheduler does
> > not fit our needs.
> >
> > Anyway, I still don't understand the traffic classes limitation.
> > Inside the DPDK code of the QoS scheduler I saw this:
> >
> >     /** Number of queues per pipe traffic class. Cannot be changed. */
> >     #define RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS 4
> >
> > I followed where the code uses that define, and except for the struct
> > rte_sched_port_hierarchy, where a two-bit field (0..3) is mandatory,
> > I don't see where the limitation is (except for performance). It
> > would be worth changing the code to support more than 4 traffic
> > classes; well, I could try to change the code myself, hehe. I just
> > want to know whether there is any limitation other than a design
> > decision behind that number. I don't want to make the effort for
> > nothing; maybe you guys can help me understand the reason for the
> > limitation.
> >
> > I rely heavily on DPDK to feed our DPI solution, and I won't change
> > that because it works great! But it is difficult to develop traffic
> > control management from scratch and integrate it with DPDK in a clean
> > way without touching the DPDK API. You guys have done exactly that
> > with the QoS scheduler, and I don't want to reinvent the wheel.
> >
> > Again, thank you for your patience and your expertise.
>
> The limitation on the number of TCs (and pipes) comes from the number
> of bits available. Since the QoS code overloads the 32-bit RSS field in
> the mbuf, there aren't enough bits for much more. And if you add lots
> of pipes or subports, the memory footprint gets huge anyway.
>
> Since it is open source, you could reduce the number of bits for one
> field and increase another. But having lots of priority classes would
> lead to poor performance and potential starvation.
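> For example, taking two bits from subport would give 16 traffic classes
> per pipe without shrinking pipe below the 20 bits you need for your
> 2^20 = 1048576 subscribers. Something like this (an illustrative,
> untested layout, not what ships in DPDK):
>
>     struct rte_sched_port_hierarchy {
>             uint32_t queue:2;          /* still 4 queues per TC */
>             uint32_t traffic_class:4;  /* 16 TCs per pipe */
>             uint32_t color:2;          /* packet color from metering */
>             uint32_t subport:4;        /* now only 16 subports */
>             uint32_t pipe:20;          /* 2^20 pipes, ~1M subscribers */
>     };
>
> Everything sized by the traffic-class count (the queue arrays, the
> per-TC credits, the WRR state) would have to grow to match, and the
> scheduler's hot path touches all of it, which is where the performance
> cost and the starvation risk come in.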