From: Stephen Hemminger
To: Ariel Rodriguez
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] 4 Traffic classes per Pipe limitation
Date: Fri, 29 Nov 2013 13:26:11 -0800
Message-ID: <20131129132611.2ed0335c@nehalam.linuxnetplumber.net>
References: <3EB4FA525960D640B5BDFFD6A3D891261A5D21F4@IRSMSX102.ger.corp.intel.com>

On Fri, 29 Nov 2013 17:50:34 -0200
Ariel Rodriguez wrote:

> Thanks for the answer; your explanation was perfect. Unfortunately,
> those are the client's requirements: at the traffic-control level we
> need around 64 traffic metering controllers (traffic classes) per
> subscriber.

I think you may be confused by the Intel QoS naming. It is better to
think of it as three different classification levels and not get too
hung up on the names. The way to do what you want is with 64 different
'pipes'.

In our usage:

	subport       => VLAN
	pipe          => subscriber, matched by tuple
	traffic class => mapping from DSCP to TC
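As a rough illustration of that mapping (not code from the tree:
lookup_pipe_by_tuple() and the dscp_to_tc[] table below are placeholders
for your own subscriber/DSCP lookup, and the rte_sched_port_pkt_write()
signature is the one from DPDK releases of this era), the classification
step that feeds the scheduler might look like:

	#include <rte_byteorder.h>
	#include <rte_ether.h>
	#include <rte_ip.h>
	#include <rte_mbuf.h>
	#include <rte_meter.h>
	#include <rte_sched.h>

	/* Placeholder: DSCP (0..63) -> traffic class (0..3) table. */
	static const uint8_t dscp_to_tc[64];

	/* Placeholder: subscriber lookup by tuple; returns a pipe id. */
	static uint32_t
	lookup_pipe_by_tuple(const struct ipv4_hdr *ip)
	{
		(void)ip;
		return 0;
	}

	static void
	classify_pkt(struct rte_mbuf *pkt)
	{
		/* Assumes a VLAN-tagged IPv4 frame, for brevity. */
		struct ether_hdr *eth = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
		struct vlan_hdr *vh = (struct vlan_hdr *)(eth + 1);
		struct ipv4_hdr *ip = (struct ipv4_hdr *)(vh + 1);

		uint32_t subport = rte_be_to_cpu_16(vh->vlan_tci) & 0xfff; /* VLAN id */
		uint32_t pipe = lookup_pipe_by_tuple(ip);       /* subscriber */
		uint32_t tc = dscp_to_tc[ip->type_of_service >> 2];
		uint32_t queue = 0;                             /* queue in TC, 0..3 */

		/* Encodes subport/pipe/tc/queue into the mbuf so that
		 * rte_sched_port_enqueue() can use them later. */
		rte_sched_port_pkt_write(pkt, subport, pipe, tc,
					 queue, e_RTE_METER_GREEN);
	}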
> Each subscriber has a global plan rate (each pipe has the same token
> bucket configuration), and inside that plan there are different rules
> for different kinds of traffic (traffic classes). For example,
> Facebook, Twitter, and WhatsApp traffic each get plan rates lower than
> the global plan rate and different from the other protocols'. We could
> group those into one traffic class, but four traffic classes is still
> a strong limitation for us, because every protocol mapped to the same
> traffic class shares one configuration (Facebook and Twitter traffic
> would have the same rate and, worse, compete for the same
> traffic-class rate).
> We have to compete against Cisco's bandwidth control solution, and
> at a minimum we need to offer the same features. The Cisco solution is
> a DPI but also a traffic control solution; it allows prioritization of
> traffic and manages congestion inside the network per subscriber and
> per application service. So apparently the DPDK QoS scheduler does not
> fit our needs.
> Anyway, I still don't understand the traffic-class limitation.
> Inside the DPDK code of the QoS scheduler I saw this:
>
> /** Number of queues per pipe traffic class. Cannot be changed. */
> #define RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS 4
>
> I followed where the code uses that define, and except for struct
> rte_sched_port_hierarchy, where a two-bit field (0..3) is mandatory,
> I don't see where the limitation is (except for performance). Is it
> worth changing the code to support more than 4 traffic classes? I
> could try to change the code myself (hehe). I just want to know
> whether there is another limitation beyond a design decision behind
> that number; I don't want to make the effort for nothing, and maybe
> you can help me understand the reason for the limitation.
> I rely heavily on DPDK to feed our DPI solution and won't change
> that, because it works great! But it is difficult to develop traffic
> control management from scratch, integrated with DPDK in a clean way,
> without touching the DPDK API. You have done exactly that with the QoS
> scheduler, and I don't want to reinvent the wheel.
> Again, thank you for the patience and for your expertise.

The limitation on the number of TCs (and pipes) comes from the number
of bits available. Since the QoS code overloads the 32-bit RSS field in
the mbuf, there aren't enough bits to go around; and if you add lots of
pipes or subports, the memory footprint gets huge anyway.

Since it is open source, you could reduce the number of bits for one
field and increase another. But having lots of priority classes would
lead to poor performance and potential starvation.
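From memory (field widths may differ slightly in your tree; verify
against rte_sched.c), those 32 bits are carved up along these lines, so
widening traffic_class means narrowing something else:

	/* Overlaid on the 32-bit RSS/hash field of the mbuf; widths are
	 * approximate for the DPDK of this era. */
	struct rte_sched_port_hierarchy {
		uint32_t queue:2;         /* queue within the TC: 0..3     */
		uint32_t traffic_class:2; /* TC within the pipe: 0..3      */
		uint32_t pipe:20;         /* up to ~1M pipes (subscribers) */
		uint32_t subport:6;       /* up to 64 subports             */
		uint32_t color:2;         /* meter color                   */
	};                                /* 2 + 2 + 20 + 6 + 2 = 32 bits  */

	/* A hypothetical repack for 64 TCs has to steal bits elsewhere,
	 * e.g. traffic_class:6 with pipe:16 (only 64K subscribers),
	 * keeping queue:2, subport:6 and color:2 -- still 32 bits in
	 * total, but with the performance and starvation caveats above. */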