From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 30 Sep 2017 17:33:56 +0000 (UTC)
From: Manoj Mallawaarachchi
To: "longtb5@viettel.com.vn", "users@dpdk.org", Cristian Dumitrescu
Message-ID: <1550329615.1084836.1506792836645@mail.yahoo.com>
Subject: Re: [dpdk-users] IP Pipeline QoS
List-Id: DPDK usage discussions

Hi Cristian & BL,

Thanks for the detailed feedback; I am also exploring similar work. Can you please elaborate on how to plug the QoS scheduler into the pipeline with QinQ? I am not clearly understanding the following:

encap = ethernet_qinq
qinq_sched = test
ip_hdr_offset = 270

Can you elaborate more on:

1) How to configure the QoS scheduler for a pipeline in the configuration file, for the scenario described in item #3.
2) I have a single network (192.168.1.x); how will QinQ work in this scenario with the QoS scheduler and the edge_router_downstream pipeline?
3) I am going to use my QoS app as a forwarding gateway between the local network and the Internet and back (Internet browsing).

Your advice on how to move forward is highly appreciated.

Thank you,
Manoj M

--------------------------------------------
On Fri, 9/29/17, Dumitrescu, Cristian wrote:

Subject: Re: [dpdk-users] IP Pipeline QoS
To: "longtb5@viettel.com.vn", "users@dpdk.org"
Date: Friday, September 29, 2017, 9:30 PM

Hi BL,

My answers inline below:

> -----Original Message-----
> From: longtb5@viettel.com.vn [mailto:longtb5@viettel.com.vn]
> Sent: Saturday, September 23, 2017 9:00 AM
> To: users@dpdk.org
> Cc: Dumitrescu, Cristian
> Subject: IP Pipeline QoS
>
> Hi,
> I am trying to build a QoS/traffic management application using the packet
> framework. The initial goal is to be able to configure traffic flow for up to 1000
> users, *individually*, through the front-end cmdline.

Makes sense. You can map each subscriber/user to its own pipe (the L3 node in the scheduling hierarchy), which basically gives each subscriber 16 queues, split into 4 traffic classes.

> Atm I'm looking at ip_pipeline's edge_router_downstream sample and the
> qos_sched app as starting points.
Yes, these are good starting points.

> I have a few questions:
>
> 1. The traffic management pipeline in edge_router_downstream.cfg is
> configured as follows:
>
> [PIPELINE2]
> type = PASS-THROUGH
> pktq_in = SWQ0 SWQ1 SWQ2 SWQ3 TM0 TM1 TM2 TM3
> pktq_out = TM0 TM1 TM2 TM3 SWQ4 SWQ5 SWQ6 SWQ7
>
> I'm not exactly sure how this works. My thinking is that since this is a pass-through
> table with no action, the output of SWQ0 gets connected
> to the input of TM0, and the output of TM0 gets connected to the input of SWQ4,
> effectively routing SWQ0 to SWQ4 through TM0. Is that correct?

Yes, you got it right.

>
> 2. If that's the case, why don't we do it this way:
>
> [PIPELINE1]
> type = ROUTING
> pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
> pktq_out = TM0 TM1 TM2 TM3 SINK0
>
> [PIPELINE2]
> type = PASS-THROUGH
> pktq_in = TM0 TM1 TM2 TM3
> pktq_out = TM0 TM1 TM2 TM3
>
> [PIPELINE3]
> type = PASS-THROUGH
> pktq_in = TM0 TM1 TM2 TM3
> pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
>
> In other words, why do we need SWQs in this case? (And in general, what is
> the typical use of SWQs?)
>

Great question!

First, I think what you are suggesting looks more like the configuration below, as we need a single producer and a single consumer for each TM, right?

[PIPELINE1]
type = ROUTING
pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = TM0 TM1 TM2 TM3 SINK0

[PIPELINE2]
type = PASS-THROUGH
pktq_in = TM0 TM1 TM2 TM3
pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0

Second, this approach only works when both pipelines run on the same (logical) CPU core, as the rte_sched object underlying the TM port has the restriction that enqueue() and dequeue() for the same port must be executed by the same thread. So eliminating the SWQs is actually dangerous, as you might later decide to move the two pipelines to different CPU cores (which can be done quickly through the ip_pipeline config file).
Keeping the SWQs allows the TMs to be treated as objects internal to their pipeline, hence better encapsulation.

Third, what is the benefit of saving some SWQs? If the pipelines are on different CPU cores, the SWQs are a must for thread safety. If the pipelines are on the same CPU core, the SWQ producer and consumer are the same thread, so the SWQ enqueue/dequeue overhead is very small (an L1 cache read/write), and eliminating them does not provide any real performance benefit.

Makes sense?

> 3. I understand the fast/slow table copy mechanism for querying/updating
> _tables_ through the front end. How should I go about querying/updating
> pipe profiles, which are part of TM _ports_ if I'm not mistaken? For example,
> to get/set the rate of tc 0 of pipe profile 0.
> Put another way, how can I configure tm_profile.cfg interactively through
> the CLI?
> Is it even possible to configure TMs on the fly like that?
>

Yes, it is possible to make on-the-fly updates to the TM configuration. This is done by re-invoking the rte_sched_subport_config() / rte_sched_pipe_config() functions after TM init has completed.

Unfortunately, we don't have the CLI commands for this yet in the ip_pipeline application, so you would have to write them yourself (straightforward).

> Thanks,
> BL

Regards,
Cristian
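
For reference on question 3: the per-pipe TM parameters live in a separate TM profile file. Below is a minimal sketch of such a file; the section and key names are modeled on the DPDK qos_sched example's profile.cfg format, and every numeric value is an illustrative placeholder only, not a recommendation:

; Hypothetical TM profile fragment (qos_sched-style profile.cfg format).
; All rates are in bytes per second, tb size in bytes, tc period in ms.
[port]
frame overhead = 24
number of subports per port = 1
number of pipes per subport = 4096
queue sizes = 64 64 64 64

[subport 0]
tb rate = 1250000000
tb size = 1000000
tc 0 rate = 1250000000
tc 1 rate = 1250000000
tc 2 rate = 1250000000
tc 3 rate = 1250000000
tc period = 10
pipe 0-4095 = 0          ; all pipes start on pipe profile 0

[pipe profile 0]
tb rate = 305175
tb size = 1000000
tc 0 rate = 305175
tc 1 rate = 305175
tc 2 rate = 305175
tc 3 rate = 305175
tc period = 40
tc 0 wrr weights = 1 1 1 1
tc 1 wrr weights = 1 1 1 1
tc 2 wrr weights = 1 1 1 1
tc 3 wrr weights = 1 1 1 1

An on-the-fly update then remaps a pipe to a different pre-loaded profile by re-invoking rte_sched_pipe_config() with the new profile index, as described above; changing subport-level rates goes through rte_sched_subport_config().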