From: "Wang, Zhihong"
To: Nélio Laranjeiro, "De Lara Guarch, Pablo"
Cc: dev@dpdk.org, "Ananyev, Konstantin", "Richardson, Bruce", thomas.monjalon@6wind.com
Date: Tue, 28 Jun 2016 11:10:34 +0000
Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

Thanks Nelio and Pablo!
> -----Original Message-----
> From: Nélio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Tuesday, June 28, 2016 4:34 PM
> To: De Lara Guarch, Pablo
> Cc: Wang, Zhihong; dev@dpdk.org; Ananyev, Konstantin; Richardson, Bruce; thomas.monjalon@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
>
> Hi Pablo,
>
> On Mon, Jun 27, 2016 at 10:36:38PM +0000, De Lara Guarch, Pablo wrote:
> > Hi Nelio,
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nélio Laranjeiro
> > > Sent: Monday, June 27, 2016 3:24 PM
> > > To: Wang, Zhihong
> > > Cc: dev@dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; thomas.monjalon@6wind.com
> > > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
> > >
> > > On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> > > > This patch removes the constraint in rxq handling when multiqueue is
> > > > enabled, so that all the rxqs are handled.
> > > >
> > > > Current testpmd forces a dedicated core for each rxq; some rxqs may be
> > > > ignored when the core count is less than the rxq count, which causes
> > > > confusion and inconvenience.
> > > >
> > > > One example: a Red Hat engineer was running a multiqueue test with 2
> > > > ports in the guest, each with 4 queues, using testpmd as the forwarding
> > > > engine in the guest. As usual he used 1 core for forwarding, and as a
> > > > result he only saw traffic from port 0 queue 0 to port 1 queue 0. A lot
> > > > of emails and quite some time were spent root-causing it, and of course
> > > > it was caused by this unreasonable testpmd behavior.
> > > >
> > > > Moreover, even when this behavior is understood, testing the above
> > > > case still needs 8 cores for a single guest to poll all the rxqs,
> > > > which is obviously too expensive.
> > > > We met quite a lot of cases like this; one recent example:
> > > > http://openvswitch.org/pipermail/dev/2016-June/072110.html
> > > >
> > > > Signed-off-by: Zhihong Wang
> > > > ---
> > > >  app/test-pmd/config.c | 8 +-------
> > > >  1 file changed, 1 insertion(+), 7 deletions(-)
> > > >
> > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > > > index ede7c78..4719a08 100644
> > > > --- a/app/test-pmd/config.c
> > > > +++ b/app/test-pmd/config.c
> > > > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
> > > >  	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> > > >  	cur_fwd_config.nb_fwd_streams =
> > > >  		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> > > > -	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> > > > -		cur_fwd_config.nb_fwd_streams =
> > > > -			(streamid_t)cur_fwd_config.nb_fwd_lcores;
> > > > -	else
> > > > -		cur_fwd_config.nb_fwd_lcores =
> > > > -			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
> > > >
> > > >  	/* reinitialize forwarding streams */
> > > >  	init_fwd_streams();
> > > >
> > > >  	setup_fwd_config_of_each_lcore(&cur_fwd_config);
> > > >  	rxp = 0; rxq = 0;
> > > > -	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> > > > +	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
> > > >  		struct fwd_stream *fs;
> > > >
> > > >  		fs = fwd_streams[lc_id];
> > > > --
> > > > 2.5.0
> > >
> > > Hi Zhihong,
> > >
> > > It seems this commit introduces a bug in pkt_burst_transmit(); it only
> > > occurs when the number of cores present in the coremask is greater than
> > > the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.
> > >
> > >   Port 0 Link Up - speed 40000 Mbps - full-duplex
> > >   Port 1 Link Up - speed 40000 Mbps - full-duplex
> > >   Done
> > >   testpmd> start tx_first
> > >     io packet forwarding - CRC stripping disabled - packets/burst=64
> > >     nb forwarding cores=10 - nb forwarding ports=2
> > >     RX queues=4 - RX desc=256 - RX free threshold=0
> > >     RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> > >     TX queues=4 - TX desc=256 - TX free threshold=0
> > >     TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> > >     TX RS bit threshold=0 - TXQ flags=0x0
> > >   Segmentation fault (core dumped)
> > >
> > > If I start testpmd with a coremask with at most as many cores as queues,
> > > everything works well (i.e. coremask=0xff0, or 0xf00).
> > >
> > > Are you able to reproduce the same issue?
> > > Note: it only occurs on the dpdk/master branch (commit f2bb7ae1d204).
> >
> > Thanks for reporting this. I was able to reproduce this issue and
> > sent a patch that should fix it. Could you verify it?
> > http://dpdk.org/dev/patchwork/patch/14430/
>
> I have tested it, it works, and I will add a test report to the
> corresponding email.
>
> Thanks
>
> > Thanks,
> > Pablo
>
> Regards,
>
> --
> Nélio Laranjeiro
> 6WIND
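
[Editorial postscript: a minimal standalone sketch of the stream/lcore
arithmetic discussed in this thread. The variable names mirror testpmd's
fields, but the block assignment policy is a simplification, not the real
setup_fwd_config_of_each_lcore() logic.]

    /* Sketch: one forwarding stream per (port, rxq) pair, split across
     * lcores in contiguous blocks. Simplified stand-in for testpmd. */
    #include <stdio.h>

    int main(void)
    {
    	unsigned int nb_fwd_ports = 2;     /* the guest case above */
    	unsigned int nb_q = 4;             /* --rxq=4              */
    	unsigned int nb_fwd_lcores = 1;    /* one forwarding core  */
    	unsigned int nb_fwd_streams = nb_q * nb_fwd_ports;

    	/* Before the patch, nb_fwd_streams was clamped to nb_fwd_lcores,
    	 * so a single core polled only (port 0, rxq 0). After the patch
    	 * all 8 streams exist and the single core polls every queue. */
    	unsigned int per_lcore =
    		(nb_fwd_streams + nb_fwd_lcores - 1) / nb_fwd_lcores;

    	for (unsigned int sm_id = 0; sm_id < nb_fwd_streams; sm_id++)
    		printf("stream %u (rx port %u, rxq %u) -> lcore %u\n",
    		       sm_id, sm_id % nb_fwd_ports, sm_id / nb_fwd_ports,
    		       sm_id / per_lcore);
    	return 0;
    }

Run with nb_fwd_lcores = 10 and the same 8 streams, the last two lcores
get no stream at all, which is consistent with the crash scenario Nélio
reports above (more cores in the coremask than queues).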