From: "De Lara Guarch, Pablo"
To: Nélio Laranjeiro, "Wang, Zhihong"
CC: "dev@dpdk.org", "Ananyev, Konstantin", "Richardson, Bruce", "thomas.monjalon@6wind.com"
Date: Mon, 27 Jun 2016 22:36:38 +0000
Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

Hi Nelio,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nélio Laranjeiro
> Sent: Monday, June 27, 2016 3:24 PM
> To: Wang, Zhihong
> Cc: dev@dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch,
> Pablo; thomas.monjalon@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
>
> On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> > This patch removes the constraint in rxq handling when multiqueue is
> > enabled, so that all rxqs are handled.
> >
> > Currently testpmd forces a dedicated core for each rxq, so some rxqs
> > may be ignored when there are fewer cores than rxqs, which causes
> > confusion and inconvenience.
> >
> > One example: a Red Hat engineer was doing a multiqueue test with 2
> > ports in the guest, each with 4 queues, and testpmd as the forwarding
> > engine in the guest. As usual he used 1 core for forwarding, and as a
> > result he only saw traffic from port 0 queue 0 to port 1 queue 0. A
> > lot of emails and quite some time were spent root-causing it, and of
> > course it was caused by this unreasonable testpmd behavior.
> >
> > Moreover, even if we understand this behavior, if we want to test the
> > above case we still need 8 cores for a single guest to poll all the
> > rxqs, which is obviously too expensive.
> >
> > We have met quite a lot of cases like this; one recent example:
> > http://openvswitch.org/pipermail/dev/2016-June/072110.html
> >
> > Signed-off-by: Zhihong Wang
> > ---
> >  app/test-pmd/config.c | 8 +-------
> >  1 file changed, 1 insertion(+), 7 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index ede7c78..4719a08 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
> >  	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> >  	cur_fwd_config.nb_fwd_streams =
> >  		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> > -	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> > -		cur_fwd_config.nb_fwd_streams =
> > -			(streamid_t)cur_fwd_config.nb_fwd_lcores;
> > -	else
> > -		cur_fwd_config.nb_fwd_lcores =
> > -			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
> >
> >  	/* reinitialize forwarding streams */
> >  	init_fwd_streams();
> >
> >  	setup_fwd_config_of_each_lcore(&cur_fwd_config);
> >  	rxp = 0; rxq = 0;
> > -	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> > +	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
> >  		struct fwd_stream *fs;
> >
> >  		fs = fwd_streams[lc_id];
> > --
> > 2.5.0
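For context, the effect of the change is roughly the following. Below is a minimal, standalone sketch of the stream setup once the lcore clamp is removed (illustrative only; the real testpmd code uses streamid_t/queueid_t and the cur_fwd_config/fwd_streams globals, which are not reproduced here). One forwarding stream is created per (port, rxq) pair, so every rxq gets polled even when there are fewer forwarding cores than queues:

#include <stdio.h>

int main(void)
{
	/* Assumed topology: 2 ports with 4 rxqs each, as in the example above. */
	const unsigned nb_fwd_ports = 2;
	const unsigned nb_q = 4;
	const unsigned nb_fwd_streams = nb_q * nb_fwd_ports;

	unsigned rxp = 0, rxq = 0;

	for (unsigned sm_id = 0; sm_id < nb_fwd_streams; sm_id++) {
		/* Stream sm_id polls (rxp, rxq); queues advance first, then ports. */
		printf("stream %u -> rx port %u, rx queue %u\n", sm_id, rxp, rxq);
		if (++rxq == nb_q) {
			rxq = 0;
			rxp++;
		}
	}
	return 0;
}

With 1 forwarding core this still yields all 8 streams, now serviced by that single core, instead of the old behavior of silently dropping every stream but the first.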
>
> Hi Zhihong,
>
> It seems this commit introduces a bug in pkt_burst_transmit(); it only
> occurs when the number of cores present in the coremask is greater than
> the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.
>
>  Port 0 Link Up - speed 40000 Mbps - full-duplex
>  Port 1 Link Up - speed 40000 Mbps - full-duplex
>  Done
>  testpmd> start tx_first
>    io packet forwarding - CRC stripping disabled - packets/burst=64
>    nb forwarding cores=10 - nb forwarding ports=2
>    RX queues=4 - RX desc=256 - RX free threshold=0
>    RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>    TX queues=4 - TX desc=256 - TX free threshold=0
>    TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>    TX RS bit threshold=0 - TXQ flags=0x0
>  Segmentation fault (core dumped)
>
> If I start testpmd with a coremask containing at most as many cores as
> queues, everything works well (i.e. coremask=0xff0 or 0xf00).
>
> Are you able to reproduce the same issue?
> Note: it only occurs on the dpdk/master branch (commit f2bb7ae1d204).

Thanks for reporting this. I was able to reproduce the issue and have sent
a patch that should fix it. Could you verify it?
http://dpdk.org/dev/patchwork/patch/14430/

Thanks,
Pablo

>
> Regards,
>
> --
> Nélio Laranjeiro
> 6WIND
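For anyone else hitting this: the mismatch is between nb_fwd_streams (2 ports x 4 rxqs = 8) and nb_fwd_lcores (10 with coremask=0xffe). The sketch below is illustrative only, and is not the actual fix in the patchwork link above; it shows how a naive contiguous split of streams over lcores leaves some lcores with zero streams, a case any per-lcore forwarding setup now has to tolerate:

#include <stdio.h>

int main(void)
{
	/* Numbers from the report above: 2 ports * 4 rxqs, coremask=0xffe. */
	const unsigned nb_fwd_streams = 8;
	const unsigned nb_fwd_lcores = 10;

	unsigned sm_id = 0;

	for (unsigned lc_id = 0; lc_id < nb_fwd_lcores; lc_id++) {
		/* Contiguous share of the remaining streams for this lcore. */
		unsigned nb = (nb_fwd_streams - sm_id) / (nb_fwd_lcores - lc_id);

		printf("lcore %u: streams [%u..%u) -> %u stream(s)\n",
		       lc_id, sm_id, sm_id + nb, nb);
		sm_id += nb;
	}
	return 0;
}

Running it with these numbers shows the first two lcores receiving no streams at all, which is the kind of edge case the setup code has to handle once the lcore clamp is gone.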