Date: Tue, 28 Jun 2016 10:34:19 +0200
From: Nélio Laranjeiro
To: "De Lara Guarch, Pablo"
Cc: "Wang, Zhihong"; "dev@dpdk.org"; "Ananyev, Konstantin"; "Richardson, Bruce"; "thomas.monjalon@6wind.com"
Message-ID: <20160628083419.GT14221@autoinstall.dev.6wind.com>
In-Reply-To: (see References)
Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
List-Id: patches and discussions about DPDK

Hi Pablo,

On Mon, Jun 27, 2016 at 10:36:38PM +0000, De Lara Guarch, Pablo wrote:
> Hi Nelio,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nélio Laranjeiro
> > Sent: Monday, June 27, 2016 3:24 PM
> > To: Wang, Zhihong
> > Cc: dev@dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch,
> > Pablo; thomas.monjalon@6wind.com
> > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
> >
> > On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> > > This patch removes constraints in rxq handling when multiqueue is
> > > enabled, so that all the rxqs are handled.
> > >
> > > Current testpmd forces a dedicated core for each rxq; some rxqs may be
> > > ignored when the core number is less than the rxq number, and that
> > > causes confusion and inconvenience.
> > >
> > > One example: a Red Hat engineer was doing a multiqueue test, with 2
> > > ports in the guest, each with 4 queues, and testpmd used as the
> > > forwarding engine in the guest. As usual he used 1 core for forwarding;
> > > as a result he only saw traffic from port 0 queue 0 to port 1 queue 0.
> > > A lot of emails and quite some time were then spent to root-cause it,
> > > and of course it was caused by this unreasonable testpmd behavior.
> > >
> > > Moreover, even if we understand this behavior, if we want to test the
> > > above case we still need 8 cores for a single guest to poll all the
> > > rxqs, which is obviously too expensive.
> > >
> > > We have met quite a lot of cases like this; one recent example:
> > > http://openvswitch.org/pipermail/dev/2016-June/072110.html
> > >
> > > Signed-off-by: Zhihong Wang
> > > ---
> > >  app/test-pmd/config.c | 8 +-------
> > >  1 file changed, 1 insertion(+), 7 deletions(-)
> > >
> > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > > index ede7c78..4719a08 100644
> > > --- a/app/test-pmd/config.c
> > > +++ b/app/test-pmd/config.c
> > > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
> > >  	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> > >  	cur_fwd_config.nb_fwd_streams =
> > >  		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> > > -	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> > > -		cur_fwd_config.nb_fwd_streams =
> > > -			(streamid_t)cur_fwd_config.nb_fwd_lcores;
> > > -	else
> > > -		cur_fwd_config.nb_fwd_lcores =
> > > -			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
> > >
> > >  	/* reinitialize forwarding streams */
> > >  	init_fwd_streams();
> > >
> > >  	setup_fwd_config_of_each_lcore(&cur_fwd_config);
> > >  	rxp = 0; rxq = 0;
> > > -	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> > > +	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
> > >  		struct fwd_stream *fs;
> > >
> > >  		fs = fwd_streams[lc_id];
> > > --
> > > 2.5.0
> >
> > Hi Zhihong,
> >
> > It seems this commit introduces a bug in pkt_burst_transmit(); it only
> > occurs when the number of cores present in the coremask is greater than
> > the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.
> >
> >   Port 0 Link Up - speed 40000 Mbps - full-duplex
> >   Port 1 Link Up - speed 40000 Mbps - full-duplex
> >   Done
> >   testpmd> start tx_first
> >     io packet forwarding - CRC stripping disabled - packets/burst=64
> >     nb forwarding cores=10 - nb forwarding ports=2
> >     RX queues=4 - RX desc=256 - RX free threshold=0
> >     RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >     TX queues=4 - TX desc=256 - TX free threshold=0
> >     TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >     TX RS bit threshold=0 - TXQ flags=0x0
> >   Segmentation fault (core dumped)
> >
> > If I start testpmd with a coremask with at most as many cores as queues
> > (i.e. coremask=0xff0, or 0xf00), everything works well.
> >
> > Are you able to reproduce the same issue?
> > Note: it only occurs on the dpdk/master branch (commit f2bb7ae1d204).
>
> Thanks for reporting this. I was able to reproduce this issue and
> sent a patch that should fix it. Could you verify it?
> http://dpdk.org/dev/patchwork/patch/14430/

I have tested it and it works; I will add a test report on the
corresponding email.

Thanks.

> Thanks,
> Pablo

Regards,

-- 
Nélio Laranjeiro
6WIND