DPDK patches and discussions
From: "Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>
To: Zhihong Wang <zhihong.wang@intel.com>
Cc: dev@dpdk.org, konstantin.ananyev@intel.com,
	bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com,
	thomas.monjalon@6wind.com
Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
Date: Mon, 27 Jun 2016 16:23:40 +0200	[thread overview]
Message-ID: <20160627142340.GO14221@autoinstall.dev.6wind.com> (raw)
In-Reply-To: <1465945686-142094-5-git-send-email-zhihong.wang@intel.com>

On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> This patch removes the constraint in rxq handling when multiqueue is
> enabled, so that all rxqs are handled.
> 
> Currently testpmd forces a dedicated core for each rxq, so some rxqs may
> be ignored when the core number is less than the rxq number, which causes
> confusion and inconvenience.
> 
> One example: a Red Hat engineer was running a multiqueue test with 2
> ports in the guest, each with 4 queues, and testpmd as the forwarding
> engine in the guest. As usual he used 1 core for forwarding, and as a
> result he only saw traffic from port 0 queue 0 to port 1 queue 0. A lot
> of emails and quite some time were spent root-causing it, and of course
> it was caused by this unreasonable testpmd behavior.
> 
> Moreover, even once this behavior is understood, testing the above case
> still requires 8 cores in a single guest to poll all the rxqs, which is
> obviously too expensive.
> 
> We have met quite a lot of cases like this; one recent example:
> http://openvswitch.org/pipermail/dev/2016-June/072110.html
> 
> 
> Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
> ---
>  app/test-pmd/config.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index ede7c78..4719a08 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
>  	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
>  	cur_fwd_config.nb_fwd_streams =
>  		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> -	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
> -		cur_fwd_config.nb_fwd_streams =
> -			(streamid_t)cur_fwd_config.nb_fwd_lcores;
> -	else
> -		cur_fwd_config.nb_fwd_lcores =
> -			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
>  
>  	/* reinitialize forwarding streams */
>  	init_fwd_streams();
>  
>  	setup_fwd_config_of_each_lcore(&cur_fwd_config);
>  	rxp = 0; rxq = 0;
> -	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> +	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
>  		struct fwd_stream *fs;
>  
>  		fs = fwd_streams[lc_id];
> -- 
> 2.5.0

Hi Zhihong,

It seems this commit introduces a bug in pkt_burst_transmit(). It only
occurs when the number of cores present in the coremask is greater than
the number of queues, e.g. coremask=0xffe --txq=4 --rxq=4.

  Port 0 Link Up - speed 40000 Mbps - full-duplex
  Port 1 Link Up - speed 40000 Mbps - full-duplex
  Done
  testpmd> start tx_first
    io packet forwarding - CRC stripping disabled - packets/burst=64
    nb forwarding cores=10 - nb forwarding ports=2
    RX queues=4 - RX desc=256 - RX free threshold=0
    RX threshold registers: pthresh=0 hthresh=0 wthresh=0
    TX queues=4 - TX desc=256 - TX free threshold=0
    TX threshold registers: pthresh=0 hthresh=0 wthresh=0
    TX RS bit threshold=0 - TXQ flags=0x0
  Segmentation fault (core dumped)


If I start testpmd with a coremask containing at most as many cores as
queues, everything works fine (e.g. coremask=0xff0 or 0xf00).

Are you able to reproduce the same issue?
Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204).

Regards,

-- 
Nélio Laranjeiro
6WIND


Thread overview: 54+ messages
2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang
2016-05-05 22:46 ` [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding Zhihong Wang
2016-05-25  9:32   ` Thomas Monjalon
2016-05-26  2:40     ` Wang, Zhihong
2016-05-26  6:27       ` Thomas Monjalon
2016-05-26  9:24         ` Wang, Zhihong
2016-05-05 22:46 ` [dpdk-dev] [PATCH 2/6] testpmd: configurable tx_first burst number Zhihong Wang
2016-05-25  9:35   ` Thomas Monjalon
2016-05-26  2:53     ` Wang, Zhihong
2016-05-26  6:31       ` Thomas Monjalon
2016-05-26  9:31         ` Wang, Zhihong
2016-05-05 22:46 ` [dpdk-dev] [PATCH 3/6] testpmd: show throughput in port stats Zhihong Wang
2016-05-05 22:46 ` [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup Zhihong Wang
2016-05-25  9:42   ` Thomas Monjalon
2016-05-26  2:55     ` Wang, Zhihong
2016-06-03  9:22       ` Wang, Zhihong
2016-05-05 22:47 ` [dpdk-dev] [PATCH 5/6] testpmd: show topology at forwarding start Zhihong Wang
2016-05-25  9:45   ` Thomas Monjalon
2016-05-26  2:56     ` Wang, Zhihong
2016-05-05 22:47 ` [dpdk-dev] [PATCH 6/6] testpmd: update documentation Zhihong Wang
2016-05-25  9:48   ` Thomas Monjalon
2016-05-26  2:54     ` Wang, Zhihong
2016-05-20  8:54 ` [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Wang, Zhihong
2016-05-25  9:27 ` Thomas Monjalon
2016-06-01  3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang
2016-06-01  3:27   ` [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option Zhihong Wang
2016-06-07  9:28     ` De Lara Guarch, Pablo
2016-06-08  1:29       ` Wang, Zhihong
2016-06-01  3:27   ` [dpdk-dev] [PATCH v2 2/5] testpmd: configurable tx_first burst number Zhihong Wang
2016-06-07  9:43     ` De Lara Guarch, Pablo
2016-06-01  3:27   ` [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats Zhihong Wang
2016-06-07 10:02     ` De Lara Guarch, Pablo
2016-06-08  1:31       ` Wang, Zhihong
2016-06-01  3:27   ` [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang
2016-06-07 10:29     ` De Lara Guarch, Pablo
2016-06-08  1:28       ` Wang, Zhihong
2016-06-01  3:27   ` [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start Zhihong Wang
2016-06-07 10:56     ` De Lara Guarch, Pablo
2016-06-14 15:13     ` De Lara Guarch, Pablo
2016-06-15  7:05       ` Wang, Zhihong
2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang
2016-06-14 23:08   ` [dpdk-dev] [PATCH v3 1/5] testpmd: add retry option Zhihong Wang
2016-06-14 23:08   ` [dpdk-dev] [PATCH v3 2/5] testpmd: configurable tx_first burst number Zhihong Wang
2016-06-14 23:08   ` [dpdk-dev] [PATCH v3 3/5] testpmd: show throughput in port stats Zhihong Wang
2016-06-14 23:08   ` [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang
2016-06-27 14:23     ` Nélio Laranjeiro [this message]
2016-06-27 22:36       ` De Lara Guarch, Pablo
2016-06-28  8:34         ` Nélio Laranjeiro
2016-06-28 11:10           ` Wang, Zhihong
2016-06-14 23:08   ` [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start Zhihong Wang
2016-06-16 11:09     ` De Lara Guarch, Pablo
2016-06-16 13:33       ` Thomas Monjalon
2016-06-15 10:04   ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility De Lara Guarch, Pablo
2016-06-16 14:36     ` Thomas Monjalon
