DPDK patches and discussions
From: "Xing, Beilei" <beilei.xing@intel.com>
To: Take Ceara <dumitru.ceara@gmail.com>,
	"Zhang, Helin" <helin.zhang@intel.com>
Cc: "Wu, Jingjing" <jingjing.wu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
Date: Tue, 19 Jul 2016 09:31:55 +0000	[thread overview]
Message-ID: <94479800C636CB44BD422CB454846E013B5A15@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <CAKKV4w__9GZtXZY6BP9ZbQ0K6KRqX6pvqLu01yBKWRd2ERpZ0Q@mail.gmail.com>

Hi Ceara,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Take Ceara
> Sent: Tuesday, July 19, 2016 12:14 AM
> To: Zhang, Helin <helin.zhang@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
> NICs for some RX mbuf sizes
> 
> Hi Helin,
> 
> On Mon, Jul 18, 2016 at 5:15 PM, Zhang, Helin <helin.zhang@intel.com>
> wrote:
> > Hi Ceara
> >
> > Could you help to let me know your firmware version?
> 
> # ethtool -i p7p1 | grep firmware
> firmware-version: f4.40.35115 a1.4 n4.53 e2021
> 
> > And could you help to try with the standard DPDK example application,
> such as testpmd, to see if there is the same issue?
> > Basically we always set the same size for both rx and tx buffer, like the
> default one of 2048 for a lot of applications.
> 
> I'm a bit lost in the testpmd CLI. I enabled RSS, configured 2 RX queues per
> port, and started sending traffic with single-segment packets of size 2K, but I
> didn't figure out how to actually verify that the RSS hash is correctly set.
> Please let me know if I should do it in a different way.
> 
> testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 2048 -i [...]
> 
> testpmd> port stop all
> Stopping ports...
> Checking link statuses...
> Port 0 Link Up - speed 40000 Mbps - full-duplex
> Port 1 Link Up - speed 40000 Mbps - full-duplex
> Done
> 
> testpmd> port config all txq 2
> 
> testpmd> port config all rss all
> 
> testpmd> port config all max-pkt-len 2048
> testpmd> port start all
> Configuring Port 0 (socket 0)
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
> satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: i40e_set_tx_function(): Vector tx finally be used.
> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=0).
> Port 0: 3C:FD:FE:9D:BE:F0
> Configuring Port 1 (socket 0)
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
> satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
> PMD: i40e_set_tx_function(): Vector tx finally be used.
> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=1).
> Port 1: 3C:FD:FE:9D:BF:30
> Checking link statuses...
> Port 0 Link Up - speed 40000 Mbps - full-duplex
> Port 1 Link Up - speed 40000 Mbps - full-duplex
> Done
> 
> testpmd> set txpkts 2048
> testpmd> show config txpkts
> Number of segments: 1
> Segment sizes: 2048
> Split packet: off
> 
> 
> testpmd> start tx_first
>   io packet forwarding - CRC stripping disabled - packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=2
>   RX queues=1 - RX desc=128 - RX free threshold=32

In testpmd, RSS is effectively disabled when only one RX queue is configured (note "RX queues=1" above), so could you re-configure with more than one RX queue and try again with testpmd?
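For example, a minimal sequence along those lines (the queue count of 2 is just illustrative) would be:

    testpmd> port stop all
    testpmd> port config all rxq 2
    testpmd> port config all txq 2
    testpmd> port start all
    testpmd> set verbose 1
    testpmd> start

With verbose mode on, testpmd prints per-packet receive information, including the RSS hash value and the RX queue each packet was steered to, which should let you confirm whether the hash is being set.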

Regards,
Beilei

>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>   TX queues=2 - TX desc=512 - TX free threshold=32
>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>   TX RS bit threshold=32 - TXQ flags=0xf01
> testpmd> stop
> Telling cores to stop...
> Waiting for lcores to finish...
> 
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>   ----------------------------------------------------------------------------
> 
>   ---------------------- Forward statistics for port 1  ----------------------
>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>   ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
>   RX-packets: 64             RX-dropped: 0             RX-total: 64
>   TX-packets: 64             TX-dropped: 0             TX-total: 64
>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> Done.
> testpmd>
> 
> 
> >
> > Definitely we will try to reproduce that issue with testpmd, using 2K
> > mbufs. Hopefully we can find the root cause, or tell you that's not an issue.
> >
> 
> I forgot to mention that in my test code the TX/RX_MBUF_SIZE macros also
> include the mbuf headroom and the size of the mbuf structure.
> Therefore testing with 2K mbufs in my scenario actually creates mempools of
> objects of size 2K + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM.
> 
> > Thank you very much for your reporting!
> >
> > BTW, dev@dpdk.org (rather than users@dpdk.org) is the right list for
> > sending questions/issues like this.
> 
> Thanks, I'll keep that in mind.
> 
> >
> > Regards,
> > Helin
> 
> Regards,
> Dumitru
> 

Thread overview: 15+ messages
     [not found] <CAKKV4w9uoN_X=0DKJHgcAHT7VCmeBHP=WrHfi+12o3ogA6htSQ@mail.gmail.com>
2016-07-18 15:15 ` Zhang, Helin
2016-07-18 16:14   ` Take Ceara
2016-07-19  9:31     ` Xing, Beilei [this message]
2016-07-19 14:58       ` Take Ceara
2016-07-20  1:59         ` Xing, Beilei
2016-07-21 10:58           ` Take Ceara
2016-07-22  9:04             ` Xing, Beilei
2016-07-22 12:31               ` Take Ceara
2016-07-22 12:35                 ` Take Ceara
2016-07-25  3:24                 ` Xing, Beilei
2016-07-25 10:04                   ` Take Ceara
2016-07-26  8:38                     ` Take Ceara
2016-07-26  8:47                       ` Zhang, Helin
2016-07-26  8:57                         ` Take Ceara
2016-07-26  9:23 Ananyev, Konstantin
