From mboxrd@z Thu Jan 1 00:00:00 1970
From: Take Ceara
Date: Tue, 19 Jul 2016 16:58:54 +0200
To: "Xing, Beilei"
Cc: "Zhang, Helin", "Wu, Jingjing", dev@dpdk.org
Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
 NICs for some RX mbuf sizes
In-Reply-To: <94479800C636CB44BD422CB454846E013B5A15@SHSMSX101.ccr.corp.intel.com>
References: <94479800C636CB44BD422CB454846E013B5A15@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Hi Beilei,

On Tue, Jul 19, 2016 at 11:31 AM, Xing, Beilei wrote:
> Hi Ceara,
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Take Ceara
>> Sent: Tuesday, July 19, 2016 12:14 AM
>> To: Zhang, Helin
>> Cc: Wu, Jingjing; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
>> NICs for some RX mbuf sizes
>>
>> Hi Helin,
>>
>> On Mon, Jul 18, 2016 at 5:15 PM, Zhang, Helin wrote:
>> > Hi Ceara
>> >
>> > Could you help to let me know your firmware version?
>>
>> # ethtool -i p7p1 | grep firmware
>> firmware-version: f4.40.35115 a1.4 n4.53 e2021
>>
>> > And could you help to try with the standard DPDK example application,
>> > such as testpmd, to see if there is the same issue?
>> > Basically we always set the same size for both rx and tx buffer, like
>> > the default one of 2048 for a lot of applications.
>>
>> I'm a bit lost in the testpmd CLI.
>> I enabled RSS, configured 2 RX queues per port and started sending
>> traffic with single segment packets of size 2K, but I didn't figure out
>> how to actually verify that the RSS hash is correctly set.
>> Please let me know if I should do it in a different way.
>>
>> testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 2048 -i [...]
>>
>> testpmd> port stop all
>> Stopping ports...
>> Checking link statuses...
>> Port 0 Link Up - speed 40000 Mbps - full-duplex
>> Port 1 Link Up - speed 40000 Mbps - full-duplex
>> Done
>>
>> testpmd> port config all txq 2
>>
>> testpmd> port config all rss all
>>
>> testpmd> port config all max-pkt-len 2048
>> testpmd> port start all
>> Configuring Port 0 (socket 0)
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
>> satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
>> PMD: i40e_set_tx_function(): Vector tx finally be used.
>> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=0).
>> Port 0: 3C:FD:FE:9D:BE:F0
>> Configuring Port 1 (socket 0)
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
>> satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
>> PMD: i40e_set_tx_function(): Vector tx finally be used.
>> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=1).
>> Port 1: 3C:FD:FE:9D:BF:30
>> Checking link statuses...
>> Port 0 Link Up - speed 40000 Mbps - full-duplex
>> Port 1 Link Up - speed 40000 Mbps - full-duplex
>> Done
>>
>> testpmd> set txpkts 2048
>> testpmd> show config txpkts
>> Number of segments: 1
>> Segment sizes: 2048
>> Split packet: off
>>
>> testpmd> start tx_first
>>   io packet forwarding - CRC stripping disabled - packets/burst=32
>>   nb forwarding cores=1 - nb forwarding ports=2
>>   RX queues=1 - RX desc=128 - RX free threshold=32
>
> In testpmd, when RX queues=1, RSS will be disabled, so could you
> re-configure rx queue (>1) and try again with testpmd?

I changed the way I run testpmd to:

testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 1152 --rss-ip --rxq=2 --txpkts 1024 -i

As far as I understand, this will allocate mbufs with the same size I was
using in my test (--mbuf-size seems to include the mbuf headroom, therefore
1152 = 1024 + 128 headroom).
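For reference, a minimal sketch of the pool setup that size corresponds to
when an application creates the mbuf pool directly (the pool name, element
count and cache size below are arbitrary illustration values, not taken from
testpmd):

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

/*
 * One mbuf data room = headroom + the largest payload expected per segment.
 * With the default RTE_PKTMBUF_HEADROOM of 128 bytes and 1024-byte payloads
 * this gives the 1152 passed to --mbuf-size above.
 */
#define PAYLOAD_SIZE   1024
#define DATA_ROOM_SIZE (PAYLOAD_SIZE + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *
create_rx_pool(void)
{
        /* 8192 mbufs with a 256-entry per-lcore cache: illustration values. */
        return rte_pktmbuf_pool_create("rx_pool", 8192, 256,
                                       0 /* priv_size */, DATA_ROOM_SIZE,
                                       rte_socket_id());
}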
testpmd> start tx_first
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=2 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01

testpmd> show port stats all

  ######################## NIC statistics for port 0 ########################
  RX-packets: 18817613   RX-missed: 5   RX-bytes: 19269115888
  RX-errors: 0
  RX-nombuf: 0
  TX-packets: 18818064   TX-errors: 0   TX-bytes: 19269567464
  ############################################################################

  ######################## NIC statistics for port 1 ########################
  RX-packets: 18818392   RX-missed: 5   RX-bytes: 19269903360
  RX-errors: 0
  RX-nombuf: 0
  TX-packets: 18817979   TX-errors: 0   TX-bytes: 19269479424
  ############################################################################

Traffic is sent/received. However, I couldn't find any way to verify that
the incoming mbufs actually have the mbuf->hash.rss field set, except for
starting testpmd with gdb and setting a breakpoint in the io fwd engine.
After doing that I noticed that none of the incoming packets has the
PKT_RX_RSS_HASH flag set in ol_flags... I guess for some reason testpmd
doesn't actually configure RSS in this case, but I fail to see where.

Thanks,
Dumitru

>
> Regards,
> Beilei
>
>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>   TX queues=2 - TX desc=512 - TX free threshold=32
>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>   TX RS bit threshold=32 - TXQ flags=0xf01
>> testpmd> stop
>> Telling cores to stop...
>> Waiting for lcores to finish...
>>
>>   ---------------------- Forward statistics for port 0 ----------------------
>>   RX-packets: 32    RX-dropped: 0    RX-total: 32
>>   TX-packets: 32    TX-dropped: 0    TX-total: 32
>>   ----------------------------------------------------------------------------
>>
>>   ---------------------- Forward statistics for port 1 ----------------------
>>   RX-packets: 32    RX-dropped: 0    RX-total: 32
>>   TX-packets: 32    TX-dropped: 0    TX-total: 32
>>   ----------------------------------------------------------------------------
>>
>>   +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
>>   RX-packets: 64    RX-dropped: 0    RX-total: 64
>>   TX-packets: 64    TX-dropped: 0    TX-total: 64
>>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>
>> Done.
>> testpmd>
>>
>> >
>> > Definitely we will try to reproduce that issue with testpmd, with using
>> > 2K mbufs. Hopefully we can find the root cause, or tell you that's not
>> > an issue.
>> >
>>
>> I forgot to mention that in my test code the TX/RX_MBUF_SIZE macros also
>> include the mbuf headroom and the size of the mbuf structure.
>> Therefore, testing with 2K mbufs in my scenario actually creates mempools
>> of objects of size 2K + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM.
>>
>> > Thank you very much for your reporting!
>> >
>> > BTW, dev@dpdk.org should be the right one to replace users@dpdk.org,
>> > for sending questions/issues like this.
>>
>> Thanks, I'll keep that in mind.
>>
>> >
>> > Regards,
>> > Helin
>>
>> Regards,
>> Dumitru

-- 
Dumitru Ceara
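P.S. In case it helps anyone checking the hash without gdb: a minimal
sketch (not testpmd code; the function name, port/queue ids and burst size
are made up for illustration) of how the RSS hash of received packets can
be inspected from an application's RX path:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Drain one RX queue and report whether the NIC filled in the RSS hash. */
static void
dump_rx_rss(uint8_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *pkts[32];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
        uint16_t i;

        for (i = 0; i < nb_rx; i++) {
                if (pkts[i]->ol_flags & PKT_RX_RSS_HASH)
                        printf("port %u queue %u: rss=0x%08x\n",
                               port_id, queue_id, pkts[i]->hash.rss);
                else
                        printf("port %u queue %u: PKT_RX_RSS_HASH not set\n",
                               port_id, queue_id);
                rte_pktmbuf_free(pkts[i]);
        }
}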