DPDK patches and discussions
* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
       [not found] <CAKKV4w9uoN_X=0DKJHgcAHT7VCmeBHP=WrHfi+12o3ogA6htSQ@mail.gmail.com>
@ 2016-07-18 15:15 ` Zhang, Helin
  2016-07-18 16:14   ` Take Ceara
  0 siblings, 1 reply; 15+ messages in thread
From: Zhang, Helin @ 2016-07-18 15:15 UTC (permalink / raw)
  To: Take Ceara; +Cc: Wu, Jingjing, dev

Hi Ceara,

Could you let me know your firmware version?
Also, could you try the same test with a standard DPDK example application, such as testpmd, to see whether the issue reproduces there?
Basically we always set the same size for both RX and TX buffers, such as the default of 2048 used by many applications.

We will definitely try to reproduce the issue with testpmd using 2K mbufs. Hopefully we can find the root cause, or confirm that it is not an issue.

Thank you very much for your report!

BTW, dev@dpdk.org, rather than users@dpdk.org, is the right list for questions/issues like this.

Regards,
Helin

> -----Original Message-----
> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> Sent: Monday, July 18, 2016 4:03 PM
> To: users@dpdk.org
> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX
> mbuf sizes
> 
> Hi,
> 
> Is there any known issue regarding the i40e DPDK driver when having RSS
> hashing enabled in DPDK 16.04?
> I've noticed that for some specific receive mbuf sizes the RSS hash is always set
> to 0 for incoming packets.
> 
> I have a setup with two XL710 ports connected back to back. The simple test
> program below sends fixed TCP packets from port 0 to port 1. The
> L5 payload is added in the packet in such a way that the packet consumes exactly
> one TX mbuf. For some values of the RX mbuf size the incoming mbuf has the
> hash.rss == 0 even though the PKT_RX_RSS_HASH flag is set in ol_flags. In my
> code the TX/RX mbuf sizes are controlled by the RX_MBUF_SIZE and
> TX_MBUF_SIZE macros.
> 
> As an example, with some of the following TX/RX sizes the assert that checks if
> the RSS hash is non-zero fails and with the other it passes:
> 
> RX_MBUF_SIZE  TX_MBUF_SIZE assert
> =================================
> 1024          1024         fail
> 1025          1024         ok
> 1024          2048         fail
> 2048          2048         fail
> 2048          2047         fail
> 2049          2048         ok
> 
> On the same setup I have another loopback connection between two 82599ES
> 10G NICs and when I run exactly the same test the RSS hash is always correct in
> all cases.
> 
> $ $RTE_SDK/tools/dpdk_nic_bind.py -s
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> drv=igb_uio unused=
> 0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> drv=igb_uio unused=
> 0000:82:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=igb_uio unused=
> 0000:83:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=igb_uio unused=
> 
> The command line I use for running the test on the 40G NICs is:
> 
> ./build/test -c 0x1 -n 4 -m 1024 -w 0000:82:00.0 -w 0000:83:00.0
> 
> Thanks,
> Dumitru Ceara
> 
> #include <stdbool.h>
> #include <stdint.h>
> #include <assert.h>
> #include <unistd.h>
> 
> #include <rte_ethdev.h>
> #include <rte_timer.h>
> #include <rte_ip.h>
> #include <rte_tcp.h>
> #include <rte_udp.h>
> #include <rte_errno.h>
> #include <rte_arp.h>
> 
> #define MBUF_SIZE(frag_size) \
>     ((frag_size) + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> 
> #define RX_MBUF_SIZE MBUF_SIZE(RTE_MBUF_DEFAULT_DATAROOM)
> #define TX_MBUF_SIZE MBUF_SIZE(RTE_MBUF_DEFAULT_DATAROOM)
> 
> #define MBUF_CACHE 512
> #define MBUF_COUNT 1024
> 
> static struct rte_mempool *rx_mpool;
> static struct rte_mempool *tx_mpool;
> 
> #define PORT_MAX_MTU 9198
> 
> #define L5_GET_LEN(pkt) (rte_pktmbuf_tailroom((pkt)))
> 
> #define PORT0  0
> #define PORT1  1
> #define QUEUE0 0
> #define Q_CNT  1
> 
> 
> struct rte_eth_conf default_port_config = {
>     .rxmode = {
>         .mq_mode        = ETH_MQ_RX_RSS,
>         .max_rx_pkt_len = PORT_MAX_MTU,
>         .split_hdr_size = 0,
>         .header_split   = 0, /**< Header Split disabled */
>         .hw_ip_checksum = 1, /**< IP checksum offload enabled */
>         .hw_vlan_filter = 0, /**< VLAN filtering disabled */
>         .jumbo_frame    = 1, /**< Jumbo Frame Support enabled */
>         .hw_strip_crc   = 0, /**< CRC not stripped by hardware */
>     },
>     .rx_adv_conf = {
>         .rss_conf = {
>             .rss_key = NULL,
>             .rss_key_len = 0,
>             .rss_hf = ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP |
> ETH_RSS_NONFRAG_IPV4_UDP,
>         },
>     },
>     .txmode = {
>         .mq_mode = ETH_MQ_TX_NONE,
>     }
> };
> 
> struct rte_eth_rxconf rx_conf = {
>     .rx_thresh = {
>         .pthresh = 8,
>         .hthresh = 8,
>         .wthresh = 4,
>     },
>     .rx_free_thresh = 64,
>     .rx_drop_en = 0
> };
> 
> struct rte_eth_txconf tx_conf = {
>     .tx_thresh = {
>         .pthresh = 36,
>         .hthresh = 0,
>         .wthresh = 0,
>     },
>     .tx_free_thresh = 64,
>     .tx_rs_thresh = 32,
> };
> 
> static void port_setup(uint32_t port)
> {
>     uint32_t queue;
>     int ret;
> 
>     assert(rte_eth_dev_configure(port, Q_CNT, Q_CNT,
>                                  &default_port_config) == 0);
>     for (queue = 0; queue < Q_CNT; queue++) {
>         ret = rte_eth_rx_queue_setup(port, queue, 128, SOCKET_ID_ANY,
>                                      &rx_conf,
>                                      rx_mpool);
>         assert(ret == 0);
>         ret = rte_eth_tx_queue_setup(port, queue, 128, SOCKET_ID_ANY,
>                                      &tx_conf);
>         assert(ret == 0);
>     }
> 
>     assert(rte_eth_dev_start(port) == 0);
> }
> 
> #define HDRS_SIZE                   \
>         (sizeof(struct ether_hdr) + \
>          sizeof(struct ipv4_hdr) +  \
>          sizeof(struct tcp_hdr))
> 
> static struct rte_mbuf *get_tcp_pkt(uint16_t eth_port) {
>     struct rte_mbuf  *pkt;
>     struct ether_hdr *eth_hdr;
>     struct ipv4_hdr  *ip_hdr;
>     struct tcp_hdr   *tcp_hdr;
>     uint32_t          ip_hdr_len = sizeof(*ip_hdr);
>     uint32_t          tcp_hdr_len = sizeof(*tcp_hdr);
>     uint32_t          l5_len;
> 
>     assert(pkt = rte_pktmbuf_alloc(tx_mpool));
> 
>     pkt->port = eth_port;
>     pkt->l2_len = sizeof(*eth_hdr);
> 
>     RTE_LOG(ERR, USER1, "1:head = %d, tail = %d, len = %d\n",
>             rte_pktmbuf_headroom(pkt), rte_pktmbuf_tailroom(pkt),
>             rte_pktmbuf_pkt_len(pkt));
> 
>     /* Reserve space for ETH + IP + TCP Headers.
>      * Store how much tailroom we have.
>      */
>     eth_hdr = (struct ether_hdr *)rte_pktmbuf_append(pkt, HDRS_SIZE);
>     assert(eth_hdr);
>     l5_len = L5_GET_LEN(pkt);
> 
>     /* ETH Header. */
>     rte_eth_macaddr_get(PORT0, &eth_hdr->s_addr);
>     rte_eth_macaddr_get(PORT1, &eth_hdr->d_addr);
>     eth_hdr->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
> 
>     /* IP Header. */
>     ip_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
>     ip_hdr->version_ihl = (4 << 4) | (ip_hdr_len >> 2);
>     ip_hdr->type_of_service = 0;
>     ip_hdr->total_length = rte_cpu_to_be_16(ip_hdr_len + tcp_hdr_len +
> l5_len);
>     ip_hdr->packet_id = 0;
>     ip_hdr->fragment_offset = rte_cpu_to_be_16(0);
>     ip_hdr->time_to_live = 60;
>     ip_hdr->next_proto_id = IPPROTO_TCP;
>     ip_hdr->src_addr = rte_cpu_to_be_32(0x01010101);
>     ip_hdr->dst_addr = rte_cpu_to_be_32(0x01010101);
>     ip_hdr->hdr_checksum = rte_cpu_to_be_16(0);
> 
>     pkt->l3_len = ip_hdr_len;
>     pkt->ol_flags |= PKT_TX_IP_CKSUM;
> 
>     /* TCP Header. */
>     tcp_hdr = (struct tcp_hdr *)(ip_hdr + 1);
>     tcp_hdr->src_port = rte_cpu_to_be_16(0x42);
>     tcp_hdr->dst_port = rte_cpu_to_be_16(0x24);
>     tcp_hdr->sent_seq = rte_cpu_to_be_32(0x1234);
>     tcp_hdr->recv_ack = rte_cpu_to_be_32(0x1234);
>     tcp_hdr->data_off = tcp_hdr_len >> 2 << 4;
>     tcp_hdr->tcp_flags = TCP_FIN_FLAG;
>     tcp_hdr->rx_win = rte_cpu_to_be_16(0xffff);
>     tcp_hdr->tcp_urp = rte_cpu_to_be_16(0);
> 
>     pkt->ol_flags |= PKT_TX_TCP_CKSUM | PKT_TX_IPV4;
>     pkt->l4_len = tcp_hdr_len;
> 
>     tcp_hdr->cksum = 0;
>     tcp_hdr->cksum = rte_ipv4_phdr_cksum(ip_hdr, pkt->ol_flags);
> 
>     /* Add Payload. */
>     assert(rte_pktmbuf_append(pkt, l5_len));
> 
>     RTE_LOG(ERR, USER1, "2:head = %d, tail = %d, len = %d\n",
>             rte_pktmbuf_headroom(pkt), rte_pktmbuf_tailroom(pkt),
>             rte_pktmbuf_pkt_len(pkt));
> 
>     return pkt;
> }
> 
> int main(int argc, char **argv)
> {
>     struct rte_mbuf *tx_mbuf[3];
> 
>     rte_eal_init(argc, argv);
> 
>     rx_mpool = rte_mempool_create("rx_mpool", MBUF_COUNT,
> RX_MBUF_SIZE,
>                                   0,
>                                   sizeof(struct rte_pktmbuf_pool_private),
>                                   rte_pktmbuf_pool_init, NULL,
>                                   rte_pktmbuf_init, NULL,
>                                   SOCKET_ID_ANY,
>                                   0);
> 
>     tx_mpool = rte_mempool_create("tx_mpool", MBUF_COUNT,
> TX_MBUF_SIZE,
>                                   0,
>                                   sizeof(struct rte_pktmbuf_pool_private),
>                                   rte_pktmbuf_pool_init, NULL,
>                                   rte_pktmbuf_init, NULL,
>                                   SOCKET_ID_ANY,
>                                   0);
> 
>     assert(rx_mpool && tx_mpool);
> 
>     port_setup(PORT0);
>     port_setup(PORT1);
> 
>     for (;;) {
>         uint16_t no_rx_buffers;
>         uint16_t i;
>         struct rte_mbuf *rx_pkts[16];
> 
>         tx_mbuf[0] = get_tcp_pkt(PORT0);
>         assert(rte_eth_tx_burst(PORT0, QUEUE0, tx_mbuf, 1) == 1);
> 
>         no_rx_buffers = rte_eth_rx_burst(PORT1, QUEUE0, rx_pkts, 16);
>         for (i = 0; i < no_rx_buffers; i++) {
>             RTE_LOG(ERR, USER1, "RX RSS HASH: %8lX %4X\n",
>                     rx_pkts[i]->ol_flags,
>                     rx_pkts[i]->hash.rss);
> 
>             assert(rx_pkts[i]->ol_flags == PKT_RX_RSS_HASH);
>             assert(rx_pkts[i]->hash.rss != 0);
> 
>             rte_pktmbuf_free(rx_pkts[i]);
>         }
>     }
> 
>     return 0;
> }

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-18 15:15 ` [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes Zhang, Helin
@ 2016-07-18 16:14   ` Take Ceara
  2016-07-19  9:31     ` Xing, Beilei
  0 siblings, 1 reply; 15+ messages in thread
From: Take Ceara @ 2016-07-18 16:14 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: Wu, Jingjing, dev

Hi Helin,

On Mon, Jul 18, 2016 at 5:15 PM, Zhang, Helin <helin.zhang@intel.com> wrote:
> Hi Ceara
>
> Could you help to let me know your firmware version?

# ethtool -i p7p1 | grep firmware
firmware-version: f4.40.35115 a1.4 n4.53 e2021

> And could you help to try with the standard DPDK example application, such as testpmd, to see if there is the same issue?
> Basically we always set the same size for both rx and tx buffer, like the default one of 2048 for a lot of applications.

I'm a bit lost in the testpmd CLI. I enabled RSS, configured 2 RX
queues per port and started sending traffic with single-segment
packets of size 2K, but I didn't figure out how to actually verify
that the RSS hash is correctly set. Please let me know if I should do
it in a different way.

testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 2048 -i
[...]

testpmd> port stop all
Stopping ports...
Checking link statuses...
Port 0 Link Up - speed 40000 Mbps - full-duplex
Port 1 Link Up - speed 40000 Mbps - full-duplex
Done

testpmd> port config all txq 2

testpmd> port config all rss all

testpmd> port config all max-pkt-len 2048
testpmd> port start all
Configuring Port 0 (socket 0)
PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
satisfied. Rx Burst Bulk Alloc function will be used on port=0,
queue=0.
PMD: i40e_set_tx_function(): Vector tx finally be used.
PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=0).
Port 0: 3C:FD:FE:9D:BE:F0
Configuring Port 1 (socket 0)
PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
satisfied. Rx Burst Bulk Alloc function will be used on port=1,
queue=0.
PMD: i40e_set_tx_function(): Vector tx finally be used.
PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=1).
Port 1: 3C:FD:FE:9D:BF:30
Checking link statuses...
Port 0 Link Up - speed 40000 Mbps - full-duplex
Port 1 Link Up - speed 40000 Mbps - full-duplex
Done

testpmd> set txpkts 2048
testpmd> show config txpkts
Number of segments: 1
Segment sizes: 2048
Split packet: off


testpmd> start tx_first
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=1 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=2 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 32             RX-dropped: 0             RX-total: 32
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 32             RX-dropped: 0             RX-total: 32
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 64             RX-dropped: 0             RX-total: 64
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd>


>
> Definitely we will try to reproduce that issue with testpmd, with using 2K mbufs. Hopefully we can find the root cause, or tell you that's not an issue.
>

I forgot to mention that in my test code the TX/RX_MBUF_SIZE macros
also include the mbuf headroom and the size of the mbuf structure.
Therefore testing with 2K mbufs in my scenario actually creates
mempools of objects of size 2K + sizeof(struct rte_mbuf) +
RTE_PKTMBUF_HEADROOM.

> Thank you very much for your reporting!
>
> BTW, dev@dpdk.org should be the right one to replace users@dpdk.org, for sending questions/issues like this.

Thanks, I'll keep that in mind.

>
> Regards,
> Helin

Regards,
Dumitru

>
>> -----Original Message-----
>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>> Sent: Monday, July 18, 2016 4:03 PM
>> To: users@dpdk.org
>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
>> Subject: [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX
>> mbuf sizes
>>
>> Hi,
>>
>> Is there any known issue regarding the i40e DPDK driver when having RSS
>> hashing enabled in DPDK 16.04?
>> I've noticed that for some specific receive mbuf sizes the RSS hash is always set
>> to 0 for incoming packets.
>>
>> I have a setup with two XL710 ports connected back to back. The simple test
>> program below sends fixed TCP packets from port 0 to port 1. The
>> L5 payload is added in the packet in such a way that the packet consumes exactly
>> one TX mbuf. For some values of the RX mbuf size the incoming mbuf has the
>> hash.rss == 0 even though the PKT_RX_RSS_HASH flag is set in ol_flags. In my
>> code the TX/RX mbuf sizes are controlled by the RX_MBUF_SIZE and
>> TX_MBUF_SIZE macros.
>>
>> As an example, with some of the following TX/RX sizes the assert that checks if
>> the RSS hash is non-zero fails and with the other it passes:
>>
>> RX_MBUF_SIZE  TX_MBUF_SIZE assert
>> =================================
>> 1024          1024         fail
>> 1025          1024         ok
>> 1024          2048         fail
>> 2048          2048         fail
>> 2048          2047         fail
>> 2049          2048         ok
>>
>> On the same setup I have another loopback connection between two 82599ES
>> 10G NICs and when I run exactly the same test the RSS hash is always correct in
>> all cases.
>>
>> $ $RTE_SDK/tools/dpdk_nic_bind.py -s
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=igb_uio unused=
>> 0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=igb_uio unused=
>> 0000:82:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=igb_uio unused=
>> 0000:83:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=igb_uio unused=
>>
>> The command line I use for running the test on the 40G NICs is:
>>
>> ./build/test -c 0x1 -n 4 -m 1024 -w 0000:82:00.0 -w 0000:83:00.0
>>
>> Thanks,
>> Dumitru Ceara
>>
>> #include <stdbool.h>
>> #include <stdint.h>
>> #include <assert.h>
>> #include <unistd.h>
>>
>> #include <rte_ethdev.h>
>> #include <rte_timer.h>
>> #include <rte_ip.h>
>> #include <rte_tcp.h>
>> #include <rte_udp.h>
>> #include <rte_errno.h>
>> #include <rte_arp.h>
>>
>> #define MBUF_SIZE(frag_size) \
>>     ((frag_size) + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>>
>> #define RX_MBUF_SIZE MBUF_SIZE(RTE_MBUF_DEFAULT_DATAROOM)
>> #define TX_MBUF_SIZE MBUF_SIZE(RTE_MBUF_DEFAULT_DATAROOM)
>>
>> #define MBUF_CACHE 512
>> #define MBUF_COUNT 1024
>>
>> static struct rte_mempool *rx_mpool;
>> static struct rte_mempool *tx_mpool;
>>
>> #define PORT_MAX_MTU 9198
>>
>> #define L5_GET_LEN(pkt) (rte_pktmbuf_tailroom((pkt)))
>>
>> #define PORT0  0
>> #define PORT1  1
>> #define QUEUE0 0
>> #define Q_CNT  1
>>
>>
>> struct rte_eth_conf default_port_config = {
>>     .rxmode = {
>>         .mq_mode        = ETH_MQ_RX_RSS,
>>         .max_rx_pkt_len = PORT_MAX_MTU,
>>         .split_hdr_size = 0,
>>         .header_split   = 0, /**< Header Split disabled */
>>         .hw_ip_checksum = 1, /**< IP checksum offload enabled */
>>         .hw_vlan_filter = 0, /**< VLAN filtering disabled */
>>         .jumbo_frame    = 1, /**< Jumbo Frame Support disabled */
>>         .hw_strip_crc   = 0, /**< CRC stripped by hardware */
>>     },
>>     .rx_adv_conf = {
>>         .rss_conf = {
>>             .rss_key = NULL,
>>             .rss_key_len = 0,
>>             .rss_hf = ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP |
>> ETH_RSS_NONFRAG_IPV4_UDP,
>>         },
>>     },
>>     .txmode = {
>>         .mq_mode = ETH_MQ_TX_NONE,
>>     }
>> };
>>
>> struct rte_eth_rxconf rx_conf = {
>>     .rx_thresh = {
>>         .pthresh = 8,
>>         .hthresh = 8,
>>         .wthresh = 4,
>>     },
>>     .rx_free_thresh = 64,
>>     .rx_drop_en = 0
>> };
>>
>> struct rte_eth_txconf tx_conf = {
>>     .tx_thresh = {
>>         .pthresh = 36,
>>         .hthresh = 0,
>>         .wthresh = 0,
>>     },
>>     .tx_free_thresh = 64,
>>     .tx_rs_thresh = 32,
>> };
>>
>> static void port_setup(uint32_t port)
>> {
>>     uint32_t queue;
>>     int ret;
>>
>>     assert(rte_eth_dev_configure(port, Q_CNT, Q_CNT,
>>                                  &default_port_config) == 0);
>>     for (queue = 0; queue < Q_CNT; queue++) {
>>         ret = rte_eth_rx_queue_setup(port, queue, 128, SOCKET_ID_ANY,
>>                                      &rx_conf,
>>                                      rx_mpool);
>>         assert(ret == 0);
>>         ret = rte_eth_tx_queue_setup(port, queue, 128, SOCKET_ID_ANY,
>>                                      &tx_conf);
>>         assert(ret == 0);
>>     }
>>
>>     assert(rte_eth_dev_start(port) == 0); }
>>
>> #define HDRS_SIZE                   \
>>         (sizeof(struct ether_hdr) + \
>>          sizeof(struct ipv4_hdr) +  \
>>          sizeof(struct tcp_hdr))
>>
>> static struct rte_mbuf *get_tcp_pkt(uint16_t eth_port) {
>>     struct rte_mbuf  *pkt;
>>     struct ether_hdr *eth_hdr;
>>     struct ipv4_hdr  *ip_hdr;
>>     struct tcp_hdr   *tcp_hdr;
>>     uint32_t          ip_hdr_len = sizeof(*ip_hdr);
>>     uint32_t          tcp_hdr_len = sizeof(*tcp_hdr);
>>     uint32_t          l5_len;
>>
>>     assert(pkt = rte_pktmbuf_alloc(tx_mpool));
>>
>>     pkt->port = eth_port;
>>     pkt->l2_len = sizeof(*eth_hdr);
>>
>>     RTE_LOG(ERR, USER1, "1:head = %d, tail = %d, len = %d\n",
>>             rte_pktmbuf_headroom(pkt), rte_pktmbuf_tailroom(pkt),
>>             rte_pktmbuf_pkt_len(pkt));
>>
>>     /* Reserve space for ETH + IP + TCP Headers.
>>      * Store how much tailroom we have.
>>      */
>>     eth_hdr = (struct ether_hdr *)rte_pktmbuf_append(pkt, HDRS_SIZE);
>>     assert(eth_hdr);
>>     l5_len = L5_GET_LEN(pkt);
>>
>>     /* ETH Header. */
>>     rte_eth_macaddr_get(PORT0, &eth_hdr->s_addr);
>>     rte_eth_macaddr_get(PORT1, &eth_hdr->d_addr);
>>     eth_hdr->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
>>
>>     /* IP Header. */
>>     ip_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
>>     ip_hdr->version_ihl = (4 << 4) | (ip_hdr_len >> 2);
>>     ip_hdr->type_of_service = 0;
>>     ip_hdr->total_length = rte_cpu_to_be_16(ip_hdr_len + tcp_hdr_len +
>> l5_len);
>>     ip_hdr->packet_id = 0;
>>     ip_hdr->fragment_offset = rte_cpu_to_be_16(0);
>>     ip_hdr->time_to_live = 60;
>>     ip_hdr->next_proto_id = IPPROTO_TCP;
>>     ip_hdr->src_addr = rte_cpu_to_be_32(0x01010101);
>>     ip_hdr->dst_addr = rte_cpu_to_be_32(0x01010101);
>>     ip_hdr->hdr_checksum = rte_cpu_to_be_16(0);
>>
>>     pkt->l3_len = ip_hdr_len;
>>     pkt->ol_flags |= PKT_TX_IP_CKSUM;
>>
>>     /* TCP Header. */
>>     tcp_hdr = (struct tcp_hdr *)(ip_hdr + 1);
>>     tcp_hdr->src_port = rte_cpu_to_be_16(0x42);
>>     tcp_hdr->dst_port = rte_cpu_to_be_16(0x24);
>>     tcp_hdr->sent_seq = rte_cpu_to_be_32(0x1234);
>>     tcp_hdr->recv_ack = rte_cpu_to_be_32(0x1234);
>>     tcp_hdr->data_off = tcp_hdr_len >> 2 << 4;
>>     tcp_hdr->tcp_flags = TCP_FIN_FLAG;
>>     tcp_hdr->rx_win = rte_cpu_to_be_16(0xffff);
>>     tcp_hdr->tcp_urp = rte_cpu_to_be_16(0);
>>
>>     pkt->ol_flags |= PKT_TX_TCP_CKSUM | PKT_TX_IPV4;
>>     pkt->l4_len = tcp_hdr_len;
>>
>>     tcp_hdr->cksum = 0;
>>     tcp_hdr->cksum = rte_ipv4_phdr_cksum(ip_hdr, pkt->ol_flags);
>>
>>     /* Add Payload. */
>>     assert(rte_pktmbuf_append(pkt, l5_len));
>>
>>     RTE_LOG(ERR, USER1, "1:head = %d, tail = %d, len = %d\n",
>>             rte_pktmbuf_headroom(pkt), rte_pktmbuf_tailroom(pkt),
>>             rte_pktmbuf_pkt_len(pkt));
>>
>>     return pkt;
>> }
>>
>> int main(int argc, char **argv)
>> {
>>     struct rte_mbuf *tx_mbuf[3];
>>
>>     rte_eal_init(argc, argv);
>>
>>     rx_mpool = rte_mempool_create("rx_mpool", MBUF_COUNT,
>> RX_MBUF_SIZE,
>>                                   0,
>>                                   sizeof(struct rte_pktmbuf_pool_private),
>>                                   rte_pktmbuf_pool_init, NULL,
>>                                   rte_pktmbuf_init, NULL,
>>                                   SOCKET_ID_ANY,
>>                                   0);
>>
>>     tx_mpool = rte_mempool_create("tx_mpool", MBUF_COUNT,
>> TX_MBUF_SIZE,
>>                                   0,
>>                                   sizeof(struct rte_pktmbuf_pool_private),
>>                                   rte_pktmbuf_pool_init, NULL,
>>                                   rte_pktmbuf_init, NULL,
>>                                   SOCKET_ID_ANY,
>>                                   0);
>>
>>     assert(rx_mpool && tx_mpool);
>>
>>     port_setup(PORT0);
>>     port_setup(PORT1);
>>
>>     for (;;) {
>>         uint16_t no_rx_buffers;
>>         uint16_t i;
>>         struct rte_mbuf *rx_pkts[16];
>>
>>         tx_mbuf[0] = get_tcp_pkt(PORT0);
>>         assert(rte_eth_tx_burst(PORT0, QUEUE0, tx_mbuf, 1) == 1);
>>
>>         no_rx_buffers = rte_eth_rx_burst(PORT1, QUEUE0, rx_pkts, 16);
>>         for (i = 0; i < no_rx_buffers; i++) {
>>             RTE_LOG(ERR, USER1, "RX RSS HASH: %8lX %4X\n",
>>                     rx_pkts[i]->ol_flags,
>>                     rx_pkts[i]->hash.rss);
>>
>>             assert(rx_pkts[i]->ol_flags == PKT_RX_RSS_HASH);
>>             assert(rx_pkts[i]->hash.rss != 0);
>>
>>             rte_pktmbuf_free(rx_pkts[i]);
>>         }
>>     }
>>
>>     return 0;
>> }



-- 
Dumitru Ceara


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-18 16:14   ` Take Ceara
@ 2016-07-19  9:31     ` Xing, Beilei
  2016-07-19 14:58       ` Take Ceara
  0 siblings, 1 reply; 15+ messages in thread
From: Xing, Beilei @ 2016-07-19  9:31 UTC (permalink / raw)
  To: Take Ceara, Zhang, Helin; +Cc: Wu, Jingjing, dev

Hi Ceara,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Take Ceara
> Sent: Tuesday, July 19, 2016 12:14 AM
> To: Zhang, Helin <helin.zhang@intel.com>
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
> NICs for some RX mbuf sizes
> 
> Hi Helin,
> 
> On Mon, Jul 18, 2016 at 5:15 PM, Zhang, Helin <helin.zhang@intel.com>
> wrote:
> > Hi Ceara
> >
> > Could you help to let me know your firmware version?
> 
> # ethtool -i p7p1 | grep firmware
> firmware-version: f4.40.35115 a1.4 n4.53 e2021
> 
> > And could you help to try with the standard DPDK example application,
> such as testpmd, to see if there is the same issue?
> > Basically we always set the same size for both rx and tx buffer, like the
> default one of 2048 for a lot of applications.
> 
> I'm a bit lost in the testpmd CLI. I enabled RSS, configured 2 RX queues per
> port and started sending traffic with single segmnet packets of size 2K but I
> didn't figure out how to actually verify that the RSS hash is correctly set..
> Please let me know if I should do it in a different way.
> 
> testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 2048 -i [...]
> 
> testpmd> port stop all
> Stopping ports...
> Checking link statuses...
> Port 0 Link Up - speed 40000 Mbps - full-duplex Port 1 Link Up - speed 40000
> Mbps - full-duplex Done
> 
> testpmd> port config all txq 2
> 
> testpmd> port config all rss all
> 
> testpmd> port config all max-pkt-len 2048 port start all
> Configuring Port 0 (socket 0)
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
> satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: i40e_set_tx_function(): Vector tx finally be used.
> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=0).
> Port 0: 3C:FD:FE:9D:BE:F0
> Configuring Port 1 (socket 0)
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
> satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
> PMD: i40e_set_tx_function(): Vector tx finally be used.
> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=1).
> Port 1: 3C:FD:FE:9D:BF:30
> Checking link statuses...
> Port 0 Link Up - speed 40000 Mbps - full-duplex Port 1 Link Up - speed 40000
> Mbps - full-duplex Done
> 
> testpmd> set txpkts 2048
> testpmd> show config txpkts
> Number of segments: 1
> Segment sizes: 2048
> Split packet: off
> 
> 
> testpmd> start tx_first
>   io packet forwarding - CRC stripping disabled - packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=2
>   RX queues=1 - RX desc=128 - RX free threshold=32

In testpmd, RSS is disabled when RX queues=1, so could you reconfigure the number of RX queues to be greater than 1 and try again with testpmd?

Regards,
Beilei
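[Editor's note: one way to follow Beilei's suggestion is to set the RX queue count while the ports are stopped, then restart forwarding. The commands below are a sketch based on the testpmd CLI of that period and should be checked against your testpmd version:]

```
testpmd> port stop all
testpmd> port config all rxq 2
testpmd> port config all txq 2
testpmd> port config all rss all
testpmd> port start all
testpmd> start tx_first
```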

>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>   TX queues=2 - TX desc=512 - TX free threshold=32
>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>   TX RS bit threshold=32 - TXQ flags=0xf01
> testpmd> stop
> Telling cores to stop...
> Waiting for lcores to finish...
> 
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>   ----------------------------------------------------------------------------
> 
>   ---------------------- Forward statistics for port 1  ----------------------
>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>   ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
>   RX-packets: 64             RX-dropped: 0             RX-total: 64
>   TX-packets: 64             TX-dropped: 0             TX-total: 64
> 
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> ++++++++++++++++++
> 
> Done.
> testpmd>
> 
> 
> >
> > Definitely we will try to reproduce that issue with testpmd, with using 2K
> mbufs. Hopefully we can find the root cause, or tell you that's not an issue.
> >
> 
> I forgot to mention that in my test code the TX/RX_MBUF_SIZE macros also
> include the mbuf headroom and the size of the mbuf structure.
> Therefore testing with 2K mbufs in my scenario actually creates mempools of
> objects of size 2K + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM.
> 
> > Thank you very much for your reporting!
> >
> > BTW, dev@dpdk.org should be the right one to replace users@dpdk.org,
> for sending questions/issues like this.
> 
> Thanks, I'll keep that in mind.
> 
> >
> > Regards,
> > Helin
> 
> Regards,
> Dumitru
> 


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-19  9:31     ` Xing, Beilei
@ 2016-07-19 14:58       ` Take Ceara
  2016-07-20  1:59         ` Xing, Beilei
  0 siblings, 1 reply; 15+ messages in thread
From: Take Ceara @ 2016-07-19 14:58 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi Beilei,

On Tue, Jul 19, 2016 at 11:31 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
> Hi Ceara,
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Take Ceara
>> Sent: Tuesday, July 19, 2016 12:14 AM
>> To: Zhang, Helin <helin.zhang@intel.com>
>> Cc: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
>> NICs for some RX mbuf sizes
>>
>> Hi Helin,
>>
>> On Mon, Jul 18, 2016 at 5:15 PM, Zhang, Helin <helin.zhang@intel.com>
>> wrote:
>> > Hi Ceara
>> >
>> > Could you help to let me know your firmware version?
>>
>> # ethtool -i p7p1 | grep firmware
>> firmware-version: f4.40.35115 a1.4 n4.53 e2021
>>
>> > And could you help to try with the standard DPDK example application,
>> such as testpmd, to see if there is the same issue?
>> > Basically we always set the same size for both rx and tx buffer, like the
>> default one of 2048 for a lot of applications.
>>
>> I'm a bit lost in the testpmd CLI. I enabled RSS, configured 2 RX queues per
>> port and started sending traffic with single segment packets of size 2K but I
>> didn't figure out how to actually verify that the RSS hash is correctly set.
>> Please let me know if I should do it in a different way.
>>
>> testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 2048 -i [...]
>>
>> testpmd> port stop all
>> Stopping ports...
>> Checking link statuses...
>> Port 0 Link Up - speed 40000 Mbps - full-duplex Port 1 Link Up - speed 40000
>> Mbps - full-duplex Done
>>
>> testpmd> port config all txq 2
>>
>> testpmd> port config all rss all
>>
>> testpmd> port config all max-pkt-len 2048 port start all
>> Configuring Port 0 (socket 0)
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
>> satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
>> PMD: i40e_set_tx_function(): Vector tx finally be used.
>> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=0).
>> Port 0: 3C:FD:FE:9D:BE:F0
>> Configuring Port 1 (socket 0)
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
>> satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
>> PMD: i40e_set_tx_function(): Vector tx finally be used.
>> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=1).
>> Port 1: 3C:FD:FE:9D:BF:30
>> Checking link statuses...
>> Port 0 Link Up - speed 40000 Mbps - full-duplex Port 1 Link Up - speed 40000
>> Mbps - full-duplex Done
>>
>> testpmd> set txpkts 2048
>> testpmd> show config txpkts
>> Number of segments: 1
>> Segment sizes: 2048
>> Split packet: off
>>
>>
>> testpmd> start tx_first
>>   io packet forwarding - CRC stripping disabled - packets/burst=32
>>   nb forwarding cores=1 - nb forwarding ports=2
>>   RX queues=1 - RX desc=128 - RX free threshold=32
>
> In testpmd, when RX queues=1, RSS will be disabled, so could you re-configure the number of RX queues (>1) and try again with testpmd?

I changed the way I run testpmd to:

testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 1152
--rss-ip --rxq=2 --txpkts 1024 -i

As far as I understand, this will allocate mbufs of the same size I
was using in my test (--mbuf-size seems to include the mbuf headroom,
therefore 1152 = 1024 + 128 bytes of headroom).

testpmd> start tx_first
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=2 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 18817613   RX-missed: 5          RX-bytes:  19269115888
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 18818064   TX-errors: 0          TX-bytes:  19269567464
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 18818392   RX-missed: 5          RX-bytes:  19269903360
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 18817979   TX-errors: 0          TX-bytes:  19269479424
  ############################################################################

Traffic is sent/received. However, I couldn't find any way to verify
that the incoming mbufs actually have the mbuf->hash.rss field set
except for starting test-pmd with gdb and setting a breakpoint in the
io fwd engine. After doing that I noticed that none of the incoming
packets has the PKT_RX_RSS_HASH flag set in ol_flags... I guess for
some reason test-pmd doesn't actually configure RSS in this case but I
fail to see where.

Thanks,
Dumitru

>
> Regards,
> Beilei
>
>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>   TX queues=2 - TX desc=512 - TX free threshold=32
>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>   TX RS bit threshold=32 - TXQ flags=0xf01
>> testpmd> stop
>> Telling cores to stop...
>> Waiting for lcores to finish...
>>
>>   ---------------------- Forward statistics for port 0  ----------------------
>>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>>   ----------------------------------------------------------------------------
>>
>>   ---------------------- Forward statistics for port 1  ----------------------
>>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>>   ----------------------------------------------------------------------------
>>
>>   +++++++++++++++ Accumulated forward statistics for all
>> ports+++++++++++++++
>>   RX-packets: 64             RX-dropped: 0             RX-total: 64
>>   TX-packets: 64             TX-dropped: 0             TX-total: 64
>>
>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> ++++++++++++++++++
>>
>> Done.
>> testpmd>
>>
>>
>> >
>> > Definitely we will try to reproduce that issue with testpmd, with using 2K
>> mbufs. Hopefully we can find the root cause, or tell you that's not an issue.
>> >
>>
>> I forgot to mention that in my test code the TX/RX_MBUF_SIZE macros also
>> include the mbuf headroom and the size of the mbuf structure.
>> Therefore testing with 2K mbufs in my scenario actually creates mempools of
>> objects of size 2K + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM.
>>
>> > Thank you very much for your reporting!
>> >
>> > BTW, dev@dpdk.org should be the right one to replace users@dpdk.org,
>> for sending questions/issues like this.
>>
>> Thanks, I'll keep that in mind.
>>
>> >
>> > Regards,
>> > Helin
>>
>> Regards,
>> Dumitru
>>



-- 
Dumitru Ceara


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-19 14:58       ` Take Ceara
@ 2016-07-20  1:59         ` Xing, Beilei
  2016-07-21 10:58           ` Take Ceara
  0 siblings, 1 reply; 15+ messages in thread
From: Xing, Beilei @ 2016-07-20  1:59 UTC (permalink / raw)
  To: Take Ceara; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi Ceara,

> -----Original Message-----
> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> Sent: Tuesday, July 19, 2016 10:59 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
> NICs for some RX mbuf sizes
> 
> Hi Beilei,
> 
> I changed the way I run testpmd to:
> 
> testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 1152 --rss-ip -
> -rxq=2 --txpkts 1024 -i
> 
> As far as I understand this will allocate mbufs with the same size I was using
> in my test (--mbuf-size seems to include the mbuf headroom therefore 1152
> = 1024 + 128 headroom).
> 
> testpmd> start tx_first
>   io packet forwarding - CRC stripping disabled - packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=2
>   RX queues=2 - RX desc=128 - RX free threshold=32
>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>   TX queues=1 - TX desc=512 - TX free threshold=32
>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>   TX RS bit threshold=32 - TXQ flags=0xf01
> testpmd> show port stats all
> 
>   ######################## NIC statistics for port 0
> ########################
>   RX-packets: 18817613   RX-missed: 5          RX-bytes:  19269115888
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 18818064   TX-errors: 0          TX-bytes:  19269567464
> 
> ##########################################################
> ##################
> 
>   ######################## NIC statistics for port 1
> ########################
>   RX-packets: 18818392   RX-missed: 5          RX-bytes:  19269903360
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 18817979   TX-errors: 0          TX-bytes:  19269479424
> 
> ##########################################################
> ##################
> 
> Traffic is sent/received. However, I couldn't find any way to verify that the
> incoming mbufs actually have the mbuf->hash.rss field set except for starting
> test-pmd with gdb and setting a breakpoint in the io fwd engine. After doing
> that I noticed that none of the incoming packets has the PKT_RX_RSS_HASH
> flag set in ol_flags... I guess for some reason test-pmd doesn't actually
> configure RSS in this case but I fail to see where.
> 

Actually there's a way to check mbuf->hash.rss: you need to set the forward mode to "rxonly" and set verbose to 1.
I ran testpmd with the configuration you used, and found that i40e RSS works well.
With the following steps you can see the RSS hash value and the receive queue, and PKT_RX_RSS_HASH is set too.
I think you can use the same way to check what you want.

./testpmd -c fffff -n 4 -- -i --coremask=0xffffe --rxq=16 --txq=16 --mbuf-size 1152 --rss-ip --txpkts 1024
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
testpmd> port 0/queue 1: received 1 packets
  src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb
 - Receive queue=0x1
  PKT_RX_RSS_HASH
port 0/queue 0: received 1 packets
  src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb_segs=1 - RSS hash=0x4e949f23 - RSS queue=0x0Unknown packet type
 - Receive queue=0x0
  PKT_RX_RSS_HASH
port 0/queue 8: received 1 packets
  src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb_segs=1 - RSS hash=0xa3c78b2b - RSS queue=0x8Unknown packet type
 - Receive queue=0x8
  PKT_RX_RSS_HASH
port 0/queue 5: received 1 packets
  src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb_segs=1 - RSS hash=0xe29b3d36 - RSS queue=0x5Unknown packet type
 - Receive queue=0x5
  PKT_RX_RSS_HASH

/Beilei

> Thanks,
> Dumitru
> 


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-20  1:59         ` Xing, Beilei
@ 2016-07-21 10:58           ` Take Ceara
  2016-07-22  9:04             ` Xing, Beilei
  0 siblings, 1 reply; 15+ messages in thread
From: Take Ceara @ 2016-07-21 10:58 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi Beilei,

On Wed, Jul 20, 2016 at 3:59 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
> Hi Ceara,
>
>> -----Original Message-----
>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>> Sent: Tuesday, July 19, 2016 10:59 PM
>> To: Xing, Beilei <beilei.xing@intel.com>
>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
>> <jingjing.wu@intel.com>; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
>> NICs for some RX mbuf sizes
>>
>> Hi Beilei,
>>
>> I changed the way I run testpmd to:
>>
>> testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 1152 --rss-ip -
>> -rxq=2 --txpkts 1024 -i
>>
>> As far as I understand this will allocate mbufs with the same size I was using
>> in my test (--mbuf-size seems to include the mbuf headroom therefore 1152
>> = 1024 + 128 headroom).
>>
>> testpmd> start tx_first
>>   io packet forwarding - CRC stripping disabled - packets/burst=32
>>   nb forwarding cores=1 - nb forwarding ports=2
>>   RX queues=2 - RX desc=128 - RX free threshold=32
>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>   TX queues=1 - TX desc=512 - TX free threshold=32
>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>   TX RS bit threshold=32 - TXQ flags=0xf01
>> testpmd> show port stats all
>>
>>   ######################## NIC statistics for port 0
>> ########################
>>   RX-packets: 18817613   RX-missed: 5          RX-bytes:  19269115888
>>   RX-errors: 0
>>   RX-nombuf:  0
>>   TX-packets: 18818064   TX-errors: 0          TX-bytes:  19269567464
>>
>> ##########################################################
>> ##################
>>
>>   ######################## NIC statistics for port 1
>> ########################
>>   RX-packets: 18818392   RX-missed: 5          RX-bytes:  19269903360
>>   RX-errors: 0
>>   RX-nombuf:  0
>>   TX-packets: 18817979   TX-errors: 0          TX-bytes:  19269479424
>>
>> ##########################################################
>> ##################
>>
>> Traffic is sent/received. However, I couldn't find any way to verify that the
>> incoming mbufs actually have the mbuf->hash.rss field set except for starting
>> test-pmd with gdb and setting a breakpoint in the io fwd engine. After doing
>> that I noticed that none of the incoming packets has the PKT_RX_RSS_HASH
>> flag set in ol_flags... I guess for some reason test-pmd doesn't actually
>> configure RSS in this case but I fail to see where.
>>
>
> Actually there's a way to check mbuf->hash.rss: you need to set the forward mode to "rxonly" and set verbose to 1.
> I ran testpmd with the configuration you used, and found that i40e RSS works well.
> With the following steps you can see the RSS hash value and the receive queue, and PKT_RX_RSS_HASH is set too.
> I think you can use the same way to check what you want.
>
> ./testpmd -c fffff -n 4 -- -i --coremask=0xffffe --rxq=16 --txq=16 --mbuf-size 1152 --rss-ip --txpkts 1024
> testpmd> set verbose 1
> testpmd> set fwd rxonly
> testpmd> start
> testpmd> port 0/queue 1: received 1 packets
>   src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb
>  - Receive queue=0x1
>   PKT_RX_RSS_HASH
> port 0/queue 0: received 1 packets
>   src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb_segs=1 - RSS hash=0x4e949f23 - RSS queue=0x0Unknown packet type
>  - Receive queue=0x0
>   PKT_RX_RSS_HASH
> port 0/queue 8: received 1 packets
>   src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb_segs=1 - RSS hash=0xa3c78b2b - RSS queue=0x8Unknown packet type
>  - Receive queue=0x8
>   PKT_RX_RSS_HASH
> port 0/queue 5: received 1 packets
>   src=00:00:01:00:0F:00 - dst=68:05:CA:32:03:4C - type=0x0800 - length=1020 - nb_segs=1 - RSS hash=0xe29b3d36 - RSS queue=0x5Unknown packet type
>  - Receive queue=0x5
>   PKT_RX_RSS_HASH
>

Following your testpmd example run, I managed to replicate the problem
on my DPDK 16.04 setup like this:

I have two X710 adapters connected back to back:
$ ./tools/dpdk_nic_bind.py -s

Network devices using DPDK-compatible driver
============================================
0000:01:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
0000:81:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=

The firmware of the two adapters is up to date with the latest
version: 5.04 (f5.0.40043 a1.5 n5.04 e24cd)

I ran testpmd with mbuf-size 1152 and txpkts 1024 so that upon
receipt the whole mbuf (except the headroom) is filled.
I enabled RX IP checksum in hw and RX RSS hashing for UDP.
With test-pmd forward mode "rxonly" and verbose 1 I see that incoming
packets have PKT_RX_RSS_HASH set but the hash value is 0.

./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
--coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
--enable-rx-cksum --rss-udp
[...]
testpmd> set verbose 1
Change verbose level from 0 to 1
testpmd> set fwd rxonly
Set rxonly packet forwarding mode
testpmd> start tx_first
  rxonly packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=16 - nb forwarding ports=2
  RX queues=16 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=16 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01
port 0/queue 1: received 32 packets
  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
- Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
Unknown - Inner L4 type: Unknown
 - Receive queue=0x1
  PKT_RX_RSS_HASH
  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
- Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
Unknown - Inner L4 type: Unknown
 - Receive queue=0x1
  PKT_RX_RSS_HASH

If I use a different mbuf-size, for example 2048, everything is fine:

./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
--coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts 1024
--enable-rx-cksum --rss-udp
[...]
testpmd> set verbose 1
Change verbose level from 0 to 1
testpmd> set fwd rxonly
Set rxonly packet forwarding mode
testpmd> start tx_first
  rxonly packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=16 - nb forwarding ports=2
  RX queues=16 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=16 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01
port 0/queue 1: received 32 packets
  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
(outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer)
L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
L3 type: Unknown - Inner L4 type: Unknown
 - Receive queue=0x1
  PKT_RX_RSS_HASH

Another weird thing I noticed is that when I run test-pmd without
--enable-rx-cksum (which is the default mode) the RSS flag doesn't get
set at all. Instead the PKT_RX_FDIR flag gets set. This happens
even though fdir_mode is set to RTE_FDIR_MODE_NONE in the device
configuration:

./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
--coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
--rss-udp
[...]
testpmd> set verbose 1
Change verbose level from 0 to 1
testpmd> set fwd rxonly
Set rxonly packet forwarding mode
testpmd> start tx_first
  rxonly packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=16 - nb forwarding ports=2
  RX queues=16 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=16 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01
port 0/queue 1: received 16 packets
  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
packet type
 - Receive queue=0x1
  PKT_RX_FDIR
  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
packet type
 - Receive queue=0x1
  PKT_RX_FDIR

Please let me know if there's more debug info that might be of
interest in troubleshooting the RSS=0 issue.

Thanks,
Dumitru

> /Beilei
>
>> Thanks,
>> Dumitru
>>


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-21 10:58           ` Take Ceara
@ 2016-07-22  9:04             ` Xing, Beilei
  2016-07-22 12:31               ` Take Ceara
  0 siblings, 1 reply; 15+ messages in thread
From: Xing, Beilei @ 2016-07-22  9:04 UTC (permalink / raw)
  To: Take Ceara; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi Ceara,

> -----Original Message-----
> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> Sent: Thursday, July 21, 2016 6:58 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
> NICs for some RX mbuf sizes
> 
> 
> Following your testpmd example run I managed to replicate the problem on
> my dpdk 16.04 setup like this:
> 
> I have two X710 adapters connected back to back:
> $ ./tools/dpdk_nic_bind.py -s
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:01:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
> 0000:81:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
> 
> The firmware of the two adapters is up to date with the latest
> version: 5.04 (f5.0.40043 a1.5 n5.04 e24cd)
> 
> I ran testpmd with mbuf-size 1152 and txpkts 1024 so that upon receipt
> the whole mbuf (except the headroom) is filled.
> I enabled RX IP checksum in hw and RX RSS hashing for UDP.
> With test-pmd forward mode "rxonly" and verbose 1 I see that incoming
> packets have PKT_RX_RSS_HASH set but the hash value is 0.
> 
> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024 --
> enable-rx-cksum --rss-udp [...]
> testpmd> set verbose 1
> Change verbose level from 0 to 1
> testpmd> set fwd rxonly
> Set rxonly packet forwarding mode
> testpmd> start tx_first
>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>   nb forwarding cores=16 - nb forwarding ports=2
>   RX queues=16 - RX desc=128 - RX free threshold=32
>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>   TX queues=16 - TX desc=512 - TX free threshold=32
>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 32
> packets
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
> type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
> - Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
> Unknown - Inner L4 type: Unknown
>  - Receive queue=0x1
>   PKT_RX_RSS_HASH
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
> type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
> - Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
> Unknown - Inner L4 type: Unknown
>  - Receive queue=0x1
>   PKT_RX_RSS_HASH

What's the source IP address and destination IP address of the packet you sent to port 0? Could you try to change the IP address or port number to observe whether the hash value changes? I remember seeing a hash value of 0 before, but with different IP addresses there were different hash values.

> 
> If I use a different mbuf-size, for example 2048, everything is fine:
> 
> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts 1024 --
> enable-rx-cksum --rss-udp [...]
> testpmd> set verbose 1
> Change verbose level from 0 to 1
> testpmd> set fwd rxonly
> Set rxonly packet forwarding mode
> testpmd> start tx_first
>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>   nb forwarding cores=16 - nb forwarding ports=2
>   RX queues=16 - RX desc=128 - RX free threshold=32
>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>   TX queues=16 - TX desc=512 - TX free threshold=32
>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 32
> packets
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer)
> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
> L3 type: Unknown - Inner L4 type: Unknown
>  - Receive queue=0x1
>   PKT_RX_RSS_HASH
> 

Did you send the same packet as before to port 0?

> Another weird thing I noticed is when I run test-pmd without --enable-rx-
> cksum (which is the default mode) then the RSS flag doesn't get set at all.
> Instead the PKT_RX_FDIR flag gets set. This happens even though fdir_mode
> is set to RTE_FDIR_MODE_NONE in the device
> configuration:
> 
> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024 --rss-
> udp [...]
> testpmd> set verbose 1
> Change verbose level from 0 to 1
> testpmd> set fwd rxonly
> Set rxonly packet forwarding mode
> testpmd> start tx_first
>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>   nb forwarding cores=16 - nb forwarding ports=2
>   RX queues=16 - RX desc=128 - RX free threshold=32
>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>   TX queues=16 - TX desc=512 - TX free threshold=32
>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 16
> packets
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
> packet type
>  - Receive queue=0x1
>   PKT_RX_FDIR
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
> packet type
>  - Receive queue=0x1
>   PKT_RX_FDIR
> 

For this issue, I think the following patch can solve your problem; please apply it:
http://dpdk.org/dev/patchwork/patch/13593/

Beilei

> Please let me know if there's more debug info that might be of interest in
> troubleshooting the RSS=0 issue.
> 
> Thanks,
> Dumitru
> 
> > /Beilei
> >
> >> Thanks,
> >> Dumitru
> >>


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-22  9:04             ` Xing, Beilei
@ 2016-07-22 12:31               ` Take Ceara
  2016-07-22 12:35                 ` Take Ceara
  2016-07-25  3:24                 ` Xing, Beilei
  0 siblings, 2 replies; 15+ messages in thread
From: Take Ceara @ 2016-07-22 12:31 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi Beilei,

On Fri, Jul 22, 2016 at 11:04 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
> Hi Ceara,
>
>> -----Original Message-----
>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>> Sent: Thursday, July 21, 2016 6:58 PM
>> To: Xing, Beilei <beilei.xing@intel.com>
>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
>> <jingjing.wu@intel.com>; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
>> NICs for some RX mbuf sizes
>>
>>
>> Following your testpmd example run I managed to replicate the problem on
>> my dpdk 16.04 setup like this:
>>
>> I have two X710 adapters connected back to back:
>> $ ./tools/dpdk_nic_bind.py -s
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:01:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
>> 0000:81:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
>>
>> The firmware of the two adapters is up to date with the latest
>> version: 5.04 (f5.0.40043 a1.5 n5.04 e24cd)
>>
>> I ran testpmd with mbuf-size 1152 and txpkts 1024 so that upon receipt
>> the whole mbuf (except the headroom) is filled.
>> I enabled RX IP checksum in hw and RX RSS hashing for UDP.
>> With test-pmd forward mode "rxonly" and verbose 1 I see that incoming
>> packets have PKT_RX_RSS_HASH set but the hash value is 0.
>>
>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024 --
>> enable-rx-cksum --rss-udp [...]
>> testpmd> set verbose 1
>> Change verbose level from 0 to 1
>> testpmd> set fwd rxonly
>> Set rxonly packet forwarding mode
>> testpmd> start tx_first
>>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>   nb forwarding cores=16 - nb forwarding ports=2
>>   RX queues=16 - RX desc=128 - RX free threshold=32
>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>   TX queues=16 - TX desc=512 - TX free threshold=32
>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 32
>> packets
>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
>> type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
>> - Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
>> Unknown - Inner L4 type: Unknown
>>  - Receive queue=0x1
>>   PKT_RX_RSS_HASH
>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
>> type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
>> - Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
>> Unknown - Inner L4 type: Unknown
>>  - Receive queue=0x1
>>   PKT_RX_RSS_HASH
>
> What's the source IP address and destination IP address of the packet you sent to port 0? Could you try to change the IP address or port number to observe whether the hash value changes? I remember seeing a hash value of 0 before, but with different IP addresses there were different hash values.

I was using the test-pmd "txonly" implementation which sends fixed UDP
packets from 192.168.0.1:1024 -> 192.168.0.2:1024.

I changed the test-pmd tx_only code so that it sends traffic with
incrementing destination IPs: 192.168.0.1:1024 -> [192.168.0.2,
192.168.0.12]:1024.
I also dumped the source and destination IPs in the "rxonly"
pkt_burst_receive function.
Then I see that packets are indeed sent to different queues but the
mbuf->hash.rss value is still 0.

./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
--coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
--enable-rx-cksum --rss-udp

[...]

 - Receive queue=0xf
  PKT_RX_RSS_HASH
  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
(outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 - (outer)
L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
(outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 - (outer)
L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
L3 type: Unknown - Inner L4 type: Unknown
 - Receive queue=0x7
  PKT_RX_RSS_HASH
  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80009 -
type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown - Inner
L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
queue=0x7 - Inner L4 type: Unknown

[...]

testpmd> stop
  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
  RX-packets: 59             TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
  RX-packets: 48             TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4 -------
  RX-packets: 59             TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7 -------
  RX-packets: 48             TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9 -------
  RX-packets: 59             TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10 -------
  RX-packets: 48             TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12 -------
  RX-packets: 59             TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15 -------
  RX-packets: 48             TX-packets: 32             TX-dropped: 0
  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 428            RX-dropped: 84            RX-total: 512
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 512            TX-dropped: 0             TX-total: 512
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 428            RX-dropped: 84            RX-total: 512
  TX-packets: 512            TX-dropped: 0             TX-total: 512
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I checked the RSS hash values for all 10 incoming streams and they are
all 0. Also, the fact that the traffic is actually distributed across
the queues suggests that RSS itself is working, but the mbuf hash field
is (I guess) either not written or corrupted.

>
>>
>> If I use a different mbuf-size, for example 2048, everything is fine:
>>
>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts 1024 --
>> enable-rx-cksum --rss-udp [...]
>> testpmd> set verbose 1
>> Change verbose level from 0 to 1
>> testpmd> set fwd rxonly
>> Set rxonly packet forwarding mode
>> testpmd> start tx_first
>>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>   nb forwarding cores=16 - nb forwarding ports=2
>>   RX queues=16 - RX desc=128 - RX free threshold=32
>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>   TX queues=16 - TX desc=512 - TX free threshold=32
>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 32
>> packets
>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
>> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer)
>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
>> L3 type: Unknown - Inner L4 type: Unknown
>>  - Receive queue=0x1
>>   PKT_RX_RSS_HASH
>>
>
> Did you send the same packet as before to port 0?
>
>> Another weird thing I noticed is that when I run test-pmd without
>> --enable-rx-cksum (which is the default mode) the RSS flag doesn't get set
>> at all.
>> Instead the PKT_RX_FDIR flag gets set. This happens even though fdir_mode
>> is set to RTE_FDIR_MODE_NONE in the device
>> configuration:
>>
>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024 --rss-
>> udp [...]
>> testpmd> set verbose 1
>> Change verbose level from 0 to 1
>> testpmd> set fwd rxonly
>> Set rxonly packet forwarding mode
>> testpmd> start tx_first
>>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>   nb forwarding cores=16 - nb forwarding ports=2
>>   RX queues=16 - RX desc=128 - RX free threshold=32
>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>   TX queues=16 - TX desc=512 - TX free threshold=32
>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 16
>> packets
>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>> packet type
>>  - Receive queue=0x1
>>   PKT_RX_FDIR
>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>> packet type
>>  - Receive queue=0x1
>>   PKT_RX_FDIR
>>
>
> For this issue, I think following patch can solve your problem, please apply this patch.
> http://dpdk.org/dev/patchwork/patch/13593/
>

I tried to apply it directly on 16.04 but it doesn't apply cleanly. I
see it's been applied to dpdk-next-net/rel_16_07. Do you happen to have
a version that would work on the latest stable 16.04 release?

Thanks,
Dumitru

> Beilei
>
>> Please let me know if there's more debug info that might be of interest in
>> troubleshooting the RSS=0 issue.
>>
>> Thanks,
>> Dumitru
>>
>> > /Beilei
>> >
>> >> Thanks,
>> >> Dumitru
>> >>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-22 12:31               ` Take Ceara
@ 2016-07-22 12:35                 ` Take Ceara
  2016-07-25  3:24                 ` Xing, Beilei
  1 sibling, 0 replies; 15+ messages in thread
From: Take Ceara @ 2016-07-22 12:35 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: Zhang, Helin, Wu, Jingjing, dev

On Fri, Jul 22, 2016 at 2:31 PM, Take Ceara <dumitru.ceara@gmail.com> wrote:
> Hi Beilei,
>
> On Fri, Jul 22, 2016 at 11:04 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
>> Hi Ceara,
>>
>>> -----Original Message-----
>>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>>> Sent: Thursday, July 21, 2016 6:58 PM
>>> To: Xing, Beilei <beilei.xing@intel.com>
>>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
>>> <jingjing.wu@intel.com>; dev@dpdk.org
>>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
>>> NICs for some RX mbuf sizes
>>>
>>>
>>> Following your testpmd example run I managed to replicate the problem on
>>> my dpdk 16.04 setup like this:
>>>
>>> I have two X710 adapters connected back to back:
>>> $ ./tools/dpdk_nic_bind.py -s
>>>
>>> Network devices using DPDK-compatible driver
>>> ============================================
>>> 0000:01:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
>>> 0000:81:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=igb_uio unused=
>>>
>>> The firmware of the two adapters is up to date with the latest
>>> version: 5.04 (f5.0.40043 a1.5 n5.04 e24cd)
>>>
>>> I run testpmd with mbuf-size 1152 and txpkts 1024 such that, on receipt,
>>> the whole mbuf (except the headroom) is filled.
>>> I enabled RX IP checksum in hw and RX RSS hashing for UDP.
>>> With test-pmd forward mode "rxonly" and verbose 1 I see that incoming
>>> packets have PKT_RX_RSS_HASH set but the hash value is 0.
>>>
>>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024 --
>>> enable-rx-cksum --rss-udp [...]
>>> testpmd> set verbose 1
>>> Change verbose level from 0 to 1
>>> testpmd> set fwd rxonly
>>> Set rxonly packet forwarding mode
>>> testpmd> start tx_first
>>>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>>   nb forwarding cores=16 - nb forwarding ports=2
>>>   RX queues=16 - RX desc=128 - RX free threshold=32
>>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>>   TX queues=16 - TX desc=512 - TX free threshold=32
>>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 32
>>> packets
>>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
>>> type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
>>> - Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
>>> Unknown - Inner L4 type: Unknown
>>>  - Receive queue=0x1
>>>   PKT_RX_RSS_HASH
>>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x1 - (outer) L2
>>> type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: UDP
>>> - Tunnel type: Unknown - Inner L2 type: Unknown - Inner L3 type:
>>> Unknown - Inner L4 type: Unknown
>>>  - Receive queue=0x1
>>>   PKT_RX_RSS_HASH
>>
>> What's the source ip address and destination ip address of the packet you sent to port 0? Could you try to change ip address or port number to observe if hash value changes? I remember I saw hash value was 0 before, but with different ip address, there'll be different hash values.
>
> I was using the test-pmd "txonly" implementation which sends fixed UDP
> packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
>
> I changed the test-pmd tx_only code so that it sends traffic with
> incremental destination IP: 192.168.0.1:1024 -> [192.168.0.2,
> 192.168.0.12]:1024
> I also dumped the source and destination IPs in the "rxonly"
> pkt_burst_receive function.
> Then I see that packets are indeed sent to different queues but the
> mbuf->hash.rss value is still 0.
>
> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
> --enable-rx-cksum --rss-udp
>
> [...]
>
>  - Receive queue=0xf
>   PKT_RX_RSS_HASH
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 - (outer)
> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 - (outer)
> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
> L3 type: Unknown - Inner L4 type: Unknown
>  - Receive queue=0x7
>   PKT_RX_RSS_HASH
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80009 -
> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown - Inner
> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
> queue=0x7 - Inner L4 type: Unknown
>
> [...]
>
> testpmd> stop
>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>   ----------------------------------------------------------------------------
>
>   ---------------------- Forward statistics for port 1  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>   ----------------------------------------------------------------------------
>
>   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> I checked all the RSS hash values for all the 10 different incoming
> streams and they're all 0. Also, the fact that the traffic is actually
> distributed seems to suggest that RSS itself is working but the mbuf
> hash field is (I guess) either not written or corrupted.
>
>>
>>>
>>> If I use a different mbuf-size, for example 2048, everything is fine:
>>>
>>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts 1024 --
>>> enable-rx-cksum --rss-udp [...]
>>> testpmd> set verbose 1
>>> Change verbose level from 0 to 1
>>> testpmd> set fwd rxonly
>>> Set rxonly packet forwarding mode
>>> testpmd> start tx_first
>>>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>>   nb forwarding cores=16 - nb forwarding ports=2
>>>   RX queues=16 - RX desc=128 - RX free threshold=32
>>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>>   TX queues=16 - TX desc=512 - TX free threshold=32
>>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 32
>>> packets
>>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
>>> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer)
>>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
>>> L3 type: Unknown - Inner L4 type: Unknown
>>>  - Receive queue=0x1
>>>   PKT_RX_RSS_HASH
>>>
>>
>> Did you send the same packet as before to port 0?
>>

Sorry, I forgot to reply to this question. Yes, the packet is identical.

>>> Another weird thing I noticed is that when I run test-pmd without
>>> --enable-rx-cksum (which is the default mode) the RSS flag doesn't get set
>>> at all.
>>> Instead the PKT_RX_FDIR flag gets set. This happens even though fdir_mode
>>> is set to RTE_FDIR_MODE_NONE in the device
>>> configuration:
>>>
>>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024 --rss-
>>> udp [...]
>>> testpmd> set verbose 1
>>> Change verbose level from 0 to 1
>>> testpmd> set fwd rxonly
>>> Set rxonly packet forwarding mode
>>> testpmd> start tx_first
>>>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>>   nb forwarding cores=16 - nb forwarding ports=2
>>>   RX queues=16 - RX desc=128 - RX free threshold=32
>>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>>   TX queues=16 - TX desc=512 - TX free threshold=32
>>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received 16
>>> packets
>>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>>> packet type
>>>  - Receive queue=0x1
>>>   PKT_RX_FDIR
>>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>>> packet type
>>>  - Receive queue=0x1
>>>   PKT_RX_FDIR
>>>
>>
>> For this issue, I think following patch can solve your problem, please apply this patch.
>> http://dpdk.org/dev/patchwork/patch/13593/
>>
>
> I tried to apply it directly on 16.04 but it doesn't apply cleanly. I
> see it's been applied to dpdk-next-net/rel_16_07. Do you happen to have
> a version that would work on the latest stable 16.04 release?
>
> Thanks,
> Dumitru
>
>> Beilei
>>
>>> Please let me know if there's more debug info that might be of interest in
>>> troubleshooting the RSS=0 issue.
>>>
>>> Thanks,
>>> Dumitru
>>>
>>> > /Beilei
>>> >
>>> >> Thanks,
>>> >> Dumitru
>>> >>



-- 
Dumitru Ceara

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-22 12:31               ` Take Ceara
  2016-07-22 12:35                 ` Take Ceara
@ 2016-07-25  3:24                 ` Xing, Beilei
  2016-07-25 10:04                   ` Take Ceara
  1 sibling, 1 reply; 15+ messages in thread
From: Xing, Beilei @ 2016-07-25  3:24 UTC (permalink / raw)
  To: Take Ceara; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi,

> -----Original Message-----
> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> Sent: Friday, July 22, 2016 8:32 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs
> for some RX mbuf sizes
> 
> I was using the test-pmd "txonly" implementation which sends fixed UDP
> packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
> 
> I changed the test-pmd tx_only code so that it sends traffic with incremental
> destination IP: 192.168.0.1:1024 -> [192.168.0.2,
> 192.168.0.12]:1024
> I also dumped the source and destination IPs in the "rxonly"
> pkt_burst_receive function.
> Then I see that packets are indeed sent to different queues but the
> mbuf->hash.rss value is still 0.
> 
> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
> --enable-rx-cksum --rss-udp
> 
> [...]
> 
>  - Receive queue=0xf
>   PKT_RX_RSS_HASH
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 - (outer)
> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 - (outer)
> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
> L3 type: Unknown - Inner L4 type: Unknown
>  - Receive queue=0x7
>   PKT_RX_RSS_HASH
>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80009 -
> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown - Inner
> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
> queue=0x7 - Inner L4 type: Unknown
> 
> [...]
> 
> testpmd> stop
>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12 -------
>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15 -------
>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>   ----------------------------------------------------------------------------
> 
>   ---------------------- Forward statistics for port 1  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>   ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> I checked all the RSS hash values for all the 10 different incoming streams and
> they're all 0. Also, the fact that the traffic is actually distributed seems to suggest
> that RSS itself is working but the mbuf hash field is (I guess) either not written or
> corrupted.
> 

I tried to reproduce the problem with the same steps you used, on both
16.04 and 16.07, but I couldn't replicate it.
I think you can try the following:
1. Apply the patch I mentioned last time and check if the problem still exists.
2. Update the codebase and check if the problem still exists.
3. Disable the vector RX path when you run testpmd, and check if the problem still exists.
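Regarding the vector RX suggestion: if memory serves, in DPDK 16.04 the i40e vector path is selected at build time rather than at run time. A sketch of disabling it follows; the option name and file path are from memory, so please verify them against your tree's `config/common_base`.

```shell
# Assumption: the i40e vector RX/TX path is gated by the
# CONFIG_RTE_LIBRTE_I40E_INC_VECTOR build option in config/common_base;
# double-check the exact name in your DPDK tree before rebuilding.
sed -i 's/CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y/CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=n/' \
    config/common_base
make config T=x86_64-native-linuxapp-gcc && make -j
```

Rebuilding with the scalar RX path would show whether the zeroed hash is specific to the vector receive routine.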

> >
> >>
> >> If I use a different mbuf-size, for example 2048, everything is fine:
> >>
> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts 1024
> >> -- enable-rx-cksum --rss-udp [...]
> >> testpmd> set verbose 1
> >> Change verbose level from 0 to 1
> >> testpmd> set fwd rxonly
> >> Set rxonly packet forwarding mode
> >> testpmd> start tx_first
> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>   nb forwarding cores=16 - nb forwarding ports=2
> >>   RX queues=16 - RX desc=128 - RX free threshold=32
> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>   TX queues=16 - TX desc=512 - TX free threshold=32
> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received
> >> 32 packets
> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
> >> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer)
> >> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
> >> L3 type: Unknown - Inner L4 type: Unknown
> >>  - Receive queue=0x1
> >>   PKT_RX_RSS_HASH
> >>
> >
> > Did you send the same packet as before to port 0?
> >
> >> Another weird thing I noticed is that when I run test-pmd without
> >> --enable-rx-cksum (which is the default mode) the RSS flag doesn't get
> >> set at all.
> >> Instead the PKT_RX_FDIR flag gets set. This happens even though
> >> fdir_mode is set to RTE_FDIR_MODE_NONE in the device
> >> configuration:
> >>
> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
> >> --rss- udp [...]
> >> testpmd> set verbose 1
> >> Change verbose level from 0 to 1
> >> testpmd> set fwd rxonly
> >> Set rxonly packet forwarding mode
> >> testpmd> start tx_first
> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>   nb forwarding cores=16 - nb forwarding ports=2
> >>   RX queues=16 - RX desc=128 - RX free threshold=32
> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>   TX queues=16 - TX desc=512 - TX free threshold=32
> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received
> >> 16 packets
> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
> >> packet type
> >>  - Receive queue=0x1
> >>   PKT_RX_FDIR
> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
> >> packet type
> >>  - Receive queue=0x1
> >>   PKT_RX_FDIR
> >>
> >
> > For this issue, I think following patch can solve your problem, please apply this
> patch.
> > http://dpdk.org/dev/patchwork/patch/13593/
> >
> 
> I tried to apply it directly on 16.04 but it doesn't apply cleanly. I see it's
> been applied to dpdk-next-net/rel_16_07. Do you happen to have a version that
> would work on the latest stable 16.04 release?

Sorry, I don't. If there's a conflict with 16.04, I think you can resolve it by going through the patch and applying the changes manually.
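In case it helps, one possible backport workflow is sketched below. It is only a sketch: the `/mbox/` suffix for downloading the raw patch from patchwork is an assumption, and any remaining conflicts against 16.04 still need manual resolution.

```shell
# Download the raw patch from patchwork (the /mbox/ URL form is assumed,
# not confirmed by the thread).
wget -O rss-fix.patch http://dpdk.org/dev/patchwork/patch/13593/mbox/
# Try a 3-way apply so small context differences against the 16.04 tree
# are merged automatically; leftover conflicts must be fixed by hand.
git am -3 rss-fix.patch
```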

Beilei

> 
> Thanks,
> Dumitru
> 
> > Beilei
> >
> >> Please let me know if there's more debug info that might be of
> >> interest in troubleshooting the RSS=0 issue.
> >>
> >> Thanks,
> >> Dumitru
> >>
> >> > /Beilei
> >> >
> >> >> Thanks,
> >> >> Dumitru
> >> >>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-25  3:24                 ` Xing, Beilei
@ 2016-07-25 10:04                   ` Take Ceara
  2016-07-26  8:38                     ` Take Ceara
  0 siblings, 1 reply; 15+ messages in thread
From: Take Ceara @ 2016-07-25 10:04 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi Beilei,

On Mon, Jul 25, 2016 at 5:24 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
> Hi,
>
>> -----Original Message-----
>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>> Sent: Friday, July 22, 2016 8:32 PM
>> To: Xing, Beilei <beilei.xing@intel.com>
>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>> dev@dpdk.org
>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs
>> for some RX mbuf sizes
>>
>> I was using the test-pmd "txonly" implementation which sends fixed UDP
>> packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
>>
>> I changed the test-pmd tx_only code so that it sends traffic with incremental
>> destination IP: 192.168.0.1:1024 -> [192.168.0.2,
>> 192.168.0.12]:1024
>> I also dumped the source and destination IPs in the "rxonly"
>> pkt_burst_receive function.
>> Then I see that packets are indeed sent to different queues but the
>> mbuf->hash.rss value is still 0.
>>
>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
>> --enable-rx-cksum --rss-udp
>>
>> [...]
>>
>>  - Receive queue=0xf
>>   PKT_RX_RSS_HASH
>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 - (outer)
>> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
>> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 - (outer)
>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
>> L3 type: Unknown - Inner L4 type: Unknown
>>  - Receive queue=0x7
>>   PKT_RX_RSS_HASH
>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
>> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80009 -
>> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown - Inner
>> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
>> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
>> queue=0x7 - Inner L4 type: Unknown
>>
>> [...]
>>
>> testpmd> stop
>>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4 -------
>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7 -------
>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9 -------
>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10 -------
>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12 -------
>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14 -------
>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>   ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15 -------
>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>   ---------------------- Forward statistics for port 0  ----------------------
>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>>   ----------------------------------------------------------------------------
>>
>>   ---------------------- Forward statistics for port 1  ----------------------
>>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>>   ----------------------------------------------------------------------------
>>
>>   +++++++++++++++ Accumulated forward statistics for all
>> ports+++++++++++++++
>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>>
>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> +++++++++++++++
>>
>> I checked all the RSS hash values for all the 10 different incoming streams and
>> they're all 0. Also, the fact that the traffic is actually distributed seems to suggest
>> that RSS itself is working but the mbuf hash field is (I guess) either not written or
>> corrupted.
>>
>
> I tried to reproduce the problem with the same steps you used on 16.04 and 16.07, but I really didn't replicate it.
> I think you can try follow ways:
> 1. apply the patch I told you last time and check if the problem still exists.

I applied the changes in the patch manually to 16.04. The RSS=0
problem still exists while the FDIR issue is fixed.

> 2. update the codebase and check if the problem still exists.

I updated the codebase to the latest version on
http://dpdk.org/git/dpdk. I still see the RSS=0 issue.

> 3. disable vector when you run testpmd, and check if the problem still exists.

I recompiled the latest DPDK code with
CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=n and the RSS=0 issue is still
there.

My current command line is:
./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
--coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
--rss-udp

Not sure if relevant but I'm running kernel 4.2.0-27:
$ uname -a
Linux jspg2 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22
15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Is there anything else that might help you identify the cause of the problem?

Thanks,
Dumitru

>
>> >
>> >>
>> >> If I use a different mbuf-size, for example 2048, everything is fine:
>> >>
>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts 1024
>> >> -- enable-rx-cksum --rss-udp [...]
>> >> testpmd> set verbose 1
>> >> Change verbose level from 0 to 1
>> >> testpmd> set fwd rxonly
>> >> Set rxonly packet forwarding mode
>> >> testpmd> start tx_first
>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>> >>   nb forwarding cores=16 - nb forwarding ports=2
>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received
>> >> 32 packets
>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> >> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
>> >> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer)
>> >> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
>> >> L3 type: Unknown - Inner L4 type: Unknown
>> >>  - Receive queue=0x1
>> >>   PKT_RX_RSS_HASH
>> >>
>> >
>> > Did you send the same packet as before to port 0?
>> >
>> >> Another weird thing I noticed is when I run test-pmd without
>> >> --enable-rx- cksum (which is the default mode) then the RSS flag doesn get
>> set at all.
>> >> Instead the PKT_RX_FDIR flag gets set. This happens even though
>> >> fdir_mode is set to RTE_FDIR_MODE_NONE in the device
>> >> configuration:
>> >>
>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
>> >> --rss- udp [...]
>> >> testpmd> set verbose 1
>> >> Change verbose level from 0 to 1
>> >> testpmd> set fwd rxonly
>> >> Set rxonly packet forwarding mode
>> >> testpmd> start tx_first
>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>> >>   nb forwarding cores=16 - nb forwarding ports=2
>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received
>> >> 16 packets
>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>> >> packet type
>> >>  - Receive queue=0x1
>> >>   PKT_RX_FDIR
>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>> >> packet type
>> >>  - Receive queue=0x1
>> >>   PKT_RX_FDIR
>> >>
>> >
>> > For this issue, I think following patch can solve your problem, please apply this
>> patch.
>> > http://dpdk.org/dev/patchwork/patch/13593/
>> >
>>
>> I tried to apply it directly on 16.04 but it can't be applied. I see it's been applied to
>> dpdk-next-net/rel_16_07. Do you happen to have one that would work on the
>> latest stable 16.04 release?
>
> Sorry, I haven't. If there's conflict with R16.04, I think you can resolve it by going through the patch.
>
> Beilei
>
>>
>> Thanks,
>> Dumitru
>>
>> > Beilei
>> >
>> >> Please let me know if there's more debug info that might be of
>> >> interest in troubleshooting the RSS=0 issue.
>> >>
>> >> Thanks,
>> >> Dumitru
>> >>
>> >> > /Beilei
>> >> >
>> >> >> Thanks,
>> >> >> Dumitru
>> >> >>



-- 
Dumitru Ceara


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-25 10:04                   ` Take Ceara
@ 2016-07-26  8:38                     ` Take Ceara
  2016-07-26  8:47                       ` Zhang, Helin
  0 siblings, 1 reply; 15+ messages in thread
From: Take Ceara @ 2016-07-26  8:38 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: Zhang, Helin, Wu, Jingjing, dev

Hi Beilei,

On Mon, Jul 25, 2016 at 12:04 PM, Take Ceara <dumitru.ceara@gmail.com> wrote:
> Hi Beilei,
>
> On Mon, Jul 25, 2016 at 5:24 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
>> Hi,
>>
>>> -----Original Message-----
>>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>>> Sent: Friday, July 22, 2016 8:32 PM
>>> To: Xing, Beilei <beilei.xing@intel.com>
>>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>>> dev@dpdk.org
>>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs
>>> for some RX mbuf sizes
>>>
>>> I was using the test-pmd "txonly" implementation which sends fixed UDP
>>> packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
>>>
>>> I changed the test-pmd tx_only code so that it sends traffic with incremental
>>> destination IP: 192.168.0.1:1024 -> [192.168.0.2,
>>> 192.168.0.12]:1024
>>> I also dumped the source and destination IPs in the "rxonly"
>>> pkt_burst_receive function.
>>> Then I see that packets are indeed sent to different queues but the
>>> mbuf->hash.rss value is still 0.
>>>
>>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
>>> --enable-rx-cksum --rss-udp
>>>
>>> [...]
>>>
>>>  - Receive queue=0xf
>>>   PKT_RX_RSS_HASH
>>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
>>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 - (outer)
>>> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
>>> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
>>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 - (outer)
>>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
>>> L3 type: Unknown - Inner L4 type: Unknown
>>>  - Receive queue=0x7
>>>   PKT_RX_RSS_HASH
>>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
>>> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80009 -
>>> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown - Inner
>>> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
>>> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
>>> queue=0x7 - Inner L4 type: Unknown
>>>
>>> [...]
>>>
>>> testpmd> stop
>>>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
>>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
>>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4 -------
>>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7 -------
>>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9 -------
>>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10 -------
>>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12 -------
>>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14 -------
>>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>>>   ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15 -------
>>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>>>   ---------------------- Forward statistics for port 0  ----------------------
>>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>>>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>>>   ----------------------------------------------------------------------------
>>>
>>>   ---------------------- Forward statistics for port 1  ----------------------
>>>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>>>   ----------------------------------------------------------------------------
>>>
>>>   +++++++++++++++ Accumulated forward statistics for all
>>> ports+++++++++++++++
>>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>>>
>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>> +++++++++++++++
>>>
>>> I checked all the RSS hash values for all the 10 different incoming streams and
>>> they're all 0. Also, the fact that the traffic is actually distributed seems to suggest
>>> that RSS itself is working but the mbuf hash field is (I guess) either not written or
>>> corrupted.
>>>
>>
>> I tried to reproduce the problem with the same steps you used on 16.04 and 16.07, but I really didn't replicate it.
>> I think you can try follow ways:
>> 1. apply the patch I told you last time and check if the problem still exists.
>
> I applied the changes in the patch manually to 16.04. The RSS=0
> problem still exists while the FDIR issue is fixed.
>
>> 2. update the codebase and check if the problem still exists.
>
> I updated the codebase to the latest version on
> http://dpdk.org/git/dpdk. I still see the RSS=0 issue.
>
>> 3. disable vector when you run testpmd, and check if the problem still exists.
>
> I recompiled the latest dpdk code with
> CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=n and the RSS=0 issue is still
> there.
>
> My current command line is:
> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
> --rss-udp
>
> Not sure if relevant but I'm running kernel 4.2.0-27:
> $ uname -a
> Linux jspg2 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22
> 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>
> Is there anything else that might help you identify the cause of the problem?
>
> Thanks,
> Dumitru

After some debugging in the i40e DPDK driver I figured out the problem.
When receiving packets with i40e_recv_scattered_pkts(), which gets
called in my case because the incoming packet is larger than one full
mbuf (the 4-byte CRC ends up in the second mbuf of the chain), the
pkt_flags, hash, etc. are set only when processing the last mbuf in the
packet chain. However, when the hash.rss field is set, instead of being
written to the first mbuf of the packet it is written to the current
mbuf (rxm). This can also cause unpredictable behavior if the last mbuf
contained only the stripped CRC, as rxm would already have been freed
by then. The line I'm referring to is:

if (pkt_flags & PKT_RX_RSS_HASH)
        rxm->hash.rss =
                rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);

I changed it to set the rss field in first_seg instead of rxm, and it
works fine now.

As far as I can see, this is the only place where we can receive
scattered packets, and all the other places where the RSS hash is set
look fine.
Should I submit a proper patch for this, or will you do it as you're
more familiar with the code?

Thanks,
Dumitru

>
>>
>>> >
>>> >>
>>> >> If I use a different mbuf-size, for example 2048, everything is fine:
>>> >>
>>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts 1024
>>> >> -- enable-rx-cksum --rss-udp [...]
>>> >> testpmd> set verbose 1
>>> >> Change verbose level from 0 to 1
>>> >> testpmd> set fwd rxonly
>>> >> Set rxonly packet forwarding mode
>>> >> testpmd> start tx_first
>>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>> >>   nb forwarding cores=16 - nb forwarding ports=2
>>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
>>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
>>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received
>>> >> 32 packets
>>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> >> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
>>> >> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer)
>>> >> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
>>> >> L3 type: Unknown - Inner L4 type: Unknown
>>> >>  - Receive queue=0x1
>>> >>   PKT_RX_RSS_HASH
>>> >>
>>> >
>>> > Did you send the same packet as before to port 0?
>>> >
>>> >> Another weird thing I noticed is when I run test-pmd without
>>> >> --enable-rx- cksum (which is the default mode) then the RSS flag doesn get
>>> set at all.
>>> >> Instead the PKT_RX_FDIR flag gets set. This happens even though
>>> >> fdir_mode is set to RTE_FDIR_MODE_NONE in the device
>>> >> configuration:
>>> >>
>>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
>>> >> --rss- udp [...]
>>> >> testpmd> set verbose 1
>>> >> Change verbose level from 0 to 1
>>> >> testpmd> set fwd rxonly
>>> >> Set rxonly packet forwarding mode
>>> >> testpmd> start tx_first
>>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>>> >>   nb forwarding cores=16 - nb forwarding ports=2
>>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
>>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
>>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1: received
>>> >> 16 packets
>>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>>> >> packet type
>>> >>  - Receive queue=0x1
>>> >>   PKT_RX_FDIR
>>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 Unknown
>>> >> packet type
>>> >>  - Receive queue=0x1
>>> >>   PKT_RX_FDIR
>>> >>
>>> >
>>> > For this issue, I think following patch can solve your problem, please apply this
>>> patch.
>>> > http://dpdk.org/dev/patchwork/patch/13593/
>>> >
>>>
>>> I tried to apply it directly on 16.04 but it can't be applied. I see it's been applied to
>>> dpdk-next-net/rel_16_07. Do you happen to have one that would work on the
>>> latest stable 16.04 release?
>>
>> Sorry, I haven't. If there's conflict with R16.04, I think you can resolve it by going through the patch.
>>
>> Beilei
>>
>>>
>>> Thanks,
>>> Dumitru
>>>
>>> > Beilei
>>> >
>>> >> Please let me know if there's more debug info that might be of
>>> >> interest in troubleshooting the RSS=0 issue.
>>> >>
>>> >> Thanks,
>>> >> Dumitru
>>> >>
>>> >> > /Beilei
>>> >> >
>>> >> >> Thanks,
>>> >> >> Dumitru
>>> >> >>
>
>
>
> --
> Dumitru Ceara



-- 
Dumitru Ceara


* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-26  8:38                     ` Take Ceara
@ 2016-07-26  8:47                       ` Zhang, Helin
  2016-07-26  8:57                         ` Take Ceara
  0 siblings, 1 reply; 15+ messages in thread
From: Zhang, Helin @ 2016-07-26  8:47 UTC (permalink / raw)
  To: Take Ceara, Xing, Beilei; +Cc: Wu, Jingjing, dev

Hi Ceara

For the testpmd command line, txqflags = 0xf01 should be set when receiving packets that need more than one mbuf.
I am not sure if it will help here, but please have a try!

Regards,
Helin

> -----Original Message-----
> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> Sent: Tuesday, July 26, 2016 4:38 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs
> for some RX mbuf sizes
> 
> Hi Beilei,
> 
> On Mon, Jul 25, 2016 at 12:04 PM, Take Ceara <dumitru.ceara@gmail.com>
> wrote:
> > Hi Beilei,
> >
> > On Mon, Jul 25, 2016 at 5:24 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
> >> Hi,
> >>
> >>> -----Original Message-----
> >>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> >>> Sent: Friday, July 22, 2016 8:32 PM
> >>> To: Xing, Beilei <beilei.xing@intel.com>
> >>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
> >>> <jingjing.wu@intel.com>; dev@dpdk.org
> >>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for
> >>> XL710/X710 NICs for some RX mbuf sizes
> >>>
> >>> I was using the test-pmd "txonly" implementation which sends fixed
> >>> UDP packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
> >>>
> >>> I changed the test-pmd tx_only code so that it sends traffic with
> >>> incremental destination IP: 192.168.0.1:1024 -> [192.168.0.2,
> >>> 192.168.0.12]:1024
> >>> I also dumped the source and destination IPs in the "rxonly"
> >>> pkt_burst_receive function.
> >>> Then I see that packets are indeed sent to different queues but the
> >>> mbuf->hash.rss value is still 0.
> >>>
> >>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
> >>> --enable-rx-cksum --rss-udp
> >>>
> >>> [...]
> >>>
> >>>  - Receive queue=0xf
> >>>   PKT_RX_RSS_HASH
> >>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 -
> >>> (outer)
> >>> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
> >>> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 -
> >>> (outer)
> >>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
> >>> L3 type: Unknown - Inner L4 type: Unknown
> >>>  - Receive queue=0x7
> >>>   PKT_RX_RSS_HASH
> >>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
> >>> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80009
> >>> -
> >>> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown -
> >>> Inner
> >>> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
> >>> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
> >>> queue=0x7 - Inner L4 type: Unknown
> >>>
> >>> [...]
> >>>
> >>> testpmd> stop
> >>>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1
> -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2
> -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4
> -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7
> -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9
> -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10
> -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12
> -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14
> -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15
> -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ---------------------- Forward statistics for port 0  ----------------------
> >>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
> >>>   TX-packets: 0              TX-dropped: 0             TX-total: 0
> >>>
> >>> --------------------------------------------------------------------
> >>> --------
> >>>
> >>>   ---------------------- Forward statistics for port 1  ----------------------
> >>>   RX-packets: 0              RX-dropped: 0             RX-total: 0
> >>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
> >>>
> >>> --------------------------------------------------------------------
> >>> --------
> >>>
> >>>   +++++++++++++++ Accumulated forward statistics for all
> >>> ports+++++++++++++++
> >>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
> >>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
> >>>
> >>>
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>> +++++++++++++++
> >>>
> >>> I checked all the RSS hash values for all the 10 different incoming
> >>> streams and they're all 0. Also, the fact that the traffic is
> >>> actually distributed seems to suggest that RSS itself is working but
> >>> the mbuf hash field is (I guess) either not written or corrupted.
> >>>
> >>
> >> I tried to reproduce the problem with the same steps you used on 16.04 and
> 16.07, but I really didn't replicate it.
> >> I think you can try follow ways:
> >> 1. apply the patch I told you last time and check if the problem still exists.
> >
> > I applied the changes in the patch manually to 16.04. The RSS=0
> > problem still exists while the FDIR issue is fixed.
> >
> >> 2. update the codebase and check if the problem still exists.
> >
> > I updated the codebase to the latest version on
> > http://dpdk.org/git/dpdk. I still see the RSS=0 issue.
> >
> >> 3. disable vector when you run testpmd, and check if the problem still exists.
> >
> > I recompiled the latest dpdk code with
> > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=n and the RSS=0 issue is still
> > there.
> >
> > My current command line is:
> > ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> > --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
> > --rss-udp
> >
> > Not sure if relevant but I'm running kernel 4.2.0-27:
> > $ uname -a
> > Linux jspg2 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22
> > 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> >
> > Is there anything else that might help you identify the cause of the problem?
> >
> > Thanks,
> > Dumitru
> 
> After some debugging in the i40e DPDK driver I figured out the problem When
> receiving packets with i40e_recv_scattered_pkts, which gets called in my case
> because the incoming packet is bigger than 1 full mbuf (the 4 bytes CRC goes in
> the second mbuf of the chain), the pkt_flags, hash, etc are set only when
> processing the last mbuf in the packet chain. However, when the hash.rss field is
> set, instead of setting it in the first mbuf of the packet it gets set in the current
> mbuf (rxm). This can also cause a lot of unpredictable behavior if the last mbuf
> only contained the CRC that was stripped as rxm would have already been freed
> by then. The line I'm referring to is:
> 
> if (pkt_flags & PKT_RX_RSS_HASH)
>         rxm->hash.rss =
>                 rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
> 
> I changed it to setting the rss field in first_seg instead of rxm and it works fine
> now.
> 
> As far as I see this is the only place where we can receive scattered packets and
> all the other places where the RSS hash is set seem to be fine.
> Should I submit a proper patch for this or will you do it as you're more familiar to
> the code?
> 
> Thanks,
> Dumitru
> 
> >
> >>
> >>> >
> >>> >>
> >>> >> If I use a different mbuf-size, for example 2048, everything is fine:
> >>> >>
> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts
> >>> >> 1024
> >>> >> -- enable-rx-cksum --rss-udp [...]
> >>> >> testpmd> set verbose 1
> >>> >> Change verbose level from 0 to 1
> >>> >> testpmd> set fwd rxonly
> >>> >> Set rxonly packet forwarding mode
> >>> >> testpmd> start tx_first
> >>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>> >>   nb forwarding cores=16 - nb forwarding ports=2
> >>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
> >>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
> >>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1:
> >>> >> received
> >>> >> 32 packets
> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
> >>> >> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN -
> >>> >> (outer)
> >>> >> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown -
> >>> >> Inner
> >>> >> L3 type: Unknown - Inner L4 type: Unknown
> >>> >>  - Receive queue=0x1
> >>> >>   PKT_RX_RSS_HASH
> >>> >>
> >>> >
> >>> > Did you send the same packet as before to port 0?
> >>> >
> >>> >> Another weird thing I noticed is when I run test-pmd without
> >>> >> --enable-rx-cksum (which is the default mode) then the RSS flag
> >>> >> doesn't get
> >>> set at all.
> >>> >> Instead the PKT_RX_FDIR flag gets set. This happens even though
> >>> >> fdir_mode is set to RTE_FDIR_MODE_NONE in the device
> >>> >> configuration:
> >>> >>
> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts
> >>> >> 1024
> >>> >> --rss-udp [...]
> >>> >> testpmd> set verbose 1
> >>> >> Change verbose level from 0 to 1
> >>> >> testpmd> set fwd rxonly
> >>> >> Set rxonly packet forwarding mode
> >>> >> testpmd> start tx_first
> >>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>> >>   nb forwarding cores=16 - nb forwarding ports=2
> >>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
> >>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
> >>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1:
> >>> >> received
> >>> >> 16 packets
> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263
> >>> >> Unknown packet type
> >>> >>  - Receive queue=0x1
> >>> >>   PKT_RX_FDIR
> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263
> >>> >> Unknown packet type
> >>> >>  - Receive queue=0x1
> >>> >>   PKT_RX_FDIR
> >>> >>
> >>> >
> >>> > For this issue, I think following patch can solve your problem,
> >>> > please apply this
> >>> patch.
> >>> > http://dpdk.org/dev/patchwork/patch/13593/
> >>> >
> >>>
> >>> I tried to apply it directly on 16.04 but it can't be applied. I see
> >>> it's been applied to dpdk-next-net/rel_16_07. Do you happen to have
> >>> one that would work on the latest stable 16.04 release?
> >>
> >> Sorry, I haven't. If there's conflict with R16.04, I think you can resolve it by
> going through the patch.
> >>
> >> Beilei
> >>
> >>>
> >>> Thanks,
> >>> Dumitru
> >>>
> >>> > Beilei
> >>> >
> >>> >> Please let me know if there's more debug info that might be of
> >>> >> interest in troubleshooting the RSS=0 issue.
> >>> >>
> >>> >> Thanks,
> >>> >> Dumitru
> >>> >>
> >>> >> > /Beilei
> >>> >> >
> >>> >> >> Thanks,
> >>> >> >> Dumitru
> >>> >> >>
> >
> >
> >
> > --
> > Dumitru Ceara
> 
> 
> 
> --
> Dumitru Ceara

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
  2016-07-26  8:47                       ` Zhang, Helin
@ 2016-07-26  8:57                         ` Take Ceara
  0 siblings, 0 replies; 15+ messages in thread
From: Take Ceara @ 2016-07-26  8:57 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: Xing, Beilei, Wu, Jingjing, dev

Hi Beilei,

On Tue, Jul 26, 2016 at 10:47 AM, Zhang, Helin <helin.zhang@intel.com> wrote:
> Hi Ceara
>
> For the testpmd command line, txqflags = 0xf01 should be set for receiving packets which need more than one mbuf.
> I am not sure if it is helpful for you here. Please have a try!
>

Just tried, and it doesn't really help:
testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
--coremask=0xffff0 --rxq=2 --txq=2 --mbuf-size 1152 --txpkts 1024
--enable-rx-cksum --rss-udp --txqflags 0xf01

  src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
length=1024 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - (outer) L2
type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001
DIP=C0A8000A - (outer) L4 type: UDP - Tunnel type: Unknown - Inner L2
type: Unknown - Inner L3 type: Unknown - Inner L4 type: Unknown
 - Receive queue=0x0
  PKT_RX_RSS_HASH

As I was saying in my previous email, the problem is that the RSS hash is
set in the last mbuf of the chain instead of the first:

http://dpdk.org/browse/dpdk/tree/drivers/net/i40e/i40e_rxtx.c#n1438

Even worse, the last rxm mbuf was already freed if it only contained
the CRC which had to be stripped:

http://dpdk.org/browse/dpdk/tree/drivers/net/i40e/i40e_rxtx.c#n1419

Regards,
Dumitru


> Regards,
> Helin
>
>> -----Original Message-----
>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>> Sent: Tuesday, July 26, 2016 4:38 PM
>> To: Xing, Beilei <beilei.xing@intel.com>
>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>> dev@dpdk.org
>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs
>> for some RX mbuf sizes
>>
>> Hi Beilei,
>>
>> On Mon, Jul 25, 2016 at 12:04 PM, Take Ceara <dumitru.ceara@gmail.com>
>> wrote:
>> > Hi Beilei,
>> >
>> > On Mon, Jul 25, 2016 at 5:24 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
>> >> Hi,
>> >>
>> >>> -----Original Message-----
>> >>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
>> >>> Sent: Friday, July 22, 2016 8:32 PM
>> >>> To: Xing, Beilei <beilei.xing@intel.com>
>> >>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
>> >>> <jingjing.wu@intel.com>; dev@dpdk.org
>> >>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for
>> >>> XL710/X710 NICs for some RX mbuf sizes
>> >>>
>> >>> I was using the test-pmd "txonly" implementation which sends fixed
>> >>> UDP packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
>> >>>
>> >>> I changed the test-pmd tx_only code so that it sends traffic with
>> >>> incremental destination IP: 192.168.0.1:1024 -> [192.168.0.2,
>> >>> 192.168.0.12]:1024
>> >>> I also dumped the source and destination IPs in the "rxonly"
>> >>> pkt_burst_receive function.
>> >>> Then I see that packets are indeed sent to different queues but the
>> >>> mbuf->hash.rss value is still 0.
>> >>>
>> >>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> >>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
>> >>> --enable-rx-cksum --rss-udp
>> >>>
>> >>> [...]
>> >>>
>> >>>  - Receive queue=0xf
>> >>>   PKT_RX_RSS_HASH
>> >>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> >>> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
>> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 -
>> >>> (outer)
>> >>> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
>> >>> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
>> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 -
>> >>> (outer)
>> >>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - Inner
>> >>> L3 type: Unknown - Inner L4 type: Unknown
>> >>>  - Receive queue=0x7
>> >>>   PKT_RX_RSS_HASH
>> >>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
>> >>> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80009
>> >>> -
>> >>> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown -
>> >>> Inner
>> >>> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
>> >>> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
>> >>> queue=0x7 - Inner L4 type: Unknown
>> >>>
>> >>> [...]
>> >>>
>> >>> testpmd> stop
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1
>> -------
>> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2
>> -------
>> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4
>> -------
>> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7
>> -------
>> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9
>> -------
>> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10
>> -------
>> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12
>> -------
>> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14
>> -------
>> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >>>   ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15
>> -------
>> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
>> >>>   ---------------------- Forward statistics for port 0  ----------------------
>> >>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>> >>>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>> >>>
>> >>> --------------------------------------------------------------------
>> >>> --------
>> >>>
>> >>>   ---------------------- Forward statistics for port 1  ----------------------
>> >>>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>> >>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>> >>>
>> >>> --------------------------------------------------------------------
>> >>> --------
>> >>>
>> >>>   +++++++++++++++ Accumulated forward statistics for all
>> >>> ports+++++++++++++++
>> >>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
>> >>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
>> >>>
>> >>>
>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> >>> +++++++++++++++
>> >>>
>> >>> I checked all the RSS hash values for all the 10 different incoming
>> >>> streams and they're all 0. Also, the fact that the traffic is
>> >>> actually distributed seems to suggest that RSS itself is working but
>> >>> the mbuf hash field is (I guess) either not written or corrupted.
>> >>>
>> >>
>> >> I tried to reproduce the problem with the same steps you used on 16.04 and
>> 16.07, but I really didn't replicate it.
>> >> I think you can try follow ways:
>> >> 1. apply the patch I told you last time and check if the problem still exists.
>> >
>> > I applied the changes in the patch manually to 16.04. The RSS=0
>> > problem still exists while the FDIR issue is fixed.
>> >
>> >> 2. update the codebase and check if the problem still exists.
>> >
>> > I updated the codebase to the latest version on
>> > http://dpdk.org/git/dpdk. I still see the RSS=0 issue.
>> >
>> >> 3. disable vector when you run testpmd, and check if the problem still exists.
>> >
>> > I recompiled the latest dpdk code with
>> > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=n and the RSS=0 issue is still
>> > there.
>> >
>> > My current command line is:
>> > ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> > --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
>> > --rss-udp
>> >
>> > Not sure if relevant but I'm running kernel 4.2.0-27:
>> > $ uname -a
>> > Linux jspg2 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22
>> > 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>> >
>> > Is there anything else that might help you identify the cause of the problem?
>> >
>> > Thanks,
>> > Dumitru
>>
>> After some debugging in the i40e DPDK driver I figured out the problem. When
>> receiving packets with i40e_recv_scattered_pkts, which gets called in my case
>> because the incoming packet is bigger than 1 full mbuf (the 4 bytes CRC goes in
>> the second mbuf of the chain), the pkt_flags, hash, etc are set only when
>> processing the last mbuf in the packet chain. However, when the hash.rss field is
>> set, instead of setting it in the first mbuf of the packet it gets set in the current
>> mbuf (rxm). This can also cause a lot of unpredictable behavior if the last mbuf
>> only contained the CRC that was stripped as rxm would have already been freed
>> by then. The line I'm referring to is:
>>
>> if (pkt_flags & PKT_RX_RSS_HASH)
>>         rxm->hash.rss =
>>                 rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
>>
>> I changed it to setting the rss field in first_seg instead of rxm and it works fine
>> now.
>>
>> As far as I see this is the only place where we can receive scattered packets and
>> all the other places where the RSS hash is set seem to be fine.
>> Should I submit a proper patch for this or will you do it as you're more familiar with
>> the code?
>>
>> Thanks,
>> Dumitru
>>
>> >
>> >>
>> >>> >
>> >>> >>
>> >>> >> If I use a different mbuf-size, for example 2048, everything is fine:
>> >>> >>
>> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts
>> >>> >> 1024
>> >>> >> --enable-rx-cksum --rss-udp [...]
>> >>> >> testpmd> set verbose 1
>> >>> >> Change verbose level from 0 to 1
>> >>> >> testpmd> set fwd rxonly
>> >>> >> Set rxonly packet forwarding mode
>> >>> >> testpmd> start tx_first
>> >>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>> >>> >>   nb forwarding cores=16 - nb forwarding ports=2
>> >>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
>> >>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>> >>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
>> >>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>> >>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1:
>> >>> >> received
>> >>> >> 32 packets
>> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> >>> >> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
>> >>> >> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN -
>> >>> >> (outer)
>> >>> >> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown -
>> >>> >> Inner
>> >>> >> L3 type: Unknown - Inner L4 type: Unknown
>> >>> >>  - Receive queue=0x1
>> >>> >>   PKT_RX_RSS_HASH
>> >>> >>
>> >>> >
>> >>> > Did you send the same packet as before to port 0?
>> >>> >
>> >>> >> Another weird thing I noticed is when I run test-pmd without
>> >>> >> --enable-rx-cksum (which is the default mode) then the RSS flag
>> >>> >> doesn't get
>> >>> set at all.
>> >>> >> Instead the PKT_RX_FDIR flag gets set. This happens even though
>> >>> >> fdir_mode is set to RTE_FDIR_MODE_NONE in the device
>> >>> >> configuration:
>> >>> >>
>> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
>> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts
>> >>> >> 1024
>> >>> >> --rss-udp [...]
>> >>> >> testpmd> set verbose 1
>> >>> >> Change verbose level from 0 to 1
>> >>> >> testpmd> set fwd rxonly
>> >>> >> Set rxonly packet forwarding mode
>> >>> >> testpmd> start tx_first
>> >>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
>> >>> >>   nb forwarding cores=16 - nb forwarding ports=2
>> >>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
>> >>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>> >>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
>> >>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>> >>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1:
>> >>> >> received
>> >>> >> 16 packets
>> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263
>> >>> >> Unknown packet type
>> >>> >>  - Receive queue=0x1
>> >>> >>   PKT_RX_FDIR
>> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
>> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263
>> >>> >> Unknown packet type
>> >>> >>  - Receive queue=0x1
>> >>> >>   PKT_RX_FDIR
>> >>> >>
>> >>> >
>> >>> > For this issue, I think following patch can solve your problem,
>> >>> > please apply this
>> >>> patch.
>> >>> > http://dpdk.org/dev/patchwork/patch/13593/
>> >>> >
>> >>>
>> >>> I tried to apply it directly on 16.04 but it can't be applied. I see
>> >>> it's been applied to dpdk-next-net/rel_16_07. Do you happen to have
>> >>> one that would work on the latest stable 16.04 release?
>> >>
>> >> Sorry, I haven't. If there's conflict with R16.04, I think you can resolve it by
>> going through the patch.
>> >>
>> >> Beilei
>> >>
>> >>>
>> >>> Thanks,
>> >>> Dumitru
>> >>>
>> >>> > Beilei
>> >>> >
>> >>> >> Please let me know if there's more debug info that might be of
>> >>> >> interest in troubleshooting the RSS=0 issue.
>> >>> >>
>> >>> >> Thanks,
>> >>> >> Dumitru
>> >>> >>
>> >>> >> > /Beilei
>> >>> >> >
>> >>> >> >> Thanks,
>> >>> >> >> Dumitru
>> >>> >> >>
>> >
>> >
>> >
>> > --
>> > Dumitru Ceara
>>
>>
>>
>> --
>> Dumitru Ceara



-- 
Dumitru Ceara

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
@ 2016-07-26  9:23 Ananyev, Konstantin
  0 siblings, 0 replies; 15+ messages in thread
From: Ananyev, Konstantin @ 2016-07-26  9:23 UTC (permalink / raw)
  To: Take Ceara; +Cc: dev




Hi Dumitru,

> 
> Hi Beilei,
> 
> On Mon, Jul 25, 2016 at 12:04 PM, Take Ceara <dumitru.ceara@gmail.com> wrote:
> > Hi Beilei,
> >
> > On Mon, Jul 25, 2016 at 5:24 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
> >> Hi,
> >>
> >>> -----Original Message-----
> >>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> >>> Sent: Friday, July 22, 2016 8:32 PM
> >>> To: Xing, Beilei <beilei.xing@intel.com>
> >>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing 
> >>> <jingjing.wu@intel.com>; dev@dpdk.org
> >>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for
> >>> XL710/X710 NICs for some RX mbuf sizes
> >>>
> >>> I was using the test-pmd "txonly" implementation which sends fixed 
> >>> UDP packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
> >>>
> >>> I changed the test-pmd tx_only code so that it sends traffic with 
> >>> incremental destination IP: 192.168.0.1:1024 -> [192.168.0.2,
> >>> 192.168.0.12]:1024
> >>> I also dumped the source and destination IPs in the "rxonly"
> >>> pkt_burst_receive function.
> >>> Then I see that packets are indeed sent to different queues but 
> >>> the
> >>> mbuf->hash.rss value is still 0.
> >>>
> >>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 
> >>> 1024 --enable-rx-cksum --rss-udp
> >>>
> >>> [...]
> >>>
> >>>  - Receive queue=0xf
> >>>   PKT_RX_RSS_HASH
> >>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 -
> >>> (outer)
> >>> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
> >>> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 -
> >>> (outer)
> >>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - 
> >>> Inner
> >>> L3 type: Unknown - Inner L4 type: Unknown
> >>>  - Receive queue=0x7
> >>>   PKT_RX_RSS_HASH
> >>>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
> >>> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 
> >>> DIP=C0A80009
> >>> -
> >>> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown - 
> >>> Inner
> >>> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
> >>> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
> >>> queue=0x7 - Inner L4 type: Unknown
> >>>
> >>> [...]
> >>>
> >>> testpmd> stop
> >>>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4 -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7 -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9 -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10 -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12 -------
> >>>   RX-packets: 59             TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14 -------
> >>>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >>>   ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15 -------
> >>>   RX-packets: 48             TX-packets: 32             TX-dropped: 0
> >>>   ---------------------- Forward statistics for port 0  ----------------------
> >>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
> >>>   TX-packets: 0              TX-dropped: 0             TX-total: 0
> >>>
> >>> ------------------------------------------------------------------
> >>> --
> >>> --------
> >>>
> >>>   ---------------------- Forward statistics for port 1  ----------------------
> >>>   RX-packets: 0              RX-dropped: 0             RX-total: 0
> >>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
> >>>
> >>> ------------------------------------------------------------------
> >>> --
> >>> --------
> >>>
> >>>   +++++++++++++++ Accumulated forward statistics for all
> >>> ports+++++++++++++++
> >>>   RX-packets: 428            RX-dropped: 84            RX-total: 512
> >>>   TX-packets: 512            TX-dropped: 0             TX-total: 512
> >>>
> >>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>> +++++++++++++++
> >>>
> >>> I checked all the RSS hash values for all the 10 different 
> >>> incoming streams and they're all 0. Also, the fact that the 
> >>> traffic is actually distributed seems to suggest that RSS itself 
> >>> is working but the mbuf hash field is (I guess) either not written or corrupted.
> >>>
> >>
> >> I tried to reproduce the problem with the same steps you used on 16.04 and 16.07, but I really didn't replicate it.
> >> I think you can try follow ways:
> >> 1. apply the patch I told you last time and check if the problem still exists.
> >
> > I applied the changes in the patch manually to 16.04. The RSS=0 
> > problem still exists while the FDIR issue is fixed.
> >
> >> 2. update the codebase and check if the problem still exists.
> >
> > I updated the codebase to the latest version on 
> > http://dpdk.org/git/dpdk. I still see the RSS=0 issue.
> >
> >> 3. disable vector when you run testpmd, and check if the problem still exists.
> >
> > I recompiled the latest dpdk code with 
> > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=n and the RSS=0 issue is still 
> > there.
> >
> > My current command line is:
> > ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> > --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024 
> > --rss-udp
> >
> > Not sure if relevant but I'm running kernel 4.2.0-27:
> > $ uname -a
> > Linux jspg2 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22
> > 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> >
> > Is there anything else that might help you identify the cause of the problem?
> >
> > Thanks,
> > Dumitru
> 
> After some debugging in the i40e DPDK driver I figured out the problem.
> When receiving packets with i40e_recv_scattered_pkts, which gets 
> called in my case because the incoming packet is bigger than 1 full 
> mbuf (the 4 bytes CRC goes in the second mbuf of the chain), the 
> pkt_flags, hash, etc are set only when processing the last mbuf in the packet chain. However, when the hash.rss field is set, instead of setting it in the first mbuf of the packet it gets set in the current mbuf (rxm). This can also cause a lot of unpredictable behavior if the last mbuf only contained the CRC that was stripped as rxm would have already been freed by then. The line I'm referring to is:
> 
> if (pkt_flags & PKT_RX_RSS_HASH)
>         rxm->hash.rss =
>                 rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
> 
> I changed it to setting the rss field in first_seg instead of rxm and it works fine now.
> 
> As far as I see this is the only place where we can receive scattered 
> packets and all the other places where the RSS hash is set seem to be fine.
> Should I submit a proper patch for this or will you do it as you're more familiar with the code?
> 

Yes, please, and thanks for the great catch.
Unfortunately, we are probably too late to include it in 16.07 :(
Konstantin 

> Thanks,
> Dumitru
> 
> >
> >>
> >>> >
> >>> >>
> >>> >> If I use a different mbuf-size, for example 2048, everything is fine:
> >>> >>
> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts
> >>> >> 1024
> >>> >> --enable-rx-cksum --rss-udp [...]
> >>> >> testpmd> set verbose 1
> >>> >> Change verbose level from 0 to 1
> >>> >> testpmd> set fwd rxonly
> >>> >> Set rxonly packet forwarding mode
> >>> >> testpmd> start tx_first
> >>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>> >>   nb forwarding cores=16 - nb forwarding ports=2
> >>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
> >>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
> >>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1:
> >>> >> received
> >>> >> 32 packets
> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
> >>> >> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN -
> >>> >> (outer)
> >>> >> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown - 
> >>> >> Inner
> >>> >> L3 type: Unknown - Inner L4 type: Unknown
> >>> >>  - Receive queue=0x1
> >>> >>   PKT_RX_RSS_HASH
> >>> >>
> >>> >
> >>> > Did you send the same packet as before to port 0?
> >>> >
> >>> >> Another weird thing I noticed is when I run test-pmd without
> >>> >> --enable-rx-cksum (which is the default mode) then the RSS
> >>> >> flag doesn't get
> >>> set at all.
> >>> >> Instead the PKT_RX_FDIR flag gets set. This happens even though 
> >>> >> fdir_mode is set to RTE_FDIR_MODE_NONE in the device
> >>> >> configuration:
> >>> >>
> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts
> >>> >> 1024
> >>> >> --rss-udp [...]
> >>> >> testpmd> set verbose 1
> >>> >> Change verbose level from 0 to 1
> >>> >> testpmd> set fwd rxonly
> >>> >> Set rxonly packet forwarding mode
> >>> >> testpmd> start tx_first
> >>> >>   rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>> >>   nb forwarding cores=16 - nb forwarding ports=2
> >>> >>   RX queues=16 - RX desc=128 - RX free threshold=32
> >>> >>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>> >>   TX queues=16 - TX desc=512 - TX free threshold=32
> >>> >>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>> >>   TX RS bit threshold=32 - TXQ flags=0xf01 port 0/queue 1:
> >>> >> received
> >>> >> 16 packets
> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 
> >>> >> Unknown packet type
> >>> >>  - Receive queue=0x1
> >>> >>   PKT_RX_FDIR
> >>> >>   src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263 
> >>> >> Unknown packet type
> >>> >>  - Receive queue=0x1
> >>> >>   PKT_RX_FDIR
> >>> >>
> >>> >
> >>> > For this issue, I think following patch can solve your problem, 
> >>> > please apply this
> >>> patch.
> >>> > http://dpdk.org/dev/patchwork/patch/13593/
> >>> >
> >>>
> >>> I tried to apply it directly on 16.04 but it can't be applied. I 
> >>> see it's been applied to dpdk-next-net/rel_16_07. Do you happen to 
> >>> have one that would work on the latest stable 16.04 release?
> >>
> >> Sorry, I haven't. If there's conflict with R16.04, I think you can resolve it by going through the patch.
> >>
> >> Beilei
> >>
> >>>
> >>> Thanks,
> >>> Dumitru
> >>>
> >>> > Beilei
> >>> >
> >>> >> Please let me know if there's more debug info that might be of 
> >>> >> interest in troubleshooting the RSS=0 issue.
> >>> >>
> >>> >> Thanks,
> >>> >> Dumitru
> >>> >>
> >>> >> > /Beilei
> >>> >> >
> >>> >> >> Thanks,
> >>> >> >> Dumitru
> >>> >> >>
> >
> >
> >
> > --
> > Dumitru Ceara
> 
> 
> 
> --
> Dumitru Ceara

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2016-07-26  9:23 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAKKV4w9uoN_X=0DKJHgcAHT7VCmeBHP=WrHfi+12o3ogA6htSQ@mail.gmail.com>
2016-07-18 15:15 ` [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes Zhang, Helin
2016-07-18 16:14   ` Take Ceara
2016-07-19  9:31     ` Xing, Beilei
2016-07-19 14:58       ` Take Ceara
2016-07-20  1:59         ` Xing, Beilei
2016-07-21 10:58           ` Take Ceara
2016-07-22  9:04             ` Xing, Beilei
2016-07-22 12:31               ` Take Ceara
2016-07-22 12:35                 ` Take Ceara
2016-07-25  3:24                 ` Xing, Beilei
2016-07-25 10:04                   ` Take Ceara
2016-07-26  8:38                     ` Take Ceara
2016-07-26  8:47                       ` Zhang, Helin
2016-07-26  8:57                         ` Take Ceara
2016-07-26  9:23 Ananyev, Konstantin
