* Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes
@ 2016-07-26 9:23 Ananyev, Konstantin
2016-07-26 10:46 ` [dpdk-dev] [PATCH] net/i40e: fix setting RSS in i40e_recv_scattered_pkts dumitru.ceara
0 siblings, 1 reply; 4+ messages in thread
From: Ananyev, Konstantin @ 2016-07-26 9:23 UTC (permalink / raw)
To: Take Ceara; +Cc: dev
Hi Dumitru,
>
> Hi Beilei,
>
> On Mon, Jul 25, 2016 at 12:04 PM, Take Ceara <dumitru.ceara@gmail.com> wrote:
> > Hi Beilei,
> >
> > On Mon, Jul 25, 2016 at 5:24 AM, Xing, Beilei <beilei.xing@intel.com> wrote:
> >> Hi,
> >>
> >>> -----Original Message-----
> >>> From: Take Ceara [mailto:dumitru.ceara@gmail.com]
> >>> Sent: Friday, July 22, 2016 8:32 PM
> >>> To: Xing, Beilei <beilei.xing@intel.com>
> >>> Cc: Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing
> >>> <jingjing.wu@intel.com>; dev@dpdk.org
> >>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for
> >>> XL710/X710 NICs for some RX mbuf sizes
> >>>
> >>> I was using the test-pmd "txonly" implementation which sends fixed
> >>> UDP packets from 192.168.0.1:1024 -> 192.168.0.2:1024.
> >>>
> >>> I changed the test-pmd tx_only code so that it sends traffic with
> >>> incremental destination IP: 192.168.0.1:1024 -> [192.168.0.2,
> >>> 192.168.0.12]:1024
> >>> I also dumped the source and destination IPs in the "rxonly"
> >>> pkt_burst_receive function.
> >>> Then I see that packets are indeed sent to different queues but
> >>> the
> >>> mbuf->hash.rss value is still 0.
> >>>
> >>> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts
> >>> 1024 --enable-rx-cksum --rss-udp
> >>>
> >>> [...]
> >>>
> >>> - Receive queue=0xf
> >>> PKT_RX_RSS_HASH
> >>> src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> length=1024 - nb_segs=1 - RSS queue=0xa - (outer) L2 type: ETHER -
> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80006 -
> >>> (outer)
> >>> L4 type: UDP - Tunnel type: Unknown - RSS hash=0x0 - Inner L2 type:
> >>> Unknown - RSS queue=0xf - RSS queue=0x7 - (outer) L2 type: ETHER -
> >>> (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001 DIP=C0A80007 -
> >>> (outer)
> >>> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown -
> >>> Inner
> >>> L3 type: Unknown - Inner L4 type: Unknown
> >>> - Receive queue=0x7
> >>> PKT_RX_RSS_HASH
> >>> src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - (outer) L2 type:
> >>> ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN SIP=C0A80001
> >>> DIP=C0A80009
> >>> -
> >>> type=0x0800 - length=1024 - nb_segs=1 - Inner L3 type: Unknown -
> >>> Inner
> >>> L4 type: Unknown - RSS hash=0x0 - (outer) L4 type: UDP - Tunnel type:
> >>> Unknown - Inner L2 type: Unknown - Inner L3 type: Unknown - RSS
> >>> queue=0x7 - Inner L4 type: Unknown
> >>>
> >>> [...]
> >>>
> >>> testpmd> stop
> >>> ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
> >>> RX-packets: 59 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
> >>> RX-packets: 48 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 4 -> TX Port= 1/Queue= 4 -------
> >>> RX-packets: 59 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 5 -> TX Port= 1/Queue= 5 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 6 -> TX Port= 1/Queue= 6 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 7 -> TX Port= 1/Queue= 7 -------
> >>> RX-packets: 48 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 8 -> TX Port= 1/Queue= 8 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue= 9 -> TX Port= 1/Queue= 9 -------
> >>> RX-packets: 59 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue=10 -> TX Port= 1/Queue=10 -------
> >>> RX-packets: 48 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue=11 -> TX Port= 1/Queue=11 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue=12 -> TX Port= 1/Queue=12 -------
> >>> RX-packets: 59 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue=13 -> TX Port= 1/Queue=13 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue=14 -> TX Port= 1/Queue=14 -------
> >>> RX-packets: 0 TX-packets: 32 TX-dropped: 0
> >>> ------- Forward Stats for RX Port= 0/Queue=15 -> TX Port= 1/Queue=15 -------
> >>> RX-packets: 48 TX-packets: 32 TX-dropped: 0
> >>> ---------------------- Forward statistics for port 0 ----------------------
> >>> RX-packets: 428 RX-dropped: 84 RX-total: 512
> >>> TX-packets: 0 TX-dropped: 0 TX-total: 0
> >>>
> >>> ----------------------------------------------------------------------------
> >>>
> >>> ---------------------- Forward statistics for port 1 ----------------------
> >>> RX-packets: 0 RX-dropped: 0 RX-total: 0
> >>> TX-packets: 512 TX-dropped: 0 TX-total: 512
> >>>
> >>> ----------------------------------------------------------------------------
> >>>
> >>> +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
> >>> RX-packets: 428 RX-dropped: 84 RX-total: 512
> >>> TX-packets: 512 TX-dropped: 0 TX-total: 512
> >>>
> >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>>
> >>> I checked all the RSS hash values for all the 10 different
> >>> incoming streams and they're all 0. Also, the fact that the
> >>> traffic is actually distributed seems to suggest that RSS itself
> >>> is working but the mbuf hash field is (I guess) either not written or corrupted.
> >>>
> >>
> >> I tried to reproduce the problem with the same steps on 16.04 and 16.07, but I couldn't replicate it.
> >> I suggest you try the following:
> >> 1. apply the patch I mentioned last time and check whether the problem still exists.
> >
> > I applied the changes in the patch manually to 16.04. The RSS=0
> > problem still exists while the FDIR issue is fixed.
> >
> >> 2. update the codebase and check if the problem still exists.
> >
> > I updated the codebase to the latest version on
> > http://dpdk.org/git/dpdk. I still see the RSS=0 issue.
> >
> >> 3. disable vector when you run testpmd, and check if the problem still exists.
> >
> > I recompiled the latest dpdk code with
> > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=n and the RSS=0 issue is still
> > there.
> >
> > My current command line is:
> > ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> > --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts 1024
> > --rss-udp
> >
> > Not sure if relevant but I'm running kernel 4.2.0-27:
> > $ uname -a
> > Linux jspg2 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22
> > 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> >
> > Is there anything else that might help you identify the cause of the problem?
> >
> > Thanks,
> > Dumitru
>
> After some debugging in the i40e DPDK driver I figured out the problem.
> When receiving packets with i40e_recv_scattered_pkts, which gets called
> in my case because the incoming packet is bigger than one full mbuf
> (the 4-byte CRC spills into the second mbuf of the chain), the
> pkt_flags, hash, etc. are set only when processing the last mbuf in the
> packet chain. However, when the hash.rss field is set, instead of being
> stored in the first mbuf of the packet it gets stored in the current
> mbuf (rxm). This can also cause unpredictable behavior: if the last
> mbuf contained only the stripped CRC, rxm would already have been freed
> by then. The line I'm referring to is:
>
> if (pkt_flags & PKT_RX_RSS_HASH)
> rxm->hash.rss =
> rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
>
> I changed it to set the rss field in first_seg instead of rxm and it works fine now.
>
> As far as I can see this is the only place where we receive scattered
> packets, and all the other places where the RSS hash is set seem fine.
> Should I submit a proper patch for this, or will you do it since you're more familiar with the code?
>
Yes please, and thanks for the great catch.
Unfortunately, we are probably too late to include it into 16.07 :(
Konstantin
> Thanks,
> Dumitru
>
> >
> >>
> >>> >
> >>> >>
> >>> >> If I use a different mbuf-size, for example 2048, everything is fine:
> >>> >>
> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 2048 --txpkts
> >>> >> 1024
> >>> >> --enable-rx-cksum --rss-udp [...]
> >>> >> testpmd> set verbose 1
> >>> >> Change verbose level from 0 to 1
> >>> >> testpmd> set fwd rxonly
> >>> >> Set rxonly packet forwarding mode
> >>> >> testpmd> start tx_first
> >>> >> rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>> >> nb forwarding cores=16 - nb forwarding ports=2
> >>> >> RX queues=16 - RX desc=128 - RX free threshold=32
> >>> >> RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>> >> TX queues=16 - TX desc=512 - TX free threshold=32
> >>> >> TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>> >> TX RS bit threshold=32 - TXQ flags=0xf01
> >>> >> port 0/queue 1: received 32 packets
> >>> >> src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=1 - RSS hash=0x5263c3f2 - RSS queue=0x1 -
> >>> >> (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN -
> >>> >> (outer)
> >>> >> L4 type: UDP - Tunnel type: Unknown - Inner L2 type: Unknown -
> >>> >> Inner
> >>> >> L3 type: Unknown - Inner L4 type: Unknown
> >>> >> - Receive queue=0x1
> >>> >> PKT_RX_RSS_HASH
> >>> >>
> >>> >
> >>> > Did you send the same packet as before to port 0?
> >>> >
> >>> >> Another weird thing I noticed is that when I run test-pmd without
> >>> >> --enable-rx-cksum (which is the default mode), the RSS flag doesn't
> >>> >> get set at all.
> >>> >> Instead the PKT_RX_FDIR flag gets set. This happens even though
> >>> >> fdir_mode is set to RTE_FDIR_MODE_NONE in the device
> >>> >> configuration:
> >>> >>
> >>> >> ./testpmd -c ffff1 -n 4 -w 0000:01:00.3 -w 0000:81:00.3 -- -i
> >>> >> --coremask=0xffff0 --rxq=16 --txq=16 --mbuf-size 1152 --txpkts
> >>> >> 1024
> >>> >> --rss-udp [...]
> >>> >> testpmd> set verbose 1
> >>> >> Change verbose level from 0 to 1
> >>> >> testpmd> set fwd rxonly
> >>> >> Set rxonly packet forwarding mode
> >>> >> testpmd> start tx_first
> >>> >> rxonly packet forwarding - CRC stripping disabled - packets/burst=32
> >>> >> nb forwarding cores=16 - nb forwarding ports=2
> >>> >> RX queues=16 - RX desc=128 - RX free threshold=32
> >>> >> RX threshold registers: pthresh=8 hthresh=8 wthresh=0
> >>> >> TX queues=16 - TX desc=512 - TX free threshold=32
> >>> >> TX threshold registers: pthresh=32 hthresh=0 wthresh=0
> >>> >> TX RS bit threshold=32 - TXQ flags=0xf01
> >>> >> port 0/queue 1: received 16 packets
> >>> >> src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263
> >>> >> Unknown packet type
> >>> >> - Receive queue=0x1
> >>> >> PKT_RX_FDIR
> >>> >> src=68:05:CA:38:6D:63 - dst=02:00:00:00:00:01 - type=0x0800 -
> >>> >> length=1024 - nb_segs=2 - FDIR matched hash=0xc3f2 ID=0x5263
> >>> >> Unknown packet type
> >>> >> - Receive queue=0x1
> >>> >> PKT_RX_FDIR
> >>> >>
> >>> >
> >>> > For this issue, I think following patch can solve your problem,
> >>> > please apply this
> >>> patch.
> >>> > http://dpdk.org/dev/patchwork/patch/13593/
> >>> >
> >>>
> >>> I tried to apply it directly on 16.04 but it couldn't be applied. I
> >>> see it's been applied to dpdk-next-net/rel_16_07. Do you happen to
> >>> have a version that would work on the latest stable 16.04 release?
> >>
> >> Sorry, I haven't. If there's a conflict with 16.04, I think you can resolve it by applying the changes from the patch manually.
> >>
> >> Beilei
> >>
> >>>
> >>> Thanks,
> >>> Dumitru
> >>>
> >>> > Beilei
> >>> >
> >>> >> Please let me know if there's more debug info that might be of
> >>> >> interest in troubleshooting the RSS=0 issue.
> >>> >>
> >>> >> Thanks,
> >>> >> Dumitru
> >>> >>
> >>> >> > /Beilei
> >>> >> >
> >>> >> >> Thanks,
> >>> >> >> Dumitru
> >>> >> >>
> >
> >
> >
> > --
> > Dumitru Ceara
>
>
>
> --
> Dumitru Ceara
^ permalink raw reply [flat|nested] 4+ messages in thread
* [dpdk-dev] [PATCH] net/i40e: fix setting RSS in i40e_recv_scattered_pkts
2016-07-26 9:23 [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes Ananyev, Konstantin
@ 2016-07-26 10:46 ` dumitru.ceara
2016-07-26 12:11 ` Ananyev, Konstantin
0 siblings, 1 reply; 4+ messages in thread
From: dumitru.ceara @ 2016-07-26 10:46 UTC (permalink / raw)
To: dev
Cc: beilei.xing, helin.zhang, jingjing.wu, konstantin.ananyev, Dumitru Ceara
From: Dumitru Ceara <dumitru.ceara@gmail.com>
The driver is incorrectly setting the RSS field in the last mbuf in
the packet chain instead of the first. Moreover, the last mbuf might
have already been freed if it only contained the Ethernet CRC.
Also, fix the call to i40e_rxd_build_fdir to store the fdir flags in
the first mbuf of the chain instead of the last.
Signed-off-by: Dumitru Ceara <dumitru.ceara@gmail.com>
---
drivers/net/i40e/i40e_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index d3cfb98..554d167 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1436,10 +1436,10 @@ i40e_recv_scattered_pkts(void *rx_queue,
i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
- rxm->hash.rss =
+ first_seg->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
if (pkt_flags & PKT_RX_FDIR)
- pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
+ pkt_flags |= i40e_rxd_build_fdir(&rxd, first_seg);
#ifdef RTE_LIBRTE_IEEE1588
pkt_flags |= i40e_get_iee15888_flags(first_seg, qword1);
--
1.9.1
* Re: [dpdk-dev] [PATCH] net/i40e: fix setting RSS in i40e_recv_scattered_pkts
2016-07-26 10:46 ` [dpdk-dev] [PATCH] net/i40e: fix setting RSS in i40e_recv_scattered_pkts dumitru.ceara
@ 2016-07-26 12:11 ` Ananyev, Konstantin
2016-07-28 13:48 ` Thomas Monjalon
0 siblings, 1 reply; 4+ messages in thread
From: Ananyev, Konstantin @ 2016-07-26 12:11 UTC (permalink / raw)
To: dumitru.ceara, dev; +Cc: Xing, Beilei, Zhang, Helin, Wu, Jingjing
> -----Original Message-----
> From: dumitru.ceara@gmail.com [mailto:dumitru.ceara@gmail.com]
> Sent: Tuesday, July 26, 2016 11:46 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>; Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Dumitru Ceara <dumitru.ceara@gmail.com>
> Subject: [PATCH] net/i40e: fix setting RSS in i40e_recv_scattered_pkts
>
> From: Dumitru Ceara <dumitru.ceara@gmail.com>
>
> The driver is incorrectly setting the RSS field in the last mbuf in the packet chain instead of the first. Moreover, the last mbuf might have
> already been freed if it only contained the Ethernet CRC.
>
> Also, fix the call to i40e_rxd_build_fdir to store the fdir flags in the first mbuf of the chain instead of the last.
>
> Signed-off-by: Dumitru Ceara <dumitru.ceara@gmail.com>
> ---
> drivers/net/i40e/i40e_rxtx.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index d3cfb98..554d167 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1436,10 +1436,10 @@ i40e_recv_scattered_pkts(void *rx_queue,
> i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
> I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
> if (pkt_flags & PKT_RX_RSS_HASH)
> - rxm->hash.rss =
> + first_seg->hash.rss =
> rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
> if (pkt_flags & PKT_RX_FDIR)
> - pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
> + pkt_flags |= i40e_rxd_build_fdir(&rxd, first_seg);
>
> #ifdef RTE_LIBRTE_IEEE1588
> pkt_flags |= i40e_get_iee15888_flags(first_seg, qword1);
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 1.9.1
* Re: [dpdk-dev] [PATCH] net/i40e: fix setting RSS in i40e_recv_scattered_pkts
2016-07-26 12:11 ` Ananyev, Konstantin
@ 2016-07-28 13:48 ` Thomas Monjalon
0 siblings, 0 replies; 4+ messages in thread
From: Thomas Monjalon @ 2016-07-28 13:48 UTC (permalink / raw)
To: dumitru.ceara
Cc: dev, Ananyev, Konstantin, Xing, Beilei, Zhang, Helin, Wu, Jingjing
> > From: Dumitru Ceara <dumitru.ceara@gmail.com>
> >
> > The driver is incorrectly setting the RSS field in the last mbuf in the packet chain instead of the first. Moreover, the last mbuf might have
> > already been freed if it only contained the Ethernet CRC.
> >
> > Also, fix the call to i40e_rxd_build_fdir to store the fdir flags in the first mbuf of the chain instead of the last.
> >
> > Signed-off-by: Dumitru Ceara <dumitru.ceara@gmail.com>
>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Fixes: 4861cde46116 ("i40e: new poll mode driver")
Fixes: 5a21d9715f81 ("i40e: report flow director matching")
Title reworded: net/i40e: fix metadata in first mbuf of scattered Rx
Applied, thanks