DPDK patches and discussions
* [dpdk-dev] Question about zero length segments in received mbuf
@ 2015-10-16 13:32 Tom Kiely
  2015-10-16 14:00 ` Bruce Richardson
  0 siblings, 1 reply; 2+ messages in thread
From: Tom Kiely @ 2015-10-16 13:32 UTC (permalink / raw)
  To: dev

Hi,
     I am currently experiencing a serious issue and was hoping someone 
else might have encountered it.

I have a KVM VM using two ixgbe interfaces A and B (configured to use 
PCI passthrough) and forwarding traffic from interface A via B.
At about 4 million pps of 64-byte frames, the rx driver 
ixgbe_recv_scattered_pkts_vec() appears to be generating mbufs with 2 
segments, the first of which has data_len == 0 and the second data_len == 64.
The real problem is that when ixgbe_xmit_pkts() on the tx side gets 
about 18 of these packets, it seems to mess up the transmit descriptor 
handling:
ixgbe_xmit_cleanup() never sees the STAT_DD bit set and no descriptors 
get freed, leading to total traffic loss.
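[For anyone hitting the same thing: one defensive workaround while the root cause is unknown is to strip zero-length leading segments from the chain before handing packets to tx. The sketch below uses a simplified stand-in for struct rte_mbuf (just the next/data_len/nb_segs fields); in a real application you would operate on struct rte_mbuf, free each dropped segment with rte_pktmbuf_free_seg(), and the pkt_len of the head would also need fixing up.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct rte_mbuf: only the fields this sketch needs. */
struct mbuf {
    struct mbuf *next;     /* next segment in the chain, NULL if last  */
    uint16_t     data_len; /* bytes of data in this segment            */
    uint16_t     nb_segs;  /* segment count, valid on the head segment */
};

/*
 * Drop zero-length segments from the front of the chain and return the
 * new head (or NULL if every segment was empty).  With real mbufs each
 * dropped segment would be released via rte_pktmbuf_free_seg().
 */
static struct mbuf *strip_empty_head(struct mbuf *m)
{
    while (m != NULL && m->data_len == 0) {
        struct mbuf *empty = m;
        m = m->next;
        if (m != NULL)
            m->nb_segs = (uint16_t)(empty->nb_segs - 1);
        /* rte_pktmbuf_free_seg(empty); in a real application */
        (void)empty;
    }
    return m;
}
```

Called on each received packet before rte_eth_tx_burst(), this would at least hand the tx path a chain whose first segment actually carries data.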

I'm still debugging the xmit side to find out what's causing the 
descriptor ring problem.

Has anyone encountered the rx-side zero-length-segment issue? I found a 
reference to such an issue on the web but it was years old.

I'm using DPDK 1.8.0.

Any information gratefully received,
    Tom


* Re: [dpdk-dev] Question about zero length segments in received mbuf
  2015-10-16 13:32 [dpdk-dev] Question about zero length segments in received mbuf Tom Kiely
@ 2015-10-16 14:00 ` Bruce Richardson
  0 siblings, 0 replies; 2+ messages in thread
From: Bruce Richardson @ 2015-10-16 14:00 UTC (permalink / raw)
  To: Tom Kiely; +Cc: dev

On Fri, Oct 16, 2015 at 02:32:15PM +0100, Tom Kiely wrote:
> Hi,
>     I am currently experiencing a serious issue and was hoping someone else
> might have encountered it.
> 
> I have a KVM VM using two ixgbe interfaces A and B (configured to use PCI
> passthrough) and forwarding traffic from interface A via B.
> At about 4 million pps of 64-byte frames, the rx driver
> ixgbe_recv_scattered_pkts_vec() appears to be generating mbufs with 2
> segments, the first of which has data_len == 0 and the second data_len == 64.
> The real problem is that when ixgbe_xmit_pkts() on the tx side gets about 18
> of these packets, it seems to mess up the transmit descriptor handling:
> ixgbe_xmit_cleanup() never sees the STAT_DD bit set and no descriptors get
> freed, leading to total traffic loss.
> 
> I'm still debugging the xmit side to find out what's causing the descriptor
> ring problem.
> 
> Has anyone encountered the rx-side zero-length-segment issue? I found a
> reference to such an issue on the web but it was years old.
> 
> I'm using DPDK 1.8.0.
> 
> Any information gratefully received,
>    Tom

Hi Tom,

on the TX side, if these two-segment packets are getting sent to the NIC, you
probably want to make sure that the TX code is set up to handle multi-segment
packets. By default in most drivers, the NO_MULTISEG flag is set on queue
initialization.

/Bruce
