From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [dpdk-dev] [Bug 866] Huge Packet drops observed on vmxnet3 after updating DPDK version from 20.11 to 21.08
Date: Fri, 29 Oct 2021 11:01:41 +0000
Message-ID: <bug-866-3@http.bugs.dpdk.org/>

https://bugs.dpdk.org/show_bug.cgi?id=866

            Bug ID: 866
           Summary: Huge Packet drops observed on vmxnet3 after updating
                    DPDK version from 20.11 to 21.08
           Product: DPDK
           Version: 21.08
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: major
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: sahithi.singam@oracle.com
  Target Milestone: ---

Our hosts run VMware ESXi 6.7, and on one of them we run a DPDK-based custom
application over vmxnet3 devices.
When we upgraded the DPDK version in our application from 20.11 to 21.08, we
saw a large difference in the performance numbers.
In our tests we observed packet drops of around 1.5% with DPDK 21.08, and
these drops were not visible in either the guest DPDK port stats or the host
vmxnet3 stats.

On further debugging, we traced this issue to the recent commit
"046f1161956777e3afb13504acbe8df2ec3a383c net/vmxnet3: support MSI-X
interrupt".

The following code change in the above commit causes the issue:

=====================================================================
+               if (hw->intr.lsc_only)
+                       tqd->conf.intrIdx = 1;
+               else
+                       tqd->conf.intrIdx = intr_handle->intr_vec[i];
...
...
...
+               if (hw->intr.lsc_only)
+                       rqd->conf.intrIdx = 1;
+               else
+                       rqd->conf.intrIdx = intr_handle->intr_vec[i];
======================================================================

We are using igb_uio and the link status interrupt for link status detection,
so our application takes the lsc_only branch above, and per this code the
RX/TX queues use the interrupt at index 1. Since the interrupt at index 1 is
never enabled in the guest DPDK code, this results in the packet drops. We
see the following counters in the host stats:

[root@vmw-node1:~] vsish -e cat /net/portsets/switch1/ports/100663342/vmxnet3/intrs/0/stats
stats of the individual intr {
   actions posted:0
   actions posted with hint:0
   actions avoided:0
}

[root@vmw-node1:~] vsish -e cat /net/portsets/switch1/ports/100663342/vmxnet3/intrs/1/stats
stats of the individual intr {
   actions posted:1
   actions posted with hint:1
   actions avoided:27207273
}
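
For reference, the guest application requests only the link status interrupt
(no Rx queue interrupts), which is what makes the driver take the lsc_only
path. A minimal sketch of such a setup with the standard ethdev API is below;
it is not our exact code, and configure_port_with_lsc / lsc_event_callback are
placeholder names.

=====================================================================
#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Called by the ethdev layer when a link status change interrupt fires. */
static int
lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
                   void *param, void *ret_param)
{
        RTE_SET_USED(param);
        RTE_SET_USED(ret_param);
        if (type == RTE_ETH_EVENT_INTR_LSC)
                printf("Port %u: link status changed\n", port_id);
        return 0;
}

static int
configure_port_with_lsc(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        /* LSC interrupt enabled, Rx (data path) interrupts disabled;
         * with igb_uio this ends up in the driver's lsc_only path,
         * as described above. */
        struct rte_eth_conf port_conf = {
                .intr_conf = {
                        .lsc = 1,
                        .rxq = 0,
                },
        };
        int ret;

        ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
        if (ret != 0)
                return ret;

        return rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                                              lsc_event_callback, NULL);
}
=====================================================================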

When I reverted the above code change to what DPDK 20.11 does, i.e. as below,
we no longer see the packet drops:
=========================================================
tqd->conf.intrIdx = txq->comp_ring.intr_idx;
rqd->conf.intrIdx = rxq->comp_ring.intr_idx;
=========================================================
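
Rather than a full revert, one possible middle ground might be to keep the
20.11 queue assignment only in the lsc_only case. A rough, untested sketch
(not a submitted patch), using only the fields already shown above:

=========================================================
/* Untested sketch: when only the LSC interrupt is enabled, keep the
 * queues on their completion ring's interrupt index (20.11 behaviour)
 * instead of diverting them to index 1, which the guest never enables. */
if (hw->intr.lsc_only)
        tqd->conf.intrIdx = txq->comp_ring.intr_idx;
else
        tqd->conf.intrIdx = intr_handle->intr_vec[i];
...
if (hw->intr.lsc_only)
        rqd->conf.intrIdx = rxq->comp_ring.intr_idx;
else
        rqd->conf.intrIdx = intr_handle->intr_vec[i];
=========================================================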

-- 
You are receiving this mail because:
You are the assignee for the bug.
