patches for DPDK stable branches
From: Xueming Li <xuemingl@nvidia.com>
To: <dev@dpdk.org>
Cc: <xuemingl@nvidia.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>, <stable@dpdk.org>,
	Matan Azrad <matan@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Subject: [dpdk-stable] [PATCH v2 1/2] vdpa/mlx5: workaround FW first completion in start
Date: Fri, 15 Oct 2021 23:05:44 +0800	[thread overview]
Message-ID: <20211015150545.1673312-1-xuemingl@nvidia.com> (raw)
In-Reply-To: <20210923081758.178745-1-xuemingl@nvidia.com>

After a vDPA application restart, qemu restores the VQ with its used and
available indexes, and new incoming packets trigger the virtio driver to
handle buffers. Under heavy traffic, there may be no buffer available for
the firmware to receive new packets, so no Rx interrupt is generated and
the driver gets stuck waiting endlessly for an interrupt.

As a firmware workaround, this patch sends a notification after VQ setup
to ask the driver to handle the completed buffers and refill new ones;
see the illustrative sketch below.
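
For illustration only (not part of the patch), a minimal sketch of the kick
mechanism used here: writing 1 to the virtqueue's call eventfd wakes the
guest virtio driver so it processes used buffers and refills the available
ring. The helper name notify_guest() is hypothetical, and the callfd is
assumed to have been obtained beforehand from the vhost library.

    #include <sys/eventfd.h>

    /* Hypothetical helper: kick the guest once through the vring call eventfd. */
    static void
    notify_guest(int callfd)
    {
            /* callfd == -1 means the guest did not request call notifications. */
            if (callfd != -1)
                    eventfd_write(callfd, (eventfd_t)1);
    }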

Fixes: bff735011078 ("vdpa/mlx5: prepare virtio queues")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058f..71470d23d9e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -4,6 +4,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <sys/mman.h>
+#include <sys/eventfd.h>
 
 #include <rte_malloc.h>
 #include <rte_errno.h>
@@ -367,6 +368,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		goto error;
 	}
 	virtq->stopped = false;
+	/* Initial notification to ask qemu to handle completed buffers. */
+	if (virtq->eqp.cq.callfd != -1)
+		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
 	DRV_LOG(DEBUG, "vid %u virtq %u was created successfully.", priv->vid,
 		index);
 	return 0;
-- 
2.33.0



Thread overview: 11+ messages
     [not found] <20210923081758.178745-1-xuemingl@nvidia.com>
2021-10-15 13:43 ` [dpdk-stable] [PATCH v1 " Xueming Li
2021-10-15 13:43   ` [dpdk-stable] [PATCH v1 2/2] vdpa/mlx5: retry VAR allocation during vDPA restart Xueming Li
2021-10-15 13:57   ` [dpdk-stable] [PATCH v1 1/2] vdpa/mlx5: workaround FW first completion in start Maxime Coquelin
2021-10-15 14:51     ` Xueming(Steven) Li
2021-10-15 15:05 ` Xueming Li [this message]
2021-10-15 15:05   ` [dpdk-stable] [PATCH v2 2/2] vdpa/mlx5: retry VAR allocation during vDPA restart Xueming Li
2021-10-21  9:40     ` Maxime Coquelin
2021-10-21 12:27     ` Maxime Coquelin
2021-10-21  9:40   ` [dpdk-stable] [PATCH v2 1/2] vdpa/mlx5: workaround FW first completion in start Maxime Coquelin
2021-10-21 12:27   ` Maxime Coquelin
2021-10-21 12:36     ` Xueming(Steven) Li
