DPDK patches and discussions
From: Xueming Li <xuemingl@nvidia.com>
To: Matan Azrad <matan@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Cc: dev@dpdk.org, Asaf Penso <asafp@nvidia.com>
Subject: [dpdk-dev] [v1] vdpa/mlx5: fix event channel setup
Date: Tue, 25 Aug 2020 09:17:28 +0000	[thread overview]
Message-ID: <1598347048-25796-1-git-send-email-xuemingl@nvidia.com> (raw)
In-Reply-To: <1597933959-3219-1-git-send-email-xuemingl@nvidia.com>

During vdpa device setup, if an error occurs, the event channel release
gets stuck polling the event channel.

The event channel fd is only set to non-blocking mode in the CQE event
setup function, so if an error occurs after the event channel is created
but before that function runs, the polling done before releasing the
resources blocks forever.

This patch switches the event channel to non-blocking mode right after
it is created.
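
For illustration only (not part of the patch): a minimal standalone sketch
of the same fcntl() switch, using a pipe fd as a stand-in for the event
channel fd, showing that a read on a non-blocking fd returns immediately
with EAGAIN instead of hanging:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Switch an already-open fd to non-blocking mode, as the patch now does
 * for the event channel fd right after the channel is created. */
static int set_nonblock(int fd)
{
	int flags = fcntl(fd, F_GETFL);

	if (flags < 0)
		return -1;
	return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

int main(void)
{
	int fds[2];
	char c;

	if (pipe(fds) != 0 || set_nonblock(fds[0]) != 0)
		return 1;
	/* The pipe is empty; with O_NONBLOCK the read returns at once with
	 * EAGAIN instead of blocking, which is what keeps the error cleanup
	 * path from getting stuck. */
	if (read(fds[0], &c, 1) < 0 && errno == EAGAIN)
		printf("read would block; returned immediately\n");
	close(fds[0]);
	close(fds[1]);
	return 0;
}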

Fixes: 8395927cdf ("vdpa/mlx5: prepare HW queues")
Cc: matan@nvidia.com

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 5a2d4fb1ec..bda547ffe0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -51,6 +51,8 @@ mlx5_vdpa_event_qp_global_release(struct mlx5_vdpa_priv *priv)
 static int
 mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv)
 {
+	int flags, ret;
+
 	if (priv->eventc)
 		return 0;
 	if (mlx5_glue->devx_query_eqn(priv->ctx, 0, &priv->eqn)) {
@@ -66,6 +68,12 @@ mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv)
 			rte_errno);
 		goto error;
 	}
+	flags = fcntl(priv->eventc->fd, F_GETFL);
+	ret = fcntl(priv->eventc->fd, F_SETFL, flags | O_NONBLOCK);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to change event channel FD.");
+		goto error;
+	}
 	priv->uar = mlx5_glue->devx_alloc_uar(priv->ctx, 0);
 	if (!priv->uar) {
 		rte_errno = errno;
@@ -376,7 +384,6 @@ mlx5_vdpa_interrupt_handler(void *cb_arg)
 int
 mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv)
 {
-	int flags;
 	int ret;
 
 	if (!priv->eventc)
@@ -393,12 +400,6 @@ mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv)
 			return -1;
 		}
 	}
-	flags = fcntl(priv->eventc->fd, F_GETFL);
-	ret = fcntl(priv->eventc->fd, F_SETFL, flags | O_NONBLOCK);
-	if (ret) {
-		DRV_LOG(ERR, "Failed to change event channel FD.");
-		goto error;
-	}
 	priv->intr_handle.fd = priv->eventc->fd;
 	priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
 	if (rte_intr_callback_register(&priv->intr_handle,
-- 
2.17.1


Thread overview: 6+ messages
2020-08-20 14:32 [dpdk-dev] [PATCH] " Xueming Li
2020-08-25  9:17 ` Xueming Li [this message]
2020-08-30  8:17   ` [dpdk-dev] [v1] " Matan Azrad
2020-09-18  9:41   ` Maxime Coquelin
2020-09-18 11:14     ` Xueming(Steven) Li
2020-09-18 12:28   ` Maxime Coquelin
