From: Suanming Mou <suanmingm@nvidia.com>
To: viacheslavo@nvidia.com, matan@nvidia.com
Cc: rasland@nvidia.com, dev@dpdk.org, stable@dpdk.org
Subject: [dpdk-dev] [PATCH 3/4] net/mlx5: fix secondary process attach port Tx queue
Date: Sun, 24 Jan 2021 19:02:05 +0800	[thread overview]
Message-ID: <1611486126-84749-4-git-send-email-suanmingm@nvidia.com> (raw)
In-Reply-To: <1611486126-84749-1-git-send-email-suanmingm@nvidia.com>

Currently, the secondary process UAR register mapping used by the port
Tx queues is done during port initialization.

Unfortunately, in the port hot-plug case, the secondary process may be
requested to initialize the port before the primary process has
completed the device configuration, so the port Tx queue number is not
configured yet. Hence, the secondary process gets a Tx queue number of
zero during probing, causing the UAR registers not to be mapped
correctly.

This commit checks the configured number of Tx queues in the secondary
process when a port start is requested. If a Tx queue number mismatch
is found, the UAR mapping is reinitialized accordingly.
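
For quick orientation, a condensed, illustrative view of the check added
to the MLX5_MP_REQ_START_RXTX handler follows (it restates the first hunk
below; the reply path and the rest of the handler are omitted):

	ppriv = (struct mlx5_proc_priv *)dev->process_private;
	/* uar_table_sz reflects the Tx queue count used for the last UAR mapping. */
	if (ppriv->uar_table_sz != priv->txqs_n) {
		/* Tx queue number changed since probing: rebuild the mapping. */
		mlx5_tx_uar_uninit_secondary(dev);
		mlx5_proc_priv_uninit(dev);
		if (mlx5_proc_priv_init(dev))
			return -rte_errno;
		/* The primary's command fd arrives with the request (second hunk). */
		if (mlx5_tx_uar_init_secondary(dev, mp_msg->fds[0])) {
			mlx5_proc_priv_uninit(dev);
			return -rte_errno;
		}
	}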

Fixes: 2aac5b5d119f ("net/mlx5: sync stop/start with secondary process")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_mp_os.c | 19 +++++++++++++++++++
 drivers/net/mlx5/mlx5.c             |  2 +-
 drivers/net/mlx5/mlx5.h             |  6 +++++-
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
index 08ade75..95372e2 100644
--- a/drivers/net/mlx5/linux/mlx5_mp_os.c
+++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
@@ -115,6 +115,7 @@
 	const struct mlx5_mp_param *param =
 		(const struct mlx5_mp_param *)mp_msg->param;
 	struct rte_eth_dev *dev;
+	struct mlx5_proc_priv *ppriv;
 	struct mlx5_priv *priv;
 	int ret;
 
@@ -132,6 +133,20 @@
 		rte_mb();
 		dev->rx_pkt_burst = mlx5_select_rx_function(dev);
 		dev->tx_pkt_burst = mlx5_select_tx_function(dev);
+		ppriv = (struct mlx5_proc_priv *)dev->process_private;
+		/* If Tx queue number changes, re-initialize UAR. */
+		if (ppriv->uar_table_sz != priv->txqs_n) {
+			mlx5_tx_uar_uninit_secondary(dev);
+			mlx5_proc_priv_uninit(dev);
+			ret = mlx5_proc_priv_init(dev);
+			if (ret)
+				return -rte_errno;
+			ret = mlx5_tx_uar_init_secondary(dev, mp_msg->fds[0]);
+			if (ret) {
+				mlx5_proc_priv_uninit(dev);
+				return -rte_errno;
+			}
+		}
 		mp_init_msg(&priv->mp_id, &mp_res, param->type);
 		res->result = 0;
 		ret = rte_mp_reply(&mp_res, peer);
@@ -183,6 +198,10 @@
 		return;
 	}
 	mp_init_msg(&priv->mp_id, &mp_req, type);
+	if (type == MLX5_MP_REQ_START_RXTX) {
+		mp_req.num_fds = 1;
+		mp_req.fds[0] = ((struct ibv_context *)priv->sh->ctx)->cmd_fd;
+	}
 	ret = rte_mp_request_sync(&mp_req, &mp_rep, &ts);
 	if (ret) {
 		if (rte_errno != ENOTSUP)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index efedbb9..b82e767 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1268,7 +1268,7 @@ struct mlx5_dev_ctx_shared *
  * @param dev
  *   Pointer to Ethernet device structure.
  */
-static void
+void
 mlx5_proc_priv_uninit(struct rte_eth_dev *dev)
 {
 	if (!dev->process_private)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 101e9c2..899241e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -743,7 +743,10 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_dev_shared_port port[]; /* per device port data array. */
 };
 
-/* Per-process private structure. */
+/*
+ * Per-process private structure.
+ * Caution, the secondary process may rebuild the struct during port start.
+ */
 struct mlx5_proc_priv {
 	size_t uar_table_sz;
 	/* Size of UAR register table. */
@@ -998,6 +1001,7 @@ struct rte_hairpin_peer_info {
 
 int mlx5_getenv_int(const char *);
 int mlx5_proc_priv_init(struct rte_eth_dev *dev);
+void mlx5_proc_priv_uninit(struct rte_eth_dev *dev);
 int mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev,
 			      struct rte_eth_udp_tunnel *udp_tunnel);
 uint16_t mlx5_eth_find_next(uint16_t port_id, struct rte_pci_device *pci_dev);
-- 
1.8.3.1



Thread overview: 8+ messages
2021-01-24 11:02 [dpdk-dev] [PATCH 0/4] net/mlx: fix secondary process bugs Suanming Mou
2021-01-24 11:02 ` [dpdk-dev] [PATCH 1/4] net/mlx5: fix invalid multi-process ID Suanming Mou
2021-01-25  8:31   ` Raslan Darawsheh
2021-01-24 11:02 ` [dpdk-dev] [PATCH 2/4] net/mlx5: fix secondary process port detach crash Suanming Mou
2021-01-24 11:02 ` Suanming Mou [this message]
2021-01-25  8:30   ` [dpdk-dev] [PATCH 3/4] net/mlx5: fix secondary process attach port Tx queue Raslan Darawsheh
2021-01-24 11:02 ` [dpdk-dev] [PATCH 4/4] net/mlx4: " Suanming Mou
2021-01-25 10:09 ` [dpdk-dev] [PATCH 0/4] net/mlx: fix secondary process bugs Raslan Darawsheh
