From: Kevin Traynor <ktraynor@redhat.com>
To: Raja Zidane
Cc: Matan Azrad, dpdk stable <stable@dpdk.org>
Subject: patch 'net/mlx5: fix mark enabling for Rx' has been queued to stable release 21.11.1
Date: Mon, 21 Feb 2022 15:35:03 +0000
Message-Id: <20220221153625.152324-114-ktraynor@redhat.com>
In-Reply-To: <20220221153625.152324-1-ktraynor@redhat.com>
References: <20220221153625.152324-1-ktraynor@redhat.com>

Hi,

FYI, your patch has been queued to stable release 21.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/26/22. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (ie: not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/99f5cd0dc3a0453b2572e2b56364618183e26ef5

Thanks.
Kevin

---
>From 99f5cd0dc3a0453b2572e2b56364618183e26ef5 Mon Sep 17 00:00:00 2001
From: Raja Zidane
Date: Sun, 16 Jan 2022 15:23:47 +0000
Subject: [PATCH] net/mlx5: fix mark enabling for Rx

[ upstream commit 082becbf1f35bda03a9ad80fcd7fe4afe3aea7be ]

To optimize the datapath, the mlx5 PMD checked for the mark action on
flow creation, flagged the possible destination Rx queues (through
queue/RSS actions), and enabled the mark action logic only for the
flagged Rx queues.

The mark action did not work if no queue/RSS action was in the same
flow, even when the user used multi-group logic to manage the flows.
So, if the mark action was performed in group X and the packet was
moved to group Y > X, where it was forwarded to the Rx queues, the SW
did not get the mark ID in the mbuf.

Flag the Rx datapath to report the mark action for any queue when the
driver detects the first mark action after the dev_start operation.

Fixes: 8e61555657b2 ("net/mlx5: fix shared RSS and mark actions combination")

Signed-off-by: Raja Zidane
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5.h            |  1 +
 drivers/net/mlx5/mlx5_flow.c       | 53 ++++++++++++++++--------------
 drivers/net/mlx5/mlx5_flow.h       |  2 +-
 drivers/net/mlx5/mlx5_flow_dv.c    | 14 +++++---
 drivers/net/mlx5/mlx5_flow_verbs.c |  4 +--
 drivers/net/mlx5/mlx5_rx.h         |  1 -
 6 files changed, 41 insertions(+), 34 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9413e3397c..737ad6895c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1414,4 +1414,5 @@ struct mlx5_priv {
 	unsigned int mtr_reg_share:1; /* Whether support meter REG_C share. */
 	unsigned int lb_used:1; /* Loopback queue is referred to. */
+	uint32_t mark_enabled:1; /* If mark action is enabled on rxqs. */
 	uint16_t domain_id; /* Switch domain identifier. */
 	uint16_t vport_id; /* Associated VF vport index (if any). */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2cadf615ec..d7cb1eb89b 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1235,5 +1235,4 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	const int mark = dev_handle->mark;
 	const int tunnel = !!(dev_handle->layers & MLX5_FLOW_LAYER_TUNNEL);
 	struct mlx5_ind_table_obj *ind_tbl = NULL;
@@ -1270,13 +1269,4 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
 	 * from other port - not from local flows only.
 	 */
-	if (priv->config.dv_flow_en &&
-	    priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
-	    mlx5_flow_ext_mreg_supported(dev)) {
-		rxq_ctrl->rxq.mark = 1;
-		rxq_ctrl->flow_mark_n = 1;
-	} else if (mark) {
-		rxq_ctrl->rxq.mark = 1;
-		rxq_ctrl->flow_mark_n++;
-	}
 	if (tunnel) {
 		unsigned int j;
@@ -1296,4 +1286,18 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
 }
 
+static void
+flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_ctrl *rxq_ctrl;
+
+	if (priv->mark_enabled)
+		return;
+	LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
+		rxq_ctrl->rxq.mark = 1;
+	}
+	priv->mark_enabled = 1;
+}
+
 /**
  * Set the Rx queue flags (Mark/Flag and Tunnel Ptypes) for a flow
@@ -1310,5 +1314,9 @@ flow_rxq_flags_set(struct rte_eth_dev *dev, struct rte_flow *flow)
 	uint32_t handle_idx;
 	struct mlx5_flow_handle *dev_handle;
+	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 
+	MLX5_ASSERT(wks);
+	if (wks->mark)
+		flow_rxq_mark_flag_set(dev);
 	SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles,
 		       handle_idx, dev_handle, next)
@@ -1330,5 +1338,4 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	const int mark = dev_handle->mark;
 	const int tunnel = !!(dev_handle->layers & MLX5_FLOW_LAYER_TUNNEL);
 	struct mlx5_ind_table_obj *ind_tbl = NULL;
@@ -1361,13 +1368,4 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev,
 		if (rxq_ctrl == NULL)
 			continue;
-		if (priv->config.dv_flow_en &&
-		    priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
-		    mlx5_flow_ext_mreg_supported(dev)) {
-			rxq_ctrl->rxq.mark = 1;
-			rxq_ctrl->flow_mark_n = 1;
-		} else if (mark) {
-			rxq_ctrl->flow_mark_n--;
-			rxq_ctrl->rxq.mark = !!rxq_ctrl->flow_mark_n;
-		}
 		if (tunnel) {
 			unsigned int j;
@@ -1426,5 +1424,4 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
 		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		rxq->ctrl->flow_mark_n = 0;
 		rxq->ctrl->rxq.mark = 0;
 		for (j = 0; j != MLX5_FLOW_TUNNEL; ++j)
@@ -1432,4 +1429,5 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
 			rxq->ctrl->flow_tunnels_n[j] = 0;
 		rxq->ctrl->rxq.tunnel = 0;
 	}
+	priv->mark_enabled = 0;
 }
@@ -4812,4 +4810,5 @@ flow_create_split_inner(struct rte_eth_dev *dev,
 {
 	struct mlx5_flow *dev_flow;
+	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 
 	dev_flow = flow_drv_prepare(dev, flow, attr, items, actions,
@@ -4830,6 +4829,8 @@ flow_create_split_inner(struct rte_eth_dev *dev,
 	if (flow_split_info->prefix_layers)
 		dev_flow->handle->layers = flow_split_info->prefix_layers;
-	if (flow_split_info->prefix_mark)
-		dev_flow->handle->mark = 1;
+	if (flow_split_info->prefix_mark) {
+		MLX5_ASSERT(wks);
+		wks->mark = 1;
+	}
 	if (sub_flow)
 		*sub_flow = dev_flow;
@@ -6144,5 +6145,5 @@ flow_create_split_meter(struct rte_eth_dev *dev,
 			flow_split_info->prefix_layers =
 				flow_get_prefix_layer_flags(dev_flow);
-			flow_split_info->prefix_mark |= dev_flow->handle->mark;
+			flow_split_info->prefix_mark |= wks->mark;
 			flow_split_info->table_id = MLX5_MTR_TABLE_ID_SUFFIX;
 		}
@@ -6210,4 +6211,5 @@ flow_create_split_sample(struct rte_eth_dev *dev,
 	struct mlx5_flow_tbl_data_entry *sfx_tbl_data;
 	struct mlx5_flow_tbl_resource *sfx_tbl;
+	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 #endif
 	size_t act_size;
@@ -6296,5 +6298,6 @@ flow_create_split_sample(struct rte_eth_dev *dev,
 		flow_split_info->prefix_layers =
 				flow_get_prefix_layer_flags(dev_flow);
-		flow_split_info->prefix_mark |= dev_flow->handle->mark;
+		MLX5_ASSERT(wks);
+		flow_split_info->prefix_mark |= wks->mark;
 		/* Suffix group level already be scaled with factor, set
 		 * MLX5_SCALE_FLOW_GROUP_BIT of skip_scale to 1 to avoid scale
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 125d85899c..7fec79afb3 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -698,5 +698,4 @@ struct mlx5_flow_handle {
 	uint32_t split_flow_id:27; /**< Sub flow unique match flow id. */
 	uint32_t is_meter_flow_id:1; /**< Indicate if flow_id is for meter. */
-	uint32_t mark:1; /**< Metadata rxq mark flag. */
 	uint32_t fate_action:3; /**< Fate action type. */
 	uint32_t flex_item; /**< referenced Flex Item bitmask. */
@@ -1109,4 +1108,5 @@ struct mlx5_flow_workspace {
 	uint32_t skip_matcher_reg:1;
 	/* Indicates if need to skip matcher register in translate. */
+	uint32_t mark:1; /* Indicates if flow contains mark action. */
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 0383976883..18992b1e26 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -11647,5 +11647,5 @@ flow_dv_translate_action_sample(struct rte_eth_dev *dev,
 				(sub_actions->conf))->id);
 
-			dev_flow->handle->mark = 1;
+			wks->mark = 1;
 			pre_rix = dev_flow->handle->dvh.rix_tag;
 			/* Save the mark resource before sample */
@@ -12807,5 +12807,5 @@ flow_dv_translate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_FLAG:
 			action_flags |= MLX5_FLOW_ACTION_FLAG;
-			dev_flow->handle->mark = 1;
+			wks->mark = 1;
 			if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
 				struct rte_flow_action_mark mark = {
@@ -12836,5 +12836,5 @@ flow_dv_translate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_MARK:
 			action_flags |= MLX5_FLOW_ACTION_MARK;
-			dev_flow->handle->mark = 1;
+			wks->mark = 1;
 			if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
 				const struct rte_flow_action_mark *mark =
@@ -15404,5 +15404,7 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev,
 	} mhdr_dummy;
 	struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res;
+	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 
+	MLX5_ASSERT(wks);
 	egress = (domain == MLX5_MTR_DOMAIN_EGRESS) ? 1 : 0;
 	transfer = (domain == MLX5_MTR_DOMAIN_TRANSFER) ? 1 : 0;
@@ -15442,5 +15444,5 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev,
 					  "cannot create policy "
 					  "mark action for this color");
-				dev_flow.handle->mark = 1;
+				wks->mark = 1;
 				if (flow_dv_tag_resource_register(dev, tag_be,
 						  &dev_flow, &flow_err))
@@ -16867,5 +16869,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
 	uint32_t domain = MLX5_MTR_DOMAIN_INGRESS;
 	uint16_t sub_policy_num;
+	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 
+	MLX5_ASSERT(wks);
 	rte_spinlock_lock(&mtr_policy->sl);
 	for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
@@ -16941,5 +16945,5 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
 		memset(&dh, 0, sizeof(struct mlx5_flow_handle));
 		if (act_cnt->rix_mark)
-			dh.mark = 1;
+			wks->mark = 1;
 		dh.fate_action = MLX5_FLOW_FATE_QUEUE;
 		dh.rix_hrxq = hrxq_idx[i];
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 192a00d4fd..90ccb9aaff 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1694,10 +1694,10 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			flow_verbs_translate_action_flag(dev_flow, actions);
 			action_flags |= MLX5_FLOW_ACTION_FLAG;
-			dev_flow->handle->mark = 1;
+			wks->mark = 1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_MARK:
 			flow_verbs_translate_action_mark(dev_flow, actions);
 			action_flags |= MLX5_FLOW_ACTION_MARK;
-			dev_flow->handle->mark = 1;
+			wks->mark = 1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_DROP:
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index c178f9a24b..cb5d51340d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -162,5 +162,4 @@ struct mlx5_rxq_ctrl {
 	unsigned int started:1; /* Whether (shared) RXQ has been started. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
-	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2022-02-21 15:22:47.067081590 +0000
+++ 0114-net-mlx5-fix-mark-enabling-for-Rx.patch	2022-02-21 15:22:44.237704455 +0000
@@ -1 +1 @@
-From 082becbf1f35bda03a9ad80fcd7fe4afe3aea7be Mon Sep 17 00:00:00 2001
+From 99f5cd0dc3a0453b2572e2b56364618183e26ef5 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 082becbf1f35bda03a9ad80fcd7fe4afe3aea7be ]
+
@@ -20 +21,0 @@
-Cc: stable@dpdk.org
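
For reference only (not part of the queued patch): a minimal sketch of the
application-side pattern the fix addresses, where a flow in one group carries
the MARK action and jumps to a second group that does the queue/RSS fan-out,
and the application reads the mark ID back from the received mbuf. The helper
names, port/group/mark numbers below are hypothetical and only illustrate the
generic rte_flow API; they are not part of the mlx5 driver code above.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_mbuf.h>

/* Mark packets in group 1, then jump to group 2 where a separate rule
 * with a queue/RSS action delivers them to the Rx queues. The flow that
 * carries the MARK action has no queue/RSS action of its own, which is
 * the multi-group case described in the commit message. */
static struct rte_flow *
create_mark_then_jump_flow(uint16_t port_id, uint32_t mark_id,
			   struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = mark_id };
	struct rte_flow_action_jump jump = { .group = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

/* On the Rx side, the mark ID is reported in the mbuf only when the PMD
 * has mark reporting enabled on the receiving queue. */
static void
print_mark(const struct rte_mbuf *m)
{
	if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID)
		printf("mark id: %u\n", m->hash.fdir.hi);
}

Before the fix, mark reporting was enabled only on Rx queues referenced by a
queue/RSS action in the same flow as the MARK action, so a layout like the
sketch could leave the mbuf without the mark ID; with the fix, the first MARK
action seen after dev_start enables reporting on all Rx queues of the port.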