From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xueming Li
To: Gregory Etelson
Cc: Luca Boccassi, Viacheslav Ovsiienko, dpdk stable
Date: Sat, 12 Jun 2021 07:03:46 +0800
Message-ID: <20210611230433.8208-132-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210611230433.8208-1-xuemingl@nvidia.com>
References: <20210510160258.30982-229-xuemingl@nvidia.com>
 <20210611230433.8208-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-stable] patch 'net/mlx5: fix tunnel offload private items
 location' has been queued to stable release 20.11.2
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org
Sender: "stable"

Hi,

FYI, your patch has been queued to stable release 20.11.2

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 06/14/21. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch.
If there were code changes for rebasing (ie: not only metadata diffs),
please double check that the rebase was correctly done.

Queued patches are on a temporary branch at:
https://github.com/steevenlee/dpdk

This queued commit can be viewed at:
https://github.com/steevenlee/dpdk/commit/96883cec2af4f2921287d451c73f93a6d4e47772

Thanks.

Xueming Li

---
>From 96883cec2af4f2921287d451c73f93a6d4e47772 Mon Sep 17 00:00:00 2001
From: Gregory Etelson
Date: Thu, 6 May 2021 12:57:51 +0300
Subject: [PATCH] net/mlx5: fix tunnel offload private items location
Cc: Luca Boccassi

[ upstream commit 8c5a231bce3adb2088252fea73df70bf9d7fb329 ]

Tunnel offload API requires application to query PMD for specific flow
items and actions. Application uses these PMD specific elements to
build flow rules according to the tunnel offload model.
The model does not restrict private elements location in a flow rule,
but the current MLX5 PMD implementation expects that tunnel offload
rule will begin with PMD specific elements.
The patch removes that placement limitation.
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c    | 48 ++++++++++++-------
 drivers/net/mlx5/mlx5_flow.h    | 46 ++++++++++--------
 drivers/net/mlx5/mlx5_flow_dv.c | 84 ++++++++++++++++-----------------
 3 files changed, 100 insertions(+), 78 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d976ca9a8d..cab0aee567 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -50,6 +50,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
			     const struct rte_flow_attr *attr,
			     const struct rte_flow_action *app_actions,
			     uint32_t flow_idx,
+			     const struct mlx5_flow_tunnel *tunnel,
			     struct tunnel_default_miss_ctx *ctx,
			     struct rte_flow_error *error);
 static struct mlx5_flow_tunnel *
@@ -5183,22 +5184,14 @@ flow_create_split_outer(struct rte_eth_dev *dev,
	return ret;
 }

-static struct mlx5_flow_tunnel *
-flow_tunnel_from_rule(struct rte_eth_dev *dev,
-		      const struct rte_flow_attr *attr,
-		      const struct rte_flow_item items[],
-		      const struct rte_flow_action actions[])
+static inline struct mlx5_flow_tunnel *
+flow_tunnel_from_rule(const struct mlx5_flow *flow)
 {
	struct mlx5_flow_tunnel *tunnel;

 #pragma GCC diagnostic push
 #pragma GCC diagnostic ignored "-Wcast-qual"
-	if (is_flow_tunnel_match_rule(dev, attr, items, actions))
-		tunnel = (struct mlx5_flow_tunnel *)items[0].spec;
-	else if (is_flow_tunnel_steer_rule(dev, attr, items, actions))
-		tunnel = (struct mlx5_flow_tunnel *)actions[0].conf;
-	else
-		tunnel = NULL;
+	tunnel = (typeof(tunnel))flow->tunnel;
 #pragma GCC diagnostic pop

	return tunnel;
@@ -5392,12 +5385,11 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
					   error);
		if (ret < 0)
			goto error;
-		if (is_flow_tunnel_steer_rule(dev, attr,
-					      buf->entry[i].pattern,
-					      p_actions_rx)) {
+		if (is_flow_tunnel_steer_rule(wks->flows[0].tof_type)) {
			ret = flow_tunnel_add_default_miss(dev, flow, attr,
							   p_actions_rx,
							   idx,
+							   wks->flows[0].tunnel,
							   &default_miss_ctx,
							   error);
			if (ret < 0) {
@@ -5461,7 +5453,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
	}
	flow_rxq_flags_set(dev, flow);
	rte_free(translated_actions);
-	tunnel = flow_tunnel_from_rule(dev, attr, items, actions);
+	tunnel = flow_tunnel_from_rule(wks->flows);
	if (tunnel) {
		flow->tunnel = 1;
		flow->tunnel_id = tunnel->tunnel_id;
@@ -7210,6 +7202,28 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
	return ret;
 }

+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *item,
+	     const struct rte_flow_action *action,
+	     enum mlx5_tof_rule_type *rule_type)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		if (item->type == (typeof(item->type))
+				  MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+			*rule_type = MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
+			return flow_items_to_tunnel(item);
+		}
+	}
+	for (; action->conf != RTE_FLOW_ACTION_TYPE_END; action++) {
+		if (action->type == (typeof(action->type))
+				    MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET) {
+			*rule_type = MLX5_TUNNEL_OFFLOAD_SET_RULE;
+			return flow_actions_to_tunnel(action);
+		}
+	}
+	return NULL;
+}
+
 /**
  * tunnel offload functionalilty is defined for DV environment only
  */
@@ -7240,13 +7254,13 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
			     const struct rte_flow_attr *attr,
			     const struct rte_flow_action *app_actions,
			     uint32_t flow_idx,
+			     const struct mlx5_flow_tunnel *tunnel,
			     struct tunnel_default_miss_ctx *ctx,
			     struct rte_flow_error *error)
 {
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_flow *dev_flow;
	struct rte_flow_attr miss_attr = *attr;
-	const struct mlx5_flow_tunnel *tunnel = app_actions[0].conf;
	const struct rte_flow_item miss_items[2] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_ETH,
@@ -7332,6 +7346,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
	dev_flow->flow = flow;
	dev_flow->external = true;
	dev_flow->tunnel = tunnel;
+	dev_flow->tof_type = MLX5_TUNNEL_OFFLOAD_MISS_RULE;
	/* Subflow object was created, we must include one in the list. */
	SILIST_INSERT(&flow->dev_handles, dev_flow->handle_idx,
		      dev_flow->handle, next);
@@ -7926,6 +7941,7 @@ flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
			     __rte_unused const struct rte_flow_attr *attr,
			     __rte_unused const struct rte_flow_action *actions,
			     __rte_unused uint32_t flow_idx,
+			     __rte_unused const struct mlx5_flow_tunnel *tunnel,
			     __rte_unused struct tunnel_default_miss_ctx *ctx,
			     __rte_unused struct rte_flow_error *error)
 {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 293c60f5b4..9b72cde5ff 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -753,6 +753,16 @@ struct mlx5_flow_verbs_workspace {
 /** Maximal number of device sub-flows supported. */
 #define MLX5_NUM_MAX_DEV_FLOWS 32

+/**
+ * tunnel offload rules type
+ */
+enum mlx5_tof_rule_type {
+	MLX5_TUNNEL_OFFLOAD_NONE = 0,
+	MLX5_TUNNEL_OFFLOAD_SET_RULE,
+	MLX5_TUNNEL_OFFLOAD_MATCH_RULE,
+	MLX5_TUNNEL_OFFLOAD_MISS_RULE,
+};
+
 /** Device flow structure. */
 __extension__
 struct mlx5_flow {
@@ -774,6 +784,7 @@ struct mlx5_flow {
	struct mlx5_flow_handle *handle;
	uint32_t handle_idx; /* Index of the mlx5 flow handle memory. */
	const struct mlx5_flow_tunnel *tunnel;
+	enum mlx5_tof_rule_type tof_type;
 };

 /* Flow meter state. */
@@ -983,10 +994,10 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
 }

 static inline bool
-is_tunnel_offload_active(struct rte_eth_dev *dev)
+is_tunnel_offload_active(const struct rte_eth_dev *dev)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct mlx5_priv *priv = dev->data->dev_private;

	return !!priv->config.dv_miss_info;
 #else
	RTE_SET_USED(dev);
@@ -995,23 +1006,15 @@ is_tunnel_offload_active(struct rte_eth_dev *dev)
 }

 static inline bool
-is_flow_tunnel_match_rule(__rte_unused struct rte_eth_dev *dev,
-			  __rte_unused const struct rte_flow_attr *attr,
-			  __rte_unused const struct rte_flow_item items[],
-			  __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_match_rule(enum mlx5_tof_rule_type tof_rule_type)
 {
-	return (items[0].type == (typeof(items[0].type))
-				 MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL);
+	return tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
 }

 static inline bool
-is_flow_tunnel_steer_rule(__rte_unused struct rte_eth_dev *dev,
-			  __rte_unused const struct rte_flow_attr *attr,
-			  __rte_unused const struct rte_flow_item items[],
-			  __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_steer_rule(enum mlx5_tof_rule_type tof_rule_type)
 {
-	return (actions[0].type == (typeof(actions[0].type))
-				   MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET);
+	return tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE;
 }

 static inline const struct mlx5_flow_tunnel *
@@ -1252,11 +1255,10 @@ struct flow_grp_info {

 static inline bool
 tunnel_use_standard_attr_group_translate
-		    (struct rte_eth_dev *dev,
-		     const struct mlx5_flow_tunnel *tunnel,
+		    (const struct rte_eth_dev *dev,
		     const struct rte_flow_attr *attr,
-		     const struct rte_flow_item items[],
-		     const struct rte_flow_action actions[])
+		     const struct mlx5_flow_tunnel *tunnel,
+		     enum mlx5_tof_rule_type tof_rule_type)
 {
	bool verdict;

@@ -1272,7 +1274,7 @@ tunnel_use_standard_attr_group_translate
		 * method
		 */
		verdict = !attr->group &&
-			  is_flow_tunnel_steer_rule(dev, attr, items, actions);
+			  is_flow_tunnel_steer_rule(tof_rule_type);
	} else {
		/*
		 * non-tunnel group translation uses standard method for
@@ -1505,4 +1507,10 @@ void flow_dv_dest_array_remove_cb(struct mlx5_cache_list *list,
				  struct mlx5_cache_entry *entry);
 struct mlx5_aso_age_action *flow_aso_age_get_by_idx(struct rte_eth_dev *dev,
						    uint32_t age_idx);
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *items,
+	     const struct rte_flow_action *actions,
+	     enum mlx5_tof_rule_type *rule_type);
+
+
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a1d1579991..5a5a33172a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5323,32 +5323,33 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
	int16_t rw_act_num = 0;
	uint64_t is_root;
	const struct mlx5_flow_tunnel *tunnel;
+	enum mlx5_tof_rule_type tof_rule_type;
	struct flow_grp_info grp_info = {
		.external = !!external,
		.transfer = !!attr->transfer,
		.fdb_def_rule = !!priv->fdb_def_rule,
+		.std_tbl_fix = true,
	};
	const struct rte_eth_hairpin_conf *conf;

	if (items == NULL)
		return -1;
-	if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
-		tunnel = flow_items_to_tunnel(items);
-		action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
-				MLX5_FLOW_ACTION_DECAP;
-	} else if (is_flow_tunnel_steer_rule(dev, attr, items, actions)) {
-		tunnel = flow_actions_to_tunnel(actions);
-		action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
-	} else {
-		tunnel = NULL;
+	tunnel = is_tunnel_offload_active(dev) ?
+		 mlx5_get_tof(items, actions, &tof_rule_type) : NULL;
+	if (tunnel) {
+		if (priv->representor)
+			return rte_flow_error_set
+				(error, ENOTSUP,
+				 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				 NULL, "decap not supported for VF representor");
+		if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE)
+			action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
+		else if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE)
+			action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
+					MLX5_FLOW_ACTION_DECAP;
+		grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+					(dev, attr, tunnel, tof_rule_type);
	}
-	if (tunnel && priv->representor)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					  "decap not supported "
-					  "for VF representor");
-	grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
-				(dev, tunnel, attr, items, actions);
	ret = flow_dv_validate_attributes(dev, tunnel, attr, &grp_info, error);
	if (ret < 0)
		return ret;
@@ -5362,15 +5363,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
						  RTE_FLOW_ERROR_TYPE_ITEM,
						  NULL, "item not supported");
		switch (type) {
-		case MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL:
-			if (items[0].type != (typeof(items[0].type))
-						MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL)
-				return rte_flow_error_set
-						(error, EINVAL,
-						RTE_FLOW_ERROR_TYPE_ITEM,
-						NULL, "MLX5 private items "
-						"must be the first");
-			break;
		case RTE_FLOW_ITEM_TYPE_VOID:
			break;
		case RTE_FLOW_ITEM_TYPE_PORT_ID:
@@ -5631,6 +5623,11 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
				return ret;
			last_item = MLX5_FLOW_LAYER_ECPRI;
			break;
+		case MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL:
+			/* tunnel offload item was processed before
+			 * list it here as a supported type
+			 */
+			break;
		default:
			return rte_flow_error_set(error, ENOTSUP,
						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -6099,15 +6096,9 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
			++actions_n;
			break;
		case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET:
-			if (actions[0].type != (typeof(actions[0].type))
-				MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET)
-				return rte_flow_error_set
-						(error, EINVAL,
-						RTE_FLOW_ERROR_TYPE_ACTION,
-						NULL, "MLX5 private action "
-						"must be the first");
-
-			action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
+			/* tunnel offload action was processed before
+			 * list it here as a supported type
+			 */
			break;
		default:
			return rte_flow_error_set(error, ENOTSUP,
@@ -9730,12 +9721,13 @@ flow_dv_translate(struct rte_eth_dev *dev,
	int tmp_actions_n = 0;
	uint32_t table;
	int ret = 0;
-	const struct mlx5_flow_tunnel *tunnel;
+	const struct mlx5_flow_tunnel *tunnel = NULL;
	struct flow_grp_info grp_info = {
		.external = !!dev_flow->external,
		.transfer = !!attr->transfer,
		.fdb_def_rule = !!priv->fdb_def_rule,
		.skip_scale = !!dev_flow->skip_scale,
+		.std_tbl_fix = true,
	};

	if (!wks)
@@ -9750,15 +9742,21 @@ flow_dv_translate(struct rte_eth_dev *dev,
			MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
	/* update normal path action resource into last index of array */
	sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1];
-	tunnel = is_flow_tunnel_match_rule(dev, attr, items, actions) ?
-		 flow_items_to_tunnel(items) :
-		 is_flow_tunnel_steer_rule(dev, attr, items, actions) ?
-		 flow_actions_to_tunnel(actions) :
-		 dev_flow->tunnel ? dev_flow->tunnel : NULL;
+	if (is_tunnel_offload_active(dev)) {
+		if (dev_flow->tunnel) {
+			RTE_VERIFY(dev_flow->tof_type ==
+				   MLX5_TUNNEL_OFFLOAD_MISS_RULE);
+			tunnel = dev_flow->tunnel;
+		} else {
+			tunnel = mlx5_get_tof(items, actions,
+					      &dev_flow->tof_type);
+			dev_flow->tunnel = tunnel;
+		}
+		grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+					(dev, attr, tunnel, dev_flow->tof_type);
+	}
	mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
					   MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
-	grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
-				(dev, tunnel, attr, items, actions);
	ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table,
				       &grp_info, error);
	if (ret)
@@ -9770,7 +9768,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
		priority = dev_conf->flow_prio - 1;
	/* number of actions must be set to 0 in case of dirty stack. */
	mhdr_res->actions_num = 0;
-	if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
+	if (is_flow_tunnel_match_rule(dev_flow->tof_type)) {
		/*
		 * do not add decap action if match rule drops packet
		 * HW rejects rules with decap & drop
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2021-06-12 06:53:59.955671100 +0800
+++ 0132-net-mlx5-fix-tunnel-offload-private-items-location.patch	2021-06-12 06:53:56.570000000 +0800
@@ -1 +1 @@
-From 8c5a231bce3adb2088252fea73df70bf9d7fb329 Mon Sep 17 00:00:00 2001
+From 96883cec2af4f2921287d451c73f93a6d4e47772 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Luca Boccassi
+
+[ upstream commit 8c5a231bce3adb2088252fea73df70bf9d7fb329 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -20,4 +22,4 @@
- drivers/net/mlx5/mlx5_flow.c    | 48 ++++++++++++------
- drivers/net/mlx5/mlx5_flow.h    | 46 ++++++++++-------
- drivers/net/mlx5/mlx5_flow_dv.c | 88 ++++++++++++++++-----------------
- 3 files changed, 102 insertions(+), 80 deletions(-)
+ drivers/net/mlx5/mlx5_flow.c    | 48 ++++++++++++-------
+ drivers/net/mlx5/mlx5_flow.h    | 46 ++++++++++--------
+ drivers/net/mlx5/mlx5_flow_dv.c | 84 ++++++++++++++++-----------------
+ 3 files changed, 100 insertions(+), 78 deletions(-)
@@ -26 +28 @@
-index 32634c9af7..8c375f1aac 100644
+index d976ca9a8d..cab0aee567 100644
@@ -37 +39 @@
-@@ -5968,22 +5969,14 @@ flow_create_split_outer(struct rte_eth_dev *dev,
+@@ -5183,22 +5184,14 @@ flow_create_split_outer(struct rte_eth_dev *dev,
@@ -63 +65 @@
-@@ -6178,12 +6171,11 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
+@@ -5392,12 +5385,11 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
@@ -78 +80 @@
-@@ -6247,7 +6239,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
+@@ -5461,7 +5453,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
@@ -87 +89 @@
-@@ -8159,6 +8151,28 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
+@@ -7210,6 +7202,28 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
@@ -116 +118 @@
-@@ -8189,13 +8203,13 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
+@@ -7240,13 +7254,13 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
@@ -131 +133 @@
-@@ -8281,6 +8295,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
+@@ -7332,6 +7346,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
@@ -139 +141 @@
-@@ -8894,6 +8909,7 @@ flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
+@@ -7926,6 +7941,7 @@ flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
@@ -148 +150 @@
-index 5365699426..04c8806bf6 100644
+index 293c60f5b4..9b72cde5ff 100644
@@ -151 +153 @@
-@@ -819,6 +819,16 @@ struct mlx5_flow_verbs_workspace {
+@@ -753,6 +753,16 @@ struct mlx5_flow_verbs_workspace {
@@ -168 +170 @@
-@@ -854,6 +864,7 @@ struct mlx5_flow {
+@@ -774,6 +784,7 @@ struct mlx5_flow {
@@ -176 +178 @@
-@@ -949,10 +960,10 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
+@@ -983,10 +994,10 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
@@ -189 +191 @@
-@@ -961,23 +972,15 @@ is_tunnel_offload_active(struct rte_eth_dev *dev)
+@@ -995,23 +1006,15 @@ is_tunnel_offload_active(struct rte_eth_dev *dev)
@@ -217 +219 @@
-@@ -1273,11 +1276,10 @@ struct flow_grp_info {
+@@ -1252,11 +1255,10 @@ struct flow_grp_info {
@@ -232 +234 @@
-@@ -1293,7 +1295,7 @@ tunnel_use_standard_attr_group_translate
+@@ -1272,7 +1274,7 @@ tunnel_use_standard_attr_group_translate
@@ -241,4 +243,4 @@
-@@ -1681,4 +1683,10 @@ int mlx5_flow_create_def_policy(struct rte_eth_dev *dev);
- void mlx5_flow_destroy_def_policy(struct rte_eth_dev *dev);
- void flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
-			    struct mlx5_flow_handle *dev_handle);
+@@ -1505,4 +1507,10 @@ void flow_dv_dest_array_remove_cb(struct mlx5_cache_list *list,
+				  struct mlx5_cache_entry *entry);
+ struct mlx5_aso_age_action *flow_aso_age_get_by_idx(struct rte_eth_dev *dev,
+						    uint32_t age_idx);
@@ -253 +255 @@
-index 70e8d0b113..10ca342edc 100644
+index a1d1579991..5a5a33172a 100644
@@ -256,2 +258,2 @@
-@@ -6627,10 +6627,12 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
-	uint32_t rw_act_num = 0;
+@@ -5323,32 +5323,33 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+	int16_t rw_act_num = 0;
@@ -268,2 +269,0 @@
-	const struct rte_flow_item *rule_items = items;
-@@ -6638,23 +6640,22 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -308 +308 @@
-@@ -6668,15 +6669,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -5362,15 +5363,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -324,2 +324 @@
-@@ -6975,6 +6967,11 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
-			if (ret < 0)
+@@ -5631,6 +5623,11 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -326,0 +326 @@
+			last_item = MLX5_FLOW_LAYER_ECPRI;
@@ -336,2 +336 @@
-@@ -7516,17 +7513,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
-			action_flags |= MLX5_FLOW_ACTION_SAMPLE;
+@@ -6099,15 +6096,9 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -340 +339 @@
--		case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET:
+		case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET:
@@ -350,9 +348,0 @@
--			break;
-		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
-			ret = flow_dv_validate_action_modify_field(dev,
-								   action_flags,
-@@ -7551,6 +7537,11 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
-			return ret;
-			action_flags |= MLX5_FLOW_ACTION_CT;
-			break;
-+		case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET:
@@ -362 +352 @@
-+			break;
+			break;
@@ -365,2 +355 @@
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-@@ -12035,13 +12026,14 @@ flow_dv_translate(struct rte_eth_dev *dev,
+@@ -9730,12 +9721,13 @@ flow_dv_translate(struct rte_eth_dev *dev,
@@ -376,2 +365 @@
-		.skip_scale = dev_flow->skip_scale &
-			(1 << MLX5_SCALE_FLOW_GROUP_BIT),
+		.skip_scale = !!dev_flow->skip_scale,
@@ -380 +367,0 @@
-	const struct rte_flow_item *head_item = items;
@@ -382 +369,2 @@
-@@ -12057,15 +12049,21 @@ flow_dv_translate(struct rte_eth_dev *dev,
+	if (!wks)
+@@ -9750,15 +9742,21 @@ flow_dv_translate(struct rte_eth_dev *dev,
@@ -411,2 +399,2 @@
-@@ -12075,7 +12073,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
-	mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
+@@ -9770,7 +9768,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
+	priority = dev_conf->flow_prio - 1;