From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sean Zhang
To: Matan Azrad, "Viacheslav Ovsiienko"
Subject: [v3] net/mlx5: add port representor item support
Date: Mon, 24 Oct 2022 01:48:14 +0000
Message-ID: <20221024014815.10598-1-xiazhang@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221020012028.250527-1-xiazhang@nvidia.com>
References: <20221020012028.250527-1-xiazhang@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

Add support for the port_representor item, which matches traffic
originating from the representor port specified in the pattern. This
item is supported in the FDB steering domain only (i.e. in flows with
the transfer attribute).

For example, the flow below redirects traffic coming from ethdev 1 to
ethdev 2:

testpmd> ... pattern eth / port_representor port_id is 1 / end actions
         represented_port ethdev_port_id 2 / ...

To handle this item, Tx queue matching is added in the driver, and the
flow is expanded into one flow per Tx queue. If the port_representor
spec is NULL, the flow is not expanded and matches traffic from any
representor port.

Signed-off-by: Sean Zhang
Acked-by: Viacheslav Ovsiienko
---
This patch depends on the following series:

[1] http://patches.dpdk.org/project/dpdk/cover/20220930125315.5079-1-suanmingm@nvidia.com

---
v3 - rebase to the latest version
v2 - commit message updated and missing feature added in doc
---
 drivers/net/mlx5/mlx5_flow.c    | 116 ++++++++++++++++++++++++++++++--
 drivers/net/mlx5/mlx5_flow_dv.c |  11 ++-
 2 files changed, 122 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index e19e9b20ed..64e48ce6d4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -128,6 +128,15 @@ struct mlx5_flow_expand_node {
 	 */
 };
 
+/** Keep same format with mlx5_flow_expand_rss to share the buffer for expansion. */
+struct mlx5_flow_expand_sqn {
+	uint32_t entries; /** Number of entries */
+	struct {
+		struct rte_flow_item *pattern; /**< Expanded pattern array. */
+		uint32_t priority; /**< Priority offset for each expansion. */
+	} entry[];
+};
+
 /* Optional expand field. The expansion alg will not go deeper. */
 #define MLX5_EXPANSION_NODE_OPTIONAL (UINT64_C(1) << 0)
 
@@ -576,6 +585,88 @@ mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 	return lsize;
 }
 
+/**
+ * Expand SQN flows into several possible flows according to the Tx queue
+ * number
+ *
+ * @param[in] buf
+ *   Buffer to store the result expansion.
+ * @param[in] size
+ *   Buffer size in bytes. If 0, @p buf can be NULL.
+ * @param[in] pattern
+ *   User flow pattern.
+ * @param[in] sq_specs
+ *   Buffer to store sq spec.
+ *
+ * @return
+ *   0 for success and negative value for failure
+ *
+ */
+static int
+mlx5_flow_expand_sqn(struct mlx5_flow_expand_sqn *buf, size_t size,
+		     const struct rte_flow_item *pattern,
+		     struct mlx5_rte_flow_item_sq *sq_specs)
+{
+	const struct rte_flow_item *item;
+	bool port_representor = false;
+	size_t user_pattern_size = 0;
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+	void *addr = NULL;
+	uint16_t port_id;
+	size_t lsize;
+	int elt = 2;
+	uint16_t i;
+
+	buf->entries = 0;
+	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		if (item->type == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) {
+			const struct rte_flow_item_ethdev *pid_v = item->spec;
+
+			if (!pid_v)
+				return 0;
+			port_id = pid_v->port_id;
+			port_representor = true;
+		}
+		user_pattern_size += sizeof(*item);
+	}
+	if (!port_representor)
+		return 0;
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	buf->entry[0].pattern = (void *)&buf->entry[priv->txqs_n];
+	lsize = offsetof(struct mlx5_flow_expand_sqn, entry) +
+		sizeof(buf->entry[0]) * priv->txqs_n;
+	if (lsize + (user_pattern_size + sizeof(struct rte_flow_item) * elt) * priv->txqs_n > size)
+		return -EINVAL;
+	addr = buf->entry[0].pattern;
+	for (i = 0; i != priv->txqs_n; ++i) {
+		struct rte_flow_item pattern_add[] = {
+			{
+				.type = (enum rte_flow_item_type)
+					MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+				.spec = &sq_specs[i],
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		};
+		struct mlx5_txq_ctrl *txq = mlx5_txq_get(dev, i);
+
+		if (txq == NULL)
+			return -EINVAL;
+		buf->entry[i].pattern = addr;
+		sq_specs[i].queue = mlx5_txq_get_sqn(txq);
+		mlx5_txq_release(dev, i);
+		rte_memcpy(addr, pattern, user_pattern_size);
+		addr = (void *)(((uintptr_t)addr) + user_pattern_size);
+		rte_memcpy(addr, pattern_add, sizeof(struct rte_flow_item) * elt);
+		addr = (void *)(((uintptr_t)addr) + sizeof(struct rte_flow_item) * elt);
+		buf->entries++;
+	}
+	return 0;
+}
+
 enum mlx5_expansion {
 	MLX5_EXPANSION_ROOT,
 	MLX5_EXPANSION_ROOT_OUTER,
@@ -5425,6 +5516,11 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
 			memcpy(sfx_items, items, sizeof(*sfx_items));
 			sfx_items++;
 			break;
+		case RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR:
+			flow_src_port = 0;
+			memcpy(sfx_items, items, sizeof(*sfx_items));
+			sfx_items++;
+			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
 			/* Determine if copy vlan item below. */
 			vlan_item_src = items;
@@ -6080,7 +6176,8 @@ flow_sample_split_prep(struct rte_eth_dev *dev,
 	};
 	/* Prepare the suffix subflow items. */
 	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
-		if (items->type == RTE_FLOW_ITEM_TYPE_PORT_ID) {
+		if (items->type == RTE_FLOW_ITEM_TYPE_PORT_ID ||
+		    items->type == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) {
 			memcpy(sfx_items, items, sizeof(*sfx_items));
 			sfx_items++;
 		}
@@ -6893,7 +6990,7 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	int indir_actions_n = MLX5_MAX_INDIRECT_ACTIONS;
 	union {
 		struct mlx5_flow_expand_rss buf;
-		uint8_t buffer[4096];
+		uint8_t buffer[8192];
 	} expand_buffer;
 	union {
 		struct rte_flow_action actions[MLX5_MAX_SPLIT_ACTIONS];
@@ -6907,6 +7004,7 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		struct rte_flow_item items[MLX5_MAX_SPLIT_ITEMS];
 		uint8_t buffer[2048];
 	} items_tx;
+	struct mlx5_rte_flow_item_sq sq_specs[RTE_MAX_QUEUES_PER_PORT];
 	struct mlx5_flow_expand_rss *buf = &expand_buffer.buf;
 	struct mlx5_flow_rss_desc *rss_desc;
 	const struct rte_flow_action *p_actions_rx;
@@ -6995,8 +7093,18 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 			mlx5_dbg__print_pattern(buf->entry[i].pattern);
 		}
 	} else {
-		buf->entries = 1;
-		buf->entry[0].pattern = (void *)(uintptr_t)items;
+		ret = mlx5_flow_expand_sqn((struct mlx5_flow_expand_sqn *)buf,
+					   sizeof(expand_buffer.buffer),
+					   items, sq_specs);
+		if (ret) {
+			rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
+					   NULL, "not enough memory for rte_flow");
+			goto error;
+		}
+		if (buf->entries == 0) {
+			buf->entries = 1;
+			buf->entry[0].pattern = (void *)(uintptr_t)items;
+		}
 	}
 	rss_desc->shared_rss = flow_get_shared_rss_action(dev, indir_actions,
 							  indir_actions_n);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index dbe55a5103..677b85bd8d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7185,6 +7185,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			port_id_item = items;
 			break;
 		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
+		case RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR:
 			ret = flow_dv_validate_item_represented_port
 					(dev, items, attr, item_flags, error);
 			if (ret < 0)
@@ -13607,6 +13608,7 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 				mlx5_flow_get_thread_workspace())->rss_desc,
 	};
 	struct mlx5_dv_matcher_workspace wks_m = wks;
+	int item_type;
 	int ret = 0;
 	int tunnel;
 
@@ -13616,7 +13618,8 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  NULL, "item not supported");
 		tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL);
-		switch (items->type) {
+		item_type = items->type;
+		switch (item_type) {
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			flow_dv_translate_item_aso_ct(dev, match_mask,
 						      match_value, items);
@@ -13628,6 +13631,12 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
 						 MLX5_FLOW_ITEM_OUTER_FLEX;
 			break;
+		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+			flow_dv_translate_item_sq(match_value, items,
+						  MLX5_SET_MATCHER_SW_V);
+			flow_dv_translate_item_sq(match_mask, items,
+						  MLX5_SET_MATCHER_SW_M);
+			break;
 		default:
 			ret = flow_dv_translate_items(dev, items, &wks_m, match_mask,
 						      MLX5_SET_MATCHER_SW_M, error);
-- 
2.34.1
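
For reference only (not part of the patch): a minimal sketch of how an
application could express the same flow as the testpmd command in the
commit message through the public rte_flow API. The helper name, the
proxy port argument, and the port numbers 1 and 2 are illustrative
assumptions, not values taken from the patch.

/* Hypothetical usage sketch: match traffic from representor port 1 and
 * forward it to the port represented by ethdev port 2, in the transfer
 * (FDB) domain as required by this item. */
#include <rte_flow.h>

static struct rte_flow *
create_port_representor_flow(uint16_t proxy_port_id,
			     struct rte_flow_error *err)
{
	/* Transfer attribute: FDB steering domain only. */
	const struct rte_flow_attr attr = {
		.group = 0,
		.transfer = 1,
	};
	/* Match traffic originating from representor port 1 (illustrative). */
	const struct rte_flow_item_ethdev rep_spec = { .port_id = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{
			.type = RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR,
			.spec = &rep_spec,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Redirect to the port represented by ethdev port 2 (illustrative). */
	const struct rte_flow_action_ethdev dst = { .port_id = 2 };
	const struct rte_flow_action actions[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
			.conf = &dst,
		},
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* The flow is created on the transfer proxy port. */
	return rte_flow_create(proxy_port_id, &attr, pattern, actions, err);
}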