From mboxrd@z Thu Jan 1 00:00:00 1970
From: Itamar Gozlan
Cc: Gregory Etelson
Subject: [v2 1/5] net/mlx5: support indirect list METER_MARK action
Date: Sun, 2 Jul 2023 07:57:54 +0300
Message-ID: <20230702045758.23244-1-igozlan@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20230629072125.20369-5-igozlan@nvidia.com>
References: <20230629072125.20369-5-igozlan@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain

From: Gregory Etelson

Add a LEGACY indirect action list type that wraps a single legacy
indirect action handle, and use it to support the METER_MARK action
in indirect action lists. Implement the synchronous and asynchronous
action_list_handle_query_update driver callbacks; for legacy list
handles they forward to the existing indirect action update and
query paths.

Signed-off-by: Gregory Etelson
---
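Notes (usage sketch, not part of the commit): with this patch a single
METER_MARK action wrapped in an indirect action list handle takes the
new LEGACY list path in the mlx5 HWS code. Below is a minimal
application-side illustration; the helper name meter_mark_list_create()
is hypothetical, the profile/policy pointers are assumed to come from
rte_mtr_meter_profile_get()/rte_mtr_meter_policy_get(), and error
handling is elided:

  #include <rte_flow.h>
  #include <rte_mtr.h>

  static struct rte_flow_action_list_handle *
  meter_mark_list_create(uint16_t port_id,
                         struct rte_flow_meter_profile *profile,
                         struct rte_flow_meter_policy *policy,
                         struct rte_flow_error *err)
  {
          /* Attributes the indirect list handle is valid for. */
          const struct rte_flow_indir_action_conf indir_conf = {
                  .ingress = 1,
          };
          /* The single legacy action wrapped by the list handle. */
          const struct rte_flow_action_meter_mark mtr_mark = {
                  .profile = profile,
                  .policy = policy,
          };
          const struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_METER_MARK,
                    .conf = &mtr_mark },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_action_list_handle_create(port_id, &indir_conf,
                                                    actions, err);
  }

A rule can then reference the handle through an
RTE_FLOW_ACTION_TYPE_INDIRECT_LIST action, where conf[0] may carry a
rte_flow_indirect_update_flow_meter_mark overriding the initial meter
color per rule (the flow_conf[0] read by
flow_hw_translate_indirect_meter() in the diff below):

  const struct rte_flow_indirect_update_flow_meter_mark mtr_conf = {
          .init_color = RTE_COLOR_YELLOW,
  };
  const void *list_conf[] = { &mtr_conf };
  const struct rte_flow_action_indirect_list indlst = {
          .handle = handle, /* result of meter_mark_list_create() */
          .conf = list_conf,
  };

Masking the handle and/or conf in the actions template fixes them at
table translation time, as described in the comments in the diff.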

 drivers/net/mlx5/mlx5_flow.c    |  69 +++++-
 drivers/net/mlx5/mlx5_flow.h    |  67 ++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 419 +++++++++++++++++++++++++++-----
 3 files changed, 476 insertions(+), 79 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 29e9819dd6..fb7b82fa26 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -63,8 +63,11 @@ mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 		switch (e->type) {
 #ifdef HAVE_MLX5_HWS_SUPPORT
 		case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
-			mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)e, true);
+			mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)e);
 			break;
+		case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
+			mlx5_destroy_legacy_indirect(dev, e);
+			break;
 #endif
 		default:
 			DRV_LOG(ERR, "invalid indirect list type");
@@ -1156,7 +1159,24 @@ mlx5_flow_async_action_list_handle_destroy
 			 const struct rte_flow_op_attr *op_attr,
 			 struct rte_flow_action_list_handle *action_handle,
 			 void *user_data, struct rte_flow_error *error);
-
+static int
+mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
+					  const
+					  struct rte_flow_action_list_handle *handle,
+					  const void **update, void **query,
+					  enum rte_flow_query_update_mode mode,
+					  struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
+						uint32_t queue_id,
+						const struct rte_flow_op_attr *attr,
+						const struct
+						rte_flow_action_list_handle *handle,
+						const void **update,
+						void **query,
+						enum rte_flow_query_update_mode mode,
+						void *user_data,
+						struct rte_flow_error *error);
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -1206,6 +1226,10 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 		mlx5_flow_async_action_list_handle_create,
 	.async_action_list_handle_destroy =
 		mlx5_flow_async_action_list_handle_destroy,
+	.action_list_handle_query_update =
+		mlx5_flow_action_list_handle_query_update,
+	.async_action_list_handle_query_update =
+		mlx5_flow_async_action_list_handle_query_update,
 };

 /* Tunnel information. */
@@ -11054,6 +11078,47 @@ mlx5_flow_async_action_list_handle_destroy
 							  error);
 }

+static int
+mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
+					  const
+					  struct rte_flow_action_list_handle *handle,
+					  const void **update, void **query,
+					  enum rte_flow_query_update_mode mode,
+					  struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops,
+			     action_list_handle_query_update, ENOTSUP);
+	return fops->action_list_handle_query_update(dev, handle, update, query,
+						     mode, error);
+}
+
+static int
+mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
+						uint32_t queue_id,
+						const
+						struct rte_flow_op_attr *op_attr,
+						const struct
+						rte_flow_action_list_handle *handle,
+						const void **update,
+						void **query,
+						enum
+						rte_flow_query_update_mode mode,
+						void *user_data,
+						struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops,
+			     async_action_list_handle_query_update, ENOTSUP);
+	return fops->async_action_list_handle_query_update(dev, queue_id, op_attr,
+							   handle, update,
+							   query, mode,
+							   user_data, error);
+}
+
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e4c03a6be2..46bfd4d8a7 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -113,25 +113,41 @@ enum mlx5_indirect_type{
 #define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX

 enum mlx5_indirect_list_type {
-	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 1,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY = 1,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 2,
 };

-/*
+/**
  * Base type for indirect list type.
- * Actual indirect list type MUST override that type and put type spec data
- * after the `chain`.
  */
 struct mlx5_indirect_list {
-	/* type field MUST be the first */
+	/* Indirect list type. */
 	enum mlx5_indirect_list_type type;
+	/* Optional storage list entry */
 	LIST_ENTRY(mlx5_indirect_list) entry;
-	/* put type specific data after chain */
 };

+static __rte_always_inline void
+mlx5_indirect_list_add_entry(void *head, struct mlx5_indirect_list *elem)
+{
+	LIST_HEAD(, mlx5_indirect_list) *h = head;
+
+	LIST_INSERT_HEAD(h, elem, entry);
+}
+
+static __rte_always_inline void
+mlx5_indirect_list_remove_entry(struct mlx5_indirect_list *elem)
+{
+	if (elem->entry.le_prev)
+		LIST_REMOVE(elem, entry);
+
+}
+
 static __rte_always_inline enum mlx5_indirect_list_type
-mlx5_get_indirect_list_type(const struct mlx5_indirect_list *obj)
+mlx5_get_indirect_list_type(const struct rte_flow_action_list_handle *obj)
 {
-	return obj->type;
+	return ((const struct mlx5_indirect_list *)obj)->type;
 }

 /* Matches on selected register. */
@@ -1292,9 +1308,12 @@ struct rte_flow_hw {
 #pragma GCC diagnostic error "-Wpedantic"
 #endif

-struct mlx5dr_action;
-typedef struct mlx5dr_action *
-(*indirect_list_callback_t)(const struct rte_flow_action *);
+struct mlx5_action_construct_data;
+typedef int
+(*indirect_list_callback_t)(struct rte_eth_dev *,
+			    const struct mlx5_action_construct_data *,
+			    const struct rte_flow_action *,
+			    struct mlx5dr_rule_action *);

 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
@@ -1342,6 +1361,7 @@ struct mlx5_action_construct_data {
 		} shared_counter;
 		struct {
 			uint32_t id;
+			uint32_t conf_masked:1;
 		} shared_meter;
 		struct {
 			indirect_list_callback_t cb;
@@ -2162,7 +2182,21 @@ typedef int
 			 const struct rte_flow_op_attr *op_attr,
 			 struct rte_flow_action_list_handle *action_handle,
 			 void *user_data, struct rte_flow_error *error);
-
+typedef int
+(*mlx5_flow_action_list_handle_query_update_t)
+			(struct rte_eth_dev *dev,
+			 const struct rte_flow_action_list_handle *handle,
+			 const void **update, void **query,
+			 enum rte_flow_query_update_mode mode,
+			 struct rte_flow_error *error);
+typedef int
+(*mlx5_flow_async_action_list_handle_query_update_t)
+			(struct rte_eth_dev *dev, uint32_t queue_id,
+			 const struct rte_flow_op_attr *attr,
+			 const struct rte_flow_action_list_handle *handle,
+			 const void **update, void **query,
+			 enum rte_flow_query_update_mode mode,
+			 void *user_data, struct rte_flow_error *error);

 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -2230,6 +2264,10 @@ struct mlx5_flow_driver_ops {
 		async_action_list_handle_create;
 	mlx5_flow_async_action_list_handle_destroy_t
 		async_action_list_handle_destroy;
+	mlx5_flow_action_list_handle_query_update_t
+		action_list_handle_query_update;
+	mlx5_flow_async_action_list_handle_query_update_t
+		async_action_list_handle_query_update;
 };

 /* mlx5_flow.c */
@@ -2999,6 +3037,9 @@ mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
 #ifdef HAVE_MLX5_HWS_SUPPORT
 struct mlx5_mirror;
 void
-mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror, bool release);
+mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror);
+void
+mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
+			     struct mlx5_indirect_list *ptr);
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 87db302bae..7b4661ad4f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -67,16 +67,19 @@
 	(!!((priv)->sh->esw_mode && (priv)->master))

+struct mlx5_indlst_legacy {
+	struct mlx5_indirect_list indirect;
+	struct rte_flow_action_handle *handle;
+	enum rte_flow_action_type legacy_type;
+};
+
 struct mlx5_mirror_clone {
 	enum rte_flow_action_type type;
 	void *action_ctx;
 };

 struct mlx5_mirror {
-	/* type field MUST be the first */
-	enum mlx5_indirect_list_type type;
-	LIST_ENTRY(mlx5_indirect_list) entry;
-
+	struct mlx5_indirect_list indirect;
 	uint32_t clones_num;
 	struct mlx5dr_action *mirror_action;
 	struct mlx5_mirror_clone clone[MLX5_MIRROR_MAX_CLONES_NUM];
@@ -1424,46 +1427,211 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
 	return 0;
 }

-static struct mlx5dr_action *
-flow_hw_mirror_action(const struct rte_flow_action *action)
+static int
+flow_hw_translate_indirect_mirror(__rte_unused struct rte_eth_dev *dev,
+				  __rte_unused const struct mlx5_action_construct_data *act_data,
+				  const struct rte_flow_action *action,
+				  struct mlx5dr_rule_action *dr_rule)
+{
+	const struct rte_flow_action_indirect_list *list_conf = action->conf;
+	const struct mlx5_mirror *mirror = (typeof(mirror))list_conf->handle;
+
+	dr_rule->action = mirror->mirror_action;
+	return 0;
+}
+
+/**
+ * HWS mirror implemented as FW island.
+ * The action does not support indirect list flow configuration.
+ * If template handle was masked, use handle mirror action in flow rules.
+ * Otherwise let flow rule specify mirror handle.
+ */
+static int
+hws_table_tmpl_translate_indirect_mirror(struct rte_eth_dev *dev,
+					 const struct rte_flow_action *action,
+					 const struct rte_flow_action *mask,
+					 struct mlx5_hw_actions *acts,
+					 uint16_t action_src, uint16_t action_dst)
+{
+	int ret = 0;
+	const struct rte_flow_action_indirect_list *mask_conf = mask->conf;
+
+	if (mask_conf && mask_conf->handle) {
+		/**
+		 * If mirror handle was masked, assign fixed DR5 mirror action.
+		 */
+		flow_hw_translate_indirect_mirror(dev, NULL, action,
+						  &acts->rule_acts[action_dst]);
+	} else {
+		struct mlx5_priv *priv = dev->data->dev_private;
+		ret = flow_hw_act_data_indirect_list_append
+			(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+			 action_src, action_dst,
+			 flow_hw_translate_indirect_mirror);
+	}
+	return ret;
+}
+
+static int
+flow_dr_set_meter(struct mlx5_priv *priv,
+		  struct mlx5dr_rule_action *dr_rule,
+		  const struct rte_flow_action_indirect_list *action_conf)
 {
-	struct mlx5_mirror *mirror = (void *)(uintptr_t)action->conf;
+	const struct mlx5_indlst_legacy *legacy_obj =
+		(typeof(legacy_obj))action_conf->handle;
+	struct mlx5_aso_mtr_pool *mtr_pool = priv->hws_mpool;
+	uint32_t act_idx = (uint32_t)(uintptr_t)legacy_obj->handle;
+	uint32_t mtr_id = act_idx & (RTE_BIT32(MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
+	struct mlx5_aso_mtr *aso_mtr = mlx5_ipool_get(mtr_pool->idx_pool, mtr_id);
+
+	if (!aso_mtr)
+		return -EINVAL;
+	dr_rule->action = mtr_pool->action;
+	dr_rule->aso_meter.offset = aso_mtr->offset;
+	return 0;
+}

-	return mirror->mirror_action;
+__rte_always_inline static void
+flow_dr_mtr_flow_color(struct mlx5dr_rule_action *dr_rule, enum rte_color init_color)
+{
+	dr_rule->aso_meter.init_color =
+		(enum mlx5dr_action_aso_meter_color)rte_col_2_mlx5_col(init_color);
 }

+static int
+flow_hw_translate_indirect_meter(struct rte_eth_dev *dev,
+				 const struct mlx5_action_construct_data *act_data,
+				 const struct rte_flow_action *action,
+				 struct mlx5dr_rule_action *dr_rule)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_indirect_list *action_conf = action->conf;
+	const struct rte_flow_indirect_update_flow_meter_mark **flow_conf =
+		(typeof(flow_conf))action_conf->conf;
+
+	/*
+	 * Masked indirect handle set dr5 action during template table
+	 * translation.
+	 */
+	if (!dr_rule->action) {
+		ret = flow_dr_set_meter(priv, dr_rule, action_conf);
+		if (ret)
+			return ret;
+	}
+	if (!act_data->shared_meter.conf_masked) {
+		if (flow_conf && flow_conf[0] && flow_conf[0]->init_color < RTE_COLORS)
+			flow_dr_mtr_flow_color(dr_rule, flow_conf[0]->init_color);
+	}
+	return 0;
+}
+
+static int
+hws_table_tmpl_translate_indirect_meter(struct rte_eth_dev *dev,
+					const struct rte_flow_action *action,
+					const struct rte_flow_action *mask,
+					struct mlx5_hw_actions *acts,
+					uint16_t action_src, uint16_t action_dst)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_indirect_list *action_conf = action->conf;
+	const struct rte_flow_action_indirect_list *mask_conf = mask->conf;
+	bool is_handle_masked = mask_conf && mask_conf->handle;
+	bool is_conf_masked = mask_conf && mask_conf->conf && mask_conf->conf[0];
+	struct mlx5dr_rule_action *dr_rule = &acts->rule_acts[action_dst];
+
+	if (is_handle_masked) {
+		ret = flow_dr_set_meter(priv, dr_rule, action->conf);
+		if (ret)
+			return ret;
+	}
+	if (is_conf_masked) {
+		const struct
+			rte_flow_indirect_update_flow_meter_mark **flow_conf =
+				(typeof(flow_conf))action_conf->conf;
+		flow_dr_mtr_flow_color(dr_rule,
+				       flow_conf[0]->init_color);
+	}
+	if (!is_handle_masked || !is_conf_masked) {
+		struct mlx5_action_construct_data *act_data;
+
+		ret = flow_hw_act_data_indirect_list_append
+			(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+			 action_src, action_dst, flow_hw_translate_indirect_meter);
+		if (ret)
+			return ret;
+		act_data = LIST_FIRST(&acts->act_list);
+		act_data->shared_meter.conf_masked = is_conf_masked;
+	}
+	return 0;
+}
+
+static int
+hws_table_tmpl_translate_indirect_legacy(struct rte_eth_dev *dev,
+					 const struct rte_flow_action *action,
+					 const struct rte_flow_action *mask,
+					 struct mlx5_hw_actions *acts,
+					 uint16_t action_src, uint16_t action_dst)
+{
+	int ret;
+	const struct rte_flow_action_indirect_list *indlst_conf = action->conf;
+	struct mlx5_indlst_legacy *indlst_obj = (typeof(indlst_obj))indlst_conf->handle;
+	uint32_t act_idx = (uint32_t)(uintptr_t)indlst_obj->handle;
+	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+
+	switch (type) {
+	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
+		ret = hws_table_tmpl_translate_indirect_meter(dev, action, mask,
+							      acts, action_src,
+							      action_dst);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+/*
+ * template .. indirect_list handle Ht conf Ct ..
+ * mask     .. indirect_list handle Hm conf Cm ..
+ *
+ * PMD requires Ht != 0 to resolve handle type.
+ * If Ht was masked (Hm != 0) DR5 action will be set according to Ht and will
+ * not change. Otherwise, DR5 action will be resolved during flow rule build.
+ * If Ct was masked (Cm != 0), table template processing updates base
+ * indirect action configuration with Ct parameters.
+ */
 static int
 table_template_translate_indirect_list(struct rte_eth_dev *dev,
 				       const struct rte_flow_action *action,
 				       const struct rte_flow_action *mask,
 				       struct mlx5_hw_actions *acts,
-				       uint16_t action_src,
-				       uint16_t action_dst)
+				       uint16_t action_src, uint16_t action_dst)
 {
-	int ret;
-	bool is_masked = action->conf && mask->conf;
-	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret = 0;
 	enum mlx5_indirect_list_type type;
+	const struct rte_flow_action_indirect_list *list_conf = action->conf;

-	if (!action->conf)
+	if (!list_conf || !list_conf->handle)
 		return -EINVAL;
-	type = mlx5_get_indirect_list_type(action->conf);
+	type = mlx5_get_indirect_list_type(list_conf->handle);
 	switch (type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
+		ret = hws_table_tmpl_translate_indirect_legacy(dev, action, mask,
+							       acts, action_src,
+							       action_dst);
+		break;
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
-		if (is_masked) {
-			acts->rule_acts[action_dst].action = flow_hw_mirror_action(action);
-		} else {
-			ret = flow_hw_act_data_indirect_list_append
-				(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
-				 action_src, action_dst, flow_hw_mirror_action);
-			if (ret)
-				return ret;
-		}
+		ret = hws_table_tmpl_translate_indirect_mirror(dev, action, mask,
+							       acts, action_src,
+							       action_dst);
 		break;
 	default:
 		return -EINVAL;
 	}
-	return 0;
+	return ret;
 }

 /**
@@ -2383,8 +2551,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			    (int)action->type == act_data->type);
 		switch ((int)act_data->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
-			rule_acts[act_data->action_dst].action =
-				act_data->indirect_list.cb(action);
+			act_data->indirect_list.cb(dev, act_data, actions, rule_acts);
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_IPSEC:
 			/* Fall-through. */
@@ -4741,20 +4908,11 @@ action_template_set_type(struct rte_flow_actions_template *at,
 }

 static int
-flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
-					  unsigned int action_src,
+flow_hw_dr_actions_template_handle_shared(int type, uint32_t action_src,
 					  enum mlx5dr_action_type *action_types,
 					  uint16_t *curr_off, uint16_t *cnt_off,
 					  struct rte_flow_actions_template *at)
 {
-	uint32_t type;
-
-	if (!mask) {
-		DRV_LOG(WARNING, "Unable to determine indirect action type "
-			"without a mask specified");
-		return -EINVAL;
-	}
-	type = mask->type;
 	switch (type) {
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		action_template_set_type(at, action_types, action_src, curr_off,
@@ -4799,12 +4957,24 @@ static int
 flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 			      unsigned int action_src,
 			      enum mlx5dr_action_type *action_types,
-			      uint16_t *curr_off)
+			      uint16_t *curr_off, uint16_t *cnt_off)
 {
-	enum mlx5_indirect_list_type list_type;
+	int ret;
+	const struct rte_flow_action_indirect_list *indlst_conf = at->actions[action_src].conf;
+	enum mlx5_indirect_list_type list_type = mlx5_get_indirect_list_type(indlst_conf->handle);
+	const union {
+		struct mlx5_indlst_legacy *legacy;
+		struct rte_flow_action_list_handle *handle;
+	} indlst_obj = { .handle = indlst_conf->handle };

-	list_type = mlx5_get_indirect_list_type(at->actions[action_src].conf);
 	switch (list_type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
+		ret = flow_hw_dr_actions_template_handle_shared
+			(indlst_obj.legacy->legacy_type, action_src,
+			 action_types, curr_off, cnt_off, at);
+		if (ret)
+			return ret;
+		break;
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
 		action_template_set_type(at, action_types, action_src, curr_off,
 					 MLX5DR_ACTION_TYP_DEST_ARRAY);
@@ -4850,7 +5020,7 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 			break;
 		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
 			ret = flow_hw_template_actions_list(at, i, action_types,
-							    &curr_off);
+							    &curr_off, &cnt_off);
 			if (ret)
 				return NULL;
 			break;
@@ -4858,11 +5028,8 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 			/* Fall-through. */
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			ret = flow_hw_dr_actions_template_handle_shared
-								 (&at->masks[i],
-								  i,
-								  action_types,
-								  &curr_off,
-								  &cnt_off, at);
+				(at->masks[i].type, i, action_types,
+				 &curr_off, &cnt_off, at);
 			if (ret)
 				return NULL;
 			break;
@@ -5339,7 +5506,6 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 		 * Need to restore the indirect action index from action conf here.
 		 */
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
-		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
 			at->actions[i].conf = ra[i].conf;
 			at->masks[i].conf = rm[i].conf;
 			break;
@@ -9476,18 +9642,16 @@ mlx5_mirror_destroy_clone(struct rte_eth_dev *dev,
 }

 void
-mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror, bool release)
+mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror)
 {
 	uint32_t i;

-	if (mirror->entry.le_prev)
-		LIST_REMOVE(mirror, entry);
+	mlx5_indirect_list_remove_entry(&mirror->indirect);
 	for(i = 0; i < mirror->clones_num; i++)
 		mlx5_mirror_destroy_clone(dev, &mirror->clone[i]);
 	if (mirror->mirror_action)
 		mlx5dr_action_destroy(mirror->mirror_action);
-	if (release)
-		mlx5_free(mirror);
+	mlx5_free(mirror);
 }

 static inline enum mlx5dr_table_type
@@ -9798,7 +9962,8 @@ mlx5_hw_mirror_handle_create(struct rte_eth_dev *dev,
 				   actions, "Failed to allocate mirror context");
 		return NULL;
 	}
-	mirror->type = MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR;
+
+	mirror->indirect.type = MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR;
 	mirror->clones_num = clones_num;
 	for (i = 0; i < clones_num; i++) {
 		const struct rte_flow_action *clone_actions;
@@ -9830,15 +9995,72 @@ mlx5_hw_mirror_handle_create(struct rte_eth_dev *dev,
 		goto error;
 	}

-	LIST_INSERT_HEAD(&priv->indirect_list_head,
-			 (struct mlx5_indirect_list *)mirror, entry);
+	mlx5_indirect_list_add_entry(&priv->indirect_list_head, &mirror->indirect);
 	return (struct rte_flow_action_list_handle *)mirror;

 error:
-	mlx5_hw_mirror_destroy(dev, mirror, true);
+	mlx5_hw_mirror_destroy(dev, mirror);
 	return NULL;
 }

+void
+mlx5_destroy_legacy_indirect(__rte_unused struct rte_eth_dev *dev,
+			     struct mlx5_indirect_list *ptr)
+{
+	struct mlx5_indlst_legacy *obj = (typeof(obj))ptr;
+
+	switch (obj->legacy_type) {
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+		break; /* ASO meters were released in mlx5_flow_meter_flush() */
+	default:
+		break;
+	}
+	mlx5_free(obj);
+}
+
+static struct rte_flow_action_list_handle *
+mlx5_create_legacy_indlst(struct rte_eth_dev *dev, uint32_t queue,
+			  const struct rte_flow_op_attr *attr,
+			  const struct rte_flow_indir_action_conf *conf,
+			  const struct rte_flow_action *actions,
+			  void *user_data, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indlst_legacy *indlst_obj = mlx5_malloc(MLX5_MEM_ZERO,
+							    sizeof(*indlst_obj),
+							    0, SOCKET_ID_ANY);
+
+	if (!indlst_obj)
+		return NULL;
+	indlst_obj->handle = flow_hw_action_handle_create(dev, queue, attr, conf,
+							  actions, user_data,
+							  error);
+	if (!indlst_obj->handle) {
+		mlx5_free(indlst_obj);
+		return NULL;
+	}
+	indlst_obj->legacy_type = actions[0].type;
+	indlst_obj->indirect.type = MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY;
+	mlx5_indirect_list_add_entry(&priv->indirect_list_head,
+				     &indlst_obj->indirect);
+	return (struct rte_flow_action_list_handle *)indlst_obj;
+}
+
+static __rte_always_inline enum mlx5_indirect_list_type
+flow_hw_inlist_type_get(const struct rte_flow_action *actions)
+{
+	switch (actions[0].type) {
+	case RTE_FLOW_ACTION_TYPE_SAMPLE:
+		return MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+		return actions[1].type == RTE_FLOW_ACTION_TYPE_END ?
+		       MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY :
+		       MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
+	default:
+		break;
+	}
+	return MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
+}
+
 static struct rte_flow_action_list_handle *
 flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 					const struct rte_flow_op_attr *attr,
@@ -9849,6 +10071,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 {
 	struct mlx5_hw_q_job *job = NULL;
 	bool push = flow_hw_action_push(attr);
+	enum mlx5_indirect_list_type list_type;
 	struct rte_flow_action_list_handle *handle;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct mlx5_flow_template_table_cfg table_cfg = {
@@ -9867,6 +10090,16 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 				   NULL, "No action list");
 		return NULL;
 	}
+	list_type = flow_hw_inlist_type_get(actions);
+	if (list_type == MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY) {
+		/*
+		 * Legacy indirect actions already have
+		 * async resources management. No need to do it twice.
+		 */
+		handle = mlx5_create_legacy_indlst(dev, queue, attr, conf,
+						   actions, user_data, error);
+		goto end;
+	}
 	if (attr) {
 		job = flow_hw_action_job_init(priv, queue, NULL, user_data, NULL,
 					      MLX5_HW_Q_JOB_TYPE_CREATE,
@@ -9874,8 +10107,8 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		if (!job)
 			return NULL;
 	}
-	switch (actions[0].type) {
-	case RTE_FLOW_ACTION_TYPE_SAMPLE:
+	switch (list_type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
 		handle = mlx5_hw_mirror_handle_create(dev, &table_cfg,
 						      actions, error);
 		break;
@@ -9889,6 +10122,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		flow_hw_action_finalize(dev, queue, job, push, false,
 					handle != NULL);
 	}
+end:
 	return handle;
 }

@@ -9917,6 +10151,15 @@ flow_hw_async_action_list_handle_destroy
 	enum mlx5_indirect_list_type type =
 		mlx5_get_indirect_list_type((void *)handle);

+	if (type == MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY) {
+		struct mlx5_indlst_legacy *legacy = (typeof(legacy))handle;
+
+		ret = flow_hw_action_handle_destroy(dev, queue, attr,
+						    legacy->handle,
+						    user_data, error);
+		mlx5_indirect_list_remove_entry(&legacy->indirect);
+		goto end;
+	}
 	if (attr) {
 		job = flow_hw_action_job_init(priv, queue, NULL, user_data, NULL,
 					      MLX5_HW_Q_JOB_TYPE_DESTROY,
@@ -9926,20 +10169,17 @@ flow_hw_async_action_list_handle_destroy
 	}
 	switch(type) {
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
-		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle, false);
+		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle);
 		break;
 	default:
-		handle = NULL;
 		ret = rte_flow_error_set(error, EINVAL,
 					 RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					 "Invalid indirect list handle");
 	}
 	if (job) {
-		job->action = handle;
-		flow_hw_action_finalize(dev, queue, job, push, false,
-					handle != NULL);
+		flow_hw_action_finalize(dev, queue, job, push, false, true);
 	}
-	mlx5_free(handle);
+end:
 	return ret;
 }

@@ -9953,6 +10193,53 @@ flow_hw_action_list_handle_destroy(struct rte_eth_dev *dev,
 					   error);
 }

+static int
+flow_hw_async_action_list_handle_query_update
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *attr,
+		 const struct rte_flow_action_list_handle *handle,
+		 const void **update, void **query,
+		 enum rte_flow_query_update_mode mode,
+		 void *user_data, struct rte_flow_error *error)
+{
+	enum mlx5_indirect_list_type type =
+		mlx5_get_indirect_list_type((const void *)handle);
+
+	if (type == MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY) {
+		struct mlx5_indlst_legacy *legacy = (void *)(uintptr_t)handle;
+
+		if (update && query)
+			return flow_hw_async_action_handle_query_update
+				(dev, queue_id, attr, legacy->handle,
+				 update, query, mode, user_data, error);
+		else if (update && update[0])
+			return flow_hw_action_handle_update(dev, queue_id, attr,
+							    legacy->handle, update[0],
+							    user_data, error);
+		else if (query && query[0])
+			return flow_hw_action_handle_query(dev, queue_id, attr,
+							   legacy->handle, query[0],
+							   user_data, error);
+		else
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL, "invalid legacy handle query_update parameters");
+	}
+	return -ENOTSUP;
+}
+
+static int
+flow_hw_action_list_handle_query_update(struct rte_eth_dev *dev,
+					const struct rte_flow_action_list_handle *handle,
+					const void **update, void **query,
+					enum rte_flow_query_update_mode mode,
+					struct rte_flow_error *error)
+{
+	return flow_hw_async_action_list_handle_query_update
+					(dev, MLX5_HW_INV_QUEUE, NULL, handle,
+					 update, query, mode, NULL, error);
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.info_get = flow_hw_info_get,
 	.configure = flow_hw_configure,
@@ -9983,10 +10270,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.action_query_update = flow_hw_action_query_update,
 	.action_list_handle_create = flow_hw_action_list_handle_create,
 	.action_list_handle_destroy = flow_hw_action_list_handle_destroy,
+	.action_list_handle_query_update =
+		flow_hw_action_list_handle_query_update,
 	.async_action_list_handle_create =
 		flow_hw_async_action_list_handle_create,
 	.async_action_list_handle_destroy =
 		flow_hw_async_action_list_handle_destroy,
+	.async_action_list_handle_query_update =
+		flow_hw_async_action_list_handle_query_update,
 	.query = flow_hw_query,
 	.get_aged_flows = flow_hw_get_aged_flows,
 	.get_q_aged_flows = flow_hw_get_q_aged_flows,
-- 
2.18.1