From: Gregory Etelson <getelson@nvidia.com>
Date: Tue, 2 Nov 2021 10:53:43 +0200
Message-ID: <20211102085347.20568-7-getelson@nvidia.com>
In-Reply-To: <20211102085347.20568-1-getelson@nvidia.com>
References: <20211101091514.3891-1-getelson@nvidia.com> <20211102085347.20568-1-getelson@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 6/9] net/mlx5: add flex parser DevX object management
List-Id: DPDK patches and discussions

The DevX flex parsers can be shared between representors within the same IB context. Put the flex parser objects into the shared list and use the standard mlx5_list_xxx API to manage them.
Signed-off-by: Gregory Etelson
Reviewed-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/linux/mlx5_os.c  |  10 +++
 drivers/net/mlx5/mlx5.c           |   4 +
 drivers/net/mlx5/mlx5.h           |  20 +++++
 drivers/net/mlx5/mlx5_flow_flex.c | 121 +++++++++++++++++++++++++++++-
 4 files changed, 154 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3f7c34b687..1c6f50b72a 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -337,6 +337,16 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 					      flow_dv_dest_array_clone_free_cb);
 	if (!sh->dest_array_list)
 		goto error;
+	/* Init shared flex parsers list, no need lcore_share */
+	snprintf(s, sizeof(s), "%s_flex_parsers_list", sh->ibdev_name);
+	sh->flex_parsers_dv = mlx5_list_create(s, sh, false,
+					       mlx5_flex_parser_create_cb,
+					       mlx5_flex_parser_match_cb,
+					       mlx5_flex_parser_remove_cb,
+					       mlx5_flex_parser_clone_cb,
+					       mlx5_flex_parser_clone_free_cb);
+	if (!sh->flex_parsers_dv)
+		goto error;
 #endif
 #ifdef HAVE_MLX5DV_DR
 	void *domain;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index a4a0e258a9..dc15688f21 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1429,6 +1429,10 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 		mlx5_flow_os_release_workspace();
 	}
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
+	if (sh->flex_parsers_dv) {
+		mlx5_list_destroy(sh->flex_parsers_dv);
+		sh->flex_parsers_dv = NULL;
+	}
 	/*
 	 * Ensure there is no async event handler installed.
 	 * Only primary process handles async device events.
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f0c1775f8c..63de6523e8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1099,6 +1099,15 @@ struct mlx5_lag {
 	uint8_t affinity_mode; /* TIS or hash based affinity */
 };

+/* DevX flex parser context. */
+struct mlx5_flex_parser_devx {
+	struct mlx5_list_entry entry; /* List element at the beginning. */
+	uint32_t num_samples;
+	void *devx_obj;
+	struct mlx5_devx_graph_node_attr devx_conf;
+	uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+};
+
 /* Port flex item context. */
 struct mlx5_flex_item {
 	struct mlx5_flex_parser_devx *devx_fp; /* DevX flex parser object. */
@@ -1159,6 +1168,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_list *push_vlan_action_list; /* Push VLAN actions. */
 	struct mlx5_list *sample_action_list; /* List of sample actions. */
 	struct mlx5_list *dest_array_list;
+	struct mlx5_list *flex_parsers_dv; /* Flex Item parsers. */
 	/* List of destination array actions. */
 	struct mlx5_flow_counter_mng cmng; /* Counters management structure. */
 	void *default_miss_action; /* Default miss action. */
@@ -1828,4 +1838,14 @@ int flow_dv_item_release(struct rte_eth_dev *dev,
 			 struct rte_flow_error *error);
 int mlx5_flex_item_port_init(struct rte_eth_dev *dev);
 void mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev);
+/* Flex parser list callbacks. */
+struct mlx5_list_entry *mlx5_flex_parser_create_cb(void *list_ctx, void *ctx);
+int mlx5_flex_parser_match_cb(void *list_ctx,
+			      struct mlx5_list_entry *iter, void *ctx);
+void mlx5_flex_parser_remove_cb(void *list_ctx, struct mlx5_list_entry *entry);
+struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
+						  struct mlx5_list_entry *entry,
+						  void *ctx);
+void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
+				    struct mlx5_list_entry *entry);
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index b7bc4af6fb..2f87073e97 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -45,7 +45,13 @@ mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev)

 	for (i = 0; i < MLX5_PORT_FLEX_ITEM_NUM && priv->flex_item_map ; i++) {
 		if (priv->flex_item_map & (1 << i)) {
-			/* DevX object dereferencing should be provided here. */
+			struct mlx5_flex_item *flex = &priv->flex_item[i];
+
+			claim_zero(mlx5_list_unregister
+					(priv->sh->flex_parsers_dv,
+					 &flex->devx_fp->entry));
+			flex->devx_fp = NULL;
+			flex->refcnt = 0;
 			priv->flex_item_map &= ~(1 << i);
 		}
 	}
@@ -127,7 +133,9 @@ flow_dv_item_create(struct rte_eth_dev *dev,
		   struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flex_parser_devx devx_config = { .devx_obj = NULL };
 	struct mlx5_flex_item *flex;
+	struct mlx5_list_entry *ent;

 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
 	flex = mlx5_flex_alloc(priv);
@@ -137,10 +145,22 @@ flow_dv_item_create(struct rte_eth_dev *dev,
				   "too many flex items created on the port");
 		return NULL;
 	}
+	ent = mlx5_list_register(priv->sh->flex_parsers_dv, &devx_config);
+	if (!ent) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "flex item creation failure");
+		goto error;
+	}
+	flex->devx_fp = container_of(ent, struct mlx5_flex_parser_devx, entry);
 	RTE_SET_USED(conf);
 	/* Mark initialized flex item valid. */
 	__atomic_add_fetch(&flex->refcnt, 1, __ATOMIC_RELEASE);
 	return (struct rte_flow_item_flex_handle *)flex;
+
+error:
+	mlx5_flex_free(priv, flex);
+	return NULL;
 }

 /**
@@ -166,6 +186,7 @@ flow_dv_item_release(struct rte_eth_dev *dev,
	struct mlx5_flex_item *flex =
		(struct mlx5_flex_item *)(uintptr_t)handle;
	uint32_t old_refcnt = 1;
+	int rc;

 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
	rte_spinlock_lock(&priv->flex_item_sl);
@@ -184,6 +205,104 @@ flow_dv_item_release(struct rte_eth_dev *dev,
 	}
	/* Flex item is marked as invalid, we can leave locked section. */
	rte_spinlock_unlock(&priv->flex_item_sl);
+	MLX5_ASSERT(flex->devx_fp);
+	rc = mlx5_list_unregister(priv->sh->flex_parsers_dv,
+				  &flex->devx_fp->entry);
+	flex->devx_fp = NULL;
	mlx5_flex_free(priv, flex);
+	if (rc < 0)
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "flex item release failure");
	return 0;
 }
+
+/* DevX flex parser list callbacks. */
+struct mlx5_list_entry *
+mlx5_flex_parser_create_cb(void *list_ctx, void *ctx)
+{
+	struct mlx5_dev_ctx_shared *sh = list_ctx;
+	struct mlx5_flex_parser_devx *fp, *conf = ctx;
+	int ret;
+
+	fp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_flex_parser_devx),
+			 0, SOCKET_ID_ANY);
+	if (!fp)
+		return NULL;
+	/* Copy the requested configurations. */
+	fp->num_samples = conf->num_samples;
+	memcpy(&fp->devx_conf, &conf->devx_conf, sizeof(fp->devx_conf));
+	/* Create DevX flex parser. */
+	fp->devx_obj = mlx5_devx_cmd_create_flex_parser(sh->cdev->ctx,
+							&fp->devx_conf);
+	if (!fp->devx_obj)
+		goto error;
+	/* Query the firmware assigned sample ids. */
+	ret = mlx5_devx_cmd_query_parse_samples(fp->devx_obj,
+						fp->sample_ids,
+						fp->num_samples);
+	if (ret)
+		goto error;
+	DRV_LOG(DEBUG, "DEVx flex parser %p created, samples num: %u",
+		(const void *)fp, fp->num_samples);
+	return &fp->entry;
+error:
+	if (fp->devx_obj)
+		mlx5_devx_cmd_destroy((void *)(uintptr_t)fp->devx_obj);
+	if (fp)
+		mlx5_free(fp);
+	return NULL;
+}
+
+int
+mlx5_flex_parser_match_cb(void *list_ctx,
+			  struct mlx5_list_entry *iter, void *ctx)
+{
+	struct mlx5_flex_parser_devx *fp =
+		container_of(iter, struct mlx5_flex_parser_devx, entry);
+	struct mlx5_flex_parser_devx *org =
+		container_of(ctx, struct mlx5_flex_parser_devx, entry);
+
+	RTE_SET_USED(list_ctx);
+	return !iter || !ctx || memcmp(&fp->devx_conf,
+				       &org->devx_conf,
+				       sizeof(fp->devx_conf));
+}
+
+void
+mlx5_flex_parser_remove_cb(void *list_ctx, struct mlx5_list_entry *entry)
+{
+	struct mlx5_flex_parser_devx *fp =
+		container_of(entry, struct mlx5_flex_parser_devx, entry);
+
+	RTE_SET_USED(list_ctx);
+	MLX5_ASSERT(fp->devx_obj);
+	claim_zero(mlx5_devx_cmd_destroy(fp->devx_obj));
+	DRV_LOG(DEBUG, "DEVx flex parser %p destroyed", (const void *)fp);
+	mlx5_free(entry);
+}
+
+struct mlx5_list_entry *
+mlx5_flex_parser_clone_cb(void *list_ctx,
+			  struct mlx5_list_entry *entry, void *ctx)
+{
+	struct mlx5_flex_parser_devx *fp;
+
+	RTE_SET_USED(list_ctx);
+	RTE_SET_USED(entry);
+	fp = mlx5_malloc(0, sizeof(struct mlx5_flex_parser_devx),
+			 0, SOCKET_ID_ANY);
+	if (!fp)
+		return NULL;
+	memcpy(fp, ctx, sizeof(struct mlx5_flex_parser_devx));
+	return &fp->entry;
+}
+
+void
+mlx5_flex_parser_clone_free_cb(void *list_ctx, struct mlx5_list_entry *entry)
+{
+	struct mlx5_flex_parser_devx *fp =
+		container_of(entry, struct mlx5_flex_parser_devx, entry);
+
+	RTE_SET_USED(list_ctx);
+	mlx5_free(fp);
+}
-- 
2.33.1