From: Gregory Etelson <getelson@nvidia.com>
Date: Mon, 1 Nov 2021 11:15:11 +0200
Subject: [dpdk-dev] [PATCH 6/9] net/mlx5: add flex parser DevX object management
Message-ID: <20211101091514.3891-7-getelson@nvidia.com>
In-Reply-To: <20211101091514.3891-1-getelson@nvidia.com>
References: <20211101091514.3891-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.33.1

The DevX flex parsers can be shared between representors
within the same IB context. We should put the flex parser
objects into the shared list and engage the standard
mlx5_list_xxx API to manage them.
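For context, the lifecycle this patch sets up can be summarized as
follows. This is an illustrative sketch condensed from the hunks below,
not additional patch content: the list name is shortened, "sh", "priv"
and "flex" stand for the usual shared-context, port-private and
flex-item pointers, and error handling is omitted.

	/* Once per IB context, when the shared DR resources are allocated. */
	sh->flex_parsers_dv = mlx5_list_create("flex_parsers_list", sh, false,
					       mlx5_flex_parser_create_cb,
					       mlx5_flex_parser_match_cb,
					       mlx5_flex_parser_remove_cb,
					       mlx5_flex_parser_clone_cb,
					       mlx5_flex_parser_clone_free_cb);

	/* Per flex item: register a configuration; an entry with a matching
	 * devx_conf is reused, otherwise create_cb builds a new DevX object. */
	struct mlx5_flex_parser_devx devx_config = { .devx_obj = NULL };
	struct mlx5_list_entry *ent =
		mlx5_list_register(priv->sh->flex_parsers_dv, &devx_config);
	flex->devx_fp = container_of(ent, struct mlx5_flex_parser_devx, entry);

	/* On flex item release: drop the reference; remove_cb destroys the
	 * DevX object when the last user within the IB context is gone. */
	mlx5_list_unregister(priv->sh->flex_parsers_dv, &flex->devx_fp->entry);

	/* Once per IB context, on shared context teardown. */
	mlx5_list_destroy(sh->flex_parsers_dv);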
Signed-off-by: Gregory Etelson
Reviewed-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/linux/mlx5_os.c  |  10 +++
 drivers/net/mlx5/mlx5.c           |   4 +
 drivers/net/mlx5/mlx5.h           |  20 +++++
 drivers/net/mlx5/mlx5_flow_flex.c | 121 +++++++++++++++++++++++++++++-
 4 files changed, 154 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index cf5c5b9722..b800ddd01a 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -337,6 +337,16 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 					      flow_dv_dest_array_clone_free_cb);
 	if (!sh->dest_array_list)
 		goto error;
+	/* Init shared flex parsers list, no need lcore_share */
+	snprintf(s, sizeof(s), "%s_flex_parsers_list", sh->ibdev_name);
+	sh->flex_parsers_dv = mlx5_list_create(s, sh, false,
+					       mlx5_flex_parser_create_cb,
+					       mlx5_flex_parser_match_cb,
+					       mlx5_flex_parser_remove_cb,
+					       mlx5_flex_parser_clone_cb,
+					       mlx5_flex_parser_clone_free_cb);
+	if (!sh->flex_parsers_dv)
+		goto error;
 #endif
 #ifdef HAVE_MLX5DV_DR
 	void *domain;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 8166d6272c..2c1e6b6637 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1428,6 +1428,10 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 		mlx5_flow_os_release_workspace();
 	}
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
+	if (sh->flex_parsers_dv) {
+		mlx5_list_destroy(sh->flex_parsers_dv);
+		sh->flex_parsers_dv = NULL;
+	}
 	/*
 	 * Ensure there is no async event handler installed.
 	 * Only primary process handles async device events.
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 75906da2c0..244f45bea2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1101,6 +1101,15 @@ struct mlx5_lag {
 	uint8_t affinity_mode; /* TIS or hash based affinity */
 };
 
+/* DevX flex parser context. */
+struct mlx5_flex_parser_devx {
+	struct mlx5_list_entry entry;  /* List element at the beginning. */
+	uint32_t num_samples;
+	void *devx_obj;
+	struct mlx5_devx_graph_node_attr devx_conf;
+	uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+};
+
 /* Port flex item context. */
 struct mlx5_flex_item {
 	struct mlx5_flex_parser_devx *devx_fp; /* DevX flex parser object. */
@@ -1157,6 +1166,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_list *push_vlan_action_list; /* Push VLAN actions. */
 	struct mlx5_list *sample_action_list; /* List of sample actions. */
 	struct mlx5_list *dest_array_list;
+	struct mlx5_list *flex_parsers_dv; /* Flex Item parsers. */
 	/* List of destination array actions. */
 	struct mlx5_flow_counter_mng cmng; /* Counters management structure. */
 	void *default_miss_action; /* Default miss action. */
@@ -1823,4 +1833,14 @@ int flow_dv_item_release(struct rte_eth_dev *dev,
 			 struct rte_flow_error *error);
 int mlx5_flex_item_port_init(struct rte_eth_dev *dev);
 void mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev);
+/* Flex parser list callbacks. */
+struct mlx5_list_entry *mlx5_flex_parser_create_cb(void *list_ctx, void *ctx);
+int mlx5_flex_parser_match_cb(void *list_ctx,
+			      struct mlx5_list_entry *iter, void *ctx);
+void mlx5_flex_parser_remove_cb(void *list_ctx, struct mlx5_list_entry *entry);
+struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
+						  struct mlx5_list_entry *entry,
+						  void *ctx);
+void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
+				    struct mlx5_list_entry *entry);
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index b7bc4af6fb..2f87073e97 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -45,7 +45,13 @@ mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev)
 
 	for (i = 0; i < MLX5_PORT_FLEX_ITEM_NUM && priv->flex_item_map ; i++) {
 		if (priv->flex_item_map & (1 << i)) {
-			/* DevX object dereferencing should be provided here. */
+			struct mlx5_flex_item *flex = &priv->flex_item[i];
+
+			claim_zero(mlx5_list_unregister
+					(priv->sh->flex_parsers_dv,
+					 &flex->devx_fp->entry));
+			flex->devx_fp = NULL;
+			flex->refcnt = 0;
 			priv->flex_item_map &= ~(1 << i);
 		}
 	}
@@ -127,7 +133,9 @@ flow_dv_item_create(struct rte_eth_dev *dev,
 		    struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flex_parser_devx devx_config = { .devx_obj = NULL };
 	struct mlx5_flex_item *flex;
+	struct mlx5_list_entry *ent;
 
 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
 	flex = mlx5_flex_alloc(priv);
@@ -137,10 +145,22 @@ flow_dv_item_create(struct rte_eth_dev *dev,
 				   "too many flex items created on the port");
 		return NULL;
 	}
+	ent = mlx5_list_register(priv->sh->flex_parsers_dv, &devx_config);
+	if (!ent) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "flex item creation failure");
+		goto error;
+	}
+	flex->devx_fp = container_of(ent, struct mlx5_flex_parser_devx, entry);
 	RTE_SET_USED(conf);
 	/* Mark initialized flex item valid. */
 	__atomic_add_fetch(&flex->refcnt, 1, __ATOMIC_RELEASE);
 	return (struct rte_flow_item_flex_handle *)flex;
+
+error:
+	mlx5_flex_free(priv, flex);
+	return NULL;
 }
 
 /**
@@ -166,6 +186,7 @@ flow_dv_item_release(struct rte_eth_dev *dev,
 	struct mlx5_flex_item *flex =
 		(struct mlx5_flex_item *)(uintptr_t)handle;
 	uint32_t old_refcnt = 1;
+	int rc;
 
 	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
 	rte_spinlock_lock(&priv->flex_item_sl);
@@ -184,6 +205,104 @@ flow_dv_item_release(struct rte_eth_dev *dev,
 	}
 	/* Flex item is marked as invalid, we can leave locked section. */
 	rte_spinlock_unlock(&priv->flex_item_sl);
+	MLX5_ASSERT(flex->devx_fp);
+	rc = mlx5_list_unregister(priv->sh->flex_parsers_dv,
+				  &flex->devx_fp->entry);
+	flex->devx_fp = NULL;
 	mlx5_flex_free(priv, flex);
+	if (rc < 0)
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "flex item release failure");
 	return 0;
 }
+
+/* DevX flex parser list callbacks. */
+struct mlx5_list_entry *
+mlx5_flex_parser_create_cb(void *list_ctx, void *ctx)
+{
+	struct mlx5_dev_ctx_shared *sh = list_ctx;
+	struct mlx5_flex_parser_devx *fp, *conf = ctx;
+	int ret;
+
+	fp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_flex_parser_devx),
+			 0, SOCKET_ID_ANY);
+	if (!fp)
+		return NULL;
+	/* Copy the requested configurations. */
+	fp->num_samples = conf->num_samples;
+	memcpy(&fp->devx_conf, &conf->devx_conf, sizeof(fp->devx_conf));
+	/* Create DevX flex parser. */
+	fp->devx_obj = mlx5_devx_cmd_create_flex_parser(sh->cdev->ctx,
+							&fp->devx_conf);
+	if (!fp->devx_obj)
+		goto error;
+	/* Query the firmware assigned sample ids. */
+	ret = mlx5_devx_cmd_query_parse_samples(fp->devx_obj,
+						fp->sample_ids,
+						fp->num_samples);
+	if (ret)
+		goto error;
+	DRV_LOG(DEBUG, "DEVx flex parser %p created, samples num: %u",
+		(const void *)fp, fp->num_samples);
+	return &fp->entry;
+error:
+	if (fp->devx_obj)
+		mlx5_devx_cmd_destroy((void *)(uintptr_t)fp->devx_obj);
+	if (fp)
+		mlx5_free(fp);
+	return NULL;
+}
+
+int
+mlx5_flex_parser_match_cb(void *list_ctx,
+			  struct mlx5_list_entry *iter, void *ctx)
+{
+	struct mlx5_flex_parser_devx *fp =
+		container_of(iter, struct mlx5_flex_parser_devx, entry);
+	struct mlx5_flex_parser_devx *org =
+		container_of(ctx, struct mlx5_flex_parser_devx, entry);
+
+	RTE_SET_USED(list_ctx);
+	return !iter || !ctx || memcmp(&fp->devx_conf,
+				       &org->devx_conf,
+				       sizeof(fp->devx_conf));
+}
+
+void
+mlx5_flex_parser_remove_cb(void *list_ctx, struct mlx5_list_entry *entry)
+{
+	struct mlx5_flex_parser_devx *fp =
+		container_of(entry, struct mlx5_flex_parser_devx, entry);
+
+	RTE_SET_USED(list_ctx);
+	MLX5_ASSERT(fp->devx_obj);
+	claim_zero(mlx5_devx_cmd_destroy(fp->devx_obj));
+	DRV_LOG(DEBUG, "DEVx flex parser %p destroyed", (const void *)fp);
+	mlx5_free(entry);
+}
+
+struct mlx5_list_entry *
+mlx5_flex_parser_clone_cb(void *list_ctx,
+			  struct mlx5_list_entry *entry, void *ctx)
+{
+	struct mlx5_flex_parser_devx *fp;
+
+	RTE_SET_USED(list_ctx);
+	RTE_SET_USED(entry);
+	fp = mlx5_malloc(0, sizeof(struct mlx5_flex_parser_devx),
+			 0, SOCKET_ID_ANY);
+	if (!fp)
+		return NULL;
+	memcpy(fp, ctx, sizeof(struct mlx5_flex_parser_devx));
+	return &fp->entry;
+}
+
+void
+mlx5_flex_parser_clone_free_cb(void *list_ctx, struct mlx5_list_entry *entry)
+{
+	struct mlx5_flex_parser_devx *fp =
+		container_of(entry, struct mlx5_flex_parser_devx, entry);
+	RTE_SET_USED(list_ctx);
+	mlx5_free(fp);
+}
-- 
2.33.1