From: Bing Zhao <bingz@nvidia.com>
To: , ,
Cc: , ,
Date: Wed, 5 May 2021 12:49:59 +0300
Message-ID: <20210505095009.40250-8-bingz@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210505095009.40250-1-bingz@nvidia.com>
References: <20210427153811.11554-1-bingz@nvidia.com> <20210505095009.40250-1-bingz@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v6 07/17] net/mlx5: add actions creating for CT

Allocate a CT object from the management pools and create the DR
actions for both directions by default.

If there is no available connection tracking action, a new pool will be
created with a fixed-size bulk allocation. Right now, all the resources
are controlled by linked lists.

The ASO connection tracking context associated with these actions needs
to be updated via WQE before it can be used for steering.
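For context, a minimal, hypothetical sketch of how an application reaches
this code path through the generic rte_flow indirect-action API (DPDK
21.05); it is not part of the patch. PORT_ID and the chosen profile
fields are illustrative assumptions only, and the exact set of
rte_flow_action_conntrack fields should be checked against rte_flow.h:

#include <rte_flow.h>

#define PORT_ID 0 /* Hypothetical port id; adjust for the real setup. */

static struct rte_flow_action_handle *
create_ct_handle(struct rte_flow_error *err)
{
	/* Initial conntrack profile; only a few representative fields set. */
	struct rte_flow_action_conntrack ct_conf = {
		.peer_port = PORT_ID,
		.is_original_dir = 1,
		.enable = 1,
		.live_connection = 1,
		.state = RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
	};
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
		.conf = &ct_conf,
	};
	const struct rte_flow_indir_action_conf conf = {
		.ingress = 1,
	};

	/* In mlx5 this lands in flow_dv_action_create() and then
	 * flow_dv_translate_create_conntrack() below.
	 */
	return rte_flow_action_handle_create(PORT_ID, &conf, &action, err);
}

The returned handle is then referenced in flow rules through an
RTE_FLOW_ACTION_TYPE_INDIRECT action entry.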
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5.h         |   4 +
 drivers/net/mlx5/mlx5_flow.h    |  28 ++++
 drivers/net/mlx5/mlx5_flow_dv.c | 263 ++++++++++++++++++++++++++++++++
 3 files changed, 295 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 96b5cccf19..0f2a26efc0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -992,6 +992,10 @@ struct mlx5_bond_info {
 /* Number of connection tracking objects per pool: must be a power of 2. */
 #define MLX5_ASO_CT_ACTIONS_PER_POOL 64
 
+/* Generate incremental and unique CT index from pool and offset. */
+#define MLX5_MAKE_CT_IDX(pool, offset) \
+	((pool) * MLX5_ASO_CT_ACTIONS_PER_POOL + (offset) + 1)
+
 /* ASO Conntrack state. */
 enum mlx5_aso_ct_state {
 	ASO_CONNTRACK_FREE, /* Inactive, in the free list. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 71b0871bcd..0d2daa7faf 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -46,6 +46,7 @@ enum {
 	MLX5_INDIRECT_ACTION_TYPE_RSS,
 	MLX5_INDIRECT_ACTION_TYPE_AGE,
 	MLX5_INDIRECT_ACTION_TYPE_COUNT,
+	MLX5_INDIRECT_ACTION_TYPE_CT,
 };
 
 /* Matches on selected register. */
@@ -1317,6 +1318,33 @@ mlx5_validate_integrity_item(const struct rte_flow_item_integrity *item)
 	return (test.value == 0);
 }
 
+/*
+ * Get ASO CT action by index.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] idx
+ *   Index to the ASO CT action.
+ *
+ * @return
+ *   The specified ASO CT action pointer.
+ */
+static inline struct mlx5_aso_ct_action *
+flow_aso_ct_get_by_idx(struct rte_eth_dev *dev, uint32_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_pool *pool;
+
+	idx--;
+	MLX5_ASSERT((idx / MLX5_ASO_CT_ACTIONS_PER_POOL) < mng->n);
+	/* Bit operation AND could be used. */
+	rte_rwlock_read_lock(&mng->resize_rwl);
+	pool = mng->pools[idx / MLX5_ASO_CT_ACTIONS_PER_POOL];
+	rte_rwlock_read_unlock(&mng->resize_rwl);
+	return &pool->actions[idx % MLX5_ASO_CT_ACTIONS_PER_POOL];
+}
+
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index c6f90e0a89..b3606e895c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -11515,6 +11515,262 @@ flow_dv_prepare_counter(struct rte_eth_dev *dev,
 	return flow_dv_counter_get_by_idx(dev, flow->counter, NULL);
 }
 
+/*
+ * Release an ASO CT action.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] idx
+ *   Index of ASO CT action to release.
+ *
+ * @return
+ *   0 when CT action was removed, otherwise the number of references.
+ */
+static inline int
+flow_dv_aso_ct_release(struct rte_eth_dev *dev, uint32_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_action *ct = flow_aso_ct_get_by_idx(dev, idx);
+	uint32_t ret = __atomic_sub_fetch(&ct->refcnt, 1, __ATOMIC_RELAXED);
+
+	if (!ret) {
+		if (ct->dr_action_orig) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+			claim_zero(mlx5_glue->destroy_flow_action
+					(ct->dr_action_orig));
+#endif
+			ct->dr_action_orig = NULL;
+		}
+		if (ct->dr_action_rply) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+			claim_zero(mlx5_glue->destroy_flow_action
+					(ct->dr_action_rply));
+#endif
+			ct->dr_action_rply = NULL;
+		}
+		rte_spinlock_lock(&mng->ct_sl);
+		LIST_INSERT_HEAD(&mng->free_cts, ct, next);
+		rte_spinlock_unlock(&mng->ct_sl);
+	}
+	return ret;
+}
+
+/*
+ * Resize the ASO CT pools array by 64 pools.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value and rte_errno is set.
+ */
+static int
+flow_dv_aso_ct_pools_resize(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	void *old_pools = mng->pools;
+	/* Magic number now, need a macro. */
+	uint32_t resize = mng->n + 64;
+	uint32_t mem_size = sizeof(struct mlx5_aso_ct_pool *) * resize;
+	void *pools = mlx5_malloc(MLX5_MEM_ZERO, mem_size, 0, SOCKET_ID_ANY);
+
+	if (!pools) {
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	rte_rwlock_write_lock(&mng->resize_rwl);
+	/* ASO SQ/QP was already initialized in the startup. */
+	if (old_pools) {
+		/* Realloc could be an alternative choice. */
+		rte_memcpy(pools, old_pools,
+			   mng->n * sizeof(struct mlx5_aso_ct_pool *));
+		mlx5_free(old_pools);
+	}
+	mng->n = resize;
+	mng->pools = pools;
+	rte_rwlock_write_unlock(&mng->resize_rwl);
+	return 0;
+}
+
+/*
+ * Create and initialize a new ASO CT pool.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[out] ct_free
+ *   Where to put the pointer of a new CT action.
+ *
+ * @return
+ *   The CT actions pool pointer and @p ct_free is set on success,
+ *   NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_aso_ct_pool *
+flow_dv_ct_pool_create(struct rte_eth_dev *dev,
+		       struct mlx5_aso_ct_action **ct_free)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_pool *pool = NULL;
+	struct mlx5_devx_obj *obj = NULL;
+	uint32_t i;
+	uint32_t log_obj_size = rte_log2_u32(MLX5_ASO_CT_ACTIONS_PER_POOL);
+
+	obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->ctx,
+						priv->sh->pdn, log_obj_size);
+	if (!obj) {
+		rte_errno = ENODATA;
+		DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
+		return NULL;
+	}
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY);
+	if (!pool) {
+		rte_errno = ENOMEM;
+		claim_zero(mlx5_devx_cmd_destroy(obj));
+		return NULL;
+	}
+	pool->devx_obj = obj;
+	pool->index = mng->next;
+	/* Resize pools array if there is no room for the new pool in it. */
+	if (pool->index == mng->n && flow_dv_aso_ct_pools_resize(dev)) {
+		claim_zero(mlx5_devx_cmd_destroy(obj));
+		mlx5_free(pool);
+		return NULL;
+	}
+	mng->pools[pool->index] = pool;
+	mng->next++;
+	/* Assign the first action in the new pool, the rest go to free list. */
+	*ct_free = &pool->actions[0];
+	/* Lock outside, the list operation is safe here. */
+	for (i = 1; i < MLX5_ASO_CT_ACTIONS_PER_POOL; i++) {
+		/* refcnt is 0 when allocating the memory. */
+		pool->actions[i].offset = i;
+		LIST_INSERT_HEAD(&mng->free_cts, &pool->actions[i], next);
+	}
+	return pool;
+}
+
+/*
+ * Allocate an ASO CT action from the free list.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   Index to ASO CT action on success, 0 otherwise and rte_errno is set.
+ */
+static uint32_t
+flow_dv_aso_ct_alloc(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_action *ct = NULL;
+	struct mlx5_aso_ct_pool *pool;
+	uint8_t reg_c;
+	uint32_t ct_idx;
+
+	MLX5_ASSERT(mng);
+	if (!priv->config.devx) {
+		rte_errno = ENOTSUP;
+		return 0;
+	}
+	/* Get a free CT action; if none is available, create a new pool. */
+	rte_spinlock_lock(&mng->ct_sl);
+	ct = LIST_FIRST(&mng->free_cts);
+	if (ct) {
+		LIST_REMOVE(ct, next);
+	} else if (!flow_dv_ct_pool_create(dev, &ct)) {
+		rte_spinlock_unlock(&mng->ct_sl);
+		rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_ACTION,
+				   NULL, "failed to create ASO CT pool");
+		return 0;
+	}
+	rte_spinlock_unlock(&mng->ct_sl);
+	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
+	ct_idx = MLX5_MAKE_CT_IDX(pool->index, ct->offset);
+	/* 0: inactive, 1: created, 2+: used by flows. */
+	__atomic_store_n(&ct->refcnt, 1, __ATOMIC_RELAXED);
+	reg_c = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, error);
+	if (!ct->dr_action_orig) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+		ct->dr_action_orig = mlx5_glue->dv_create_flow_action_aso
+			(priv->sh->rx_domain, pool->devx_obj->obj,
+			 ct->offset,
+			 MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR,
+			 reg_c - REG_C_0);
+#else
+		RTE_SET_USED(reg_c);
+#endif
+		if (!ct->dr_action_orig) {
+			flow_dv_aso_ct_release(dev, ct_idx);
+			rte_flow_error_set(error, rte_errno,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "failed to create ASO CT action");
+			return 0;
+		}
+	}
+	if (!ct->dr_action_rply) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+		ct->dr_action_rply = mlx5_glue->dv_create_flow_action_aso
+			(priv->sh->rx_domain, pool->devx_obj->obj,
+			 ct->offset,
+			 MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_RESPONDER,
+			 reg_c - REG_C_0);
+#endif
+		if (!ct->dr_action_rply) {
+			flow_dv_aso_ct_release(dev, ct_idx);
+			rte_flow_error_set(error, rte_errno,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "failed to create ASO CT action");
+			return 0;
+		}
+	}
+	return ct_idx;
+}
+
+/*
+ * Create a conntrack object with context and actions by using ASO mechanism.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] pro
+ *   Pointer to conntrack information profile.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   Index to conntrack object on success, 0 otherwise.
+ */
+static uint32_t
+flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
+				   const struct rte_flow_action_conntrack *pro,
+				   struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+	struct mlx5_aso_ct_action *ct;
+	uint32_t idx;
+
+	if (!sh->ct_aso_en)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "Connection is not supported");
+	idx = flow_dv_aso_ct_alloc(dev, error);
+	if (!idx)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "Failed to allocate CT object");
+	ct = flow_aso_ct_get_by_idx(dev, idx);
+	if (mlx5_aso_ct_update_by_wqe(sh, ct, pro))
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "Failed to update CT");
+	return idx;
+}
+
 /**
  * Fill the flow with DV spec, lock free
  * (mutex should be acquired by caller).
@@ -13729,6 +13985,7 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 {
 	uint32_t idx = 0;
 	uint32_t ret = 0;
+	struct mlx5_priv *priv = dev->data->dev_private;
 
 	switch (action->type) {
 	case RTE_FLOW_ACTION_TYPE_RSS:
@@ -13754,6 +14011,12 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 		idx = (MLX5_INDIRECT_ACTION_TYPE_COUNT <<
 		       MLX5_INDIRECT_ACTION_TYPE_OFFSET) | ret;
 		break;
+	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+		ret = flow_dv_translate_create_conntrack(dev, action->conf,
+							 err);
+		idx = (MLX5_INDIRECT_ACTION_TYPE_CT <<
+		       MLX5_INDIRECT_ACTION_TYPE_OFFSET) | ret;
+		break;
 	default:
 		rte_flow_error_set(err, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
 				   NULL, "action type not supported");
-- 
2.27.0
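
To make the CT index arithmetic above concrete, here is a standalone
sketch of the encode/decode round trip performed by MLX5_MAKE_CT_IDX()
and flow_aso_ct_get_by_idx(). The helper names are hypothetical; only the
arithmetic comes from the patch (64 actions per pool, index 0 reserved as
invalid so that the allocation path can return 0 on failure):

#include <assert.h>
#include <stdint.h>

#define ACTIONS_PER_POOL 64u /* mirrors MLX5_ASO_CT_ACTIONS_PER_POOL */

/* Encode: same arithmetic as MLX5_MAKE_CT_IDX(); 0 means "no action". */
static uint32_t make_ct_idx(uint32_t pool, uint32_t offset)
{
	return pool * ACTIONS_PER_POOL + offset + 1;
}

/* Decode: same steps as flow_aso_ct_get_by_idx(), minus the pool lookup. */
static void split_ct_idx(uint32_t idx, uint32_t *pool, uint32_t *offset)
{
	idx--;
	*pool = idx / ACTIONS_PER_POOL;
	*offset = idx % ACTIONS_PER_POOL;
}

int main(void)
{
	uint32_t pool, offset;

	split_ct_idx(make_ct_idx(2, 5), &pool, &offset);
	assert(pool == 2 && offset == 5); /* index is 2 * 64 + 5 + 1 = 134 */
	return 0;
}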