From: Bing Zhao <bingz@nvidia.com>
Date: Tue, 27 Apr 2021 18:38:01 +0300
Subject: [dpdk-dev] [PATCH 07/17] net/mlx5: add actions creating for CT
Message-ID: <20210427153811.11554-8-bingz@nvidia.com>
In-Reply-To: <20210427153811.11554-1-bingz@nvidia.com>
References: <20210427153811.11554-1-bingz@nvidia.com>
X-Mailer: git-send-email 2.27.0
List-Id: DPDK patches and discussions

Allocate a CT object from the management pools and create the DR
actions for both directions by default.

If there is no available connection tracking action, a new pool is
created with a fixed-size bulk allocation. For now, all the resources
are managed via a linked list. The ASO connection tracking context
associated with these actions needs to be updated via WQE before it
can be used for steering.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
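Note (not part of the patch): a minimal sketch of how an application
could create the new indirect conntrack action through the generic
rte_flow API once this series is in place. The helper name and the
profile field values are illustrative assumptions, not code from this
series; on the PMD side the request is served by flow_dv_action_create()
-> flow_dv_translate_create_conntrack().

#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical helper: create an indirect CT action handle on a port. */
static struct rte_flow_action_handle *
create_ct_handle(uint16_t port_id, uint16_t peer_port)
{
	struct rte_flow_error error;
	/* Illustrative profile; a real setup fills in the full TCP state. */
	struct rte_flow_action_conntrack profile = {
		.peer_port = peer_port,
		.is_original_dir = 1,
		.enable = 1,
	};
	const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
		.conf = &profile,
	};

	return rte_flow_action_handle_create(port_id, &conf, &action, &error);
}

The returned handle encodes MLX5_INDIRECT_ACTION_TYPE_CT in the bits
above MLX5_INDIRECT_ACTION_TYPE_OFFSET and the 1-based CT index from
MLX5_MAKE_CT_IDX() (pool * 64 + offset + 1) in the lower bits; 0 is
reserved as the failure value, which is why flow_aso_ct_get_by_idx()
decrements the index before locating the pool.
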
 drivers/net/mlx5/mlx5.h         |   4 +
 drivers/net/mlx5/mlx5_flow.h    |  27 ++-
 drivers/net/mlx5/mlx5_flow_dv.c | 261 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 291 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1d31813..982c0c2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -995,6 +995,10 @@ struct mlx5_bond_info {
 /* Number of connection tracking objects per pool: must be a power of 2. */
 #define MLX5_ASO_CT_ACTIONS_PER_POOL 64
 
+/* Generate incremental and unique CT index from pool and offset. */
+#define MLX5_MAKE_CT_IDX(pool, offset) \
+	((pool) * MLX5_ASO_CT_ACTIONS_PER_POOL + (offset) + 1)
+
 /* ASO Conntrack state. */
 enum mlx5_aso_ct_state {
 	ASO_CONNTRACK_FREE, /* Inactive, in the free list. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c3e7bf8..988b171 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -45,7 +45,7 @@ enum mlx5_rte_flow_action_type {
 enum {
 	MLX5_INDIRECT_ACTION_TYPE_RSS,
 	MLX5_INDIRECT_ACTION_TYPE_AGE,
-	MLX5_INDIRECT_ACTION_TYPE_AGE,
+	MLX5_INDIRECT_ACTION_TYPE_CT,
 };
 
 /* Matches on selected register. */
@@ -1286,6 +1286,31 @@ mlx5_aso_meter_by_idx(struct mlx5_priv *priv, uint32_t idx)
 	return &pool->mtrs[idx % MLX5_ASO_MTRS_PER_POOL];
 }
 
+/*
+ * Get ASO CT action by index.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] idx
+ *   Index to the ASO CT action.
+ *
+ * @return
+ *   The specified ASO CT action pointer.
+ */
+static inline struct mlx5_aso_ct_action *
+flow_aso_ct_get_by_idx(struct rte_eth_dev *dev, uint32_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_pool *pool;
+
+	idx--;
+	MLX5_ASSERT((idx / MLX5_ASO_CT_ACTIONS_PER_POOL) < mng->n);
+	/* Bit operation AND could be used. */
+	pool = mng->pools[idx / MLX5_ASO_CT_ACTIONS_PER_POOL];
+	return &pool->actions[idx % MLX5_ASO_CT_ACTIONS_PER_POOL];
+}
+
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d810466..51e6ff4 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -11119,6 +11119,260 @@ flow_dv_translate_create_aso_age(struct rte_eth_dev *dev,
 	return age_idx;
 }
 
+/*
+ * Release an ASO CT action.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] idx
+ *   Index of ASO CT action to release.
+ *
+ * @return
+ *   0 when CT action was removed, otherwise the number of references.
+ */
+static inline int
+flow_dv_aso_ct_release(struct rte_eth_dev *dev, uint32_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_action *ct = flow_aso_ct_get_by_idx(dev, idx);
+	uint32_t ret = __atomic_sub_fetch(&ct->refcnt, 1, __ATOMIC_RELAXED);
+
+	if (!ret) {
+		if (ct->dr_action_orig) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+			claim_zero(mlx5_glue->destroy_flow_action
+					(ct->dr_action_orig));
+#endif
+			ct->dr_action_orig = NULL;
+		}
+		if (ct->dr_action_rply) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+			claim_zero(mlx5_glue->destroy_flow_action
+					(ct->dr_action_rply));
+#endif
+			ct->dr_action_rply = NULL;
+		}
+		rte_spinlock_lock(&mng->ct_sl);
+		LIST_INSERT_HEAD(&mng->free_cts, ct, next);
+		rte_spinlock_unlock(&mng->ct_sl);
+	}
+	return ret;
+}
+
+/*
+ * Resize the ASO CT pools array by 64 pools.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value and rte_errno is set.
+ */
+static int
+flow_dv_aso_ct_pools_resize(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	void *old_pools = mng->pools;
+	/* Magic number now, need a macro. */
+	uint32_t resize = mng->n + 64;
+	uint32_t mem_size = sizeof(struct mlx5_aso_ct_pool *) * resize;
+	void *pools = mlx5_malloc(MLX5_MEM_ZERO, mem_size, 0, SOCKET_ID_ANY);
+
+	if (!pools) {
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	/* ASO SQ/QP was already initialized in the startup. */
+	if (old_pools) {
+		/* Realloc could be an alternative choice. */
+		rte_memcpy(pools, old_pools,
+			   mng->n * sizeof(struct mlx5_aso_ct_pool *));
+		mlx5_free(old_pools);
+	}
+	mng->n = resize;
+	mng->pools = pools;
+	return 0;
+}
+
+/*
+ * Create and initialize a new ASO CT pool.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[out] ct_free
+ *   Where to put the pointer of a new CT action.
+ *
+ * @return
+ *   The CT actions pool pointer and @p ct_free is set on success,
+ *   NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_aso_ct_pool *
+flow_dv_ct_pool_create(struct rte_eth_dev *dev,
+		       struct mlx5_aso_ct_action **ct_free)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_pool *pool = NULL;
+	struct mlx5_devx_obj *obj = NULL;
+	uint32_t i;
+	uint32_t log_obj_size = rte_log2_u32(MLX5_ASO_CT_ACTIONS_PER_POOL);
+
+	obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->ctx,
+						priv->sh->pdn, log_obj_size);
+	if (!obj) {
+		rte_errno = ENODATA;
+		DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
+		return NULL;
+	}
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY);
+	if (!pool) {
+		rte_errno = ENOMEM;
+		claim_zero(mlx5_devx_cmd_destroy(obj));
+		return NULL;
+	}
+	pool->devx_obj = obj;
+	pool->index = mng->next;
+	/* Resize pools array if there is no room for the new pool in it. */
+	if (pool->index == mng->n && flow_dv_aso_ct_pools_resize(dev)) {
+		claim_zero(mlx5_devx_cmd_destroy(obj));
+		mlx5_free(pool);
+		return NULL;
+	}
+	mng->pools[pool->index] = pool;
+	mng->next++;
+	/* Assign the first action in the new pool, the rest go to free list. */
+	*ct_free = &pool->actions[0];
+	/* Lock outside, the list operation is safe here. */
+	for (i = 1; i < MLX5_ASO_CT_ACTIONS_PER_POOL; i++) {
+		/* refcnt is 0 when allocating the memory. */
+		pool->actions[i].offset = i;
+		LIST_INSERT_HEAD(&mng->free_cts, &pool->actions[i], next);
+	}
+	return pool;
+}
+
+/*
+ * Allocate a ASO CT action from free list.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   Index to ASO CT action on success, 0 otherwise and rte_errno is set.
+ */
+static uint32_t
+flow_dv_aso_ct_alloc(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_pools_mng *mng = priv->sh->ct_mng;
+	struct mlx5_aso_ct_action *ct = NULL;
+	struct mlx5_aso_ct_pool *pool;
+	uint8_t reg_c;
+	uint32_t ct_idx;
+
+	MLX5_ASSERT(mng);
+	if (!priv->config.devx) {
+		rte_errno = ENOTSUP;
+		return 0;
+	}
+	/* Get a free CT action, if no, a new pool will be created. */
+	rte_spinlock_lock(&mng->ct_sl);
+	ct = LIST_FIRST(&mng->free_cts);
+	if (ct) {
+		LIST_REMOVE(ct, next);
+	} else if (!flow_dv_ct_pool_create(dev, &ct)) {
+		rte_spinlock_unlock(&mng->ct_sl);
+		rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_ACTION,
+				   NULL, "failed to create ASO CT pool");
+		return 0;
+	}
+	rte_spinlock_unlock(&mng->ct_sl);
+	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
+	ct_idx = MLX5_MAKE_CT_IDX(pool->index, ct->offset);
+	/* 0: inactive, 1: created, 2+: used by flows. */
+	__atomic_store_n(&ct->refcnt, 1, __ATOMIC_RELAXED);
+	reg_c = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, error);
+	if (!ct->dr_action_orig) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+		ct->dr_action_orig = mlx5_glue->dv_create_flow_action_aso
+			(priv->sh->rx_domain, pool->devx_obj->obj,
+			 ct->offset,
+			 MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR,
+			 reg_c - REG_C_0);
+#else
+		RTE_SET_USED(reg_c);
+#endif
+		if (!ct->dr_action_orig) {
+			flow_dv_aso_ct_release(dev, ct_idx);
+			rte_flow_error_set(error, rte_errno,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "failed to create ASO CT action");
+			return 0;
+		}
+	}
+	if (!ct->dr_action_rply) {
+#ifdef HAVE_MLX5_DR_ACTION_ASO_CT
+		ct->dr_action_rply = mlx5_glue->dv_create_flow_action_aso
+			(priv->sh->rx_domain, pool->devx_obj->obj,
+			 ct->offset,
+			 MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_RESPONDER,
+			 reg_c - REG_C_0);
+#endif
+		if (!ct->dr_action_rply) {
+			flow_dv_aso_ct_release(dev, ct_idx);
+			rte_flow_error_set(error, rte_errno,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "failed to create ASO CT action");
+			return 0;
+		}
+	}
+	return ct_idx;
+}
+
+/*
+ * Create a conntrack object with context and actions by using ASO mechanism.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] pro
+ *   Pointer to conntrack information profile.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   Index to conntrack object on success, 0 otherwise.
+ */
+static uint32_t
+flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
+				   const struct rte_flow_action_conntrack *pro,
+				   struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+	struct mlx5_aso_ct_action *ct;
+	uint32_t idx;
+
+	if (!sh->ct_aso_en)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "Connection is not supported");
+	idx = flow_dv_aso_ct_alloc(dev, error);
+	if (!idx)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "Failed to allocate CT object");
+	ct = flow_aso_ct_get_by_idx(dev, idx);
+	if (mlx5_aso_ct_update_by_wqe(sh, ct, pro))
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "Failed to update CT");
+	return idx;
+}
+
 /**
  * Fill the flow with DV spec, lock free
  * (mutex should be acquired by caller).
@@ -13309,6 +13563,7 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 {
 	uint32_t idx = 0;
 	uint32_t ret = 0;
+	struct mlx5_priv *priv = dev->data->dev_private;
 
 	switch (action->type) {
 	case RTE_FLOW_ACTION_TYPE_RSS:
@@ -13329,6 +13584,12 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 			(void *)(uintptr_t)idx;
 		}
 		break;
+	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+		ret = flow_dv_translate_create_conntrack(dev, action->conf,
+							 err);
+		idx = (MLX5_INDIRECT_ACTION_TYPE_CT <<
+		       MLX5_INDIRECT_ACTION_TYPE_OFFSET) | ret;
+		break;
 	default:
 		rte_flow_error_set(err, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
 				   NULL, "action type not supported");
--
2.5.5