From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bing Zhao
Date: Tue, 27 Apr 2021 18:38:03 +0300
Message-ID: <20210427153811.11554-10-bingz@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210427153811.11554-1-bingz@nvidia.com>
References: <20210427153811.11554-1-bingz@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 09/17] net/mlx5: add ASO CT query implementation
List-Id: DPDK patches and discussions

After a connection tracking context is created and referenced by flows, the
hardware updates it automatically for each packet that passes CT validation;
e.g., the ACK, SEQ, window, and state fields can change with traffic in
either direction. To query the current contents of such a context, a WQE is
posted to the SQ together with a return buffer. The hardware fills the
buffer with the raw context data, and the driver then populates the profile
fields from it. Note that the context may still be updated by traffic while
the query command is executing.
The result of the query command may not be the latest one.

Signed-off-by: Bing Zhao
---
 drivers/net/mlx5/mlx5.h          |   9 +-
 drivers/net/mlx5/mlx5_flow_aso.c | 205 +++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow_dv.c  |  10 ++
 3 files changed, 223 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 982c0c2..f999828 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -498,7 +498,10 @@ struct mlx5_aso_sq_elem {
 			uint16_t burst_size;
 		};
 		struct mlx5_aso_mtr *mtr;
-		struct mlx5_aso_ct_action *ct;
+		struct {
+			struct mlx5_aso_ct_action *ct;
+			char *query_data;
+		};
 	};
 };
 
@@ -1710,5 +1713,9 @@ int mlx5_aso_ct_update_by_wqe(struct mlx5_dev_ctx_shared *sh,
 			      const struct rte_flow_action_conntrack *profile);
 int mlx5_aso_ct_wait_ready(struct mlx5_dev_ctx_shared *sh,
 			   struct mlx5_aso_ct_action *ct);
+int mlx5_aso_ct_query_by_wqe(struct mlx5_dev_ctx_shared *sh,
+			     struct mlx5_aso_ct_action *ct,
+			     struct rte_flow_action_conntrack *profile);
+
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 6a13b98..12e8dc7 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -943,6 +943,7 @@ mlx5_aso_ct_sq_enqueue_single(struct mlx5_aso_ct_pools_mng *mng,
 	/* Fill next WQE. */
 	__atomic_store_n(&ct->state, ASO_CONNTRACK_WAIT, __ATOMIC_RELAXED);
 	sq->elts[sq->head & mask].ct = ct;
+	sq->elts[sq->head & mask].query_data = NULL;
 	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
 	/* Each WQE will have a single CT object. */
 	wqe->general_cseg.misc = rte_cpu_to_be_32(pool->devx_obj->id +
@@ -1059,10 +1060,92 @@ mlx5_aso_ct_status_update(struct mlx5_aso_sq *sq, uint16_t num)
 		MLX5_ASSERT(ct);
 		__atomic_store_n(&ct->state, ASO_CONNTRACK_READY,
 				 __ATOMIC_RELAXED);
+		if (sq->elts[idx].query_data)
+			rte_memcpy(sq->elts[idx].query_data,
+				   (char *)((uintptr_t)sq->mr.buf + idx * 64),
+				   64);
 	}
 }
 
 /*
+ * Post a WQE to the ASO CT SQ to query the current context.
+ *
+ * @param[in] mng
+ *   Pointer to the CT pools management structure.
+ * @param[in] ct
+ *   Pointer to the generic CT structure related to the context.
+ * @param[in] data
+ *   Pointer to the data area to be filled.
+ *
+ * @return
+ *   1 on success (WQE number), 0 when the SQ is busy, -1 on failure.
+ */
+static int
+mlx5_aso_ct_sq_query_single(struct mlx5_aso_ct_pools_mng *mng,
+			    struct mlx5_aso_ct_action *ct, char *data)
+{
+	volatile struct mlx5_aso_wqe *wqe = NULL;
+	struct mlx5_aso_sq *sq = &mng->aso_sq;
+	uint16_t size = 1 << sq->log_desc_n;
+	uint16_t mask = size - 1;
+	uint16_t res;
+	uint16_t wqe_idx;
+	struct mlx5_aso_ct_pool *pool;
+	uint8_t state = __atomic_load_n(&ct->state, __ATOMIC_RELAXED);
+
+	if (state == ASO_CONNTRACK_FREE) {
+		DRV_LOG(ERR, "Fail: No context to query");
+		return -1;
+	} else if (state == ASO_CONNTRACK_WAIT) {
+		return 0;
+	}
+	rte_spinlock_lock(&sq->sqsl);
+	res = size - (uint16_t)(sq->head - sq->tail);
+	if (unlikely(!res)) {
+		rte_spinlock_unlock(&sq->sqsl);
+		DRV_LOG(ERR, "Fail: SQ is full and no free WQE to send");
+		return 0;
+	}
+	__atomic_store_n(&ct->state, ASO_CONNTRACK_QUERY, __ATOMIC_RELAXED);
+	wqe = &sq->sq_obj.aso_wqes[sq->head & mask];
+	/* Confirm the location and address of the prefetch instruction. */
+	rte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);
+	/* Fill next WQE. */
+	wqe_idx = sq->head & mask;
+	sq->elts[wqe_idx].ct = ct;
+	sq->elts[wqe_idx].query_data = data;
+	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
+	/* Each WQE will have a single CT object. */
+	wqe->general_cseg.misc = rte_cpu_to_be_32(pool->devx_obj->id +
+						  ct->offset);
+	wqe->general_cseg.opcode = rte_cpu_to_be_32(MLX5_OPCODE_ACCESS_ASO |
+			(ASO_OPC_MOD_CONNECTION_TRACKING <<
+			 WQE_CSEG_OPC_MOD_OFFSET) |
+			sq->pi << WQE_CSEG_WQE_INDEX_OFFSET);
+	/*
+	 * No write request is required.
+	 * ASO_OPER_LOGICAL_AND and ASO_OP_ALWAYS_FALSE are both 0.
+	 * Set to 0 directly to avoid an endian swap. (A modify must rewrite.)
+	 * "data_mask" is ignored.
+	 * Buffer address was already filled during initialization.
+	 */
+	wqe->aso_cseg.operand_masks = 0;
+	sq->head++;
+	/*
+	 * Each WQE contains 2 WQEBB's, even though
+	 * the data segment is not used in this case.
+	 */
+	sq->pi += 2;
+	rte_io_wmb();
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
+	rte_wmb();
+	*sq->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH. */
+	rte_wmb();
+	rte_spinlock_unlock(&sq->sqsl);
+	return 1;
+}
+
+/*
  * Handle completions from WQEs sent to ASO CT.
  *
  * @param[in] mng
@@ -1189,3 +1272,125 @@ mlx5_aso_ct_wait_ready(struct mlx5_dev_ctx_shared *sh,
 			ct->offset, pool->index);
 	return -1;
 }
+
+/*
+ * Convert the hardware conntrack data format into the profile.
+ *
+ * @param[in] profile
+ *   Pointer to the conntrack profile to be filled after the query.
+ * @param[in] wdata
+ *   Pointer to the data fetched from hardware.
+ */
+static inline void
+mlx5_aso_ct_obj_analyze(struct rte_flow_action_conntrack *profile,
+			char *wdata)
+{
+	void *o_dir = MLX5_ADDR_OF(conn_track_aso, wdata, original_dir);
+	void *r_dir = MLX5_ADDR_OF(conn_track_aso, wdata, reply_dir);
+
+	/* MLX5_GET16 should be taken into consideration. */
+	profile->state = (enum rte_flow_conntrack_state)
+			 MLX5_GET(conn_track_aso, wdata, state);
+	profile->enable = !MLX5_GET(conn_track_aso, wdata, freeze_track);
+	profile->selective_ack = MLX5_GET(conn_track_aso, wdata,
+					  sack_permitted);
+	profile->live_connection = MLX5_GET(conn_track_aso, wdata,
+					    connection_assured);
+	profile->challenge_ack_passed = MLX5_GET(conn_track_aso, wdata,
+						 challenged_acked);
+	profile->max_ack_window = MLX5_GET(conn_track_aso, wdata,
+					   max_ack_window);
+	profile->retransmission_limit = MLX5_GET(conn_track_aso, wdata,
+						 retranmission_limit);
+	profile->last_window = MLX5_GET(conn_track_aso, wdata, last_win);
+	profile->last_direction = MLX5_GET(conn_track_aso, wdata, last_dir);
+	profile->last_index = (enum rte_flow_conntrack_tcp_last_index)
+			      MLX5_GET(conn_track_aso, wdata, last_index);
+	profile->last_seq = MLX5_GET(conn_track_aso, wdata, last_seq);
+	profile->last_ack = MLX5_GET(conn_track_aso, wdata, last_ack);
+	profile->last_end = MLX5_GET(conn_track_aso, wdata, last_end);
+	profile->liberal_mode = MLX5_GET(conn_track_aso, wdata,
+				reply_dircetion_tcp_liberal_enabled) |
+				MLX5_GET(conn_track_aso, wdata,
+				original_dircetion_tcp_liberal_enabled);
+	/* No liberal in the RTE structure profile. */
+	profile->reply_dir.scale = MLX5_GET(conn_track_aso, wdata,
+					    reply_dircetion_tcp_scale);
+	profile->reply_dir.close_initiated = MLX5_GET(conn_track_aso, wdata,
+					reply_dircetion_tcp_close_initiated);
+	profile->reply_dir.data_unacked = MLX5_GET(conn_track_aso, wdata,
+					reply_dircetion_tcp_data_unacked);
+	profile->reply_dir.last_ack_seen = MLX5_GET(conn_track_aso, wdata,
+					reply_dircetion_tcp_max_ack);
+	profile->reply_dir.sent_end = MLX5_GET(tcp_window_params,
+					       r_dir, sent_end);
+	profile->reply_dir.reply_end = MLX5_GET(tcp_window_params,
+						r_dir, reply_end);
+	profile->reply_dir.max_win = MLX5_GET(tcp_window_params,
+					      r_dir, max_win);
+	profile->reply_dir.max_ack = MLX5_GET(tcp_window_params,
+					      r_dir, max_ack);
+	profile->original_dir.scale = MLX5_GET(conn_track_aso, wdata,
+					       original_dircetion_tcp_scale);
+	profile->original_dir.close_initiated = MLX5_GET(conn_track_aso, wdata,
+					original_dircetion_tcp_close_initiated);
+	profile->original_dir.data_unacked = MLX5_GET(conn_track_aso, wdata,
+					original_dircetion_tcp_data_unacked);
+	profile->original_dir.last_ack_seen = MLX5_GET(conn_track_aso, wdata,
+					original_dircetion_tcp_max_ack);
+	profile->original_dir.sent_end = MLX5_GET(tcp_window_params,
+						  o_dir, sent_end);
+	profile->original_dir.reply_end = MLX5_GET(tcp_window_params,
+						   o_dir, reply_end);
+	profile->original_dir.max_win = MLX5_GET(tcp_window_params,
+						 o_dir, max_win);
+	profile->original_dir.max_ack = MLX5_GET(tcp_window_params,
+						 o_dir, max_ack);
+}
+
+/*
+ * Query the connection tracking information by sending a WQE.
+ *
+ * @param[in] sh
+ *   Pointer to the shared device context.
+ * @param[in] ct
+ *   Pointer to the connection tracking offload object.
+ * @param[out] profile
+ *   Pointer to the connection tracking TCP information.
+ *
+ * @return
+ *   0 on success, -1 on failure.
+ */
+int
+mlx5_aso_ct_query_by_wqe(struct mlx5_dev_ctx_shared *sh,
+			 struct mlx5_aso_ct_action *ct,
+			 struct rte_flow_action_conntrack *profile)
+{
+	struct mlx5_aso_ct_pools_mng *mng = sh->ct_mng;
+	uint32_t poll_wqe_times = MLX5_CT_POLL_WQE_CQE_TIMES;
+	struct mlx5_aso_ct_pool *pool;
+	char out_data[64 * 2];
+	int ret;
+
+	/* Assertion here. */
+	do {
+		mlx5_aso_ct_completion_handle(mng);
+		ret = mlx5_aso_ct_sq_query_single(mng, ct, out_data);
+		if (ret < 0)
+			return ret;
+		else if (ret > 0)
+			goto data_handle;
+		/* Waiting for WQE resource or state. */
+		else
+			rte_delay_us_sleep(10u);
+	} while (--poll_wqe_times);
+	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
+	DRV_LOG(ERR, "Fail to send WQE for ASO CT %d in pool %d",
+		ct->offset, pool->index);
+	return -1;
+data_handle:
+	ret = mlx5_aso_ct_wait_ready(sh, ct);
+	if (!ret)
+		mlx5_aso_ct_obj_analyze(profile, out_data);
+	return ret;
+}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 51e6ff4..9093142 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -13765,6 +13765,8 @@ flow_dv_action_query(struct rte_eth_dev *dev,
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_ct_action *ct;
 
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -13778,6 +13780,14 @@ flow_dv_action_query(struct rte_eth_dev *dev,
 		resp->sec_since_last_hit = __atomic_load_n
 				(&age_param->sec_since_last_hit, __ATOMIC_RELAXED);
 		return 0;
+	case MLX5_INDIRECT_ACTION_TYPE_CT:
+		ct = flow_aso_ct_get_by_idx(dev, idx);
+		if (!ct->refcnt)
+			return rte_flow_error_set(error, ENOMEM,
+					RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					NULL,
+					"CT object is inactive");
+		return mlx5_aso_ct_query_by_wqe(priv->sh, ct, data);
 	default:
 		return rte_flow_error_set(error, ENOTSUP,
 			RTE_FLOW_ERROR_TYPE_ACTION,
-- 
2.5.5
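[The patch's mlx5_aso_ct_obj_analyze() relies on MLX5_GET/MLX5_ADDR_OF to pull bit-fields out of the big-endian 64-byte context the hardware returns. As a rough, self-contained illustration of what such an accessor does, and not the real MLX5_GET (which works from generated PRM layout macros), the sketch below extracts a field of a given bit offset and width from a buffer of big-endian 32-bit words; hw_get_bits is an invented name, and it assumes fields never straddle a dword boundary:]

```c
#include <stdint.h>

/*
 * Illustrative accessor in the spirit of MLX5_GET: the hardware lays the
 * context out as consecutive big-endian 32-bit dwords; a field is `bits`
 * wide and starts `bit_off` bits into the structure, counted from the
 * most significant bit of dword 0. Assumes bits <= 32 and no dword
 * straddling, which matches how PRM fields are defined.
 */
static uint32_t
hw_get_bits(const uint8_t *buf, unsigned int bit_off, unsigned int bits)
{
	unsigned int dword = bit_off / 32;
	/* Distance from the field's LSB to bit 0 of the dword. */
	unsigned int shift = 32 - (bit_off % 32) - bits;
	const uint8_t *p = buf + dword * 4;
	/* Rebuild the host-order value from the big-endian bytes. */
	uint32_t host = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
			((uint32_t)p[2] << 8) | (uint32_t)p[3];
	uint32_t mask = (bits == 32) ? 0xffffffffu : ((1u << bits) - 1u);

	return (host >> shift) & mask;
}
```

With a buffer whose first dword is 0x12 0x34 0x56 0x78, the top nibble reads as 0x1 and the lower half-dword as 0x5678, mirroring how the driver decodes state flags and TCP window fields from the returned context.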
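[The retry loop in mlx5_aso_ct_query_by_wqe() — drain completions, try to post the WQE, sleep briefly when the SQ is busy, give up after a bounded number of attempts — is a general budgeted-retry pattern. The stand-alone sketch below shows just that control flow with invented names (submit_with_budget, busy_then_ok); it is not driver code, and the driver's sleep (rte_delay_us_sleep(10u)) is elided:]

```c
#include <stdint.h>

/*
 * Submit primitive contract, mirroring mlx5_aso_ct_sq_query_single():
 * 1 = posted, 0 = busy (retry later), -1 = hard error.
 */
typedef int (*submit_fn)(void *ctx);

/* Keep retrying until posted, a hard error occurs, or the budget runs out. */
static int
submit_with_budget(submit_fn submit, void *ctx, uint32_t budget)
{
	do {
		int ret = submit(ctx);

		if (ret != 0)
			return ret; /* posted (1) or hard error (-1) */
		/* The driver sleeps here before the next attempt. */
	} while (--budget);
	return -1; /* budget exhausted, treated as failure */
}

/* Toy submit callback: reports "busy" until a countdown reaches zero. */
static int
busy_then_ok(void *ctx)
{
	uint32_t *remaining = ctx;

	if (*remaining > 0) {
		(*remaining)--;
		return 0;
	}
	return 1;
}
```

A submit that is busy three times succeeds under a budget of ten attempts, while a budget smaller than the busy streak fails with -1, exactly like the MLX5_CT_POLL_WQE_CQE_TIMES bound in the patch.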