From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 2E57D43C0F;
	Fri,  1 Mar 2024 20:18:14 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id CB71F434CE;
	Fri,  1 Mar 2024 20:16:09 +0100 (CET)
Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com
 [67.231.148.174])
 by mails.dpdk.org (Postfix) with ESMTP id 05723433ED
 for <dev@dpdk.org>; Fri,  1 Mar 2024 20:16:07 +0100 (CET)
Received: from pps.filterd (m0045849.ppops.net [127.0.0.1])
 by mx0a-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id
 4219mhE8013738 for <dev@dpdk.org>; Fri, 1 Mar 2024 11:16:07 -0800
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=
 from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-type; s=pfpt0220; bh=ZLjvDMUloKZ5wIWGAuoa8
 5mgamFeg8vLrhhz8Tcq4VI=; b=HBMQ3bdUvYT6fRMgT0N5pY/mwV2cuxW7dvauO
 aLhDlT+H/jHAMuUzFAzQ0bGmsuYTv/yoqqnzfitM+mmhcbpXBH3f+2ci28H5NQKy
 EJymeMPTOvY/6SZNlYGojr7M1ushIaqLQDm7HY7v2D9nVmbFCRvEVYZgx5Af5jpL
 QJ7vIiY9GYALdL3PzIfm4m5y++MBaF/9G6BSQg45n4YmIJkCYLgV5Jh4uLjuBKF0
 OPP6Pjd2pGOTlaWaAkwt8zXu29sL/v0Z4giIW8rKlpl6/5EmPnbOT6rtOWrqupP3
 GmCLiYOZkHeyBF7AwOKF+XPo67zhGu9QpjLvs2qbFU5jDk4UA==
Received: from dc5-exch05.marvell.com ([199.233.59.128])
 by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3wkcq59skn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT)
 for <dev@dpdk.org>; Fri, 01 Mar 2024 11:16:06 -0800 (PST)
Received: from DC5-EXCH05.marvell.com (10.69.176.209) by
 DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.2.1258.12; Fri, 1 Mar 2024 11:16:06 -0800
Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com
 (10.69.176.209) with Microsoft SMTP Server id 15.2.1258.12 via Frontend
 Transport; Fri, 1 Mar 2024 11:16:05 -0800
Received: from localhost.localdomain (unknown [10.29.52.211])
 by maili.marvell.com (Postfix) with ESMTP id B701A3F71FE;
 Fri,  1 Mar 2024 11:16:03 -0800 (PST)
From: Harman Kalra <hkalra@marvell.com>
To: Nithin Dabilpuram <ndabilpuram@marvell.com>, Kiran Kumar K
 <kirankumark@marvell.com>, Sunil Kumar Kori <skori@marvell.com>, Satha Rao
 <skoteshwar@marvell.com>, Harman Kalra <hkalra@marvell.com>
CC: <dev@dpdk.org>
Subject: [PATCH v5 23/23] net/cnxk: other flow operations
Date: Sat, 2 Mar 2024 00:44:50 +0530
Message-ID: <20240301191451.57168-24-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20240301191451.57168-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20240301191451.57168-1-hkalra@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Proofpoint-GUID: ekI2UI02WW4cbJToRBZCj5OciZzLzk7Z
X-Proofpoint-ORIG-GUID: ekI2UI02WW4cbJToRBZCj5OciZzLzk7Z
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.272,Aquarius:18.0.1011,Hydra:6.0.619,FMLib:17.11.176.26
 definitions=2024-03-01_20,2024-03-01_03,2023-05-22_02
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Implement the remaining flow operations for representor ports: validate,
destroy, query, flush and dump. Flow isolate is not supported and returns
-ENOTSUP.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
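An illustrative testpmd session exercising the new callbacks (port 1 is
assumed to be a representor port; commands per the testpmd flow syntax):

    testpmd> flow validate 1 ingress pattern eth / end actions count / end
    testpmd> flow create 1 ingress pattern eth / end actions count / end
    testpmd> flow query 1 0 count
    testpmd> flow dump 1 all /tmp/rep_flows.txt
    testpmd> flow flush 1
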
 doc/guides/rel_notes/release_24_03.rst |   1 +
 drivers/net/cnxk/cnxk_rep_flow.c       | 420 +++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep_msg.h        |  33 ++
 3 files changed, 454 insertions(+)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 8d440d56a5..c722001cef 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -111,6 +111,7 @@ New Features
   * Added support for ``RTE_FLOW_ITEM_TYPE_PPPOES`` flow item.
   * Added support for ``RTE_FLOW_ACTION_TYPE_SAMPLE`` flow item.
   * Added support for Rx inject.
+  * Added support for port representors.
 
 * **Updated Marvell OCTEON EP driver.**
 
diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c
index 2613be5b9e..d26f5aa12c 100644
--- a/drivers/net/cnxk/cnxk_rep_flow.c
+++ b/drivers/net/cnxk/cnxk_rep_flow.c
@@ -267,6 +267,226 @@ populate_action_data(void *buffer, uint32_t *length, const struct rte_flow_actio
 	*length = len;
 }
 
+static int
+process_flow_destroy(struct cnxk_rep_dev *rep_dev, void *flow, cnxk_rep_msg_ack_data_t *adata)
+{
+	cnxk_rep_msg_flow_destroy_meta_t msg_fd_meta;
+	uint32_t len = 0; int rc;
+	void *buffer;
+	size_t size;
+
+	/* If representor not representing any active VF, return 0 */
+	if (!rep_dev->is_vf_active)
+		return 0;
+
+	size = MAX_BUFFER_SIZE;
+	buffer = plt_zmalloc(size, 0);
+	if (!buffer) {
+		plt_err("Failed to allocate mem");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
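+	/* Frame the request: header, command meta, end-of-message marker */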
+	cnxk_rep_msg_populate_header(buffer, &len);
+
+	msg_fd_meta.portid = rep_dev->rep_id;
+	msg_fd_meta.flow = (uint64_t)flow;
+	plt_rep_dbg("Flow Destroy: flow 0x%" PRIx64 ", portid %d", msg_fd_meta.flow,
+		    msg_fd_meta.portid);
+	cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fd_meta,
+					   sizeof(cnxk_rep_msg_flow_destroy_meta_t),
+					   CNXK_REP_MSG_FLOW_DESTROY);
+	cnxk_rep_msg_populate_msg_end(buffer, &len);
+
+	rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata);
+	if (rc) {
+		plt_err("Failed to process the message, err %d", rc);
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
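+/* The dump request leaves its output in DEFAULT_DUMP_FILE_NAME; copy it
+ * into the FILE supplied by the application, then remove the default file.
+ */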
+static int
+copy_flow_dump_file(FILE *target)
+{
+	FILE *source = NULL;
+	long pos;
+	int ch;
+
+	source = fopen(DEFAULT_DUMP_FILE_NAME, "r");
+	if (source == NULL) {
+		plt_err("Failed to read default dump file: %s, err %d", DEFAULT_DUMP_FILE_NAME,
+			errno);
+		return -errno;
+	}
+
+	fseek(source, 0L, SEEK_END);
+	pos = ftell(source);
+	fseek(source, 0L, SEEK_SET);
+	while (pos--) {
+		ch = fgetc(source);
+		fputc(ch, target);
+	}
+
+	fclose(source);
+
+	/* Remove the default file after reading */
+	remove(DEFAULT_DUMP_FILE_NAME);
+
+	return 0;
+}
+
+static int
+process_flow_dump(struct cnxk_rep_dev *rep_dev, struct rte_flow *flow, FILE *file,
+		  cnxk_rep_msg_ack_data_t *adata)
+{
+	cnxk_rep_msg_flow_dump_meta_t msg_fp_meta;
+	uint32_t len = 0, rc;
+	uint32_t len = 0; int rc;
+	size_t size;
+
+	size = MAX_BUFFER_SIZE;
+	buffer = plt_zmalloc(size, 0);
+	if (!buffer) {
+		plt_err("Failed to allocate mem");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	cnxk_rep_msg_populate_header(buffer, &len);
+
+	msg_fp_meta.portid = rep_dev->rep_id;
+	msg_fp_meta.flow = (uint64_t)flow;
+	msg_fp_meta.is_stdout = (file == stdout) ? 1 : 0;
+
+	plt_rep_dbg("Flow Dump: flow 0x%" PRIx64 ", portid %d, stdout %d", msg_fp_meta.flow,
+		    msg_fp_meta.portid, msg_fp_meta.is_stdout);
+	cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fp_meta,
+					   sizeof(cnxk_rep_msg_flow_dump_meta_t),
+					   CNXK_REP_MSG_FLOW_DUMP);
+	cnxk_rep_msg_populate_msg_end(buffer, &len);
+
+	rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata);
+	if (rc) {
+		plt_err("Failed to process the message, err %d", rc);
+		goto fail;
+	}
+
+	/* Copy contents from default file to user file */
+	if (file != stdout)
+		copy_flow_dump_file(file);
+
+	return 0;
+fail:
+	return rc;
+}
+
+static int
+process_flow_flush(struct cnxk_rep_dev *rep_dev, cnxk_rep_msg_ack_data_t *adata)
+{
+	cnxk_rep_msg_flow_flush_meta_t msg_ff_meta;
+	uint32_t len = 0; int rc;
+	void *buffer;
+	size_t size;
+
+	size = MAX_BUFFER_SIZE;
+	buffer = plt_zmalloc(size, 0);
+	if (!buffer) {
+		plt_err("Failed to allocate mem");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	cnxk_rep_msg_populate_header(buffer, &len);
+
+	msg_ff_meta.portid = rep_dev->rep_id;
+	plt_rep_dbg("Flow Flush: portid %d", msg_ff_meta.portid);
+	cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_ff_meta,
+					   sizeof(cnxk_rep_msg_flow_flush_meta_t),
+					   CNXK_REP_MSG_FLOW_FLUSH);
+	cnxk_rep_msg_populate_msg_end(buffer, &len);
+
+	rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata);
+	if (rc) {
+		plt_err("Failed to process the message, err %d", rc);
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
+static int
+process_flow_query(struct cnxk_rep_dev *rep_dev, struct rte_flow *flow,
+		   const struct rte_flow_action *action, void *data, cnxk_rep_msg_ack_data_t *adata)
+{
+	cnxk_rep_msg_flow_query_meta_t *msg_fq_meta;
+	struct rte_flow_query_count *query = data;
+	uint32_t len = 0, sz, total_sz; int rc;
+	uint64_t action_data[BUFSIZ];
+	void *buffer;
+	size_t size;
+
+	size = MAX_BUFFER_SIZE;
+	buffer = plt_zmalloc(size, 0);
+	if (!buffer) {
+		plt_err("Failed to allocate mem");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	cnxk_rep_msg_populate_header(buffer, &len);
+
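+	/* Serialize the query action; the meta message is sized to carry it inline */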
+	memset(action_data, 0, BUFSIZ * sizeof(uint64_t));
+	sz = prepare_action_data(action, 1, action_data);
+	total_sz = sz + sizeof(cnxk_rep_msg_flow_query_meta_t);
+
+	msg_fq_meta = plt_zmalloc(total_sz, 0);
+	if (!msg_fq_meta) {
+		plt_err("Failed to allocate memory");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	msg_fq_meta->portid = rep_dev->rep_id;
+	msg_fq_meta->reset = query->reset;
+	msg_fq_meta->flow = (uint64_t)flow;
+	/* Populate the action data */
+	rte_memcpy(msg_fq_meta->action_data, action_data, sz);
+	msg_fq_meta->action_data_sz = sz;
+
+	plt_rep_dbg("Flow query: flow 0x%" PRIx64 ", portid %d, action type %d, total sz %d "
+		    "action sz %d", msg_fq_meta->flow, msg_fq_meta->portid, action->type, total_sz,
+		    sz);
+	cnxk_rep_msg_populate_command_meta(buffer, &len, msg_fq_meta, total_sz,
+					   CNXK_REP_MSG_FLOW_QUERY);
+	cnxk_rep_msg_populate_msg_end(buffer, &len);
+
+	rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata);
+	if (rc) {
+		plt_err("Failed to process the message, err %d", rc);
+		goto free;
+	}
+
+	rte_free(msg_fq_meta);
+
+	return 0;
+
+free:
+	rte_free(msg_fq_meta);
+fail:
+	return rc;
+}
+
 static int
 process_flow_rule(struct cnxk_rep_dev *rep_dev, const struct rte_flow_attr *attr,
 		  const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
@@ -396,6 +616,206 @@ cnxk_rep_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *at
 	return NULL;
 }
 
+static int
+cnxk_rep_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
+		       const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+		       struct rte_flow_error *error)
+{
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+	cnxk_rep_msg_ack_data_t adata;
+	int rc = 0;
+
+	/* If representor not representing any active VF, return 0 */
+	if (!rep_dev->is_vf_active) {
+		rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Represented VF not active yet");
+		return 0;
+	}
+
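+	/* A native representee is handled directly via the common flow layer */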
+	if (rep_dev->native_repte)
+		return cnxk_flow_validate_common(eth_dev, attr, pattern, actions, error, true);
+
+	rc = process_flow_rule(rep_dev, attr, pattern, actions, &adata, CNXK_REP_MSG_FLOW_VALIDATE);
+	if (rc || adata.u.sval < 0) {
+		if (adata.u.sval < 0) {
+			rc = (int)adata.u.sval;
+			rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "Failed to validate flow");
+			goto fail;
+		}
+		rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to validate flow");
+		goto fail;
+	}
+
+	plt_rep_dbg("Flow %p validated successfully", adata.u.data);
+
+fail:
+	return rc;
+}
+
+static int
+cnxk_rep_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow,
+		      struct rte_flow_error *error)
+{
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+	cnxk_rep_msg_ack_data_t adata;
+	int rc;
+
+	/* If representor not representing any active VF, return 0 */
+	if (!rep_dev->is_vf_active) {
+		rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Represented VF not active yet");
+		return 0;
+	}
+
+	if (rep_dev->native_repte)
+		return cnxk_flow_destroy_common(eth_dev, (struct roc_npc_flow *)flow, error, true);
+
+	rc = process_flow_destroy(rep_dev, flow, &adata);
+	if (rc || adata.u.sval < 0) {
+		if (adata.u.sval < 0)
+			rc = adata.u.sval;
+
+		rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to destroy flow");
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
+static int
+cnxk_rep_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow,
+		    const struct rte_flow_action *action, void *data, struct rte_flow_error *error)
+{
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+	cnxk_rep_msg_ack_data_t adata;
+	int rc;
+
+	/* If representor not representing any active VF, return 0 */
+	if (!rep_dev->is_vf_active) {
+		rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Represented VF not active yet");
+		return 0;
+	}
+
+	if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
+		rc = -ENOTSUP;
+		rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Only COUNT is supported in query");
+		goto fail;
+	}
+
+	if (rep_dev->native_repte)
+		return cnxk_flow_query_common(eth_dev, flow, action, data, error, true);
+
+	rc = process_flow_query(rep_dev, flow, action, data, &adata);
+	if (rc || adata.u.sval < 0) {
+		if (adata.u.sval < 0)
+			rc = adata.u.sval;
+
+		rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to query the flow");
+		goto fail;
+	}
+
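+	/* Copy the query response (counters) returned in the ack back to the caller */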
+	rte_memcpy(data, adata.u.data, adata.size);
+
+	return 0;
+fail:
+	return rc;
+}
+
+static int
+cnxk_rep_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error)
+{
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+	cnxk_rep_msg_ack_data_t adata;
+	int rc;
+
+	/* If representor not representing any active VF, return 0 */
+	if (!rep_dev->is_vf_active) {
+		rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Represented VF not active yet");
+		return 0;
+	}
+
+	if (rep_dev->native_repte)
+		return cnxk_flow_flush_common(eth_dev, error, true);
+
+	rc = process_flow_flush(rep_dev, &adata);
+	if (rc || adata.u.sval < 0) {
+		if (adata.u.sval < 0)
+			rc = adata.u.sval;
+
+		rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to flush flows");
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
+static int
+cnxk_rep_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file,
+		       struct rte_flow_error *error)
+{
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+	cnxk_rep_msg_ack_data_t adata;
+	int rc;
+
+	/* If representor not representing any active VF, return 0 */
+	if (!rep_dev->is_vf_active) {
+		rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Represented VF not active yet");
+		return 0;
+	}
+
+	if (rep_dev->native_repte)
+		return cnxk_flow_dev_dump_common(eth_dev, flow, file, error, true);
+
+	rc = process_flow_dump(rep_dev, flow, file, &adata);
+	if (rc || adata.u.sval < 0) {
+		if (adata.u.sval < 0)
+			rc = adata.u.sval;
+
+		rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to dump flow");
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
+static int
+cnxk_rep_flow_isolate(struct rte_eth_dev *eth_dev __rte_unused, int enable __rte_unused,
+		      struct rte_flow_error *error)
+{
+	/* If isolation were to be supported, the default MCAM entry
+	 * installed for this port would need to be removed.
+	 */
+
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "Flow isolation not supported");
+
+	return -rte_errno;
+}
+
 struct rte_flow_ops cnxk_rep_flow_ops = {
+	.validate = cnxk_rep_flow_validate,
 	.create = cnxk_rep_flow_create,
+	.destroy = cnxk_rep_flow_destroy,
+	.query = cnxk_rep_flow_query,
+	.flush = cnxk_rep_flow_flush,
+	.isolate = cnxk_rep_flow_isolate,
+	.dev_dump = cnxk_rep_flow_dev_dump,
 };
diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h
index d27a234e48..bfd9ce9c7b 100644
--- a/drivers/net/cnxk/cnxk_rep_msg.h
+++ b/drivers/net/cnxk/cnxk_rep_msg.h
@@ -29,6 +29,11 @@ typedef enum CNXK_REP_MSG {
 	CNXK_REP_MSG_ETH_STATS_CLEAR,
 	/* Flow operation msgs */
 	CNXK_REP_MSG_FLOW_CREATE,
+	CNXK_REP_MSG_FLOW_DESTROY,
+	CNXK_REP_MSG_FLOW_VALIDATE,
+	CNXK_REP_MSG_FLOW_FLUSH,
+	CNXK_REP_MSG_FLOW_DUMP,
+	CNXK_REP_MSG_FLOW_QUERY,
 	/* End of messaging sequence */
 	CNXK_REP_MSG_END,
 } cnxk_rep_msg_t;
@@ -109,6 +114,34 @@ typedef struct cnxk_rep_msg_flow_create_meta {
 	uint16_t nb_action;
 } __rte_packed cnxk_rep_msg_flow_create_meta_t;
 
+/* Flow destroy msg meta */
+typedef struct cnxk_rep_msg_flow_destroy_meta {
+	uint64_t flow;
+	uint16_t portid;
+} __rte_packed cnxk_rep_msg_flow_destroy_meta_t;
+
+/* Flow flush msg meta */
+typedef struct cnxk_rep_msg_flow_flush_meta {
+	uint16_t portid;
+} __rte_packed cnxk_rep_msg_flow_flush_meta_t;
+
+/* Flow dump msg meta */
+typedef struct cnxk_rep_msg_flow_dump_meta {
+	uint64_t flow;
+	uint16_t portid;
+	uint8_t is_stdout;
+} __rte_packed cnxk_rep_msg_flow_dump_meta_t;
+
+/* Flow query msg meta */
+typedef struct cnxk_rep_msg_flow_query_meta {
+	uint64_t flow;
+	uint16_t portid;
+	uint8_t reset;
+	uint32_t action_data_sz;
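+	/* Serialized flow action data, action_data_sz bytes */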
+	uint8_t action_data[];
+} __rte_packed cnxk_rep_msg_flow_query_meta_t;
+
 /* Type pattern meta */
 typedef struct cnxk_pattern_hdr {
 	uint16_t type;
-- 
2.18.0