From mboxrd@z Thu Jan 1 00:00:00 1970
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra
CC:
Subject: [PATCH v2 22/24] net/cnxk: flow create on representor ports
Date: Tue, 19 Dec 2023 23:10:01 +0530
Message-ID: <20231219174003.72901-23-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20231219174003.72901-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20231219174003.72901-1-hkalra@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain

- Implementing base infrastructure for handling flow operations performed on
  representor ports, where these representor ports may represent native
  representees or be part of companion applications.
- Handling flow create operation

Signed-off-by: Harman Kalra
---
 drivers/net/cnxk/cnxk_flow.h     |   9 +-
 drivers/net/cnxk/cnxk_rep.h      |   3 +
 drivers/net/cnxk/cnxk_rep_flow.c | 399 +++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep_msg.h  |  27 +++
 drivers/net/cnxk/cnxk_rep_ops.c  |   3 +-
 drivers/net/cnxk/meson.build     |   1 +
 6 files changed, 439 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cnxk/cnxk_rep_flow.c

diff --git a/drivers/net/cnxk/cnxk_flow.h b/drivers/net/cnxk/cnxk_flow.h
index 84333e7f9d..26384400c1 100644
--- a/drivers/net/cnxk/cnxk_flow.h
+++ b/drivers/net/cnxk/cnxk_flow.h
@@ -16,8 +16,13 @@ struct cnxk_rte_flow_term_info {
 	uint16_t item_size;
 };
 
-struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev,
-				      const struct rte_flow_attr *attr,
+struct cnxk_rte_flow_action_info {
+	uint16_t conf_size;
+};
+
+extern const struct cnxk_rte_flow_term_info term[];
+
+struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				      const struct rte_flow_item pattern[],
 				      const struct rte_flow_action actions[],
 				      struct rte_flow_error *error);
diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
index 9ac675426e..2b850e7e59 100644
--- a/drivers/net/cnxk/cnxk_rep.h
+++ b/drivers/net/cnxk/cnxk_rep.h
@@ -20,6 +20,9 @@
 /* Common ethdev ops */
 extern struct eth_dev_ops cnxk_rep_dev_ops;
 
+/* Flow ops for representor ports */
+extern struct rte_flow_ops cnxk_rep_flow_ops;
+
 struct cnxk_rep_queue_stats {
 	uint64_t pkts;
 	uint64_t bytes;
diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c
new file mode 100644
index 0000000000..ab9ced6ece
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_rep_flow.c
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include
+#include
+
+#include
+#include
+#include
+
+#define DEFAULT_DUMP_FILE_NAME "/tmp/fdump"
+#define MAX_BUFFER_SIZE	       1500
+
+const struct cnxk_rte_flow_action_info action_info[] = {
+	[RTE_FLOW_ACTION_TYPE_MARK] = {sizeof(struct rte_flow_action_mark)},
+	[RTE_FLOW_ACTION_TYPE_VF] = {sizeof(struct rte_flow_action_vf)},
+	[RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = {sizeof(struct rte_flow_action_port_id)},
+	[RTE_FLOW_ACTION_TYPE_PORT_ID] = {sizeof(struct rte_flow_action_port_id)},
+	[RTE_FLOW_ACTION_TYPE_QUEUE] = {sizeof(struct rte_flow_action_queue)},
+	[RTE_FLOW_ACTION_TYPE_RSS] = {sizeof(struct rte_flow_action_rss)},
+	[RTE_FLOW_ACTION_TYPE_SECURITY] = {sizeof(struct rte_flow_action_security)},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {sizeof(struct rte_flow_action_of_set_vlan_vid)},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {sizeof(struct rte_flow_action_of_push_vlan)},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {sizeof(struct rte_flow_action_of_set_vlan_pcp)},
+	[RTE_FLOW_ACTION_TYPE_METER] = {sizeof(struct rte_flow_action_meter)},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {sizeof(struct rte_flow_action_of_pop_mpls)},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {sizeof(struct rte_flow_action_of_push_mpls)},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {sizeof(struct rte_flow_action_vxlan_encap)},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {sizeof(struct rte_flow_action_nvgre_encap)},
+	[RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {sizeof(struct rte_flow_action_raw_encap)},
+	[RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {sizeof(struct rte_flow_action_raw_decap)},
+	[RTE_FLOW_ACTION_TYPE_COUNT] = {sizeof(struct rte_flow_action_count)},
+};
+
+static void
+cnxk_flow_params_count(const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+		       uint16_t *n_pattern, uint16_t *n_action)
+{
+	int i = 0;
+
+	for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++)
+		i++;
+
+	*n_pattern = ++i;
+	plt_rep_dbg("Total patterns is %d", *n_pattern);
+
+	i = 0;
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++)
+		i++;
+	*n_action = ++i;
+	plt_rep_dbg("Total actions is %d", *n_action);
+}
+
+static void
+populate_attr_data(void *buffer, uint32_t *length, const struct rte_flow_attr *attr)
+{
+	uint32_t sz = sizeof(struct rte_flow_attr);
+	uint32_t len;
+
+	cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_ATTR, sz);
+
+	len = *length;
+	/* Populate the attribute data */
+	rte_memcpy(RTE_PTR_ADD(buffer, len), attr, sz);
+	len += sz;
+
+	*length = len;
+}
+
+static uint16_t
+prepare_pattern_data(const struct rte_flow_item *pattern, uint16_t nb_pattern,
+		     uint64_t *pattern_data)
+{
+	cnxk_pattern_hdr_t hdr;
+	uint16_t len = 0;
+	int i = 0;
+
+	for (i = 0; i < nb_pattern; i++) {
+		/* Populate the pattern type hdr */
+		memset(&hdr, 0, sizeof(cnxk_pattern_hdr_t));
+		hdr.type = pattern->type;
+		if (pattern->spec) {
+			hdr.spec_sz = term[pattern->type].item_size;
+			hdr.last_sz = 0;
+			hdr.mask_sz = term[pattern->type].item_size;
+		}
+
+		rte_memcpy(RTE_PTR_ADD(pattern_data, len), &hdr, sizeof(cnxk_pattern_hdr_t));
+		len += sizeof(cnxk_pattern_hdr_t);
+
+		/* Copy pattern spec data */
+		if (pattern->spec) {
+			rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->spec,
+				   term[pattern->type].item_size);
+			len += term[pattern->type].item_size;
+		}
+
+		/* Copy pattern last data */
+		if (pattern->last) {
+			rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->last,
+				   term[pattern->type].item_size);
+			len += term[pattern->type].item_size;
+		}
+
+		/* Copy pattern mask data */
+		if (pattern->mask) {
+			rte_memcpy(RTE_PTR_ADD(pattern_data, len), pattern->mask,
+				   term[pattern->type].item_size);
+			len += term[pattern->type].item_size;
+		}
+		pattern++;
+	}
+
+	return len;
+}
+
+static void
+populate_pattern_data(void *buffer, uint32_t *length, const struct rte_flow_item *pattern,
+		      uint16_t nb_pattern)
+{
+	uint64_t pattern_data[BUFSIZ];
+	uint32_t len;
+	uint32_t sz;
+
+	/* Prepare pattern_data */
+	sz = prepare_pattern_data(pattern, nb_pattern, pattern_data);
+
+	cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_PATTERN, sz);
+
+	len = *length;
+	/* Populate the pattern data */
+	rte_memcpy(RTE_PTR_ADD(buffer, len), pattern_data, sz);
+	len += sz;
+
+	*length = len;
+}
+
+static uint16_t
+populate_rss_action_conf(const struct rte_flow_action_rss *conf, void *rss_action_conf)
+{
+	int len, sz;
+
+	len = sizeof(struct rte_flow_action_rss) - sizeof(conf->key) - sizeof(conf->queue);
+
+	if (rss_action_conf)
+		rte_memcpy(rss_action_conf, conf, len);
+
+	if (conf->key) {
+		sz = conf->key_len;
+		if (rss_action_conf)
+			rte_memcpy(RTE_PTR_ADD(rss_action_conf, len), conf->key, sz);
+		len += sz;
+	}
+
+	if (conf->queue) {
+		sz = conf->queue_num * sizeof(conf->queue);
+		if (rss_action_conf)
+			rte_memcpy(RTE_PTR_ADD(rss_action_conf, len), conf->queue, sz);
+		len += sz;
+	}
+
+	return len;
+}
+
+static uint16_t
+populate_vxlan_encap_action_conf(const struct rte_flow_action_vxlan_encap *vxlan_conf,
+				 void *vxlan_encap_action_data)
+{
+	const struct rte_flow_item *pattern;
+	uint64_t nb_patterns = 0;
+	uint16_t len, sz;
+
+	pattern = vxlan_conf->definition;
+	for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++)
+		nb_patterns++;
+
+	len = sizeof(uint64_t);
+	rte_memcpy(vxlan_encap_action_data, &nb_patterns, len);
+	pattern = vxlan_conf->definition;
+	/* Prepare pattern_data */
+	sz = prepare_pattern_data(pattern, nb_patterns, RTE_PTR_ADD(vxlan_encap_action_data, len));
+
+	len += sz;
+	if (len > BUFSIZ) {
+		plt_err("Incomplete item definition loaded, len %d", len);
+		return 0;
+	}
+
+	return len;
+}
+
+static uint16_t
+prepare_action_data(const struct rte_flow_action *action, uint16_t nb_action, uint64_t *action_data)
+{
+	void *action_conf_data = NULL;
+	cnxk_action_hdr_t hdr;
+	uint16_t len = 0, sz = 0;
+	int i = 0;
+
+	for (i = 0; i < nb_action; i++) {
+		if (action->conf) {
+			switch (action->type) {
+			case RTE_FLOW_ACTION_TYPE_RSS:
+				sz = populate_rss_action_conf(action->conf, NULL);
+				action_conf_data = plt_zmalloc(sz, 0);
+				if (populate_rss_action_conf(action->conf, action_conf_data) !=
+				    sz) {
+					plt_err("Populating RSS action config failed");
+					return 0;
+				}
+				break;
+			case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+				action_conf_data = plt_zmalloc(BUFSIZ, 0);
+				sz = populate_vxlan_encap_action_conf(action->conf,
+								      action_conf_data);
+				if (!sz) {
+					plt_err("Populating vxlan action action config failed");
+					return 0;
+				}
+				break;
+			default:
+				sz = action_info[action->type].conf_size;
+				action_conf_data = plt_zmalloc(sz, 0);
+				rte_memcpy(action_conf_data, action->conf, sz);
+				break;
+			};
+		}
+
+		/* Populate the action type hdr */
+		memset(&hdr, 0, sizeof(cnxk_action_hdr_t));
+		hdr.type = action->type;
+		hdr.conf_sz = sz;
+
+		rte_memcpy(RTE_PTR_ADD(action_data, len), &hdr, sizeof(cnxk_action_hdr_t));
+		len += sizeof(cnxk_action_hdr_t);
+
+		/* Copy action conf data */
+		if (action_conf_data) {
+			rte_memcpy(RTE_PTR_ADD(action_data, len), action_conf_data, sz);
+			len += sz;
+			plt_free(action_conf_data);
+			action_conf_data = NULL;
+		}
+
+		action++;
+	}
+
+	return len;
+}
+
+static void
+populate_action_data(void *buffer, uint32_t *length, const struct rte_flow_action *action,
+		     uint16_t nb_action)
+{
+	uint64_t action_data[BUFSIZ];
+	uint32_t len;
+	uint32_t sz;
+
+	/* Prepare action_data */
+	sz = prepare_action_data(action, nb_action, action_data);
+
+	cnxk_rep_msg_populate_type(buffer, length, CNXK_TYPE_ACTION, sz);
+
+	len = *length;
+	/* Populate the action data */
+	rte_memcpy(RTE_PTR_ADD(buffer, len), action_data, sz);
+	len += sz;
+
+	*length = len;
+}
+
+static int
+process_flow_rule(struct cnxk_rep_dev *rep_dev, const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+		  cnxk_rep_msg_ack_data_t *adata, cnxk_rep_msg_t msg)
+{
+	cnxk_rep_msg_flow_create_meta_t msg_fc_meta;
+	uint16_t n_pattern, n_action;
+	uint32_t len = 0, rc = 0;
+	void *buffer;
+	size_t size;
+
+	size = MAX_BUFFER_SIZE;
+	buffer = plt_zmalloc(size, 0);
+	if (!buffer) {
+		plt_err("Failed to allocate mem");
+		rc = -ENOMEM;
+		goto fail;
+	}
+
+	/* Get no of actions and patterns */
+	cnxk_flow_params_count(pattern, actions, &n_pattern, &n_action);
+
+	/* Adding the header */
+	cnxk_rep_msg_populate_header(buffer, &len);
+
+	/* Representor port identified as rep_xport queue */
+	msg_fc_meta.portid = rep_dev->rep_id;
+	msg_fc_meta.nb_pattern = n_pattern;
+	msg_fc_meta.nb_action = n_action;
+
+	cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_fc_meta,
+					   sizeof(cnxk_rep_msg_flow_create_meta_t), msg);
+
+	/* Populate flow create parameters data */
+	populate_attr_data(buffer, &len, attr);
+	populate_pattern_data(buffer, &len, pattern, n_pattern);
+	populate_action_data(buffer, &len, actions, n_action);
+
+	cnxk_rep_msg_populate_msg_end(buffer, &len);
+
+	rc = cnxk_rep_msg_send_process(rep_dev, buffer, len, adata);
+	if (rc) {
+		plt_err("Failed to process the message, err %d", rc);
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return rc;
+}
+
+static struct rte_flow *
+cnxk_rep_flow_create_native(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
+			    const struct rte_flow_item pattern[],
+			    const struct rte_flow_action actions[], struct rte_flow_error *error)
+{
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+	struct roc_npc_flow *flow;
+	uint16_t new_entry;
+	int rc;
+
+	flow = cnxk_flow_create_internal(eth_dev, attr, pattern, actions, error, true);
+	/* Shifting the rules with higher priority than exception path rules */
+	new_entry = (uint16_t)flow->mcam_id;
+	rc = cnxk_eswitch_flow_rule_shift(rep_dev->hw_func, &new_entry);
+	if (rc) {
+		plt_err("Failed to shift the flow rule entry, err %d", rc);
+		goto fail;
+	}
+
+	flow->mcam_id = new_entry;
+
+	return (struct rte_flow *)flow;
+fail:
+	return NULL;
+}
+
+static struct rte_flow *
+cnxk_rep_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
+		     const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+		     struct rte_flow_error *error)
+{
+	struct cnxk_rep_dev *rep_dev = cnxk_rep_pmd_priv(eth_dev);
+	struct rte_flow *flow = NULL;
+	cnxk_rep_msg_ack_data_t adata;
+	int rc = 0;
+
+	/* If representor not representing any active VF, return 0 */
+	if (!rep_dev->is_vf_active) {
+		rte_flow_error_set(error, -EAGAIN, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Represented VF not active yet");
+		return 0;
+	}
+
+	if (rep_dev->native_repte)
+		return cnxk_rep_flow_create_native(eth_dev, attr, pattern, actions, error);
+
+	rc = process_flow_rule(rep_dev, attr, pattern, actions, &adata, CNXK_REP_MSG_FLOW_CREATE);
+	if (!rc || adata.u.sval < 0) {
+		if (adata.u.sval < 0) {
+			rc = (int)adata.u.sval;
+			rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "Failed to validate flow");
+			goto fail;
+		}
+
+		flow = adata.u.data;
+		if (!flow) {
+			rte_flow_error_set(error, adata.u.sval, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "Failed to create flow");
+			goto fail;
+		}
+	} else {
+		rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to create flow");
+		goto fail;
+	}
+	plt_rep_dbg("Flow %p created successfully", adata.u.data);
+
+	return flow;
+fail:
+	return NULL;
+}
+
+struct rte_flow_ops cnxk_rep_flow_ops = {
+	.create = cnxk_rep_flow_create,
+};
diff --git a/drivers/net/cnxk/cnxk_rep_msg.h b/drivers/net/cnxk/cnxk_rep_msg.h
index 3236de50ad..2a7b5e3bc5 100644
--- a/drivers/net/cnxk/cnxk_rep_msg.h
+++ b/drivers/net/cnxk/cnxk_rep_msg.h
@@ -12,6 +12,10 @@
 typedef enum CNXK_TYPE {
 	CNXK_TYPE_HEADER = 0,
 	CNXK_TYPE_MSG,
+	CNXK_TYPE_ATTR,
+	CNXK_TYPE_PATTERN,
+	CNXK_TYPE_ACTION,
+	CNXK_TYPE_FLOW
 } cnxk_type_t;
 
 typedef enum CNXK_REP_MSG {
@@ -23,6 +27,8 @@ typedef enum CNXK_REP_MSG {
 	CNXK_REP_MSG_ETH_SET_MAC,
 	CNXK_REP_MSG_ETH_STATS_GET,
 	CNXK_REP_MSG_ETH_STATS_CLEAR,
+	/* Flow operation msgs */
+	CNXK_REP_MSG_FLOW_CREATE,
 	/* End of messaging sequence */
 	CNXK_REP_MSG_END,
 } cnxk_rep_msg_t;
@@ -96,6 +102,27 @@ typedef struct cnxk_rep_msg_eth_stats_meta {
 	uint16_t portid;
 } __rte_packed cnxk_rep_msg_eth_stats_meta_t;
 
+/* Flow create msg meta */
+typedef struct cnxk_rep_msg_flow_create_meta {
+	uint16_t portid;
+	uint16_t nb_pattern;
+	uint16_t nb_action;
+} __rte_packed cnxk_rep_msg_flow_create_meta_t;
+
+/* Type pattern meta */
+typedef struct cnxk_pattern_hdr {
+	uint16_t type;
+	uint16_t spec_sz;
+	uint16_t last_sz;
+	uint16_t mask_sz;
+} __rte_packed cnxk_pattern_hdr_t;
+
+/* Type action meta */
+typedef struct cnxk_action_hdr {
+	uint16_t type;
+	uint16_t conf_sz;
+} __rte_packed cnxk_action_hdr_t;
+
 void cnxk_rep_msg_populate_command(void *buffer, uint32_t *length, cnxk_rep_msg_t type,
 				   uint32_t size);
 void cnxk_rep_msg_populate_command_meta(void *buffer, uint32_t *length, void *msg_meta, uint32_t sz,
diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c
index e07c63dcb2..a461ae1dc3 100644
--- a/drivers/net/cnxk/cnxk_rep_ops.c
+++ b/drivers/net/cnxk/cnxk_rep_ops.c
@@ -637,7 +637,8 @@ int
 cnxk_rep_flow_ops_get(struct rte_eth_dev *ethdev, const struct rte_flow_ops **ops)
 {
 	PLT_SET_USED(ethdev);
-	PLT_SET_USED(ops);
+	*ops = &cnxk_rep_flow_ops;
+
 	return 0;
 }
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index 9ca7732713..8cc06f4967 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -39,6 +39,7 @@ sources = files(
         'cnxk_rep.c',
         'cnxk_rep_msg.c',
         'cnxk_rep_ops.c',
+        'cnxk_rep_flow.c',
         'cnxk_stats.c',
         'cnxk_tm.c',
 )
-- 
2.18.0
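
Editor's note (not part of the patch): the driver path above is reached through the generic rte_flow API, since cnxk_rep_flow_ops.create is returned by cnxk_rep_flow_ops_get(). Below is a minimal usage sketch of how an application could exercise this path on a representor port. The function name repr_flow_create_example, the port id argument, the IPv4 ethertype match and the queue index are all illustrative assumptions, not taken from the patch; per the code above, the call only succeeds once the represented VF is active.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: steer IPv4 traffic seen on a cnxk representor port to Rx queue 0.
 * rte_flow_create() on the representor ethdev ends up in cnxk_rep_flow_create(),
 * which either programs the rule natively or forwards it as a
 * CNXK_REP_MSG_FLOW_CREATE message.
 */
static struct rte_flow *
repr_flow_create_example(uint16_t repr_port)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = {
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_eth eth_mask = {
		.hdr.ether_type = RTE_BE16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err = { 0 };
	struct rte_flow *flow;

	flow = rte_flow_create(repr_port, &attr, pattern, actions, &err);
	if (flow == NULL)
		printf("flow create failed: %s\n", err.message ? err.message : "unknown");

	return flow;
}

The spec/mask pair and the queue action in this sketch map directly onto the serialization done by prepare_pattern_data() and prepare_action_data() above when the rule is forwarded to the companion application.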