From mboxrd@z Thu Jan  1 00:00:00 1970
From: <psatheesh@marvell.com>
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra
CC: dev@dpdk.org, Satheesh Paul
Subject: [dpdk-dev] [PATCH v2 2/2] net/cnxk: support rte flow on cn20k
Date: Tue, 29 Oct 2024 13:01:06 +0530
Message-ID: <20241029073106.2620920-2-psatheesh@marvell.com>
In-Reply-To: <20241029073106.2620920-1-psatheesh@marvell.com>
References: <20241021040144.974453-1-psatheesh@marvell.com>
 <20241029073106.2620920-1-psatheesh@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

From: Satheesh Paul <psatheesh@marvell.com>

Add support for rte flow on cn20k.
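As an illustration of what this enables at the application level, a flow
with a queue and a count action can be created and queried through the
generic rte_flow API; on cn20k the counter query path now resolves to
roc_npc_mcam_get_stats(). The helper below is only a sketch -- the
function name, port id and queue id are hypothetical and not part of
this patch:

  #include <inttypes.h>
  #include <stdio.h>

  #include <rte_errno.h>
  #include <rte_ethdev.h>
  #include <rte_flow.h>

  /* Hypothetical helper, not part of this patch: steer IPv4 traffic on
   * port_id to rx queue qid, attach a counter and read it back.
   */
  static int
  ipv4_rule_with_counter(uint16_t port_id, uint16_t qid)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_queue queue = { .index = qid };
          struct rte_flow_action_count count = { 0 };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                  { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow_query_count stats = { 0 };
          struct rte_flow_error error;
          struct rte_flow *flow;
          int rc;

          flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
          if (flow == NULL)
                  return -rte_errno;

          /* Counter read; on cn20k the driver services this via
           * roc_npc_mcam_get_stats().
           */
          rc = rte_flow_query(port_id, flow, &actions[1], &stats, &error);
          if (rc == 0)
                  printf("hits: %" PRIu64 "\n", stats.hits);

          return rte_flow_destroy(port_id, flow, &error);
  }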
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Reviewed-by: Kiran Kumar K
---
 drivers/net/cnxk/cn10k_ethdev.c        |   8 +-
 drivers/net/cnxk/cn10k_flow.h          |  21 --
 drivers/net/cnxk/cn20k_ethdev.c        |   4 +
 drivers/net/cnxk/cnxk_ethdev_devargs.c |  10 +-
 drivers/net/cnxk/cnxk_flow.c           |  10 +-
 drivers/net/cnxk/cnxk_flow_wrapper.c   | 303 +++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_flow_wrapper.h   |  21 ++
 drivers/net/cnxk/meson.build           |  10 +-
 8 files changed, 355 insertions(+), 32 deletions(-)
 delete mode 100644 drivers/net/cnxk/cn10k_flow.h
 create mode 100644 drivers/net/cnxk/cnxk_flow_wrapper.c
 create mode 100644 drivers/net/cnxk/cnxk_flow_wrapper.h

diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index fbb9b09062..a4b3d56c61 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -2,9 +2,9 @@
  * Copyright(C) 2021 Marvell.
  */
 #include "cn10k_ethdev.h"
-#include "cn10k_flow.h"
 #include "cn10k_rx.h"
 #include "cn10k_tx.h"
+#include "cnxk_flow_wrapper.h"
 
 static uint16_t
 nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
@@ -913,9 +913,9 @@ npc_flow_ops_override(void)
 	init_once = 1;
 
 	/* Update platform specific ops */
-	cnxk_flow_ops.create = cn10k_flow_create;
-	cnxk_flow_ops.destroy = cn10k_flow_destroy;
-	cnxk_flow_ops.info_get = cn10k_flow_info_get;
+	cnxk_flow_ops.create = cnxk_flow_create_wrapper;
+	cnxk_flow_ops.destroy = cnxk_flow_destroy_wrapper;
+	cnxk_flow_ops.info_get = cnxk_flow_info_get_wrapper;
 }
 
 static int
diff --git a/drivers/net/cnxk/cn10k_flow.h b/drivers/net/cnxk/cn10k_flow.h
deleted file mode 100644
index 316b74e6a6..0000000000
--- a/drivers/net/cnxk/cn10k_flow.h
+++ /dev/null
@@ -1,21 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell.
- */
-#ifndef __CN10K_RTE_FLOW_H__
-#define __CN10K_RTE_FLOW_H__
-
-#include 
-
-struct rte_flow *cn10k_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
-				   const struct rte_flow_item pattern[],
-				   const struct rte_flow_action actions[],
-				   struct rte_flow_error *error);
-int cn10k_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
-		       struct rte_flow_error *error);
-
-int cn10k_flow_info_get(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
-			struct rte_flow_queue_info *queue_info, struct rte_flow_error *err);
-
-#define CN10K_NPC_COUNTERS_MAX 512
-
-#endif /* __CN10K_RTE_FLOW_H__ */
diff --git a/drivers/net/cnxk/cn20k_ethdev.c b/drivers/net/cnxk/cn20k_ethdev.c
index 37c372d80f..e74dd88172 100644
--- a/drivers/net/cnxk/cn20k_ethdev.c
+++ b/drivers/net/cnxk/cn20k_ethdev.c
@@ -4,6 +4,7 @@
 #include "cn20k_ethdev.h"
 #include "cn20k_rx.h"
 #include "cn20k_tx.h"
+#include "cnxk_flow_wrapper.h"
 
 static uint16_t
 nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
@@ -867,6 +868,9 @@ npc_flow_ops_override(void)
 	init_once = 1;
 
 	/* Update platform specific ops */
+	cnxk_flow_ops.create = cnxk_flow_create_wrapper;
+	cnxk_flow_ops.destroy = cnxk_flow_destroy_wrapper;
+	cnxk_flow_ops.info_get = cnxk_flow_info_get_wrapper;
 }
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 5bd50bb9a1..bdf5d88b92 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -88,8 +88,7 @@ parse_flow_max_priority(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	/* Limit the max priority to 32 */
-	if (val < 1 || val > 32)
+	if (val < 1 || val > ROC_NPC_MAX_MCAM_PRIORITY)
 		return -EINVAL;
 
 	*(uint16_t *)extra_args = val;
@@ -390,7 +389,12 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 
 	dev->nix.meta_buf_sz = meta_buf_sz;
 	dev->npc.flow_prealloc_size = flow_prealloc_size;
-	dev->npc.flow_max_priority = flow_max_priority;
+
+	if (roc_model_is_cn20k())
+		dev->npc.flow_max_priority = ROC_NPC_MAX_MCAM_PRIORITY;
+	else
+		dev->npc.flow_max_priority = flow_max_priority;
+
 	dev->npc.switch_header_type = switch_header_type;
 	dev->npc.sdp_channel = sdp_chan.channel;
 	dev->npc.sdp_channel_mask = sdp_chan.mask;
diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c
index e42e2f8deb..4a32c11765 100644
--- a/drivers/net/cnxk/cnxk_flow.c
+++ b/drivers/net/cnxk/cnxk_flow.c
@@ -1083,10 +1083,14 @@ cnxk_flow_query_common(struct rte_eth_dev *eth_dev, struct rte_flow *flow,
 		npc = &rep_dev->parent_dev->npc;
 	}
 
-	if (in_flow->use_pre_alloc)
+	if (in_flow->use_pre_alloc) {
 		rc = roc_npc_inl_mcam_read_counter(in_flow->ctr_id, &query->hits);
-	else
-		rc = roc_npc_mcam_read_counter(npc, in_flow->ctr_id, &query->hits);
+	} else {
+		if (roc_model_is_cn20k())
+			rc = roc_npc_mcam_get_stats(npc, in_flow, &query->hits);
+		else
+			rc = roc_npc_mcam_read_counter(npc, in_flow->ctr_id, &query->hits);
+	}
 	if (rc != 0) {
 		errcode = EIO;
 		errmsg = "Error reading flow counter";
diff --git a/drivers/net/cnxk/cnxk_flow_wrapper.c b/drivers/net/cnxk/cnxk_flow_wrapper.c
new file mode 100644
index 0000000000..af2eee7266
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_flow_wrapper.c
@@ -0,0 +1,303 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifdef CNXK_PLATFORM_CN10K
+#include "cn10k_ethdev.h"
+#include "cn10k_rx.h"
+#endif
+#ifdef CNXK_PLATFORM_CN20K
+#include "cn20k_ethdev.h"
+#include "cn20k_rx.h"
+#endif
+#include "cnxk_ethdev_mcs.h"
+#include "cnxk_flow_wrapper.h"
+#include 
+
+static void
+cnxk_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+#ifdef CNXK_PLATFORM_CN10K
+	cn10k_eth_set_rx_function(eth_dev);
+#endif
+#ifdef CNXK_PLATFORM_CN20K
+	cn20k_eth_set_rx_function(eth_dev);
+#endif
+}
+
+static int
+cnxk_mtr_connect(struct rte_eth_dev *eth_dev, uint32_t mtr_id)
+{
+	return nix_mtr_connect(eth_dev, mtr_id);
+}
+
+static int
+cnxk_mtr_destroy(struct rte_eth_dev *eth_dev, uint32_t mtr_id)
+{
+	struct rte_mtr_error mtr_error;
+
+	return nix_mtr_destroy(eth_dev, mtr_id, &mtr_error);
+}
+
+static int
+cnxk_mtr_configure(struct rte_eth_dev *eth_dev, const struct rte_flow_action actions[])
+{
+	uint32_t mtr_id = 0xffff, prev_mtr_id = 0xffff, next_mtr_id = 0xffff;
+	const struct rte_flow_action_meter *mtr_conf;
+	const struct rte_flow_action_queue *q_conf;
+	const struct rte_flow_action_rss *rss_conf;
+	struct cnxk_mtr_policy_node *policy;
+	bool is_mtr_act = false;
+	int tree_level = 0;
+	int rc = -EINVAL, i;
+
+	for (i = 0; actions[i].type != RTE_FLOW_ACTION_TYPE_END; i++) {
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_METER) {
+			mtr_conf = (const struct rte_flow_action_meter *)(actions[i].conf);
+			mtr_id = mtr_conf->mtr_id;
+			is_mtr_act = true;
+		}
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+			q_conf = (const struct rte_flow_action_queue *)(actions[i].conf);
+			if (is_mtr_act)
+				nix_mtr_rq_update(eth_dev, mtr_id, 1, &q_conf->index);
+		}
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_RSS) {
+			rss_conf = (const struct rte_flow_action_rss *)(actions[i].conf);
+			if (is_mtr_act)
+				nix_mtr_rq_update(eth_dev, mtr_id, rss_conf->queue_num,
+						  rss_conf->queue);
+		}
+	}
+
+	if (!is_mtr_act)
+		return rc;
+
+	prev_mtr_id = mtr_id;
+	next_mtr_id = mtr_id;
+	while (next_mtr_id != 0xffff) {
+		rc = nix_mtr_validate(eth_dev, next_mtr_id);
+		if (rc)
+			return rc;
+
+		rc = nix_mtr_policy_act_get(eth_dev, next_mtr_id, &policy);
+		if (rc)
+			return rc;
+
+		rc = nix_mtr_color_action_validate(eth_dev, mtr_id, &prev_mtr_id, &next_mtr_id,
+						   policy, &tree_level);
+		if (rc)
+			return rc;
+	}
+
+	return nix_mtr_configure(eth_dev, mtr_id);
+}
+
+static int
+cnxk_rss_action_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
+			 const struct rte_flow_action *act)
+{
+	const struct rte_flow_action_rss *rss;
+
+	if (act == NULL)
+		return -EINVAL;
+
+	rss = (const struct rte_flow_action_rss *)act->conf;
+
+	if (attr->egress) {
+		plt_err("No support of RSS in egress");
+		return -EINVAL;
+	}
+
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
+		plt_err("multi-queue mode is disabled");
+		return -ENOTSUP;
+	}
+
+	if (!rss || !rss->queue_num) {
+		plt_err("no valid queues");
+		return -EINVAL;
+	}
+
+	if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+		plt_err("non-default RSS hash functions are not supported");
+		return -ENOTSUP;
+	}
+
+	if (rss->key_len && rss->key_len > ROC_NIX_RSS_KEY_LEN) {
+		plt_err("RSS hash key too large");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+struct rte_flow *
+cnxk_flow_create_wrapper(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
+			 const struct rte_flow_item pattern[],
+			 const struct rte_flow_action actions[], struct rte_flow_error *error)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	const struct rte_flow_action *action_rss = NULL;
+	const struct rte_flow_action_meter *mtr = NULL;
+	const struct rte_flow_action *act_q = NULL;
+	struct roc_npc *npc = &dev->npc;
+	struct roc_npc_flow *flow;
+	void *mcs_flow = NULL;
+	int vtag_actions = 0;
+	uint32_t req_act = 0;
+	int mark_actions;
+	int i, rc;
+
+	for (i = 0; actions[i].type != RTE_FLOW_ACTION_TYPE_END; i++) {
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_METER)
+			req_act |= ROC_NPC_ACTION_TYPE_METER;
+
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+			req_act |= ROC_NPC_ACTION_TYPE_QUEUE;
+			act_q = &actions[i];
+		}
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_RSS) {
+			req_act |= ROC_NPC_ACTION_TYPE_RSS;
+			action_rss = &actions[i];
+		}
+	}
+
+	if (req_act & ROC_NPC_ACTION_TYPE_METER) {
+		if ((req_act & ROC_NPC_ACTION_TYPE_RSS) &&
+		    ((req_act & ROC_NPC_ACTION_TYPE_QUEUE))) {
+			return NULL;
+		}
+		if (req_act & ROC_NPC_ACTION_TYPE_RSS) {
+			rc = cnxk_rss_action_validate(eth_dev, attr, action_rss);
+			if (rc)
+				return NULL;
+		} else if (req_act & ROC_NPC_ACTION_TYPE_QUEUE) {
+			const struct rte_flow_action_queue *act_queue;
+
+			act_queue = (const struct rte_flow_action_queue *)act_q->conf;
+			if (act_queue->index > eth_dev->data->nb_rx_queues)
+				return NULL;
+		} else {
+			return NULL;
+		}
+	}
+	for (i = 0; actions[i].type != RTE_FLOW_ACTION_TYPE_END; i++) {
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_METER) {
+			mtr = (const struct rte_flow_action_meter *)actions[i].conf;
+			rc = cnxk_mtr_configure(eth_dev, actions);
+			if (rc) {
+				rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+						   "Failed to configure mtr ");
+				return NULL;
+			}
+			break;
+		}
+	}
+
+	if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY &&
+	    cnxk_eth_macsec_sess_get_by_sess(dev, actions[0].conf) != NULL) {
+		rc = cnxk_mcs_flow_configure(eth_dev, attr, pattern, actions, error, &mcs_flow);
+		if (rc) {
+			rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "Failed to configure mcs flow");
+			return NULL;
+		}
+		return (struct rte_flow *)mcs_flow;
+	}
+
+	flow = cnxk_flow_create(eth_dev, attr, pattern, actions, error);
+	if (!flow) {
+		if (mtr)
+			nix_mtr_chain_reset(eth_dev, mtr->mtr_id);
+
+		return NULL;
+	} else {
+		if (mtr)
+			cnxk_mtr_connect(eth_dev, mtr->mtr_id);
+	}
+
+	mark_actions = roc_npc_mark_actions_get(npc);
+	if (mark_actions) {
+		dev->rx_offload_flags |= NIX_RX_OFFLOAD_MARK_UPDATE_F;
+		cnxk_eth_set_rx_function(eth_dev);
+	}
+
+	vtag_actions = roc_npc_vtag_actions_get(npc);
+
+	if (vtag_actions) {
+		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+		cnxk_eth_set_rx_function(eth_dev);
+	}
+
+	return (struct rte_flow *)flow;
+}
+
+int
+cnxk_flow_info_get_wrapper(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
+			   struct rte_flow_queue_info *queue_info, struct rte_flow_error *err)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(err);
+
+	memset(port_info, 0, sizeof(*port_info));
+	memset(queue_info, 0, sizeof(*queue_info));
+
+	port_info->max_nb_counters = CNXK_NPC_COUNTERS_MAX;
+	port_info->max_nb_meters = CNXK_NIX_MTR_COUNT_MAX;
+
+	return 0;
+}
+
+int
+cnxk_flow_destroy_wrapper(struct rte_eth_dev *eth_dev, struct rte_flow *rte_flow,
+			  struct rte_flow_error *error)
+{
+	struct roc_npc_flow *flow = (struct roc_npc_flow *)rte_flow;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_npc *npc = &dev->npc;
+	int vtag_actions = 0;
+	int mark_actions;
+	uint16_t match_id;
+	uint32_t mtr_id;
+	int rc;
+
+	match_id = (flow->npc_action >> NPC_RX_ACT_MATCH_OFFSET) & NPC_RX_ACT_MATCH_MASK;
+	if (match_id) {
+		mark_actions = roc_npc_mark_actions_sub_return(npc, 1);
+		if (mark_actions == 0) {
+			dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
+			cnxk_eth_set_rx_function(eth_dev);
+		}
+	}
+
+	vtag_actions = roc_npc_vtag_actions_get(npc);
+	if (vtag_actions) {
+		if (flow->nix_intf == ROC_NPC_INTF_RX) {
+			vtag_actions = roc_npc_vtag_actions_sub_return(npc, 1);
+			if (vtag_actions == 0) {
+				dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_VLAN_STRIP_F;
+				cnxk_eth_set_rx_function(eth_dev);
+			}
+		}
+	}
+
+	if (cnxk_eth_macsec_sess_get_by_sess(dev, (void *)flow) != NULL) {
+		rc = cnxk_mcs_flow_destroy(dev, (void *)flow);
+		if (rc < 0)
+			rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					   "Failed to free mcs flow");
+		return rc;
+	}
+
+	mtr_id = flow->mtr_id;
+	rc = cnxk_flow_destroy(eth_dev, flow, error);
+	if (!rc && mtr_id != ROC_NIX_MTR_ID_INVALID) {
+		rc = cnxk_mtr_destroy(eth_dev, mtr_id);
+		if (rc) {
+			rte_flow_error_set(error, ENXIO, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					   "Meter attached to this flow does not exist");
+		}
+	}
+	return rc;
+}
diff --git a/drivers/net/cnxk/cnxk_flow_wrapper.h b/drivers/net/cnxk/cnxk_flow_wrapper.h
new file mode 100644
index 0000000000..506293dd51
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_flow_wrapper.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+#ifndef __CNXK_FLOW_WRAPPER_H__
+#define __CNXK_FLOW_WRAPPER_H__
+
+#include 
+
+struct rte_flow *cnxk_flow_create_wrapper(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+					  const struct rte_flow_item pattern[],
+					  const struct rte_flow_action actions[],
+					  struct rte_flow_error *error);
+int cnxk_flow_destroy_wrapper(struct rte_eth_dev *dev, struct rte_flow *flow,
+			      struct rte_flow_error *error);
+
+int cnxk_flow_info_get_wrapper(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
+			       struct rte_flow_queue_info *queue_info, struct rte_flow_error *err);
+
+#define CNXK_NPC_COUNTERS_MAX 512
+
+#endif /* __CNXK_FLOW_WRAPPER_H__ */
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index 32ff8aadc0..e77ceb8efc 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -18,6 +18,13 @@ if soc_type != 'cn9k' and soc_type != 'cn10k' and soc_type != 'cn20k'
     soc_type = 'all'
 endif
 
+if soc_type == 'cn10k' or soc_type == 'all'
+    dpdk_conf.set('CNXK_PLATFORM_CN10K', 1)
+elif soc_type == 'cn20k'
+    dpdk_conf.set('CNXK_PLATFORM_CN20K', 1)
+endif
+
+
 sources = files(
         'cnxk_ethdev.c',
         'cnxk_ethdev_cman.c',
@@ -146,9 +153,9 @@ if soc_type == 'cn10k' or soc_type == 'all'
 sources += files(
         'cn10k_ethdev.c',
         'cn10k_ethdev_sec.c',
-        'cn10k_flow.c',
         'cn10k_rx_select.c',
        'cn10k_tx_select.c',
+        'cnxk_flow_wrapper.c',
 )
 
 if host_machine.cpu_family().startswith('aarch') and not disable_template
@@ -238,6 +245,7 @@ sources += files(
         'cn20k_ethdev.c',
         'cn20k_rx_select.c',
         'cn20k_tx_select.c',
+        'cnxk_flow_wrapper.c',
 )
 
 if host_machine.cpu_family().startswith('aarch') and not disable_template
-- 
2.39.2
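
A note on the RSS checks added in cnxk_rss_action_validate(): only ingress
rules are accepted, the hash function must be RTE_ETH_HASH_FUNCTION_DEFAULT,
any supplied key must not exceed ROC_NIX_RSS_KEY_LEN, at least one queue must
be listed, and the port has to be configured with mq_mode = RTE_ETH_MQ_RX_RSS.
A minimal sketch of an RSS action that satisfies those constraints follows;
the function name and queue list are hypothetical, not part of this patch:

  #include <rte_common.h>
  #include <rte_ethdev.h>
  #include <rte_flow.h>

  /* Hypothetical example, not part of this patch: an RSS action shaped to
   * pass cnxk_rss_action_validate().  The port is assumed to already be
   * configured with mq_mode = RTE_ETH_MQ_RX_RSS.
   */
  static struct rte_flow *
  ipv4_rss_rule(uint16_t port_id)
  {
          static const uint16_t queues[] = { 0, 1, 2, 3 };
          struct rte_flow_action_rss rss = {
                  .func = RTE_ETH_HASH_FUNCTION_DEFAULT, /* non-default funcs are rejected */
                  .types = RTE_ETH_RSS_IP,
                  .key_len = 0,                 /* keep within ROC_NIX_RSS_KEY_LEN */
                  .queue_num = RTE_DIM(queues), /* zero queues are rejected */
                  .queue = queues,
          };
          struct rte_flow_attr attr = { .ingress = 1 }; /* RSS on egress is rejected */
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow_error error;

          return rte_flow_create(port_id, &attr, pattern, actions, &error);
  }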