From mboxrd@z Thu Jan 1 00:00:00 1970
From: Harman Kalra
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra
Subject: [PATCH v5 21/23] net/cnxk: generalise flow operation APIs
Date: Sat, 2 Mar 2024 00:44:48 +0530
Message-ID: <20240301191451.57168-22-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20240301191451.57168-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com> <20240301191451.57168-1-hkalra@marvell.com>
List-Id: DPDK patches and discussions

Flow operations can be performed on cnxk ports as well as representor ports. Since representor ports are not cnxk ports but are backed by the eswitch device underneath, special handling is required to align with the base infrastructure. Introduce a flag in the generic flow APIs to indicate whether an operation is requested on a normal port or a representor port.
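The dispatch pattern this flag enables can be sketched in isolation as follows. This is a minimal standalone sketch, not the driver code: the struct layouts and `select_npc()` are illustrative stand-ins for `cnxk_eth_pmd_priv()`/`cnxk_rep_pmd_priv()` and the `is_rep` branches the patch adds to each common flow API.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the driver's real structures. */
struct roc_npc { int pf_func; };
struct cnxk_eth_dev { struct roc_npc npc; };       /* normal cnxk port */
struct cnxk_eswitch_dev { struct roc_npc npc; };   /* eswitch base device */
struct cnxk_rep_dev { struct cnxk_eswitch_dev *parent_dev; };

/* Resolve the NPC context for a flow operation: a representor port has
 * no NPC of its own, so when is_rep is set the operation falls back to
 * the parent eswitch device's NPC. */
static struct roc_npc *
select_npc(void *priv, bool is_rep)
{
	if (!is_rep)
		return &((struct cnxk_eth_dev *)priv)->npc;
	return &((struct cnxk_rep_dev *)priv)->parent_dev->npc;
}
```

Each `cnxk_flow_*_common()` helper in the patch opens with exactly this branch before doing the real work against `npc`.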
Signed-off-by: Harman Kalra --- drivers/net/cnxk/cnxk_flow.c | 556 +++++++++++++++++++++++++++++------ drivers/net/cnxk/cnxk_flow.h | 18 ++ 2 files changed, 489 insertions(+), 85 deletions(-) diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c index 2cd88f0334..d3c20e8315 100644 --- a/drivers/net/cnxk/cnxk_flow.c +++ b/drivers/net/cnxk/cnxk_flow.c @@ -4,6 +4,7 @@ #include #include +#define IS_REP_BIT 7 const struct cnxk_rte_flow_term_info term[] = { [RTE_FLOW_ITEM_TYPE_ETH] = {ROC_NPC_ITEM_TYPE_ETH, sizeof(struct rte_flow_item_eth)}, [RTE_FLOW_ITEM_TYPE_VLAN] = {ROC_NPC_ITEM_TYPE_VLAN, sizeof(struct rte_flow_item_vlan)}, @@ -186,17 +187,162 @@ roc_npc_parse_sample_subaction(struct rte_eth_dev *eth_dev, const struct rte_flo return 0; } +static int +representor_rep_portid_action(struct roc_npc_action *in_actions, struct rte_eth_dev *eth_dev, + struct rte_eth_dev *portid_eth_dev, + enum rte_flow_action_type act_type, uint8_t rep_pattern, + uint16_t *dst_pf_func, bool is_rep, uint64_t *free_allocs, + int *act_cnt) +{ + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + struct rte_eth_dev *rep_eth_dev = portid_eth_dev; + struct rte_flow_action_of_set_vlan_vid *vlan_vid; + struct rte_flow_action_of_set_vlan_pcp *vlan_pcp; + struct rte_flow_action_of_push_vlan *push_vlan; + struct rte_flow_action_queue *act_q = NULL; + struct cnxk_rep_dev *rep_dev; + struct roc_npc *npc; + uint16_t vlan_tci; + int j = 0; + + /* For inserting an action in the list */ + int i = *act_cnt; + + rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + npc = &rep_dev->parent_dev->npc; + } + if (rep_pattern >> IS_REP_BIT) { /* Check for normal/representor port as action */ + if ((rep_pattern & 0x7f) == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) { + /* Case: Repr port pattern -> Default TX rule -> LBK -> + * Pattern RX LBK rule hit -> Action: send to new pf_func + */ + if (act_type == 
RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR) { + /* New pf_func corresponds to ESW + queue corresponding to rep_id */ + act_q = plt_zmalloc(sizeof(struct rte_flow_action_queue), 0); + if (!act_q) { + plt_err("Failed to allocate memory"); + return -ENOMEM; + } + act_q->index = rep_dev->rep_id; + + while (free_allocs[j] != 0) + j++; + free_allocs[j] = (uint64_t)act_q; + in_actions[i].type = ROC_NPC_ACTION_TYPE_QUEUE; + in_actions[i].conf = (struct rte_flow_action_queue *)act_q; + npc->rep_act_pf_func = rep_dev->parent_dev->npc.pf_func; + } else { + /* New pf_func corresponds to hw_func of representee */ + in_actions[i].type = ROC_NPC_ACTION_TYPE_PORT_ID; + npc->rep_act_pf_func = rep_dev->hw_func; + *dst_pf_func = rep_dev->hw_func; + } + /* Additional action to strip the VLAN from packets received by LBK */ + i++; + in_actions[i].type = ROC_NPC_ACTION_TYPE_VLAN_STRIP; + goto done; + } + /* Case: Represented port pattern -> TX rule with VLAN -> LBK -> Default RX LBK rule hit + * based on VLAN, packet goes to ESW or the actual pf_func -> Action: + * act port_representor: send to the respective ESW queue using (1 << 8) | rep_id as the TCI value + * act represented_port: send to the actual port using rep_id as the TCI value.
+ */ + /* Add RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN action */ + push_vlan = plt_zmalloc(sizeof(struct rte_flow_action_of_push_vlan), 0); + if (!push_vlan) { + plt_err("Failed to allocate memory"); + return -ENOMEM; + } + + while (free_allocs[j] != 0) + j++; + free_allocs[j] = (uint64_t)push_vlan; + push_vlan->ethertype = ntohs(ROC_ESWITCH_VLAN_TPID); + in_actions[i].type = ROC_NPC_ACTION_TYPE_VLAN_ETHTYPE_INSERT; + in_actions[i].conf = (struct rte_flow_action_of_push_vlan *)push_vlan; + i++; + + /* Add RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP action */ + vlan_pcp = plt_zmalloc(sizeof(struct rte_flow_action_of_set_vlan_pcp), 0); + if (!vlan_pcp) { + plt_err("Failed to allocate memory"); + return -ENOMEM; + } + + free_allocs[j + 1] = (uint64_t)vlan_pcp; + vlan_pcp->vlan_pcp = 0; + in_actions[i].type = ROC_NPC_ACTION_TYPE_VLAN_PCP_INSERT; + in_actions[i].conf = (struct rte_flow_action_of_set_vlan_pcp *)vlan_pcp; + i++; + + /* Add RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID action */ + vlan_vid = plt_zmalloc(sizeof(struct rte_flow_action_of_set_vlan_vid), 0); + if (!vlan_vid) { + plt_err("Failed to allocate memory"); + return -ENOMEM; + } + + free_allocs[j + 2] = (uint64_t)vlan_vid; + if (act_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR) + vlan_tci = rep_dev->rep_id | (1ULL << CNXK_ESWITCH_VFPF_SHIFT); + else + vlan_tci = rep_dev->rep_id; + vlan_vid->vlan_vid = ntohs(vlan_tci); + in_actions[i].type = ROC_NPC_ACTION_TYPE_VLAN_INSERT; + in_actions[i].conf = (struct rte_flow_action_of_set_vlan_vid *)vlan_vid; + + /* Change default channel to UCAST_CHAN (63) while sending */ + npc->rep_act_rep = true; + } else { + if (act_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR) { + /* Case: Pattern wire port -> Pattern RX rule -> + * Action: pf_func = ESW.
queue = rep_id + */ + act_q = plt_zmalloc(sizeof(struct rte_flow_action_queue), 0); + if (!act_q) { + plt_err("Failed to allocate memory"); + return -ENOMEM; + } + while (free_allocs[j] != 0) + j++; + free_allocs[j] = (uint64_t)act_q; + act_q->index = rep_dev->rep_id; + + in_actions[i].type = ROC_NPC_ACTION_TYPE_QUEUE; + in_actions[i].conf = (struct rte_flow_action_queue *)act_q; + npc->rep_act_pf_func = rep_dev->parent_dev->npc.pf_func; + } else { + /* Case: Pattern wire port -> Pattern RX rule -> + * Action: Receive at actual hw_func + */ + in_actions[i].type = ROC_NPC_ACTION_TYPE_PORT_ID; + npc->rep_act_pf_func = rep_dev->hw_func; + *dst_pf_func = rep_dev->hw_func; + } + } +done: + *act_cnt = i; + + return 0; +} + static int representor_portid_action(struct roc_npc_action *in_actions, struct rte_eth_dev *portid_eth_dev, - uint16_t *dst_pf_func, uint8_t has_tunnel_pattern, int *act_cnt) + uint16_t *dst_pf_func, uint8_t has_tunnel_pattern, uint64_t *free_allocs, + int *act_cnt) { struct rte_eth_dev *rep_eth_dev = portid_eth_dev; struct rte_flow_action_mark *act_mark; struct cnxk_rep_dev *rep_dev; /* For inserting an action in the list */ - int i = *act_cnt; + int i = *act_cnt, j = 0; rep_dev = cnxk_rep_pmd_priv(rep_eth_dev); + *dst_pf_func = rep_dev->hw_func; /* Add Mark action */ @@ -207,6 +353,9 @@ representor_portid_action(struct roc_npc_action *in_actions, struct rte_eth_dev return -ENOMEM; } + while (free_allocs[j] != 0) + j++; + free_allocs[j] = (uint64_t)act_mark; /* Mark ID format: (tunnel type - VxLAN, Geneve << 6) | Tunnel decap */ act_mark->id = has_tunnel_pattern ?
((has_tunnel_pattern << 6) | 5) : 1; in_actions[i].type = ROC_NPC_ACTION_TYPE_MARK; @@ -223,7 +372,8 @@ static int cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, const struct rte_flow_action actions[], struct roc_npc_action in_actions[], struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg, - uint16_t *dst_pf_func, uint8_t has_tunnel_pattern) + uint16_t *dst_pf_func, uint8_t has_tunnel_pattern, bool is_rep, + uint8_t rep_pattern, uint64_t *free_allocs) { struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct rte_flow_action_queue *act_q = NULL; @@ -273,16 +423,48 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: + in_actions[i].conf = actions->conf; + act_ethdev = (const struct rte_flow_action_ethdev *)actions->conf; + if (rte_eth_dev_get_name_by_port(act_ethdev->port_id, if_name)) { + plt_err("Name not found for output port id"); + goto err_exit; + } + portid_eth_dev = rte_eth_dev_allocated(if_name); + if (!portid_eth_dev) { + plt_err("eth_dev not found for output port id"); + goto err_exit; + } + + plt_rep_dbg("Rule installed by port %d if_name %s act_ethdev->port_id %d", + eth_dev->data->port_id, if_name, act_ethdev->port_id); + if (cnxk_ethdev_is_representor(if_name)) { + if (representor_rep_portid_action(in_actions, eth_dev, + portid_eth_dev, actions->type, + rep_pattern, dst_pf_func, is_rep, + free_allocs, &i)) { + plt_err("Representor port action set failed"); + goto err_exit; + } + } else { + if (actions->type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) + continue; + /* Normal port as represented_port action is not supported */ + return -ENOTSUP; + } + break; case RTE_FLOW_ACTION_TYPE_PORT_ID: + /* No port ID action on representor ethdevs */ + if (is_rep) + continue; in_actions[i].type = ROC_NPC_ACTION_TYPE_PORT_ID; in_actions[i].conf = actions->conf; - act_ethdev = (const
struct rte_flow_action_ethdev *) - actions->conf; - port_act = (const struct rte_flow_action_port_id *) - actions->conf; + act_ethdev = (const struct rte_flow_action_ethdev *)actions->conf; + port_act = (const struct rte_flow_action_port_id *)actions->conf; if (rte_eth_dev_get_name_by_port( - actions->type != RTE_FLOW_ACTION_TYPE_PORT_ID ? - act_ethdev->port_id : port_act->id, if_name)) { + actions->type != RTE_FLOW_ACTION_TYPE_PORT_ID ? + act_ethdev->port_id : + port_act->id, + if_name)) { plt_err("Name not found for output port id"); goto err_exit; } @@ -297,7 +479,7 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, act_ethdev->port_id); if (representor_portid_action(in_actions, portid_eth_dev, dst_pf_func, has_tunnel_pattern, - &i)) { + free_allocs, &i)) { plt_err("Representor port action set failed"); goto err_exit; } @@ -321,6 +503,9 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, break; case RTE_FLOW_ACTION_TYPE_RSS: + /* No RSS action on representor ethdevs */ + if (is_rep) + continue; rc = npc_rss_action_validate(eth_dev, attr, actions); if (rc) goto err_exit; @@ -397,14 +582,29 @@ cnxk_map_actions(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, static int cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern[], - struct roc_npc_item_info in_pattern[], uint8_t *has_tunnel_pattern) + struct roc_npc_item_info in_pattern[], uint8_t *has_tunnel_pattern, bool is_rep, + uint8_t *rep_pattern, uint64_t *free_allocs) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct rte_flow_item_ethdev *rep_eth_dev; struct rte_eth_dev *portid_eth_dev; char if_name[RTE_ETH_NAME_MAX_LEN]; struct cnxk_eth_dev *hw_dst; - int i = 0; + struct cnxk_rep_dev *rdev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; + int i = 0, j = 0; + + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rdev = cnxk_rep_pmd_priv(eth_dev); + npc 
= &rdev->parent_dev->npc; + + npc->rep_npc = npc; + npc->rep_port_id = rdev->port_id; + npc->rep_pf_func = rdev->hw_func; + } while (pattern->type != RTE_FLOW_ITEM_TYPE_END) { in_pattern[i].spec = pattern->spec; @@ -412,7 +612,8 @@ cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern in_pattern[i].mask = pattern->mask; in_pattern[i].type = term[pattern->type].item_type; in_pattern[i].size = term[pattern->type].item_size; - if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) { + if (pattern->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT || + pattern->type == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) { rep_eth_dev = (const struct rte_flow_item_ethdev *)pattern->spec; if (rte_eth_dev_get_name_by_port(rep_eth_dev->port_id, if_name)) { plt_err("Name not found for output port id"); @@ -423,11 +624,7 @@ cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern plt_err("eth_dev not found for output port id"); goto fail; } - if (strcmp(portid_eth_dev->device->driver->name, - eth_dev->device->driver->name) != 0) { - plt_err("Output port not under same driver"); - goto fail; - } + *rep_pattern = pattern->type; if (cnxk_ethdev_is_representor(if_name)) { /* Case where represented port not part of same * app and represented by a representor port. 
@@ -437,20 +634,56 @@ cnxk_map_pattern(struct rte_eth_dev *eth_dev, const struct rte_flow_item pattern rep_dev = cnxk_rep_pmd_priv(portid_eth_dev); eswitch_dev = rep_dev->parent_dev; - dev->npc.rep_npc = &eswitch_dev->npc; - dev->npc.rep_port_id = rep_eth_dev->port_id; - dev->npc.rep_pf_func = rep_dev->hw_func; + npc->rep_npc = &eswitch_dev->npc; + npc->rep_port_id = rep_eth_dev->port_id; + npc->rep_pf_func = rep_dev->hw_func; + + if (pattern->type == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) { + struct rte_flow_item_vlan *vlan; + + npc->rep_pf_func = eswitch_dev->npc.pf_func; + /* Add VLAN pattern corresponding to rep_id */ + i++; + vlan = plt_zmalloc(sizeof(struct rte_flow_item_vlan), 0); + if (!vlan) { + plt_err("Failed to allocate memory"); + return -ENOMEM; + } + + while (free_allocs[j] != 0) + j++; + free_allocs[j] = (uint64_t)vlan; + + npc->rep_rx_channel = ROC_ESWITCH_LBK_CHAN; + vlan->hdr.vlan_tci = RTE_BE16(rep_dev->rep_id); + in_pattern[i].spec = (struct rte_flow_item_vlan *)vlan; + in_pattern[i].last = NULL; + in_pattern[i].mask = &rte_flow_item_vlan_mask; + in_pattern[i].type = + term[RTE_FLOW_ITEM_TYPE_VLAN].item_type; + in_pattern[i].size = + term[RTE_FLOW_ITEM_TYPE_VLAN].item_size; + } + *rep_pattern |= 1 << IS_REP_BIT; plt_rep_dbg("Represented port %d act port %d rep_dev->hw_func 0x%x", rep_eth_dev->port_id, eth_dev->data->port_id, rep_dev->hw_func); } else { + if (strcmp(portid_eth_dev->device->driver->name, + eth_dev->device->driver->name) != 0) { + plt_err("Output port not under same driver"); + goto fail; + } + /* Normal port as port_representor pattern can't be supported */ + if (pattern->type == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR) + return -ENOTSUP; /* Case where represented port part of same app * as PF.
*/ hw_dst = portid_eth_dev->data->dev_private; - dev->npc.rep_npc = &hw_dst->npc; - dev->npc.rep_port_id = rep_eth_dev->port_id; - dev->npc.rep_pf_func = hw_dst->npc.pf_func; + npc->rep_npc = &hw_dst->npc; + npc->rep_port_id = rep_eth_dev->port_id; + npc->rep_pf_func = hw_dst->npc.pf_func; } } @@ -474,56 +707,96 @@ cnxk_map_flow_data(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr struct roc_npc_attr *in_attr, struct roc_npc_item_info in_pattern[], struct roc_npc_action in_actions[], struct roc_npc_action_sample *in_sample_actions, uint32_t *flowkey_cfg, - uint16_t *dst_pf_func) + uint16_t *dst_pf_func, bool is_rep, uint64_t *free_allocs) { - uint8_t has_tunnel_pattern = 0; + uint8_t has_tunnel_pattern = 0, rep_pattern = 0; int rc; in_attr->priority = attr->priority; in_attr->ingress = attr->ingress; in_attr->egress = attr->egress; - rc = cnxk_map_pattern(eth_dev, pattern, in_pattern, &has_tunnel_pattern); + rc = cnxk_map_pattern(eth_dev, pattern, in_pattern, &has_tunnel_pattern, is_rep, + &rep_pattern, free_allocs); if (rc) { plt_err("Failed to map pattern list"); return rc; } + if (attr->transfer) { + /* rep_pattern is used to identify if RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT + * OR RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR is defined + if pattern's portid is + * normal port or representor port. + * For normal port_id, rep_pattern = pattern-> type + * For representor port, rep_pattern = pattern-> type | 1 << IS_REP_BIT + */ + if (is_rep || rep_pattern) { + if (rep_pattern == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT || + ((rep_pattern & 0x7f) == RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR)) + /* If pattern is port_representor or pattern has normal port as + * represented port, install ingress rule. 
+ */ + in_attr->ingress = attr->transfer; + else + in_attr->egress = attr->transfer; + } else { + in_attr->ingress = attr->transfer; + } + } + return cnxk_map_actions(eth_dev, attr, actions, in_actions, in_sample_actions, flowkey_cfg, - dst_pf_func, has_tunnel_pattern); + dst_pf_func, has_tunnel_pattern, is_rep, rep_pattern, free_allocs); } -static int -cnxk_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - struct rte_flow_error *error) +int +cnxk_flow_validate_common(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error, + bool is_rep) { struct roc_npc_item_info in_pattern[ROC_NPC_ITEM_TYPE_END + 1]; struct roc_npc_action in_actions[ROC_NPC_MAX_ACTION_COUNT]; - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); struct roc_npc_action_sample in_sample_action; - struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev; struct roc_npc_attr in_attr; + uint64_t *free_allocs, sz; + struct cnxk_eth_dev *dev; struct roc_npc_flow flow; uint32_t flowkey_cfg = 0; uint16_t dst_pf_func = 0; - int rc; - - /* Skip flow validation for MACsec. */ - if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY && - cnxk_eth_macsec_sess_get_by_sess(dev, actions[0].conf) != NULL) - return 0; + struct roc_npc *npc; + int rc, j; + + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + /* Skip flow validation for MACsec. 
*/ + if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY && + cnxk_eth_macsec_sess_get_by_sess(dev, actions[0].conf) != NULL) + return 0; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } memset(&flow, 0, sizeof(flow)); memset(&in_sample_action, 0, sizeof(in_sample_action)); flow.is_validate = true; + sz = ROC_NPC_MAX_ACTION_COUNT + ROC_NPC_ITEM_TYPE_END + 1; + free_allocs = plt_zmalloc(sz * sizeof(uint64_t), 0); + if (!free_allocs) { + rte_flow_error_set(error, -ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, + "Failed to map flow data"); + return -ENOMEM; + } rc = cnxk_map_flow_data(eth_dev, attr, pattern, actions, &in_attr, in_pattern, in_actions, - &in_sample_action, &flowkey_cfg, &dst_pf_func); + &in_sample_action, &flowkey_cfg, &dst_pf_func, is_rep, free_allocs); if (rc) { rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, "Failed to map flow data"); - return rc; + goto clean; } rc = roc_npc_flow_parse(npc, &in_attr, in_pattern, in_actions, &flow); @@ -531,73 +804,147 @@ cnxk_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr if (rc) { rte_flow_error_set(error, 0, rc, NULL, "Flow validation failed"); - return rc; + goto clean; } - return 0; +clean: + /* Freeing the allocations done for additional patterns/actions */ + for (j = 0; (j < (int)sz) && free_allocs[j]; j++) + plt_free((void *)free_allocs[j]); + plt_free(free_allocs); + + return rc; +} + +static int +cnxk_flow_validate(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + return cnxk_flow_validate_common(eth_dev, attr, pattern, actions, error, false); } struct roc_npc_flow * -cnxk_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +cnxk_flow_create_common(struct 
rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error, + bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); struct roc_npc_item_info in_pattern[ROC_NPC_ITEM_TYPE_END + 1]; struct roc_npc_action in_actions[ROC_NPC_MAX_ACTION_COUNT]; struct roc_npc_action_sample in_sample_action; - struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev = NULL; + struct roc_npc_flow *flow = NULL; + struct cnxk_eth_dev *dev = NULL; struct roc_npc_attr in_attr; - struct roc_npc_flow *flow; + uint64_t *free_allocs, sz; uint16_t dst_pf_func = 0; + struct roc_npc *npc; int errcode = 0; - int rc; + int rc, j; + + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + sz = ROC_NPC_MAX_ACTION_COUNT + ROC_NPC_ITEM_TYPE_END + 1; + free_allocs = plt_zmalloc(sz * sizeof(uint64_t), 0); + if (!free_allocs) { + rte_flow_error_set(error, -ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, + "Failed to map flow data"); + return NULL; + } memset(&in_sample_action, 0, sizeof(in_sample_action)); memset(&in_attr, 0, sizeof(struct roc_npc_attr)); rc = cnxk_map_flow_data(eth_dev, attr, pattern, actions, &in_attr, in_pattern, in_actions, - &in_sample_action, &npc->flowkey_cfg_state, &dst_pf_func); + &in_sample_action, &npc->flowkey_cfg_state, &dst_pf_func, is_rep, + free_allocs); if (rc) { - rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, "Failed to map flow data"); - return NULL; + goto clean; } flow = roc_npc_flow_create(npc, &in_attr, in_pattern, in_actions, dst_pf_func, &errcode); if (errcode != 0) { rte_flow_error_set(error, errcode, errcode, NULL, roc_error_msg_get(errcode)); - return NULL; + goto clean; } 
+clean: + /* Freeing the allocations done for additional patterns/actions */ + for (j = 0; (j < (int)sz) && free_allocs[j]; j++) + plt_free((void *)free_allocs[j]); + plt_free(free_allocs); + return flow; } +struct roc_npc_flow * +cnxk_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + return cnxk_flow_create_common(eth_dev, attr, pattern, actions, error, false); +} + int -cnxk_flow_destroy(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, - struct rte_flow_error *error) +cnxk_flow_destroy_common(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, + struct rte_flow_error *error, bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; int rc; + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + rc = roc_npc_flow_destroy(npc, flow); if (rc) - rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Flow Destroy failed"); + rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Flow Destroy failed"); return rc; } -static int -cnxk_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) +int +cnxk_flow_destroy(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, + struct rte_flow_error *error) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - struct roc_npc *npc = &dev->npc; + return cnxk_flow_destroy_common(eth_dev, flow, error, false); +} + +int +cnxk_flow_flush_common(struct rte_eth_dev *eth_dev, struct rte_flow_error *error, bool is_rep) +{ + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; int rc; + /* is_rep set for 
operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + rc = roc_npc_mcam_free_all_resources(npc); if (rc) { - rte_flow_error_set(error, EIO, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Failed to flush filter"); + rte_flow_error_set(error, EIO, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to flush filter"); return -rte_errno; } @@ -605,14 +952,21 @@ cnxk_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) } static int -cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, - const struct rte_flow_action *action, void *data, - struct rte_flow_error *error) +cnxk_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error) +{ + return cnxk_flow_flush_common(eth_dev, error, false); +} + +int +cnxk_flow_query_common(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, + struct rte_flow_error *error, bool is_rep) { struct roc_npc_flow *in_flow = (struct roc_npc_flow *)flow; - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - struct roc_npc *npc = &dev->npc; struct rte_flow_query_count *query = data; + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; const char *errmsg = NULL; int errcode = ENOTSUP; int rc; @@ -627,6 +981,15 @@ cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, goto err_exit; } + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } + if (in_flow->use_pre_alloc) rc = roc_npc_inl_mcam_read_counter(in_flow->ctr_id, &query->hits); else @@ -660,8 +1023,15 @@ cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, } static int -cnxk_flow_isolate(struct rte_eth_dev *eth_dev 
__rte_unused, - int enable __rte_unused, struct rte_flow_error *error) +cnxk_flow_query(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, struct rte_flow_error *error) +{ + return cnxk_flow_query_common(eth_dev, flow, action, data, error, false); +} + +static int +cnxk_flow_isolate(struct rte_eth_dev *eth_dev __rte_unused, int enable __rte_unused, + struct rte_flow_error *error) { /* If we support, we need to un-install the default mcam * entry for this port. @@ -673,16 +1043,25 @@ cnxk_flow_isolate(struct rte_eth_dev *eth_dev __rte_unused, return -rte_errno; } -static int -cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, - FILE *file, struct rte_flow_error *error) +int +cnxk_flow_dev_dump_common(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file, + struct rte_flow_error *error, bool is_rep) { - struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); - struct roc_npc *npc = &dev->npc; + struct cnxk_rep_dev *rep_dev; + struct cnxk_eth_dev *dev; + struct roc_npc *npc; + + /* is_rep set for operation performed via representor ports */ + if (!is_rep) { + dev = cnxk_eth_pmd_priv(eth_dev); + npc = &dev->npc; + } else { + rep_dev = cnxk_rep_pmd_priv(eth_dev); + npc = &rep_dev->parent_dev->npc; + } if (file == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "Invalid file"); return -rte_errno; } @@ -701,8 +1080,15 @@ cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, } static int -cnxk_flow_get_aged_flows(struct rte_eth_dev *eth_dev, void **context, - uint32_t nb_contexts, struct rte_flow_error *err) +cnxk_flow_dev_dump(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file, + struct rte_flow_error *error) +{ + return cnxk_flow_dev_dump_common(eth_dev, flow, file, error, false); +} + +static int +cnxk_flow_get_aged_flows(struct rte_eth_dev *eth_dev, 
void **context, uint32_t nb_contexts, + struct rte_flow_error *err) { struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); struct roc_npc *roc_npc = &dev->npc; diff --git a/drivers/net/cnxk/cnxk_flow.h b/drivers/net/cnxk/cnxk_flow.h index bb23629819..226694fbed 100644 --- a/drivers/net/cnxk/cnxk_flow.h +++ b/drivers/net/cnxk/cnxk_flow.h @@ -24,4 +24,22 @@ struct roc_npc_flow *cnxk_flow_create(struct rte_eth_dev *dev, int cnxk_flow_destroy(struct rte_eth_dev *dev, struct roc_npc_flow *flow, struct rte_flow_error *error); +struct roc_npc_flow *cnxk_flow_create_common(struct rte_eth_dev *eth_dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error, bool is_rep); +int cnxk_flow_validate_common(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error, + bool is_rep); +int cnxk_flow_destroy_common(struct rte_eth_dev *eth_dev, struct roc_npc_flow *flow, + struct rte_flow_error *error, bool is_rep); +int cnxk_flow_flush_common(struct rte_eth_dev *eth_dev, struct rte_flow_error *error, bool is_rep); +int cnxk_flow_query_common(struct rte_eth_dev *eth_dev, struct rte_flow *flow, + const struct rte_flow_action *action, void *data, + struct rte_flow_error *error, bool is_rep); +int cnxk_flow_dev_dump_common(struct rte_eth_dev *eth_dev, struct rte_flow *flow, FILE *file, + struct rte_flow_error *error, bool is_rep); + #endif /* __CNXK_RTE_FLOW_H__ */ -- 2.18.0
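The VLAN-based demultiplexing described in the comments above (a `port_representor` action delivers via the eswitch, a `represented_port` action delivers to the actual port) can be sketched as a pure function. `encode_rep_tci` is an illustrative helper, not a driver function; the shift of 8 is taken from the `1<<8 | rep_id` comment in the patch (i.e. the value `CNXK_ESWITCH_VFPF_SHIFT` is used with).

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative: TX rules from a represented port push a VLAN whose TCI
 * routes the packet after the LBK loopback:
 *   port_representor action -> (1 << 8) | rep_id  (deliver via eswitch)
 *   represented_port action ->  rep_id            (deliver to actual port)
 * The default RX LBK rules then demultiplex on this TCI value. */
static uint16_t
encode_rep_tci(uint16_t rep_id, bool to_representor)
{
	return to_representor ? (uint16_t)(rep_id | (1u << 8)) : rep_id;
}
```

The receive side strips the VLAN again with the `ROC_NPC_ACTION_TYPE_VLAN_STRIP` action the patch appends.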
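The `free_allocs` bookkeeping the patch threads through the map/validate/create paths — record each helper allocation in the first empty slot of a zero-initialized array, then free everything in a single cleanup pass — can be sketched in isolation. Names here are stand-ins (`tracked_zalloc`/`release_all` for the inline `plt_zmalloc` + slot-recording code and the `clean:` loop), not the driver API.

```c
#include <stdint.h>
#include <stdlib.h>

#define MAX_ALLOCS 8

/* Allocate zeroed memory and record the pointer in the first free slot
 * of the tracking array, mirroring the "while (free_allocs[j] != 0) j++;"
 * idiom in the patch. Returns NULL on allocation failure. */
static void *
tracked_zalloc(uint64_t *free_allocs, size_t sz)
{
	void *p = calloc(1, sz);
	int j = 0;

	if (!p)
		return NULL;
	while (free_allocs[j] != 0)
		j++;
	free_allocs[j] = (uint64_t)(uintptr_t)p;
	return p;
}

/* Free every recorded allocation in one pass, stopping at the first
 * empty slot, as the "clean:" labels in the patch do. Returns the
 * number of entries freed. */
static int
release_all(uint64_t *free_allocs, int n)
{
	int j, freed = 0;

	for (j = 0; j < n && free_allocs[j]; j++) {
		free((void *)(uintptr_t)free_allocs[j]);
		freed++;
	}
	return freed;
}
```

This keeps every error path in the common flow functions to a single `goto clean;`, instead of freeing each synthesized pattern/action individually.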