From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: dev@dpdk.org, Rakesh Kudurumalla <rkudurumalla@marvell.com>
Subject: [PATCH v2 3/3] net/cnxk: skip red drop for ingress policer
Date: Wed, 25 Jan 2023 13:02:31 +0530
Message-ID: <20230125073231.4007078-3-rkudurumalla@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230125073231.4007078-1-rkudurumalla@marvell.com>
References: <20221222013904.692922-1-rkudurumalla@marvell.com>
 <20230125073231.4007078-1-rkudurumalla@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Packet dropping depends on the actions configured for the meter. If
both the skip_red and drop actions are configured, tail drop is
invoked; if only the drop action is configured, RED drop is invoked.
This action is supported only when RED is configured using
rte_eth_cman_config_set().

Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
(A usage sketch of the application-side configuration follows the diff.)

 drivers/net/cnxk/cnxk_ethdev.h     |  1 +
 drivers/net/cnxk/cnxk_ethdev_mtr.c | 50 ++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ea8c70b8b7..7c53122b9f 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -238,6 +238,7 @@ struct policy_actions {
 		uint16_t queue;
 		uint32_t mtr_id;
 		struct action_rss *rss_desc;
+		bool skip_red;
 	};
 };
 
diff --git a/drivers/net/cnxk/cnxk_ethdev_mtr.c b/drivers/net/cnxk/cnxk_ethdev_mtr.c
index dcfa4223d5..27a6e4ef3d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_mtr.c
+++ b/drivers/net/cnxk/cnxk_ethdev_mtr.c
@@ -358,6 +358,9 @@ cnxk_nix_mtr_policy_validate(struct rte_eth_dev *dev,
 				if (action->type == RTE_FLOW_ACTION_TYPE_VOID)
 					supported[i] = true;
 
+				if (action->type == RTE_FLOW_ACTION_TYPE_SKIP_CMAN)
+					supported[i] = true;
+
 				if (!supported[i])
 					return update_mtr_err(i, error, true);
 			}
@@ -397,6 +400,10 @@ cnxk_fill_policy_actions(struct cnxk_mtr_policy_node *fmp,
 					fmp->actions[i].action_fate =
 						action->type;
 				}
+
+				if (action->type ==
+				    RTE_FLOW_ACTION_TYPE_SKIP_CMAN)
+					fmp->actions[i].skip_red = true;
 			}
 		}
 	}
@@ -1306,6 +1313,45 @@ nix_mtr_config_map(struct cnxk_meter_node *mtr, struct roc_nix_bpf_cfg *cfg)
 		cfg->action[ROC_NIX_BPF_COLOR_RED] = ROC_NIX_BPF_ACTION_DROP;
 }
 
+static void
+nix_mtr_config_red(struct cnxk_meter_node *mtr, struct roc_nix_rq *rq,
+		   struct roc_nix_bpf_cfg *cfg)
+{
+	struct cnxk_mtr_policy_node *policy = mtr->policy;
+
+	if ((rq->red_pass && rq->red_pass >= rq->red_drop) ||
+	    (rq->spb_red_pass && rq->spb_red_pass >= rq->spb_red_drop) ||
+	    (rq->xqe_red_pass && rq->xqe_red_pass >= rq->xqe_red_drop)) {
+		if (policy->actions[RTE_COLOR_GREEN].action_fate ==
+		    RTE_FLOW_ACTION_TYPE_DROP) {
+			if (policy->actions[RTE_COLOR_GREEN].skip_red)
+				cfg->action[ROC_NIX_BPF_COLOR_GREEN] =
+					ROC_NIX_BPF_ACTION_DROP;
+			else
+				cfg->action[ROC_NIX_BPF_COLOR_GREEN] =
+					ROC_NIX_BPF_ACTION_RED;
+		}
+		if (policy->actions[RTE_COLOR_YELLOW].action_fate ==
+		    RTE_FLOW_ACTION_TYPE_DROP) {
+			if (policy->actions[RTE_COLOR_YELLOW].skip_red)
+				cfg->action[ROC_NIX_BPF_COLOR_YELLOW] =
+					ROC_NIX_BPF_ACTION_DROP;
+			else
+				cfg->action[ROC_NIX_BPF_COLOR_YELLOW] =
+					ROC_NIX_BPF_ACTION_RED;
+		}
+		if (policy->actions[RTE_COLOR_RED].action_fate ==
+		    RTE_FLOW_ACTION_TYPE_DROP) {
+			if (policy->actions[RTE_COLOR_RED].skip_red)
+				cfg->action[ROC_NIX_BPF_COLOR_RED] =
+					ROC_NIX_BPF_ACTION_DROP;
+			else
+				cfg->action[ROC_NIX_BPF_COLOR_RED] =
+					ROC_NIX_BPF_ACTION_RED;
+		}
+	}
+}
+
 static void
 nix_precolor_table_map(struct cnxk_meter_node *mtr,
 		       struct roc_nix_bpf_precolor *tbl,
@@ -1483,6 +1529,10 @@ nix_mtr_configure(struct rte_eth_dev *eth_dev, uint32_t id)
 		if (!mtr[i]->is_used) {
 			memset(&cfg, 0, sizeof(struct roc_nix_bpf_cfg));
 			nix_mtr_config_map(mtr[i], &cfg);
+			for (j = 0; j < mtr[i]->rq_num; j++) {
+				rq = &dev->rqs[mtr[i]->rq_id[j]];
+				nix_mtr_config_red(mtr[i], rq, &cfg);
+			}
 			rc = roc_nix_bpf_config(nix, mtr[i]->bpf_id,
 						lvl_map[mtr[i]->level],
 						&cfg);
-- 
2.25.1
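
For reviewers, a minimal application-side sketch of the behaviour described
in the commit message (not part of this patch): RED is first enabled on the
Rx queue with rte_eth_cman_config_set(), then a meter policy pairs
RTE_FLOW_ACTION_TYPE_SKIP_CMAN with DROP so that red-colored packets are
tail-dropped instead of RED-dropped. The helper name, port/queue/policy ids
and thresholds are illustrative only, and the struct field names assume the
DPDK 22.11 ethdev congestion-management API.

#include <string.h>

#include <rte_ethdev.h>
#include <rte_mtr.h>

/* Hypothetical helper: enable RED on one Rx queue, then add a meter
 * policy whose RED color skips congestion management and drops, so the
 * ingress policer performs a tail drop rather than a RED drop.
 */
static int
setup_skip_red_policy(uint16_t port_id, uint16_t rx_queue, uint32_t policy_id)
{
	struct rte_eth_cman_config cman;
	struct rte_mtr_meter_policy_params pparams;
	struct rte_mtr_error mtr_err;
	/* Red-colored packets: bypass RED (SKIP_CMAN) and drop, i.e. tail
	 * drop. With DROP alone the driver keeps the RED drop behaviour.
	 */
	const struct rte_flow_action red_acts[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SKIP_CMAN },
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	int rc;

	/* RED must be configured first via rte_eth_cman_config_set();
	 * otherwise skip_red has nothing to bypass on this queue.
	 */
	rc = rte_eth_cman_config_init(port_id, &cman);
	if (rc)
		return rc;
	cman.obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
	cman.mode = RTE_CMAN_RED;
	cman.obj_param.rx_queue = rx_queue;
	cman.mode_param.red.min_th = 76;	/* example thresholds */
	cman.mode_param.red.max_th = 256;
	cman.mode_param.red.maxp_inv = 10;	/* drop probability 1/10 */
	rc = rte_eth_cman_config_set(port_id, &cman);
	if (rc)
		return rc;

	memset(&pparams, 0, sizeof(pparams));
	pparams.actions[RTE_COLOR_RED] = red_acts;

	return rte_mtr_meter_policy_add(port_id, policy_id, &pparams, &mtr_err);
}

Removing SKIP_CMAN from red_acts (leaving only DROP) keeps the RED drop
behaviour, matching the else branches in nix_mtr_config_red() above.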