From: Sunil Kumar Kori <skori@marvell.com>
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev@dpdk.org
Subject: [PATCH 3/3] net/cnxk: support congestion management ops
Date: Mon, 19 Sep 2022 17:58:50 +0530
Message-ID: <20220919122850.1059173-3-skori@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220919122850.1059173-1-skori@marvell.com>
References: <20220919122850.1059173-1-skori@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

From: Sunil Kumar Kori <skori@marvell.com>

Support congestion management operations: report the supported modes
(RED) and objects (Rx queue and Rx queue mempool), and initialize, set
and get the RED configuration for a given Rx queue.
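As an illustration, below is a minimal sketch (not part of this patch)
of how an application could program RED through the generic
rte_eth_cman_* API added by the dependency patch named below; port_id
and the queue number are hypothetical placeholders:

	struct rte_eth_cman_config cfg;
	int rc;

	/* Start from the driver defaults: RED on a Rx queue with
	 * min_th/max_th at 75%/95% of the queue size.
	 */
	rc = rte_eth_cman_config_init(port_id, &cfg);
	if (rc < 0)
		return rc;

	cfg.obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
	cfg.obj_param.rx_queue = 0;	/* queue to manage */
	cfg.mode = RTE_CMAN_RED;
	cfg.mode_param.red.min_th = 75;	/* percent of queue depth */
	cfg.mode_param.red.max_th = 95;

	rc = rte_eth_cman_config_set(port_id, &cfg);

The percentage thresholds are scaled to the hardware's 8-bit pass/drop
levels. For example, on a 4096-entry CQ, shift = log2(4096) - 8 = 4,
so min_th = 75 maps to xqe_red_pass = 256 - ((4096 * 75 / 100) >> 4)
= 64.
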
Depends-on: patch-24710 ("ethdev: support congestion management")

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Change-Id: Ic655574a1b9bb34baa177848b8148a29a87fe8cf
---
 doc/guides/nics/features/cnxk.ini   |   1 +
 drivers/net/cnxk/cnxk_ethdev.c      |   4 +
 drivers/net/cnxk/cnxk_ethdev.h      |  12 +++
 drivers/net/cnxk/cnxk_ethdev_cman.c | 140 ++++++++++++++++++++++++++++
 drivers/net/cnxk/meson.build        |   1 +
 5 files changed, 158 insertions(+)
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_cman.c

diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 1876fe86c7..bbb90e9527 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -41,6 +41,7 @@ Rx descriptor status = Y
 Tx descriptor status = Y
 Basic stats = Y
 Stats per queue = Y
+Congestion management = Y
 Extended stats = Y
 FW version = Y
 Module EEPROM dump = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 48170147a4..2d46938d68 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1678,6 +1678,10 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
 	.tm_ops_get = cnxk_nix_tm_ops_get,
 	.mtr_ops_get = cnxk_nix_mtr_ops_get,
 	.eth_dev_priv_dump = cnxk_nix_eth_dev_priv_dump,
+	.cman_info_get = cnxk_nix_cman_info_get,
+	.cman_config_init = cnxk_nix_cman_config_init,
+	.cman_config_set = cnxk_nix_cman_config_set,
+	.cman_config_get = cnxk_nix_cman_config_get,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index c09e9bff8e..f884a532e1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -417,6 +417,9 @@ struct cnxk_eth_dev {
 	struct cnxk_mtr_policy mtr_policy;
 	struct cnxk_mtr mtr;
 
+	/* Congestion Management */
+	struct rte_eth_cman_config cman_cfg;
+
 	/* Rx burst for cleanup(Only Primary) */
 	eth_rx_burst_t rx_pkt_burst_no_offload;
 
@@ -649,6 +652,15 @@ cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
 int cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uint32_t buf_sz,
 			      uint32_t nb_bufs, bool destroy);
 
+/* Congestion Management */
+int cnxk_nix_cman_info_get(struct rte_eth_dev *dev, struct rte_eth_cman_info *info);
+
+int cnxk_nix_cman_config_init(struct rte_eth_dev *dev, struct rte_eth_cman_config *config);
+
+int cnxk_nix_cman_config_set(struct rte_eth_dev *dev, struct rte_eth_cman_config *config);
+
+int cnxk_nix_cman_config_get(struct rte_eth_dev *dev, struct rte_eth_cman_config *config);
+
 /* Other private functions */
 int nix_recalc_mtu(struct rte_eth_dev *eth_dev);
 int nix_mtr_validate(struct rte_eth_dev *dev, uint32_t id);
diff --git a/drivers/net/cnxk/cnxk_ethdev_cman.c b/drivers/net/cnxk/cnxk_ethdev_cman.c
new file mode 100644
index 0000000000..5f019cd721
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_ethdev_cman.c
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell International Ltd.
+ */
+
+#include "cnxk_ethdev.h"
+
+#define CNXK_NIX_CMAN_RED_MIN_THRESH 75
+#define CNXK_NIX_CMAN_RED_MAX_THRESH 95
+
+int
+cnxk_nix_cman_info_get(struct rte_eth_dev *dev, struct rte_eth_cman_info *info)
+{
+	RTE_SET_USED(dev);
+
+	info->modes_supported = RTE_CMAN_RED;
+	info->objs_supported = RTE_ETH_CMAN_OBJ_RX_QUEUE | RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL;
+
+	return 0;
+}
+
+int
+cnxk_nix_cman_config_init(struct rte_eth_dev *dev, struct rte_eth_cman_config *config)
+{
+	RTE_SET_USED(dev);
+
+	memset(config, 0, sizeof(struct rte_eth_cman_config));
+	config->obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
+	config->mode = RTE_CMAN_RED;
+	config->mode_param.red.min_th = CNXK_NIX_CMAN_RED_MIN_THRESH;
+	config->mode_param.red.max_th = CNXK_NIX_CMAN_RED_MAX_THRESH;
+	return 0;
+}
+
+static int
+nix_cman_config_validate(struct rte_eth_dev *eth_dev, struct rte_eth_cman_config *config)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_eth_cman_info info;
+
+	memset(&info, 0, sizeof(struct rte_eth_cman_info));
+	cnxk_nix_cman_info_get(eth_dev, &info);
+
+	if (!(config->obj & info.objs_supported)) {
+		plt_err("Invalid object");
+		return -EINVAL;
+	}
+
+	if (!(config->mode & info.modes_supported)) {
+		plt_err("Invalid mode");
+		return -EINVAL;
+	}
+
+	if (config->obj_param.rx_queue >= dev->nb_rxq) {
+		plt_err("Invalid queue ID. Queue = %u", config->obj_param.rx_queue);
+		return -EINVAL;
+	}
+
+	if (config->mode_param.red.min_th > CNXK_NIX_CMAN_RED_MAX_THRESH) {
+		plt_err("Invalid RED minimum threshold. min_th = %u",
+			config->mode_param.red.min_th);
+		return -EINVAL;
+	}
+
+	if (config->mode_param.red.max_th > CNXK_NIX_CMAN_RED_MAX_THRESH) {
+		plt_err("Invalid RED maximum threshold. max_th = %u",
+			config->mode_param.red.max_th);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+cnxk_nix_cman_config_set(struct rte_eth_dev *eth_dev, struct rte_eth_cman_config *config)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
+	uint8_t drop, pass, shift;
+	uint8_t min_th, max_th;
+	struct roc_nix_cq *cq;
+	struct roc_nix_rq *rq;
+	bool is_mempool;
+	uint64_t buf_cnt;
+	int rc;
+
+	rc = nix_cman_config_validate(eth_dev, config);
+	if (rc)
+		return rc;
+
+	cq = &dev->cqs[config->obj_param.rx_queue];
+	rq = &dev->rqs[config->obj_param.rx_queue];
+	is_mempool = config->obj & RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL ? true : false;
+	min_th = config->mode_param.red.min_th;
+	max_th = config->mode_param.red.max_th;
+
+	if (is_mempool) {
+		buf_cnt = roc_npa_aura_op_limit_get(rq->aura_handle);
+		shift = plt_log2_u32(buf_cnt);
+		shift = shift < 8 ? 0 : shift - 8;
+		pass = (buf_cnt >> shift) - ((buf_cnt * min_th / 100) >> shift);
+		drop = (buf_cnt >> shift) - ((buf_cnt * max_th / 100) >> shift);
+		rq->red_pass = pass;
+		rq->red_drop = drop;
+
+		if (rq->spb_ena) {
+			buf_cnt = roc_npa_aura_op_limit_get(rq->spb_aura_handle);
+			shift = plt_log2_u32(buf_cnt);
+			shift = shift < 8 ? 0 : shift - 8;
+			pass = (buf_cnt >> shift) - ((buf_cnt * min_th / 100) >> shift);
+			drop = (buf_cnt >> shift) - ((buf_cnt * max_th / 100) >> shift);
+			rq->spb_red_pass = pass;
+			rq->spb_red_drop = drop;
+		}
+	} else {
+		shift = plt_log2_u32(cq->nb_desc);
+		shift = shift < 8 ? 0 : shift - 8;
+		pass = 256 - ((cq->nb_desc * min_th / 100) >> shift);
+		drop = 256 - ((cq->nb_desc * max_th / 100) >> shift);
+
+		rq->xqe_red_pass = pass;
+		rq->xqe_red_drop = drop;
+	}
+
+	rc = roc_nix_rq_cman_config(nix, rq);
+	if (rc)
+		return rc;
+
+	memcpy(&dev->cman_cfg, config, sizeof(struct rte_eth_cman_config));
+	return 0;
+}
+
+int
+cnxk_nix_cman_config_get(struct rte_eth_dev *eth_dev, struct rte_eth_cman_config *config)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+
+	memcpy(config, &dev->cman_cfg, sizeof(struct rte_eth_cman_config));
+	return 0;
+}
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index f347e98fce..9253e8d0ab 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -10,6 +10,7 @@ endif
 
 sources = files(
         'cnxk_ethdev.c',
+        'cnxk_ethdev_cman.c',
         'cnxk_ethdev_devargs.c',
         'cnxk_ethdev_mtr.c',
         'cnxk_ethdev_ops.c',
-- 
2.25.1