From mboxrd@z Thu Jan  1 00:00:00 1970
From: Spike Du <spiked@nvidia.com>
To: <matan@nvidia.com>, <viacheslavo@nvidia.com>, <orika@nvidia.com>,
 <thomas@monjalon.net>
CC: <dev@dpdk.org>, <rasland@nvidia.com>
Subject: [RFC 6/6] app/testpmd: add LWM and Host Shaper command
Date: Fri, 1 Apr 2022 06:22:32 +0300
Message-ID: <20220401032232.1267376-7-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220401032232.1267376-1-spiked@nvidia.com>
References: <20220401032232.1267376-1-spiked@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Add command line options to support per-Rx-queue LWM configuration.
- Command syntax:
  set port <port_id> rxq <rxq_id> lwm <lwm_num>
  set port <port_id> host_shaper lwm_triggered <0|1> rate <rate_num>

- Example commands:
To configure LWM as 30% of rxq size on port 1 rxq 0:
testpmd> set port 1 rxq 0 lwm 30

To disable LWM on port 1 rxq 0:
testpmd> set port 1 rxq 0 lwm 0

To enable lwm_triggered on port 1 and disable current host shaper:
testpmd> set port 1 host_shaper lwm_triggered 1 rate 0

To disable lwm_triggered and current host shaper on port 1:
testpmd> set port 1 host_shaper lwm_triggered 0 rate 0

The rate unit is 100Mbps.
To disable lwm_triggered and configure a shaper of 5Gbps on port 1:
testpmd> set port 1 host_shaper lwm_triggered 0 rate 50

Add sample code to handle the rxq LWM event: it waits for a while so the
rxq can drain, then disables the host shaper and re-arms the LWM event.

Signed-off-by: Spike Du <spiked@nvidia.com>
---
 app/test-pmd/cmdline.c   | 149 +++++++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/config.c    | 122 ++++++++++++++++++++++++++++++++++++++
 app/test-pmd/meson.build |   3 +
 app/test-pmd/testpmd.c   |   3 +
 app/test-pmd/testpmd.h   |   5 ++
 doc/guides/nics/mlx5.rst |  76 ++++++++++++++++++++++++
 6 files changed, 358 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 7ab0575..8a5fe26 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -17807,6 +17807,151 @@ struct cmd_show_port_flow_transfer_proxy_result {
 	}
 };
 
+#ifdef RTE_NET_MLX5
+
+/* *** SET LIMIT WATER MARK FOR A RXQ OF A PORT *** */
+struct cmd_rxq_lwm_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t port;
+	uint16_t port_num;
+	cmdline_fixed_string_t rxq;
+	uint16_t rxq_num;
+	cmdline_fixed_string_t lwm;
+	uint16_t lwm_num;
+};
+
+static void cmd_rxq_lwm_parsed(void *parsed_result,
+		__rte_unused struct cmdline *cl,
+		__rte_unused void *data)
+{
+	struct cmd_rxq_lwm_result *res = parsed_result;
+	int ret = 0;
+
+	if ((strcmp(res->set, "set") == 0) && (strcmp(res->port, "port") == 0)
+	    && (strcmp(res->rxq, "rxq") == 0)
+	    && (strcmp(res->lwm, "lwm") == 0))
+		ret = set_rxq_lwm(res->port_num, res->rxq_num,
+				  res->lwm_num);
+	if (ret < 0)
+		printf("rxq_lwm_cmd error: (%s)\n", strerror(-ret));
+}
+
+cmdline_parse_token_string_t cmd_rxq_lwm_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				set, "set");
+cmdline_parse_token_string_t cmd_rxq_lwm_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				port, "port");
+cmdline_parse_token_num_t cmd_rxq_lwm_portnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+				port_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_rxq_lwm_rxq =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				rxq, "rxq");
+cmdline_parse_token_num_t cmd_rxq_lwm_rxqnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+				rxq_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_rxq_lwm_lwm =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				lwm, "lwm");
+cmdline_parse_token_num_t cmd_rxq_lwm_lwmnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+				lwm_num, RTE_UINT16);
+
+cmdline_parse_inst_t cmd_rxq_lwm = {
+	.f = cmd_rxq_lwm_parsed,
+	.data = (void *)0,
+	.help_str = "set port <port_id> rxq <rxq_id> lwm <lwm_num>: "
+		"Set LWM for the Rx queue on port_id",
+	.tokens = {
+		(void *)&cmd_rxq_lwm_set,
+		(void *)&cmd_rxq_lwm_port,
+		(void *)&cmd_rxq_lwm_portnum,
+		(void *)&cmd_rxq_lwm_rxq,
+		(void *)&cmd_rxq_lwm_rxqnum,
+		(void *)&cmd_rxq_lwm_lwm,
+		(void *)&cmd_rxq_lwm_lwmnum,
+		NULL,
+	},
+};
+
+/* *** SET HOST_SHAPER LWM TRIGGERED FOR A PORT *** */
+struct cmd_port_host_shaper_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t port;
+	uint16_t port_num;
+	cmdline_fixed_string_t host_shaper;
+	cmdline_fixed_string_t lwm_triggered;
+	uint16_t fr; /* Flag to enable/disable lwm_triggered (0 or 1). */
+	cmdline_fixed_string_t rate;
+	uint8_t rate_num;
+};
+
+static void cmd_port_host_shaper_parsed(void *parsed_result,
+		__rte_unused struct cmdline *cl,
+		__rte_unused void *data)
+{
+	struct cmd_port_host_shaper_result *res = parsed_result;
+	int ret = 0;
+
+	if ((strcmp(res->set, "set") == 0) && (strcmp(res->port, "port") == 0)
+	    && (strcmp(res->host_shaper, "host_shaper") == 0)
+	    && (strcmp(res->lwm_triggered, "lwm_triggered") == 0)
+	    && (strcmp(res->rate, "rate") == 0))
+		ret = set_port_host_shaper(res->port_num, res->fr,
+					   res->rate_num);
+	if (ret < 0)
+		printf("cmd_port_host_shaper error: (%s)\n", strerror(-ret));
+}
+
+cmdline_parse_token_string_t cmd_port_host_shaper_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				set, "set");
+cmdline_parse_token_string_t cmd_port_host_shaper_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				port, "port");
+cmdline_parse_token_num_t cmd_port_host_shaper_portnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+				port_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_port_host_shaper_host_shaper =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 host_shaper, "host_shaper");
+cmdline_parse_token_string_t cmd_port_host_shaper_lwm_triggered =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 lwm_triggered, "lwm_triggered");
+cmdline_parse_token_num_t cmd_port_host_shaper_fr =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+			      fr, RTE_UINT16);
+cmdline_parse_token_string_t cmd_port_host_shaper_rate =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 rate, "rate");
+cmdline_parse_token_num_t cmd_port_host_shaper_rate_num =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+			      rate_num, RTE_UINT8);
+
+cmdline_parse_inst_t cmd_port_host_shaper = {
+	.f = cmd_port_host_shaper_parsed,
+	.data = (void *)0,
+	.help_str = "set port <port_id> host_shaper lwm_triggered <0|1> "
+	"rate <rate_num>: Set host shaper lwm_triggered and rate for port_id",
+	.tokens = {
+		(void *)&cmd_port_host_shaper_set,
+		(void *)&cmd_port_host_shaper_port,
+		(void *)&cmd_port_host_shaper_portnum,
+		(void *)&cmd_port_host_shaper_host_shaper,
+		(void *)&cmd_port_host_shaper_lwm_triggered,
+		(void *)&cmd_port_host_shaper_fr,
+		(void *)&cmd_port_host_shaper_rate,
+		(void *)&cmd_port_host_shaper_rate_num,
+		NULL,
+	},
+};
+
+#endif
+
 /* ******************************************************************************** */
 
 /* list of instructions */
@@ -18093,6 +18238,10 @@ struct cmd_show_port_flow_transfer_proxy_result {
 	(cmdline_parse_inst_t *)&cmd_show_capability,
 	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
 	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
+#ifdef RTE_NET_MLX5
+	(cmdline_parse_inst_t *)&cmd_rxq_lwm,
+	(cmdline_parse_inst_t *)&cmd_port_host_shaper,
+#endif
 	NULL,
 };
 
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cc8e7aa..11ef7e3 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -39,6 +39,7 @@
 #include <rte_flow.h>
 #include <rte_mtr.h>
 #include <rte_errno.h>
+#include <rte_alarm.h>
 #ifdef RTE_NET_IXGBE
 #include <rte_pmd_ixgbe.h>
 #endif
@@ -52,6 +53,9 @@
 #include <rte_gro.h>
 #endif
 #include <rte_hexdump.h>
+#ifdef RTE_NET_MLX5
+#include <rte_pmd_mlx5.h>
+#endif
 
 #include "testpmd.h"
 #include "cmdline_mtr.h"
@@ -6281,3 +6285,121 @@ struct igb_ring_desc_16_bytes {
 		printf("  %s\n", buf);
 	}
 }
+
+#ifdef RTE_NET_MLX5
+static uint8_t lwms[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT + 1];
+static uint8_t host_shaper_lwm_triggered[RTE_MAX_ETHPORTS];
+
+#define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */
+static void
+lwm_event_rxq_limit_reached(uint16_t port_id, uint16_t rxq_id);
+
+static void
+mlx5_shaper_disable(void *args)
+{
+	uint32_t port_rxq_id = (uint32_t)(uintptr_t)args;
+	uint16_t port_id = port_rxq_id & 0xffff;
+	unsigned int qid;
+
+	printf("%s disable shaper\n", __func__);
+	/* Need to re-arm all previously configured rxqs. */
+	for (qid = 0; qid < nb_rxq; qid++) {
+		/* Re-arm the LWM event with the rxq's saved LWM value. */
+		if (rte_pmd_mlx5_config_rxq_lwm(port_id, qid, lwms[port_id][qid],
+						lwm_event_rxq_limit_reached))
+			printf("config lwm returns error\n");
+	}
+	/* Only disable the shaper when lwm_triggered is set. */
+	if (host_shaper_lwm_triggered[port_id] &&
+	    rte_pmd_mlx5_config_host_shaper(port_id, 0, 0))
+		printf("%s disable shaper returns error\n", __func__);
+}
+
+static void
+lwm_event_rxq_limit_reached(uint16_t port_id, uint16_t rxq_id)
+{
+	uint32_t port_rxq_id = port_id | (rxq_id << 16);
+
+	rte_eal_alarm_set(SHAPER_DISABLE_DELAY_US,
+			  mlx5_shaper_disable, (void *)(uintptr_t)port_rxq_id);
+	printf("%s port_id:%u rxq_id:%u\n", __func__, port_id, rxq_id);
+}
+
+static void
+mlx5_lwm_intr_handle_cancel_alarm(uint16_t port_id, uint16_t qid)
+{
+	uint32_t port_rxq_id = port_id | (qid << 16);
+	int retries = 1024;
+
+	rte_errno = 0;
+	while (--retries) {
+		rte_eal_alarm_cancel(mlx5_shaper_disable,
+				     (void *)(uintptr_t)port_rxq_id);
+		if (rte_errno != EINPROGRESS)
+			break;
+		rte_pause();
+	}
+}
+
+int
+set_rxq_lwm(portid_t port_id, uint16_t queue_idx, uint16_t lwm)
+{
+	struct rte_eth_link link;
+	int ret;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN))
+		return -EINVAL;
+	ret = eth_link_get_nowait_print_err(port_id, &link);
+	if (ret < 0)
+		return -EINVAL;
+	if (lwm > 99)
+		return -EINVAL;
+	/* When disabling LWM, the pending alarm must be canceled. */
+	if (!lwm)
+		mlx5_lwm_intr_handle_cancel_alarm(port_id, queue_idx);
+	ret = rte_pmd_mlx5_config_rxq_lwm(port_id, queue_idx, lwm,
+					  lwm_event_rxq_limit_reached);
+	if (ret)
+		return ret;
+	/* Save the input LWM only on success, for later re-arming. */
+	lwms[port_id][queue_idx] = lwm;
+	return 0;
+}
+
+/** Configure the host shaper's lwm_triggered flag and current rate.
+ *
+ * @param[in] port_id
+ *   Port identifier of the Ethernet device.
+ * @param[in] lwm_triggered
+ *   Disable (0) or enable (1) lwm_triggered mode.
+ * @param[in] rate
+ *   Host shaper rate, in units of 100Mbps.
+ * @return
+ *   0 on success, a negative value on failure.
+ */
+int
+set_port_host_shaper(portid_t port_id, uint16_t lwm_triggered, uint8_t rate)
+{
+	struct rte_eth_link link;
+	int ret;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN))
+		return -EINVAL;
+	ret = eth_link_get_nowait_print_err(port_id, &link);
+	if (ret < 0)
+		return ret;
+	host_shaper_lwm_triggered[port_id] = lwm_triggered ? 1 : 0;
+	ret = rte_pmd_mlx5_config_host_shaper(port_id, lwm_triggered ? 1 : 0,
+			RTE_BIT32(MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED));
+	if (ret)
+		return ret;
+	ret = rte_pmd_mlx5_config_host_shaper(port_id, rate, 0);
+	if (ret)
+		return ret;
+	return 0;
+}
+
+#endif
diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index 43130c8..c4fd379 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -73,3 +73,6 @@ endif
 if dpdk_conf.has('RTE_NET_DPAA')
     deps += ['bus_dpaa', 'mempool_dpaa', 'net_dpaa']
 endif
+if dpdk_conf.has('RTE_NET_MLX5')
+    deps += 'net_mlx5'
+endif
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index fe2ce19..3b53cd8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -66,6 +66,9 @@
 #ifdef RTE_EXEC_ENV_WINDOWS
 #include <process.h>
 #endif
+#ifdef RTE_NET_MLX5
+#include <rte_pmd_mlx5.h>
+#endif
 
 #include "testpmd.h"
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 31f766c..aed2057 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1163,6 +1163,11 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
 void flex_item_create(portid_t port_id, uint16_t flex_id, const char *filename);
 void flex_item_destroy(portid_t port_id, uint16_t flex_id);
 void port_flex_item_flush(portid_t port_id);
+#ifdef RTE_NET_MLX5
+int set_rxq_lwm(portid_t port_id, uint16_t queue_idx, uint16_t lwm);
+int set_port_host_shaper(portid_t port_id, uint16_t lwm_triggered,
+			 uint8_t rate);
+#endif
 
 extern int flow_parse(const char *src, void *result, unsigned int size,
 		      struct rte_flow_attr **attr,
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 35210c1..0df779f 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1677,3 +1677,79 @@ The procedure below is an example of using a ConnectX-5 adapter card (pf0) with
 #. For each VF PCIe, using the following command to bind the driver::
 
    $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/bind
+
+How to use LWM and Host Shaper
+------------------------------
+
+LWM introduction
+~~~~~~~~~~~~~~~~
+
+LWM (Limit WaterMark) is a per-Rx-queue attribute, configured as a
+percentage of the Rx queue size.
+When an Rx queue's available WQE count drops below its LWM, an event is
+sent to the PMD.
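+
+As a reference, arming an LWM event from application code might look like
+the sketch below. It is based on the ``rte_pmd_mlx5_config_rxq_lwm()`` API
+added earlier in this series; the callback signature follows the testpmd
+sample in this patch, and all other names are illustrative:
+
+.. code-block:: c
+
+   #include <stdint.h>
+   #include <stdio.h>
+   #include <rte_pmd_mlx5.h>
+
+   /* Invoked by the PMD when the rxq's available WQE count drops below LWM. */
+   static void
+   rxq_limit_reached(uint16_t port_id, uint16_t rxq_id)
+   {
+           printf("LWM event: port %u rxq %u\n", port_id, rxq_id);
+   }
+
+   static int
+   arm_lwm(uint16_t port_id, uint16_t rxq_id)
+   {
+           /* 30 means 30% of the Rx queue size; 0 disables LWM. */
+           return rte_pmd_mlx5_config_rxq_lwm(port_id, rxq_id, 30,
+                                              rxq_limit_reached);
+   }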
+
+Host shaper introduction
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The host shaper register is a per-host-port register that sets a rate
+shaper on the host port.
+All VF/host-PF representors belonging to one host port share one host shaper.
+For example, if representor 0 and representor 1 belong to the same host port,
+and a host shaper rate of 1Gbps is configured, the shaper throttles both
+representors' traffic from the host.
+The host shaper supports two modes of setting the shaper: immediate and
+deferred to LWM event trigger. In immediate mode, the rate limit is
+configured on the host shaper right away. In deferred mode, the shaper is
+not set until an LWM event is received by any Rx queue in a VF representor
+belonging to the host port. The only rate supported in deferred mode is
+100Mbps (immediate mode has no such restriction). In deferred mode, the
+shaper is set on the host port by the firmware upon receiving the LWM
+event, which throttles host traffic at minimum latency and prevents excess
+drops in the Rx queue.
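+
+In application code, both modes map to ``rte_pmd_mlx5_config_host_shaper()``
+as used by the testpmd sample in this patch. The sketch below shows one call
+per mode (function and flag names are from this series; the wrapper is
+illustrative):
+
+.. code-block:: c
+
+   #include <stdint.h>
+   #include <rte_bitops.h>
+   #include <rte_pmd_mlx5.h>
+
+   static int
+   host_shaper_example(uint16_t port_id)
+   {
+           int ret;
+
+           /* Immediate mode: set a 5Gbps shaper (rate unit is 100Mbps). */
+           ret = rte_pmd_mlx5_config_host_shaper(port_id, 50, 0);
+           if (ret)
+                   return ret;
+           /* Deferred mode: enable lwm_triggered so that firmware arms a
+            * 100Mbps shaper when any Rx queue of the port hits its LWM.
+            */
+           return rte_pmd_mlx5_config_host_shaper(port_id, 1,
+                           RTE_BIT32(MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED));
+   }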
+
+Testpmd CLI examples
+~~~~~~~~~~~~~~~~~~~~
+
+Testpmd provides sample command lines to configure LWM and sample logic to
+handle the LWM event.
+The typical workflow is: testpmd configures LWM for the Rx queues, enables
+lwm_triggered in the host shaper and registers a callback. When traffic
+from the host is too high and the available WQE count drops below LWM, the
+PMD receives an event and the firmware automatically configures a 100Mbps
+shaper on the host port; the PMD then calls the registered callback, which
+waits for a while to let the Rx queue drain and then disables the host
+shaper.
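+
+Condensed from the testpmd sample in this patch, the event handler defers
+the shaper disable with an EAL alarm so the Rx queue has time to drain (a
+sketch only; the full testpmd version also re-arms LWM on every configured
+queue and checks whether lwm_triggered is set):
+
+.. code-block:: c
+
+   #include <stdint.h>
+   #include <stdio.h>
+   #include <rte_alarm.h>
+   #include <rte_pmd_mlx5.h>
+
+   #define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */
+
+   /* Alarm callback: remove the host shaper once the rxq had time to drain. */
+   static void
+   shaper_disable(void *args)
+   {
+           uint16_t port_id = (uint32_t)(uintptr_t)args & 0xffff;
+
+           if (rte_pmd_mlx5_config_host_shaper(port_id, 0, 0))
+                   printf("disable shaper returns error\n");
+   }
+
+   /* LWM event handler: defer the disable instead of doing it inline. */
+   static void
+   rxq_limit_reached(uint16_t port_id, uint16_t rxq_id)
+   {
+           uint32_t port_rxq_id = port_id | ((uint32_t)rxq_id << 16);
+
+           rte_eal_alarm_set(SHAPER_DISABLE_DELAY_US, shaper_disable,
+                             (void *)(uintptr_t)port_rxq_id);
+   }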
+
+Let's assume we have a simple BlueField-2 setup: port 0 is the uplink and
+port 1 is a VF representor. Each port has 2 Rx queues.
+To control traffic from the host to the Arm cores, we can enable LWM in
+testpmd:
+
+.. code-block:: console
+
+   testpmd> set port 1 host_shaper lwm_triggered 1 rate 0
+   testpmd> set port 1 rxq 0 lwm 30
+   testpmd> set port 1 rxq 1 lwm 30
+
+The first command disables the current host shaper and enables LWM
+triggered mode.
+The other two commands configure the LWM to 30% of the Rx queue size for
+both Rx queues.
+When traffic from the host is too high, the testpmd console prints a log
+about receiving the LWM event, after which the host shaper is disabled.
+The traffic rate from the host is controlled and fewer drops happen in the
+Rx queues.
+
+To disable LWM and lwm_triggered, invoke the commands below in testpmd:
+
+.. code-block:: console
+
+   testpmd> set port 1 host_shaper lwm_triggered 0 rate 0
+   testpmd> set port 1 rxq 0 lwm 0
+   testpmd> set port 1 rxq 1 lwm 0
+
+It is recommended that an application disable LWM and lwm_triggered before
+exiting, if it enabled them earlier.
+
+The shaper can also be configured with an explicit rate. The rate unit is
+100Mbps, so the command below sets the current shaper to 5Gbps and disables
+lwm_triggered.
+
+.. code-block:: console
+
+   testpmd> set port 1 host_shaper lwm_triggered 0 rate 50
+
-- 
1.8.3.1