From: Spike Du <spiked@nvidia.com>
To: <matan@nvidia.com>, <viacheslavo@nvidia.com>, <orika@nvidia.com>,
 <thomas@monjalon.net>
CC: <dev@dpdk.org>, <rasland@nvidia.com>
Subject: [RFC v1 7/7] app/testpmd: add LWM and Host Shaper command
Date: Fri, 6 May 2022 06:56:45 +0300
Message-ID: <20220506035645.4101714-8-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220506035645.4101714-1-spiked@nvidia.com>
References: <20220401032232.1267376-2-spiked@nvidia.com>
 <20220506035645.4101714-1-spiked@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Add command line options to support per-rxq LWM configuration.
- Command syntax:
  set port <port_id> rxq <rxq_id> lwm <lwm_num>
  mlx5 set port <port_id> host_shaper lwm_triggered <0|1> rate <rate_num>

- Example commands:
To configure LWM as 30% of rxq size on port 1 rxq 0:
testpmd> set port 1 rxq 0 lwm 30

To disable LWM on port 1 rxq 0:
testpmd> set port 1 rxq 0 lwm 0

To enable lwm_triggered on port 1 and disable the current host shaper:
testpmd> mlx5 set port 1 host_shaper lwm_triggered 1 rate 0

To disable lwm_triggered and the current host shaper on port 1:
testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 0

The rate unit is 100Mbps.
To disable lwm_triggered and configure a shaper of 5Gbps on port 1:
testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 50

Add sample code to handle the rxq LWM event: it delays a while so that the rxq
can empty, then disables the host shaper and re-arms the LWM event.
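
As a condensed illustration of that flow (the handler actually added by this
patch is in drivers/net/mlx5/mlx5_test.c below; the lwm_event_cb and
shaper_disable names here are illustrative only, and the sketch assumes the
rte_eth_rx_queue_set_lwm(), RTE_ETH_EVENT_RXQ_LIMIT_REACHED and
rte_pmd_mlx5_config_host_shaper() APIs introduced earlier in this series):

#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_alarm.h>
#include <rte_pmd_mlx5.h>

#define SHAPER_DISABLE_DELAY_US 100000 /* let the Rx queue drain for 100ms */

/* Alarm callback: disable the host shaper and re-arm the LWM event. */
static void
shaper_disable(void *arg)
{
	uint32_t port_rxq = (uint32_t)(uintptr_t)arg;
	uint16_t port_id = port_rxq & 0xffff;
	uint16_t rxq_id = port_rxq >> 16;
	struct rte_eth_rxq_info qinfo;

	if (rte_eth_rx_queue_info_get(port_id, rxq_id, &qinfo) != 0)
		return;
	/* Re-arm the LWM event with the previously configured percentage. */
	rte_eth_rx_queue_set_lwm(port_id, rxq_id, qinfo.conf.lwm);
	/* Drop the 100Mbps shaper installed by firmware on the LWM event. */
	rte_pmd_mlx5_config_host_shaper(port_id, 0, 0);
}

/* Ethdev event callback for RTE_ETH_EVENT_RXQ_LIMIT_REACHED. */
static int
lwm_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg, void *ret_param)
{
	uint16_t rxq_id = (uint16_t)(uintptr_t)ret_param;
	uint32_t port_rxq = port_id | ((uint32_t)rxq_id << 16);

	RTE_SET_USED(type);
	RTE_SET_USED(cb_arg);
	/* Defer the shaper disable so the Rx queue has time to empty. */
	return rte_eal_alarm_set(SHAPER_DISABLE_DELAY_US, shaper_disable,
				 (void *)(uintptr_t)port_rxq);
}

An application would register the callback once per port with
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RXQ_LIMIT_REACHED,
lwm_event_cb, NULL).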

Signed-off-by: Spike Du <spiked@nvidia.com>
---
 app/test-pmd/cmdline.c       |  74 +++++++++++++++++
 app/test-pmd/config.c        |  23 ++++++
 app/test-pmd/meson.build     |   3 +
 app/test-pmd/testpmd.c       |  13 +++
 app/test-pmd/testpmd.h       |   1 +
 doc/guides/nics/mlx5.rst     |  76 +++++++++++++++++
 drivers/net/mlx5/meson.build |   7 +-
 drivers/net/mlx5/mlx5_test.c | 191 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_test.h |  27 ++++++
 9 files changed, 413 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_test.c
 create mode 100644 drivers/net/mlx5/mlx5_test.h

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 6ffea8e..f98cdf5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -67,6 +67,9 @@
 #include "cmdline_mtr.h"
 #include "cmdline_tm.h"
 #include "bpf_cmd.h"
+#ifdef RTE_NET_MLX5
+#include "mlx5_test.h"
+#endif
 
 static struct cmdline *testpmd_cl;
 
@@ -17807,6 +17810,73 @@ struct cmd_show_port_flow_transfer_proxy_result {
 	}
 };
 
+/* *** SET LIMIT WATER MARK FOR A RXQ OF A PORT *** */
+struct cmd_rxq_lwm_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t port;
+	uint16_t port_num;
+	cmdline_fixed_string_t rxq;
+	uint16_t rxq_num;
+	cmdline_fixed_string_t lwm;
+	uint16_t lwm_num;
+};
+
+static void cmd_rxq_lwm_parsed(void *parsed_result,
+		__rte_unused struct cmdline *cl,
+		__rte_unused void *data)
+{
+	struct cmd_rxq_lwm_result *res = parsed_result;
+	int ret = 0;
+
+	if ((strcmp(res->set, "set") == 0) && (strcmp(res->port, "port") == 0)
+	    && (strcmp(res->rxq, "rxq") == 0)
+	    && (strcmp(res->lwm, "lwm") == 0))
+		ret = set_rxq_lwm(res->port_num, res->rxq_num,
+				  res->lwm_num);
+	if (ret < 0)
+		printf("rxq_lwm_cmd error: (%s)\n", strerror(-ret));
+
+}
+
+cmdline_parse_token_string_t cmd_rxq_lwm_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				set, "set");
+cmdline_parse_token_string_t cmd_rxq_lwm_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				port, "port");
+cmdline_parse_token_num_t cmd_rxq_lwm_portnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+				port_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_rxq_lwm_rxq =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				rxq, "rxq");
+cmdline_parse_token_num_t cmd_rxq_lwm_rxqnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+				rxq_num, RTE_UINT8);
+cmdline_parse_token_string_t cmd_rxq_lwm_lwm =
+	TOKEN_STRING_INITIALIZER(struct cmd_rxq_lwm_result,
+				lwm, "lwm");
+cmdline_parse_token_num_t cmd_rxq_lwm_lwmnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_rxq_lwm_result,
+				lwm_num, RTE_UINT16);
+
+cmdline_parse_inst_t cmd_rxq_lwm = {
+	.f = cmd_rxq_lwm_parsed,
+	.data = (void *)0,
+	.help_str = "set port <port_id> rxq <rxq_id> lwm <lwm_num>: "
+		"Set lwm for rxq on port_id",
+	.tokens = {
+		(void *)&cmd_rxq_lwm_set,
+		(void *)&cmd_rxq_lwm_port,
+		(void *)&cmd_rxq_lwm_portnum,
+		(void *)&cmd_rxq_lwm_rxq,
+		(void *)&cmd_rxq_lwm_rxqnum,
+		(void *)&cmd_rxq_lwm_lwm,
+		(void *)&cmd_rxq_lwm_lwmnum,
+		NULL,
+	},
+};
+
 /* ******************************************************************************** */
 
 /* list of instructions */
@@ -18093,6 +18163,10 @@ struct cmd_show_port_flow_transfer_proxy_result {
 	(cmdline_parse_inst_t *)&cmd_show_capability,
 	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
 	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
+	(cmdline_parse_inst_t *)&cmd_rxq_lwm,
+#ifdef RTE_NET_MLX5
+	(cmdline_parse_inst_t *)&_rte_pmd_mlx5_cmd_port_host_shaper,
+#endif
 	NULL,
 };
 
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cc8e7aa..609fde1 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -6281,3 +6281,26 @@ struct igb_ring_desc_16_bytes {
 		printf("  %s\n", buf);
 	}
 }
+
+int
+set_rxq_lwm(portid_t port_id, uint16_t queue_idx, uint16_t lwm)
+{
+	struct rte_eth_link link;
+	int ret;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN))
+		return -EINVAL;
+	ret = eth_link_get_nowait_print_err(port_id, &link);
+	if (ret < 0)
+		return -EINVAL;
+	if (lwm > 99)
+		return -EINVAL;
+	ret = rte_eth_rx_queue_set_lwm(port_id, queue_idx, lwm);
+
+	if (ret)
+		return ret;
+	/* Save the input lwm. */
+	ports[port_id].rx_conf[queue_idx].lwm = lwm;
+	return 0;
+}
+
diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index 43130c8..c4fd379 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -73,3 +73,6 @@ endif
 if dpdk_conf.has('RTE_NET_DPAA')
     deps += ['bus_dpaa', 'mempool_dpaa', 'net_dpaa']
 endif
+if dpdk_conf.has('RTE_NET_MLX5')
+    deps += 'net_mlx5'
+endif
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index fe2ce19..683374c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -66,6 +66,9 @@
 #ifdef RTE_EXEC_ENV_WINDOWS
 #include <process.h>
 #endif
+#ifdef RTE_NET_MLX5
+#include "mlx5_test.h"
+#endif
 
 #include "testpmd.h"
 
@@ -417,6 +420,7 @@ struct fwd_engine * fwd_engines[] = {
 	[RTE_ETH_EVENT_NEW] = "device probed",
 	[RTE_ETH_EVENT_DESTROY] = "device released",
 	[RTE_ETH_EVENT_FLOW_AGED] = "flow aged",
+	[RTE_ETH_EVENT_RXQ_LIMIT_REACHED] = "rxq limit reached",
 	[RTE_ETH_EVENT_MAX] = NULL,
 };
 
@@ -3539,6 +3543,7 @@ struct pmd_test_command {
 eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void *param,
 		  void *ret_param)
 {
+	uint16_t rxq_idx;
 	RTE_SET_USED(param);
 	RTE_SET_USED(ret_param);
 
@@ -3570,6 +3575,14 @@ struct pmd_test_command {
 		ports[port_id].port_status = RTE_PORT_CLOSED;
 		printf("Port %u is closed\n", port_id);
 		break;
+	case RTE_ETH_EVENT_RXQ_LIMIT_REACHED:
+		rxq_idx = (uint16_t)(uintptr_t)ret_param;
+		printf("recv rxq_limit_reached event, port:%d rxq_id:%d\n", port_id,
+		       rxq_idx);
+#ifdef RTE_NET_MLX5
+		mlx5_test_lwm_event_rxq_limit_reached(port_id, rxq_idx);
+#endif
+		break;
 	default:
 		break;
 	}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 31f766c..b570ea7 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1163,6 +1163,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
 void flex_item_create(portid_t port_id, uint16_t flex_id, const char *filename);
 void flex_item_destroy(portid_t port_id, uint16_t flex_id);
 void port_flex_item_flush(portid_t port_id);
+int set_rxq_lwm(portid_t port_id, uint16_t queue_idx, uint16_t lwm);
 
 extern int flow_parse(const char *src, void *result, unsigned int size,
 		      struct rte_flow_attr **attr,
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4e2ebff..1e6e3c5 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1688,3 +1688,79 @@ The procedure below is an example of using a ConnectX-5 adapter card (pf0) with
 #. For each VF PCIe, using the following command to bind the driver::
 
    $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/bind
+
+How to use LWM and Host Shaper
+------------------------------
+
+LWM introduction
+~~~~~~~~~~~~~~~~
+
+LWM (Limit WaterMark) is a per-Rx-queue attribute. It is configured as a
+percentage of the Rx queue size. When the Rx queue's available WQE count
+drops below the LWM, an event is sent to the PMD.
+
+Host shaper introduction
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The host shaper register is a per-host-port register that sets a rate shaper
+on the host port.
+All VF/hostPF representors belonging to one host port share one host shaper.
+For example, if representor 0 and representor 1 belong to the same host port,
+and a host shaper rate of 1Gbps is configured, the shaper throttles both
+representors' traffic from the host.
+The host shaper supports two modes for setting the shaper: immediate and
+deferred to LWM event trigger. In immediate mode, the rate limit is configured
+on the host shaper immediately. In deferred mode, the shaper is not set until
+an LWM event is received by any Rx queue in a VF representor belonging to the
+host port. The only rate supported in deferred mode is 100Mbps (there is no
+limit on the rates supported in immediate mode). In deferred mode, the shaper
+is set on the host port by the firmware upon receiving the LWM event, which
+allows throttling host traffic on LWM events at minimum latency, preventing
+excess drops in the Rx queue.
+
+Testpmd CLI examples
+~~~~~~~~~~~~~~~~~~~~
+
+Testpmd provides sample command lines to configure LWM and sample logic to
+handle the LWM event.
+The typical workflow is: testpmd configures LWM for the Rx queues, enables
+lwm_triggered in the host shaper and registers a callback. When traffic from
+the host is too high and the available WQE count drops below the LWM, the PMD
+receives an event and the firmware automatically configures a 100Mbps shaper
+on the host port. The PMD then calls the registered callback, which waits a
+while to let the Rx queue drain and then disables the host shaper.
+
+Let's assume we have a simple BlueField-2 setup: port 0 is the uplink and
+port 1 is a VF representor. Each port has 2 Rx queues. To control traffic
+from the host to ARM, we can enable LWM in testpmd as follows:
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper lwm_triggered 1 rate 0
+   testpmd> set port 1 rxq 0 lwm 30
+   testpmd> set port 1 rxq 1 lwm 30
+
+The first command disables the current host shaper and enables LWM-triggered
+mode. The remaining commands set the LWM to 30% of the Rx queue size for both
+Rx queues. When traffic from the host is too high, the testpmd console prints
+a log about the received LWM event and the host shaper is then disabled. The
+traffic rate from the host is throttled and fewer drops occur in Rx queues.
+
+To disable LWM and lwm_triggered, we can invoke the commands below in testpmd:
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 0
+   testpmd> set port 1 rxq 0 lwm 0
+   testpmd> set port 1 rxq 1 lwm 0
+
+It is recommended that an application disables LWM and lwm_triggered before
+exiting, if it enabled them earlier.
+
+We can also configure the shaper with an immediate rate (the unit is 100Mbps);
+the command below sets the current shaper to 5Gbps and disables lwm_triggered.
+
+.. code-block:: console
+
+   testpmd> mlx5 set port 1 host_shaper lwm_triggered 0 rate 50
+
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 99210fd..4c4eea4 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -8,8 +8,10 @@ if not (is_linux or is_windows)
     subdir_done()
 endif
 
-deps += ['hash', 'common_mlx5']
-headers = files('rte_pmd_mlx5.h')
+deps += ['hash', 'common_mlx5', 'cmdline']
+headers = files('rte_pmd_mlx5.h',
+		'mlx5_test.h',
+	)
 sources = files(
         'mlx5.c',
         'mlx5_ethdev.c',
@@ -38,6 +40,7 @@ sources = files(
         'mlx5_vlan.c',
         'mlx5_utils.c',
         'mlx5_devx.c',
+        'mlx5_test.c',
 )
 
 if is_linux
diff --git a/drivers/net/mlx5/mlx5_test.c b/drivers/net/mlx5/mlx5_test.c
new file mode 100644
index 0000000..43d25fe
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_test.c
@@ -0,0 +1,191 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 6WIND S.A.
+ * Copyright 2021 Mellanox Technologies, Ltd
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include <rte_prefetch.h>
+#include <rte_common.h>
+#include <rte_branch_prediction.h>
+#include <rte_ether.h>
+#include <rte_alarm.h>
+#include <mlx5_common.h>
+#include <rte_pmd_mlx5.h>
+#include <rte_ethdev.h>
+#include "mlx5_autoconf.h"
+#include "mlx5_defs.h"
+#include "mlx5.h"
+#include "mlx5_utils.h"
+#include "mlx5_devx.h"
+#include "mlx5_rx.h"
+#include "mlx5_test.h"
+
+static uint8_t host_shaper_lwm_triggered[RTE_MAX_ETHPORTS];
+#define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */
+
+/**
+ * Disable the host shaper and re-arm LWM event.
+ *
+ * @param[in] args
+ *   uint32_t integer combining port_id and rxq_id.
+ */
+static void
+mlx5_test_host_shaper_disable(void *args)
+{
+	uint32_t port_rxq_id = (uint32_t)(uintptr_t)args;
+	uint16_t port_id = port_rxq_id & 0xffff;
+	uint16_t qid = (port_rxq_id >> 16) & 0xffff;
+	struct rte_eth_rxq_info qinfo;
+
+	printf("%s disable shaper\n", __func__);
+	if (rte_eth_rx_queue_info_get(port_id, qid, &qinfo)) {
+		printf("rx_queue_info_get returns error\n");
+		return;
+	}
+	/* Rearm the LWM event. */
+	if (rte_eth_rx_queue_set_lwm(port_id, qid, qinfo.conf.lwm)) {
+		printf("config lwm returns error\n");
+		return;
+	}
+	/* Only disable the shaper when lwm_triggered is set. */
+	if (host_shaper_lwm_triggered[port_id] &&
+	    rte_pmd_mlx5_config_host_shaper(port_id, 0, 0))
+		printf("%s disable shaper returns error\n", __func__);
+}
+
+void
+mlx5_test_lwm_event_rxq_limit_reached(uint16_t port_id, uint16_t rxq_id)
+{
+	uint32_t port_rxq_id = port_id | (rxq_id << 16);
+
+	rte_eal_alarm_set(SHAPER_DISABLE_DELAY_US,
+			  mlx5_test_host_shaper_disable,
+			  (void *)(uintptr_t)port_rxq_id);
+	printf("%s port_id:%u rxq_id:%u\n", __func__, port_id, rxq_id);
+}
+
+/**
+ * Configure host shaper's lwm_triggered and current rate.
+ *
+ * @param[in] lwm_triggered
+ *   Disable/enable lwm_triggered.
+ * @param[in] rate
+ *   Configure current host shaper rate.
+ * @return
+ *   On success, returns 0.
+ *   On failure, returns < 0.
+ */
+static int
+mlx5_test_set_port_host_shaper(uint16_t port_id, uint16_t lwm_triggered, uint8_t rate)
+{
+	struct rte_eth_link link;
+	bool port_id_valid = false;
+	uint16_t pid;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(pid)
+		if (port_id == pid) {
+			port_id_valid = true;
+			break;
+		}
+	if (!port_id_valid)
+		return -EINVAL;
+	ret = rte_eth_link_get_nowait(port_id, &link);
+	if (ret < 0)
+		return ret;
+	host_shaper_lwm_triggered[port_id] = lwm_triggered ? 1 : 0;
+	if (!lwm_triggered) {
+		ret = rte_pmd_mlx5_config_host_shaper(port_id, 0,
+		RTE_BIT32(MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED));
+	} else {
+		ret = rte_pmd_mlx5_config_host_shaper(port_id, 1,
+		RTE_BIT32(MLX5_HOST_SHAPER_FLAG_LWM_TRIGGERED));
+	}
+	if (ret)
+		return ret;
+	ret = rte_pmd_mlx5_config_host_shaper(port_id, rate, 0);
+	if (ret)
+		return ret;
+	return 0;
+}
+
+/* *** SET HOST_SHAPER FOR A PORT *** */
+struct cmd_port_host_shaper_result {
+	cmdline_fixed_string_t mlx5;
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t port;
+	uint16_t port_num;
+	cmdline_fixed_string_t host_shaper;
+	cmdline_fixed_string_t lwm_triggered;
+	uint16_t fr;
+	cmdline_fixed_string_t rate;
+	uint8_t rate_num;
+};
+
+static void cmd_port_host_shaper_parsed(void *parsed_result,
+		__rte_unused struct cmdline *cl,
+		__rte_unused void *data)
+{
+	struct cmd_port_host_shaper_result *res = parsed_result;
+	int ret = 0;
+
+	if ((strcmp(res->mlx5, "mlx5") == 0) &&
+	    (strcmp(res->set, "set") == 0) &&
+	    (strcmp(res->port, "port") == 0) &&
+	    (strcmp(res->host_shaper, "host_shaper") == 0) &&
+	    (strcmp(res->lwm_triggered, "lwm_triggered") == 0) &&
+	    (strcmp(res->rate, "rate") == 0))
+		ret = mlx5_test_set_port_host_shaper(res->port_num, res->fr,
+					   res->rate_num);
+	if (ret < 0)
+		printf("cmd_port_host_shaper error: (%s)\n", strerror(-ret));
+}
+
+cmdline_parse_token_string_t cmd_port_host_shaper_mlx5 =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				mlx5, "mlx5");
+cmdline_parse_token_string_t cmd_port_host_shaper_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				set, "set");
+cmdline_parse_token_string_t cmd_port_host_shaper_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				port, "port");
+cmdline_parse_token_num_t cmd_port_host_shaper_portnum =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+				port_num, RTE_UINT16);
+cmdline_parse_token_string_t cmd_port_host_shaper_host_shaper =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 host_shaper, "host_shaper");
+cmdline_parse_token_string_t cmd_port_host_shaper_lwm_triggered =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 lwm_triggered, "lwm_triggered");
+cmdline_parse_token_num_t cmd_port_host_shaper_fr =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+			      fr, RTE_UINT16);
+cmdline_parse_token_string_t cmd_port_host_shaper_rate =
+	TOKEN_STRING_INITIALIZER(struct cmd_port_host_shaper_result,
+				 rate, "rate");
+cmdline_parse_token_num_t cmd_port_host_shaper_rate_num =
+	TOKEN_NUM_INITIALIZER(struct cmd_port_host_shaper_result,
+			      rate_num, RTE_UINT8);
+cmdline_parse_inst_t _rte_pmd_mlx5_cmd_port_host_shaper = {
+	.f = cmd_port_host_shaper_parsed,
+	.data = (void *)0,
+	.help_str = "mlx5 set port <port_id> host_shaper lwm_triggered <0|1> "
+	"rate <rate_num>: Set HOST_SHAPER lwm_triggered and rate with port_id",
+	.tokens = {
+		(void *)&cmd_port_host_shaper_mlx5,
+		(void *)&cmd_port_host_shaper_set,
+		(void *)&cmd_port_host_shaper_port,
+		(void *)&cmd_port_host_shaper_portnum,
+		(void *)&cmd_port_host_shaper_host_shaper,
+		(void *)&cmd_port_host_shaper_lwm_triggered,
+		(void *)&cmd_port_host_shaper_fr,
+		(void *)&cmd_port_host_shaper_rate,
+		(void *)&cmd_port_host_shaper_rate_num,
+		NULL,
+	},
+};
diff --git a/drivers/net/mlx5/mlx5_test.h b/drivers/net/mlx5/mlx5_test.h
new file mode 100644
index 0000000..16efe88
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_test.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 6WIND S.A.
+ * Copyright 2021 Mellanox Technologies, Ltd
+ */
+
+#ifndef RTE_PMD_MLX5_TEST_H_
+#define RTE_PMD_MLX5_TEST_H_
+
+#include <cmdline_parse.h>
+#include <cmdline_parse_num.h>
+#include <cmdline_parse_string.h>
+
+/**
+ * RTE_ETH_EVENT_RXQ_LIMIT_REACHED handler sample code.
+ * It's called in testpmd; the workflow here is to delay a while until the
+ * Rx queue is empty, then disable the host shaper.
+ *
+ * @param[in] port_id
+ *   Port identifier.
+ * @param[in] rxq_id
+ *   Rx queue identifier.
+ */
+void
+mlx5_test_lwm_event_rxq_limit_reached(uint16_t port_id, uint16_t rxq_id);
+
+extern cmdline_parse_inst_t _rte_pmd_mlx5_cmd_port_host_shaper;
+#endif
-- 
1.8.3.1