From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org, Zhang Yuying
Cc: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko,
 Thomas Monjalon, Lior Margalit, Ananyev Konstantin, Ajit Khaparde,
 Xiaoyun Li
Subject: [dpdk-dev] [PATCH v10 4/7] app/testpmd: new parameter to enable shared Rx queue
Date: Tue, 19 Oct 2021 23:28:06 +0800
Message-ID: <20211019152809.2278464-5-xuemingl@nvidia.com>
In-Reply-To: <20211019152809.2278464-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211019152809.2278464-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
X-Mailer: git-send-email 2.33.0

Add a "--rxq-share=X" parameter to enable shared Rx queues: queues are
created in shared mode if the device supports it, otherwise testpmd falls
back to standard Rx queues.

The share group number grows every X ports. X defaults to MAX, meaning all
ports join share group 1. Each queue ID is mapped 1:1 to the shared Rx
queue ID.

The "shared-rxq" forwarding engine should be used; it is Rx-only and
updates stream statistics correctly.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 app/test-pmd/config.c                 |  7 ++++++-
 app/test-pmd/parameters.c             | 13 +++++++++++++
 app/test-pmd/testpmd.c                | 20 +++++++++++++++++---
 app/test-pmd/testpmd.h                |  2 ++
 doc/guides/testpmd_app_ug/run_app.rst |  7 +++++++
 5 files changed, 45 insertions(+), 4 deletions(-)
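
As a quick illustration (not part of the patch; the port and queue counts
are assumed), a standalone sketch of the share-group mapping described
above, i.e. share_group = pid / rxq_share + 1 and share_qid = qid, for
4 ports with 2 Rx queues each and --rxq-share=2:

  #include <stdio.h>

  int main(void)
  {
          unsigned int rxq_share = 2; /* --rxq-share=2 */
          unsigned int nb_rxq = 2;    /* assumed Rx queues per port */
          unsigned int nb_ports = 4;  /* assumed number of ports */
          unsigned int pid, qid;

          for (pid = 0; pid < nb_ports; pid++)
                  for (qid = 0; qid < nb_rxq; qid++)
                          printf("port %u rxq %u -> share_group=%u share_qid=%u\n",
                                 pid, qid, pid / rxq_share + 1, qid);
          return 0;
  }

Ports 0 and 1 land in share group 1, ports 2 and 3 in share group 2, and
each queue keeps its own index as share_qid ("equal mapping").
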
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2c1b06c544d..fa951a86704 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2738,7 +2738,12 @@ rxtx_config_display(void)
 		printf(" RX threshold registers: pthresh=%d hthresh=%d "
 		       " wthresh=%d\n",
 		       pthresh_tmp, hthresh_tmp, wthresh_tmp);
-		printf(" RX Offloads=0x%"PRIx64"\n", offloads_tmp);
+		printf(" RX Offloads=0x%"PRIx64, offloads_tmp);
+		if (rx_conf->share_group > 0)
+			printf(" share_group=%u share_qid=%u",
+			       rx_conf->share_group,
+			       rx_conf->share_qid);
+		printf("\n");
 	}

 	/* per tx queue config only for first queue to be less verbose */
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e321..30dae326310 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -167,6 +167,7 @@ usage(char* progname)
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
+	printf("  --rxq-share: number of ports per shared rxq groups, defaults to MAX(1 group)\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
 	printf("  --disable-device-start: do not automatically start port\n");
@@ -607,6 +608,7 @@ launch_args_parse(int argc, char** argv)
 		{ "rxpkts",			1, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
 		{ "txonly-multi-flow",		0, 0, 0 },
+		{ "rxq-share",			2, 0, 0 },
 		{ "eth-link-speed",		1, 0, 0 },
 		{ "disable-link-check",		0, 0, 0 },
 		{ "disable-device-start",	0, 0, 0 },
@@ -1271,6 +1273,17 @@ launch_args_parse(int argc, char** argv)
 			}
 			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
 				txonly_multi_flow = 1;
+			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
+				if (optarg == NULL) {
+					rxq_share = UINT32_MAX;
+				} else {
+					n = atoi(optarg);
+					if (n >= 0)
+						rxq_share = (uint32_t)n;
+					else
+						rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
+				}
+			}
 			if (!strcmp(lgopts[opt_idx].name, "no-flush-rx"))
 				no_flush_rx = 1;
 			if (!strcmp(lgopts[opt_idx].name, "eth-link-speed")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e17ec..123142ed110 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -498,6 +498,11 @@ uint8_t record_core_cycles;
  */
 uint8_t record_burst_stats;

+/*
+ * Number of ports per shared Rx queue group, 0 disable.
+ */
+uint32_t rxq_share;
+
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
@@ -3393,14 +3398,23 @@ dev_event_callback(const char *device_name, enum rte_dev_event_type type,
 }

 static void
-rxtx_port_config(struct rte_port *port)
+rxtx_port_config(portid_t pid)
 {
 	uint16_t qid;
 	uint64_t offloads;
+	struct rte_port *port = &ports[pid];

 	for (qid = 0; qid < nb_rxq; qid++) {
 		offloads = port->rx_conf[qid].offloads;
 		port->rx_conf[qid] = port->dev_info.default_rxconf;
+
+		if (rxq_share > 0 &&
+		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
+			/* Non-zero share group to enable RxQ share. */
+			port->rx_conf[qid].share_group = pid / rxq_share + 1;
+			port->rx_conf[qid].share_qid = qid; /* Equal mapping. */
+		}
+
 		if (offloads != 0)
 			port->rx_conf[qid].offloads = offloads;
@@ -3558,7 +3572,7 @@ init_port_config(void)
 			port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
 	}

-	rxtx_port_config(port);
+	rxtx_port_config(pid);

 	ret = eth_macaddr_get_print_err(pid, &port->eth_addr);
 	if (ret != 0)
@@ -3772,7 +3786,7 @@ init_port_dcb_config(portid_t pid,
 	memcpy(&rte_port->dev_conf, &port_conf, sizeof(struct rte_eth_conf));

-	rxtx_port_config(rte_port);
+	rxtx_port_config(pid);
 	/* VLAN filter */
 	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f3..3dfaaad94c0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -477,6 +477,8 @@ extern enum tx_pkt_split tx_pkt_split;

 extern uint8_t txonly_multi_flow;

+extern uint32_t rxq_share;
+
 extern uint16_t nb_pkt_per_burst;
 extern uint16_t nb_pkt_flowgen_clones;
 extern int nb_flows_flowgen;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 640eadeff73..ff5908dcd50 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -389,6 +389,13 @@ The command line options are:

   Generate multiple flows in txonly mode.

+* ``--rxq-share=[X]``
+
+  Create queues in shared Rx queue mode if the device supports it.
+  The share group number grows every X ports. X defaults to MAX, meaning
+  all ports join share group 1. The "shared-rxq" forwarding engine should
+  be used; it is Rx-only and updates stream statistics correctly.
+
 * ``--eth-link-speed``

   Set a forced link speed to the ethernet port::
-- 
2.33.0