From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xueming Li
To: Zhang Yuying
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko,
 Thomas Monjalon, Lior Margalit, "Ananyev Konstantin", Ajit Khaparde,
 Xiaoyun Li
Date: Wed, 20 Oct 2021 15:53:16 +0800
Message-ID: <20211020075319.2397551-5-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211020075319.2397551-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211020075319.2397551-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v11 4/7] app/testpmd: new parameter to enable shared Rx queue
List-Id: DPDK patches and discussions
Sender: "dev"

Add a "--rxq-share=X" parameter to enable shared Rx queues: queues are
shared if the device supports it, and fall back to standard Rx queues
otherwise. The share group number grows by one for every X ports. X
defaults to MAX, meaning all ports join share group 1. Each queue ID is
mapped one-to-one to the shared Rx queue ID. The "shared-rxq" forwarding
engine should be used with this mode; it is Rx-only and updates
per-stream statistics correctly.
Signed-off-by: Xueming Li
---
 app/test-pmd/config.c                 |  7 ++++++-
 app/test-pmd/parameters.c             | 13 +++++++++++++
 app/test-pmd/testpmd.c                | 20 +++++++++++++++++---
 app/test-pmd/testpmd.h                |  2 ++
 doc/guides/testpmd_app_ug/run_app.rst |  7 +++++++
 5 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2c1b06c544d..fa951a86704 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2738,7 +2738,12 @@ rxtx_config_display(void)
 		printf("      RX threshold registers: pthresh=%d hthresh=%d "
 		       " wthresh=%d\n",
 		       pthresh_tmp, hthresh_tmp, wthresh_tmp);
-		printf("      RX Offloads=0x%"PRIx64"\n", offloads_tmp);
+		printf("      RX Offloads=0x%"PRIx64, offloads_tmp);
+		if (rx_conf->share_group > 0)
+			printf(" share_group=%u share_qid=%u",
+			       rx_conf->share_group,
+			       rx_conf->share_qid);
+		printf("\n");
 	}
 
 	/* per tx queue config only for first queue to be less verbose */
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e321..30dae326310 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -167,6 +167,7 @@ usage(char* progname)
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
+	printf("  --rxq-share: number of ports per shared rxq groups, defaults to MAX(1 group)\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
 	printf("  --disable-device-start: do not automatically start port\n");
@@ -607,6 +608,7 @@ launch_args_parse(int argc, char** argv)
 		{ "rxpkts",			1, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
 		{ "txonly-multi-flow",		0, 0, 0 },
+		{ "rxq-share",			2, 0, 0 },
 		{ "eth-link-speed",		1, 0, 0 },
 		{ "disable-link-check",		0, 0, 0 },
 		{ "disable-device-start",	0, 0, 0 },
@@ -1271,6 +1273,17 @@ launch_args_parse(int argc, char** argv)
 			}
 			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
 				txonly_multi_flow = 1;
+			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
+				if (optarg == NULL) {
+					rxq_share = UINT32_MAX;
+				} else {
+					n = atoi(optarg);
+					if (n >= 0)
+						rxq_share = (uint32_t)n;
+					else
+						rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
+				}
+			}
 			if (!strcmp(lgopts[opt_idx].name, "no-flush-rx"))
 				no_flush_rx = 1;
 			if (!strcmp(lgopts[opt_idx].name, "eth-link-speed")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e17ec..123142ed110 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -498,6 +498,11 @@ uint8_t record_core_cycles;
  */
 uint8_t record_burst_stats;
 
+/*
+ * Number of ports per shared Rx queue group, 0 disable.
+ */
+uint32_t rxq_share;
+
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
 
@@ -3393,14 +3398,23 @@ dev_event_callback(const char *device_name, enum rte_dev_event_type type,
 }
 
 static void
-rxtx_port_config(struct rte_port *port)
+rxtx_port_config(portid_t pid)
 {
 	uint16_t qid;
 	uint64_t offloads;
+	struct rte_port *port = &ports[pid];
 
 	for (qid = 0; qid < nb_rxq; qid++) {
 		offloads = port->rx_conf[qid].offloads;
 		port->rx_conf[qid] = port->dev_info.default_rxconf;
+
+		if (rxq_share > 0 &&
+		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
+			/* Non-zero share group to enable RxQ share. */
+			port->rx_conf[qid].share_group = pid / rxq_share + 1;
+			port->rx_conf[qid].share_qid = qid; /* Equal mapping. */
+		}
+
 		if (offloads != 0)
 			port->rx_conf[qid].offloads = offloads;
@@ -3558,7 +3572,7 @@ init_port_config(void)
 			port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
 	}
 
-	rxtx_port_config(port);
+	rxtx_port_config(pid);
 
 	ret = eth_macaddr_get_print_err(pid, &port->eth_addr);
 	if (ret != 0)
@@ -3772,7 +3786,7 @@ init_port_dcb_config(portid_t pid,
 	memcpy(&rte_port->dev_conf, &port_conf, sizeof(struct rte_eth_conf));
 
-	rxtx_port_config(rte_port);
+	rxtx_port_config(pid);
 
 	/* VLAN filter */
 	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f3..3dfaaad94c0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -477,6 +477,8 @@ extern enum tx_pkt_split tx_pkt_split;
 
 extern uint8_t txonly_multi_flow;
 
+extern uint32_t rxq_share;
+
 extern uint16_t nb_pkt_per_burst;
 extern uint16_t nb_pkt_flowgen_clones;
 extern int nb_flows_flowgen;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 640eadeff73..ff5908dcd50 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -389,6 +389,13 @@ The command line options are:
 
     Generate multiple flows in txonly mode.
 
+*   ``--rxq-share=[X]``
+
+    Create queues in shared Rx queue mode if device supports.
+    Group number grows per X ports. X defaults to MAX, implies all ports
+    join share group 1. Forwarding engine "shared-rxq" should be used
+    which Rx only and update stream statistics correctly.
+
 *   ``--eth-link-speed``
 
     Set a forced link speed to the ethernet port::
-- 
2.33.0