From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
Cc: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko,
 Thomas Monjalon, Lior Margalit, Ananyev Konstantin, Xiaoyun Li
Date: Thu, 30 Sep 2021 22:56:01 +0800
Message-ID: <20210930145602.763969-6-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210930145602.763969-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20210930145602.763969-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 5/6] app/testpmd: force shared Rx queue polled on same core
List-Id: DPDK patches and discussions
Shared Rx queues share one set of Rx queues belonging to group zero. A
shared Rx queue must be polled from a single core. This patch checks the
forwarding configuration and stops forwarding if a shared Rx queue is
scheduled on multiple cores.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 app/test-pmd/config.c  | 96 ++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |  4 +-
 app/test-pmd/testpmd.h |  2 +
 3 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 6c7f9dee065..8bfa26570ba 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2885,6 +2885,102 @@ port_rss_hash_key_update(portid_t port_id, char rss_type[], uint8_t *hash_key,
 	}
 }
 
+/*
+ * Check whether a shared rxq is scheduled on other lcores.
+ */
+static bool
+fwd_stream_on_other_lcores(uint16_t domain_id, portid_t src_port,
+			   queueid_t src_rxq, lcoreid_t src_lc,
+			   uint32_t shared_group)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	for (lc_id = src_lc + 1; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not a shared rxq. */
+				continue;
+			if (domain_id != port->dev_info.switch_info.domain_id)
+				continue;
+			if (fs->rx_queue != src_rxq)
+				continue;
+			if (rxq_conf->shared_group != shared_group)
+				continue;
+			printf("Shared RX queue group %u can't be scheduled on different cores:\n",
+			       shared_group);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       src_lc, src_port, src_rxq);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       lc_id, fs->rx_port, fs->rx_queue);
+			printf("  please use --nb-cores=%hu to limit forwarding cores\n",
+			       nb_rxq);
+			return true;
+		}
+	}
+	return false;
+}
+
+/*
+ * Check shared rxq configuration.
+ *
+ * A shared group must not be scheduled on different cores.
+ */
+bool
+pkt_fwd_shared_rxq_check(void)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	uint16_t domain_id;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	/*
+	 * Check streams on each core, make sure the same switch domain +
+	 * group + queue doesn't get scheduled on other cores.
+	 */
+	for (lc_id = 0; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			/* Update lcore info of the stream being scheduled. */
+			fs->lcore = fwd_lcores[lc_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not a shared rxq. */
+				continue;
+			/* Check shared rxq not scheduled on remaining cores. */
+			domain_id = port->dev_info.switch_info.domain_id;
+			if (fwd_stream_on_other_lcores(domain_id, fs->rx_port,
+						       fs->rx_queue, lc_id,
+						       rxq_conf->shared_group))
+				return false;
+		}
+	}
+	return true;
+}
+
 /*
  * Setup forwarding configuration for each logical core.
  */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 417e92ade11..cab4b36b046 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2241,10 +2241,12 @@ start_packet_forwarding(int with_tx_first)
 
 	fwd_config_setup();
 
+	pkt_fwd_config_display(&cur_fwd_config);
+	if (!pkt_fwd_shared_rxq_check())
+		return;
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
-	pkt_fwd_config_display(&cur_fwd_config);
 	rxtx_config_display();
 	fwd_stats_reset();
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 3dfaaad94c0..f121a2da90c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -144,6 +144,7 @@ struct fwd_stream {
 	uint64_t core_cycles; /**< used for RX and TX processing */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
+	struct fwd_lcore *lcore; /**< Lcore this stream is scheduled on. */
 };
 
 /**
@@ -795,6 +796,7 @@ void port_summary_header_display(void);
 void rx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void tx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void fwd_lcores_config_display(void);
+bool pkt_fwd_shared_rxq_check(void);
 void pkt_fwd_config_display(struct fwd_config *cfg);
 void rxtx_config_display(void);
 void fwd_config_setup(void);
-- 
2.33.0
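
For context, the check added above assumes the application configured its
shared Rx queues through the RTE_ETH_RX_OFFLOAD_SHARED_RXQ offload flag and
the rte_eth_rxconf shared_group field introduced earlier in this series.
Below is a minimal, illustrative sketch of that application-side setup, not
part of this patch: the helper name, port id, descriptor count of 512 and
mempool argument are placeholders chosen for the example.

#include <errno.h>
#include <rte_ethdev.h>

/*
 * Illustrative only: configure one Rx queue of a port as a member of
 * shared group `group`. Queues of ports in the same switch domain set up
 * with the same group and queue id share one set of Rx buffers, which is
 * exactly the combination pkt_fwd_shared_rxq_check() keys on.
 */
static int
setup_shared_rxq(uint16_t port_id, uint16_t queue_id, uint32_t group,
		 struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	/* Shared Rx queue is a PMD capability; bail out if unsupported. */
	if ((dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SHARED_RXQ) == 0)
		return -ENOTSUP;
	rxconf = dev_info.default_rxconf;
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
	rxconf.shared_group = group;
	return rte_eth_rx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, mp);
}

With such a configuration, if two forwarding lcores end up polling the same
switch domain + group + queue combination, the new check reports both
conflicting lcores before forwarding starts and suggests reducing
--nb-cores so that each shared group is polled by a single core.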