From: Xueming Li <xuemingl@nvidia.com>
To: <dev@dpdk.org>
Cc: Xiaoyun Li
Date: Wed, 11 Aug 2021 17:04:06 +0300
Message-ID: <20210811140418.393264-4-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210811140418.393264-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20210811140418.393264-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 04/15] app/testpmd: make sure shared Rx queue polled on same core

Shared Rx queues use one set of Rx queues internally, so all of them must
be polled from the same forwarding core. Stop forwarding if a shared Rx
queue is scheduled on multiple cores.
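The constraint the new check enforces can be reduced to the following
minimal standalone sketch (illustrative only, not testpmd code; the
struct stream layout and every name in it are assumptions): a shared Rx
queue, identified by its switch domain and queue index, must not be
polled by streams running on two different lcores.

/* Minimal sketch of the shared-rxq conflict check (hypothetical types). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct stream {
	uint16_t domain_id; /* switch domain of the port */
	uint16_t rx_queue;  /* Rx queue index */
	uint16_t lcore;     /* lcore polling this stream */
	bool shared_rxq;    /* Rx queue configured as shared */
};

/* Return false if any shared Rx queue is polled from more than one lcore. */
static bool
shared_rxq_check(const struct stream *s, int n)
{
	for (int i = 0; i < n; i++) {
		if (!s[i].shared_rxq)
			continue;
		for (int j = i + 1; j < n; j++) {
			if (s[j].shared_rxq &&
			    s[i].domain_id == s[j].domain_id &&
			    s[i].rx_queue == s[j].rx_queue &&
			    s[i].lcore != s[j].lcore) {
				printf("queue %u of domain %u polled on lcores %u and %u\n",
				       s[i].rx_queue, s[i].domain_id,
				       s[i].lcore, s[j].lcore);
				return false;
			}
		}
	}
	return true;
}

int
main(void)
{
	/* Same (domain 0, queue 0) on two different lcores: rejected. */
	struct stream streams[] = {
		{ .domain_id = 0, .rx_queue = 0, .lcore = 1, .shared_rxq = true },
		{ .domain_id = 0, .rx_queue = 0, .lcore = 2, .shared_rxq = true },
	};

	return shared_rxq_check(streams, 2) ? 0 : 1;
}

The real check below walks the per-lcore forwarding stream ranges instead
of a flat array and reports the conflicting lcore/port/queue, but the
pairwise comparison is the same idea.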
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 app/test-pmd/config.c  | 91 ++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |  4 +-
 app/test-pmd/testpmd.h |  2 +
 3 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index bb882a56a4..51f7d26045 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2885,6 +2885,97 @@ port_rss_hash_key_update(portid_t port_id, char rss_type[], uint8_t *hash_key,
 	}
 }
 
+/*
+ * Check whether a shared rxq scheduled on other lcores.
+ */
+static bool
+fwd_stream_on_other_lcores(uint16_t domain_id, portid_t src_port,
+			   queueid_t src_rxq, lcoreid_t src_lc)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	for (lc_id = src_lc + 1; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not shared rxq. */
+				continue;
+			if (domain_id != port->dev_info.switch_info.domain_id)
+				continue;
+			if (fs->rx_queue != src_rxq)
+				continue;
+			printf("Shared RX queue can't be scheduled on different cores:\n");
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       src_lc, src_port, src_rxq);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       lc_id, fs->rx_port, fs->rx_queue);
+			printf("  please use --nb-cores=%hu to limit forwarding cores\n",
+			       nb_rxq);
+			return true;
+		}
+	}
+	return false;
+}
+
+/*
+ * Check shared rxq configuration.
+ *
+ * Shared group must not being scheduled on different core.
+ */
+bool
+pkt_fwd_shared_rxq_check(void)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	uint16_t domain_id;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	/*
+	 * Check streams on each core, make sure the same switch domain +
+	 * group + queue doesn't get scheduled on other cores.
+	 */
+	for (lc_id = 0; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			/* Update lcore info stream being scheduled. */
+			fs->lcore = fwd_lcores[lc_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not shared rxq. */
+				continue;
+			/* Check shared rxq not scheduled on remaining cores. */
+			domain_id = port->dev_info.switch_info.domain_id;
+			if (fwd_stream_on_other_lcores(domain_id, fs->rx_port,
+						       fs->rx_queue, lc_id))
+				return false;
+		}
+	}
+	return true;
+}
+
 /*
  * Setup forwarding configuration for each logical core.
  */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 67fd128862..d941bd982e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2169,10 +2169,12 @@ start_packet_forwarding(int with_tx_first)
 
 	fwd_config_setup();
 
+	pkt_fwd_config_display(&cur_fwd_config);
+	if (!pkt_fwd_shared_rxq_check())
+		return;
 	if(!no_flush_rx)
 		flush_fwd_rx_queues();
 
-	pkt_fwd_config_display(&cur_fwd_config);
 	rxtx_config_display();
 
 	fwd_stats_reset();
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f3b1d34e28..6497c56359 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -144,6 +144,7 @@ struct fwd_stream {
 	uint64_t core_cycles; /**< used for RX and TX processing */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
+	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
 };
 
 /**
@@ -785,6 +786,7 @@ void port_summary_header_display(void);
 void rx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void tx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void fwd_lcores_config_display(void);
+bool pkt_fwd_shared_rxq_check(void);
 void pkt_fwd_config_display(struct fwd_config *cfg);
 void rxtx_config_display(void);
 void fwd_config_setup(void);
-- 
2.25.1