From: "Li, Xiaoyun" <xiaoyun.li@intel.com>
To: Xueming Li <xuemingl@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>, "Zhang,
 Yuying" <yuying.zhang@intel.com>
CC: Jerin Jacob <jerinjacobk@gmail.com>, "Yigit, Ferruh"
 <ferruh.yigit@intel.com>, Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>, Thomas Monjalon
 <thomas@monjalon.net>, Lior Margalit <lmargalit@nvidia.com>, "Ananyev,
 Konstantin" <konstantin.ananyev@intel.com>, Ajit Khaparde
 <ajit.khaparde@broadcom.com>
Date: Thu, 21 Oct 2021 03:24:51 +0000
Message-ID: <DM4PR11MB5534BF46D5AB8422992AD16C99BF9@DM4PR11MB5534.namprd11.prod.outlook.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211020075319.2397551-1-xuemingl@nvidia.com>
 <20211020075319.2397551-7-xuemingl@nvidia.com>
In-Reply-To: <20211020075319.2397551-7-xuemingl@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH v11 6/7] app/testpmd: force shared Rx queue
 polled on same core

Hi

> -----Original Message-----
> From: Xueming Li <xuemingl@nvidia.com>
> Sent: Wednesday, October 20, 2021 15:53
> To: dev@dpdk.org; Zhang, Yuying <yuying.zhang@intel.com>
> Cc: xuemingl@nvidia.com; Jerin Jacob <jerinjacobk@gmail.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; Viacheslav Ovsiienko
> <viacheslavo@nvidia.com>; Thomas Monjalon <thomas@monjalon.net>; Lior
> Margalit <lmargalit@nvidia.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Li, Xiaoyun <xiaoyun.li@intel.com>
> Subject: [PATCH v11 6/7] app/testpmd: force shared Rx queue polled on same
> core
>
> Shared Rx queues must be polled on the same core. This patch checks and
> stops forwarding if a shared RxQ is scheduled on multiple cores.
>
> It's suggested to use the same number of Rx queues and polling cores.
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  app/test-pmd/config.c  | 103 ++++++++++++++++++++++++++++++++++++++++++++
>  app/test-pmd/testpmd.c |   4 +-
>  app/test-pmd/testpmd.h |   2 +
>  3 files changed, 108 insertions(+), 1 deletion(-)
>
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index fa951a86704..1f1307178be 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -2915,6 +2915,109 @@ port_rss_hash_key_update(portid_t port_id, char rss_type[], uint8_t *hash_key,
>  	}
>  }
>
> +/*
> + * Check whether a shared rxq is scheduled on other lcores.
> + */
> +static bool
> +fwd_stream_on_other_lcores(uint16_t domain_id, lcoreid_t src_lc,
> +			   portid_t src_port, queueid_t src_rxq,
> +			   uint32_t share_group, queueid_t share_rxq)
> +{
> +	streamid_t sm_id;
> +	streamid_t nb_fs_per_lcore;
> +	lcoreid_t  nb_fc;
> +	lcoreid_t  lc_id;
> +	struct fwd_stream *fs;
> +	struct rte_port *port;
> +	struct rte_eth_dev_info *dev_info;
> +	struct rte_eth_rxconf *rxq_conf;
> +
> +	nb_fc = cur_fwd_config.nb_fwd_lcores;
> +	/* Check remaining cores. */
> +	for (lc_id = src_lc + 1; lc_id < nb_fc; lc_id++) {
> +		sm_id = fwd_lcores[lc_id]->stream_idx;
> +		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
> +		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
> +		     sm_id++) {
> +			fs = fwd_streams[sm_id];
> +			port = &ports[fs->rx_port];
> +			dev_info = &port->dev_info;
> +			rxq_conf = &port->rx_conf[fs->rx_queue];
> +			if ((dev_info->dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)
> +			    == 0)
> +				/* Not shared rxq. */
> +				continue;
> +			if (domain_id != port->dev_info.switch_info.domain_id)
> +				continue;
> +			if (rxq_conf->share_group != share_group)
> +				continue;
> +			if (rxq_conf->share_qid != share_rxq)
> +				continue;
> +			printf("Shared Rx queue group %u queue %hu can't be scheduled on different cores:\n",
> +			       share_group, share_rxq);
> +			printf("  lcore %hhu Port %hu queue %hu\n",
> +			       src_lc, src_port, src_rxq);
> +			printf("  lcore %hhu Port %hu queue %hu\n",
> +			       lc_id, fs->rx_port, fs->rx_queue);
> +			printf("Please use --nb-cores=%hu to limit number of forwarding cores\n",
> +			       nb_rxq);
> +			return true;
> +		}
> +	}
> +	return false;
> +}
> +
> +/*
> + * Check shared rxq configuration.
> + *
> + * A shared group must not be scheduled on different cores.
> + */
> +bool
> +pkt_fwd_shared_rxq_check(void)
> +{
> +	streamid_t sm_id;
> +	streamid_t nb_fs_per_lcore;
> +	lcoreid_t  nb_fc;
> +	lcoreid_t  lc_id;
> +	struct fwd_stream *fs;
> +	uint16_t domain_id;
> +	struct rte_port *port;
> +	struct rte_eth_dev_info *dev_info;
> +	struct rte_eth_rxconf *rxq_conf;
> +
> +	nb_fc = cur_fwd_config.nb_fwd_lcores;
> +	/*
> +	 * Check streams on each core, make sure the same switch domain +
> +	 * group + queue doesn't get scheduled on other cores.
> +	 */
> +	for (lc_id = 0; lc_id < nb_fc; lc_id++) {
> +		sm_id = fwd_lcores[lc_id]->stream_idx;
> +		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
> +		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
> +		     sm_id++) {
> +			fs = fwd_streams[sm_id];
> +			/* Update lcore info of the stream being scheduled. */
> +			fs->lcore = fwd_lcores[lc_id];
> +			port = &ports[fs->rx_port];
> +			dev_info = &port->dev_info;
> +			rxq_conf = &port->rx_conf[fs->rx_queue];
> +			if ((dev_info->dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)
> +			    == 0)
> +				/* Not shared rxq. */
> +				continue;
> +			/* Check the shared rxq isn't scheduled on remaining cores.

As written, the check runs whenever the device has the RXQ_SHARE capability.
But what if the user wants a normal queue configuration while using a device
that has the shared-queue capability?
You should only run the check when "rxq_share > 0".
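Something like the sketch below at the top of pkt_fwd_shared_rxq_check()
would do it (assuming the rxq_share global added earlier in this series is
visible from config.c; untested):

	/* Sketch only: skip the lcore scan entirely unless the user asked
	 * for shared Rx queues via --rxq-share. */
	if (rxq_share == 0)
		return true;	/* no shared rxq requested, config is fine */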

> */
> +			domain_id = port->dev_info.switch_info.domain_id;
> +			if (fwd_stream_on_other_lcores(domain_id, lc_id,
> +						       fs->rx_port,
> +						       fs->rx_queue,
> +						       rxq_conf->share_group,
> +						       rxq_conf->share_qid))
> +				return false;
> +		}
> +	}
> +	return true;
> +}
> +
>  /*
>   * Setup forwarding configuration for each logical core.
>   */
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 123142ed110..f3f81ef561f 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2236,10 +2236,12 @@ start_packet_forwarding(int with_tx_first)
>
>  	fwd_config_setup();
>
> +	pkt_fwd_config_display(&cur_fwd_config);
> +	if (!pkt_fwd_shared_rxq_check())

Same comment as above.
This check should only happen if the user enables "--rxq-share=[X]".
You can limit the check here too:
if (rxq_share > 0 && !pkt_fwd_shared_rxq_check())
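With that guard, the call site would read roughly as follows (a sketch
reusing the lines from the quoted hunk, not a tested change):

	fwd_config_setup();

	pkt_fwd_config_display(&cur_fwd_config);
	/* Validate shared-RxQ placement only when --rxq-share was given. */
	if (rxq_share > 0 && !pkt_fwd_shared_rxq_check())
		return;
	if (!no_flush_rx)
		flush_fwd_rx_queues();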

> +		return;
>  	if(!no_flush_rx)
>  		flush_fwd_rx_queues();
>
> -	pkt_fwd_config_display(&cur_fwd_config);
>  	rxtx_config_display();
>=20
>  	fwd_stats_reset();
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 3dfaaad94c0..f121a2da90c 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -144,6 +144,7 @@ struct fwd_stream {
>  	uint64_t     core_cycles; /**< used for RX and TX processing */
>  	struct pkt_burst_stats rx_burst_stats;
>  	struct pkt_burst_stats tx_burst_stats;
> +	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
>  };
>
>  /**
> @@ -795,6 +796,7 @@ void port_summary_header_display(void);
>  void rx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
>  void tx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
>  void fwd_lcores_config_display(void);
> +bool pkt_fwd_shared_rxq_check(void);
>  void pkt_fwd_config_display(struct fwd_config *cfg);
>  void rxtx_config_display(void);
>  void fwd_config_setup(void);
> --
> 2.33.0