From: "Boyer, Andrew"
To: Jeremy Spewock
Cc: dev@dpdk.org, qi.z.zhang@intel.co, rasland@nvidia.com
Subject: Re: Testing scatter support for PMDs using testpmd
Date: Fri, 26 Jan 2024 15:04:07 +0000
Message-ID: <696A8523-BDBC-43BD-A998-710EB58AA920@amd.com>

On Jan 24, 2024, at 12:16 PM, Jeremy Spewock <jspewock@iol.unh.edu> wrote:

Hello maintainers,

In porting the first ethdev suite over to the new DTS framework, we found an inconsistency that we were hoping someone could shed some light on. It pertains to Intel and Mellanox NICs: one throws an error and refuses to start testpmd, while the other works as expected.

In the original DTS suite for testing scattered packets, testpmd is started with the flags --max-packet-len=9000 and --mbuf-size=2048. This starts and works fine on Intel NICs, but the same flags on a Mellanox NIC produce the error shown below. testpmd also has an --enable-scatter flag; when it is provided, the Mellanox NIC accepts the configuration and starts without error.

Our assumption is that this behavior should be consistent across NICs. Is there a reason one NIC allows starting testpmd without explicitly enabling scatter while the other doesn't? Should this flag always be present, and is it an error that testpmd can start without it in the first place?

Here is the error produced when attempting to run on a Mellanox NIC:

mlx5_net: port 0 Rx queue 0: Scatter offload is not configured and no enough mbuf space(2048) to contain the maximum RX packet length(9000) with head-room(128)
mlx5_net: port 0 unable to allocate rx queue index 0
Fail to configure port 0 rx queues
Start ports failed
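
For reference, the check behind this message boils down to the following: without the scatter offload, a single mbuf (data room minus headroom) must hold the maximum RX packet length. A minimal sketch of such a check using the public mbuf/ethdev symbols, not the actual mlx5 code:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Without RTE_ETH_RX_OFFLOAD_SCATTER, one mbuf must hold the whole
 * max RX packet. Illustrative only; the real mlx5 check lives in the PMD. */
static int
rx_fits_in_one_mbuf(struct rte_mempool *mp, uint32_t max_rx_pktlen,
                    uint64_t rx_offloads)
{
    uint32_t mbuf_space = rte_pktmbuf_data_room_size(mp) -
                          RTE_PKTMBUF_HEADROOM; /* headroom defaults to 128 */

    if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
        return 1; /* chained mbufs allowed; any packet length fits */

    return max_rx_pktlen <= mbuf_space; /* 9000 vs ~2048 fails here */
}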

Thank you for any insight,
Jeremy

Hello Jeremy,

I can share a little bit of what I've seen while working on our devices.

The client can specify the max packet size, MTU, mbuf size, and whether to enable Rx or Tx scatter/gather (s/g), separately. For performance reasons we don't want to enable s/g if it's not needed.
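
In ethdev terms the two directions are independent offload bits, so an application can request either one without the other. A hedged sketch using the standard ethdev flag names, error handling omitted:

#include <rte_ethdev.h>

/* Rx and Tx scatter/gather are requested separately at configure time. */
static int
configure_sg(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf conf = { 0 };

    conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;    /* Rx s/g */
    conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS; /* Tx s/g */

    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}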

Now, the client can easily set things up with a small MTU, start processing, stop the port, and then increase the MTU beyond what a single mbuf would hold. To avoid having to tear down and rebuild the queues on an MTU change just to enable s/g support, we automatically enable Rx s/g if the client presents mbufs which are too small to hold the max MTU.
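
Inside a PMD's Rx queue setup, that auto-enable amounts to something like the hypothetical helper below (ethdev_driver.h is the PMD-internal header); a sketch of the behavior, not our actual driver code:

#include <ethdev_driver.h>
#include <rte_mbuf.h>

/* If the offered mbufs can't hold the max frame, turn on Rx s/g rather
 * than forcing a queue teardown/rebuild on a later MTU change. */
static void
maybe_enable_rx_scatter(struct rte_eth_dev *dev, struct rte_mempool *mp)
{
    uint32_t max_frame = dev->data->mtu + RTE_ETHER_HDR_LEN +
                         RTE_ETHER_CRC_LEN;
    uint32_t buf_space = rte_pktmbuf_data_room_size(mp) -
                         RTE_PKTMBUF_HEADROOM;

    if (max_frame > buf_space)
        dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
}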

Unfortunately, the API to configure the Tx queues doesn't tell us anything about the mbuf size, and there's nothing stopping the client from configuring Tx before Rx. So we can't reliably auto-enable Tx s/g, and it is possible to get into a config where the Rx side produces chained mbufs which the Tx side can't handle.

To avoid this misconfig, we have some versions of our PMD set to fail to start if Rx s/g is enabled but Tx s/g isn't.
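
At dev_start time that guard might look roughly like this; again, a sketch of the behavior, not any specific PMD's code:

#include <errno.h>
#include <ethdev_driver.h>

/* Refuse to start when Rx may produce chained mbufs that Tx can't send. */
static int
check_sg_consistency(struct rte_eth_dev *dev)
{
    uint64_t rx = dev->data->dev_conf.rxmode.offloads;
    uint64_t tx = dev->data->dev_conf.txmode.offloads;

    if ((rx & RTE_ETH_RX_OFFLOAD_SCATTER) &&
        !(tx & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
        return -EINVAL; /* Rx s/g enabled but Tx s/g isn't */

    return 0;
}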

Hope this helps,
Andrew
