From: Francesco Montorsi <francesco.montorsi@infovista.com>
To: Asaf Penso <asafp@nvidia.com>; "Yan, Xiaoping (NSB - CN/Hangzhou)" <xiaoping.yan@nokia-sbell.com>; Gerry Wan <gerryw@stanford.edu>; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Cc: Martin Weiser <martin.weiser@allegro-packets.com>; David Marchand <david.marchand@redhat.com>; users@dpdk.org
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Date: Thu, 11 Nov 2021 13:25:30 +0000
List-Id: DPDK usage discussions
Hi Asaf,

Thanks for your quick answer.

I'm trying to upgrade, will update you shortly.

However, from reading the full email thread
https://inbox.dpdk.org/users/DM8PR12MB5494459B49353FACCACEF3C3CDB89@DM8PR12MB5494.namprd12.prod.outlook.com/t/#mc9927dd8f5f092d5042d95fa520b29765d17ddf8
I think that upgrading does not fix this problem (at least it didn't fix it for Yan, from what I can see).

So please check on your side if possible.

Reproducing the problem just requires overloading the receiver side with too many PPS...

Thanks a lot,
Francesco

From: Asaf Penso
Sent: Thursday, November 11, 2021 6:28 AM
To: Francesco Montorsi; Yan, Xiaoping (NSB - CN/Hangzhou); Gerry Wan; Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Cc: Martin Weiser; David Marchand; users@dpdk.org
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hello Francesco,

To ensure the issue still exists, could you try the latest 19.11 LTS? 19.11.5 is a bit outdated and doesn't contain a lot of DPDK fixes.

In the meanwhile, I'll check internally about this issue and update.

Regards,
Asaf Penso

________________________________
From: Francesco Montorsi
Sent: Thursday, November 11, 2021 1:53:43 AM
To: Yan, Xiaoping (NSB - CN/Hangzhou); Gerry Wan; Asaf Penso; Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Cc: Martin Weiser; David Marchand; users@dpdk.org
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi all,

I hit the exact same problem reported by Yan. I'm using:
* 2 Mellanox CX5 MT28800 NICs installed on 2 different servers, connected together
* Device FW (as reported by DPDK): 16.31.1014
* DPDK 19.11.5 (from 6WindGate actually)

I sent roughly 360M packets from one server to the other using testpmd (in --forward-mode=txonly).
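For context, the per-port counters below are read with the generic ethdev xstats API; a minimal sketch of that kind of dump loop (simplified, error handling trimmed, not my exact application code) looks like this:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Dump every extended statistic of one port as "name: value" lines. */
static void
dump_xstats(uint16_t port_id)
{
        int n = rte_eth_xstats_get_names(port_id, NULL, 0);    /* ask for the count first */
        if (n <= 0)
                return;

        struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
        struct rte_eth_xstat *values = calloc(n, sizeof(*values));

        if (names != NULL && values != NULL &&
            rte_eth_xstats_get_names(port_id, names, n) == n &&
            rte_eth_xstats_get(port_id, values, n) == n) {
                for (int i = 0; i < n; i++)
                        printf("%s: %" PRIu64 "\n",
                               names[values[i].id].name, values[i].value);
        }
        free(names);
        free(values);
}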
My DPDK application on the other server is reporting the following xstats counters:

CounterName: PORT0, PORT1, TOTAL
rx_good_packets: 76727920, 0, 76727920
tx_good_packets: 0, 0, 0
rx_good_bytes: 4910586880, 0, 4910586880
tx_good_bytes: 0, 0, 0
rx_missed_errors: 0, 0, 0
rx_errors: 0, 0, 0
tx_errors: 0, 0, 0
rx_mbuf_allocation_errors: 0, 0, 0
rx_q0packets: 0, 0, 0
rx_q0bytes: 0, 0, 0
rx_q0errors: 0, 0, 0
rx_q1packets: 0, 0, 0
rx_q1bytes: 0, 0, 0
rx_q1errors: 0, 0, 0
rx_q2packets: 0, 0, 0
rx_q2bytes: 0, 0, 0
rx_q2errors: 0, 0, 0
rx_q3packets: 0, 0, 0
rx_q3bytes: 0, 0, 0
rx_q3errors: 0, 0, 0
rx_q4packets: 0, 0, 0
rx_q4bytes: 0, 0, 0
rx_q4errors: 0, 0, 0
rx_q5packets: 76727920, 0, 76727920
rx_q5bytes: 4910586880, 0, 4910586880
rx_q5errors: 0, 0, 0
rx_q6packets: 0, 0, 0
rx_q6bytes: 0, 0, 0
rx_q6errors: 0, 0, 0
rx_q7packets: 0, 0, 0
rx_q7bytes: 0, 0, 0
rx_q7errors: 0, 0, 0
rx_q8packets: 0, 0, 0
rx_q8bytes: 0, 0, 0
rx_q8errors: 0, 0, 0
rx_q9packets: 0, 0, 0
rx_q9bytes: 0, 0, 0
rx_q9errors: 0, 0, 0
rx_q10packets: 0, 0, 0
rx_q10bytes: 0, 0, 0
rx_q10errors: 0, 0, 0
rx_q11packets: 0, 0, 0
rx_q11bytes: 0, 0, 0
rx_q11errors: 0, 0, 0
tx_q0packets: 0, 0, 0
tx_q0bytes: 0, 0, 0
rx_wqe_err: 0, 0, 0
rx_port_unicast_packets: 360316064, 0, 360316064
rx_port_unicast_bytes: 23060228096, 0, 23060228096
tx_port_unicast_packets: 0, 0, 0
tx_port_unicast_bytes: 0, 0, 0
rx_port_multicast_packets: 0, 0, 0
rx_port_multicast_bytes: 0, 0, 0
tx_port_multicast_packets: 0, 0, 0
tx_port_multicast_bytes: 0, 0, 0
rx_port_broadcast_packets: 0, 0, 0
rx_port_broadcast_bytes: 0, 0, 0
tx_port_broadcast_packets: 0, 0, 0
tx_port_broadcast_bytes: 0, 0, 0
tx_packets_phy: 0, 0, 0
rx_packets_phy: 0, 0, 0
rx_crc_errors_phy: 0, 0, 0
tx_bytes_phy: 0, 0, 0
rx_bytes_phy: 0, 0, 0
rx_in_range_len_errors_phy: 0, 0, 0
rx_symbol_err_phy: 0, 0, 0
rx_discards_phy: 0, 0, 0
tx_discards_phy: 0, 0, 0
tx_errors_phy: 0, 0, 0
rx_out_of_buffer: 0, 0, 0

So rx_good_packets is roughly 76M pkts, while rx_port_unicast_packets has correctly counted all 360M pkts sent by testpmd.

Of course, my application layer has been able to dequeue only 76M pkts from the DPDK port, so the remaining packets (rx_port_unicast_packets - rx_good_packets) got lost, but they are not reported in the "imissed" or "ierrors" counters of rte_eth_stats...

It would not be so easy for me to test against the latest DPDK... also, from what Yan has reported, the issue is still there in DPDK stable 20.11.3...

@Mellanox maintainers: any update on this issue? Is there a workaround to get the dropped packets back into the rte_eth_stats counters?

Thanks,
Francesco Montorsi
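P.S.: a possible application-side stopgap (only a sketch, assuming the rx_port_unicast_packets xstat keeps counting as it does above; the counter name is mlx5-specific and may differ on other PMDs) would be to derive the drop count from the gap between that xstat and rte_eth_stats, instead of relying on imissed:

#include <rte_ethdev.h>

/*
 * Rough estimate of packets the port received but the PMD never delivered:
 * unicast packets counted by the port minus the "good" packets seen by ethdev.
 */
static uint64_t
estimate_rx_drops(uint16_t port_id)
{
        uint64_t id, unicast = 0;
        struct rte_eth_stats stats;

        if (rte_eth_xstats_get_id_by_name(port_id, "rx_port_unicast_packets", &id) != 0)
                return 0;       /* xstat not exposed by this PMD */
        if (rte_eth_xstats_get_by_id(port_id, &id, &unicast, 1) != 1)
                return 0;
        if (rte_eth_stats_get(port_id, &stats) != 0)
                return 0;

        /* stats.ipackets is what the application actually received ("good" packets). */
        return unicast > stats.ipackets ? unicast - stats.ipackets : 0;
}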
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: Friday, October 29, 2021 2:44 AM
To: Gerry Wan; Asaf Penso; Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Cc: Martin Weiser; David Marchand; users@dpdk.org
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

Yes, it's always zero...
It seems dpdk-stable-20.11.3 already includes this patch.

[xiaopiya@fedora30 dpdk-stable-20.11.3]$ patch -p1 < ./4-4-net-mlx5-fix-imissed-statistics.diff
patching file drivers/net/mlx5/linux/mlx5_os.c
Reversed (or previously applied) patch detected!  Assume -R? [n] n
Apply anyway? [n] ^C
[xiaopiya@fedora30 dpdk-stable-20.11.3]$ grep -r "mlx5_queue_counter_id_prepare" ./
./drivers/net/mlx5/linux/mlx5_os.c:mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
./drivers/net/mlx5/linux/mlx5_os.c:             mlx5_queue_counter_id_prepare(eth_dev);
./drivers/net/mlx5/linux/mlx5_os.c.rej:+mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
./drivers/net/mlx5/linux/mlx5_os.c.rej:+                mlx5_queue_counter_id_prepare(eth_dev);
./4-4-net-mlx5-fix-imissed-statistics.diff:+mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
./4-4-net-mlx5-fix-imissed-statistics.diff:+                mlx5_queue_counter_id_prepare(eth_dev);

Thank you.

Best regards
Yan Xiaoping

From: Gerry Wan
Sent: 28 October 2021 9:58
To: Yan, Xiaoping (NSB - CN/Hangzhou)
Cc: Martin Weiser; David Marchand; Asaf Penso; Slava Ovsiienko; Matan Azrad; Raslan Darawsheh; users@dpdk.org
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Are the rx_missed_errors/rx_out_of_buffer counters always showing 0 no matter how fast you push your generator?

I had a similar issue with missing counters on DPDK 20.11 that was fixed in 21.05 by applying this patch:
http://patchwork.dpdk.org/project/dpdk/patch/1614249901-307665-5-git-send-email-matan@nvidia.com/

Potentially relevant thread:
https://inbox.dpdk.org/users/CAAcwi38rs2Vk9MKhRGS3kAK+=dYAnDdECT7f+Ts-f13cANYB+Q@mail.gmail.com/
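A quick way to watch exactly those two counters from inside the application while ramping up the rate is a by-name xstats lookup; a rough, untested sketch (rx_out_of_buffer is an mlx5-specific xstat and may be absent on other PMDs):

#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>
#include <rte_ethdev.h>

/* Print the two drop-related counters once per second so a ramp-up becomes visible. */
static void
watch_drop_counters(uint16_t port_id)
{
        static const char *const names[] = { "rx_missed_errors", "rx_out_of_buffer" };

        for (;;) {
                for (unsigned int i = 0; i < RTE_DIM(names); i++) {
                        uint64_t id, value = 0;

                        if (rte_eth_xstats_get_id_by_name(port_id, names[i], &id) == 0 &&
                            rte_eth_xstats_get_by_id(port_id, &id, &value, 1) == 1)
                                printf("%s=%" PRIu64 " ", names[i], value);
                }
                printf("\n");
                sleep(1);
        }
}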
On Wed, Oct 27, 2021 at 6:39 PM Yan, Xiaoping (NSB - CN/Hangzhou) wrote:

Hi,

I checked the counters from the PF with ethtool -S; there is no counter named 'rx_prio0_buf_discard'.
Anyway, I checked all counters from the ethtool output, and there is no counter that reflects the dropped packets.

Any suggestion from the mlx5 maintainers? @Matan Azrad @Asaf Penso @Slava Ovsiienko @Raslan Darawsheh

Thank you.

Best regards
Yan Xiaoping

-----Original Message-----
From: Martin Weiser
Sent: 27 October 2021 15:54
To: Yan, Xiaoping (NSB - CN/Hangzhou); David Marchand
Cc: Asaf Penso; users@dpdk.org; Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

you may want to check the counter 'rx_prio0_buf_discard' with ethtool (it is not available in DPDK xstats, as it seems that this counter is global for the card and not available per port).
I opened a ticket a while ago regarding this issue:
https://bugs.dpdk.org/show_bug.cgi?id=749

Best regards,
Martin

On 27.10.21 at 08:17, Yan, Xiaoping (NSB - CN/Hangzhou) wrote:
> Hi,
>
> I tried with dpdk 20.11.3 downloaded from
> https://fast.dpdk.org/rel/dpdk-20.11.3.tar.xz
> The problem still exists:
> 1. there is packet loss at 2 Mpps (small packets); 2. there is no counter in the NIC for the dropped packets.
>
> traffic generator stats: sends 41990938, receives back 41986105, lost 4833
> testpmd fwd stats: RX-packets: 41986110, TX-packets: 41986110
> port xstats: rx_unicast_packets: 41990938 (all packets reached the NIC port), rx_good_packets: 41986111 (some are lost), but there is no counter for the lost packets.
>
> Here is the log:
> [root@up-0 /]# dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:06.7 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
> EAL: Detected 28 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available hugepages reported in hugepages-2048kB
> EAL: Probing VFIO support...
> EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:03:06.7 (socket 0)
> mlx5_pci: cannot bind mlx5 socket: Read-only file system
> mlx5_pci: Cannot initialize socket: Read-only file system
> EAL: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
>
> Configuring Port 0 (socket 0)
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> port stop 0
> Stopping ports...
> Checking link statuses...
> Done
> testpmd> vlan set filter on 0
> testpmd> rx_vlan add 767 0
> testpmd> port start 0
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> set fwd 5tswap
> Set 5tswap packet forwarding mode
> testpmd> start
> 5tswap packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
> Logical Core 3 (socket 0) forwards packets on 1 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>
>   5tswap packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=1
>   port 0: RX queue number: 1 Tx queue number: 1
>     Rx offloads=0x200 Tx offloads=0x0
>     RX queue: 0
>       RX desc=512 - RX free threshold=64
>       RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>       RX Offloads=0x200
>     TX queue: 0
>       TX desc=512 - TX free threshold=0
>       TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>       TX offloads=0x0 - TX RS bit threshold=0
>
> testpmd> show fwd stats all
>
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 41986110       RX-dropped: 0             RX-total: 41986110
>   TX-packets: 41986110       TX-dropped: 0             TX-total: 41986110
>   ----------------------------------------------------------------------------
>
> testpmd> show port xstats 0
> ###### NIC extended statistics for port 0
> rx_good_packets: 41986111
> tx_good_packets: 41986111
> rx_good_bytes: 3106973594
> tx_good_bytes: 3106973594
> rx_missed_errors: 0
> rx_errors: 0
> tx_errors: 0
> rx_mbuf_allocation_errors: 0
> rx_q0_packets: 41986111
> rx_q0_bytes: 3106973594
> rx_q0_errors: 0
> tx_q0_packets: 41986111
> tx_q0_bytes: 3106973594
> rx_wqe_errors: 0
> rx_unicast_packets: 41990938
> rx_unicast_bytes: 3107329412
> tx_unicast_packets: 41986111
> tx_unicast_bytes: 3106973594
> rx_multicast_packets: 1
> rx_multicast_bytes: 114
> tx_multicast_packets: 0
> tx_multicast_bytes: 0
> rx_broadcast_packets: 5
> rx_broadcast_bytes: 1710
> tx_broadcast_packets: 0
> tx_broadcast_bytes: 0
> tx_phy_packets: 0
> rx_phy_packets: 0
> rx_phy_crc_errors: 0
> tx_phy_bytes: 0
> rx_phy_bytes: 0
> rx_phy_in_range_len_errors: 0
> rx_phy_symbol_errors: 0
> rx_phy_discard_packets: 0
> tx_phy_discard_packets: 0
> tx_phy_errors: 0
> rx_out_of_buffer: 0
> tx_pp_missed_interrupt_errors: 0
> tx_pp_rearm_queue_errors: 0
> tx_pp_clock_queue_errors: 0
> tx_pp_timestamp_past_errors: 0
> tx_pp_timestamp_future_errors: 0
> tx_pp_jitter: 0
> tx_pp_wander: 0
> tx_pp_sync_lost: 0
> testpmd> q
> Command not found
> testpmd> exit
> Command not found
> testpmd> quit
> Telling cores to stop...
> Waiting for lcores to finish...
>
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 41986112       RX-dropped: 0             RX-total: 41986112
>   TX-packets: 41986112       TX-dropped: 0             TX-total: 41986112
>   ----------------------------------------------------------------------------
>
> Best regards
> Yan Xiaoping
>
> -----Original Message-----
> From: David Marchand
> Sent: 18 October 2021 18:45
> To: Yan, Xiaoping (NSB - CN/Hangzhou)
> Cc: Asaf Penso; users@dpdk.org; Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
> Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
>
> On Mon, Oct 18, 2021 at 11:28 AM Yan, Xiaoping (NSB - CN/Hangzhou) wrote:
>> I have cloned the dpdk code from github:
>>
>> [xiaopiya@fedora30 dpdk]$ git remote -v
>> origin https://github.com/DPDK/dpdk.git (fetch)
>> origin https://github.com/DPDK/dpdk.git (push)
>>
>> Which tag should I use?
>>
>> Or do I have to download 20.11.3 from git.dpdk.org?
>>
>> Sorry, I don't know the relation between https://github.com/DPDK and git.dpdk.org.
> The GitHub DPDK/dpdk repo is a replication of the main repo hosted on the dpdk.org servers.
>
> The official git repos and release tarballs are on the dpdk.org servers.
> The list of official release tarballs is at: http://core.dpdk.org/download/
> The main repo git is at: https://git.dpdk.org/dpdk/
> The LTS/stable releases repo git is at: https://git.dpdk.org/dpdk-stable/
>
> --
> David Marchand
