From: Asaf Penso <asafp@nvidia.com>
To: "Yan, Xiaoping (NSB - CN/Hangzhou)" <xiaoping.yan@nokia-sbell.com>, users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>, Matan Azrad <matan@nvidia.com>, Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Date: Thu, 14 Oct 2021 11:48:53 +0000
This is the commit id which we think solves the issue you see:
https://git.dpdk.org/dpdk-stable/commit/?h=v20.11.3&id=ede02cfc4783446c4068a5a1746f045465364fac

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: Thursday, October 14, 2021 1:15 PM
To: Asaf Penso; users@dpdk.org
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

OK, I will try (probably some days later, as I'm busy with another task right now).
Could you also share the commit id for those fixes?
Thank you.

Best regards
Yan Xiaoping
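A quick way to confirm that the commit above is actually contained in the v20.11.3 tag is to ask git directly. A minimal sketch, assuming the dpdk-stable tree can be cloned from the same URL the cgit page is served from (the clone URL is an assumption):

  # check that the referenced fix is an ancestor of the v20.11.3 tag
  git clone https://git.dpdk.org/dpdk-stable
  cd dpdk-stable
  git merge-base --is-ancestor ede02cfc4783446c4068a5a1746f045465364fac v20.11.3 \
      && echo "fix is included in v20.11.3"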
From: Asaf Penso
Sent: October 14, 2021 17:51
To: Yan, Xiaoping (NSB - CN/Hangzhou); users@dpdk.org
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Can you please try the latest LTS, 20.11.3?
We have some related fixes and we think the issue is already solved.

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: Thursday, October 14, 2021 12:33 PM
To: Asaf Penso; users@dpdk.org
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I'm using 20.11:

commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon <thomas@monjalon.net>
Date:   Fri Nov 27 19:48:48 2020 +0100

    version: 20.11.0

Best regards
Yan Xiaoping

From: Asaf Penso
Sent: October 14, 2021 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou); users@dpdk.org
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Are you using the latest stable 20.11.3? If not, can you try?

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso; users@dpdk.org
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but there is no error/miss counter telling why or where the packets are dropped.
Is this a known bug/limitation of the Mellanox card?
Any suggestion?
Counter in test center (traffic generator):
              Tx count: 617496152
              Rx count: 617475672
              Drop: 20480

testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 617475727      RX-dropped: 0             RX-total: 617475727
  TX-packets: 617475727      TX-dropped: 0             TX-total: 617475727
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 617475727      RX-dropped: 0             RX-total: 617475727
  TX-packets: 617475727      TX-dropped: 0             TX-total: 617475727
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0

Best regards
Yan Xiaoping
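To make gaps like the one above stand out, it can help to save two "show port xstats 0" dumps (one before and one after a fixed-length run) and print only the counters that moved. A minimal sketch, assuming the dumps are stored as before.txt and after.txt with one "name: value" pair per line (the file names are assumptions):

  # print only the counters whose value changed between the two snapshots
  awk -F': ' 'NR==FNR { before[$1] = $2; next }
              ($1 in before) && (($2 + 0) != (before[$1] + 0)) {
                  printf "%-32s %+d\n", $1, $2 - before[$1]
              }' before.txt after.txt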
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 16:26
To: 'Asaf Penso'
Cc: 'Slava Ovsiienko'; 'Matan Azrad'; 'Raslan Darawsheh'; Xu, Meng-Maggie (NSB - CN/Hangzhou)
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

We also replaced the NIC (originally it was a cx-4, now it is a cx-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets while there is no error/miss counter?

And do you know about the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by fastpath/testpmd.
Would it affect anything?

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
    903       -   - mlx5_health0000
    904       -   - mlx5_page_alloc
    907       -   - mlx5_cmd_0000:0
    916       -   - mlx5_events
    917       -   - mlx5_esw_wq
    918       -   - mlx5_fw_tracer
    919       -   - mlx5_hv_vhca
    921       -   - mlx5_fc
    924       -   - mlx5_health0000
    925       -   - mlx5_page_alloc
    927       -   - mlx5_cmd_0000:0
    935       -   - mlx5_events
    936       -   - mlx5_esw_wq
    937       -   - mlx5_fw_tracer
    938       -   - mlx5_hv_vhca
    939       -   - mlx5_fc
    941       -   - mlx5_health0000
    942       -   - mlx5_page_alloc

Best regards
Yan Xiaoping

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 15:03
To: 'Asaf Penso'
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh; Xu, Meng-Maggie (NSB - CN/Hangzhou)
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

It is 20.11 (we upgraded to 20.11 recently).

Best regards
Yan Xiaoping

From: Asaf Penso
Sent: September 29, 2021 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou)
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh; Xu, Meng-Maggie (NSB - CN/Hangzhou)
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.

Regards,
Asaf Penso

________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh; Xu, Meng-Maggie (NSB - CN/Hangzhou)
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I also tried testpmd with this command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start

It only gets 1.4 Mpps.
With 1.5 Mpps, it starts to drop packets occasionally.

Best regards
Yan Xiaoping

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 26, 2021 13:19
To: 'Asaf Penso'
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh; Xu, Meng-Maggie (NSB - CN/Hangzhou)
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I was using 6WIND fastpath instead of testpmd.

>> Do you configure any flow?
I think not, but is there any command to check?

>> Do you work in isolate mode?
Do you mean the CPU?
The dpdk application (6WIND fastpath) runs inside a container and uses a CPU core from the exclusive pool.
On the other hand, the CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether any other task really runs on this core.

BTW, we recently switched the host infrastructure to the Red Hat OpenShift container platform, and the same problem is there...
We can get 1.6 Mpps with the Intel 810 NIC, but we only get 1 Mpps with the mlx NIC.
I also raised a ticket with Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
There is a log about CPU affinity, and some mlx5_xxx threads seem strange to me...
Can you please also check the ticket?

Best regards
Yan Xiaoping
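If the concern is that the mlx5_* kernel workers listed above can land on the polling core, one experiment is to restrict their affinity to the housekeeping cores and re-run the test. A rough sketch, assuming cores 0-2 and 4-27 are housekeeping and the polling core is left out of the mask (the core numbers are examples, not taken from the setup above); per-CPU kernel workers may simply refuse the change:

  # try to move every mlx5_* kernel thread off the polling core;
  # report the ones whose affinity the kernel does not let us change
  HOUSEKEEPING="0-2,4-27"
  for tid in $(ps -eLo tid,comm | awk '$2 ~ /^mlx5_/ {print $1}'); do
      taskset -cp "$HOUSEKEEPING" "$tid" || echo "tid $tid: affinity not changeable"
  done

For unbound kernel workqueues, the cpumask files under /sys/devices/virtual/workqueue/ (where present on the kernel in question) are the other knob worth looking at.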
From: Asaf Penso
Sent: September 26, 2021 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou)
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso; users@dpdk.org
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

The dpdk version in use is 19.11.
I have not tried the latest upstream version.

It seems performance is affected by IPv6 neighbor advertisement packets coming to this interface:

05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
        0x0000:  3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
        0x0010:  fe44 0020 3aff fe80 0000 0000 0000 6cf1
        0x0020:  9fff fe4e 8a01 ff02 0000 0000 0000 0000
        0x0030:  0000 0000 0001 8800 96d9 2000 0000 fe80
        0x0040:  0000 0000 0000 6cf1 9fff fe4e 8a01 0201
        0x0050:  6ef1 9f4e 8a01

Somehow, about 100 such packets per second come to the interface, and packet loss happens.
When we change the default vlan in the switch so that no such packets come to the interface (the mlx5 VF under test), there is no packet loss anymore.

In both cases, all packets have arrived in rx_vport_unicast_packets.
In the packet loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the dpdk application is too slow to receive all packets from the VF, is there any counter to indicate this?

Any suggestion?
Thank you.

Best regards
Yan Xiaoping
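On the question of a counter for the application being too slow: on mlx5, rx_out_of_buffer (and the RX-missed / imissed value in the basic port stats) is, as far as I know, the counter that increments when packets arrive while no receive buffers are posted, so watching it during the loss window is a reasonable first check. A minimal sketch, assuming the VF netdev is still reachable by ethtool and named _f1 as in the original report quoted below:

  # sample the VF counters once per second and highlight the ones that change
  IFACE=_f1
  watch -d -n 1 "ethtool -S $IFACE | grep -E 'out_of_buffer|discard|good|unicast'"

In the xstats dump earlier in this thread rx_out_of_buffer stayed at 0, so this mainly serves to rule that path out.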
-----Original Message-----
From: Asaf Penso
Sent: July 13, 2021 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou); users@dpdk.org
Cc: Slava Ovsiienko; Matan Azrad; Raslan Darawsheh
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hello Yan,

Can you please mention which DPDK version you use and whether you see this issue also with the latest upstream version?

Regards,
Asaf Penso

>-----Original Message-----
>From: users <users-bounces@dpdk.org> On Behalf Of Yan, Xiaoping (NSB - CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org
>Subject: [dpdk-users] mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing a traffic loopback test on a mlx5 VF, we found there is some packet loss (not all packets are received back).
>
>From the xstats counters, I found all packets have been received in rx_port_unicast_packets, but rx_good_packets has a lower count, and rx_port_unicast_packets - rx_good_packets = lost packets, i.e. packets are lost between rx_port_unicast_packets and rx_good_packets.
>But I cannot find any other counter indicating where exactly those packets are lost.
>
>Any idea?
>
>Attached are the counter logs. (bf is before the test, af is after the test, fp-cli dpdk-port-stats is the command used to get the xstats, and ethtool -S _f1 (the VF used) is also printed.)
>The test equipment reports that it sends 2911176 packets, receives 2909474, dropped: 1702.
>And the xstats (after - before) show rx_port_unicast_packets 2911177 and rx_good_packets 2909475, so the drop (2911177 - rx_good_packets) is 1702.
>
>BTW, I also noticed the discussion "packet loss between phy and good counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different, as the packets are also received in rx_port_unicast_packets, and I checked the counters from the PF (ethtool -S ens1f0 in the attached log): rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
