From: Asaf Penso <asafp@nvidia.com>
To: "Yan, Xiaoping (NSB - CN/Hangzhou)" <xiaoping.yan@nokia-sbell.com>, users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>, Matan Azrad <matan@nvidia.com>, Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Date: Thu, 14 Oct 2021 06:55:57 +0000

Are you using the latest stable 20.11.3? If not, can you try?

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but there is no error/miss counter telling why or where the packets are dropped.
Is this a known bug/limitation of the Mellanox card?
Any suggestion?
Counter in test center (traffic generator):
              Tx count: 617496152
              Rx count: 617475672
              Drop: 20480

testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 617475727      RX-dropped: 0             RX-total: 617475727
  TX-packets: 617475727      TX-dropped: 0             TX-total: 617475727
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 617475727      RX-dropped: 0             RX-total: 617475727
  TX-packets: 617475727      TX-dropped: 0             TX-total: 617475727
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0

Best regards
Yan Xiaoping
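
As a reference point, the same two counters can also be read programmatically with the generic xstats API instead of the testpmd CLI. The following is only an illustrative sketch: it assumes EAL is initialized and the port is already configured and started, trims error handling, and uses the xstat names exactly as they appear in the dump above (the names can differ between DPDK versions and PMDs).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the gap between rx_unicast_packets and rx_good_packets for one port. */
static void print_unicast_good_gap(uint16_t port_id)
{
        int n = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (n <= 0)
                return;

        struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
        struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));
        uint64_t rx_unicast = 0, rx_good = 0;

        if (names != NULL && vals != NULL &&
            rte_eth_xstats_get_names(port_id, names, n) == n &&
            rte_eth_xstats_get(port_id, vals, n) == n) {
                for (int i = 0; i < n; i++) {
                        /* vals[i].id indexes into the names array */
                        const char *name = names[vals[i].id].name;
                        if (strcmp(name, "rx_unicast_packets") == 0)
                                rx_unicast = vals[i].value;
                        else if (strcmp(name, "rx_good_packets") == 0)
                                rx_good = vals[i].value;
                }
                printf("rx_unicast_packets=%" PRIu64
                       " rx_good_packets=%" PRIu64
                       " gap=%" PRIu64 "\n",
                       rx_unicast, rx_good, rx_unicast - rx_good);
        }
        free(names);
        free(vals);
}

Polling this periodically during the test would show whether the gap grows steadily or in bursts, which is the quantity being computed by hand throughout this thread.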
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 16:26
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com>; 'Matan Azrad' <matan@nvidia.com>; 'Raslan Darawsheh' <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

We also replaced the NIC (originally it was a ConnectX-4, now it is a ConnectX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets, yet there is no error/miss counter?

Also, do you know about the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by fastpath/testpmd.
Would that affect anything?

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
    903       -   - mlx5_health0000
    904       -   - mlx5_page_alloc
    907       -   - mlx5_cmd_0000:0
    916       -   - mlx5_events
    917       -   - mlx5_esw_wq
    918       -   - mlx5_fw_tracer
    919       -   - mlx5_hv_vhca
    921       -   - mlx5_fc
    924       -   - mlx5_health0000
    925       -   - mlx5_page_alloc
    927       -   - mlx5_cmd_0000:0
    935       -   - mlx5_events
    936       -   - mlx5_esw_wq
    937       -   - mlx5_fw_tracer
    938       -   - mlx5_hv_vhca
    939       -   - mlx5_fc
    941       -   - mlx5_health0000
    942       -   - mlx5_page_alloc

Best regards
Yan Xiaoping

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 15:03
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

It is 20.11 (we upgraded to 20.11 recently).

Best regards
Yan Xiaoping

From: Asaf Penso <asafp@nvidia.com>
Sent: September 29, 2021 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.

Regards,
Asaf Penso

________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I also tried testpmd with this command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start

It only gets 1.4 Mpps.
At 1.5 Mpps it starts to drop packets occasionally.

Best regards
Yan Xiaoping

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 26, 2021 13:19
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I was using 6WIND fastpath instead of testpmd.

>> Do you configure any flow?
I think not, but is there any command to check?

>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6WIND fastpath) runs inside a container and uses CPU cores from an exclusive pool.
On the other hand, the CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether there is really no other task running on this core.

BTW, we recently switched the host infrastructure to the Red Hat OpenShift container platform, and the same problem is there...
We can get 1.6 Mpps with the Intel 810 NIC, but we can only get 1 Mpps for mlx.
I also raised a ticket to Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
There is a log about CPU affinity there, and some of the mlx5_xxx threads seem strange to me...
Can you please also check the ticket?

Best regards
Yan Xiaoping
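
To clarify the "isolate mode" question in this exchange: it refers to rte_flow isolated mode, where the port delivers only traffic that matches explicitly created flow rules, not to CPU isolation. A rough testpmd sequence to experiment with it is sketched below; it reuses the port and VLAN 767 from the earlier test only as an assumed example, and whether the PMD accepts this exact pattern depends on the NIC and firmware.

testpmd> port stop 0
testpmd> flow isolate 0 1
testpmd> port start 0
testpmd> flow create 0 ingress pattern eth / vlan vid is 767 / end actions queue index 0 / end
testpmd> set fwd 5tswap
testpmd> start

With isolation enabled, only traffic matching the created rules reaches the Rx queues, which can help rule out unwanted control-plane traffic as a factor in the drops.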
From: Asaf Penso <asafp@nvidia.com>
Sent: September 26, 2021 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

The DPDK version in use is 19.11.
I have not tried the latest upstream version.

It seems performance is affected by IPv6 neighbor advertisement packets coming to this interface:
05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
        0x0000:  3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
        0x0010:  fe44 0020 3aff fe80 0000 0000 0000 6cf1
        0x0020:  9fff fe4e 8a01 ff02 0000 0000 0000 0000
        0x0030:  0000 0000 0001 8800 96d9 2000 0000 fe80
        0x0040:  0000 0000 0000 6cf1 9fff fe4e 8a01 0201
        0x0050:  6ef1 9f4e 8a01

Somehow, about 100 such packets per second come to the interface, and packet loss happens.
When we change the default VLAN in the switch so that no such packets come to the interface (the mlx5 VF under test), there is no packet loss anymore.

In both cases, all packets have arrived in rx_vport_unicast_packets.
In the packet loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter to indicate this?

Any suggestion?
Thank you.

Best regards
Yan Xiaoping

-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: July 13, 2021 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hello Yan,

Can you please mention which DPDK version you use and whether you see this issue also with the latest upstream version?

Regards,
Asaf Penso

>-----Original Message-----
>From: users <users-bounces@dpdk.org> On Behalf Of Yan, Xiaoping (NSB -
>CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org
>Subject: [dpdk-users] mlx5 VF packet lost between
>rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing a traffic loopback test on a mlx5 VF, we found some
>packet loss (not all packets were received back).
>
>From the xstats counters, I found that all packets have been received in
>rx_port_unicast_packets, but rx_good_packets has a lower count, and
>rx_port_unicast_packets - rx_good_packets = lost packets, i.e. packets are
>lost between rx_port_unicast_packets and rx_good_packets.
>But I cannot find any other counter indicating where exactly those
>packets are lost.
>
>Any idea?
>
>Attached are the counter logs.
>(bf is before the test, af is after the test, fp-cli dpdk-port-stats is the
>command used to get the xstats, and ethtool -S _f1 (the VF used) is also
>printed.) The test equipment reports that it sends 2911176 packets,
>receives 2909474, dropped 1702. The xstats (after - before) show
>rx_port_unicast_packets 2911177, rx_good_packets 2909475, so the drop
>(2911177 - rx_good_packets) is 1702.
>
>BTW, I also noticed the discussion "packet loss between phy and good
>counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different, as the packets are also received in
>rx_port_unicast_packets, and I checked the counters from the PF (ethtool -S
>ens1f0 in the attached log); rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
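
On the question above about a counter that shows the application being too slow to drain the VF receive queues: in the generic port stats, imissed counts packets dropped because no Rx descriptors were available, and the mlx5 xstats shown earlier in the thread include rx_out_of_buffer, which reports a similar out-of-buffer condition on the device side. The following is only an illustrative sketch for polling both from application code; the exact xstat name and what it maps to depend on the PMD and firmware.

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Report counters that typically rise when the application cannot drain
 * the Rx queues fast enough (no free descriptors/buffers for the NIC). */
static void report_rx_pressure(uint16_t port_id)
{
        struct rte_eth_stats stats;
        uint64_t id, value;

        if (rte_eth_stats_get(port_id, &stats) == 0)
                printf("imissed=%" PRIu64 "\n", stats.imissed);

        /* Driver-specific xstat; the name follows the dump earlier in the thread. */
        if (rte_eth_xstats_get_id_by_name(port_id, "rx_out_of_buffer", &id) == 0 &&
            rte_eth_xstats_get_by_id(port_id, &id, &value, 1) == 1)
                printf("rx_out_of_buffer=%" PRIu64 "\n", value);
}

If both stay at zero while rx_unicast_packets keeps running ahead of rx_good_packets, the loss is happening somewhere these counters do not account for, which matches what this thread observes.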
