From mboxrd@z Thu Jan 1 00:00:00 1970
From: Asaf Penso <asafp@nvidia.com>
To: "Yan, Xiaoping (NSB - CN/Hangzhou)" <xiaoping.yan@nokia-sbell.com>, users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>, Matan Azrad <matan@nvidia.com>, Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Date: Thu, 14 Oct 2021 09:50:38 +0000
In-Reply-To: <66a6a919379744518f623c186ba4ff96@nokia-sbell.com>

Can you please try the latest LTS, 20.11.3? We have some related fixes, and we think the issue is already solved.
Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, October 14, 2021 12:33 PM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I'm using 20.11:

commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon <thomas@monjalon.net>
Date:   Fri Nov 27 19:48:48 2020 +0100

    version: 20.11.0

Best regards
Yan Xiaoping

From: Asaf Penso <asafp@nvidia.com>
Sent: October 14, 2021 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Are you using the latest stable 20.11.3? If not, can you try?

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but there is no error/miss counter telling why or where the packets are dropped.
Is this a known bug/limitation of the Mellanox card?
Any suggestion?

Counters in the test center (traffic generator):
              Tx count: 617496152
              Rx count: 617475672
              Drop: 20480

testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 617475727      RX-dropped: 0             RX-total: 617475727
  TX-packets: 617475727      TX-dropped: 0             TX-total: 617475727
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 617475727      RX-dropped: 0             RX-total: 617475727
  TX-packets: 617475727      TX-dropped: 0             TX-total: 617475727
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0

Best regards
Yan Xiaoping
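The same counters can also be read programmatically through the generic xstats API instead of the testpmd CLI. A minimal sketch, assuming DPDK 20.11 headers and an already started port; the helper names below are illustrative only, not from this thread:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* Look up one extended statistic by name; returns 0 if it is not found. */
static uint64_t
xstat_value(uint16_t port_id, const char *name)
{
        int n = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (n <= 0)
                return 0;

        struct rte_eth_xstat_name names[n];
        struct rte_eth_xstat vals[n];

        if (rte_eth_xstats_get_names(port_id, names, n) != n ||
            rte_eth_xstats_get(port_id, vals, n) != n)
                return 0;

        for (int i = 0; i < n; i++)
                if (strcmp(names[vals[i].id].name, name) == 0)
                        return vals[i].value;
        return 0;
}

/* Print the two counters discussed in this thread and their difference. */
static void
print_rx_gap(uint16_t port_id)
{
        uint64_t unicast = xstat_value(port_id, "rx_unicast_packets");
        uint64_t good = xstat_value(port_id, "rx_good_packets");

        printf("port %u: rx_unicast_packets=%" PRIu64
               " rx_good_packets=%" PRIu64 " gap=%" PRIu64 "\n",
               port_id, unicast, good, unicast - good);
}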
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: September 29, 2021 16:26
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com>; 'Matan Azrad' <matan@nvidia.com>; 'Raslan Darawsheh' <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

We also replaced the NIC (originally it was a CX-4, now it is a CX-5), but the result is the same.
Do you know why the packets are dropped between rx_port_unicast_packets and rx_good_packets, while there is no error/miss counter?

And do you know about the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by fastpath/testpmd.
Would that affect it?

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
    903   -   - mlx5_health0000
    904   -   - mlx5_page_alloc
    907   -   - mlx5_cmd_0000:0
    916   -   - mlx5_events
    917   -   - mlx5_esw_wq
    918   -   - mlx5_fw_tracer
    919   -   - mlx5_hv_vhca
    921   -   - mlx5_fc
    924   -   - mlx5_health0000
    925   -   - mlx5_page_alloc
    927   -   - mlx5_cmd_0000:0
    935   -   - mlx5_events
    936   -   - mlx5_esw_wq
    937   -   - mlx5_fw_tracer
    938   -   - mlx5_hv_vhca
    939   -   - mlx5_fc
    941   -   - mlx5_health0000
    942   -   - mlx5_page_alloc

Best regards
Yan Xiaoping

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: September 29, 2021 15:03
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

It is 20.11 (we upgraded to 20.11 recently).

Best regards
Yan Xiaoping

From: Asaf Penso <asafp@nvidia.com>
Sent: September 29, 2021 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

What DPDK version are you using?
19.11 doesn't support 5tswap mode in testpmd.

Regards,
Asaf Penso

________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I also tried with testpmd, with this command and configuration:

dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start

It only gets 1.4 Mpps.
With 1.5 Mpps, it starts to drop packets occasionally.

Best regards
Yan Xiaoping

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: September 26, 2021 13:19
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

I was using 6WIND fastpath instead of testpmd.

>> Do you configure any flow?
I think not, but is there any command to check?

>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6WIND fastpath) runs inside a container and uses a CPU core from the exclusive pool.
On the other hand, the CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether there really is no other task running on this core.

BTW, we recently switched the host infrastructure to the Red Hat OpenShift container platform, and the same problem is there...
We can get 1.6 Mpps with an Intel 810 NIC, but we can only get 1 Mpps with the mlx NIC.
I also raised a ticket to Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
There is a log about CPU affinity there, and some mlx5_xxx threads seem strange to me...
Can you please also check the ticket?

Best regards
Yan Xiaoping
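For reference, the "isolate mode" asked about above refers to the rte_flow isolated mode of the port rather than CPU isolation: when enabled, the port only receives traffic explicitly matched by flow rules. A rough sketch of how an application would request it, assuming the call is made early (ideally before the port is configured and started) and that the PMD supports it; the helper name is illustrative:

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* Request rte_flow isolated mode on a port (illustrative helper). */
static int
enable_isolated_mode(uint16_t port_id)
{
        struct rte_flow_error err = { 0 };
        int ret;

        /* Recommended to call as early as possible, ideally before
         * rte_eth_dev_configure(); not every PMD supports it. */
        ret = rte_flow_isolate(port_id, 1, &err);
        if (ret != 0)
                printf("rte_flow_isolate(port %u) failed: %s\n", port_id,
                       err.message ? err.message : "(no error message)");
        return ret;
}

In testpmd the equivalent should be the "flow isolate <port_id> <boolean>" command issued before "port start", if memory serves.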
From: Asaf Penso <asafp@nvidia.com>
Sent: September 26, 2021 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hi,

The DPDK version in use is 19.11.
I have not tried with the latest upstream version.

It seems performance is affected by IPv6 neighbor advertisement packets coming to this interface:

05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
        0x0000:  3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
        0x0010:  fe44 0020 3aff fe80 0000 0000 0000 6cf1
        0x0020:  9fff fe4e 8a01 ff02 0000 0000 0000 0000
        0x0030:  0000 0000 0001 8800 96d9 2000 0000 fe80
        0x0040:  0000 0000 0000 6cf1 9fff fe4e 8a01 0201
        0x0050:  6ef1 9f4e 8a01

Somehow, there are about 100 such packets per second coming to the interface, and packet loss happens.
When we change the default VLAN in the switch so that no such packets come to the interface (the mlx5 VF under test), there is no packet loss anymore.

In both cases, all packets have arrived in rx_vport_unicast_packets.
In the packet loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter to indicate this?

Any suggestion?
Thank you.

Best regards
Yan Xiaoping
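On the last question (a counter showing that the application is draining the RX queues too slowly): the counters that would normally reflect this are imissed in the basic port stats and, on mlx5, the rx_out_of_buffer xstat shown earlier in the thread; both stay at 0 in the dumps above, which is exactly the puzzle here. A small sketch of how an application could watch them, assuming DPDK 20.11 and a started port; the helper name is illustrative:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Report counters that typically grow when the application polls the RX
 * queues too slowly (illustrative helper; what a given PMD/VF actually
 * accounts here depends on the driver and firmware). */
static void
report_slow_consumer(uint16_t port_id)
{
        struct rte_eth_stats st;

        if (rte_eth_stats_get(port_id, &st) != 0)
                return;

        /* imissed:   packets dropped by the device because no RX descriptors
         *            were available (ring full).
         * rx_nombuf: RX mbuf allocation failures in the PMD. */
        printf("port %u: imissed=%" PRIu64 " ierrors=%" PRIu64
               " rx_nombuf=%" PRIu64 "\n",
               port_id, st.imissed, st.ierrors, st.rx_nombuf);
}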
-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: July 13, 2021 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

Hello Yan,

Can you please mention which DPDK version you use and whether you see this issue also with the latest upstream version?

Regards,
Asaf Penso

>-----Original Message-----
>From: users <users-bounces@dpdk.org> On Behalf Of Yan, Xiaoping (NSB -
>CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org
>Subject: [dpdk-users] mlx5 VF packet lost between
>rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing a traffic loopback test on a mlx5 VF, we found there is some
>packet loss (not all packets are received back).
>
>From the xstats counters, I found that all packets have been received in
>rx_port_unicast_packets, but rx_good_packets has a lower count, and
>rx_port_unicast_packets - rx_good_packets = lost packets, i.e. packets are
>lost between rx_port_unicast_packets and rx_good_packets.
>But I can not find any other counter indicating where exactly those
>packets are lost.
>
>Any idea?
>
>Attached are the counter logs (bf is before the test, af is after the
>test; fp-cli dpdk-port-stats is the command used to get the xstats, and
>ethtool -S _f1 (the VF used) is also printed).
>The test equipment reports that it sends 2911176 packets and
>receives 2909474, dropped: 1702. The xstats (after - before) show
>rx_port_unicast_packets 2911177 and rx_good_packets 2909475, so the drop
>(2911177 - rx_good_packets) is 1702.
>
>BTW, I also noticed this discussion "packet loss between phy and good
>counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different, as the packets are also received in
>rx_port_unicast_packets, and I checked the counters from the PF (ethtool -S
>ens1f0 in the attached log); rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
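A follow-up thought on the IPv6 neighbor advertisement traffic described earlier in the thread: if the application has no use for those packets, they could be dropped in hardware with an rte_flow rule instead of ever reaching the RX queue. A rough, untested sketch, assuming the PMD on this VF accepts ICMPv6 type matching (type 136 is a neighbor advertisement); the helper name is illustrative:

#include <stdio.h>
#include <stdint.h>
#include <rte_flow.h>

/* Drop ICMPv6 neighbor advertisements on ingress (illustrative helper). */
static struct rte_flow *
drop_icmp6_na(uint16_t port_id)
{
        struct rte_flow_error err = { 0 };
        struct rte_flow_attr attr = { .ingress = 1 };

        /* Match only the ICMPv6 type field; 136 = neighbor advertisement. */
        struct rte_flow_item_icmp6 icmp6_spec = { .type = 136 };
        struct rte_flow_item_icmp6 icmp6_mask = { .type = 0xff };

        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
                { .type = RTE_FLOW_ITEM_TYPE_ICMP6,
                  .spec = &icmp6_spec, .mask = &icmp6_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_DROP },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        struct rte_flow *flow =
                rte_flow_create(port_id, &attr, pattern, actions, &err);
        if (flow == NULL)
                printf("rte_flow_create failed: %s\n",
                       err.message ? err.message : "(no error message)");
        return flow;
}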
