From: wangyunjian
To: Dmitry Kozlyuk, dev@dpdk.org, users@dpdk.org, Matan Azrad, Slava Ovsiienko
CC: Huangshaozhang, dingxiaoxiong, scotthuang@nvidia.com
Subject: RE: [dpdk-dev] [dpdk-users] A question about Mellanox ConnectX-5 and ConnectX-4 Lx nic can't send packets?
Date: Wed, 12 Jan 2022 04:21:16 +0000

> -----Original Message-----
> From: Dmitry Kozlyuk [mailto:dkozlyuk@nvidia.com]
> Sent: Tuesday, January 11, 2022 7:42 PM
> To: wangyunjian; dev@dpdk.org; users@dpdk.org; Matan Azrad; Slava Ovsiienko
> Cc: Huangshaozhang; dingxiaoxiong
> Subject: RE: [dpdk-dev] [dpdk-users] A question about Mellanox ConnectX-5 and
> ConnectX-4 Lx nic can't send packets?
>
> > From: wangyunjian
> [...]
> > > From: Dmitry Kozlyuk [mailto:dkozlyuk@nvidia.com]
> [...]
> > > Thanks for attaching all the details.
> > > Can you please reproduce it with --log-level=pmd.common.mlx5:debug
> > > and send the logs?
> > >
> > > > For example, if the environment is configured with 10GB hugepages
> > > > but each hugepage is physically discontinuous, this problem can be
> > > > reproduced.
> > > >
> > > > # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xFC0 --iova-mode pa \
> > > >     --legacy-mem -a af:00.0 -a af:00.1 --log-level=pmd.common.mlx5:debug \
> > > >     -m 0,8192 -- -a -i --forward-mode=fwd --rxq=2 --txq=2 --total-num-mbufs=1000000
> [...]
> > mlx5_common: Collecting chunks of regular mempool mb_pool_0
> > mlx5_common: Created a new MR 0x92827 in PD 0x4864ab0 for address range [0x75cb6c000, 0x780000000] (592003072 bytes) for mempool mb_pool_0
> > mlx5_common: Created a new MR 0x93528 in PD 0x4864ab0 for address range [0x7dcb6c000, 0x800000000] (592003072 bytes) for mempool mb_pool_0
> > mlx5_common: Created a new MR 0x94529 in PD 0x4864ab0 for address range [0x85cb6c000, 0x880000000] (592003072 bytes) for mempool mb_pool_0
> > mlx5_common: Created a new MR 0x9562a in PD 0x4864ab0 for address range [0x8d6cca000, 0x8fa15e000] (592003072 bytes) for mempool mb_pool_0
>
> Thanks for the logs; IIUC they are from a successful run.
> I have reproduced an equivalent hugepage layout and mempool spread
> between hugepages, but I don't see the error behavior in several tries.

Your colleague Scott Huang (scotthuang@nvidia.com) has been able to reproduce this problem, so you can contact him.

> What are the logs in case of error?
> Please note that the offending commit you found (fec28ca0e3a9) indeed
> introduced a few issues, but they were fixed later, so I'm testing with 21.11,
> not that commit.
> Unfortunately, none of those issues resembled yours.
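
For anyone else digging into this: below is a minimal sketch (illustrative only, assuming DPDK 21.11 and using only the public rte_mempool API; the helper names dump_chunk and dump_mempool_chunks are made up) that prints each memory chunk backing a mempool, so the VA/IOVA layout can be compared with the "Created a new MR ... for address range" lines above.

/*
 * Dump the memory chunks of a mempool. With --iova-mode pa and
 * --legacy-mem, each physically contiguous region of the pool shows up
 * as a separate chunk, and mlx5 registers one MR per such region.
 */
#include <inttypes.h>
#include <stdio.h>
#include <rte_mempool.h>

static void
dump_chunk(struct rte_mempool *mp, void *opaque,
           struct rte_mempool_memhdr *memhdr, unsigned int mem_idx)
{
        (void)mp;
        (void)opaque;
        printf("chunk %u: va=%p iova=0x%" PRIx64 " len=%zu\n",
               mem_idx, memhdr->addr, memhdr->iova, memhdr->len);
}

static void
dump_mempool_chunks(struct rte_mempool *mp)
{
        /* Walks every memory chunk registered in the pool. */
        rte_mempool_mem_iter(mp, dump_chunk, NULL);
}

Calling this on the pool (e.g. on rte_mempool_lookup("mb_pool_0")) in both the failing and the working setups would show whether the two runs differ in how many discontiguous chunks the pool is split into.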