DPDK usage discussions
From: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
To: wangyunjian <wangyunjian@huawei.com>,
	"dev@dpdk.org" <dev@dpdk.org>, "users@dpdk.org" <users@dpdk.org>,
	Matan Azrad <matan@nvidia.com>,
	Slava Ovsiienko <viacheslavo@nvidia.com>
Cc: Huangshaozhang <huangshaozhang@huawei.com>,
	dingxiaoxiong <dingxiaoxiong@huawei.com>
Subject: RE: [dpdk-dev] [dpdk-users] A question about Mellanox ConnectX-5 and ConnectX-4 Lx nic can't send packets?
Date: Tue, 11 Jan 2022 11:42:03 +0000	[thread overview]
Message-ID: <BN8PR12MB2899F547FFCE14392EF59123B9519@BN8PR12MB2899.namprd12.prod.outlook.com> (raw)
In-Reply-To: <2507d6c0239547c8b3f30870578ce392@huawei.com>

> From: wangyunjian <wangyunjian@huawei.com>
[...]
> > From: Dmitry Kozlyuk [mailto:dkozlyuk@nvidia.com]
[...]
> > Thanks for attaching all the details.
> > Can you please reproduce it with --log-level=pmd.common.mlx5:debug and
> > send the logs?
> >
> > > For example, if the environment is configured with 10GB hugepages but
> > > each hugepage is physically discontinuous, this problem can be
> > > reproduced.
> 
> # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xFC0 --iova-mode pa --legacy-mem -a af:00.0 -a af:00.1 --log-level=pmd.common.mlx5:debug -m 0,8192 -- -a -i --forward-mode=fwd --rxq=2 --txq=2 --total-num-mbufs=1000000
[...]
> mlx5_common: Collecting chunks of regular mempool mb_pool_0
> mlx5_common: Created a new MR 0x92827 in PD 0x4864ab0 for address range [0x75cb6c000, 0x780000000] (592003072 bytes) for mempool mb_pool_0
> mlx5_common: Created a new MR 0x93528 in PD 0x4864ab0 for address range [0x7dcb6c000, 0x800000000] (592003072 bytes) for mempool mb_pool_0
> mlx5_common: Created a new MR 0x94529 in PD 0x4864ab0 for address range [0x85cb6c000, 0x880000000] (592003072 bytes) for mempool mb_pool_0
> mlx5_common: Created a new MR 0x9562a in PD 0x4864ab0 for address range [0x8d6cca000, 0x8fa15e000] (592003072 bytes) for mempool mb_pool_0
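
Each "Created a new MR" line above covers one IOVA-contiguous chunk of the
mempool. As a minimal sketch (not part of the original message; the helper
names are arbitrary), those chunks can be dumped with the standard mempool
API and compared against the MR ranges, assuming access to the application's
rte_mempool pointer:

#include <stdio.h>
#include <inttypes.h>
#include <rte_mempool.h>

/* Called once per memory chunk backing the mempool. */
static void
dump_chunk(struct rte_mempool *mp, void *opaque,
           struct rte_mempool_memhdr *memhdr, unsigned int mem_idx)
{
        (void)mp;
        (void)opaque;
        printf("chunk %u: va=%p iova=0x%" PRIx64 " len=%zu\n",
               mem_idx, memhdr->addr, (uint64_t)memhdr->iova, memhdr->len);
}

/* Call after the mempool (e.g. mb_pool_0) has been populated. */
static void
dump_mempool_chunks(struct rte_mempool *mp)
{
        rte_mempool_mem_iter(mp, dump_chunk, NULL);
}

With --legacy-mem and hugepages that are not physically adjacent, the pool
is split into several such chunks, which matches the four MRs in the log
above.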

Thanks for the logs; IIUC they are from a successful run.
I have reproduced an equivalent hugepage layout
and a mempool spread between hugepages,
but I don't see the erroneous behavior after several tries.
What are the logs in case of error?
Please note that the offending commit you found (fec28ca0e3a9)
indeed introduced a few issues, but they were fixed later,
so I'm testing with 21.11, not that commit.
Unfortunately, none of those issues resembled yours.
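
Regarding the "equivalent hugepage layout": one way to check whether the
allocated hugepages are physically adjacent is to walk the EAL memory
segments after rte_eal_init(). A minimal sketch (not part of the original
message; the helper name is arbitrary):

#include <stdio.h>
#include <inttypes.h>
#include <rte_memory.h>

/* Print each physically (IOVA-) contiguous span of hugepage memory. */
static int
print_contig_span(const struct rte_memseg_list *msl,
                  const struct rte_memseg *ms, size_t len, void *arg)
{
        (void)arg;
        printf("socket %d, page size 0x%" PRIx64 ": va=%p iova=0x%" PRIx64
               " contiguous len=%zu\n",
               msl->socket_id, msl->page_sz, ms->addr, ms->iova, len);
        return 0; /* keep walking */
}

static void
dump_hugepage_layout(void)
{
        rte_memseg_contig_walk(print_contig_span, NULL);
}

Many short spans mean the hugepages are not physically contiguous with each
other, i.e. the layout described earlier in the thread.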


Thread overview: 7+ messages
2022-01-11  6:08 wangyunjian
2022-01-11  7:36 ` Dmitry Kozlyuk
2022-01-11  8:21   ` wangyunjian
2022-01-11 11:42     ` Dmitry Kozlyuk [this message]
2022-01-11 12:29       ` wangyunjian
2022-01-12  4:21       ` wangyunjian
2022-01-11  8:45   ` wangyunjian
