From: chetan bhasin <chetan.bhasin017@gmail.com>
To: dev@dpdk.org
Subject: [dpdk-dev] Facing issue with mellanox after increasing number of buffers
Date: Fri, 21 Feb 2020 10:00:24 +0530
Message-ID: <CACZZ+Y7SQB2xpPDw2g70eJdYsUDKyy7FCDTTmBz2OKxUHF+6TQ@mail.gmail.com>
Hi,
We are using DPDK underneath VPP. After upgrading VPP from 18.01 to 19.08 we are
facing an issue when we increase the number of buffers from 100k to 300k (see the
startup.conf sketch below). The following error appears in the log:
net_mlx5: port %u unable to find virtually contiguous chunk for address (%p).
rte_memseg_contig_walk() failed.
VPP 18.01 uses DPDK 17.11.4.
VPP 19.08 uses DPDK 19.05.
With VPP 20.01 (DPDK 19.08) we see no issue up to 400k buffers.
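For reference, this is roughly how we raise the buffer count, assuming the
buffers section of VPP 19.08's startup.conf (the value below just illustrates
our setup; in 18.01 the equivalent knob was dpdk { num-mbufs }, if memory serves):

buffers {
  # buffers allocated per NUMA node; raised from ~100k to 300k
  buffers-per-numa 300000
}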
Backtrace looks like:

    format=0x7f3376768df8 "net_mlx5: port %u unable to find virtually contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s", ap=ap@entry=0x7f3379c4fac8)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:427
#6  0x00007f3375ab2c12 in rte_log (level=level@entry=5, logtype=<optimized out>, format=format@entry=0x7f3376768df8 "net_mlx5: port %u unable to find virtually contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s")
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:443
#7  0x00007f3375dc47fa in mlx5_mr_create_primary (dev=dev@entry=0x7f3376e9d940 <rte_eth_devices>, entry=entry@entry=0x7ef5c00d02ca, addr=addr@entry=69384463936)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:627
#8  0x00007f3375abe238 in mlx5_mr_create (addr=69384463936, entry=0x7ef5c00d02ca, dev=0x7f3376e9d940 <rte_eth_devices>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:833
#9  mlx5_mr_lookup_dev (dev=0x7f3376e9d940 <rte_eth_devices>, mr_ctrl=mr_ctrl@entry=0x7ef5c00d022e, entry=0x7ef5c00d02ca, addr=69384463936)
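For anyone trying to reproduce the lookup that fails here: mlx5_mr_create_primary()
walks DPDK's virtually contiguous memseg chunks via rte_memseg_contig_walk(),
looking for a chunk that covers the mbuf address it was asked to register. Below is
a minimal standalone sketch of that check; only rte_memseg_contig_walk() and
struct rte_memseg are the actual DPDK API, the addr_query struct and the
check_chunk()/query_addr() names are ours for illustration, and it assumes
rte_eal_init() has already run:

#include <stdio.h>
#include <stdint.h>
#include <rte_memory.h>

struct addr_query {
        uintptr_t addr;  /* address to look up (e.g. 69384463936 from the log) */
        int found;       /* set when a covering chunk is found */
};

/* Callback invoked once per virtually contiguous chunk; 'ms' is the first
 * memseg of the chunk and 'len' its total length in bytes. */
static int
check_chunk(const struct rte_memseg_list *msl,
            const struct rte_memseg *ms, size_t len, void *arg)
{
        struct addr_query *q = arg;
        uintptr_t start = (uintptr_t)ms->addr;

        (void)msl;
        if (q->addr >= start && q->addr < start + len) {
                printf("address %#lx lies in contiguous chunk [%#lx, %#lx)\n",
                       (unsigned long)q->addr, (unsigned long)start,
                       (unsigned long)(start + len));
                q->found = 1;
                return 1; /* non-zero return stops the walk */
        }
        return 0; /* keep walking */
}

static void
query_addr(uintptr_t addr)
{
        struct addr_query q = { .addr = addr, .found = 0 };

        if (rte_memseg_contig_walk(check_chunk, &q) < 0 || !q.found)
                printf("no virtually contiguous chunk covers %#lx\n",
                       (unsigned long)addr);
}

When this lookup finds nothing, mlx5 cannot create the memory region and logs the
error above, presumably because with more buffers the pool spans memory that is
not covered by any single virtually contiguous chunk.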
Crash backtrace looks like:

#0  mlx5_tx_complete (txq=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_rxtx.h:588
#1  mlx5_tx_burst (dpdk_txq=<optimized out>, pkts=0x7fc85686c000, pkts_n=1)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_rxtx.c:563
#2  0x00007fc852d1912e in rte_eth_tx_burst (nb_pkts=1, tx_pkts=0x7fc85686c000, queue_id=0, port_id=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/install-vpp-native/external/include/dpdk/rte_ethdev.h:4309
#3  tx_burst_vector_internal (n_left=1, mb=0x7fc85686c000, xd=0x7fc8568749c0, vm=0x7fc856803800)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/plugins/dpdk/device/device.c:179
#4  dpdk_device_class_tx_fn (vm=0x7fc856803800, node=<optimized out>, f=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/plugins/dpdk/device/device.c:376
#5  0x00007fc9585fe0da in dispatch_node (last_time_stamp=<optimized out>, frame=0x7fc85637c780, dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, node=0x7fc85697f440, vm=0x7fc856803800)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/vlib/main.c:1255
#6  dispatch_pending_node (vm=vm@entry=0x7fc856803800, pending_frame_index=pending_frame_index@entry=6, last_time_stamp=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/vlib/main.c:1430
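Frames #2-#4 are just VPP's DPDK output node handing the mbuf vector to the PMD;
the crash itself happens inside the rte_eth_tx_burst() call, in mlx5_tx_complete().
A generic sketch of that call pattern (not VPP's actual device.c code; the
function name, port/queue ids and variables below are placeholders):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Send a vector of mbufs, retrying until the queue stops accepting them.
 * mlx5_tx_burst() -> mlx5_tx_complete() runs inside rte_eth_tx_burst(),
 * which is where the backtrace above ends up. */
static void
send_all(uint16_t port_id, uint16_t queue_id,
         struct rte_mbuf **mbufs, uint16_t n_left)
{
        while (n_left > 0) {
                uint16_t sent = rte_eth_tx_burst(port_id, queue_id,
                                                 mbufs, n_left);
                if (sent == 0)
                        break;  /* queue full; caller drops or retries later */
                mbufs += sent;
                n_left -= sent;
        }
}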
Thanks,
Chetan