DPDK patches and discussions
* [dpdk-dev] Facing issue with mellanox after increasing number of buffers
@ 2020-02-21  4:30 chetan bhasin
  2020-02-25 15:20 ` Asaf Penso
  0 siblings, 1 reply; 2+ messages in thread
From: chetan bhasin @ 2020-02-21  4:30 UTC (permalink / raw)
  To: dev

Hi,

We are using DPDK underneath VPP. We are facing an issue when we increase
the number of buffers from 100k to 300k after upgrading VPP (18.01 -> 19.08).
The log shows the following error:
net_mlx5: port %u unable to find virtually contiguous chunk for address
(%p). rte_memseg_contig_walk() failed.\n%.0s", ap=ap@entry=0x7f3379c4fac8

VPP 18.01 uses DPDK 17.11.4.
VPP 19.08 uses DPDK 19.05.
With VPP 20.01 (DPDK 19.08) no issue is seen up to 400k buffers.
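
For reference, this is roughly how we raise the buffer count in VPP's
startup.conf (section and knob names as we understand them for 19.08;
18.01 used "dpdk { num-mbufs ... }" instead, so treat this as a sketch of
our setup rather than exact config):

    buffers {
      # total buffers allocated per NUMA node; raising this from 100k
      # to 300k is what triggers the mlx5 error above
      buffers-per-numa 300000
    }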


*Backtrace looks like:*
format=0x7f3376768df8 "net_mlx5: port %u unable to find virtually
contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s",
ap=ap@entry=0x7f3379c4fac8)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:427
#6  0x00007f3375ab2c12 in rte_log (level=level@entry=5, logtype=<optimized
out>,
    format=format@entry=0x7f3376768df8 "net_mlx5: port %u unable to find
virtually contiguous chunk for address (%p). rte_memseg_contig_walk()
failed.\n%.0s")
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:443
#7  0x00007f3375dc47fa in mlx5_mr_create_primary (dev=dev@entry=0x7f3376e9d940
<rte_eth_devices>,
    entry=entry@entry=0x7ef5c00d02ca, addr=addr@entry=69384463936)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:627
#8  0x00007f3375abe238 in mlx5_mr_create (addr=69384463936,
entry=0x7ef5c00d02ca, dev=0x7f3376e9d940 <rte_eth_devices>)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:833
#9  mlx5_mr_lookup_dev (dev=0x7f3376e9d940 <rte_eth_devices>,
mr_ctrl=mr_ctrl@entry=0x7ef5c00d022e, entry=0x7ef5c00d02ca,
    addr=69384463936)
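
The failing frame is mlx5_mr_create_primary(), which registers the memory
behind a buffer address by asking the EAL for the virtually contiguous
chunk covering it. Below is a minimal sketch of that kind of lookup using
the public rte_memseg_contig_walk() API; it is illustrative only, not the
actual mlx5 PMD code (struct and callback signatures are from
rte_memory.h, the helper names are made up):

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_memory.h>

    struct chunk_query {
            uintptr_t addr;   /* address we are looking for            */
            void *start;      /* start of the contiguous chunk, if any */
            size_t len;       /* length of that chunk                  */
    };

    static int
    find_chunk_cb(const struct rte_memseg_list *msl __rte_unused,
                  const struct rte_memseg *ms, size_t len, void *arg)
    {
            struct chunk_query *q = arg;
            uintptr_t start = (uintptr_t)ms->addr;

            if (q->addr >= start && q->addr < start + len) {
                    q->start = ms->addr;
                    q->len = len;
                    return 1;       /* stop the walk: chunk found */
            }
            return 0;               /* keep walking */
    }

    /* Returns 0 on success.  When no contiguous chunk covers the address,
     * we end up in the situation the "unable to find virtually contiguous
     * chunk" log above reports. */
    static int
    lookup_contig_chunk(void *addr, struct chunk_query *q)
    {
            q->addr = (uintptr_t)addr;
            q->start = NULL;
            q->len = 0;
            if (rte_memseg_contig_walk(find_chunk_cb, q) != 1 ||
                q->start == NULL)
                    return -1;
            return 0;
    }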


*Crash backtrace looks like:*
#0  mlx5_tx_complete (txq=<optimized out>)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_rxtx.h:588
#1  mlx5_tx_burst (dpdk_txq=<optimized out>, pkts=0x7fc85686c000, pkts_n=1)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_rxtx.c:563
#2  0x00007fc852d1912e in rte_eth_tx_burst (nb_pkts=1,
tx_pkts=0x7fc85686c000, queue_id=0, port_id=<optimized out>)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/install-vpp-native/external/include/dpdk/rte_ethdev.h:4309
#3  tx_burst_vector_internal (n_left=1, mb=0x7fc85686c000,
xd=0x7fc8568749c0, vm=0x7fc856803800)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/plugins/dpdk/device/device.c:179
#4  dpdk_device_class_tx_fn (vm=0x7fc856803800, node=<optimized out>,
f=<optimized out>)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/plugins/dpdk/device/device.c:376
#5  0x00007fc9585fe0da in dispatch_node (last_time_stamp=<optimized out>,
frame=0x7fc85637c780,
    dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL,
node=0x7fc85697f440, vm=0x7fc856803800)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/vlib/main.c:1255
#6  dispatch_pending_node (vm=vm@entry=0x7fc856803800,
pending_frame_index=pending_frame_index@entry=6,
    last_time_stamp=<optimized out>)
    at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/vlib/main.c:1430
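
For completeness, frames #2-#3 are the usual TX burst loop; a rough sketch
of that pattern (illustrative, not the actual VPP plugin code, and the
function/variable names here are made up):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Send a vector of mbufs on one TX queue, retrying until the PMD
     * accepts them all.  Inside rte_eth_tx_burst() the mlx5 PMD also
     * frees previously completed mbufs (mlx5_tx_complete), which is
     * where the crash above is hit. */
    static void
    tx_vector(uint16_t port_id, uint16_t queue_id,
              struct rte_mbuf **mb, uint16_t n_left)
    {
            while (n_left > 0) {
                    uint16_t sent = rte_eth_tx_burst(port_id, queue_id,
                                                     mb, n_left);
                    if (sent == 0)
                            break;  /* queue full: caller drops or retries */
                    mb += sent;
                    n_left -= sent;
            }
    }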


Thanks,
Chetan


* Re: [dpdk-dev] Facing issue with mellanox after increasing number of buffers
  2020-02-21  4:30 [dpdk-dev] Facing issue with mellanox after increasing number of buffers chetan bhasin
@ 2020-02-25 15:20 ` Asaf Penso
  0 siblings, 0 replies; 2+ messages in thread
From: Asaf Penso @ 2020-02-25 15:20 UTC (permalink / raw)
  To: chetan bhasin, dev

Thank you, Chetan, for this mail.
We will look into the issue below internally and come back after our analysis.

Regards,
Asaf Penso

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of chetan bhasin
> Sent: Friday, February 21, 2020 6:30 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Facing issue with mellanox after increasing number of
> buffers

