* [dpdk-dev] MLX5 driver - number of descriptors overflow
@ 2018-11-21 9:01 Amedeo Sapio
2018-11-25 23:40 ` Yongseok Koh
From: Amedeo Sapio @ 2018-11-21 9:01 UTC (permalink / raw)
To: dev
Hello,
I ran into a problem with the MLX5 driver when running code that works
fine with an Intel card. I have found that the cause of the error is an
overflow of the uint16_t descriptor count in the mlx5 driver.
Here the details:
- The NIC is a Mellanox ConnectX-5 100G.
- This is a summary code that I run to initialize the port:
ret = rte_eth_dev_configure(dpdk_par.portid, 1, 1, &port_conf);
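/* dev_info is assumed to have been filled earlier with
 * rte_eth_dev_info_get(dpdk_par.portid, &dev_info). */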
dpdk_par.port_rx_ring_size = dev_info.rx_desc_lim.nb_max;
dpdk_par.port_tx_ring_size = dev_info.tx_desc_lim.nb_max;
ret = rte_eth_dev_adjust_nb_rx_tx_desc(dpdk_par.portid,
&dpdk_par.port_rx_ring_size, &dpdk_par.port_tx_ring_size);
ret = rte_eth_rx_queue_setup(dpdk_par.portid, 0,
dpdk_par.port_rx_ring_size, rte_eth_dev_socket_id(dpdk_par.portid),
&rx_conf, dpdk_data.pool);
ret = rte_eth_tx_queue_setup(dpdk_par.portid, 0,
dpdk_par.port_tx_ring_size, rte_eth_dev_socket_id(dpdk_par.portid),
&tx_conf);
ret = rte_eth_dev_start(dpdk_par.portid);
- The "rte_eth_dev_start" function returns -ENOMEM = -12 (Out of memory)
- I see that "dev_info.rx_desc_lim.nb_max" is 65535. This value is rounded
up to the next power of two in "mlx5_rx_queue_setup", which overflows the
uint16_t and becomes 0.
I thought that "rte_eth_dev_adjust_nb_rx_tx_desc" should have adjusted the
value, but clearly it has not.
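To illustrate the wrap-around (a minimal standalone sketch, not the actual
mlx5 code path): rounding 65535 up to the next power of two yields 65536,
which does not fit in a uint16_t, so the stored count becomes 0.

#include <stdint.h>
#include <stdio.h>
#include <rte_common.h>

int main(void)
{
    uint16_t nb_max = UINT16_MAX;               /* dev_info.rx_desc_lim.nb_max */
    uint32_t rounded = rte_align32pow2(nb_max); /* next power of two: 65536 */
    uint16_t stored = (uint16_t)rounded;        /* truncated to 16 bits: 0 */

    printf("%u -> %u -> %u\n", nb_max, rounded, stored);
    return 0;
}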
Thanks,
---
Amedeo
* Re: [dpdk-dev] MLX5 driver - number of descriptors overflow
2018-11-21 9:01 [dpdk-dev] MLX5 driver - number of descriptors overflow Amedeo Sapio
@ 2018-11-25 23:40 ` Yongseok Koh
From: Yongseok Koh @ 2018-11-25 23:40 UTC (permalink / raw)
To: Amedeo Sapio; +Cc: dev
> On Nov 21, 2018, at 1:01 AM, Amedeo Sapio <amedeo.sapio@gmail.com> wrote:
>
> Hello,
>
> I ran into a problem with the MLX5 driver when running code that works
> fine with an Intel card. I have found that the cause of the error is an
> overflow of the uint16_t descriptor count in the mlx5 driver.
>
> Here the details:
>
> - The NIC is a Mellanox ConnectX-5 100G.
>
> - This is a summary code that I run to initialize the port:
>
> ret = rte_eth_dev_configure(dpdk_par.portid, 1, 1, &port_conf);
>
> dpdk_par.port_rx_ring_size = dev_info.rx_desc_lim.nb_max;
> dpdk_par.port_tx_ring_size = dev_info.tx_desc_lim.nb_max;
>
> ret = rte_eth_dev_adjust_nb_rx_tx_desc(dpdk_par.portid,
> &dpdk_par.port_rx_ring_size, &dpdk_par.port_tx_ring_size);
>
> ret = rte_eth_rx_queue_setup(dpdk_par.portid, 0,
> dpdk_par.port_rx_ring_size, rte_eth_dev_socket_id(dpdk_par.portid),
> &rx_conf, dpdk_data.pool);
> ret = rte_eth_tx_queue_setup(dpdk_par.portid, 0,
> dpdk_par.port_tx_ring_size, rte_eth_dev_socket_id(dpdk_par.portid),
> &tx_conf);
>
> ret = rte_eth_dev_start(dpdk_par.portid);
>
> - The "rte_eth_dev_start" function returns -ENOMEM = -12 (Out of memory)
>
> - I see that "dev_info.rx_desc_lim.nb_max" is 65535. This value is rounded
> up to the next power of two in "mlx5_rx_queue_setup", which overflows the
> uint16_t and becomes 0.
>
> I thought that "rte_eth_dev_adjust_nb_rx_tx_desc" should have adjusted the
> value, but clearly it has not.
Nice catch! You are right. We should've set it appropriately.
Will come up with a patch. Meanwhile, can you please use a workaround? E.g.,
rte_log2_u32(rte_align32prevpow2(dev_info.rx_desc_lim.nb_max));
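For reference, a minimal sketch of how the idea could be applied in the setup
code quoted above, assuming the goal is simply to request the largest power of
two not exceeding nb_max so the driver's internal round-up cannot overflow the
uint16_t (adapt the exact expression as needed):

/* Sketch: request a power-of-two ring size that still fits in uint16_t
 * after the driver rounds it up, instead of passing nb_max (65535). */
dpdk_par.port_rx_ring_size = rte_align32prevpow2(dev_info.rx_desc_lim.nb_max);
dpdk_par.port_tx_ring_size = rte_align32prevpow2(dev_info.tx_desc_lim.nb_max);

With nb_max = 65535 this requests 32768 descriptors per ring, which survives
the round-up.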
Will let you know when we push a patch.
Thanks,
Yongseok