DPDK patches and discussions
* Testing scatter support for PMDs using testpmd
@ 2024-01-24 17:16 Jeremy Spewock
  2024-01-26 15:04 ` Boyer, Andrew
  0 siblings, 1 reply; 2+ messages in thread
From: Jeremy Spewock @ 2024-01-24 17:16 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, rasland


Hello maintainers,

While porting the first ethdev suite to the new DTS framework, we found an
inconsistency and were wondering if anyone could shed some light on it. The
inconsistency pertains to Intel and Mellanox NICs: one throws an error and
refuses to start testpmd, while the other works as expected.

In the original DTS suite for testing scattered packets, testpmd is started
with the flags --max-pkt-len=9000 and --mbuf-size=2048. This starts and
works fine on Intel NICs, but the same flags on a Mellanox NIC produce the
error shown below. testpmd also has an --enable-scatter flag, and when it is
provided, the Mellanox NIC accepts the configuration and starts without
error.

Our assumption is that this behavior should be consistent across NICs. Is
there a reason one NIC allows starting testpmd without explicitly enabling
scatter while the other doesn't? Should this flag always be required, and is
it an error that testpmd can start without it in the first place?

Here is the error provided when attempting to run on a Mellanox NIC:

mlx5_net: port 0 Rx queue 0: Scatter offload is not configured and no
enough mbuf space(2048) to contain the maximum RX packet length(9000) with
head-room(128)
mlx5_net: port 0 unable to allocate rx queue index 0
Fail to configure port 0 rx queues
Start ports failed

Thank you for any insight,
Jeremy



* Re: Testing scatter support for PMDs using testpmd
  2024-01-24 17:16 Testing scatter support for PMDs using testpmd Jeremy Spewock
@ 2024-01-26 15:04 ` Boyer, Andrew
  0 siblings, 0 replies; 2+ messages in thread
From: Boyer, Andrew @ 2024-01-26 15:04 UTC (permalink / raw)
  To: Jeremy Spewock; +Cc: dev, qi.z.zhang, rasland




On Jan 24, 2024, at 12:16 PM, Jeremy Spewock <jspewock@iol.unh.edu> wrote:


Hello Jeremy,

I can share a little bit of what I've seen while working on our devices.

The client can specify the max packet size, MTU, mbuf size, and whether to enable Rx or Tx s/g (separately). For performance reasons we don't want to enable s/g if it's not needed.

Now, the client can easily set things up with a small MTU, start processing, stop the port, and then increase the MTU beyond what a single mbuf can hold. To avoid having to tear down and rebuild the queues on an MTU change just to enable s/g support, we automatically enable Rx s/g if the client presents mbufs that are too small to hold the max MTU.

Unfortunately the API to configure the Tx queues doesn't tell us anything about the mbuf size, and there's nothing stopping the client from configuring Tx before Rx. So we can't reliably auto-enable Tx s/g, and it is possible to get into a config where the Rx side produces chained mbufs which the Tx side can't handle.

To avoid this misconfiguration, some versions of our PMD are set to fail at start if Rx s/g is enabled but Tx s/g isn't.

Hope this helps,
Andrew



