test suite reviews and discussions
From: Nicholas Pratte <npratte@iol.unh.edu>
To: Bharati Bhole - Geminus <c_bharatib@xsightlabs.com>
Cc: "dts@dpdk.org" <dts@dpdk.org>
Subject: Re: Doubts in JumboFrames and stats_checks tests in DTS.
Date: Tue, 26 Nov 2024 14:39:01 -0500	[thread overview]
Message-ID: <CAKXZ7eg5KiKtWknRjr--Bi9Bcx0Qf1i6GcG34uKpWxfm86k07w@mail.gmail.com> (raw)
In-Reply-To: <AS8P193MB1605C8F07614F6CC11DD01EF8B232@AS8P193MB1605.EURP193.PROD.OUTLOOK.COM>

Hello Bharati,

I see you have discovered some of the oddities regarding DPDK and MTU!
There are many interesting problems associated with MTU within DPDK.
In my comments below I elaborate on what I've discovered about these
issues, much of which relates directly to the problems you've pointed
out in this thread. Thank you for starting this conversation!

Patrick, please let me know if I missed anything or if more
clarification could be provided in spots; I double checked, but I
could be missing something.

> I am Bharati Bhole. I am a new member of DTS mailing list.

Welcome!

> I have recently started working on DTS for my company and facing some issues/failures while running the DTS.
> Please help me with understanding the test cases and expected behaviours.
<snip>
>
> When the test set the max_pkt_len for testpmd and calculate the expected acceptable packet size, does it consider NICs supporting 2 VLANS? (In case of MTU update test, I have seen that 2 VLANs NIC are being considered while calculating acceptable packets size but in JumboFrames I dont see it).

Patrick's comment is correct! Without digging into DPDK in some depth,
it is incredibly difficult to understand what "MTU" actually means
within DPDK, and this lack of an enforced definition has created
significant delays in getting a comprehensive set of MTU test suites
merged into the currently-maintained DTS framework. Deeper discussions
need to be had on this issue, and I hope to get the community engaged
with it again in the near future.

If you take a deeper look into DPDK's source code, and each individual
vendor's driver code, you will see that each vendor makes its own
assumptions about what counts as L2 versus L3 information. Some
vendors assume that an Ethernet frame includes a Dot1Q tag, the CRC,
and the 14-byte Ethernet header; other vendors do not. As Patrick
mentioned, the new DTS framework is making an effort to move away from
conforming to unique vendor behavior in favor of code that works
universally across all vendors. To that end, the latest version of the
JumboFrames test suite that I've written assesses MTUs based on a
single agreed-upon definition of what a Layer 2 frame contains; of
course, given the existing vacuous MTU definition, the suite does not
work across all vendors. To deal with this, I refactored the test
suite to send packets within a +/-5 byte window around the set MTU,
probing its inner and outer boundaries. That approach has its own
issues in some cases (namely, it cannot assess behavior at the exact
MTU value itself), so this solution is expected to be temporary until
a better alternative is found.
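To make the boundary-probe idea concrete, here is a rough sketch of
the frame-size arithmetic involved. The helper names are mine for
illustration only, not the actual DTS API, and it assumes a plain
Ethernet frame (14-byte header plus 4-byte CRC, no VLAN tags):

```python
# Sketch of the +/-5 byte MTU boundary probe; illustrative only,
# not the real DTS implementation.

ETHER_HDR_LEN = 14  # dst MAC + src MAC + EtherType
CRC_LEN = 4         # frame check sequence


def frame_size_for_mtu(mtu: int, include_crc: bool = True) -> int:
    """Largest L2 frame a port with this MTU should accept, under the
    'plain Ethernet, no VLAN tags' assumption."""
    return mtu + ETHER_HDR_LEN + (CRC_LEN if include_crc else 0)


def boundary_probe_sizes(mtu: int, window: int = 5) -> list:
    """Packet sizes to send around the MTU boundary: everything in
    [boundary - window, boundary + window] except the boundary itself,
    which some drivers accept and others reject."""
    boundary = frame_size_for_mtu(mtu)
    return [boundary + d for d in range(-window, window + 1) if d != 0]
```

For an MTU of 1500 this probes frame sizes 1513 through 1523, skipping
1518; the skipped value is exactly the spot where vendor behavior
diverges, which is the limitation described above.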

I encourage you to take a look at the thread below, an archived
discussion on the DPDK mailing list; it provides some insight into
what 'max-pkt-len' and MTU mean within DPDK's logic:

https://inbox.dpdk.org/dev/0f4866fc-b76d-cf83-75e8-86326c02814b@intel.com/

You may also want to look at the original RFC that I published for
JumboFrames, which discusses this in some detail as well. There you
will also find a link to an existing Bugzilla ticket aimed at finding
solutions to the MTU definition problem.

https://inbox.dpdk.org/dev/20240524183604.6925-2-npratte@iol.unh.edu

>
> In function jumboframes_send_packet() -
> --<snip>--
>         if received:
>             if self.nic.startswith("fastlinq"):
>                 self.verify(
>                     self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>                     and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
>                     and (rx_bytes == pktsize),
>                     "packet pass assert error",
>                 )
>             else:
>                 self.verify(
>                     self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>                     and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
>                     and ((rx_bytes + 4) == pktsize),
>                     "packet pass assert error",
>                 )
>         else:
>             self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
>         return out
> --<snip>--
>
>       Can someone please tell me why these tx_bytes and rx_bytes calculations are different for QLogic NICs and other NICs?

QLogic NICs are interesting in that, if I am remembering correctly,
they only accept buffer sizes restricted to powers of 2. This is
unique to these NICs; it's not seen from any other vendor as far as I
know, and as Patrick mentioned in his response, legacy DTS was
designed around conforming to the quirks of multiple vendors. In the
"new" DTS framework, it's likely that we wouldn't consider these NICs
at all in some test cases. But that is the background for why this
code logic exists.
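As a rough illustration of the CRC accounting behind the two branches
quoted above (my own sketch, not DTS code): presumably most NICs
exclude the 4-byte CRC from the byte counters testpmd reports, while
the fastlinq counters already match the full wire size.

```python
# Sketch of the CRC-accounting difference behind the two verify()
# branches; illustrative only, not actual DTS code.

CRC_LEN = 4  # frame check sequence, counted on the wire


def expected_counter_bytes(wire_pktsize: int, counts_crc: bool) -> int:
    """Bytes a port's stats counter should report for a single packet
    occupying wire_pktsize bytes on the wire (CRC included)."""
    return wire_pktsize if counts_crc else wire_pktsize - CRC_LEN
```

For a 1518-byte frame, a counter that excludes the CRC reports 1514,
which is why the generic branch verifies `tx_bytes + 4 == pktsize`
while the fastlinq branch compares the byte counts directly.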

>
>
> 2. TestSuite_stats_checks.py :
>        The test, test_stats_checks is sending 2 packets of ETH/IP/RAW(30) and ETH/IP/RAW(1500).
>
>       In function send_packet_of_size_to_tx_port()  line no. 174 to 185
>       --<snip>--
>
>   if received:
>             self.verify(tx_pkts_difference >= 1, "No packet was sent")
>             self.verify(
>                 tx_pkts_difference == rx_pkts_difference,
>                 "different numbers of packets sent and received",
>             )
>             self.verify(
>                 tx_bytes_difference == rx_bytes_difference,
>                 "different number of bytes sent and received",
>             )
>             self.verify(tx_err_difference == 1, "unexpected tx error")
>             self.verify(rx_err_difference == 0, "unexpected rx error")
>
>       --<snip>--
>
>       This test expects packets with payload size 30 to pass RX and TX, which is working fine, but for the packet with payload size 1500, the test expects RX to pass and TX to fail?
>       I did not get this part. The default MTU size is 1500. When scapy sends the packet with ETH+IP+1500 the packet size is 18+20+1500 = 1538. And even if the NIC supports 2 VLANs the max it can accept is MTU+ETH+CRC+2*VLAN = 1526.

I don't have much to say about stats_checks.py, as I have never looked
into that suite personally. But I think the issue you are pointing out
here leads naturally into something I discovered when running manual
MTU tests, back when I wrote my initial version of the test suite:
there is currently no existing functionality that properly tests an
MTU at its exact boundary. Below is a link to a DPDK mailing list
archive discussing this very issue in greater detail. It doesn't
entirely relate to the issues you've brought up here, but I believe it
is relevant context for the overall doubt you are expressing in this
thread:

https://inbox.dpdk.org/dev/e2554b78-cdda-aa33-ac6d-59a543a10640@intel.com/

Take a look at the 'Thread Overview' in that archive; you'll see a
brief back-and-forth between Stephen and Morten discussing this issue
in greater detail.
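For what it's worth, the arithmetic in your message checks out.
Written out with the usual Ethernet constants (none of this comes from
DTS itself):

```python
# Reproducing the arithmetic from the quoted message: a 1500-byte
# payload exceeds what a port with the default MTU of 1500 can accept,
# even when two VLAN tags are allowed for.

ETH_HDR = 14   # Ethernet header
CRC = 4        # frame check sequence
IP_HDR = 20    # minimal IPv4 header
VLAN = 4       # one 802.1Q tag
MTU = 1500

wire_size = ETH_HDR + CRC + IP_HDR + 1500    # 1538 bytes on the wire
max_accept = MTU + ETH_HDR + CRC + 2 * VLAN  # 1526, even with 2 VLAN tags

assert wire_size > max_accept  # the 1500-byte-payload packet is oversized
```

So the packet genuinely exceeds even the most generous acceptance
limit, which is why the question of where exactly the boundary sits,
and how each driver counts it, matters so much here.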

<snip>

Thread overview: 8+ messages
2024-11-22 14:42 Bharati Bhole - Geminus
2024-11-22 16:59 ` Patrick Robb
2024-11-22 17:37   ` Bharati Bhole - Geminus
2024-11-25 10:45   ` Bharati Bhole - Geminus
2024-11-25 15:57     ` Patrick Robb
2024-11-25 17:36       ` Bharati Bhole - Geminus
2024-11-25 21:36         ` Patrick Robb
2024-11-26 19:39 ` Nicholas Pratte [this message]
