* Doubts in JumboFrames and stats_checks tests in DTS.
@ 2024-11-22 14:42 Bharati Bhole - Geminus
2024-11-22 16:59 ` Patrick Robb
2024-11-26 19:39 ` Nicholas Pratte
0 siblings, 2 replies; 8+ messages in thread
From: Bharati Bhole - Geminus @ 2024-11-22 14:42 UTC (permalink / raw)
To: dts
Hi,
I am Bharati Bhole. I am a new member of the DTS mailing list.
I have recently started working on DTS for my company and am facing some issues/failures while running it.
Please help me with understanding the test cases and their expected behaviours.
I am trying to understand the DTS behaviour for the following TCs:
1. JumboFrames :

   1. When the test sets the max_pkt_len for testpmd and calculates the expected acceptable packet size, does it consider NICs supporting 2 VLANs? (In the MTU update test, I have seen that 2-VLAN NICs are considered while calculating the acceptable packet size, but in JumboFrames I don't see it.)
2.
In function jumboframes_send_packet() -
--<snip>--
if received:
    if self.nic.startswith("fastlinq"):
        self.verify(
            self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
            and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
            and (rx_bytes == pktsize),
            "packet pass assert error",
        )
    else:
        self.verify(
            self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
            and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
            and ((rx_bytes + 4) == pktsize),
            "packet pass assert error",
        )
else:
    self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
return out
--<snip>--
Can someone please tell me why these tx_bytes and rx_bytes calculations are different for Qlogic NICs and other NICs?
2. TestSuite_stats_checks.py :
The test test_stats_checks sends 2 packets: ETH/IP/RAW(30) and ETH/IP/RAW(1500).
In the function send_packet_of_size_to_tx_port(), lines 174 to 185:
--<snip>--
if received:
    self.verify(tx_pkts_difference >= 1, "No packet was sent")
    self.verify(
        tx_pkts_difference == rx_pkts_difference,
        "different numbers of packets sent and received",
    )
    self.verify(
        tx_bytes_difference == rx_bytes_difference,
        "different number of bytes sent and received",
    )
    self.verify(tx_err_difference == 1, "unexpected tx error")
    self.verify(rx_err_difference == 0, "unexpected rx error")
--<snip>--
This test expects the packet with payload size 30 to pass RX and TX, which works fine, but for the packet with payload size 1500 the test expects RX to pass and TX to fail?
I did not get this part. The default MTU size is 1500. When scapy sends the packet with ETH+IP+1500, the packet size is 18+20+1500 = 1538. And even if the NIC supports 2 VLANs, the max it can accept is MTU+ETH+CRC+2*VLAN = 1526.
So according to my understanding the packet should be dropped, the rx_error counter should increase, and there should be no increment in the good/error packet counters for the TX port.
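To make my arithmetic concrete, here it is as a small sketch (my own illustration, using the 18-byte Ethernet-header-plus-CRC convention from the test code):

```python
# My size arithmetic as a sketch (all sizes in bytes).
ETH_HDR_WITH_CRC = 18   # 14-byte Ethernet header + 4-byte CRC, as counted in DTS
IP_HDR = 20             # standard IPv4 header
PAYLOAD = 1500          # the RAW(1500) payload from the test
MTU = 1500              # default MTU
VLAN = 4                # one VLAN tag

sent_frame = ETH_HDR_WITH_CRC + IP_HDR + PAYLOAD   # 18 + 20 + 1500 = 1538
max_accepted = MTU + ETH_HDR_WITH_CRC + 2 * VLAN   # 1500 + 18 + 8 = 1526

print(sent_frame)                 # 1538
print(max_accepted)               # 1526
print(sent_frame > max_accepted)  # True -> I expect an rx_error, not a pass
```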
Can someone please tell me what the gap/missing part in my understanding is?
Thanks,
Bharati Bhole.
* Re: Doubts in JumboFrames and stats_checks tests in DTS.
2024-11-22 14:42 Doubts in JumboFrames and stats_checks tests in DTS Bharati Bhole - Geminus
@ 2024-11-22 16:59 ` Patrick Robb
2024-11-22 17:37 ` Bharati Bhole - Geminus
2024-11-25 10:45 ` Bharati Bhole - Geminus
2024-11-26 19:39 ` Nicholas Pratte
1 sibling, 2 replies; 8+ messages in thread
From: Patrick Robb @ 2024-11-22 16:59 UTC (permalink / raw)
To: Bharati Bhole - Geminus
Cc: dts, Nicholas Pratte, Dean Marx, Paul Szczepanek, Luca Vizzarro,
NBU-Contact-Thomas Monjalon (EXTERNAL),
dev
Hi Bharati,
Welcome to the DTS mailing list. I will try to provide some answers based
on my experience running DTS at the DPDK Community Lab at UNH. I will also
flag that this "legacy" version of DTS is deprecated and getting minimal
maintenance. The majority of the current efforts for DTS are directed
towards the rewrite which exists within the /dts dir of the DPDK repo:
https://git.dpdk.org/dpdk/tree/dts
With that being said, of course the legacy repo is still useful and I
encourage you to use it, so I will provide some comments inline below:
On Fri, Nov 22, 2024 at 9:43 AM Bharati Bhole - Geminus <
c_bharatib@xsightlabs.com> wrote:
> Hi,
>
> I am Bharati Bhole. I am a new member of DTS mailing list.
> I have recently started working on DTS for my company and facing some
> issues/failures while running the DTS.
> Please help me with understanding the test cases and expected behaviours.
>
> I am trying to understand the DTS behaviour for following TCs:
>
> 1. JumboFrames :
>
> 1. When the test sets the max_pkt_len for testpmd and calculates the
> expected acceptable packet size, does it consider NICs supporting 2 VLANs?
> (In the MTU update test, I have seen that 2-VLAN NICs are being
> considered while calculating the acceptable packet size but in JumboFrames I
> don't see it).
>
>
No, 2 VLANs is not properly accounted for in the Jumboframes testsuite.
And, this is actually highly topical, as this is an ongoing point of
discussion in rewriting jumboframes and mtu_update for the new DTS
framework (the testcases are getting combined into 1 testsuite). I will
paste the function from mtu_update of legacy DTS which you may be referring
to:
------------------------------
def send_packet_of_size_to_port(self, port_id: int, pktsize: int):
    # The packet total size include ethernet header, ip header, and payload.
    # ethernet header length is 18 bytes, ip standard header length is 20 bytes.
    # pktlen = pktsize - ETHER_HEADER_LEN
    if self.kdriver in ["igb", "igc", "ixgbe"]:
        max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
        padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
    else:
        max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
        padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN
    out = self.send_scapy_packet(
        port_id,
        f'Ether(dst=dutmac, src="52:00:00:00:00:00")/IP()/Raw(load="\x50"*{padding})',
------------------------------
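To make the two branches concrete (my own numbers, not from the suite, plugging in the legacy constants ETHER_HEADER_LEN = 18, IP_HEADER_LEN = 20 and VLAN = 4):

```python
# Sketch of the two branches above with the legacy DTS constants plugged in.
ETHER_HEADER_LEN = 18  # 14-byte header + 4-byte CRC, per the comment in the suite
IP_HEADER_LEN = 20
VLAN = 4

def igb_branch(pktsize: int) -> tuple[int, int]:
    # igb/igc/ixgbe: budget for a single VLAN tag
    max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
    padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
    return max_pktlen, padding

def default_branch(pktsize: int) -> tuple[int, int]:
    # all other drivers: budget for two VLAN tags
    max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
    padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN
    return max_pktlen, padding

print(igb_branch(1500))      # (1522, 1480)
print(default_branch(1500))  # (1526, 1488)
```

So for the same requested pktsize the two branches differ by 8 bytes of padding, which is exactly the two-VLAN allowance.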
One difference between legacy DTS and the "new" DTS is that in legacy DTS a
master list of devices/drivers was maintained, and there were endless
conditions like this where a device list would be checked and then some
behavior modified based on that list. Because this strategy leads to bugs,
is unresponsive to changes in driver code, is hard to maintain, and for
other reasons, we no longer follow this approach in new DTS. Now,
if we want to toggle different behavior (like determining max_pkt_len for a
given MTU on a given device), that needs to be accomplished by querying
testpmd for device info (there are various testpmd runtime commands for
this). And in situations where testpmd doesn't expose the information we
need for checking device behavior in a particular testsuite, testpmd needs
to be updated to allow for this.
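As a rough illustration of what "querying testpmd" can look like (a sketch, not new-DTS code; the sample output wording is my assumption based on testpmd's `show port info` command):

```python
import re

# Sample of the kind of output `show port info 0` prints in testpmd;
# the exact wording here is an assumption for illustration.
SAMPLE_OUTPUT = """\
********************* Infos for port 0  *********************
MTU: 1500
Maximum configurable length of RX packet: 9728
"""

def max_rx_pktlen(show_port_info_output: str) -> int:
    """Pull the device's maximum RX packet length out of testpmd output."""
    m = re.search(r"Maximum configurable length of RX packet:\s*(\d+)",
                  show_port_info_output)
    if m is None:
        raise ValueError("max RX packet length not found in testpmd output")
    return int(m.group(1))

print(max_rx_pktlen(SAMPLE_OUTPUT))  # 9728
```

The point is that the limit comes from the device at runtime, not from a hard-coded driver list.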
I am CC'ing Nick who is the person writing the new jumboframes + MTU
testsuite, which (work in progress) is on patchwork here:
https://patchwork.dpdk.org/project/dpdk/patch/20240726141307.14410-3-npratte@iol.unh.edu/
Nick, maybe you can include the mailing list threads Thomas linked you, and
explain your current understanding of how to handle this issue? This won't
really help Bharati in the short term, but at least it will clarify to him
how this issue will be handled in the new DTS framework, which presumably
he will upgrade to using at some point.
> 2. In function jumboframes_send_packet() -
> --<snip>--
> if received:
>     if self.nic.startswith("fastlinq"):
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
>             and (rx_bytes == pktsize),
>             "packet pass assert error",
>         )
>     else:
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
>             and ((rx_bytes + 4) == pktsize),
>             "packet pass assert error",
>         )
> else:
>     self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
> return out
> --<snip>--
>
> Can someone please tell me why these tx_bytes and rx_bytes calculations
> are different for Qlogic NICs and other NICs?
>
I don't know the reason why fastlinq has this behavior in DPDK, so I'm
CCing the dev mailing list - maybe someone there will have the historical
knowledge to answer.
Otherwise, in terms of DTS, this is again an example of a workflow which we
do not allow in new DTS.
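One hedged guess in the meantime: the `+ 4` in the generic branch looks like the 4-byte Ethernet CRC, which some drivers strip from their byte counters while others (apparently fastlinq) include. A toy model of that difference (my assumption, not confirmed DPDK behavior):

```python
# Toy model: how a byte-counter check changes depending on whether the
# driver's stats include the 4-byte Ethernet CRC (assumption, not verified).
ETHER_CRC_LEN = 4

def bytes_match(counter_bytes: int, wire_pktsize: int, counts_crc: bool) -> bool:
    """True if a PMD byte counter is consistent with the on-wire packet size."""
    if counts_crc:
        return counter_bytes == wire_pktsize              # fastlinq-style check
    return counter_bytes + ETHER_CRC_LEN == wire_pktsize  # generic "+ 4" check

# A 1518-byte frame on the wire: a CRC-stripping driver reports 1514 bytes,
# a CRC-including driver reports 1518.
print(bytes_match(1518, 1518, counts_crc=True))   # True
print(bytes_match(1514, 1518, counts_crc=False))  # True
```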
>
> 2. TestSuite_stats_checks.py :
> The test, test_stats_checks is sending 2 packets of ETH/IP/RAW(30) and
> ETH/IP/RAW(1500).
>
> In function send_packet_of_size_to_tx_port() line no. 174 to 185
> --<snip>--
> if received:
>     self.verify(tx_pkts_difference >= 1, "No packet was sent")
>     self.verify(
>         tx_pkts_difference == rx_pkts_difference,
>         "different numbers of packets sent and received",
>     )
>     self.verify(
>         tx_bytes_difference == rx_bytes_difference,
>         "different number of bytes sent and received",
>     )
>     self.verify(tx_err_difference == 1, "unexpected tx error")
>     self.verify(rx_err_difference == 0, "unexpected rx error")
> --<snip>--
>
> This test expects the packet with payload size 30 to pass RX and TX, which
> works fine, but for the packet with payload size 1500 the test expects RX
> to pass and TX to fail?
> I did not get this part. The default MTU size is 1500. When scapy sends
> the packet with ETH+IP+1500, the packet size is 18+20+1500 = 1538. And even
> if the NIC supports 2 VLANs, the max it can accept is MTU+ETH+CRC+2*VLAN =
> 1526.
> So according to my understanding the packet should be dropped, the
> rx_error counter should increase, and there should be no increment in the
> good/error packet counters for the TX port.
>
This is not a testsuite that we run at our lab but I have read through the
testplan and test file. I think your math makes sense and I would expect
that rx_err_difference would be 1 in this scenario. When we rework this
testsuite, obviously we will need to start testpmd with various NICs, send
packets with RAW(1500) and see if port stats shows rx_err 1 or 0. I am
curious to see if this is the universal behavior in DPDK, or just some
unique behavior from Intel 700 series (legacy DTS was often written towards
the behavior of this device). A goal in rewriting our tests is ensuring
that DPDK apis (which we reach through testpmd) truly return the same
behavior across different NICs.
Sorry about the half answer. Maybe someone else from the dev mailing list
can provide a response about how this RAW(1500) packet can be received on
rx port on any DPDK device.
I can say that we do have this stats_checks testsuite marked as a candidate
to rewrite for new DTS in this current development cycle (DPDK 25.03).
Maybe we can loop you into these conversations, since you have an interest
in the subject? And, there's no pressure on this, but I will just add you
to the invite list for the DPDK DTS meetings (meets once every 2 weeks) in
case you want to join and discuss.
>
> Can someone please tell what is the gap/missing part in my understanding?
>
> Thanks,
> Bharati Bhole.
>
>
Thanks for getting involved - I'm glad to see more companies making use of
DTS.
* Re: Doubts in JumboFrames and stats_checks tests in DTS.
2024-11-22 16:59 ` Patrick Robb
@ 2024-11-22 17:37 ` Bharati Bhole - Geminus
2024-11-25 10:45 ` Bharati Bhole - Geminus
1 sibling, 0 replies; 8+ messages in thread
From: Bharati Bhole - Geminus @ 2024-11-22 17:37 UTC (permalink / raw)
To: Patrick Robb
Cc: dts, Nicholas Pratte, Dean Marx, Paul Szczepanek, Luca Vizzarro,
NBU-Contact-Thomas Monjalon (EXTERNAL),
dev
Hi Patrick,
Thanks a lot for the quick response.
Thank you for adding me to the discussion meetings.
Thank you,
Bharati.
* Re: Doubts in JumboFrames and stats_checks tests in DTS.
2024-11-22 16:59 ` Patrick Robb
2024-11-22 17:37 ` Bharati Bhole - Geminus
@ 2024-11-25 10:45 ` Bharati Bhole - Geminus
2024-11-25 15:57 ` Patrick Robb
1 sibling, 1 reply; 8+ messages in thread
From: Bharati Bhole - Geminus @ 2024-11-25 10:45 UTC (permalink / raw)
To: Patrick Robb
Cc: dts, Nicholas Pratte, Dean Marx, Paul Szczepanek, Luca Vizzarro,
NBU-Contact-Thomas Monjalon (EXTERNAL),
dev
Hi Patrick,
I used https://dpdk.org/git/dpdk to clone the DPDK code and tried to go through the DTS/README.md file.
The file says a Docker container is used for development as well as test execution, but I did not find any steps for setting up the test environment.
I tried to look for the steps at https://doc.dpdk.org/guides/tools/dts.html but they are not there.
Can you please point me to the document describing the execution steps?
Thanks,
Bharati.
* Re: Doubts in JumboFrames and stats_checks tests in DTS.
2024-11-25 10:45 ` Bharati Bhole - Geminus
@ 2024-11-25 15:57 ` Patrick Robb
2024-11-25 17:36 ` Bharati Bhole - Geminus
0 siblings, 1 reply; 8+ messages in thread
From: Patrick Robb @ 2024-11-25 15:57 UTC (permalink / raw)
To: Bharati Bhole - Geminus
Cc: dts, Nicholas Pratte, Dean Marx, Paul Szczepanek, Luca Vizzarro,
NBU-Contact-Thomas Monjalon (EXTERNAL),
dev
Hi Bharati,
It might be easiest to address your questions over a video conference call
instead of email. Would this be okay?
I am free tomorrow 11/26 16:00-18:00 UTC, or Wednesday 11/27 14:00-16:00
UTC and 20:00-22:00 UTC. Or I have other availability if none of these work.
On Mon, Nov 25, 2024 at 5:45 AM Bharati Bhole - Geminus <
c_bharatib@xsightlabs.com> wrote:
> Hi Patrik,
>
> I used site - https://dpdk.org/git/dpdk to clone the DPDK code. I tried
> to go through the DTS/README.md file.
>
> This file says, it uses docker container for dev as well as test
> execution. But I did not find any steps for setting up the test environment
> for it.
>
> I tried to look for the steps at
> https://doc.dpdk.org/guides/tools/dts.html but its not there.
> Can you please point me to the document for the execution steps?
>
> Thanks,
> Bharati.
>
> ------------------------------
> *From:* Patrick Robb <probb@iol.unh.edu>
> *Sent:* 22 November 2024 10:29 PM
> *To:* Bharati Bhole - Geminus <c_bharatib@xsightlabs.com>
> *Cc:* dts@dpdk.org <dts@dpdk.org>; Nicholas Pratte <npratte@iol.unh.edu>;
> Dean Marx <dmarx@iol.unh.edu>; Paul Szczepanek <Paul.Szczepanek@arm.com>;
> Luca Vizzarro <Luca.Vizzarro@arm.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>; dev <dev@dpdk.org>
> *Subject:* Re: Doubts in JumboFrames and stats_checks tests in DTS.
>
> Hi Bharati,
>
> Welcome to the DTS mailing list. I will try to provide some answers based
> on my experience running DTS at the DPDK Community Lab at UNH. I will also
> flag that this "legacy" version of DTS is deprecated and getting minimal
> maintenance. The majority of the current efforts for DTS are directed
> towards the rewrite which exists within the /dts dir of the DPDK repo:
> https://git.dpdk.org/dpdk/tree/dts
>
> With that being said, of course the legacy repo is still useful and I
> encourage you to use it, so I will provide some comments inline below:
>
> On Fri, Nov 22, 2024 at 9:43 AM Bharati Bhole - Geminus <
> c_bharatib@xsightlabs.com> wrote:
>
> Hi,
>
> I am Bharati Bhole. I am a new member of DTS mailing list.
> I have recently started working on DTS for my company and facing some
> issues/failures while running the DTS.
> Please help me with understanding the test cases and expected behaviours.
>
> I am trying to understand the DTS behaviour for following TCs:
>
> 1. JumboFrames :
>
> 1. When the test set the max_pkt_len for testpmd and calculate the
> expected acceptable packet size, does it consider NICs supporting 2 VLANS?
> (In case of MTU update test, I have seen that 2 VLANs NIC are being
> considered while calculating acceptable packets size but in JumboFrames I
> dont see it).
>
>
> No, 2 VLANs is not properly accounted for in the Jumboframes testsuite.
> And, this is actually highly topical, as this is an ongoing point of
> discussion in rewriting jumboframes and mtu_update for the new DTS
> framework (the testcases are getting combined into 1 testsuite). I will
> paste the function from mtu_update of legacy DTS which you may be referring
> to:
>
> ------------------------------
>
> def send_packet_of_size_to_port(self, port_id: int, pktsize: int):
>     # The packet total size includes the ethernet header, ip header,
>     # and payload.
>     # ethernet header length is 18 bytes, ip standard header length
>     # is 20 bytes.
>     # pktlen = pktsize - ETHER_HEADER_LEN
>     if self.kdriver in ["igb", "igc", "ixgbe"]:
>         max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
>         padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
>     else:
>         max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
>         padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN
>     out = self.send_scapy_packet(
>         port_id,
>         f'Ether(dst=dutmac, src="52:00:00:00:00:00")/IP()/Raw(load="\x50"*{padding})',
> ------------------------------
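[Editor's note: the two branches above collapse to a difference of just 8 bytes of padding. Here is a standalone sketch of that arithmetic; the constants mirror the legacy framework's comments (18-byte Ethernet header including CRC, 20-byte IPv4 header, 4-byte VLAN tag) and are illustrative only, not authoritative for any particular NIC.]

```python
# Hedged sketch of the legacy DTS arithmetic quoted above.
ETHER_HEADER_LEN = 18  # 14-byte Ethernet header + 4-byte CRC
IP_HEADER_LEN = 20     # standard IPv4 header
VLAN = 4               # one 802.1Q tag

def compute_padding(pktsize: int, kdriver: str) -> int:
    """Raw() payload length the legacy suite would generate for pktsize."""
    if kdriver in ("igb", "igc", "ixgbe"):
        # Single-VLAN budget: the VLAN term cancels out entirely.
        max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
        return max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
    # Double-VLAN (QinQ) budget: 2 * VLAN leaves 8 extra bytes of padding.
    max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
    return max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN

# The igb/igc/ixgbe branch reduces to pktsize - 20; the other branch
# to pktsize - 12, so the only observable difference is 8 bytes.
```

So for pktsize=1500 the suite pads 1480 bytes on the listed Intel drivers and 1488 bytes everywhere else.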
>
> One difference between legacy DTS and the "new" DTS is that in legacy DTS
> a master list of devices/drivers was maintained, and there were endless
> conditions like this where a device list would be checked and some
> behavior modified based on that list. Because this strategy leads to
> bugs, is unresponsive to changes in driver code, and is hard to maintain,
> among other reasons, we no longer follow this approach in new DTS. Now,
> if we want to toggle different behavior (like determining max_pkt_len for a
> given MTU for a given device) that needs to be accomplished by querying
> testpmd for device info (there are various testpmd runtime commands for
> this). And, in situations where testpmd doesn't expose the information we
> need for checking device behavior in a particular testsuite - testpmd needs
> to be updated to allow for this.
>
> I am CC'ing Nick who is the person writing the new jumboframes + MTU
> testsuite, which (work in progress) is on patchwork here:
> https://patchwork.dpdk.org/project/dpdk/patch/20240726141307.14410-3-npratte@iol.unh.edu/
>
> Nick, maybe you can include the mailing list threads Thomas linked you, and
> explain your current understanding of how to handle this issue? This won't
> really help Bharati in the short term, but at least it will clarify to him
> how this issue will be handled in the new DTS framework, which presumably
> he will upgrade to using at some point.
>
>
> 2. In function jumboframes_send_packet() -
> --<snip>--
> if received:
>     if self.nic.startswith("fastlinq"):
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
>             and (rx_bytes == pktsize),
>             "packet pass assert error",
>         )
>     else:
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
>             and ((rx_bytes + 4) == pktsize),
>             "packet pass assert error",
>         )
> else:
>     self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
> return out
> --<snip>--
>
> Can someone please tell me why these tx_bytes and rx_bytes calculations
> are different for QLogic NICs and other NICs?
>
>
> I don't know the reason why fastlinq has this behavior in DPDK, so I'm
> CCing the dev mailing list - maybe someone there will have the historical
> knowledge to answer.
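[Editor's note: one plausible reading — an assumption, not confirmed for the qede driver — is that the +4 in the generic branch adds back the 4-byte Ethernet FCS, i.e. the test tolerates drivers whose byte counters exclude the CRC, while fastlinq's counters appear to already include it. A toy model of that check:]

```python
CRC_LEN = 4  # Ethernet frame check sequence (FCS)

def rx_bytes_ok(rx_bytes: int, pktsize: int, counters_include_crc: bool) -> bool:
    """Toy model of the quoted check: compare a port byte counter with
    the on-wire frame size, depending on whether the driver's stats
    include the FCS (hypothetical reading of the fastlinq/other split)."""
    if counters_include_crc:
        return rx_bytes == pktsize            # fastlinq branch
    return rx_bytes + CRC_LEN == pktsize      # generic branch
```

For a 1518-byte frame on the wire, a CRC-inclusive counter reports 1518 and a CRC-exclusive one reports 1514; both pass their respective branch.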
>
> Otherwise, in terms of DTS, this is again an example of a workflow which
> we do not allow in new DTS.
>
>
>
>
>
> 2. TestSuite_stats_checks.py :
> The test, test_stats_checks is sending 2 packets of ETH/IP/RAW(30) and
> ETH/IP/RAW(1500).
>
> In function send_packet_of_size_to_tx_port() line no. 174 to 185
> --<snip>--
>
> if received:
>     self.verify(tx_pkts_difference >= 1, "No packet was sent")
>     self.verify(
>         tx_pkts_difference == rx_pkts_difference,
>         "different numbers of packets sent and received",
>     )
>     self.verify(
>         tx_bytes_difference == rx_bytes_difference,
>         "different number of bytes sent and received",
>     )
>     self.verify(tx_err_difference == 1, "unexpected tx error")
>     self.verify(rx_err_difference == 0, "unexpected rx error")
>
> --<snip>--
>
> This test expects packets with payload size 30 to pass on RX and TX, which
> is working fine, but for the packet with payload size 1500 the test expects
> RX to pass and TX to fail?
> I did not get this part. The default MTU size is 1500. When scapy sends
> the packet with ETH+IP+1500 the packet size is 18+20+1500 = 1538. And even
> if the NIC supports 2 VLANs the max it can accept is MTU+ETH+CRC+2*VLAN =
> 1526.
> So according to my understanding the packets should be dropped, the
> rx_error counter should increase, and there should not be any increment in
> good/error packets for the TX port.
>
>
> This is not a testsuite that we run at our lab but I have read through the
> testplan and test file. I think your math makes sense and I would expect
> that rx_err_difference would be 1 in this scenario. When we rework this
> testsuite, obviously we will need to start testpmd with various NICs, send
> packets with RAW(1500) and see if port stats shows rx_err 1 or 0. I am
> curious to see if this is the universal behavior in DPDK, or just some
> unique behavior from Intel 700 series (legacy DTS was often written towards
> the behavior of this device). A goal in rewriting our tests is ensuring
> that DPDK apis (which we reach through testpmd) truly return the same
> behavior across different NICs.
>
> Sorry about the half answer. Maybe someone else from the dev mailing list
> can provide a response about how this RAW(1500) packet can be received on
> rx port on any DPDK device.
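[Editor's note: to make the arithmetic in the question concrete, here it is written out with the usual Ethernet/IPv4 constants. This is a restatement of the question's math, not code taken from the testsuite.]

```python
MTU = 1500        # testpmd default MTU
ETH_HDR = 14      # Ethernet header without FCS
CRC = 4           # frame check sequence
VLAN = 4          # one 802.1Q tag
IP_HDR = 20       # standard IPv4 header
PAYLOAD = 1500    # scapy Raw(1500)

frame_on_wire = ETH_HDR + IP_HDR + PAYLOAD + CRC  # 18 + 20 + 1500 = 1538
max_accepted = MTU + ETH_HDR + CRC + 2 * VLAN     # 1526 even with a QinQ allowance

assert frame_on_wire == 1538
assert max_accepted == 1526
# 1538 > 1526, so one would indeed expect the frame to be dropped on rx
# and the rx_err counter to increase.
assert frame_on_wire > max_accepted
```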
>
> I can say that we do have this stats_checks testsuite marked as a
> candidate to rewrite for new DTS in this current development cycle (DPDK
> 25.03). Maybe we can loop you into these conversations, since you have an
> interest in the subject? And, there's no pressure on this, but I will just
> add you to the invite list for the DPDK DTS meetings (meets once every 2
> weeks) in case you want to join and discuss.
>
>
>
> Can someone please tell what is the gap/missing part in my understanding?
>
> Thanks,
> Bharati Bhole.
>
>
> Thanks for getting involved - I'm glad to see more companies making use of
> DTS.
>
[-- Attachment #2: Type: text/html, Size: 23968 bytes --]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Doubts in JumboFrames and stats_checks tests in DTS.
2024-11-25 15:57 ` Patrick Robb
@ 2024-11-25 17:36 ` Bharati Bhole - Geminus
2024-11-25 21:36 ` Patrick Robb
0 siblings, 1 reply; 8+ messages in thread
From: Bharati Bhole - Geminus @ 2024-11-25 17:36 UTC (permalink / raw)
To: Patrick Robb
Cc: dts, Nicholas Pratte, Dean Marx, Paul Szczepanek, Luca Vizzarro,
NBU-Contact-Thomas Monjalon (EXTERNAL),
dev
[-- Attachment #1: Type: text/plain, Size: 10491 bytes --]
Hi Patrick,
11/26 16:00 UTC works for me.
Please let me know which link to join.
Thanks,
Bharati.
________________________________
From: Patrick Robb <probb@iol.unh.edu>
Sent: Monday, November 25, 2024 9:27:29 PM
To: Bharati Bhole - Geminus <c_bharatib@xsightlabs.com>
Cc: dts@dpdk.org <dts@dpdk.org>; Nicholas Pratte <npratte@iol.unh.edu>; Dean Marx <dmarx@iol.unh.edu>; Paul Szczepanek <Paul.Szczepanek@arm.com>; Luca Vizzarro <Luca.Vizzarro@arm.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; dev <dev@dpdk.org>
Subject: Re: Doubts in JumboFrames and stats_checks tests in DTS.
Hi Bharati,
It might be easiest to address your questions over a video conference call instead of email. Would this be okay?
I am free tomorrow 11/26 16:00-18:00 UTC, or Wednesday 11/27 14:00-16:00 UTC and 20:00-22:00 UTC. Or I have other availability if none of these work.
On Mon, Nov 25, 2024 at 5:45 AM Bharati Bhole - Geminus <c_bharatib@xsightlabs.com<mailto:c_bharatib@xsightlabs.com>> wrote:
Hi Patrick,
I used the site https://dpdk.org/git/dpdk to clone the DPDK code and tried to go through the DTS/README.md file.
This file says it uses a Docker container for development as well as test execution, but I did not find any steps for setting up the test environment for it.
I tried to look for the steps at https://doc.dpdk.org/guides/tools/dts.html but it's not there.
Can you please point me to the document for the execution steps?
Thanks,
Bharati.
________________________________
From: Patrick Robb <probb@iol.unh.edu<mailto:probb@iol.unh.edu>>
Sent: 22 November 2024 10:29 PM
To: Bharati Bhole - Geminus <c_bharatib@xsightlabs.com<mailto:c_bharatib@xsightlabs.com>>
Cc: dts@dpdk.org<mailto:dts@dpdk.org> <dts@dpdk.org<mailto:dts@dpdk.org>>; Nicholas Pratte <npratte@iol.unh.edu<mailto:npratte@iol.unh.edu>>; Dean Marx <dmarx@iol.unh.edu<mailto:dmarx@iol.unh.edu>>; Paul Szczepanek <Paul.Szczepanek@arm.com<mailto:Paul.Szczepanek@arm.com>>; Luca Vizzarro <Luca.Vizzarro@arm.com<mailto:Luca.Vizzarro@arm.com>>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net<mailto:thomas@monjalon.net>>; dev <dev@dpdk.org<mailto:dev@dpdk.org>>
Subject: Re: Doubts in JumboFrames and stats_checks tests in DTS.
Hi Bharati,
Welcome to the DTS mailing list. I will try to provide some answers based on my experience running DTS at the DPDK Community Lab at UNH. I will also flag that this "legacy" version of DTS is deprecated and getting minimal maintenance. The majority of the current efforts for DTS are directed towards the rewrite which exists within the /dts dir of the DPDK repo: https://git.dpdk.org/dpdk/tree/dts
With that being said, of course the legacy repo is still useful and I encourage you to use it, so I will provide some comments inline below:
On Fri, Nov 22, 2024 at 9:43 AM Bharati Bhole - Geminus <c_bharatib@xsightlabs.com<mailto:c_bharatib@xsightlabs.com>> wrote:
Hi,
I am Bharati Bhole. I am a new member of the DTS mailing list.
I have recently started working on DTS for my company and am facing some issues/failures while running DTS.
Please help me with understanding the test cases and expected behaviours.
I am trying to understand the DTS behaviour for the following TCs:
1. JumboFrames :
1. When the test sets the max_pkt_len for testpmd and calculates the expected acceptable packet size, does it consider NICs supporting 2 VLANs? (In the MTU update test, I have seen that 2-VLAN NICs are being considered while calculating the acceptable packet size, but in JumboFrames I don't see it).
No, 2 VLANs is not properly accounted for in the Jumboframes testsuite. And, this is actually highly topical, as this is an ongoing point of discussion in rewriting jumboframes and mtu_update for the new DTS framework (the testcases are getting combined into 1 testsuite). I will paste the function from mtu_update of legacy DTS which you may be referring to:
------------------------------
def send_packet_of_size_to_port(self, port_id: int, pktsize: int):
    # The packet total size includes the ethernet header, ip header, and payload.
    # ethernet header length is 18 bytes, ip standard header length is 20 bytes.
    # pktlen = pktsize - ETHER_HEADER_LEN
    if self.kdriver in ["igb", "igc", "ixgbe"]:
        max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
        padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
    else:
        max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
        padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN
    out = self.send_scapy_packet(
        port_id,
        f'Ether(dst=dutmac, src="52:00:00:00:00:00")/IP()/Raw(load="\x50"*{padding})',
------------------------------
One difference between legacy DTS and the "new" DTS is that in legacy DTS a master list of devices/drivers was maintained, and there were endless conditions like this where a device list would be checked and some behavior modified based on that list. Because this strategy leads to bugs, is unresponsive to changes in driver code, and is hard to maintain, among other reasons, we no longer follow this approach in new DTS. Now, if we want to toggle different behavior (like determining max_pkt_len for a given MTU for a given device) that needs to be accomplished by querying testpmd for device info (there are various testpmd runtime commands for this). And, in situations where testpmd doesn't expose the information we need for checking device behavior in a particular testsuite - testpmd needs to be updated to allow for this.
I am CC'ing Nick who is the person writing the new jumboframes + MTU testsuite, which (work in progress) is on patchwork here: https://patchwork.dpdk.org/project/dpdk/patch/20240726141307.14410-3-npratte@iol.unh.edu/
Nick, maybe you can include the mailing list threads Thomas linked you, and explain your current understanding of how to handle this issue? This won't really help Bharati in the short term, but at least it will clarify to him how this issue will be handled in the new DTS framework, which presumably he will upgrade to using at some point.
2. In function jumboframes_send_packet() -
--<snip>--
if received:
    if self.nic.startswith("fastlinq"):
        self.verify(
            self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
            and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
            and (rx_bytes == pktsize),
            "packet pass assert error",
        )
    else:
        self.verify(
            self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
            and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
            and ((rx_bytes + 4) == pktsize),
            "packet pass assert error",
        )
else:
    self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
return out
--<snip>--
Can someone please tell me why these tx_bytes and rx_bytes calculations are different for QLogic NICs and other NICs?
I don't know the reason why fastlinq has this behavior in DPDK, so I'm CCing the dev mailing list - maybe someone there will have the historical knowledge to answer.
Otherwise, in terms of DTS, this is again an example of a workflow which we do not allow in new DTS.
2. TestSuite_stats_checks.py :
The test, test_stats_checks is sending 2 packets of ETH/IP/RAW(30) and ETH/IP/RAW(1500).
In function send_packet_of_size_to_tx_port() line no. 174 to 185
--<snip>--
if received:
    self.verify(tx_pkts_difference >= 1, "No packet was sent")
    self.verify(
        tx_pkts_difference == rx_pkts_difference,
        "different numbers of packets sent and received",
    )
    self.verify(
        tx_bytes_difference == rx_bytes_difference,
        "different number of bytes sent and received",
    )
    self.verify(tx_err_difference == 1, "unexpected tx error")
    self.verify(rx_err_difference == 0, "unexpected rx error")
--<snip>--
This test expects packets with payload size 30 to pass on RX and TX, which is working fine, but for the packet with payload size 1500 the test expects RX to pass and TX to fail?
I did not get this part. The default MTU size is 1500. When scapy sends the packet with ETH+IP+1500 the packet size is 18+20+1500 = 1538. And even if the NIC supports 2 VLANs the max it can accept is MTU+ETH+CRC+2*VLAN = 1526.
So according to my understanding the packets should be dropped, the rx_error counter should increase, and there should not be any increment in good/error packets for the TX port.
This is not a testsuite that we run at our lab but I have read through the testplan and test file. I think your math makes sense and I would expect that rx_err_difference would be 1 in this scenario. When we rework this testsuite, obviously we will need to start testpmd with various NICs, send packets with RAW(1500) and see if port stats shows rx_err 1 or 0. I am curious to see if this is the universal behavior in DPDK, or just some unique behavior from Intel 700 series (legacy DTS was often written towards the behavior of this device). A goal in rewriting our tests is ensuring that DPDK apis (which we reach through testpmd) truly return the same behavior across different NICs.
Sorry about the half answer. Maybe someone else from the dev mailing list can provide a response about how this RAW(1500) packet can be received on rx port on any DPDK device.
I can say that we do have this stats_checks testsuite marked as a candidate to rewrite for new DTS in this current development cycle (DPDK 25.03). Maybe we can loop you into these conversations, since you have an interest in the subject? And, there's no pressure on this, but I will just add you to the invite list for the DPDK DTS meetings (meets once every 2 weeks) in case you want to join and discuss.
Can someone please tell what is the gap/missing part in my understanding?
Thanks,
Bharati Bhole.
Thanks for getting involved - I'm glad to see more companies making use of DTS.
[-- Attachment #2: Type: text/html, Size: 27623 bytes --]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Doubts in JumboFrames and stats_checks tests in DTS.
2024-11-25 17:36 ` Bharati Bhole - Geminus
@ 2024-11-25 21:36 ` Patrick Robb
0 siblings, 0 replies; 8+ messages in thread
From: Patrick Robb @ 2024-11-25 21:36 UTC (permalink / raw)
To: Bharati Bhole - Geminus
Cc: dts, Nicholas Pratte, Dean Marx, Paul Szczepanek, Luca Vizzarro,
NBU-Contact-Thomas Monjalon (EXTERNAL),
dev
[-- Attachment #1: Type: text/plain, Size: 11831 bytes --]
Hi Bharati,
Thanks, here is the meeting info. I'll see you tomorrow!
---------------------------
Patrick Robb is inviting you to a scheduled Zoom meeting.
Topic: Bharati & Patrick DTS Discussion
Time: Nov 26, 2024 11:00 AM Eastern Time (US and Canada)
Join from PC, Mac, Linux, iOS or Android: https://unh.zoom.us/j/92634291594
Or iPhone one-tap: 13017158592,92634291594# or 13052241968,92634291594#
Or Telephone:
Dial: +1 301 715 8592 (US Toll)
Meeting ID: 926 3429 1594
International numbers available: https://unh.zoom.us/u/aHJatyofh
Or a H.323/SIP room system:
H.323: rc.unh.edu or 162.255.37.11 (US West) or 162.255.36.11 (US East)
Meeting ID: 926 3429 1594
SIP: 92634291594@zoomcrc.com
On Mon, Nov 25, 2024 at 12:37 PM Bharati Bhole - Geminus <
c_bharatib@xsightlabs.com> wrote:
> Hi Patrick,
>
> 11/26 16:00 UTC works for me.
> Please let me know which link to join.
>
> Thanks,
> Bharati.
> ------------------------------
> *From:* Patrick Robb <probb@iol.unh.edu>
> *Sent:* Monday, November 25, 2024 9:27:29 PM
> *To:* Bharati Bhole - Geminus <c_bharatib@xsightlabs.com>
> *Cc:* dts@dpdk.org <dts@dpdk.org>; Nicholas Pratte <npratte@iol.unh.edu>;
> Dean Marx <dmarx@iol.unh.edu>; Paul Szczepanek <Paul.Szczepanek@arm.com>;
> Luca Vizzarro <Luca.Vizzarro@arm.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>; dev <dev@dpdk.org>
> *Subject:* Re: Doubts in JumboFrames and stats_checks tests in DTS.
>
> Hi Bharati,
>
> It might be easiest to address your questions over a video conference call
> instead of email. Would this be okay?
>
> I am free tomorrow 11/26 16:00-18:00 UTC, or Wednesday 11/27 14:00-16:00
> UTC and 20:00-22:00 UTC. Or I have other availability if none of these work.
>
> On Mon, Nov 25, 2024 at 5:45 AM Bharati Bhole - Geminus <
> c_bharatib@xsightlabs.com> wrote:
>
> Hi Patrick,
>
> I used the site https://dpdk.org/git/dpdk to clone the DPDK code and tried
> to go through the DTS/README.md file.
>
> This file says it uses a Docker container for development as well as test
> execution, but I did not find any steps for setting up the test environment
> for it.
>
> I tried to look for the steps at
> https://doc.dpdk.org/guides/tools/dts.html but it's not there.
> Can you please point me to the document for the execution steps?
>
> Thanks,
> Bharati.
>
> ------------------------------
> *From:* Patrick Robb <probb@iol.unh.edu>
> *Sent:* 22 November 2024 10:29 PM
> *To:* Bharati Bhole - Geminus <c_bharatib@xsightlabs.com>
> *Cc:* dts@dpdk.org <dts@dpdk.org>; Nicholas Pratte <npratte@iol.unh.edu>;
> Dean Marx <dmarx@iol.unh.edu>; Paul Szczepanek <Paul.Szczepanek@arm.com>;
> Luca Vizzarro <Luca.Vizzarro@arm.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>; dev <dev@dpdk.org>
> *Subject:* Re: Doubts in JumboFrames and stats_checks tests in DTS.
>
> Hi Bharati,
>
> Welcome to the DTS mailing list. I will try to provide some answers based
> on my experience running DTS at the DPDK Community Lab at UNH. I will also
> flag that this "legacy" version of DTS is deprecated and getting minimal
> maintenance. The majority of the current efforts for DTS are directed
> towards the rewrite which exists within the /dts dir of the DPDK repo:
> https://git.dpdk.org/dpdk/tree/dts
>
> With that being said, of course the legacy repo is still useful and I
> encourage you to use it, so I will provide some comments inline below:
>
> On Fri, Nov 22, 2024 at 9:43 AM Bharati Bhole - Geminus <
> c_bharatib@xsightlabs.com> wrote:
>
> Hi,
>
> I am Bharati Bhole. I am a new member of the DTS mailing list.
> I have recently started working on DTS for my company and am facing some
> issues/failures while running DTS.
> Please help me with understanding the test cases and expected behaviours.
>
> I am trying to understand the DTS behaviour for the following TCs:
>
> 1. JumboFrames :
>
> 1. When the test sets the max_pkt_len for testpmd and calculates the
> expected acceptable packet size, does it consider NICs supporting 2 VLANs?
> (In the MTU update test, I have seen that 2-VLAN NICs are being
> considered while calculating the acceptable packet size, but in JumboFrames
> I don't see it).
>
>
> No, 2 VLANs is not properly accounted for in the Jumboframes testsuite.
> And, this is actually highly topical, as this is an ongoing point of
> discussion in rewriting jumboframes and mtu_update for the new DTS
> framework (the testcases are getting combined into 1 testsuite). I will
> paste the function from mtu_update of legacy DTS which you may be referring
> to:
>
> ------------------------------
>
> def send_packet_of_size_to_port(self, port_id: int, pktsize: int):
>     # The packet total size includes the ethernet header, ip header,
>     # and payload.
>     # ethernet header length is 18 bytes, ip standard header length
>     # is 20 bytes.
>     # pktlen = pktsize - ETHER_HEADER_LEN
>     if self.kdriver in ["igb", "igc", "ixgbe"]:
>         max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
>         padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
>     else:
>         max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
>         padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN
>     out = self.send_scapy_packet(
>         port_id,
>         f'Ether(dst=dutmac, src="52:00:00:00:00:00")/IP()/Raw(load="\x50"*{padding})',
>
> ------------------------------
>
> One difference between legacy DTS and the "new" DTS is that in legacy DTS
> a master list of devices/drivers was maintained, and there were endless
> conditions like this where a device list would be checked and some
> behavior modified based on that list. Because this strategy leads to
> bugs, is unresponsive to changes in driver code, and is hard to maintain,
> among other reasons, we no longer follow this approach in new DTS. Now,
> if we want to toggle different behavior (like determining max_pkt_len for a
> given MTU for a given device) that needs to be accomplished by querying
> testpmd for device info (there are various testpmd runtime commands for
> this). And, in situations where testpmd doesn't expose the information we
> need for checking device behavior in a particular testsuite - testpmd needs
> to be updated to allow for this.
>
> I am CC'ing Nick who is the person writing the new jumboframes + MTU
> testsuite, which (work in progress) is on patchwork here:
> https://patchwork.dpdk.org/project/dpdk/patch/20240726141307.14410-3-npratte@iol.unh.edu/
>
> Nick, maybe you can include the mailing list threads Thomas linked you, and
> explain your current understanding of how to handle this issue? This won't
> really help Bharati in the short term, but at least it will clarify to him
> how this issue will be handled in the new DTS framework, which presumably
> he will upgrade to using at some point.
>
>
> 2. In function jumboframes_send_packet() -
> --<snip>--
> if received:
>     if self.nic.startswith("fastlinq"):
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
>             and (rx_bytes == pktsize),
>             "packet pass assert error",
>         )
>     else:
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
>             and ((rx_bytes + 4) == pktsize),
>             "packet pass assert error",
>         )
> else:
>     self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
> return out
> --<snip>--
>
> Can someone please tell me why these tx_bytes and rx_bytes calculations
> are different for QLogic NICs and other NICs?
>
>
> I don't know the reason why fastlinq has this behavior in DPDK, so I'm
> CCing the dev mailing list - maybe someone there will have the historical
> knowledge to answer.
>
> Otherwise, in terms of DTS, this is again an example of a workflow which
> we do not allow in new DTS.
>
>
>
>
>
> 2. TestSuite_stats_checks.py :
> The test, test_stats_checks is sending 2 packets of ETH/IP/RAW(30) and
> ETH/IP/RAW(1500).
>
> In function send_packet_of_size_to_tx_port() line no. 174 to 185
> --<snip>--
>
> if received:
>     self.verify(tx_pkts_difference >= 1, "No packet was sent")
>     self.verify(
>         tx_pkts_difference == rx_pkts_difference,
>         "different numbers of packets sent and received",
>     )
>     self.verify(
>         tx_bytes_difference == rx_bytes_difference,
>         "different number of bytes sent and received",
>     )
>     self.verify(tx_err_difference == 1, "unexpected tx error")
>     self.verify(rx_err_difference == 0, "unexpected rx error")
>
> --<snip>--
>
> This test expects packets with payload size 30 to pass on RX and TX, which
> is working fine, but for the packet with payload size 1500 the test expects
> RX to pass and TX to fail?
> I did not get this part. The default MTU size is 1500. When scapy sends
> the packet with ETH+IP+1500 the packet size is 18+20+1500 = 1538. And even
> if the NIC supports 2 VLANs the max it can accept is MTU+ETH+CRC+2*VLAN =
> 1526.
> So according to my understanding the packets should be dropped, the
> rx_error counter should increase, and there should not be any increment in
> good/error packets for the TX port.
>
>
> This is not a testsuite that we run at our lab but I have read through the
> testplan and test file. I think your math makes sense and I would expect
> that rx_err_difference would be 1 in this scenario. When we rework this
> testsuite, obviously we will need to start testpmd with various NICs, send
> packets with RAW(1500) and see if port stats shows rx_err 1 or 0. I am
> curious to see if this is the universal behavior in DPDK, or just some
> unique behavior from Intel 700 series (legacy DTS was often written towards
> the behavior of this device). A goal in rewriting our tests is ensuring
> that DPDK apis (which we reach through testpmd) truly return the same
> behavior across different NICs.
>
> Sorry about the half answer. Maybe someone else from the dev mailing list
> can provide a response about how this RAW(1500) packet can be received on
> rx port on any DPDK device.
>
> I can say that we do have this stats_checks testsuite marked as a
> candidate to rewrite for new DTS in this current development cycle (DPDK
> 25.03). Maybe we can loop you into these conversations, since you have an
> interest in the subject? And, there's no pressure on this, but I will just
> add you to the invite list for the DPDK DTS meetings (meets once every 2
> weeks) in case you want to join and discuss.
>
>
>
> Can someone please tell what is the gap/missing part in my understanding?
>
> Thanks,
> Bharati Bhole.
>
>
> Thanks for getting involved - I'm glad to see more companies making use of
> DTS.
>
>
[-- Attachment #2: Type: text/html, Size: 27799 bytes --]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Doubts in JumboFrames and stats_checks tests in DTS.
2024-11-22 14:42 Doubts in JumboFrames and stats_checks tests in DTS Bharati Bhole - Geminus
2024-11-22 16:59 ` Patrick Robb
@ 2024-11-26 19:39 ` Nicholas Pratte
1 sibling, 0 replies; 8+ messages in thread
From: Nicholas Pratte @ 2024-11-26 19:39 UTC (permalink / raw)
To: Bharati Bhole - Geminus; +Cc: dts
Hello Bharati,
I see you have discovered some of the oddities regarding DPDK and MTU!
There are many interesting problems associated with MTU within DPDK.
You can see some elaboration on these issues, based on the things I've
discovered, in my comments below, which in some form directly relate
to the issues you've pointed out in this thread.
Thank you for starting this conversation!
Patrick, please let me know if I missed anything or if more
clarification could be provided in spots; I double checked, but I
could be missing something.
> I am Bharati Bhole. I am a new member of DTS mailing list.
Welcome!
> I have recently started working on DTS for my company and facing some issues/failures while running the DTS.
> Please help me with understanding the test cases and expected behaviours.
<snip>
>
> When the test sets the max_pkt_len for testpmd and calculates the expected acceptable packet size, does it consider NICs supporting 2 VLANs? (In the MTU update test, I have seen that 2-VLAN NICs are being considered while calculating the acceptable packet size, but in JumboFrames I don't see it).
Patrick's comment is correct! At first glance, it is incredibly
difficult to understand what "MTU" actually means within DPDK, and this
lack of an enforced definition has created
significant delays in getting a comprehensive set of MTU test suites
merged within the currently-maintained DTS framework. Deeper
discussions need to be had on this issue, and I hope to get the
community engaged with this again in the near future.
If you take a deeper look into DPDK's source code, and each individual
vendor's driver code, you will see that each vendor makes a unique
assumption of what is considered L3 information and L2 information.
Some vendors assume that an Ethernet frame includes a Dot1Q
tag, a CRC, and the 14-byte Ethernet header; other vendors may not make
this assumption. As Patrick mentions, the new DTS framework is making
an effort to move away from conforming to unique vendor behavior
within its code in favor of code that works universally across all
vendors, and in doing so, the latest version of the JumboFrames test
suite that I've written makes an effort to assess MTUs based on a
universal agreement of what is assumed to be in a Layer 2 frame; of
course, the test suite does not work across all vendors with the
existing vacuous MTU definition. To deal with this issue, I refactored
the test suite to include a +/-5 byte buffer in packets being sent to
assess the inner and outer boundaries of the set MTU, but this has its
own issues (namely in assessing an MTU at the exact MTU value itself)
in some cases, so this solution is expected to be temporary until a
better alternative is found.
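[Editor's note: a rough sketch of the +/- buffer idea described above. The function name and the 5-byte default are illustrative, not the actual suite code.]

```python
def mtu_probe_sizes(mtu: int, buffer: int = 5) -> tuple[int, int]:
    """Frame sizes probed just inside and just outside a configured MTU.

    Skipping the exact boundary sidesteps the vendor disagreement over
    whether VLAN/CRC bytes count against the MTU; the cost is that the
    behavior at the precise MTU value goes untested.
    """
    return (mtu - buffer, mtu + buffer)

inside, outside = mtu_probe_sizes(1500)
# inside (1495) should be accepted under every vendor's definition,
# outside (1505) should be rejected under every vendor's definition.
```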
I encourage you to take a look at the thread below; it is an
archived thread on the DPDK mailing list, and it provides some insight
into what is meant by 'max-pkt-len' and MTU within DPDK's logic:
https://inbox.dpdk.org/dev/0f4866fc-b76d-cf83-75e8-86326c02814b@intel.com/
You may also want to look at the original RFC that I published for
JumboFrames which discusses this in some detail as well. You will also
find a link to an existing Bugzilla ticket aimed at finding solutions
to the MTU problem.
https://inbox.dpdk.org/dev/20240524183604.6925-2-npratte@iol.unh.edu
>
> In function jumboframes_send_packet() -
> --<snip>--
> if received:
> if self.nic.startswith("fastlinq"):
> self.verify(
> self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
> and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
> and (rx_bytes == pktsize),
> "packet pass assert error",
> )
> else:
> self.verify(
> self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
> and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
> and ((rx_bytes + 4) == pktsize),
> "packet pass assert error",
> )
> else:
> self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
> return out
> --<snip>--
>
> Can someone please tell me why these tx_bytes and rx_bytes calculations are different for Qlogic NICs and other NICs?
QLogic NICs are interesting in that, if I am remembering correctly,
they only accept buffer sizes restricted to powers of 2. This is
unique to these NICs; it's not seen in any other vendor as far as I
know. As Patrick mentioned in his response, legacy DTS was designed
around conforming to the quirks of multiple vendors, so in the "new"
DTS framework it's likely that we wouldn't consider these NICs at all
in some test cases. But this is the background as to why this code
logic exists.
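As for the +4 in the non-fastlinq branch of the snippet you quoted: my reading (an assumption on my part, based purely on the code logic) is that the 4 bytes are the Ethernet CRC, which most NICs strip before updating their byte counters, while the fastlinq counters report the frame with CRC included. A hedged sketch of the two accounting models:

```python
# Sketch of the two byte-counter models implied by the quoted checks.
# This interpretation is my assumption, not documented NIC behavior.

ETHER_CRC_LEN = 4  # frame check sequence appended on the wire

def expected_rx_bytes(pktsize: int, counter_includes_crc: bool) -> int:
    """Bytes a NIC's rx counter should report for a wire frame of pktsize.

    Generic branch:  rx_bytes + 4 == pktsize  (CRC stripped before counting)
    fastlinq branch: rx_bytes     == pktsize  (CRC included in the counter)
    """
    if counter_includes_crc:
        return pktsize
    return pktsize - ETHER_CRC_LEN

# Generic NICs: the quoted check (rx_bytes + 4) == pktsize holds.
rx_generic = expected_rx_bytes(1518, counter_includes_crc=False)
# fastlinq: the quoted check rx_bytes == pktsize holds.
rx_fastlinq = expected_rx_bytes(1518, counter_includes_crc=True)
```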
>
>
> 2. TestSuite_stats_checks.py :
> The test, test_stats_checks is sending 2 packets of ETH/IP/RAW(30) and ETH/IP/RAW(1500).
>
> In function send_packet_of_size_to_tx_port() line no. 174 to 185
> --<snip>--
>
> if received:
> self.verify(tx_pkts_difference >= 1, "No packet was sent")
> self.verify(
> tx_pkts_difference == rx_pkts_difference,
> "different numbers of packets sent and received",
> )
> self.verify(
> tx_bytes_difference == rx_bytes_difference,
> "different number of bytes sent and received",
> )
> self.verify(tx_err_difference == 1, "unexpected tx error")
> self.verify(rx_err_difference == 0, "unexpected rx error")
>
> --<snip>--
>
> This test expects packets with payload size 30 to pass RX and TX, which is working fine, but for the packet with payload size 1500, the test expects RX to pass and TX to fail?
> I did not get this part. The default MTU size is 1500. When scapy sends the packet with ETH+IP+1500, the packet size is 18+20+1500 = 1538. And even if the NIC supports 2 VLANs, the max it can accept is MTU+ETH+CRC+2*VLAN = 1526.
I don't have much to say about stats_checks.py, as I have never looked
into this suite personally. But I do think the issue you are pointing
out here leads naturally to something I discovered when running manual
MTU tests, all the way back when I wrote my initial version of the
test suite: there is currently no existing functionality that properly
tests MTUs at their set boundary. Below is a link to a DPDK mailing
list archive discussing this very issue in greater detail. It doesn't
entirely relate to the issues you've brought up here, but I do believe
it is relevant to the overall doubt you are expressing in this thread:
https://inbox.dpdk.org/dev/e2554b78-cdda-aa33-ac6d-59a543a10640@intel.com/
Take a look at the 'Thread Overview' in this archive; you'll see a
brief back-and-forth between Stephen and Morten discussing this issue
in greater detail.
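For what it's worth, the arithmetic in your question checks out. As a sketch (standard header lengths; the double-VLAN allowance is vendor-specific, as discussed above for JumboFrames):

```python
# Frame-size bookkeeping for the Ether()/IP()/Raw(1500) packet in question.
MTU = 1500
ETH_HDR, CRC, VLAN, IP_HDR = 14, 4, 4, 20

# Wire size of the scapy packet: 14 ETH + 20 IP + 1500 payload + 4 CRC.
wire_frame = ETH_HDR + IP_HDR + 1500 + CRC       # 1538

# Largest frame a NIC honoring two VLAN tags accepts at MTU 1500.
max_accepted = MTU + ETH_HDR + CRC + 2 * VLAN    # 1526

oversized = wire_frame > max_accepted            # True: should be dropped
```

So the 1500-byte-payload packet exceeds even the most generous double-VLAN allowance, which is exactly why its expected behavior at the boundary needs the clearer MTU definition discussed in the linked thread.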
<snip>
Thread overview: 8+ messages
2024-11-22 14:42 Doubts in JumboFrames and stats_checks tests in DTS Bharati Bhole - Geminus
2024-11-22 16:59 ` Patrick Robb
2024-11-22 17:37 ` Bharati Bhole - Geminus
2024-11-25 10:45 ` Bharati Bhole - Geminus
2024-11-25 15:57 ` Patrick Robb
2024-11-25 17:36 ` Bharati Bhole - Geminus
2024-11-25 21:36 ` Patrick Robb
2024-11-26 19:39 ` Nicholas Pratte