From: Nicholas Pratte
Date: Tue, 26 Nov 2024 14:39:01 -0500
Subject: Re: Doubts in JumboFrames and stats_checks tests in DTS.
To: Bharati Bhole - Geminus
Cc: "dts@dpdk.org"

Hello Bharati,

I see you have discovered some of the oddities regarding DPDK and MTU!
There are many interesting problems associated with MTU within DPDK. You
can find some elaboration on these issues, based on what I've discovered,
in my comments below; in one form or another, they relate directly to the
issues you've pointed out in this thread. Thank you for starting this
conversation!

Patrick, please let me know if I missed anything or if more clarification
is needed in spots; I double checked, but I could be missing something.

> I am Bharati Bhole. I am a new member of the DTS mailing list.

Welcome!

> I have recently started working on DTS for my company and facing some
> issues/failures while running the DTS.
> Please help me with understanding the test cases and expected behaviours.
>
> When the test sets the max_pkt_len for testpmd and calculates the
> expected acceptable packet size, does it consider NICs supporting 2 VLANs?
> (In the case of the MTU update test, I have seen that 2-VLAN NICs are
> considered while calculating the acceptable packet size, but in
> JumboFrames I don't see it.)

Patrick's comment is correct! Without looking at DPDK in some depth, it is
incredibly difficult to understand what "MTU" actually means within DPDK,
and this lack of an enforced definition has created significant delays in
getting a comprehensive set of MTU test suites merged into the
currently-maintained DTS framework. Deeper discussions need to be had on
this issue, and I hope to get the community engaged with it again in the
near future.

If you take a deeper look into DPDK's source code, and each individual
vendor's driver code, you will see that each vendor makes a unique
assumption about what is considered L3 information and what is L2
information. Some vendors will assume that an Ethernet frame should
include a Dot1Q tag, the CRC, and the 14-byte Ethernet header; other
vendors may not make this assumption.

As Patrick mentions, the new DTS framework is making an effort to move
away from conforming to unique vendor behavior in its code, in favor of
code that works universally across all vendors. Accordingly, the latest
version of the JumboFrames test suite that I've written assesses MTUs
based on a universal agreement about what is assumed to be in a Layer 2
frame; of course, the test suite does not work across all vendors given
the existing vacuous MTU definition. To deal with this, I refactored the
test suite to send packets with a +/-5 byte buffer around the set MTU to
assess its inner and outer boundaries. This has its own issues in some
cases (namely in assessing behavior at the exact MTU value itself), so
this solution is expected to be temporary until a better alternative is
found.
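To make the boundary-buffer idea concrete, here is a minimal sketch (not
the actual DTS implementation; the constants and the should_pass() helper
are my own illustrative assumptions) of probing a set MTU at +/-5 bytes,
assuming the NIC counts only the 14-byte Ethernet header and 4-byte CRC
as L2 overhead:

```python
# Assumed L2 overhead for this sketch; real NICs disagree on what counts.
ETHER_HDR = 14   # dst MAC + src MAC + EtherType
CRC = 4          # frame check sequence

def should_pass(frame_size: int, mtu: int,
                l2_overhead: int = ETHER_HDR + CRC) -> bool:
    """A frame is accepted when its non-L2 payload fits within the MTU."""
    return frame_size - l2_overhead <= mtu

mtu = 1500
# Probe the inner boundary, the exact boundary, and the outer boundary.
for delta in (-5, 0, +5):
    frame = mtu + ETHER_HDR + CRC + delta
    print(frame, should_pass(frame, mtu))
```

Under these assumptions the MTU-5 and exact-MTU frames pass while the
MTU+5 frame is rejected; a vendor that also budgets for a Dot1Q tag or
excludes the CRC would shift these boundaries, which is exactly why the
exact-MTU case is hard to assess portably.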
I encourage you to take a look at the thread below; it is a link to an
archived thread on the DPDK mailing list, and it provides some insight
into what is meant by 'max-pkt-len' and MTU within DPDK's logic:

https://inbox.dpdk.org/dev/0f4866fc-b76d-cf83-75e8-86326c02814b@intel.com/

You may also want to look at the original RFC that I published for
JumboFrames, which discusses this in some detail as well. There you will
also find a link to an existing Bugzilla ticket aimed at finding
solutions to MTU:

https://inbox.dpdk.org/dev/20240524183604.6925-2-npratte@iol.unh.edu

>
> In function jumboframes_send_packet() -
> ----
> if received:
>     if self.nic.startswith("fastlinq"):
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
>             and (rx_bytes == pktsize),
>             "packet pass assert error",
>         )
>     else:
>         self.verify(
>             self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
>             and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
>             and ((rx_bytes + 4) == pktsize),
>             "packet pass assert error",
>         )
> else:
>     self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
> return out
> ----
>
> Can someone please tell me why these tx_bytes and rx_bytes calculations
> are different for QLogic NICs and other NICs?

QLogic NICs are interesting in that, if I am remembering correctly, they
only accept buffer sizes restricted to powers of 2. This is unique to
these NICs; it's not seen in any other vendor as far as I know, and, as
Patrick mentioned in his response, legacy DTS was designed around
conforming to the behavior of multiple vendors. In the "new" DTS
framework, it's likely that we wouldn't consider these NICs at all in
some test cases. But that is the background on why this code logic
exists.

>
>
> 2.
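As for the `+ 4` itself: the quoted code suggests that on most NICs the
byte counters exclude the 4-byte CRC, so the suite compares counter + 4
against the on-wire packet size, while the "fastlinq" branch compares the
counter directly. Here is a hypothetical illustration of that comparison
(my own helper, not DTS code; the counter semantics are an assumption
inferred from the quoted snippet):

```python
CRC_LEN = 4  # Ethernet frame check sequence

def bytes_match(counter: int, pktsize: int, counts_crc: bool) -> bool:
    """Check whether a NIC byte counter is consistent with pktsize,
    depending on whether the counter includes the CRC."""
    if counts_crc:
        return counter == pktsize
    return counter + CRC_LEN == pktsize

# A 1518-byte frame: a CRC-inclusive counter reports 1518,
# a CRC-excluding counter reports 1514.
print(bytes_match(1518, 1518, counts_crc=True))
print(bytes_match(1514, 1518, counts_crc=False))
```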
TestSuite_stats_checks.py :
> The test test_stats_checks is sending 2 packets, of ETH/IP/RAW(30) and
> ETH/IP/RAW(1500).
>
> In function send_packet_of_size_to_tx_port(), line no. 174 to 185
> ----
>
> if received:
>     self.verify(tx_pkts_difference >= 1, "No packet was sent")
>     self.verify(
>         tx_pkts_difference == rx_pkts_difference,
>         "different numbers of packets sent and received",
>     )
>     self.verify(
>         tx_bytes_difference == rx_bytes_difference,
>         "different number of bytes sent and received",
>     )
>     self.verify(tx_err_difference == 1, "unexpected tx error")
>     self.verify(rx_err_difference == 0, "unexpected rx error")
>
> ----
>
> This test expects packets with payload size 30 to pass RX and TX, which
> is working fine, but for the packet with payload size 1500 the test
> expects RX to pass and TX to fail?
> I did not get this part. The default MTU size is 1500. When scapy sends
> the packet with ETH+IP+1500, the packet size is 18+20+1500 = 1538. And
> even if the NIC supports 2 VLANs, the max it can accept is
> MTU+ETH+CRC+2*VLAN = 1526.

I don't have much to say about stats_checks.py, as I have never looked
into this suite personally. But I do think the issue you are pointing out
here naturally transitions into a discussion of something I discovered
when running some manual MTU tests, back when I wrote my initial version
of the test suite: there is currently no existing functionality that
properly tests MTUs at their set boundary. Below is a link to a DPDK
mailing list archive discussing this very issue in greater detail.
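For what it's worth, the arithmetic in your question checks out. A quick
sanity check, using only the numbers from your message (the 18-byte ETH
figure includes the CRC, per your calculation):

```python
# Values taken from the quoted message, not from DTS itself.
ETH_WITH_CRC = 18  # 14-byte Ethernet header + 4-byte CRC
IP_HDR = 20
VLAN = 4
MTU = 1500

sent_frame = ETH_WITH_CRC + IP_HDR + 1500      # ETH/IP/RAW(1500) on the wire
max_accepted = MTU + ETH_WITH_CRC + 2 * VLAN   # MTU + ETH + CRC + 2 VLAN tags

print(sent_frame, max_accepted, sent_frame > max_accepted)
```

So the 1538-byte frame does exceed even the 1526-byte ceiling of a
2-VLAN-aware NIC, which is consistent with your observation that the
oversized packet cannot be forwarded intact.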
I encourage you to look into this for context. Even though it doesn't
entirely relate to the issues you've brought up here, I believe it is
relevant to the overall doubt you are expressing in this thread:

https://inbox.dpdk.org/dev/e2554b78-cdda-aa33-ac6d-59a543a10640@intel.com/

Take a look at the 'Thread Overview' in this archive; you'll see a brief
back-and-forth between Stephen and Morten discussing this issue in
greater detail.