* [dts] low pass rate executing dts complete test suite
@ 2018-05-21 11:28 M R, Chengappa (Network Function Virtualization)

From: M R, Chengappa (Network Function Virtualization) @ 2018-05-21 11:28 UTC
To: dts

Hello DTS community,

We have been investigating DTS on a Niantic (Intel Corporation 82599ES 10-Gigabit) platform and are seeing a very low pass rate. We were under the assumption that, provided we take care of the initial prerequisites for the TESTER and the TARGET DUT as described here - https://dpdk.org/doc/dts/gsg/sys_reqs.html - the DTS framework would help us achieve a good pass rate.

Could you please let me know whether any fine tuning of the TARGET DUT is needed so that I can see better numbers than the current pass rate of 12.4%?

[root@tester dts]# cat output/statistics.txt
Passed    = 17
Failed    = 45
Blocked   = 68
Pass rate = 12.4

Some of the specifications of the TARGET DUT and the current running configuration are as follows:

[root@targetdut ~]# lspci | egrep -i --color 'network|ethernet'
04:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
04:00.1 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
05:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

[root@tester dts]# cat conf/ports.cfg
# DUT Port Configuration
# [DUT IP]
#   ports=
#     pci=Pci BDF,intf=Kernel interface;
#     pci=Pci BDF,mac=Mac address,peer=Tester Pci BDF,numa=Port Numa
#     pci=Pci BDF,peer=IXIA:card.port
#     pci=Pci BDF,peer=Tester Pci BDF,tp_ip=$(IP),tp_path=$({PERL_PATH);
#     pci=Pci BDF,peer=Tester Pci BDF,sec_port=yes,first_port=Pci BDF;
# [VM NAME] virtual machine name; this section is for the virtual scenario
#   ports =
#     dev_idx=device index of ports info, peer=Tester Pci BDF
[10.70.2.6]
ports =
    pci=05:00.0,intf=ens2f0;
    pci=05:00.1,intf=ens2f1;

[root@tester dts]# cat executions/execution.cfg
[Execution1]
crbs=10.70.2.6
drivername=igb_uio
test_suites=hello_world
test_suites=
    cmdline,
    hello_world
targets=x86_64-native-linuxapp-gcc
parameters=nic_type=niantic:func=true

[root@tester dts]# cat conf/crbs.cfg
#DUT crbs Configuration
#[DUT IP]
#  dut_ip: DUT ip address
#  dut_user: Login DUT username
#  dut_passwd: Login DUT password
#  os: operation system type linux or freebsd
#  tester_ip: Tester ip address
#  tester_passwd: Tester password
#  ixia_group: IXIA group name
#  channels: Board channel number
#  bypass_core0: Whether to bypass core0
[10.70.2.6]
dut_ip=10.70.2.6
dut_user=root
dut_passwd=HP1nvent
os=linux
tester_ip=10.70.2.5
tester_passwd=HP1nvent
channels=4

Snippet of the current results: (four result screenshots attached as images)

Thanking in advance,
Chengappa

If everyone is moving forward together, then success takes care of itself....!!
* Re: [dts] low pass rate executing dts complete test suite
2018-05-21 13:49 ` Tu, Lijuan

From: Tu, Lijuan @ 2018-05-21 13:49 UTC
To: M R, Chengappa (Network Function Virtualization), dts

Hi Chengappa,

Here are my points:

1. The link might have some issues, such as being down. From the result "receive 0, expect 64", I think the link is down. Check your links before running DTS.

2. DPDK changes between versions, so DTS changes with it. Which DPDK and DTS versions do you use? From the failed results ("set up failed" and timeouts), I think your testpmd failed to start. Check the logs in output/ to find the real reason; maybe testpmd's parameters have changed.
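Lijuan's first point - verify the link before running DTS - can be sketched as a small shell helper. The parsing is an assumption based on standard ethtool output; the sample text below is illustrative, not output captured from the thread's hardware:

```shell
# Minimal sketch: succeed only when an "ethtool <iface>"-style report
# shows the link as detected.
link_ok() {
    printf '%s\n' "$1" | grep -q 'Link detected: yes'
}

# Hypothetical ethtool report for illustration:
sample='Settings for ens2f0:
        Speed: 10000Mb/s
        Link detected: yes'

if link_ok "$sample"; then
    echo "ens2f0: link up"
else
    echo "ens2f0: link down"
fi
```

On a live tester you would feed it "$(ethtool ens2f0)" for each interface named in ports.cfg, and fix cabling or optics before starting an execution.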
* Re: [dts] low pass rate executing dts complete test suite
2018-05-22  1:42 ` Xu, GangX

From: Xu, GangX @ 2018-05-22 1:42 UTC
To: M R, Chengappa (Network Function Virtualization), dts

Hi Chengappa:

How many ports do you use? From the information you provided, I think you used only one port. Many of the cases in DTS need two ports or more. For example, the test suite pmd_bonded needs 4 ports; if you provide 1, it fails at set_up_all and shows the "blocked set_up_all failed" issue.

Xu Gang
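Following Xu Gang's point, most suites need at least two connected DUT ports. A minimal two-port ports.cfg sketch in the syntax quoted earlier in the thread, assuming the DUT's two 82599 ports are cabled to tester ports that happen to carry the same PCI BDFs (replace the peer= values with your tester's actual BDFs):

```
[10.70.2.6]
ports =
    pci=0000:05:00.0,peer=0000:05:00.0;
    pci=0000:05:00.1,peer=0000:05:00.1;
```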
* Re: [dts] low pass rate executing dts complete test suite
2018-05-22  2:57 ` M R, Chengappa (Network Function Virtualization)

From: M R, Chengappa (Network Function Virtualization) @ 2018-05-22 2:57 UTC
To: Xu, GangX, dts

Greetings Xu,

Yes, it seems I had used only one port - let me re-run after making the necessary changes in ports.cfg and get back with the observations.

@Lijuan Tu, please find my answers in-line.

Thanks & Regards,
Chengappa

[Lijuan] 1. The link might have some issues, such as being down. From the result "receive 0, expect 64", I think the link is down. Check your links before running DTS.
[Chengappa] Sure, will check on this; thank you for the inputs.

[Lijuan] 2. Which DPDK and DTS versions do you use? From the failed results ("set up failed" and timeouts), I think your testpmd failed to start. Check the logs in output/ to find the real reason; maybe testpmd's parameters have changed.
[Chengappa] I am using DPDK 18.02 with DTS 17.08.0 - kindly let me know if there are any issues with this combination, or a version mismatch here. From the log traces I could see that testpmd failed to start - is there anything I need to tweak so that testpmd's parameters are changed?
* Re: [dts] low pass rate executing dts complete test suite
2018-05-22  6:57 ` Tu, Lijuan

From: Tu, Lijuan @ 2018-05-22 6:57 UTC
To: M R, Chengappa (Network Function Virtualization), Xu, GangX, dts

Hi Chengappa,

Answers in-line.

[Chengappa] I am using DPDK 18.02 with DTS 17.08.0 - kindly let me know if there are any issues with this combination, or a version mismatch here.
[Lijuan] I think using DTS 18.02 is the best choice. As far as I know, there are some changes in offloads, the VLAN filter/strip default values, CRC strip, promiscuous mode, etc. You can check them from the git logs.
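Lijuan's advice boils down to pairing like releases. A trivial guard for a run script (the version strings are the ones from this thread; the assumption that DTS tracks DPDK's YY.MM release numbering is mine, not stated in DTS documentation):

```shell
# Assumed convention: DTS releases track DPDK's YY.MM numbering, so the
# two trees should carry the same release number before a run.
dpdk_ver="18.02"   # DPDK tree under test (from this thread)
dts_ver="17.08"    # DTS checkout (from this thread)

if [ "$dpdk_ver" != "$dts_ver" ]; then
    echo "warning: DPDK $dpdk_ver vs DTS $dts_ver - suites may not match testpmd behaviour"
fi
```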
* Re: [dts] low pass rate executing dts complete test suite 2018-05-22 6:57 ` Tu, Lijuan @ 2018-06-01 10:16 ` M R, Chengappa (Network Function Virtualization) 2018-06-08 8:20 ` Liu, Yong 0 siblings, 1 reply; 11+ messages in thread From: M R, Chengappa (Network Function Virtualization) @ 2018-06-01 10:16 UTC (permalink / raw) To: Xu, GangX, dts [-- Attachment #1.1.1: Type: text/plain, Size: 7095 bytes --] Dear All, I am back with some more findings on executing DTS on Intel environment and this time around the pass rate is somewhat around 26.5% I was able to achieve this pass rate by the recommendations provided by Xu GangX! However, I am still puzzled as to why I am not having a good pass rate even for Intel NICs It would be of great help if anyone from the community help me achieve good numbers. Am sharing the output file (attached to with this mail) for reference. Also I would be interested to know if anyone has achieved 100% pass rate for DTS on INTEL NIC , if so I would like to have a look at the configuration files used for the same and also would like to know the DTS and DPDK versions and Intel NIC type used to achieve 100% pass rate. Furthermore, am also sharing the screen shot for of the NIC information configured on TESTER and DUT from the iLO GUI TESTER [cid:image006.png@01D3F9BF.850BF720] DUT [cid:image005.png@01D3F9BF.1C718DB0] Thanking in advance. Chengappa From: Tu, Lijuan [mailto:lijuan.tu@intel.com] Sent: Tuesday, May 22, 2018 12:27 PM To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org Subject: RE: low pass rate executing dts complete test suite Hi Chengappa answers in-line. 
From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of M R, Chengappa (Network Function Virtualization) Sent: Tuesday, May 22, 2018 10:57 AM To: Xu, GangX <gangx.xu@intel.com<mailto:gangx.xu@intel.com>>; dts@dpdk.org<mailto:dts@dpdk.org> Subject: Re: [dts] low pass rate executing dts complete test suite Greetings Xu, Yes seems like I have had used only one port - let me re-run by making the necessary changes in ports.cfg and get back with the observations. @ Lijuan Tu, Please find the answers in-line. Thanks & Regards, Chengappa From: Xu, GangX [mailto:gangx.xu@intel.com] Sent: Tuesday, May 22, 2018 7:12 AM To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com<mailto:cm-r@hpe.com>>; dts@dpdk.org<mailto:dts@dpdk.org> Subject: RE: low pass rate executing dts complete test suite Hi Chengappa: How many ports do you use? In the information you provided, I think you used only one port. Many of the case in dts need two ports or more. e.g. test suite pmd_bonded need 4 ports, you provide 1, it failed at set_up_all show "blocked set_up_all failed" issue. Xu gang From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Tu, Lijuan Sent: Monday, May 21, 2018 9:49 PM To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com<mailto:cm-r@hpe.com>>; dts@dpdk.org<mailto:dts@dpdk.org> Subject: Re: [dts] low pass rate executing dts complete test suite Hi chengappa, Here are my points: 1, the link might have some issues, such as link down From the result "receive 0 , expect 64", I think , the link is down You can check your link first before running dts. [Chengappa] sure, will check on this, thank you for the inputs. 2, DPDK will change during different version , so DTS will change with it. Which DPDK and DTS 's version do you use? From the failed result "set up failed" and timeout , I think your testpmd started failed. Logs in output, you can check them to find the real reason, maybe testpmd's parameters changed. 
[Chengappa] Am using DPDK 18.02 with DTS 17.08.0, kindly let me know if there are any issues with the combination am using or version mismatch here? With the log traces I was able to see that the testpmd failed to start - is there anything that I need to tweak to change the testpmd's parameters to be changed? [Lijuan] I think using DTS 18.02 is the best choices , As I know , there are some changes in offloads, vlan filter/strip default value, crc-strip , promise mode etc. You can check them form git logs. From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of M R, Chengappa (Network Function Virtualization) Sent: Monday, May 21, 2018 7:28 PM To: dts@dpdk.org<mailto:dts@dpdk.org> Subject: [dts] low pass rate executing dts complete test suite Hello DTS community, As we were investigation on DTS for Niantic (Intel Corporation 82599ES 10-Gigabit) platform and we are seeing very less pass rate. We were under the assumption if we take care of the initial pre-requisites for TESTER and TARGET DUT as described here - https://dpdk.org/doc/dts/gsg/sys_reqs.html the DTS framework help us achieve a good pass rate. 
Could you please help me by letting know if there are any fine tuning for the TARGET DUT so that I can see a better numbers than the current pass rate of 12.4% [root@tester dts]# cat output/statistics.txt Passed = 17 Failed = 45 Blocked = 68 Pass rate = 12.4 Some of the specification on the TARGET DUT and current running configuration are as follows [root@targetdut ~]# lspci | egrep -i --color 'network|ethernet' 04:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10) 04:00.1 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10) 05:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) 05:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) [root@tester dts]# cat conf/ports.cfg # DUT Port Configuration # [DUT IP] # ports= # pci=Pci BDF,intf=Kernel interface; # pci=Pci BDF,mac=Mac address,peer=Tester Pci BDF,numa=Port Numa # pci=Pci BDF,peer=IXIA:card.port # pci=Pci BDF,peer=Tester Pci BDF,tp_ip=$(IP),tp_path=$({PERL_PATH); # pci=Pci BDF,peer=Tester Pci BDF,sec_port=yes,first_port=Pci BDF; # [VM NAME] virtual machine name; This section is for virutal scenario # ports = # dev_idx=device index of ports info, peer=Tester Pci BDF [10.70.2.6] ports = pci=05:00.0,intf=ens2f0; pci=05:00.1,intf=ens2f1; [root@tester dts]# cat executions/execution.cfg [Execution1] crbs=10.70.2.6 drivername=igb_uio test_suites=hello_world test_suites= cmdline, hello_world targets=x86_64-native-linuxapp-gcc parameters=nic_type=niantic:func=true [root@tester dts]# cat conf/crbs.cfg #DUT crbs Configuration #[DUT IP] # dut_ip: DUT ip address # dut_user: Login DUT username # dut_passwd: Login DUT password # os: operation system type linux or freebsd # tester_ip: Tester ip address # tester_passwd: Tester password # ixia_group: IXIA group name # channels: Board channel number # bypass_core0: Whether by pass core0 [10.70.2.6] dut_ip=10.70.2.6 dut_user=root 
dut_passwd=HP1nvent os=linux tester_ip=10.70.2.5 tester_passwd=HP1nvent channels=4 Snippet of the current results [cid:image001.png@01D3F9B9.EF825B40] [cid:image002.png@01D3F9B9.EF825B40] [cid:image003.png@01D3F9B9.EF825B40] [cid:image004.png@01D3F9B9.EF825B40] Thanking in advance, Chengappa If everyone is moving forward together, then success takes care of itself....!! [-- Attachment #1.1.2: Type: text/html, Size: 36046 bytes --] [-- Attachment #1.2: image001.png --] [-- Type: image/png, Size: 75429 bytes --] [-- Attachment #1.3: image002.png --] [-- Type: image/png, Size: 90260 bytes --] [-- Attachment #1.4: image003.png --] [-- Type: image/png, Size: 94096 bytes --] [-- Attachment #1.5: image004.png --] [-- Type: image/png, Size: 85027 bytes --] [-- Attachment #1.6: image005.png --] [-- Type: image/png, Size: 75529 bytes --] [-- Attachment #1.7: image006.png --] [-- Type: image/png, Size: 70397 bytes --] [-- Attachment #2: output_31052018.zip --] [-- Type: application/x-zip-compressed, Size: 2797640 bytes --] [-- Attachment #3: crbs.cfg --] [-- Type: application/octet-stream, Size: 468 bytes --] #DUT crbs Configuration #[DUT IP] # dut_ip: DUT ip address # dut_user: Login DUT username # dut_passwd: Login DUT password # os: operation system type linux or freebsd # tester_ip: Tester ip address # tester_passwd: Tester password # ixia_group: IXIA group name # channels: Board channel number # bypass_core0: Whether by pass core0 [10.70.2.6] dut_ip=10.70.2.6 dut_user=root dut_passwd=HP1nvent os=linux tester_ip=10.70.2.5 tester_passwd=HP1nvent channels=4 [-- Attachment #4: execution.cfg --] [-- Type: application/octet-stream, Size: 479 bytes --] [Execution1] crbs=10.70.2.6 drivername=igb_uio test_suites= cmdline, hello_world, multiprocess, timer, blacklist, mac_filter, ieee1588, checksum_offload, jumboframes, ipfrag, link_flowctrl, vlan, ip_pipeline, dynamic_config, generic_filter, dual_vlan, shutdown_api, fdir, ipv4_reassembly, scatter, l2fwd, kni, uni_pkt 
targets=x86_64-native-linuxapp-gcc
parameters=nic_type=cfg:func=true

[-- Attachment #5: ports.cfg --]
[-- Type: application/octet-stream, Size: 652 bytes --]

# DUT Port Configuration
# [DUT IP]
# ports=
#     pci=Pci BDF,intf=Kernel interface;
#     pci=Pci BDF,mac=Mac address,peer=Tester Pci BDF,numa=Port Numa
#     pci=Pci BDF,peer=IXIA:card.port
#     pci=Pci BDF,peer=Tester Pci BDF,tp_ip=$(IP),tp_path=$(PERL_PATH);
#     pci=Pci BDF,peer=Tester Pci BDF,sec_port=yes,first_port=Pci BDF;
# [VM NAME] virtual machine name; this section is for the virtual scenario
# ports =
#     dev_idx=device index of ports info, peer=Tester Pci BDF
[10.70.2.6]
ports =
    pci=0000:05:00.0,peer=0000:05:00.0;
    pci=0000:05:00.1,peer=0000:05:00.1;
#    pci=0000:05:00.0,intf=ens2f0;
#    pci=0000:05:00.0,intf=ens2f0;

^ permalink raw reply	[flat|nested] 11+ messages in thread
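A note on the working ports.cfg above: each entry maps a DUT PCI BDF to the tester-side peer BDF it is physically cabled to, which is what lets DTS drive traffic across both ports. The following is a hypothetical helper (not part of DTS) sketching how such a stanza is assembled; the IP and BDFs are the ones from this thread.

```python
# Hypothetical helper (not part of DTS): assemble a conf/ports.cfg stanza
# from (DUT BDF, tester peer BDF) pairs for cabled port-to-port links.
def ports_cfg_section(dut_ip, pairs):
    lines = ["[%s]" % dut_ip, "ports ="]
    for dut_bdf, peer_bdf in pairs:
        lines.append("    pci=%s,peer=%s;" % (dut_bdf, peer_bdf))
    return "\n".join(lines)

# BDFs from this thread: the two 82599ES ports, cross-connected to the
# tester's corresponding ports (which happen to share the same BDFs here).
section = ports_cfg_section(
    "10.70.2.6",
    [("0000:05:00.0", "0000:05:00.0"),
     ("0000:05:00.1", "0000:05:00.1")],
)
print(section)
```

Note that full (domain-qualified) BDFs such as 0000:05:00.0 are what the working configuration above uses.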
* Re: [dts] low pass rate executing dts complete test suite
  2018-06-01 10:16 ` M R, Chengappa (Network Function Virtualization)
@ 2018-06-08  8:20 ` Liu, Yong
  2018-06-08  8:40   ` M R, Chengappa (Network Function Virtualization)
  0 siblings, 1 reply; 11+ messages in thread
From: Liu, Yong @ 2018-06-08 8:20 UTC (permalink / raw)
To: M R, Chengappa (Network Function Virtualization), Xu, GangX, dts

[-- Attachment #1.1: Type: text/plain, Size: 8337 bytes --]

Hi Chengappa,
Most likely your low pass rate is caused by unexpected packet reception on the DUT ports. You need to check whether a network manager or DHCP client is running on the tester server. The test executor, as owner of the test server, should maintain a clean environment so that DTS can work normally.
In our internal regression report, DTS can achieve almost a 95% pass rate, and this number can be higher after merging some bug fixes.

+--------+--------+------------+------------------+-------+----------------------------+-------+
| UB1610 | FVL25G | Functional | 4.8.0-22-generic | 6.2.0 | x86_64-native-linuxapp-gcc | 5/160 |
+--------+--------+------------+------------------+-------+----------------------------+-------+

Thanks,
Marvin

From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of M R, Chengappa (Network Function Virtualization)
Sent: Friday, June 01, 2018 6:16 PM
To: Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: Re: [dts] low pass rate executing dts complete test suite
Importance: High

Dear All,
I am back with some more findings from executing DTS in the Intel environment, and this time the pass rate is around 26.5%. I was able to reach this pass rate thanks to the recommendations provided by Xu GangX! However, I am still puzzled as to why I am not seeing a good pass rate even on Intel NICs. It would be of great help if anyone from the community could help me achieve good numbers.
I am sharing the output file (attached to this mail) for reference. I would also be interested to know if anyone has achieved a 100% pass rate for DTS on an Intel NIC; if so, I would like to look at the configuration files used, and to know the DTS and DPDK versions and the Intel NIC type used to achieve it.
Furthermore, I am also sharing screenshots of the NIC information configured on the TESTER and DUT from the iLO GUI.
TESTER [cid:image006.png@01D3FF43.6C9E6D10]
DUT [cid:image007.png@01D3FF43.6C9E6D10]
Thanking in advance.
Chengappa

From: Tu, Lijuan [mailto:lijuan.tu@intel.com]
Sent: Tuesday, May 22, 2018 12:27 PM
To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: RE: low pass rate executing dts complete test suite

Hi Chengappa, answers in-line.

From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of M R, Chengappa (Network Function Virtualization)
Sent: Tuesday, May 22, 2018 10:57 AM
To: Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: Re: [dts] low pass rate executing dts complete test suite

Greetings Xu,
Yes, it seems I used only one port - let me re-run after making the necessary changes in ports.cfg and get back with the observations.
@Lijuan Tu, please find the answers in-line.
Thanks & Regards,
Chengappa

From: Xu, GangX [mailto:gangx.xu@intel.com]
Sent: Tuesday, May 22, 2018 7:12 AM
To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; dts@dpdk.org
Subject: RE: low pass rate executing dts complete test suite

Hi Chengappa:
How many ports do you use? From the information you provided, I think you used only one port. Many of the cases in DTS need two or more ports, e.g.
the test suite pmd_bonded needs 4 ports; you provided 1, so it failed at set_up_all and shows the "blocked set_up_all failed" issue.
Xu gang

From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Tu, Lijuan
Sent: Monday, May 21, 2018 9:49 PM
To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; dts@dpdk.org
Subject: Re: [dts] low pass rate executing dts complete test suite

Hi chengappa,
Here are my points:
1. The link might have some issues, such as link down. From the result "receive 0, expect 64" I think the link is down. You can check your link first, before running DTS.
[Chengappa] Sure, will check on this, thank you for the inputs.
2. DPDK changes between versions, so DTS changes with it. Which DPDK and DTS versions do you use? From the failed result "set up failed" and the timeout, I think your testpmd failed to start. You can check the logs in output to find the real reason; maybe testpmd's parameters changed.
[Chengappa] I am using DPDK 18.02 with DTS 17.08.0 - kindly let me know if there is any issue or version mismatch with this combination. From the log traces I could see that testpmd failed to start - is there anything I need to tweak in testpmd's parameters?
[Lijuan] I think using DTS 18.02 is the best choice. As I know, there are some changes in offloads, the vlan filter/strip default values, crc-strip, promiscuous mode, etc. You can check them from the git logs.

[-- Attachment #1.2: Type: text/html, Size: 35203 bytes --]
[-- Attachment #2: image006.png --] [-- Type: image/png, Size: 70397 bytes --]
[-- Attachment #3: image007.png --] [-- Type: image/png, Size: 75529 bytes --]
[-- Attachment #4: image008.png --] [-- Type: image/png, Size: 75429 bytes --]
[-- Attachment #5: image009.png --] [-- Type: image/png, Size: 90260 bytes --]
[-- Attachment #6: image010.png --] [-- Type: image/png, Size: 94096 bytes --]
[-- Attachment #7: image011.png --] [-- Type: image/png, Size: 85027 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread
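Lijuan's advice above - check the link before running DTS - can be done mechanically on a Linux tester or DUT. A minimal sketch (assuming a sysfs mount at /sys; the interface names ens2f0/ens2f1 from this thread are what you would look for in the output):

```python
# Minimal sketch: print the kernel's view of link state for each interface.
# Before a DTS run, the cabled test ports (e.g. ens2f0/ens2f1 in this thread)
# should report "up" here while still bound to their kernel driver.
import os

def link_states(sysfs="/sys/class/net"):
    states = {}
    for name in sorted(os.listdir(sysfs)):
        path = os.path.join(sysfs, name, "operstate")
        try:
            with open(path) as f:
                states[name] = f.read().strip()
        except OSError:
            states[name] = "unknown"
    return states

for name, state in link_states().items():
    print("%s: %s" % (name, state))
```

A port that shows "down" here will produce exactly the "receive 0, expect 64" failures described above.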
* Re: [dts] low pass rate executing dts complete test suite
  2018-06-08  8:20 ` Liu, Yong
@ 2018-06-08  8:40 ` M R, Chengappa (Network Function Virtualization)
  2018-06-10 14:27   ` Liu, Yong
  2018-06-11  6:41   ` Xu, GangX
  0 siblings, 2 replies; 11+ messages in thread
From: M R, Chengappa (Network Function Virtualization) @ 2018-06-08 8:40 UTC (permalink / raw)
To: Liu, Yong, Xu, GangX, dts

[-- Attachment #1.1: Type: text/plain, Size: 9714 bytes --]

Greetings Marvin,
Thank you for acknowledging my queries and for confirming the pass rate.
When you say one should maintain a clean environment, what does this imply? In fact we have taken care of all the pre-requisites that DTS recommends, so we want to know if we are missing any further configuration on the DUT or TESTER. Furthermore, is it possible to share the list of NICs which achieve higher pass rates - both Intel and non-Intel?
I am deducing the following from the table you mentioned - please confirm if my assumption is correct:
  UB1610
  Kernel: 4.8.0-22-generic
  GCC: 6.2.0
  NIC: FVL25G (fortville25g)
  Target: x86_64-native-linuxapp-gcc
  Fail/Total: 5/160
As asked, I would appreciate it if someone from the community could help us understand what we are missing in our environment that causes the low pass rate. Accordingly, I have shared all the log files and the configuration files from my environment to see if I can get any pointers to proceed.
Thanking in advance,
Chengappa

[-- Attachment #1.2: Type: text/html, Size: 41659 bytes --]
[-- Attachment #2: image001.png --] [-- Type: image/png, Size: 70397 bytes --]
[-- Attachment #3: image002.png --] [-- Type: image/png, Size: 75529 bytes --]
[-- Attachment #4: image003.png --] [-- Type: image/png, Size: 75429 bytes --]
[-- Attachment #5: image004.png --] [-- Type: image/png, Size: 90260 bytes --]
[-- Attachment #6: image005.png --] [-- Type: image/png, Size: 94096 bytes --]
[-- Attachment #7: image006.png --] [-- Type: image/png, Size: 85027 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread
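For reference, the pass rates quoted in this thread follow from the statistics.txt counters. A quick sketch of the arithmetic (counts are taken from the messages above; DTS's own denominator may include a few additional case states, which would explain its 12.4% versus the 13.1% computed from the three counters alone):

```python
# Sketch: pass rate from DTS statistics.txt style counters.
def pass_rate(passed, failed, blocked):
    total = passed + failed + blocked
    return 100.0 * passed / total

# Original run reported in this thread: Passed = 17, Failed = 45, Blocked = 68.
print(round(pass_rate(17, 45, 68), 1))

# Marvin's internal FVL25G run: 5 failures out of 160 cases.
print(round(100.0 * (160 - 5) / 160, 1))
```

The second number (~96.9%) is consistent with the "almost 95% pass rate" Marvin quotes for the internal regression.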
* Re: [dts] low pass rate executing dts complete test suite
  2018-06-08  8:40 ` M R, Chengappa (Network Function Virtualization)
@ 2018-06-10 14:27 ` Liu, Yong
  2018-06-10 16:12   ` M R, Chengappa (Network Function Virtualization)
  1 sibling, 1 reply; 11+ messages in thread
From: Liu, Yong @ 2018-06-10 14:27 UTC (permalink / raw)
To: M R, Chengappa (Network Function Virtualization), Xu, GangX, dts

[-- Attachment #1.1: Type: text/plain, Size: 10720 bytes --]

Chengappa,
We'd like to share our setup and running logs with you; let me find the right person.
As to the reason: almost all of the test cases depend on packets transmitted and received on the tester. If a tester port generates unexpected packets, that will cause test-case failures. There are lots of situations that can cause this, for example a network manager being enabled on a tester port, or IPv6 being enabled. So in our lab, everyone needs to manually check the environment before doing a test execution.
Our guide missed that part because we did not realize it was important for outside users. We are collecting this information and will post it to the getting started guide later.

Thanks,
Marvin

[-- Attachment #1.2: Type: text/html, Size: 43733 bytes --]
[-- Attachment #2: image001.png --] [-- Type: image/png, Size: 70397 bytes --]
[-- Attachment #3: image002.png --] [-- Type: image/png, Size: 75529 bytes --]
[-- Attachment #4: image003.png --] [-- Type: image/png, Size: 75429 bytes --]
[-- Attachment #5: image004.png --] [-- Type: image/png, Size: 90260 bytes --]
[-- Attachment #6: image005.png --] [-- Type: image/png, Size: 94096 bytes --]
[-- Attachment #7: image006.png --] [-- Type: image/png, Size: 85027 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread
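Marvin's "clean environment" requirement can be spot-checked before a run. A minimal sketch (assuming Linux; the daemon list is a sample of common culprits rather than an exhaustive set, and the interface name in the trailing comment is the one from this thread):

```python
# Minimal sketch: scan /proc for daemons that commonly inject unexpected
# packets (DHCP discovers, IPv6 RS/NS, mDNS) on otherwise idle test ports.
import os

SUSPECTS = {"NetworkManager", "dhclient", "dhcpcd", "avahi-daemon"}

def running_suspects(proc="/proc"):
    found = set()
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue
        try:
            with open(os.path.join(proc, entry, "comm")) as f:
                name = f.read().strip()
        except OSError:
            continue  # process exited while we were scanning
        if name in SUSPECTS:
            found.add(name)
    return found

found = running_suspects()
if found:
    print("traffic-generating daemons found: %s" % ", ".join(sorted(found)))
else:
    print("no common traffic-generating daemons found")

# IPv6 autoconf traffic can also be silenced per test port with sysctl, e.g.:
#   sysctl -w net.ipv6.conf.ens2f0.disable_ipv6=1
```

Any daemon flagged here should be stopped or masked on both tester and DUT test ports before starting an execution.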
* Re: [dts] low pass rate executing dts complete test suite 2018-06-10 14:27 ` Liu, Yong @ 2018-06-10 16:12 ` M R, Chengappa (Network Function Virtualization) 0 siblings, 0 replies; 11+ messages in thread From: M R, Chengappa (Network Function Virtualization) @ 2018-06-10 16:12 UTC (permalink / raw) To: Liu, Yong, Xu, GangX, dts [-- Attachment #1.1: Type: text/plain, Size: 11443 bytes --] Marvin, Sharing the setting of your lab and the running logs will really help us. I shall look forward to your mail once you have the information so that I can replicate the same in our environment and see if it is working for us. Thanks again for following this up diligently and look for your mail on the details pertaining to your environment. Best Regards, Chengappa From: Liu, Yong [mailto:yong.liu@intel.com] Sent: Sunday, June 10, 2018 7:58 PM To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org Subject: RE: low pass rate executing dts complete test suite Chengappa, We'd like to share our setting up and running logs to you. Let me find the right person. As to the reason, almost all of the test cases are just depend on the packets transmit/receipt on tester. So if tester port generated some unexpected packets, will cause test case failure. There're lots of occasions will cause that like network manager is enable on tester port or ipv6 is enabled. So in our lab, each one need to manually check environment and then do test execution. Our guide missed that part due to we did not realized it was important for outside user. We are collecting those information and will post to starting guide later. 
Thanks,
Marvin

From: M R, Chengappa (Network Function Virtualization) [mailto:cm-r@hpe.com]
Sent: Friday, June 08, 2018 4:41 PM
To: Liu, Yong <yong.liu@intel.com>; Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: RE: low pass rate executing dts complete test suite

Greetings Marvin,

Thank you for acknowledging my queries and also for confirming the pass rate. When you say one should maintain a clean environment, what does this imply? We have taken care of all the prerequisites that DTS recommends, so we want to know whether we are missing any further configuration on the DUT or TESTER. Furthermore, is it possible to share the list of NICs, both Intel and non-Intel, which achieve higher pass rates?

I am deducing the following from the table you mentioned below - please confirm whether my reading is correct:

OS: UB1610
Kernel: 4.8.0-22-generic
GCC: 6.2.0
NIC: FVL25G (fortville25g)
Target: x86_64-native-linuxapp-gcc
Fail/Total: 5/160

As asked, I would appreciate it if someone from the community could help us find what we are missing in our environment that leads to such a low pass rate. To that end, I had shared all the log files and the configuration files from my environment, hoping to get some pointers on how to proceed.

Thanking in advance,
Chengappa

From: Liu, Yong [mailto:yong.liu@intel.com]
Sent: Friday, June 08, 2018 1:50 PM
To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: RE: low pass rate executing dts complete test suite

Hi Chengappa,

Most likely your low pass rate is caused by unexpected packets received on the DUT ports. You need to check whether a network manager or DHCP client is running on the tester server. The test executor, as owner of the test server, should maintain a clean environment so that DTS can work normally.
In our internal regression report, DTS achieves an almost 95% pass rate, and this number gets higher after merging some bug fixes.

+--------+--------+------------+------------------+-------+----------------------------+------------+
| OS     | NIC    | Type       | Kernel           | GCC   | Target                     | Fail/Total |
+--------+--------+------------+------------------+-------+----------------------------+------------+
| UB1610 | FVL25G | Functional | 4.8.0-22-generic | 6.2.0 | x86_64-native-linuxapp-gcc | 5/160      |
+--------+--------+------------+------------------+-------+----------------------------+------------+

Thanks,
Marvin

From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of M R, Chengappa (Network Function Virtualization)
Sent: Friday, June 01, 2018 6:16 PM
To: Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: Re: [dts] low pass rate executing dts complete test suite
Importance: High

Dear All,

I am back with some more findings from executing DTS in an Intel environment, and this time the pass rate is around 26.5%. I achieved this pass rate by following the recommendations provided by Xu Gang! However, I am still puzzled as to why I am not getting a good pass rate even with Intel NICs. It would be of great help if anyone from the community could help me achieve good numbers. I am sharing the output file (attached to this mail) for reference.

Also, I would be interested to know whether anyone has achieved a 100% pass rate for DTS on an Intel NIC. If so, I would like to look at the configuration files used, and to know the DTS and DPDK versions and the Intel NIC type used to achieve it.

Furthermore, I am also sharing screenshots of the NIC information configured on the TESTER and DUT from the iLO GUI.

TESTER: [screenshot omitted]
DUT: [screenshot omitted]

Thanking in advance.
Chengappa

From: Tu, Lijuan [mailto:lijuan.tu@intel.com]
Sent: Tuesday, May 22, 2018 12:27 PM
To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: RE: low pass rate executing dts complete test suite

Hi Chengappa, answers in-line.

From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of M R, Chengappa (Network Function Virtualization)
Sent: Tuesday, May 22, 2018 10:57 AM
To: Xu, GangX <gangx.xu@intel.com>; dts@dpdk.org
Subject: Re: [dts] low pass rate executing dts complete test suite

Greetings Xu,

Yes, it seems I have used only one port - let me re-run after making the necessary changes in ports.cfg and get back with the observations.

@ Lijuan Tu, please find the answers in-line.

Thanks & Regards,
Chengappa

From: Xu, GangX [mailto:gangx.xu@intel.com]
Sent: Tuesday, May 22, 2018 7:12 AM
To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; dts@dpdk.org
Subject: RE: low pass rate executing dts complete test suite

Hi Chengappa:

How many ports do you use? From the information you provided, I think you used only one port. Many of the cases in DTS need two ports or more; e.g. the test suite pmd_bonded needs 4 ports, so if you provide 1, it fails at set_up_all and shows the "blocked set_up_all failed" issue.

Xu Gang

From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Tu, Lijuan
Sent: Monday, May 21, 2018 9:49 PM
To: M R, Chengappa (Network Function Virtualization) <cm-r@hpe.com>; dts@dpdk.org
Subject: Re: [dts] low pass rate executing dts complete test suite

Hi Chengappa,

Here are my points:

1. The link might have some issues, such as being down. From the result "receive 0, expect 64", I think the link is down. You can check your link first before running dts.
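Lijuan's first point - verify the link before running DTS - is easy to script. A minimal sketch: `link_up` is a hypothetical helper that just parses `ethtool` output, and the canned sample is an assumption of what a healthy 82599ES port reports; in practice you would feed it `"$(ethtool ens2f0)"` for each port in ports.cfg.

```shell
# Return success when an `ethtool <iface>` dump reports the link as up.
link_up() {
    echo "$1" | grep -q "Link detected: yes"
}

# Canned sample of what a healthy Niantic port prints (hypothetical):
SAMPLE="Settings for ens2f0:
	Speed: 10000Mb/s
	Link detected: yes"

if link_up "$SAMPLE"; then
    echo "link ok"
else
    echo "link DOWN - fix cabling/SFP before running dts"
fi
```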
[Chengappa] Sure, will check on this; thank you for the inputs.

2. DPDK changes between versions, and DTS changes with it. Which DPDK and DTS versions do you use? From the failed results "set up failed" and timeout, I think your testpmd failed to start. There are logs in output/; you can check them to find the real reason - maybe testpmd's parameters changed.

[Chengappa] I am using DPDK 18.02 with DTS 17.08.0; kindly let me know whether there are any issues with this combination or a version mismatch here. From the log traces I could see that testpmd failed to start - is there anything I need to tweak for testpmd's parameters to be changed?

[Lijuan] I think using DTS 18.02 is the best choice. As far as I know, there are some changes in offloads, the VLAN filter/strip default values, CRC strip, promiscuous mode, etc. You can check them from the git logs.

From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of M R, Chengappa (Network Function Virtualization)
Sent: Monday, May 21, 2018 7:28 PM
To: dts@dpdk.org
Subject: [dts] low pass rate executing dts complete test suite

Hello DTS community,

As we were investigating DTS on a Niantic (Intel Corporation 82599ES 10-Gigabit) platform, we saw a very low pass rate. We were under the assumption that if we took care of the initial prerequisites for the TESTER and TARGET DUT, as described at https://dpdk.org/doc/dts/gsg/sys_reqs.html, the DTS framework would help us achieve a good pass rate.
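Lijuan's suggestion above to dig through the logs under output/ for the real failure reason can be scripted. A minimal sketch; the directory name comes from the statistics.txt path quoted in this thread, and the failure strings to grep for are the ones mentioned in this discussion ("set_up_all failed", timeouts), not an exhaustive list.

```shell
# Count occurrences of a failure signature in the DTS log directory.
count_sig() {
    # $1 = log directory, $2 = failure string
    grep -r -o "$2" "$1" 2>/dev/null | wc -l
}

# Usage: point it at the DTS output directory, e.g.
#   count_sig output "set_up_all failed"
#   count_sig output "TIMEOUT"
```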
Could you please help me by letting me know whether there is any fine tuning for the TARGET DUT so that I can see better numbers than the current pass rate of 12.4%?

[root@tester dts]# cat output/statistics.txt
Passed = 17
Failed = 45
Blocked = 68
Pass rate = 12.4

Some of the specifications of the TARGET DUT and the current running configuration are as follows:

[root@targetdut ~]# lspci | egrep -i --color 'network|ethernet'
04:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
04:00.1 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
05:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

[root@tester dts]# cat conf/ports.cfg
# DUT Port Configuration
# [DUT IP]
# ports=
#     pci=Pci BDF,intf=Kernel interface;
#     pci=Pci BDF,mac=Mac address,peer=Tester Pci BDF,numa=Port Numa
#     pci=Pci BDF,peer=IXIA:card.port
#     pci=Pci BDF,peer=Tester Pci BDF,tp_ip=$(IP),tp_path=$(PERL_PATH);
#     pci=Pci BDF,peer=Tester Pci BDF,sec_port=yes,first_port=Pci BDF;
# [VM NAME] virtual machine name; this section is for the virtual scenario
# ports =
#     dev_idx=device index of ports info,peer=Tester Pci BDF
[10.70.2.6]
ports =
    pci=05:00.0,intf=ens2f0;
    pci=05:00.1,intf=ens2f1;

[root@tester dts]# cat executions/execution.cfg
[Execution1]
crbs=10.70.2.6
drivername=igb_uio
test_suites=hello_world
test_suites=
    cmdline,
    hello_world
targets=x86_64-native-linuxapp-gcc
parameters=nic_type=niantic:func=true

[root@tester dts]# cat conf/crbs.cfg
#DUT crbs Configuration
#[DUT IP]
# dut_ip: DUT ip address
# dut_user: Login DUT username
# dut_passwd: Login DUT password
# os: operation system type linux or freebsd
# tester_ip: Tester ip address
# tester_passwd: Tester password
# ixia_group: IXIA group name
# channels: Board channel number
# bypass_core0: Whether to bypass core0
[10.70.2.6]
dut_ip=10.70.2.6
dut_user=root
dut_passwd=HP1nvent
os=linux
tester_ip=10.70.2.5
tester_passwd=HP1nvent
channels=4

Snippet of the current results: [four screenshots of test results, omitted]

Thanking in advance,
Chengappa

If everyone is moving forward together, then success takes care of itself....!!
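One thing worth noting in the execution.cfg above: it contains two test_suites= entries (hello_world, then cmdline + hello_world), and depending on the config parser one of them is silently dropped. A cleaned-up sketch with a single list (suite names unchanged; add more suites as your port count allows):

```
[Execution1]
crbs=10.70.2.6
drivername=igb_uio
test_suites=
    cmdline,
    hello_world
targets=x86_64-native-linuxapp-gcc
parameters=nic_type=niantic:func=true
```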
* Re: [dts] low pass rate executing dts complete test suite
2018-06-08 8:40 ` M R, Chengappa (Network Function Virtualization)
2018-06-10 14:27 ` Liu, Yong
@ 2018-06-11 6:41 ` Xu, GangX
1 sibling, 0 replies; 11+ messages in thread
From: Xu, GangX @ 2018-06-11 6:41 UTC (permalink / raw)
To: M R, Chengappa (Network Function Virtualization), Liu, Yong, dts

Hi Chengappa,

Attached is the DTS tree we used; it includes the logs in output/.

Thanks,
Xu Gang

[Earlier messages in the thread quoted verbatim in full; trimmed - see above.]

[-- Attachment #2: dts.zip --]
[-- Type: application/x-zip-compressed, Size: 2183580 bytes --]
end of thread, other threads:[~2018-06-11 6:45 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-21 11:28 [dts] low pass rate executing dts complete test suite M R, Chengappa (Network Function Virtualization)
2018-05-21 13:49 ` Tu, Lijuan
2018-05-22  1:42 ` Xu, GangX
2018-05-22  2:57 ` M R, Chengappa (Network Function Virtualization)
2018-05-22  6:57 ` Tu, Lijuan
2018-06-01 10:16 ` M R, Chengappa (Network Function Virtualization)
2018-06-08  8:20 ` Liu, Yong
2018-06-08  8:40 ` M R, Chengappa (Network Function Virtualization)
2018-06-10 14:27 ` Liu, Yong
2018-06-10 16:12 ` M R, Chengappa (Network Function Virtualization)
2018-06-11  6:41 ` Xu, GangX
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).