* Re: [dts] [PATCH V1] tests/veb_switch use different cores for 2 vfs
  2020-11-09 15:29 [dts] [PATCH V1] tests/veb_switch use different cores for 2 vfs sunqin
@ 2020-11-09  7:07 ` Sun, QinX
  2020-11-11  6:30 ` Tu, Lijuan
  1 sibling, 0 replies; 3+ messages in thread
From: Sun, QinX @ 2020-11-09  7:07 UTC (permalink / raw)
  To: dts

[-- Attachment #1: Type: text/plain, Size: 311 bytes --]

Tested-by: Sun, QinX <qinx.sun@intel.com>
 
Regards,
Sun Qin

> -----Original Message-----
> From: sunqin <qinx.sun@intel.com>
> Sent: Monday, November 9, 2020 11:30 PM
> To: dts@dpdk.org
> Cc: Sun, QinX <qinx.sun@intel.com>
> Subject: [dts] [PATCH V1] tests/veb_switch use different cores for 2 vfs

[-- Attachment #2: TestVEBSwitching.log --]
[-- Type: application/octet-stream, Size: 6474 bytes --]

09/11/2020 15:36:36                            dts: 
TEST SUITE : TestVEBSwitching
09/11/2020 15:36:36                            dts: NIC :        columbiaville_25g
09/11/2020 15:36:36             dut.10.240.183.254: 
09/11/2020 15:36:36                         tester: 
09/11/2020 15:36:39               TestVEBSwitching: Test Case test_VEB_switching_inter_vfs Begin
09/11/2020 15:36:39             dut.10.240.183.254: 
09/11/2020 15:36:39                         tester: 
09/11/2020 15:36:39             dut.10.240.183.254: kill_all: called by dut and has no prefix list.
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.0/vendor
09/11/2020 15:36:52             dut.10.240.183.254: 0x8086
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.0/device
09/11/2020 15:36:52             dut.10.240.183.254: 0x1889
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.0/vendor
09/11/2020 15:36:52             dut.10.240.183.254: 0x8086
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.0/device
09/11/2020 15:36:52             dut.10.240.183.254: 0x1889
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.1/vendor
09/11/2020 15:36:52             dut.10.240.183.254: 0x8086
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.1/device
09/11/2020 15:36:52             dut.10.240.183.254: 0x1889
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.1/vendor
09/11/2020 15:36:52             dut.10.240.183.254: 0x8086
09/11/2020 15:36:52             dut.10.240.183.254: cat /sys/bus/pci/devices/0000\:03\:01.1/device
09/11/2020 15:36:52             dut.10.240.183.254: 0x1889
09/11/2020 15:36:52             dut.10.240.183.254: ip link set ens865f0 vf 0 mac 00:11:22:33:44:11
09/11/2020 15:36:53             dut.10.240.183.254: 
09/11/2020 15:36:53             dut.10.240.183.254: ip link set ens865f0 vf 1 mac 00:11:22:33:44:12
09/11/2020 15:36:53             dut.10.240.183.254: 
09/11/2020 15:36:55             dut.10.240.183.254: x86_64-native-linuxapp-gcc/app/dpdk-testpmd  -l 1,2 -n 4 -w 0000:03:01.0  --file-prefix=test1_17370_20201109153610   -- -i --eth-peer=0,00:11:22:33:44:12
09/11/2020 15:36:56             dut.10.240.183.254: EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/test1_17370_20201109153610/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:03:01.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
iavf_init_rss(): RSS is enabled by PF by default
iavf_configure_queues(): request RXDID[22] in Queue[0]

Port 0: link state change event

Port 0: link state change event
Port 0: 00:11:22:33:44:11
Checking link statuses...
Done
09/11/2020 15:37:06             dut.10.240.183.254: set fwd txonly
09/11/2020 15:37:06             dut.10.240.183.254: 
Set txonly packet forwarding mode
09/11/2020 15:37:06             dut.10.240.183.254: set promisc all off
09/11/2020 15:37:06             dut.10.240.183.254: 
09/11/2020 15:37:17             dut.10.240.183.254: start
09/11/2020 15:37:17             dut.10.240.183.254: 
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=00:11:22:33:44:12

  txonly packet forwarding packets/burst=32
  packet len=64 - nb packet segments=1
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
09/11/2020 15:37:19             dut.10.240.183.254: stop
09/11/2020 15:37:19             dut.10.240.183.254: 
Telling cores to ...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 68876416       TX-dropped: 0             TX-total: 68876416
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 68876416       TX-dropped: 0             TX-total: 68876416
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
09/11/2020 15:37:19             dut.10.240.183.254: show port stats 0
09/11/2020 15:37:19             dut.10.240.183.254: 

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 68876416   TX-errors: 0          TX-bytes:  4408090624

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
09/11/2020 15:37:19               TestVEBSwitching: Test Case test_VEB_switching_inter_vfs Result PASSED:
09/11/2020 15:37:22             dut.10.240.183.254: quit
09/11/2020 15:37:23             dut.10.240.183.254: 

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Port 0 is closed
Done

Bye...
09/11/2020 15:37:28             dut.10.240.183.254: kill_all: called by dut and prefix list has value.
09/11/2020 15:37:29                            dts: 
TEST SUITE ENDED: TestVEBSwitching


* [dts]  [PATCH V1] tests/veb_switch use different cores for 2 vfs
@ 2020-11-09 15:29 sunqin
  2020-11-09  7:07 ` Sun, QinX
  2020-11-11  6:30 ` Tu, Lijuan
  0 siblings, 2 replies; 3+ messages in thread
From: sunqin @ 2020-11-09 15:29 UTC (permalink / raw)
  To: dts; +Cc: sunqin

When several testpmd instances are started at the same time, they should use different cores

Signed-off-by: sunqin <qinx.sun@intel.com>
---
 tests/TestSuite_veb_switch.py | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/tests/TestSuite_veb_switch.py b/tests/TestSuite_veb_switch.py
index f51a8d3..8189478 100644
--- a/tests/TestSuite_veb_switch.py
+++ b/tests/TestSuite_veb_switch.py
@@ -239,11 +239,13 @@ class TestVEBSwitching(TestCase):
         the packets. Check Inter VF-VF MAC switch.
         """
         self.setup_env(driver='default')
-        self.pmdout.start_testpmd("Default", prefix="test1", ports=[self.sriov_vfs_port[0].pci], param="--eth-peer=0,%s" % self.vf1_mac)
+        self.dut.init_reserved_core()
+        cores_vf1 = self.dut.get_reserved_core('2C',0)
+        self.pmdout.start_testpmd(cores_vf1, prefix="test1", ports=[self.sriov_vfs_port[0].pci], param="--eth-peer=0,%s" % self.vf1_mac)
         self.dut.send_expect("set fwd txonly", "testpmd>")
         self.dut.send_expect("set promisc all off", "testpmd>")
-
-        self.pmdout_2.start_testpmd("Default", prefix="test2", ports=[self.sriov_vfs_port[1].pci])
+        cores_vf2 = self.dut.get_reserved_core('2C',0)
+        self.pmdout_2.start_testpmd(cores_vf2, prefix="test2", ports=[self.sriov_vfs_port[1].pci])
         self.session_secondary.send_expect("set fwd rxonly", "testpmd>")
         self.session_secondary.send_expect("set promisc all off", "testpmd>")
         self.session_secondary.send_expect("start", "testpmd>", 5)
-- 
2.17.1
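
For reference, below is a minimal sketch of the core-reservation flow the patch ends up with, consolidated in one place. The wrapper method and surrounding attribute names are assumptions for illustration only, not new DTS API; the init_reserved_core / get_reserved_core / start_testpmd calls are the same ones used in the diff above, and each get_reserved_core('2C', 0) call is expected to hand out a distinct pair of cores from socket 0.

def _start_two_testpmd_on_disjoint_cores(self):
    # Build the reserved core pool once for this test case.
    self.dut.init_reserved_core()

    # Successive calls are expected to return non-overlapping core pairs
    # from socket 0, so the two testpmd instances never share cores.
    cores_vf1 = self.dut.get_reserved_core('2C', 0)
    cores_vf2 = self.dut.get_reserved_core('2C', 0)

    # First testpmd: bound to VF0, with VF1's MAC set as the TX peer.
    self.pmdout.start_testpmd(cores_vf1, prefix="test1",
                              ports=[self.sriov_vfs_port[0].pci],
                              param="--eth-peer=0,%s" % self.vf1_mac)

    # Second testpmd: bound to VF1, started under its own file prefix.
    self.pmdout_2.start_testpmd(cores_vf2, prefix="test2",
                                ports=[self.sriov_vfs_port[1].pci])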



* Re: [dts] [PATCH V1] tests/veb_switch use different cores for 2 vfs
  2020-11-09 15:29 [dts] [PATCH V1] tests/veb_switch use different cores for 2 vfs sunqin
  2020-11-09  7:07 ` Sun, QinX
@ 2020-11-11  6:30 ` Tu, Lijuan
  1 sibling, 0 replies; 3+ messages in thread
From: Tu, Lijuan @ 2020-11-11  6:30 UTC (permalink / raw)
  To: Sun, QinX, dts; +Cc: Sun, QinX

> When several testpmd instances are started at the same time, they should
> use different cores
> 
> Signed-off-by: sunqin <qinx.sun@intel.com>

Applied
