From: "Zhu, WenhuiX" <wenhuix.zhu@intel.com>
To: "Zhu, ShuaiX" <shuaix.zhu@intel.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: "Zhu, ShuaiX" <shuaix.zhu@intel.com>
Subject: Re: [dts] [PATCH V1] tests/hotplug: Remove port close due to dpdk changes.
Date: Tue, 15 Oct 2019 02:12:01 +0000
Message-ID: <E08767FB2CE10642B6780736EAC070F6049E0BAA@shsmsx102.ccr.corp.intel.com>
In-Reply-To: <1571105647-137779-1-git-send-email-shuaix.zhu@intel.com>
[-- Attachment #1: Type: text/plain, Size: 1160 bytes --]
Tested-by: Zhu, WenhuiX <wenhuix.zhu@intel.com>
-----Original Message-----
From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of zhu,shuai
Sent: Tuesday, October 15, 2019 10:14 AM
To: dts@dpdk.org
Cc: Zhu, ShuaiX <shuaix.zhu@intel.com>
Subject: [dts] [PATCH V1] tests/hotplug: Remove port close due to dpdk changes.
A DPDK patch was made for https://jira.devtools.intel.com/browse/DPDK-11602, which changed the behaviour of the port close operation; the explicit "port close" step is therefore removed from the hot-plug test.
Signed-off-by: zhu,shuai <shuaix.zhu@intel.com>
---
tests/TestSuite_hotplug.py | 1 -
1 file changed, 1 deletion(-)
diff --git a/tests/TestSuite_hotplug.py b/tests/TestSuite_hotplug.py
index a7a7228..8a0d743 100644
--- a/tests/TestSuite_hotplug.py
+++ b/tests/TestSuite_hotplug.py
@@ -83,7 +83,6 @@ class TestPortHotPlug(TestCase):
self.dut.send_expect("port stop %s" % port,"Stopping ports",60)
# sleep 10 seconds for fortville update link stats
time.sleep(10)
- self.dut.send_expect("port close %s" % port,"Closing ports...",60)
self.dut.send_expect("port detach %s" % port,"is detached",60)
def test_after_attach(self):
--
2.17.2
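
For reference, after this patch the detach path in TestSuite_hotplug.py reduces to the sequence sketched below. This is a minimal sketch reconstructed from the diff context above; the enclosing method name, the class setup, and the attach path are not shown in the hunk and are assumptions here.

    import time
    # In DTS, test suites derive from the framework's TestCase class
    # (typically "from test_case import TestCase").

    class TestPortHotPlug(TestCase):
        # detach() is a guess at the enclosing method name; the hunk only
        # shows the body around line 83 of tests/TestSuite_hotplug.py.
        def detach(self, port):
            # Stop the port; testpmd acknowledges with "Stopping ports".
            self.dut.send_expect("port stop %s" % port, "Stopping ports", 60)
            # Sleep 10 seconds so the Fortville NIC can refresh link stats.
            time.sleep(10)
            # "port close" is no longer sent here: with the DPDK-11602 change
            # the close behaviour differs, and detach alone removes the port.
            self.dut.send_expect("port detach %s" % port, "is detached", 60)

The attached log below reflects this flow: after "port stop 1", "port detach 1" reports "Removing a device..." (in one instance followed by "Port was not closed"), and both test cases still pass.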
[-- Attachment #2: TestPortHotPlug.log --]
[-- Type: application/octet-stream, Size: 25882 bytes --]
15/10/2019 01:36:45 dts:
TEST SUITE : TestPortHotPlug
15/10/2019 01:36:45 dts: NIC : fortville_spirit
15/10/2019 01:36:45 dut.10.240.176.141:
15/10/2019 01:36:45 tester:
15/10/2019 01:36:45 TestPortHotPlug: Test Case test_after_attach Begin
15/10/2019 01:36:45 dut.10.240.176.141:
15/10/2019 01:36:45 tester:
15/10/2019 01:36:45 dut.10.240.176.141: ./usertools/dpdk-devbind.py -u 0000:82:00.1
15/10/2019 01:36:46 dut.10.240.176.141:
15/10/2019 01:36:46 dut.10.240.176.141: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1e -n 4 -- -i
15/10/2019 01:36:57 dut.10.240.176.141: EAL: Detected 88 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL: probe driver: 8086:6f20 rawdev_ioat
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL: probe driver: 8086:6f21 rawdev_ioat
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL: probe driver: 8086:6f22 rawdev_ioat
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL: probe driver: 8086:6f23 rawdev_ioat
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL: probe driver: 8086:6f24 rawdev_ioat
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL: probe driver: 8086:6f25 rawdev_ioat
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL: probe driver: 8086:6f26 rawdev_ioat
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL: probe driver: 8086:6f27 rawdev_ioat
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:80:04.0 on NUMA socket 1
EAL: probe driver: 8086:6f20 rawdev_ioat
EAL: PCI device 0000:80:04.1 on NUMA socket 1
EAL: probe driver: 8086:6f21 rawdev_ioat
EAL: PCI device 0000:80:04.2 on NUMA socket 1
EAL: probe driver: 8086:6f22 rawdev_ioat
EAL: PCI device 0000:80:04.3 on NUMA socket 1
EAL: probe driver: 8086:6f23 rawdev_ioat
EAL: PCI device 0000:80:04.4 on NUMA socket 1
EAL: probe driver: 8086:6f24 rawdev_ioat
EAL: PCI device 0000:80:04.5 on NUMA socket 1
EAL: probe driver: 8086:6f25 rawdev_ioat
EAL: PCI device 0000:80:04.6 on NUMA socket 1
EAL: probe driver: 8086:6f26 rawdev_ioat
EAL: PCI device 0000:80:04.7 on NUMA socket 1
EAL: probe driver: 8086:6f27 rawdev_ioat
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:1583 net_i40e
i40e_GLQF_reg_init(): i40e device 0000:82:00.0 changed global register [0x002689a0]. original: 0x00000000, new: 0x00000029
i40e_GLQF_reg_init(): i40e device 0000:82:00.0 changed global register [0x00268ca4]. original: 0x00001840, new: 0x00009420
i40e_aq_debug_write_global_register(): i40e device 0000:82:00.0 changed global register [0x0026c7a0]. original: 0xa8, after: 0x28
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1583 net_i40e
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 1)
Port 0: 68:05:CA:30:6A:80
Checking link statuses...
Done
15/10/2019 01:37:00 dut.10.240.176.141: port attach 0000:82:00.1
15/10/2019 01:37:00 dut.10.240.176.141: port attach 0000:82:00.1
Attaching a new port...
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1583 net_i40e
15/10/2019 01:37:00 dut.10.240.176.141: port start 1
15/10/2019 01:37:00 dut.10.240.176.141: port start 1
15/10/2019 01:37:10 dut.10.240.176.141: show port info 1
15/10/2019 01:37:10 dut.10.240.176.141: show port info 1
********************* Infos for port 1 *********************
MAC address: 68:05:CA:30:6A:81
Device name: 0000:82:00.1
Driver name: net_i40e
Devargs:
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 40000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off
filter off
qinq(extend) off
Hash key size in bytes: 52
Redirection table size: 512
Supported RSS offload flow types:
ipv4-frag
ipv4-tcp
ipv4-udp
ipv4-sctp
ipv4-other
ipv6-frag
ipv6-tcp
ipv6-udp
ipv6-sctp
ipv6-other
l2_payload
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum number of VMDq pools: 64
Current number of RX queues: 1
Max possible RX queues: 320
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 1
Max possible TX queues: 320
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 255
Max segment number per MTU/TSO: 8
15/10/2019 01:37:10 dut.10.240.176.141: start
15/10/2019 01:37:10 dut.10.240.176.141: start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=2048 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=2048 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
15/10/2019 01:37:10 dut.10.240.176.141: port detach 1
15/10/2019 01:37:10 dut.10.240.176.141: port detach 1
Removing a device...
15/10/2019 01:37:10 dut.10.240.176.141: stop
15/10/2019 01:37:10 dut.10.240.176.141: stop
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
15/10/2019 01:37:10 dut.10.240.176.141: port stop 1
15/10/2019 01:37:10 dut.10.240.176.141: port stop 1
15/10/2019 01:37:20 dut.10.240.176.141: port detach 1
15/10/2019 01:37:21 dut.10.240.176.141: port detach 1
Removing a device...
Port was not closed
15/10/2019 01:37:21 dut.10.240.176.141: port attach 0000:82:00.1
15/10/2019 01:37:21 dut.10.240.176.141: port attach 0000:82:00.1
Attaching a new port...
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1583 net_i40e
15/10/2019 01:37:21 dut.10.240.176.141: port start 1
15/10/2019 01:37:21 dut.10.240.176.141: port start 1
15/10/2019 01:37:31 dut.10.240.176.141: show port info 1
15/10/2019 01:37:31 dut.10.240.176.141: show port info 1
********************* Infos for port 1 *********************
MAC address: 68:05:CA:30:6A:81
Device name: 0000:82:00.1
Driver name: net_i40e
Devargs:
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 40000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off
filter off
qinq(extend) off
Hash key size in bytes: 52
Redirection table size: 512
Supported RSS offload flow types:
ipv4-frag
ipv4-tcp
ipv4-udp
ipv4-sctp
ipv4-other
ipv6-frag
ipv6-tcp
ipv6-udp
ipv6-sctp
ipv6-other
l2_payload
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum number of VMDq pools: 64
Current number of RX queues: 1
Max possible RX queues: 320
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 1
Max possible TX queues: 320
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 255
Max segment number per MTU/TSO: 8
15/10/2019 01:37:31 dut.10.240.176.141: start
15/10/2019 01:37:31 dut.10.240.176.141: start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=2048 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=2048 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
15/10/2019 01:37:31 dut.10.240.176.141: port detach 1
15/10/2019 01:37:31 dut.10.240.176.141: port detach 1
Removing a device...
15/10/2019 01:37:31 dut.10.240.176.141: clear port stats 1
15/10/2019 01:37:31 dut.10.240.176.141: clear port stats 1
NIC statistics for port 1 cleared
15/10/2019 01:37:31 tester: scp -v /home/autoregression/zhushuai/output/tmp/pcap/scapy_ens160f1.pcap1571074651.9 root@10.240.176.174:/tmp/tester/
15/10/2019 01:37:33 tester: scp -v /home/autoregression/zhushuai/output/tmp/pcap/scapy_ens160f1.cmd1571074651.9 root@10.240.176.174:/tmp/tester/
15/10/2019 01:37:34 tester: python /tmp/tester/scapy_ens160f1.cmd1571074651.9
15/10/2019 01:37:35 tester: packet ready for sending...
Ether(src='00:00:20:00:00:00', dst='68:05:ca:30:6a:81', type=2048)/IP(frag=0, src='127.0.0.1', proto=17, tos=0, dst='127.0.0.1', chksum=31932, len=46, id=1, version=4, flags=0, ihl=5, ttl=64)/UDP(dport=53, sport=53, len=26, chksum=58930)/DNS(aa=0, qr=0, an='', ad=0, nscount=22616, qdcount=22616, ns='', tc=0, rd=0, arcount=22616, length=None, ar='', opcode=11, ra=0, cd=1, z=1, rcode=8, id=22616, ancount=22616, qd='')/Raw(load='XXXXXX')
.
Sent 1 packets.
15/10/2019 01:37:35 dut.10.240.176.141: show port stats 1
15/10/2019 01:37:35 dut.10.240.176.141: show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-missed: 0 RX-bytes: 60
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
15/10/2019 01:37:35 dut.10.240.176.141: quit
15/10/2019 01:37:38 dut.10.240.176.141: quit
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 1 TX-dropped: 0 TX-total: 1
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 1 TX-dropped: 0 TX-total: 1
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Port 0: link state change event
Done
Shutting down port 0...
Closing ports...
Port 1: link state change event
Done
Shutting down port 1...
Closing ports...
Done
Bye...
15/10/2019 01:37:38 TestPortHotPlug: Test Case test_after_attach Result PASSED:
15/10/2019 01:37:38 dut.10.240.176.141: kill_all: called by dut and has no prefix list.
15/10/2019 01:37:41 dut.10.240.176.141: ./usertools/dpdk-devbind.py --bind=igb_uio 0000:82:00.1
15/10/2019 01:37:42 dut.10.240.176.141: Notice: 0000:82:00.1 already bound to driver igb_uio, skipping
15/10/2019 01:37:44 TestPortHotPlug: Test Case test_before_attach Begin
15/10/2019 01:37:44 dut.10.240.176.141:
15/10/2019 01:37:44 tester:
15/10/2019 01:37:44 dut.10.240.176.141: ./usertools/dpdk-devbind.py -u 0000:82:00.1
15/10/2019 01:37:44 dut.10.240.176.141:
15/10/2019 01:37:47 dut.10.240.176.141: ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1e -n 4 -- -i
15/10/2019 01:37:58 dut.10.240.176.141: EAL: Detected 88 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL: probe driver: 8086:6f20 rawdev_ioat
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL: probe driver: 8086:6f21 rawdev_ioat
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL: probe driver: 8086:6f22 rawdev_ioat
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL: probe driver: 8086:6f23 rawdev_ioat
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL: probe driver: 8086:6f24 rawdev_ioat
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL: probe driver: 8086:6f25 rawdev_ioat
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL: probe driver: 8086:6f26 rawdev_ioat
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL: probe driver: 8086:6f27 rawdev_ioat
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:80:04.0 on NUMA socket 1
EAL: probe driver: 8086:6f20 rawdev_ioat
EAL: PCI device 0000:80:04.1 on NUMA socket 1
EAL: probe driver: 8086:6f21 rawdev_ioat
EAL: PCI device 0000:80:04.2 on NUMA socket 1
EAL: probe driver: 8086:6f22 rawdev_ioat
EAL: PCI device 0000:80:04.3 on NUMA socket 1
EAL: probe driver: 8086:6f23 rawdev_ioat
EAL: PCI device 0000:80:04.4 on NUMA socket 1
EAL: probe driver: 8086:6f24 rawdev_ioat
EAL: PCI device 0000:80:04.5 on NUMA socket 1
EAL: probe driver: 8086:6f25 rawdev_ioat
EAL: PCI device 0000:80:04.6 on NUMA socket 1
EAL: probe driver: 8086:6f26 rawdev_ioat
EAL: PCI device 0000:80:04.7 on NUMA socket 1
EAL: probe driver: 8086:6f27 rawdev_ioat
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:1583 net_i40e
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1583 net_i40e
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 68:05:CA:30:6A:80
Configuring Port 1 (socket 1)
Port 1: 68:05:CA:30:6A:81
Checking link statuses...
Done
15/10/2019 01:37:58 dut.10.240.176.141: port stop 1
15/10/2019 01:37:58 dut.10.240.176.141: port stop 1
15/10/2019 01:38:08 dut.10.240.176.141: port detach 1
15/10/2019 01:38:08 dut.10.240.176.141: port detach 1
Removing a device...
Port was not closed
15/10/2019 01:38:08 dut.10.240.176.141: port attach 0000:82:00.1
15/10/2019 01:38:08 dut.10.240.176.141: port attach 0000:82:00.1
Attaching a new port...
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1583 net_i40e
15/10/2019 01:38:08 dut.10.240.176.141: port start 1
15/10/2019 01:38:08 dut.10.240.176.141: port start 1
15/10/2019 01:38:19 dut.10.240.176.141: show port info 1
15/10/2019 01:38:19 dut.10.240.176.141: show port info 1
********************* Infos for port 1 *********************
MAC address: 68:05:CA:30:6A:81
Device name: 0000:82:00.1
Driver name: net_i40e
Devargs:
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 40000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off
filter off
qinq(extend) off
Hash key size in bytes: 52
Redirection table size: 512
Supported RSS offload flow types:
ipv4-frag
ipv4-tcp
ipv4-udp
ipv4-sctp
ipv4-other
ipv6-frag
ipv6-tcp
ipv6-udp
ipv6-sctp
ipv6-other
l2_payload
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum number of VMDq pools: 64
Current number of RX queues: 1
Max possible RX queues: 320
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 1
Max possible TX queues: 320
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 255
Max segment number per MTU/TSO: 8
15/10/2019 01:38:19 dut.10.240.176.141: start
15/10/2019 01:38:19 dut.10.240.176.141: start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=2048 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=2048 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
15/10/2019 01:38:19 dut.10.240.176.141: port detach 1
15/10/2019 01:38:19 dut.10.240.176.141: port detach 1
Removing a device...
15/10/2019 01:38:19 dut.10.240.176.141: clear port stats 1
15/10/2019 01:38:19 dut.10.240.176.141: clear port stats 1
NIC statistics for port 1 cleared
15/10/2019 01:38:19 tester: scp -v /home/autoregression/zhushuai/output/tmp/pcap/scapy_ens160f1.pcap1571074699.48 root@10.240.176.174:/tmp/tester/
15/10/2019 01:38:20 tester: scp -v /home/autoregression/zhushuai/output/tmp/pcap/scapy_ens160f1.cmd1571074699.48 root@10.240.176.174:/tmp/tester/
15/10/2019 01:38:22 tester: python /tmp/tester/scapy_ens160f1.cmd1571074699.48
15/10/2019 01:38:22 tester: packet ready for sending...
Ether(src='00:00:20:00:00:00', dst='68:05:ca:30:6a:81', type=2048)/IP(frag=0, src='127.0.0.1', proto=17, tos=0, dst='127.0.0.1', chksum=31932, len=46, id=1, version=4, flags=0, ihl=5, ttl=64)/UDP(dport=53, sport=53, len=26, chksum=58930)/DNS(aa=0, qr=0, an='', ad=0, nscount=22616, qdcount=22616, ns='', tc=0, rd=0, arcount=22616, length=None, ar='', opcode=11, ra=0, cd=1, z=1, rcode=8, id=22616, ancount=22616, qd='')/Raw(load='XXXXXX')
.
Sent 1 packets.
15/10/2019 01:38:22 dut.10.240.176.141: show port stats 1
15/10/2019 01:38:23 dut.10.240.176.141: show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-missed: 0 RX-bytes: 60
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
15/10/2019 01:38:23 dut.10.240.176.141: quit
15/10/2019 01:38:25 dut.10.240.176.141: quit
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 1 TX-dropped: 0 TX-total: 1
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 1 TX-dropped: 0 TX-total: 1
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Port 0: link state change event
Done
Shutting down port 0...
Closing ports...
Port 1: link state change event
Done
Shutting down port 1...
Closing ports...
Done
Bye...
15/10/2019 01:38:25 TestPortHotPlug: Test Case test_before_attach Result PASSED:
15/10/2019 01:38:25 dut.10.240.176.141: kill_all: called by dut and has no prefix list.
15/10/2019 01:38:28 dut.10.240.176.141: ./usertools/dpdk-devbind.py --bind=igb_uio 0000:82:00.1
15/10/2019 01:38:29 dut.10.240.176.141: Notice: 0000:82:00.1 already bound to driver igb_uio, skipping
15/10/2019 01:38:31 dut.10.240.176.141: kill_all: called by dut and has no prefix list.
15/10/2019 01:38:34 dts:
TEST SUITE ENDED: TestPortHotPlug