* [dpdk-dev] pktgen-dpdk send too big packet, and stop sending packets after few seconds
@ 2019-06-23 8:00 Xia Rui
2019-06-23 13:26 ` Wiles, Keith
0 siblings, 1 reply; 4+ messages in thread
From: Xia Rui @ 2019-06-23 8:00 UTC (permalink / raw)
To: dpdk dev community
Hello, everyone.
I am using pktgen-dpdk and testpmd to test the functionality of OVS-DPDK. The network topology is:
+-----------+---------------------+    host (OVS-DPDK)    +---------------------+-----------+
|           | vhost-user port 1   |<--------------------->| vhost-user port 3   |           |
|           +---------------------+                       +---------------------+           |
| container |       pktgen        |                       |       testpmd       | container |
|           +---------------------+                       +---------------------+           |
|           | vhost-user port 2   |<--------------------->| vhost-user port 4   |           |
+-----------+---------------------+                       +---------------------+-----------+
The version of my platform:
1. host OS: ubuntu 16.04.5 LTS
2. host linux kernel: 4.15.0-15
3. host OVS: 2.8.0
4. host DPDK : 17.05.2
5. container pktgen-dpdk: 3.4.9 + DPDK 17.05.2
6. container DPDK (testpmd): 17.05.2
There are two docker containers. One is running pktgen-dpdk with:
############################pktgen-dpdk start script############################
./app/x86_64-native-linuxapp-gcc/pktgen -c 0x70 --master-lcore 4 -n 1 --file-prefix pktgen --no-pci \
--vdev 'net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1' \
--vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/var/run/openvswitch/vhost-user2' \
-- -T -P -m "5.0,6.1"
############################pktgen-dpdk start script END############################
The other is running testpmd with:
############################testpmd start script############################
testpmd -c 0xE0 -n 1 --socket-mem=1024,0 --file-prefix testpmd --no-pci \
--vdev 'net_virtio_user3,mac=00:00:00:00:00:03,path=/var/run/openvswitch/vhost-user3' \
--vdev 'net_virtio_user4,mac=00:00:00:00:00:04,path=/var/run/openvswitch/vhost-user4' \
-- -i --burst=64 --disable-hw-vlan --txd=2048 --rxd=2048 --auto-start --coremask=0xc0
############################testpmd start script END############################
The two containers are connected using ovs-docker on the host. I create four vhost-user ports, two of which are for pktgen and the other two for testpmd. I connect the vhost-user ports by adding routes between them. The start script of OVS is:
############################ovs-dpdk start script############################
sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options \
--private-key=db:Open_vSwitch,SSL,private_key \
--certificate=db:Open_vSwitch,SSL,certificate \
--bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
--pidfile --detach
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x02
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x04
sudo ovs-vswitchd --pidfile --detach --log-file=/var/log/openvswitch/vhost-ovs-vswitchd.log
sudo /usr/local/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start
############################ovs-dpdk start script END############################
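For completeness, the script above assumes the vhost-user ports already exist on an OVS bridge. Ports of this kind are typically created with something like the following sketch (the bridge name `br0` is an assumption; the port names match the socket paths used by the containers):

```shell
# Sketch: create a userspace (netdev) bridge and the four
# server-mode vhost-user ports used by pktgen and testpmd.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
for p in vhost-user1 vhost-user2 vhost-user3 vhost-user4; do
    ovs-vsctl add-port br0 "$p" -- set Interface "$p" type=dpdkvhostuser
done
```

With `type=dpdkvhostuser`, OVS acts as the vhost-user server and creates the sockets under its run directory, which is why the containers point their virtio-user vdevs at `/var/run/openvswitch/vhost-userN`.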
When I start pktgen-dpdk to send packets, something goes wrong.
First, I set the packet size to 64 bytes, but some bigger packets (more than 64 bytes) appear. I set the rate to 10% and the count to 300 packets, and get:
###########################CMD shot#######################
Ports 0-1 of 2 <Main Page> Copyright (c) <2010-2017>, Intel Corporation
Flags:Port : P--------------:0 P--------------:1
Link State : <UP-10000-FD> <UP-10000-FD> ----TotalRate----
Pkts/s Max/Rx : 256/0 300/0 556/0
Max/Tx : 300/0 300/0 600/0
MBits/s Rx/Tx : 0/0 0/0 0/0
Broadcast : 0 0
Multicast : 0 0
64 Bytes : 1104 1148
65-127 : 64 152
128-255 : 0 0
256-511 : 0 0
512-1023 : 0 0
1024-1518 : 0 0
Runts/Jumbos : 0/0 0/0
Errors Rx/Tx : 0/0 0/0
Total Rx Pkts : 1168 1300
Tx Pkts : 1300 1300
Rx MBs : 0 0
Tx MBs : 1 1
ARP/ICMP Pkts : 0/0 0/0
:
Pattern Type : abcd... abcd...
Tx Count/% Rate : 300 /10% 300 /10%
PktSize/Tx Burst : 64 / 64 64 / 64
Src/Dest Port : 1234 / 5678 1234 / 5678
Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
802.1p CoS : 0 0
ToS Value: : 0 0
- DSCP value : 0 0
- IPP value : 0 0
Dst IP Address : 192.168.1.1 192.168.0.1
Src IP Address : 192.168.0.1/24 192.168.1.1/24
Dst MAC Address : 00:00:00:00:00:02 00:00:00:00:00:01
Src MAC Address : 00:00:00:00:00:01 00:00:00:00:00:02
VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0
-- Pktgen Ver: 3.4.9 (DPDK 17.05.2) Powered by DPDK --------------------------
###########################CMD shot END#######################
There ought to be no packets larger than 64 bytes, but some exist.
Second, I reset the configuration ("rst") and try to send packets continuously. However, pktgen works for a few seconds and then stops sending packets, with this output:
###########################CMD shot#######################
Ports 0-1 of 2 <Main Page> Copyright (c) <2010-2017>, Intel Corporation
Flags:Port : P--------------:0 P--------------:1
Link State : <UP-10000-FD> <UP-10000-FD> ----TotalRate----
Pkts/s Max/Rx : 176288/0 146016/0 308224/0
Max/Tx : 1344832/0 767520/0 1535040/0
MBits/s Rx/Tx : 0/0 0/0 0/0
Broadcast : 0 0
Multicast : 0 0
64 Bytes : 15872 15104
65-127 : 50368 61248
128-255 : 44096 62848
256-511 : 51840 93216
512-1023 : 63264 151456
1024-1518 : 51936 126240
Runts/Jumbos : 0/0 0/0
Errors Rx/Tx : 0/0 0/0
Total Rx Pkts : 277376 510112
Tx Pkts : 4529248 1162368
Rx MBs : 1215 2701
Tx MBs : 665276 57380
ARP/ICMP Pkts : 0/0 0/0
:
Pattern Type : abcd... abcd...
Tx Count/% Rate : Forever /100% Forever /100%
PktSize/Tx Burst : 64 / 64 64 / 64
Src/Dest Port : 1234 / 5678 1234 / 5678
Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
802.1p CoS : 0 0
ToS Value: : 0 0
- DSCP value : 0 0
- IPP value : 0 0
Dst IP Address : 192.168.1.1 192.168.0.1
Src IP Address : 192.168.0.1/24 192.168.1.1/24
Dst MAC Address : 00:00:00:00:00:02 00:00:00:00:00:01
Src MAC Address : 00:00:00:00:00:01 00:00:00:00:00:02
VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0
-- Pktgen Ver: 3.4.9 (DPDK 17.05.2) Powered by DPDK --------------------------
###########################CMD shot END#######################
Pktgen is stuck in this state, and there are a lot of large packets!
I check the ovs-dpdk log and find:
#############################ovs-dpdk log#############################
2019-06-23T07:51:39.349Z|00022|netdev_dpdk(pmd8)|WARN|Dropped 5803564 log messages in last 102581 seconds (most recently, 102576 seconds ago) due to excessive rate
2019-06-23T07:51:39.349Z|00023|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00024|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00025|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00026|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00027|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00028|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00029|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00030|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00031|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00032|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00033|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00034|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00035|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00036|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00037|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00038|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00039|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00040|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00041|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
2019-06-23T07:51:39.349Z|00042|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
#############################ovs-dpdk log END#############################
Thank you for sharing your ideas.
Best wishes,
Xia Rui
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dpdk-dev] pktgen-dpdk send too big packet, and stop sending packets after few seconds
2019-06-23 8:00 [dpdk-dev] pktgen-dpdk send too big packet, and stop sending packets after few seconds Xia Rui
@ 2019-06-23 13:26 ` Wiles, Keith
2019-06-24 9:22 ` Xia Rui
0 siblings, 1 reply; 4+ messages in thread
From: Wiles, Keith @ 2019-06-23 13:26 UTC (permalink / raw)
To: Xia Rui; +Cc: dpdk dev community
> On Jun 23, 2019, at 3:00 AM, Xia Rui <xiarui_work@163.com> wrote:
>
> Hello, everyone.
> I am using pktgen-dpdk and testpmd to test the functionality of ovs-dpdk. The network topology is :
> +-------------+----------------------+ host(OVS-DPDK) +-----------------------+-----------------+
> | | vhost-user port 1 |<----------------------------------->| vhost-user port 3 | |
> | +----------------------+ +-----------------------+ |
> | container | pktgen | | testpmd | container |
> | +----------------------+ +-------------------+ |
> | | vhost-user port 2 |<------------------------------------>| vhost-user port 4 | |
> +--------------+---------------------+ +----------------------+----------------+
>
> The version of my platform:
> 1. host OS: ubuntu 16.04.5 LTS
> 2. host linux kernel: 4.15.0-15
> 3. host OVS: 2.8.0
> 4. host DPDK : 17.05.2
> 5. container pktgen-dpdk: 3.4.9 + DPDK 17.05.2
> 6. container DPDK (testpmd): 17.05.2
At one point the virtio drivers were changing the length of the TX packet in the mbuf for some type of metadata shared between the two virtio endpoints. Pktgen expected the length to remain the same when the mbuf was returned to the pktgen mempool, which explains the non-64-byte frames. A fix was added to virtio in later releases to restore the length in the mbuf, and I may have added code to a later pktgen release to set the length back to what I expected when sending the packets. I looked in the latest version of pktgen and found the code in pktgen_setup_cb() which resets the length in the packet. The code should call the rte_pktmbuf_reset() routine to restore the mbuf to the expected state.
If OVS can work with a later version of DPDK, I would upgrade to the latest release; if not, I would compare the two versions of the virtio PMD and see if you can find that fix. Failing that, you can modify pktgen to repair the length in the mbuf before it is sent, and you may also have to fix the read/write offsets in the mbuf. This change will affect pktgen performance, but for virtio that should not be a problem. You can also contact the maintainers of virtio and see if they remember the fix.
For the traffic stopping, I do not remember if this fix solved that problem.
Regards,
Keith
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dpdk-dev] pktgen-dpdk send too big packet, and stop sending packets after few seconds
2019-06-23 13:26 ` Wiles, Keith
@ 2019-06-24 9:22 ` Xia Rui
2019-06-24 13:17 ` Wiles, Keith
0 siblings, 1 reply; 4+ messages in thread
From: Xia Rui @ 2019-06-24 9:22 UTC (permalink / raw)
To: Wiles, Keith; +Cc: dpdk dev community
Thank you for your reply. I am trying to use the latest versions of pktgen-dpdk, OVS, and DPDK, but I meet some problems when setting up the platform.
The version of my platform:
1. OVS: 2.11.1
2. DPDK: 18.11
3. pktgen-dpdk: 3.6.6
With the previous version (3.4.9), pktgen-dpdk worked well. This time it reports an error:
##############################error log##################################
Pktgen:/> PANIC in pktgen_main_rxtx_loop():
*** port 0 socket ID 4294967295 has different socket ID for lcore 5 socket ID 0
7: [/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7fc5d6a0288f]]
6: [/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7fc5d6cd96db]]
5: [./app/x86_64-native-linuxapp-gcc/pktgen(eal_thread_loop+0x1e1) [0x57c641]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_launch_one_lcore+0xb7) [0x49d247]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen() [0x49bef5]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc3) [0x46352c]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2b) [0x5829fb]]
Aborted (core dumped)
##############################error log END##################################
When I look up the log of OVS, I find:
##############################OVS error log##################################
2019-06-24T09:11:13.780Z|00113|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2019-06-24T09:11:13.780Z|00114|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2019-06-24T09:11:13.781Z|00115|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/vhost-user1' has been removed
2019-06-24T09:11:13.781Z|00116|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2019-06-24T09:11:13.781Z|00117|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2019-06-24T09:11:13.781Z|00118|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/vhost-user2' has been removed
##############################OVS error log#############################################
Could you share some hints with me? Thank you for your time.
Best wishes
Xia Rui
At 2019-06-23 21:26:15, "Wiles, Keith" <keith.wiles@intel.com> wrote:
>
>
>At one point virtio drivers were changing the length of the tx packet in the mbuf for some type of meta data being shared between the two virtio points. Pktgen expected the length to remain the same when the mbuf was returned to pktgen mempool. This explains the non-64 byte frames. The fix was added to virtio in later releases to restore the length in the mbuf, plus I may’ve added code to pktgen in later release to fix the length back to what I expected when sending the packets. I looked in the latests version of pktgen and found the code in pktgen_setup_cb() which reset the length in the packet. The code should call rte_pktmbuf_reset() routine to restore the mbuf to expected state.
>
>If OVS can work with a later version of DPDK then I would upgrade to the latest release, if not then I would look at and compare the two versions of virtio PMD and see if you can find that fix. If not then you can look in pktgen and repair the length in the mbuf before it is sent plus you may have to fix the location of the read/write offsets in the mbuf as well. This change will effect pktgen performance, but for virtio that should not be a problem. You can also connect the Maintainers of virtio and see if they remember the fix.
>
>For the traffic stopping, I do not remember if this fix solved that problem.
>>
>> There are two docker containers. One is running pktgen-dpdk with:
>> ############################pktgen-dpdk start script############################
>> ./app/x86_64-native-linuxapp-gcc/pktgen -c 0x70 --master-lcore 4 -n 1 --file-prefix pktgen --no-pci \
>> --vdev 'net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1' \
>> --vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/var/run/openvswitch/vhost-user2' \
>> -- -T -P -m "5.0,6.1"
>> ############################pktgen-dpdk start script END############################
>>
>> The other is running testpmd with:
>> ############################testpmd start script############################
>> testpmd -c 0xE0 -n 1 --socket-mem=1024,0 --file-prefix testpmd --no-pci \
>> --vdev 'net_virtio_user3,mac=00:00:00:00:00:03,path=/var/run/openvswitch/vhost-user3' \
>> --vdev 'net_virtio_user4,mac=00:00:00:00:00:04,path=/var/run/openvswitch/vhost-user4' \
>> -- -i --burst=64 --disable-hw-vlan --txd=2048 --rxd=2048 --auto-start --coremask=0xc0
>> ############################testpmd start script END############################
>>
>> The two containers are connected using ovs-docker in the host. I create four vhost-user ports, two of which are for pktgen, the other of which are for testpmd. I connect the vhost-user ports by adding the
>> routes between them. The start script of ovs is:
>>
>> ############################ovs-dpdk start script############################
>> sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
>> --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
>> --private-key=db:Open_vSwitch,SSL,private_key \
>> --certificate=db:Open_vSwitch,SSL,certificate \
>> --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
>> --pidfile --detach
>>
>> sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
>>
>> sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x02
>>
>> sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x04
>>
>> sudo ovs-vswitchd --pidfile --detach --log-file=/var/log/openvswitch/vhost-ovs-vswitchd.log
>>
>> sudo /usr/local/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start
>> ############################ovs-dpdk start script END############################
>>
>> When I start the pktgen-dpdk to send packets, there are something wrong.
>>
>> First, I set the packet size to 64Bytes, there are some big packets (more than 64Bytes). I set the rate to 10%, 300 packets, and get:
>>
>> ###########################CMD shot#######################
>> Ports 0-1 of 2 <Main Page> Copyright (c) <2010-2017>, Intel Corporation
>> Flags:Port : P--------------:0 P--------------:1
>> Link State : <UP-10000-FD> <UP-10000-FD> ----TotalRate----
>> Pkts/s Max/Rx : 256/0 300/0 556/0
>> Max/Tx : 300/0 300/0 600/0
>> MBits/s Rx/Tx : 0/0 0/0 0/0
>> Broadcast : 0 0
>> Multicast : 0 0
>> 64 Bytes : 1104 1148
>> 65-127 : 64 152
>> 128-255 : 0 0
>> 256-511 : 0 0
>> 512-1023 : 0 0
>> 1024-1518 : 0 0
>> Runts/Jumbos : 0/0 0/0
>> Errors Rx/Tx : 0/0 0/0
>> Total Rx Pkts : 1168 1300
>> Tx Pkts : 1300 1300
>> Rx MBs : 0 0
>> Tx MBs : 1 1
>> ARP/ICMP Pkts : 0/0 0/0
>> :
>> Pattern Type : abcd... abcd...
>> Tx Count/% Rate : 300 /10% 300 /10%
>> PktSize/Tx Burst : 64 / 64 64 / 64
>> Src/Dest Port : 1234 / 5678 1234 / 5678
>> Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
>> 802.1p CoS : 0 0
>> ToS Value: : 0 0
>> - DSCP value : 0 0
>> - IPP value : 0 0
>> Dst IP Address : 192.168.1.1 192.168.0.1
>> Src IP Address : 192.168.0.1/24 192.168.1.1/24
>> Dst MAC Address : 00:00:00:00:00:02 00:00:00:00:00:01
>> Src MAC Address : 00:00:00:00:00:01 00:00:00:00:00:02
>> VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0
>>
>> -- Pktgen Ver: 3.4.9 (DPDK 17.05.2) Powered by DPDK --------------------------
>> ###########################CMD shot END#######################
>>
>> There ought to be no packets greater than 64Bytes, but there exist.
>>
>> Second, I reset the configuration ("rst") and try to start send packets continuously. However, the pktgen works few seconds and stop sending packets, with output:
>>
>>
>> ###########################CMD shot#######################
>> Ports 0-1 of 2 <Main Page> Copyright (c) <2010-2017>, Intel Corporation
>> Flags:Port : P--------------:0 P--------------:1
>> Link State : <UP-10000-FD> <UP-10000-FD> ----TotalRate----
>> Pkts/s Max/Rx : 176288/0 146016/0 308224/0
>> Max/Tx : 1344832/0 767520/0 1535040/0
>> MBits/s Rx/Tx : 0/0 0/0 0/0
>> Broadcast : 0 0
>> Multicast : 0 0
>> 64 Bytes : 15872 15104
>> 65-127 : 50368 61248
>> 128-255 : 44096 62848
>> 256-511 : 51840 93216
>> 512-1023 : 63264 151456
>> 1024-1518 : 51936 126240
>> Runts/Jumbos : 0/0 0/0
>> Errors Rx/Tx : 0/0 0/0
>> Total Rx Pkts : 277376 510112
>> Tx Pkts : 4529248 1162368
>> Rx MBs : 1215 2701
>> Tx MBs : 665276 57380
>> ARP/ICMP Pkts : 0/0 0/0
>> :
>> Pattern Type : abcd... abcd...
>> Tx Count/% Rate : Forever /100% Forever /100%
>> PktSize/Tx Burst : 64 / 64 64 / 64
>> Src/Dest Port : 1234 / 5678 1234 / 5678
>> Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
>> 802.1p CoS : 0 0
>> ToS Value: : 0 0
>> - DSCP value : 0 0
>> - IPP value : 0 0
>> Dst IP Address : 192.168.1.1 192.168.0.1
>> Src IP Address : 192.168.0.1/24 192.168.1.1/24
>> Dst MAC Address : 00:00:00:00:00:02 00:00:00:00:00:01
>> Src MAC Address : 00:00:00:00:00:01 00:00:00:00:00:02
>> VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0
>>
>> -- Pktgen Ver: 3.4.9 (DPDK 17.05.2) Powered by DPDK --------------------------
>> ###########################CMD shot END#######################
>>
>> Pktgen is stuck at this setting. There are a lot of large packets!
>>
>> I check the log of ovs-dpdk and get:
>>
>> #############################ovs-dpdk log#############################
>> 2019-06-23T07:51:39.349Z|00022|netdev_dpdk(pmd8)|WARN|Dropped 5803564 log messages in last 102581 seconds (most recently, 102576 seconds ago) due to excessive rate
>> 2019-06-23T07:51:39.349Z|00023|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
>> 2019-06-23T07:51:39.349Z|00024|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
>> [... the same "Too big size 1524 max_packet_len 1518" warning repeated through log message 00042 ...]
>> #############################ovs-dpdk log END#############################
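The numbers in that warning line up with standard Ethernet framing: the max_packet_len of 1518 that OVS reports for a default-MTU port is the classic maximum frame size (1500-byte payload + 14-byte Ethernet header + 4-byte FCS), so the 1524-byte frames overshoot it by exactly 6 bytes. A minimal sketch of the arithmetic (the names below are illustrative, not OVS code):

```c
#include <assert.h>

enum {
	ETH_PAYLOAD_MTU = 1500, /* default IP MTU */
	ETH_HDR_LEN     = 14,   /* dst MAC + src MAC + EtherType */
	ETH_FCS_LEN     = 4     /* frame check sequence */
};

/* Classic maximum Ethernet frame size; matches the max_packet_len 1518
 * reported by OVS for a default-MTU vhost-user port. */
static int max_frame_len(void)
{
	return ETH_PAYLOAD_MTU + ETH_HDR_LEN + ETH_FCS_LEN;
}
```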
>> Thank you for sharing your ideas.
>>
>>
>> Best wishes,
>> Xia Rui
>
>Regards,
>Keith
>
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dpdk-dev] pktgen-dpdk send too big packet, and stop sending packets after few seconds
2019-06-24 9:22 ` Xia Rui
@ 2019-06-24 13:17 ` Wiles, Keith
0 siblings, 0 replies; 4+ messages in thread
From: Wiles, Keith @ 2019-06-24 13:17 UTC (permalink / raw)
To: Xia Rui; +Cc: dpdk dev community
> On Jun 24, 2019, at 4:22 AM, Xia Rui <xiarui_work@163.com> wrote:
>
> Thank you for your reply. I am trying to use the latest versions of pktgen-dpdk, OVS, and DPDK, but I ran into some problems while setting up the platform.
>
> The version of my platform:
> 1. OVS: 2.11.1
> 2. DPDK: 18.11
> 3. pktgen-dpdk: 3.6.6
>
> With the previous version (3.4.9), pktgen-dpdk works well. This time it reports an error:
>
> ##############################error log##################################
> Pktgen:/> PANIC in pktgen_main_rxtx_loop():
> *** port 0 socket ID 4294967295 has different socket ID for lcore 5 socket ID 0
What this message means is that the port is attached to a different socket than the one lcore 5 is sitting on. In this case lcore 5 appears to be on socket 0, but the port is getting an odd value back from the DPDK call: 0xFFFFFFFF, which is -1 (SOCKET_ID_ANY) printed as unsigned. I believe the problem is that you are using VMs and DPDK is not able to determine the socket ID for the devices.
Look in app/pktgen.c for that message; here is a patch which should fix the problem. Because you are on a different version it may not apply cleanly, but the changes are simple.
diff --git a/app/pktgen.c b/app/pktgen.c
index e051dbe..8fb475c 100644
--- a/app/pktgen.c
+++ b/app/pktgen.c
@@ -1344,8 +1344,10 @@ pktgen_main_rxtx_loop(uint8_t lid)
 	for (idx = 0; idx < rxcnt; idx++) {
 		uint16_t pid = infos[idx]->pid;
 
-		if (rte_eth_dev_socket_id(pid) != (int)rte_socket_id())
-			rte_panic("*** port %u socket ID %u has different socket ID for lcore %u socket ID %d\n",
+		int dev_sock = rte_eth_dev_socket_id(pid);
+
+		if (dev_sock != SOCKET_ID_ANY && dev_sock != (int)rte_socket_id())
+			rte_panic("*** port %u on socket ID %u has different socket ID for lcore %u socket ID %d\n",
 				  pid, rte_eth_dev_socket_id(pid),
 				  rte_lcore_id(), rte_socket_id());
 	}
@@ -1429,8 +1431,9 @@ pktgen_main_tx_loop(uint8_t lid)
 	for (idx = 0; idx < txcnt; idx++) {
 		uint16_t pid = infos[idx]->pid;
 
-		if (rte_eth_dev_socket_id(pid) != (int)rte_socket_id())
+		int dev_sock = rte_eth_dev_socket_id(pid);
+
+		if (dev_sock != SOCKET_ID_ANY && dev_sock != (int)rte_socket_id())
 			rte_panic("*** port %u on socket ID %u has different socket ID for lcore %u on socket ID %d\n",
 				  pid, rte_eth_dev_socket_id(pid),
 				  rte_lcore_id(), rte_socket_id());
@@ -1513,8 +1516,10 @@ pktgen_main_rx_loop(uint8_t lid)
 	for (idx = 0; idx < rxcnt; idx++) {
 		uint16_t pid = infos[idx]->pid;
 
-		if (rte_eth_dev_socket_id(pid) != (int)rte_socket_id())
-			rte_panic("*** port %u socket ID %u has different socket ID for lcore %u socket ID %d\n",
+		int dev_sock = rte_eth_dev_socket_id(pid);
+
+		if (dev_sock != SOCKET_ID_ANY && dev_sock != (int)rte_socket_id())
+			rte_panic("*** port %u on socket ID %u has different socket ID for lcore %u socket ID %d\n",
 				  pid, rte_eth_dev_socket_id(pid),
 				  rte_lcore_id(), rte_socket_id());
 	}
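The logic of the patch can be captured in a self-contained form: rte_eth_dev_socket_id() returns SOCKET_ID_ANY (-1) when the NUMA node of a device cannot be determined, which is exactly the 4294967295 in the panic message when printed as unsigned. The sketch below replaces the DPDK calls with plain ints; socket_mismatch is a hypothetical helper for illustration, not pktgen code:

```c
#include <assert.h>

#define SOCKET_ID_ANY (-1) /* same value DPDK uses for "unknown socket" */

/* Only treat the port/lcore pairing as an error when the device socket
 * is actually known; an unknown socket (vdevs, VMs) is acceptable. */
static int socket_mismatch(int dev_sock, int lcore_sock)
{
	return dev_sock != SOCKET_ID_ANY && dev_sock != lcore_sock;
}
```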
> 7: [/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7fc5d6a0288f]]
> 6: [/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7fc5d6cd96db]]
> 5: [./app/x86_64-native-linuxapp-gcc/pktgen(eal_thread_loop+0x1e1) [0x57c641]]
> 4: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_launch_one_lcore+0xb7) [0x49d247]]
> 3: [./app/x86_64-native-linuxapp-gcc/pktgen() [0x49bef5]]
> 2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc3) [0x46352c]]
> 1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2b) [0x5829fb]]
> Aborted (core dumped)
> ##############################error log END##################################
>
>
> When I look at the OVS log, I find:
> ##############################OVS error log##################################
> 2019-06-24T09:11:13.780Z|00113|dpdk|ERR|VHOST_CONFIG: recvmsg failed
> 2019-06-24T09:11:13.780Z|00114|dpdk|INFO|VHOST_CONFIG: vhost peer closed
> 2019-06-24T09:11:13.781Z|00115|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/vhost-user1' has been removed
> 2019-06-24T09:11:13.781Z|00116|dpdk|ERR|VHOST_CONFIG: recvmsg failed
> 2019-06-24T09:11:13.781Z|00117|dpdk|INFO|VHOST_CONFIG: vhost peer closed
> 2019-06-24T09:11:13.781Z|00118|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/vhost-user2' has been removed
> ##############################OVS error log#############################################
>
> Could you share some hints with me? Thank you for your time.
>
> Best wishes
> Xia Rui
>
>
>
>
>
> At 2019-06-23 21:26:15, "Wiles, Keith" <keith.wiles@intel.com> wrote:
> >
> >
> >> On Jun 23, 2019, at 3:00 AM, Xia Rui <xiarui_work@163.com> wrote:
> >>
> >> Hello, everyone.
> >> I am using pktgen-dpdk and testpmd to test the functionality of ovs-dpdk. The network topology is :
> >> +-------------+----------------------+ host(OVS-DPDK) +-----------------------+-----------------+
> >> | | vhost-user port 1 |<----------------------------------->| vhost-user port 3 | |
> >> | +----------------------+ +-----------------------+ |
> >> | container | pktgen | | testpmd | container |
> >> | +----------------------+ +-------------------+ |
> >> | | vhost-user port 2 |<------------------------------------>| vhost-user port 4 | |
> >> +--------------+---------------------+ +----------------------+----------------+
> >>
> >> The version of my platform:
> >> 1. host OS: ubuntu 16.04.5 LTS
> >> 2. host linux kernel: 4.15.0-15
> >> 3. host OVS: 2.8.0
> >> 4. host DPDK : 17.05.2
> >> 5. container pktgen-dpdk: 3.4.9 + DPDK 17.05.2
> >> 6. container DPDK (testpmd): 17.05.2
> >
> >At one point the virtio drivers were changing the length of the tx packet in the mbuf to carry some type of metadata shared between the two virtio endpoints. Pktgen expected the length to remain the same when the mbuf was returned to the pktgen mempool, which explains the non-64-byte frames. A fix was added to virtio in later releases to restore the length in the mbuf, and I may have added code to pktgen in a later release to set the length back to what I expected when sending the packets. I looked in the latest version of pktgen and found the code in pktgen_setup_cb() which resets the length in the packet. The code should call the rte_pktmbuf_reset() routine to restore the mbuf to the expected state.
> >
> >If OVS can work with a later version of DPDK, I would upgrade to the latest release; if not, I would compare the two versions of the virtio PMD and see if you can find that fix. Failing that, you can patch pktgen to repair the length in the mbuf before it is sent, and you may have to fix the read/write offsets in the mbuf as well. This change will affect pktgen performance, but for virtio that should not be a problem. You can also contact the maintainers of virtio and see if they remember the fix.
> >
> >For the traffic stopping, I do not remember if this fix solved that problem.
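The length fix-up described above can be sketched with a toy structure; struct toy_mbuf and restore_len are illustrative stand-ins for rte_mbuf's pkt_len/data_len fields and for the reset that rte_pktmbuf_reset() (or an explicit fix-up in pktgen) would perform before the buffer is transmitted again:

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for the two rte_mbuf length fields the virtio PMD was
 * observed to modify; a real mbuf carries far more state. */
struct toy_mbuf {
	uint32_t pkt_len;  /* total frame length across all segments */
	uint16_t data_len; /* length of data in this segment */
};

/* Restore the configured packet size before recycling the mbuf for the
 * next send, so a PMD-modified length cannot leak into later frames. */
static void restore_len(struct toy_mbuf *m, uint16_t pkt_size)
{
	m->pkt_len  = pkt_size;
	m->data_len = pkt_size;
}
```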
> >>
> >> There are two docker containers. One is running pktgen-dpdk with:
> >> ############################pktgen-dpdk start script############################
> >> ./app/x86_64-native-linuxapp-gcc/pktgen -c 0x70 --master-lcore 4 -n 1 --file-prefix pktgen --no-pci \
> >> --vdev 'net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1' \
> >> --vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/var/run/openvswitch/vhost-user2' \
> >> -- -T -P -m "5.0,6.1"
> >> ############################pktgen-dpdk start script END############################
> >>
> >> The other is running testpmd with:
> >> ############################testpmd start script############################
> >> testpmd -c 0xE0 -n 1 --socket-mem=1024,0 --file-prefix testpmd --no-pci \
> >> --vdev 'net_virtio_user3,mac=00:00:00:00:00:03,path=/var/run/openvswitch/vhost-user3' \
> >> --vdev 'net_virtio_user4,mac=00:00:00:00:00:04,path=/var/run/openvswitch/vhost-user4' \
> >> -- -i --burst=64 --disable-hw-vlan --txd=2048 --rxd=2048 --auto-start --coremask=0xc0
> >> ############################testpmd start script END############################
> >>
> >> The two containers are connected using ovs-docker on the host. I create four vhost-user ports: two for pktgen and two for testpmd. I connect the vhost-user ports by adding routes between them. The start script for OVS is:
> >>
> >> ############################ovs-dpdk start script############################
> >> sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
> >> --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
> >> --private-key=db:Open_vSwitch,SSL,private_key \
> >> --certificate=db:Open_vSwitch,SSL,certificate \
> >> --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
> >> --pidfile --detach
> >>
> >> sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> >>
> >> sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x02
> >>
> >> sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x04
> >>
> >> sudo ovs-vswitchd --pidfile --detach --log-file=/var/log/openvswitch/vhost-ovs-vswitchd.log
> >>
> >> sudo /usr/local/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start
> >> ############################ovs-dpdk start script END############################
> >>
> >> When I start pktgen-dpdk to send packets, something goes wrong.
> >>
> >> First, I set the packet size to 64 bytes, but some bigger packets (more than 64 bytes) show up. I set the rate to 10% with 300 packets, and get:
> >>
> >> ###########################CMD shot#######################
> >> Ports 0-1 of 2 <Main Page> Copyright (c) <2010-2017>, Intel Corporation
> >> Flags:Port : P--------------:0 P--------------:1
> >> Link State : <UP-10000-FD> <UP-10000-FD> ----TotalRate----
> >> Pkts/s Max/Rx : 256/0 300/0 556/0
> >> Max/Tx : 300/0 300/0 600/0
> >> MBits/s Rx/Tx : 0/0 0/0 0/0
> >> Broadcast : 0 0
> >> Multicast : 0 0
> >> 64 Bytes : 1104 1148
> >> 65-127 : 64 152
> >> 128-255 : 0 0
> >> 256-511 : 0 0
> >> 512-1023 : 0 0
> >> 1024-1518 : 0 0
> >> Runts/Jumbos : 0/0 0/0
> >> Errors Rx/Tx : 0/0 0/0
> >> Total Rx Pkts : 1168 1300
> >> Tx Pkts : 1300 1300
> >> Rx MBs : 0 0
> >> Tx MBs : 1 1
> >> ARP/ICMP Pkts : 0/0 0/0
> >> :
> >> Pattern Type : abcd... abcd...
> >> Tx Count/% Rate : 300 /10% 300 /10%
> >> PktSize/Tx Burst : 64 / 64 64 / 64
> >> Src/Dest Port : 1234 / 5678 1234 / 5678
> >> Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
> >> 802.1p CoS : 0 0
> >> ToS Value: : 0 0
> >> - DSCP value : 0 0
> >> - IPP value : 0 0
> >> Dst IP Address : 192.168.1.1 192.168.0.1
> >> Src IP Address : 192.168.0.1/24 192.168.1.1/24
> >> Dst MAC Address : 00:00:00:00:00:02 00:00:00:00:00:01
> >> Src MAC Address : 00:00:00:00:00:01 00:00:00:00:00:02
> >> VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0
> >>
> >> -- Pktgen Ver: 3.4.9 (DPDK 17.05.2) Powered by DPDK --------------------------
> >> ###########################CMD shot END#######################
> >>
> >> There ought to be no packets greater than 64 bytes, but some exist.
> >>
> >> Second, I reset the configuration ("rst") and try to start sending packets continuously. However, pktgen works for a few seconds and then stops sending packets, with this output:
> >>
> >>
> >> ###########################CMD shot#######################
> >> Ports 0-1 of 2 <Main Page> Copyright (c) <2010-2017>, Intel Corporation
> >> Flags:Port : P--------------:0 P--------------:1
> >> Link State : <UP-10000-FD> <UP-10000-FD> ----TotalRate----
> >> Pkts/s Max/Rx : 176288/0 146016/0 308224/0
> >> Max/Tx : 1344832/0 767520/0 1535040/0
> >> MBits/s Rx/Tx : 0/0 0/0 0/0
> >> Broadcast : 0 0
> >> Multicast : 0 0
> >> 64 Bytes : 15872 15104
> >> 65-127 : 50368 61248
> >> 128-255 : 44096 62848
> >> 256-511 : 51840 93216
> >> 512-1023 : 63264 151456
> >> 1024-1518 : 51936 126240
> >> Runts/Jumbos : 0/0 0/0
> >> Errors Rx/Tx : 0/0 0/0
> >> Total Rx Pkts : 277376 510112
> >> Tx Pkts : 4529248 1162368
> >> Rx MBs : 1215 2701
> >> Tx MBs : 665276 57380
> >> ARP/ICMP Pkts : 0/0 0/0
> >> :
> >> Pattern Type : abcd... abcd...
> >> Tx Count/% Rate : Forever /100% Forever /100%
> >> PktSize/Tx Burst : 64 / 64 64 / 64
> >> Src/Dest Port : 1234 / 5678 1234 / 5678
> >> Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
> >> 802.1p CoS : 0 0
> >> ToS Value: : 0 0
> >> - DSCP value : 0 0
> >> - IPP value : 0 0
> >> Dst IP Address : 192.168.1.1 192.168.0.1
> >> Src IP Address : 192.168.0.1/24 192.168.1.1/24
> >> Dst MAC Address : 00:00:00:00:00:02 00:00:00:00:00:01
> >> Src MAC Address : 00:00:00:00:00:01 00:00:00:00:00:02
> >> VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0
> >>
> >> -- Pktgen Ver: 3.4.9 (DPDK 17.05.2) Powered by DPDK --------------------------
> >> ###########################CMD shot END#######################
> >>
> >> Pktgen is stuck at this setting. There are a lot of large packets!
> >>
> >> I checked the ovs-dpdk log and found:
> >>
> >> #############################ovs-dpdk log#############################
> >> 2019-06-23T07:51:39.349Z|00022|netdev_dpdk(pmd8)|WARN|Dropped 5803564 log messages in last 102581 seconds (most recently, 102576 seconds ago) due to excessive rate
> >> 2019-06-23T07:51:39.349Z|00023|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
> >> 2019-06-23T07:51:39.349Z|00024|netdev_dpdk(pmd8)|WARN|vhost-user3: Too big size 1524 max_packet_len 1518
> >> [... the same "Too big size 1524 max_packet_len 1518" warning repeated through log message 00042 ...]
> >> #############################ovs-dpdk log END#############################
> >> Thank you for sharing your ideas.
> >>
> >>
> >> Best wishes,
> >> Xia Rui
> >
> >Regards,
> >Keith
> >
>
>
>
>
Regards,
Keith
^ permalink raw reply [flat|nested] 4+ messages in thread
end of thread, other threads:[~2019-06-24 13:17 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-23 8:00 [dpdk-dev] pktgen-dpdk send too big packet, and stop sending packets after few seconds Xia Rui
2019-06-23 13:26 ` Wiles, Keith
2019-06-24 9:22 ` Xia Rui
2019-06-24 13:17 ` Wiles, Keith