DPDK usage discussions
* [dpdk-users] Issue with Pktgen and OVS-DPDK
@ 2017-05-02 12:20 Gabriel Ionescu
  2017-05-02 20:24 ` Wiles, Keith
  0 siblings, 1 reply; 17+ messages in thread
From: Gabriel Ionescu @ 2017-05-02 12:20 UTC (permalink / raw)
  To: users

Hi,

I am using DPDK-Pktgen with an OVS bridge that has two vHost-user ports, and I am seeing an issue where Pktgen does not appear to generate packets correctly.

For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.

The OVS bridge is created with:
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser ofport_request=2
ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
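The bridge and flows can be verified with the standard OVS commands (a quick check, not part of the original report):
ovs-vsctl show
ovs-ofctl dump-flows ovsbr0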

DPDK-Pktgen is launched with the following command so that packets generated through port 0 are received by port 1 and vice versa:
pktgen -c 0xF --file-prefix pktgen --no-pci \
                                --vdev=virtio_user0,path=/tmp/vhost-user1 \
                                --vdev=virtio_user1,path=/tmp/vhost-user2 \
                                -- -P -m "[0:1].0, [2:3].1"

In Pktgen, the default settings are used for both ports:

- Tx Count: Forever
- Rate: 100%
- PktSize: 64
- Tx Burst: 32

Whenever I start generating packets on one of the ports (in this example port 0, by running start 0), the OVS log fills with warnings similar to:
2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956 log messages in last 49 seconds (most recently, 41 seconds ago) due to excessive rate
2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988 log messages in last 11 seconds (most recently, 0 seconds ago) due to excessive rate
2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 57564 max_packet_len 1518
Port 1 does not receive any packets.

When running Pktgen with the --socket-mem option (e.g. --socket-mem 512), the behavior is different, though OVS throws the same warnings: port 1 receives some packets, but with varying sizes, even though they are generated on port 0 with a 64-byte size:
  Flags:Port      :   P--------------:0   P--------------:1
Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
Pkts/s Max/Rx     :                 0/0             35136/0               35136/0
       Max/Tx     :        238144/25504                 0/0          238144/25504
MBits/s Rx/Tx     :             0/13270                 0/0               0/13270
Broadcast         :                   0                   0
Multicast         :                   0                   0
  64 Bytes        :                   0                 288
  65-127          :                   0                1440
  128-255         :                   0                2880
  256-511         :                   0                6336
  512-1023        :                   0               12096
  1024-1518       :                   0               12096
Runts/Jumbos      :                 0/0                 0/0
Errors Rx/Tx      :                 0/0                 0/0
Total Rx Pkts     :                   0               35136
      Tx Pkts     :             1571584                   0
      Rx MBs      :                   0                 227
      Tx MBs      :              412777                   0
ARP/ICMP Pkts     :                 0/0                 0/0
                  :
Pattern Type      :             abcd...             abcd...
Tx Count/% Rate   :       Forever /100%       Forever /100%
PktSize/Tx Burst  :           64 /   32           64 /   32
Src/Dest Port     :         1234 / 5678         1234 / 5678
Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
Dst  IP Address   :         192.168.1.1         192.168.0.1
Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0

-- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK -------------------

If packets are generated from an external source and testpmd is used to forward traffic between the two vHost-user ports, the warnings are not thrown by the OVS bridge.
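For reference, the testpmd instance was attached to the same two sockets with an invocation of roughly this shape (a sketch; not the exact command line used):
testpmd -l 0-2 --file-prefix testpmd --no-pci \
                                --vdev=virtio_user0,path=/tmp/vhost-user1 \
                                --vdev=virtio_user1,path=/tmp/vhost-user2 \
                                -- -i --forward-mode=io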

Should this setup work?
Is this an issue or am I setting something up wrong?

Thank you,
Gabriel Ionescu

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2017-05-02 12:20 [dpdk-users] Issue with Pktgen and OVS-DPDK Gabriel Ionescu
@ 2017-05-02 20:24 ` Wiles, Keith
  2018-01-09  6:38   ` Hu, Xuekun
  0 siblings, 1 reply; 17+ messages in thread
From: Wiles, Keith @ 2017-05-02 20:24 UTC (permalink / raw)
  To: Gabriel Ionescu; +Cc: users

Comments inline:
> On May 2, 2017, at 8:20 AM, Gabriel Ionescu <Gabriel.Ionescu@enea.com> wrote:
> DPDK-Pktgen is launched with the following command so that packets generated through port 0 are received by port 1 and vice versa:
> pktgen -c 0xF --file-prefix pktgen --no-pci \
>                                --vdev=virtio_user0,path=/tmp/vhost-user1 \
>                                --vdev=virtio_user1,path=/tmp/vhost-user2 \
>                                -- -P -m "[0:1].0, [2:3].1"

The above command line is wrong, as Pktgen needs the first lcore for display output and timers. I would not use -c 0xF but -l 1-5 instead, as it is a lot easier to understand IMO. With -l 1-5 you are using 5 lcores (skipping lcore 0 in a 6-lcore VM): one for Pktgen and 4 for the two ports, i.e. -m [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen to use. I am surprised you did not see some performance or lockup problem; I really need to add a test for these types of problems :-( You can also give the VM just 5 lcores, in which case Pktgen shares lcore 0 with Linux, using the -l 0-4 option.
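With -l 1-5, the launch line would look something like this (an untested sketch, keeping your vdev arguments):
pktgen -l 1-5 --file-prefix pktgen --no-pci \
                                --vdev=virtio_user0,path=/tmp/vhost-user1 \
                                --vdev=virtio_user1,path=/tmp/vhost-user2 \
                                -- -P -m "[2:3].0" -m "[4:5].1"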

When requested to send 64-byte frames, Pktgen sends a 60-byte payload + 4-byte Frame Checksum (FCS). This does work, so the problem must be in how vhost-user is testing for the packet size. In the mbuf you have the payload size and the buffer size. The buffer size could be 1524, but the payload or frame size will be 60 bytes, as the 4-byte FCS is appended to the frame by the hardware. It seems to me that vhost-user is not looking at the correct struct rte_mbuf member variable in its test.
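To illustrate what I mean, a minimal sketch using the public struct rte_mbuf fields (not the actual vhost-user code):

#include <rte_mbuf.h>

/* A size check should look at the frame length, not the buffer length:
 * m->buf_len is the size of the attached data buffer (e.g. 1524), while
 * rte_pktmbuf_pkt_len(m) is the frame length (e.g. 60 bytes for a
 * 64-byte frame before the FCS is appended by the hardware). */
static inline int
frame_fits(const struct rte_mbuf *m, uint32_t max_packet_len)
{
	return rte_pktmbuf_pkt_len(m) <= max_packet_len;
}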

Regards,
Keith


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2017-05-02 20:24 ` Wiles, Keith
@ 2018-01-09  6:38   ` Hu, Xuekun
  2018-01-09 13:00     ` Chen, Junjie J
  0 siblings, 1 reply; 17+ messages in thread
From: Hu, Xuekun @ 2018-01-09  6:38 UTC (permalink / raw)
  To: Wiles, Keith, Gabriel Ionescu, Tan, Jianfeng; +Cc: users

Hi, Keith

Any updates on this issue? We see similar behavior: ovs-dpdk reports received packet sizes that grow in 12-byte increments until they exceed 1518, at which point pktgen stops sending packets, even though we only ask pktgen to generate 64B packets. It only happens with two vhost-user ports on the same server. If pktgen is running on another server, there is no such issue.

We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.

We also found that qemu 2.8.1 and qemu 2.10 have this problem, while qemu 2.5 does not. So it seems like a compatibility issue between pktgen/dpdk/qemu?

Thanks. 
Thx, Xuekun


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2018-01-09  6:38   ` Hu, Xuekun
@ 2018-01-09 13:00     ` Chen, Junjie J
  2018-01-09 13:43       ` [dpdk-users] Re: " wang.yong19
  2018-01-09 14:04       ` [dpdk-users] " Wiles, Keith
  0 siblings, 2 replies; 17+ messages in thread
From: Chen, Junjie J @ 2018-01-09 13:00 UTC (permalink / raw)
  To: Hu, Xuekun, Wiles, Keith, Gabriel Ionescu, Tan, Jianfeng; +Cc: users

Hi,
There are two defects that may cause this issue:

1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low performance in VM virtio pmd mode
diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
index 759f95d..93065f6 100644
--- a/lib/common/mbuf.h
+++ b/lib/common/mbuf.h
@@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
 	m->nb_segs = 1;
 	m->port = 0xff;
 
+	m->data_len = m->pkt_len;
 	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
 		RTE_PKTMBUF_HEADROOM : m->buf_len;
 }

2) In virtio_rxtx.c, see commit f1216c1eca5a5 ("net/virtio: fix Tx packet length stats").

You could apply both of these patches and try again.
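To illustrate defect 1), here is a rough sketch (not the actual vhost code) of why a stale data_len produces the oversized lengths in the OVS warnings, assuming the receive path sums per-segment data_len values:

#include <rte_mbuf.h>

/* If pktmbuf_reset() leaves data_len stale from a previous, larger use
 * of the mbuf, summing segment lengths yields a bogus frame size (e.g.
 * 1524 or 57564), which triggers "Too big size ... max_packet_len 1518"
 * and the packet is dropped. */
static int
check_frame_len(const struct rte_mbuf *m, uint32_t max_packet_len)
{
	uint32_t frame_len = 0;
	const struct rte_mbuf *seg;

	for (seg = m; seg != NULL; seg = seg->next)
		frame_len += seg->data_len;

	return frame_len <= max_packet_len ? 0 : -1;
}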

Cheers
JJ


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-users] Re: Re: Issue with Pktgen and OVS-DPDK
  2018-01-09 13:00     ` Chen, Junjie J
@ 2018-01-09 13:43       ` wang.yong19
  2018-01-09 14:04       ` [dpdk-users] " Wiles, Keith
  1 sibling, 0 replies; 17+ messages in thread
From: wang.yong19 @ 2018-01-09 13:43 UTC (permalink / raw)
  To: junjie.j.chen
  Cc: xuekun.hu, keith.wiles, Gabriel.Ionescu, jianfeng.tan, users

Hi,
We have applied the two patches you listed. However, the problem remains.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2018-01-09 13:00     ` Chen, Junjie J
  2018-01-09 13:43       ` [dpdk-users] Re: " wang.yong19
@ 2018-01-09 14:04       ` Wiles, Keith
  2018-01-09 14:29         ` [dpdk-users] Re: " wang.yong19
  1 sibling, 1 reply; 17+ messages in thread
From: Wiles, Keith @ 2018-01-09 14:04 UTC (permalink / raw)
  To: Chen, Junjie J; +Cc: Hu, Xuekun, Gabriel Ionescu, Tan, Jianfeng, users



> On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
> 
> Hi,
> There are two defects that may cause this issue:
> 
> 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low performance in VM virtio pmd mode
> diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> index 759f95d..93065f6 100644
> --- a/lib/common/mbuf.h
> +++ b/lib/common/mbuf.h
> @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
>  	m->nb_segs = 1;
>  	m->port = 0xff;
>  
> +	m->data_len = m->pkt_len;
>  	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
>  		RTE_PKTMBUF_HEADROOM : m->buf_len;
>  }

This patch is already included in Pktgen 3.4.6.
> 
> 2) In virtio_rxtx.c, see commit f1216c1eca5a5 ("net/virtio: fix Tx packet length stats").
> 
> You could apply both of these patches and try again.
> 
> Cheers
> JJ

Regards,
Keith


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-users] Re: Re: Issue with Pktgen and OVS-DPDK
  2018-01-09 14:04       ` [dpdk-users] " Wiles, Keith
@ 2018-01-09 14:29         ` wang.yong19
  2018-01-10  1:32           ` [dpdk-users] " Hu, Xuekun
  0 siblings, 1 reply; 17+ messages in thread
From: wang.yong19 @ 2018-01-09 14:29 UTC (permalink / raw)
  To: keith.wiles
  Cc: junjie.j.chen, xuekun.hu, Gabriel.Ionescu, jianfeng.tan, users

Hi,
With pktgen-3.0.10 + dpdk-17.02.1 plus the two patches below applied, the problem is resolved.
But with pktgen-3.4.6 + dpdk-17.11 (which already include the two patches), the problem remains.
It seems there is still something wrong with pktgen-3.4.6 and dpdk-17.11.



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2018-01-09 14:29         ` [dpdk-users] Re: " wang.yong19
@ 2018-01-10  1:32           ` Hu, Xuekun
  2018-01-10  1:46             ` Chen, Junjie J
  0 siblings, 1 reply; 17+ messages in thread
From: Hu, Xuekun @ 2018-01-10  1:32 UTC (permalink / raw)
  To: wang.yong19, Wiles, Keith
  Cc: Chen, Junjie J, Gabriel.Ionescu, Tan, Jianfeng, users

Maybe the newer qemu versions (starting from 2.8) introduced some new features that break pktgen/dpdk compatibility?

-----Original Message-----
From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn] 
Sent: Tuesday, January 09, 2018 10:30 PM
To: Wiles, Keith <keith.wiles@intel.com>
Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng <jianfeng.tan@intel.com>; users@dpdk.org
Subject: 答复: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK

Hi,
We use pktgen-3.0.10 + dpdk-17.02.1 applied the two patches below, the problem is resolved.
But when we use pktgen-3.4.6 + dpdk-17.11(the two patches below are included), the problem remains.
It seems that there are still something wrong with pktgen-3.4.6 and dpdk-17.11.


------------------origin------------------
发件人: <keith.wiles@intel.com>;
收件人: <junjie.j.chen@intel.com>;
抄送人: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
日 期 :2018年01月09日 22:04
主 题 :Re: [dpdk-users] Issue with Pktgen and OVS-DPDK


> On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
>
> Hi
> There are two defects may cause this issue:
>
> 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> performance in VM virtio pmd mode
>
> diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> index 759f95d..93065f6 100644
> --- a/lib/common/mbuf.h
> +++ b/lib/common/mbuf.h
> @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
>  m->nb_segs = 1;
>  m->port = 0xff;
>
> +    m->data_len = m->pkt_len;
>  m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
>  RTE_PKTMBUF_HEADROOM : m->buf_len;
>  }

This patch is in Pktgen 3.4.6
>
> 2) In virtio_rxtx.c, please see commit f1216c1eca5a5 ("net/virtio: fix
> Tx packet length stats").
>
> You could apply both of these patches and try it.
>
> Cheers
> JJ
>
>
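
To make the failure mode concrete, a minimal sketch of the single-segment
invariant the one-line fix restores (field names are from rte_mbuf.h; that
the virtio/vhost Tx path takes the frame length from data_len is the
working assumption here, not something verified in this thread):

#include <rte_mbuf.h>

/* After a reset, data_len must match pkt_len for a single-segment mbuf;
 * otherwise the receiver sees whatever length was left over from the
 * mbuf's previous use (hence sizes like 1524 or 57564 in the OVS logs). */
static inline void
pktmbuf_reset_sketch(struct rte_mbuf *m)
{
	m->nb_segs  = 1;
	m->port     = 0xff;
	m->data_len = m->pkt_len;	/* the line added by the patch */
	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
		RTE_PKTMBUF_HEADROOM : m->buf_len;
}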
>> -----Original Message-----
>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
>> Sent: Tuesday, January 9, 2018 2:38 PM
>> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu 
>> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
>> Cc: users@dpdk.org
>> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>>
>> Hi, Keith
>>
>> Any updates on this issue? We see similar behavior: ovs-dpdk reports
>> received packets whose size grows in 12-byte increments until it
>> exceeds 1518, at which point pktgen stops sending packets, even though
>> we only ask pktgen to generate 64B packets. And it only happens with two
>> vhost-user ports in the same server. If pktgen is running in another server, then no such issue.
>>
>> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
>>
>> We also found that qemu 2.8.1 and qemu 2.10 have this problem, while
>> qemu 2.5 does not. So it seems like a compatibility issue
>> between pktgen, dpdk and qemu?
>>
>> Thanks.
>> Thx, Xuekun
>>
>> -----Original Message-----
>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
>> Sent: Wednesday, May 03, 2017 4:24 AM
>> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
>> Cc: users@dpdk.org
>> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>>
>> Comments inline:
>>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu 
>>> <Gabriel.Ionescu@enea.com>
>> wrote:
>>>
>>> Hi,
>>>
>>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user 
>>> ports
>> and I am seeing an issue where Pktgen does not look like it generates 
>> packets correctly.
>>>
>>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
>>>
>>> The OVS bridge is created with:
>>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev 
>>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 
>>> type=dpdkvhostuser ofport_request=1 ovs-vsctl add-port ovsbr0
>>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
>>> ofport_request=2 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2 
>>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
>>>
>>> DPDK-Pktgen is launched with the following command so that packets
>> generated through port 0 are received by port 1 and vice versa:
>>> pktgen -c 0xF --file-prefix pktgen --no-pci \
>>>
>> --vdev=virtio_user0,path=/tmp/vhost-user1 \
>>>
>> --vdev=virtio_user1,path=/tmp/vhost-user2 \
>>>                               -- -P -m "[0:1].0, [2:3].1"
>>
>> The above command line is wrong, as Pktgen takes the first lcore for
>> display output and timers. I would not use -c 0xF, but -l 1-5 instead,
>> as it is a lot easier to understand IMO. With -l 1-5 you are using 5
>> lcores (skipping lcore 0 in a 6-lcore VM): one for Pktgen and 4 for
>> the two ports, i.e. -m [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen
>> to use. I am surprised you did not see some performance or lockup
>> problem; I really need to add a test for these types of problem :-(
>> You can also give the VM just 5 lcores, in which case pktgen shares
>> lcore 0 with Linux using the -l 0-4 option.
>>
>> When Pktgen is requested to send 64-byte frames, it sends a 60-byte
>> payload + a 4-byte Frame Checksum. This does work, so it must be in how
>> vhost-user is testing for the packet size. In the mbuf you have the
>> payload size and the buffer size. The buffer size could be 1524, but
>> the payload or frame size will be 60 bytes, as the 4-byte FCS is
>> appended to the frame by the hardware. It seems to me that vhost-user
>> is not looking at the correct struct rte_mbuf member variable in its testing.
>>
>>>
>>> In Pktgen, the default settings are used for both ports:
>>>
>>> -          Tx Count: Forever
>>>
>>> -          Rate: 100%
>>>
>>> -          PktSize: 64
>>>
>>> -          Tx Burst: 32
>>>
>>> Whenever I start generating packets through one of the ports (in 
>>> this
>> example port 0 by running start 0), the OVS logs throw warnings similar to:
>>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped
>> 1194956
>>> log messages in last 49 seconds (most recently, 41 seconds ago) due 
>>> to excessive rate
>>>
>> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
>> Too
>>> big size 1524 max_packet_len 1518
>>>
>> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
>> Too
>>> big size 1524 max_packet_len 1518
>>>
>> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
>> Too
>>> big size 1524 max_packet_len 1518
>>>
>> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
>> Too
>>> big size 1524 max_packet_len 1518
>>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped
>> 1344988
>>> log messages in last 11 seconds (most recently, 0 seconds ago) due 
>>> to excessive rate
>>>
>> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
>> Too
>>> big size 57564 max_packet_len 1518 Port 1 does not receive any packets.
>>>
>>> When running Pktgen with the -socket-mem option (e.g. --socket-mem 
>>> 512),
>> the behavior is different, but with the same warnings thrown by OVS: 
>> port 1 receives some packets, but with different sizes, even though
>> they are generated on port 0 with a 64B size:
>>> Flags:Port      :   P--------------:0   P--------------:1
>>> Link State        :       <UP-10000-FD>       <UP-10000-FD>
>> ----TotalRate----
>>> Pkts/s Max/Rx     :                 0/0             35136/0
>> 35136/0
>>>      Max/Tx     :        238144/25504                 0/0
>> 238144/25504
>>> MBits/s Rx/Tx     :             0/13270                 0/0
>> 0/13270
>>> Broadcast         :                   0                   0
>>> Multicast         :                   0                   0
>>> 64 Bytes        :                   0                 288
>>> 65-127          :                   0                1440
>>> 128-255         :                   0                2880
>>> 256-511         :                   0                6336
>>> 512-1023        :                   0               12096
>>> 1024-1518       :                   0               12096
>>> Runts/Jumbos      :                 0/0                 0/0
>>> Errors Rx/Tx      :                 0/0                 0/0
>>> Total Rx Pkts     :                   0               35136
>>>     Tx Pkts     :             1571584                   0
>>>     Rx MBs      :                   0                 227
>>>     Tx MBs      :              412777                   0
>>> ARP/ICMP Pkts     :                 0/0                 0/0
>>>                 :
>>> Pattern Type      :             abcd...             abcd...
>>> Tx Count/% Rate   :       Forever /100%       Forever /100%
>>> PktSize/Tx Burst  :           64 /   32           64 /   32
>>> Src/Dest Port     :         1234 / 5678         1234 / 5678
>>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
>>> Dst  IP Address   :         192.168.1.1         192.168.0.1
>>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
>>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
>>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
>>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
>>>
>>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
>>> -------------------
>>>
>>> If packets are generated from an external source and testpmd is used 
>>> to
>> forward traffic between the two vHost-user ports, the warnings are 
>> not thrown by the OVS bridge.
>>>
>>> Should this setup work?
>>> Is this an issue or am I setting something up wrong?
>>>
>>> Thank you,
>>> Gabriel Ionescu
>>
>> Regards,
>> Keith
>

Regards,
Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2018-01-10  1:32           ` [dpdk-users] " Hu, Xuekun
@ 2018-01-10  1:46             ` Chen, Junjie J
  2018-01-10  9:49               ` [dpdk-users] Re: RE: " wang.yong19
                                 ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Chen, Junjie J @ 2018-01-10  1:46 UTC (permalink / raw)
  To: Hu, Xuekun, wang.yong19, Wiles, Keith
  Cc: Gabriel.Ionescu, Tan, Jianfeng, users

Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds a flag (VIRTIO_F_VERSION_1) to the device features.

Actually, qemu uses disable-legacy=on,disable-modern=off to support virtio 1.0, and disable-legacy=off,disable-modern=on to support virtio 0.9. So you can use virtio 0.9 on qemu 2.7+ to work around this.
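
For illustration, a minimal sketch of the qemu arguments that select
virtio 0.9 for a vhost-user port (the chardev id, netdev id and socket
path below are placeholders):

    -chardev socket,id=char1,path=/tmp/vhost-user1 \
    -netdev type=vhost-user,id=net1,chardev=char1 \
    -device virtio-net-pci,netdev=net1,disable-legacy=off,disable-modern=on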

Cheers
JJ


> -----Original Message-----
> From: Hu, Xuekun
> Sent: Wednesday, January 10, 2018 9:32 AM
> To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Gabriel.Ionescu@enea.com; Tan,
> Jianfeng <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> Maybe the new qemu (starting from 2.8) introduced some new features that
> break the pktgen and dpdk compatibility?
> 
> -----Original Message-----
> From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> Sent: Tuesday, January 09, 2018 10:30 PM
> To: Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Re: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> Hi,
> With pktgen-3.0.10 + dpdk-17.02.1 and the two patches below applied,
> the problem is resolved.
> But when we use pktgen-3.4.6 + dpdk-17.11 (which already includes the
> two patches below), the problem remains.
> It seems that something is still wrong with pktgen-3.4.6 and
> dpdk-17.11.
> 
> 
> ------------------origin------------------
> From: <keith.wiles@intel.com>;
> To: <junjie.j.chen@intel.com>;
> Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> <jianfeng.tan@intel.com>; <users@dpdk.org>;
> Date: 2018-01-09 22:04
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> 
> > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
> >
> > Hi,
> > There are two defects that may cause this issue:
> >
> > 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> > performance in VM virtio pmd mode
> >
> > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > index 759f95d..93065f6 100644
> > --- a/lib/common/mbuf.h
> > +++ b/lib/common/mbuf.h
> > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> >  m->nb_segs = 1;
> >  m->port = 0xff;
> >
> > +    m->data_len = m->pkt_len;
> >  m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> >  RTE_PKTMBUF_HEADROOM : m->buf_len;
> >  }
> 
> This patch is in Pktgen 3.4.6
> >
> > 2) In virtio_rxtx.c, please see commit f1216c1eca5a5 ("net/virtio: fix
> > Tx packet length stats").
> >
> > You could apply both of these patches and try it.
> >
> > Cheers
> > JJ
> >
> >
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
> >> Sent: Tuesday, January 9, 2018 2:38 PM
> >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Hi, Keith
> >>
> >> Any updates on this issue? We see similar behavior: ovs-dpdk
> >> reports received packets whose size grows in 12-byte increments
> >> until it exceeds 1518, at which point pktgen stops sending packets,
> >> even though we only ask pktgen to generate 64B packets. And it only
> >> happens with two vhost-user ports in the same server. If pktgen is running in another server,
> then no such issue.
> >>
> >> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
> >>
> >> We also found that qemu 2.8.1 and qemu 2.10 have this problem,
> >> while qemu 2.5 does not. So it seems like a compatibility issue
> >> between pktgen, dpdk and qemu?
> >>
> >> Thanks.
> >> Thx, Xuekun
> >>
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
> >> Sent: Wednesday, May 03, 2017 4:24 AM
> >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Comments inline:
> >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> >>> <Gabriel.Ionescu@enea.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> >>> ports
> >> and I am seeing an issue where Pktgen does not look like it generates
> >> packets correctly.
> >>>
> >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> >>>
> >>> The OVS bridge is created with:
> >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1
> >>> type=dpdkvhostuser ofport_request=1 ovs-vsctl add-port ovsbr0
> >>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
> >>> ofport_request=2 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> >>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> >>>
> >>> DPDK-Pktgen is launched with the following command so that packets
> >> generated through port 0 are received by port 1 and vice versa:
> >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> >>>
> >> --vdev=virtio_user0,path=/tmp/vhost-user1 \
> >>>
> >> --vdev=virtio_user1,path=/tmp/vhost-user2 \
> >>>                               -- -P -m "[0:1].0, [2:3].1"
> >>
> >> The above command line is wrong, as Pktgen takes the first lcore
> >> for display output and timers. I would not use -c 0xF, but -l 1-5
> >> instead, as it is a lot easier to understand IMO. With -l 1-5 you
> >> are using 5 lcores (skipping lcore 0 in a 6-lcore VM): one for
> >> Pktgen and 4 for the two ports, i.e. -m
> >> [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen to use. I am
> >> surprised you did not see some performance or lockup problem; I
> >> really need to add a test for these types of problem :-( You can
> >> also give the VM just 5 lcores, in which case pktgen shares lcore 0 with Linux using
> the -l 0-4 option.
> >>
> >> When Pktgen is requested to send 64-byte frames, it sends a 60-byte
> >> payload + a 4-byte Frame Checksum. This does work, so it must be in
> >> how vhost-user is testing for the packet size. In the mbuf you have
> >> the payload size and the buffer size. The buffer size could be 1524,
> >> but the payload or frame size will be 60 bytes, as the 4-byte FCS is
> >> appended to the frame by the hardware. It seems to me that vhost-user
> >> is not looking at the correct struct rte_mbuf member variable in its testing.
> >>
> >>>
> >>> In Pktgen, the default settings are used for both ports:
> >>>
> >>> -          Tx Count: Forever
> >>>
> >>> -          Rate: 100%
> >>>
> >>> -          PktSize: 64
> >>>
> >>> -          Tx Burst: 32
> >>>
> >>> Whenever I start generating packets through one of the ports (in
> >>> this
> >> example port 0 by running start 0), the OVS logs throw warnings similar to:
> >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1194956
> >>> log messages in last 49 seconds (most recently, 41 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1344988
> >>> log messages in last 11 seconds (most recently, 0 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 57564 max_packet_len 1518 Port 1 does not receive any packets.
> >>>
> >>> When running Pktgen with the -socket-mem option (e.g. --socket-mem
> >>> 512),
> >> the behavior is different, but with the same warnings thrown by OVS:
> >> port 1 receives some packets, but with different sizes, even though
> >> they are generated on port 0 with a 64B size:
> >>> Flags:Port      :   P--------------:0   P--------------:1
> >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>
> >> ----TotalRate----
> >>> Pkts/s Max/Rx     :                 0/0             35136/0
> >> 35136/0
> >>>      Max/Tx     :        238144/25504                 0/0
> >> 238144/25504
> >>> MBits/s Rx/Tx     :             0/13270                 0/0
> >> 0/13270
> >>> Broadcast         :                   0                   0
> >>> Multicast         :                   0                   0
> >>> 64 Bytes        :                   0                 288
> >>> 65-127          :                   0                1440
> >>> 128-255         :                   0                2880
> >>> 256-511         :                   0                6336
> >>> 512-1023        :                   0               12096
> >>> 1024-1518       :                   0               12096
> >>> Runts/Jumbos      :                 0/0                 0/0
> >>> Errors Rx/Tx      :                 0/0                 0/0
> >>> Total Rx Pkts     :                   0               35136
> >>>     Tx Pkts     :             1571584                   0
> >>>     Rx MBs      :                   0                 227
> >>>     Tx MBs      :              412777                   0
> >>> ARP/ICMP Pkts     :                 0/0                 0/0
> >>>                 :
> >>> Pattern Type      :             abcd...             abcd...
> >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> >>>
> >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> >>> -------------------
> >>>
> >>> If packets are generated from an external source and testpmd is used
> >>> to
> >> forward traffic between the two vHost-user ports, the warnings are
> >> not thrown by the OVS bridge.
> >>>
> >>> Should this setup work?
> >>> Is this an issue or am I setting something up wrong?
> >>>
> >>> Thank you,
> >>> Gabriel Ionescu
> >>
> >> Regards,
> >> Keith
> >
> 
> Regards,
> Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-users] Re: RE: Re: Issue with Pktgen and OVS-DPDK
  2018-01-10  1:46             ` Chen, Junjie J
@ 2018-01-10  9:49               ` wang.yong19
  2018-01-10 10:15               ` wang.yong19
  2018-01-10 11:44               ` [dpdk-users] Re: " qin.chunhua
  2 siblings, 0 replies; 17+ messages in thread
From: wang.yong19 @ 2018-01-10  9:49 UTC (permalink / raw)
  To: junjie.j.chen
  Cc: xuekun.hu, keith.wiles, Gabriel.Ionescu, jianfeng.tan, users

Hi,
Thanks a lot for your advice.  
We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches below applied, and the problem was resolved.
Now we have met a new problem in this setup. We set the MAC of the virtio port before starting the flow.
At first, everything is OK. Then we stop the flow and restart the same flow without any other modifications.
We found that the source MAC of the flow was different from what we had set on the virtio port.
Moreover, the source MAC was different every time we restarted the flow.
What is going on? Do you know of any patches that fix this problem, given that we cannot change the virtio version?
We are looking forward to receiving your reply. Thank you!
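
As a first step in narrowing this down, a small diagnostic sketch (our own
illustration against the DPDK 17.11 ethdev API, not an existing pktgen
command) that prints the MAC a port currently reports; calling it before
the first start and after a restart would show whether the address changes
underneath the application:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Print the MAC address the PMD reports for a port. */
static void
dump_port_mac(uint16_t port_id)
{
	struct ether_addr mac;
	char buf[ETHER_ADDR_FMT_SIZE];

	rte_eth_macaddr_get(port_id, &mac);
	ether_format_addr(buf, sizeof(buf), &mac);
	printf("port %u: mac %s\n", port_id, buf);
}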


------------------origin------------------
From: <junjie.j.chen@intel.com>;
To: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>;
Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
Date: 2018-01-10 09:47
Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds a flag (VIRTIO_F_VERSION_1) to the device features.

Actually, qemu uses disable-legacy=on,disable-modern=off to support virtio 1.0, and disable-legacy=off,disable-modern=on to support virtio 0.9. So you can use virtio 0.9 on qemu 2.7+ to work around this.

Cheers
JJ


> -----Original Message-----
> From: Hu, Xuekun
> Sent: Wednesday, January 10, 2018 9:32 AM
> To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Gabriel.Ionescu@enea.com; Tan,
> Jianfeng <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Maybe the new qemu (starting from 2.8) introduced some new features that
> break the pktgen and dpdk compatibility?
>
> -----Original Message-----
> From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> Sent: Tuesday, January 09, 2018 10:30 PM
> To: Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Re: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Hi,
> With pktgen-3.0.10 + dpdk-17.02.1 and the two patches below applied,
> the problem is resolved.
> But when we use pktgen-3.4.6 + dpdk-17.11 (which already includes the
> two patches below), the problem remains.
> It seems that something is still wrong with pktgen-3.4.6 and
> dpdk-17.11.
>
>
> ------------------origin------------------
> From: <keith.wiles@intel.com>;
> To: <junjie.j.chen@intel.com>;
> Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> <jianfeng.tan@intel.com>; <users@dpdk.org>;
> Date: 2018-01-09 22:04
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
>
> > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
> >
> > Hi,
> > There are two defects that may cause this issue:
> >
> > 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> > performance in VM virtio pmd mode
> >
> > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > index 759f95d..93065f6 100644
> > --- a/lib/common/mbuf.h
> > +++ b/lib/common/mbuf.h
> > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> >  m->nb_segs = 1;
> >  m->port = 0xff;
> >
> > +    m->data_len = m->pkt_len;
> >  m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> >  RTE_PKTMBUF_HEADROOM : m->buf_len;
> >  }
>
> This patch is in Pktgen 3.4.6
> >
> > 2) In virtio_rxtx.c, please see commit f1216c1eca5a5 ("net/virtio: fix
> > Tx packet length stats").
> >
> > You could apply both of these patches and try it.
> >
> > Cheers
> > JJ
> >
> >
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
> >> Sent: Tuesday, January 9, 2018 2:38 PM
> >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Hi, Keith
> >>
> >> Any updates on this issue? We see similar behavior: ovs-dpdk
> >> reports received packets whose size grows in 12-byte increments
> >> until it exceeds 1518, at which point pktgen stops sending packets,
> >> even though we only ask pktgen to generate 64B packets. And it only
> >> happens with two vhost-user ports in the same server. If pktgen is running in another server,
> then no such issue.
> >>
> >> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
> >>
> >> We also found that qemu 2.8.1 and qemu 2.10 have this problem,
> >> while qemu 2.5 does not. So it seems like a compatibility issue
> >> between pktgen, dpdk and qemu?
> >>
> >> Thanks.
> >> Thx, Xuekun
> >>
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
> >> Sent: Wednesday, May 03, 2017 4:24 AM
> >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Comments inline:
> >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> >>> <Gabriel.Ionescu@enea.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> >>> ports
> >> and I am seeing an issue where Pktgen does not look like it generates
> >> packets correctly.
> >>>
> >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> >>>
> >>> The OVS bridge is created with:
> >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1
> >>> type=dpdkvhostuser ofport_request=1 ovs-vsctl add-port ovsbr0
> >>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
> >>> ofport_request=2 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> >>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> >>>
> >>> DPDK-Pktgen is launched with the following command so that packets
> >> generated through port 0 are received by port 1 and vice versa:
> >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> >>>
> >> --vdev=virtio_user0,path=/tmp/vhost-user1 \
> >>>
> >> --vdev=virtio_user1,path=/tmp/vhost-user2 \
> >>>                               -- -P -m "[0:1].0, [2:3].1"
> >>
> >> The above command line is wrong, as Pktgen takes the first lcore
> >> for display output and timers. I would not use -c 0xF, but -l 1-5
> >> instead, as it is a lot easier to understand IMO. With -l 1-5 you
> >> are using 5 lcores (skipping lcore 0 in a 6-lcore VM): one for
> >> Pktgen and 4 for the two ports, i.e. -m
> >> [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen to use. I am
> >> surprised you did not see some performance or lockup problem; I
> >> really need to add a test for these types of problem :-( You can
> >> also give the VM just 5 lcores, in which case pktgen shares lcore 0 with Linux using
> the -l 0-4 option.
> >>
> >> When Pktgen is requested to send 64-byte frames, it sends a 60-byte
> >> payload + a 4-byte Frame Checksum. This does work, so it must be in
> >> how vhost-user is testing for the packet size. In the mbuf you have
> >> the payload size and the buffer size. The buffer size could be 1524,
> >> but the payload or frame size will be 60 bytes, as the 4-byte FCS is
> >> appended to the frame by the hardware. It seems to me that vhost-user
> >> is not looking at the correct struct rte_mbuf member variable in its testing.
> >>
> >>>
> >>> In Pktgen, the default settings are used for both ports:
> >>>
> >>> -          Tx Count: Forever
> >>>
> >>> -          Rate: 100%
> >>>
> >>> -          PktSize: 64
> >>>
> >>> -          Tx Burst: 32
> >>>
> >>> Whenever I start generating packets through one of the ports (in
> >>> this
> >> example port 0 by running start 0), the OVS logs throw warnings similar to:
> >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1194956
> >>> log messages in last 49 seconds (most recently, 41 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1344988
> >>> log messages in last 11 seconds (most recently, 0 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 57564 max_packet_len 1518 Port 1 does not receive any packets.
> >>>
> >>> When running Pktgen with the -socket-mem option (e.g. --socket-mem
> >>> 512),
> >> the behavior is different, but with the same warnings thrown by OVS:
> >> port 1 receives some packets, but with different sizes, even though
> >> they are generated on port 0 with a 64B size:
> >>> Flags:Port      :   P--------------:0   P--------------:1
> >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>
> >> ----TotalRate----
> >>> Pkts/s Max/Rx     :                 0/0             35136/0
> >> 35136/0
> >>>      Max/Tx     :        238144/25504                 0/0
> >> 238144/25504
> >>> MBits/s Rx/Tx     :             0/13270                 0/0
> >> 0/13270
> >>> Broadcast         :                   0                   0
> >>> Multicast         :                   0                   0
> >>> 64 Bytes        :                   0                 288
> >>> 65-127          :                   0                1440
> >>> 128-255         :                   0                2880
> >>> 256-511         :                   0                6336
> >>> 512-1023        :                   0               12096
> >>> 1024-1518       :                   0               12096
> >>> Runts/Jumbos      :                 0/0                 0/0
> >>> Errors Rx/Tx      :                 0/0                 0/0
> >>> Total Rx Pkts     :                   0               35136
> >>>     Tx Pkts     :             1571584                   0
> >>>     Rx MBs      :                   0                 227
> >>>     Tx MBs      :              412777                   0
> >>> ARP/ICMP Pkts     :                 0/0                 0/0
> >>>                 :
> >>> Pattern Type      :             abcd...             abcd...
> >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> >>>
> >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> >>> -------------------
> >>>
> >>> If packets are generated from an external source and testpmd is used
> >>> to
> >> forward traffic between the two vHost-user ports, the warnings are
> >> not thrown by the OVS bridge.
> >>>
> >>> Should this setup work?
> >>> Is this an issue or am I setting something up wrong?
> >>>
> >>> Thank you,
> >>> Gabriel Ionescu
> >>
> >> Regards,
> >> Keith
> >
>
> Regards,
> Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-users] Re: RE: Re: Issue with Pktgen and OVS-DPDK
  2018-01-10  1:46             ` Chen, Junjie J
  2018-01-10  9:49               ` [dpdk-users] Re: RE: " wang.yong19
@ 2018-01-10 10:15               ` wang.yong19
  2018-01-10 11:44               ` [dpdk-users] Re: " qin.chunhua
  2 siblings, 0 replies; 17+ messages in thread
From: wang.yong19 @ 2018-01-10 10:15 UTC (permalink / raw)
  To: junjie.j.chen
  Cc: xuekun.hu, keith.wiles, Gabriel.Ionescu, jianfeng.tan, users

Hi,
Thanks a lot for your advice. 
We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches below applied, and the problem was resolved.
Now we have met a new problem in this setup. We set the MAC of the virtio port before starting the flow.
At first, everything is OK. Then we stop the flow and restart the same flow without any other modifications.
We found that the source MAC of the flow was different from what we had set on the virtio port.
Moreover, the source MAC was different every time we restarted the flow.
What is going on? Do you know of any patches that fix this problem, given that we cannot change the virtio version?
We are looking forward to receiving your reply. Thank you!


------------------origin------------------
From: <junjie.j.chen@intel.com>;
To: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>;
Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
Date: 2018-01-10 09:47
Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds a flag (VIRTIO_F_VERSION_1) to the device features.

Actually, qemu uses disable-legacy=on,disable-modern=off to support virtio 1.0, and disable-legacy=off,disable-modern=on to support virtio 0.9. So you can use virtio 0.9 on qemu 2.7+ to work around this.

Cheers
JJ


> -----Original Message-----
> From: Hu, Xuekun
> Sent: Wednesday, January 10, 2018 9:32 AM
> To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Gabriel.Ionescu@enea.com; Tan,
> Jianfeng <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Maybe the new qemu (starting from 2.8) introduced some new features that
> break the pktgen and dpdk compatibility?
>
> -----Original Message-----
> From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> Sent: Tuesday, January 09, 2018 10:30 PM
> To: Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Re: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Hi,
> With pktgen-3.0.10 + dpdk-17.02.1 and the two patches below applied,
> the problem is resolved.
> But when we use pktgen-3.4.6 + dpdk-17.11 (which already includes the
> two patches below), the problem remains.
> It seems that something is still wrong with pktgen-3.4.6 and
> dpdk-17.11.
>
>
> ------------------origin------------------
> From: <keith.wiles@intel.com>;
> To: <junjie.j.chen@intel.com>;
> Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> <jianfeng.tan@intel.com>; <users@dpdk.org>;
> Date: 2018-01-09 22:04
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
>
> > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
> >
> > Hi,
> > There are two defects that may cause this issue:
> >
> > 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> > performance in VM virtio pmd mode
> >
> > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > index 759f95d..93065f6 100644
> > --- a/lib/common/mbuf.h
> > +++ b/lib/common/mbuf.h
> > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> >  m->nb_segs = 1;
> >  m->port = 0xff;
> >
> > +    m->data_len = m->pkt_len;
> >  m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> >  RTE_PKTMBUF_HEADROOM : m->buf_len;
> >  }
>
> This patch is in Pktgen 3.4.6
> >
> > 2) In virtio_rxtx.c, please see commit f1216c1eca5a5 ("net/virtio: fix
> > Tx packet length stats").
> >
> > You could apply both of these patches and try it.
> >
> > Cheers
> > JJ
> >
> >
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
> >> Sent: Tuesday, January 9, 2018 2:38 PM
> >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Hi, Keith
> >>
> >> Any updates on this issue? We see similar behavior: ovs-dpdk
> >> reports received packets whose size grows in 12-byte increments
> >> until it exceeds 1518, at which point pktgen stops sending packets,
> >> even though we only ask pktgen to generate 64B packets. And it only
> >> happens with two vhost-user ports in the same server. If pktgen is running in another server,
> then no such issue.
> >>
> >> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
> >>
> >> We also found that qemu 2.8.1 and qemu 2.10 have this problem,
> >> while qemu 2.5 does not. So it seems like a compatibility issue
> >> between pktgen, dpdk and qemu?
> >>
> >> Thanks.
> >> Thx, Xuekun
> >>
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
> >> Sent: Wednesday, May 03, 2017 4:24 AM
> >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Comments inline:
> >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> >>> <Gabriel.Ionescu@enea.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> >>> ports
> >> and I am seeing an issue where Pktgen does not look like it generates
> >> packets correctly.
> >>>
> >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> >>>
> >>> The OVS bridge is created with:
> >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1
> >>> type=dpdkvhostuser ofport_request=1 ovs-vsctl add-port ovsbr0
> >>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
> >>> ofport_request=2 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> >>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> >>>
> >>> DPDK-Pktgen is launched with the following command so that packets
> >> generated through port 0 are received by port 1 and vice versa:
> >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> >>>
> >> --vdev=virtio_user0,path=/tmp/vhost-user1 \
> >>>
> >> --vdev=virtio_user1,path=/tmp/vhost-user2 \
> >>>                               -- -P -m "[0:1].0, [2:3].1"
> >>
> >> The above command line is wrong, as Pktgen takes the first lcore
> >> for display output and timers. I would not use -c 0xF, but -l 1-5
> >> instead, as it is a lot easier to understand IMO. With -l 1-5 you
> >> are using 5 lcores (skipping lcore 0 in a 6-lcore VM): one for
> >> Pktgen and 4 for the two ports, i.e. -m
> >> [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen to use. I am
> >> surprised you did not see some performance or lockup problem; I
> >> really need to add a test for these types of problem :-( You can
> >> also give the VM just 5 lcores, in which case pktgen shares lcore 0 with Linux using
> the -l 0-4 option.
> >>
> >> When Pktgen is requested to send 64-byte frames, it sends a 60-byte
> >> payload + a 4-byte Frame Checksum. This does work, so it must be in
> >> how vhost-user is testing for the packet size. In the mbuf you have
> >> the payload size and the buffer size. The buffer size could be 1524,
> >> but the payload or frame size will be 60 bytes, as the 4-byte FCS is
> >> appended to the frame by the hardware. It seems to me that vhost-user
> >> is not looking at the correct struct rte_mbuf member variable in its testing.
> >>
> >>>
> >>> In Pktgen, the default settings are used for both ports:
> >>>
> >>> -          Tx Count: Forever
> >>>
> >>> -          Rate: 100%
> >>>
> >>> -          PktSize: 64
> >>>
> >>> -          Tx Burst: 32
> >>>
> >>> Whenever I start generating packets through one of the ports (in
> >>> this
> >> example port 0 by running start 0), the OVS logs throw warnings similar to:
> >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1194956
> >>> log messages in last 49 seconds (most recently, 41 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1344988
> >>> log messages in last 11 seconds (most recently, 0 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 57564 max_packet_len 1518 Port 1 does not receive any packets.
> >>>
> >>> When running Pktgen with the -socket-mem option (e.g. --socket-mem
> >>> 512),
> >> the behavior is different, but with the same warnings thrown by OVS:
> >> port 1 receives some packets, but with different sizes, even though
> >> they are generated on port 0 with a 64B size:
> >>> Flags:Port      :   P--------------:0   P--------------:1
> >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>
> >> ----TotalRate----
> >>> Pkts/s Max/Rx     :                 0/0             35136/0
> >> 35136/0
> >>>      Max/Tx     :        238144/25504                 0/0
> >> 238144/25504
> >>> MBits/s Rx/Tx     :             0/13270                 0/0
> >> 0/13270
> >>> Broadcast         :                   0                   0
> >>> Multicast         :                   0                   0
> >>> 64 Bytes        :                   0                 288
> >>> 65-127          :                   0                1440
> >>> 128-255         :                   0                2880
> >>> 256-511         :                   0                6336
> >>> 512-1023        :                   0               12096
> >>> 1024-1518       :                   0               12096
> >>> Runts/Jumbos      :                 0/0                 0/0
> >>> Errors Rx/Tx      :                 0/0                 0/0
> >>> Total Rx Pkts     :                   0               35136
> >>>     Tx Pkts     :             1571584                   0
> >>>     Rx MBs      :                   0                 227
> >>>     Tx MBs      :              412777                   0
> >>> ARP/ICMP Pkts     :                 0/0                 0/0
> >>>                 :
> >>> Pattern Type      :             abcd...             abcd...
> >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> >>>
> >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> >>> -------------------
> >>>
> >>> If packets are generated from an external source and testpmd is used
> >>> to
> >> forward traffic between the two vHost-user ports, the warnings are
> >> not thrown by the OVS bridge.
> >>>
> >>> Should this setup work?
> >>> Is this an issue or am I setting something up wrong?
> >>>
> >>> Thank you,
> >>> Gabriel Ionescu
> >>
> >> Regards,
> >> Keith
> >
>
> Regards,
> Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-users] Re: Re: Issue with Pktgen and OVS-DPDK
  2018-01-10  1:46             ` Chen, Junjie J
  2018-01-10  9:49               ` [dpdk-users] Re: RE: " wang.yong19
  2018-01-10 10:15               ` wang.yong19
@ 2018-01-10 11:44               ` qin.chunhua
  2018-01-10 14:01                 ` [dpdk-users] " Wiles, Keith
  2018-01-11  9:35                 ` Chen, Junjie J
  2 siblings, 2 replies; 17+ messages in thread
From: qin.chunhua @ 2018-01-10 11:44 UTC (permalink / raw)
  To: junjie.j.chen
  Cc: xuekun.hu, wang.yong19, keith.wiles, Gabriel.Ionescu,
	jianfeng.tan, users

Hi,
Thanks a lot for your advice.
We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches below applied, and the problem was resolved.
Now we have met a new problem in this setup. We set the MAC of the virtio port before starting the flow.
At first, everything is OK. Then we stop the flow and restart the same flow without any other modifications.
We found that the source MAC of the flow was different from what we had set on the virtio port.
Moreover, the source MAC was different every time we restarted the flow.
What is going on? Do you know of any patches that fix this problem, given that we cannot change the virtio version?
Looking forward to your reply. Thank you!



------------------Original Mail------------------
From: <junjie.j.chen@intel.com>;
To: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>;
Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
Date: 2018-01-10 09:47
Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds a flag (VIRTIO_F_VERSION_1) to the device features.

Actually, qemu uses disable-legacy=on,disable-modern=off to support virtio 1.0, and disable-legacy=off,disable-modern=on to support virtio 0.9. So you can use virtio 0.9 on qemu 2.7+ to work around this.

Cheers
JJ


> -----Original Message-----
> From: Hu, Xuekun
> Sent: Wednesday, January 10, 2018 9:32 AM
> To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Gabriel.Ionescu@enea.com; Tan,
> Jianfeng <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Maybe the new qemu (starting from 2.8) introduced some new features that
> break the pktgen and dpdk compatibility?
>
> -----Original Message-----
> From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> Sent: Tuesday, January 09, 2018 10:30 PM
> To: Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Re: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Hi,
> With pktgen-3.0.10 + dpdk-17.02.1 and the two patches below applied,
> the problem is resolved.
> But when we use pktgen-3.4.6 + dpdk-17.11 (which already includes the
> two patches below), the problem remains.
> It seems that something is still wrong with pktgen-3.4.6 and
> dpdk-17.11.
>
>
> ------------------origin------------------
> From: <keith.wiles@intel.com>;
> To: <junjie.j.chen@intel.com>;
> Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> <jianfeng.tan@intel.com>; <users@dpdk.org>;
> Date: 2018-01-09 22:04
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
>
> > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
> >
> > Hi,
> > There are two defects that may cause this issue:
> >
> > 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> > performance in VM virtio pmd mode
> >
> > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > index 759f95d..93065f6 100644
> > --- a/lib/common/mbuf.h
> > +++ b/lib/common/mbuf.h
> > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> >  m->nb_segs = 1;
> >  m->port = 0xff;
> >
> > +    m->data_len = m->pkt_len;
> >  m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> >  RTE_PKTMBUF_HEADROOM : m->buf_len;
> >  }
>
> This patch is in Pktgen 3.4.6
> >
> > 2) In virtio_rxtx.c, please see commit f1216c1eca5a5 ("net/virtio: fix
> > Tx packet length stats").
> >
> > You could apply both of these patches and try it.
> >
> > Cheers
> > JJ
> >
> >
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
> >> Sent: Tuesday, January 9, 2018 2:38 PM
> >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Hi, Keith
> >>
> >> Any updates on this issue? We see similar behavior: ovs-dpdk
> >> reports received packets whose size grows in 12-byte increments
> >> until it exceeds 1518, at which point pktgen stops sending packets,
> >> even though we only ask pktgen to generate 64B packets. And it only
> >> happens with two vhost-user ports in the same server. If pktgen is running in another server,
> then no such issue.
> >>
> >> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
> >>
> >> We also found that qemu 2.8.1 and qemu 2.10 have this problem,
> >> while qemu 2.5 does not. So it seems like a compatibility issue
> >> between pktgen, dpdk and qemu?
> >>
> >> Thanks.
> >> Thx, Xuekun
> >>
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
> >> Sent: Wednesday, May 03, 2017 4:24 AM
> >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Comments inline:
> >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> >>> <Gabriel.Ionescu@enea.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> >>> ports
> >> and I am seeing an issue where Pktgen does not look like it generates
> >> packets correctly.
> >>>
> >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> >>>
> >>> The OVS bridge is created with:
> >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1
> >>> type=dpdkvhostuser ofport_request=1 ovs-vsctl add-port ovsbr0
> >>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
> >>> ofport_request=2 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> >>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> >>>
> >>> DPDK-Pktgen is launched with the following command so that packets
> >> generated through port 0 are received by port 1 and vice versa:
> >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> >>>
> >> --vdev=virtio_user0,path=/tmp/vhost-user1 \
> >>>
> >> --vdev=virtio_user1,path=/tmp/vhost-user2 \
> >>>                               -- -P -m "[0:1].0, [2:3].1"
> >>
> >> The above command line is wrong, as Pktgen takes the first lcore
> >> for display output and timers. I would not use -c 0xF, but -l 1-5
> >> instead, as it is a lot easier to understand IMO. With -l 1-5 you
> >> are using 5 lcores (skipping lcore 0 in a 6-lcore VM): one for
> >> Pktgen and 4 for the two ports, i.e. -m
> >> [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen to use. I am
> >> surprised you did not see some performance or lockup problem; I
> >> really need to add a test for these types of problem :-( You can
> >> also give the VM just 5 lcores, in which case pktgen shares lcore 0 with Linux using
> the -l 0-4 option.
> >>
> >> When Pktgen is requested to send 64-byte frames, it sends a 60-byte
> >> payload + a 4-byte Frame Checksum. This does work, so it must be in
> >> how vhost-user is testing for the packet size. In the mbuf you have
> >> the payload size and the buffer size. The buffer size could be 1524,
> >> but the payload or frame size will be 60 bytes, as the 4-byte FCS is
> >> appended to the frame by the hardware. It seems to me that vhost-user
> >> is not looking at the correct struct rte_mbuf member variable in its testing.
> >>
> >>>
> >>> In Pktgen, the default settings are used for both ports:
> >>>
> >>> -          Tx Count: Forever
> >>>
> >>> -          Rate: 100%
> >>>
> >>> -          PktSize: 64
> >>>
> >>> -          Tx Burst: 32
> >>>
> >>> Whenever I start generating packets through one of the ports (in
> >>> this
> >> example port 0 by running start 0), the OVS logs throw warnings similar to:
> >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1194956
> >>> log messages in last 49 seconds (most recently, 41 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>>
> >>
> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped
> >> 1344988
> >>> log messages in last 11 seconds (most recently, 0 seconds ago) due
> >>> to excessive rate
> >>>
> >>
> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
> >> Too
> >>> big size 57564 max_packet_len 1518 Port 1 does not receive any packets.
> >>>
> >>> When running Pktgen with the -socket-mem option (e.g. --socket-mem
> >>> 512),
> >> the behavior is different, but with the same warnings thrown by OVS:
> >> port 1 receives some packets, but with different sizes, even though
> >> they are generated on port 0 with a 64B size:
> >>> Flags:Port      :   P--------------:0   P--------------:1
> >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>
> >> ----TotalRate----
> >>> Pkts/s Max/Rx     :                 0/0             35136/0
> >> 35136/0
> >>>      Max/Tx     :        238144/25504                 0/0
> >> 238144/25504
> >>> MBits/s Rx/Tx     :             0/13270                 0/0
> >> 0/13270
> >>> Broadcast         :                   0                   0
> >>> Multicast         :                   0                   0
> >>> 64 Bytes        :                   0                 288
> >>> 65-127          :                   0                1440
> >>> 128-255         :                   0                2880
> >>> 256-511         :                   0                6336
> >>> 512-1023        :                   0               12096
> >>> 1024-1518       :                   0               12096
> >>> Runts/Jumbos      :                 0/0                 0/0
> >>> Errors Rx/Tx      :                 0/0                 0/0
> >>> Total Rx Pkts     :                   0               35136
> >>>     Tx Pkts     :             1571584                   0
> >>>     Rx MBs      :                   0                 227
> >>>     Tx MBs      :              412777                   0
> >>> ARP/ICMP Pkts     :                 0/0                 0/0
> >>>                 :
> >>> Pattern Type      :             abcd...             abcd...
> >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> >>>
> >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> >>> -------------------
> >>>
> >>> If packets are generated from an external source and testpmd is used
> >>> to
> >> forward traffic between the two vHost-user ports, the warnings are
> >> not thrown by the OVS bridge.
> >>>
> >>> Should this setup work?
> >>> Is this an issue or am I setting something up wrong?
> >>>
> >>> Thank you,
> >>> Gabriel Ionescu
> >>
> >> Regards,
> >> Keith
> >
>
> Regards,
> Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2018-01-10 11:44               ` [dpdk-users] Re: " qin.chunhua
@ 2018-01-10 14:01                 ` Wiles, Keith
  2018-01-11  9:35                 ` Chen, Junjie J
  1 sibling, 0 replies; 17+ messages in thread
From: Wiles, Keith @ 2018-01-10 14:01 UTC (permalink / raw)
  To: qin.chunhua
  Cc: Chen, Junjie J, Hu, Xuekun, wang.yong19, Gabriel.Ionescu, Tan,
	Jianfeng, users

Not sure what this email is about; are you sending the email in plain text format, or what?

> On Jan 10, 2018, at 5:44 AM, qin.chunhua@zte.com.cn wrote:
> 
> Hi,
> Thanks a lot for your advice.
> We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches
> below applied, and the problem was resolved.
> Now we have met a new problem in the same setup. We set the MAC of the
> virtio port before we start generating a flow.
> At first, everything is OK. Then we stop the flow and restart the same
> flow without any other modifications.
> We found that the source MAC of the flow was different from what we had
> set on the virtio port.
> Moreover, the source MAC is different every time we restart the flow.
> What's going on? Do you know of any patches that fix this problem, given
> we can't change the virtio version?
> Looking forward to receiving your reply. Thank you!
>
>
> ------------------Original Mail------------------
> From: <junjie.j.chen@intel.com>
> To: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>
> Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>
> Date: 2018-01-10 09:47
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds
> a flag (VIRTIO_F_VERSION_1) to the device features.
>
> Actually, qemu uses disable-legacy=on,disable-modern=off to support virtio
> 1.0, and disable-legacy=off,disable-modern=on to support virtio 0.9. So
> you can use virtio 0.9 on qemu 2.7+ to work around this.
>
> Cheers
> JJ
>
> [...]

Regards,
Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2018-01-10 11:44               ` [dpdk-users] Reply: " qin.chunhua
  2018-01-10 14:01                 ` [dpdk-users] " Wiles, Keith
@ 2018-01-11  9:35                 ` Chen, Junjie J
  2018-01-11 10:51                   ` [dpdk-users] Reply: RE: " wang.yong19
  1 sibling, 1 reply; 17+ messages in thread
From: Chen, Junjie J @ 2018-01-11  9:35 UTC (permalink / raw)
  To: qin.chunhua
  Cc: Hu, Xuekun, wang.yong19, Wiles, Keith, Gabriel.Ionescu, Tan,
	Jianfeng, users

Could you please try this patch for app/pktgen.c:

@@ -877,6 +877,7 @@ pktgen_setup_cb(struct rte_mempool *mp,
 {
        pkt_data_t *data = (pkt_data_t *)opaque;
        struct rte_mbuf *m = (struct rte_mbuf *)obj;
+       pktmbuf_reset(m);
        port_info_t *info;
        pkt_seq_t *pkt;
        uint16_t qid;


it works on my setup.
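
For context, here is a minimal sketch of what the added call fixes
(pktmbuf_reset() is pktgen's own helper from lib/common/mbuf.h; the setup
function below is illustrative only, not the actual pktgen code). pktgen
pre-builds frames directly in the mempool objects, so without a reset a
stale data_len left over from a previous run is what the virtio/vhost path
later reports as the frame length:

#include <rte_mbuf.h>      /* struct rte_mbuf, rte_pktmbuf_mtod() */
#include <rte_memcpy.h>    /* rte_memcpy() */
#include "mbuf.h"          /* pktgen's pktmbuf_reset() (lib/common) */

/* Hypothetical sketch: keep data_len/pkt_len consistent for a
 * single-segment 64-byte frame before the PMD ever sees the mbuf. */
static void
setup_frame_sketch(struct rte_mbuf *m, const void *frame, uint16_t len)
{
        pktmbuf_reset(m);    /* re-init data_off, nb_segs, data_len  */
        m->pkt_len  = len;   /* total frame length, e.g. 60 (the FCS
                              * is appended by the hardware)         */
        m->data_len = len;   /* must equal pkt_len for one segment   */
        rte_memcpy(rte_pktmbuf_mtod(m, void *), frame, len);
}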

Cheers
JJ


> -----Original Message-----
> From: qin.chunhua@zte.com.cn [mailto:qin.chunhua@zte.com.cn]
> Sent: Wednesday, January 10, 2018 7:45 PM
> To: Chen, Junjie J <junjie.j.chen@intel.com>
> Cc: Hu, Xuekun <xuekun.hu@intel.com>; wang.yong19@zte.com.cn; Wiles,
> Keith <keith.wiles@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Reply: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> Hi,
> Thanks a lot for your advice.
> We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches
> below applied, and the problem was resolved.
> Now we have met a new problem in the same setup. We set the MAC of the
> virtio port before we start generating a flow.
> At first, everything is OK. Then we stop the flow and restart the same flow
> without any other modifications.
> We found that the source MAC of the flow was different from what we had
> set on the virtio port.
> Moreover, the source MAC is different every time we restart the flow.
> What's going on? Do you know of any patches that fix this problem, given
> we can't change the virtio version?
> Looking forward to receiving your reply. Thank you!
> 
> 
> 
> ------------------Original Mail------------------
> From: <junjie.j.chen@intel.com>
> To: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>
> Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>;
> <users@dpdk.org>
> Date: 2018-01-10 09:47
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds a
> flag (VIRTIO_F_VERSION_1) to the device features.
> 
> Actually, qemu uses disable-legacy=on,disable-modern=off to support virtio
> 1.0, and disable-legacy=off,disable-modern=on to support virtio 0.9. So
> you can use virtio 0.9 on qemu 2.7+ to work around this.
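>
> As a rough illustration (a sketch only; the chardev/netdev ids are
> placeholders, and the socket path just follows the one used earlier in
> this thread), forcing legacy virtio 0.9 on a vhost-user port would look
> something like:
>
> qemu-system-x86_64 ... \
>     -chardev socket,id=char1,path=/tmp/vhost-user1 \
>     -netdev type=vhost-user,id=net1,chardev=char1 \
>     -device virtio-net-pci,netdev=net1,disable-legacy=off,disable-modern=on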
> 
> Cheers
> JJ
> 
> 
> > -----Original Message-----
> > From: Hu, Xuekun
> > Sent: Wednesday, January 10, 2018 9:32 AM
> > To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> > Cc: Chen, Junjie J <junjie.j.chen@intel.com>;
> > Gabriel.Ionescu@enea.com; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > users@dpdk.org
> > Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Maybe the new qemu (starting from 2.8) introduced some new features
> > that break the pktgen and dpdk compatibility?
> >
> > -----Original Message-----
> > From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> > Sent: Tuesday, January 09, 2018 10:30 PM
> > To: Wiles, Keith <keith.wiles@intel.com>
> > Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> > <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> > <jianfeng.tan@intel.com>; users@dpdk.org
> > Subject: Reply: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Hi,
> > We used pktgen-3.0.10 + dpdk-17.02.1 with the two patches below applied,
> > and the problem is resolved.
> > But when we use pktgen-3.4.6 + dpdk-17.11 (where the two patches below
> > are already included), the problem remains.
> > It seems that there is still something wrong with pktgen-3.4.6 and
> > dpdk-17.11.
> >
> >
> > ------------------Original Mail------------------
> > From: <keith.wiles@intel.com>
> > To: <junjie.j.chen@intel.com>
> > Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> > <jianfeng.tan@intel.com>; <users@dpdk.org>
> > Date: 2018-01-09 22:04
> > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> >
> > > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com>
> wrote:
> > >
> > > Hi
> > > There are two defects that may cause this issue:
> > >
> > > 1) in pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> > > performance in VM virtio pmd mode
> > >
> > > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > > index 759f95d..93065f6 100644
> > > --- a/lib/common/mbuf.h
> > > +++ b/lib/common/mbuf.h
> > > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> > > m->nb_segs = 1;
> > > m->port = 0xff;
> > >
> > > +    m->data_len = m->pkt_len;
> > > m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> > > RTE_PKTMBUF_HEADROOM : m->buf_len;
> > > }
> >
> > This patch is in Pktgen 3.4.6
> > >
> > > 2) in virtio_rxtx.c, please see commit f1216c1eca5a5. net/virtio:
> > > fix Tx packet length stats
> > >
> > > You could apply both of these patches and give it a try.
> > >
> > > Cheers
> > > JJ
> > >
> > >
> > >> -----Original Message-----
> > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
> > >> Sent: Tuesday, January 9, 2018 2:38 PM
> > >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> > >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
> > >> Cc: users@dpdk.org
> > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >>
> > >> Hi, Keith
> > >>
> > >> Any updates on this issue? We met similar behavior: ovs-dpdk reports
> > >> received packets whose size increases in 12-byte increments until it
> > >> exceeds 1518, at which point pktgen stops sending packets, while we
> > >> only asked pktgen to generate 64B packets. And it only happens with
> > >> two vhost-user ports in the same server. If pktgen is running in
> > >> another server, there is no such issue.
> > >>
> > >> We tested the latest pktgen 3.4.6, and OVS-DPDK 2.8, with DPDK 17.11.
> > >>
> > >> We also found that qemu 2.8.1 and qemu 2.10 have this problem, while
> > >> qemu 2.5 does not. So it seems to be a compatibility issue between
> > >> pktgen/dpdk/qemu?
> > >>
> > >> Thanks.
> > >> Thx, Xuekun
> > >>
> > >> -----Original Message-----
> > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles,
> > >> Keith
> > >> Sent: Wednesday, May 03, 2017 4:24 AM
> > >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> > >> Cc: users@dpdk.org
> > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >>
> > >> Comments inline:
> > >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> > >>> <Gabriel.Ionescu@enea.com>
> > >> wrote:
> > >>>
> > >>> Hi,
> > >>>
> > >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> > >>> ports
> > >> and I am seeing an issue where Pktgen does not look like it
> > >> generates packets correctly.
> > >>>
> > >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> > >>>
> > >>> The OVS bridge is created with:
> > >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> > >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1
> > >>> type=dpdkvhostuser ofport_request=1 ovs-vsctl add-port ovsbr0
> > >>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
> > >>> ofport_request=2 ovs-ofctl add-flow ovsbr0
> > >>> in_port=1,action=output:2 ovs-ofctl add-flow ovsbr0
> > >>> in_port=2,action=output:1
> > >>>
> > >>> DPDK-Pktgen is launched with the following command so that packets
> > >> generated through port 0 are received by port 1 and vice versa:
> > >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> > >>>
> > >> --vdev=virtio_user0,path=/tmp/vhost-user1 \
> > >>>
> > >> --vdev=virtio_user1,path=/tmp/vhost-user2 \
> > >>>                               -- -P -m "[0:1].0, [2:3].1"
> > >>
> > >> The above command line is wrong, as Pktgen needs or takes the first
> > >> lcore for display output and timers. I would not use -c 0xF, but
> > >> -l 1-5 instead, as it is a lot easier to understand IMO. With the
> > >> option -l 1-5 you are using 5 lcores (skipping lcore 0 in a 6-lcore
> > >> VM): one for Pktgen and 4 for the two ports, i.e. -m [2:3].0 -m
> > >> [4:5].1, leaving lcore 1 for Pktgen itself. I am concerned you did
> > >> not see some performance or lockup problem. I really need to add a
> > >> test for these types of problems :-( You can also give the VM just
> > >> 5 lcores, in which case pktgen shares lcore 0 with Linux, using the
> > >> -l 0-4 option.
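> > >>
> > >> For example, a re-spelling of the launch command from this thread
> > >> along those lines (illustrative, not a tested line) would be:
> > >>
> > >> pktgen -l 1-5 --file-prefix pktgen --no-pci \
> > >>        --vdev=virtio_user0,path=/tmp/vhost-user1 \
> > >>        --vdev=virtio_user1,path=/tmp/vhost-user2 \
> > >>        -- -P -m "[2:3].0" -m "[4:5].1"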
> > >>
> > >> When requested to send 64-byte frames, Pktgen sends a 60-byte
> > >> payload + a 4-byte Frame Checksum. This does work, so the problem
> > >> must be in how vhost-user is testing for the packet size. In the
> > >> mbuf you have the payload size and the buffer size. The buffer size
> > >> could be 1524, but the payload or frame size will be 60 bytes, as
> > >> the 4-byte FCS is appended to the frame by the hardware. It seems
> > >> to me that vhost-user is not looking at the correct struct rte_mbuf
> > >> member variable in its testing.
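> > >>
> > >> As a rough sketch of that distinction (the rte_mbuf field and
> > >> accessor names are real DPDK; the check and helper below are
> > >> hypothetical, not the actual OVS code):
> > >>
> > >> /* The frame length lives in pkt_len/data_len; buf_len is only the
> > >>  * size of the backing buffer (e.g. 1524 including headroom). */
> > >> if (rte_pktmbuf_pkt_len(m) > max_packet_len)  /* 60 vs 1518: fine */
> > >>         drop_oversized(m);                    /* hypothetical helper */
> > >> /* Testing m->buf_len against max_packet_len instead would flag every
> > >>  * mbuf, which is consistent with the "Too big size 1524" warnings. */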
> > >>
> > >>>
> > >>> In Pktgen, the default settings are used for both ports:
> > >>>
> > >>> -          Tx Count: Forever
> > >>>
> > >>> -          Rate: 100%
> > >>>
> > >>> -          PktSize: 64
> > >>>
> > >>> -          Tx Burst: 32
> > >>>
> > >>> Whenever I start generating packets through one of the ports (in
> > >>> this
> > >> example port 0 by running start 0), the OVS logs throw warnings
> > >> similar to:
> > >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956
> > >>> log messages in last 49 seconds (most recently, 41 seconds ago)
> > >>> due to excessive rate
> > >>> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988
> > >>> log messages in last 11 seconds (most recently, 0 seconds ago)
> > >>> due to excessive rate
> > >>> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 57564 max_packet_len 1518
> > >>> Port 1 does not receive any packets.
> > >>>
> > >>> When running Pktgen with the -socket-mem option (e.g. --socket-mem
> > >>> 512),
> > >> the behavior is different, but with the same warnings thrown by OVS:
> > >> port 1 receives some packets, but with different sizes, even
> > >> though they are generated on port 0 with a 64B size:
> > >>> Flags:Port      :   P--------------:0   P--------------:1
> > >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
> > >>> Pkts/s Max/Rx     :                 0/0             35136/0               35136/0
> > >>>        Max/Tx     :        238144/25504                 0/0          238144/25504
> > >>> MBits/s Rx/Tx     :             0/13270                 0/0               0/13270
> > >>> Broadcast         :                   0                   0
> > >>> Multicast         :                   0                   0
> > >>>   64 Bytes        :                   0                 288
> > >>>   65-127          :                   0                1440
> > >>>   128-255         :                   0                2880
> > >>>   256-511         :                   0                6336
> > >>>   512-1023        :                   0               12096
> > >>>   1024-1518       :                   0               12096
> > >>> Runts/Jumbos      :                 0/0                 0/0
> > >>> Errors Rx/Tx      :                 0/0                 0/0
> > >>> Total Rx Pkts     :                   0               35136
> > >>>       Tx Pkts     :             1571584                   0
> > >>>       Rx MBs      :                   0                 227
> > >>>       Tx MBs      :              412777                   0
> > >>> ARP/ICMP Pkts     :                 0/0                 0/0
> > >>>                   :
> > >>> Pattern Type      :             abcd...             abcd...
> > >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> > >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> > >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> > >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> > >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> > >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> > >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> > >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> > >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> > >>>
> > >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> > >>> -------------------
> > >>>
> > >>> If packets are generated from an external source and testpmd is
> > >>> used to
> > >> forward traffic between the two vHost-user ports, the warnings are
> > >> not thrown by the OVS bridge.
> > >>>
> > >>> Should this setup work?
> > >>> Is this an issue or am I setting something up wrong?
> > >>>
> > >>> Thank you,
> > >>> Gabriel Ionescu
> > >>
> > >> Regards,
> > >> Keith
> > >
> >
> > Regards,
> > Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-users] Reply: RE: Re:  Issue with Pktgen and OVS-DPDK
  2018-01-11  9:35                 ` Chen, Junjie J
@ 2018-01-11 10:51                   ` wang.yong19
  2018-01-11 11:13                     ` [dpdk-users] " Chen, Junjie J
  0 siblings, 1 reply; 17+ messages in thread
From: wang.yong19 @ 2018-01-11 10:51 UTC (permalink / raw)
  To: junjie.j.chen
  Cc: qin.chunhua, xuekun.hu, keith.wiles, Gabriel.Ionescu,
	jianfeng.tan, users

This patch works in our VMs.
We really appreciate your help!


------------------Original Mail------------------
From: <junjie.j.chen@intel.com>
To: Qin Chunhua 10013690
Cc: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>; <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>
Date: 2018-01-11 17:35
Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
Could you please try this patch for app/pktgen.c:

@@ -877,6 +877,7 @@ pktgen_setup_cb(struct rte_mempool *mp,
 {
        pkt_data_t *data = (pkt_data_t *)opaque;
        struct rte_mbuf *m = (struct rte_mbuf *)obj;
+       pktmbuf_reset(m);
        port_info_t *info;
        pkt_seq_t *pkt;
        uint16_t qid;


it works on my setup.

Cheers
JJ


> -----Original Message-----
> From: qin.chunhua@zte.com.cn [mailto:qin.chunhua@zte.com.cn]
> Sent: Wednesday, January 10, 2018 7:45 PM
> To: Chen, Junjie J <junjie.j.chen@intel.com>
> Cc: Hu, Xuekun <xuekun.hu@intel.com>; wang.yong19@zte.com.cn; Wiles,
> Keith <keith.wiles@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Reply: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Hi,
> Thanks a lot for your advice.
> We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches
> below applied, and the problem was resolved.
> Now we have met a new problem in the same setup. We set the MAC of the
> virtio port before we start generating a flow.
> At first, everything is OK. Then we stop the flow and restart the same flow
> without any other modifications.
> We found that the source MAC of the flow was different from what we had
> set on the virtio port.
> Moreover, the source MAC is different every time we restart the flow.
> What's going on? Do you know of any patches that fix this problem, given
> we can't change the virtio version?
> Looking forward to receiving your reply. Thank you!
>
>
>
> ------------------Original Mail------------------
> From: <junjie.j.chen@intel.com>
> To: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>
> Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>;
> <users@dpdk.org>
> Date: 2018-01-10 09:47
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds a
> flag (VIRTIO_F_VERSION_1) to the device features.
>
> Actually, qemu uses disable-legacy=on,disable-modern=off to support virtio
> 1.0, and disable-legacy=off,disable-modern=on to support virtio 0.9. So
> you can use virtio 0.9 on qemu 2.7+ to work around this.
>
> Cheers
> JJ
>
>
> > -----Original Message-----
> > From: Hu, Xuekun
> > Sent: Wednesday, January 10, 2018 9:32 AM
> > To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> > Cc: Chen, Junjie J <junjie.j.chen@intel.com>;
> > Gabriel.Ionescu@enea.com; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > users@dpdk.org
> > Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Maybe the new qemu (starting from 2.8) introduced some new features
> > that break the pktgen and dpdk compatibility?
> >
> > -----Original Message-----
> > From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> > Sent: Tuesday, January 09, 2018 10:30 PM
> > To: Wiles, Keith <keith.wiles@intel.com>
> > Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> > <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> > <jianfeng.tan@intel.com>; users@dpdk.org
> > Subject: Reply: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Hi,
> > We used pktgen-3.0.10 + dpdk-17.02.1 with the two patches below applied,
> > and the problem is resolved.
> > But when we use pktgen-3.4.6 + dpdk-17.11 (where the two patches below
> > are already included), the problem remains.
> > It seems that there is still something wrong with pktgen-3.4.6 and
> > dpdk-17.11.
> >
> >
> > ------------------Original Mail------------------
> > From: <keith.wiles@intel.com>
> > To: <junjie.j.chen@intel.com>
> > Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> > <jianfeng.tan@intel.com>; <users@dpdk.org>
> > Date: 2018-01-09 22:04
> > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> >
> > > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com>
> wrote:
> > >
> > > Hi
> > > There are two defects that may cause this issue:
> > >
> > > 1) in pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> > > performance in VM virtio pmd mode
> > >
> > > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > > index 759f95d..93065f6 100644
> > > --- a/lib/common/mbuf.h
> > > +++ b/lib/common/mbuf.h
> > > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> > > m->nb_segs = 1;
> > > m->port = 0xff;
> > >
> > > +    m->data_len = m->pkt_len;
> > > m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> > > RTE_PKTMBUF_HEADROOM : m->buf_len;
> > > }
> >
> > This patch is in Pktgen 3.4.6
> > >
> > > 2) in virtio_rxtx.c, please see commit f1216c1eca5a5. net/virtio:
> > > fix Tx packet length stats
> > >
> > > You could apply both of these patches and give it a try.
> > >
> > > Cheers
> > > JJ
> > >
> > >
> > >> -----Original Message-----
> > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
> > >> Sent: Tuesday, January 9, 2018 2:38 PM
> > >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> > >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
> > >> Cc: users@dpdk.org
> > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >>
> > >> Hi, Keith
> > >>
> > >> Any updates on this issue? We met similar behavior: ovs-dpdk reports
> > >> received packets whose size increases in 12-byte increments until it
> > >> exceeds 1518, at which point pktgen stops sending packets, while we
> > >> only asked pktgen to generate 64B packets. And it only happens with
> > >> two vhost-user ports in the same server. If pktgen is running in
> > >> another server, there is no such issue.
> > >>
> > >> We tested the latest pktgen 3.4.6, and OVS-DPDK 2.8, with DPDK 17.11.
> > >>
> > >> We also found that qemu 2.8.1 and qemu 2.10 have this problem, while
> > >> qemu 2.5 does not. So it seems to be a compatibility issue between
> > >> pktgen/dpdk/qemu?
> > >>
> > >> Thanks.
> > >> Thx, Xuekun
> > >>
> > >> -----Original Message-----
> > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles,
> > >> Keith
> > >> Sent: Wednesday, May 03, 2017 4:24 AM
> > >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> > >> Cc: users@dpdk.org
> > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >>
> > >> Comments inline:
> > >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> > >>> <Gabriel.Ionescu@enea.com>
> > >> wrote:
> > >>>
> > >>> Hi,
> > >>>
> > >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> > >>> ports
> > >> and I am seeing an issue where Pktgen does not look like it
> > >> generates packets correctly.
> > >>>
> > >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> > >>>
> > >>> The OVS bridge is created with:
> > >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> > >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1
> > >>> type=dpdkvhostuser ofport_request=1 ovs-vsctl add-port ovsbr0
> > >>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
> > >>> ofport_request=2 ovs-ofctl add-flow ovsbr0
> > >>> in_port=1,action=output:2 ovs-ofctl add-flow ovsbr0
> > >>> in_port=2,action=output:1
> > >>>
> > >>> DPDK-Pktgen is launched with the following command so that packets
> > >> generated through port 0 are received by port 1 and vice versa:
> > >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> > >>>
> > >> --vdev=virtio_user0,path=/tmp/vhost-user1 \
> > >>>
> > >> --vdev=virtio_user1,path=/tmp/vhost-user2 \
> > >>>                               -- -P -m "[0:1].0, [2:3].1"
> > >>
> > >> The above command line is wrong, as Pktgen needs or takes the first
> > >> lcore for display output and timers. I would not use -c 0xF, but
> > >> -l 1-5 instead, as it is a lot easier to understand IMO. With the
> > >> option -l 1-5 you are using 5 lcores (skipping lcore 0 in a 6-lcore
> > >> VM): one for Pktgen and 4 for the two ports, i.e. -m [2:3].0 -m
> > >> [4:5].1, leaving lcore 1 for Pktgen itself. I am concerned you did
> > >> not see some performance or lockup problem. I really need to add a
> > >> test for these types of problems :-( You can also give the VM just
> > >> 5 lcores, in which case pktgen shares lcore 0 with Linux, using the
> > >> -l 0-4 option.
> > >>
> > >> When requested to send 64-byte frames, Pktgen sends a 60-byte
> > >> payload + a 4-byte Frame Checksum. This does work, so the problem
> > >> must be in how vhost-user is testing for the packet size. In the
> > >> mbuf you have the payload size and the buffer size. The buffer size
> > >> could be 1524, but the payload or frame size will be 60 bytes, as
> > >> the 4-byte FCS is appended to the frame by the hardware. It seems
> > >> to me that vhost-user is not looking at the correct struct rte_mbuf
> > >> member variable in its testing.
> > >>
> > >>>
> > >>> In Pktgen, the default settings are used for both ports:
> > >>>
> > >>> -          Tx Count: Forever
> > >>>
> > >>> -          Rate: 100%
> > >>>
> > >>> -          PktSize: 64
> > >>>
> > >>> -          Tx Burst: 32
> > >>>
> > >>> Whenever I start generating packets through one of the ports (in
> > >>> this
> > >> example port 0 by running start 0), the OVS logs throw warnings
> > >> similar to:
> > >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956
> > >>> log messages in last 49 seconds (most recently, 41 seconds ago)
> > >>> due to excessive rate
> > >>> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 1524 max_packet_len 1518
> > >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988
> > >>> log messages in last 11 seconds (most recently, 0 seconds ago)
> > >>> due to excessive rate
> > >>> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > >>> Too big size 57564 max_packet_len 1518
> > >>> Port 1 does not receive any packets.
> > >>>
> > >>> When running Pktgen with the -socket-mem option (e.g. --socket-mem
> > >>> 512),
> > >> the behavior is different, but with the same warnings thrown by OVS:
> > >> port 1 receives some packets, but with different sizes, even
> > >> though they are generated on port 0 with a 64B size:
> > >>> Flags:Port      :   P--------------:0   P--------------:1
> > >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
> > >>> Pkts/s Max/Rx     :                 0/0             35136/0               35136/0
> > >>>        Max/Tx     :        238144/25504                 0/0          238144/25504
> > >>> MBits/s Rx/Tx     :             0/13270                 0/0               0/13270
> > >>> Broadcast         :                   0                   0
> > >>> Multicast         :                   0                   0
> > >>>   64 Bytes        :                   0                 288
> > >>>   65-127          :                   0                1440
> > >>>   128-255         :                   0                2880
> > >>>   256-511         :                   0                6336
> > >>>   512-1023        :                   0               12096
> > >>>   1024-1518       :                   0               12096
> > >>> Runts/Jumbos      :                 0/0                 0/0
> > >>> Errors Rx/Tx      :                 0/0                 0/0
> > >>> Total Rx Pkts     :                   0               35136
> > >>>       Tx Pkts     :             1571584                   0
> > >>>       Rx MBs      :                   0                 227
> > >>>       Tx MBs      :              412777                   0
> > >>> ARP/ICMP Pkts     :                 0/0                 0/0
> > >>>                   :
> > >>> Pattern Type      :             abcd...             abcd...
> > >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> > >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> > >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> > >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> > >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> > >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> > >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> > >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> > >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> > >>>
> > >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> > >>> -------------------
> > >>>
> > >>> If packets are generated from an external source and testpmd is
> > >>> used to
> > >> forward traffic between the two vHost-user ports, the warnings are
> > >> not thrown by the OVS bridge.
> > >>>
> > >>> Should this setup work?
> > >>> Is this an issue or am I setting something up wrong?
> > >>>
> > >>> Thank you,
> > >>> Gabriel Ionescu
> > >>
> > >> Regards,
> > >> Keith
> > >
> >
> > Regards,
> > Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
  2018-01-11 10:51                   ` [dpdk-users] Reply: RE: " wang.yong19
@ 2018-01-11 11:13                     ` Chen, Junjie J
  2018-01-11 11:24                       ` [dpdk-users] Reply: RE: RE: " wang.yong19
  0 siblings, 1 reply; 17+ messages in thread
From: Chen, Junjie J @ 2018-01-11 11:13 UTC (permalink / raw)
  To: wang.yong19
  Cc: qin.chunhua, Hu, Xuekun, Wiles, Keith, Gabriel.Ionescu, Tan,
	Jianfeng, users

Great. It would be better to send your email in plain text format, so that others can read it in most email clients.

Cheers
JJ


> -----Original Message-----
> From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> Sent: Thursday, January 11, 2018 6:51 PM
> To: Chen, Junjie J <junjie.j.chen@intel.com>
> Cc: qin.chunhua@zte.com.cn; Hu, Xuekun <xuekun.hu@intel.com>; Wiles,
> Keith <keith.wiles@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Reply: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> This patch works in our VMs.
> We really appreciate your help!
> 
> 
> ------------------Original Mail------------------
> From: <junjie.j.chen@intel.com>
> To: Qin Chunhua 10013690
> Cc: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>;
> <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>;
> <users@dpdk.org>
> Date: 2018-01-11 17:35
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Could you please try this patch for app/pktgen.c:
> 
> @@ -877,6 +877,7 @@ pktgen_setup_cb(struct rte_mempool *mp,
>  {
>         pkt_data_t *data = (pkt_data_t *)opaque;
>         struct rte_mbuf *m = (struct rte_mbuf *)obj;
> +       pktmbuf_reset(m);
>         port_info_t *info;
>         pkt_seq_t *pkt;
>         uint16_t qid;
> 
> 
> it works on my setup.
> 
> Cheers
> JJ
> 
> 
> > -----Original Message-----
> > From: qin.chunhua@zte.com.cn [mailto:qin.chunhua@zte.com.cn]
> > Sent: Wednesday, January 10, 2018 7:45 PM
> > To: Chen, Junjie J <junjie.j.chen@intel.com>
> > Cc: Hu, Xuekun <xuekun.hu@intel.com>; wang.yong19@zte.com.cn;
> Wiles,
> > Keith <keith.wiles@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> > <jianfeng.tan@intel.com>; users@dpdk.org
> > Subject: Reply: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Hi,
> > Thanks a lot for your advice.
> > We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches
> > below applied, and the problem was resolved.
> > Now we have met a new problem in the same setup. We set the MAC of the
> > virtio port before we start generating a flow.
> > At first, everything is OK. Then we stop the flow and restart the same
> > flow without any other modifications.
> > We found that the source MAC of the flow was different from what we had
> > set on the virtio port.
> > Moreover, the source MAC is different every time we restart the flow.
> > What's going on? Do you know of any patches that fix this problem, given
> > we can't change the virtio version?
> > Looking forward to receiving your reply. Thank you!
> >
> >
> >
> > ------------------Original Mail------------------
> > From: <junjie.j.chen@intel.com>
> > To: <xuekun.hu@intel.com>; Wang Yong 10032886;
> > <keith.wiles@intel.com>
> > Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>;
> > <users@dpdk.org>
> > Date: 2018-01-10 09:47
> > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds
> > a flag (VIRTIO_F_VERSION_1) to the device features.
> >
> > Actually, qemu uses disable-legacy=on,disable-modern=off to support
> > virtio 1.0, and disable-legacy=off,disable-modern=on to support
> > virtio 0.9. So you can use virtio 0.9 on qemu 2.7+ to work around this.
> >
> > Cheers
> > JJ
> >
> >
> > > -----Original Message-----
> > > From: Hu, Xuekun
> > > Sent: Wednesday, January 10, 2018 9:32 AM
> > > To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> > > Cc: Chen, Junjie J <junjie.j.chen@intel.com>;
> > > Gabriel.Ionescu@enea.com; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > > users@dpdk.org
> > > Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > > Maybe the new qemu (starting from 2.8) introduced some new features
> > > that break the pktgen and dpdk compatibility?
> > >
> > > -----Original Message-----
> > > From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> > > Sent: Tuesday, January 09, 2018 10:30 PM
> > > To: Wiles, Keith <keith.wiles@intel.com>
> > > Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> > > <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> > > <jianfeng.tan@intel.com>; users@dpdk.org
> > > Subject: Reply: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > > Hi,
> > > We used pktgen-3.0.10 + dpdk-17.02.1 with the two patches below
> > > applied, and the problem is resolved.
> > > But when we use pktgen-3.4.6 + dpdk-17.11 (where the two patches below
> > > are already included), the problem remains.
> > > It seems that there is still something wrong with pktgen-3.4.6 and
> > > dpdk-17.11.
> > >
> > >
> > > ------------------Original Mail------------------
> > > From: <keith.wiles@intel.com>
> > > To: <junjie.j.chen@intel.com>
> > > Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> > > <jianfeng.tan@intel.com>; <users@dpdk.org>
> > > Date: 2018-01-09 22:04
> > > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > >
> > > > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J
> > > > <junjie.j.chen@intel.com>
> > wrote:
> > > >
> > > > Hi
> > > > There are two defects that may cause this issue:
> > > >
> > > > 1) in pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix
> > > > low performance in VM virtio pmd mode
> > > >
> > > > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > > > index 759f95d..93065f6 100644
> > > > --- a/lib/common/mbuf.h
> > > > +++ b/lib/common/mbuf.h
> > > > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> > > > m->nb_segs = 1;
> > > > m->port = 0xff;
> > > >
> > > > +    m->data_len = m->pkt_len;
> > > > m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> > > > RTE_PKTMBUF_HEADROOM : m->buf_len;
> > > > }
> > >
> > > This patch is in Pktgen 3.4.6
> > > >
> > > > 2) in virtio_rxtx.c, please see commit f1216c1eca5a5. net/virtio:
> > > > fix Tx packet length stats
> > > >
> > > > You could apply both of these patches and give it a try.
> > > >
> > > > Cheers
> > > > JJ
> > > >
> > > >
> > > >> -----Original Message-----
> > > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu,
> > > >> Xuekun
> > > >> Sent: Tuesday, January 9, 2018 2:38 PM
> > > >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> > > >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng
> > > >> <jianfeng.tan@intel.com>
> > > >> Cc: users@dpdk.org
> > > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > > >>
> > > >> Hi, Keith
> > > >>
> > > >> Any updates on this issue? We met similar behavior: ovs-dpdk
> > > >> reports received packets whose size increases in 12-byte
> > > >> increments until it exceeds 1518, at which point pktgen stops
> > > >> sending packets, while we only asked pktgen to generate 64B
> > > >> packets. And it only happens with two vhost-user ports in the
> > > >> same server. If pktgen is running in another server, there is
> > > >> no such issue.
> > > >>
> > > >> We tested the latest pktgen 3.4.6, and OVS-DPDK 2.8, with DPDK
> > > >> 17.11.
> > > >>
> > > >> We also found that qemu 2.8.1 and qemu 2.10 have this problem,
> > > >> while qemu 2.5 does not. So it seems to be a compatibility issue
> > > >> between pktgen/dpdk/qemu?
> > > >>
> > > >> Thanks.
> > > >> Thx, Xuekun
> > > >>
> > > >> -----Original Message-----
> > > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles,
> > > >> Keith
> > > >> Sent: Wednesday, May 03, 2017 4:24 AM
> > > >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> > > >> Cc: users@dpdk.org
> > > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > > >>
> > > >> Comments inline:
> > > >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> > > >>> <Gabriel.Ionescu@enea.com>
> > > >> wrote:
> > > >>>
> > > >>> Hi,
> > > >>>
> > > >>> I am using DPDK-Pktgen with an OVS bridge that has two
> > > >>> vHost-user ports
> > > >> and I am seeing an issue where Pktgen does not look like it
> > > >> generates packets correctly.
> > > >>>
> > > >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> > > >>>
> > > >>> The OVS bridge is created with:
> > > >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0
> > > >>> datapath_type=netdev ovs-vsctl add-port ovsbr0 vhost-user1 --
> > > >>> set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
> > > >>> ovs-vsctl add-port ovsbr0
> > > >>> vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
> > > >>> ofport_request=2 ovs-ofctl add-flow ovsbr0
> > > >>> in_port=1,action=output:2 ovs-ofctl add-flow ovsbr0
> > > >>> in_port=2,action=output:1
> > > >>>
> > > >>> DPDK-Pktgen is launched with the following command so that
> > > >>> packets
> > > >> generated through port 0 are received by port 1 and vice versa:
> > > >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> > > >>>
> > > >> --vdev=virtio_user0,path=/tmp/vhost-user1 \
> > > >>>
> > > >> --vdev=virtio_user1,path=/tmp/vhost-user2 \
> > > >>>                               -- -P -m "[0:1].0, [2:3].1"
> > > >>
> > > >> The above command line is wrong, as Pktgen needs or takes the
> > > >> first lcore for display output and timers. I would not use -c
> > > >> 0xF, but -l 1-5 instead, as it is a lot easier to understand IMO.
> > > >> With the option -l 1-5 you are using 5 lcores (skipping lcore 0
> > > >> in a 6-lcore VM): one for Pktgen and 4 for the two ports, i.e.
> > > >> -m [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen itself. I am
> > > >> concerned you did not see some performance or lockup problem. I
> > > >> really need to add a test for these types of problems :-( You can
> > > >> also give the VM just 5 lcores, in which case pktgen shares lcore
> > > >> 0 with Linux, using the -l 0-4 option.
> > > >>
> > > >> When requested to send 64-byte frames, Pktgen sends a 60-byte
> > > >> payload + a 4-byte Frame Checksum. This does work, so the problem
> > > >> must be in how vhost-user is testing for the packet size. In the
> > > >> mbuf you have the payload size and the buffer size. The buffer
> > > >> size could be 1524, but the payload or frame size will be 60
> > > >> bytes, as the 4-byte FCS is appended to the frame by the hardware.
> > > >> It seems to me that vhost-user is not looking at the correct
> > > >> struct rte_mbuf member variable in its testing.
> > > >>
> > > >>>
> > > >>> In Pktgen, the default settings are used for both ports:
> > > >>>
> > > >>> -          Tx Count: Forever
> > > >>>
> > > >>> -          Rate: 100%
> > > >>>
> > > >>> -          PktSize: 64
> > > >>>
> > > >>> -          Tx Burst: 32
> > > >>>
> > > >>> Whenever I start generating packets through one of the ports (in
> > > >>> this
> > > >> example port 0 by running start 0), the OVS logs throw warnings
> > > >> similar to:
> > > >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956
> > > >>> log messages in last 49 seconds (most recently, 41 seconds ago)
> > > >>> due to excessive rate
> > > >>> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > > >>> Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > > >>> Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > > >>> Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > > >>> Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988
> > > >>> log messages in last 11 seconds (most recently, 0 seconds ago)
> > > >>> due to excessive rate
> > > >>> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2:
> > > >>> Too big size 57564 max_packet_len 1518
> > > >>> Port 1 does not receive any packets.
> > > >>>
> > > >>> When running Pktgen with the -socket-mem option (e.g.
> > > >>> --socket-mem 512),
> > > >> the behavior is different, but with the same warnings thrown by OVS:
> > > >> port 1 receives some packets, but with different sizes, even
> > > >> though they are generated on port 0 with a 64B size:
> > > >>> Flags:Port      :   P--------------:0   P--------------:1
> > > >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
> > > >>> Pkts/s Max/Rx     :                 0/0             35136/0               35136/0
> > > >>>        Max/Tx     :        238144/25504                 0/0          238144/25504
> > > >>> MBits/s Rx/Tx     :             0/13270                 0/0               0/13270
> > > >>> Broadcast         :                   0                   0
> > > >>> Multicast         :                   0                   0
> > > >>>   64 Bytes        :                   0                 288
> > > >>>   65-127          :                   0                1440
> > > >>>   128-255         :                   0                2880
> > > >>>   256-511         :                   0                6336
> > > >>>   512-1023        :                   0               12096
> > > >>>   1024-1518       :                   0               12096
> > > >>> Runts/Jumbos      :                 0/0                 0/0
> > > >>> Errors Rx/Tx      :                 0/0                 0/0
> > > >>> Total Rx Pkts     :                   0               35136
> > > >>>       Tx Pkts     :             1571584                   0
> > > >>>       Rx MBs      :                   0                 227
> > > >>>       Tx MBs      :              412777                   0
> > > >>> ARP/ICMP Pkts     :                 0/0                 0/0
> > > >>>                   :
> > > >>> Pattern Type      :             abcd...             abcd...
> > > >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> > > >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> > > >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> > > >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> > > >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> > > >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> > > >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> > > >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> > > >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> > > >>>
> > > >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> > > >>> -------------------
> > > >>>
> > > >>> If packets are generated from an external source and testpmd is
> > > >>> used to forward traffic between the two vHost-user ports, the
> > > >>> warnings are not thrown by the OVS bridge.
> > > >>>
> > > >>> Should this setup work?
> > > >>> Is this an issue or am I setting something up wrong?
> > > >>>
> > > >>> Thank you,
> > > >>> Gabriel Ionescu
> > > >>
> > > >> Regards,
> > > >> Keith
> > > >
> > >
> > > Regards,
> > > Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-users] 答复: RE: RE: Re:  Issue with Pktgen and OVS-DPDK
  2018-01-11 11:13                     ` [dpdk-users] " Chen, Junjie J
@ 2018-01-11 11:24                       ` wang.yong19
  0 siblings, 0 replies; 17+ messages in thread
From: wang.yong19 @ 2018-01-11 11:24 UTC (permalink / raw)
  To: junjie.j.chen
  Cc: qin.chunhua, xuekun.hu, keith.wiles, Gabriel.Ionescu,
	jianfeng.tan, users

I have chosen the plain text format every time.
Maybe there is something wrong with our email app. 
Anyway, thank you!

------------------ original message ------------------
From: <junjie.j.chen@intel.com>;
To: Wang Yong (10032886);
Cc: Qin Chunhua (10013690); <xuekun.hu@intel.com>; <keith.wiles@intel.com>; <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
Date: 2018-01-11 19:13
Subject: RE: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
Great, it would be better to send your email in plain text format, so that others can read it in most email clients.

Cheers
JJ


> -----Original Message-----
> From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> Sent: Thursday, January 11, 2018 6:51 PM
> To: Chen, Junjie J <junjie.j.chen@intel.com>
> Cc: qin.chunhua@zte.com.cn; Hu, Xuekun <xuekun.hu@intel.com>; Wiles,
> Keith <keith.wiles@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: 答复: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> This patch works in our VMs.
> We really appreciate your help!
>
>
> ------------------ original message ------------------
> From: <junjie.j.chen@intel.com>;
> To: Qin Chunhua (10013690);
> Cc: <xuekun.hu@intel.com>; Wang Yong (10032886); <keith.wiles@intel.com>; <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
> Date: 2018-01-11 17:35
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Could you please try this patch for app/pktgen.c:
>
> @@ -877,6 +877,7 @@ pktgen_setup_cb(struct rte_mempool *mp,
> {
>         pkt_data_t *data = (pkt_data_t *)opaque;
>         struct rte_mbuf *m = (struct rte_mbuf *)obj;
> +       pktmbuf_reset(m);
>         port_info_t *info;
>         pkt_seq_t *pkt;
>         uint16_t qid;
>
>
> it works on my setup.
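>
> For context: pktgen_setup_cb() looks like a rte_mempool_obj_iter()
> callback that runs once per mbuf in the pool. A rough sketch of that
> pattern (only the rte_* calls are real DPDK API; the rest is
> illustrative):
>
> #include <rte_mempool.h>
> #include <rte_mbuf.h>
>
> /* Runs once per object when passed to rte_mempool_obj_iter(). */
> static void
> setup_cb(struct rte_mempool *mp, void *opaque, void *obj, unsigned idx)
> {
>         struct rte_mbuf *m = (struct rte_mbuf *)obj;
>
>         /* Reset the length/offset fields first so stale
>          * data_len/pkt_len values cannot leak into the packet
>          * template built below. */
>         rte_pktmbuf_reset(m);
>
>         /* ... build the packet template in m here ... */
> }
>
> /* Usage: rte_mempool_obj_iter(mp, setup_cb, NULL); */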
>
> Cheers
> JJ
>
>
> > -----Original Message-----
> > From: qin.chunhua@zte.com.cn [mailto:qin.chunhua@zte.com.cn]
> > Sent: Wednesday, January 10, 2018 7:45 PM
> > To: Chen, Junjie J <junjie.j.chen@intel.com>
> > Cc: Hu, Xuekun <xuekun.hu@intel.com>; wang.yong19@zte.com.cn;
> Wiles,
> > Keith <keith.wiles@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> > <jianfeng.tan@intel.com>; users@dpdk.org
> > Subject: 答复: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Hi,
> > Thanks a lot for your advice.
> > We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two
> > patches below applied, and the problem was resolved.
> > Now we have hit a new problem in the same setup. We set the MAC of
> > the virtio port before starting to generate the flow.
> > At first, everything is OK. Then we stop the flow and restart the
> > same flow without any other modifications.
> > We found that the source MAC of the flow was different from what we
> > had set on the virtio port.
> > Moreover, the source MAC is different every time we restart the flow.
> > What's going on? Do you know of any patches that fix this problem,
> > given that we can't change the virtio version?
> > Looking forward to receiving your reply. Thank you!
> >
> >
> >
> > ------------------ original message ------------------
> > From: <junjie.j.chen@intel.com>;
> > To: <xuekun.hu@intel.com>; Wang Yong (10032886); <keith.wiles@intel.com>;
> > Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
> > Date: 2018-01-10 09:47
> > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Starting from QEMU 2.7, virtio defaults to 1.0 instead of 0.9, which
> > adds a flag (VIRTIO_F_VERSION_1) to the device features.
> >
> > Actually, QEMU uses disable-legacy=on,disable-modern=off to support
> > virtio 1.0, and disable-legacy=off,disable-modern=on to support
> > virtio 0.9. So you can use virtio 0.9 on QEMU 2.7+ to work around this.
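> >
> > For example, forcing virtio 0.9 (legacy) for a vhost-user NIC on
> > QEMU 2.7+ (a sketch only; the chardev/netdev ids and socket path are
> > placeholders, and the other VM arguments are omitted):
> >
> > qemu-system-x86_64 ... \
> >   -chardev socket,id=char1,path=/tmp/vhost-user1 \
> >   -netdev type=vhost-user,id=net1,chardev=char1 \
> >   -device virtio-net-pci,netdev=net1,disable-legacy=off,disable-modern=on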
> >
> > Cheers
> > JJ
> >
> >
> > > -----Original Message-----
> > > From: Hu, Xuekun
> > > Sent: Wednesday, January 10, 2018 9:32 AM
> > > To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> > > Cc: Chen, Junjie J <junjie.j.chen@intel.com>;
> > > Gabriel.Ionescu@enea.com; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > > users@dpdk.org
> > > Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > > Maybe the new QEMU (starting from 2.8) introduced some new features
> > > that break pktgen and DPDK compatibility?
> > >
> > > -----Original Message-----
> > > From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> > > Sent: Tuesday, January 09, 2018 10:30 PM
> > > To: Wiles, Keith <keith.wiles@intel.com>
> > > Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> > > <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> > > <jianfeng.tan@intel.com>; users@dpdk.org
> > > Subject: 答复: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > > Hi,
> > > We use pktgen-3.0.10 + dpdk-17.02.1 with the two patches below
> > > applied, and the problem is resolved.
> > > But when we use pktgen-3.4.6 + dpdk-17.11 (where the two patches
> > > below are already included), the problem remains.
> > > It seems that there is still something wrong with pktgen-3.4.6 and
> > > dpdk-17.11.
> > >
> > >
> > > ------------------ original message ------------------
> > > From: <keith.wiles@intel.com>;
> > > To: <junjie.j.chen@intel.com>;
> > > Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
> > > Date: 2018-01-09 22:04
> > > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > >
> > > > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
> > > >
> > > > Hi
> > > > There are two defects may cause this issue:
> > > >
> > > > 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix
> > > > low performance in VM virtio pmd mode
> > > >
> > > > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > > > index 759f95d..93065f6 100644
> > > > --- a/lib/common/mbuf.h
> > > > +++ b/lib/common/mbuf.h
> > > > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> > > >         m->nb_segs = 1;
> > > >         m->port = 0xff;
> > > >
> > > > +       m->data_len = m->pkt_len;
> > > >         m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> > > >                 RTE_PKTMBUF_HEADROOM : m->buf_len;
> > > > }
> > >
> > > This patch is in Pktgen 3.4.6
> > > >
> > > > 2) In virtio_rxtx.c, see commit f1216c1eca5a5 ("net/virtio:
> > > > fix Tx packet length stats").
> > > >
> > > > You could apply both of these patches and try it.
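> > > >
> > > > For example, from each source tree root, something like:
> > > >     patch -p1 < fix.patch
> > > > (fix.patch being a placeholder for the saved patch file).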
> > > >
> > > > Cheers
> > > > JJ
> > > >
> > > >
> > > >> -----Original Message-----
> > > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu,
> > > >> Xuekun
> > > >> Sent: Tuesday, January 9, 2018 2:38 PM
> > > >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> > > >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng
> > > >> <jianfeng.tan@intel.com>
> > > >> Cc: users@dpdk.org
> > > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > > >>
> > > >> Hi, Keith
> > > >>
> > > >> Any updates on this issue? We see similar behavior: OVS-DPDK
> > > >> reports received packet sizes growing in 12-byte increments until
> > > >> they exceed 1518, at which point pktgen stops sending packets,
> > > >> even though we only ask pktgen to generate 64B packets. And it
> > > >> only happens with two vhost-user ports on the same server; if
> > > >> pktgen runs on another server, there is no such issue.
> > > >>
> > > >> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
> > > >>
> > > >> We also found that QEMU 2.8.1 and QEMU 2.10 have this problem,
> > > >> while QEMU 2.5 does not. So it seems like a compatibility issue
> > > >> between pktgen, DPDK, and QEMU?
> > > >>
> > > >> Thanks.
> > > >> Thx, Xuekun
> > > >>
> > > >> -----Original Message-----
> > > >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles,
> > > >> Keith
> > > >> Sent: Wednesday, May 03, 2017 4:24 AM
> > > >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> > > >> Cc: users@dpdk.org
> > > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > > >>
> > > >> Comments inline:
> > > >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu <Gabriel.Ionescu@enea.com> wrote:
> > > >>>
> > > >>> Hi,
> > > >>>
> > > >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> > > >>> ports, and I am seeing an issue where Pktgen does not appear to
> > > >>> generate packets correctly.
> > > >>>
> > > >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> > > >>>
> > > >>> The OVS bridge is created with:
> > > >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> > > >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
> > > >>> ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser ofport_request=2
> > > >>> ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> > > >>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> > > >>>
> > > >>> DPDK-Pktgen is launched with the following command so that packets
> > > >>> generated through port 0 are received by port 1 and vice versa:
> > > >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> > > >>>        --vdev=virtio_user0,path=/tmp/vhost-user1 \
> > > >>>        --vdev=virtio_user1,path=/tmp/vhost-user2 \
> > > >>>        -- -P -m "[0:1].0, [2:3].1"
> > > >>
> > > >> The above command line is wrong, as Pktgen needs or takes the
> > > >> first lcore for display output and timers. I would not use
> > > >> -c 0xF, but -l 1-5 instead, as it is a lot easier to understand
> > > >> IMO. With -l 1-5 you are using 5 lcores (skipping lcore 0 in a
> > > >> 6-lcore VM): one for Pktgen and 4 for the two ports. Use -m
> > > >> [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen itself, and I am
> > > >> surprised you did not see a performance or lockup problem. I
> > > >> really need to add a test for these types of problems :-( You can
> > > >> also give the VM just 5 lcores, in which case Pktgen shares lcore 0
> > > >> with Linux via the -l 0-4 option.
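> > > >>
> > > >> A corrected invocation along those lines might look like this
> > > >> (sketch only, reusing the vdev arguments from the original
> > > >> command):
> > > >>
> > > >> pktgen -l 1-5 --file-prefix pktgen --no-pci \
> > > >>        --vdev=virtio_user0,path=/tmp/vhost-user1 \
> > > >>        --vdev=virtio_user1,path=/tmp/vhost-user2 \
> > > >>        -- -P -m "[2:3].0, [4:5].1"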
> > > >>
> > > >> When Pktgen is asked to send 64-byte frames, it sends a 60-byte
> > > >> payload + a 4-byte Frame Checksum (FCS). This does work, so the
> > > >> problem must be in how vhost-user is testing the packet size. In
> > > >> the mbuf you have the payload size and the buffer size. The buffer
> > > >> size could be 1524, but the payload or frame size will be 60 bytes,
> > > >> as the 4-byte FCS is appended to the frame by the hardware. It
> > > >> seems to me that vhost-user is not looking at the correct struct
> > > >> rte_mbuf member variable in its testing.
> > > >>
> > > >>>
> > > >>> In Pktgen, the default settings are used for both ports:
> > > >>>
> > > >>> -          Tx Count: Forever
> > > >>>
> > > >>> -          Rate: 100%
> > > >>>
> > > >>> -          PktSize: 64
> > > >>>
> > > >>> -          Tx Burst: 32
> > > >>>
> > > >>> Whenever I start generating packets through one of the ports (in
> > > >>> this example port 0 by running start 0), the OVS logs throw
> > > >>> warnings similar to:
> > > >>>
> > > >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956 log messages in last 49 seconds (most recently, 41 seconds ago) due to excessive rate
> > > >>> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988 log messages in last 11 seconds (most recently, 0 seconds ago) due to excessive rate
> > > >>> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 57564 max_packet_len 1518
> > > >>>
> > > >>> Port 1 does not receive any packets.
> > > >>>
> > > >>> When running Pktgen with the --socket-mem option (e.g.
> > > >>> --socket-mem 512), the behavior is different, but the same
> > > >>> warnings are thrown by OVS: port 1 receives some packets, but
> > > >>> with different sizes, even though they are generated on port 0
> > > >>> with a 64-byte size:
> > > >>> Flags:Port        :   P--------------:0   P--------------:1
> > > >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
> > > >>> Pkts/s Max/Rx     :                 0/0             35136/0               35136/0
> > > >>>        Max/Tx     :        238144/25504                 0/0          238144/25504
> > > >>> MBits/s Rx/Tx     :             0/13270                 0/0               0/13270
> > > >>> Broadcast         :                   0                   0
> > > >>> Multicast         :                   0                   0
> > > >>>   64 Bytes        :                   0                 288
> > > >>>   65-127          :                   0                1440
> > > >>>   128-255         :                   0                2880
> > > >>>   256-511         :                   0                6336
> > > >>>   512-1023        :                   0               12096
> > > >>>   1024-1518       :                   0               12096
> > > >>> Runts/Jumbos      :                 0/0                 0/0
> > > >>> Errors Rx/Tx      :                 0/0                 0/0
> > > >>> Total Rx Pkts     :                   0               35136
> > > >>>       Tx Pkts     :             1571584                   0
> > > >>>       Rx MBs      :                   0                 227
> > > >>>       Tx MBs      :              412777                   0
> > > >>> ARP/ICMP Pkts     :                 0/0                 0/0
> > > >>>                 :
> > > >>> Pattern Type      :             abcd...             abcd...
> > > >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> > > >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> > > >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> > > >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> > > >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> > > >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> > > >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> > > >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> > > >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> > > >>>
> > > >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK
> > > >>> -------------------
> > > >>>
> > > >>> If packets are generated from an external source and testpmd is
> > > >>> used to forward traffic between the two vHost-user ports, the
> > > >>> warnings are not thrown by the OVS bridge.
> > > >>>
> > > >>> Should this setup work?
> > > >>> Is this an issue or am I setting something up wrong?
> > > >>>
> > > >>> Thank you,
> > > >>> Gabriel Ionescu
> > > >>
> > > >> Regards,
> > > >> Keith
> > > >
> > >
> > > Regards,
> > > Keith

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-01-11 11:24 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-02 12:20 [dpdk-users] Issue with Pktgen and OVS-DPDK Gabriel Ionescu
2017-05-02 20:24 ` Wiles, Keith
2018-01-09  6:38   ` Hu, Xuekun
2018-01-09 13:00     ` Chen, Junjie J
2018-01-09 13:43       ` [dpdk-users] 答复: " wang.yong19
2018-01-09 14:04       ` [dpdk-users] " Wiles, Keith
2018-01-09 14:29         ` [dpdk-users] 答复: " wang.yong19
2018-01-10  1:32           ` [dpdk-users] " Hu, Xuekun
2018-01-10  1:46             ` Chen, Junjie J
2018-01-10  9:49               ` [dpdk-users] 答复: RE: " wang.yong19
2018-01-10 10:15               ` wang.yong19
2018-01-10 11:44               ` [dpdk-users] 答复: " qin.chunhua
2018-01-10 14:01                 ` [dpdk-users] " Wiles, Keith
2018-01-11  9:35                 ` Chen, Junjie J
2018-01-11 10:51                   ` [dpdk-users] 答复: RE: " wang.yong19
2018-01-11 11:13                     ` [dpdk-users] " Chen, Junjie J
2018-01-11 11:24                       ` [dpdk-users] 答复: RE: RE: " wang.yong19

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).