DPDK usage discussions
From: <wang.yong19@zte.com.cn>
To: <junjie.j.chen@intel.com>
Cc: <xuekun.hu@intel.com>, <keith.wiles@intel.com>,
	<Gabriel.Ionescu@enea.com>, <jianfeng.tan@intel.com>,
	<users@dpdk.org>
Subject: [dpdk-users] Re: RE: Re: Issue with Pktgen and OVS-DPDK
Date: Wed, 10 Jan 2018 18:15:52 +0800 (CST)
Message-ID: <201801101815521410260@zte.com.cn>
In-Reply-To: <AA85A5A5E706C44BACB0BEFD5AC08BF631329FCB@SHSMSX101.ccr.corp.intel.com>

Hi,
Thanks a lot for your advice.
We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two patches below applied, and the problem was resolved.
Now we have hit a new problem in the same setup. We set the MAC address of the virtio port before starting to generate a flow.
At first everything is OK. Then we stop the flow and restart the same flow without any other modifications.
We found that the source MAC of the flow differs from the one we had set on the virtio port.
Moreover, the source MAC is different every time we restart the flow.
What is going on? Do you know of any patches that fix this problem, given that we cannot change the virtio version?
We look forward to your reply. Thank you!
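
For reference, a minimal console sequence that reproduces it (we assume pktgen's "set <portlist> src mac <addr>" syntax here; the exact command may differ in our older release, and the MAC value is only an example):

    set 0 src mac 00:11:22:33:44:55
    start 0        <- source MAC of generated packets matches what was set
    stop 0
    start 0        <- source MAC differs, and changes on every restart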


------------------origin------------------
From: <junjie.j.chen@intel.com>;
To: <xuekun.hu@intel.com>; Wang Yong 10032886; <keith.wiles@intel.com>;
Cc: <Gabriel.Ionescu@enea.com>; <jianfeng.tan@intel.com>; <users@dpdk.org>;
Date: January 10, 2018 09:47
Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which adds the VIRTIO_F_VERSION_1 flag to the device features.

qemu uses disable-legacy=on,disable-modern=off to expose virtio 1.0, and disable-legacy=off,disable-modern=on to expose virtio 0.9. So you can force virtio 0.9 on qemu 2.7+ to work around this.
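
For illustration, a guest NIC pinned to legacy virtio 0.9 can be declared like this (the chardev path and id names below are placeholders, not taken from this thread):

    qemu-system-x86_64 ... \
        -chardev socket,id=char1,path=/tmp/vhost-user1 \
        -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
        -device virtio-net-pci,netdev=net1,disable-legacy=off,disable-modern=on

Inside the guest, if the kernel exposes it, the negotiated feature bits can be checked in /sys/bus/virtio/devices/virtio0/features (bit 32 is VIRTIO_F_VERSION_1).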

Cheers
JJ


> -----Original Message-----
> From: Hu, Xuekun
> Sent: Wednesday, January 10, 2018 9:32 AM
> To: wang.yong19@zte.com.cn; Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Gabriel.Ionescu@enea.com; Tan,
> Jianfeng <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Maybe the newer qemu (starting from 2.8) introduced some new features that
> break compatibility between pktgen and dpdk?
>
> -----Original Message-----
> From: wang.yong19@zte.com.cn [mailto:wang.yong19@zte.com.cn]
> Sent: Tuesday, January 09, 2018 10:30 PM
> To: Wiles, Keith <keith.wiles@intel.com>
> Cc: Chen, Junjie J <junjie.j.chen@intel.com>; Hu, Xuekun
> <xuekun.hu@intel.com>; Gabriel.Ionescu@enea.com; Tan, Jianfeng
> <jianfeng.tan@intel.com>; users@dpdk.org
> Subject: Re: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
> Hi,
> We used pktgen-3.0.10 + dpdk-17.02.1 with the two patches below applied,
> and the problem was resolved.
> But when we use pktgen-3.4.6 + dpdk-17.11 (which already includes the two
> patches below), the problem remains.
> It seems that something is still wrong with pktgen-3.4.6 and dpdk-17.11.
>
>
> ------------------origin------------------
> From: <keith.wiles@intel.com>;
> To: <junjie.j.chen@intel.com>;
> Cc: <xuekun.hu@intel.com>; <Gabriel.Ionescu@enea.com>;
> <jianfeng.tan@intel.com>; <users@dpdk.org>;
> Date: January 9, 2018 22:04
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
>
>
> > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J <junjie.j.chen@intel.com> wrote:
> >
> > Hi,
> > There are two defects that may cause this issue:
> >
> > 1) In pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low
> > performance in VM virtio pmd mode
> >
> > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > index 759f95d..93065f6 100644
> > --- a/lib/common/mbuf.h
> > +++ b/lib/common/mbuf.h
> > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> >  	m->nb_segs = 1;
> >  	m->port = 0xff;
> >
> > +	m->data_len = m->pkt_len;
> >  	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> >  		RTE_PKTMBUF_HEADROOM : m->buf_len;
> >  }
>
> This patch is in Pktgen 3.4.6
> >
> > 2) In virtio_rxtx.c, see DPDK commit f1216c1eca5a5 ("net/virtio: fix
> > Tx packet length stats").
> >
> > You could apply both of these patches and try it.
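> >
> > For example, assuming a full git clone of DPDK (so that the commit is
> > reachable in history), the virtio fix could be pulled onto your tree
> > with:
> >
> >     git cherry-pick f1216c1eca5a5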
> >
> > Cheers
> > JJ
> >
> >
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Hu, Xuekun
> >> Sent: Tuesday, January 9, 2018 2:38 PM
> >> To: Wiles, Keith <keith.wiles@intel.com>; Gabriel Ionescu
> >> <Gabriel.Ionescu@enea.com>; Tan, Jianfeng <jianfeng.tan@intel.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Hi, Keith
> >>
> >> Any updates on this issue? We see similar behavior: OVS-DPDK reports
> >> that it receives packets whose size grows in 12-byte increments until
> >> it exceeds 1518, at which point pktgen stops sending packets, even
> >> though we only ask pktgen to generate 64B packets. It only happens
> >> with two vhost-user ports in the same server; if pktgen runs on
> >> another server, there is no such issue.
> >>
> >> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
> >>
> >> We also found that qemu 2.8.1 and qemu 2.10 have this problem, while
> >> qemu 2.5 does not. So it seems to be a compatibility issue between
> >> pktgen, dpdk and qemu?
> >>
> >> Thanks.
> >> Thx, Xuekun
> >>
> >> -----Original Message-----
> >> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Wiles, Keith
> >> Sent: Wednesday, May 03, 2017 4:24 AM
> >> To: Gabriel Ionescu <Gabriel.Ionescu@enea.com>
> >> Cc: users@dpdk.org
> >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >>
> >> Comments inline:
> >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> >>> <Gabriel.Ionescu@enea.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am using DPDK-Pktgen with an OVS bridge that has two vHost-user
> >>> ports, and I am seeing an issue where Pktgen does not appear to
> >>> generate packets correctly.
> >>>
> >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> >>>
> >>> The OVS bridge is created with:
> >>>
> >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
> >>> ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser ofport_request=2
> >>> ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> >>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
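> >>>
> >>> As a sanity check of the wiring, ovs-ofctl dump-flows ovsbr0 should
> >>> list both flows, with n_packets increasing once traffic is started.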
> >>>
> >>> DPDK-Pktgen is launched with the following command, so that packets
> >>> generated through port 0 are received by port 1 and vice versa:
> >>>
> >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> >>>     --vdev=virtio_user0,path=/tmp/vhost-user1 \
> >>>     --vdev=virtio_user1,path=/tmp/vhost-user2 \
> >>>     -- -P -m "[0:1].0, [2:3].1"
> >>
> >> The above command line is wrong, as Pktgen needs the first lcore for
> >> display output and timers. I would not use -c 0xF, but -l 1-5
> >> instead, as it is a lot easier to understand IMO. With -l 1-5 you are
> >> using 5 lcores (skipping lcore 0 in a 6-lcore VM): one for Pktgen and
> >> 4 for the two ports, i.e. -m [2:3].0 -m [4:5].1, leaving lcore 1 for
> >> Pktgen to use. I am concerned that you did not see some performance
> >> or lockup problem. I really need to add a test for these types of
> >> problem :-( You can also give the VM just 5 lcores, in which case
> >> Pktgen shares lcore 0 with Linux, using the -l 0-4 option.
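> >>
> >> Putting that together, the launch line would look something like this
> >> (same vdevs as yours; just a sketch of the mapping described above):
> >>
> >> pktgen -l 1-5 --file-prefix pktgen --no-pci \
> >>     --vdev=virtio_user0,path=/tmp/vhost-user1 \
> >>     --vdev=virtio_user1,path=/tmp/vhost-user2 \
> >>     -- -P -m "[2:3].0" -m "[4:5].1"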
> >>
> >> When Pktgen is asked to send 64-byte frames, it sends a 60-byte
> >> payload + 4-byte Frame Checksum (FCS). This does work, so the problem
> >> must be in how vhost-user is testing the packet size. In the mbuf you
> >> have the payload size and the buffer size. The buffer size could be
> >> 1524, but the payload or frame size will be 60 bytes, as the 4-byte
> >> FCS is appended to the frame by the hardware. It seems to me that
> >> vhost-user is not looking at the correct struct rte_mbuf member
> >> variable in its testing.
> >>
> >>>
> >>> In Pktgen, the default settings are used for both ports:
> >>>
> >>> - Tx Count: Forever
> >>> - Rate: 100%
> >>> - PktSize: 64
> >>> - Tx Burst: 32
> >>>
> >>> Whenever I start generating packets through one of the ports (in
> >>> this example port 0, by running start 0), the OVS logs throw
> >>> warnings similar to:
> >>>
> >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956 log messages in last 49 seconds (most recently, 41 seconds ago) due to excessive rate
> >>> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988 log messages in last 11 seconds (most recently, 0 seconds ago) due to excessive rate
> >>> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 57564 max_packet_len 1518
> >>>
> >>> Port 1 does not receive any packets.
> >>>
> >>> When running Pktgen with the --socket-mem option (e.g. --socket-mem
> >>> 512), the behavior is different, but OVS throws the same warnings:
> >>> port 1 receives some packets, but with varying sizes, even though
> >>> they are generated on port 0 with a 64B size:
> >>> Flags:Port        :   P--------------:0   P--------------:1
> >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
> >>> Pkts/s Max/Rx     :                 0/0             35136/0               35136/0
> >>>        Max/Tx     :        238144/25504                 0/0          238144/25504
> >>> MBits/s Rx/Tx     :             0/13270                 0/0               0/13270
> >>> Broadcast         :                   0                   0
> >>> Multicast         :                   0                   0
> >>>   64 Bytes        :                   0                 288
> >>>   65-127          :                   0                1440
> >>>   128-255         :                   0                2880
> >>>   256-511         :                   0                6336
> >>>   512-1023        :                   0               12096
> >>>   1024-1518       :                   0               12096
> >>> Runts/Jumbos      :                 0/0                 0/0
> >>> Errors Rx/Tx      :                 0/0                 0/0
> >>> Total Rx Pkts     :                   0               35136
> >>>       Tx Pkts     :             1571584                   0
> >>>       Rx MBs      :                   0                 227
> >>>       Tx MBs      :              412777                   0
> >>> ARP/ICMP Pkts     :                 0/0                 0/0
> >>>                   :
> >>> Pattern Type      :             abcd...             abcd...
> >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> >>>
> >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK -------------------
> >>>
> >>> If packets are generated from an external source and testpmd is
> >>> used to forward traffic between the two vHost-user ports, the OVS
> >>> bridge does not throw the warnings.
> >>>
> >>> Should this setup work?
> >>> Is this an issue or am I setting something up wrong?
> >>>
> >>> Thank you,
> >>> Gabriel Ionescu
> >>
> >> Regards,
> >> Keith
> >
>
> Regards,
> Keith

Thread overview: 17+ messages

2017-05-02 12:20 [dpdk-users] " Gabriel Ionescu
2017-05-02 20:24 ` Wiles, Keith
2018-01-09  6:38   ` Hu, Xuekun
2018-01-09 13:00     ` Chen, Junjie J
2018-01-09 13:43       ` [dpdk-users] Re: " wang.yong19
2018-01-09 14:04       ` [dpdk-users] " Wiles, Keith
2018-01-09 14:29         ` [dpdk-users] Re: " wang.yong19
2018-01-10  1:32           ` [dpdk-users] " Hu, Xuekun
2018-01-10  1:46             ` Chen, Junjie J
2018-01-10  9:49               ` [dpdk-users] Re: RE: " wang.yong19
2018-01-10 10:15               ` wang.yong19 [this message]
2018-01-10 11:44               ` [dpdk-users] Re: " qin.chunhua
2018-01-10 14:01                 ` [dpdk-users] " Wiles, Keith
2018-01-11  9:35                 ` Chen, Junjie J
2018-01-11 10:51                   ` [dpdk-users] Re: RE: " wang.yong19
2018-01-11 11:13                     ` [dpdk-users] " Chen, Junjie J
2018-01-11 11:24                       ` [dpdk-users] Re: RE: RE: " wang.yong19
