DPDK patches and discussions
From: "Wiles, Keith" <keith.wiles@intel.com>
To: Damien Clabaut <damien.clabaut@corp.ovh.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
Date: Thu, 28 Sep 2017 20:44:26 +0000	[thread overview]
Message-ID: <D74C7355-B8EF-40F5-BC4C-E98E59215EF0@intel.com> (raw)
In-Reply-To: <5cfff93d-213c-af2b-431f-437cd2eec6fc@corp.ovh.com>


> On Sep 26, 2017, at 8:09 AM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
> 
> Hello Keith and thank you for your answer,
> 
> The goal is indeed to generate as much traffic per machine as possible (we use pktgen-dpdk to benchmark datacenter routers before putting them on production).
> 
> For this we use all available CPU power to send packets.
> 
> Following your suggestion, I modified my command to:
> 
> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap


I just noticed you are sending 8500-byte frames; you will have to modify Pktgen to increase the size of the mbufs in the mempool. I only configure the mbufs as 1518-byte buffers (really 2048 bytes, but I only deal with a 1518-byte max frame size). The size can be changed, but I am not next to a machine right now.
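
For illustration, a rough sketch of the kind of mempool change this involves; the pool name, mbuf count, and cache size below are placeholders, not Pktgen's actual values:

    /*
     * Sketch only, not Pktgen's code: create a pktmbuf pool whose data
     * room can hold an 8500-byte frame in a single segment, instead of
     * the default ~2048-byte buffers.
     */
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* 9216 bytes of data room plus the standard mbuf headroom. */
    #define JUMBO_DATA_ROOM (9216 + RTE_PKTMBUF_HEADROOM)

    static struct rte_mempool *
    create_jumbo_pool(void)
    {
        return rte_pktmbuf_pool_create("jumbo-tx-pool",  /* placeholder name     */
                                       8192,             /* number of mbufs      */
                                       256,              /* per-lcore cache size */
                                       0,                /* private data size    */
                                       JUMBO_DATA_ROOM,  /* data room per mbuf   */
                                       rte_socket_id());
    }

An alternative that avoids enlarging the buffers would be chaining multiple mbufs per frame, if the PMD supports multi-segment transmit.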

> 
> The issue is still reproduced, though with slightly lower performance (reaching line rate at 8500 bytes per packet does not require much processing power).
> 
> #sh int et 29/1 | i rate
>   5 seconds input rate 36.2 Gbps (90.5% with framing overhead), 0 packets/sec
>   5 seconds output rate 56 bps (0.0% with framing overhead), 0 packets/sec
> 
> Regards,
> 
> PS: Sorry for answering you directly; I am sending this message a second time on the ML.
> 
> 
> On 2017-09-25 09:46 PM, Wiles, Keith wrote:
>>> On Sep 25, 2017, at 6:19 PM, Damien Clabaut <damien.clabaut@corp.ovh.com> wrote:
>>> 
>>> Hello DPDK devs,
>>> 
>>> I am sending this message here as I did not find a bug tracker on the website.
>>> 
>>> If this is the wrong place, I apologize and kindly ask you to redirect me to the proper one.
>>> 
>>> Thank you.
>>> 
>>> Description of the issue:
>>> 
>>> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
>>> 
>>> The packet in question is generated using this Scapy command:
>>> 
>>> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
>>> 
>>> The pcap is then replayed in pktgen-dpdk:
>>> 
>>> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap
>> This could be the issue, as I cannot set up your system (no cards). The pktgen command line is using cores 1-7 for TX/RX of packets. This tells pktgen to send the pcap from each core, so the packet will be sent from every core. If you set the TX/RX core mapping to 1.0, you should only see one. I assume you are using cores 1-7 to push the bit rate closer to the performance of the card.
>> 
>>> When I run this on a machine with Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic gets a strange behaviour
>>> 
>>> #sh int et29/1 | i rate
>>>   5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
>>> 
>>> A capture of this traffic (I used a monitor session to redirect all traffic to a different port, connected to a machine on which I ran tcpdump) gives me this:
>>> 
>>> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 
>>> The issue cannot be reproduced if any of the following conditions is met:
>>> 
>>> - Set the size in the Raw(RandBin()) to a value lower than 1500
>>> 
>>> - Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).
>>> 
>>> Is this a known problem?
>>> 
>>> I remain available for any question you may have.
>>> 
>>> Regards,
>>> 
>>> -- 
>>> Damien Clabaut
>>> R&D vRouter
>>> ovh.qc.ca
>>> 
>> Regards,
>> Keith
>> 
> 
> -- 
> Damien Clabaut
> R&D vRouter
> ovh.qc.ca
> 

Regards,
Keith

Thread overview: 8+ messages
2017-09-25 17:19 Damien Clabaut
2017-09-26  0:46 ` Yongseok Koh
2017-09-26 12:43   ` Damien Clabaut
2017-09-26  1:46 ` Wiles, Keith
2017-09-26 13:09   ` Damien Clabaut
2017-09-28 20:44     ` Wiles, Keith [this message]
2017-10-11 18:59       ` Yongseok Koh
2017-10-12 12:34         ` Damien Clabaut
