From: "Wiles, Keith"
To: Damien Clabaut
CC: "dev@dpdk.org"
Date: Thu, 28 Sep 2017 20:44:26 +0000
Subject: Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
In-Reply-To: <5cfff93d-213c-af2b-431f-437cd2eec6fc@corp.ovh.com>

> On Sep 26, 2017, at 8:09 AM, Damien Clabaut wrote:
>
> Hello Keith and thank you for your answer,
>
> The goal is indeed to generate as much traffic per machine as possible (we use pktgen-dpdk to benchmark datacenter routers before putting them into production).
>
> For this we use all available CPU power to send packets.
>
> Following your suggestion, I modified my command to:
>
> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap

I just noticed you are sending 8500-byte frames, so you will have to modify Pktgen to increase the size of the mbufs in the mempool. I only configure the mbufs for 1518-byte frames (the buffers are really 2048 bytes), and Pktgen only deals with a 1518-byte maximum size. The size can be changed, but I am not next to a machine right now.
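From memory the change amounts to creating the pktmbuf pool with a larger data room; a minimal sketch of the idea is below (the pool name, counts and the JUMBO_* constants are only illustrative, not the actual Pktgen symbols):

    #include <stdlib.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Illustrative only: room for one 8500-byte frame per mbuf instead of
     * the default ~2KB (RTE_MBUF_DEFAULT_BUF_SIZE). */
    #define JUMBO_FRAME_SIZE 9018
    #define JUMBO_MBUF_SIZE  (JUMBO_FRAME_SIZE + RTE_PKTMBUF_HEADROOM)

    static struct rte_mempool *
    create_jumbo_pool(void)
    {
        /* 8K mbufs, 256-entry per-core cache, no private area. */
        struct rte_mempool *mp = rte_pktmbuf_pool_create("jumbo_tx_pool",
                8192, 256, 0, JUMBO_MBUF_SIZE, rte_socket_id());

        if (mp == NULL)
            rte_exit(EXIT_FAILURE, "cannot create jumbo mbuf pool\n");
        return mp;
    }

The other route would be to keep the 2048-byte buffers and let each frame span a chain of mbufs, but for a pcap test the larger data room is the smaller edit.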
>
> The issue is still reproduced, though with slightly lower performance (reaching line rate at 8500 Bpp does not require much processing power)
>
> #sh int et 29/1 | i rate
>   5 seconds input rate 36.2 Gbps (90.5% with framing overhead), 0 packets/sec
>   5 seconds output rate 56 bps (0.0% with framing overhead), 0 packets/sec
>
> Regards,
>
> PS: Sorry for answering you directly, sending this message a second time on the ML
>
>
> On 2017-09-25 09:46 PM, Wiles, Keith wrote:
>>> On Sep 25, 2017, at 6:19 PM, Damien Clabaut wrote:
>>>
>>> Hello DPDK devs,
>>>
>>> I am sending this message here as I did not find a bug tracker on the website.
>>>
>>> If this is the wrong place, I kindly apologize and ask you to redirect me to the proper place.
>>>
>>> Thank you.
>>>
>>> Description of the issue:
>>>
>>> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
>>>
>>> The packet in question is generated using this Scapy command:
>>>
>>> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
>>>
>>> The pcap is then replayed in pktgen-dpdk:
>>>
>>> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap
>> This could be the issue, as I cannot set up your system here (no cards). The pktgen command line uses cores 1-7 for TX/RX of packets, which tells pktgen to send the pcap from each of those cores, so the packet will be transmitted from every core. If you set the TX/RX core mapping to 1.0 you should only see one copy. I assume you are using cores 1-7 to push the bit rate closer to the performance of the card.
>>
>>> When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic shows strange behaviour:
>>>
>>> #sh int et29/1 | i rate
>>>   5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
>>>
>>> A capture of this traffic (I used a monitor session to redirect it all to a different port, connected to a machine on which I ran tcpdump) gives me this:
>>>
>>> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>>
>>> The issue cannot be reproduced if either of the following conditions is met:
>>>
>>> - The size in Raw(RandBin()) is set to a value lower than 1500
>>>
>>> - The packet is sent from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software)
>>>
>>> Is this a known problem?
>>>
>>> I remain available for any questions you may have.
>>>
>>> Regards,
>>>
>>> --
>>> Damien Clabaut
>>> R&D vRouter
>>> ovh.qc.ca
>>>
>> Regards,
>> Keith
>>
>
> --
> Damien Clabaut
> R&D vRouter
> ovh.qc.ca
>

Regards,
Keith
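P.S. Once you have a build handy, a quick sanity check is to compare the pool's data room against the largest frame in the pcap before you start transmitting; something along these lines (the function and variable names are only illustrative):

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Return 0 when a frame of 'frame_len' bytes fits in a single mbuf from 'mp'. */
    static int
    frame_fits_one_mbuf(struct rte_mempool *mp, uint32_t frame_len)
    {
        uint32_t room = (uint32_t)rte_pktmbuf_data_room_size(mp) -
                        RTE_PKTMBUF_HEADROOM;

        if (frame_len > room) {
            printf("frame of %u bytes exceeds mbuf data room of %u bytes\n",
                   frame_len, room);
            return -1;
        }
        return 0;
    }

If the 8500-byte frame does not fit, whatever ends up in a single mbuf will be truncated or corrupted, which would be consistent with the garbage 1500-byte frames the switch is capturing.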