From mboxrd@z Thu Jan 1 00:00:00 1970
From: Damien Clabaut
To: dev@dpdk.org
Subject: Re: [dpdk-dev] Issue with pktgen-dpdk replaying >1500bytes pcap on MCX4
Date: Tue, 26 Sep 2017 09:09:18 -0400
Message-ID: <5cfff93d-213c-af2b-431f-437cd2eec6fc@corp.ovh.com>
References: <5f5aba8a-0dd2-fd64-891b-569d2a4627c8@corp.ovh.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
List-Id: DPDK patches and discussions
Hello Keith and thank you for your answer,

The goal is indeed to generate as much traffic per machine as possible (we use pktgen-dpdk to benchmark datacenter routers before putting them into production). For this we use all available CPU power to send packets.

Following your suggestion, I modified my command to:

./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m 1.0 -s 0:pcap/8500Bpp.pcap

The issue is still reproduced, though with slightly lower performance (reaching line rate at 8500 Bpp does not require much processing power):

#sh int et 29/1 | i rate
  5 seconds input rate 36.2 Gbps (90.5% with framing overhead), 0 packets/sec
  5 seconds output rate 56 bps (0.0% with framing overhead), 0 packets/sec

Regards,

PS: Sorry for answering you directly; I am sending this message a second time on the ML.

On 2017-09-25 09:46 PM, Wiles, Keith wrote:
>> On Sep 25, 2017, at 6:19 PM, Damien Clabaut wrote:
>>
>> Hello DPDK devs,
>>
>> I am sending this message here as I did not find a bug tracker on the website.
>>
>> If this is the wrong place, I kindly apologize and ask you to redirect me to the proper place.
>>
>> Thank you.
>>
>> Description of the issue:
>>
>> I am using pktgen-dpdk to replay a pcap file containing exactly 1 packet.
>>
>> The packet in question is generated using this Scapy command:
>>
>> pkt=(Ether(src="ec:0d:9a:37:d1:ab",dst="7c:fe:90:31:0d:52")/Dot1Q(vlan=2)/IP(dst="192.168.0.254")/UDP(sport=1020,dport=1021)/Raw(RandBin(size=8500)))
>>
>> The pcap is then replayed in pktgen-dpdk:
>>
>> ./app/app/x86_64-native-linuxapp-gcc/pktgen -l 0-7 -- -m [1-7].0 -s 0:pcap/8500Bpp.pcap
> This could be the issue, as I cannot set up your system here (no cards). The pktgen command line is using cores 1-7 for TX/RX of packets, which tells pktgen to replay the pcap from each of those cores, so the packet will be sent once per core.
> If you set the TX/RX core mapping to 1.0 then you should only see one copy. I assume you are using cores 1-7 to push the bit rate closer to the performance of the card.
>
>> When I run this on a machine with a Mellanox ConnectX-4 NIC (MCX4), the switch towards which I generate traffic shows a strange behaviour:
>>
>> #sh int et29/1 | i rate
>> 5 seconds input rate 39.4 Gbps (98.4% with framing overhead), 0 packets/sec
>>
>> A capture of this traffic (I used a monitor session to redirect everything to a different port, connected to a machine on which I ran tcpdump) gives me this:
>>
>> 19:04:50.210792 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210795 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210796 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210797 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>> 19:04:50.210799 00:00:00:00:00:00 (oui Ethernet) > 00:00:00:00:00:00 (oui Ethernet) Null Unnumbered, ef, Flags [Poll], length 1500
>>
>> The issue cannot be reproduced if either of the following conditions is met:
>>
>> - Set the size in the Raw(RandBin()) to a value lower than 1500
>>
>> - Send the packet from a Mellanox ConnectX-3 (MCX3) NIC (both machines are identical in terms of software).
>>
>> Is this a known problem?
>>
>> I remain available for any questions you may have.
>>
>> Regards,
>>
>> --
>> Damien Clabaut
>> R&D vRouter
>> ovh.qc.ca
>>
> Regards,
> Keith
>

--
Damien Clabaut
R&D vRouter
ovh.qc.ca