From: Shihabur Rahman Chowdhury
Date: Wed, 12 Apr 2017 21:57:53 -0400
To: Dave Wallace
Cc: "Wiles, Keith", users@dpdk.org
Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK

Hi Dave,

We've checked this, and we are using lcores on the NUMA node where the
PCI interface to the NIC is.
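For reference, a quick way to sanity-check this at initialization time (a
sketch rather than our exact code; the port and lcore ids are illustrative):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Warn if an lcore polls a port that sits on a remote NUMA node. */
    static void
    check_port_lcore_numa(uint8_t port_id, unsigned lcore_id)
    {
        int port_socket = rte_eth_dev_socket_id(port_id);
        unsigned lcore_socket = rte_lcore_to_socket_id(lcore_id);

        if (port_socket >= 0 && (unsigned)port_socket != lcore_socket)
            printf("WARNING: port %u (socket %d) polled by lcore %u (socket %u)\n",
                   port_id, port_socket, lcore_id, lcore_socket);
    }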
Shihabur Rahman Chowdhury
David R. Cheriton School of Computer Science
University of Waterloo

On Wed, Apr 12, 2017 at 9:56 PM, Dave Wallace wrote:

> I have encountered a similar issue in the past on a system configuration
> where the PCI interface to the NIC was on the other NUMA node.
>
> Something else to check...
> -daw-
>
> On 04/12/2017 08:06 PM, Shihabur Rahman Chowdhury wrote:
>
>> We've disabled the pause frames. I assume that also disables flow
>> control; correct me if I am wrong.
>>
>> On the pktgen side, the Rx dropped and overrun counters keep increasing
>> in ifconfig. Incidentally, the overrun and dropped fields always have
>> exactly the same value.
>>
>> Our ConnectX-3 NIC has just a single 10G port, so there is no way to
>> create such a loopback connection.
>>
>> Thanks
>>
>> Shihabur Rahman Chowdhury
>> David R. Cheriton School of Computer Science
>> University of Waterloo
>>
>> On Wed, Apr 12, 2017 at 6:41 PM, Wiles, Keith wrote:
>>
>>> On Apr 12, 2017, at 4:00 PM, Shihabur Rahman Chowdhury
>>> <shihab.buet@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> We are running a simple DPDK application and observing quite low
>>>> throughput. We are currently testing the application with the
>>>> following setup:
>>>>
>>>> - 2 machines with 2x Intel Xeon E5-2620 CPUs
>>>> - Each machine with a Mellanox single-port 10G ConnectX-3 card
>>>> - Mellanox DPDK version 16.11
>>>> - Mellanox OFED 4.0-2.0.0.1 and the latest firmware for ConnectX-3
>>>>
>>>> The application is doing almost nothing. It reads a batch of 64
>>>> packets from a single rxq, swaps the MAC addresses of each packet,
>>>> and writes the batch back to a single txq. The rx and tx are handled
>>>> by separate lcores on the same NUMA socket. We are running pktgen on
>>>> another machine. With 64B packets we are seeing a ~14.8Mpps Tx rate
>>>> but only a ~7.3Mpps Rx rate in pktgen. We checked the NIC on the
>>>> machine running the DPDK application (with ifconfig), and it looks
>>>> like a large number of packets are being dropped by the interface.
>>>> Our ConnectX-3 card should theoretically be able to handle 10Gbps Rx
>>>> + 10Gbps Tx (with a channel width of 4, the theoretical max on PCIe
>>>> 3.0 should be ~31.2Gbps). Interestingly, when the Tx rate is reduced
>>>> in pktgen (to ~9Mpps), the Rx rate increases to ~9Mpps.
>>>
>>> Not sure what is going on here; when you drop the rate to 9Mpps, I
>>> assume you stop getting missed frames.
>>>
>>> Do you have flow control enabled?
>>>
>>> On the pktgen side are you seeing missed RX packets?
>>>
>>> Did you loop the cable back from the pktgen machine to the other port
>>> on the pktgen machine, and did you get the same Rx/Tx performance in
>>> that configuration?
>>>
>>>> We would highly appreciate some pointers as to what could possibly be
>>>> causing this mismatch between Rx and Tx. Ideally, we should be able
>>>> to see ~14Mpps Rx as well. Is it because we are using a single port,
>>>> or something else?
>>>>
>>>> FYI, we also ran the sample l2fwd application and testpmd and got
>>>> comparable results in the same setup.
>>>>
>>>> Thanks
>>>> Shihab
>>>
>>> Regards,
>>> Keith
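P.S. For concreteness, the datapath of our application is essentially the
following. This is a trimmed sketch rather than our exact code: rx and tx
are collapsed into one loop here for brevity (the real application splits
them across two lcores), and the port and queue ids are illustrative.

    #include <rte_ethdev.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 64

    /* Read a burst from rxq 0, swap src/dst MACs, send it back on txq 0. */
    static void
    mac_swap_loop(uint8_t port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t i, nb_rx, nb_tx;

        for (;;) {
            nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);

            for (i = 0; i < nb_rx; i++) {
                struct ether_hdr *eth =
                    rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
                struct ether_addr tmp = eth->s_addr;

                eth->s_addr = eth->d_addr;
                eth->d_addr = tmp;
            }

            nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
            for (i = nb_tx; i < nb_rx; i++)  /* free what the NIC didn't take */
                rte_pktmbuf_free(bufs[i]);
        }
    }

And on Keith's flow control question: we turned pause frames off. At the
DPDK level, the equivalent would look roughly like this (again a sketch,
assuming the PMD implements the flow control ops):

    #include <string.h>
    #include <rte_ethdev.h>

    static void
    disable_pause_frames(uint8_t port)
    {
        struct rte_eth_fc_conf fc;

        memset(&fc, 0, sizeof(fc));
        rte_eth_dev_flow_ctrl_get(port, &fc);  /* start from current settings */
        fc.mode = RTE_FC_NONE;                 /* no rx or tx pause frames */
        rte_eth_dev_flow_ctrl_set(port, &fc);
    }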