From: Dave Wallace
To: Shihabur Rahman Chowdhury, "Wiles, Keith"
Cc: users@dpdk.org
Date: Wed, 12 Apr 2017 21:56:02 -0400
Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK

I have encountered a similar issue in the past on a system configuration
where the PCI interface to the NIC was attached to the other NUMA node.
Something else to check...
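A quick way to rule that out from inside the application is to compare the
port's NUMA socket with that of the lcore polling it. A minimal sketch,
assuming the stock DPDK 16.11 ethdev/lcore calls; port_id and lcore_id are
placeholders for whatever the application actually uses:

    #include <stdio.h>

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Warn if the NIC and the polling lcore sit on different NUMA nodes. */
    static void
    check_numa_affinity(uint8_t port_id, unsigned int lcore_id)
    {
        /* NUMA node of the NIC's PCI device, or -1 if it cannot be determined */
        int dev_socket = rte_eth_dev_socket_id(port_id);
        /* NUMA node of the polling core */
        unsigned int lcore_socket = rte_lcore_to_socket_id(lcore_id);

        if (dev_socket >= 0 && (unsigned int)dev_socket != lcore_socket)
            printf("port %u is on socket %d but lcore %u is on socket %u; "
                   "expect cross-socket PCIe traffic and reduced Rx throughput\n",
                   port_id, dev_socket, lcore_id, lcore_socket);
    }

The mbuf pool matters too: the socket_id argument passed to
rte_pktmbuf_pool_create() should match the NIC's node. From the shell,
"cat /sys/bus/pci/devices/<pci-addr>/numa_node" gives the same answer.

-daw-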
On 04/12/2017 08:06 PM, Shihabur Rahman Chowdhury wrote:
> We've disabled the pause frames. That also disables flow control, I assume.
> Correct me if I am wrong.
>
> On the pktgen side, the Rx dropped and overrun fields in ifconfig keep
> increasing. Btw, the overrun and dropped fields always have the exact
> same value.
>
> Our ConnectX-3 NIC has just a single 10G port, so there is no way to
> create such a loopback connection.
>
> Thanks
>
> Shihabur Rahman Chowdhury
> David R. Cheriton School of Computer Science
> University of Waterloo
>
> On Wed, Apr 12, 2017 at 6:41 PM, Wiles, Keith wrote:
>
>>> On Apr 12, 2017, at 4:00 PM, Shihabur Rahman Chowdhury <shihab.buet@gmail.com> wrote:
>>>
>>> Hello,
>>>
>>> We are running a simple DPDK application and observing quite low
>>> throughput. We are currently testing it with the following setup:
>>>
>>> - 2 machines with 2x Intel Xeon E5-2620 CPUs
>>> - Each machine with a single-port 10G Mellanox ConnectX-3 card
>>> - Mellanox DPDK version 16.11
>>> - Mellanox OFED 4.0-2.0.0.1 and the latest firmware for the ConnectX-3
>>>
>>> The application is doing almost nothing. It reads a batch of 64 packets
>>> from a single rxq, swaps the MACs of each packet, and writes it back to
>>> a single txq. Rx and Tx are handled by separate lcores on the same NUMA
>>> socket. We are running pktgen on another machine. With 64B packets we
>>> are seeing a ~14.8Mpps Tx rate but only a ~7.3Mpps Rx rate in pktgen.
>>> We checked the NIC on the machine running the DPDK application (with
>>> ifconfig) and it looks like a large number of packets is being dropped
>>> by the interface. Our ConnectX-3 card should theoretically be able to
>>> handle 10Gbps Rx + 10Gbps Tx throughput (with channel width 4, the
>>> theoretical max on PCIe 3.0 should be ~31.2Gbps). Interestingly, when
>>> the Tx rate is reduced in pktgen (to ~9Mpps), the Rx rate increases to
>>> ~9Mpps.
>> Not sure what is going on here; when you drop the rate to 9Mpps I assume
>> you stop getting missed frames.
>> Do you have flow control enabled?
>>
>> On the pktgen side are you seeing missed RX packets?
>> Did you loop the cable back from the pktgen machine to the other port on
>> the pktgen machine, and did you get the same Rx/Tx performance in that
>> configuration?
>>
>>> We would highly appreciate some pointers as to what could possibly be
>>> causing this mismatch between Rx and Tx. Ideally, we should be able to
>>> see ~14Mpps Rx as well. Is it because we are using a single port? Or
>>> something else?
>>>
>>> FYI, we also ran the sample l2fwd application and testpmd and got
>>> comparable results in the same setup.
>>>
>>> Thanks
>>> Shihab
>> Regards,
>> Keith
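The datapath described in the thread reduces to roughly the following
minimal sketch against the 16.11-era ethdev API. It is a single-lcore
simplification under stated assumptions: the real application splits Rx
and Tx across two lcores, and queue 0 and port_id are placeholders:

    #include <rte_ethdev.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 64

    /* Read a burst from rxq 0, swap the Ethernet source/destination
     * MACs, and transmit on txq 0. */
    static void
    mac_swap_loop(uint8_t port_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t i, nb_rx, nb_tx;

        for (;;) {
            nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
            for (i = 0; i < nb_rx; i++) {
                struct ether_hdr *eth =
                    rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
                struct ether_addr tmp = eth->s_addr;

                eth->s_addr = eth->d_addr;
                eth->d_addr = tmp;
            }
            nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);
            /* Free anything the tx queue would not accept, or the
             * mempool slowly drains and Rx stalls. */
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }

One plausible failure mode with these symptoms, worth checking alongside
the NUMA placement above: if mbufs rejected by rte_eth_tx_burst() are not
freed, the pool empties, the Rx queue can no longer be refilled, and the
NIC reports the overflow as interface drops, which is the ifconfig
pattern described in the thread.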