From: Shihabur Rahman Chowdhury
Date: Mon, 17 Apr 2017 13:43:35 -0400
To: Kyle Larose
Cc: Shahaf Shuler, Dave Wallace, Olga Shern, Adrien Mazarguil, "Wiles, Keith", users@dpdk.org
Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK

Thanks for the suggestions. We'll definitely try RSS on the distributor.

In the meantime we implemented one optimization similar to the l3fwd example. Before processing the packets, we prefetch a cache line for a fraction of the batch (currently 8 packets). Then, while processing those already-prefetched packets, we prefetch a cache line for each of the remaining packets in the batch before processing them as well. This, along with running pktgen Rx/Tx on the same logical core, improved throughput to ~8.76Mpps for 64B packets.
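In case it's useful, this is roughly the l3fwd-style prefetch pattern we followed (a minimal sketch only; PREFETCH_OFFSET, the burst size of 32 and handle_packet() are placeholders, not our actual processing stage):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_prefetch.h>

#define PREFETCH_OFFSET 8   /* fraction of the batch prefetched up front */

/* Placeholder for the real per-packet work; here it just frees the mbuf. */
static void handle_packet(struct rte_mbuf *m) { rte_pktmbuf_free(m); }

static void
process_burst(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	const int n = (int)rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	int j;

	/* Prefetch a cache line for the first PREFETCH_OFFSET packets. */
	for (j = 0; j < PREFETCH_OFFSET && j < n; j++)
		rte_prefetch0(rte_pktmbuf_mtod(pkts[j], void *));

	/*
	 * While processing the already-prefetched packets, prefetch a
	 * cache line for each of the remaining packets in the batch.
	 */
	for (j = 0; j < n - PREFETCH_OFFSET; j++) {
		rte_prefetch0(rte_pktmbuf_mtod(pkts[j + PREFETCH_OFFSET],
					       void *));
		handle_packet(pkts[j]);
	}

	/* Process the tail of the batch (already prefetched above). */
	for (; j < n; j++)
		handle_packet(pkts[j]);
}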
Shihabur Rahman Chowdhury
David R. Cheriton School of Computer Science
University of Waterloo

On Thu, Apr 13, 2017 at 11:49 AM, Kyle Larose wrote:

> Hey Shihab,
>
> > -----Original Message-----
> > From: users [mailto:users-bounces@dpdk.org] On Behalf Of Shihabur Rahman Chowdhury
> > Sent: Thursday, April 13, 2017 10:21 AM
> > To: Shahaf Shuler
> > Cc: Dave Wallace; Olga Shern; Adrien Mazarguil; Wiles, Keith; users@dpdk.org
> > Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK
> >
> > To give a bit more context, we are developing a set of packet processors
> > that can be independently deployed as separate processes and can be scaled
> > out independently as well. So a batch of packets goes through a sequence of
> > processes until at some point they are written to the Tx queue or get
> > dropped because of some processing decision. These packet processors are
> > running as secondary DPDK processes, and the Rx takes place in a
> > primary process (since the Mellanox PMD does not allow Rx from a secondary
> > process). In this example configuration, one primary process does the
> > Rx and hands the packets to another secondary process through a shared
> > ring, and that secondary process swaps the MACs and writes the packets to
> > the Tx queue. We expect some performance drop because of the cache
> > invalidation across lcores (also, we cannot use the same lcore for different
> > secondary processes without risking mempool cache corruption), but again 7.3Mpps is
> > ~30+% overhead.
> >
> > Following your suggestion, we tried run-to-completion processing in the primary
> > process (i.e., Rx and Tx are now on the same lcore). We also configured
> > pktgen to handle Rx and Tx on the same lcore as well. With that we are now
> > getting ~9.9-10Mpps with 64B packets. With our multi-process setup that
> > drops down to ~8.4Mpps. So it seems like pktgen was not configured properly
> > before. It seems a bit counter-intuitive, since on pktgen's side doing Rx and
> > Tx on different lcores should not cause any cache invalidation (the sets of Rx
> > and Tx packets are disjoint). So using different lcores should theoretically be
> > better than handling both Rx/Tx on the same lcore for pktgen. Am I missing
> > something here?
> >
> > Thanks
>
> It sounds to me like your bottleneck is the primary -- the packet
> distributor. Consider the comment from Shahaf earlier: the best Mellanox
> was able to achieve with testpmd (which is extremely simple) is 10Mpps per
> core. I've always found that receiving is more expensive than transmitting,
> which means that if you're splitting your work on those dimensions, you'll
> need to allocate more CPU to the receiver than the transmitter. This may be
> one of the reasons run to completion works out -- the lower Tx load on that
> core offsets the higher Rx.
>
> If you want to continue using the packet distribution model, why don't you
> try using RSS/multiqueue on the distributor, and allocate two cores to it?
> You'll need some entropy in the packets for it to distribute well, but
> hopefully that's not a problem. :)
>
> Thanks,
>
> Kyle
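P.S. For when we try the RSS/multiqueue suggestion on the distributor, this is roughly the port setup we have in mind (a sketch under assumptions: the two Rx queues, the 512-descriptor rings and the RSS hash fields below are illustrative, not our actual configuration):

#include <rte_ethdev.h>
#include <rte_mempool.h>

/*
 * Sketch: configure the distributor port with two RSS Rx queues so that
 * two lcores can each poll their own queue. Queue counts, descriptor
 * counts and hash fields are assumptions for illustration only.
 */
static int
setup_distributor_port(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL, /* PMD default key */
				.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
					  ETH_RSS_TCP,
			},
		},
	};
	const uint16_t nb_rxq = 2, nb_txq = 1;
	uint16_t q;
	int ret;

	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
	if (ret < 0)
		return ret;

	/* One Rx queue per distributor lcore. */
	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL, mp);
		if (ret < 0)
			return ret;
	}

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port_id);
}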