From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paul Emmerich
To: hyperhead@gmail.com
Cc: users@dpdk.org
Date: Wed, 13 Dec 2017 12:35:26 +0100
Message-Id: <74DD2809-CE00-4CFD-8675-D27636EC3FC2@net.in.tum.de>
Subject: Re: [dpdk-users] VF RSS availble in I350-T2?

Did you consult the datasheet? It says that the VF only supports one queue.

Paul

> On 12.12.2017 at 13:58, .. wrote:
>
> I assume my message was ignored due to it not being related to dpdk
> software?
>
>
> On 11 December 2017 at 10:14, .. wrote:
>
>> Hi,
>>
>> I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting some
>> rx_dropped on the card when I start increasing traffic. (I have got more
>> out of an identical bare-metal system with the same software.)
>>
>> I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel, not
>> the driver shipped with CentOS), so the RSS parameters, amongst others, are
>> available to me.
>>
>> This then led me to investigate the interrupts on the tx/rx ring buffers,
>> and I noticed that the interface (VFs enabled) only had one tx/rx queue.
>> This is on the KVM host; the interrupts are distributed as follows:
>>
>>            CPU0   CPU1   CPU2   CPU3     CPU4   CPU5   CPU6   CPU7   CPU8
>> 100:          1     33    137      0        0      0      0      0      0   IR-PCI-MSI-edge   ens2f1
>> 101:       2224      0      0   6309   178807      0      0      0      0   IR-PCI-MSI-edge   ens2f1-TxRx-0
>>
>> Looking at my standard NIC Ethernet ports, I see 1 tx and 4 rx queues.
>>
>> On the VM I only get one tx and one rx queue (I know all the interrupts
>> are only using CPU0, but that is defined in our builds).
>>
>> egrep "CPU|ens11" /proc/interrupts
>>            CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
>> 34:   715885552      0      0      0      0      0      0      0   PCI-MSI-edge   ens11-tx-0
>> 35:   559402399      0      0      0      0      0      0      0   PCI-MSI-edge   ens11-rx-0
>>
>> I activated RSS on my card and can set it; however, if I use the parameter
>> max_vfs=n, then it defaults back to 1 rx and 1 tx queue per NIC port:
>>
>> [  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
>> [  393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
>>
>> I have been reading some of the older dpdk posts and see that VF RSS is
>> implemented in some cards. Does anybody know if it is available in this
>> card? (From my reading it seemed to be only the 10G cards.)
>>
>> One of my plans, aside from trying to get more RSS queues per VM, is to
>> add more non-isolated CPUs to the VM, so that the rx and tx queues can
>> distribute their load a bit, to see if this helps.
>>
>> Also, is it worth investigating the VMDq options? However, I understand
>> this to be less useful than SR-IOV, which works well for me with KVM.
>>
>>
>> Thanks in advance,
>>
>> Rolando
>>

-- 
Chair of Network Architectures and Services
Department of Informatics
Technical University of Munich
Boltzmannstr. 3
85748 Garching bei München, Germany
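The queue check quoted in the thread boils down to counting the per-queue IRQ lines a driver registers in /proc/interrupts (e.g. `ens11-tx-0`, `ens2f1-TxRx-0`). A minimal sketch of automating that count; the function name and the sample text are illustrative, not from a live system:

```python
import re

def count_queue_irqs(interrupts_text, iface):
    """Count per-queue IRQ action lines (iface-tx-N, iface-rx-N,
    or combined iface-TxRx-N) in /proc/interrupts content."""
    # igb names its MSI-X vectors <iface>-tx-N / <iface>-rx-N,
    # or <iface>-TxRx-N for combined queues.
    pattern = re.compile(r'\b%s-(?:tx|rx|TxRx)-\d+\b' % re.escape(iface))
    return sum(1 for line in interrupts_text.splitlines()
               if pattern.search(line))

# Illustrative sample mimicking the output quoted above.
sample = """\
            CPU0       CPU1
100:           1         33   IR-PCI-MSI-edge   ens2f1
101:        2224          0   IR-PCI-MSI-edge   ens2f1-TxRx-0
 34:   715885552          0   PCI-MSI-edge      ens11-tx-0
 35:   559402399          0   PCI-MSI-edge      ens11-rx-0
"""

print(count_queue_irqs(sample, "ens2f1"))  # → 1 (one combined TxRx queue)
print(count_queue_irqs(sample, "ens11"))   # → 2 (one tx + one rx queue)
```

On a real host one would read the text from open("/proc/interrupts"); a count of one combined queue per VF is consistent with the single-queue VF limit Paul points out.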