Date: Fri, 8 Dec 2017 10:57:54 +0100
To: users@dpdk.org
Reply-To: hyperhead@gmail.com
Subject: [dpdk-users] VF RSS available in I350-T2?

Hi,

I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting some rx_dropped on the card when I start increasing traffic. (I have got more out of an identical bare-metal system with the same software.)

I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel, not the driver installed with CentOS), so the RSS parameters, amongst others, are available to me.

This then led me to investigate the interrupts on the tx/rx ring buffers, and I noticed that the interface (with VFs enabled) only had one tx/rx queue. This is on the KVM host:

            CPU0    CPU1    CPU2    CPU3    CPU4    CPU5   CPU6   CPU7   CPU8
 100:          1      33     137       0       0       0      0      0      0   IR-PCI-MSI-edge  ens2f1
 101:       2224       0       0    6309  178807       0      0      0      0   IR-PCI-MSI-edge  ens2f1-TxRx-0

Looking at my standard NIC ethernet ports I see 1 tx and 4 rx queues.

On the VM I only get one tx and one rx queue. (I know all the interrupts are only using CPU0, but that is defined in our builds.)

 $ egrep "CPU|ens11" /proc/interrupts
             CPU0    CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
  34:   715885552       0      0      0      0      0      0      0   PCI-MSI-edge  ens11-tx-0
  35:   559402399       0      0      0      0      0      0      0   PCI-MSI-edge  ens11-rx-0

I activated RSS on the card, and I can set it; however, if I use the param max_vfs=n then it defaults back to 1 rx and 1 tx queue per NIC port:

[  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[  393.035408] igb 0000:07:00.1: Using MSI-X interrupts.
1 rx queue(s), 1 tx queue(s)

I have been reading some of the older DPDK posts and see that VF RSS is implemented in some cards. Does anybody know if it is available in this card? (From what I read, it seemed to be only the 10GbE cards.)

One of my plans, aside from trying to get more RSS queues per VM, is to add more non-isolated CPUs to the VM, so that the rx and tx queues can distribute their load a bit, to see if this helps.

Also, is it worth investigating the VMDq options? However, I understand VMDq to be less useful than SR-IOV, which works well for me with KVM.

Thanks in advance,

Rolando
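P.S. In case it helps anyone reproduce this, the queue counts above come from counting the per-queue vectors in /proc/interrupts. This is only a sketch of how I count them; the interface names (ens2f1, ens11) are the ones from my host, and the sample file below is a trimmed-down copy of the real output, so adjust for your setup:

```shell
# Count the per-queue interrupt vectors a NIC actually got.
# Matches igb-style vector names such as ens2f1-TxRx-0, ens11-tx-0, ens11-rx-0
# (the plain "ens2f1" line is the non-queue "other" vector and is not counted).
count_queue_irqs() {           # usage: count_queue_irqs <iface> <file>
    grep -Ec "$1-([Tt]x[Rr]x|tx|rx)-[0-9]+" "$2"
}

# Trimmed sample in the same shape as /proc/interrupts on my machines:
cat > /tmp/interrupts.sample <<'EOF'
 100:         1    33   137   IR-PCI-MSI-edge  ens2f1
 101:      2224     0  6309   IR-PCI-MSI-edge  ens2f1-TxRx-0
  34: 715885552     0     0   PCI-MSI-edge     ens11-tx-0
  35: 559402399     0     0   PCI-MSI-edge     ens11-rx-0
EOF

count_queue_irqs ens2f1 /tmp/interrupts.sample   # -> 1 (one combined TxRx queue)
count_queue_irqs ens11  /tmp/interrupts.sample   # -> 2 (one tx + one rx vector)
# On the real host:  count_queue_irqs ens2f1 /proc/interrupts
```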
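P.P.S. On the plan of adding more CPUs to the VM: my understanding is that once more CPUs are online, the existing queue IRQs can be spread by writing a hex CPU bitmap to /proc/irq/<N>/smp_affinity. A small untested sketch of what I intend to try; the IRQ numbers 34/35 are the ens11 vectors from my VM above, so treat them as examples:

```shell
# Hex affinity mask for a single CPU: bit N set for CPU N, as expected
# by /proc/irq/<irq>/smp_affinity.
cpu_mask() {                   # usage: cpu_mask <cpu-number>
    printf '%x\n' $(( 1 << $1 ))
}

cpu_mask 0    # -> 1  (CPU0)
cpu_mask 3    # -> 8  (CPU3)

# With the masks in hand, pin each queue vector to its own CPU
# (run as root on the VM; IRQ numbers taken from my /proc/interrupts above):
# echo "$(cpu_mask 1)" > /proc/irq/34/smp_affinity   # ens11-tx-0 -> CPU1
# echo "$(cpu_mask 2)" > /proc/irq/35/smp_affinity   # ens11-rx-0 -> CPU2
```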