From mboxrd@z Thu Jan 1 00:00:00 1970
From: ".."
Date: Mon, 11 Dec 2017 10:14:42 +0100
To: users@dpdk.org
Reply-To: hyperhead@gmail.com
Subject: [dpdk-users] VF RSS available in I350-T2?
List-Id: DPDK usage discussions

Hi,

I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting some rx_dropped on the card when I start increasing traffic. (I have got more out of an identical bare-metal system with the same software.)

I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel, not the driver installed with CentOS), so the RSS parameters, amongst others, are available to me.

This then led me to investigate the interrupts on the tx/rx ring buffers, and I noticed that the interface (with VFs enabled) only had one tx/rx queue, with its load distributed between a few CPUs. This is on the KVM host:

            CPU0  CPU1  CPU2  CPU3  CPU4    CPU5  CPU6  CPU7  CPU8
  100:         1    33   137     0      0      0     0     0     0   IR-PCI-MSI-edge  ens2f1
  101:      2224     0     0  6309 178807      0     0     0     0   IR-PCI-MSI-edge  ens2f1-TxRx-0

Looking at my standard NIC ethernet ports I see 1 tx and 4 rx queues. On the VM I only get one tx and one rx queue. (I know all the interrupts are only using CPU0, but that is defined in our builds.)

  egrep "CPU|ens11" /proc/interrupts
            CPU0  CPU1  CPU2  CPU3  CPU4  CPU5  CPU6  CPU7
  34:  715885552     0     0     0     0     0     0     0   PCI-MSI-edge  ens11-tx-0
  35:  559402399     0     0     0     0     0     0     0   PCI-MSI-edge  ens11-rx-0

I activated RSS on the card, and can set it; however, if I use the param max_vfs=n then it defaults back to 1 rx and 1 tx queue per NIC port:

  [  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
  [  393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)

I have been reading some of the older dpdk posts and see that VF RSS is implemented in some cards. Does anybody know if it is available in this card? (From my reading it seemed to be only the 10Gb cards.)

One of my plans, aside from trying to get more RSS queues per VM, is to add more CPUs to the VM that are not isolated, so that the rx and tx queues can distribute their load a bit, to see if this helps.

Also, is it worth investigating the VMDq options? However, I understand VMDq to be less useful than SR-IOV, which works well for me with KVM.

Thanks in advance,

Rolando
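For reference, this is roughly how I am setting the driver options, using Intel's out-of-tree igb driver (RSS and max_vfs are documented there as per-port, comma-separated module parameters; the values below are just examples, and whether the two can be combined on the I350 is exactly what I am unsure about):

```shell
# /etc/modprobe.d/igb.conf -- options for Intel's out-of-tree igb driver.
# Per-port, comma-separated: RSS=4,4 asks for 4 queues on each port,
# max_vfs=2,2 creates 2 VFs per port (example values only).
options igb RSS=4,4 max_vfs=2,2

# Reload and check what the driver actually granted:
#   modprobe -r igb && modprobe igb
#   dmesg | grep -i 'igb.*queue'
```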
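On the DPDK side, if the VF ever does expose more than one queue pair, I would expect a testpmd run along these lines to show whether RSS spreads flows across the rx queues (the core list and queue counts are example values):

```shell
# Ask testpmd for 2 rx/tx queues on the port and enable RSS over IP fields.
# If the VF only supports a single queue pair, port setup should fail here,
# which is itself a useful answer.
./testpmd -l 0-2 -n 4 -- --rxq=2 --txq=2 --rss-ip -i

# In the interactive prompt, the per-port counters show the spread:
#   testpmd> show port stats all
```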
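On the idea of spreading the load over the non-isolated CPUs: the value written to /proc/irq/<irq>/smp_affinity is a hex bitmask with bit i set for CPU i, so for IRQs 34/35 from the ens11 output above something like this should work (a sketch; run as root inside the VM):

```shell
#!/bin/sh
# Build an affinity bitmask: bit i set => CPU i may service the IRQ.
# Allowing CPUs 1-3 gives (1<<1)|(1<<2)|(1<<3) = 0xe.
mask=$(( (1 << 1) | (1 << 2) | (1 << 3) ))
printf '%x\n' "$mask"    # prints: e

# Apply it to the VM's tx/rx interrupts (IRQ numbers from /proc/interrupts);
# uncomment once the mask looks right:
# printf '%x' "$mask" > /proc/irq/34/smp_affinity
# printf '%x' "$mask" > /proc/irq/35/smp_affinity
```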