Date: Tue, 5 May 2020 11:08:19 +0200
From: Gaëtan Rivet <grive@u256.net>
To: Jonatan Langlet
Cc: dev@dpdk.org
Message-ID: <20200505090811.wv3zs2zcyeynq5is@u256.net>
Subject: Re: [dpdk-dev] Multiple cores for DPDK behind SmartNIC

Hello Jonatan,

On 27/04/20 14:49 +0200, Jonatan Langlet wrote:
> Hi group,
>
> We are building a setup with DPDK bound to VF ports of a Netronome Agilio
> CX 2x40 (NFP4000) SmartNIC.
> Netronome does some P4 processing of packets, and forwards them through
> SR-IOV to the host, where DPDK continues processing.
>
> My problem: in DPDK I cannot allocate more than a single RX queue to the
> ports.
> Multiple DPDK processes cannot pull from the same queue, which means that
> my DPDK setup only works with a single core.
>
> Binding DPDK to PF ports on a simple Intel 2x10G NIC works without a
> problem; multiple RX queues (and hence multiple cores) work fine.
>
> I bind DPDK to the Netronome VF ports with the igb_uio driver.
> I have seen vfio-pci mentioned; would using this driver allow multiple
> RX queues? We had some problems using this driver, which is why it has
> not yet been tested.
>
> If you need more information, I will be happy to provide it.
>
> Thanks,
> Jonatan

The VF in the guest is managed by the vendor PMD, which here means the NFP
PMD. Either igb_uio or vfio-pci only serves to expose the mappings to
userland; neither controls the port.

This means the NFP PMD does the work of telling the hardware to use multiple
queues and of initializing them. I am not familiar with this PMD, but reading
the code quickly I see nfp_net_enable_queues() at
drivers/net/nfp/nfp_net.c:404, which does
nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues), where
enabled_queues is a uint64_t bitmask describing the enabled queues, derived
from dev->data->nb_rx_queues.

I think you need to look into this first. You can use
`gdb -ex 'break nfp_net_enable_queues' -ex 'run' --args <your-app> [app options]`
then `p dev->data->nb_rx_queues` to check that your configuration is properly
passed down to the eth_dev while the PMD is initializing.

It might be your app failing to write the config, your command line missing an
--rxq=N somewhere (or whichever the equivalent option is in your app), or a
failure at init for the VF -- some HW might impose limitations on its VFs, and
there it will really depend on your NIC and driver.
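If it turns out to be the application side, the usual ethdev sequence for
requesting several RX queues looks roughly like the sketch below. This is
only an outline, not the code of your application: setup_multi_rxq(), the
port id, queue count, descriptor count (512) and mempool are placeholders,
the RSS settings are one common choice rather than a requirement, and it has
not been tested against the NFP VF.

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch only: request nb_rxq RX queues (and as many TX queues) on port_id.
 * Descriptor count and mempool are placeholders. */
static int
setup_multi_rxq(uint16_t port_id, uint16_t nb_rxq, struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf = { .rss_conf = { .rss_hf = ETH_RSS_IP } },
	};
	uint16_t q;
	int ret;

	/* First check how many queues the VF actually reports: if this is 1,
	 * the limitation sits below the application. */
	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	if (dev_info.max_rx_queues < nb_rxq || dev_info.max_tx_queues < nb_rxq)
		return -EINVAL;

	/* Only keep RSS hash types the port supports. */
	port_conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;

	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_rxq, &port_conf);
	if (ret != 0)
		return ret;

	/* One RX and one TX queue per worker core, all on the same mempool. */
	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL, mp);
		if (ret != 0)
			return ret;
		ret = rte_eth_tx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL);
		if (ret != 0)
			return ret;
	}

	return rte_eth_dev_start(port_id);
}

The early max_rx_queues check is the quickest way to tell whether the
single-queue limit comes from the VF itself rather than from the host-side
configuration.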
You can reduce entropy by first using testpmd on your VF with the --rxq=N
option; start/stop in testpmd will show you the number of queues in use.

BR,

--
Gaëtan
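For reference, a testpmd invocation along those lines could look like the
following; the core list, the VF's PCI address and the queue count are
placeholders to adapt:

testpmd -l 0-3 -w 0000:03:08.0 -- -i --rxq=4 --txq=4

`start` followed by `stop` at the testpmd> prompt then reports the queues
actually in use, as mentioned above.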