From: "Gaëtan Rivet" <grive@u256.net>
To: Jonatan Langlet <jonatanlanglet@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Multiple cores for DPDK behind SmartNIC
Date: Tue, 5 May 2020 11:08:19 +0200
Message-ID: <20200505090811.wv3zs2zcyeynq5is@u256.net>
In-Reply-To: <CA+BBBUMAFqCAQT2bEWPUZXd1Zhc9BWFhPeHHdFTh3HmCxesMsQ@mail.gmail.com>
Hello Jonatan,
On 27/04/20 14:49 +0200, Jonatan Langlet wrote:
> Hi group,
>
> We are building a setup with DPDK bound to VF ports of a Netronome Agilio
> CX 2x40 (NFP4000) SmartNIC.
> The Netronome does some P4 processing of the packets and forwards them
> through SR-IOV to the host, where DPDK continues processing.
>
> My problem: in DPDK I cannot allocate more than a single RX queue to the
> ports.
> Multiple DPDK processes cannot pull from the same queue, which means that
> my DPDK setup only works with a single core.
>
> Binding DPDK to the PF ports of a simple Intel 2x10G NIC works without a
> problem; multiple RX queues (and hence multiple cores) work fine.
>
>
> I bind DPDK to the Netronome VF ports with the igb_uio driver.
> I have seen vfio-pci mentioned; would using that driver allow multiple
> RX queues? We had some problems using it, which is why it has not yet
> been tested.
>
>
> If you need more information, I will be happy to provide it.
>
>
> Thanks,
> Jonatan
The VF in the guest is managed by the vendor PMD, meaning that here the NFP
PMD applies. Either igb-uio or vfio-pci only serves to expose the mappings
in userland; neither controls the port. This means the NFP PMD does the work
of telling the hardware to use multiple queues and of initializing them.
I am not familiar with this PMD, but reading the code quickly, I see
nfp_net_enable_queues() at drivers/net/nfp/nfp_net.c:404, which does
nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues), where
enabled_queues is a uint64_t bitmask describing the enabled queues, derived
from dev->data->nb_rx_queues.
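As an illustration only (a standalone sketch, not the driver code itself),
the derivation amounts to setting one bit per configured RX queue, so a
single-queue configuration leaves only bit 0 set:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustration: derive a per-queue enable bitmask from the number of
     * configured RX queues, one bit per queue, the same way the PMD derives
     * it from dev->data->nb_rx_queues. */
    static uint64_t
    enabled_queues_mask(uint16_t nb_rx_queues)
    {
        uint64_t enabled_queues = 0;
        uint16_t i;

        for (i = 0; i < nb_rx_queues; i++)
            enabled_queues |= UINT64_C(1) << i;

        return enabled_queues;
    }

    int
    main(void)
    {
        /* nb_rx_queues = 1 -> 0x1; nb_rx_queues = 4 -> 0xf */
        printf("0x%" PRIx64 "\n", enabled_queues_mask(1));
        printf("0x%" PRIx64 "\n", enabled_queues_mask(4));
        return 0;
    }

So if only one bit reaches the hardware here, the interesting question is
why nb_rx_queues was 1 at this point.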
I think you need to look into this first. You can use
`gdb -ex 'break nfp_net_enable_queues' -ex 'run' --args <your-app-and-args-here>`
then `p dev->data->nb_rx_queues` to check that your configuration is
properly passed down to the eth_dev while the PMD initializes.
It might be your app failing to write the config, your command line
missing an --rxq=N somewhere (or whatever the equivalent option is in
your app), or a failure at init for the VF -- some hardware imposes
limitations on its VFs, and there it will really depend on your NIC
and driver.
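To make that concrete, here is a hedged sketch of what "writing the config"
looks like on the application side, using the generic ethdev API (this is
not your application's code; the RSS settings and the queue-limit check are
only illustrative):

    #include <errno.h>

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Hedged sketch: the nb_rx_queues the PMD later sees is the second
     * argument of rte_eth_dev_configure(); each of those queues must then
     * be set up before rte_eth_dev_start(). A VF-side limitation would show
     * up in dev_info.max_rx_queues. */
    static int
    configure_rx_queues(uint16_t port_id, uint16_t nb_rxq,
                        struct rte_mempool *mp)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
            /* Example RSS hash config so flows spread over the queues;
             * adjust rss_hf to what the PMD actually supports. */
            .rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP,
        };
        struct rte_eth_dev_info dev_info;
        uint16_t q;
        int ret;

        rte_eth_dev_info_get(port_id, &dev_info);
        if (nb_rxq > dev_info.max_rx_queues)
            return -EINVAL; /* VF/driver does not expose that many queues */

        ret = rte_eth_dev_configure(port_id, nb_rxq, nb_rxq, &conf);
        if (ret < 0)
            return ret;

        for (q = 0; q < nb_rxq; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, 512,
                                         rte_eth_dev_socket_id(port_id),
                                         NULL, mp);
            if (ret < 0)
                return ret;
        }

        return rte_eth_dev_start(port_id);
    }

If dev_info.max_rx_queues already comes back as 1 for the VF, the limitation
is on the NIC/driver side rather than in your application.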
You can reduce entropy by first using testpmd on your VF with the --rxq=N
option; start/stop in testpmd will show you the number of queues in use.
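Something along these lines (exact option names depend on your DPDK version;
-w is the PCI device whitelist option):
`testpmd -l 0-3 -n 4 -w <vf-pci-address> -- -i --rxq=4 --txq=4`
then `start` at the testpmd prompt prints the forwarding configuration,
including the number of RX/TX queues per port.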
BR,
--
Gaëtan