* [dpdk-dev] Multiple cores for DPDK behind SmartNIC
From: Jonatan Langlet @ 2020-04-27 12:49 UTC
To: dev
Hi group,
We are building a setup with DPDK bound to the VF ports of a Netronome Agilio
CX 2x40 (NFP4000) SmartNIC. The Netronome card does some P4 processing of the
packets and forwards them through SR-IOV to the host, where DPDK continues
processing.
My problem: in DPDK I cannot allocate more than a single RX queue per port.
Multiple DPDK processes cannot pull from the same queue, which means that my
DPDK setup only works with a single core.
Binding DPDK to the PF ports of a simple Intel 2x10G NIC works without a
problem: multiple RX queues (and hence multiple cores) work fine.
I bind DPDK to the Netronome VF ports with the igb_uio driver. I have seen
vfio-pci mentioned; would using that driver allow multiple RX queues? We had
some problems with vfio-pci, which is why we have not yet tested it.
If you need more information, I will be happy to provide it.
Thanks,
Jonatan
* Re: [dpdk-dev] Multiple cores for DPDK behind SmartNIC
From: Gaëtan Rivet @ 2020-05-05 9:08 UTC
To: Jonatan Langlet; +Cc: dev
Hello Jonatan,
On 27/04/20 14:49 +0200, Jonatan Langlet wrote:
> Hi group,
>
> We are building a setup with DPDK bound to the VF ports of a Netronome
> Agilio CX 2x40 (NFP4000) SmartNIC. The Netronome card does some P4
> processing of the packets and forwards them through SR-IOV to the host,
> where DPDK continues processing.
>
> My problem: in DPDK I cannot allocate more than a single RX queue per port.
> Multiple DPDK processes cannot pull from the same queue, which means that
> my DPDK setup only works with a single core.
>
> Binding DPDK to the PF ports of a simple Intel 2x10G NIC works without a
> problem: multiple RX queues (and hence multiple cores) work fine.
>
>
> I bind DPDK to the Netronome VF ports with the igb_uio driver. I have seen
> vfio-pci mentioned; would using that driver allow multiple RX queues? We
> had some problems with vfio-pci, which is why we have not yet tested it.
>
>
> If you need more information, I will be happy to provide it.
>
>
> Thanks,
> Jonatan
The VF in the guest is managed by the vendor PMD, meaning here the NFP PMD
applies. Either igb_uio or vfio-pci only serves to expose device mappings to
userland; neither controls the port, so switching kernel drivers will not by
itself change the queue behavior. This means that the NFP PMD is doing the
work of telling the hardware to use multiple queues and initializing them.
I am not familiar with this PMD, but reading the code quickly, I see
nfp_net_enable_queues() at drivers/net/nfp/nfp_net.c:404, which calls
nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues), where
enabled_queues is a uint64_t bitmask describing the enabled queues, derived
from dev->data->nb_rx_queues.
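Roughly, the enable logic amounts to the following sketch (a paraphrase of
nfp_net_enable_queues(), not the verbatim driver code):

    /* One enable bit per configured RX queue -- paraphrased from
     * nfp_net_enable_queues() in drivers/net/nfp/nfp_net.c. */
    static void
    enable_rx_queues_sketch(struct rte_eth_dev *dev, struct nfp_net_hw *hw)
    {
            uint64_t enabled_queues = 0;
            int i;

            for (i = 0; i < dev->data->nb_rx_queues; i++)
                    enabled_queues |= (1ULL << i);

            nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues);
    }

So if nb_rx_queues reaches the PMD as 1, only bit 0 is set and the hardware
enables a single RX ring.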
I think you need to look into this first. You can use
`gdb -ex 'break nfp_net_enable_queues' -ex 'run' --args <your-app-and-args-here>`
then `p dev->data->nb_rx_queues` to check that your config is properly
passed down to the eth_dev while the PMD initializes.
It might be your app failing to write the config, your command line missing
an --rxq=N somewhere (or whatever the equivalent option is in your app), or a
failure at init for the VF: some hardware imposes limitations on its VFs, and
there it will really depend on your NIC and driver.
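For reference, the application side usually looks something like this
minimal sketch (untested and illustrative only; NB_RXQ, port_id and
mbuf_pool are placeholders, and TX setup and device start are omitted):

    #include <rte_ethdev.h>

    /* Configure NB_RXQ RX queues with RSS on one port -- a sketch,
     * not taken from any real application. */
    static int
    setup_rx_queues(uint16_t port_id, struct rte_mempool *mbuf_pool)
    {
            const uint16_t NB_RXQ = 4;
            struct rte_eth_conf conf = {
                    .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
                    .rx_adv_conf = {
                            .rss_conf = { .rss_hf = ETH_RSS_IP },
                    },
            };
            uint16_t q;
            int ret;

            /* The nb_rx_queues passed here is what the PMD later sees. */
            ret = rte_eth_dev_configure(port_id, NB_RXQ, 1, &conf);
            if (ret < 0)
                    return ret;

            for (q = 0; q < NB_RXQ; q++) {
                    ret = rte_eth_rx_queue_setup(port_id, q, 512,
                                    rte_eth_dev_socket_id(port_id),
                                    NULL, mbuf_pool);
                    if (ret < 0)
                            return ret;
            }

            /* TX queue setup and rte_eth_dev_start() omitted here. */
            return 0;
    }

If rte_eth_dev_configure() already fails with NB_RXQ > 1 on the VF, that
points to a device or PMD limitation rather than to your forwarding code.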
You can reduce entropy by first using testpmd on your VF with the --rxq=N
option; start/stop in testpmd will show you the number of queues in use.
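For example, something like (core list and PCI address are placeholders for
your setup; -w is the EAL PCI whitelist option):

    ./testpmd -l 0-3 -w <vf-pci-address> -- -i --rxq=4 --txq=4 --nb-cores=3

Then `start`; the forwarding-stream mapping testpmd prints should show
whether traffic is actually spread over the four RX queues.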
BR,
--
Gaëtan