DPDK usage discussions
* pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration
@ 2021-11-04  8:44 Hong Christian
  2021-11-04 10:19 ` Loftus, Ciara
  0 siblings, 1 reply; 5+ messages in thread
From: Hong Christian @ 2021-11-04  8:44 UTC (permalink / raw)
  To: users, xiaolong.ye, ciara.loftus


Hello DPDK users,

Sorry to disturb.

I am currently testing the net_af_xdp device.
But I found that device configuration always fails if I configure rx queues != tx queues.
In my project, I use pipeline mode and require 1 rx queue and several tx queues.

Example:
I run my app with the parameters: "--no-pci --vdev net_af_xdp0,iface=ens12,queue_count=2 --vdev net_af_xdp1,iface=ens13,queue_count=2"
When I configure 1 rx and 2 tx queues, setup fails with: "Port0 dev_configure = -22"
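
For reference, this is roughly what my app does at the ethdev level (a minimal sketch, not my real code; the port id is a placeholder):

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id)
{
        struct rte_eth_conf port_conf = {0};

        /* 1 rx queue, 2 tx queues: with net_af_xdp this is the call that
         * fails and returns -22 (-EINVAL) */
        return rte_eth_dev_configure(port_id, 1, 2, &port_conf);
}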

After checking some XDP docs, I found that the rx and tx queues always seem to be bound together, connected to the fill and completion rings.
But I would still like to confirm this with you. Could you please share your comments?
Thanks in advance.

Br,
Christian



* RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration
  2021-11-04  8:44 pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration Hong Christian
@ 2021-11-04 10:19 ` Loftus, Ciara
  2021-11-05  5:42   ` Re: " Hong Christian
  0 siblings, 1 reply; 5+ messages in thread
From: Loftus, Ciara @ 2021-11-04 10:19 UTC (permalink / raw)
  To: Hong Christian; +Cc: users, xiaolong.ye

> 
> Hello DPDK users,
> 
> Sorry to disturb.
> 
> I am currently testing the net_af_xdp device.
> But I found that device configuration always fails if I configure rx
> queues != tx queues.
> In my project, I use pipeline mode and require 1 rx queue and several tx queues.
> 
> Example:
> I run my app with the parameters: "--no-pci --vdev
> net_af_xdp0,iface=ens12,queue_count=2 --vdev
> net_af_xdp1,iface=ens13,queue_count=2"
> When I configure 1 rx and 2 tx queues, setup fails with: "Port0
> dev_configure = -22"
> 
> After checking some XDP docs, I found that the rx and tx queues always
> seem to be bound together, connected to the fill and completion rings.
> But I would still like to confirm this with you. Could you please share your
> comments?
> Thanks in advance.

Hi Christian,

Thanks for your question. Yes, at the moment this configuration is forbidden for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
However, maybe this is an unnecessary restriction of the PMD. It is indeed possible to create a socket with only an rxq or only a txq. I will add investigating the feasibility of enabling this in the PMD to my backlog.
In the meantime, one workaround you could try is to create a matching number of rxqs and txqs but steer all traffic to the first rxq using some NIC filtering, e.g. tc.
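
To sketch what the application side of that workaround might look like (placeholder port, queue and descriptor numbers only; the kernel-side steering of all flows to queue 0 with tc or ethtool filters is assumed to be set up separately, and this is untested illustration code):

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
setup_port(uint16_t port_id, struct rte_mempool *mp)
{
        struct rte_eth_conf conf = {0};
        uint16_t q;
        int ret;

        /* matching rx/tx queue counts so the AF_XDP PMD accepts the config */
        ret = rte_eth_dev_configure(port_id, 2, 2, &conf);
        if (ret < 0)
                return ret;
        for (q = 0; q < 2; q++) {
                ret = rte_eth_rx_queue_setup(port_id, q, 512,
                                rte_eth_dev_socket_id(port_id), NULL, mp);
                if (ret < 0)
                        return ret;
                ret = rte_eth_tx_queue_setup(port_id, q, 512,
                                rte_eth_dev_socket_id(port_id), NULL);
                if (ret < 0)
                        return ret;
        }
        return rte_eth_dev_start(port_id);
}

/* Datapath: poll only rxq 0 (rte_eth_rx_burst(port_id, 0, ...)) since all
 * traffic is steered there; the pipeline stages are free to transmit on
 * txq 0 or txq 1 via rte_eth_tx_burst(). */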

Thanks,
Ciara

> 
> Br,
> Christian


* Re: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration
  2021-11-04 10:19 ` Loftus, Ciara
@ 2021-11-05  5:42   ` Hong Christian
  2021-11-05  7:48     ` Loftus, Ciara
  0 siblings, 1 reply; 5+ messages in thread
From: Hong Christian @ 2021-11-05  5:42 UTC (permalink / raw)
  To: Loftus, Ciara; +Cc: users


Hi Ciara,

Thank you for your quick response and useful tips.
That's a good idea to change the rx flow; I will test it later.

Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration. The performance is much worse than the MLX5 PMD, with nearly 2/3 of the traffic dropped... total traffic is 3 Gbps.
I also checked some statistics; they show drops on xdp receive and on the app's internal transfer. It seems the xdp receive and send paths take time, since there is no difference on the app side between the two tests (dpdk/xdp).

Is any extra configuration required for the AF_XDP PMD?
Should the AF_XDP PMD have performance similar to a native DPDK PMD below 10 Gbps?

Br,
Christian
________________________________
From: Loftus, Ciara <ciara.loftus@intel.com>
Sent: November 4, 2021 10:19
To: Hong Christian <hongguochun@hotmail.com>
Cc: users@dpdk.org <users@dpdk.org>; xiaolong.ye@intel.com <xiaolong.ye@intel.com>
Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration

>
> Hello DPDK users,
>
> Sorry to disturb.
>
> I am currently testing the net_af_xdp device.
> But I found that device configuration always fails if I configure rx
> queues != tx queues.
> In my project, I use pipeline mode and require 1 rx queue and several tx queues.
>
> Example:
> I run my app with the parameters: "--no-pci --vdev
> net_af_xdp0,iface=ens12,queue_count=2 --vdev
> net_af_xdp1,iface=ens13,queue_count=2"
> When I configure 1 rx and 2 tx queues, setup fails with: "Port0
> dev_configure = -22"
>
> After checking some XDP docs, I found that the rx and tx queues always
> seem to be bound together, connected to the fill and completion rings.
> But I would still like to confirm this with you. Could you please share your
> comments?
> Thanks in advance.

Hi Christian,

Thanks for your question. Yes, at the moment this configuration is forbidden for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
However, maybe this is an unnecessary restriction of the PMD. It is indeed possible to create a socket with only an rxq or only a txq. I will add investigating the feasibility of enabling this in the PMD to my backlog.
In the meantime, one workaround you could try is to create a matching number of rxqs and txqs but steer all traffic to the first rxq using some NIC filtering, e.g. tc.

Thanks,
Ciara

>
> Br,
> Christian



* RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration
  2021-11-05  5:42   ` Re: " Hong Christian
@ 2021-11-05  7:48     ` Loftus, Ciara
  2021-11-05 10:05       ` Re: " Hong Christian
  0 siblings, 1 reply; 5+ messages in thread
From: Loftus, Ciara @ 2021-11-05  7:48 UTC (permalink / raw)
  To: Hong Christian; +Cc: users

> 
> Hi Ciara,
> 
> Thank you for your quick response and useful tips.
> That's a good idea to change the rx flow; I will test it later.
> 
> Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration.
> The performance is much worse than the MLX5 PMD, with nearly 2/3 of the
> traffic dropped... total traffic is 3 Gbps.
> I also checked some statistics; they show drops on xdp receive and on the
> app's internal transfer. It seems the xdp receive and send paths take time,
> since there is no difference on the app side between the two tests (dpdk/xdp).
> 
> Is any extra configuration required for the AF_XDP PMD?
> Should the AF_XDP PMD have performance similar to a native DPDK PMD
> below 10 Gbps?

Hi Christian,

You're welcome. I have some suggestions for improving the performance.
1. Preferred busy polling
If you are willing to upgrade your kernel to >=5.11 and your DPDK to v21.05 you can avail of the preferred busy polling feature. Info on the benefits can be found here: http://mails.dpdk.org/archives/dev/2021-March/201172.html
Essentially it should improve the performance for a single core use case (driver and application on same core).
2. IRQ pinning
If you are not using the preferred busy polling feature, I suggest pinning the IRQ for your driver to a dedicated core that is not busy with other tasks, e.g. the application. For most devices you can find IRQ info in /proc/interrupts, and you can change the pinning by modifying /proc/irq/<irq_number>/smp_affinity.
3. Queue configuration
Make sure you are using all queues on the device. Check the output of ethtool -l <iface> and either set the PMD queue_count to equal the number of queues, or reduce the number of queues using ethtool -L <iface> combined N.

I can't confirm whether the performance should reach that of the driver-specific PMD, but hopefully some of the above helps get you some of the way there.
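
In case it is useful, below is a rough sketch of doing the IRQ pinning from suggestion 2 programmatically rather than by echoing into the file (the IRQ number and CPU mask in the usage comment are placeholders). For suggestion 1, I believe the v21.05 PMD exposes the feature through a busy_budget vdev argument, but please double-check the net_af_xdp documentation for your DPDK version.

#include <stdio.h>

/* Write a CPU mask (hex string, same format as /proc/irq/<n>/smp_affinity)
 * for the given IRQ number. Take the real IRQ number from /proc/interrupts. */
static int
pin_irq(unsigned int irq, const char *cpu_mask_hex)
{
        char path[64];
        FILE *f;
        int ret;

        snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
        f = fopen(path, "w");
        if (f == NULL)
                return -1;
        ret = (fprintf(f, "%s\n", cpu_mask_hex) < 0) ? -1 : 0;
        fclose(f);
        return ret;
}

/* e.g. pin_irq(63, "8000") pins IRQ 63 to core 15 (bit 15 of the mask) */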

Thanks,
Ciara

> 
> Br,
> Christian
> ________________________________________
> From: Loftus, Ciara <ciara.loftus@intel.com>
> Sent: November 4, 2021 10:19
> To: Hong Christian <hongguochun@hotmail.com>
> Cc: users@dpdk.org <users@dpdk.org>; xiaolong.ye@intel.com <xiaolong.ye@intel.com>
> Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue
> configuration
> 
> >
> > Hello DPDK users,
> >
> > Sorry to disturb.
> >
> > I am currently testing the net_af_xdp device.
> > But I found that device configuration always fails if I configure rx
> > queues != tx queues.
> > In my project, I use pipeline mode and require 1 rx queue and several tx queues.
> >
> > Example:
> > I run my app with the parameters: "--no-pci --vdev
> > net_af_xdp0,iface=ens12,queue_count=2 --vdev
> > net_af_xdp1,iface=ens13,queue_count=2"
> > When I configure 1 rx and 2 tx queues, setup fails with: "Port0
> > dev_configure = -22"
> >
> > After checking some XDP docs, I found that the rx and tx queues always
> > seem to be bound together, connected to the fill and completion rings.
> > But I would still like to confirm this with you. Could you please share your
> > comments?
> > Thanks in advance.
> 
> Hi Christian,
> 
> Thanks for your question. Yes, at the moment this configuration is forbidden
> for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
> However, maybe this is an unnecessary restriction of the PMD. It is indeed
> possible to create a socket with only an rxq or only a txq. I will add
> investigating the feasibility of enabling this in the PMD to my backlog.
> In the meantime, one workaround you could try is to create a matching
> number of rxqs and txqs but steer all traffic to the first rxq using some NIC
> filtering, e.g. tc.
> 
> Thanks,
> Ciara
> 
> >
> > Br,
> > Christian


* Re: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration
  2021-11-05  7:48     ` Loftus, Ciara
@ 2021-11-05 10:05       ` Hong Christian
  0 siblings, 0 replies; 5+ messages in thread
From: Hong Christian @ 2021-11-05 10:05 UTC (permalink / raw)
  To: Loftus, Ciara; +Cc: users


Hi Ciara,

Thanks for those suggestions.

Compared to busy polling mode, I prefer to use one more core for IRQ pinning.

I configured the NIC to 1 queue, and I checked that the IRQ smp_affinity is already different from my application's cores.
The IRQs are on cores 15/11, while my app is bound to cores 1/2 for rx and core 3 for tx.

[root@gf]$ cat /proc/interrupts | grep mlx | grep mlx5_comp0
 63:         48   46227515          0          0        151          0          0          0          0          0          0          0          0          0          1   19037579   PCI-MSI 196609-edge      mlx5_comp0@pci:0000:00:0c.0
102:         49          0          0          0          0          1          0          0          0      45030          0   11625905          0   50609158          0        308   PCI-MSI 212993-edge      mlx5_comp0@pci:0000:00:0d.0
[root@gf]$ cat /proc/irq/63/smp_affinity
8000
[root@gf]$ cat /proc/irq/102/smp_affinity
0800

The performance increased a little, but there is no big change...
I will continue investigating the issue. If you have any other tips, please feel free to share them with me. Thanks again. 🙂

Br,
Christian

________________________________
From: Loftus, Ciara <ciara.loftus@intel.com>
Sent: November 5, 2021 7:48
To: Hong Christian <hongguochun@hotmail.com>
Cc: users@dpdk.org <users@dpdk.org>
Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration

>
> Hi Ciara,
>
> Thank you for your quick response and useful tips.
> That's a good idea to change the rx flow; I will test it later.
>
> Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration.
> The performance is much worse than the MLX5 PMD, with nearly 2/3 of the
> traffic dropped... total traffic is 3 Gbps.
> I also checked some statistics; they show drops on xdp receive and on the
> app's internal transfer. It seems the xdp receive and send paths take time,
> since there is no difference on the app side between the two tests (dpdk/xdp).
>
> Is any extra configuration required for the AF_XDP PMD?
> Should the AF_XDP PMD have performance similar to a native DPDK PMD
> below 10 Gbps?

Hi Christian,

You're welcome. I have some suggestions for improving the performance.
1. Preferred busy polling
If you are willing to upgrade your kernel to >=5.11 and your DPDK to v21.05 you can avail of the preferred busy polling feature. Info on the benefits can be found here: http://mails.dpdk.org/archives/dev/2021-March/201172.html
Essentially it should improve the performance for a single core use case (driver and application on same core).
2. IRQ pinning
If you are not using the preferred busy polling feature, I suggest pinning the IRQ for your driver to a dedicated core that is not busy with other tasks, e.g. the application. For most devices you can find IRQ info in /proc/interrupts, and you can change the pinning by modifying /proc/irq/<irq_number>/smp_affinity.
3. Queue configuration
Make sure you are using all queues on the device. Check the output of ethtool -l <iface> and either set the PMD queue_count to equal the number of queues, or reduce the number of queues using ethtool -L <iface> combined N.

I can't confirm whether the performance should reach that of the driver-specific PMD, but hopefully some of the above helps get you some of the way there.

Thanks,
Ciara

>
> Br,
> Christian
> ________________________________________
> From: Loftus, Ciara <ciara.loftus@intel.com>
> Sent: November 4, 2021 10:19
> To: Hong Christian <hongguochun@hotmail.com>
> Cc: users@dpdk.org <users@dpdk.org>; xiaolong.ye@intel.com <xiaolong.ye@intel.com>
> Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue
> configuration
>
> >
> > Hello DPDK users,
> >
> > Sorry to disturb.
> >
> > I am currently testing the net_af_xdp device.
> > But I found that device configuration always fails if I configure rx
> > queues != tx queues.
> > In my project, I use pipeline mode and require 1 rx queue and several tx queues.
> >
> > Example:
> > I run my app with the parameters: "--no-pci --vdev
> > net_af_xdp0,iface=ens12,queue_count=2 --vdev
> > net_af_xdp1,iface=ens13,queue_count=2"
> > When I configure 1 rx and 2 tx queues, setup fails with: "Port0
> > dev_configure = -22"
> >
> > After checking some XDP docs, I found that the rx and tx queues always
> > seem to be bound together, connected to the fill and completion rings.
> > But I would still like to confirm this with you. Could you please share your
> > comments?
> > Thanks in advance.
>
> Hi Christian,
>
> Thanks for your question. Yes, at the moment this configuration is forbidden
> for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
> However, maybe this is an unnecessary restriction of the PMD. It is indeed
> possible to create a socket with only an rxq or only a txq. I will add
> investigating the feasibility of enabling this in the PMD to my backlog.
> In the meantime, one workaround you could try is to create a matching
> number of rxqs and txqs but steer all traffic to the first rxq using some NIC
> filtering, e.g. tc.
>
> Thanks,
> Ciara
>
> >
> > Br,
> > Christian



end of thread, other threads:[~2021-11-05 10:05 UTC | newest]

Thread overview: 5+ messages
2021-11-04  8:44 pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration Hong Christian
2021-11-04 10:19 ` Loftus, Ciara
2021-11-05  5:42   ` Re: " Hong Christian
2021-11-05  7:48     ` Loftus, Ciara
2021-11-05 10:05       ` Re: " Hong Christian
