DPDK usage discussions
From: Hong Christian <hongguochun@hotmail.com>
To: "Loftus, Ciara" <ciara.loftus@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration
Date: Fri, 5 Nov 2021 10:05:09 +0000	[thread overview]
Message-ID: <SY4P282MB2758DE788C58849EB6D93FC9AC8E9@SY4P282MB2758.AUSP282.PROD.OUTLOOK.COM> (raw)
In-Reply-To: <PH0PR11MB4791AE89E14C944864759BF98E8E9@PH0PR11MB4791.namprd11.prod.outlook.com>


Hi Ciara,

Thanks for those suggestions.

Compared to busy polling mode, I prefer to use one more core and pin the IRQ to it.

I configured the NIC to a single queue, and I checked that the IRQ smp_affinity already differs from my application's cores.
The IRQs are on cores 15/11, while my app is bound to cores 1/2 for rx and core 3 for tx.

[root@gf]$ cat /proc/interrupts | grep mlx | grep mlx5_comp0
 63:         48   46227515          0          0        151          0          0          0          0          0          0          0          0          0          1   19037579   PCI-MSI 196609-edge      mlx5_comp0@pci:0000:00:0c.0
102:         49          0          0          0          0          1          0          0          0      45030          0   11625905          0   50609158          0        308   PCI-MSI 212993-edge      mlx5_comp0@pci:0000:00:0d.0
[root@gf]$ cat /proc/irq/63/smp_affinity
8000
[root@gf]$ cat /proc/irq/102/smp_affinity
0800
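
For reference, those masks decode to single cores (0x8000 is core 15, 0x0800 is core 11), so if I ever need to move an IRQ onto another otherwise idle core I can simply write a new mask, for example (core 14, illustration only):
[root@gf]$ echo 4000 > /proc/irq/63/smp_affinity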

The performance increased a little, but there was no big change...
I will continue investigating the issue. If you have any other tips, please feel free to share them with me. Thanks again. 🙂

Br,
Christian

________________________________
From: Loftus, Ciara <ciara.loftus@intel.com>
Sent: 5 November 2021 7:48
To: Hong Christian <hongguochun@hotmail.com>
Cc: users@dpdk.org <users@dpdk.org>
Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration

>
> Hi Ciara,
>
> Thank you for your quick response and useful tips.
> That's a good idea to change the rx flow; I will test it later.
>
> Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration. The
> performance is much worse than the MLX5 PMD, nearly a 2/3 drop... total traffic is
> 3 Gbps.
> I also checked some statistics; they show drops on XDP receive and on the app-internal
> transfer. It seems XDP receive and send take time, since there is no difference
> on the app side between the two tests (dpdk/xdp).
>
> Is any extra configuration required for the AF_XDP PMD?
> Should the AF_XDP PMD have performance similar to the native DPDK PMD
> below 10 Gbps?

Hi Christian,

You're welcome. I have some suggestions for improving the performance.
1. Preferred busy polling
If you are willing to upgrade your kernel to >= 5.11 and your DPDK to v21.05, you can avail of the preferred busy polling feature. Info on the benefits can be found here: http://mails.dpdk.org/archives/dev/2021-March/201172.html
Essentially it should improve the performance for a single-core use case (driver and application on the same core).
2. IRQ pinning
If you are not using the preferred busy polling feature, I suggest pinning the IRQ for your driver to a dedicated core that is not busy with other tasks, e.g. the application. For most devices you can find IRQ info in /proc/interrupts, and you can change the pinning by modifying /proc/irq/<irq_number>/smp_affinity.
3. Queue configuration
Make sure you are using all queues on the device. Check the output of ethtool -l <iface> and either set the PMD queue_count to equal the number of queues, or reduce the number of queues using ethtool -L <iface> combined N. Rough example commands for all three suggestions are sketched below.
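
Roughly, the three suggestions could look like this on the command line (the interface name, IRQ number, core mask and values are placeholders only; the busy_budget devarg assumes DPDK >= v21.05, so please double-check the names against the af_xdp PMD docs for your version):

# 1. preferred busy polling: pass a busy_budget to the af_xdp vdev and
#    let the kernel defer hard IRQs / flush GRO less often on the interface
--vdev net_af_xdp0,iface=ens12,queue_count=1,busy_budget=64
echo 2 > /sys/class/net/ens12/napi_defer_hard_irqs
echo 200000 > /sys/class/net/ens12/gro_flush_timeout

# 2. IRQ pinning: move IRQ 63 to core 15 (mask 0x8000), away from the
#    cores running the application
echo 8000 > /proc/irq/63/smp_affinity

# 3. queue configuration: check the channel count and match it to the
#    PMD queue_count (here: reduce the device to 1 combined queue)
ethtool -l ens12
ethtool -L ens12 combined 1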

I can't confirm whether the performance should reach that of the driver-specific PMD, but hopefully some of the above helps you get some of the way there.

Thanks,
Ciara

>
> Br,
> Christian
> ________________________________________
> From: Loftus, Ciara <ciara.loftus@intel.com>
> Sent: 4 November 2021 10:19
> To: Hong Christian <hongguochun@hotmail.com>
> Cc: users@dpdk.org <users@dpdk.org>; xiaolong.ye@intel.com <xiaolong.ye@intel.com>
> Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue
> configuration
>
> >
> > Hello DPDK users,
> >
> > Sorry to disturb.
> >
> > I am currently testing the net_af_xdp device.
> > But I found that device configuration always fails if I configure rx queue count !=
> > tx queue count.
> > In my project, I use pipeline mode, and require 1 rx and several tx queues.
> >
> > Example:
> > I run my app with the parameters: "--no-pci --vdev
> > net_af_xdp0,iface=ens12,queue_count=2 --vdev
> > net_af_xdp1,iface=ens13,queue_count=2"
> > And when I configure 1 rx and 2 tx queues, setup fails with the print: "Port0
> > dev_configure = -22"
> >
> > After checking some XDP docs, I found that rx and tx are always bound together,
> > connected to the fill and completion rings.
> > But I still want to confirm this with you. Could you please share your
> > comments?
> > Thanks in advance.
>
> Hi Christian,
>
> Thanks for your question. Yes, at the moment this configuration is forbidden
> for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
> However, maybe this is an unnecessary restriction of the PMD. It is indeed
> possible to create a socket with either one rxq or one txq. I will add looking
> into the feasibility of enabling this in the PMD to my backlog.
> In the meantime, one workaround you could try would be to create an equal
> number of rxqs and txqs but steer all traffic to the first rxq using some NIC
> filtering, e.g. tc.
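>
> A very rough sketch of such steering with ethtool (interface name, port and
> filter type below are placeholders; tc flower or your NIC's own tooling would
> work just as well):
>
> ethtool -X ens12 equal 1      # collapse the RSS spread onto rx queue 0
> ethtool -K ens12 ntuple on
> ethtool -N ens12 flow-type udp4 dst-port 9000 action 0   # or an explicit n-tuple rule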
>
> Thanks,
> Ciara
>
> >
> > Br,
> > Christian


