DPDK usage discussions
From: "Loftus, Ciara" <ciara.loftus@intel.com>
To: Hong Christian <hongguochun@hotmail.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration
Date: Fri, 5 Nov 2021 07:48:30 +0000
Message-ID: <PH0PR11MB4791AE89E14C944864759BF98E8E9@PH0PR11MB4791.namprd11.prod.outlook.com> (raw)
In-Reply-To: <SY4P282MB2758AAE8F4F06958B84961ADAC8E9@SY4P282MB2758.AUSP282.PROD.OUTLOOK.COM>

> 
> Hi Ciara,
> 
> Thank you for your quick response and useful tips.
> That's a good idea to change the rx flow; I will test it later.
> 
> Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration. The
> performance is much worse than the MLX5 PMD, nearly a 2/3 drop... total traffic
> is 3 Gbps.
> I also checked some statistics: they show drops on XDP receive and on the app's
> internal transfer. It seems the XDP receive and send paths take time, since there
> is no difference on the app side between the two tests (DPDK vs. XDP).
> 
> Is any extra configuration required for the AF_XDP PMD?
> Should the XDP PMD have performance similar to the DPDK PMD below 10 Gbps?

Hi Christian,

You're welcome. I have some suggestions for improving the performance.
1. Preferred busy polling
If you are willing to upgrade your kernel to >=5.11 and your DPDK to v21.05 you can avail of the preferred busy polling feature. Info on the benefits can be found here: http://mails.dpdk.org/archives/dev/2021-March/201172.html
Essentially it should improve performance for a single-core use case (driver and application on the same core); example commands for all three suggestions are sketched below.
2. IRQ pinning
If you are not using the preferred busy polling feature, I suggest pinning the IRQ for your driver to a dedicated core that is not busy with other tasks, e.g. the application. For most devices you can find IRQ info in /proc/interrupts, and you can change the pinning by modifying /proc/irq/<irq_number>/smp_affinity.
3. Queue configuration
Make sure you are using all queues on the device. Check the output of ethtool -l <iface> and either set the PMD queue_count to equal the number of queues, or reduce the number of queues using ethtool -L <iface> combined N.
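To make these concrete, here is a rough sketch of the commands for all three points, using testpmd as a stand-in for your app and the iface name from your earlier mail. The busy_budget devarg, the napi/gro sysfs values and the core numbers are illustrative assumptions drawn from the 21.05 busy polling work, so please verify them against your kernel and DPDK docs:

  # 1. Preferred busy polling (kernel >= 5.11, DPDK >= 21.05). The PMD enables it
  #    by default on supporting kernels; busy_budget tunes the polling budget.
  dpdk-testpmd -l 0 --no-pci \
      --vdev net_af_xdp0,iface=ens12,queue_count=1,busy_budget=256 -- -i
  #    Suggested companion kernel settings on the interface:
  echo 2      > /sys/class/net/ens12/napi_defer_hard_irqs
  echo 200000 > /sys/class/net/ens12/gro_flush_timeout

  # 2. IRQ pinning (when not using busy polling): pin the queue IRQs to a
  #    dedicated core, e.g. core 2 (hex mask 0x4), away from the app cores.
  grep ens12 /proc/interrupts
  echo 4 > /proc/irq/<irq_number>/smp_affinity

  # 3. Queue configuration: make the device queue count match the PMD queue_count.
  ethtool -l ens12                  # show current/maximum combined queues
  ethtool -L ens12 combined 1       # e.g. reduce to 1 to match queue_count=1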

I can't confirm whether the performance should reach that of the driver-specific PMD, but hopefully some of the above helps get some of the way there.

Thanks,
Ciara

> 
> Br,
> Christian
> ________________________________________
> From: Loftus, Ciara <ciara.loftus@intel.com>
> Sent: 4 November 2021 10:19
> To: Hong Christian <hongguochun@hotmail.com>
> Cc: users@dpdk.org; xiaolong.ye@intel.com
> Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue
> configuration
> 
> >
> > Hello DPDK users,
> >
> > Sorry to disturb.
> >
> > I am currently testing the net_af_xdp device.
> > But I found that device configuration always fails if I configure the number
> > of rx queues != the number of tx queues.
> > In my project, I use pipeline mode, and require 1 rx and several tx queues.
> >
> > Example:
> > I run my app with the parameters: "--no-pci --vdev
> > net_af_xdp0,iface=ens12,queue_count=2 --vdev
> > net_af_xdp1,iface=ens13,queue_count=2"
> > and configure 1 rx and 2 tx queues; setup then fails with the message: "Port0
> > dev_configure = -22"
> >
> > After checking some XDP docs, I found that the rx and tx queues are always
> > bound together, connected to the fill and completion rings.
> > But I still want to confirm this with you. Could you please share your
> > comments?
> > Thanks in advance.
> 
> Hi Christian,
> 
> Thanks for your question. Yes, at the moment this configuration is forbidden
> for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
> However, maybe this is an unnecessary restriction of the PMD. It is indeed
> possible to create a socket with either one rxq or one txq. I will add
> investigating the feasibility of enabling this in the PMD to my backlog.
> In the meantime, one workaround you could try would be to create an equal
> number of rxqs and txqs but steer all traffic to the first rxq using some NIC
> filtering, e.g. tc.
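
(Purely as an illustration, and substituting ethtool's RSS indirection table for a tc filter: something like the commands below would hash all traffic to the first rx queue while leaving the queue count itself untouched. The interface name is taken from the example above, and support for this should be checked against your NIC.)

  ethtool -X ens12 equal 1      # spread RSS over the first 1 queue, i.e. rx queue 0 only
  ethtool -X ens12 default      # later: restore the default spreading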
> 
> Thanks,
> Ciara
> 
> >
> > Br,
> > Christian


Thread overview: 5+ messages
2021-11-04  8:44 Hong Christian
2021-11-04 10:19 ` Loftus, Ciara
2021-11-05  5:42   ` Re: " Hong Christian
2021-11-05  7:48     ` Loftus, Ciara [this message]
2021-11-05 10:05       ` Hong Christian
