DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Ciara Loftus <ciara.loftus@intel.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v3 0/3] AF_XDP Preferred Busy Polling
Date: Wed, 10 Mar 2021 17:50:55 +0000	[thread overview]
Message-ID: <d0d5b21c-7694-b993-1e30-a89fa3b9de85@intel.com> (raw)
In-Reply-To: <20210310074816.3029-1-ciara.loftus@intel.com>

On 3/10/2021 7:48 AM, Ciara Loftus wrote:
> Single-core performance of AF_XDP at high loads can be poor because
> a heavily loaded NAPI context will never enter or allow for busy-polling.
> 
> 1C testpmd rxonly (both IRQs and PMD on core 0):
> ./dpdk-testpmd -l 0-1 --vdev=net_af_xdp0,iface=eth0 --main-lcore=1 -- \
> --forward-mode=rxonly
> 0.088Mpps
> 
> In order to achieve decent performance at high loads, it is currently
> recommended to ensure that the IRQs for the netdev queue and the core
> running the PMD are on different cores.
> 
> 2C testpmd rxonly (IRQs on core 0, PMD on core 1):
> ./dpdk-testpmd -l 0-1 --vdev=net_af_xdp0,iface=eth0 --main-lcore=0 -- \
> --forward-mode=rxonly
> 19.26Mpps
> 
> However, using an extra core is of course not ideal. The SO_PREFER_BUSY_POLL
> socket option was introduced in kernel v5.11 to help improve single-core
> performance. See [1].
> 
> This series sets this socket option on xsks created with DPDK (i.e. instances
> of the AF_XDP PMD) unless it is explicitly disabled or not supported by the
> kernel. It is enabled by default in order to bring the AF_XDP PMD in line
> with most other PMDs, which execute on a single core.
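> 
> For illustration only (not the exact code from this series), enabling the
> option on an AF_XDP socket fd boils down to a setsockopt() call. The helper
> name below is made up, and the fallback value for SO_PREFER_BUSY_POLL is the
> asm-generic/socket.h one added in v5.11:
> 
>     #include <sys/socket.h>
> 
>     #ifndef SO_PREFER_BUSY_POLL
>     #define SO_PREFER_BUSY_POLL 69 /* asm-generic value, kernel >= v5.11 */
>     #endif
> 
>     /* Hypothetical helper: returns 0 on success, -1 if the option is
>      * unsupported (e.g. pre-v5.11 kernel), in which case busy polling is
>      * simply not used. */
>     static int
>     configure_prefer_busy_poll(int xsk_fd)
>     {
>         int one = 1;
> 
>         if (setsockopt(xsk_fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
>                        &one, sizeof(one)) < 0)
>             return -1;
>         return 0;
>     }
> 
> Probing the setsockopt() return value like this is also how support for busy
> polling can be detected at runtime, rather than relying on the presence of a
> flag at build time.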
> 
> The following system and netdev settings are recommended in conjunction with
> busy polling:
> echo 2 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs
> echo 200000 | sudo tee /sys/class/net/eth0/gro_flush_timeout
> 
> Re-running the 1C test with busy polling support and the above settings:
> ./dpdk-testpmd -l 0-1 --vdev=net_af_xdp0,iface=eth0 --main-lcore=1 -- \
> --forward-mode=rxonly
> 10.45Mpps
> 
> A new vdev arg called 'busy_budget' is introduced, with a default value of 64.
> busy_budget is the value supplied to the kernel with the SO_BUSY_POLL_BUDGET
> socket option and represents the busy-polling NAPI budget, i.e. the number of
> packets the kernel will attempt to process in the netdev's NAPI context.
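> 
> Again as a rough sketch rather than the series code (the helper name is made
> up), passing the budget down is another setsockopt() call, which is also why
> the value is carried as an int:
> 
>     #include <sys/socket.h>
> 
>     #ifndef SO_BUSY_POLL_BUDGET
>     #define SO_BUSY_POLL_BUDGET 70 /* asm-generic value, kernel >= v5.11 */
>     #endif
> 
>     /* Hypothetical helper: tell the kernel how many packets to process
>      * per busy-poll of the netdev's NAPI context. */
>     static int
>     configure_busy_poll_budget(int xsk_fd, int busy_budget)
>     {
>         return setsockopt(xsk_fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
>                           &busy_budget, sizeof(busy_budget));
>     }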
> 
> To set the busy budget to 256:
> ./dpdk-testpmd --vdev=net_af_xdp0,iface=eth0,busy_budget=256
> 14.06Mpps
> 
> If you still wish to run using 2 cores (one for the PMD, one for the IRQs) it
> is recommended to disable busy polling to achieve optimal 2C performance:
> ./dpdk-testpmd --vdev=net_af_xdp0,iface=eth0,busy_budget=0
> 19.09Mpps
> 
> v2->v3:
> * Moved release notes update to correct location
> * Changed busy_budget from uint32_t to int since this is the type expected
> by setsockopt
> * Validate busy_budget arg is <= UINT16_MAX during parse (sketched just below)
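> 
> A minimal sketch of such a check (the function name and exact shape are
> hypothetical, not the series code):
> 
>     #include <errno.h>
>     #include <stdint.h>
>     #include <stdlib.h>
> 
>     /* Hypothetical devargs handler: parse the string value and reject
>      * anything above UINT16_MAX, so the int later handed to setsockopt
>      * stays within a sane range. */
>     static int
>     parse_budget_arg(const char *value, int *busy_budget)
>     {
>         char *end;
>         long val;
> 
>         errno = 0;
>         val = strtol(value, &end, 10);
>         if (errno != 0 || *end != '\0' || val < 0 || val > UINT16_MAX)
>             return -EINVAL;
> 
>         *busy_budget = (int)val;
>         return 0;
>     }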
> 
> v1->v2:
> * Set batch size to default size of ring (2048)
> * Split batches > 2048 into multiples of 2048 or less and process all
> packets in the same manner as is done for other drivers, e.g. ixgbe
> (a rough sketch of the splitting loop follows after this list):
> http://code.dpdk.org/dpdk/v21.02/source/drivers/net/ixgbe/ixgbe_rxtx.c#L318
> * Update commit log with reasoning behind the batching changes
> * Update release notes with note on busy polling support
> * Fix return type for the syscall_needed function when the wakeup flag is not
> present
> * Use appropriate log levels
> * Set default_*xportconf burst sizes to the default busy budget size (64)
> * Detect support for busy polling via setsockopt instead of using the presence
> of the flag
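> 
> A rough sketch of the splitting loop mentioned above (function and constant
> names are hypothetical, not the series code):
> 
>     #include <stdint.h>
>     #include <rte_common.h> /* RTE_MIN */
>     #include <rte_mbuf.h>
> 
>     #define RX_BATCH_SIZE 2048 /* default ring size, per the note above */
> 
>     /* Hypothetical inner rx function, e.g. the zero-copy rx path. */
>     static uint16_t af_xdp_rx_chunk(void *queue, struct rte_mbuf **bufs,
>                                     uint16_t nb_pkts);
> 
>     /* Hypothetical wrapper: chop a large burst into ring-sized chunks and
>      * hand each chunk to the inner rx function, stopping early if a chunk
>      * comes back short. */
>     static uint16_t
>     rx_burst_split(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>     {
>         uint16_t nb_rx = 0;
> 
>         while (nb_pkts) {
>             uint16_t n = RTE_MIN(nb_pkts, RX_BATCH_SIZE);
>             uint16_t ret = af_xdp_rx_chunk(queue, &bufs[nb_rx], n);
> 
>             nb_rx += ret;
>             nb_pkts -= ret;
>             if (ret < n)
>                 break;
>         }
>         return nb_rx;
>     }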
> 
> RFC->v1:
> * Fixed behaviour of busy_budget=0
> * Ensure we bail out if any of the new setsockopt calls fail
> 
> [1] https://lwn.net/Articles/837010/
> 
> 
> Ciara Loftus (3):
>    net/af_xdp: allow bigger batch sizes
>    net/af_xdp: Use recvfrom() instead of poll()
>    net/af_xdp: preferred busy polling
> 

for series,
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

Series applied to dpdk-next-net/main, thanks.

Thread overview: 27+ messages
2021-02-18  9:23 [dpdk-dev] [PATCH RFC " Ciara Loftus
2021-02-18  9:23 ` [dpdk-dev] [PATCH RFC 1/3] net/af_xdp: Increase max batch size to 512 Ciara Loftus
2021-02-18  9:23 ` [dpdk-dev] [PATCH RFC 2/3] net/af_xdp: Use recvfrom() instead of poll() Ciara Loftus
2021-02-18  9:23 ` [dpdk-dev] [PATCH RFC 3/3] net/af_xdp: preferred busy polling Ciara Loftus
2021-02-24 11:18 ` [dpdk-dev] [PATCH 0/3] AF_XDP Preferred Busy Polling Ciara Loftus
2021-02-24 11:18   ` [dpdk-dev] [PATCH 1/3] net/af_xdp: Increase max batch size to 512 Ciara Loftus
2021-03-01 16:31     ` Ferruh Yigit
2021-03-03 15:07       ` Loftus, Ciara
2021-03-03 15:38         ` Ferruh Yigit
2021-02-24 11:18   ` [dpdk-dev] [PATCH 2/3] net/af_xdp: Use recvfrom() instead of poll() Ciara Loftus
2021-02-24 11:18   ` [dpdk-dev] [PATCH 3/3] net/af_xdp: preferred busy polling Ciara Loftus
2021-03-01 16:32     ` Ferruh Yigit
2021-03-04 12:26       ` Loftus, Ciara
2021-03-08 15:54         ` Loftus, Ciara
2021-03-09 10:19   ` [dpdk-dev] [PATCH v2 0/3] AF_XDP Preferred Busy Polling Ciara Loftus
2021-03-09 10:19     ` [dpdk-dev] [PATCH v2 1/3] net/af_xdp: allow bigger batch sizes Ciara Loftus
2021-03-09 16:33       ` Ferruh Yigit
2021-03-10  7:21         ` Loftus, Ciara
2021-03-09 10:19     ` [dpdk-dev] [PATCH v2 2/3] net/af_xdp: Use recvfrom() instead of poll() Ciara Loftus
2021-03-09 10:19     ` [dpdk-dev] [PATCH v2 3/3] net/af_xdp: preferred busy polling Ciara Loftus
2021-03-09 16:33       ` Ferruh Yigit
2021-03-10  7:55         ` Loftus, Ciara
2021-03-10  7:48     ` [dpdk-dev] [PATCH v3 0/3] AF_XDP Preferred Busy Polling Ciara Loftus
2021-03-10  7:48       ` [dpdk-dev] [PATCH v3 1/3] net/af_xdp: allow bigger batch sizes Ciara Loftus
2021-03-10  7:48       ` [dpdk-dev] [PATCH v3 2/3] net/af_xdp: Use recvfrom() instead of poll() Ciara Loftus
2021-03-10  7:48       ` [dpdk-dev] [PATCH v3 3/3] net/af_xdp: preferred busy polling Ciara Loftus
2021-03-10 17:50       ` Ferruh Yigit [this message]
