DPDK patches and discussions
From: Ye Xiaolong <xiaolong.ye@intel.com>
To: Ciara Loftus <ciara.loftus@intel.com>
Cc: dev@dpdk.org, yinan.wang@intel.com
Subject: Re: [dpdk-dev] [PATCH] doc: update af_xdp documentation on MTU limitations
Date: Wed, 19 Feb 2020 08:57:57 +0800	[thread overview]
Message-ID: <20200219005757.GA111443@intel.com> (raw)
In-Reply-To: <20200218140359.7533-1-ciara.loftus@intel.com>

Nice doc about the MTU of AF_XDP, thanks for the update.

On 02/18, Ciara Loftus wrote:
>Explain how kernel driver RX buffer sizes affect the maximum
>MTU size in practice.
>
>Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
>---
> doc/guides/nics/af_xdp.rst | 26 +++++++++++++++++++++++---
> 1 file changed, 23 insertions(+), 3 deletions(-)
>
>diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
>index b434b25df..07bdd29e2 100644
>--- a/doc/guides/nics/af_xdp.rst
>+++ b/doc/guides/nics/af_xdp.rst
>@@ -18,9 +18,6 @@ packets through the socket which would bypass the kernel network stack.
> The current implementation supports only a single queue; multi-queue support
> will be added later.
> 
>-Note that MTU of AF_XDP PMD is limited due to XDP lacks support for
>-fragmentation.
>-
> The AF_XDP PMD enables the need_wakeup flag by default if it is supported. The
> need_wakeup feature is used to support running the application and the driver on
> the same core efficiently. This feature not only has a large positive performance
>@@ -57,3 +54,26 @@ The following example will set up an af_xdp interface in DPDK:
> .. code-block:: console
> 
>     --vdev net_af_xdp,iface=ens786f1
>+
>+Limitations
>+-----------
>+
>+- **MTU**
>+
>+  The MTU of the AF_XDP PMD is limited due to the XDP requirement of one packet
>+  per page. In the PMD we report the maximum MTU for zero copy to be equal
>+  to the page size less the frame overhead introduced by AF_XDP (XDP HR = 256)
>+  and DPDK (frame headroom = 320). With a 4K page size this works out at 3520.
>+  However, in practice this value may be even smaller, since it also depends on
>+  the RX buffer sizes supported by the underlying kernel netdev driver.
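
The arithmetic above as a minimal C sketch, for readers skimming the thread;
the macro names are illustrative, not the PMD's actual defines:

/* Zero-copy MTU ceiling as described in the doc text above. */
#include <stdio.h>

#define PAGE_SIZE_4K   4096 /* assumed 4K page size */
#define XDP_HEADROOM    256 /* frame overhead introduced by AF_XDP (XDP HR) */
#define DPDK_HEADROOM   320 /* frame headroom introduced by DPDK */

int main(void)
{
        int max_zc_mtu = PAGE_SIZE_4K - XDP_HEADROOM - DPDK_HEADROOM;

        printf("max zero-copy MTU: %d\n", max_zc_mtu); /* prints 3520 */
        return 0;
}
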
>+
>+  For example, the largest RX buffer size supported by the underlying kernel driver
>+  which is less than the page size (4096B) may be 3072B. In this case, the maximum
>+  MTU value will be at most 3072, but likely even smaller than this, once relevant
>+  headers are accounted for, e.g. Ethernet and VLAN.
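
To put numbers on that example (illustrative figures, not measured for any
particular driver): a 3072B RX buffer less a 14B Ethernet header and a 4B VLAN
tag leaves 3072 - 14 - 4 = 3054B as the effective MTU ceiling.
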
>+
>+  To determine the actual maximum MTU value of the interface you are using with the
>+  AF_XDP PMD, consult the documentation for the kernel driver.
>+
>+  Note: The AF_XDP PMD will fail to initialise if an MTU that violates the
>+  driver conditions described above is set prior to launching the application.
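
One way to check this up front is to query the interface MTU via the standard
SIOCGIFMTU ioctl before launching the application. A minimal sketch, with the
interface name assumed for illustration:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
                return 1;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "ens786f1", IFNAMSIZ - 1);
        if (ioctl(fd, SIOCGIFMTU, &ifr) == 0)
                printf("%s MTU: %d\n", ifr.ifr_name, ifr.ifr_mtu);
        close(fd);
        return 0;
}

"ip link show ens786f1" reports the same value from the command line.
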
>-- 
>2.25.0
>

Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>

Thread overview: 3+ messages
2020-02-18 14:03 Ciara Loftus
2020-02-19  0:57 ` Ye Xiaolong [this message]
2020-02-19 12:36   ` Ferruh Yigit
