DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: "Li,Rongqing" <lirongqing@baidu.com>,
	"Loftus, Ciara" <ciara.loftus@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH][v2] net/af_xdp: avoid to unnecessary allocation and free mbuf in rx path
Date: Fri, 13 Nov 2020 17:40:45 +0000	[thread overview]
Message-ID: <5bcb4afc-b29d-9bc0-0ddb-476d01a1f7b1@intel.com> (raw)
In-Reply-To: <7f657f37e6ab448a891e7d6505ff5d77@baidu.com>

On 10/14/2020 1:15 PM, Li,Rongqing wrote:
> 
> 
>> -----Original Message-----
>> From: Loftus, Ciara [mailto:ciara.loftus@intel.com]
>> Sent: Friday, October 02, 2020 12:24 AM
>> To: Li,Rongqing <lirongqing@baidu.com>
>> Cc: dev@dpdk.org
>> Subject: RE: [PATCH][v2] net/af_xdp: avoid to unnecessary allocation and free
>> mbuf in rx path
>>
>>>
>>> When receiving packets, the maximum burst number of mbufs is allocated up
>>> front; if the hardware does not receive that many packets, the redundant
>>> mbufs are freed, which is bad for performance.
>>>
>>> So optimize RX performance by allocating mbufs based on the result of
>>> xsk_ring_cons__peek, to avoid the redundant allocation and freeing of
>>> mbufs when receiving packets.
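
(For reference, a minimal sketch of the RX flow the commit message describes:
peek the RX ring first, then allocate exactly as many mbufs as were received.
This is not the actual af_xdp PMD code; the queue structure, field names and
the copy/fill steps are assumptions/placeholders.)

  /*
   * Sketch only. Assumed names: struct pkt_rx_queue with an 'rx' consumer
   * ring and an 'mb_pool' mempool; headers <bpf/xsk.h>, <rte_mbuf.h> and
   * <rte_mempool.h> assumed to be included.
   */
  static uint16_t
  af_xdp_rx_sketch(struct pkt_rx_queue *rxq, struct rte_mbuf **bufs,
                   uint16_t nb_pkts)
  {
          struct xsk_ring_cons *rx = &rxq->rx;
          uint32_t idx_rx = 0;
          uint32_t rcvd;

          /* Check how many descriptors are actually ready in the RX ring. */
          rcvd = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
          if (rcvd == 0)
                  return 0;

          /* Allocate only what was received; no surplus mbufs to free. */
          if (rte_pktmbuf_alloc_bulk(rxq->mb_pool, bufs, rcvd) != 0) {
                  /* Allocation failed: rewind the cached consumer index so
                   * the peeked descriptors are not lost (the rollback
                   * discussed later in this thread). */
                  rx->cached_cons -= rcvd;
                  return 0;
          }

          /* ... copy packet data into bufs and refill the fill ring ... */
          xsk_ring_cons__release(rx, rcvd);
          return rcvd;
  }
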
>>
>> Hi,
>>
>> Thanks for the patch and fixing the issue I raised.
> 
> Thanks for finding it.
> 
>> With my testing so far I haven't measured an improvement in performance
>> with the patch.
>> Do you have data to share which shows the benefit of your patch?
>>
>> I agree the potential excess allocation of mbufs for the fill ring is not the
>> most optimal, but if doing it does not significantly impact the performance I
>> would be in favour of keeping that approach versus touching the cached_cons
>> outside of libbpf, which is unconventional.
>>
>> If a benefit can be shown and we proceed with the approach, I would suggest
>> creating a new function for the cached consumer rollback, e.g.
>> xsk_ring_cons_cancel() or similar, and adding a comment describing what it does.
>>
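
(A minimal sketch of the rollback helper suggested above; xsk_ring_cons_cancel()
does not exist in libbpf today, the name and behaviour simply follow the
suggestion in the reply.)

  /* Undo a previous xsk_ring_cons__peek(): move the cached consumer index
   * back by 'nb' so the peeked descriptors can be consumed again on the
   * next poll. Sketch only; not an existing libbpf API. */
  static inline void
  xsk_ring_cons_cancel(struct xsk_ring_cons *cons, __u32 nb)
  {
          cons->cached_cons -= nb;
  }

In the RX sketch above this helper would replace the direct write to
rx->cached_cons, keeping the ring internals behind one documented function.
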
> 
> Thanks for your test.
> 
> Yes, it has a benefit.
> 
> We first saw this issue when doing some send-performance testing; the topology is like below:
> 
> Qemu with vhost-user -----> ovs -------> xdp interface
> 
> Qemu sends UDP packets and the xdp interface has no packets to receive, but it must still be polled by OVS, so mbufs are allocated and freed unnecessarily. With this patch we see about a 5% benefit for sending, depending on flow-table complexity.
> 
> 
> When running an RX benchmark, if the packets per batch reach about 32, the benefit is very small.
> If the packets per batch are far fewer than 32, we can see the cycles per packet are reduced noticeably.
> 

Hi Li, Ciara,

What is the status of this patch? Is the patch justified, and is a new version 
requested/expected?


Thread overview: 6+ messages
2020-09-25  6:45 Li RongQing
2020-10-01 16:24 ` Loftus, Ciara
2020-10-14 12:15   ` Li,Rongqing
2020-11-13 17:40     ` Ferruh Yigit [this message]
2020-11-16  7:04       ` Loftus, Ciara
2020-11-17  0:05         ` Li,Rongqing
