DPDK patches and discussions
From: "Nicolau, Radu" <radu.nicolau@intel.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	"Van Haaren, Harry" <harry.van.haaren@intel.com>,
	Jerin Jacob <jerinjacobk@gmail.com>
Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"jerinj@marvell.com" <jerinj@marvell.com>, nd <nd@arm.com>
Subject: Re: [dpdk-dev] [PATCH v1] event/sw: performance improvements
Date: Wed, 7 Oct 2020 11:44:39 +0100	[thread overview]
Message-ID: <a5002d5c-7c60-c189-bea7-8a61b7435755@intel.com> (raw)
In-Reply-To: <DM6PR11MB330881D3421DCCB8E9949CAF9A0D0@DM6PR11MB3308.namprd11.prod.outlook.com>


On 10/6/2020 11:13 AM, Ananyev, Konstantin wrote:
>>> -----Original Message-----
>>> From: Jerin Jacob <jerinjacobk@gmail.com>
>>> Sent: Monday, October 5, 2020 5:35 PM
>>> To: Nicolau, Radu <radu.nicolau@intel.com>
>>> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Richardson, Bruce
>>> <bruce.richardson@intel.com>; Ananyev, Konstantin
>>> <konstantin.ananyev@intel.com>; Van Haaren, Harry
>>> <harry.van.haaren@intel.com>; dev@dpdk.org; jerinj@marvell.com; nd
>>> <nd@arm.com>
>>> Subject: Re: [dpdk-dev] [PATCH v1] event/sw: performance improvements
>>>
>>> On Tue, Sep 29, 2020 at 2:32 PM Nicolau, Radu <radu.nicolau@intel.com> wrote:
>>>>
>>>> On 9/28/2020 5:02 PM, Honnappa Nagarahalli wrote:
>>>>> <snip>
>>>>>>> Add minimum burst throughout the scheduler pipeline and a flush counter.
>>>>>>> Replace ring API calls with local single threaded implementation where
>>>>>>> possible.
>>>>>>>
>>>>>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>>>>>>
>>>>>>> Thanks for the patch, a few comments inline.
>>>>>>>
>>>>>>> Why not make these APIs part of the rte_ring library? You could further
>>>>>> optimize them by keeping the indices on the same cacheline.
>>>>>>> I'm not sure there is any need for non-thread-safe rings outside this
>>>>>> particular case.
>>>>>>> [Honnappa] I think if we add the APIs, we will find the use cases.
>>>>>>> But, more than that, I understand that the rte_ring structure is exposed to the
>>>>>> application. The reason for doing that is the inline functions that rte_ring
>>>>>> provides. IMO, we should still maintain modularity and should not use the
>>>>>> internals of the rte_ring structure outside of the library.
>>>>>>> +1 to that.
>>>>>>>
>>>>>>> BTW, is there any real perf benefit from such micro-optimisation?
>>>>>> I'd tend to view these as use-case specific, and I'm not sure we should clutter
>>>>>> up the ring library with yet more functions, especially since they can't be
>>>>>> mixed with the existing enqueue/dequeue functions, as they don't use
>>>>>> the head pointers.
>>>>> IMO, the ring library is pretty organized with the recent addition of HTS/RTS
>>> modes. This can be one of the modes and should allow us to use the existing
>>> functions (though additional functions are required as well).
>>>>> The other concern I have is that this implementation can be further optimized by
>>> using a single cache line for the pointers. It uses 2 cache lines just because of the
>>> layout of the rte_ring structure.
>>>>> There was a question earlier about the performance improvements of this
>>> patch. Are there any % performance improvements that can be shared?
>>>>> It is also possible to change the above functions to use the head/tail pointers
>>> from producer or the consumer cache line alone to check for perf differences.
>>>> I don't have a % figure for the improvement from this change alone, but
>>>> there was a measurable reduction in memory overhead during development,
>>>> which very likely gave the whole optimization more headroom.
>>>>
>>>> I agree that this may be further optimized, maybe by having a local
>>>> implementation of a ring-like container instead.
>>> Have we decided on the next steps for this patch? Is the plan to
>>> supersede this patch and have a different
>>> one in the rte_ring subsystem?
>> My preference is to merge this version of the patch;
>> 1) The ring helper functions are stripped down to the SW PMD usage, and not valid for use in the general case.
>> 2) Adding static inline APIs in an LTS release without extensive testing doesn't seem a good idea.
>>
>> If Honnappa is OK with the above solution for 20.11, we can see about moving the rings part of the
>> code into the rte_ring library in 21.02, and give ourselves some time to settle the usage/API before
>> the next LTS.
>>
> As ring library maintainer I share Honnappa's concern that another library does not use the public ring API,
> but instead accesses ring internals directly. Obviously such a coding practice is not welcome,
> as it makes it harder to maintain/extend the ring library in the future.
> About 2) - these new APIs can (/should) be marked as experimental anyway.
> As another thing - it is still unclear what performance gain we are talking about here.
> Is it really worth it compared to just using SP/SC?

The change itself came after I analyzed the memory-bound sections of the
code. I just did a quick test and got about 3.5% improvement in
throughput; maybe not much, but significant for such a small change, and
depending on the use case it may be more.

As for the implementation itself, I would favour having a custom
ring-like container in the PMD code; this would solve the issue of using
rte_ring internals while still allowing for full optimisation. If this is
acceptable, I will follow up by tomorrow.
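
To give an idea of what I have in mind, here is a rough sketch (purely
illustrative; the struct name, field layout and helpers below are
hypothetical, not the code from the patch or the planned follow-up) of a
non-thread-safe ring-like container with both indices kept on one cache
line, as Honnappa suggested:

/*
 * Illustrative sketch only -- not the PMD code. A minimal single-threaded
 * (non-thread-safe) ring: head and tail share one cache line, the size is
 * assumed to be a power of two, and all accesses happen on one core, so no
 * atomics or memory ordering are needed.
 */
#include <stdint.h>
#include <rte_common.h>

struct st_ring {
	uint32_t head;   /* next slot to write */
	uint32_t tail;   /* next slot to read */
	uint32_t mask;   /* ring size - 1, size is a power of two */
	void *slots[];   /* storage follows the header */
} __rte_cache_aligned;

static __rte_always_inline uint32_t
st_ring_count(const struct st_ring *r)
{
	/* unsigned wrap-around keeps this correct across index overflow */
	return r->head - r->tail;
}

static __rte_always_inline uint32_t
st_ring_enqueue_burst(struct st_ring *r, void * const *objs, uint32_t n)
{
	uint32_t space = (r->mask + 1) - st_ring_count(r);
	uint32_t i;

	if (n > space)
		n = space;
	for (i = 0; i < n; i++)
		r->slots[(r->head + i) & r->mask] = objs[i];
	r->head += n;
	return n;
}

static __rte_always_inline uint32_t
st_ring_dequeue_burst(struct st_ring *r, void **objs, uint32_t n)
{
	uint32_t avail = st_ring_count(r);
	uint32_t i;

	if (n > avail)
		n = avail;
	for (i = 0; i < n; i++)
		objs[i] = r->slots[(r->tail + i) & r->mask];
	r->tail += n;
	return n;
}

Since only the scheduling core would touch such a container, the saving
over the generic rte_ring enqueue/dequeue path would come from dropping
the atomic head/tail updates and the second cache line; the actual gain
would of course have to be confirmed with the eventdev performance tests.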



Thread overview: 22+ messages
2020-09-08 10:52 Radu Nicolau
2020-09-23 11:13 ` Van Haaren, Harry
2020-09-23 23:10   ` Honnappa Nagarahalli
2020-09-24 15:27     ` Nicolau, Radu
2020-09-24 17:38       ` Honnappa Nagarahalli
2020-09-24 18:02         ` Ananyev, Konstantin
2020-09-25 10:28           ` Bruce Richardson
2020-09-28 16:02             ` Honnappa Nagarahalli
2020-09-29  9:02               ` Nicolau, Radu
2020-10-05 16:35                 ` Jerin Jacob
2020-10-06  7:59                   ` Van Haaren, Harry
2020-10-06 10:13                     ` Ananyev, Konstantin
2020-10-07 10:44                       ` Nicolau, Radu [this message]
2020-10-07 10:52                         ` Ananyev, Konstantin
2020-10-13 19:11                           ` Jerin Jacob
2020-10-14  8:32                             ` Nicolau, Radu
2020-10-14 10:09                               ` Jerin Jacob
2020-10-14 10:21                                 ` Ananyev, Konstantin
2020-10-14 18:27                                   ` Jerin Jacob
2020-09-28  8:28 ` [dpdk-dev] [PATCH v2] " Radu Nicolau
2020-09-28 13:47   ` Van Haaren, Harry
2020-10-07 13:51 ` [dpdk-dev] [PATCH v3] " Radu Nicolau
