From: Matan Azrad <matan@mellanox.com>
To: Chas Williams <3chas3@gmail.com>
Cc: "bluca@debian.org" <bluca@debian.org>,
"dev@dpdk.org" <dev@dpdk.org>,
Declan Doherty <declan.doherty@intel.com>,
Chas Williams <chas3@att.com>,
"stable@dpdk.org" <stable@dpdk.org>,
Eric Kinzie <ekinzie@pobox.com>
Subject: Re: [dpdk-dev] [PATCH v4] net/bonding: per-slave intermediate rx ring
Date: Wed, 22 Aug 2018 07:09:11 +0000
Message-ID: <AM0PR0502MB4019BFECED087A97C68EBD70D2300@AM0PR0502MB4019.eurprd05.prod.outlook.com>
In-Reply-To: <CAG2-Gkkfs3jyfc2dyBnwP_-wte_EwMzWV+bkiJgAua_5DB6LnQ@mail.gmail.com>
Hi Chas
From: Chas Williams
>On Tue, Aug 21, 2018 at 11:43 AM Matan Azrad <matan@mellanox.com> wrote:
>Hi Chas
>
>From: Chas Williams
>> On Tue, Aug 21, 2018 at 6:56 AM Matan Azrad <matan@mellanox.com>
>> wrote:
>> Hi
>>
>> From: Chas Williams
>> > This will need to be implemented for some of the other RX burst
>> > methods at some point for other modes to see this performance
>> > improvement (with the exception of active-backup).
>>
>> Yes, I think it should be done at least for
>> bond_ethdev_rx_burst_8023ad_fast_queue (should be easy) for now.
>>
>> There is some duplicated code between the various RX paths.
>> I would like to eliminate that as much as possible, so I was going to give that
>> some thought first.
>
>There is no reason to leave this function as it is while its twin is changed.
>
>Unfortunately, this is all the patch I have at this time.
>
>
>>
>>
>> > On Thu, Aug 16, 2018 at 9:32 AM Luca Boccassi <bluca@debian.org> wrote:
>> >
>> > > During bond 802.3ad receive, a burst of packets is fetched from each
>> > > slave into a local array and appended to a per-slave ring buffer.
>> > > Packets are taken from the head of the ring buffer and returned to
>> > > the caller. The number of mbufs provided to each slave is
>> > > sufficient to meet the requirements of the ixgbe vector receive.
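A rough sketch of the mechanism described in that commit message, for orientation only (this is not the actual patch; the helper name, the fixed 32-mbuf slave burst, and the overflow handling are assumptions):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define SLAVE_BURST 32  /* assumed: large enough for the ixgbe vector RX path */

    static uint16_t
    rx_burst_via_slave_ring(uint16_t slave_port, uint16_t queue_id,
                            struct rte_ring *slave_ring,
                            struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
            struct rte_mbuf *local[SLAVE_BURST];
            unsigned int n, enq;

            /* Always ask the slave for a full vector-sized burst, regardless
             * of how few mbufs the caller requested from the bond. */
            n = rte_eth_rx_burst(slave_port, queue_id, local, SLAVE_BURST);

            /* Stash whatever arrived in the per-slave ring; free anything
             * that does not fit. */
            enq = rte_ring_enqueue_burst(slave_ring, (void **)local, n, NULL);
            for (; enq < n; enq++)
                    rte_pktmbuf_free(local[enq]);

            /* Hand the caller up to nb_pkts from the head of the ring. */
            return rte_ring_dequeue_burst(slave_ring, (void **)bufs, nb_pkts, NULL);
    }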
>>
>> Luca,
>>
>> Can you explain these requirements of ixgbe?
>>
>> The ixgbe (and some other Intel PMDs) have vectorized RX routines that are
>> more efficient (if not faster), taking advantage of some advanced CPU
>> instructions. I think you need to be receiving at least 32 packets or more.
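(An illustrative worked example, not from the thread: suppose the application asks the bond for a 32-mbuf burst and the first slave returns 26; the next slave is then polled for only the 6 remaining slots, well below the ~32-packet burst the vector RX path wants. The per-slave ring lets the bond poll every slave for a full vector-sized burst and buffer the surplus for later calls.)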
>
>So why do it in bonding, which is a generic driver for all vendors' PMDs?
>If it is better for ixgbe and other Intel NICs, you can force those PMDs to always receive 32 packets
>and to manage a ring by themselves.
>
>The drawback of the ring is some additional latency on the receive path.
>In testing, the additional latency hasn't been an issue for bonding.
When bonding processes packets more slowly, it may become a bottleneck in the packet processing of some applications.
> The bonding PMD has a fair bit of overhead associated with the RX and TX path
>calculations. Most applications can just arrange to call the RX path
>with a sufficiently large receive. Bonding can't do this.
I wasn't talking about the application, I was talking about the slave PMDs.
A slave PMD can manage a ring by itself if that helps its own performance.
The bonding PMD should not be tailored to specific PMDs.
>> Did you check other vendors' PMDs? It may hurt performance there.
>>
>> I don't know, but I suspect probably not. For the most part you are typically
>> reading almost up to the vector requirement. But if one slave has just a
>> single packet, then you can't vectorize on the next slave.
>>
>
>I don't think the ring overhead pays off for PMDs which do not use the vectorized instructions.
>
>The non-vectorized PMDs are usually quite slow. The additional
>overhead doesn't make a difference in their performance.
We should not make things worse than they already are.
Thread overview: 28+ messages
2018-08-15 15:46 [dpdk-dev] [PATCH] " Luca Boccassi
2018-08-15 16:06 ` [dpdk-dev] [PATCH v2] " Luca Boccassi
2018-08-16 12:52 ` [dpdk-dev] [PATCH v3] " Luca Boccassi
2018-08-16 13:32 ` [dpdk-dev] [PATCH v4] " Luca Boccassi
2018-08-20 14:11 ` Chas Williams
2018-08-21 10:56 ` Matan Azrad
2018-08-21 11:13 ` Luca Boccassi
2018-08-21 14:58 ` Chas Williams
2018-08-21 15:43 ` Matan Azrad
2018-08-21 18:19 ` Chas Williams
2018-08-22 7:09 ` Matan Azrad [this message]
2018-08-22 10:19 ` [dpdk-dev] [dpdk-stable] " Luca Boccassi
2018-08-22 11:42 ` Matan Azrad
2018-08-22 17:43 ` Eric Kinzie
2018-08-23 7:28 ` Matan Azrad
2018-08-23 15:51 ` Chas Williams
2018-08-26 7:40 ` Matan Azrad
2018-08-27 13:22 ` Chas Williams
2018-08-27 15:30 ` Matan Azrad
2018-08-27 15:51 ` Chas Williams
2018-08-28 9:51 ` Matan Azrad
2018-08-29 14:30 ` Chas Williams
2018-08-29 15:20 ` Matan Azrad
2018-08-31 16:01 ` Luca Boccassi
2018-09-02 11:34 ` Matan Azrad
2018-09-09 20:57 ` Chas Williams
2018-09-12 5:38 ` Matan Azrad
2018-09-19 18:09 ` [dpdk-dev] " Luca Boccassi