DPDK patches and discussions
From: Chengchang Tang <tangchengchang@huawei.com>
To: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "linuxarm@huawei.com" <linuxarm@huawei.com>,
	"chas3@att.com" <chas3@att.com>,
	"humin29@huawei.com" <humin29@huawei.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH 2/2] net/bonding: support configuring Tx offloading for bonding
Date: Thu, 10 Jun 2021 14:29:03 +0800	[thread overview]
Message-ID: <c2650d95-06c4-ca49-cfac-07f7edc04296@huawei.com> (raw)
In-Reply-To: <4fa26208-464d-e255-e5e7-21d4e0160bab@oktetlabs.ru>

Hi, Andrew and Ananyev

On 2021/6/9 17:37, Andrew Rybchenko wrote:
> On 6/9/21 12:11 PM, Ananyev, Konstantin wrote:
>>
>>>
>>>
>>> On 2021/6/8 17:49, Andrew Rybchenko wrote:
>>>> "for bonding" is redundant in the summary since it is already
>>>> "net/bonding"
>>>>
>>>> On 4/23/21 12:46 PM, Chengchang Tang wrote:
>>>>> Currently, the TX offloading of the bonding device will not take effect by
>>>>
>>>> TX -> Tx
>>>>
>>>>> using dev_configure. Because the related configuration will not be
>>>>> delivered to the slave devices in this way.
>>>>
>>>> I think it is a major problem that Tx offloads are actually
>>>> ignored. It should be a patches with "Fixes:" which addresses
>>>> it.
>>>>
>>>>> The Tx offloading capability of the bonding device is the intersection of
>>>>> the capability of all slave devices. Based on this, the following functions
>>>>> are added to the bonding driver:
>>>>> 1. If a Tx offloading is within the capability of the bonding device (i.e.
>>>>> all the slave devices support this Tx offloading), the enabling status of
>>>>> the offloading of all slave devices depends on the configuration of the
>>>>> bonding device.
>>>>>
>>>>> 2. For the Tx offloading that is not within the Tx offloading capability
>>>>> of the bonding device, the enabling status of the offloading on the slave
>>>>> devices is irrelevant to the bonding device configuration. And it depends
>>>>> on the original configuration of the slave devices.
>>>>>
>>>>> Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
>>>>> ---
>>>>>  drivers/net/bonding/rte_eth_bond_pmd.c | 13 +++++++++++++
>>>>>  1 file changed, 13 insertions(+)
>>>>>
>>>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
>>>>> index 84af348..9922657 100644
>>>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
>>>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
>>>>> @@ -1712,6 +1712,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
>>>>>  	struct rte_flow_error flow_error;
>>>>>
>>>>>  	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
>>>>> +	uint64_t tx_offload_cap = internals->tx_offload_capa;
>>>>> +	uint64_t tx_offload;
>>>>>
>>>>>  	/* Stop slave */
>>>>>  	errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
>>>>> @@ -1759,6 +1761,17 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
>>>>>  		slave_eth_dev->data->dev_conf.rxmode.offloads &=
>>>>>  				~DEV_RX_OFFLOAD_JUMBO_FRAME;
>>>>>
>>>>> +	while (tx_offload_cap != 0) {
>>>>> +		tx_offload = 1ULL << __builtin_ctzll(tx_offload_cap);
>>>>> +		if (bonded_eth_dev->data->dev_conf.txmode.offloads & tx_offload)
>>>>> +			slave_eth_dev->data->dev_conf.txmode.offloads |=
>>>>> +				tx_offload;
>>>>> +		else
>>>>> +			slave_eth_dev->data->dev_conf.txmode.offloads &=
>>>>> +				~tx_offload;
>>>>> +		tx_offload_cap &= ~tx_offload;
>>>>> +	}
>>>>> +
>>>>
>>>> Frankly speaking I don't understand why it is that complicated.
>>>> ethdev rejects of unsupported Tx offloads. So, can't we simply:
>>>> slave_eth_dev->data->dev_conf.txmode.offloads =
>>>>     bonded_eth_dev->data->dev_conf.txmode.offloads;
>>>>
>>>
>>> Using such a complicated method is to increase the flexibility of the slave devices,
>>> allowing the Tx offloading of the slave devices to be incompletely consistent with
>>> the bond device. If some offloading can be turned on without bond device awareness,
>>> they can be retained in this case.
>>
>>
>> Not sure how that can that happen...
> 
> +1
> 
> @Chengchang could you provide an example how it could happen.
> 

For example:
device 1 capability: VLAN_INSERT | MBUF_FAST_FREE
device 2 capability: VLAN_INSERT
The capability of the bonded device will then be VLAN_INSERT.
So we can only set VLAN_INSERT on the bonded device. But what if we want to enable
MBUF_FAST_FREE on device 1 to improve performance? As long as the application can
guarantee that mbuf refcnt is always 1, device 1 can run normally with
MBUF_FAST_FREE turned on.

In my logic, if device 1 has been configured with MBUF_FAST_FREE and is then
added to the bonded device as a slave, MBUF_FAST_FREE will be preserved.

>> From my understanding tx_offload for bond device has to be intersection of tx_offloads
>> of all slaves, no? Otherwise bond device might be misconfigured.
>> Anyway for that code snippet above, wouldn't the same be achived by:
>> slave_eth_dev->data->dev_conf.txmode.offloads &= internals->tx_offload_capa & bonded_eth_dev->data->dev_conf.txmode.offloads;
>> ?
> 

I think it would not achieve my purpose in the scenario mentioned above.

> .
> 



Thread overview: 61+ messages
2021-04-16 11:04 [dpdk-dev] [RFC 0/2] add Tx prepare support for bonding device Chengchang Tang
2021-04-16 11:04 ` [dpdk-dev] [RFC 1/2] net/bonding: add Tx prepare for bonding Chengchang Tang
2021-04-16 11:04 ` [dpdk-dev] [RFC 2/2] app/testpmd: add cmd for bonding Tx prepare Chengchang Tang
2021-04-16 11:12 ` [dpdk-dev] [RFC 0/2] add Tx prepare support for bonding device Min Hu (Connor)
2021-04-20  1:26 ` Ferruh Yigit
2021-04-20  2:44   ` Chengchang Tang
2021-04-20  8:33     ` Ananyev, Konstantin
2021-04-20 12:44       ` Chengchang Tang
2021-04-20 13:18         ` Ananyev, Konstantin
2021-04-20 14:06           ` Chengchang Tang
2021-04-23  9:46 ` [dpdk-dev] [PATCH " Chengchang Tang
2021-04-23  9:46   ` [dpdk-dev] [PATCH 1/2] net/bonding: support Tx prepare for bonding Chengchang Tang
2021-06-08  9:49     ` Andrew Rybchenko
2021-06-09  6:42       ` Chengchang Tang
2021-06-09  9:35         ` Andrew Rybchenko
2021-06-10  7:32           ` Chengchang Tang
2021-06-14 14:16             ` Andrew Rybchenko
2021-06-09 10:25         ` Ananyev, Konstantin
2021-06-10  6:46           ` Chengchang Tang
2021-06-14 11:36             ` Ananyev, Konstantin
2022-05-24 12:11       ` Min Hu (Connor)
2022-07-25  4:08     ` [PATCH v2 0/3] add Tx prepare support for bonding driver Chengwen Feng
2022-07-25  4:08       ` [PATCH v2 1/3] net/bonding: support Tx prepare Chengwen Feng
2022-09-13 10:22         ` Ferruh Yigit
2022-09-13 15:08           ` Chas Williams
2022-09-14  0:46           ` fengchengwen
2022-09-14 16:59             ` Chas Williams
2022-09-17  2:35               ` fengchengwen
2022-09-17 13:38                 ` Chas Williams
2022-09-19 14:07                   ` Konstantin Ananyev
2022-09-19 23:02                     ` Chas Williams
2022-09-22  2:12                       ` fengchengwen
2022-09-25 10:32                         ` Chas Williams
2022-09-26 10:18                       ` Konstantin Ananyev
2022-09-26 16:36                         ` Chas Williams
2022-07-25  4:08       ` [PATCH v2 2/3] net/bonding: support Tx prepare fail stats Chengwen Feng
2022-07-25  4:08       ` [PATCH v2 3/3] net/bonding: add testpmd cmd for Tx prepare Chengwen Feng
2022-07-25  7:04       ` [PATCH v2 0/3] add Tx prepare support for bonding driver humin (Q)
2022-09-13  1:41       ` fengchengwen
2022-09-17  4:15     ` [PATCH v3 " Chengwen Feng
2022-09-17  4:15       ` [PATCH v3 1/3] net/bonding: support Tx prepare Chengwen Feng
2022-09-17  4:15       ` [PATCH v3 2/3] net/bonding: support Tx prepare fail stats Chengwen Feng
2022-09-17  4:15       ` [PATCH v3 3/3] net/bonding: add testpmd cmd for Tx prepare Chengwen Feng
2022-10-09  3:36     ` [PATCH v4] net/bonding: call Tx prepare before Tx burst Chengwen Feng
2022-10-10 19:42       ` Chas Williams
2022-10-11 13:28         ` fengchengwen
2022-10-11 13:20     ` [PATCH v5] " Chengwen Feng
2022-10-15 15:26       ` Chas Williams
2022-10-18 14:25         ` fengchengwen
2022-10-20  7:07         ` Andrew Rybchenko
2021-04-23  9:46   ` [dpdk-dev] [PATCH 2/2] net/bonding: support configuring Tx offloading for bonding Chengchang Tang
2021-06-08  9:49     ` Andrew Rybchenko
2021-06-09  6:57       ` Chengchang Tang
2021-06-09  9:11         ` Ananyev, Konstantin
2021-06-09  9:37           ` Andrew Rybchenko
2021-06-10  6:29             ` Chengchang Tang [this message]
2021-06-14 11:05               ` Ananyev, Konstantin
2021-06-14 14:13                 ` Andrew Rybchenko
2021-04-30  6:26   ` [dpdk-dev] [PATCH 0/2] add Tx prepare support for bonding device Chengchang Tang
2021-04-30  6:47     ` Min Hu (Connor)
2021-06-03  1:44   ` Chengchang Tang
