* [dpdk-dev] [PATCH] net/bonding: fix double fetch for active_slave_count
@ 2018-11-29 3:53 Haifeng Lin
0 siblings, 0 replies; 5+ messages in thread
From: Haifeng Lin @ 2018-11-29 3:53 UTC (permalink / raw)
To: dev; +Cc: chas3
1. when the slaves are memcpy'd, internals->active_slave_count is 1
2. by the time of the return, internals->active_slave_count has become 2
3. so slaves[1] would be a random invalid value
---
drivers/net/bonding/rte_eth_bond_api.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 21bcd50..ed7b02e 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -815,6 +815,7 @@
 		uint16_t len)
 {
 	struct bond_dev_private *internals;
+	uint16_t active_slave_count;
 
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
@@ -824,13 +825,14 @@
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->active_slave_count > len)
+	active_slave_count = internals->active_slave_count;
+	if (active_slave_count > len)
 		return -1;
 
 	memcpy(slaves, internals->active_slaves,
-			internals->active_slave_count * sizeof(internals->active_slaves[0]));
+			active_slave_count * sizeof(internals->active_slaves[0]));
 
-	return internals->active_slave_count;
+	return active_slave_count;
 }
 
 int
--
1.8.5.2.msysgit.0
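To make the failure mode in the commit message concrete, here is a minimal
standalone sketch of the double-fetch hazard (illustrative names only, not
the actual driver code; assume another thread updates active_slave_count
while the reader runs):

	#include <stdint.h>
	#include <string.h>

	struct bond_state {
		volatile uint16_t active_slave_count; /* written by another thread */
		uint16_t active_slaves[16];
	};

	/* Unpatched pattern: three separate reads of active_slave_count, so
	 * the bounds check, the memcpy() length and the return value can
	 * disagree with each other. */
	static int
	slaves_get_racy(struct bond_state *s, uint16_t *out, uint16_t len)
	{
		if (s->active_slave_count > len)       /* read #1: sees 1 */
			return -1;
		memcpy(out, s->active_slaves,          /* read #2: may still see 1 */
		       s->active_slave_count * sizeof(s->active_slaves[0]));
		return s->active_slave_count;          /* read #3: may now see 2 */
	}

	/* Patched pattern: one snapshot, so check, copy and return agree. */
	static int
	slaves_get_fixed(struct bond_state *s, uint16_t *out, uint16_t len)
	{
		uint16_t n = s->active_slave_count;    /* single fetch */

		if (n > len)
			return -1;
		memcpy(out, s->active_slaves, n * sizeof(s->active_slaves[0]));
		return n;
	}

With the snapshot, a caller that is told n entries were returned can trust
that exactly n entries were copied, which is the consistency the numbered
steps above say is violated.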
* Re: [dpdk-dev] Re: [PATCH] net/bonding: fix double fetch for active_slave_count
2018-11-30 5:50 ` [dpdk-dev] Re: " Linhaifeng
@ 2018-11-30 22:28 ` Chas Williams
0 siblings, 0 replies; 5+ messages in thread
From: Chas Williams @ 2018-11-30 22:28 UTC (permalink / raw)
To: Linhaifeng, dev; +Cc: chas3
The problem is that I can't see how the API can ever provide accurate
information. By the time you have the information, it is potentially
stale. There really isn't a way to control whether a slave is active,
since that is protocol dependent.
rte_eth_bond_slaves_get() is safer in the sense that you control when
the slaves are added and removed from the bonding group. You can ensure
that you get a consistent answer.
Hopefully your protocol doesn't especially care whether the slave is active
or not. Are you sending the packets via rte_eth_bond_8023ad_ext_slow()?
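For illustration, the safer pattern looks roughly like this (a sketch,
assuming bonded_port_id holds a valid bonded device id; detailed error
handling elided):

	#include <rte_eth_bond.h>

	uint16_t slaves[RTE_MAX_ETHPORTS];
	int n;

	/* The configured slave list only changes when the application itself
	 * calls rte_eth_bond_slave_add()/rte_eth_bond_slave_remove(), so the
	 * result cannot race with LACP activity the way the active list can. */
	n = rte_eth_bond_slaves_get(bonded_port_id, slaves, RTE_MAX_ETHPORTS);
	if (n < 0) {
		/* bonded_port_id was not a valid bonded device */
	}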
On 11/30/18 12:50 AM, Linhaifeng wrote:
> Hi, Chas
>
> Thank you.
>
> I use it to send packets to the dedicated queue of the slaves.
>
> Maybe I should not use it. I will try another way.
>
> -----Original Message-----
> From: Chas Williams [mailto:3chas3@gmail.com]
> Sent: 30 November 2018 11:27
> To: Linhaifeng <haifeng.lin@huawei.com>; dev@dpdk.org
> Cc: chas3@att.com
> Subject: Re: [dpdk-dev] [PATCH] net/bonding: fix double fetch for active_slave_count
>
> I guess this is slightly more correct. There is still a race here though.
> After you make your copy of active_slave_count, the number of active slaves could go to 0 and the memcpy() would copy an invalid element, active_slaves[0]. There is no simple fix to this problem. Your patch reduces the opportunity for a race but doesn't eliminate it.
>
> What are you using this API for?
>
> On 11/29/18 12:32 AM, Haifeng Lin wrote:
>> 1. when the slaves are memcpy'd, internals->active_slave_count is 1
>> 2. by the time of the return, internals->active_slave_count has become 2
>> 3. so slaves[1] would be a random invalid value
>>
>> Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
>> ---
>> drivers/net/bonding/rte_eth_bond_api.c | 8 +++++---
>> 1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/net/bonding/rte_eth_bond_api.c
>> b/drivers/net/bonding/rte_eth_bond_api.c
>> index 21bcd50..ed7b02e 100644
>> --- a/drivers/net/bonding/rte_eth_bond_api.c
>> +++ b/drivers/net/bonding/rte_eth_bond_api.c
>> @@ -815,6 +815,7 @@
>> uint16_t len)
>> {
>> struct bond_dev_private *internals;
>> + uint16_t active_slave_count;
>>
>> if (valid_bonded_port_id(bonded_port_id) != 0)
>> return -1;
>> @@ -824,13 +825,14 @@
>>
>> internals = rte_eth_devices[bonded_port_id].data->dev_private;
>>
>> - if (internals->active_slave_count > len)
>> + active_slave_count = internals->active_slave_count;
>> + if (active_slave_count > len)
>> return -1;
>>
>> memcpy(slaves, internals->active_slaves,
>> - internals->active_slave_count * sizeof(internals->active_slaves[0]));
>> + active_slave_count * sizeof(internals->active_slaves[0]));
>>
>> - return internals->active_slave_count;
>> + return active_slave_count;
>> }
>>
>> int
>>
* [dpdk-dev] Re: [PATCH] net/bonding: fix double fetch for active_slave_count
2018-11-30 3:27 ` Chas Williams
@ 2018-11-30 5:50 ` Linhaifeng
2018-11-30 22:28 ` Chas Williams
0 siblings, 1 reply; 5+ messages in thread
From: Linhaifeng @ 2018-11-30 5:50 UTC (permalink / raw)
To: Chas Williams, dev; +Cc: chas3
Hi, Chas
Thank you.
I use it to send packets to the dedicated queue of the slaves.
Maybe I should not use it. I will try another way.
-----Original Message-----
From: Chas Williams [mailto:3chas3@gmail.com]
Sent: 30 November 2018 11:27
To: Linhaifeng <haifeng.lin@huawei.com>; dev@dpdk.org
Cc: chas3@att.com
Subject: Re: [dpdk-dev] [PATCH] net/bonding: fix double fetch for active_slave_count
I guess this is slightly more correct. There is still a race here though.
After you make your copy of active_slave_count, the number of active slaves could go to 0 and the memcpy() would copy an invalid element, active_slaves[0]. There is no simple fix to this problem. Your patch reduces the opportunity for a race but doesn't eliminate it.
What are you using this API for?
On 11/29/18 12:32 AM, Haifeng Lin wrote:
> 1. when the slaves are memcpy'd, internals->active_slave_count is 1
> 2. by the time of the return, internals->active_slave_count has become 2
> 3. so slaves[1] would be a random invalid value
>
> Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
> ---
> drivers/net/bonding/rte_eth_bond_api.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_api.c
> b/drivers/net/bonding/rte_eth_bond_api.c
> index 21bcd50..ed7b02e 100644
> --- a/drivers/net/bonding/rte_eth_bond_api.c
> +++ b/drivers/net/bonding/rte_eth_bond_api.c
> @@ -815,6 +815,7 @@
> uint16_t len)
> {
> struct bond_dev_private *internals;
> + uint16_t active_slave_count;
>
> if (valid_bonded_port_id(bonded_port_id) != 0)
> return -1;
> @@ -824,13 +825,14 @@
>
> internals = rte_eth_devices[bonded_port_id].data->dev_private;
>
> - if (internals->active_slave_count > len)
> + active_slave_count = internals->active_slave_count;
> + if (active_slave_count > len)
> return -1;
>
> memcpy(slaves, internals->active_slaves,
> - internals->active_slave_count * sizeof(internals->active_slaves[0]));
> + active_slave_count * sizeof(internals->active_slaves[0]));
>
> - return internals->active_slave_count;
> + return active_slave_count;
> }
>
> int
>
* Re: [dpdk-dev] [PATCH] net/bonding: fix double fetch for active_slave_count
2018-11-29 5:32 Haifeng Lin
@ 2018-11-30 3:27 ` Chas Williams
2018-11-30 5:50 ` [dpdk-dev] Re: " Linhaifeng
0 siblings, 1 reply; 5+ messages in thread
From: Chas Williams @ 2018-11-30 3:27 UTC (permalink / raw)
To: Haifeng Lin, dev; +Cc: chas3
I guess this is slightly more correct. There is still a race here though.
After you make your copy of active_slave_count, the number of active
slaves could go to 0 and the memcpy() would copy an invalid element,
active_slaves[0]. There is no simple fix to this problem. Your patch
reduces the opportunity for a race but doesn't eliminate it.
What are you using this API for?
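Written out as an interleaving, the remaining window looks like this
(a sketch with simplified names, showing one possible schedule):

	/*
	 *   reader (slaves_get)                LACP/link-state thread
	 *   --------------------------------   ------------------------------
	 *   n = active_slave_count;  // n == 1
	 *                                      deactivate_slave():
	 *                                          active_slave_count = 0;
	 *   memcpy(out, active_slaves,
	 *          n * sizeof(active_slaves[0]));  // copies a slot the
	 *                                          // writer already retired
	 */

The snapshot keeps the check, the copy and the return value mutually
consistent, but it cannot keep the answer true after it is taken.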
On 11/29/18 12:32 AM, Haifeng Lin wrote:
> 1. when the slaves are memcpy'd, internals->active_slave_count is 1
> 2. by the time of the return, internals->active_slave_count has become 2
> 3. so slaves[1] would be a random invalid value
>
> Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
> ---
> drivers/net/bonding/rte_eth_bond_api.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
> index 21bcd50..ed7b02e 100644
> --- a/drivers/net/bonding/rte_eth_bond_api.c
> +++ b/drivers/net/bonding/rte_eth_bond_api.c
> @@ -815,6 +815,7 @@
> uint16_t len)
> {
> struct bond_dev_private *internals;
> + uint16_t active_slave_count;
>
> if (valid_bonded_port_id(bonded_port_id) != 0)
> return -1;
> @@ -824,13 +825,14 @@
>
> internals = rte_eth_devices[bonded_port_id].data->dev_private;
>
> - if (internals->active_slave_count > len)
> + active_slave_count = internals->active_slave_count;
> + if (active_slave_count > len)
> return -1;
>
> memcpy(slaves, internals->active_slaves,
> - internals->active_slave_count * sizeof(internals->active_slaves[0]));
> + active_slave_count * sizeof(internals->active_slaves[0]));
>
> - return internals->active_slave_count;
> + return active_slave_count;
> }
>
> int
>
* [dpdk-dev] [PATCH] net/bonding: fix double fetch for active_slave_count
@ 2018-11-29 5:32 Haifeng Lin
2018-11-30 3:27 ` Chas Williams
0 siblings, 1 reply; 5+ messages in thread
From: Haifeng Lin @ 2018-11-29 5:32 UTC (permalink / raw)
To: dev; +Cc: chas3
1. when the slaves are memcpy'd, internals->active_slave_count is 1
2. by the time of the return, internals->active_slave_count has become 2
3. so slaves[1] would be a random invalid value
Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
---
drivers/net/bonding/rte_eth_bond_api.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 21bcd50..ed7b02e 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -815,6 +815,7 @@
 		uint16_t len)
 {
 	struct bond_dev_private *internals;
+	uint16_t active_slave_count;
 
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
@@ -824,13 +825,14 @@
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->active_slave_count > len)
+	active_slave_count = internals->active_slave_count;
+	if (active_slave_count > len)
 		return -1;
 
 	memcpy(slaves, internals->active_slaves,
-			internals->active_slave_count * sizeof(internals->active_slaves[0]));
+			active_slave_count * sizeof(internals->active_slaves[0]));
 
-	return internals->active_slave_count;
+	return active_slave_count;
 }
 
 int
--
1.8.5.2.msysgit.0
Thread overview: 5+ messages
2018-11-29 3:53 [dpdk-dev] [PATCH] net/bonding: fix double fetch for active_slave_count Haifeng Lin
2018-11-29 5:32 Haifeng Lin
2018-11-30 3:27 ` Chas Williams
2018-11-30 5:50 ` [dpdk-dev] Re: " Linhaifeng
2018-11-30 22:28 ` Chas Williams