DPDK patches and discussions
From: Linhaifeng <haifeng.lin@huawei.com>
To: Chas Williams <3chas3@gmail.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "chas3@att.com" <chas3@att.com>
Subject: [dpdk-dev] Re: [PATCH] net/bonding: fix double fetch for active_slave_count
Date: Fri, 30 Nov 2018 05:50:13 +0000	[thread overview]
Message-ID: <4099DE2E54AFAD489356C6C9161D5333958924EF@DGGEML502-MBS.china.huawei.com> (raw)
In-Reply-To: <8fdfaff2-8f59-8d4b-db6c-a8a3c26fb4e2@gmail.com>

Hi, Chas

Thank you.

I use it to send packets to the dedicated queue of the slaves.

Maybe I should not use it; I will think of another way.
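
Roughly, the usage I have in mind looks like the sketch below. It is only an
illustration of the pattern, not our real code; the dedicated queue id, the
control packet and the mempool are placeholders supplied by the caller:

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Send one copy of a control packet on each active slave's dedicated
 * TX queue.  Cloning keeps the per-slave transmits independent so the
 * same mbuf is not freed twice.
 */
static void
tx_ctrl_to_active_slaves(uint16_t bonded_port_id, uint16_t queue_id,
		struct rte_mbuf *pkt, struct rte_mempool *mp)
{
	uint16_t slaves[RTE_MAX_ETHPORTS];
	int n, i;

	/* Snapshot of the active slaves; this is the API in question. */
	n = rte_eth_bond_active_slaves_get(bonded_port_id, slaves,
			RTE_DIM(slaves));
	if (n <= 0)
		return;

	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = rte_pktmbuf_clone(pkt, mp);

		if (m == NULL)
			continue;
		if (rte_eth_tx_burst(slaves[i], queue_id, &m, 1) == 0)
			rte_pktmbuf_free(m);
	}
}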

-----Original Message-----
From: Chas Williams [mailto:3chas3@gmail.com]
Sent: Friday, 30 November 2018 11:27
To: Linhaifeng <haifeng.lin@huawei.com>; dev@dpdk.org
Cc: chas3@att.com
Subject: Re: [dpdk-dev] [PATCH] net/bonding: fix double fetch for active_slave_count

I guess this is slightly more correct. There is still a race here though.
After you make your copy of active_slave_count, the number of active slaves
could go to 0 and the memcpy() would copy an invalid element,
active_slaves[0]. There is no simple fix to this problem. Your patch reduces
the opportunity for a race but doesn't eliminate it.
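
To make the window concrete, here is the read path from the patch below with
the interleaving point marked (the comments are annotations for this reply,
not part of the patch):

	active_slave_count = internals->active_slave_count;	/* e.g. reads 2 */

	/* <-- a writer can interleave here: a slave deactivation can drop
	 *     internals->active_slave_count to 0 and rewrite
	 *     internals->active_slaves[], so the entries copied below may
	 *     already be stale or invalid. */

	if (active_slave_count > len)
		return -1;

	memcpy(slaves, internals->active_slaves,
			active_slave_count * sizeof(internals->active_slaves[0]));

	return active_slave_count;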

What are you using this API for?

On 11/29/18 12:32 AM, Haifeng Lin wrote:
> 1. when memcpy slaves, internals->active_slave_count is 1
> 2. at return, internals->active_slave_count is 2
> 3. so slaves[1] would be a random invalid value
> 
> Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
> ---
>   drivers/net/bonding/rte_eth_bond_api.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_api.c 
> b/drivers/net/bonding/rte_eth_bond_api.c
> index 21bcd50..ed7b02e 100644
> --- a/drivers/net/bonding/rte_eth_bond_api.c
> +++ b/drivers/net/bonding/rte_eth_bond_api.c
> @@ -815,6 +815,7 @@
>   		uint16_t len)
>   {
>   	struct bond_dev_private *internals;
> +	uint16_t active_slave_count;
>   
>   	if (valid_bonded_port_id(bonded_port_id) != 0)
>   		return -1;
> @@ -824,13 +825,14 @@
>   
>   	internals = rte_eth_devices[bonded_port_id].data->dev_private;
>   
> -	if (internals->active_slave_count > len)
> +	active_slave_count = internals->active_slave_count;
> +	if (active_slave_count > len)
>   		return -1;
>   
>   	memcpy(slaves, internals->active_slaves,
> -	internals->active_slave_count * sizeof(internals->active_slaves[0]));
> +			active_slave_count * sizeof(internals->active_slaves[0]));
>   
> -	return internals->active_slave_count;
> +	return active_slave_count;
>   }
>   
>   int
> 


Thread overview: 5+ messages
2018-11-29  5:32 [dpdk-dev] " Haifeng Lin
2018-11-30  3:27 ` Chas Williams
2018-11-30  5:50   ` Linhaifeng [this message]
2018-11-30 22:28     ` [dpdk-dev] Re: " Chas Williams
  -- strict thread matches above, loose matches on Subject: below --
2018-11-29  3:53 [dpdk-dev] " Haifeng Lin
