From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
To: Stephen Hemminger
CC: dev@dpdk.org
Date: Mon, 17 Jul 2017 19:31:30 +0300
Message-ID: <96c1a91b-13f9-1aa8-a8a4-eb9be98b8751@solarflare.com>
In-Reply-To: <20170717092151.507a9ecb@xeon-e3>
References: <20170714183027.16021-1-stephen@networkplumber.org> <20170714183027.16021-2-stephen@networkplumber.org> <0ce78069-6517-aaf1-cfe9-165192a4cc5f@solarflare.com> <20170717085853.3949de50@xeon-e3> <20170717092151.507a9ecb@xeon-e3>
Subject: Re: [dpdk-dev] [RFC 01/14] ethdev: add link status read/write functions
List-Id: DPDK patches and discussions

On 07/17/2017 07:21 PM, Stephen Hemminger wrote:
> On Mon, 17 Jul 2017 19:12:01 +0300
> Andrew Rybchenko wrote:
>
>> On 07/17/2017 06:58 PM, Stephen Hemminger wrote:
>>> On Sun, 16 Jul 2017 16:26:06 +0300
>>> Andrew Rybchenko wrote:
>>>
>>>> On 07/14/2017 09:30 PM, Stephen Hemminger wrote:
>>>>> Many drivers all copy/paste the same code to atomically
>>>>> update the link status. Reduce duplication, and allow for future
>>>>> changes, by having a common function for this.
>>>>>
>>>>> Signed-off-by: Stephen Hemminger
>>>>> ---
>>>>>   lib/librte_ether/rte_ethdev.c | 36 ++++++++++++++++++++++++++++++++++++
>>>>>   lib/librte_ether/rte_ethdev.h | 28 ++++++++++++++++++++++++++++
>>>>>   2 files changed, 64 insertions(+)
>>>>>
>>>>> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
>>>>> index a1b744704f3a..7532fc6b65f0 100644
>>>>> --- a/lib/librte_ether/rte_ethdev.c
>>>>> +++ b/lib/librte_ether/rte_ethdev.c
>>>>> @@ -1332,6 +1332,42 @@ rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link)
>>>>>   }
>>>>>
>>>>>   int
>>>>> +_rte_eth_link_update(struct rte_eth_dev *dev,
>>>>> +		     const struct rte_eth_link *link)
>>>>> +{
>>>>> +	volatile struct rte_eth_link *dev_link = &(dev->data->dev_link);
>>>>> +	struct rte_eth_link old;
>>>>> +
>>>>> +	RTE_BUILD_BUG_ON(sizeof(*link) != sizeof(uint64_t));
>>>>> +
>>>>> +	old = *dev_link;
>>>>> +
>>>>> +	/* Only reason we use cmpset rather than set is
>>>>> +	 * that some architectures may use the sign bit as a flag value.
>>>> May I ask you to provide more details here?
>>> rte_atomic64_set() takes an int64 argument.
>>> This code (taken from ixgbe, virtio and other drivers) uses cmpset
>>> to allow using uint64_t.
>>>
>>> My assumption is that some architecture in the past was using the
>>> sign bit as a lock value or something. On 64-bit, no special support
>>> for 64-bit atomic assignment is necessary. Not sure how this code
>>> got inherited that way.
>> Many thanks. Maybe it would be useful to say so in the comment as well.
> Maybe one of the original developers could clarify.
> It would be cleaner just to do rte_atomic64_set(); it might just
> be a leftover semantic from Linux/BSD/??? where the original developer
> was looking.

Agree.

>>>>> +	 */
>>>>> +	while (rte_atomic64_cmpset((volatile uint64_t *)dev_link,
>>>>> +				   *(volatile uint64_t *)dev_link,
>>>>> +				   *(const uint64_t *)link) == 0)
>>>> Shouldn't it be:
>>>>
>>>>     do {
>>>>         old = *dev_link;
>>>>     } while (rte_atomic64_cmpset((volatile uint64_t *)dev_link,
>>>>                                  *(uint64_t *)&old,
>>>>                                  *(const uint64_t *)link) == 0);
>>>>
>>>> At least it makes some sense to guarantee the transition from old to
>>>> new, taking the comparison below into account.
>>> Since dev_link is volatile, the compiler is required to refetch
>>> the pointer every time it evaluates the expression. Maybe it would be
>>> clearer to alias dev_link to a volatile uint64_t pointer.
>> I meant that the dev_link value may change after the old value is saved
>> in the original patch, but before the cmpset which actually replaces the
>> dev_link value. As a result, two _rte_eth_link_update() calls running in
>> parallel and changing to the same value may both return "changes done",
>> while actually only one did the job.
>> I'm not sure it is really important here, since the requirements are not
>> clear.
> Since there is no locking here, no guarantee of ordering is possible.
> The only guarantee is that the set of values (duplex, speed, flags) is
> consistent, i.e. one caller wins and the streams don't get crossed.
The result of the update operation is used by some drivers to log link up/down changes, so this could result in duplicate up/down logs. Not a big deal, but it could be confusing.

I guess many people are very busy right now with the 17.08 release, so I hope we'll see more feedback once 17.08 is done.