From: Neil Horman
To: "Wodkowski, PawelX"
Cc: "dev@dpdk.org"
Date: Fri, 26 Sep 2014 11:01:56 -0400
Message-ID: <20140926150156.GB5619@hmsreliant.think-freely.org>
Subject: Re: [dpdk-dev] [PATCH v2] Change alarm cancel function to thread-safe:

On Fri, Sep 26, 2014 at 02:01:05PM +0000, Wodkowski, PawelX wrote:
> > > Maybe I don't see something obvious? :)
> >
> > I think you're missing the fact that your patch doesn't do what you
> > assert above either :)
>
> The issue is not in setting alarms but in cancelling them. If you look
> closer at my patch you will see that it addresses this issue (look at the
> added *do { lock(); ...; unlock(); } while ()* part).
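The *do { lock(); ...; unlock(); } while ()* pattern referenced above can be sketched as follows. This is a simplified stand-in, not the DPDK implementation: the list layout, the `executing` flag, and `cancel_matching()` are illustrative names. The idea is to re-scan the alarm list under the lock, unlinking every matching entry that is not mid-flight, and to loop until no matching in-flight entry remains.

```c
/* Sketch of a re-scanning cancel loop (illustrative, not DPDK code). */
#include <pthread.h>
#include <stddef.h>

struct alarm_entry {
    struct alarm_entry *next;
    void (*cb)(void *);
    void *arg;
    int executing;            /* set while the callback is running */
};

static pthread_mutex_t alarm_lock = PTHREAD_MUTEX_INITIALIZER;
static struct alarm_entry *alarm_list;

/* Cancel every alarm matching (cb, arg).  Entries whose callback is
 * currently running cannot be unlinked, so drop the lock, let the
 * callback thread finish, and re-scan until none remain. */
static int cancel_matching(void (*cb)(void *), void *arg)
{
    int cancelled = 0, executing;

    do {
        executing = 0;
        pthread_mutex_lock(&alarm_lock);
        for (struct alarm_entry **pp = &alarm_list; *pp; ) {
            struct alarm_entry *ap = *pp;
            if (ap->cb == cb && ap->arg == arg) {
                if (ap->executing) {
                    executing++;      /* must wait for this one */
                    pp = &ap->next;
                    continue;
                }
                *pp = ap->next;       /* safe to unlink and count */
                cancelled++;          /* real code would also free ap */
                continue;
            }
            pp = &ap->next;
        }
        pthread_mutex_unlock(&alarm_lock);
    } while (executing);

    return cancelled;
}
```

Note that this gives the cancel-and-wait semantics only for *other* threads; the recursive case discussed below is still special.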
I get where the issue is, and I'm looking at your patch. I see that you did
some locking there. The issue I'm pointing out is that, if you call
rte_eal_alarm_cancel from an alarm callback, you will exit the alarm_cancel
function with, by definition, one alarm executing (the one you are currently
running). Your patch works perfectly for the case where another thread calls
cancel, in that it waits until the executing alarm is complete, but it
doesn't work in the case where you are calling it from within the alarm
callback. If your goal is to guarantee that all the matching alarms are
cancelled and complete, you haven't done that, because the recursive state is
still unhandled.

> > First, let's address rte_eal_alarm_set. There is no notion of "re-arming"
> > in this alarm implementation, because there's no ability to refer to a
> > specific alarm from the caller's perspective. When you call
> > rte_eal_alarm_set you get a new alarm every time. So I don't really see a
> > race there. It might not be exactly the behavior you want, but it's not a
> > race, because you're not modifying an alarm in the middle of execution;
> > you're just creating a new alarm, which is safe.
>
> OK, it is safe, but this is not the case.
>

I don't know what you mean by this. We agree it's safe, great. But it is the
case as I've described it; you can see it from the implementation: every call
to rte_eal_alarm_set starts with a malloc of a new alarm structure.

> > There is a race in what you describe above, insofar as it's possible that
> > you might call rte_eal_alarm_cancel and return without having cancelled
> > all the matching alarms. I don't see any clear documentation on what the
> > behavior is supposed to be, but if you want to ensure that all matching
> > alarms are cancelled or complete on return from rte_eal_alarm_cancel,
> > that's perfectly fine (in Linux API parlance, that's usually denoted as a
> > cancel_sync operation).
>
> Again, look at the patch.
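The recursive case above can be made concrete with a small model (illustrative names, not the DPDK implementation): record which thread is currently running a callback, and have the cancel path skip the wait when the canceller *is* that thread, since waiting there would be a self-deadlock.

```c
/* Sketch of the recursive-cancel hazard and the usual escape hatch:
 * detect "I am the executing callback" and return instead of waiting. */
#include <pthread.h>
#include <sched.h>

static pthread_mutex_t alarm_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_t exec_thread;   /* thread currently inside a callback */
static int exec_cnt;            /* 1 while a callback is in flight */

/* Returns 0 when all matching alarms are cancelled and complete, or
 * -1 ("still executing") when called from inside the running callback. */
static int cancel_sync_sketch(void)
{
    int ret = 0;

    pthread_mutex_lock(&alarm_lock);
    while (exec_cnt) {
        if (pthread_equal(exec_thread, pthread_self())) {
            ret = -1;   /* recursive case: cannot wait on ourselves */
            break;
        }
        /* another thread is running the callback: let it finish, retry */
        pthread_mutex_unlock(&alarm_lock);
        sched_yield();
        pthread_mutex_lock(&alarm_lock);
    }
    pthread_mutex_unlock(&alarm_lock);
    return ret;
}
```

The point of the -1 path is exactly the objection in this mail: from within the callback, "cancelled and complete" is unattainable by definition, so the best the API can do is tell the caller so.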
> I changed documentation to inform about this behavior.
>

This is the documentation included in the patch:

    Change alarm cancel function to thread-safe.
    It eliminates a race between threads using rte_alarm_cancel and
    rte_alarm_set.

Neither have you completely described the race condition (though by now you
have, earlier in this thread), nor have you completely addressed it (calling
rte_eal_alarm_cancel and rte_eal_alarm_set still behaves exactly as it did
previously with a 2nd thread).

> > For that race condition, you're correct, my patch doesn't address it; I
> > see that now. Though your patch doesn't either. If you call
> > rte_eal_alarm_cancel from within a callback function, then, by
> > definition, you can't wait on the completion of the active alarm, because
> > that's a deadlock. It's a necessary evil, I grant you, but it means that
> > you can't be guaranteed the cancelled-and-complete (cancel_sync) behavior
> > that you want, at least not with the current API. If you want that
> > behavior, you need to do one of two things:
>
> This patch does not break any API. It only removes undefined behavior.
>

I never said it did break ABI. I said that to completely fix it you would
have to break ABI. And it doesn't really remove undefined behavior, because
you still have the old behavior in the recursive case (which you may be ok
with, I don't know, but if you really want to address the behavior, you
should address this aspect of it).

> > 1) Modify the API to allow callers to individually reference timer
> > instances, so that when cancelling, we can return an appropriate return
> > code to indicate to the caller that this alarm is in progress. That way
> > you can guarantee the caller that the specific alarm that you cancelled
> > is either complete and cancelled or currently executing. Add an API to
> > explicitly wait on a referenced alarm as well.
> > This allows developers to know that, when executing an alarm callback,
> > an -ECURRENTLYEXECUTING return code is ok, because they are in the
> > currently executing context.
>
> This would break API for sure.
>

Yes, it would. Bruce Richardson just made a major ABI break with his mbuf
cleanup set. If there was a time to change the ABI here, now would be the
time, I think.

Neil
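A handle-based cancel along the lines proposed above might look like the sketch below. None of these names exist in DPDK, and standard -EINPROGRESS stands in for the "-ECURRENTLYEXECUTING" placeholder from the mail; the point is only that a per-instance handle lets the return code distinguish cases that a match-by-callback cancel cannot.

```c
/* Hypothetical handle-based alarm API (illustrative names only). */
#include <errno.h>

struct alarm_handle {
    int armed;       /* still pending */
    int executing;   /* callback currently running */
};

/* Cancel one specific alarm.  Because the caller names the exact
 * instance, the return code can distinguish "cancelled", "already
 * gone", and "running right now". */
static int alarm_cancel_one(struct alarm_handle *h)
{
    if (h->executing)
        return -EINPROGRESS;   /* caller may be inside this very callback */
    if (!h->armed)
        return -ENOENT;        /* already fired or previously cancelled */
    h->armed = 0;
    return 0;
}
```

A callback that cancels itself would then treat -EINPROGRESS as expected rather than as an error, which is the behavior the paragraph above argues for.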