Date: Wed, 20 Dec 2017 12:19:45 -0800
From: Stephen Hemminger
To: Victor Kaplansky
Cc: dev@dpdk.org, stable@dpdk.org, Jens Freimann, Maxime Coquelin, Yuanhan Liu, Tiwei Bie, Jianfeng Tan
Message-ID: <20171220121945.0143b0af@xeon-e3>
In-Reply-To: <634157847.2119460.1513800390896.JavaMail.zimbra@redhat.com>
References: <20171220163752-mutt-send-email-victork@redhat.com> <20171220110616.21301e11@xeon-e3> <634157847.2119460.1513800390896.JavaMail.zimbra@redhat.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v4] vhost_user: protect active rings from async ring changes

On Wed, 20 Dec 2017 15:06:30 -0500 (EST)
Victor Kaplansky wrote:

> > Wrapping locking inlines adds nothing and makes life harder
> > for static analysis tools.
>
> Yep. In this case it hides the details of how the locking is
> implemented (e.g. the name of the lock). It also makes it easier to
> replace the locking mechanism with another implementation.
> See below.

YAGNI: you aren't gonna need it. Don't build infrastructure for things
that you only foresee.
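
For context, the two styles being debated look roughly like the sketch
below. It is illustrative only, not the code from the patch: the struct
and helper names are invented, and only the rte_spinlock_*() calls are
real DPDK API.

#include <rte_spinlock.h>

struct demo_vq {
	rte_spinlock_t access_lock;	/* protects ring state */
	/* ... ring state ... */
};

/* Style A: wrapper inlines hide which lock is taken and how. */
static inline void
demo_vq_lock(struct demo_vq *vq)
{
	rte_spinlock_lock(&vq->access_lock);
}

static inline void
demo_vq_unlock(struct demo_vq *vq)
{
	rte_spinlock_unlock(&vq->access_lock);
}

/* Style B: take the spinlock directly at each use site, so readers
 * and static analysis tools see a plain lock/unlock pair. */
static int
demo_enqueue(struct demo_vq *vq)
{
	rte_spinlock_lock(&vq->access_lock);
	/* ... touch the ring ... */
	rte_spinlock_unlock(&vq->access_lock);
	return 0;
}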
> > The bigger problem is that doing locking on all enqueue/dequeue
> > can have a visible performance impact. Did you measure that?
> >
> > Could you invent an RCUish mechanism using compiler barriers?
>
> I've played a bit with measuring the performance impact. A successful
> lock acquisition adds on average about 30 cycles on my Haswell CPU
> (and it succeeds 99.999...% of the time).
>
> I can investigate it more, but my initial feeling is that adding a
> memory barrier (a real one, not a compiler barrier) would add about
> the same overhead.
>
> By the way, the way update_queuing_status() in
> drivers/net/vhost/rte_eth_vhost.c tries to avoid contention with
> the active queue by playing with "allow_queuing" and "while_queuing"
> seems to be broken, since memory barriers are missing.

CPU cycles alone don't matter on modern x86. What matters is cache
behavior and instructions per cycle. In this case locking requires a
locked instruction, which stalls the CPU's prefetching and instruction
pipeline.
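
To make the allow_queuing/while_queuing concern concrete, that style of
handshake has roughly the shape sketched below. This is a hypothetical
reconstruction, not a copy of rte_eth_vhost.c: the struct and function
names are invented, and only the rte_* calls are real DPDK API. The race
is the classic Dekker-style store/load problem: each side stores its own
flag and then reads the other side's flag, and without a full barrier
x86 may perform the load before the store is globally visible, so both
sides can see stale values and proceed at the same time.

#include <rte_atomic.h>
#include <rte_pause.h>

struct demo_queue {
	rte_atomic32_t allow_queuing;	/* control path: datapath may run */
	rte_atomic32_t while_queuing;	/* datapath: inside a burst right now */
};

/* Datapath side (rx/tx burst). */
static int
demo_burst(struct demo_queue *q)
{
	if (rte_atomic32_read(&q->allow_queuing) == 0)
		return 0;

	rte_atomic32_set(&q->while_queuing, 1);
	/*
	 * Full barrier: the store to while_queuing must be visible
	 * before allow_queuing is re-read, otherwise the CPU may hoist
	 * the load above the store and both sides miss each other.
	 */
	rte_smp_mb();
	if (rte_atomic32_read(&q->allow_queuing) == 0) {
		rte_atomic32_set(&q->while_queuing, 0);
		return 0;
	}

	/* ... safe to touch the vhost device here ... */

	rte_atomic32_set(&q->while_queuing, 0);
	return 1;
}

/* Control path: stop the datapath before changing the rings. */
static void
demo_stop_queuing(struct demo_queue *q)
{
	rte_atomic32_set(&q->allow_queuing, 0);
	/* Same reasoning: publish the store before polling the flag. */
	rte_smp_mb();
	while (rte_atomic32_read(&q->while_queuing))
		rte_pause();
}

On x86, rte_smp_mb() ends up as a full fence (an mfence or a locked
instruction), which also makes the observation above plausible that a
real barrier costs about as much as an uncontended lock.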