From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Marchand
Date: Wed, 10 Apr 2019 13:30:35 +0200
To: Xiaolong Ye, Karlsson Magnus
Cc: dev, Ferruh Yigit, Qi Zhang, Topel Bjorn
Subject: Re: [dpdk-dev] [PATCH] net/af_xdp: submit valid count to Tx queue
In-Reply-To: <20190410105327.110410-1-xiaolong.ye@intel.com>
References: <20190409151902.14675-1-xiaolong.ye@intel.com>
 <20190410105327.110410-1-xiaolong.ye@intel.com>
List-Id: DPDK patches and discussions

On Wed, Apr 10, 2019 at 12:58 PM Xiaolong Ye wrote:

> Since the tx callback only picks mbufs that can be copied to the xdp
> descriptors, it should call xsk_ring_prod__submit with the valid count
> rather than nb_pkts.
>
> Fixes: f1debd77efaf ("net/af_xdp: introduce AF_XDP PMD")
>
> Reported-by: David Marchand
> Signed-off-by: Xiaolong Ye
> ---
>  drivers/net/af_xdp/rte_eth_af_xdp.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 5cc643ce2..8c2ba5740 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -300,7 +300,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  		rte_pktmbuf_free(mbuf);
>  	}
>
> -	xsk_ring_prod__submit(&txq->tx, nb_pkts);
> +	xsk_ring_prod__submit(&txq->tx, valid);

Err, well, I think we will have an issue here.

Taking from the 5.1.0-rc4 sources I have:

static inline __u32 xsk_prod_nb_free(struct xsk_ring_prod *r, __u32 nb)
{
	__u32 free_entries = r->cached_cons - r->cached_prod;

	if (free_entries >= nb)
		return free_entries;

	/* Refresh the local tail pointer.
	 * cached_cons is r->size bigger than the real consumer pointer so
	 * that this addition can be avoided in the more frequently
	 * executed code that computes free_entries in the beginning of
	 * this function. Without this optimization it would have been
	 * free_entries = r->cached_prod - r->cached_cons + r->size.
	 */
	r->cached_cons = *r->consumer + r->size;

	return r->cached_cons - r->cached_prod;
}

static inline size_t xsk_ring_prod__reserve(struct xsk_ring_prod *prod,
					    size_t nb, __u32 *idx)
{
	if (unlikely(xsk_prod_nb_free(prod, nb) < nb))
		return 0;

	*idx = prod->cached_prod;
	prod->cached_prod += nb;

	return nb;
}

static inline void xsk_ring_prod__submit(struct xsk_ring_prod *prod, size_t nb)
{
	/* Make sure everything has been written to the ring before signalling
	 * this to the kernel.
	 */
	smp_wmb();

	*prod->producer += nb;
}

If we reserve N slots but only submit n < N of them, we end up with an
incorrect view of the number of available slots later: cached_prod has
advanced by N while the real producer only advanced by n.

Either xsk_ring_prod__submit should also update cached_prod (but I am
not sure that was the intent of this API), or we must ensure that both
reserve and submit are called with consistent counts.

Did I miss some subtlety ?

--
David Marchand
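To make the accounting drift concrete, here is a standalone sketch (not
the real libbpf code: `ring_model` and its helpers are hypothetical
stand-ins for the xsk_ring_prod bookkeeping) that models only the
counters the functions above manipulate. Reserving 4 slots and
submitting 3 leaves the cached producer one slot ahead of the real one
for good:

	#include <assert.h>
	#include <stdio.h>

	/* Hypothetical model of the producer-side counters: reserve()
	 * advances cached_prod, submit() advances the real producer,
	 * mirroring xsk_ring_prod__reserve()/__submit() above. */
	struct ring_model {
		unsigned int size;
		unsigned int cached_prod;  /* advanced by reserve() */
		unsigned int cached_cons;  /* consumer + size, as in xsk_prod_nb_free() */
		unsigned int producer;     /* advanced by submit() */
	};

	static unsigned int model_nb_free(struct ring_model *r)
	{
		return r->cached_cons - r->cached_prod;
	}

	static unsigned int model_reserve(struct ring_model *r, unsigned int nb)
	{
		if (model_nb_free(r) < nb)
			return 0;
		r->cached_prod += nb;
		return nb;
	}

	static void model_submit(struct ring_model *r, unsigned int nb)
	{
		/* As in xsk_ring_prod__submit(): only the real producer moves. */
		r->producer += nb;
	}

	int main(void)
	{
		struct ring_model r = { .size = 8, .cached_prod = 0,
					.cached_cons = 8, .producer = 0 };

		/* Reserve 4 slots but submit only 3, as the patched Tx path
		 * can do when an mbuf is dropped. */
		model_reserve(&r, 4);
		model_submit(&r, 3);

		/* The kernel was told about 3 descriptors, but the cached
		 * accounting believes 4 slots are gone: one slot is leaked
		 * from the cached view per partial submit. */
		printf("free according to cache: %u\n", model_nb_free(&r));
		printf("submitted to kernel: %u\n", r.producer);
		assert(model_nb_free(&r) == 4);  /* 8 - 4 reserved */
		assert(r.producer == 3);         /* only 3 ever submitted */
		return 0;
	}

Even after the consumer drains the 3 submitted descriptors and
cached_cons is refreshed, cached_prod stays one ahead of producer, so
the free-slot estimate remains one short; repeated partial submits
eventually starve the Tx ring in the cached view.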