From: Lance Richardson
Date: Wed, 9 Dec 2020 15:49:48 -0500
To: Ajit Khaparde
Cc: dev@dpdk.org
In-Reply-To: <20201209192233.6518-18-ajit.khaparde@broadcom.com>
Subject: Re: [dpdk-dev] [PATCH v2 17/17] net/bnxt: modify ring index logic

On Wed, Dec 9, 2020 at 2:28 PM Ajit Khaparde wrote:
>
> Change the ring logic so that the index increments
> unbounded and is masked only when needed.
>
> Modify the existing macros so that the index is not masked.
> Add a new macro RING_IDX() to mask it only when needed.
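
The macro changes themselves are not quoted below, but the scheme
described in the commit message presumably boils down to something
like the following (a sketch only, based on the wording above; the
exact definitions are an assumption, not the actual patch hunk):

    /* Assumed shape of the helpers after this change (illustration only):
     * advancing an index no longer masks; masking is applied only where
     * a ring entry is actually addressed.
     */
    #define RING_NEXT(idx)        ((idx) + 1)
    #define RING_IDX(ring, idx)   ((idx) & (ring)->ring_mask)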
>
> Signed-off-by: Ajit Khaparde
> ---
> index d540e9eee..202291202 100644
> --- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
> +++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
> @@ -105,21 +105,21 @@ bnxt_tx_cmp_vec_fast(struct bnxt_tx_queue *txq, int nr_pkts)
>          struct bnxt_tx_ring_info *txr = txq->tx_ring;
>          uint32_t ring_mask = txr->tx_ring_struct->ring_mask;
>          struct rte_mbuf **free = txq->free;
> -        uint16_t cons = txr->tx_cons;
> +        uint16_t raw_cons = txr->tx_raw_cons;
>          unsigned int blk = 0;
>
>          while (nr_pkts--) {
>                  struct bnxt_sw_tx_bd *tx_buf;
>
> -                tx_buf = &txr->tx_buf_ring[cons];
> -                cons = (cons + 1) & ring_mask;
> +                raw_cons = (raw_cons + 1) & ring_mask;
> +                tx_buf = &txr->tx_buf_ring[raw_cons];

If the intention is (as stated in the commit log) to track the unmasked
index and only mask it when needed, this does not accomplish that, and
if the naming convention is for "raw" indices to refer to the unmasked
version, this might be misleading.

Maybe change to this (a standalone sketch of this pattern follows at
the end of this mail):

    raw_cons = RING_NEXT(raw_cons);
    tx_buf = &txr->tx_buf_ring[raw_cons & ring_mask];

>                  free[blk++] = tx_buf->mbuf;
>                  tx_buf->mbuf = NULL;
>          }
>          if (blk)
>                  rte_mempool_put_bulk(free[0]->pool, (void **)free, blk);
>
> -        txr->tx_cons = cons;
> +        txr->tx_raw_cons = raw_cons;
>  }
>
>  static inline void
> @@ -127,7 +127,7 @@ bnxt_tx_cmp_vec(struct bnxt_tx_queue *txq, int nr_pkts)
>  {
>          struct bnxt_tx_ring_info *txr = txq->tx_ring;
>          struct rte_mbuf **free = txq->free;
> -        uint16_t cons = txr->tx_cons;
> +        uint16_t raw_cons = txr->tx_raw_cons;
>          unsigned int blk = 0;
>          uint32_t ring_mask = txr->tx_ring_struct->ring_mask;
>
> @@ -135,8 +135,8 @@ bnxt_tx_cmp_vec(struct bnxt_tx_queue *txq, int nr_pkts)
>                  struct bnxt_sw_tx_bd *tx_buf;
>                  struct rte_mbuf *mbuf;
>
> -                tx_buf = &txr->tx_buf_ring[cons];
> -                cons = (cons + 1) & ring_mask;
> +                raw_cons = (raw_cons + 1) & ring_mask;
> +                tx_buf = &txr->tx_buf_ring[raw_cons];

See previous comment.

>                  mbuf = rte_pktmbuf_prefree_seg(tx_buf->mbuf);
>                  if (unlikely(mbuf == NULL))
>                          continue;
> @@ -151,6 +151,6 @@ bnxt_tx_cmp_vec(struct bnxt_tx_queue *txq, int nr_pkts)
>          if (blk)
>                  rte_mempool_put_bulk(free[0]->pool, (void **)free, blk);
>
> -        txr->tx_cons = cons;
> +        txr->tx_raw_cons = raw_cons;
>  }
>  #endif /* _BNXT_RXTX_VEC_COMMON_H_ */
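
To make the suggestion above concrete, here is a minimal standalone
sketch of the raw-index pattern (illustration only, not the driver
code; names like demo_ring and demo_consume are made up, and it
assumes a power-of-two ring size):

    #include <stdint.h>

    #define RING_SIZE 256u                          /* must be a power of two */
    #define RING_MASK (RING_SIZE - 1)

    struct demo_ring {
            void     *slots[RING_SIZE];
            uint16_t  raw_cons;                     /* unmasked; wraps naturally at 2^16 */
    };

    /* Advance the raw consumer index without masking; mask only when a
     * ring slot is actually addressed, mirroring the suggested ordering.
     */
    static void demo_consume(struct demo_ring *r, unsigned int nr)
    {
            uint16_t raw_cons = r->raw_cons;

            while (nr--) {
                    void *entry;

                    raw_cons = raw_cons + 1;                    /* "RING_NEXT": no mask */
                    entry = r->slots[raw_cons & RING_MASK];     /* mask only on use */
                    (void)entry;                                /* ... process entry ... */
            }

            r->raw_cons = raw_cons;                 /* store the raw value back */
    }

    int main(void)
    {
            static struct demo_ring r;              /* zero-initialized */

            demo_consume(&r, 5);                    /* e.g. five completions */
            return r.raw_cons & RING_MASK;          /* raw index advanced to 5 */
    }

Because 2^16 is a multiple of any power-of-two ring size up to 65536,
the masked value stays correct even after the 16-bit raw index wraps
around.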