From: Jerin Jacob
Date: Mon, 12 Jun 2023 21:22:23 +0530
Subject: Re: [PATCH 1/3] event/cnxk: align TX queue buffer adjustment
To: pbhagavatula@marvell.com
Cc: jerinj@marvell.com, Shijith Thotton, Nithin Dabilpuram, Kiran Kumar K,
    Sunil Kumar Kori, Satha Rao, dev@dpdk.org
In-Reply-To: <20230516143752.4941-1-pbhagavatula@marvell.com>

On Tue, May 16, 2023 at 8:08 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh
>
> Remove recalculating SQB thresholds in Tx queue buffer adjustment.
> The adjustment is already done during Tx queue setup.
>
> Signed-off-by: Pavan Nikhilesh
> ---
> Depends-on: series-27660

The dependent patches have been merged to the main tree. Please resend the
patch so it runs through CI.
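For readers following along: the buffer adjustment mentioned in the commit
message is computed once during Tx queue setup, so the hunks quoted below only
drop the duplicated recalculation. A rough standalone sketch of that setup-time
arithmetic, assuming example values; the helper and numbers here are
illustrative only and not the cnxk driver's code:

/* Illustrative sketch of the SQB buffer adjustment done at Tx queue setup.
 * All names and numbers are examples, not the cnxk driver's API. */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for RTE_ALIGN_MUL_CEIL(x, y) / y, i.e. ceil(x / y). */
static uint16_t div_ceil_u16(uint16_t x, uint16_t y)
{
	return (x + y - 1) / y;
}

int main(void)
{
	uint16_t nb_sqb_bufs = 512;  /* SQBs backing the SQ (example value) */
	uint16_t sqes_per_sqb = 32;  /* SQEs held by one SQB (example value) */
	uint16_t thresh = 10;        /* stand-in for ROC_NIX_SQB_THRESH */

	/* Keep roughly one SQB out of every sqes_per_sqb in reserve. */
	uint16_t adj = nb_sqb_bufs - div_ceil_u16(nb_sqb_bufs, sqes_per_sqb);

	/* Then hold back a percentage threshold on top of that. */
	adj = ((100 - thresh) * adj) / 100;

	printf("nb_sqb_bufs_adj = %u\n", adj); /* 512 - 16 = 496, then 90% -> 446 */
	return 0;
}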
>
>  drivers/event/cnxk/cn10k_eventdev.c  |  9 +--------
>  drivers/event/cnxk/cn10k_tx_worker.h |  6 +++---
>  drivers/event/cnxk/cn9k_eventdev.c   |  9 +--------
>  drivers/event/cnxk/cn9k_worker.h     | 12 +++++++++---
>  drivers/net/cnxk/cn10k_tx.h          | 12 ++++++------
>  drivers/net/cnxk/cn9k_tx.h           |  5 +++--
>  6 files changed, 23 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
> index 89f32c4d1e..f7c6a83ff0 100644
> --- a/drivers/event/cnxk/cn10k_eventdev.c
> +++ b/drivers/event/cnxk/cn10k_eventdev.c
> @@ -840,16 +840,9 @@ cn10k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
>                 sq = &cnxk_eth_dev->sqs[tx_queue_id];
>                 txq = eth_dev->data->tx_queues[tx_queue_id];
>                 sqes_per_sqb = 1U << txq->sqes_per_sqb_log2;
> -               sq->nb_sqb_bufs_adj =
> -                       sq->nb_sqb_bufs -
> -                       RTE_ALIGN_MUL_CEIL(sq->nb_sqb_bufs, sqes_per_sqb) /
> -                               sqes_per_sqb;
>                 if (cnxk_eth_dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
> -                       sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc /
> -                                               (sqes_per_sqb - 1));
> +                       sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc / sqes_per_sqb);
>                 txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
> -               txq->nb_sqb_bufs_adj =
> -                       ((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
>         }
>  }
>
> diff --git a/drivers/event/cnxk/cn10k_tx_worker.h b/drivers/event/cnxk/cn10k_tx_worker.h
> index c18786a14c..7b2798ad2e 100644
> --- a/drivers/event/cnxk/cn10k_tx_worker.h
> +++ b/drivers/event/cnxk/cn10k_tx_worker.h
> @@ -32,9 +32,9 @@ cn10k_sso_txq_fc_wait(const struct cn10k_eth_txq *txq)
>  static __rte_always_inline int32_t
>  cn10k_sso_sq_depth(const struct cn10k_eth_txq *txq)
>  {
> -       return (txq->nb_sqb_bufs_adj -
> -               __atomic_load_n((int16_t *)txq->fc_mem, __ATOMIC_RELAXED))
> -              << txq->sqes_per_sqb_log2;
> +       int32_t avail = (int32_t)txq->nb_sqb_bufs_adj -
> +                       (int32_t)__atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED);
> +       return (avail << txq->sqes_per_sqb_log2) - avail;
>  }
>
>  static __rte_always_inline uint16_t
> diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
> index df23219f14..a9d603c22f 100644
> --- a/drivers/event/cnxk/cn9k_eventdev.c
> +++ b/drivers/event/cnxk/cn9k_eventdev.c
> @@ -893,16 +893,9 @@ cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
>                 sq = &cnxk_eth_dev->sqs[tx_queue_id];
>                 txq = eth_dev->data->tx_queues[tx_queue_id];
>                 sqes_per_sqb = 1U << txq->sqes_per_sqb_log2;
> -               sq->nb_sqb_bufs_adj =
> -                       sq->nb_sqb_bufs -
> -                       RTE_ALIGN_MUL_CEIL(sq->nb_sqb_bufs, sqes_per_sqb) /
> -                               sqes_per_sqb;
>                 if (cnxk_eth_dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
> -                       sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc /
> -                                               (sqes_per_sqb - 1));
> +                       sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc / sqes_per_sqb);
>                 txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
> -               txq->nb_sqb_bufs_adj =
> -                       ((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
>         }
>  }
>
> diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
> index 988cb3acb6..d15dd309fe 100644
> --- a/drivers/event/cnxk/cn9k_worker.h
> +++ b/drivers/event/cnxk/cn9k_worker.h
> @@ -711,6 +711,14 @@ cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
>  }
>  #endif
>
> +static __rte_always_inline int32_t
> +cn9k_sso_sq_depth(const struct cn9k_eth_txq *txq)
> +{
> +       int32_t avail = (int32_t)txq->nb_sqb_bufs_adj -
> +                       (int32_t)__atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED);
> +       return (avail << txq->sqes_per_sqb_log2) - avail;
> +}
> +
>  static __rte_always_inline uint16_t
>  cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
>                        uint64_t *txq_data, const uint32_t flags)
> @@ -734,9 +742,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
>         if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena)
>                 handle_tx_completion_pkts(txq, 1, 1);
>
> -       if (((txq->nb_sqb_bufs_adj -
> -             __atomic_load_n((int16_t *)txq->fc_mem, __ATOMIC_RELAXED))
> -            << txq->sqes_per_sqb_log2) <= 0)
> +       if (cn9k_sso_sq_depth(txq) <= 0)
>                 return 0;
>         cn9k_nix_tx_skeleton(txq, cmd, flags, 0);
>         cn9k_nix_xmit_prepare(txq, m, cmd, flags, txq->lso_tun_fmt, txq->mark_flag,
> diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
> index c9ec01cd9d..bab08a2d3b 100644
> --- a/drivers/net/cnxk/cn10k_tx.h
> +++ b/drivers/net/cnxk/cn10k_tx.h
> @@ -35,12 +35,13 @@
>
>  #define NIX_XMIT_FC_OR_RETURN(txq, pkts)                                       \
>         do {                                                                   \
> +               int64_t avail;                                                 \
>                 /* Cached value is low, Update the fc_cache_pkts */            \
>                 if (unlikely((txq)->fc_cache_pkts < (pkts))) {                 \
> +                       avail = txq->nb_sqb_bufs_adj - *txq->fc_mem;           \
>                         /* Multiply with sqe_per_sqb to express in pkts */     \
>                         (txq)->fc_cache_pkts =                                 \
> -                               ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem)      \
> -                               << (txq)->sqes_per_sqb_log2;                   \
> +                               (avail << (txq)->sqes_per_sqb_log2) - avail;   \
>                         /* Check it again for the room */                      \
>                         if (unlikely((txq)->fc_cache_pkts < (pkts)))           \
>                                 return 0;                                      \
> @@ -113,10 +114,9 @@ cn10k_nix_vwqe_wait_fc(struct cn10k_eth_txq *txq, int64_t req)
>         if (cached < 0) {
>                 /* Check if we have space else retry. */
>                 do {
> -                       refill =
> -                               (txq->nb_sqb_bufs_adj -
> -                                __atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED))
> -                               << txq->sqes_per_sqb_log2;
> +                       refill = txq->nb_sqb_bufs_adj -
> +                                __atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED);
> +                       refill = (refill << txq->sqes_per_sqb_log2) - refill;
>                 } while (refill <= 0);
>                 __atomic_compare_exchange(&txq->fc_cache_pkts, &cached, &refill,
>                                           0, __ATOMIC_RELEASE,
> diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
> index e956c1ad2a..8efb75b505 100644
> --- a/drivers/net/cnxk/cn9k_tx.h
> +++ b/drivers/net/cnxk/cn9k_tx.h
> @@ -32,12 +32,13 @@
>
>  #define NIX_XMIT_FC_OR_RETURN(txq, pkts)                                       \
>         do {                                                                   \
> +               int64_t avail;                                                 \
>                 /* Cached value is low, Update the fc_cache_pkts */            \
>                 if (unlikely((txq)->fc_cache_pkts < (pkts))) {                 \
> +                       avail = txq->nb_sqb_bufs_adj - *txq->fc_mem;           \
>                         /* Multiply with sqe_per_sqb to express in pkts */     \
>                         (txq)->fc_cache_pkts =                                 \
> -                               ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem)      \
> -                               << (txq)->sqes_per_sqb_log2;                   \
> +                               (avail << (txq)->sqes_per_sqb_log2) - avail;   \
>                         /* Check it again for the room */                      \
>                         if (unlikely((txq)->fc_cache_pkts < (pkts)))           \
>                                 return 0;                                      \
> --
> 2.39.1
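To make the recurring arithmetic in the hunks above easy to check: the free
depth is now computed as (avail << sqes_per_sqb_log2) - avail, which equals
avail * (sqes_per_sqb - 1), instead of avail << sqes_per_sqb_log2; in other
words each available SQB is counted as sqes_per_sqb - 1 entries rather than
sqes_per_sqb. A tiny self-contained sketch of that arithmetic; the function
name and sample numbers below are illustrative, not the driver's API:

/* Illustrative recomputation of the depth formula used in the patch. */
#include <stdint.h>
#include <stdio.h>

/* (avail << log2) - avail == avail * (sqes_per_sqb - 1). */
static int32_t sq_depth(int32_t nb_sqb_bufs_adj, int32_t fc_mem_val,
			uint16_t sqes_per_sqb_log2)
{
	int32_t avail = nb_sqb_bufs_adj - fc_mem_val;

	return (avail << sqes_per_sqb_log2) - avail;
}

int main(void)
{
	/* 446 adjusted SQBs, 100 in use, 32 SQEs per SQB (log2 = 5):
	 * old formula: 346 << 5         = 11072
	 * new formula: (346 << 5) - 346 = 10726 = 346 * 31 */
	printf("depth = %d\n", sq_depth(446, 100, 5));
	return 0;
}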