DPDK patches and discussions
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: longli@microsoft.com
Cc: Ferruh Yigit <ferruh.yigit@amd.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	dev@dpdk.org
Subject: Re: [Patch v3] net/mana: use rte_pktmbuf_alloc_bulk for allocating RX WQEs
Date: Thu, 1 Feb 2024 08:16:16 -0800	[thread overview]
Message-ID: <20240201161616.GA13514@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net> (raw)
In-Reply-To: <1706759150-6269-1-git-send-email-longli@linuxonhyperv.com>

On Wed, Jan 31, 2024 at 07:45:50PM -0800, longli@linuxonhyperv.com wrote:
> From: Long Li <longli@microsoft.com>
> 
> Instead of allocating mbufs one by one during RX, use
> rte_pktmbuf_alloc_bulk() to allocate them in a batch.
> 
> There are no measurable performance improvements in benchmarks. However,
> this patch should improve CPU cycles and reduce potential locking
> conflicts in real-world applications.
> 
> Signed-off-by: Long Li <longli@microsoft.com>
> ---
> Change in v2:
> use rte_calloc_socket() in place of rte_calloc()
> 
> v3:
> add more comment explaining the benefit of doing alloc_bulk.
> free mbufs that failed to post
> 
>  drivers/net/mana/rx.c | 74 +++++++++++++++++++++++++++++--------------
>  1 file changed, 50 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/net/mana/rx.c b/drivers/net/mana/rx.c
> index acad5e26cd..6112db2219 100644
> --- a/drivers/net/mana/rx.c
> +++ b/drivers/net/mana/rx.c
> @@ -2,6 +2,7 @@
>   * Copyright 2022 Microsoft Corporation
>   */
>  #include <ethdev_driver.h>
> +#include <rte_malloc.h>
>  
>  #include <infiniband/verbs.h>
>  #include <infiniband/manadv.h>
> @@ -59,9 +60,8 @@ mana_rq_ring_doorbell(struct mana_rxq *rxq)
>  }
>  
>  static int
> -mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
> +mana_post_rx_wqe(struct mana_rxq *rxq, struct rte_mbuf *mbuf)
>  {
> -	struct rte_mbuf *mbuf = NULL;
>  	struct gdma_sgl_element sgl[1];
>  	struct gdma_work_request request;
>  	uint32_t wqe_size_in_bu;
> @@ -69,12 +69,6 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
>  	int ret;
>  	struct mana_mr_cache *mr;
>  
> -	mbuf = rte_pktmbuf_alloc(rxq->mp);
> -	if (!mbuf) {
> -		rxq->stats.nombuf++;
> -		return -ENOMEM;
> -	}
> -
>  	mr = mana_alloc_pmd_mr(&rxq->mr_btree, priv, mbuf);
>  	if (!mr) {
>  		DP_LOG(ERR, "failed to register RX MR");
> @@ -121,19 +115,32 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
>   * Post work requests for a Rx queue.
>   */
>  static int
> -mana_alloc_and_post_rx_wqes(struct mana_rxq *rxq)
> +mana_alloc_and_post_rx_wqes(struct mana_rxq *rxq, uint32_t count)
>  {
>  	int ret;
>  	uint32_t i;
> +	struct rte_mbuf **mbufs;
> +
> +	mbufs = rte_calloc_socket("mana_rx_mbufs", count, sizeof(struct rte_mbuf *),
> +				  0, rxq->mp->socket_id);
> +	if (!mbufs)
> +		return -ENOMEM;
> +
> +	ret = rte_pktmbuf_alloc_bulk(rxq->mp, mbufs, count);
> +	if (ret) {
> +		DP_LOG(ERR, "failed to allocate mbufs for RX");
> +		rxq->stats.nombuf += count;
> +		goto fail;
> +	}
>  
>  #ifdef RTE_ARCH_32
>  	rxq->wqe_cnt_to_short_db = 0;
>  #endif
> -	for (i = 0; i < rxq->num_desc; i++) {
> -		ret = mana_alloc_and_post_rx_wqe(rxq);
> +	for (i = 0; i < count; i++) {
> +		ret = mana_post_rx_wqe(rxq, mbufs[i]);
>  		if (ret) {
>  			DP_LOG(ERR, "failed to post RX ret = %d", ret);
> -			return ret;
> +			break;
>  		}
>  
>  #ifdef RTE_ARCH_32
> @@ -144,8 +151,16 @@ mana_alloc_and_post_rx_wqes(struct mana_rxq *rxq)
>  #endif
>  	}
>  
> +	/* Free the remaining mbufs that were not posted */
> +	while (i < count) {
> +		rte_pktmbuf_free(mbufs[i]);
> +		i++;
> +	}

There is also rte_pktmbuf_free_bulk() that could be used here instead of
freeing one mbuf at a time. It probably won't make any material difference
to performance, so this is just an FYI.
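
For illustration, a rough sketch of what that cleanup could look like with
the bulk API (untested, not part of the patch; it reuses the patch's own
mbufs, i and count variables):

	/* Sketch only: 'i' is the index of the first mbuf that was not
	 * posted, so everything from mbufs[i] onward is still owned by the
	 * driver. rte_pktmbuf_free_bulk() walks the array and returns each
	 * mbuf (and any chained segments) to its mempool in one call.
	 */
	if (i < count)
		rte_pktmbuf_free_bulk(&mbufs[i], count - i);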



Thread overview: 25+ messages
2024-01-25  2:42 [PATCH] " longli
2024-01-26  0:29 ` Stephen Hemminger
2024-01-26  1:13   ` Long Li
2024-01-30  1:13 ` [Patch v2] " longli
2024-01-30 10:19   ` Ferruh Yigit
2024-01-30 16:43     ` Stephen Hemminger
2024-01-30 18:05       ` Tyler Retzlaff
2024-01-30 22:42       ` Ferruh Yigit
2024-02-01  3:55         ` Long Li
2024-02-01 10:52           ` Ferruh Yigit
2024-02-02  1:21             ` Long Li
2024-02-01 16:33           ` Tyler Retzlaff
2024-02-02  1:22             ` Long Li
2024-01-30 21:30     ` Long Li
2024-01-30 22:34       ` Ferruh Yigit
2024-01-30 22:36         ` Long Li
2024-02-01  3:45   ` [Patch v3] " longli
2024-02-01 16:16     ` Tyler Retzlaff [this message]
2024-02-01 19:41       ` Long Li
2024-02-02  1:19     ` [Patch v4] net/mana: use rte_pktmbuf_alloc_bulk for allocating RX mbufs longli
2024-02-02 16:24       ` Stephen Hemminger
2024-02-06 18:06       ` Ferruh Yigit
2024-02-07  4:50         ` Long Li
2024-02-09  0:02       ` [Patch v5] net/mana: use rte_pktmbuf_alloc_bulk for allocating RX WQEs longli
2024-02-09 17:46         ` Ferruh Yigit
