DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "vadim.suraev@gmail.com" <vadim.suraev@gmail.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization
Date: Fri, 27 Feb 2015 11:17:28 +0000	[thread overview]
Message-ID: <2601191342CEEE43887BDE71AB977258213F2C93@irsmsx105.ger.corp.intel.com> (raw)
In-Reply-To: <1424992506-20484-1-git-send-email-vadim.suraev@gmail.com>

Hi Vadim,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of vadim.suraev@gmail.com
> Sent: Thursday, February 26, 2015 11:15 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization
> 
> From: "vadim.suraev@gmail.com" <vadim.suraev@gmail.com>
> 
> new function - rte_pktmbuf_free_bulk makes freeing long
> scattered (chained) pktmbufs belonging to the same pool
> more efficient, using rte_mempool_put_bulk rather than calling
> rte_mempool_put for each segment.
> Unlike rte_pktmbuf_free, which calls rte_pktmbuf_free_seg,
> this function calls __rte_pktmbuf_prefree_seg. If non-NULL is
> returned, the pointer is placed in an array. When the array is
> filled, or when the last segment is processed, rte_mempool_put_bulk
> is called. In the multiple-producer case, it performs 3 times better.
> 
> 
> Signed-off-by: vadim.suraev@gmail.com <vadim.suraev@gmail.com>
> ---
>  lib/librte_mbuf/rte_mbuf.h |   55 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 55 insertions(+)
> 
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 17ba791..1d6f848 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -824,6 +824,61 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
>  	}
>  }
> 
> +/* This macro defines the size of max bulk of mbufs to free for rte_pktmbuf_free_bulk */
> +#define MAX_MBUF_FREE_SIZE 32
> +
> +/* If RTE_LIBRTE_MBUF_DEBUG is enabled, checks if all mbufs must belong to the same mempool */
> +#ifdef RTE_LIBRTE_MBUF_DEBUG
> +
> +#define RTE_MBUF_MEMPOOL_CHECK1(m) struct rte_mempool *first_buffers_mempool = (m) ? (m)->pool : NULL
> +
> +#define RTE_MBUF_MEMPOOL_CHECK2(m) RTE_MBUF_ASSERT(first_buffers_mempool == (m)->pool)
> +
> +#else
> +
> +#define RTE_MBUF_MEMPOOL_CHECK1(m)
> +
> +#define RTE_MBUF_MEMPOOL_CHECK2(m)
> +
> +#endif
> +
> +/**
> + * Free chained (scattered) mbuf into its original mempool.
> + *
> + * All the mbufs in the chain must belong to the same mempool.

Seems really useful.
One thought - why introduce the limitation that all mbufs have to come from the same mempool?
I think you can reorder it a bit, so that it can handle the situation when chained mbufs belong to different mempools.
Something like:
...
mbufs[mbufs_count] = head;
if (unlikely(head->pool != mbufs[0]->pool || mbufs_count == RTE_DIM(mbufs) - 1)) {
    rte_mempool_put_bulk(mbufs[0]->pool, mbufs, mbufs_count);
    mbufs[0] = mbufs[mbufs_count];
    mbufs_count = 0;
} 
mbufs_count++;
...
 
Another nit: probably better to name it rte_pktmbuf_free_chain() or something?
For me, _bulk implies that we have an array of mbufs that we need to free.
Actually, another useful function to have would probably be:
rte_pktmbuf_free_seg_bulk(struct rte_mbuf *m[], uint32_t num);
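
To make the reordering concrete, here is a self-contained sketch of the flush-on-pool-change pattern. Note the stub types and names (fake_pool, fake_mbuf, pool_put_bulk, free_chain, BULK_SZ) are all illustrative stand-ins for the real rte_mempool/rte_mbuf types and rte_mempool_put_bulk(), not DPDK API, so the batching logic can be compiled and checked on its own:

```c
#include <assert.h>
#include <stddef.h>

#define BULK_SZ 32            /* stand-in for MAX_MBUF_FREE_SIZE */

/* Minimal stand-ins for rte_mempool / rte_mbuf. */
struct fake_pool { unsigned freed; };
struct fake_mbuf { struct fake_pool *pool; struct fake_mbuf *next; };

/* Stand-in for rte_mempool_put_bulk(): return num objects to the pool. */
static void pool_put_bulk(struct fake_pool *p, unsigned num)
{
    p->freed += num;
}

/* Free a chain of segments, batching consecutive same-pool segments
 * into one bulk put: flush whenever the pool changes or the local
 * array fills up, keeping the current segment as the start of the
 * next batch. */
static void free_chain(struct fake_mbuf *head)
{
    struct fake_mbuf *mbufs[BULK_SZ];
    unsigned count = 0;

    while (head != NULL) {
        struct fake_mbuf *next = head->next;

        mbufs[count] = head;
        if (head->pool != mbufs[0]->pool || count == BULK_SZ - 1) {
            /* Pool changed or array full: flush the batch collected
             * so far and restart it with the current segment. */
            pool_put_bulk(mbufs[0]->pool, count);
            mbufs[0] = mbufs[count];
            count = 0;
        }
        count++;
        head = next;
    }
    if (count > 0)
        pool_put_bulk(mbufs[0]->pool, count);
}
```

With this ordering the same-mempool restriction disappears: a mixed chain simply costs one extra bulk put per pool transition, and a single-pool chain still gets one put per BULK_SZ segments.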

Konstantin

> + *
> + * @param head
> + *   The head of the mbuf chain to be freed
> + */
> +
> +static inline void __attribute__((always_inline))
> +rte_pktmbuf_free_bulk(struct rte_mbuf *head)
> +{
> +    void *mbufs[MAX_MBUF_FREE_SIZE];
> +    unsigned mbufs_count = 0;
> +    struct rte_mbuf *next;
> +
> +    RTE_MBUF_MEMPOOL_CHECK1(head);
> +
> +    while(head) {
> +        next = head->next;
> +        head->next = NULL;
> +        if(__rte_pktmbuf_prefree_seg(head)) {
> +            RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);
> +            RTE_MBUF_MEMPOOL_CHECK2(head);
> +            mbufs[mbufs_count++] = head;
> +        }
> +        head = next;
> +        if(mbufs_count == MAX_MBUF_FREE_SIZE) {
> +            rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool,mbufs,mbufs_count);
> +            mbufs_count = 0;
> +        }
> +    }
> +    if(mbufs_count > 0) {
> +        rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool,mbufs,mbufs_count);
> +    }
> +}
> +
>  /**
>   * Creates a "clone" of the given packet mbuf.
>   *
> --
> 1.7.9.5

Thread overview: 10+ messages
2015-02-26 23:15 vadim.suraev
2015-02-27  0:49 ` Stephen Hemminger
2015-02-27 11:17 ` Ananyev, Konstantin [this message]
2015-02-27 12:18   ` Vadim Suraev
     [not found]     ` <2601191342CEEE43887BDE71AB977258213F2E3D@irsmsx105.ger.corp.intel.com>
2015-02-27 13:10       ` Ananyev, Konstantin
2015-02-27 13:20     ` Olivier MATZ
2015-02-27 17:09       ` Vadim Suraev
2015-03-04  8:54         ` Olivier MATZ
2015-03-06 23:24           ` Vadim Suraev
2015-03-09  8:38             ` Olivier MATZ
