From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Sergey Vyazmitinov <s.vyazmitinov@brain4net.com>,
"olivier.matz@6wind.com" <olivier.matz@6wind.com>
Cc: "Yigit, Ferruh" <ferruh.yigit@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] kni: use bulk functions to allocate and free mbufs
Date: Wed, 11 Jan 2017 10:39:20 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772583F103987@irsmsx105.ger.corp.intel.com>
In-Reply-To: <1483048216-2936-1-git-send-email-s.vyazmitinov@brain4net.com>
Hi Sergey,
...
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 4476d75..707c300 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -1261,6 +1261,38 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
> }
>
> /**
> + * Free n packets mbuf back into its original mempool.
> + *
> + * Free each mbuf, and all its segments in case of chained buffers. Each
> + * segment is added back into its original mempool.
> + *
> + * @param mp
> + * The packets mempool.
> + * @param mbufs
> + * The packets mbufs array to be freed.
> + * @param n
> + * Number of packets.
> + */
> +static inline void rte_pktmbuf_free_bulk(struct rte_mempool *mp,
> + struct rte_mbuf **mbufs, unsigned n)
> +{
> + struct rte_mbuf *mbuf, *m_next;
> + unsigned i;
> + for (i = 0; i < n; ++i) {
> + mbuf = mbufs[i];
> + __rte_mbuf_sanity_check(mbuf, 1);
> +
> + mbuf = mbuf->next;
> + while (mbuf != NULL) {
> + m_next = mbuf->next;
> + rte_pktmbuf_free_seg(mbuf);
> + mbuf = m_next;
> + }
I think you forgot to call __rte_pktmbuf_prefree_seg(mbufs[i]); somewhere here.
Konstantin
> + }
> + rte_mempool_put_bulk(mp, (void * const *)mbufs, n);
> +}
> +
> +/**
> * Creates a "clone" of the given packet mbuf.
> *
> * Walks through all segments of the given packet mbuf, and for each of them:
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index d315d42..e612a0a 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1497,6 +1497,12 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
> return rte_mempool_get_bulk(mp, obj_p, 1);
> }
>
> +static inline int __attribute__((always_inline))
> +rte_mempool_get_n(struct rte_mempool *mp, void **obj_p, int n)
> +{
> + return rte_mempool_get_bulk(mp, obj_p, n);
> +}
> +
> /**
> * Return the number of entries in the mempool.
> *
> --
> 2.7.4
Thread overview: 13+ messages
2016-12-29 21:50 Sergey Vyazmitinov
2017-01-11 10:39 ` Ananyev, Konstantin [this message]
2017-01-11 16:17 ` Stephen Hemminger
2017-01-11 16:38 ` Olivier MATZ
2017-01-11 17:00 ` Ferruh Yigit
2017-01-11 17:28 ` Ananyev, Konstantin
2017-01-11 17:35 ` Stephen Hemminger
2017-01-11 17:43 ` Ananyev, Konstantin
2017-01-11 17:47 ` Ferruh Yigit
2017-01-11 18:25 ` Ananyev, Konstantin
2017-01-11 18:41 ` Ferruh Yigit
2017-01-11 18:56 ` Stephen Hemminger
2017-01-16 7:39 ` Yuanhan Liu