From: David Hunt <david.hunt@intel.com>
To: dev@dpdk.org
Cc: olivier.matz@6wind.com, viktorin@rehivetech.com, jerin.jacob@caviumnetworks.com, David Hunt <david.hunt@intel.com>
Date: Wed, 1 Jun 2016 17:19:58 +0100
Message-Id: <1464797998-76690-6-git-send-email-david.hunt@intel.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1464797998-76690-1-git-send-email-david.hunt@intel.com>
References: <1463665501-18325-1-git-send-email-david.hunt@intel.com> <1464797998-76690-1-git-send-email-david.hunt@intel.com>
Subject: [dpdk-dev] [PATCH v6 5/5] mbuf: get default mempool handler from configuration

By default, the mempool handler used for mbuf allocations is a
multi-producer, multi-consumer ring. We could imagine a target (maybe
some network processors?) that provides a hardware-assisted pool
mechanism. In that case, the default configuration for this
architecture would contain a different value for
RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 47c26f6..cd04f54 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..7d855f0 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_handler(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5
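
For context, a minimal usage sketch, not part of the patch: it follows the
same create-empty / set-handler / populate sequence as the patched
rte_pktmbuf_pool_create(), but selects a handler by hand instead of taking
RTE_MBUF_DEFAULT_MEMPOOL_HANDLER from the build config. It assumes the
rte_mempool_set_handler() and rte_mempool_create_empty() API introduced
earlier in this series; the handler name "my_hw_pool" and the helper
create_pool_with_handler() are hypothetical.

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_pool_with_handler(const char *handler) /* e.g. "my_hw_pool" */
{
	struct rte_pktmbuf_pool_private mbp_priv;
	struct rte_mempool *mp;
	unsigned elt_size;

	/* Element size and pool-private area, computed the same way as
	 * in rte_pktmbuf_pool_create() (priv_size = 0 here). */
	elt_size = sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE;
	mbp_priv.mbuf_data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE;
	mbp_priv.mbuf_priv_size = 0;

	mp = rte_mempool_create_empty("app_mbuf_pool", 8192, elt_size, 256,
			sizeof(struct rte_pktmbuf_pool_private),
			rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* The handler must be chosen before the pool is populated. */
	if (rte_mempool_set_handler(mp, handler) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_pktmbuf_pool_init(mp, &mbp_priv);

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* Initialize each mbuf in the pool, as the patch does. */
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

	return mp;
}

An application that keeps the defaults simply calls
rte_pktmbuf_pool_create() and gets the handler named by
RTE_MBUF_DEFAULT_MEMPOOL_HANDLER ("ring_mp_mc" unless the target's config
overrides it).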