From: David Hunt
To: dev@dpdk.org
Cc: olivier.matz@6wind.com, viktorin@rehivetech.com,
 jerin.jacob@caviumnetworks.com, shreyansh.jain@nxp.com, David Hunt
Date: Fri, 17 Jun 2016 14:53:38 +0100
Message-Id: <1466171618-27358-4-git-send-email-david.hunt@intel.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1466171618-27358-1-git-send-email-david.hunt@intel.com>
References: <1466080236-112618-1-git-send-email-david.hunt@intel.com>
 <1466171618-27358-1-git-send-email-david.hunt@intel.com>
Subject: [dpdk-dev] [PATCH v14 3/3] mbuf: make default mempool ops configurable at build

By default, the mempool ops used for mbuf allocations is a multi-producer
and multi-consumer ring. We could imagine a target (maybe some network
processors?) that provides a hardware-assisted pool mechanism. In this
case, the default configuration for this architecture would contain a
different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz
Signed-off-by: David Hunt
Acked-by: Shreyansh Jain
Acked-by: Olivier Matz
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 11ac81e..5f230db 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 2ece742..8cf5436 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;

@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;

-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5
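
As a rough illustration of what the new flow enables (not part of the patch;
the helper name below is made up for the example): an application that wants
a specific handler instead of the build-time RTE_MBUF_DEFAULT_MEMPOOL_OPS can
follow the same create-empty / set-ops / populate sequence that
rte_pktmbuf_pool_create() now uses, passing e.g. "ring_sp_sc" or any other
ops name registered with the mempool library.

/*
 * Sketch only, not part of this patch. Hypothetical helper showing how an
 * application could pick a mempool handler at run time instead of relying
 * on the compile-time RTE_MBUF_DEFAULT_MEMPOOL_OPS. The ops name passed in
 * (e.g. "ring_sp_sc") must already be registered with the mempool library.
 */
#include <string.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_errno.h>

static struct rte_mempool *
pktmbuf_pool_create_with_ops(const char *name, unsigned n,
	unsigned cache_size, uint16_t data_room_size, int socket_id,
	const char *ops_name)
{
	struct rte_pktmbuf_pool_private mbp_priv;
	struct rte_mempool *mp;
	unsigned elt_size;

	/* one element = mbuf header + data room (no application private area) */
	elt_size = sizeof(struct rte_mbuf) + (unsigned)data_room_size;
	memset(&mbp_priv, 0, sizeof(mbp_priv));
	mbp_priv.mbuf_data_room_size = data_room_size;
	mbp_priv.mbuf_priv_size = 0;

	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
	if (mp == NULL)
		return NULL;

	/* choose the handler by name instead of the compile-time default */
	if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_pktmbuf_pool_init(mp, &mbp_priv);

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

	return mp;
}

/*
 * Example use (values are illustrative only):
 *	mp = pktmbuf_pool_create_with_ops("mbuf_pool", 8192, 256,
 *		RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "ring_sp_sc");
 */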