From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <olivier.matz@6wind.com>
Received: from mail-wi0-f170.google.com (mail-wi0-f170.google.com
 [209.85.212.170]) by dpdk.org (Postfix) with ESMTP id 62E3F5FEB
 for <dev@dpdk.org>; Mon, 20 Apr 2015 17:41:49 +0200 (CEST)
Received: by widdi4 with SMTP id di4so104577267wid.0
 for <dev@dpdk.org>; Mon, 20 Apr 2015 08:41:49 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20130820;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=evv3ZJHE3ft3pqBtNos9SiyPhWbE+uvFuvtraqgskqE=;
 b=eVZ1Ms8HW+fEgLwfWvTWoqnKDkrH9/wMo7dWPHFKIUYjQjY0LZBUp9gbpVzKjHTYCb
 nFIBt0VI94cpoZkeXVVwKJD7xUPwQlNkbwFtFNJUJWlaJmuKTu6Z04B01y1+gDNfaD1Y
 shnf5YTN79BzX4sMb49KXJt+muUMToUjNk0OY2diB64isrWP/WJqe3POz6nb1snXYR+h
 6mBoHZJQPGxggWI66O11dcDaDKGxSIbdjTQHL92EZIjRxeMcnXza2RZVsvsE7n39sBug
 Np686tlOS6C3ptuhzPWTxCNDL2q1mr3ebZKFPBj8ZcCOYUvH40pGHOEaPuR6vB/HNYI6
 ID3Q==
X-Gm-Message-State: ALoCoQnBJYPnpVZP8F6iRaVgZmT7lTK7FYSX4eHmXmOqth3ZWd2x1MxPev4oBMLpg4svW38lm00u
X-Received: by 10.180.74.37 with SMTP id q5mr26520916wiv.59.1429544509229;
 Mon, 20 Apr 2015 08:41:49 -0700 (PDT)
Received: from glumotte.dev.6wind.com (6wind.net2.nerim.net. [213.41.180.237])
 by mx.google.com with ESMTPSA id
 fm8sm11258951wib.9.2015.04.20.08.41.48
 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-SHA bits=128/128);
 Mon, 20 Apr 2015 08:41:48 -0700 (PDT)
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Mon, 20 Apr 2015 17:41:26 +0200
Message-Id: <1429544496-22532-3-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1429544496-22532-1-git-send-email-olivier.matz@6wind.com>
References: <1427829784-12323-2-git-send-email-zer0@droids-corp.org>
 <1429544496-22532-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 02/12] examples: always initialize mbuf_pool
	private area
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Mon, 20 Apr 2015 15:41:49 -0000

The mbuf pool private area must always be populated when an mbuf pool
is created: applications and drivers may expect the private area fields
(mbuf_data_room_size and mbuf_priv_size) of any mbuf pool to be
properly filled.
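
A minimal sketch of the pattern applied by this patch, assuming
hypothetical application-specific values NB_MBUF and MBUF_SIZE; the
key change is passing sizeof(struct rte_pktmbuf_pool_private) and
rte_pktmbuf_pool_init instead of 0 and NULL:

    /* Create an mbuf pool whose private area is initialized:
     * rte_pktmbuf_pool_init() fills mbuf_data_room_size and
     * mbuf_priv_size (a NULL opaque argument selects the library
     * defaults). */
    struct rte_mempool *mp;

    mp = rte_mempool_create("mbuf_pool", NB_MBUF,
            MBUF_SIZE, 32,
            sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL, /* pool ctor, ctor arg */
            rte_pktmbuf_init, NULL,      /* mbuf ctor, ctor arg */
            rte_socket_id(), 0);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create mbuf pool\n");

When non-default sizes are needed, a filled struct
rte_pktmbuf_pool_private can be passed as the opaque argument instead
of NULL, as done for the indirect pool in ip_pipeline below.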

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 examples/ip_fragmentation/main.c | 4 ++--
 examples/ip_pipeline/init.c      | 8 ++++++--
 examples/ipv4_multicast/main.c   | 6 ++++--
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 93ea2a1..cf63718 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -764,8 +764,8 @@ init_mem(void)
 
 			mp = rte_mempool_create(buf, NB_MBUF,
 							   sizeof(struct rte_mbuf), 32,
-							   0,
-							   NULL, NULL,
+							   sizeof(struct rte_pktmbuf_pool_private),
+							   rte_pktmbuf_pool_init, NULL,
 							   rte_pktmbuf_init, NULL,
 							   socket, 0);
 			if (mp == NULL) {
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
index 96aee2b..61d71c3 100644
--- a/examples/ip_pipeline/init.c
+++ b/examples/ip_pipeline/init.c
@@ -363,6 +363,8 @@ app_get_ring_resp(uint32_t core_id)
 static void
 app_init_mbuf_pools(void)
 {
+	struct rte_pktmbuf_pool_private indirect_mbp_priv;
+
 	/* Init the buffer pool */
 	RTE_LOG(INFO, USER1, "Creating the mbuf pool ...\n");
 	app.pool = rte_mempool_create(
@@ -380,13 +382,15 @@ app_init_mbuf_pools(void)
 
 	/* Init the indirect buffer pool */
 	RTE_LOG(INFO, USER1, "Creating the indirect mbuf pool ...\n");
+	indirect_mbp_priv.mbuf_data_room_size = 0;
+	indirect_mbp_priv.mbuf_priv_size = sizeof(struct app_pkt_metadata);
 	app.indirect_pool = rte_mempool_create(
 		"indirect mempool",
 		app.pool_size,
 		sizeof(struct rte_mbuf) + sizeof(struct app_pkt_metadata),
 		app.pool_cache_size,
-		0,
-		NULL, NULL,
+		sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, &indirect_mbp_priv,
 		rte_pktmbuf_init, NULL,
 		rte_socket_id(),
 		0);
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index eed5611..19832d8 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -699,14 +699,16 @@ main(int argc, char **argv)
 		rte_exit(EXIT_FAILURE, "Cannot init packet mbuf pool\n");
 
 	header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF,
-	    HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL,
+	    HDR_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
+	    rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
 	    rte_socket_id(), 0);
 
 	if (header_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init header mbuf pool\n");
 
 	clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF,
-	    CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL,
+	    CLONE_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
+	    rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
 	    rte_socket_id(), 0);
 
 	if (clone_pool == NULL)
-- 
2.1.4