From: Andrew Rybchenko <arybchenko@solarflare.com>
To: <dev@dpdk.org>
CC: Olivier MATZ <olivier.matz@6wind.com>, "Artem V. Andreev"
 <Artem.Andreev@oktetlabs.ru>
Date: Wed, 25 Apr 2018 17:32:20 +0100
Message-ID: <1524673942-22726-5-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.2.3
In-Reply-To: <1524673942-22726-1-git-send-email-arybchenko@solarflare.com>
References: <1511539591-20966-1-git-send-email-arybchenko@solarflare.com>
 <1524673942-22726-1-git-send-email-arybchenko@solarflare.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v3 4/6] mempool/bucket: implement block dequeue operation

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Implement the dequeue_contig_blocks and get_info driver callbacks.
A contiguous block of objects corresponds to one bucket. The request
is satisfied from the per-lcore bucket stack first; the remainder is
taken from the shared bucket ring. If the ring cannot supply the
remainder, the buckets already popped from the stack are pushed back,
so the operation either fully succeeds or fails with ENOBUFS.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 52 +++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index ef822eb2a..24be24e96 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -294,6 +294,46 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	const uint32_t header_size = bd->header_size;
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(bd);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	if (n > 0) {
+		if (unlikely(rte_ring_dequeue_bulk(bd->shared_bucket_ring,
+						   first_objp, n, NULL) != n)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+						  (uint8_t *)*first_objp -
+						  header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		while (n-- > 0) {
+			hdr = (struct bucket_header *)*first_objp;
+			hdr->lcore_id = rte_lcore_id();
+			*first_objp++ = (uint8_t *)hdr + header_size;
+		}
+	}
+
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -548,6 +588,16 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return n_objs;
 }
 
+static int
+bucket_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info)
+{
+	struct bucket_data *bd = mp->pool_data;
+
+	info->contig_block_size = bd->obj_per_bucket;
+	return 0;
+}
+
+
 static const struct rte_mempool_ops ops_bucket = {
 	.name = "bucket",
 	.alloc = bucket_alloc,
@@ -557,6 +607,8 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_count = bucket_get_count,
 	.calc_mem_size = bucket_calc_mem_size,
 	.populate = bucket_populate,
+	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
 
 
-- 
2.14.1