DPDK patches and discussions
* [dpdk-dev] [PATCH v1 1/1] mempool/octeontx2: fix npa pool range errors
@ 2019-07-05 10:33 vattunuru
  2019-07-07 14:21 ` Jerin Jacob Kollanukkaran
  2019-07-08  4:47 ` [dpdk-dev] [PATCH v1 1/1] mempool/octeontx2: fix mempool creation failure vattunuru
  0 siblings, 2 replies; 6+ messages in thread
From: vattunuru @ 2019-07-05 10:33 UTC (permalink / raw)
  To: dev; +Cc: thomas, jerinj, Vamsi Attunuru

From: Vamsi Attunuru <vattunuru@marvell.com>

Fix NPA pool range errors observed while creating a mempool. During
mempool creation, the octeontx2 mempool driver populates the pool
range fields before enqueueing the buffers. If any enqueue or dequeue
operation reaches the NPA hardware before the range fields have been
updated in the HW context, those operations fail with NPA range
errors. Add a routine that reads back the HW context and verifies
that the range fields have been updated.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 37 ++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index e1764b0..a60a77a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -600,6 +600,40 @@ npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
 }
 
 static int
+npa_lf_aura_range_update_check(uint64_t aura_handle)
+{
+	uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+	struct npa_aura_lim *lim = lf->aura_lim;
+	struct npa_aq_enq_req *req;
+	struct npa_aq_enq_rsp *rsp;
+	struct npa_pool_s *pool;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
+
+	req->aura_id = aura_id;
+	req->ctype = NPA_AQ_CTYPE_POOL;
+	req->op = NPA_AQ_INSTOP_READ;
+
+	rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
+	if (rc) {
+		otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
+		return rc;
+	}
+
+	pool = &rsp->pool;
+
+	if (lim[aura_id].ptr_start != pool->ptr_start ||
+		lim[aura_id].ptr_end != pool->ptr_end) {
+		otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
+static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
 	uint32_t block_size, block_count;
@@ -724,6 +758,9 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
 
 	npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
 
+	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }
-- 
2.8.4
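
For readers unfamiliar with the otx2 mailbox flow, below is a minimal,
self-contained sketch of the write-then-read-back pattern the patch
applies: set the pool's pointer range, then read the context back and
refuse to populate buffers until both limits match. All names here
(hw_ctx_write, hw_ctx_read, struct pool_range, range_update_check) are
hypothetical stand-ins for illustration, not the real NPA/mailbox API.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct pool_range {
	uint64_t ptr_start;
	uint64_t ptr_end;
};

/* Stand-in for the NPA pool context kept by the hardware. */
static struct pool_range hw_ctx;

static void hw_ctx_write(const struct pool_range *r) { hw_ctx = *r; }
static void hw_ctx_read(struct pool_range *r) { *r = hw_ctx; }

/* Same idea as npa_lf_aura_range_update_check(): compare the expected
 * limits with what the context read-back reports, -ERANGE on mismatch. */
static int range_update_check(const struct pool_range *expected)
{
	struct pool_range cur;

	hw_ctx_read(&cur);
	if (expected->ptr_start != cur.ptr_start ||
	    expected->ptr_end != cur.ptr_end) {
		fprintf(stderr, "range update not reflected in HW context\n");
		return -ERANGE;
	}
	return 0;
}

int main(void)
{
	struct pool_range r = { .ptr_start = 0x1000, .ptr_end = 0x2000 };

	hw_ctx_write(&r);               /* analogous to npa_lf_aura_op_range_set() */
	if (range_update_check(&r) < 0) /* gate buffer population on the read-back */
		return 1;

	printf("range visible; safe to enqueue buffers\n");
	return 0;
}

In the actual patch the read-back goes through the AF mailbox
(NPA_AQ_INSTOP_READ), so a failed check surfaces as an error from
otx2_npa_populate() before any buffer is handed to the pool.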

