From: Ashwin Sekhar T K
To: Ashwin Sekhar T K, Pavan Nikhilesh
Subject: [PATCH 3/3] mempool/cnxk: fix alloc from non-EAL pthreads
Date: Tue, 22 Aug 2023 22:31:57 +0530
Message-ID: <20230822170157.2637286-3-asekhar@marvell.com>
In-Reply-To: <20230822170157.2637286-1-asekhar@marvell.com>
References: <20230731055514.1708500-1-asekhar@marvell.com>
 <20230822170157.2637286-1-asekhar@marvell.com>

For non-EAL pthreads, rte_lcore_id() will not be valid. Hence, batch
allocation cannot be used, as such threads have no dedicated alloc
buffer. Fall back to bulk alloc in these cases.
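
As context (not part of the patch itself), below is a minimal sketch of
the situation being fixed: a pthread created directly with
pthread_create() and never registered with EAL sees rte_lcore_id() ==
LCORE_ID_ANY, so the cn10k dequeue path cannot index a per-lcore batch
alloc buffer and must fall back to the bulk dequeue. The pool name,
sizes, and the use of rte_pktmbuf_pool_create() are illustrative
assumptions; on a CN10K platform such a pool would pick up the platform
(cnxk) mempool ops by default.

/*
 * Illustrative only -- not part of this patch. An unregistered
 * non-EAL pthread (no rte_thread_register()) has no lcore, so
 * rte_lcore_id() returns LCORE_ID_ANY and the cn10k mempool driver
 * takes the bulk-alloc fallback added below. Pool name and sizes
 * are arbitrary example values.
 */
#include <pthread.h>
#include <stdio.h>

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

static void *
worker(void *arg)
{
	struct rte_mempool *mp = arg;
	struct rte_mbuf *m;

	/* Non-EAL, unregistered thread: rte_lcore_id() == LCORE_ID_ANY */
	printf("lcore id: %u (LCORE_ID_ANY is %u)\n",
	       rte_lcore_id(), LCORE_ID_ANY);

	/* This dequeue goes through the driver's bulk-alloc fallback. */
	m = rte_pktmbuf_alloc(mp);
	if (m != NULL)
		rte_pktmbuf_free(m);

	return NULL;
}

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;
	pthread_t tid;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* On CN10K, pktmbuf pools use the platform (cnxk) mempool ops. */
	mp = rte_pktmbuf_pool_create("example_pool", 8192, 0, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     SOCKET_ID_ANY);
	if (mp == NULL)
		return -1;

	pthread_create(&tid, NULL, worker, mp);
	pthread_join(tid, NULL);

	rte_mempool_free(mp);
	rte_eal_cleanup();
	return 0;
}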
Fixes: 91531e63f43b ("mempool/cnxk: add cn10k batch dequeue")

Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cn10k_mempool_ops.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
index 2e46204c8d..2a5aad0008 100644
--- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -332,6 +332,12 @@ cn10k_mempool_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	struct batch_op_data *op_data;
 	unsigned int count = 0;
 
+	/* For non-EAL threads, rte_lcore_id() will not be valid. Hence
+	 * fallback to bulk alloc
+	 */
+	if (unlikely(rte_lcore_id() == LCORE_ID_ANY))
+		return cnxk_mempool_deq(mp, obj_table, n);
+
 	op_data = batch_op_data_get(mp->pool_id);
 	if (op_data->max_async_batch)
 		count = mempool_deq_batch_async(mp, obj_table, n);
-- 
2.25.1