From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gagandeep Singh
To: dev@dpdk.org, Hemant Agrawal
Cc: Jun Yang
Subject: [PATCH 02/30] dma/dpaa2: support multiple HW queues
Date: Fri, 19 Jul 2024 15:30:58 +0530
Message-Id: <20240719100126.1150373-2-g.singh@nxp.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240719100126.1150373-1-g.singh@nxp.com>
References: <20240719100126.1150373-1-g.singh@nxp.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
From: Jun Yang

Initialize and configure the queues of the DMA device according to the
number of HW queues reported by the MC bus. Because multiple queues per
device are now supported, the virtual-queue implementation is dropped.
Signed-off-by: Jun Yang
---
 drivers/dma/dpaa2/dpaa2_qdma.c | 312 +++++++++++++++------------------
 drivers/dma/dpaa2/dpaa2_qdma.h |   6 +-
 2 files changed, 140 insertions(+), 178 deletions(-)

diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 5954b552b5..945ba71e4a 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -478,9 +478,9 @@ dpdmai_dev_get_job_us(struct qdma_virt_queue *qdma_vq __rte_unused,
 
 static inline uint16_t
 dpdmai_dev_get_single_job_lf(struct qdma_virt_queue *qdma_vq,
-			     const struct qbman_fd *fd,
-			     struct rte_dpaa2_qdma_job **job,
-			     uint16_t *nb_jobs)
+	const struct qbman_fd *fd,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t *nb_jobs)
 {
 	struct qbman_fle *fle;
 	struct rte_dpaa2_qdma_job **ppjob = NULL;
@@ -512,9 +512,9 @@ dpdmai_dev_get_single_job_lf(struct qdma_virt_queue *qdma_vq,
 
 static inline uint16_t
 dpdmai_dev_get_sg_job_lf(struct qdma_virt_queue *qdma_vq,
-			 const struct qbman_fd *fd,
-			 struct rte_dpaa2_qdma_job **job,
-			 uint16_t *nb_jobs)
+	const struct qbman_fd *fd,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t *nb_jobs)
 {
 	struct qbman_fle *fle;
 	struct rte_dpaa2_qdma_job **ppjob = NULL;
@@ -548,12 +548,12 @@ dpdmai_dev_get_sg_job_lf(struct qdma_virt_queue *qdma_vq,
 /* Function to receive a QDMA job for a given device and queue*/
 static int
 dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
-				     uint16_t *vq_id,
-				     struct rte_dpaa2_qdma_job **job,
-				     uint16_t nb_jobs)
+	uint16_t *vq_id,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t nb_jobs)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_vq->dpdmai_dev;
-	struct dpaa2_queue *rxq = &(dpdmai_dev->rx_queue[0]);
+	struct dpaa2_queue *rxq;
 	struct qbman_result *dq_storage, *dq_storage1 = NULL;
 	struct qbman_pull_desc pulldesc;
 	struct qbman_swp *swp;
@@ -562,7 +562,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 	uint8_t num_rx = 0;
 	const struct qbman_fd *fd;
 	uint16_t vqid, num_rx_ret;
-	uint16_t rx_fqid = rxq->fqid;
+	uint16_t rx_fqid;
 	int ret, pull_size;
 
 	if (qdma_vq->flags & DPAA2_QDMA_VQ_FD_SG_FORMAT) {
@@ -575,15 +575,17 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+			DPAA2_QDMA_ERR("Failed to allocate IO portal, tid(%d)",
 				rte_gettid());
 			return 0;
 		}
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
+	rxq = &dpdmai_dev->rx_queue[qdma_vq->vq_id];
+	rx_fqid = rxq->fqid;
 
-	pull_size = (nb_jobs > dpaa2_dqrr_size) ? dpaa2_dqrr_size : nb_jobs;
+	pull_size = (nb_jobs > dpaa2_dqrr_size) ?
+		dpaa2_dqrr_size : nb_jobs;
 	q_storage = rxq->q_storage;
 
 	if (unlikely(!q_storage->active_dqs)) {
@@ -697,12 +699,12 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 
 static int
 dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
-					uint16_t *vq_id,
-					struct rte_dpaa2_qdma_job **job,
-					uint16_t nb_jobs)
+	uint16_t *vq_id,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t nb_jobs)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_vq->dpdmai_dev;
-	struct dpaa2_queue *rxq = &(dpdmai_dev->rx_queue[0]);
+	struct dpaa2_queue *rxq;
 	struct qbman_result *dq_storage;
 	struct qbman_pull_desc pulldesc;
 	struct qbman_swp *swp;
@@ -710,7 +712,7 @@ dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
 	uint8_t num_rx = 0;
 	const struct qbman_fd *fd;
 	uint16_t vqid, num_rx_ret;
-	uint16_t rx_fqid = rxq->fqid;
+	uint16_t rx_fqid;
 	int ret, next_pull, num_pulled = 0;
 
 	if (qdma_vq->flags & DPAA2_QDMA_VQ_FD_SG_FORMAT) {
@@ -725,15 +727,15 @@ dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+			DPAA2_QDMA_ERR("Failed to allocate IO portal, tid(%d)",
 				rte_gettid());
 			return 0;
 		}
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
-	rxq = &(dpdmai_dev->rx_queue[0]);
+	rxq = &dpdmai_dev->rx_queue[qdma_vq->vq_id];
+	rx_fqid = rxq->fqid;
 
 	do {
 		dq_storage = rxq->q_storage->dq_storage[0];
@@ -810,7 +812,7 @@ dpdmai_dev_submit_multi(struct qdma_virt_queue *qdma_vq,
 	uint16_t nb_jobs)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_vq->dpdmai_dev;
-	uint16_t txq_id = dpdmai_dev->tx_queue[0].fqid;
+	uint16_t txq_id = dpdmai_dev->tx_queue[qdma_vq->vq_id].fqid;
 	struct qbman_fd fd[DPAA2_QDMA_MAX_DESC];
 	struct qbman_eq_desc eqdesc;
 	struct qbman_swp *swp;
@@ -931,8 +933,8 @@ dpaa2_qdma_submit(void *dev_private, uint16_t vchan)
 
 static int
 dpaa2_qdma_enqueue(void *dev_private, uint16_t vchan,
-		   rte_iova_t src, rte_iova_t dst,
-		   uint32_t length, uint64_t flags)
+	rte_iova_t src, rte_iova_t dst,
+	uint32_t length, uint64_t flags)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
@@ -966,8 +968,8 @@ dpaa2_qdma_enqueue(void *dev_private, uint16_t vchan,
 
 int
 rte_dpaa2_qdma_copy_multi(int16_t dev_id, uint16_t vchan,
-			  struct rte_dpaa2_qdma_job **jobs,
-			  uint16_t nb_cpls)
+	struct rte_dpaa2_qdma_job **jobs,
+	uint16_t nb_cpls)
 {
 	struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
 	struct dpaa2_dpdmai_dev *dpdmai_dev = obj->dev_private;
@@ -978,14 +980,11 @@ rte_dpaa2_qdma_copy_multi(int16_t dev_id, uint16_t vchan,
 }
 
 static uint16_t
-dpaa2_qdma_dequeue_multi(struct qdma_device *qdma_dev,
-			 struct qdma_virt_queue *qdma_vq,
-			 struct rte_dpaa2_qdma_job **jobs,
-			 uint16_t nb_jobs)
+dpaa2_qdma_dequeue_multi(struct qdma_virt_queue *qdma_vq,
+	struct rte_dpaa2_qdma_job **jobs,
+	uint16_t nb_jobs)
 {
-	struct qdma_virt_queue *temp_qdma_vq;
-	int ring_count;
-	int ret = 0, i;
+	int ret;
 
 	if (qdma_vq->flags & DPAA2_QDMA_VQ_FD_SG_FORMAT) {
 		/** Make sure there are enough space to get jobs.*/
@@ -1002,42 +1001,12 @@ dpaa2_qdma_dequeue_multi(struct qdma_device *qdma_dev,
 	nb_jobs = RTE_MIN((qdma_vq->num_enqueues - qdma_vq->num_dequeues),
 			nb_jobs);
 
-	if (qdma_vq->exclusive_hw_queue) {
-		/* In case of exclusive queue directly fetch from HW queue */
-		ret = qdma_vq->dequeue_job(qdma_vq, NULL, jobs, nb_jobs);
-		if (ret < 0) {
-			DPAA2_QDMA_ERR(
-				"Dequeue from DPDMAI device failed: %d", ret);
-			return ret;
-		}
-	} else {
-		uint16_t temp_vq_id[DPAA2_QDMA_MAX_DESC];
-
-		/* Get the QDMA completed jobs from the software ring.
-		 * In case they are not available on the ring poke the HW
-		 * to fetch completed jobs from corresponding HW queues
-		 */
-		ring_count = rte_ring_count(qdma_vq->status_ring);
-		if (ring_count < nb_jobs) {
-			ret = qdma_vq->dequeue_job(qdma_vq,
-					temp_vq_id, jobs, nb_jobs);
-			for (i = 0; i < ret; i++) {
-				temp_qdma_vq = &qdma_dev->vqs[temp_vq_id[i]];
-				rte_ring_enqueue(temp_qdma_vq->status_ring,
-					(void *)(jobs[i]));
-			}
-			ring_count = rte_ring_count(
-				qdma_vq->status_ring);
-		}
-
-		if (ring_count) {
-			/* Dequeue job from the software ring
-			 * to provide to the user
-			 */
-			ret = rte_ring_dequeue_bulk(qdma_vq->status_ring,
-					(void **)jobs,
-					ring_count, NULL);
-		}
+	ret = qdma_vq->dequeue_job(qdma_vq, NULL, jobs, nb_jobs);
+	if (ret < 0) {
+		DPAA2_QDMA_ERR("Dequeue from DMA%d-q%d failed(%d)",
+			qdma_vq->dpdmai_dev->dpdmai_id,
+			qdma_vq->vq_id, ret);
+		return ret;
 	}
 
 	qdma_vq->num_dequeues += ret;
@@ -1046,9 +1015,9 @@ dpaa2_qdma_dequeue_multi(struct qdma_device *qdma_dev,
 
 static uint16_t
 dpaa2_qdma_dequeue_status(void *dev_private, uint16_t vchan,
-			  const uint16_t nb_cpls,
-			  uint16_t *last_idx,
-			  enum rte_dma_status_code *st)
+	const uint16_t nb_cpls,
+	uint16_t *last_idx,
+	enum rte_dma_status_code *st)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
@@ -1056,7 +1025,7 @@ dpaa2_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 	struct rte_dpaa2_qdma_job *jobs[DPAA2_QDMA_MAX_DESC];
 	int ret, i;
 
-	ret = dpaa2_qdma_dequeue_multi(qdma_dev, qdma_vq, jobs, nb_cpls);
+	ret = dpaa2_qdma_dequeue_multi(qdma_vq, jobs, nb_cpls);
 
 	for (i = 0; i < ret; i++)
 		st[i] = jobs[i]->status;
@@ -1071,8 +1040,8 @@ dpaa2_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 
 static uint16_t
 dpaa2_qdma_dequeue(void *dev_private,
-		   uint16_t vchan, const uint16_t nb_cpls,
-		   uint16_t *last_idx, bool *has_error)
+	uint16_t vchan, const uint16_t nb_cpls,
+	uint16_t *last_idx, bool *has_error)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
@@ -1082,7 +1051,7 @@ dpaa2_qdma_dequeue(void *dev_private,
 
 	RTE_SET_USED(has_error);
 
-	ret = dpaa2_qdma_dequeue_multi(qdma_dev, qdma_vq,
+	ret = dpaa2_qdma_dequeue_multi(qdma_vq,
 				jobs, nb_cpls);
 
 	rte_mempool_put_bulk(qdma_vq->job_pool, (void **)jobs, ret);
@@ -1103,16 +1072,15 @@ rte_dpaa2_qdma_completed_multi(int16_t dev_id, uint16_t vchan,
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
 	struct qdma_virt_queue *qdma_vq = &qdma_dev->vqs[vchan];
 
-	return dpaa2_qdma_dequeue_multi(qdma_dev, qdma_vq, jobs, nb_cpls);
+	return dpaa2_qdma_dequeue_multi(qdma_vq, jobs, nb_cpls);
 }
 
 static int
 dpaa2_qdma_info_get(const struct rte_dma_dev *dev,
-		    struct rte_dma_info *dev_info,
-		    uint32_t info_sz)
+	struct rte_dma_info *dev_info,
+	uint32_t info_sz __rte_unused)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(info_sz);
+	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
 
 	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM |
 			     RTE_DMA_CAPA_MEM_TO_DEV |
@@ -1120,7 +1088,7 @@ dpaa2_qdma_info_get(const struct rte_dma_dev *dev,
 			     RTE_DMA_CAPA_DEV_TO_MEM |
 			     RTE_DMA_CAPA_SILENT |
 			     RTE_DMA_CAPA_OPS_COPY;
-	dev_info->max_vchans = DPAA2_QDMA_MAX_VHANS;
+	dev_info->max_vchans = dpdmai_dev->num_queues;
 	dev_info->max_desc = DPAA2_QDMA_MAX_DESC;
 	dev_info->min_desc = DPAA2_QDMA_MIN_DESC;
 
@@ -1129,12 +1097,13 @@ dpaa2_qdma_info_get(const struct rte_dma_dev *dev,
 
 static int
 dpaa2_qdma_configure(struct rte_dma_dev *dev,
-		     const struct rte_dma_conf *dev_conf,
-		     uint32_t conf_sz)
+	const struct rte_dma_conf *dev_conf,
+	uint32_t conf_sz)
 {
 	char name[32]; /* RTE_MEMZONE_NAMESIZE = 32 */
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
+	uint16_t i;
 
 	DPAA2_QDMA_FUNC_TRACE();
 
@@ -1142,9 +1111,9 @@ dpaa2_qdma_configure(struct rte_dma_dev *dev,
 
 	/* In case QDMA device is not in stopped state, return -EBUSY */
 	if (qdma_dev->state == 1) {
-		DPAA2_QDMA_ERR(
-			"Device is in running state. Stop before config.");
-		return -1;
+		DPAA2_QDMA_ERR("%s Not stopped, configure failed.",
+			dev->data->dev_name);
+		return -EBUSY;
 	}
 
 	/* Allocate Virtual Queues */
@@ -1156,6 +1125,9 @@ dpaa2_qdma_configure(struct rte_dma_dev *dev,
 		DPAA2_QDMA_ERR("qdma_virtual_queues allocation failed");
 		return -ENOMEM;
 	}
+	for (i = 0; i < dev_conf->nb_vchans; i++)
+		qdma_dev->vqs[i].vq_id = i;
+
 	qdma_dev->num_vqs = dev_conf->nb_vchans;
 
 	return 0;
@@ -1257,13 +1229,12 @@ dpaa2_qdma_vchan_rbp_set(struct qdma_virt_queue *vq,
 
 static int
 dpaa2_qdma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
-		       const struct rte_dma_vchan_conf *conf,
-		       uint32_t conf_sz)
+	const struct rte_dma_vchan_conf *conf,
+	uint32_t conf_sz)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
 	uint32_t pool_size;
-	char ring_name[32];
 	char pool_name[64];
 	int fd_long_format = 1;
 	int sg_enable = 0, ret;
@@ -1301,20 +1272,6 @@ dpaa2_qdma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 		pool_size = QDMA_FLE_SINGLE_POOL_SIZE;
 	}
 
-	if (qdma_dev->num_vqs == 1)
-		qdma_dev->vqs[vchan].exclusive_hw_queue = 1;
-	else {
-		/* Allocate a Ring for Virtual Queue in VQ mode */
-		snprintf(ring_name, sizeof(ring_name), "status ring %d %d",
-			 dev->data->dev_id, vchan);
-		qdma_dev->vqs[vchan].status_ring = rte_ring_create(ring_name,
-			conf->nb_desc, rte_socket_id(), 0);
-		if (!qdma_dev->vqs[vchan].status_ring) {
-			DPAA2_QDMA_ERR("Status ring creation failed for vq");
-			return rte_errno;
-		}
-	}
-
 	snprintf(pool_name, sizeof(pool_name),
 		"qdma_fle_pool_dev%d_qid%d", dpdmai_dev->dpdmai_id, vchan);
 	qdma_dev->vqs[vchan].fle_pool = rte_mempool_create(pool_name,
@@ -1410,8 +1367,8 @@ dpaa2_qdma_reset(struct rte_dma_dev *dev)
 
 	/* In case QDMA device is not in stopped state, return -EBUSY */
 	if (qdma_dev->state == 1) {
-		DPAA2_QDMA_ERR(
-			"Device is in running state. Stop before reset.");
+		DPAA2_QDMA_ERR("%s Not stopped, reset failed.",
+			dev->data->dev_name);
 		return -EBUSY;
 	}
 
@@ -1424,10 +1381,6 @@ dpaa2_qdma_reset(struct rte_dma_dev *dev)
 		}
 	}
 
-	/* Reset and free virtual queues */
-	for (i = 0; i < qdma_dev->num_vqs; i++) {
-		rte_ring_free(qdma_dev->vqs[i].status_ring);
-	}
 	rte_free(qdma_dev->vqs);
 	qdma_dev->vqs = NULL;
 
@@ -1504,29 +1457,35 @@ static int
 dpaa2_dpdmai_dev_uninit(struct rte_dma_dev *dev)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
-	int ret;
+	struct dpaa2_queue *rxq;
+	int ret, i;
 
 	DPAA2_QDMA_FUNC_TRACE();
 
 	ret = dpdmai_disable(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-			     dpdmai_dev->token);
-	if (ret)
-		DPAA2_QDMA_ERR("dmdmai disable failed");
+			dpdmai_dev->token);
+	if (ret) {
+		DPAA2_QDMA_ERR("dpdmai(%d) disable failed",
+			dpdmai_dev->dpdmai_id);
+	}
 
 	/* Set up the DQRR storage for Rx */
-	struct dpaa2_queue *rxq = &(dpdmai_dev->rx_queue[0]);
-
-	if (rxq->q_storage) {
-		dpaa2_free_dq_storage(rxq->q_storage);
-		rte_free(rxq->q_storage);
+	for (i = 0; i < dpdmai_dev->num_queues; i++) {
+		rxq = &dpdmai_dev->rx_queue[i];
+		if (rxq->q_storage) {
+			dpaa2_free_dq_storage(rxq->q_storage);
+			rte_free(rxq->q_storage);
+		}
 	}
 
 	/* Close the device at underlying layer*/
 	ret = dpdmai_close(&dpdmai_dev->dpdmai, CMD_PRI_LOW, dpdmai_dev->token);
-	if (ret)
-		DPAA2_QDMA_ERR("Failure closing dpdmai device");
+	if (ret) {
+		DPAA2_QDMA_ERR("dpdmai(%d) close failed",
+			dpdmai_dev->dpdmai_id);
+	}
 
-	return 0;
+	return ret;
 }
 
 static int
@@ -1538,80 +1497,87 @@ dpaa2_dpdmai_dev_init(struct rte_dma_dev *dev, int dpdmai_id)
 	struct dpdmai_rx_queue_attr rx_attr;
 	struct dpdmai_tx_queue_attr tx_attr;
 	struct dpaa2_queue *rxq;
-	int ret;
+	int ret, i;
 
 	DPAA2_QDMA_FUNC_TRACE();
 
 	/* Open DPDMAI device */
 	dpdmai_dev->dpdmai_id = dpdmai_id;
 	dpdmai_dev->dpdmai.regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
-	dpdmai_dev->qdma_dev = rte_malloc(NULL, sizeof(struct qdma_device),
-					  RTE_CACHE_LINE_SIZE);
+	dpdmai_dev->qdma_dev = rte_malloc(NULL,
+		sizeof(struct qdma_device), RTE_CACHE_LINE_SIZE);
 	ret = dpdmai_open(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-			  dpdmai_dev->dpdmai_id, &dpdmai_dev->token);
+		dpdmai_dev->dpdmai_id, &dpdmai_dev->token);
 	if (ret) {
-		DPAA2_QDMA_ERR("dpdmai_open() failed with err: %d", ret);
+		DPAA2_QDMA_ERR("%s: dma(%d) open failed(%d)",
+			__func__, dpdmai_dev->dpdmai_id, ret);
 		return ret;
 	}
 
 	/* Get DPDMAI attributes */
 	ret = dpdmai_get_attributes(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-				    dpdmai_dev->token, &attr);
+		dpdmai_dev->token, &attr);
 	if (ret) {
-		DPAA2_QDMA_ERR("dpdmai get attributes failed with err: %d",
-			       ret);
+		DPAA2_QDMA_ERR("%s: dma(%d) get attributes failed(%d)",
+			__func__, dpdmai_dev->dpdmai_id, ret);
 		goto init_err;
 	}
 	dpdmai_dev->num_queues = attr.num_of_queues;
 
-	/* Set up Rx Queue */
-	memset(&rx_queue_cfg, 0, sizeof(struct dpdmai_rx_queue_cfg));
-	ret = dpdmai_set_rx_queue(&dpdmai_dev->dpdmai,
-				  CMD_PRI_LOW,
-				  dpdmai_dev->token,
-				  0, 0, &rx_queue_cfg);
-	if (ret) {
-		DPAA2_QDMA_ERR("Setting Rx queue failed with err: %d",
-			       ret);
-		goto init_err;
-	}
+	/* Set up Rx Queues */
+	for (i = 0; i < dpdmai_dev->num_queues; i++) {
+		memset(&rx_queue_cfg, 0, sizeof(struct dpdmai_rx_queue_cfg));
+		ret = dpdmai_set_rx_queue(&dpdmai_dev->dpdmai,
+				CMD_PRI_LOW,
+				dpdmai_dev->token,
+				i, 0, &rx_queue_cfg);
+		if (ret) {
+			DPAA2_QDMA_ERR("%s Q%d set failed(%d)",
+				dev->data->dev_name, i, ret);
+			goto init_err;
+		}
 
-	/* Allocate DQ storage for the DPDMAI Rx queues */
-	rxq = &(dpdmai_dev->rx_queue[0]);
-	rxq->q_storage = rte_malloc("dq_storage",
-				    sizeof(struct queue_storage_info_t),
-				    RTE_CACHE_LINE_SIZE);
-	if (!rxq->q_storage) {
-		DPAA2_QDMA_ERR("q_storage allocation failed");
-		ret = -ENOMEM;
-		goto init_err;
-	}
+		/* Allocate DQ storage for the DPDMAI Rx queues */
+		rxq = &dpdmai_dev->rx_queue[i];
+		rxq->q_storage = rte_malloc("dq_storage",
+			sizeof(struct queue_storage_info_t),
+			RTE_CACHE_LINE_SIZE);
+		if (!rxq->q_storage) {
+			DPAA2_QDMA_ERR("%s DQ info(Q%d) alloc failed",
+				dev->data->dev_name, i);
+			ret = -ENOMEM;
+			goto init_err;
+		}

-	memset(rxq->q_storage, 0, sizeof(struct queue_storage_info_t));
-	ret = dpaa2_alloc_dq_storage(rxq->q_storage);
-	if (ret) {
-		DPAA2_QDMA_ERR("dpaa2_alloc_dq_storage failed");
-		goto init_err;
+		memset(rxq->q_storage, 0, sizeof(struct queue_storage_info_t));
+		ret = dpaa2_alloc_dq_storage(rxq->q_storage);
+		if (ret) {
+			DPAA2_QDMA_ERR("%s DQ storage(Q%d) alloc failed(%d)",
+				dev->data->dev_name, i, ret);
+			goto init_err;
+		}
 	}
 
-	/* Get Rx and Tx queues FQID */
-	ret = dpdmai_get_rx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-				  dpdmai_dev->token, 0, 0, &rx_attr);
-	if (ret) {
-		DPAA2_QDMA_ERR("Reading device failed with err: %d",
-			       ret);
-		goto init_err;
-	}
-	dpdmai_dev->rx_queue[0].fqid = rx_attr.fqid;
+	/* Get Rx and Tx queues FQID's */
+	for (i = 0; i < dpdmai_dev->num_queues; i++) {
+		ret = dpdmai_get_rx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
+				dpdmai_dev->token, i, 0, &rx_attr);
+		if (ret) {
+			DPAA2_QDMA_ERR("Get DPDMAI%d-RXQ%d failed(%d)",
+				dpdmai_dev->dpdmai_id, i, ret);
+			goto init_err;
+		}
+		dpdmai_dev->rx_queue[i].fqid = rx_attr.fqid;
 
-	ret = dpdmai_get_tx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-				  dpdmai_dev->token, 0, 0, &tx_attr);
-	if (ret) {
-		DPAA2_QDMA_ERR("Reading device failed with err: %d",
-			       ret);
-		goto init_err;
+		ret = dpdmai_get_tx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
+				dpdmai_dev->token, i, 0, &tx_attr);
+		if (ret) {
+			DPAA2_QDMA_ERR("Get DPDMAI%d-TXQ%d failed(%d)",
+				dpdmai_dev->dpdmai_id, i, ret);
+			goto init_err;
+		}
+		dpdmai_dev->tx_queue[i].fqid = tx_attr.fqid;
 	}
-	dpdmai_dev->tx_queue[0].fqid = tx_attr.fqid;
 
 	/* Enable the device */
 	ret = dpdmai_enable(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.h b/drivers/dma/dpaa2/dpaa2_qdma.h
index 811906fcbc..786dcb9308 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.h
+++ b/drivers/dma/dpaa2/dpaa2_qdma.h
@@ -18,7 +18,7 @@
 
 #define DPAA2_QDMA_MAX_SG_NB 64
 
-#define DPAA2_DPDMAI_MAX_QUEUES	1
+#define DPAA2_DPDMAI_MAX_QUEUES	16
 
 /** FLE single job pool size: job pointer(uint64_t) +
  * 3 Frame list + 2 source/destination descriptor.
@@ -245,8 +245,6 @@ typedef int (qdma_enqueue_multijob_t)(
 
 /** Represents a QDMA virtual queue */
 struct qdma_virt_queue {
-	/** Status ring of the virtual queue */
-	struct rte_ring *status_ring;
 	/** Associated hw queue */
 	struct dpaa2_dpdmai_dev *dpdmai_dev;
 	/** FLE pool for the queue */
@@ -255,8 +253,6 @@ struct qdma_virt_queue {
 	struct dpaa2_qdma_rbp rbp;
 	/** States if this vq is in use or not */
 	uint8_t in_use;
-	/** States if this vq has exclusively associated hw queue */
-	uint8_t exclusive_hw_queue;
 	/** Number of descriptor for the virtual DMA channel */
 	uint16_t nb_desc;
 	/* Total number of enqueues on this VQ */
-- 
2.25.1