From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nipun Gupta <nipun.gupta@nxp.com>
To: thomas@monjalon.net, hemant.agrawal@nxp.com, shreyansh.jain@nxp.com
Cc: dev@dpdk.org, Nipun Gupta <nipun.gupta@nxp.com>
Date: Thu, 3 May 2018 21:22:01 +0530
Message-Id: <1525362722-32726-8-git-send-email-nipun.gupta@nxp.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1525362722-32726-1-git-send-email-nipun.gupta@nxp.com>
References: <1525280972-27736-1-git-send-email-nipun.gupta@nxp.com>
 <1525362722-32726-1-git-send-email-nipun.gupta@nxp.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v7 7/8] raw/dpaa2_qdma: support enq and deq operations
List-Id: DPDK patches and discussions

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c                | 333 +++++++++++++++++++++
 drivers/raw/dpaa2_qdma/dpaa2_qdma.h                |  21 ++
 drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h        |  70 +++++
 .../raw/dpaa2_qdma/rte_pmd_dpaa2_qdma_version.map  |   4 +
 4 files changed, 428 insertions(+)

diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index b38a282..1d15c30 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -344,6 +344,339 @@
 	return i;
 }
 
+static void
+dpaa2_qdma_populate_fle(struct qbman_fle *fle,
+			uint64_t src, uint64_t dest,
+			size_t len, uint32_t flags)
+{
+	struct qdma_sdd *sdd;
+
+	DPAA2_QDMA_FUNC_TRACE();
+
+	sdd = (struct qdma_sdd *)((uint8_t *)(fle) +
+		(DPAA2_QDMA_MAX_FLE * sizeof(struct qbman_fle)));
+
+	/* first frame list to source descriptor */
+	DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sdd));
+	DPAA2_SET_FLE_LEN(fle, (2 * (sizeof(struct qdma_sdd))));
+
+	/* source and destination descriptor */
+	DPAA2_SET_SDD_RD_COHERENT(sdd); /* source descriptor CMD */
+	sdd++;
+	DPAA2_SET_SDD_WR_COHERENT(sdd); /* dest descriptor CMD */
+
+	fle++;
+	/* source frame list to source buffer */
+	if (flags & RTE_QDMA_JOB_SRC_PHY) {
+		DPAA2_SET_FLE_ADDR(fle, src);
+		DPAA2_SET_FLE_BMT(fle);
+	} else {
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(src));
+	}
+	DPAA2_SET_FLE_LEN(fle, len);
+
+	fle++;
+	/* destination frame list to destination buffer */
+	if (flags & RTE_QDMA_JOB_DEST_PHY) {
+		DPAA2_SET_FLE_BMT(fle);
+		DPAA2_SET_FLE_ADDR(fle, dest);
+	} else {
+		DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(dest));
+	}
+	DPAA2_SET_FLE_LEN(fle, len);
+
+	/* Final bit: 1, for last frame list */
+	DPAA2_SET_FLE_FIN(fle);
+}
+
+static int
+dpdmai_dev_enqueue(struct dpaa2_dpdmai_dev *dpdmai_dev,
+		   uint16_t txq_id,
+		   uint16_t vq_id,
+		   struct rte_qdma_job *job)
+{
+	struct qdma_io_meta *io_meta;
+	struct qbman_fd fd;
+	struct dpaa2_queue *txq;
+	struct qbman_fle *fle;
+	struct qbman_eq_desc eqdesc;
+	struct qbman_swp *swp;
+	int ret;
+
+	DPAA2_QDMA_FUNC_TRACE();
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_QDMA_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	txq = &(dpdmai_dev->tx_queue[txq_id]);
+
+	/* Prepare enqueue descriptor */
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_fq(&eqdesc, txq->fqid);
+	qbman_eq_desc_set_no_orp(&eqdesc, 0);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+
+	/*
+	 * Get an FLE/SDD from FLE pool.
+	 * Note: IO metadata is before the FLE and SDD memory.
+	 */
+	ret = rte_mempool_get(qdma_dev.fle_pool, (void **)(&io_meta));
+	if (ret) {
+		DPAA2_QDMA_DP_WARN("Memory alloc failed for FLE");
+		return ret;
+	}
+
+	/* Set the metadata */
+	io_meta->cnxt = (size_t)job;
+	io_meta->id = vq_id;
+
+	fle = (struct qbman_fle *)(io_meta + 1);
+
+	/* populate Frame descriptor */
+	memset(&fd, 0, sizeof(struct qbman_fd));
+	DPAA2_SET_FD_ADDR(&fd, DPAA2_VADDR_TO_IOVA(fle));
+	DPAA2_SET_FD_COMPOUND_FMT(&fd);
+	DPAA2_SET_FD_FRC(&fd, QDMA_SER_CTX);
+
+	/* Populate FLE */
+	memset(fle, 0, QDMA_FLE_POOL_SIZE);
+	dpaa2_qdma_populate_fle(fle, job->src, job->dest, job->len, job->flags);
+
+	/* Enqueue the packet to the QBMAN */
+	do {
+		ret = qbman_swp_enqueue_multiple(swp, &eqdesc, &fd, NULL, 1);
+		if (ret < 0 && ret != -EBUSY)
+			DPAA2_QDMA_ERR("Transmit failure with err: %d", ret);
+	} while (ret == -EBUSY);
+
+	DPAA2_QDMA_DP_DEBUG("Successfully transmitted a packet");
+
+	return ret;
+}
+
+int __rte_experimental
+rte_qdma_vq_enqueue_multi(uint16_t vq_id,
+			  struct rte_qdma_job **job,
+			  uint16_t nb_jobs)
+{
+	int i, ret;
+
+	DPAA2_QDMA_FUNC_TRACE();
+
+	for (i = 0; i < nb_jobs; i++) {
+		ret = rte_qdma_vq_enqueue(vq_id, job[i]);
+		if (ret < 0)
+			break;
+	}
+
+	return i;
+}
+
+int __rte_experimental
+rte_qdma_vq_enqueue(uint16_t vq_id,
+		    struct rte_qdma_job *job)
+{
+	struct qdma_virt_queue *qdma_vq = &qdma_vqs[vq_id];
+	struct qdma_hw_queue *qdma_pq = qdma_vq->hw_queue;
+	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_pq->dpdmai_dev;
+	int ret;
+
+	DPAA2_QDMA_FUNC_TRACE();
+
+	/* Return error in case of wrong lcore_id */
+	if (rte_lcore_id() != qdma_vq->lcore_id) {
+		DPAA2_QDMA_ERR("QDMA enqueue for vqid %d on wrong core",
+			       vq_id);
+		return -EINVAL;
+	}
+
+	ret = dpdmai_dev_enqueue(dpdmai_dev, qdma_pq->queue_id, vq_id, job);
+	if (ret < 0) {
+		DPAA2_QDMA_ERR("DPDMAI device enqueue failed: %d", ret);
+		return ret;
+	}
+
+	qdma_vq->num_enqueues++;
+
+	return 1;
+}
+
+/* Function to receive a QDMA job for a given device and queue */
+static int
+dpdmai_dev_dequeue(struct dpaa2_dpdmai_dev *dpdmai_dev,
+		   uint16_t rxq_id,
+		   uint16_t *vq_id,
+		   struct rte_qdma_job **job)
+{
+	struct qdma_io_meta *io_meta;
+	struct dpaa2_queue *rxq;
+	struct qbman_result *dq_storage;
+	struct qbman_pull_desc pulldesc;
+	const struct qbman_fd *fd;
+	struct qbman_swp *swp;
+	struct qbman_fle *fle;
+	uint32_t fqid;
+	uint8_t status;
+	int ret;
+
+	DPAA2_QDMA_FUNC_TRACE();
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_QDMA_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	rxq = &(dpdmai_dev->rx_queue[rxq_id]);
+	dq_storage = rxq->q_storage->dq_storage[0];
+	fqid = rxq->fqid;
+
+	/* Prepare dequeue descriptor */
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+		(uint64_t)(DPAA2_VADDR_TO_IOVA(dq_storage)), 1);
+	qbman_pull_desc_set_numframes(&pulldesc, 1);
+
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			DPAA2_QDMA_DP_WARN("VDQ command not issued. QBMAN busy");
+			continue;
+		}
+		break;
+	}
+
+	/* Check if previous issued command is completed. */
+	while (!qbman_check_command_complete(dq_storage))
+		;
+	/* Loop until dq_storage is updated with new token by QBMAN */
+	while (!qbman_check_new_result(dq_storage))
+		;
+
+	/* Check for valid frame. */
+	status = qbman_result_DQ_flags(dq_storage);
+	if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
+		DPAA2_QDMA_DP_DEBUG("No frame is delivered");
+		return 0;
+	}
+
+	/* Get the FD */
+	fd = qbman_result_DQ_fd(dq_storage);
+
+	/*
+	 * Fetch metadata from FLE. job and vq_id were set
+	 * in metadata in the enqueue operation.
+	 */
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	io_meta = (struct qdma_io_meta *)(fle) - 1;
+	if (vq_id)
+		*vq_id = io_meta->id;
+
+	*job = (struct rte_qdma_job *)(size_t)io_meta->cnxt;
+	(*job)->status = DPAA2_GET_FD_ERR(fd);
+
+	/* Free FLE to the pool */
+	rte_mempool_put(qdma_dev.fle_pool, io_meta);
+
+	DPAA2_QDMA_DP_DEBUG("packet received");
+
+	return 1;
+}
+
+int __rte_experimental
+rte_qdma_vq_dequeue_multi(uint16_t vq_id,
+			  struct rte_qdma_job **job,
+			  uint16_t nb_jobs)
+{
+	int i;
+
+	DPAA2_QDMA_FUNC_TRACE();
+
+	for (i = 0; i < nb_jobs; i++) {
+		job[i] = rte_qdma_vq_dequeue(vq_id);
+		if (!job[i])
+			break;
+	}
+
+	return i;
+}
+
+struct rte_qdma_job * __rte_experimental
+rte_qdma_vq_dequeue(uint16_t vq_id)
+{
+	struct qdma_virt_queue *qdma_vq = &qdma_vqs[vq_id];
+	struct qdma_hw_queue *qdma_pq = qdma_vq->hw_queue;
+	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_pq->dpdmai_dev;
+	struct rte_qdma_job *job = NULL;
+	struct qdma_virt_queue *temp_qdma_vq;
+	int dequeue_budget = QDMA_DEQUEUE_BUDGET;
+	int ring_count, ret, i;
+	uint16_t temp_vq_id;
+
+	DPAA2_QDMA_FUNC_TRACE();
+
+	/* Return error in case of wrong lcore_id */
+	if (rte_lcore_id() != (unsigned int)(qdma_vq->lcore_id)) {
+		DPAA2_QDMA_ERR("QDMA dequeue for vqid %d on wrong core",
+			       vq_id);
+		return NULL;
+	}
+
+	/* Only dequeue when there are pending jobs on VQ */
+	if (qdma_vq->num_enqueues == qdma_vq->num_dequeues)
+		return NULL;
+
+	if (qdma_vq->exclusive_hw_queue) {
+		/* In case of exclusive queue directly fetch from HW queue */
+		ret = dpdmai_dev_dequeue(dpdmai_dev, qdma_pq->queue_id,
+					 NULL, &job);
+		if (ret < 0) {
+			DPAA2_QDMA_ERR(
+				"Dequeue from DPDMAI device failed: %d", ret);
+			return NULL;
+		}
+	} else {
+		/*
+		 * Get the QDMA completed jobs from the software ring.
+		 * In case they are not available on the ring poke the HW
+		 * to fetch completed jobs from corresponding HW queues
+		 */
+		ring_count = rte_ring_count(qdma_vq->status_ring);
+		if (ring_count == 0) {
+			/* TODO - How to have right budget */
+			for (i = 0; i < dequeue_budget; i++) {
+				ret = dpdmai_dev_dequeue(dpdmai_dev,
+					qdma_pq->queue_id, &temp_vq_id, &job);
+				if (ret == 0)
+					break;
+				temp_qdma_vq = &qdma_vqs[temp_vq_id];
+				rte_ring_enqueue(temp_qdma_vq->status_ring,
+					(void *)(job));
+				ring_count = rte_ring_count(
+					qdma_vq->status_ring);
+				if (ring_count)
+					break;
+			}
+		}
+
+		/* Dequeue job from the software ring to provide to the user */
+		rte_ring_dequeue(qdma_vq->status_ring, (void **)&job);
+		if (job)
+			qdma_vq->num_dequeues++;
+	}
+
+	return job;
+}
+
 void __rte_experimental
 rte_qdma_vq_stats(uint16_t vq_id, struct rte_qdma_vq_stats *vq_status)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h
index fe1da41..c6a0578 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h
@@ -18,10 +18,31 @@
 /** FLE pool cache size */
 #define QDMA_FLE_CACHE_SIZE(_num) (_num/(RTE_MAX_LCORE * 2))
 
+/** Notification by FQD_CTX[fqid] */
+#define QDMA_SER_CTX (1 << 8)
+
+/**
+ * Source descriptor command read transaction type for RBP=0:
+ * coherent copy of cacheable memory
+ */
+#define DPAA2_SET_SDD_RD_COHERENT(sdd) ((sdd)->cmd = (0xb << 28))
+/**
+ * Destination descriptor command write transaction type for RBP=0:
+ * coherent copy of cacheable memory
+ */
+#define DPAA2_SET_SDD_WR_COHERENT(sdd) ((sdd)->cmd = (0x6 << 28))
+
 /** Maximum possible H/W Queues on each core */
 #define MAX_HW_QUEUE_PER_CORE		64
 
 /**
+ * In case of Virtual Queue mode, this specifies the number of
+ * dequeues the 'qdma_vq_dequeue/multi' API does from the H/W Queue
+ * in case there is no job present on the Virtual Queue ring.
+ */
+#define QDMA_DEQUEUE_BUDGET		64
+
+/**
  * Represents a QDMA device.
  * A single QDMA device exists which is combination of multiple DPDMAI rawdev's.
  */
diff --git a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
index 29a1e4b..17fffcb 100644
--- a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
@@ -175,6 +175,76 @@ struct rte_qdma_job {
 rte_qdma_vq_create(uint32_t lcore_id, uint32_t flags);
 
 /**
+ * Enqueue multiple jobs to a Virtual Queue.
+ * If the enqueue is successful, the H/W will perform DMA operations
+ * on the basis of the QDMA jobs provided.
+ *
+ * @param vq_id
+ *   Virtual Queue ID.
+ * @param job
+ *   List of QDMA Jobs containing relevant information related to DMA.
+ * @param nb_jobs
+ *   Number of QDMA jobs provided by the user.
+ *
+ * @returns
+ *   - >=0: Number of jobs successfully submitted
+ *   - <0: Error code.
+ */
+int __rte_experimental
+rte_qdma_vq_enqueue_multi(uint16_t vq_id,
+			  struct rte_qdma_job **job,
+			  uint16_t nb_jobs);
+
+/**
+ * Enqueue a single job to a Virtual Queue.
+ * If the enqueue is successful, the H/W will perform DMA operations
+ * on the basis of the QDMA job provided.
+ *
+ * @param vq_id
+ *   Virtual Queue ID.
+ * @param job
+ *   A QDMA Job containing relevant information related to DMA.
+ *
+ * @returns
+ *   - >=0: Number of jobs successfully submitted
+ *   - <0: Error code.
+ */
+int __rte_experimental
+rte_qdma_vq_enqueue(uint16_t vq_id,
+		    struct rte_qdma_job *job);
+
+/**
+ * Dequeue multiple completed jobs from a Virtual Queue.
+ * Provides the list of completed jobs capped by nb_jobs.
+ *
+ * @param vq_id
+ *   Virtual Queue ID.
+ * @param job
+ *   List of QDMA Jobs returned from the API.
+ * @param nb_jobs
+ *   Number of QDMA jobs requested for dequeue by the user.
+ *
+ * @returns
+ *   Number of jobs actually dequeued.
+ */
+int __rte_experimental
+rte_qdma_vq_dequeue_multi(uint16_t vq_id,
+			  struct rte_qdma_job **job,
+			  uint16_t nb_jobs);
+
+/**
+ * Dequeue a single completed job from a Virtual Queue.
+ *
+ * @param vq_id
+ *   Virtual Queue ID.
+ *
+ * @returns
+ *   - A completed job or NULL if no job is there.
+ */
+struct rte_qdma_job * __rte_experimental
+rte_qdma_vq_dequeue(uint16_t vq_id);
+
+/**
  * Get a Virtual Queue statistics.
  *
  * @param vq_id
diff --git a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma_version.map b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma_version.map
index 7335c35..fe42a22 100644
--- a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma_version.map
+++ b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma_version.map
@@ -10,6 +10,10 @@ EXPERIMENTAL {
 	rte_qdma_stop;
 	rte_qdma_vq_create;
 	rte_qdma_vq_destroy;
+	rte_qdma_vq_dequeue;
+	rte_qdma_vq_dequeue_multi;
+	rte_qdma_vq_enqueue;
+	rte_qdma_vq_enqueue_multi;
 	rte_qdma_vq_stats;
 
 	local: *;
-- 
1.9.1
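
For reviewers, a minimal caller sketch of the new API (not part of the patch). It assumes a device started with rte_qdma_start() and a virtual queue from rte_qdma_vq_create(), both introduced earlier in this series; `copy_one` and its physical-address parameters are hypothetical names. It cannot run outside a DPDK environment with DPAA2 hardware.

```c
/* Hypothetical usage sketch: submit one DMA copy job on a virtual queue
 * and busy-poll for its completion. Assumes rte_qdma_start() succeeded
 * and vq_id came from rte_qdma_vq_create() (earlier patches in series).
 */
#include <rte_pmd_dpaa2_qdma.h>

static int
copy_one(uint16_t vq_id, uint64_t src_phys, uint64_t dst_phys, size_t len)
{
	struct rte_qdma_job job = {
		.src = src_phys,
		.dest = dst_phys,
		.len = len,
		/* both addresses are physical in this sketch */
		.flags = RTE_QDMA_JOB_SRC_PHY | RTE_QDMA_JOB_DEST_PHY,
	};
	struct rte_qdma_job *done;

	/* returns 1 on success, <0 on error (e.g. wrong lcore) */
	if (rte_qdma_vq_enqueue(vq_id, &job) < 0)
		return -1;

	/* NULL until the H/W reports the job complete */
	do {
		done = rte_qdma_vq_dequeue(vq_id);
	} while (done == NULL);

	/* status is taken from the FD error field on dequeue */
	return done->status;
}
```

Batched submission works the same way through rte_qdma_vq_enqueue_multi()/rte_qdma_vq_dequeue_multi(), which return the number of jobs actually submitted/dequeued.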