From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, Shreyansh Jain
Date: Thu, 4 Apr 2019 11:50:28 +0000
Message-ID: <20190404114818.21286-7-hemant.agrawal@nxp.com>
References: <20190404110215.14410-1-hemant.agrawal@nxp.com>
 <20190404114818.21286-1-hemant.agrawal@nxp.com>
In-Reply-To: <20190404114818.21286-1-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH v3 7/7] raw/dpaa2_qdma: add support for non prefetch mode
List-Id: DPDK patches and discussions

This patch adds support for non-prefetch mode in the Rx functions.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/raw/dpaa2_qdma/Makefile     |   1 +
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 215 +++++++++++++++++++++++++++-
 drivers/raw/dpaa2_qdma/meson.build  |   2 +-
 3 files changed, 212 insertions(+), 6 deletions(-)

diff --git a/drivers/raw/dpaa2_qdma/Makefile b/drivers/raw/dpaa2_qdma/Makefile
index ee95662f1..450c76e76 100644
--- a/drivers/raw/dpaa2_qdma/Makefile
+++ b/drivers/raw/dpaa2_qdma/Makefile
@@ -21,6 +21,7 @@ LDLIBS += -lrte_eal
 LDLIBS += -lrte_mempool
 LDLIBS += -lrte_mempool_dpaa2
 LDLIBS += -lrte_rawdev
+LDLIBS += -lrte_kvargs
 LDLIBS += -lrte_ring
 LDLIBS += -lrte_common_dpaax
 
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 38f329a50..a41c1e385 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -23,6 +24,8 @@
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"
 
+#define DPAA2_QDMA_NO_PREFETCH "no_prefetch"
+
 /* Dynamic log type identifier */
 int dpaa2_qdma_logtype;
 
@@ -43,6 +46,14 @@ static struct qdma_virt_queue *qdma_vqs;
 /* QDMA per core data */
 static struct qdma_per_core_info qdma_core_info[RTE_MAX_LCORE];
 
+typedef int (dpdmai_dev_dequeue_multijob_t)(struct dpaa2_dpdmai_dev *dpdmai_dev,
+					uint16_t rxq_id,
+					uint16_t *vq_id,
+					struct rte_qdma_job **job,
+					uint16_t nb_jobs);
+
+dpdmai_dev_dequeue_multijob_t *dpdmai_dev_dequeue_multijob;
+
 static struct qdma_hw_queue *
 alloc_hw_queue(uint32_t lcore_id)
 {
@@ -608,12 +619,156 @@ static inline uint16_t dpdmai_dev_get_job(const struct qbman_fd *fd,
 	return vqid;
 }
 
+/* Function to receive a QDMA job for a given device and queue */
 static int
-dpdmai_dev_dequeue_multijob(struct dpaa2_dpdmai_dev *dpdmai_dev,
-		uint16_t rxq_id,
-		uint16_t *vq_id,
-		struct rte_qdma_job **job,
-		uint16_t nb_jobs)
+dpdmai_dev_dequeue_multijob_prefetch(
+		struct dpaa2_dpdmai_dev *dpdmai_dev,
+		uint16_t rxq_id,
+		uint16_t *vq_id,
+		struct rte_qdma_job **job,
+		uint16_t nb_jobs)
+{
+	struct dpaa2_queue *rxq;
+	struct qbman_result *dq_storage, *dq_storage1 = NULL;
+	struct qbman_pull_desc pulldesc;
+	struct qbman_swp *swp;
+	struct queue_storage_info_t *q_storage;
+	uint32_t fqid;
+	uint8_t status, pending;
+	uint8_t num_rx = 0;
+	const struct qbman_fd *fd;
+	uint16_t vqid;
+	int ret, pull_size;
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_QDMA_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	pull_size = (nb_jobs > dpaa2_dqrr_size) ? dpaa2_dqrr_size : nb_jobs;
+	rxq = &(dpdmai_dev->rx_queue[rxq_id]);
+	fqid = rxq->fqid;
+	q_storage = rxq->q_storage;
+
+	if (unlikely(!q_storage->active_dqs)) {
+		q_storage->toggle = 0;
+		dq_storage = q_storage->dq_storage[q_storage->toggle];
+		q_storage->last_num_pkts = pull_size;
+		qbman_pull_desc_clear(&pulldesc);
+		qbman_pull_desc_set_numframes(&pulldesc,
+					q_storage->last_num_pkts);
+		qbman_pull_desc_set_fq(&pulldesc, fqid);
+		qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				(size_t)(DPAA2_VADDR_TO_IOVA(dq_storage)), 1);
+		if (check_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)) {
+			while (!qbman_check_command_complete(
+				get_swp_active_dqs(
+				DPAA2_PER_LCORE_DPIO->index)))
+				;
+			clear_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index);
+		}
+		while (1) {
+			if (qbman_swp_pull(swp, &pulldesc)) {
+				DPAA2_QDMA_DP_WARN(
+					"VDQ command not issued. QBMAN busy\n");
+				/* Portal was busy, try again */
+				continue;
+			}
+			break;
+		}
+		q_storage->active_dqs = dq_storage;
+		q_storage->active_dpio_id = DPAA2_PER_LCORE_DPIO->index;
+		set_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index,
+				dq_storage);
+	}
+
+	dq_storage = q_storage->active_dqs;
+	rte_prefetch0((void *)(size_t)(dq_storage));
+	rte_prefetch0((void *)(size_t)(dq_storage + 1));
+
+	/* Prepare next pull descriptor. This will give space for the
+	 * prefetching done on DQRR entries
+	 */
+	q_storage->toggle ^= 1;
+	dq_storage1 = q_storage->dq_storage[q_storage->toggle];
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc, pull_size);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage1,
+			(size_t)(DPAA2_VADDR_TO_IOVA(dq_storage1)), 1);
+
+	/* Check if the previous issued command is completed.
+	 * Also seems like the SWP is shared between the Ethernet Driver
+	 * and the SEC driver.
+	 */
+	while (!qbman_check_command_complete(dq_storage))
+		;
+	if (dq_storage == get_swp_active_dqs(q_storage->active_dpio_id))
+		clear_swp_active_dqs(q_storage->active_dpio_id);
+
+	pending = 1;
+
+	do {
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_check_new_result(dq_storage))
+			;
+		rte_prefetch0((void *)((size_t)(dq_storage + 2)));
+		/* Check whether Last Pull command is Expired and
+		 * setting Condition for Loop termination
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			pending = 0;
+			/* Check for valid frame. */
+			status = qbman_result_DQ_flags(dq_storage);
+			if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0))
+				continue;
+		}
+		fd = qbman_result_DQ_fd(dq_storage);
+
+		vqid = dpdmai_dev_get_job(fd, &job[num_rx]);
+		if (vq_id)
+			vq_id[num_rx] = vqid;
+
+		dq_storage++;
+		num_rx++;
+	} while (pending);
+
+	if (check_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)) {
+		while (!qbman_check_command_complete(
+			get_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)))
+			;
+		clear_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index);
+	}
+	/* issue a volatile dequeue command for next pull */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			DPAA2_QDMA_DP_WARN("VDQ command is not issued. "
+					"QBMAN is busy (2)\n");
+			continue;
+		}
+		break;
+	}
+
+	q_storage->active_dqs = dq_storage1;
+	q_storage->active_dpio_id = DPAA2_PER_LCORE_DPIO->index;
+	set_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index, dq_storage1);
+
+	return num_rx;
+}
+
+static int
+dpdmai_dev_dequeue_multijob_no_prefetch(
+		struct dpaa2_dpdmai_dev *dpdmai_dev,
+		uint16_t rxq_id,
+		uint16_t *vq_id,
+		struct rte_qdma_job **job,
+		uint16_t nb_jobs)
 {
 	struct dpaa2_queue *rxq;
 	struct qbman_result *dq_storage;
@@ -958,6 +1113,43 @@ dpaa2_dpdmai_dev_uninit(struct rte_rawdev *rawdev)
 	return 0;
 }
 
+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+		__rte_unused void *opaque)
+{
+	if (strcmp(value, "1"))
+		return -1;
+
+	return 0;
+}
+
+static int
+dpaa2_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+	struct rte_kvargs *kvlist;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	if (rte_kvargs_process(kvlist, key,
+			check_devargs_handler, NULL) < 0) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+	rte_kvargs_free(kvlist);
+
+	return 1;
+}
+
 static int
 dpaa2_dpdmai_dev_init(struct rte_rawdev *rawdev, int dpdmai_id)
 {
@@ -1060,6 +1252,17 @@ dpaa2_dpdmai_dev_init(struct rte_rawdev *rawdev, int dpdmai_id)
 		goto init_err;
 	}
 
+	if (dpaa2_get_devargs(rawdev->device->devargs,
+			DPAA2_QDMA_NO_PREFETCH)) {
+		/* If no prefetch is configured. */
+		dpdmai_dev_dequeue_multijob =
+				dpdmai_dev_dequeue_multijob_no_prefetch;
+		DPAA2_QDMA_INFO("No Prefetch RX Mode enabled");
+	} else {
+		dpdmai_dev_dequeue_multijob =
+				dpdmai_dev_dequeue_multijob_prefetch;
+	}
+
 	if (!dpaa2_coherent_no_alloc_cache) {
 		if (dpaa2_svr_family == SVR_LX2160A) {
 			dpaa2_coherent_no_alloc_cache =
@@ -1139,6 +1342,8 @@ static struct rte_dpaa2_driver rte_dpaa2_qdma_pmd = {
 };
 
 RTE_PMD_REGISTER_DPAA2(dpaa2_qdma, rte_dpaa2_qdma_pmd);
+RTE_PMD_REGISTER_PARAM_STRING(dpaa2_qdma,
+	"no_prefetch= ");
 
 RTE_INIT(dpaa2_qdma_init_log)
 {
diff --git a/drivers/raw/dpaa2_qdma/meson.build b/drivers/raw/dpaa2_qdma/meson.build
index 2a4b69c16..1577946fa 100644
--- a/drivers/raw/dpaa2_qdma/meson.build
+++ b/drivers/raw/dpaa2_qdma/meson.build
@@ -4,7 +4,7 @@
 version = 2
 
 build = dpdk_conf.has('RTE_LIBRTE_DPAA2_MEMPOOL')
-deps += ['rawdev', 'mempool_dpaa2', 'ring']
+deps += ['rawdev', 'mempool_dpaa2', 'ring', 'kvargs']
 sources = files('dpaa2_qdma.c')
 
 allow_experimental_apis = true
-- 
2.17.1