From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: dev@dpdk.org
CC: thomas@monjalon.net, Shreyansh Jain
Date: Thu, 4 Apr 2019 11:04:28 +0000
Message-ID: <20190404110215.14410-7-hemant.agrawal@nxp.com>
References: <20190326121610.28024-1-hemant.agrawal@nxp.com>
	<20190404110215.14410-1-hemant.agrawal@nxp.com>
In-Reply-To: <20190404110215.14410-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 7/7] raw/dpaa2_qdma: add support for non prefetch mode

This patch adds support for non-prefetch mode in the Rx functions.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/raw/dpaa2_qdma/Makefile     |   1 +
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 215 +++++++++++++++++++++++++++-
 drivers/raw/dpaa2_qdma/meson.build  |   2 +-
 3 files changed, 212 insertions(+), 6 deletions(-)
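
Note for reviewers: the Rx path is now selected once per device in
dpaa2_dpdmai_dev_init(). dpaa2_get_devargs() looks up the "no_prefetch"
key in the device's devargs (only the value "1" enables it), and the
dpdmai_dev_dequeue_multijob function pointer is set to either the
prefetch or the non-prefetch implementation, both of which share the
dpdmai_dev_dequeue_multijob_t signature. The stand-alone sketch below
shows the same init-time dispatch pattern in plain C; it is illustration
only, not driver code, and the toy_* names are made up for this note:

  #include <stdio.h>
  #include <string.h>

  /* Same idea as dpdmai_dev_dequeue_multijob_t: one function-pointer
   * type, two implementations, selected once at init time.
   */
  typedef int (toy_dequeue_t)(int rxq_id, int nb_jobs);

  static int toy_dequeue_prefetch(int rxq_id, int nb_jobs)
  {
          printf("prefetch path: rxq=%d nb_jobs=%d\n", rxq_id, nb_jobs);
          return nb_jobs;
  }

  static int toy_dequeue_no_prefetch(int rxq_id, int nb_jobs)
  {
          printf("no-prefetch path: rxq=%d nb_jobs=%d\n", rxq_id, nb_jobs);
          return nb_jobs;
  }

  /* Set once during device init, used unchanged on the hot path. */
  static toy_dequeue_t *toy_dequeue;

  static void toy_init(const char *no_prefetch_value)
  {
          /* Mirrors check_devargs_handler(): only the string "1" enables it. */
          if (no_prefetch_value && strcmp(no_prefetch_value, "1") == 0)
                  toy_dequeue = toy_dequeue_no_prefetch;
          else
                  toy_dequeue = toy_dequeue_prefetch;
  }

  int main(void)
  {
          toy_init("1");       /* as if devargs contained "no_prefetch=1" */
          return toy_dequeue(0, 16) == 16 ? 0 : 1;
  }

Applications that want the non-prefetch path would pass no_prefetch=1 in
the dpdmai device's devargs; everything else keeps the default prefetch
path.
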
diff --git a/drivers/raw/dpaa2_qdma/Makefile b/drivers/raw/dpaa2_qdma/Makefile
index ee95662f1..450c76e76 100644
--- a/drivers/raw/dpaa2_qdma/Makefile
+++ b/drivers/raw/dpaa2_qdma/Makefile
@@ -21,6 +21,7 @@ LDLIBS += -lrte_eal
 LDLIBS += -lrte_mempool
 LDLIBS += -lrte_mempool_dpaa2
 LDLIBS += -lrte_rawdev
+LDLIBS += -lrte_kvargs
 LDLIBS += -lrte_ring
 LDLIBS += -lrte_common_dpaax
 
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 38f329a50..a41c1e385 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -23,6 +24,8 @@
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"
 
+#define DPAA2_QDMA_NO_PREFETCH "no_prefetch"
+
 /* Dynamic log type identifier */
 int dpaa2_qdma_logtype;
 
@@ -43,6 +46,14 @@ static struct qdma_virt_queue *qdma_vqs;
 /* QDMA per core data */
 static struct qdma_per_core_info qdma_core_info[RTE_MAX_LCORE];
 
+typedef int (dpdmai_dev_dequeue_multijob_t)(struct dpaa2_dpdmai_dev *dpdmai_dev,
+					uint16_t rxq_id,
+					uint16_t *vq_id,
+					struct rte_qdma_job **job,
+					uint16_t nb_jobs);
+
+dpdmai_dev_dequeue_multijob_t *dpdmai_dev_dequeue_multijob;
+
 static struct qdma_hw_queue *
 alloc_hw_queue(uint32_t lcore_id)
 {
@@ -608,12 +619,156 @@ static inline uint16_t dpdmai_dev_get_job(const struct qbman_fd *fd,
 	return vqid;
 }
 
+/* Function to receive a QDMA job for a given device and queue */
 static int
-dpdmai_dev_dequeue_multijob(struct dpaa2_dpdmai_dev *dpdmai_dev,
-			uint16_t rxq_id,
-			uint16_t *vq_id,
-			struct rte_qdma_job **job,
-			uint16_t nb_jobs)
+dpdmai_dev_dequeue_multijob_prefetch(
+			struct dpaa2_dpdmai_dev *dpdmai_dev,
+			uint16_t rxq_id,
+			uint16_t *vq_id,
+			struct rte_qdma_job **job,
+			uint16_t nb_jobs)
+{
+	struct dpaa2_queue *rxq;
+	struct qbman_result *dq_storage, *dq_storage1 = NULL;
+	struct qbman_pull_desc pulldesc;
+	struct qbman_swp *swp;
+	struct queue_storage_info_t *q_storage;
+	uint32_t fqid;
+	uint8_t status, pending;
+	uint8_t num_rx = 0;
+	const struct qbman_fd *fd;
+	uint16_t vqid;
+	int ret, pull_size;
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_QDMA_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	pull_size = (nb_jobs > dpaa2_dqrr_size) ? dpaa2_dqrr_size : nb_jobs;
+	rxq = &(dpdmai_dev->rx_queue[rxq_id]);
+	fqid = rxq->fqid;
+	q_storage = rxq->q_storage;
+
+	if (unlikely(!q_storage->active_dqs)) {
+		q_storage->toggle = 0;
+		dq_storage = q_storage->dq_storage[q_storage->toggle];
+		q_storage->last_num_pkts = pull_size;
+		qbman_pull_desc_clear(&pulldesc);
+		qbman_pull_desc_set_numframes(&pulldesc,
+					q_storage->last_num_pkts);
+		qbman_pull_desc_set_fq(&pulldesc, fqid);
+		qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				(size_t)(DPAA2_VADDR_TO_IOVA(dq_storage)), 1);
+		if (check_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)) {
+			while (!qbman_check_command_complete(
+					get_swp_active_dqs(
+					DPAA2_PER_LCORE_DPIO->index)))
+				;
+			clear_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index);
+		}
+		while (1) {
+			if (qbman_swp_pull(swp, &pulldesc)) {
+				DPAA2_QDMA_DP_WARN(
+					"VDQ command not issued.QBMAN busy\n");
+				/* Portal was busy, try again */
+				continue;
+			}
+			break;
+		}
+		q_storage->active_dqs = dq_storage;
+		q_storage->active_dpio_id = DPAA2_PER_LCORE_DPIO->index;
+		set_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index,
+				dq_storage);
+	}
+
+	dq_storage = q_storage->active_dqs;
+	rte_prefetch0((void *)(size_t)(dq_storage));
+	rte_prefetch0((void *)(size_t)(dq_storage + 1));
+
+	/* Prepare next pull descriptor. This will give space for the
+	 * prefetching done on DQRR entries
+	 */
+	q_storage->toggle ^= 1;
+	dq_storage1 = q_storage->dq_storage[q_storage->toggle];
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc, pull_size);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage1,
+			(size_t)(DPAA2_VADDR_TO_IOVA(dq_storage1)), 1);
+
+	/* Check if the previous issued command is completed.
+	 * Also seems like the SWP is shared between the Ethernet Driver
+	 * and the SEC driver.
+	 */
+	while (!qbman_check_command_complete(dq_storage))
+		;
+	if (dq_storage == get_swp_active_dqs(q_storage->active_dpio_id))
+		clear_swp_active_dqs(q_storage->active_dpio_id);
+
+	pending = 1;
+
+	do {
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_check_new_result(dq_storage))
+			;
+		rte_prefetch0((void *)((size_t)(dq_storage + 2)));
+		/* Check whether Last Pull command is Expired and
+		 * setting Condition for Loop termination
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			pending = 0;
+			/* Check for valid frame. */
+			status = qbman_result_DQ_flags(dq_storage);
+			if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0))
+				continue;
+		}
+		fd = qbman_result_DQ_fd(dq_storage);
+
+		vqid = dpdmai_dev_get_job(fd, &job[num_rx]);
+		if (vq_id)
+			vq_id[num_rx] = vqid;
+
+		dq_storage++;
+		num_rx++;
+	} while (pending);
+
+	if (check_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)) {
+		while (!qbman_check_command_complete(
+			get_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)))
+			;
+		clear_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index);
+	}
+	/* issue a volatile dequeue command for next pull */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			DPAA2_QDMA_DP_WARN("VDQ command is not issued."
+					"QBMAN is busy (2)\n");
+			continue;
+		}
+		break;
+	}
+
+	q_storage->active_dqs = dq_storage1;
+	q_storage->active_dpio_id = DPAA2_PER_LCORE_DPIO->index;
+	set_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index, dq_storage1);
+
+	return num_rx;
+}
+
+static int
+dpdmai_dev_dequeue_multijob_no_prefetch(
+		struct dpaa2_dpdmai_dev *dpdmai_dev,
+		uint16_t rxq_id,
+		uint16_t *vq_id,
+		struct rte_qdma_job **job,
+		uint16_t nb_jobs)
 {
 	struct dpaa2_queue *rxq;
 	struct qbman_result *dq_storage;
@@ -958,6 +1113,43 @@ dpaa2_dpdmai_dev_uninit(struct rte_rawdev *rawdev)
 	return 0;
 }
 
+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+		__rte_unused void *opaque)
+{
+	if (strcmp(value, "1"))
+		return -1;
+
+	return 0;
+}
+
+static int
+dpaa2_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+	struct rte_kvargs *kvlist;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	if (rte_kvargs_process(kvlist, key,
+			check_devargs_handler, NULL) < 0) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+	rte_kvargs_free(kvlist);
+
+	return 1;
+}
+
 static int
 dpaa2_dpdmai_dev_init(struct rte_rawdev *rawdev, int dpdmai_id)
 {
@@ -1060,6 +1252,17 @@ dpaa2_dpdmai_dev_init(struct rte_rawdev *rawdev, int dpdmai_id)
 		goto init_err;
 	}
 
+	if (dpaa2_get_devargs(rawdev->device->devargs,
+			DPAA2_QDMA_NO_PREFETCH)) {
+		/* If no prefetch is configured. */
+		dpdmai_dev_dequeue_multijob =
+				dpdmai_dev_dequeue_multijob_no_prefetch;
+		DPAA2_QDMA_INFO("No Prefetch RX Mode enabled");
+	} else {
+		dpdmai_dev_dequeue_multijob =
+			dpdmai_dev_dequeue_multijob_prefetch;
+	}
+
 	if (!dpaa2_coherent_no_alloc_cache) {
 		if (dpaa2_svr_family == SVR_LX2160A) {
 			dpaa2_coherent_no_alloc_cache =
@@ -1139,6 +1342,8 @@ static struct rte_dpaa2_driver rte_dpaa2_qdma_pmd = {
 };
 
 RTE_PMD_REGISTER_DPAA2(dpaa2_qdma, rte_dpaa2_qdma_pmd);
+RTE_PMD_REGISTER_PARAM_STRING(dpaa2_qdma,
+	"no_prefetch= ");
 
 RTE_INIT(dpaa2_qdma_init_log)
 {
diff --git a/drivers/raw/dpaa2_qdma/meson.build b/drivers/raw/dpaa2_qdma/meson.build
index 2a4b69c16..1577946fa 100644
--- a/drivers/raw/dpaa2_qdma/meson.build
+++ b/drivers/raw/dpaa2_qdma/meson.build
@@ -4,7 +4,7 @@
 version = 2
 
 build = dpdk_conf.has('RTE_LIBRTE_DPAA2_MEMPOOL')
-deps += ['rawdev', 'mempool_dpaa2', 'ring']
+deps += ['rawdev', 'mempool_dpaa2', 'ring', 'kvargs']
 sources = files('dpaa2_qdma.c')
 
 allow_experimental_apis = true
-- 
2.17.1
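
The devargs handling in this patch (dpaa2_get_devargs() plus
check_devargs_handler()) reduces to a parse/count/process/free sequence
over the "no_prefetch" key. The stand-alone sketch below shows that flow
outside the driver, built only against librte_kvargs; the handler name
value_is_one and the literal argument string are made up for illustration,
whereas the driver takes the string from rawdev->device->devargs:

  #include <stdio.h>
  #include <string.h>
  #include <rte_kvargs.h>

  /* Equivalent of check_devargs_handler(): succeed only for value "1". */
  static int
  value_is_one(const char *key, const char *value, void *opaque)
  {
          (void)key;
          (void)opaque;
          return strcmp(value, "1") ? -1 : 0;
  }

  int main(void)
  {
          /* Sample argument string; the driver reads devargs->args. */
          struct rte_kvargs *kvlist = rte_kvargs_parse("no_prefetch=1", NULL);
          int no_prefetch = 0;

          if (kvlist && rte_kvargs_count(kvlist, "no_prefetch") &&
              rte_kvargs_process(kvlist, "no_prefetch",
                                 value_is_one, NULL) == 0)
                  no_prefetch = 1;

          rte_kvargs_free(kvlist);
          printf("no_prefetch mode: %s\n",
                 no_prefetch ? "enabled" : "disabled");
          return 0;
  }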