From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: dev@dpdk.org
CC: ferruh.yigit@intel.com, Hemant Agrawal, stable@dpdk.org
Date: Fri, 11 Jan 2019 12:24:12 +0000
Message-ID: <20190111122305.7133-4-shreyansh.jain@nxp.com>
References: <20190111115712.6482-1-shreyansh.jain@nxp.com> <20190111122305.7133-1-shreyansh.jain@nxp.com>
In-Reply-To: <20190111122305.7133-1-shreyansh.jain@nxp.com>
Subject: [dpdk-dev] [PATCH v3 03/19] bus/fslmc: fix to use correct physical core for logical core

From: Hemant Agrawal

Existing code is using the lcore id as the physical core id. Add code to
get the correct physical core id.

Also, dpaa2 cannot support one lcore mapping to multiple CPUs; print an
error in such cases.

Fixes: ce9efbf5bb09 ("bus/fslmc: support dynamic logging")
Cc: stable@dpdk.org

Signed-off-by: Hemant Agrawal
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 76 ++++++++++++++++++++----
 1 file changed, 63 insertions(+), 13 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 4fc6efec5..ba2e28ce1 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -53,6 +53,10 @@ static uint32_t io_space_count;
 /* Variable to store DPAA2 platform type */
 uint32_t dpaa2_svr_family;
 
+/* Physical core id for lcores running on dpaa2. */
+/* DPAA2 only support 1 lcore to 1 phy cpu mapping */
+static unsigned int dpaa2_cpu[RTE_MAX_LCORE];
+
 /* Variable to store DPAA2 DQRR size */
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
@@ -92,7 +96,8 @@ dpaa2_core_cluster_sdest(int cpu_id)
 }
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-static void dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id)
+static void
+dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 {
 #define STRING_LEN 28
 #define COMMAND_LEN 50
@@ -125,7 +130,7 @@ static void dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id)
 		return;
 	}
 
-	cpu_mask = cpu_mask << rte_lcore_id();
+	cpu_mask = cpu_mask << dpaa2_cpu[lcoreid];
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -139,7 +144,7 @@ static void dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id)
 	fclose(file);
 }
 
-static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
+static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 {
 	struct epoll_event epoll_ev;
 	int eventfd, dpio_epoll_fd, ret;
@@ -176,32 +181,36 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
 	}
 	dpio_dev->epoll_fd = dpio_epoll_fd;
 
-	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id);
+	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, lcoreid);
 
 	return 0;
 }
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 {
 	int sdest, ret;
+	int cpu_id;
 
 	/* Set the Stashing Destination */
-	if (cpu_id < 0) {
-		cpu_id = rte_get_master_lcore();
-		if (cpu_id < 0) {
+	if (lcoreid < 0) {
+		lcoreid = rte_get_master_lcore();
+		if (lcoreid < 0) {
 			DPAA2_BUS_ERR("Getting CPU Index failed");
 			return -1;
 		}
 	}
+
+	cpu_id = dpaa2_cpu[lcoreid];
+
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
 	 */
 
 	sdest = dpaa2_core_cluster_sdest(cpu_id);
-	DPAA2_BUS_DEBUG("Portal= %d CPU= %u SDEST= %d",
-			dpio_dev->index, cpu_id, sdest);
+	DPAA2_BUS_DEBUG("Portal= %d CPU= %u lcore id =%u SDEST= %d",
+			dpio_dev->index, cpu_id, lcoreid, sdest);
 
 	ret = dpio_set_stashing_destination(dpio_dev->dpio, CMD_PRI_LOW,
 					    dpio_dev->token, sdest);
@@ -211,7 +220,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 	}
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	if (dpaa2_dpio_intr_init(dpio_dev)) {
+	if (dpaa2_dpio_intr_init(dpio_dev, lcoreid)) {
 		DPAA2_BUS_ERR("Interrupt registration failed for dpio");
 		return -1;
 	}
@@ -220,7 +229,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 	return 0;
 }
 
-struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int cpu_id)
+struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	int ret;
@@ -236,7 +245,7 @@ struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int cpu_id)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev, cpu_id);
+	ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
 	if (ret)
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
 
@@ -340,6 +349,39 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	}
 }
 
+/*
+ * This checks for not supported lcore mappings as well as get the physical
+ * cpuid for the lcore.
+ * one lcore can only map to 1 cpu i.e. 1@10-14 not supported.
+ * one cpu can be mapped to more than one lcores.
+ */
+static int
+dpaa2_check_lcore_cpuset(void)
+{
+	unsigned int lcore_id, i;
+	int ret = 0;
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
+		dpaa2_cpu[lcore_id] = 0xffffffff;
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		for (i = 0; i < RTE_MAX_LCORE; i++) {
+			if (CPU_ISSET(i, &lcore_config[lcore_id].cpuset)) {
+				RTE_LOG(DEBUG, EAL, "lcore id = %u cpu=%u\n",
+					lcore_id, i);
+				if (dpaa2_cpu[lcore_id] != 0xffffffff) {
+					DPAA2_BUS_ERR(
+					"ERR:lcore map to multi-cpu not supported");
+					ret = -1;
+				} else {
+					dpaa2_cpu[lcore_id] = i;
+				}
+			}
+		}
+	}
+	return ret;
+}
+
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -349,6 +391,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
+	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
@@ -368,6 +411,13 @@ dpaa2_create_dpio_device(int vdev_fd,
 	/* Using single portal for all devices */
 	dpio_dev->mc_portal = rte_mcp_ptr_list[MC_PORTAL_INDEX];
 
+	if (!check_lcore_cpuset) {
+		check_lcore_cpuset = 1;
+
+		if (dpaa2_check_lcore_cpuset() < 0)
+			goto err;
+	}
+
 	dpio_dev->dpio = malloc(sizeof(struct fsl_mc_io));
 	memset(dpio_dev->dpio, 0, sizeof(struct fsl_mc_io));
 
-- 
2.17.1
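
For readers unfamiliar with the cpuset walk in dpaa2_check_lcore_cpuset() above, the sketch below is a minimal, standalone illustration of the same idea: derive each lcore's single physical CPU from its affinity set and reject any lcore pinned to more than one CPU. It is not part of the patch; the demo_* names and the example affinities are hypothetical, and only CPU_ZERO/CPU_SET/CPU_ISSET from <sched.h> are real APIs.

/* Standalone sketch of the lcore -> physical-cpu derivation (hypothetical names). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

#define DEMO_MAX_LCORE 4
#define DEMO_MAX_CPU   8
#define DEMO_CPU_UNSET 0xffffffffu

int main(void)
{
	cpu_set_t demo_cpuset[DEMO_MAX_LCORE]; /* per-lcore affinity, like lcore_config[].cpuset */
	unsigned int demo_cpu[DEMO_MAX_LCORE]; /* derived physical cpu, like dpaa2_cpu[] */
	unsigned int lcore, cpu;
	int ret = 0;

	/* Hypothetical affinities: lcore 0 -> cpu 1 (valid), lcore 1 -> cpus 2 and 3 (rejected). */
	for (lcore = 0; lcore < DEMO_MAX_LCORE; lcore++)
		CPU_ZERO(&demo_cpuset[lcore]);
	CPU_SET(1, &demo_cpuset[0]);
	CPU_SET(2, &demo_cpuset[1]);
	CPU_SET(3, &demo_cpuset[1]);

	for (lcore = 0; lcore < DEMO_MAX_LCORE; lcore++) {
		demo_cpu[lcore] = DEMO_CPU_UNSET;
		for (cpu = 0; cpu < DEMO_MAX_CPU; cpu++) {
			if (!CPU_ISSET(cpu, &demo_cpuset[lcore]))
				continue;
			if (demo_cpu[lcore] != DEMO_CPU_UNSET) {
				/* Same rule as the patch: one lcore mapped to many cpus is unsupported. */
				printf("lcore %u maps to multiple cpus - unsupported\n", lcore);
				ret = -1;
			} else {
				demo_cpu[lcore] = cpu;
			}
		}
		if (demo_cpu[lcore] != DEMO_CPU_UNSET)
			printf("lcore %u -> physical cpu %u\n", lcore, demo_cpu[lcore]);
	}
	return ret;
}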