From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from NAM03-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam03on0062.outbound.protection.outlook.com [104.47.40.62])
 by dpdk.org (Postfix) with ESMTP id A361D2C8
 for ; Mon, 20 Nov 2017 13:00:35 +0100 (CET)
From: Shijith Thotton
To: dev@dpdk.org
Cc: Ferruh Yigit
Date: Mon, 20 Nov 2017 17:29:51 +0530
Message-Id: <1511179191-29975-1-git-send-email-shijith.thotton@caviumnetworks.com>
X-Mailer: git-send-email 1.8.3.1
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH] net/liquidio: add support for queue re-configuration
List-Id: DPDK patches and discussions
X-List-Received-Date: Mon, 20 Nov 2017 12:00:36 -0000

Add support for re-configuring the number of queues per port and the
descriptor count at run time. Rename the variable representing the number
of descriptors from max_count to nb_desc.
Signed-off-by: Shijith Thotton --- drivers/net/liquidio/base/lio_23xx_vf.c | 54 +---------- drivers/net/liquidio/base/lio_23xx_vf.h | 5 - drivers/net/liquidio/base/lio_hw_defs.h | 2 + drivers/net/liquidio/lio_ethdev.c | 156 +++++++++++++++++++------------- drivers/net/liquidio/lio_rxtx.c | 81 ++++++----------- drivers/net/liquidio/lio_struct.h | 6 +- 6 files changed, 132 insertions(+), 172 deletions(-) diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c index e30c20d..7f7e98d 100644 --- a/drivers/net/liquidio/base/lio_23xx_vf.c +++ b/drivers/net/liquidio/base/lio_23xx_vf.c @@ -150,6 +150,8 @@ reg_val &= 0xEFFFFFFFFFFFFFFFL; + lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val); + reg_val = lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no)); @@ -211,7 +213,7 @@ /* Write the start of the input queue's ring and its size */ lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no), iq->base_addr_dma); - lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->max_count); + lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc); /* Remember the doorbell & instruction count register addr * for this queue @@ -243,7 +245,7 @@ lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no), droq->desc_ring_dma); - lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->max_count); + lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc); lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no), (droq->buffer_size | (OCTEON_RH_SIZE << 16))); @@ -538,51 +540,3 @@ return 0; } -int -cn23xx_vf_set_io_queues_off(struct lio_device *lio_dev) -{ - uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT; - uint64_t q_no; - - /* Disable the i/p and o/p queues for this Octeon. - * IOQs will already be in reset. 
- * If RST bit is set, wait for Quiet bit to be set - * Once Quiet bit is set, clear the RST bit - */ - PMD_INIT_FUNC_TRACE(); - - for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) { - volatile uint64_t reg_val; - - reg_val = lio_read_csr64(lio_dev, - CN23XX_SLI_IQ_PKT_CONTROL64(q_no)); - while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) && !(reg_val & - CN23XX_PKT_INPUT_CTL_QUIET) && loop) { - reg_val = lio_read_csr64( - lio_dev, - CN23XX_SLI_IQ_PKT_CONTROL64(q_no)); - loop = loop - 1; - } - - if (loop == 0) { - lio_dev_err(lio_dev, - "clearing the reset reg failed or setting the quiet reg failed for qno %lu\n", - (unsigned long)q_no); - return -1; - } - - reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST; - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no), - reg_val); - - reg_val = lio_read_csr64(lio_dev, - CN23XX_SLI_IQ_PKT_CONTROL64(q_no)); - if (reg_val & CN23XX_PKT_INPUT_CTL_RST) { - lio_dev_err(lio_dev, "unable to reset qno %lu\n", - (unsigned long)q_no); - return -1; - } - } - - return 0; -} diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h index ad8db0d..ea93524 100644 --- a/drivers/net/liquidio/base/lio_23xx_vf.h +++ b/drivers/net/liquidio/base/lio_23xx_vf.h @@ -80,11 +80,6 @@ return default_lio_conf; } -/** Turns off the input and output queues for the device - * @param lio_dev which device io queues to disable - */ -int cn23xx_vf_set_io_queues_off(struct lio_device *lio_dev); - #define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT 100000 void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev); diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h index fe5c3bb..5595075 100644 --- a/drivers/net/liquidio/base/lio_hw_defs.h +++ b/drivers/net/liquidio/base/lio_hw_defs.h @@ -113,6 +113,7 @@ enum lio_card_type { #define LIO_FW_VERSION_LENGTH 32 +#define LIO_Q_RECONF_MIN_VERSION "1.7.0" #define LIO_VF_TRUST_MIN_VERSION "1.7.1" /** Tag types used by Octeon cores in 
its work. */ @@ -156,6 +157,7 @@ enum octeon_tag_type { #define LIO_CMD_ADD_VLAN_FILTER 0x17 #define LIO_CMD_DEL_VLAN_FILTER 0x18 #define LIO_CMD_VXLAN_PORT_CONFIG 0x19 +#define LIO_CMD_QUEUE_COUNT_CTL 0x1f #define LIO_CMD_VXLAN_PORT_ADD 0x0 #define LIO_CMD_VXLAN_PORT_DEL 0x1 diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c index 84b8a32..6b5d52e 100644 --- a/drivers/net/liquidio/lio_ethdev.c +++ b/drivers/net/liquidio/lio_ethdev.c @@ -1228,12 +1228,10 @@ struct rte_lio_xstats_name_off { fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no; - if ((lio_dev->droq[fw_mapped_oq]) && - (num_rx_descs != lio_dev->droq[fw_mapped_oq]->max_count)) { - lio_dev_err(lio_dev, - "Reconfiguring Rx descs not supported. Configure descs to same value %u or restart application\n", - lio_dev->droq[fw_mapped_oq]->max_count); - return -ENOTSUP; + /* Free previous allocation if any */ + if (eth_dev->data->rx_queues[q_no] != NULL) { + lio_dev_rx_queue_release(eth_dev->data->rx_queues[q_no]); + eth_dev->data->rx_queues[q_no] = NULL; } mbp_priv = rte_mempool_get_priv(mp); @@ -1267,10 +1265,6 @@ struct rte_lio_xstats_name_off { int oq_no; if (droq) { - /* Run time queue deletion not supported */ - if (droq->lio_dev->port_configured) - return; - oq_no = droq->q_no; lio_delete_droq_queue(droq->lio_dev, oq_no); } @@ -1314,12 +1308,10 @@ struct rte_lio_xstats_name_off { lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no); - if ((lio_dev->instr_queue[fw_mapped_iq] != NULL) && - (num_tx_descs != lio_dev->instr_queue[fw_mapped_iq]->max_count)) { - lio_dev_err(lio_dev, - "Reconfiguring Tx descs not supported. 
Configure descs to same value %u or restart application\n", - lio_dev->instr_queue[fw_mapped_iq]->max_count); - return -ENOTSUP; + /* Free previous allocation if any */ + if (eth_dev->data->tx_queues[q_no] != NULL) { + lio_dev_tx_queue_release(eth_dev->data->tx_queues[q_no]); + eth_dev->data->tx_queues[q_no] = NULL; } retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no], @@ -1331,7 +1323,7 @@ struct rte_lio_xstats_name_off { } retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq, - lio_dev->instr_queue[fw_mapped_iq]->max_count, + lio_dev->instr_queue[fw_mapped_iq]->nb_desc, socket_id); if (retval) { @@ -1362,10 +1354,6 @@ struct rte_lio_xstats_name_off { if (tq) { - /* Run time queue deletion not supported */ - if (tq->lio_dev->port_configured) - return; - /* Free sg_list */ lio_delete_sglist(tq); @@ -1534,6 +1522,8 @@ struct rte_lio_xstats_name_off { lio_send_rx_ctrl_cmd(eth_dev, 0); + lio_wait_for_instr_fetch(lio_dev); + /* Clear recorded link status */ lio_dev->linfo.link.link_status64 = 0; } @@ -1607,34 +1597,14 @@ struct rte_lio_xstats_name_off { lio_dev_close(struct rte_eth_dev *eth_dev) { struct lio_device *lio_dev = LIO_DEV(eth_dev); - uint32_t i; lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id); if (lio_dev->intf_open) lio_dev_stop(eth_dev); - lio_wait_for_instr_fetch(lio_dev); - - lio_dev->fn_list.disable_io_queues(lio_dev); - - cn23xx_vf_set_io_queues_off(lio_dev); - - /* Reset iq regs (IQ_DBELL). - * Clear sli_pktx_cnts (OQ_PKTS_SENT). 
- */ - for (i = 0; i < lio_dev->nb_rx_queues; i++) { - struct lio_droq *droq = lio_dev->droq[i]; - - if (droq == NULL) - break; - - uint32_t pkt_count = rte_read32(droq->pkts_sent_reg); - - lio_dev_dbg(lio_dev, - "pending oq count %u\n", pkt_count); - rte_write32(pkt_count, droq->pkts_sent_reg); - } + /* Reset ioq regs */ + lio_dev->fn_list.setup_device_regs(lio_dev); if (lio_dev->pci_dev->kdrv == RTE_KDRV_IGB_UIO) { cn23xx_vf_ask_pf_to_do_flr(lio_dev); @@ -1724,7 +1694,76 @@ struct rte_lio_xstats_name_off { lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n"); } -static int lio_dev_configure(struct rte_eth_dev *eth_dev) +static int +lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq, + int num_rxq) +{ + struct lio_device *lio_dev = LIO_DEV(eth_dev); + struct lio_dev_ctrl_cmd ctrl_cmd; + struct lio_ctrl_pkt ctrl_pkt; + + if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) { + lio_dev_err(lio_dev, "Require firmware version >= %s\n", + LIO_Q_RECONF_MIN_VERSION); + return -ENOTSUP; + } + + /* flush added to prevent cmd failure + * incase the queue is full + */ + lio_flush_iq(lio_dev, lio_dev->instr_queue[0]); + + memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt)); + memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd)); + + ctrl_cmd.eth_dev = eth_dev; + ctrl_cmd.cond = 0; + + ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL; + ctrl_pkt.ncmd.s.param1 = num_txq; + ctrl_pkt.ncmd.s.param2 = num_rxq; + ctrl_pkt.ctrl_cmd = &ctrl_cmd; + + if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) { + lio_dev_err(lio_dev, "Failed to send queue count control command\n"); + return -1; + } + + if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) { + lio_dev_err(lio_dev, "Queue count control command timed out\n"); + return -1; + } + + return 0; +} + +static int +lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq) +{ + struct lio_device *lio_dev = LIO_DEV(eth_dev); + + if (lio_dev->nb_rx_queues != num_rxq || + lio_dev->nb_tx_queues != num_txq) { + 
if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq)) + return -1; + lio_dev->nb_rx_queues = num_rxq; + lio_dev->nb_tx_queues = num_txq; + } + + if (lio_dev->intf_open) + lio_dev_stop(eth_dev); + + /* Reset ioq registers */ + if (lio_dev->fn_list.setup_device_regs(lio_dev)) { + lio_dev_err(lio_dev, "Failed to configure device registers\n"); + return -1; + } + + return 0; +} + +static int +lio_dev_configure(struct rte_eth_dev *eth_dev) { struct lio_device *lio_dev = LIO_DEV(eth_dev); uint16_t timeout = LIO_MAX_CMD_TIMEOUT; @@ -1737,22 +1776,21 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev) PMD_INIT_FUNC_TRACE(); - /* Re-configuring firmware not supported. - * Can't change tx/rx queues per port from initial value. + /* Inform firmware about change in number of queues to use. + * Disable IO queues and reset registers for re-configuration. */ - if (lio_dev->port_configured) { - if ((lio_dev->nb_rx_queues != eth_dev->data->nb_rx_queues) || - (lio_dev->nb_tx_queues != eth_dev->data->nb_tx_queues)) { - lio_dev_err(lio_dev, - "rxq/txq re-conf not supported. Restart application with new value.\n"); - return -ENOTSUP; - } - return 0; - } + if (lio_dev->port_configured) + return lio_reconf_queues(eth_dev, + eth_dev->data->nb_tx_queues, + eth_dev->data->nb_rx_queues); lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues; lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues; + /* Set max number of queues which can be re-configured. 
*/ + lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues; + lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues; + resp_size = sizeof(struct lio_if_cfg_resp); sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0); if (sc == NULL) @@ -1879,9 +1917,6 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev) lio_free_soft_command(sc); - /* Disable iq_0 for reconf */ - lio_dev->fn_list.disable_io_queues(lio_dev); - /* Reset ioq regs */ lio_dev->fn_list.setup_device_regs(lio_dev); @@ -2021,11 +2056,6 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev) rte_delay_ms(LIO_PCI_FLR_WAIT * 2); } - if (cn23xx_vf_set_io_queues_off(lio_dev)) { - lio_dev_err(lio_dev, "Setting io queues off failed\n"); - goto error; - } - if (lio_dev->fn_list.setup_device_regs(lio_dev)) { lio_dev_err(lio_dev, "Failed to configure device registers\n"); goto error; diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c index 376893a..759556c 100644 --- a/drivers/net/liquidio/lio_rxtx.c +++ b/drivers/net/liquidio/lio_rxtx.c @@ -42,7 +42,7 @@ #define LIO_MAX_SG 12 /* Flush iq if available tx_desc fall below LIO_FLUSH_WM */ -#define LIO_FLUSH_WM(_iq) ((_iq)->max_count / 2) +#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2) #define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL static void @@ -70,7 +70,7 @@ { uint32_t i; - for (i = 0; i < droq->max_count; i++) { + for (i = 0; i < droq->nb_desc; i++) { if (droq->recv_buf_list[i].buffer) { rte_pktmbuf_free((struct rte_mbuf *) droq->recv_buf_list[i].buffer); @@ -89,7 +89,7 @@ uint32_t i; void *buf; - for (i = 0; i < droq->max_count; i++) { + for (i = 0; i < droq->nb_desc; i++) { buf = rte_pktmbuf_alloc(droq->mpool); if (buf == NULL) { lio_dev_err(lio_dev, "buffer alloc failed\n"); @@ -164,7 +164,7 @@ { droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev, "info_list", droq->q_no, - (droq->max_count * + (droq->nb_desc * LIO_DROQ_INFO_SIZE), RTE_CACHE_LINE_SIZE, socket_id); @@ -206,10 +206,10 @@ c_refill_threshold = 
LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev); - droq->max_count = num_descs; + droq->nb_desc = num_descs; droq->buffer_size = desc_size; - desc_ring_size = droq->max_count * LIO_DROQ_DESC_SIZE; + desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE; droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev, "droq", q_no, desc_ring_size, @@ -228,7 +228,7 @@ lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n", q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma); lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no, - droq->max_count); + droq->nb_desc); droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id); if (droq->info_list == NULL) { @@ -237,7 +237,7 @@ } droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list", - (droq->max_count * + (droq->nb_desc * LIO_DROQ_RECVBUF_SIZE), RTE_CACHE_LINE_SIZE, socket_id); @@ -274,11 +274,6 @@ PMD_INIT_FUNC_TRACE(); - if (lio_dev->droq[oq_no]) { - lio_dev_dbg(lio_dev, "Droq %d in use\n", oq_no); - return 0; - } - /* Allocate the DS for the new droq. */ droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq), RTE_CACHE_LINE_SIZE, socket_id); @@ -303,7 +298,7 @@ /* Send credit for octeon output queues. credits are always * sent after the output queue is enabled. 
*/ - rte_write32(lio_dev->droq[oq_no]->max_count, + rte_write32(lio_dev->droq[oq_no]->nb_desc, lio_dev->droq[oq_no]->pkts_credit_reg); rte_wmb(); @@ -342,13 +337,13 @@ do { droq->refill_idx = lio_incr_index( droq->refill_idx, 1, - droq->max_count); + droq->nb_desc); desc_refilled++; droq->refill_count--; } while (droq->recv_buf_list[droq->refill_idx].buffer); } refill_index = lio_incr_index(refill_index, 1, - droq->max_count); + droq->nb_desc); } /* while */ return desc_refilled; @@ -379,7 +374,7 @@ desc_ring = droq->desc_ring; - while (droq->refill_count && (desc_refilled < droq->max_count)) { + while (droq->refill_count && (desc_refilled < droq->nb_desc)) { /* If a valid buffer exists (happens if there is no dispatch), * reuse the buffer, else allocate. */ @@ -402,7 +397,7 @@ droq->info_list[droq->refill_idx].length = 0; droq->refill_idx = lio_incr_index(droq->refill_idx, 1, - droq->max_count); + droq->nb_desc); desc_refilled++; droq->refill_count--; } @@ -449,7 +444,7 @@ buf_cnt = lio_droq_get_bufcount(droq->buffer_size, (uint32_t)info->length); droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt, - droq->max_count); + droq->nb_desc); droq->refill_count += buf_cnt; } else { if (info->length <= droq->buffer_size) { @@ -462,7 +457,7 @@ droq->recv_buf_list[droq->read_idx].buffer = NULL; droq->read_idx = lio_incr_index( droq->read_idx, 1, - droq->max_count); + droq->nb_desc); droq->refill_count++; if (likely(nicbuf != NULL)) { @@ -556,7 +551,7 @@ pkt_len += cpy_len; droq->read_idx = lio_incr_index( droq->read_idx, - 1, droq->max_count); + 1, droq->nb_desc); droq->refill_count++; /* Prefetch buffer pointers when on a @@ -737,7 +732,7 @@ iq->base_addr_dma = iq->iq_mz->iova; iq->base_addr = (uint8_t *)iq->iq_mz->addr; - iq->max_count = num_descs; + iq->nb_desc = num_descs; /* Initialize a list to holds requests that have been posted to Octeon * but has yet to be fetched by octeon @@ -756,7 +751,7 @@ lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n", 
iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma, - iq->max_count); + iq->nb_desc); iq->lio_dev = lio_dev; iq->txpciq.txpciq64 = txpciq.txpciq64; @@ -853,14 +848,6 @@ { uint32_t iq_no = (uint32_t)txpciq.s.q_no; - if (lio_dev->instr_queue[iq_no]) { - lio_dev_dbg(lio_dev, "IQ is in use. Cannot create the IQ: %d again\n", - iq_no); - lio_dev->instr_queue[iq_no]->txpciq.txpciq64 = txpciq.txpciq64; - lio_dev->instr_queue[iq_no]->app_ctx = app_ctx; - return 0; - } - lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue", sizeof(struct lio_instr_queue), RTE_CACHE_LINE_SIZE, socket_id); @@ -870,23 +857,15 @@ lio_dev->instr_queue[iq_no]->q_index = q_index; lio_dev->instr_queue[iq_no]->app_ctx = app_ctx; - if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) - goto release_lio_iq; + if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) { + rte_free(lio_dev->instr_queue[iq_no]); + lio_dev->instr_queue[iq_no] = NULL; + return -1; + } lio_dev->num_iqs++; - if (lio_dev->fn_list.enable_io_queues(lio_dev)) - goto delete_lio_iq; return 0; - -delete_lio_iq: - lio_delete_instr_queue(lio_dev, iq_no); - lio_dev->num_iqs--; -release_lio_iq: - rte_free(lio_dev->instr_queue[iq_no]); - lio_dev->instr_queue[iq_no] = NULL; - - return -1; } int @@ -957,14 +936,14 @@ * position if queue gets full before Octeon could fetch any instr. */ if (rte_atomic64_read(&iq->instr_pending) >= - (int32_t)(iq->max_count - 1)) { + (int32_t)(iq->nb_desc - 1)) { st.status = LIO_IQ_SEND_FAILED; st.index = -1; return st; } if (rte_atomic64_read(&iq->instr_pending) >= - (int32_t)(iq->max_count - 2)) + (int32_t)(iq->nb_desc - 2)) st.status = LIO_IQ_SEND_STOP; copy_cmd_into_iq(iq, cmd); @@ -972,7 +951,7 @@ /* "index" is returned, host_write_index is modified. */ st.index = iq->host_write_index; iq->host_write_index = lio_incr_index(iq->host_write_index, 1, - iq->max_count); + iq->nb_desc); iq->fill_cnt++; /* Flush the command into memory. 
We need to be sure the data is in @@ -1074,7 +1053,7 @@ skip_this: inst_count++; - old = lio_incr_index(old, 1, iq->max_count); + old = lio_incr_index(old, 1, iq->nb_desc); } iq->flush_index = old; @@ -1094,7 +1073,7 @@ /* Add last_done and modulo with the IQ size to get new index */ iq->lio_read_index = (iq->lio_read_index + (uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) % - iq->max_count; + iq->nb_desc; } int @@ -1552,7 +1531,7 @@ struct lio_soft_command * static inline uint32_t lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no) { - return ((lio_dev->instr_queue[q_no]->max_count - 1) - + return ((lio_dev->instr_queue[q_no]->nb_desc - 1) - (uint32_t)rte_atomic64_read( &lio_dev->instr_queue[q_no]->instr_pending)); } @@ -1562,7 +1541,7 @@ struct lio_soft_command * { return ((uint32_t)rte_atomic64_read( &lio_dev->instr_queue[q_no]->instr_pending) >= - (lio_dev->instr_queue[q_no]->max_count - 2)); + (lio_dev->instr_queue[q_no]->nb_desc - 2)); } static int diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h index 10e3976..4b4670f 100644 --- a/drivers/net/liquidio/lio_struct.h +++ b/drivers/net/liquidio/lio_struct.h @@ -131,7 +131,7 @@ struct lio_droq { rte_atomic64_t pkts_pending; /** Number of descriptors in this ring. */ - uint32_t max_count; + uint32_t nb_desc; /** The number of descriptors pending refill. */ uint32_t refill_count; @@ -298,8 +298,8 @@ struct lio_instr_queue { uint32_t status:8; - /** Maximum no. of instructions in this queue. */ - uint32_t max_count; + /** Number of descriptors in this ring. */ + uint32_t nb_desc; /** Index in input ring where the driver should write the next packet */ uint32_t host_write_index; -- 1.8.3.1