From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Boyer <andrew.boyer@amd.com>
To: dev@dpdk.org
CC: Andrew Boyer <andrew.boyer@amd.com>
Subject: [PATCH v3 13/13] net/ionic: optimize device start operation
Date: Tue, 6 Feb 2024 19:13:17 -0800
Message-ID: <20240207031317.32293-14-andrew.boyer@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240207031317.32293-1-andrew.boyer@amd.com>
References: <20240202193238.62669-1-andrew.boyer@amd.com>
 <20240207031317.32293-1-andrew.boyer@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>

Split the queue_start operation into first-half and second-half helpers.
This allows us to batch up the queue commands during dev_start(),
reducing the outage window when restarting the process by about 1ms
per queue.
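For reference, the batched flow in the new ionic_lif_start() is roughly
the following (a simplified sketch of the Rx half of the loop added by
this patch, with the deferred-start check and error handling trimmed):

    chunk = ionic_adminq_space_avail(lif);

    for (i = 0; i < lif->nrxqcqs; i += chunk) {
            /* First half: post up to 'chunk' q_init admin commands */
            for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++)
                    ionic_dev_rx_queue_start_firsthalf(dev, i + j);

            /* Second half: wait for each posted command to complete */
            for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++)
                    ionic_dev_rx_queue_start_secondhalf(dev, i + j);
    }

Each first-half call posts IONIC_CMD_Q_INIT via ionic_adminq_post()
without waiting; each second-half call does ionic_adminq_wait() and then
the _done() step. The waits overlap with the device working on the rest
of the batch, so the per-command latency is no longer paid serially for
every queue.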
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
---
 drivers/net/ionic/ionic_lif.c  | 136 +++++++++++++++++++++++----------
 drivers/net/ionic/ionic_lif.h  |   6 +-
 drivers/net/ionic/ionic_rxtx.c |  81 ++++++++++++++++----
 drivers/net/ionic/ionic_rxtx.h |  10 +++
 4 files changed, 176 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 45317590fa..93a1011772 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1601,13 +1601,16 @@ ionic_lif_set_features(struct ionic_lif *lif)
 }
 
 int
-ionic_lif_txq_init(struct ionic_tx_qcq *txq)
+ionic_lif_txq_init_nowait(struct ionic_tx_qcq *txq)
 {
 	struct ionic_qcq *qcq = &txq->qcq;
 	struct ionic_queue *q = &qcq->q;
 	struct ionic_lif *lif = qcq->lif;
 	struct ionic_cq *cq = &qcq->cq;
-	struct ionic_admin_ctx ctx = {
+	struct ionic_admin_ctx *ctx = &txq->admin_ctx;
+	int err;
+
+	*ctx = (struct ionic_admin_ctx) {
 		.pending_work = true,
 		.cmd.q_init = {
 			.opcode = IONIC_CMD_Q_INIT,
@@ -1621,32 +1624,41 @@ ionic_lif_txq_init(struct ionic_tx_qcq *txq)
 			.sg_ring_base = rte_cpu_to_le_64(q->sg_base_pa),
 		},
 	};
-	int err;
 
 	if (txq->flags & IONIC_QCQ_F_SG)
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
 	if (txq->flags & IONIC_QCQ_F_CMB) {
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
 	} else {
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
 	}
 
 	IONIC_PRINT(DEBUG, "txq_init.index %d", q->index);
 	IONIC_PRINT(DEBUG, "txq_init.ring_base 0x%" PRIx64 "", q->base_pa);
 	IONIC_PRINT(DEBUG, "txq_init.ring_size %d",
-		ctx.cmd.q_init.ring_size);
-	IONIC_PRINT(DEBUG, "txq_init.ver %u", ctx.cmd.q_init.ver);
+		ctx->cmd.q_init.ring_size);
+	IONIC_PRINT(DEBUG, "txq_init.ver %u", ctx->cmd.q_init.ver);
 
 	ionic_q_reset(q);
 	ionic_cq_reset(cq);
 
-	err = ionic_adminq_post_wait(lif, &ctx);
+	/* Caller responsible for calling ionic_lif_txq_init_done() */
+	err = ionic_adminq_post(lif, ctx);
 	if (err)
-		return err;
+		ctx->pending_work = false;
+	return err;
+}
 
-	q->hw_type = ctx.comp.q_init.hw_type;
-	q->hw_index = rte_le_to_cpu_32(ctx.comp.q_init.hw_index);
+void
+ionic_lif_txq_init_done(struct ionic_tx_qcq *txq)
+{
+	struct ionic_lif *lif = txq->qcq.lif;
+	struct ionic_queue *q = &txq->qcq.q;
+	struct ionic_admin_ctx *ctx = &txq->admin_ctx;
+
+	q->hw_type = ctx->comp.q_init.hw_type;
+	q->hw_index = rte_le_to_cpu_32(ctx->comp.q_init.hw_index);
 	q->db = ionic_db_map(lif, q);
 
 	IONIC_PRINT(DEBUG, "txq->hw_type %d", q->hw_type);
@@ -1654,18 +1666,19 @@ ionic_lif_txq_init(struct ionic_tx_qcq *txq)
 	IONIC_PRINT(DEBUG, "txq->db %p", q->db);
 
 	txq->flags |= IONIC_QCQ_F_INITED;
-
-	return 0;
 }
 
 int
-ionic_lif_rxq_init(struct ionic_rx_qcq *rxq)
+ionic_lif_rxq_init_nowait(struct ionic_rx_qcq *rxq)
 {
 	struct ionic_qcq *qcq = &rxq->qcq;
 	struct ionic_queue *q = &qcq->q;
 	struct ionic_lif *lif = qcq->lif;
 	struct ionic_cq *cq = &qcq->cq;
-	struct ionic_admin_ctx ctx = {
+	struct ionic_admin_ctx *ctx = &rxq->admin_ctx;
+	int err;
+
+	*ctx = (struct ionic_admin_ctx) {
 		.pending_work = true,
 		.cmd.q_init = {
 			.opcode = IONIC_CMD_Q_INIT,
@@ -1679,32 +1692,41 @@ ionic_lif_rxq_init(struct ionic_rx_qcq *rxq)
 			.sg_ring_base = rte_cpu_to_le_64(q->sg_base_pa),
 		},
 	};
-	int err;
 
 	if (rxq->flags & IONIC_QCQ_F_SG)
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
 	if (rxq->flags & IONIC_QCQ_F_CMB) {
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
 	} else {
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
 	}
 
 	IONIC_PRINT(DEBUG, "rxq_init.index %d", q->index);
 	IONIC_PRINT(DEBUG, "rxq_init.ring_base 0x%" PRIx64 "", q->base_pa);
 	IONIC_PRINT(DEBUG, "rxq_init.ring_size %d",
-		ctx.cmd.q_init.ring_size);
-	IONIC_PRINT(DEBUG, "rxq_init.ver %u", ctx.cmd.q_init.ver);
+		ctx->cmd.q_init.ring_size);
+	IONIC_PRINT(DEBUG, "rxq_init.ver %u", ctx->cmd.q_init.ver);
 
 	ionic_q_reset(q);
 	ionic_cq_reset(cq);
 
-	err = ionic_adminq_post_wait(lif, &ctx);
+	/* Caller responsible for calling ionic_lif_rxq_init_done() */
+	err = ionic_adminq_post(lif, ctx);
 	if (err)
-		return err;
+		ctx->pending_work = false;
+	return err;
+}
 
-	q->hw_type = ctx.comp.q_init.hw_type;
-	q->hw_index = rte_le_to_cpu_32(ctx.comp.q_init.hw_index);
+void
+ionic_lif_rxq_init_done(struct ionic_rx_qcq *rxq)
+{
+	struct ionic_lif *lif = rxq->qcq.lif;
+	struct ionic_queue *q = &rxq->qcq.q;
+	struct ionic_admin_ctx *ctx = &rxq->admin_ctx;
+
+	q->hw_type = ctx->comp.q_init.hw_type;
+	q->hw_index = rte_le_to_cpu_32(ctx->comp.q_init.hw_index);
 	q->db = ionic_db_map(lif, q);
 
 	rxq->flags |= IONIC_QCQ_F_INITED;
@@ -1712,8 +1734,6 @@ ionic_lif_rxq_init(struct ionic_rx_qcq *rxq)
 	IONIC_PRINT(DEBUG, "rxq->hw_type %d", q->hw_type);
 	IONIC_PRINT(DEBUG, "rxq->hw_index %d", q->hw_index);
 	IONIC_PRINT(DEBUG, "rxq->db %p", q->db);
-
-	return 0;
 }
 
 static int
@@ -1962,9 +1982,11 @@ ionic_lif_configure(struct ionic_lif *lif)
 int
 ionic_lif_start(struct ionic_lif *lif)
 {
+	struct rte_eth_dev *dev = lif->eth_dev;
 	uint32_t rx_mode;
-	uint32_t i;
+	uint32_t i, j, chunk;
 	int err;
+	bool fatal = false;
 
 	err = ionic_lif_rss_setup(lif);
 	if (err)
@@ -1985,25 +2007,57 @@ ionic_lif_start(struct ionic_lif *lif)
 		"on port %u",
 		lif->nrxqcqs, lif->ntxqcqs, lif->port_id);
 
-	for (i = 0; i < lif->nrxqcqs; i++) {
-		struct ionic_rx_qcq *rxq = lif->rxqcqs[i];
-		if (!(rxq->flags & IONIC_QCQ_F_DEFERRED)) {
-			err = ionic_dev_rx_queue_start(lif->eth_dev, i);
+	chunk = ionic_adminq_space_avail(lif);
+
+	for (i = 0; i < lif->nrxqcqs; i += chunk) {
+		if (lif->rxqcqs[0]->flags & IONIC_QCQ_F_DEFERRED) {
+			IONIC_PRINT(DEBUG, "Rx queue start deferred");
+			break;
+		}
+
+		for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++) {
+			err = ionic_dev_rx_queue_start_firsthalf(dev, i + j);
+			if (err) {
+				fatal = true;
+				break;
+			}
+		}
+		for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++) {
+			/* Commands that failed to post return immediately */
+			err = ionic_dev_rx_queue_start_secondhalf(dev, i + j);
 			if (err)
-				return err;
+				/* Don't break */
+				fatal = true;
 		}
 	}
+	if (fatal)
+		return -EIO;
 
-	for (i = 0; i < lif->ntxqcqs; i++) {
-		struct ionic_tx_qcq *txq = lif->txqcqs[i];
-		if (!(txq->flags & IONIC_QCQ_F_DEFERRED)) {
-			err = ionic_dev_tx_queue_start(lif->eth_dev, i);
+	for (i = 0; i < lif->ntxqcqs; i += chunk) {
+		if (lif->txqcqs[0]->flags & IONIC_QCQ_F_DEFERRED) {
+			IONIC_PRINT(DEBUG, "Tx queue start deferred");
+			break;
+		}
+
+		for (j = 0; j < chunk && i + j < lif->ntxqcqs; j++) {
+			err = ionic_dev_tx_queue_start_firsthalf(dev, i + j);
+			if (err) {
+				fatal = true;
+				break;
+			}
+		}
+		for (j = 0; j < chunk && i + j < lif->ntxqcqs; j++) {
+			/* Commands that failed to post return immediately */
+			err = ionic_dev_tx_queue_start_secondhalf(dev, i + j);
 			if (err)
-				return err;
+				/* Don't break */
+				fatal = true;
 		}
 	}
+	if (fatal)
+		return -EIO;
 
 	/* Carrier ON here */
 	lif->state |= IONIC_LIF_F_UP;
diff --git a/drivers/net/ionic/ionic_lif.h b/drivers/net/ionic/ionic_lif.h
index ee13f5b7c8..591cf1a2ff 100644
--- a/drivers/net/ionic/ionic_lif.h
+++ b/drivers/net/ionic/ionic_lif.h
@@ -228,11 +228,13 @@ int ionic_tx_qcq_alloc(struct ionic_lif *lif, uint32_t socket_id,
 		struct ionic_tx_qcq **qcq_out);
 void ionic_qcq_free(struct ionic_qcq *qcq);
 
-int ionic_lif_rxq_init(struct ionic_rx_qcq *rxq);
+int ionic_lif_rxq_init_nowait(struct ionic_rx_qcq *rxq);
+void ionic_lif_rxq_init_done(struct ionic_rx_qcq *rxq);
 void ionic_lif_rxq_deinit_nowait(struct ionic_rx_qcq *rxq);
 void ionic_lif_rxq_stats(struct ionic_rx_qcq *rxq);
 
-int ionic_lif_txq_init(struct ionic_tx_qcq *txq);
+int ionic_lif_txq_init_nowait(struct ionic_tx_qcq *txq);
+void ionic_lif_txq_init_done(struct ionic_tx_qcq *txq);
 void ionic_lif_txq_deinit_nowait(struct ionic_tx_qcq *txq);
 void ionic_lif_txq_stats(struct ionic_tx_qcq *txq);
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 774dc596c0..ad04e987eb 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,27 +203,54 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
  * Start Transmit Units for specified queue.
  */
 int __rte_cold
-ionic_dev_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
+ionic_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 {
-	uint8_t *tx_queue_state = eth_dev->data->tx_queue_state;
-	struct ionic_tx_qcq *txq;
 	int err;
 
+	err = ionic_dev_tx_queue_start_firsthalf(dev, tx_queue_id);
+	if (err)
+		return err;
+
+	return ionic_dev_tx_queue_start_secondhalf(dev, tx_queue_id);
+}
+
+int __rte_cold
+ionic_dev_tx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id)
+{
+	uint8_t *tx_queue_state = dev->data->tx_queue_state;
+	struct ionic_tx_qcq *txq = dev->data->tx_queues[tx_queue_id];
+
 	if (tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED) {
 		IONIC_PRINT(DEBUG, "TX queue %u already started",
 			tx_queue_id);
 		return 0;
 	}
 
-	txq = eth_dev->data->tx_queues[tx_queue_id];
-
 	IONIC_PRINT(DEBUG, "Starting TX queue %u, %u descs",
 		tx_queue_id, txq->qcq.q.num_descs);
 
-	err = ionic_lif_txq_init(txq);
+	return ionic_lif_txq_init_nowait(txq);
+}
+
+int __rte_cold
+ionic_dev_tx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id)
+{
+	uint8_t *tx_queue_state = dev->data->tx_queue_state;
+	struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(dev);
+	struct ionic_tx_qcq *txq = dev->data->tx_queues[tx_queue_id];
+	int err;
+
+	if (tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	err = ionic_adminq_wait(lif, &txq->admin_ctx);
 	if (err)
 		return err;
 
+	ionic_lif_txq_init_done(txq);
+
 	tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
@@ -680,22 +707,31 @@ ionic_rx_init_descriptors(struct ionic_rx_qcq *rxq)
  * Start Receive Units for specified queue.
  */
 int __rte_cold
-ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+ionic_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
-	struct ionic_rx_qcq *rxq;
-	struct ionic_queue *q;
 	int err;
 
+	err = ionic_dev_rx_queue_start_firsthalf(dev, rx_queue_id);
+	if (err)
+		return err;
+
+	return ionic_dev_rx_queue_start_secondhalf(dev, rx_queue_id);
+}
+
+int __rte_cold
+ionic_dev_rx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id)
+{
+	uint8_t *rx_queue_state = dev->data->rx_queue_state;
+	struct ionic_rx_qcq *rxq = dev->data->rx_queues[rx_queue_id];
+	struct ionic_queue *q = &rxq->qcq.q;
+
 	if (rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED) {
 		IONIC_PRINT(DEBUG, "RX queue %u already started",
 			rx_queue_id);
 		return 0;
 	}
 
-	rxq = eth_dev->data->rx_queues[rx_queue_id];
-	q = &rxq->qcq.q;
-
 	rxq->frame_size = rxq->qcq.lif->frame_size - RTE_ETHER_CRC_LEN;
 
 	/* Recalculate segment count based on MTU */
@@ -707,10 +743,27 @@ ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 
 	ionic_rx_init_descriptors(rxq);
 
-	err = ionic_lif_rxq_init(rxq);
+	return ionic_lif_rxq_init_nowait(rxq);
+}
+
+int __rte_cold
+ionic_dev_rx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id)
+{
+	uint8_t *rx_queue_state = dev->data->rx_queue_state;
+	struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(dev);
+	struct ionic_rx_qcq *rxq = dev->data->rx_queues[rx_queue_id];
+	int err;
+
+	if (rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	err = ionic_adminq_wait(lif, &rxq->admin_ctx);
 	if (err)
 		return err;
 
+	ionic_lif_rxq_init_done(rxq);
+
 	/* Allocate buffers for descriptor ring */
 	if (rxq->flags & IONIC_QCQ_F_SG)
 		err = ionic_rx_fill_sg(rxq);
diff --git a/drivers/net/ionic/ionic_rxtx.h b/drivers/net/ionic/ionic_rxtx.h
index 7ca23178cc..a342afec54 100644
--- a/drivers/net/ionic/ionic_rxtx.h
+++ b/drivers/net/ionic/ionic_rxtx.h
@@ -46,6 +46,16 @@ void ionic_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 int ionic_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int ionic_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
+/* Helpers for optimized dev_start() */
+int ionic_dev_rx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id);
+int ionic_dev_rx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id);
+int ionic_dev_tx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id);
+int ionic_dev_tx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id);
+
 /* Helpers for optimized dev_stop() */
 void ionic_dev_rx_queue_stop_firsthalf(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
-- 
2.17.1