From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shahed Shaikh
To:
Cc:
Date: Thu, 12 Sep 2019 08:24:12 -0700
Message-ID: <20190912152416.2990-2-shshaikh@marvell.com>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20190912152416.2990-1-shshaikh@marvell.com>
References: <20190912152416.2990-1-shshaikh@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-stable] [PATCH v2 1/5] net/qede: refactor Rx and Tx queue setup

This patch refactors the Rx and Tx queue setup flow, which is required to
allow an odd number of queues to be configured in the next patch. This is
the first patch of the series needed to fix an issue where qede port
initialization in OVS-DPDK fails when a single Rx/Tx queue is configured.
A detailed explanation is given in the next patch.
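For context, here is a minimal, self-contained sketch of the pattern this
refactor follows: the allocation path is split into a helper that returns
NULL on failure, while the ethdev-facing setup routine keeps the validation
and bookkeeping and maps an allocation failure to -ENOMEM. The names used
here (demo_rxq, demo_alloc_rxq, demo_rx_queue_setup) are hypothetical and
are not the driver's actual structures or functions.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct demo_rxq {
	uint16_t nb_desc;   /* ring size */
	uint16_t buf_size;  /* Rx buffer size */
	void **sw_ring;     /* software ring, one slot per descriptor */
};

/* Allocation-only helper: returns NULL on failure and cleans up any
 * partially allocated state, so the caller only has to test for NULL. */
static struct demo_rxq *
demo_alloc_rxq(uint16_t nb_desc, uint16_t buf_size)
{
	struct demo_rxq *rxq = calloc(1, sizeof(*rxq));

	if (rxq == NULL)
		return NULL;

	rxq->nb_desc = nb_desc;
	rxq->buf_size = buf_size;
	rxq->sw_ring = calloc(nb_desc, sizeof(*rxq->sw_ring));
	if (rxq->sw_ring == NULL) {
		free(rxq);
		return NULL;
	}
	return rxq;
}

/* Setup wrapper: parameter validation stays here, and an allocation
 * failure is converted to -ENOMEM, matching the ethdev convention. */
static int
demo_rx_queue_setup(struct demo_rxq **slot, uint16_t nb_desc,
		    uint16_t buf_size)
{
	struct demo_rxq *rxq;

	/* ring size must be a non-zero power of 2 */
	if (nb_desc == 0 || (nb_desc & (nb_desc - 1)) != 0)
		return -EINVAL;

	rxq = demo_alloc_rxq(nb_desc, buf_size);
	if (rxq == NULL)
		return -ENOMEM;

	*slot = rxq;  /* publish the queue, as the real setup does */
	return 0;
}

int main(void)
{
	struct demo_rxq *rxq = NULL;
	int rc = demo_rx_queue_setup(&rxq, 512, 2048);

	printf("setup rc=%d\n", rc);
	if (rxq != NULL) {
		free(rxq->sw_ring);
		free(rxq);
	}
	return 0;
}

The same split is applied to both the Rx and Tx queue setup paths in the
patch below.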
Fixes: 2af14ca79c0a ("net/qede: support 100G")
Cc: stable@dpdk.org

Signed-off-by: Shahed Shaikh
---
 drivers/net/qede/qede_rxtx.c | 228 ++++++++++++++++++++++-------------
 1 file changed, 141 insertions(+), 87 deletions(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c38cbb905..cb8ac9bf6 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -124,36 +124,20 @@ qede_calc_rx_buf_size(struct rte_eth_dev *dev, uint16_t mbufsz,
 	return QEDE_FLOOR_TO_CACHE_LINE_SIZE(rx_buf_size);
 }
 
-int
-qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    __rte_unused const struct rte_eth_rxconf *rx_conf,
-		    struct rte_mempool *mp)
+static struct qede_rx_queue *
+qede_alloc_rx_queue_mem(struct rte_eth_dev *dev,
+			uint16_t queue_idx,
+			uint16_t nb_desc,
+			unsigned int socket_id,
+			struct rte_mempool *mp,
+			uint16_t bufsz)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
 	struct qede_rx_queue *rxq;
-	uint16_t max_rx_pkt_len;
-	uint16_t bufsz;
 	size_t size;
 	int rc;
 
-	PMD_INIT_FUNC_TRACE(edev);
-
-	/* Note: Ring size/align is controlled by struct rte_eth_desc_lim */
-	if (!rte_is_power_of_2(nb_desc)) {
-		DP_ERR(edev, "Ring size %u is not power of 2\n",
-		       nb_desc);
-		return -EINVAL;
-	}
-
-	/* Free memory prior to re-allocation if needed... */
-	if (dev->data->rx_queues[queue_idx] != NULL) {
-		qede_rx_queue_release(dev->data->rx_queues[queue_idx]);
-		dev->data->rx_queues[queue_idx] = NULL;
-	}
-
 	/* First allocate the rx queue data structure */
 	rxq = rte_zmalloc_socket("qede_rx_queue", sizeof(struct qede_rx_queue),
 				 RTE_CACHE_LINE_SIZE, socket_id);
@@ -161,7 +145,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (!rxq) {
 		DP_ERR(edev, "Unable to allocate memory for rxq on socket %u",
 		       socket_id);
-		return -ENOMEM;
+		return NULL;
 	}
 
 	rxq->qdev = qdev;
@@ -170,27 +154,8 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
 
-	max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
-
-	/* Fix up RX buffer size */
-	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
-	/* cache align the mbuf size to simplfy rx_buf_size calculation */
-	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
-	    (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
-		if (!dev->data->scattered_rx) {
-			DP_INFO(edev, "Forcing scatter-gather mode\n");
-			dev->data->scattered_rx = 1;
-		}
-	}
-
-	rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
-	if (rc < 0) {
-		rte_free(rxq);
-		return rc;
-	}
-
-	rxq->rx_buf_size = rc;
+	rxq->rx_buf_size = bufsz;
 
 	DP_INFO(edev, "mtu %u mbufsz %u bd_max_bytes %u scatter_mode %d\n",
 		qdev->mtu, bufsz, rxq->rx_buf_size, dev->data->scattered_rx);
@@ -203,7 +168,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		DP_ERR(edev, "Memory allocation fails for sw_rx_ring on"
 		       " socket %u\n", socket_id);
 		rte_free(rxq);
-		return -ENOMEM;
+		return NULL;
 	}
 
 	/* Allocate FW Rx ring */
@@ -221,7 +186,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       " on socket %u\n", socket_id);
 		rte_free(rxq->sw_rx_ring);
 		rte_free(rxq);
-		return -ENOMEM;
+		return NULL;
 	}
 
 	/* Allocate FW completion ring */
@@ -240,14 +205,71 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		qdev->ops->common->chain_free(edev, &rxq->rx_bd_ring);
 		rte_free(rxq->sw_rx_ring);
 		rte_free(rxq);
-		return -ENOMEM;
+		return NULL;
+	}
+
+	return rxq;
+}
+
+int
+qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    __rte_unused const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	struct qede_rx_queue *rxq;
+	uint16_t max_rx_pkt_len;
+	uint16_t bufsz;
+	int rc;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	/* Note: Ring size/align is controlled by struct rte_eth_desc_lim */
+	if (!rte_is_power_of_2(nb_desc)) {
+		DP_ERR(edev, "Ring size %u is not power of 2\n",
+		       nb_desc);
+		return -EINVAL;
 	}
 
-	dev->data->rx_queues[queue_idx] = rxq;
-	qdev->fp_array[queue_idx].rxq = rxq;
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->rx_queues[qid] != NULL) {
+		qede_rx_queue_release(dev->data->rx_queues[qid]);
+		dev->data->rx_queues[qid] = NULL;
+	}
+
+	max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+
+	/* Fix up RX buffer size */
+	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+	/* cache align the mbuf size to simplfy rx_buf_size calculation */
+	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
+	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+		if (!dev->data->scattered_rx) {
+			DP_INFO(edev, "Forcing scatter-gather mode\n");
+			dev->data->scattered_rx = 1;
+		}
+	}
+
+	rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+	if (rc < 0)
+		return rc;
+
+	bufsz = rc;
+
+	rxq = qede_alloc_rx_queue_mem(dev, qid, nb_desc,
+				      socket_id, mp, bufsz);
+	if (!rxq)
+		return -ENOMEM;
+
+	dev->data->rx_queues[qid] = rxq;
+	qdev->fp_array[qid].rxq = rxq;
 
 	DP_INFO(edev, "rxq %d num_desc %u rx_buf_size=%u socket %u\n",
-		  queue_idx, nb_desc, rxq->rx_buf_size, socket_id);
+		  qid, nb_desc, rxq->rx_buf_size, socket_id);
 
 	return 0;
 }
@@ -278,6 +300,17 @@ static void qede_rx_queue_release_mbufs(struct qede_rx_queue *rxq)
 	}
 }
 
+static void _qede_rx_queue_release(struct qede_dev *qdev,
+				   struct ecore_dev *edev,
+				   struct qede_rx_queue *rxq)
+{
+	qede_rx_queue_release_mbufs(rxq);
+	qdev->ops->common->chain_free(edev, &rxq->rx_bd_ring);
+	qdev->ops->common->chain_free(edev, &rxq->rx_comp_ring);
+	rte_free(rxq->sw_rx_ring);
+	rte_free(rxq);
+}
+
 void qede_rx_queue_release(void *rx_queue)
 {
 	struct qede_rx_queue *rxq = rx_queue;
@@ -288,11 +321,7 @@ void qede_rx_queue_release(void *rx_queue)
 		qdev = rxq->qdev;
 		edev = QEDE_INIT_EDEV(qdev);
 		PMD_INIT_FUNC_TRACE(edev);
-		qede_rx_queue_release_mbufs(rxq);
-		qdev->ops->common->chain_free(edev, &rxq->rx_bd_ring);
-		qdev->ops->common->chain_free(edev, &rxq->rx_comp_ring);
-		rte_free(rxq->sw_rx_ring);
-		rte_free(rxq);
+		_qede_rx_queue_release(qdev, edev, rxq);
 	}
 }
 
@@ -306,8 +335,8 @@ static int qede_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 	int hwfn_index;
 	int rc;
 
-	if (rx_queue_id < eth_dev->data->nb_rx_queues) {
-		rxq = eth_dev->data->rx_queues[rx_queue_id];
+	if (rx_queue_id < qdev->num_rx_queues) {
+		rxq = qdev->fp_array[rx_queue_id].rxq;
 		hwfn_index = rx_queue_id % edev->num_hwfns;
 		p_hwfn = &edev->hwfns[hwfn_index];
 		rc = ecore_eth_rx_queue_stop(p_hwfn, rxq->handle,
@@ -329,32 +358,18 @@ static int qede_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 	return rc;
 }
 
-int
-qede_tx_queue_setup(struct rte_eth_dev *dev,
-		    uint16_t queue_idx,
-		    uint16_t nb_desc,
-		    unsigned int socket_id,
-		    const struct rte_eth_txconf *tx_conf)
+static struct qede_tx_queue *
+qede_alloc_tx_queue_mem(struct rte_eth_dev *dev,
+			uint16_t queue_idx,
+			uint16_t nb_desc,
+			unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf)
 {
 	struct qede_dev *qdev = dev->data->dev_private;
 	struct ecore_dev *edev = &qdev->edev;
 	struct qede_tx_queue *txq;
 	int rc;
 
-	PMD_INIT_FUNC_TRACE(edev);
-
-	if (!rte_is_power_of_2(nb_desc)) {
-		DP_ERR(edev, "Ring size %u is not power of 2\n",
-		       nb_desc);
-		return -EINVAL;
-	}
-
-	/* Free memory prior to re-allocation if needed... */
-	if (dev->data->tx_queues[queue_idx] != NULL) {
-		qede_tx_queue_release(dev->data->tx_queues[queue_idx]);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
 	txq = rte_zmalloc_socket("qede_tx_queue", sizeof(struct qede_tx_queue),
 				 RTE_CACHE_LINE_SIZE, socket_id);
 
@@ -362,7 +377,7 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
 		DP_ERR(edev,
 		       "Unable to allocate memory for txq on socket %u",
 		       socket_id);
-		return -ENOMEM;
+		return NULL;
 	}
 
 	txq->nb_tx_desc = nb_desc;
@@ -382,7 +397,7 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
 		       "Unable to allocate memory for txbd ring on socket %u",
 		       socket_id);
 		qede_tx_queue_release(txq);
-		return -ENOMEM;
+		return NULL;
 	}
 
 	/* Allocate software ring */
@@ -397,7 +412,7 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
 		       socket_id);
 		qdev->ops->common->chain_free(edev, &txq->tx_pbl);
 		qede_tx_queue_release(txq);
-		return -ENOMEM;
+		return NULL;
 	}
 
 	txq->queue_id = queue_idx;
@@ -408,12 +423,44 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
 	    tx_conf->tx_free_thresh ? tx_conf->tx_free_thresh :
 	    (txq->nb_tx_desc - QEDE_DEFAULT_TX_FREE_THRESH);
 
-	dev->data->tx_queues[queue_idx] = txq;
-	qdev->fp_array[queue_idx].txq = txq;
-
 	DP_INFO(edev,
 		"txq %u num_desc %u tx_free_thresh %u socket %u\n",
 		 queue_idx, nb_desc, txq->tx_free_thresh, socket_id);
+
+	return txq;
+}
+
+int
+qede_tx_queue_setup(struct rte_eth_dev *dev,
+		    uint16_t queue_idx,
+		    uint16_t nb_desc,
+		    unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct qede_dev *qdev = dev->data->dev_private;
+	struct ecore_dev *edev = &qdev->edev;
+	struct qede_tx_queue *txq;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	if (!rte_is_power_of_2(nb_desc)) {
+		DP_ERR(edev, "Ring size %u is not power of 2\n",
+		       nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		qede_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	txq = qede_alloc_tx_queue_mem(dev, queue_idx, nb_desc,
+				      socket_id, tx_conf);
+	if (!txq)
+		return -ENOMEM;
+
+	dev->data->tx_queues[queue_idx] = txq;
+	qdev->fp_array[queue_idx].txq = txq;
 
 	return 0;
 }
@@ -443,6 +490,16 @@ static void qede_tx_queue_release_mbufs(struct qede_tx_queue *txq)
 	}
 }
 
+static void _qede_tx_queue_release(struct qede_dev *qdev,
+				   struct ecore_dev *edev,
+				   struct qede_tx_queue *txq)
+{
+	qede_tx_queue_release_mbufs(txq);
+	qdev->ops->common->chain_free(edev, &txq->tx_pbl);
+	rte_free(txq->sw_tx_ring);
+	rte_free(txq);
+}
+
 void qede_tx_queue_release(void *tx_queue)
 {
 	struct qede_tx_queue *txq = tx_queue;
@@ -453,10 +510,7 @@ void qede_tx_queue_release(void *tx_queue)
 		qdev = txq->qdev;
 		edev = QEDE_INIT_EDEV(qdev);
 		PMD_INIT_FUNC_TRACE(edev);
-		qede_tx_queue_release_mbufs(txq);
-		qdev->ops->common->chain_free(edev, &txq->tx_pbl);
-		rte_free(txq->sw_tx_ring);
-		rte_free(txq);
+		_qede_tx_queue_release(qdev, edev, txq);
 	}
 }
 
-- 
2.17.1