From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Mon, 19 Oct 2020 16:53:49 +0800
Message-Id: <20201019085415.82207-33-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <20201019085415.82207-1-jiawenwu@trustnetic.com>
References: <20201019085415.82207-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH v4 32/58] net/txgbe: add Rx and Tx queue info get
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

Add Rx and Tx queue information get operation.
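The info-get pattern added by this patch (an ops-table callback that copies per-queue driver state into a caller-provided qinfo struct) can be sketched in a self-contained form. All `demo_` names below are hypothetical stand-ins for the real `rte_eth`/`txgbe` types, trimmed to a couple of fields; this is an illustration of the pattern, not the driver's actual code.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for struct rte_eth_rxconf / rte_eth_rxq_info. */
struct demo_rxq_conf {
	uint16_t rx_free_thresh;
	uint8_t  rx_drop_en;
};

struct demo_rxq_info {
	uint16_t nb_desc;
	struct demo_rxq_conf conf;
};

/* Toy Rx queue holding the values the driver configured it with. */
struct demo_rx_queue {
	uint16_t nb_rx_desc;
	uint16_t rx_free_thresh;
	uint8_t  drop_en;
};

struct demo_dev {
	struct demo_rx_queue *rx_queues[4];
	/* ops-table slot, as .rxq_info_get in struct eth_dev_ops */
	void (*rxq_info_get)(struct demo_dev *dev, uint16_t queue_id,
			     struct demo_rxq_info *qinfo);
};

/* Mirrors the shape of txgbe_rxq_info_get(): look up the queue by id
 * and copy its configured state into the caller's qinfo. */
static void demo_rxq_info_get(struct demo_dev *dev, uint16_t queue_id,
			      struct demo_rxq_info *qinfo)
{
	struct demo_rx_queue *rxq = dev->rx_queues[queue_id];

	qinfo->nb_desc = rxq->nb_rx_desc;
	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
	qinfo->conf.rx_drop_en = rxq->drop_en;
}
```

In the real driver the callback is reached through `rte_eth_rx_queue_info_get()`, which resolves the port's ops table and invokes the PMD's `rxq_info_get` slot.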
Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/txgbe_ethdev.c |  2 +
 drivers/net/txgbe/txgbe_ethdev.h |  6 +++
 drivers/net/txgbe/txgbe_rxtx.c   | 77 ++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 9151542ef..d74d822ad 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1743,6 +1743,8 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.uc_hash_table_set          = txgbe_uc_hash_table_set,
 	.uc_all_hash_table_set      = txgbe_uc_all_hash_table_set,
 	.set_mc_addr_list           = txgbe_dev_set_mc_addr_list,
+	.rxq_info_get               = txgbe_rxq_info_get,
+	.txq_info_get               = txgbe_txq_info_get,
 };
 
 RTE_PMD_REGISTER_PCI(net_txgbe, rte_txgbe_pmd);
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index f47c64ca6..ab1ffe9fc 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -117,6 +117,12 @@
 int txgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
 int txgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
+void txgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_rxq_info *qinfo);
+
+void txgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_txq_info *qinfo);
+
 uint16_t txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index f1b038013..fd6a3f436 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1946,9 +1947,48 @@ txgbe_dev_tx_queue_release(void *txq)
 	txgbe_tx_queue_release(txq);
 }
 
+/* (Re)set dynamic txgbe_tx_queue fields to defaults */
+static void __rte_cold
+txgbe_reset_tx_queue(struct txgbe_tx_queue *txq)
+{
+	static const struct txgbe_tx_desc zeroed_desc = {0};
+	struct txgbe_tx_entry *txe = txq->sw_ring;
+	uint16_t prev, i;
+
+	/* Zero out HW ring memory */
+	for (i = 0; i < txq->nb_tx_desc; i++)
+		txq->tx_ring[i] = zeroed_desc;
+
+	/* Initialize SW ring entries */
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct txgbe_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->dw3 = rte_cpu_to_le_32(TXGBE_TXD_DD);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
+	txq->tx_tail = 0;
+
+	/*
+	 * Always allow 1 descriptor to be un-allocated to avoid
+	 * a H/W race condition
+	 */
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->ctx_curr = 0;
+	memset((void *)&txq->ctx_cache, 0,
+		TXGBE_CTX_NUM * sizeof(struct txgbe_ctx_info));
+}
+
 static const struct txgbe_txq_ops def_txq_ops = {
 	.release_mbufs = txgbe_tx_queue_release_mbufs,
 	.free_swring = txgbe_tx_free_swring,
+	.reset = txgbe_reset_tx_queue,
 };
 
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
@@ -3218,3 +3258,40 @@ txgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return 0;
 }
+
+void
+txgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_rxq_info *qinfo)
+{
+	struct txgbe_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mb_pool;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+	qinfo->conf.offloads = rxq->offloads;
+}
+
+void
+txgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_txq_info *qinfo)
+{
+	struct txgbe_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
-- 
2.18.4
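[Editor's note] The SW-ring initialization loop in txgbe_reset_tx_queue() above links the entries into a closed circular list: each entry records itself as last_id, and the previous entry's next_id is pointed at the current one, with prev seeded to the final index so the last entry wraps back to index 0. The linking step can be exercised stand-alone; the demo_ names below are hypothetical minimal stand-ins for the driver's types, keeping only the link fields.

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_NB_DESC 8  /* small ring for the sketch; real queues are larger */

/* Stand-in for struct txgbe_tx_entry: only the link fields matter here. */
struct demo_tx_entry {
	uint16_t next_id;
	uint16_t last_id;
};

/* Mirrors the SW-ring linking loop in txgbe_reset_tx_queue():
 * seed prev with the last index, then for each entry i set
 * txe[i].last_id = i and txe[prev].next_id = i, so the entries
 * form a closed ring once the loop completes. */
static void demo_link_sw_ring(struct demo_tx_entry *txe, uint16_t nb_desc)
{
	uint16_t prev = (uint16_t)(nb_desc - 1);
	uint16_t i;

	for (i = 0; i < nb_desc; i++) {
		txe[i].last_id = i;
		txe[prev].next_id = i;
		prev = i;
	}
}
```

Because prev starts at nb_desc - 1, the very first iteration sets the last entry's next_id to 0, which is what closes the ring.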