From mboxrd@z Thu Jan  1 00:00:00 1970
From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: xiaoyun.li@intel.com, anoobj@marvell.com, jerinj@marvell.com,
 ndabilpuram@marvell.com, adwivedi@marvell.com,
 shepard.siegel@atomicrules.com, ed.czeck@atomicrules.com,
 john.miller@atomicrules.com, irusskikh@marvell.com,
 ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com,
 rahul.lakkireddy@chelsio.com, hemant.agrawal@nxp.com,
 sachin.saxena@oss.nxp.com, haiyue.wang@intel.com, johndale@cisco.com,
 hyonkim@cisco.com, qi.z.zhang@intel.com, xiao.w.wang@intel.com,
 humin29@huawei.com, yisen.zhuang@huawei.com, oulijun@huawei.com,
 beilei.xing@intel.com, jingjing.wu@intel.com, qiming.yang@intel.com,
 matan@nvidia.com, viacheslavo@nvidia.com, sthemmin@microsoft.com,
 longli@microsoft.com, heinrich.kuhn@corigine.com, kirankumark@marvell.com,
 andrew.rybchenko@oktetlabs.ru, mczekaj@marvell.com, jiawenwu@trustnetic.com,
 jianwang@trustnetic.com, maxime.coquelin@redhat.com, chenbo.xia@intel.com,
 thomas@monjalon.net, ferruh.yigit@intel.com, mdr@ashroe.eu,
 jay.jayatheerthan@intel.com,
 Konstantin Ananyev <konstantin.ananyev@intel.com>
Date: Wed, 13 Oct 2021 14:37:02 +0100
Message-Id: <20211013133704.31296-5-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20211013133704.31296-1-konstantin.ananyev@intel.com>
References: <0211007112750.25526-1-konstantin.ananyev@intel.com>
 <20211013133704.31296-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v6 4/6] ethdev: make fast-path functions to use
 new flat array

Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user apps are required)
and PMD developers (no changes in PMDs are required).
One extra thing to note - RX/TX callback invocation will cause an
extra function call with these changes. That might cause some
insignificant slowdown for code paths where RX/TX callbacks are
heavily involved.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
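To illustrate the rework, the core of the change in each fast-path
function is, roughly, the following substitution (only names that
appear in the diff below are used; the debug checks and callback
handling that surround it are omitted):

	/* before: loads through the large rte_eth_dev structure */
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
			rx_pkts, nb_pkts);

	/* after: the burst function pointer and the queue data pointer
	 * sit next to each other in the compact flat array
	 * rte_eth_fp_ops[], indexed by port_id.
	 */
	struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];
	void *qd = p->rxq.data[queue_id];

	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);

The same substitution is applied below to rte_eth_tx_burst(),
rte_eth_rx_queue_count(), rte_eth_rx/tx_descriptor_status() and
rte_eth_tx_prepare().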
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 270 +++++++++++++++++++++++++-----------
 lib/ethdev/version.map      |   3 +
 3 files changed, 226 insertions(+), 78 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index d810c3a1d4..c905c2df6f 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 4007bd0e73..f4c92b3b5e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4884,6 +4884,33 @@ int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);

 #include <rte_ethdev_core.h>

+/**
+ * @internal
+ * Helper routine for rte_eth_rx_burst().
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes Rx callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in @p rx_pkts array.
+ * @param opaque
+ *   Opaque pointer of Rx queue callback related data.
+ *
+ * @return
+ *   The number of packets effectively supplied to the @p rx_pkts array.
+ */
+uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4975,39 +5002,51 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];

 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);

-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-			rx_pkts, nb_pkts);

-#ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);

-	/* __ATOMIC_RELEASE memory order was used when the
-	 * call back was inserted into the list.
-	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-	 * not required.
-	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
+#ifdef RTE_ETHDEV_RXTX_CALLBACKS
+	{
+		void *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id],
 				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-					nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
+		if (unlikely(cb != NULL))
+			nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id,
+					rx_pkts, nb_rx, nb_pkts, cb);
 	}
 #endif

@@ -5031,16 +5070,27 @@ static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];

 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-			dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;

-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }

 /**@{@name Rx hardware descriptor states
@@ -5088,21 +5138,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;

 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }

 /**@{@name Tx hardware descriptor states
@@ -5149,23 +5208,54 @@ static inline int
 rte_eth_tx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;

 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }

+/**
+ * @internal
+ * Helper routine for rte_eth_tx_burst().
+ * Should be called before entering PMD's rte_eth_tx_bulk implementation.
+ * Does necessary pre-processing - invokes Tx callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @param opaque
+ *   Opaque pointer of Tx queue callback related data.
+ * @return
+ *   The number of output packets to transmit.
+ */
+uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5236,42 +5326,55 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];

 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);

-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif

 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
-
-	/* __ATOMIC_RELEASE memory order was used when the
-	 * call back was inserted into the list.
-	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-	 * not required.
-	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
+	{
+		void *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		cb = __atomic_load_n((void **)&p->txq.clbk[queue_id],
 				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
+		if (unlikely(cb != NULL))
+			nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id,
+					tx_pkts, nb_pkts, cb);
 	}
 #endif

-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }

 /**
@@ -5334,31 +5437,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;

 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif

-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];

 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif

-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;

-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }

 #else

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 29fb71f1af..61011b110a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -7,6 +7,8 @@ DPDK_22 {
 	rte_eth_allmulticast_disable;
 	rte_eth_allmulticast_enable;
 	rte_eth_allmulticast_get;
+	rte_eth_call_rx_callbacks;
+	rte_eth_call_tx_callbacks;
 	rte_eth_dev_adjust_nb_rx_tx_desc;
 	rte_eth_dev_callback_register;
 	rte_eth_dev_callback_unregister;
@@ -76,6 +78,7 @@ DPDK_22 {
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
 	rte_eth_find_next_sibling;
+	rte_eth_fp_ops;
 	rte_eth_iterator_cleanup;
 	rte_eth_iterator_init;
 	rte_eth_iterator_next;
-- 
2.26.3