From: Ferruh Yigit
To: "Ananyev, Konstantin" , Thomas Monjalon , "Richardson, Bruce"
Cc: Morten Brørup , "dev@dpdk.org" , "olivier.matz@6wind.com" , "andrew.rybchenko@oktetlabs.ru" , "honnappa.nagarahalli@arm.com" , "jerinj@marvell.com" , "gakhil@marvell.com"
Date: Mon, 21 Jun 2021 16:32:45 +0100
Message-ID: <4e63beea-3121-2aaf-f6d6-dc8807c4088d@intel.com>
References: <20210614105839.3379790-1-thomas@monjalon.net> <98CBD80474FA8B44BF855DF32C47DC35C6184E@smartserver.smartshare.dk> <2004320.XGyPsaEoyj@thomas> <0bb118ba-2658-a7d7-ad8f-bf27f62849f7@intel.com> <43d5d092-73cc-e5a1-5d5d-2e4f9c6c1759@intel.com>
Subject: Re: [dpdk-dev] [PATCH] parray: introduce internal API for dynamic arrays
List-Id: DPDK patches and discussions

On 6/21/2021 3:42 PM, Ananyev, Konstantin wrote:
>
>>>>>>>> One more thought here - if we are talking about rte_ethdev[] in particular, I think we can:
>>>>>>>> 1. Move the public function pointers (rx_pkt_burst(), etc.) from rte_ethdev into a separate flat array.
>>>>>>>> We can keep it public to still use inline functions for 'fast' calls rte_eth_rx_burst(), etc. to avoid
>>>>>>>> any regressions.
>>>>>>>> That could still be a flat array with max_size specified at application startup.
>>>>>>>> 2. Hide the rest of the rte_ethdev struct in the .c file.
>>>>>>>> That will allow us to change the struct itself and the whole rte_ethdev[] table in any way we like
>>>>>>>> (flat array, vector, hash, linked list) without ABI/API breakages.
>>>>>>>>
>>>>>>>> Yes, it would require all PMDs to change the prototype of the pkt_rx_burst() function
>>>>>>>> (to accept port_id and queue_id instead of a queue pointer), but the change is a mechanical one.
>>>>>>>> Probably some macro can be provided to simplify it.
>>>>>>>>
>>>>>>>
>>>>>>> We are already planning some tasks for ABI stability for v21.11; I think
>>>>>>> splitting 'struct rte_eth_dev' can be part of that task, as it enables hiding more
>>>>>>> internal data.
>>>>>>
>>>>>> Ok, sounds good.
>>>>>>
>>>>>>>
>>>>>>>> The only significant complication I can foresee with implementing that approach:
>>>>>>>> we'll need an array of 'fast' function pointers per queue, not per device as we have now
>>>>>>>> (to avoid extra indirection for the callback implementation).
>>>>>>>> Though as a bonus we'll have the ability to use different RX/TX functions per queue.
>>>>>>>>
>>>>>>>
>>>>>>> What do you think about splitting the Rx/Tx callbacks into their own struct too?
>>>>>>>
>>>>>>> Overall, 'rte_eth_dev' can be split into three as:
>>>>>>> 1. rte_eth_dev
>>>>>>> 2. rte_eth_dev_burst
>>>>>>> 3. rte_eth_dev_cb
>>>>>>>
>>>>>>> And we can hide 1 from applications even with the inline functions.
>>>>>>
>>>>>> As discussed off-line, I think:
>>>>>> it is possible.
>>>>>> My absolute preference would be to have just 1/2 (with CB hidden).
>>>>>
>>>>> How can we hide the callbacks, since they are used by the inline burst functions?
>>>>
>>>> I probably owe a better explanation of what I meant in the first mail.
>>>> Otherwise it sounds confusing.
>>>> I'll try to write a more detailed one in the next few days.
>>>
>>> Actually I gave it another thought over the weekend, and maybe we can
>>> hide rte_eth_dev_cb in an even simpler way. I'll use eth_rx_burst() as
>>> an example, but the same principle applies to the other 'fast' functions.
>>>
>>> 1. Needed changes for the PMDs' rx_pkt_burst():
>>> a) Change the function prototype to accept 'uint16_t port_id' and 'uint16_t queue_id'
>>> instead of the current 'void *'.
>>> b) Each PMD rx_pkt_burst() will have to call the rte_eth_rx_epilog() function at return.
>>> This inline function will do all the CB calls for that queue.
>>>
>>> To be more specific, let's say we have some PMD xyz with the RX function:
>>>
>>> uint16_t
>>> xyz_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>> {
>>>     struct xyz_rx_queue *rxq = rx_queue;
>>>     uint16_t nb_rx = 0;
>>>
>>>     /* do actual stuff here */
>>>     ....
>>>     return nb_rx;
>>> }
>>>
>>> It will be transformed to:
>>>
>>> uint16_t
>>> xyz_recv_pkts(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>> {
>>>     struct xyz_rx_queue *rxq;
>>>     uint16_t nb_rx;
>>>
>>>     rxq = _rte_eth_rx_prolog(port_id, queue_id);
>>>     if (rxq == NULL)
>>>         return 0;
>>>     nb_rx = _xyz_real_recv_pkts(rxq, rx_pkts, nb_pkts);
>>>     return _rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx, nb_pkts);
>>> }
>>>
>>> And somewhere in ethdev_private.h:
>>>
>>> static inline void *
>>> _rte_eth_rx_prolog(uint16_t port_id, uint16_t queue_id)
>>> {
>>>     struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>
>>> #ifdef RTE_ETHDEV_DEBUG_RX
>>>     RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
>>>     RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, NULL);
>>>
>>>     if (queue_id >= dev->data->nb_rx_queues) {
>>>         RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
>>>         return NULL;
>>>     }
>>> #endif
>>>     return dev->data->rx_queues[queue_id];
>>> }
>>>
>>> static inline uint16_t
>>> _rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **rx_pkts,
>>>     uint16_t nb_rx, const uint16_t nb_pkts)
>>> {
>>>     struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>
>>> #ifdef RTE_ETHDEV_RXTX_CALLBACKS
>>>     struct rte_eth_rxtx_callback *cb;
>>>
>>>     /* __ATOMIC_RELEASE memory order was used when the
>>>      * callback was inserted into the list.
>>>      * Since there is a clear dependency between loading
>>>      * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>>>      * not required.
>>>      */
>>>     cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
>>>         __ATOMIC_RELAXED);
>>>
>>>     if (unlikely(cb != NULL)) {
>>>         do {
>>>             nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
>>>                 nb_pkts, cb->param);
>>>             cb = cb->next;
>>>         } while (cb != NULL);
>>>     }
>>> #endif
>>>
>>>     rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
>>>     return nb_rx;
>>> }
>>>
>>> Now, as you said above, in rte_ethdev.h we will keep only a flat array
>>> with pointers to the 'fast' functions:
>>>
>>> struct {
>>>     eth_rx_burst_t rx_pkt_burst;
>>>     eth_tx_burst_t tx_pkt_burst;
>>>     eth_tx_prep_t tx_pkt_prepare;
>>>     .....
>>> } rte_eth_dev_burst[];
>>>
>>> And rte_eth_rx_burst() will look like:
>>>
>>> static inline uint16_t
>>> rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>>     struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
>>> {
>>>     if (port_id >= RTE_MAX_ETHPORTS)
>>>         return 0;
>>>     return rte_eth_dev_burst[port_id].rx_pkt_burst(port_id, queue_id, rx_pkts, nb_pkts);
>>> }
>>>
>>> Yes, it will require changes in *all* PMDs, but as I said before the changes will be mechanical ones.
>>>
>>
>> I don't like the idea of pushing the responsibility for calling the Rx/Tx callbacks to the
>> drivers; I think it should stay in the ethdev layer.
>
> Well, I'd say it is an ethdev layer function that has to be called by the PMD 😊
>
>>
>> What about making 'rte_eth_rx_epilog' an API and calling it from 'rte_eth_rx_burst()'?
>> That would add another function call for the Rx/Tx callbacks, but shouldn't affect the
>> Rx/Tx burst itself.
>
> But then we either need to expose the callback information to the user or pay the penalty
> of an extra function call, correct?
>

Right.

As a middle ground, we can keep the Rx/Tx burst functions inline, but have the Rx/Tx
callback part of them as a function, so we get the hit only for callbacks.